Combination Product Industry News & Guidance
Sharing device-related information and wisdom
that will help you succeed
The Adventures of Stat-Woman:
Smart Statistical Decisions Across the Combination Product Battlefield
No matter what the engineering discipline, statistics always finds a way into the fight. Sometimes it arrives quietly as a simple summary or chart; other times it shows up in full force, demanding decisions that can determine the fate of an entire development program. Statistics gives teams incredible power: the ability to understand variability, uncover patterns, and make informed, defensible decisions even in the most complex combination product landscapes. But with that power comes a challenge: there are so many tools available that choosing the right one can feel overwhelming.
Anyone who has survived a statistics course, consulted their resident statistician, or opened Minitab knows the feeling. There is rarely a clear, step-by-step Stat-Signal that tells you exactly which statistical technique to use and when. Faced with non-normal data, limited samples, competing risks, and aggressive timelines, even experienced teams can find themselves frozen like a deer in the headlights, thinking, “Holy non-normal data, Stat-Woman!”
Rather than claiming there is a single “perfect” statistical solution for every scenario, this article is about building and sharpening your Statistical Toolbelt. Like any good superhero, Stat-Woman doesn’t rely on one gadget for every battle. She knows when to pull a simple descriptive tool, when to deploy a more advanced method, and when combining multiple tools delivers the strongest blow against Dr. Data and his collaborators known as The Rogue Distribution. Ultimately, statistics is most powerful when it is used intentionally, sometimes on its own and often alongside engineering judgment, risk management, and domain expertise. That ability to choose wisely is what separates a good statistical hero from a truly great one.
Tonight, you’ll join Stat-Woman as she patrols the streets of Variance City, opening her toolbelt, evaluating the situation, and choosing the right statistical techniques to bring order to uncertainty to help guide you on your own personal heroic journey.
Suiting Up in the Stat-Cave:
Where Does the Statistical Journey Begin?
Once the Stat-Signal goes off from the dark depths of Variance City, it’s time to assess which tools in Stat-Woman’s arsenal are best suited for the job. But before she ever takes to the streets, it’s often critical to step back and evaluate the situation from the Stat-Cave. One of the most common mistakes in statistics is jumping straight into analysis: taking whatever data happens to be available and applying the most familiar or convenient technique. While this may produce an answer, it doesn’t guarantee it’s the right one. Fortunately, Stat-Woman’s lifelong friend and trusted caretaker, Mr. Ledger, is always nearby to help slow things down, challenge assumptions, and cast a critical eye toward lurking confounding variables.
Even though the mean streets of Variance City can be confusing, there is still hope. Many core statistical concepts such as confidence, reliability, null versus alternative hypotheses, and others apply across nearly every stage of development. The real challenge lies in interpretation. It’s often easy to generate results that appear to support a hypothesis, but far more difficult to truly understand what the data is saying and translate those results into a meaningful, defensible story, especially when the original hypothesis doesn’t hold.
And as Stat-Woman knows all too well, one of the most insidious villains Variance City has ever faced is the dastardly Miss-Interpretation…but that’s a story for another evening…
The Training Grounds:
Exploratory and Learning-Focused Statistical Tools
Mr. Ledger often recalls the early days, before Stat-Woman ever patrolled the streets of Variance City. Back then, most nights were spent in the Training Grounds where she was testing ideas, pushing systems to their limits, and learning through failure. It was here that Stat-Woman learned an essential lesson: early statistical tools aren’t meant to prove a point or deliver final answers. They exist to explore, reveal variability, and uncover risk while the cost of being wrong is still low. In this phase, statistics serve as a guide rather than a judge, shaping understanding long before rigor and acceptance criteria take center stage.
In those early days, it wasn’t unusual for Stat-Woman to throw hypotheses at the wall to see what stuck. There was rarely a single clear strategy for taking on The Rogue Distribution, so experimentation was critical. Simple descriptive statistics helped establish baselines and expose unexpected variability. Visual tools such as histograms, box plots, scatter plots, and run charts were often the first line of defense, quickly revealing patterns, shifts, and outliers that no summary table ever could.
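As a sketch of that first line of defense, the descriptive pass below summarizes a hypothetical set of force readings from an early benchtop build and applies Tukey's boxplot rule to flag points worth a closer look (the data and variable names are invented for illustration, not drawn from any real program):

```python
import numpy as np

def summarize(samples):
    """Quick descriptive summary: the first tool out of the toolbelt."""
    x = np.asarray(samples, dtype=float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    # Tukey's boxplot rule: points beyond 1.5*IQR from the quartiles
    # are flagged for a closer look (not automatically discarded).
    low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    flagged = x[(x < low_fence) | (x > high_fence)]
    return {
        "n": int(x.size),
        "mean": x.mean(),
        "std": x.std(ddof=1),   # sample standard deviation
        "median": med,
        "iqr": iqr,
        "flagged": flagged.tolist(),
    }

# Hypothetical force-to-actuate readings (N) from an early benchtop build.
forces = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7, 12.1, 14.9]
print(summarize(forces))
```

Even this crude pass surfaces the 14.9 N reading as something a histogram or box plot would have made obvious at a glance.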
As questions became more focused, Stat-Woman leaned on comparative tools like basic t-tests and nonparametric alternatives to explore differences between concepts or configurations. Screening Design of Experiments (DOEs) helped identify which factors mattered most and which could safely be deprioritized. Even quick, informal studies (napkin-sketch data collections, pilot builds, benchtop trials) played a valuable role, helping narrow the problem space and avoid chasing every possible variable at once.
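A minimal sketch of those comparative tools, using SciPy on hypothetical break-loose force data from two candidate designs (the designs, values, and seed are all invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical break-loose forces (N) from two candidate plunger designs.
design_a = rng.normal(loc=10.0, scale=0.5, size=15)
design_b = rng.normal(loc=11.0, scale=0.5, size=15)

# Welch's t-test: compares means without assuming equal variances.
t_stat, p_t = stats.ttest_ind(design_a, design_b, equal_var=False)

# Mann-Whitney U: a nonparametric fallback when normality is doubtful.
u_stat, p_u = stats.mannwhitneyu(design_a, design_b, alternative="two-sided")

print(f"Welch t-test p = {p_t:.4f}, Mann-Whitney p = {p_u:.4f}")
```

Running both side by side is a cheap sanity check at this stage: when the parametric and nonparametric answers agree, the conclusion is less fragile to distributional assumptions.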
Additionally, one important aspect that tends to come up frequently is non-normal data. The natural reaction is to start panicking and figure out ways to justify any deviations from normality, but like in life (especially for the characters in Variance City), it’s ok to be non-normal. If there is a physical reason for data to behave non-normally (usually due to material properties) and/or unique testing circumstances, then make note of it and carry that assumption forward.
During these early stages of development, this may happen frequently, and it’s up to the design team to decide, based on risk, what is worth investigating and what is not. Whichever direction is taken, though, it is critical not to deviate too much from those assumptions later in Design Verification, as doing so could be seen as overfitting the data simply to pass a test.
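One lightweight way to make that "note it and carry it forward" step concrete is a formal normality check. The sketch below runs a Shapiro-Wilk test on hypothetical, physically skewed burst-pressure data; a low p-value here is a prompt to document the cause, not to panic:

```python
import numpy as np
from scipy import stats

def check_normality(samples, alpha=0.05):
    """Shapiro-Wilk check: returns (looks_normal, p_value).
    A low p-value is a reason to investigate and document,
    not a reason to force the data into a normal model."""
    stat, p = stats.shapiro(samples)
    return p >= alpha, p

rng = np.random.default_rng(7)
# Hypothetical seal-burst pressures: skewed by material behavior,
# so a non-normal shape is physically expected here.
burst = rng.exponential(scale=2.0, size=300)
looks_normal, p = check_normality(burst)
print(f"looks_normal={looks_normal}, p={p:.2e}")
```

For data like this, the right move is usually to record the physical rationale and choose a distribution-appropriate method, rather than hunting for a transformation that merely makes the test pass.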
Measurement systems themselves were also questioned early. Before trusting any result, Stat-Woman learned to ask whether the data could be trusted at all. Early measurement system evaluations, repeatability checks, and sanity tests helped distinguish real signals from noise, an especially important step in a city where variability thrives in the shadows. Definitive conclusions were still hard to come by, but that was never the point.
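As one simple, illustrative version of such a repeatability check, the sketch below computes a crude precision-to-tolerance ratio from repeated measurements of a single sample (the readings and tolerance are hypothetical; a full Gage R&R study would go further and separate repeatability from reproducibility):

```python
import numpy as np

def precision_to_tolerance(repeat_readings, tol_width):
    """Crude gauge repeatability screen: %P/T = 6*sigma_gauge / tolerance width.
    A common rule of thumb treats < 10% as good and > 30% as unacceptable,
    though acceptance thresholds should be justified per program."""
    sigma = np.std(repeat_readings, ddof=1)
    return 100.0 * 6.0 * sigma / tol_width

# Hypothetical: one operator measures the same dose-volume sample ten times.
readings = [5.01, 5.03, 5.00, 5.02, 5.01, 4.99, 5.02, 5.01, 5.00, 5.02]
pt = precision_to_tolerance(readings, tol_width=0.5)  # spec: 5.0 +/- 0.25 mL
print(f"%P/T = {pt:.1f}%")
```

If the gauge itself consumes a large share of the tolerance, no downstream analysis can rescue the data, which is exactly why this check comes before anything else.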
Even partial insight was far better than operating blindly, especially in Variance City, where uncertainty has a way of ricocheting when least expected.
Fortunately, Stat-Woman’s laboratory notebook discipline was impeccable. Years spent generating ideas, documenting assumptions, and capturing results meant that no lesson was truly lost. Over time, these fragmented experiments could finally be aggregated into something meaningful. When the data were brought together and fed into the Stat-Computer, patterns began to emerge, hypotheses sharpened, and plans started to take shape.
There were still variables to narrow into specific values, assumptions to formalize, and system requirements to justify, but the groundwork had been laid. These early statistical tools didn’t deliver answers…they delivered understanding. And that understanding would soon be tested far beyond the safety of the Training Grounds.
Into the Field:
Applying Statistical Tools with Intent During Design Verification
Eventually, every training exercise ends. When Stat-Woman steps out of the Training Grounds and into the field, the rules change. Design Verification is no longer about exploring possibilities: it’s about making decisions with intent. In the streets of Variance City, statistical tools are chosen deliberately, studies are planned with purpose, and results must stand up to scrutiny.
This is where learning gives way to confirmation and where statistical rigor begins to matter not just for insight, but for accountability.
Intriguingly, even as the work becomes more serious and the pace of testing accelerates, there is still flexibility in how statistical tools are applied, provided those choices are justified and documented up front. In some cases, a confidence interval may be appropriate. In others, a tolerance interval with K-factor analysis is the better fit. Some industries rely heavily on capability metrics while others demand time-based reliability testing conducted over hundreds of cycles…or even years. One of Stat-Woman’s earliest battles with Captain Assumption, in fact, required a deliberate combination of all these approaches within a single Design Verification Test Matrix.
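To make the confidence-versus-tolerance distinction concrete, here is a sketch computing both for hypothetical needle-shield removal forces: a two-sided confidence interval on the mean, and a 95/90 one-sided lower tolerance bound using the noncentral-t k-factor (the data, spec context, and seed are invented for illustration):

```python
import numpy as np
from scipy import stats

def mean_ci(x, conf=0.95):
    """Two-sided confidence interval on the mean (where the average lies)."""
    x = np.asarray(x, float)
    n, m, s = x.size, x.mean(), x.std(ddof=1)
    t = stats.t.ppf((1 + conf) / 2, n - 1)
    half_width = t * s / np.sqrt(n)
    return m - half_width, m + half_width

def one_sided_k(n, conf=0.95, reliability=0.90):
    """One-sided tolerance k-factor via the noncentral t distribution
    (bounds a stated proportion of individual units, not the mean)."""
    zp = stats.norm.ppf(reliability)
    return stats.nct.ppf(conf, df=n - 1, nc=zp * np.sqrt(n)) / np.sqrt(n)

rng = np.random.default_rng(3)
# Hypothetical needle-shield removal forces (N).
forces = rng.normal(loc=20.0, scale=1.5, size=30)

lo, hi = mean_ci(forces)
k = one_sided_k(len(forces))
lower_tol = forces.mean() - k * forces.std(ddof=1)
print(f"95% CI on mean: ({lo:.2f}, {hi:.2f}); "
      f"95/90 one-sided lower tolerance bound: {lower_tol:.2f}")
```

The tolerance bound sits well below the confidence interval because it answers a harder question: not "where is the average?" but "where do 90% of individual units fall, with 95% confidence?" Choosing between them is exactly the kind of intent-driven decision that must be documented up front.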
As with every other statistical decision, it always comes back to intent. What is the purpose of a given requirement or test? Is it meant to describe performance at a single point in time, over repeated cycles, or across multiple parallel data sets? The sooner these questions are answered, the faster Dr. Data’s mist of uncertainty begins to lift allowing sound, defensible decisions to take shape.
Now fast forward several months into a Design Verification testing regimen and Stat-Woman finds herself face-to-face with one of her most unusual foes: The Outlier. Known for causing disruption in testing, regulatory submissions, and analyses built on assumptions of normality, The Outlier is often misunderstood. The instinctive reaction is to dismiss him outright, to exclude him without justification or, worse, pretend he never existed.
The harsh reality is that The Outlier is often trying to reveal a hidden nugget of gold. He may represent a hidden failure mode, an unaccounted-for source of variability, or a limitation in the test setup itself. In some cases, bringing The Outlier into the fold, carefully and transparently with historical context, can strengthen the overall understanding of system behavior. But caution is always required. While he may be an ally today, left uninvestigated, The Outlier has been known to return later with consequences severe enough to undermine an entire data set and even Dr. Data himself…
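One transparent way to decide whether The Outlier deserves a formal investigation, rather than a gut-feel exclusion, is a statistical test. The sketch below applies the standard two-sided Grubbs' test to hypothetical dose-accuracy data; a positive result is a prompt to investigate, never an automatic license to exclude:

```python
import numpy as np
from scipy import stats

def grubbs_flag(x, alpha=0.05):
    """Two-sided Grubbs' test for a single suspected outlier in
    approximately normal data. Returns (is_outlier, value)."""
    x = np.asarray(x, float)
    n = x.size
    mean, s = x.mean(), x.std(ddof=1)
    idx = np.argmax(np.abs(x - mean))
    g = abs(x[idx] - mean) / s
    # Critical value from the t distribution (standard Grubbs formula).
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return g > g_crit, float(x[idx])

# Hypothetical dose-accuracy results (mL) with one suspicious reading.
doses = [9.9, 10.0, 10.1, 10.05, 9.95, 10.02, 15.0]
print(grubbs_flag(doses))
```

Note that the test only identifies the statistical anomaly; the root-cause investigation (hidden failure mode, test-setup limitation, or genuine tail behavior) still has to happen before any exclusion can be justified.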
The Final Battle:
Validation and Regulatory Decisions When Rigor Matters Most
Every patrol in Variance City eventually leads to a final confrontation. For Stat-Woman, this moment comes during validation and regulatory decision-making. This is when preparation gives way to proof and every choice must withstand scrutiny. There is no room for improvisation here. The statistical tools selected for validation are not chosen for convenience or familiarity, but for their ability to deliver clear, defensible answers to very specific questions coming from Mayor Threshold.
In the Final Battle, intent is everything. Validation is no longer about learning what might happen; it is about demonstrating what does happen, consistently and reliably, under defined conditions. Statistical methods at this stage must be pre-specified, assumptions justified, and acceptance criteria established well before testing begins. Confidence intervals become commitments. Reliability estimates carry real-world consequences. Tolerance limits draw firm boundaries between acceptable and unacceptable performance.
Mr. Ledger is never far from Stat-Woman’s side during these moments. He reminds her that every test protocol, every analysis plan, and every statistical decision must be traceable back to a requirement, a risk, or an intended use. Regulators don’t ask whether a result is interesting: they ask whether it is justified, reproducible, and appropriate for the decision being made. In this phase, documentation is not an afterthought; it is part of the statistical method itself.
Validation often demands a different mix of tools than earlier stages. Hypothesis testing may be required to demonstrate compliance, while confidence and tolerance intervals provide the context needed to understand variability and uncertainty. Reliability testing may span hundreds of cycles or years, requiring careful planning to ensure assumptions remain valid over time. Sample sizes are no longer flexible guesses; they are deliberate choices balanced against risk, feasibility, and regulatory expectation.
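As one concrete example of such a deliberate sample-size choice, the classic zero-failure "success-run" calculation ties an attribute (pass/fail) sample size directly to a stated confidence/reliability target:

```python
import math

def success_run_n(confidence, reliability):
    """Zero-failure (success-run) attribute sample size:
    n = ln(1 - C) / ln(R), rounded up. Testing n units with zero
    failures demonstrates reliability R at confidence level C."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Hypothetical acceptance targets for a pass/fail verification test.
print(success_run_n(0.95, 0.90))  # 95% confidence / 90% reliability -> 29
print(success_run_n(0.95, 0.95))  # 95% confidence / 95% reliability -> 59
```

The formula also makes the cost of ambition visible: tightening reliability from 90% to 95% roughly doubles the required sample size, which is exactly the risk-versus-feasibility balance the text describes.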
This is also the phase where shortcuts are most tempting…and most dangerous. Reusing exploratory methods without reconsidering assumptions, adjusting acceptance criteria midstream, or post-hoc reinterpretation of results are all moves Dr. Data is eager to exploit. Stat-Woman knows that statistical rigor at this stage isn’t about being overly conservative. It’s about being intentional, transparent, and defensible when the stakes are highest.
When the Final Battle is won, it isn’t marked by celebration so much as it’s marked by confidence. Confidence that the data tells a clear story. Confidence that decisions can be defended months or years later. And confidence that the product entering the real world will behave as expected, even when Variance City inevitably throws something unexpected its way. And that’s not to say Variance City is the only stomping ground around either. As Mr. Ledger would occasionally remind Stat-Woman, he cut his teeth back in the days before Control City was ever whipped into shape…
But as Stat-Woman knows all too well, defeating any of the foes in The Rogue Distribution doesn’t mean the city is safe forever. The fight doesn’t end here. It simply changes form.
Staying Sharp in Variance City:
Avoiding Statistical Pitfalls Across the Product Lifecycle
As Stat-Woman knows well, no victory in Variance City is ever final. Even after a successful design verification or regulatory submission, the work continues. Maintaining statistical continuity across the product lifecycle is what keeps yesterday’s lessons from becoming tomorrow’s surprises. When assumptions, methods, and interpretations shift without intention, that’s when Variance City’s most dangerous villains begin to resurface.
One of the most common mistakes teams make is treating each phase of development as a statistical reset. Data gathered during early exploration, design development, verification, and post-market monitoring are often analyzed in isolation, using inconsistent assumptions or disconnected metrics. Over time, this lack of continuity creates blind spots, places where trends go unnoticed, risks re-emerge, and confidence quietly erodes.
These gaps are exactly where The Rogue Distribution thrives.
Captain Assumption is quick to return when earlier assumptions are forgotten or left unchallenged as systems evolve. The Confounder lurks in cross-functional handoffs, quietly reshaping conclusions when variables change between studies. Powerless strikes when sample sizes that once seemed sufficient are reused without considering how the question has changed. And The Outlier, ever unpredictable, reappears when historical context is lost and new data are judged in isolation.
Stat-Woman stays sharp by ensuring that statistical decisions remain connected across the lifecycle. Early exploratory insights inform later acceptance criteria. Verification results shape post-market monitoring strategies. Measurement systems, risk assessments, and data interpretations evolve together, not independently. This continuity allows her to recognize when something truly new has emerged and when it’s simply an old problem wearing a new disguise.
Equally important is knowing when not to change course. Not every anomaly demands a new method and not every result requires escalation. Consistency, when paired with sound judgment and documentation, is a powerful defense against overreaction and Miss-Interpretation. As Mr. Ledger often reminds her, the strongest statistical stories are the ones that can be followed from beginning to end without contradiction.
In the end, staying sharp in Variance City isn’t about avoiding villains entirely. It’s about recognizing them early, understanding their patterns, and responding with intent. Statistical tools may change over time, but disciplined thinking, continuity, and context are what keep Stat-Woman one step ahead.
The Hero’s Takeaway: Final Thoughts
In the end, Stat-Woman’s greatest strength isn’t her toolbelt. It’s her judgment. Statistics, like any powerful tool, only delivers value when it’s applied with intent, context, and discipline across the entire product lifecycle. From early exploration to validation and beyond, the goal is never to find a result, but to make the right decision at the right time.
Whether you’re navigating the uncertainty of Variance City or striving to bring your product safely into Control City, the same principles hold true: understand the question before choosing the method, respect the assumptions behind the data, and never lose sight of how today’s decisions echo downstream. Mastering that mindset is what turns statistical tools into strategic advantages and what separates teams that simply analyze data from those that use it to intelligently lead.
But one patrol through Variance City can only cover so much ground. In the months ahead, this series will return to each phase of the product lifecycle for a closer look, diving deeper into the Stat-Cave, revisiting the Training Grounds, walking the streets of the Design Verification District, and standing alongside Stat-Woman through the Final Battle of validation and regulatory decision-making. Each installment will sharpen a different set of tools in the Statistical Toolbelt and bring new encounters with The Rogue Distribution.
And if you’ve been paying close attention, you may have noticed that one name has been conspicuously absent from tonight’s patrol. The Rogue Distribution is dangerous, but its members are only part of the problem. Somewhere in the shadows of Variance City, a far more patient villain has been quietly tipping the scales long before any data was ever collected. Stat-Woman knows she’s out there. She always is. But that reckoning will come in due time as we wouldn’t want this introduction to have too much…Bias.
Interested in reading more tales of Stat-Woman?
AUTHOR
Alex Spivak, Principal Consultant, Suttons Creek – Alex is a highly strategic Biomedical Engineer with 13 years of MedTech industry experience, specializing in solving complex R&D challenges and ensuring strict Quality Assurance and Compliance. Alex transforms product development through rigorous statistical analysis and process optimization, consistently delivering substantial quality enhancements and guaranteeing minimal Post Market Surveillance complaints. As the lead Quality Engineer for numerous global commercial product launches, he has successfully operated in diverse environments, from large corporations to small startups, and across multiple domestic and international locations, including Japan.