
Pitfalls in Auto-Injector Verification Reports

Opening Scenario: The Illusion of Passing Data

Imagine you’re reviewing a verification report for a combination product that includes a widely used device platform. Everything appears pristine: tables show neat rows of values, acceptance criteria are met, and the conclusion reads confidently, “All results pass.” But as you look closer, questions pile up. The numbers are rounded beyond usefulness, statistical assumptions are unverified, and, most concerning of all, there’s no narrative to explain what the data means. You ask for raw data and supporting analysis, and the rabbit hole only deepens.

Sound familiar? Unfortunately, this scenario plays out far too often when sponsors accept surface-level reports from external test sites without critical evaluation. And it introduces real risk to regulatory success.

The Temptation of “Good Enough” Reports

External test sites, especially those working with established delivery platforms, often reuse standardized templates for verification reporting. These documents may appear complete at first glance but are frequently designed for speed and efficiency, not transparency. Sponsors may find themselves reviewing a report that includes:

  • Rounded summary statistics that obscure the data distribution (see the short example after this list)
  • Missing or overly simplified descriptions of statistical methods
  • No discussion of assumptions (e.g., normality) or rationale for transformations
  • A conclusion that offers no insights beyond “all tests passed”
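
To make the rounding point concrete, here is a minimal Python sketch with two entirely hypothetical groups of results. Their means round to the same value, so a rounded summary table treats them as interchangeable, even though their spreads differ by an order of magnitude.

    import numpy as np

    # Two hypothetical verification groups whose means both round to 8
    group_a = np.array([8.0, 8.1, 7.9, 8.0, 8.1, 7.9, 8.0, 8.1, 7.9, 8.0])
    group_b = np.array([7.0, 9.0, 7.1, 8.9, 7.0, 9.1, 6.9, 9.0, 7.1, 8.9])

    for name, group in (("A", group_a), ("B", group_b)):
        # A summary row reporting only "mean = 8" hides the difference in spread
        print(f"Group {name}: rounded mean = {group.mean():.0f}, "
              f"sd = {group.std(ddof=1):.2f}, "
              f"range = {group.min():.1f}-{group.max():.1f}")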

This can leave sponsors with two bad options: either trust the report at face value, or dig through raw outputs, spreadsheets, and statistical formulas themselves, often weeks after the data was generated.

Where This Goes Wrong: Statistical Rigor Without Context

Verification analysis is more than just math. While many test sites use robust tools that walk through tolerance limit calculations and normality checks, the value is lost when interpretation is absent. For instance:

  • Reasoning based on Chebyshev’s inequality may indicate that a normality check isn’t needed when results fall well within spec (see the sketch after this list). But this rule of thumb shouldn’t override thoughtful analysis, especially when other datasets from equivalent groups behave differently.
  • Data transformations are a valid statistical tool, but applying them without discussing why a group deviates from normality (e.g., unusual distribution in one out of three near-identical samples) misses an opportunity to uncover real root causes.
  • Test artifacts, component lot mixing, or operator variability can all explain anomalies. But if no one is asking these questions, valuable insights go undiscovered.
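
As a sketch of the Chebyshev point above, the following Python snippet uses hypothetical injection-time data, invented spec limits, and an assumed alpha of 0.05. It shows how a distribution-free bound can look reassuring even though an explicit normality check (here, Shapiro-Wilk) still belongs in the report.

    import numpy as np
    from scipy import stats

    # Hypothetical injection-time results (seconds) for one verification group
    results = np.array([8.1, 8.3, 7.9, 8.2, 8.0, 8.4, 8.1, 8.2, 7.8, 8.3])
    lower_spec, upper_spec = 5.0, 15.0  # invented specification limits

    mean, sd = results.mean(), results.std(ddof=1)

    # Distance from the mean to the nearest spec limit, in standard deviations
    k = min(mean - lower_spec, upper_spec - mean) / sd

    # Chebyshev: P(|X - mean| >= k*sd) <= 1/k**2 for ANY distribution.
    # With sample estimates, this is an informal screen, not a guarantee.
    print(f"Nearest limit is {k:.1f} sigma away; Chebyshev bound <= {1 / k**2:.4f}")

    # The distribution-free bound is no substitute for testing, and reporting,
    # the normality assumption behind parametric tolerance limits.
    w_stat, p_value = stats.shapiro(results)
    verdict = "no evidence against" if p_value > 0.05 else "evidence against"
    print(f"Shapiro-Wilk W = {w_stat:.3f}, p = {p_value:.3f} ({verdict} normality)")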

Without a thoughtful narrative, even technically correct analysis can feel hollow and invite regulatory questions that delay your timeline.

Regulatory Expectations: Transparency Matters

While regulators like the FDA may not always require full raw data in an initial submission, reviewers are trained to spot red flags in statistical analysis. When summary reports lack clear justification, discussion, or reference to underlying assumptions, reviewers may issue follow-ups or even request additional testing.

In our experience, sponsors are better served when they proactively:

  • Obtain and retain full data packages from test sites, including raw data, analysis workbooks, and documentation of assumptions
  • Request explicit statements regarding statistical methods, e.g., how normality was tested and what thresholds were used (illustrated in the sketch after this list)
  • Ensure narrative discussion is included in the report, especially for outlier groups or non-normal data
  • Establish expectations up front with external test sites about the level of detail and transparency required
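
As one illustration of what such an explicit statement can back up, the sketch below reuses the hypothetical data from the earlier snippet and computes a one-sided normal tolerance limit with the normality test, confidence, coverage, and thresholds all named in code. Every value (the dataset, the 95%/99% parameters, the 0.05 alpha) is an illustrative assumption, not a recommendation; the point is that nothing is left implicit.

    import numpy as np
    from scipy import stats

    # Hypothetical results for one verification group (seconds)
    results = np.array([8.1, 8.3, 7.9, 8.2, 8.0, 8.4, 8.1, 8.2, 7.8, 8.3])
    n = len(results)
    mean, sd = results.mean(), results.std(ddof=1)

    # Documented choices: 95% confidence / 99% coverage tolerance limit,
    # Shapiro-Wilk normality test at alpha = 0.05 (all illustrative)
    confidence, coverage, alpha = 0.95, 0.99, 0.05

    # State the normality test and its threshold up front
    _, p_value = stats.shapiro(results)
    if p_value <= alpha:
        raise ValueError("Normality rejected; a parametric limit needs justification")

    # One-sided normal tolerance factor via the noncentral t distribution:
    # k = t'(confidence; df = n-1; nc = z_coverage * sqrt(n)) / sqrt(n)
    z_p = stats.norm.ppf(coverage)
    k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)

    print(f"Shapiro-Wilk p = {p_value:.3f} (documented threshold: alpha = {alpha})")
    print(f"Upper {confidence:.0%}/{coverage:.0%} tolerance limit = {mean + k * sd:.2f} s")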

As one of our regulatory SMEs shared during internal review, even if an analysis appears acceptable, the tone of the review can shift dramatically if the broader application has other areas of concern. Sparse documentation becomes a liability in contentious reviews.

Why This Matters: The Cost of Rework and Review Delays

When sponsors fail to scrutinize verification data until after a submission, they invite the risk of FDA information requests (IRs), longer review timelines, and even costly supplemental testing. By contrast, embedding expert judgment early in the review of test site deliverables allows sponsors to:

  • Address ambiguities before submission
  • Preempt reviewer concerns
  • Build confidence in product performance

At Suttons Creek, we routinely support clients in preemptively identifying these issues and establishing stronger partnerships with their test sites. We’ve seen firsthand how a thoughtful approach to verification reporting can reduce regulatory friction and accelerate approval.

If you’re navigating similar challenges or preparing for a complex combination product submission, we’d be happy to share more insights. Contact us at discuss@suttonscreek.com.

AUTHOR

Bryan Bobo, Associate Technical Director, Suttons Creek – With over 9 years of experience in the pharmaceutical industry, Bryan has extensive knowledge of combination product development and what it takes to bring products to market. He has practical, hands-on experience with various types of drug delivery systems from his years as a Device Development Engineer and in project and program management roles. In addition to his direct product development experience, Bryan engages with the global combination product community through his participation on international standards committees and various other industry forums.