Clin Pharmacol Ther. 2016 Dec;100(6):713-729.
doi: 10.1002/cpt.514. Epub 2016 Oct 19.

PIPELINEs: Creating Comparable Clinical Knowledge Efficiently by Linking Trial Platforms

Free PMC article

M R Trusheim et al. Clin Pharmacol Ther.

Abstract

Adaptive, seamless, multisponsor, multitherapy clinical trial designs executed as large-scale platforms could create superior evidence more efficiently than single-sponsor, single-drug trials. These trial PIPELINEs could also diminish barriers to trial participation, increase the representation of real-world populations, and create systematic evidence development for learning throughout a therapeutic life cycle, continually refining its use. Comparable evidence could arise from multiarm designs, shared comparator arms, and standardized endpoints: aiding sponsors in demonstrating the distinct value of their innovative medicines; helping providers and patients select the most appropriate treatments; assisting regulators in efficacy and safety determinations; helping payers make coverage and reimbursement decisions; and spurring scientists with translational insights. Reduced trial times and costs could enable more indications, shorter development cycles, and improved financial sustainability of the system. Challenges to overcome range from the statistical to the operational, including collaborative governance and data exchange.

Figures

Figure 1
Illustrative simple PIPELINE design: a hypothesis-generation basket trial feeds candidates to seamlessly linked Phase II proof-of-concept (PoC)/Phase III umbrella platforms. A pragmatic clinical trial platform then continues studying effectiveness and regimen optimization. A real-world patient repository collects observational and pragmatic-trial information to prompt new hypothesis generation; aid propensity scoring and representativeness analysis of the inclusion/exclusion criteria; and track natural-history progression to refine endpoint impact estimates.
Figure 2
Schematic clinical trial designs. (a) Signature adds and closes cohorts using Bayesian methods. (b) NCI‐MATCH adds and closes treatment arms within a master protocol. (c) Within its master protocol, Lung‐MAP adds and closes treatment arms, each randomized, and graduates winning arms to phase III seamlessly with inferential linking. (d) I‐SPY 2 adaptively randomizes to efficiently find the graduates for I‐SPY 3 or other phase III trials. (e) PIPELINEs will have the scale and diversification of options to allow information to flow in multiple directions. On the left, an adaptive basket approach could identify subpopulations for treatment, which are then compared with appropriate controls in an umbrella approach. On the right, an umbrella-type adaptive basket finds a graduate for phase III, and supplemental indications are then explored in a basket trial.
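The adaptive randomization used by platforms such as I‐SPY 2 is commonly implemented as Bayesian response-adaptive allocation. The sketch below is not from the article; it is a minimal, hypothetical illustration of one standard approach (Thompson sampling with Beta-Bernoulli posteriors), in which each new patient is more likely to be assigned to arms that currently look more effective.

```python
import random

def adaptive_randomize(successes, failures, rng=random):
    """Pick an arm by Thompson sampling: draw one value from each arm's
    Beta posterior (uniform Beta(1,1) prior) and choose the largest draw."""
    draws = [rng.betavariate(1 + s, 1 + f) for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

# Hypothetical 3-arm trial; arm 2 has the highest true response rate.
true_rates = [0.2, 0.3, 0.5]
successes = [0, 0, 0]
failures = [0, 0, 0]
random.seed(1)
for _ in range(300):
    arm = adaptive_randomize(successes, failures)
    if random.random() < true_rates[arm]:   # simulate patient outcome
        successes[arm] += 1
    else:
        failures[arm] += 1

# Allocation per arm; over time it concentrates on better-performing arms.
assignments = [s + f for s, f in zip(successes, failures)]
```

In a platform trial this allocation step would sit inside the master protocol, with arms added or closed as posteriors cross prespecified graduation or futility thresholds.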
Figure 3
Adaptive umbrella‐based PIPELINEs test multiple therapeutics (blue) and combinations (orange) in parallel against a shared comparator arm (black), usually in a single broad indication with multiple subpopulations. Adaptive randomization occurs within each phase, with operationally and inferentially seamless graduation between phases. Real‐world evidence may continue the adaptive umbrella designs or use more classic designs. Not all indications succeed, as indicated by “X”. Arrow thickness connotes patient numbers.
Figure 4
A basket‐based PIPELINE employs hypothesis-generation designs early in development, followed by confirmatory basket designs capable of creating pivotal data for regulatory submission. Real‐world evidence pragmatic designs could employ either basket or more classic approaches. Each row indicates a therapeutic being tested in an indication. Not all indications succeed, as indicated by “X”. Arrow thickness connotes patient numbers. For simplicity, simultaneous multiple therapeutics are not shown.
Figure 5
PIPELINE variety 3: integrated designs link basket, umbrella, adaptive, and pragmatic platforms and pilot new designs that may later be incorporated, such as the SMART design into the I‐SPY 2 adaptive platform. Phases are indicated by color. Abutting platforms use inferentially and operationally seamless graduation. RWE repositories aid all platforms in patient identification, population representativeness assessment, propensity scoring, historical control definition, natural-history understanding, and hypothesis generation. New-design pilots systematically test approaches and standard operating procedures prior to their incorporation into the PIPELINE.
Figure 6
Operational efficiency leads to multiyear time‐saving potential. Classic-case benchmark data are drawn from published analyses by IMS Health, Medidata, and the Tufts Center for the Study of Drug Development, and from academic center studies. PIPELINE benchmarks are taken from current I‐SPY platform performance levels.
Figure 7
PIPELINE feedback loops create a virtuous cycle for their growth and sustainability by providing stakeholder benefits.


