Abstract
See Article, p 664

Randomized controlled trials (clinical trials), like all medical research, should create new knowledge that eventually leads to improvement in patient outcomes. Their value depends on validity (accuracy), reliability (precision), and clinical relevance. Clinical trials are less likely to be biased than observational studies; they are the engine room of evidence-based medicine.1,2 But not all trials are equal, and many have no impact on clinical practice. High standards in study design come at higher cost and complexity, adding substantial burdens to the conduct of clinical trials: extensive regulation and governance requirements, staffing and other resources for patient recruitment and data collection, and unexpected delays. These higher costs require large-scale funding. Investigator-initiated trials are dependent on highly competitive national funding agencies, and industry trials must take risks on upfront cost outlays. Multicenter clinical trials require extensive communications, multisite start-up meetings, variable and sometimes unreliable patient enrollment, paper-based data collection and (usually) web-based data entry, numerous data queries, and, finally, multisite close-outs. This is then followed by centralized data cleaning, statistical analyses, article preparation and journal submission(s), eventual publication, and dissemination of results. After all of this, there may still be a protracted period before changes in practice occur.3 Deficiencies in concept, design, conduct, analysis, or reporting lead to many clinical trials being worthless.4 These challenges cannot be overcome simply with more money or a stronger push for evidence-based practice.
The widely appreciated triple aim of a health care system (ie, experience of care, improving health, reducing costs) has been adapted to a triple aim of clinical research:5 (1) improving the research participant's experience, (2) promoting the health of populations, and (3) reducing the costs of research.

REDUCING RESEARCH WASTE

Worldwide, approximately 85% of research funding and resources (>$200 billion) is wasted, with the findings from this research often viewed as having modest value to health policy makers and clinicians.6 There is no reason to believe the situation is much different in anesthesia and its related subspecialties.7 Clinicians far too often modify their practice on the basis of weak evidence provided by observational (nonrandomized) studies or small randomized trials. Well-designed and -conducted randomized trials provide the most reliable evidence of treatment effect.1,2,8 However, narrow eligibility criteria and inflexible methodologies can limit generalizability. Pragmatic or practical clinical trials aim to include a broader range of patients across diverse health care settings and, therefore, provide greater generalizability.9 This increases their relevance and, thus, their impact. Clinical trials should eventually lead to improved patient outcomes. There needs to be less emphasis on surrogate end points and a stronger focus on patient-reported outcomes (PROs).9,10 PROs should be rated by the patient, giving attention to their assessment of the impact of disease and treatment on their symptoms, function, and health-related quality of life. International, consensus-based, PRO-specific guidelines are now available,11 and it is reassuring to see PROs being increasingly included in anesthesia trials.12–14 Too many clinical trials evaluating the effectiveness of a treatment report outcomes with markedly different definitions.15 This muddies clinical interpretation and weakens the validity of pooling results in meta-analyses.
Replication is a bedrock of science, so clear and consistent definitions of what is being measured are a necessary prerequisite. Efforts to standardize trial end points in anesthesia are well underway.16,17 Missed or slow recruitment to clinical trials is another source of research waste. This situation is likely to improve if potential participants can make more informed choices about their participation. Greater efforts are needed to include trial participants who reflect the racial identity, ethnicity, age, sex, and gender of the surgical and anesthetic population of interest.18 Underprivileged groups, those in low-resource settings, and those with rare diseases must not be left out or left behind.

Consumer Engagement

Patients and the community place a high value on medical research. However, research should not be thought of as being performed "on" patients, but rather "with" patients.19 Access to clinical trials should be a right, not a privilege. There is an expectation by the public, which directly and indirectly largely funds clinical trials, that research has value: the research must be meaningful to them and the wider community. These expectations have political persuasion, with organizations such as the Patient-Centered Outcomes Research Institute (PCORI) in the United States20 and the global Health Technology Assessment international (HTAi)21 wanting to support research that is patient-centered and that includes key stakeholders throughout the research process, including setting the research agenda. Funding agencies and advisory groups such as these also look for diversity, equity, and inclusion. Anesthesia researchers are coming on board (Table 1).
Table 1. Examples of Institutions, Sponsors, and Patients Being Connected for Clinical Trials Through Websites, Mobile Apps, Registries, and Electronic Medical Records

ClinicalTrials.gov (find a study): https://www.clinicaltrials.gov/
SHARE: www.registerforshare.org
ClinTrial Refer: www.clintrialrefer.org.au
TriNetX global health research network: www.trinetx.com
Deep 6 AI: www.deep6.ai
CALIBER platform: https://www.caliberresearch.org
All of Us: https://allofus.nih.gov/
TrialCheck: https://eviticlinicaltrials.com/Services/
Antidote: https://www.antidote.me/

Abbreviations: CALIBER, Cardiovascular Disease Research Using Linked Bespoke Studies and Electronic Health Records; SHARE, Scottish Health Research Register & Biobank.

Future clinical trials will need to be designed around the needs of participants rather than researchers, making it easier for them to participate (eg, embracing digital health to advertise the trials and collect data) and promoting decentralized studies.5 There is a growing number of websites and mobile apps, some including artificial intelligence and machine learning to screen electronic medical records (EMRs), that can enhance the efficiency of connecting sponsors, health care organizations, and patients to active clinical trials (Table 1). In the perioperative setting, local programmers can build on natural language processing of the hospital EMR to identify suitable patients who meet the inclusion criteria for clinical trials.22

Clinical Trial Networks

The conduct and efficient completion of large clinical trials are enhanced within established clinical trial networks.23 Beginning in the 1990s, many anesthesia trial networks have been created across the world. The sharing of expertise, centralized data management and statistical analysis, infrastructure, and other resources improves efficiency and enhances collaborations.
This has a scientific and social benefit for the researchers and expands opportunities for the funding of clinical trials. Collaborative, multicenter clinical trials with sufficient external funding are less likely to be associated with waste.23,24 Analyses of return on investment in Australia and the United States have found a favorable benefit-to-cost ratio of up to 6:1 when clinical trials are conducted within a network.23,25

INNOVATIVE TRIAL DESIGNS

Large, pragmatic multicenter trials aim to enroll a broad spectrum of patients in diverse settings and are rightly considered effectiveness trials that can provide reliable and relevant information to guide clinical practice.9 They are, thus, referred to as practical clinical trials.9 But even these can be burdensome and costly, limiting their progress and delaying their implementation.

Novel Trial Designs

Novel trial designs facilitate enrollment of large numbers of patients at modest cost, offering greater opportunities to demonstrate clinical effectiveness.26 The conventional parallel, 1:1 randomized controlled trial should be familiar to all readers, but other common designs include crossover, cluster, and factorial trials. These are not new but remain uncommonly used in anesthesia research. There has been more recent interest in cluster-crossover and stepped-wedge designs.26 These novel trial designs usually increase clinician engagement, lessen the complexity of site initiations because they can be oriented to the actual treatment sites are expected to deliver at the relevant time, simplify patient enrollment (some do not require individual consent), and, most often, compare existing treatments for which there is more likely to be equipoise and clinician interest.

Adaptive Designs and Platform Trials

The effort and time required to design each new clinical trial in any particular area of medicine result in long delays and lead times before patient recruitment can even begin.
This includes a new cycle of ethics and governance review for each trial, production of a new case report form and numerous other documents, and repeat site initiations. The principles underpinning platform trials and other adaptive designs include greater efficiency with typically minimal data collection, at less expense, targeting a single disease entity, aiming to answer more than 1 question, and building on accruing knowledge. They have become popular in many fields of medicine.27 In a platform trial, patients with a single disease are randomly assigned to one of several treatment groups with an overarching goal of determining whether any of the treatments under study has benefit.27 The established study team(s), infrastructure, trial procedures, and networks are invaluable and should be carried over from 1 treatment to another within the overarching platform. Ineffective or harmful treatments can be discarded, and promising treatment options can be added, during the lifecycle of the trial. The platform can continue to exist into the future. The master protocol of the platform defines the study population(s) and incorporates standardized trial procedures, including data collection and statistical analyses.27 Variations on the platform theme include basket and umbrella trials, which have particular relevance for cancer and chronic diseases.27,28 Adaptive design elements that can be modified during the conduct of a clinical trial include: (1) study population, (2) sample size, (3) allocation ratio, (4) study drug administration or dose, and (5) eligibility criteria.29 Adaptive trials will nearly always have an unknown final sample size because modifications are based on the analyses of accumulating data. If one or more treatment groups seem to have a more favorable treatment effect, then a modification to the random assignment ratio favoring this group(s) (response-adaptive randomization) can be included, which should provide additional ethical equipoise.
However, clear adaptation rules should be prespecified in the study protocol. Adaptive trials may increase the safety of participants because treatments found to be harmful or ineffective, or patients at greater risk of side effects or more likely to be nonresponsive, can be excluded based on accumulating data. Adaptive designs reduce the risk of a failed or "negative" trial by embracing flexibility from the initial stages and throughout the conduct of the trial, always seeking to pick winners (regression to the efficacious) while maintaining scientific and statistical rigor. These methods do, however, require complex statistical planning and analyses.28 An excellent example of a platform trial is the ongoing Randomized Evaluation of COVID-19 Therapy (RECOVERY) trial emanating from the United Kingdom.30 This enterprise began early in the coronavirus disease 2019 (COVID-19) pandemic by studying dexamethasone, hydroxychloroquine, lopinavir-ritonavir, and azithromycin in a randomized adaptive design. Evaluations of aspirin, colchicine, convalescent plasma, and tocilizumab have followed, and more are ongoing. RECOVERY has become the world's largest collaborative clinical trial, with >180 hospitals and 40,000 hospitalized patients to date. There is no doubt that the findings from RECOVERY demonstrating a reduction in mortality with dexamethasone have saved millions of lives worldwide. This occurred within the first 6 months of the trial! RECOVERY offers a roadmap for how many more clinical trials could be conducted in the future: linking into existing trial networks and health care organizations, using simple but robust methods, and evaluating existing and novel therapeutics to solve the many global health challenges.
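The response-adaptive randomization described above can be sketched in a few lines of code. This is a minimal illustration only, not a trial-ready algorithm: it uses a simple Thompson-style weighting in which each arm's allocation probability is proportional to its posterior mean success rate under a Beta(1 + successes, 1 + failures) model. The function names and the prespecified rule are invented for illustration; a real adaptive trial would define and simulate its adaptation rules in the statistical analysis plan.

```python
import random

def allocation_probs(successes, failures):
    """Illustrative response-adaptive weighting: each arm's allocation
    probability is proportional to its Beta(1+s, 1+f) posterior mean."""
    means = [(1 + s) / (2 + s + f) for s, f in zip(successes, failures)]
    total = sum(means)
    return [m / total for m in means]

def assign_arm(successes, failures, rng=random):
    """Randomly assign the next participant, favoring better-performing arms
    while still allowing every arm a nonzero chance of selection."""
    probs = allocation_probs(successes, failures)
    r = rng.random()
    cumulative = 0.0
    for arm, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return arm
    return len(probs) - 1

# Example: arm 1 has accumulated more successes on interim analysis,
# so its allocation share rises relative to arm 0.
probs = allocation_probs(successes=[10, 30], failures=[30, 10])
```

Note that every arm retains a positive allocation probability, which preserves randomization while shifting enrollment toward the apparently better treatment, consistent with the ethical rationale described above.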
For anesthesiologists, there is no reason why platforms and networks could not be established to systematically study pain relief in labor, postoperative respiratory complications, delirium, sepsis, cancer recurrence, and the many other big topics in perioperative medicine.

COMPARATIVE EFFECTIVENESS RESEARCH

Comparative effectiveness research (CER) is an all-encompassing category, aiming to generate evidence demonstrating the impact of readily available interventions in real-world settings, that is, outside the often-ideal conditions of a traditional clinical trial. Electronic medical and health record systems collect extensive process and outcome data that should be a valuable resource for clinical research at lower cost.31 The core bases of CER are broad eligibility criteria, available interventions, simple procedures, and routinely collected outcome data, within low-burden, cost-efficient designs. Individual patient consent may not be necessary.

Figure. Designing low-cost, efficient clinical trials that minimize research waste and maximize impact to improve outcomes for our patients. RCT indicates randomized controlled trial.

CER most often uses information available in EMRs, registries, and other observational data sources.31–35 While these data sources are commonly promoted as an alternative to randomized trial data as a source of real-world evidence, any nonrandomized treatment decisions will always raise doubts about bias and confounding.1,34,35 The magic of randomization8 is too often underappreciated in discussions of CER. Importantly, CER can also be done within a randomization construct (Figure).8,33,34

Embedding Randomized Trials Within Registries

National and international disease and device registries abound. These typically monitor demographic, clinical, device, and/or other treatment and outcome data. They are an ideal resource for real-world data and, as stated above, an excellent source for CER.
A registry-based randomized trial is one in which treatment and outcome data are collected in an existing registry, but with random assignment to the treatment under study. This fulfills some of the ideals of CER, namely, addressing questions in real-world settings, at low cost, and with enhanced generalizability of findings (including their implementation) when compared with conventional, otherwise generalizable, pragmatic trials, but with the additional strength of randomization.8,34 Registry-based randomized trials are exemplars of CER. EMRs also offer an opportunity to automatically screen patients on admission to identify those who would be eligible for participation in any clinical trial conducted at that institution.22 Smartphone apps, mobile phone-based interventions and reminders, just-in-time adaptive interventions,36 and other information technology-based innovations can greatly simplify enrollment, randomization, and follow-up outcome data collection. Data scientists will play an increasing role in the clinical trials of the future. Aspirin Dosing: A Patient-Centric Trial Assessing Benefits and Long-Term Effectiveness (ADAPTABLE)37 is an excellent example of randomized CER that takes advantage of electronic databases. This trial compared 2 dosages of aspirin (81 vs 325 mg) for the secondary prevention of cardiovascular disease in the community setting, using creative but readily available digital health methods to simplify the identification, recruitment, and follow-up of patients. These included the use of an algorithm to interrogate EMR data to identify eligible patients across 40 centers within the National Patient-Centered Clinical Research Network, with patient consent simplified via a website that also notified patients of their randomly assigned aspirin regimen (which they then purchased themselves). All trial visits were done virtually or by telephone, with outcomes ascertained remotely and without adjudication.
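The kind of automated EMR eligibility screening used in ADAPTABLE and described above can be illustrated with a deliberately simplified, hypothetical sketch. All field names, diagnoses, and criteria below are invented for illustration; a real screen would be built against the institution's EMR schema and the trial's prespecified inclusion and exclusion criteria, often with natural language processing of clinical notes.

```python
# Hypothetical rule-based pre-screen of EMR records for trial eligibility.
# Field names and criteria are illustrative only, not ADAPTABLE's actual rules.

def is_eligible(record, min_age=18):
    """Apply simple inclusion/exclusion rules to one patient record (a dict)."""
    has_target_dx = "atherosclerotic cardiovascular disease" in record.get("diagnoses", [])
    on_anticoagulant = record.get("anticoagulant", False)  # illustrative exclusion
    return record.get("age", 0) >= min_age and has_target_dx and not on_anticoagulant

def screen(records):
    """Return the IDs of patients who pass the automated pre-screen,
    for confirmation and consent by the study team."""
    return [r["id"] for r in records if is_eligible(r)]

patients = [
    {"id": "A1", "age": 67, "diagnoses": ["atherosclerotic cardiovascular disease"]},
    {"id": "B2", "age": 54, "diagnoses": ["asthma"]},
    {"id": "C3", "age": 71, "diagnoses": ["atherosclerotic cardiovascular disease"],
     "anticoagulant": True},
]
eligible = screen(patients)  # only "A1" passes all rules
```

Such a pre-screen does not replace clinical judgment or consent; it narrows the pool that study staff must review, which is where most of the efficiency gain lies.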
Trial outcomes were automatically collated via multiple data sources, including patient report at scheduled visits, EMRs, and private health plan or US Centers for Medicare and Medicaid Services (CMS) claims data. They found no significant differences in cardiovascular events or major bleeding between the 2 aspirin dose regimens. Long-term adherence was better with the 81-mg dosing strategy. The practical message from this trial is that higher doses of aspirin for secondary prevention offer no benefits but cause some harm for people with cardiovascular disease. This trial has impact: an important question answered with widely applicable, patient-centered information. An interesting innovation that incorporates a novel trial design in a CER construct is the multiple-period, cluster randomized, crossover trial design.38 This takes advantage of a robust design with internal validity (very low bias), simplified procedures as outlined above for other novel designs, and a real-world setting that confers external validity (generalizability). Surely, this is the ideal CER?

IMPLEMENTATION SCIENCE

One of the biggest frustrations for those looking to promote evidence-based practice is the delay between the acquisition of new knowledge (clinical trial results) and the translation of that knowledge into improvements in patient care.3 Future clinical trials should have an implementation plan built into the protocol.
There is recently published guidance on implementation strategies for clinical trials, including reporting of trial aims, trial recruitment and retention strategies, randomization and design methods, use of implementation science theory and frameworks, and trial reporting.39 Implementation science is "the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services."40 It considers behavioral, clinical, and organizational contexts, as well as the broader community and policy contexts. The importance of implementation science should not be limited to investigator-initiated trials. A growing number of national drug-approval agencies are looking to new drug and medical device registries for ongoing evidence of beneficial health outcomes in a so-called lifecycle approach.41 An expert group has proposed a set of guiding principles for both the pharmaceutical industry and device manufacturers to acquire and report comparative data in the postmarketing period.33 This includes the development of "customized evidence generation plans" to be implemented postapproval, incorporating active comparators and, preferentially, randomized and innovative designs. Such a program needs to include financial incentives and penalties,33 and may include expansion or contraction of labeled indications.

A LEARNING HEALTH CARE SYSTEM

Hospitals of the future need the active participation of clinicians and patients in research. Health services and health care systems whose clinicians conduct more research tend to have better survival rates, greater organizational efficiency, higher staff retention, and greater patient and staff satisfaction.42 Integration of clinical research with medical practice can speed up the pace and spread of new knowledge emanating from clinical trials.
This is a feature of self-improving, sustainable health care systems.43,44 It is underpinned by instilling a culture of shared responsibility for maximizing recruitment into clinical trials, involving clinicians, patients, and administrators.5,19,43

NEED FOR TRAINING

All medical disciplines, including anesthesia and its related subspecialties,7 and all hospitals need to build academic capacity. Training in the basics of research methodology, statistics, and critical appraisal should begin in medical school and permeate every level of clinical practice. Opportunities for advanced training to masters or doctorate level should be encouraged and supported for interested clinicians.

CONCLUSIONS

The future of clinical trials depends on relevance and reliability, with clear evidence of cost-effective improvements in desired health outcomes and widespread implementation.23 This requires training and experience in conducting clinical trials, consumer engagement, adequate funding, implementation science, and a learning health care system. Novel trial designs, clinical trial networks, and digital health can enhance the efficiency of research.

DISCLOSURES

Name: Paul S. Myles, MPH, MD, DSc.
Contribution: This author helped write all drafts and devise the Table and Figure.
This manuscript was handled by: Thomas R. Vetter, MD, MPH.