
How informative were early SARS-CoV-2 treatment and prevention trials? A longitudinal cohort analysis of trials registered on ClinicalTrials.gov

  • Nora Hutchinson,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Studies of Translation, Ethics, and Medicine (STREAM), Biomedical Ethics Unit, McGill University, Montreal, Québec, Canada

  • Katarzyna Klas,

    Roles Conceptualization, Data curation, Investigation, Methodology, Writing – review & editing

    Affiliation Faculty of Health Sciences, Research Ethics in Medicine Study Group (REMEDY), Jagiellonian University Medical College, Krakow, Poland

  • Benjamin G. Carlisle,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Writing – review & editing

    Affiliation BIH QUEST Center for Transforming Biomedical Research, Berlin Institute of Health at Charité (BIH), Berlin, Germany

  • Jonathan Kimmelman,

    Roles Methodology, Writing – review & editing

    Affiliation Studies of Translation, Ethics, and Medicine (STREAM), Biomedical Ethics Unit, McGill University, Montreal, Québec, Canada

  • Marcin Waligora

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Supervision, Writing – review & editing

    m.waligora@uj.edu.pl

    Affiliation Faculty of Health Sciences, Research Ethics in Medicine Study Group (REMEDY), Jagiellonian University Medical College, Krakow, Poland

Abstract

Background

Early in the SARS-CoV-2 pandemic, commentators warned that some COVID-19 trials were inadequately conceived, designed and reported. Here, we retrospectively assess the prevalence of informative COVID-19 trials launched in the first 6 months of the pandemic.

Methods

Based on prespecified eligibility criteria, we created a cohort of Phase 1/2, Phase 2, Phase 2/3 and Phase 3 SARS-CoV-2 treatment and prevention efficacy trials that were initiated from 2020-01-01 to 2020-06-30 using ClinicalTrials.gov registration records. We excluded trials evaluating behavioural interventions and natural products, which are not regulated by the U.S. Food and Drug Administration (FDA). We evaluated trials on 3 criteria of informativeness: potential redundancy (comparing trial phase, type, patient-participant characteristics, treatment regimen, comparator arms and primary outcome), trial design (according to the recommendations set out in the May 2020 FDA guidance document on SARS-CoV-2 treatment and prevention trials) and feasibility of patient-participant recruitment (based on timeliness and success of recruitment).

Results

We included all 500 eligible trials in our cohort; 58.0% were Phase 2 trials and 84.8% were directed towards the treatment of SARS-CoV-2. Close to one third of trials met all three criteria and were deemed informative (29.9%; 95% confidence interval (CI) 23.7–36.9%). The proportion of potentially redundant trials in our cohort was 4.1%. Over half of the trials in our cohort (56.2%) did not meet our criteria for high-quality trial design. The proportion of trials with infeasible patient-participant recruitment was 22.6%.

Conclusions

Less than one third of COVID-19 trials registered on ClinicalTrials.gov during the first six months of the pandemic met all three criteria for informativeness. Shortcomings in trial design, recruitment feasibility and redundancy reflect longstanding weaknesses in the clinical research enterprise that were likely amplified by the exceptional circumstances of a pandemic.

Introduction

Starting in early 2020, commentators warned of COVID-19 clinical trial design deficiencies and a lack of coordination of research efforts [1–4]. The large volume of small trials investigating the efficacy of repurposed medications, such as hydroxychloroquine, in the treatment of COVID-19 drew particular attention [5,6]. Such studies hampered an effective public health response by producing spurious findings, or by diverting patients and resources from well-designed and well-executed studies.

Appropriate design, implementation and reporting is captured by the concept of trial “informativeness” [3,7]. For a trial to be informative to clinical practice, it must fulfill five conditions [3,7]. First, it must ask a clinically important question. Second, it must be designed to provide a clear answer to that question. Third, it must have both a feasible enrollment target and primary completion timeline. Fourth, it must be analyzed in a manner that supports statistically valid inference. Fifth, it must report results in a complete and timely manner [3,7].

In the following longitudinal cohort analysis of SARS-CoV-2 treatment and prevention trials registered within the first 6 months of 2020, we assess three features of an informative clinical trial—potential redundancy, design quality and feasibility of patient-participant recruitment. Multiple cross-sectional analyses and systematic reviews of SARS-CoV-2 treatment and prevention trials have been performed [2,5,6,8–11], reporting on intervention types, study characteristics and choice of outcome measure. We go beyond a description of trial characteristics and provide the first in-depth evaluation of SARS-CoV-2 trial informativeness. Knowing the prevalence of potentially uninformative trials conducted in the early stages of the pandemic can help motivate the development of more effective research policy in anticipation of future public health crises.

Methods

Sample, design and trials selection

Our cohort consisted of interventional SARS-CoV-2 treatment and prevention trials registered on ClinicalTrials.gov with a start date between 2020-01-01 and 2020-06-30. We included “Completed”, “Terminated”, “Suspended”, “Active, not recruiting”, “Enrolling by invitation” and “Recruiting” Phase 1/2, Phase 2, Phase 2/3 and Phase 3 interventional clinical trials testing an efficacy hypothesis in their primary outcome. We included trials evaluating any of the following interventions: drug, biological, surgical, radiotherapy, procedural or device. We excluded trials evaluating behavioural interventions, trials of natural products and Phase 1 trials, all of which have no legal requirement to register on ClinicalTrials.gov [12]. See S1 File for complete inclusion/exclusion criteria. Trial inclusion and exclusion criteria were independently assessed by two researchers (KK & LZ), with disagreements resolved by an arbiter (NH or MW). We did not perform a sample size calculation, as we included all trials meeting our eligibility criteria within our designated sampling timeframe.

Data curation

We downloaded clinical trial data directly as a zipped folder of XML files from the web front-end of ClinicalTrials.gov on 2020-12-01 and again on 2021-01-04 (see S2 File for ClinicalTrials.gov search criteria). This allowed us to evaluate data at the 6-month mark (from date of trial start) for all trials in our cohort (see S3 File for data directly downloaded from ClinicalTrials.gov). Additional items requiring human curation were independently assessed and coded by two researchers (KK & LZ); these included: i) treatment type (according to the World Health Organization (WHO) COVID-19 Classification of treatment types [13]); ii) illness severity (as stated by the study investigators or guided by the WHO disease severity classification [14]); iii) location of care (ambulatory, hospitalized, intensive care, unclear/not stated); iv) presence of a placebo or standard of care arm; and, v) type of primary outcome (clinical, surrogate, procedural) (see S4 File for additional double-coded data points). Disagreements were resolved by an arbiter (NH or MW) (see S1 Table for inter-rater agreement).

Measures

Trials were assessed based on three elements of informativeness: i) potential redundancy (as a marker of trial importance); ii) trial design quality; and iii) successful patient-participant recruitment (as a marker of feasibility). Assessment criteria for each element were designed based on face validity and easy applicability over a large trial sample.

Potential redundancy.

We assessed potential redundancy by evaluating non-redundancy of the trial hypothesis. Non-redundancy was defined as: absence of a trial of the same phase, type of trial (SARS-CoV-2 prevention versus treatment), patient-participant characteristics (including location of care, disease severity and age of trial participants), regimen (including interventions used in combination in a single arm), comparator arm(s) and primary outcome (evaluating primary outcome domain and specific measurement, based on framework from [15]) launched prior to the start date of the trial of interest (as indicated in the registration record active at the 6-month mark since trial start) (S5 File). Only the trial with the later start date was labelled as potentially redundant. The assessment was independently performed by two raters (NH & KK), with disagreements resolved by an arbiter (MW or BC). We performed an additional post hoc assessment applying a broad criterion for trial similarity, which we defined as presence of a trial with an earlier start date of the same type, phase, patient-participant characteristics and treatment regimen.
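
As a rough sketch, the matching rule above amounts to grouping trials by their full design signature and flagging all but the earliest-starting trial in each group. The field names below are hypothetical illustrations, not the authors' actual coding scheme:

```python
from datetime import date

def flag_potentially_redundant(trials):
    """Flag trials whose full design signature matches an earlier-starting trial.

    Each trial is a dict with hypothetical keys: 'id', 'start' (a datetime.date)
    and the six matching attributes used in the redundancy criterion.
    """
    groups = {}
    for t in trials:
        key = (t["phase"], t["type"], t["population"],
               t["regimen"], t["comparator"], t["primary_outcome"])
        groups.setdefault(key, []).append(t)
    redundant = set()
    for matched in groups.values():
        if len(matched) > 1:
            matched.sort(key=lambda t: t["start"])
            # only trials starting after the earliest match are labelled redundant
            redundant.update(t["id"] for t in matched[1:])
    return redundant
```

Under the broader post hoc similarity criterion, the comparator and primary-outcome fields would simply be dropped from the grouping key.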

Design quality.

We analyzed trial design quality for those studies in our sample that were aimed at informing clinical practice, namely Phase 2/3 and Phase 3 trials. Based on the U.S. Food and Drug Administration (FDA) May 2020 guidance document for SARS-CoV-2 drug and biological treatment and prevention trials [16], we considered a trial to be well-designed if it was randomized, placebo-controlled or with a standard of care comparator arm, double-blinded and included participants aged 60 years or over (as a proxy for an at-risk population). To be considered well-designed, a trial must also measure an appropriate primary outcome: a clinical primary outcome in the case of trials aimed at treating COVID-19, or the presence of laboratory-confirmed SARS-CoV-2 infection for trials testing a preventive measure.
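
These checks can be expressed as a simple decision rule. The sketch below, with hypothetical field names, is one possible encoding of the criteria and is not the authors' actual instrument:

```python
def is_well_designed(trial):
    """Apply the five design-quality checks derived from the May 2020 FDA guidance.

    'trial' is a dict with assumed boolean/string fields; all field names
    are illustrative.
    """
    if not trial["randomized"]:
        return False
    if trial["comparator"] not in ("placebo", "standard of care"):
        return False
    if not trial["double_blinded"]:
        return False
    if not trial["includes_60_plus"]:  # proxy for an at-risk population
        return False
    if trial["type"] == "treatment":
        # treatment trials require a clinical primary outcome
        return trial["primary_outcome"] == "clinical"
    # prevention trials: clinical outcome or laboratory-confirmed infection
    return trial["primary_outcome"] in ("clinical", "lab-confirmed infection")
```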

Feasibility of patient-participant recruitment.

We assessed timeliness and success of patient-participant recruitment for each trial in our cohort. A single trial was considered non-feasible if it met any of the following criteria: i) trial status was “terminated” or “suspended” and the reason for stopping was unrelated to trial efficacy, safety or the progression of science; ii) trial status was “completed” or “active, not recruiting” and final enrollment was less than 85% of the anticipated enrollment reported in the trial registration at the time of trial launch (given concerns for compromised statistical power for the primary outcome when recruitment falls below this threshold, based on previously published methods [17]); or, iii) trial status was “recruiting” or “enrolling by invitation” and the recruitment period had been extended to at least twice the anticipated length in the version of the ClinicalTrials.gov registration record at the time of trial start.
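
Stated as code, the three non-feasibility rules might look like the following sketch; the status strings follow ClinicalTrials.gov, while the argument names are assumptions for illustration:

```python
def is_feasible(status, stop_reason_unrelated=False,
                actual_enrollment=None, anticipated_enrollment=None,
                actual_recruit_days=None, planned_recruit_days=None):
    """Return False when a trial meets any of the three non-feasibility rules."""
    status = status.lower()
    if status in ("terminated", "suspended"):
        # rule i: stopped for a reason unrelated to efficacy, safety or science
        return not stop_reason_unrelated
    if status in ("completed", "active, not recruiting"):
        # rule ii: final enrollment below 85% of anticipated enrollment
        return actual_enrollment >= 0.85 * anticipated_enrollment
    if status in ("recruiting", "enrolling by invitation"):
        # rule iii: recruitment period at least twice the anticipated length
        return actual_recruit_days < 2 * planned_recruit_days
    return True
```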

Data analysis

We report the overall proportion of trials meeting all three criteria of informativeness (potential redundancy, design quality and feasibility of patient-participant recruitment) as well as the proportion meeting each of our three criteria. We performed a stratified analysis of the proportion of i) non-redundant; ii) well-designed; and iii) feasible trials by sponsor (industry versus non-industry), trial country location (USA versus non-USA), trial type (treatment versus prevention) and number of trial centers (single center versus multicenter). Ninety-five percent confidence intervals were calculated for the difference between two proportions using the prop.test function in R [18]. All tests were 2-tailed. We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines for cohort studies (S1 Checklist) [19].
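
For intuition, a confidence interval for a difference of two proportions can be sketched as below. This shows the plain Wald interval; R's prop.test applies a continuity correction by default, so its numbers will differ slightly:

```python
from math import sqrt

def diff_ci(x1, n1, x2, n2, z=1.96):
    """95% Wald confidence interval for the difference p1 - p2 between two
    proportions, given successes x and sample sizes n for each group.
    Simplified relative to R's prop.test (no continuity correction)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se
```

For example, `diff_ci(50, 100, 30, 100)` yields an interval of roughly (0.07, 0.33) around the observed difference of 0.20.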

Tools and data synthesis

We performed data extraction using Numbat Systematic Review Manager v. 2.11 (RRID:SCR_019207) [20]. All analyses were performed using R version 3.6.3 [21]. We retrieved historical versions of ClinicalTrials.gov registration records using the R package ‘cthist’ (RRID:SCR_019229).

Our study was not subject to Institutional Review Board/Ethics Committee approval, as it relies on publicly accessible data and did not involve interaction with research participants. The study protocol was prospectively registered on Open Science Framework [22]. We listed the deviations from the protocol in S6 File. The code [23] and data sets [22] used in this analysis are available online.

Results

We included 500 interventional SARS-CoV-2 treatment and prevention efficacy trials (see S1 Fig for flow diagram). The number of included trials was not predetermined; it reflects all trials that met our eligibility criteria. The majority (58.0%) of trials in our cohort were Phase 2 trials; 84.6% were randomized; 84.8% were directed at the treatment of SARS-CoV-2. Study status at 6 months since trial start was “Completed” in 54 of 500 trials (10.8%) and “Recruiting” in 67.0% (Tables 1, S2 and S3). Median anticipated enrollment per trial (based on the enrollment stated in the last registration record prior to trial start) was 180 patient-participants (range 5–15,000 patient-participants; interquartile range (IQR) 60–437). Median actual patient-participant enrollment at the 6-month mark, among trials that provided actual enrollment numbers, was 129 (range 0–4,891 patient-participants; IQR 32–320).

Less than one third (29.9%, 95% CI 23.7–36.9%) of the 194 trials eligible for assessment of all 3 criteria were deemed informative. Nineteen trials were classified as potentially redundant (4.1%), of which 10 investigated convalescent plasma and a further 4 investigated hydroxychloroquine. Sixty-three trials (13.6%) differed only by primary outcome. In our post hoc analysis, 81.9% (380 of 464 trials) were similar with respect to trial type, regimen, phase and patient-participant characteristics.

Of the subset of 210 Phase 2/3 and Phase 3 trials in our cohort, 92 (43.8%) met our criteria for trial design quality (Fig 1; Table 2). The proportion of feasible trials in our cohort was 77.4% (387 of 500 trials); 113 trials were non-feasible. Of these, 12 were “Suspended” or “Terminated” for a reason unrelated to efficacy, safety or the progression of science; 20 trials were “Active, not recruiting” or completed but failed to enroll at least 85% of their target patient-participant enrollment (S2 Fig); 81 trials still “Recruiting” had exceeded at least two times the intended recruitment period (S3 Fig).

Fig 1. Flow diagram for trial design quality of Phase 2/3 and Phase 3 SARS-CoV-2 trials.

a) Refers to a trial that is either placebo-controlled or has a standard of care comparator arm. b) Refers to a treatment trial with a clinical primary outcome, or a prevention trial with either a clinical primary outcome or laboratory-confirmed SARS-CoV-2 infection as the primary outcome.

https://doi.org/10.1371/journal.pone.0262114.g001

Table 2. Evaluation of design quality of trials meant to inform clinical practice.

https://doi.org/10.1371/journal.pone.0262114.t002

Discussion

Prior studies have examined the COVID-19 trial landscape, evaluating trial design quality [24,25], choice of outcome [26], and presenting descriptive statistics on COVID-19 trial characteristics [2,5,6,8–11]. This is the first study to assess the prevalence of informative COVID-19 clinical trials. In our analysis, 29.9% of early COVID-19 trials registered on ClinicalTrials.gov met our 3 criteria for informativeness. Many (56.2%) did not use rigorous design, based on assessment of randomization, control group, blinding, primary outcome, and inclusion of an at-risk population. Of these, the greatest number (110 of 210 trials, 52.4%) did not demonstrate adequate blinding. Lack of blinding among COVID-19 trials has been highlighted in several recent analyses [2,5,6,9,10] and may reflect the challenges of trial conduct in pandemic circumstances, in which significant research infrastructure and oversight is required to implement and maintain blinding. Yet, deficits in trial design were not uniform. Our stratified results (Table 3) demonstrated that trials with at least one center in the USA, trials with industry sponsorship, SARS-CoV-2 prevention trials and multicenter trials each showed a greater proportion of well-designed trials than their counterparts.

Table 3. Stratified analysis of redundancy, design, trial feasibility and informativeness by sponsor, country location, trial type, number of trial centers.

https://doi.org/10.1371/journal.pone.0262114.t003

Despite elevated SARS-CoV-2 case counts, many trials (22.6%; 113 of 500 trials) were unable to adequately and expeditiously complete patient-participant recruitment. This estimate is in keeping with other studies, in which close to one third of COVID-19 trials registered on ClinicalTrials.gov or on the World Health Organization International Clinical Trials Registry Platform stopped before attaining 75% accrual [27]. In some cases, failure to reach recruitment goals can be explained by decreasing case counts following rapid suppression of a COVID-19 outbreak. For example, early stoppage of a remdesivir multicenter randomized controlled trial in Wuhan, China, after recruitment of 237 of a planned 453 patient-participants, resulted in an underpowered trial with inconclusive results [28,29]. This has also been seen in other settings, such as the 2014–2016 Ebola outbreak [30]. However, infeasible recruitment targets despite high case counts have also been documented during the COVID-19 pandemic [31]. Trial feasibility may be particularly challenging in the fragmented US healthcare setting owing to inter-trial competition for patient-participants, as supported by our stratified analysis, in which non-USA trials were significantly more likely to be feasible than USA trials.

Lack of coordination and trial prioritization, resulting in a high level of multiplicity in investigated interventions, is a contributing factor to infeasible patient-participant recruitment. Concern about trial redundancy has been raised frequently during the COVID-19 pandemic [1,2,4,5]. In our study, only 4.1% of trials were deemed potentially redundant, of which 4 investigated hydroxychloroquine and 10 investigated the efficacy of convalescent plasma. Our categorization of trials as potentially redundant involved matching of trial phase, type of trial (treatment versus prevention), patient-participant characteristics, regimen, comparator and primary outcome. It differs from other assessments of SARS-CoV-2 trial duplication, in which trial intervention has been the main focus of assessment [2]. While a low proportion of potentially redundant trials may be seen as an encouraging result, deeper examination reveals that 63 trials (13.6%) assessed for potential redundancy differed only by the choice of primary outcome, with endpoints often showing small deviations from comparator trials, of questionable clinical relevance. For instance, some trials expressed the primary endpoint as a function of time (e.g., time to death), whereas others expressed it as a rate (e.g., case fatality rate). Our post hoc analysis of trial similarity, which evaluated trial type, regimen, phase and patient-participant characteristics, revealed that 81.9% of trials were similar, reflecting the extent to which early clinical trials during the COVID-19 pandemic pursued comparable study designs.

Replication in research is important to clarify study results. However, lack of research coordination and harmonization of primary outcome endpoints during the COVID-19 pandemic [2,4,32,33] can thwart efforts to clarify net effects through meta-analyses. This is particularly relevant in the setting of multiple small trials of specific interventions, where the probability is elevated that at least one trial produces a positive result by chance alone [2,5]. Prospective meta-analyses (PMA), which encourage harmonization of core outcomes and draw on individual participant data, can help clarify treatment effects and reduce research waste [34]. In this way, individually underpowered studies can help address questions of significant clinical importance. Although successfully employed in other medical settings [35,36], PMAs were unfortunately not widely deployed in the early COVID-19 pandemic.

Concerns regarding research waste predated the pandemic [37–43] but intensified in the setting of this international public health crisis. Our results support arguments for devising coordinated research plans in advance of public health emergencies [44], and for evaluating and prioritizing trials at institutional [45,46], state and national levels [47]. The success of multicenter national platform trials such as RECOVERY, in the United Kingdom, in both recruiting patient-participants (over 45,580 had been enrolled as of December 9, 2021; https://www.recoverytrial.net) and in generating practice-changing evidence, speaks to the promise of national research prioritization [48]. Additional strategies to improve pandemic preparedness include: i) promotion of individual participant data sharing platforms to capitalize on the data generated, even by small trials [49]; ii) prioritization of adaptive master protocol trials investigating promising interventions [44,49]; and, iii) increased research collaboration, in the model of the Coalition for Epidemic Preparedness Innovations (CEPI). In our stratified analysis, industry-sponsored trials were significantly more likely to meet all 3 informativeness criteria than non-industry sponsored trials (Table 3). This suggests that academic researchers require more institutional support, as well as assistance from research consortia and funding bodies, to produce informative results.

Limitations

First, we limited our assessment to 3 aspects of trial informativeness: potential redundancy, design quality and feasibility of patient-participant recruitment. Other aspects of informativeness, such as integrity and reporting, were not evaluated in our study, as they cannot be assessed without access to final trial results (430 of 500 trials [86.0%] had not yet completed or terminated by the end of our 6-month follow-up period). A follow-up study evaluating data 24 months after trial launch would enable a comprehensive assessment of trial informativeness, and thus represents an area for future research. Second, we used proxy measures of informativeness, which are imperfect. For example, we adopted strict criteria for potential redundancy, resulting in only 19 trials labelled potentially redundant, many of which differed based on primary outcome alone. Our post hoc analysis deemed over eighty percent of trials similar, based on assessment of trial type, regimen, phase and patient-participant characteristics. These two results (4.1% and 81.9%) can be viewed as lower and upper bounds for the proportion of redundant trials. Missing from our assessment was an evaluation of the availability and quality (as assessed by GRADE [50]) of pre-existing evidence of intervention efficacy, which may render subsequent trials redundant. We also did not assess the extent to which individual participant data were made publicly available (for example, through the Vivli platform [51]) and subsequently incorporated into meta-analyses. Our redundancy evaluation should thus be interpreted with caution, and future research will be required to provide a more precise estimate. Our assessment of trial design quality, as guided by the May 2020 FDA guidance document [16], required that all trials be, at a minimum, double-blinded. We acknowledge that this may unfairly penalize the small minority of trials evaluating interventions for which double-blinding is not practicable.
In addition, our assessment of the inclusion of at-risk populations was limited only to age. We did not assess whether the study included a population with other risk factors such as comorbidities. However, no trials failed our design criteria based on failure to include an at-risk population. Third, our assessment of the informativeness of COVID-19 trials depends on the accuracy of ClinicalTrials.gov registration records. Fourth, our findings may not be generalizable to all COVID-19 interventional clinical trials. For example, public health behavioural interventions are frequently labelled as “Phase NA” and would therefore not be included in our findings.

Conclusions

The SARS-CoV-2 pandemic was met with a vigorous response from clinical researchers. However, less than one third of early COVID-19 trials registered on ClinicalTrials.gov met our 3 criteria for informativeness. Shortcomings in trial design, recruitment feasibility and redundancy reflect longstanding vulnerabilities in the clinical research enterprise that were magnified by the urgency of a pandemic. Much knowledge has been gained since the first six months of the COVID-19 pandemic, both in terms of effective measures for the treatment and prevention of SARS-CoV-2 infection and with respect to the conduct of informative clinical research. The task ahead will be for investigators, research institutions, sponsors and regulators alike to take stock of lessons learned and devise solutions to benefit the global research enterprise as we move forward.

Supporting information

S1 Checklist. STROBE statement—Checklist of items that should be included in reports of cohort studies.

https://doi.org/10.1371/journal.pone.0262114.s001

(DOCX)

S1 Fig. Flow diagram of trial inclusion/exclusion.

https://doi.org/10.1371/journal.pone.0262114.s002

(DOCX)

S2 Fig. Ratio of actual to estimated number of patients enrolled.

https://doi.org/10.1371/journal.pone.0262114.s003

(DOCX)

S3 Fig. Ratio of actual to estimated recruitment length.

https://doi.org/10.1371/journal.pone.0262114.s004

(DOCX)

S2 Table. Additional characteristics of trial cohort.

https://doi.org/10.1371/journal.pone.0262114.s006

(DOCX)

S3 Table. Range of anticipated and actual enrollment.

https://doi.org/10.1371/journal.pone.0262114.s007

(DOCX)

S1 File. Trial inclusion and exclusion criteria.

https://doi.org/10.1371/journal.pone.0262114.s008

(DOCX)

S2 File. ClinicalTrials.gov search criteria.

https://doi.org/10.1371/journal.pone.0262114.s009

(DOCX)

S3 File. Data downloaded from ClinicalTrials.gov.

https://doi.org/10.1371/journal.pone.0262114.s010

(DOCX)

Acknowledgments

We thank Lucja Zabrowska for her important contribution to the data extraction for this project and Maciej Polak for statistical consultancy.

References

  1. Glasziou PP, Sanders S, Hoffmann T. Waste in covid-19 research. BMJ. 2020;369:m1847. pmid:32398241
  2. Kouzy R, Abi Jaoude J, Garcia Garcia CJ, El Alam MB, Taniguchi CM, Ludmir EB. Characteristics of the Multiplicity of Randomized Clinical Trials for Coronavirus Disease 2019 Launched During the Pandemic. JAMA Netw Open. 2020;3(7):e2015100. pmid:32658285
  3. London AJ, Kimmelman J. Against pandemic research exceptionalism. Science. 2020;368(6490):476–477. pmid:32327600
  4. Naci H, Kesselheim AS, Rottingen JA, Salanti G, Vandvik PO, Cipriani A. Producing and using timely comparative evidence on drugs: lessons from clinical trials for covid-19. BMJ. 2020;371:m3869. pmid:33067179
  5. Jones CW, Woodford AL, Platts-Mills TF. Characteristics of COVID-19 clinical trials registered with ClinicalTrials.gov: cross-sectional analysis. BMJ Open. 2020;10(9):e041276. pmid:32948577
  6. Mehta HB, Ehrhardt S, Moore TJ, Segal JB, Alexander GC. Characteristics of registered clinical trials assessing treatments for COVID-19: a cross-sectional analysis. BMJ Open. 2020;10(6):e039978. pmid:32518212
  7. Zarin DA, Goodman SN, Kimmelman J. Harms From Uninformative Clinical Trials. JAMA. 2019;322(9):813–814. pmid:31343666
  8. Fragkou PC, Belhadi D, Peiffer-Smadja N, et al. Review of trials currently testing treatment and prevention of COVID-19. Clin Microbiol Infect. 2020;26(8):988–998. pmid:32454187
  9. Janiaud P, Axfors C, Van’t Hooft J, et al. The worldwide clinical trial research response to the COVID-19 pandemic—the first 100 days. F1000Res. 2020;9:1193. pmid:33082937
  10. Pundi K, Perino AC, Harrington RA, Krumholz HM, Turakhia MP. Characteristics and Strength of Evidence of COVID-19 Studies Registered on ClinicalTrials.gov. JAMA Intern Med. 2020;180(10). pmid:32730617
  11. Wang Y, Zhou Q, Xu M, Kang J, Chen Y. Characteristics of Clinical Trials relating to COVID-19 registered at ClinicalTrials.gov. J Clin Pharm Ther. 2020;45(6):1357–1362. pmid:32734670
  12. U.S. Public Law 110–85 (Food and Drug Administration Amendments Act of 2007), Title VIII, Section 801. (https://www.govinfo.gov/content/pkg/PLAW-110publ85/pdf/PLAW-110publ85.pdf) Accessed 2019/07/24.
  13. World Health Organization. WHO R&D Blueprint COVID 19 Experimental Treatments. 2020.
  14. World Health Organization. Clinical management of COVID-19 Interim Guidance. 27 May 2020. https://apps.who.int/iris/bitstream/handle/10665/332196/WHO-2019-nCoV-clinical-2020.5-eng.pdf. Accessed 2020-09-24.
  15. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database—update and key issues. N Engl J Med. 2011;364(9):852–860. pmid:21366476
  16. U.S. Food and Drug Administration. COVID 19: Developing Drugs and Biological Products for Treatment or Prevention, Guidance for Industry. May 2020. https://www.fda.gov/media/137926/download Accessed June 18, 2020.
  17. Carlisle B, Kimmelman J, Ramsay T, MacKinnon N. Unsuccessful trial accrual and human subjects protections: an empirical analysis of recently closed trials. Clin Trials. 2015;12(1):77–83. pmid:25475878
  18. https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/prop.test. Accessed 2021/12/01.
  19. von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 2007;4(10):e296. pmid:17941714
  20. Carlisle BG. Numbat Systematic Review Manager. The Grey Literature. 2014. https://numbat.bgcarlisle.com.
  21. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2013. http://R-project.org.
  22. The informativeness of trials in COVID-19: Lessons learned from the Coronavirus pandemic. DOI 10.17605/OSF.IO/FP726. https://osf.io/fp726/.
  23. https://codeberg.org/bgcarlisle/C19InformAnalysis.
  24. Honarmand K, Penn J, Agarwal A, et al. Clinical trials in COVID-19 management & prevention: A meta-epidemiological study examining methodological quality. J Clin Epidemiol. 2021;139:68–79. pmid:34274489
  25. Raimond V, Mousques J, Avorn J, Kesselheim AS. Characteristics of Clinical Trials Launched Early in the COVID-19 Pandemic in the US and in France. J Law Med Ethics. 2021;49(1):139–151. pmid:33966651
  26. Sakamaki K, Uemura Y, Shimizu Y. Definitions and elements of endpoints in phase III randomized trials for the treatment of COVID-19: a cross-sectional analysis of trials registered in ClinicalTrials.gov. Trials. 2021;22(1):788. pmid:34749761
  27. Janiaud P, Axfors C, Ioannidis JPA, Hemkens LG. Recruitment and Results Reporting of COVID-19 Randomized Clinical Trials Registered in the First 100 Days of the Pandemic. JAMA Netw Open. 2021;4(3):e210330. pmid:33646310
  28. Norrie JD. Remdesivir for COVID-19: challenges of underpowered studies. The Lancet. 2020;395(10236):1525–1527. pmid:32423580
  29. Wang Y, Zhang D, Du G, et al. Remdesivir in adults with severe COVID-19: a randomised, double-blind, placebo-controlled, multicentre trial. The Lancet. 2020;395(10236):1569–1578. pmid:32423584
  30. Venkatraman N, Silman D, Folegatti PM, Hill AVS. Vaccines against Ebola virus. Vaccine. 2018;36(36):5454–5459. pmid:28780120
  31. Cunniffe NG, Gunter SJ, Brown M, et al. How achievable are COVID-19 clinical trial recruitment targets? A UK observational cohort study and trials registry analysis. BMJ Open. 2020;10(10):e044566. pmid:33020111
  32. von Cube M, Grodd M, Wolkewitz M, et al. Harmonizing Heterogeneous Endpoints in Coronavirus Disease 2019 Trials Without Loss of Information. Crit Care Med. 2021;49(1):e11–e19. pmid:33148952
  33. Zarin DA, Rosenfeld S. Lack of harmonization of coronavirus disease ordinal scales. Clin Trials. 2021;18(2):263–264. pmid:33322940
  34. Seidler AL, Hunter KE, Cheyne S, Ghersi D, Berlin JA, Askie L. A guide to prospective meta-analysis. BMJ. 2019;367:l5342. pmid:31597627
  35. Askie LM, Darlow BA, Finer N, et al. Association Between Oxygen Saturation Targeting and Death or Disability in Extremely Preterm Infants in the Neonatal Oxygenation Prospective Meta-analysis Collaboration. JAMA. 2018;319(21):2190–2201. pmid:29872859
  36. Askie LM, Espinoza D, Martin A, et al. Interventions commenced by early infancy to prevent childhood obesity-The EPOCH Collaboration: An individual participant data prospective meta-analysis of four randomized controlled trials. Pediatr Obes. 2020;15(6):e12618. pmid:32026653
  37. Chalmers I, Bracken MB, Djulbegovic B, et al. How to increase value and reduce waste when research priorities are set. The Lancet. 2014;383(9912):156–165. pmid:24411644
  38. Chan A-W, Song F, Vickers A, et al. Increasing value and reducing waste: addressing inaccessible research. The Lancet. 2014;383(9913):257–266. pmid:24411650
  39. Glasziou P, Altman DG, Bossuyt P, et al. Reducing waste from incomplete or unusable reports of biomedical research. The Lancet. 2014;383(9913):267–276. pmid:24411647
  40. 40. Ioannidis JPA, Greenland S, Hlatky MA, et al. Increasing value and reducing waste in research design, conduct, and analysis. The Lancet. 2014;383(9912):166–175.
  41. 41. Moher D, Glasziou P, Chalmers I, et al. Increasing value and reducing waste in biomedical research: who’s listening? The Lancet. 2016;387(10027):1573–1586. pmid:26423180
  42. 42. Salman RA-S, Beller E, Kagan J, et al. Increasing value and reducing waste in biomedical research regulation and management. The Lancet. 2014;383(9912):176–185.
  43. 43. Yordanov Y, Dechartres A, Atal I, et al. Avoidable waste of research related to outcome planning and reporting in clinical trials. BMC Med. 2018;16(1):87. pmid:29886846
  44. 44. Madariaga A, Kasherman L, Karakasis K, et al. Optimizing clinical research procedures in public health emergencies. Med Res Rev. 2021;41(2):725–738. pmid:33174617
  45. 45. Gelinas L, Lynch HF, Bierer BE, Cohen IG. When clinical trials compete: prioritising study recruitment. J Med Ethics. 2017;43(12):803–809. pmid:28108613
  46. 46. North CM, Dougan ML, Sacks CA. Improving Clinical Trial Enrollment—In the Covid-19 Era and Beyond. N Engl J Med. 2020;383(15):1406–1408. pmid:32668133
  47. 47. Meyer MN, Gelinas L, Bierer BE, et al. An ethics framework for consolidating and prioritizing COVID-19 clinical trials. Clin Trials. 2021:1740774520988669. pmid:33530721
  48. 48. Angus DC, Gordon AC, Bauchner H. Emerging Lessons from COVID-19 for the US Clinical Research Enterprise. JAMA. 2021;325(12):1159–1161. pmid:33635309
  49. 49. Park JJH, Mogg R, Smith GE, et al. How COVID-19 has fundamentally changed clinical research in global health. The Lancet Global Health. 2021;9(5):e711–e720. pmid:33865476
  50. 50. Guyatt GH, Oxman AD, Vist GE, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924–926. pmid:18436948
  51. 51. Li R, Wood J, Baskaran A, et al. Timely access to trial data in the context of a pandemic: the time is now. BMJ Open. 2020;10(10):e039326. pmid:33122319