Randomized controlled trials of public health interventions are often complex: practitioners may not deliver interventions as researchers intended, participants may not initiate interventions and may not behave as expected, and interventions and their effects may vary with environmental and social context.
Reports of randomized controlled trials can be misleading when they omit information about the implementation of interventions, yet such data are frequently absent in trial reports, even in journals that endorse current reporting guidelines.
Particularly for complex interventions, the Consolidated Standards of Reporting Trials (CONSORT) statement does not include all types of information needed to understand the results of randomized controlled trials. CONSORT should be expanded to include more information about the implementation of interventions in all trial arms.
Reporting the design of an intervention tells part of a complex story, but public health interventions may involve multiple sites and practitioners, clinical decisions, and patient preferences. Practitioners may not deliver all parts of interventions or may add components; experimental participants may not take up interventions completely, and control participants may receive unintended services; and experimental interventions themselves may change according to contextual demands.
Trial reporting has improved since the introduction of guidelines that emphasize transparent reporting of methods and results1; however, evidence demonstrates that trial reports continue to lack information about the implementation of interventions—their actual delivery by practitioners and uptake by participants.
Implementation data increase the external validity of trials and aid the application of results by practitioners.2,3 Policymakers, administrators, and researchers need these data to assess the generalizability of findings, to synthesize literature,4 to design future trials, to determine the feasibility of interventions,5 and to develop treatment guidelines.6 The importance of implementation data is emphasized in the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement,7 a guide for reporting nonrandomized controlled trials that complements the Consolidated Standards of Reporting Trials (CONSORT) statement,8 a guide for reporting randomized controlled trials. Implementation data are needed to understand the results and implications of both randomized and nonrandomized trials, but unlike TREND, CONSORT gives little attention to practitioner actions and participant experiences.
On the basis of previous research findings, I propose that CONSORT be expanded to encourage the inclusion of implementation data in reports of randomized controlled trials.
There is extensive behavioral literature about operationally defining and measuring dependent and independent variables (e.g., 9,10). Reviews have consistently shown that independent variables are poorly defined and infrequently measured in trial reports; reports would be more useful if they contained richer information about actual similarities and differences between trial arms. These reviews also demonstrate that the quality of implementation reporting has not improved in recent years despite improvements in overall report quality.
One review of 539 studies published in the Journal of Applied Behavior Analysis between 1968 and 1980 found that, among the studies presenting operational definitions, an average of only 16% (range 3%–34%) also performed some check on the accuracy of implementation of the independent variable.11 A similar review of school-based studies found that 64 of 181 (35%) operationally defined the intervention and 45 (25%) monitored or measured its implementation.12 In a review of studies involving people with learning disabilities, 12 of 65 (18%) measured implementation of the independent variable.13 A review of 148 studies on parent training research published in 18 journals between 1975 and 1990 found that almost all reports failed to examine differences between program design and implementation.14
In a broader review, fewer than 6% of 359 psychosocial trials included a treatment manual, implementer supervision, and an adherence check; 55% did not report using any of these methods to promote and verify implementation.15 An analysis of 162 prevention studies found that 39 (24%) reported a method for verifying intervention delivery,16 and reviews of the 1990 volumes of Behavior Therapy and the Journal of Consulting and Clinical Psychology found that 9 of 25 (36%) and 7 of 22 (32%) articles, respectively, assessed treatment delivery directly.17
In 2005, the National Institutes of Health Behavior Change Consortium published one of the most comprehensive analyses of implementation data from 342 health behavior intervention studies; 71% of studies reported theoretical models, whereas only 27% reported mechanisms to monitor adherence.18
Recently, systematic reviews have been used to highlight the omission of implementation information in trial reports.19–21 For example, a review about smoking cessation concluded that studies should describe “the intervention in sufficient detail for its replication even if the detail requires a separate paper.”22(p10) A review of interventions to promote smoke detectors identified what later authors labeled “systematic deficiencies in the literature in reporting context, methods, and details of implementation.”23(p150)
Recurrent omissions of implementation data may prevent readers from acting on the results of trials. Worse, results can be misleading when implementation data are not considered. A review of tap water for wound cleansing concluded that tap water might be as effective as sterile water or sterile saline for preventing infection and promoting healing24; however, most trials took place in settings with sanitary tap water. The results applied only to similar settings.25
The CONSORT statement includes practical, evidence-based recommendations for reporting randomized trials. Since its introduction, the quality of trial reports has improved,1 but only 1 of 22 CONSORT items (item 4) explicitly mentions the design and administration of interventions. Even articles in journals that have adopted CONSORT frequently report implementation inadequately26 and omit the number of participants who received the allocated treatment.27 These omissions may occur because CONSORT focuses on the examination rather than the implementation of interventions. For example, CONSORT asks researchers to report evidence that blinding occurred as planned, but it does not ask researchers to report evidence that interventions occurred as planned.28
The Transparent Reporting of Evaluations with Nonrandomized Designs statement complements CONSORT and strongly emphasizes the importance of implementation data2: “Sufficient detail and clarity in the report allow readers to understand the conduct and findings of the intervention study and how the study was different from or similar to other studies in the field.”8(p361) The same logic surely applies to the reporting of randomized trials.
Implementation data may not be collected for practical and scientific reasons (e.g., monitoring adherence might confound a trial); however, information about implementation is generally undervalued. Researchers may exclude implementation information because it does not seem important or because they wish to give positive impressions of interventions that encountered problems in delivery or compliance. Journal editors may not demand implementation data because of space restrictions. Furthermore, funding bodies neglect mixed-methods research about putting interventions into practice.29 Expanding CONSORT would signal the importance of implementation information, expose its frequent omission, and encourage its measurement and reporting.
A review of a 2006 series of reports of the Women’s Health Initiative Randomized Controlled Dietary Modification Trial30–32 shows that reports of well-conducted trials in journals endorsing CONSORT could be improved by including data about the implementation of interventions.
The Women’s Health Initiative trial tracked nearly 49 000 women for more than 8 years to investigate the impact of “18 group sessions in the first year and quarterly maintenance sessions thereafter”30(p631) on cardiovascular disease, breast cancer, and colorectal cancer. Although the trial tested a behavioral intervention, reports implied that the study was designed to “directly address the health effects of a low-fat eating pattern”31(p644); an early paper said “the intervention is a dietary pattern.”33(pS95) The trial allowed substantial variation across sites,34 and some aspects of delivery were monitored,35 but the reports neither included nor referenced information about the actual delivery of the intervention by program staff.
The titles of these reports (“Low-fat dietary pattern and . . .”) and popular media accounts of them (e.g., 36) unintentionally confused diet (what people actually eat) with a diet (a behavioral modification program). In the first year of the study, 68.6% of women in the intervention group did not reduce their fat consumption to the target level (20% of total energy intake); by the sixth year, 85.6% exceeded this target, failing to reduce their fat consumption to less than 20% of their total energy intake.31 One abstract reported that “women in the comparison group continued their usual eating pattern,”31(p643) but in the first year of the study, women in the control group reduced both their energy and fat intakes.30 Furthermore, self-reported food intake was inconsistent with changes in weight for both groups.37
Information about participants33 and recruitment38 has been published elsewhere, but participant uptake (e.g., attendance at sessions) was mentioned only in a definition of dropout and as a statistical variable in an analysis comparing women who attended specified numbers of sessions. Reports considered the impact of compliance on statistical power, but they did not consider why the intervention failed to produce expected behavioral changes.
Although reports suggest that the Women’s Health Initiative trial was well designed and internally valid, more implementation data would increase the utility of its results. Implementation data would help readers understand the trial and help health professionals design and improve other dietary interventions.
The original CONSORT statement was criticized for its lack of attention to process data (e.g., sessions attended)38 and was revised accordingly.7 CONSORT now includes deviations from protocol as part of participant flow (item 13); when a participant does not receive or complete treatment, “the nature of protocol deviation and the exact reason for excluding participants after randomization should always be reported.”28(p679) Protocol deviations related to participants who are included in analyses are equally relevant.
In reports of randomized trials, authors often report what they intended rather than what actually happened. Information about intentional and unintentional deviations in the delivery of interventions by practitioners, as well as information about deviations due to external factors, helps readers understand and apply the results of randomized trials.
The Transparent Reporting of Evaluations with Nonrandomized Designs statement includes incentives used as part of the intervention (items 4 and 21) and asks for “discussion of the success of and barriers to implementing the intervention”8(p365) (item 20). This information is similarly important in reports of randomized trials.
Adding such items to CONSORT would encourage authors to include, as far as possible, information needed to replicate trials as they happened, including delivery of nonspecific treatment components and receipt of interventions outside trial protocols.
The implementation of interventions demands more attention than it receives in most trial reports. Moncher and Prinz argued in 1991 that journal editors should give more attention to implementation.15 In 2004, Bellg et al. expressed hope that funders and publishers would require information about the delivery of interventions.2 With the explosive growth of evidence-based practice and the introduction of CONSORT, it is both unfortunate and surprising that so little has changed.
Critics of evidence-based practice are right to argue that many researchers value statistical outcomes at the expense of other types of data; limited reporting of qualitative and descriptive data prevents researchers, practitioners, and policymakers from applying the results of randomized controlled trials appropriately.
CONSORT has helped improve the quality of trial reports, yet it remains true that “in many clinical trials we must still guess what treatment was actually tested.”17(p1) Implementation data are essential to users of trial reports. Although it may be impossible to include complete data in all printed reports, implementation data could be included in extended online versions of articles, in referenced papers or Web sites, or in trial registries. CONSORT should ask researchers to include or reference implementation data in reports of randomized controlled trials or to justify their absence.
I owe special thanks to Paul Montgomery for his comments on the article and for his guidance. I am grateful to Frances Gardner, Janet Harris, Don Operario, and Kristen Underhill for their ideas and suggestions and to Todd Berzon for his editorial assistance.