This Office of Adolescent Health and AJPH supplement on adolescent pregnancy prevention illustrates the implications and practical lessons that behavioral scientists and health educators face in the large-scale replication of evidence-based adolescent pregnancy prevention programs.
The program models replicated during the 2010 to 2015 initiative represented a variety of approaches: abstinence, sexual health education, youth development, and programs for clinical settings and specific populations. Throughout these evaluations, the focus on replication was largely aimed at ensuring program fidelity, which was measured through facilitators' self-reported adherence to program models and through observations by independent observers. Findings from the research presented in this theme issue suggest there is valuable information to be gained through replication studies. Information provided by this first cohort of adolescent pregnancy prevention grantees can inform the evidence base and provide insight into what is needed for program replication in other fields.
Funders, researchers, and practitioners have shown greater attention to scientific replication in recent years through a variety of efforts, ranging from
attempting to replicate theoretical findings, such as the Open Science Collaboration's effort in psychology, which replicated (or failed to replicate) the findings of 100 prominent articles in the discipline;
summarizing the effectiveness of interventions for specific health problems (e.g., systematic reviews);
expanding the evidence base and identifying interventions with demonstrated effectiveness;
documenting the continued effectiveness of interventions;
working to expand the generalizability of interventions to new settings, populations, and implementation models; and
attempting to identify core and modifiable elements of existing interventions.
There are multiple types of replication research, many of which are represented in this special issue. Valentine et al. identify five types of replication:
statistical replications, which repeat an existing study with a new sample to test whether the original results are attributable to random effects;
generalizability replications, in which one aspect of the study design is altered, such as the target population, to determine the extent to which results generalize from one population or setting to another;
implementation replications, in which some implementation details are varied, such as the number of sessions or contacts;
theory development replications, in which variations in the intervention allow for a better understanding of how the intervention works, such as variations in mediators or moderators; and
ad hoc replications, in which interventions vary from each other in multiple and usually unsystematic ways.1
Why and how a replication is conducted—that is, its place in this typology—matters because research models and standards vary according to the purpose of the replication. For example, statistical replications generally require the greatest fidelity to an implementation model because their purpose is to reproduce previous findings. Generalizability replications, on the other hand, allow for systematic variations in program settings or populations.
This typology is useful for interpreting the study designs and results of adolescent pregnancy prevention research. In peer reviews of the articles in this issue, one of the most frequently raised criticisms was that although studies adhered to the original program model, they were implemented in new settings and with new populations. This criticism reflects a common misconception about replication studies—that a replication must be conducted in precisely the same manner as the previous study. However, as Valentine et al. note, exact replication is virtually impossible, and the type of replication attempted will affect judgments about the quality of the study.1 Readers are therefore encouraged to take into account the type of replication being attempted and how the replication adds to our understanding of the effectiveness of specific approaches, strategies, and theories.
Moreover, decisions about the effectiveness of interventions should be based on more than one replication study. We need to use all available evidence to make decisions about intervention effectiveness.1
In the past, the evaluation studies used to determine whether a program should be deemed "evidence-based" have frequently lacked transparency. Reporting guidelines for scientific research have been developed to remedy this issue2 and to improve transparency by requiring the detailed information others need to replicate research.
Hundreds of reporting guidelines, ranging in scope and purpose, now exist; the Equator Network maintains an inventory of guidelines, which is available online (http://www.equator-network.org).3 Reporting guidelines address a range of study designs, including parallel group randomized trials (CONSORT 20104), observational studies (STROBE), and qualitative research (SRQR5). In recent years, guideline extensions have been developed to emphasize important aspects of interventions (TIDieR6). A checklist and explanatory document typically describe each guideline's components and characteristics. Guidelines assist authors in thoroughly documenting their studies in the peer-reviewed literature, thus improving other researchers' ability to replicate studies and interpret findings.
Several reporting guidelines apply to the adolescent pregnancy prevention replication studies; three of these, along with two extensions, are geared toward rigorous evaluation studies and interventions. They include (1) CONSORT (Consolidated Standards of Reporting Trials), which was revised in 2010 to offer guidance on reporting parallel group randomized trials4; (1a) an extension to CONSORT for social and psychological interventions (SPI), which is specifically relevant to public health; (1b) TIDieR (Template for Intervention Description and Replication), an extension to CONSORT used for reporting interventions6; (2) TREND (Transparent Reporting of Evaluations with Nonrandomized Designs), which is used for reporting nonrandomized designs7; and (3) STROBE (Strengthening the Reporting of Observational Studies in Epidemiology), which is used for reporting observational studies in epidemiology.
Initiatives supporting adolescent pregnancy prevention are moving the field forward in productive ways. However, an emphasis on replication that embraces the various types and purposes of replication research will benefit both the science and practice of adolescent pregnancy prevention, as well as strengthen our ability to apply social and behavioral sciences to public health problems.
Embracing replication research is not without challenges. The development of the knowledge and intervention base for adolescent pregnancy prevention will require considerable resources, collective effort, and a systematic approach to replication that addresses the multiple goals and strategies for building the evidence.