Objectives. We have described the practice of designing for dissemination among researchers in the United States with the intent of identifying gaps and areas for improvement.
Methods. In 2012, we conducted a cross-sectional study of 266 researchers using a search of the top 12 public health journals in PubMed and lists available from government-sponsored research. The sample involved scientists at universities, the National Institutes of Health, and the Centers for Disease Control and Prevention in the United States.
Results. In the pooled sample, 73% of respondents estimated they spent less than 10% of their time on dissemination. About half of respondents (53%) had a person or team in their unit dedicated to dissemination. Seventeen percent of all respondents always or usually used a framework or theory to plan their dissemination activities. One third of respondents (34%) always or usually involved stakeholders in the research process.
Conclusions. The current data and the existing literature suggest considerable room for improvement in designing for dissemination.
The effective dissemination of information on priorities, health risks, and evidence-based interventions in public health is a formidable challenge.1,2 Dissemination is an active approach of spreading evidence-based information to the target audience via determined channels using planned strategies.3 Studies from both clinical and public health settings suggest that evidence-based practices are not being disseminated effectively.4–6 For example, in a study of US adults, only 55% of overall care received was based on what is recommended in the scientific literature.7 In a study of US public health departments, an estimated 58% of programs and policies were reported as “evidence-based.”8
To illustrate both the challenges of dissemination and possible solutions, research on evidence-based interventions has taught us several important lessons:
dissemination generally does not occur spontaneously and naturally4;
passive approaches to dissemination are largely ineffective9,10;
single-source prevention messages are generally less effective than comprehensive, multilevel approaches11,12;
stakeholder involvement in the research or evaluation process is likely to enhance dissemination (so-called practice-based research)13–19;
theory and frameworks for dissemination are beneficial20,21; and
the process of dissemination needs to be tailored to specific audiences.22
The difficulty in dissemination is the result of differing priorities.23,24 For researchers, the priority is often on discovery (not application) of new knowledge, whereas for practitioners and policymakers, the priority is often on practical ways of applying these discoveries in their settings.25 The chasm between researchers and practitioners was illustrated in a “Designing for Dissemination” workshop sponsored by the US National Cancer Institute.26 In this workshop, all participants acknowledged the importance of dissemination. Researchers reported their role was to identify effective interventions, but that they were not responsible for dissemination of research findings. Similarly, practitioners did not believe they were responsible for dissemination.
It has been recommended that researchers should identify dissemination partners before conducting discovery research, so that those who might adopt the discoveries will see the research process and results in a collaborative manner.24,27 Ultimately, we need to better understand how to design interventions with the elements most critical for external validity in mind,28–30 addressing these issues during early, developmental phases, and not near the end of a project.24,31 To date, few studies have evaluated the extent to which researchers are designing their studies for dissemination and how the design process may differ by researcher background and setting of research.
In the present study, we described the practice of designing for dissemination (D4D) among researchers in the United States with the intent of identifying gaps and areas for improvement.
To begin the sampling, we conducted a PubMed search for the top 12 journals in the category “public, environmental and occupational health” with the highest impact factors.32 The following journals were searched: American Journal of Epidemiology; American Journal of Preventive Medicine; American Journal of Public Health; Annual Review of Public Health; Bulletin of the World Health Organization; Cancer Epidemiology, Biomarkers & Prevention; Environmental Health Perspectives; Epidemiologic Reviews; Epidemiology; International Journal of Epidemiology; Tobacco Control; and World Health Organization Technical Report Series. The goal of the search was to identify a set of lead authors to constitute the sample. The search was limited to authors based in the United States, from October 1, 2008, to October 1, 2011, and excluded certain article types (commentaries, biographies, historical articles, classical articles, reviews, meta-analyses, webcasts). We chose the set of 12 journals and time frame to represent a range of disciplines across public health and enough researchers from the National Institutes of Health (NIH) and the Centers for Disease Control and Prevention (CDC) to allow subgroup analyses. The lead author’s affiliation was used to identify 100 intramural NIH researchers and 91 CDC researchers. We then drew a random sample of 200 names of the 2738 researchers not affiliated with NIH or CDC.
Next, using the NIH RePORTER database (an electronic tool for searching NIH-funded research projects), we identified 57 NIH extramural grantees in dissemination and implementation research. In addition, we randomly selected 100 names from a list of 335 investigators affiliated with the CDC’s Prevention Research Centers (PRCs) Program (compiled from the PRC website). We identified e-mail addresses through PubMed, RePORTER, university websites, and the directory of the US Department of Health and Human Services (for NIH and CDC employees). Together, these sources provided an initial pool of 548 researchers. Of these researchers, 488 constituted the valid number of participants for the denominator (excluding failed e-mail addresses, previously unknown duplicates, and individuals who were deceased or disabled).
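The arithmetic behind the sampling frame described above can be sketched as follows. The counts are those reported in the Methods; the author identifiers and the fixed random seed are assumptions for illustration only.

```python
import random

# Counts from the sampling described above; author identifiers are
# hypothetical placeholders, and the fixed seed is an assumption
# made here only so the sketch is reproducible.
rng = random.Random(2012)
non_federal_authors = [f"author_{i:04d}" for i in range(2738)]
university_sample = rng.sample(non_federal_authors, 200)  # random draw of 200

pool = (
    100                        # intramural NIH researchers
    + 91                       # CDC researchers
    + len(university_sample)   # university researchers sampled above
    + 57                       # NIH extramural D&I grantees (RePORTER)
    + 100                      # random sample from the 335 PRC investigators
)
# 60 addresses failed, were duplicates, or were otherwise invalid,
# leaving 488 researchers in the valid denominator.
valid_denominator = pool - 60
```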
The survey tool was based in part on a similar study conducted in the United Kingdom (UK) by Wilson et al.33 The project team obtained the UK tool, modified certain questions for the US context, and added new questions (e.g., questions asking participants to rank barriers to dissemination and to identify methods for involving stakeholder groups in dissemination efforts). The tool was pilot tested with 10 researchers housed at the PRC in St. Louis, Missouri. It was then revised, resulting in a final questionnaire with 35 questions. The questionnaire is available from the first author on request.
Data were collected using the online survey software developed by Qualtrics (Provo, UT).34 Each participant received a unique link to the survey, and nonrespondents received 3 reminder e-mails. Because the initial response rates were lower among NIH and CDC respondents, 5 weeks after the launch we attempted to call nonrespondents at NIH and CDC. Because incentives increase response rates,35 we offered a $20 gift card (from Amazon.com) to all participants who completed the survey. The survey remained open for 8 weeks (from January 10, 2012, to March 6, 2012). The overall response rate was 54.5% (n = 266 of 488), with a response rate of 61% among university researchers, 41% among CDC scientists, and 38% among NIH scientists. The median completion time for the survey was 11 minutes.
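The overall response rate above can be verified directly from the two counts reported (the subgroup denominators are not all reported, so only the pooled rate is recomputed here):

```python
completed = 266  # surveys completed
valid = 488      # valid participants in the denominator
overall_rate = round(100 * completed / valid, 1)
print(overall_rate)  # 54.5
```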
Descriptive analyses were conducted. Sample sizes varied in subgroup analyses because of missing data (e.g., “don’t know” coded to missing). Bivariate relationships were analyzed using the independent sample t-test or the Pearson χ2 test.
The largest percentage of respondents worked in a university (65%), followed by the CDC (13%), NIH (9%), and a variety of other settings, such as nonprofit organizations, think tanks, and health departments (13%). Across the entire sample, 66% had previously worked in a practice or policy setting where research findings might be applied. The highest percentage of those with previous practice experience was observed among CDC researchers (79%) and the lowest among NIH researchers (36%). When asked to estimate the time they spent on dissemination, 73% of the pooled sample reported spending less than 10% of their time on dissemination, and 11% reported spending at least 20%. There was considerable variation by setting, with the highest percentage of those spending at least 20% of their time on dissemination among university researchers (13.5%) and the lowest among NIH researchers (4%).
Findings related to infrastructure (i.e., people or physical structures to support dissemination) showed that 73% of respondents had formal training in communication or ready access to someone with skills in health communication (Table 1). Researchers at NIH and CDC had the highest likelihood of access to health communication experts (88%). About half of respondents (53%) had a person or team in their unit dedicated to dissemination. Researchers at NIH had the highest likelihood of a dedicated unit (80%), and those in universities not affiliated with PRCs had the lowest frequency of access to such a unit (41%). Those with a dedicated person or unit were asked to provide characteristics of the unit (open-ended responses). This was nearly always a person or unit in health communication, media relations, or public relations. A few respondents reported having a person to craft messages for policymakers or develop newsletters for community members. Those who did not have a formal dissemination unit were asked if they would like to have such a unit, and 68% expressed desire for such an entity. About one third of respondents reported that their unit or department had a formal communication or dissemination strategy (32%), with the highest proportion with a strategy at NIH (52%) and the lowest at non-PRC universities (22%).
Dissemination Infrastructure by Setting: United States, Public Health Researchers Designing for Dissemination Survey, 2012
|Variable||All (n = 266), No. (%)||University, No PRC Affiliation (n = 109), No. (%)||University With PRC Affiliation (n = 63), No. (%)||NIH (n = 25), No. (%)||CDC (n = 34), No. (%)||Other (n = 34), No. (%)||Pa|
|Formal training in or ready access to someone with formal training in communication|
|Yes||194 (73)||68 (62)||54 (86)||22 (88)||30 (88)||19 (56)||< .001|
|No||71 (27)||41 (38)||9 (14)||3 (12)||4 (12)||14 (41)|
|Not sure||1 (0.4)||0||0||0||0||1 (2.9)|
|Dedicated person/team for dissemination in unit/organization|
|Yes||140 (53)||45 (41)||31 (49)||20 (80)||23 (68)||20 (59)||.001|
|No||109 (41)||55 (51)||32 (51)||3 (12)||8 (24)||11 (32)|
|Not sure||17 (6)||9 (8)||0||2 (8)||3 (9)||3 (8.8)|
|Unit/department has formal communication/dissemination strategy|
|Yes||84 (32)||24 (22)||21 (34)||13 (52)||16 (47)||10 (29)||.002|
|No||114 (43)||56 (52)||32 (52)||3 (12)||9 (27)||14 (41)|
|Not sure||66 (25)||28 (26)||9 (15)||9 (36)||9 (27)||10 (29)|
Note. CDC = Centers for Disease Control and Prevention; NIH = National Institutes of Health; PRC = Prevention Research Center.
aP values were determined by using the Pearson χ2 test (excludes “Other” category and “Not sure” when there were 0 cells).
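As a sanity check, the Pearson χ2 test described in the statistical analysis can be recomputed from the “dedicated person/team” Yes/No rows of Table 1, excluding the “Other” and “Not sure” categories per footnote a. The sketch below implements the statistic directly; the critical value 11.34 is the df = 3 threshold at α = .01.

```python
def pearson_chi2(table):
    """Pearson chi-square statistic for an r x c table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# "Dedicated person/team" Yes/No counts by setting, from Table 1
# (rows: no-PRC university, PRC university, NIH, CDC).
counts = [[45, 55], [31, 32], [20, 3], [23, 8]]
stat = pearson_chi2(counts)
# With df = (4 - 1)(2 - 1) = 3, the statistic far exceeds the
# alpha = .01 critical value of 11.34, consistent with the reported P = .001.
```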
Seventeen percent of all respondents always or usually used a framework or theory to plan their dissemination activities (Table 2). Those respondents affiliated with a PRC (26%) and those with experience in a practice setting (20%) were more likely than other subgroups to use a framework or theory to plan dissemination activities. Among those reporting use of a framework or theory (reported in an open-ended question), the most commonly used theory was Diffusion of Innovations. Other frameworks or theories included RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance), social cognitive theory, social marketing, network theory, and several different frameworks for community-based participatory research. One third of respondents always or usually produced summaries (e.g., issue briefs) for nonresearch audiences. Participants from universities with a PRC were most likely to produce summaries for nonresearch audiences (54% reported always or usually doing so), and university researchers without a PRC were least likely (20% reported always or usually doing so).
Dissemination Activities by Setting and Practice Experience: United States, Public Health Researchers Designing for Dissemination Survey, 2012
|Place of Work||Ever Worked in Practice/Policy Setting|
|Variable||All (n = 266), No. (%)||University—No PRC Affiliation (n = 109), No. (%)||University With PRC Affiliation (n = 63), No. (%)||NIH (n = 25), No. (%)||CDC (n = 34), No. (%)||Pa||Yes (n = 173), No. (%)||No/Not Sure (n = 91), No. (%)||Pa|
|Use of framework/theory to plan dissemination activities|
|Always/usually||44 (17)||17 (16)||16 (26)||3 (12)||5 (15)||.002||35 (20)||9 (10)||< .001|
|Sometimes/rarely||101 (38)||40 (37)||30 (48)||6 (24)||13 (39)||77 (45)||23 (26)|
|Never||68 (26)||29 (27)||13 (21)||4 (16)||11 (33)||42 (24)||26 (29)|
|Not sure||23 (9)||10 (9)||3 (5)||5 (20)||0 (0)||10 (6)||12 (14)|
|Does not plan dissemination-related activities||27 (10)||12 (11)||0 (0)||7 (28)||4 (12)||8 (5)||19 (21)|
|How often produce summaries for nonresearch audiences|
|Always/usually||85 (32)||22 (20)||34 (54)||7 (28)||11 (32)||.001||68 (39)||14 (18)||< .001|
|Sometimes/rarely||159 (60)||76 (70)||26 (41)||15 (60)||22 (65)||96 (56)||53 (67)|
|Never||21 (8)||10 (9)||3 (5)||3 (12)||1 (3)||9 (5)||12 (15)|
|How often stakeholders involved|
|Always/usually||87 (34)||24 (23)||42 (67)||4 (19)||9 (27)||< .001||69 (40)||16 (19)||< .001|
|Sometimes/rarely||126 (49)||53 (50)||21 (33)||10 (48)||21 (62)||83 (48)||43 (51)|
|Never||45 (17)||29 (27)||0 (0)||7 (33)||4 (12)||20 (12)||25 (30)|
|How stakeholders are involvedb|
|Engage stakeholders as advisors||154 (72)||51 (66)||54 (86)||9 (64)||17 (57)||.013||112 (74)||40 (68)||.392|
|Work with stakeholders to make your research relevant to their setting||133 (62)||49 (64)||46 (73)||6 (43)||15 (50)||.062||103 (69)||28 (48)||.006|
|Understand how to enhance the relevance of your study to stakeholders||126 (59)||47 (61)||41 (65)||8 (57)||12 (40)||.134||101 (66)||23 (39)||< .001|
|Involve stakeholders in the research process||110 (52)||34 (44)||42 (67)||7 (50)||11 (37)||.018||90 (59)||19 (32)||< .001|
|Understand how study findings fit with their organizational goals/mission||105 (49)||37 (48)||36 (57)||6 (43)||12 (40)||.413||88 (58)||17 (29)||< .001|
|Seek resources for dissemination||73 (34)||23 (30)||28 (44)||5 (36)||6 (20)||.098||59 (39)||13 (22)||.021|
|Stage of stakeholder involvement|
|Proposal||73 (27)||30 (28)||27 (43)||2 (18)||6 (18)||< .001||56 (32)||16 (18)||.006|
|Data collection/analysis or draft report/manuscript||39 (14)||10 (9)||10 (16)||0 (0)||10 (29)||22 (13)||16 (18)|
|Final report/manuscript||65 (24)||32 (29)||5 (8)||11 (44)||13 (38)||42 (24)||23 (25)|
|All stages||47 (18)||20 (18)||16 (25)||2 (8)||1 (3)||34 (20)||13 (14)|
|Rarely plans these dissemination activities||42 (16)||17 (16)||5 (8)||10 (40)||4 (12)||19 (11)||23 (25)|
Note. CDC = Centers for Disease Control and Prevention; NIH = National Institutes of Health; PRC = Prevention Research Center.
aP values were determined by using the Pearson χ2 test.
bRespondents who never involve stakeholders (n = 45) did not see this question.
The extent of stakeholder involvement in the research process was examined (Table 2). One third of respondents (34%) always or usually involved stakeholders in the research process. The subgroups most likely to involve stakeholders were those affiliated with a PRC (67%) and those with practice experience (40%). The respondents least likely to involve stakeholders were those at NIH (19%). The 3 most common methods of stakeholder involvement included engaging them as advisors (72%), working together to make research more relevant to their setting (62%), and understanding how to enhance relevance of research (59%). Among the various stages of involvement, stakeholders were more likely to be involved during proposal development (27%) and at the time of final report or article writing (24%). One quarter of respondents at PRCs reported stakeholder involvement at all stages of the research process, and 3% of researchers at CDC also reported such involvement.
Designing for dissemination is an active process that helps to ensure that public health interventions, often evaluated by researchers, are developed in ways that match well with adopters’ needs, assets, and time frames. The findings from this study, which might be the first of its kind in the United States, showed gaps in how well researchers are designing for dissemination. For example, only 17% of respondents always or usually based efforts on a framework or theory, and 34% of researchers always or usually involved stakeholders in their study processes. Although some of the D4D factors were harder to change (e.g., infrastructure), other D4D activities were readily within the control of the research team (e.g., involving stakeholders, producing user-friendly summaries of research).
There were interesting patterns by setting and location. For infrastructure variables (Table 1), federal respondents (NIH and CDC) had greater access than university researchers to a dedicated person or team for dissemination and were more likely to have a formal dissemination strategy. However, for dissemination activities (Table 2), university respondents with a PRC affiliation were the most likely to use a framework or theory, produce summaries for nonresearch audiences, and engage stakeholders in the process. The higher level of activity for PRC respondents was not surprising, given their focus on translational research and participatory methods.36 The present findings could be compared with results from a recent study from the United Kingdom that sampled 243 principal investigators and had a response rate similar to ours.33 In the present study, 73% of respondents spent less than 10% of their time on dissemination compared with 66% in the United Kingdom. Researchers in the United States appeared to have greater access to a person or team for dissemination than those in the United Kingdom (United States = 53%; United Kingdom = 20%) and were more likely to have a formal communication or dissemination strategy (United States = 32%; United Kingdom = 20%). Future research could better address the type of research setting and country differences via triangulated (quantitative and qualitative) methods.37
Only 17% of respondents always or usually relied on a framework or theory to guide their dissemination efforts. Furthermore, respondents reported using Diffusion of Innovations, RE-AIM, social cognitive theory, social marketing, or network theory; however, these represented only a fraction of the frameworks available to researchers. Although these might be the frameworks most common in public health, many frameworks that researchers would find beneficial might have been developed in other fields. Given the value of theory38 and the large number of available theories and frameworks that are quite diverse in a number of factors, including origin and scope,20 an initial action could involve reviewing possible frameworks and adopting 1 or more within a research unit. Several reviews demonstrated that there is a wealth of such frameworks from which researchers and practitioners can draw.20,21,39
It is also important to consider that the D4D process may vary, depending on the type of research (e.g., analysis of existing data) and maturity of a body of literature. Not all research should lead to public health action because findings may be equivocal, the cost of action is high, or public health audiences may not be equipped to implement new interventions. Some have referred to the “anxiety of the week” syndrome, where early or equivocal findings can receive intense media attention.40 Even in these cases, there are advantages to having public health practice and policy involved and informed. For researchers who primarily analyze existing data (i.e., data collected by others) or conduct etiologic research, it is unrealistic to expect a D4D process that involves stakeholders at every stage from proposal development to data analysis and interpretation. Many of these D4D challenges need particular attention in settings with high health disparities, where the system constraints may be the greatest, and the delivery systems are often underdeveloped.1,41
These findings raised a number of important questions: What might enhance efforts in dissemination? What system-level changes are needed? What might motivate and encourage researchers to focus more strongly on dissemination? Based on the findings of the present study and related literature,10,20,22,24,28,31,42–44 Table 3 addresses some of these issues by describing possible D4D actions at the levels of system, process, and production. At the systems level, fundamental changes are needed in how research is funded and how researchers are incentivized. These actions (e.g., how research grants are evaluated, promotion or tenure guidelines) will require a long-term commitment to dissemination, and in some cases, are not easy to change. There is evidence of progress on several systems-level actions, including an NIH Program Announcement on dissemination and implementation research,45 some grant mechanisms that include a specific section calling for a dissemination plan,46 and new tools for tracking measures, such as the National Cancer Institute’s Grid-Enabled Measures.47 System-level variations may in part explain the differences between the United States and the United Kingdom.
Principles for Designing for Dissemination: United States, Public Health Researchers Designing for Dissemination Survey, 2012
|Shift research funder priorities and processes||Make dissemination (e.g., a dissemination plan) a scoreable part of funding announcements|
|Include stakeholders in the grant review process|
|Provide rapid funding for practice-based research with high dissemination potential|
|Provide supplemental funding for dissemination|
|Shift researcher incentives and opportunities||Provide academic incentives and credit, including impacts on promotion and tenure decisions (provide prototype promotion and tenure policies)|
|Hire faculty with practice experience|
|Provide opportunities for faculty to spend time in practice settings|
|Conduct trainings to improve dissemination, implementation, evaluation, and translation|
|Develop new measures and tools||Identify measures for evaluating dissemination efforts|
|Maintain systems for tracking the measures|
|Develop tools for designing for dissemination|
|Develop new reporting standards||Develop standards for reporting research that focus more fully on dissemination|
|Promote new dissemination and implementation reporting standards|
|Identify infrastructure requirements||Identify people required for dissemination and evaluation|
|Identify system requirements (information technology, media)|
|Involve stakeholders as early in the process as possible||Engage as advisors and collaborators|
|Engage in the research process|
|Engage key stakeholders (receptors) for research through audience research||Identify gaps in research, relevance of methods, messages|
|Ensure stakeholders represent potential adopter organizations|
|Identify opinion leaders for uptake|
|Identify barriers to dissemination|
|Identify success and failure stories|
|Identify frameworks or theories for dissemination efforts||Review existing frameworks for applicable constructs|
|Pilot test measures for assessing model constructs among key stakeholders|
|Develop models for dissemination actions that are context relevant|
|Identify the appropriate means of delivering the message||Identify the optimal disseminator (usually not the researcher)|
|Link the researcher, practice, and policy specialists with the disseminator|
|Identify channels for dissemination or mode of knowledge transfer|
|Identify the appropriate message||For interventions, document evidence of effectiveness, cost of implementation, and cost-effectiveness|
|For etiologic research, address risk communication|
|Document evidence of disseminability or ease of use|
|Develop summaries of research in user-friendly, nonacademic formats (audience tailoring)||Develop issue briefs, policy briefs, case studies|
|Identify potential roles for social media (e.g., Twitter, Facebook)|
|Deliver presentations to stakeholders|
In general, these D4D principles apply best to research studies that are testing interventions or other approaches in dissemination research where an evidence-based approach and target audience are clear. This set of principles also raises the question of who the disseminator should be. For more than half of the respondents to our survey, that was likely to be an expert or team in dissemination or communication. In most cases, the disseminator should not be the researcher.15,25 As articulated by Kreuter et al.,25 the public health sector can learn from commercial marketers, where systems are set up for distribution management, marketing, technical assistance, and other services. In addition to these marketing skills, successful D4D efforts are likely to involve transdisciplinary teams that involve experts in communication, participatory research, and translational research.
In addressing the D4D actions in Table 3, researchers should recognize that different audiences require specific communication messages and channels.24,26,43 The intent of the D4D activities is to ensure that findings from research projects are useful, relevant, and ready for widespread dissemination when the research funding ends. Two related fields of inquiry can assist in these efforts. First, health communication calls for audience segmentation, where experts ask thoughtful questions, conduct audience research, and learn from the literature.44 Second, principles of community-based participatory research (i.e., systematic inquiry that involves participation of those affected by the issue being studied, for the purposes of education and taking action48) place emphasis on genuine stakeholder engagement throughout the research process and the use of findings to help bring about change.18 These participatory methods are likely to be useful in the D4D process.
There were a few limitations of this study. First, the data were self-reported, and we had no way to objectively compare the reported dissemination activities with actual infrastructure and practices; additionally, reliability of the measures was not assessed. Second, among the various D4D activities, there was sparse literature on the relative effectiveness of various approaches, making it difficult to evaluate their precise impacts on public health practice and policy. Third, the focus of the present study was on high-impact journals, which might omit important, practice-oriented journals with a significant focus on dissemination. Finally, response bias might be present, given our response rate of 54.5%.
In summary, the current data and the existing literature suggest that considerable room for improvement exists in D4D. Researchers need to better recognize the practical applications of their findings and learn to identify collaborations and build partnerships with key stakeholders that can address the many complexities of moving a project from discovery to widespread dissemination. To accomplish more effective dissemination, it is important to address D4D issues early in the research process (not when a grant is ending). In this time of increasing pressure on scientific resources, researchers should continue to meet the implied obligation to the public that the billions of dollars invested in scientific discovery will continue to yield specific and tangible benefits to their health.
This study was supported in part by the Centers for Disease Control and Prevention (Cooperative Agreement Number U48/DP001903; the Prevention Research Centers Program); National Cancer Institute (Transdisciplinary Research in Energetics and Cancer; grant U48/CA155496); the National Institutes of Health-National Center for Research Resources and the National Center for Advancing Translational Sciences (grants UL1 TR000448 and TL1 TR000449/KL2 TR000450); and the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK; grant 1P30DK092950).
We are grateful to Paul M. Wilson of the University of York, United Kingdom, for sharing his study instrument. We also thank David Chambers, Russ Glasgow, and Jon Kerner for their helpful input.
Note. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the NIDDK.
Human Participant Protection
Human participant approval was obtained from the Washington University institutional review board.