Opponents of public health and environmental regulations often try to “manufacture uncertainty” by questioning the validity of the scientific evidence on which the regulations are based. Though most closely identified with the tobacco industry, this strategy has also been used by producers of other hazardous products. Its proponents use the label “junk science” to ridicule research that threatens powerful interests.
This strategy of manufacturing uncertainty is antithetical to the public health principle that decisions be made using the best evidence available. The public health system must ensure that scientific evidence is evaluated in a manner that adequately protects the public’s health and environment.
Every bottle of aspirin sold in the United States today includes a warning label advising parents that aspirin consumption by children with viral illnesses increases the child’s risk of developing Reye’s syndrome. Before the mandatory warnings were imposed by the Food and Drug Administration (FDA), the toll of Reye’s syndrome was substantial: 555 cases were reported in 1980, and one in three children who developed the syndrome died from it.1 Aspirin consumption increases the risk of Reye’s syndrome by an estimated 4000 percent, roughly a 40-fold increase.2 Today, fewer than a handful of Reye’s syndrome cases are reported each year; the warning label and public education campaign have saved the lives of hundreds of children.1,3,4
Although the disappearance of Reye’s syndrome is considered a “public health triumph,”5 it is a bittersweet one. An untold number of children became disabled or died from Reye’s syndrome while the aspirin industry delayed government efforts to warn parents, arguing that the scientific evidence was incomplete, unclear, or uncertain.
In 1980, following the publication of four studies showing that children with chicken pox or flu who took aspirin were more likely to develop Reye’s syndrome, the Centers for Disease Control (CDC) issued an alert to the medical community. But the aspirin industry, with the assistance of the White House’s Office of Management and Budget, was able to delay a major government public education program for two years, and mandatory warning labels for four years.6 Although the four studies were enough for the CDC to issue warnings, the industry raised 17 specific “flaws” in the studies7 and insisted that more reliable studies were needed to establish a causal association between aspirin and Reye’s syndrome. The industry continued to assert this despite a federal advisory committee’s conclusion that children with viral infections should avoid aspirin, going so far as to fund a public service announcement claiming, “We do know that no medication has been proven to cause Reye’s” (emphasis in the original).8 Litigation by Public Citizen’s Health Research Group (HRG) eventually forced the recalcitrant Reagan administration to make the warnings mandatory in 1986.
The aspirin manufacturers did not invent the strategy of questioning the underlying science in order to prevent regulation; it had been employed successfully for decades by polluters and producers of hazardous products. The strategy has now become so common that it is unusual for the science behind a proposed US public health or environmental regulation not to be challenged by a corporation facing regulation. The US National Toxicology Program (NTP), for example, publishes a list of substances that can cause cancer.9 Before a new substance is added to the list, there is a public process involving several independent scientific reviews. In an effort to avoid the “cancer-causing” label, industry-employed scientists challenged the evidence underlying the proposed designations for alcoholic beverages,10 beryllium,11,12 crystalline silica,13,14 ethylene oxide,15–17 nickel compounds,18 and certain wood dusts.19 In each of these cases, the substance had already been categorized by the International Agency for Research on Cancer as carcinogenic to humans.20 Further, in each case, the panel of nongovernment scientists reviewing the NTP nominations weighed the available evidence and voted to uphold the cancer-causing designation.
When new regulations are being considered, opponents raise the issue of scientific uncertainty no matter how powerful or conclusive the evidence. Within the scientific community, for example, there is widespread consensus that broad-spectrum ultraviolet (UV) radiation from sunlight and tanning lamps causes skin cancer. Yet the Indoor Tanning Association21 and others22,23 have attempted to derail the NTP’s cancer-causing designation by questioning the scientific evidence on which UV radiation was labeled a carcinogen.
Environmental activists can also be guilty of using the existence of scientific uncertainty to advance policy aims, through an overzealous application of what has been labeled “the precautionary principle.” If the weighing of potential risks and benefits is transformed into a demand for certainty that a policy or action will result in no harm, scientific advances or public health interventions with the potential to genuinely improve the human condition can be disparaged and delayed.24,25
In parallel to their attempts to delay or prevent regulation through assertions of scientific uncertainty, polluters and manufacturers of hazardous products have promoted the “junk science” movement, which attempts to influence public opinion by ridiculing scientists whose research threatens powerful interests, irrespective of the quality of that research. Advocates for this perspective allege that many of the scientific studies (and even scientific methods) used in the regulatory and legal arenas are fundamentally flawed, contradictory, or incomplete, and assert that it is therefore wrong or premature to regulate the exposure in question or to compensate the worker or community resident who may have been made sick by it.
Scientific uncertainty is inevitable in designing disease prevention programs. Scientists cannot feed toxic chemicals to people, for example, to see what dose causes cancer; instead, we study the effects on laboratory animals and harness the “natural experiments” in which human exposures have already occurred. Both epidemiologic and laboratory studies involve many uncertainties, and scientists must extrapolate from study-specific evidence to make causal inferences and recommend protective measures. Absolute certainty is rarely an option.
By magnifying and exploiting these uncertainties, polluters and manufacturers of dangerous products have been remarkably successful in delaying, often for decades, regulations and other measures designed to protect the health and safety of individuals and communities.
This strategy, which began as a public relations tool, is now applied in the legal and regulatory arenas, constraining the ability of the judicial and regulatory systems to address issues of public health and victim compensation. The US Supreme Court’s 1993 Daubert v Merrell Dow Pharmaceuticals, Inc26 decision has enabled manufacturers of products alleged to have caused harm to exclude credible science and scientists from court cases.27 Similarly, the Data Quality Act28 provides a mechanism for parties to magnify differences between scientists in order to avoid regulation and victim compensation.
Our objective is to examine the historical development and current applications of the “manufacturing uncertainty” and “junk science” strategies, considering their relationship to what might be best labeled as the public health paradigm. Preventing disease and promoting health are the fundamental goals of public health; the public health paradigm asserts that actions taken to protect the public must be based on the best evidence currently available. The public health paradigm runs head-on into these orchestrated campaigns to manufacture uncertainty, pitting advocates for safety and health protections who acknowledge scientific uncertainty against opponents who capitalize on the unknown to avert protective action.
Perhaps no industry has employed the strategy of promoting doubt and uncertainty more effectively, or for longer, than the tobacco industry. For almost half a century, the tobacco companies hired scientists to dispute, first, the evidence that smokers were at greater risk of dying of lung cancer; second, the role of tobacco use in heart disease and other illnesses; and, finally, the evidence that environmental tobacco smoke increased disease risk in nonsmokers. In each case, the scientific community eventually reached the consensus that tobacco smoke caused these conditions.29–31 Despite the overwhelming scientific evidence and the smoking-related deaths of millions of smokers, the tobacco industry was able to wage a campaign that successfully delayed regulation and victim compensation for decades.32–34
Following a strategic plan developed in the mid-1950s by Hill and Knowlton (H&K), the tobacco industry hired scientists and commissioned research to challenge the growing scientific consensus linking cigarette smoking and severe health effects. Initially, H&K was engaged to minimize the public impact of an American Cancer Society report linking tobacco with lung cancer. On the advice of H&K’s experts, the tobacco industry emphasized three basic points: “That cause-and-effect relationships have not been established in any way; that statistical data do not provide the answers; and that much more research is needed.”35
The tobacco industry’s goal was to promote scientific uncertainty. In one confidential memorandum, H&K consultants boasted that after 5½ years of effort, they successfully created “an awareness of the doubts and uncertainties about the cigarette charges.” H&K credited tobacco-funded research that “forced a recognition that the cigarette theory of lung cancer causation is not established scientifically” and “raised many cogent questions concerning the validity of the cigarette theory.”36
The tobacco industry recognized the value of magnifying the debate in the scientific community on the cause-and-effect relationship between smoking and lung cancer. In the 1960s, the Tobacco Institute published a journal entitled Tobacco and Health Research, aimed at physicians and scientists. The criteria for publishing articles in the journal were straightforward: “The most important type of story is that which casts doubt on the cause-and-effect theory of disease and smoking.” In order to ensure that the message was clearly communicated, the PR firm advised that headlines “should strongly call out the point—Controversy! Contradiction! Other Factors! Unknowns!”37
The same message was communicated to the public. According to one tobacco industry executive: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public. It is also the means of establishing a controversy (emphasis added).”38
The boldness and success of this campaign, together with the almost unimaginable human toll associated with cigarette smoking, have resulted in the tobacco industry being labeled in the public consciousness as a uniquely nefarious, if not criminal, enterprise. (Just as there had been dispute over the scientific evidence, the tobacco industry now promotes an alternative interpretation of the history of this dispute. Historian Robert Proctor has reported that the industry has retained several historians who testify in court cases that “everyone has always known that cigarettes were dangerous, and that even after 1964 there was still ‘room for responsible disagreement’ with the US Surgeon General’s conclusion that year that tobacco was a major cause of death and injury.”)39 But the tobacco industry is not alone; manufacturing uncertainty and creating doubt about scientific evidence is ubiquitous in the organized opposition to the government’s attempts to regulate health hazards.
Starting in the earliest years of the 20th century, a series of episodes unfolded in which industries, facing allegations that their products might be harmful to human health, attempted to dispute the science on which the health concerns were based. Industries that produced hazardous products reacted by reassuring the public of the products’ safety, which they accomplished by attacking the studies suggesting users could be harmed.40,41
Gerald Markowitz and David Rosner42,43 and Christian Warren44 have recounted efforts by the lead industry to mislead decision-makers and the public in order to protect its ability to sell leaded paint and leaded gasoline. These public health historians note that early in the 1900s lead was well known as an occupational hazard, and several European countries had already banned the use of white lead as an ingredient in interior paint. In the United States, however, when cases of lead poisoning in workers appeared in the 1920s, the industry masterfully refocused attention away from the poisoned workers, emphasizing that many other lead-exposed workers, such as chauffeurs, did not show adverse health effects.42 The industry shifted the blame from the lead itself and the manufacturing process, claiming instead that the workers had sloppy habits and were careless. By the 1930s and 1940s, when articles reporting cases of lead-poisoned children were published in medical journals, the industry rejected the claims and defended its products by again shifting blame, this time to the poisoned children, who “were sub-normal to begin with.”42
The chemical industry became alarmed in the early 1950s when a well-publicized congressional investigation fed the public’s concern about carcinogens in the food supply. Congressman James J. Delaney’s House Select Committee to Investigate the Use of Chemicals in Foods and Cosmetics conducted a two-year inquiry into the “nature, extent and effect of the use of chemicals” in food. The committee heard testimony about the presence in food of chemicals that had been shown to be carcinogenic in animals.45 The Manufacturing Chemists’ Association (MCA) feared that, to allay the public’s growing concern about food additives and pesticides, Congress might force the industry to test chemicals that were added to or contaminated food.46 In response, the MCA hired H&K in 1951; John W. Hill personally attended the monthly MCA directors’ meetings and helped plan the MCA’s response to Delaney.47 For the most part, the MCA public relations effort was successful. Congress did not pass legislation mandating testing, although weaker legislation was enacted enabling the FDA to begin regulating chemicals in the food supply. Rep. Delaney did succeed in inserting a prohibition on the inclusion of any cancer-causing chemical in food, known as the “Delaney clause,” into a later piece of food safety legislation enacted in 1958.45 Having developed a program to defend the presence of chemicals in the food supply, H&K was well positioned to design the campaign to convince the world that cigarette smoking was not dangerous.48
Starting in the first decades of the 20th century, there were numerous indicators that asbestos was a potent cause of lung disease and cancer. Barry Castleman,49 Paul Brodeur,50 and others51,52 have documented the asbestos industry’s activities to prevent information about the risks associated with asbestos exposure from reaching the scientific literature and the popular press.
In the face of a massive epidemic, the industry questioned and distorted the science. In 1967, Johns-Manville, the largest North American asbestos producer, retained H&K, which recommended that the industry form the Asbestos Information Association (AIA); the co-director of H&K’s Division of Scientific, Technical, and Environmental Affairs served as the AIA’s first full-time executive director. The strategy developed by the public relations firm was for the asbestos industry “to admit to the hazards of asbestos where they are demonstrable (emphasis added), publicize efforts of the industry to identify and control asbestos hazards, and, finally, to combat the often hysterical charges of some groups concerning hazards of infinitesimal amounts of asbestos in the environment.”53
The early 1970s ushered in the modern regulatory state in the United States. Agencies known by acronyms (e.g., EPA, OSHA, MSHA, CPSC, NHTSA) were created with the goals of protecting the environment and the public’s health and safety.54 The sophistication of the regulated industries has grown along with the development of the regulatory apparatus.
Opponents of proposed regulation relied (and continue to rely) on a menu of themes about the underlying science. Employers facing regulation by the Occupational Safety and Health Administration (OSHA) often claimed that because they had not documented an elevated rate of disease among their own employees exposed to a particular substance, that substance did not require stronger regulation. These claims were generally made in the absence of an epidemiologic investigation capable of detecting anything but the most overwhelming exposure–disease relationship. Opponents of regulation made other arguments as well: the human data are not representative, the animal data are not relevant, or the exposure data are incomplete or unreliable. These assertions were often accompanied by the declaration that more research is needed before protective action is justified.
In January 1973, the Oil, Chemical, and Atomic Workers (OCAW) union and the HRG petitioned OSHA for an emergency temporary standard to prevent workers’ exposure to numerous carcinogens. Under the Occupational Safety and Health Act, the secretary of labor may issue an emergency temporary standard upon determining that employees are exposed to a “grave danger.” OSHA responded to the OCAW and HRG petition on May 3, 1973, by issuing an emergency temporary standard.
Several of the carcinogens addressed by OSHA’s emergency temporary standard were aromatic amines, chemical building blocks necessary to produce many commercially important dyes. Decades earlier, scientists had identified several of these aromatic amines, including benzidine and beta-naphthylamine, as potent bladder carcinogens.55,56 In fact, when OSHA later published its final carcinogens rule in January 1974, the agency noted that “the Benzidine Task Force of the Synthetic Organic Chemical Manufacturers Association (SOCMA) does not oppose OSHA considering benzidine as carcinogenic to humans.”57 There was little disagreement from manufacturers as to the carcinogenicity of benzidine; that debate had concluded decades earlier.
Indeed, SOCMA and other opponents of OSHA’s plan to regulate benzidine acknowledged that the chemical caused bladder cancer in humans. To justify its opposition to OSHA’s rule, SOCMA asserted that although workers had once been exposed to dangerous levels of benzidine, current workplace conditions were much improved and did not pose a risk to workers. In its testimony to OSHA, SOCMA reported: “All of the reported instances of bladder tumors in benzidine workers of which we are aware involve employees who were exposed to benzidine before the improved production and use procedures were adopted.”58
Another substance included in OSHA’s carcinogens rulemaking was dichlorobenzidine (DCB), a chemical structurally similar to benzidine. The manufacturers of DCB strongly opposed regulating DCB as a carcinogen, asserting in June 1973 that it “is not a known human carcinogen and that there is quite good evidence to show affirmatively that it is not carcinogenic to man.”59 The DCB subcommittee of the manufacturers’ trade association told OSHA that “not a single case [emphasis in original] of cancer or other serious illness can be attributed to its use.”60 By then, however, there were already several studies in the scientific literature demonstrating the ability of DCB to cause cancer in animals.61 Six months earlier, a team of scientists sent by the National Institute for Occupational Safety and Health (NIOSH) had conducted a field survey of Allied Chemical’s Buffalo, NY, facility, where both benzidine and DCB were manufactured. NIOSH found that while rigorous controls were in place for benzidine exposure, the same was not true for DCB. The manufacturers’ position was that there was “good evidence” that DCB was not a human carcinogen; in contrast, NIOSH researchers noted that the manufacturers’ evidence was based merely on claims that they had “never seen a case” of human bladder cancer caused by dichlorobenzidine, and that it ignored evidence suggesting DCB was a potent animal carcinogen.62
During the same period, the Upjohn Company also manufactured DCB at its North Haven, Conn, plant; Upjohn had switched from benzidine to DCB production there in the mid-1960s. Like Allied Chemical, Upjohn opposed the proposed OSHA standard, asserting that the cases of bladder cancer at its plant among workers exposed to both benzidine and DCB “were probably attributable to benzidine.”63 Unacknowledged was the obvious limit of that opinion: Upjohn workers had not yet been exposed to DCB long enough for it alone to have caused a recognizable increase in the incidence of bladder cancer at the facility. By 1985, however, cancer cases started appearing in workers who were first employed at the plant after benzidine was phased out. A study conducted in 1995 found an eight-fold excess risk of bladder cancer among workers who began work at the facility after exposure to benzidine had stopped.64
Another substance OSHA planned to address with its carcinogens regulation was 4,4′-methylenebis(2-chloroaniline), referred to as MOCA or MBOCA. The primary scientific evidence on which OSHA relied to justify its proposed action came from studies using laboratory animals. The opposition to OSHA’s rule for this substance was fierce, with opponents asserting that OSHA’s decision to rely on data from animal studies was “illogical.”65 The Polyurethane Manufacturers Association asserted that “no epidemiological or clinical evidence exists to even hint at carcinogenicity in humans even though studies have been undertaken covering in excess of 18 years of human exposure to MOCA at the DuPont Company.”66
OSHA’s proposed MOCA standard was never promulgated, and the two US producers of MOCA ceased manufacturing the chemical by 1980. NIOSH researchers later conducted a screening program at one of the facilities and reported that three of the 385 employees screened had tumors of the bladder. Two of the men were nonsmokers under age 30 who had first been exposed to MOCA 8 and 11 years, respectively, before their cancers were diagnosed.67
In early 1974, the plastics industry was in crisis. A B.F. Goodrich physician in Louisville, Ky, reported four cases of angiosarcoma of the liver among workers at one factory producing vinyl chloride monomer (VCM) for the production of polyvinyl chloride (PVC), one of the industry’s most important products. This type of cancer is exceedingly rare in humans, and the report of four cases at one facility was sufficient to cause alarm.68 Federal scientists mounted epidemiological investigations immediately after the B.F. Goodrich report. Dozens of workers at other VCM/PVC facilities were found to have this rare form of liver cancer,69–72 and epidemiological studies also suggested that VCM/PVC workers were at greater risk of developing brain cancer.73
But the crisis facing the plastics industry was heightened by what was occurring in a research laboratory. Angiosarcomas were being detected in laboratory animals exposed to levels of VCM below the OSHA standard in effect at the time, and the manufacturers had intentionally concealed this information from federal regulators.42 Since relatively low levels of VCM exposure had been implicated in cancer causation, and there was no known safe level of exposure, OSHA proposed a new VCM standard of “no detectable level.”74
The Society of the Plastics Industry (SPI) did what many industries do when they learn that one of their most important products is a carcinogen: it hired a public relations firm. H&K was brought in to help the industry prepare for OSHA’s public hearings and to assist SPI in convincing OSHA to accept a more relaxed standard.
H&K’s advice was consistent with the guidance they offered to other corporate clients faced with damning scientific evidence about the hazards of their products. SPI promoted an alternative exposure level, one that was less stringent than the one OSHA had proposed. To manufacture the appearance that SPI’s recommendation was science-based, the public relations firm instructed SPI to emphasize scientific uncertainty and assert: “It has not been demonstrated that a health hazard exists at the levels recommended by SPI.”75 In its internal documents, however, H&K reminded SPI that “it should also be remembered that the corollary to this statement is that it has not been scientifically demonstrated that the SPI recommended levels are truly safe.”75
Currently, the “junk science” movement is the most prominent public face of the attack on the scientific basis both for compensating individuals injured by environmental exposures and for protecting the public’s health from many of those same exposures.
The label “junk science” was invented and widely publicized to denigrate science supporting environmental regulation and victim compensation. The movement, which ridicules research that threatens powerful interests irrespective of its quality, was spawned by the same industries that have been manufacturing uncertainty for decades.
Defenders of pollution and dangerous products often call for policies and legal decisions to be based in “sound science,” a concept that is also rarely defined but that presumably signifies the opposite of whatever has been labeled junk science. University of California researchers Elisa Ong and Stanton Glantz traced the origins of the sound science movement by examining thousands of pages of tobacco industry documents made public through litigation. They documented the central but disguised role of Philip Morris in engineering and funding the sound science effort, operating through an organization called The Advancement of Sound Science Coalition (TASSC).76
It is difficult to find a meaningful definition of the term “junk science.” Peter Huber, who is often credited with coining the term, offers a broad-ranging “I know it when I see it” description rather than definition: “Junk science is the mirror image of real science, with much of the same form but none of the substance. . . . It is a hodgepodge of biased data, spurious inference, and logical legerdemain. . . . It is a catalog of every conceivable kind of error: data dredging, wishful thinking, truculent dogmatism, and, now and again, outright fraud.”77
The junkscience.com website (which was founded and is run by the former executive director of TASSC) defines junk science as “faulty scientific data and analysis used to further a special agenda.”78 The site contains a roster of “junk scientists,” including six elected members of the Institute of Medicine of the National Academy of Sciences, as well as four recipients of the American College of Epidemiology’s highest honor, the Abraham Lilienfeld Award.79 Evidently, when scientists are asked to identify their most outstanding colleagues, they do not share the opinions of the promoters of the junk science label.
The accusation of junk science is not always used in actual regulatory proceedings, perhaps because its use would expose the antiscientific bent of opponents of public health regulation. It is more effectively used in public forums, where attacks on the scientific basis of public health standards are weapons in the political opposition to the standards. When genuine scientific uncertainty does not exist, corporations fearing regulation follow the strategy developed by the tobacco industry. They hire scientists who, while not denying that a relationship exists between the exposure and the disease, argue that “the evidence is inconclusive.” As a result, a lucrative business of science for hire has emerged. Consultants in epidemiology, biostatistics, and toxicology are frequently engaged by industries facing regulation to dispute data used by regulatory agencies in developing public health and safety standards. These consultants often reanalyze studies that had reported positive findings, with the elevated risks of disease disappearing in the reanalysis.
Further proof of the mercenary, rather than scientific, basis for the magnification and manufacture of scientific uncertainty comes from Frank Luntz, a political consultant to the Republican Party. In early 2003, Luntz advised his clients that “Winning the Global Warming Debate” could be accomplished by focusing on uncertainty and differences among scientists:
Voters believe that there is no consensus about global warming within the scientific community. Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly. Therefore, you need to continue to make the lack of scientific certainty a primary issue in the debate. . . . The scientific debate is closing [against us] but not yet closed. There is still a window of opportunity to challenge the science [emphasis in original].80
In reality, there is a great deal of consensus among climate scientists about climate change.81–83 Luntz understands that it is possible to oppose (and delay) regulation without being branded as antienvironmental, by focusing on scientific uncertainty and by manufacturing uncertainty if it does not exist.
As the above discussion makes clear, the junk science movement has little relation to actual science. The movement’s adherents have never established a method to distinguish junk science from the real thing. As a result, the label means little more than “I don’t like your study.”84 Beyond this, however, the junk science label was invented by, and has been a powerful tool in the hands of, opponents of public health and environmental regulation and litigation. Although its meaning disappears when examined carefully, the term has gained widespread acceptance in the current debate over the use of scientific evidence in public policy.85 Part of its success can be attributed to the extensive financial support junk science proponents receive from corporations eager to avoid regulation and litigation, but some of it lies in the very nature of scientific evidence dealing with human beings: in any given scientific debate involving human health, there are likely to be various published studies with inconsistent or even contradictory findings.
The success of the junk science movement can be seen in its two primary institutional manifestations: the Daubert26 decision and the Data Quality Act.28 Both of these are structured to force the piece-by-piece examination of scientific evidence, in contrast to the weight-of-the-evidence approach used by most scientists in reaching conclusions in the face of uncertainty.
In June 1993, the US Supreme Court issued its ruling in Daubert v Merrell Dow Pharmaceuticals, Inc, requiring federal judges to serve as scientific gatekeepers, allowing into evidence only expert testimony that they deem relevant and reliable.26 A recent analysis found that judges are requiring physicians who testify as experts to apply standards of causal inference that exceed those physicians use to diagnose and treat their own patients.86
The effects of the Daubert decision on litigation alleging harm from hazardous products can be seen in several cases involving Parlodel, a drug used through the early 1990s to stop postpartum lactation. Until it was withdrawn from the market, a number of young women who had been prescribed Parlodel suffered severe circulatory system episodes (including heart attacks and strokes) shortly after taking the drug. On the basis of case reports and animal studies, and the fact that Parlodel can cause a rapid rise in blood pressure in humans, the FDA in 1985 requested that the drug’s manufacturer include warnings about hypertension, seizure, and stroke in the drug’s labeling. The evidence continued to accumulate; the FDA’s concern was so great that in 1994 it requested that Parlodel’s manufacturer stop selling the drug to lactating women.87
Yet when several women sued the drug’s manufacturers, claiming Parlodel was responsible for their illness, their cases were essentially thrown out of court for lack of scientific certainty. Judges in several jurisdictions refused to allow jurors to consider the testimony of scientists or physicians who agreed with the FDA that, on the basis of case reports, animal studies, and the way the drug works in the body, Parlodel could cause circulatory disorders. Applying the Daubert rule, the judges demanded a level of certainty that was virtually impossible to provide.86
For more than 10 years, Daubert has been the law of the land. Scholars and other authors have written on its impact and used actual judicial decisions to illustrate the disconnect between legal proof and scientific evidence.88–92 Few authors, however, have explored the organized movement to extend Daubert’s reach from the judiciary into the executive branch, in particular, into the federal rulemaking arena.
Emboldened by the success of Daubert in limiting the use of scientific evidence in the courts, antiregulatory interests are promoting the application of Daubert principles in judicial review of federal regulation.93–96 Most notably, Daubert is prominently featured in the official position on scientific information in federal rulemaking of the US Chamber of Commerce:
The same standards of relevance and reliability that safeguard the rights of litigants in federal courts should safeguard the public interest in the regulatory process. Regulations affecting business and the public should have a scientific, not political, foundation. That’s why we advocate the adoption of an Executive Order requiring all federal agencies to apply the Daubert standards in the administrative rule-making process.97
Proponents of public health protections, especially those advanced in the face of scientific uncertainty, should be wary of calls to extend Daubert to the regulatory arena. The legal, economic, and political obstacles faced by regulators will increase dramatically when Daubert-like criteria are applied to each piece of scientific evidence used to support a regulation.
Those who oppose public health regulations or seek methods to delay health protections have a new tool in their arsenal: the Data Quality Act (DQA). The law originated as a rider on the appropriations bill for the Treasury Department, slipped into the legislation by Rep. Jo Ann Emerson (R-MO). It consisted of two short paragraphs in the 712-page Consolidated Appropriations Act of 2001,28 sandwiched between provisions to transfer ownership of land in Grand Rapids, Mich, and to settle litigation on nonforeign area cost-of-living allowances.98 There were no hearings or debate on the DQA, meaning no legislative history exists to help clarify Congress’s intentions in passing it.
The DQA authorized the Office of Management and Budget (OMB) to develop guidelines to “ensure and maximize data quality” and to establish procedures allowing formal challenges to information disseminated by federal agencies. Anyone who believes that information disseminated by an agency lacks sufficient “quality, objectivity, utility, or integrity” may request that it be corrected. The DQA sounds harmless; it is difficult to argue against ensuring the quality and integrity of government-disseminated information. Yet its devious conception suggests that its intentions are not entirely innocent.
It has been widely reported that Rep. Emerson inserted these provisions at the request of Jim Tozzi, an OMB economist during the 1970s and 1980s99–101 and founder of Multinational Business Services. Mr. Tozzi has been an advocate for industry-funded “regulatory reform” efforts and founded the Center for Regulatory Effectiveness. He boasts about the convergence of the junk science movement and the DQA. “The law,” he suggested, “will simply stop the ‘junk science’ that can lead to useless and expensive regulations.”102
A petition filed in 2003 asked the EPA to discontinue disseminating its 1986 publication Guidance for Preventing Asbestos Disease Among Auto Mechanics, asserting the booklet “is routinely used to convey the misperceptions that EPA has conducted a complete analysis of the scientific and medical literature and has concluded that brake mechanic work is in fact hazardous and that as a direct result brake mechanics are at increased risk of contracting an asbestos-related disease, including mesothelioma, from such exposure.”103
In response, EPA withdrew the publication from its Web site and announced plans to replace it with a revised version.104 More than a year after receiving the petition, EPA has yet to issue a new booklet.
Every first-year public health student is taught how John Snow stopped a cholera epidemic in London. Over a 10-day period in September 1854 in which more than 500 Londoners died from the disease, Snow used a city map to mark the location of each household with a case of cholera. He quickly determined that Londoners who drank from one particular water source were at the highest risk of disease, and he recommended removing the handle of the pump supplying water from that source.105 By acting on the best evidence available at the time, Snow averted additional deaths. Had government officials in London demanded absolute certainty, no preventive measures would have been taken for another 30 years, until the cholera bacterium (Vibrio cholerae) was identified.
Protecting the public’s health requires regulatory policies and approaches that explicitly acknowledge uncertainty while providing parameters that support decision-making based on limited data when significant risk to human health or the environment exists. These parameters should be rooted in the fundamental paradigm governing public health: decisions must be made using the best evidence currently available. Even if these parameters are rigorously applied, the debate over the science underpinning public health regulation is unlikely to disappear, because protective actions often involve substantial financial costs. The debate is further complicated by the reliance of government agencies on regulated parties for much of the scientific information used to formulate regulations, a dependence made necessary by limited federal research funding.
To limit the impact of manufactured uncertainty and restore scientific integrity to the regulatory process, the public health system must reestablish procedures that enable practitioners to evaluate and apply scientific evidence in a manner that adequately protects the public’s health and environment. Although there is no magic bullet for this problem, increased transparency concerning conflicts of interest, especially the financial relationships between the authors and sponsors of studies used in regulatory and legal proceedings, is clearly warranted.
Following a series of alarming instances in which the sponsors of research used their financial control to the detriment of the public’s health, a group of leading biomedical journals has established policies designed to guard published articles against commercial bias and to require authors to accept full control of and responsibility for their work. These journals will now publish only studies conducted under contracts in which the investigators had the right to publish the findings without the consent or control of the sponsor. In a joint statement, the editors of the journals asserted that contractual arrangements allowing sponsor control of publication “erode the fabric of intellectual inquiry that has fostered so much high-quality clinical research.”106
Federal regulatory agencies, charged with protecting the public’s health and environment, have no requirements for “research integrity” comparable to those of medical journals. When studies are submitted to the EPA or OSHA, for example, for consideration in rulemaking, the agencies have no authority to inquire who paid for the studies or whether the studies would have seen the light of day had the sponsor not approved the results. As a result, sponsors with clear conflicts of interest have no incentive to relinquish control over sponsored research concerning their products and activities.
Federal agencies should adopt, at a minimum, requirements for research integrity comparable to those used by biomedical journals: Parties that submit data from research they have sponsored must disclose if the investigators had the contractual right to publish their findings without the consent or influence of the sponsor.107
Some policymakers fail to recognize that not all studies are created equal. Opponents of regulation often hire scientific consulting firms that specialize in “product defense” to reanalyze data from the studies used to support or shape public health and environmental protections. The result is sometimes a set of seemingly equal and opposite studies, encouraging policymakers to do nothing in the face of apparently contradictory findings.
Epidemiologists recognize that the results of post hoc analyses do not have the same validity as the findings of studies designed to test an a priori hypothesis. Regulators, jurists, and other policymakers are often called on to ascribe relative weight to different studies; while no evidence should be discarded entirely, the findings of post hoc analyses (and reanalyses) should be labeled accordingly, accorded less weight, and not treated as equal to those of original research.
In our current regulatory system, debate over science has become a substitute for debate over policy. Opponents of regulation use the existence of uncertainty, no matter its magnitude or importance, as a tool to counter imposition of public health protections that may cause them financial difficulty. It is important that those charged with protecting the public’s health recognize that the desire for absolute scientific certainty is both counterproductive and futile. This recognition underlies the wise words of Sir Austin Bradford Hill, delivered in an address to the Royal Society of Medicine in 1965:
All scientific work is incomplete—whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone action that it appears to demand at a given time. . . . Who knows, asked Robert Browning, but the world may end tonight? True, but on available evidence most of us make ready to commute on the 8:30 next day.108
This work was supported by the Project on Scientific Knowledge and Public Policy (SKAPP). Major support for SKAPP is provided by the Common Benefit Trust, a fund established pursuant to a court order in the Silicone Gel Breast Implant Products Liability Litigation.
The authors appreciate the helpful comments provided by members of the SKAPP planning committee, Carl Cranor, and two other peer reviewers.