Opponents of public health and environmental regulations often try to “manufacture uncertainty” by questioning the validity of the scientific evidence on which the regulations are based. Though most closely identified with the tobacco industry, this strategy has also been used by producers of other hazardous products. Its proponents use the label “junk science” to ridicule research that threatens powerful interests.

This strategy of manufacturing uncertainty is antithetical to the public health principle that decisions be made using the best evidence available. The public health system must ensure that scientific evidence is evaluated in a manner that assures the public’s health and environment will be adequately protected.

Every bottle of aspirin sold in the United States today includes a warning label advising parents that aspirin consumption by children with viral illnesses increases the child’s risk of developing Reye’s syndrome. Before the mandatory warnings were imposed by the Food and Drug Administration (FDA), the toll of Reye’s syndrome was substantial: 555 cases were reported in 1980, and one in three children who developed Reye’s syndrome died from it.1 Aspirin consumption increases the risk of Reye’s syndrome by an estimated 4000 percent.2 Today, fewer than a handful of Reye’s syndrome cases are reported each year; the warning label and public education campaign have saved the lives of hundreds of children.1,3,4
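To put that estimate in more familiar terms, the percentage increase can be converted directly into a relative risk; the short calculation below is simply an arithmetic restatement of the 4000 percent figure cited above, not an additional result from the underlying study.

RR = 1 + (4000 / 100) = 41

In other words, children with viral illnesses who took aspirin were roughly 40 times as likely to develop Reye’s syndrome as comparable children who did not.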

Although the disappearance of Reye’s syndrome is considered a “public health triumph,”5 it is a bittersweet one. An untold number of children became disabled or died from Reye’s syndrome while the aspirin industry delayed government efforts to warn parents, arguing that the scientific evidence was incomplete, unclear, or uncertain.

In 1980, following the publication of four studies showing that children with chicken pox or flu who took aspirin were more likely to develop Reye’s syndrome, the Centers for Disease Control (CDC) issued an alert to the medical community. But the aspirin industry, with the assistance of the White House’s Office of Management and Budget, was able to delay a major government public educational program for two years, and mandatory labels for four years.6 Although the four studies were enough for the CDC to issue warnings, the industry raised 17 specific “flaws” in the studies7 and insisted that more reliable studies were needed to establish a causal association between aspirin and Reye’s syndrome. The aspirin industry continued to assert this despite a Federal Advisory Committee’s conclusion that children with viral infections should avoid aspirin, going so far as to fund a public service announcement claiming, “We do know that no medication has been proven to cause Reye’s” (emphasis in the original).8 Litigation by Public Citizen’s Health Research Group (HRG) eventually forced the recalcitrant Reagan administration to make the warnings mandatory in 1986.

The aspirin manufacturers did not invent the strategy of questioning the underlying science in order to prevent regulation; it had been employed successfully for decades by polluters and producers of hazardous products. The strategy has now become so common that it is unusual for the science behind a proposed public health or environmental regulation in the United States not to be challenged by a corporation facing regulation. The US National Toxicology Program (NTP), for example, publishes a list of substances that can cause cancer.9 Before a new substance is added to the list, there is a public process involving several independent scientific reviews. In an effort to avoid the “cancer-causing” label, industry-employed scientists challenged the evidence underlying the proposed designations for alcoholic beverages,10 beryllium,11,12 crystalline silica,13,14 ethylene oxide,15–17 nickel compounds,18 and certain wood dusts.19 In each of these cases, the substance had already been categorized by the International Agency for Research on Cancer as carcinogenic to humans.20 Further, in each case the panel of nongovernment scientists reviewing the NTP nominations weighed the available evidence and voted to uphold the cancer-causing designation.

When new regulations are being considered, opponents raise the issue of scientific uncertainty no matter how powerful or conclusive the evidence. Within the scientific community, for example, there is widespread consensus that broad-spectrum ultraviolet (UV) radiation from sunlight and tanning lamps causes skin cancer. Yet the Indoor Tanning Association21 and others22,23 have attempted to derail the NTP designation of cancer-causing by questioning the scientific evidence with which UV radiation was labeled a carcinogen.

Environmental activists can also be guilty of using the existence of scientific uncertainty to advance policy aims through an overzealous application of what has been labeled “the precautionary principle.” If the weighing of potential risks and benefits is transformed into a demand for certainty that a policy or action will result in no harm, scientific advances or public health interventions with the potential to genuinely improve the human condition can be disparaged and delayed.24,25

In parallel to their attempts to delay or prevent regulation through assertions of scientific uncertainty, polluters and manufacturers of hazardous products have promoted the “junk science” movement, which attempts to influence public opinion by ridiculing scientists whose research threatens powerful interests, irrespective of the quality of those scientists’ research. Advocates for this perspective allege that many of the scientific studies (and even scientific methods) used in the regulatory and legal arenas are fundamentally flawed, contradictory, or incomplete, asserting that it is wrong or premature to regulate the exposure in question or to compensate the worker or community resident who may have been made sick by the exposure.

Scientific uncertainty is inevitable in designing disease prevention programs. Scientists cannot feed toxic chemicals to people, for example, to see what dose causes cancer; instead, we study the effects on laboratory animals, and we harness the “natural experiments” where human exposures have already happened. Both epidemiologic and laboratory studies have many uncertainties, and scientists must extrapolate from study-specific evidence to make causal inferences and recommend protective measures. Absolute certainty is rarely an option.

By magnifying and exploiting these uncertainties, polluters and manufacturers of dangerous products have been remarkably successful in delaying, often for decades, regulations and other measures designed to protect the health and safety of individuals and communities.

This strategy, which began as a public relations tool, is now applied in the legal and regulatory arenas, constraining the ability of the judicial and regulatory systems to address issues of public health and victim compensation. The US Supreme Court’s 1993 Daubert v Merrell Dow Pharmaceuticals, Inc26 decision has enabled manufacturers of products alleged to have caused harm to exclude credible science and scientists from court cases.27 Similarly, the Data Quality Act28 provides a mechanism for parties to magnify differences between scientists in order to avoid regulation and victim compensation.

Our objective is to examine the historical development and current applications of the “manufacturing uncertainty” and “junk science” strategies, considering their relationship to what might be best labeled as the public health paradigm. Preventing disease and promoting health are the fundamental goals of public health; the public health paradigm asserts that actions taken to protect the public must be based on the best evidence currently available. The public health paradigm runs head-on into these orchestrated campaigns to manufacture uncertainty, pitting advocates for safety and health protections who acknowledge scientific uncertainty against opponents who capitalize on the unknown to avert protective action.

Perhaps no industry has employed the strategy of promoting doubt and uncertainty more effectively, or for a longer period, than the tobacco industry. For almost half a century, the tobacco companies hired scientists to dispute, first, the evidence that smokers were at greater risk of dying of lung cancer; second, the role of tobacco use in heart disease and other illnesses; and, finally, the evidence that environmental tobacco smoke increased disease risk in nonsmokers. In each case, the scientific community eventually reached the consensus that tobacco smoke caused these conditions.29–31 Despite the overwhelming scientific evidence and the smoking-related deaths of millions of smokers, the tobacco industry was able to wage a campaign that successfully delayed regulation and victim compensation for decades.32–34

Following a strategic plan developed in the mid-1950s by Hill and Knowlton (H&K), the tobacco industry hired scientists and commissioned research to challenge the growing scientific consensus linking cigarette smoking and severe health effects. Initially, H&K was engaged to minimize the public impact of an American Cancer Society report linking tobacco with lung cancer. On the advice of H&K’s experts, the tobacco industry emphasized three basic points: “That cause-and-effect relationships have not been established in any way; that statistical data do not provide the answers; and that much more research is needed.”35

The tobacco industry’s goal was to promote scientific uncertainty. In one confidential memorandum, H&K consultants boasted that after 5½ years of effort, they successfully created “an awareness of the doubts and uncertainties about the cigarette charges.” H&K credited tobacco-funded research that “forced a recognition that the cigarette theory of lung cancer causation is not established scientifically” and “raised many cogent questions concerning the validity of the cigarette theory.”36

The tobacco industry recognized the value of magnifying the debate in the scientific community on the cause-and-effect relationship between smoking and lung cancer. In the 1960s, the Tobacco Institute published a journal entitled Tobacco and Health Research, aimed at physicians and scientists. The criteria for publishing articles in the journal were straightforward: “The most important type of story is that which casts doubt on the cause-and-effect theory of disease and smoking.” In order to ensure that the message was clearly communicated, the PR firm advised that headlines “should strongly call out the point—Controversy! Contradiction! Other Factors! Unknowns!”37

The same message was communicated to the public. According to one tobacco industry executive: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public. It is also the means of establishing a controversy (emphasis added).”38

The boldness and success of this campaign, together with the almost unimaginable human toll associated with cigarette smoking, have resulted in the tobacco industry being labeled in the public consciousness as a uniquely nefarious, if not criminal, enterprise. (Just as there had been dispute over the scientific evidence, the tobacco industry now promotes an alternative interpretation of the history of this dispute. Historian Robert Proctor has reported that the industry has retained several historians who testify in court cases that “everyone has always known that cigarettes were dangerous, and that even after 1964 there was still ‘room for responsible disagreement’ with the US Surgeon General’s conclusion that year that tobacco was a major cause of death and injury.”)39 But the tobacco industry is not alone; manufacturing uncertainty and creating doubt about scientific evidence is ubiquitous in the organized opposition to the government’s attempts to regulate health hazards.

Starting in the earliest years of the 20th century, there was a series of episodes in which industries facing allegations that their products might be harmful to human health attempted to dispute the science on which the health concerns were based. Producers of hazardous products reassured the public of their products’ safety by attacking the studies suggesting that users could be harmed.40,41

The Lead Industry

Gerald Markowitz and David Rosner42,43 and Christian Warren44 have recounted the lead industry’s efforts to mislead decision-makers and the public in order to protect its ability to sell leaded paint and leaded gasoline. These public health historians note that early in the 1900s, lead was well known as an occupational hazard, and several European countries had already banned the use of white lead as an ingredient in interior paint. In the United States, however, when cases of lead poisoning in workers appeared in the 1920s, the industry masterfully refocused attention away from the poisoned workers, emphasizing that many other lead-exposed workers, such as chauffeurs, did not show adverse health effects.42 The industry shifted the blame from the lead itself and the manufacturing process, claiming that the workers had sloppy habits and were careless. By the 1930s and 1940s, when articles reporting cases of lead-poisoned children were published in medical journals, the industry rejected the claims and again defended its products by shifting blame, this time to the poisoned children who “were sub-normal to begin with.”42

The Chemical Industry

The chemical industry became alarmed in the early 1950s when a well-publicized congressional investigation fed the public’s concern about carcinogens in the food supply. Congressman James J. Delaney’s House Select Committee to Investigate the Use of Chemicals in Foods and Cosmetics conducted a two-year inquiry into the “nature, extent and effect of the use of chemicals” in food. The committee heard testimony about the presence of chemicals used in food that had been shown to be carcinogenic in animals.45 The Manufacturing Chemists’ Association (MCA) feared that, to allay the public’s growing concern about food additives and pesticides, Congress might force the industry to test chemicals that were added to or contaminated food.46 In response, the MCA hired H&K in 1951; John W. Hill personally attended the monthly MCA directors’ meetings and helped plan the MCA’s response to Delaney.47 For the most part, the MCA public relations effort was successful. Congress did not pass legislation mandating testing, although weaker legislation was enacted enabling the FDA to begin to regulate chemicals in the food supply. Rep. Delaney was, however, able to insert a prohibition on the inclusion of any cancer-causing chemical in food, known as the “Delaney clause,” into a later piece of food safety legislation enacted in 1958.45 Having developed a program to defend the presence of chemicals in the food supply, H&K was well positioned to design the campaign to convince the world that cigarette smoking was not dangerous.48

The Asbestos Industry

Starting in the first decades of the 20th century, there were numerous indicators that asbestos was a potent cause of lung disease and cancer. Barry Castleman,49 Paul Brodeur,50 and others51,52 have documented the asbestos industry’s activities to prevent information about the risks associated with asbestos exposure from reaching the scientific literature and the popular press.

In the face of a massive epidemic, the industry questioned and distorted the science. In 1967, Johns-Manville, the largest North American asbestos producer, retained H&K, which recommended that the industry form the Asbestos Information Association (AIA); the co-director of H&K’s Division of Scientific, Technical, and Environmental Affairs served as the AIA’s first full-time executive director. The strategy developed by the public relations firm was for the asbestos industry “to admit to the hazards of asbestos where they are demonstrable, (emphasis added) publicize efforts of the industry to identify and control asbestos hazards, and, finally, to combat the often hysterical charges of some groups concerning hazards of infinitesimal amounts of asbestos in the environment.”53

The early 1970s ushered in the modern regulatory state in the United States. Agencies known by acronyms (e.g., EPA, OSHA, MSHA, CPSC, NHTSA) were created with the goals of protecting the environment and the public’s health and safety.54 The sophistication of the regulated industries has grown along with the development of the regulatory apparatus.

Opponents of proposed regulation relied (and continue to rely) on a menu of themes about the underlying science. Employers facing regulation by the Occupational Safety and Health Administration (OSHA) often claimed that because they had not documented an elevated rate of disease among their own employees exposed to a particular substance, that substance did not require stronger regulation. These claims were generally made in the absence of any epidemiologic investigation capable of detecting anything but the most overwhelming exposure–disease relationships. Opponents of regulation made other arguments as well: that the human data were not representative, that the animal data were not relevant, or that the exposure data were incomplete or unreliable. These assertions were often accompanied by the declaration that more research was needed before protective action could be justified.

Bladder Carcinogens

In January 1973, the Oil, Chemical, and Atomic Workers (OCAW) union and the Health Research Group (HRG) petitioned OSHA for an emergency temporary standard to prevent workers’ exposure to numerous carcinogens. According to the OSH Act, the secretary of labor may issue an emergency temporary standard when he or she determines that employees are exposed to a “grave danger.” OSHA responded to the OCAW and HRG petition on May 3, 1973, by issuing an emergency temporary standard.

Several of the carcinogens addressed by OSHA’s emergency temporary standard were aromatic amines, chemical building blocks necessary to produce many commercially important dyes. Decades earlier, scientists had identified several of these aromatic amines, including benzidine and beta-naphthylamine, as potent bladder carcinogens.55,56 In fact, when OSHA later published its final carcinogens rule (in January 1974), the agency noted “the Benzidine Task Force of the Synthetic Organic Chemical Manufacturers Association (SOCMA) does not oppose OSHA considering benzidine as carcinogenic to humans.”57 There was little disagreement from manufacturers as to the carcinogenicity of benzidine; that debate had concluded decades earlier.

Indeed, SOCMA and other opponents of OSHA’s plan to regulate benzidine acknowledged that the chemical caused bladder cancer in humans. To justify their opposition to OSHA’s rule, SOCMA asserted that although workers had been exposed to dangerous levels of benzidine, current workplace conditions were much improved and did not pose a risk to workers. In their testimony to OSHA they reported: “All of the reported instances of bladder tumors in benzidine workers of which we are aware involve employees who were exposed to benzidine before the improved production and use procedures were adopted.”58

Another substance included in OSHA’s carcinogens rulemaking was dichlorobenzidine (DCB), a chemical structurally similar to benzidine. The manufacturers of DCB strongly opposed regulating DCB as a carcinogen, asserting in June 1973 that it “is not a known human carcinogen and that there is quite good evidence to show affirmatively that it is not carcinogenic to man.”59 The DCB subcommittee of the manufacturers’ trade association told OSHA “not a single case [emphasis in original] of cancer or other serious illness can be attributed to its use.”60 By then, however, there were already several studies in the scientific literature demonstrating the ability of DCB to cause cancer in animals.61 Six months earlier, a team of scientists sent by the National Institute for Occupational Safety and Health (NIOSH) had conducted a field survey of Allied Chemical’s Buffalo, NY, facility, where both benzidine and DCB were manufactured. NIOSH found that while rigorous controls were in place for benzidine exposure, the same was not true for DCB. The manufacturers’ position was that there was “good evidence” that DCB was not a human carcinogen; in contrast, NIOSH researchers noted that the manufacturers’ evidence rested merely on claims that they had “never seen a case” of human bladder cancer caused by dichlorobenzidine, and that it ignored evidence suggesting that DCB was a potent animal carcinogen.62

Around the same period, the Upjohn Company also manufactured DCB at its North Haven, Conn, plant; Upjohn had switched from benzidine to DCB production there in the mid-1960s. Like Allied Chemical, Upjohn opposed the proposed OSHA standard, asserting that the cases of bladder cancer at its plant among workers exposed to both benzidine and DCB “were probably attributable to benzidine.”63 Not acknowledged were the obvious limits to that opinion: Upjohn workers had not been exposed to DCB long enough for it alone to have caused a recognizable increase in the incidence of bladder cancer at the facility. By 1985, however, cancer cases started appearing in workers who were first employed at the plant after benzidine was phased out. A study conducted in 1995 found an eight-fold excess risk of bladder cancer among workers who began work at that facility after exposure to benzidine stopped.64

Another substance OSHA planned to address with its carcinogens regulation was 4,4′-methylenebis(2-chloroaniline), referred to as MOCA or MBOCA. The primary scientific evidence on which OSHA relied to justify its proposed action came from studies using laboratory animals. The opposition to OSHA’s rule for this substance was fierce, with opponents asserting that OSHA’s decision to rely on data from animal studies was “illogical.”65 The Polyurethane Manufacturers Association asserted that “no epidemiological or clinical evidence exists to even hint at carcinogenicity in humans even though studies have been undertaken covering in excess of 18 years of human exposure to MOCA at the DuPont Company.”66

OSHA’s proposed MOCA standard was never promulgated, and the two US producers of MOCA ceased manufacturing the chemical by 1980. NIOSH researchers later conducted a screening program at one of the facilities, reporting that 3 of the 385 employees screened were found to have bladder tumors. Two of the men were nonsmokers under age 30 who were first exposed to MOCA 8 and 11 years, respectively, before their cancers were diagnosed.67

Vinyl Chloride

In early 1974, the plastics industry was in crisis. A B.F. Goodrich physician in Louisville, Ky, reported four cases of angiosarcoma of the liver among workers at one factory producing vinyl chloride monomer (VCM) for the production of polyvinyl chloride (PVC), one of the industry’s most important products. This type of cancer is exceedingly rare in humans, and the report of four cases in one facility was sufficient to cause alarm.68 Federal scientists mounted epidemiological investigations immediately after the B.F. Goodrich report. Dozens of workers in other VCM/PVC facilities were found to have this rare form of liver cancer,69–72 and epidemiological studies also suggested that VCM/PVC workers were at greater risk of developing brain cancer.73

But the crisis facing the plastics industry was heightened by what was occurring in a research laboratory. Angiosarcomas were being detected in laboratory animals exposed to levels of VCM below the OSHA standard in effect at the time, and the manufacturers had intentionally concealed this information from federal regulators.42 Since relatively low levels of VCM exposure had been implicated in cancer causation, and there was no known safe level of exposure, OSHA proposed a new VCM standard of “no detectable level.”74

The Society of the Plastics Industry (SPI) did what many industries do when they learn that one of their most important products is a carcinogen: it hired a public relations firm. H&K was brought in to help the industry prepare for OSHA’s public hearings and to assist SPI in convincing OSHA to accept a more relaxed standard.

H&K’s advice was consistent with the guidance they offered to other corporate clients faced with damning scientific evidence about the hazards of their products. SPI promoted an alternative exposure level, one that was less stringent than the one OSHA had proposed. To manufacture the appearance that SPI’s recommendation was science-based, the public relations firm instructed SPI to emphasize scientific uncertainty and assert: “It has not been demonstrated that a health hazard exists at the levels recommended by SPI.”75 In its internal documents, however, H&K reminded SPI that “it should also be remembered that the corollary to this statement is that it has not been scientifically demonstrated that the SPI recommended levels are truly safe.”75

Currently, the “junk science” movement is the most prominent public face of the attack on the scientific basis for compensating individuals injured by environmental exposures, and for protecting the health of the public from many of those same exposures. As noted above, advocates for this perspective allege that many of the scientific studies (and even scientific methods) used in the regulatory and legal arenas are fundamentally flawed, contradictory, or incomplete, making it wrong or premature to regulate the exposure in question or to compensate the worker or community resident allegedly made sick by the exposure.

The label “junk science” was invented and widely publicized to denigrate science supporting environmental regulation and victim compensation. The junk science movement, which attempts to ridicule research that threatens powerful interests (irrespective of the quality of that research), was spawned by these same industries that have been manufacturing uncertainty for decades.

Defenders of pollution and dangerous products often call for policies and legal decisions to be based in “sound science.” This concept is also rarely defined, but it presumably signifies the opposite of whatever has been labeled junk science. University of California researchers Elisa Ong and Stanton Glantz traced the origins of the sound science movement by examining thousands of pages of tobacco industry documents made public after litigation. They documented the central but disguised role of Philip Morris in engineering and funding the sound science effort, including its operation of an organization called The Advancement of Sound Science Coalition (TASSC).76

What Is Junk Science?

It is difficult to find a meaningful definition of the term “junk science.” Peter Huber, who is often credited with coining the term, offers a broad-ranging “I know it when I see it” description rather than a definition: “Junk science is the mirror image of real science, with much of the same form but none of the substance. . . . It is a hodgepodge of biased data, spurious inference, and logical legerdemain. . . . It is a catalog of every conceivable kind of error: data dredging, wishful thinking, truculent dogmatism, and, now and again, outright fraud.”77

The junkscience.com Web site (which was founded and is run by the former executive director of TASSC) defines junk science as “faulty scientific data and analysis used to further a special agenda.”78 The site contains a roster of “junk scientists,” including six elected members of the Institute of Medicine of the National Academy of Sciences, as well as four recipients of the American College of Epidemiology’s highest honor, the Abraham Lilienfeld Award.79 It appears that when scientists have been asked to identify their most outstanding colleagues, they do not share the opinions of the promoters of the junk science label.

The accusation of junk science is not always used in actual regulatory proceedings, perhaps because its use would expose the antiscientific bent of opponents of public health regulation. It is more effectively used in public forums, where attacks on the scientific basis of public health standards are weapons in the political opposition to the standards. When genuine scientific uncertainty does not exist, corporations fearing regulation follow the strategy developed by the tobacco industry. They hire scientists who, while not denying that a relationship exists between the exposure and the disease, argue that “the evidence is inconclusive.” As a result, a lucrative business of science for hire has emerged. Consultants in epidemiology, biostatistics, and toxicology are frequently engaged by industries facing regulation to dispute data used by regulatory agencies in developing public health and safety standards. These consultants often reanalyze studies that had reported positive findings, with the elevated risks of disease disappearing in the reanalysis.

Further proof of the mercenary, rather than scientific, basis for the magnification and manufacture of scientific uncertainty comes from Frank Luntz, a political consultant to the Republican Party. In early 2003, Luntz advised his clients that “Winning the Global Warming Debate” could be accomplished by focusing on uncertainty and differences among scientists:

Voters believe that there is no consensus about global warming within the scientific community. Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly. Therefore, you need to continue to make the lack of scientific certainty a primary issue in the debate. . . . The scientific debate is closing [against us] but not yet closed. There is still a window of opportunity to challenge the science [emphasis in original].80

In reality, there is a great deal of consensus among climate scientists about climate change.81–83 Luntz understands that it is possible to oppose (and delay) regulation without being branded as antienvironmental, by focusing on scientific uncertainty and by manufacturing uncertainty if it does not exist.

As the above discussion makes clear, the junk science movement has little relation to actual science. The movement’s adherents have never established a method to distinguish junk science from the real thing. As a result, the label means little more than “I don’t like your study.”84 Beyond this, however, the junk science label was invented by, and has been a powerful tool in the hands of, opponents of public health and environmental regulation and litigation. Although its meaning disappears when examined carefully, the term has gained widespread acceptance in the current debate over the use of scientific evidence in public policy.85 Part of its success can be attributed to the extensive financial support junk science proponents receive from corporations eager to avoid regulation and litigation, but some of the movement’s success lies in the very nature of scientific evidence dealing with human beings: in any given scientific debate involving human health, there are likely to be published studies with inconsistent or even contradictory findings.

The success of the junk science movement can be seen in its two primary institutional manifestations: the Daubert26 decision and the Data Quality Act.28 Both of these are structured to force the piece-by-piece examination of scientific evidence, in contrast to the weight-of-the-evidence approach used by most scientists in reaching conclusions in the face of uncertainty.

The Daubert Decision

In June 1993, the US Supreme Court issued a ruling in Daubert v Merrell Dow Pharmaceuticals, Inc, requiring federal judges to serve as scientific gatekeepers, allowing into evidence only expert testimony that they deem relevant and reliable.26 A recent analysis found that judges are requiring physicians who testify as experts to apply standards of causal inference that exceed those which physicians use to diagnose and treat their own patients.86

The effects of the Daubert decision on litigation alleging harm from hazardous products can be seen in several cases involving Parlodel, a drug used through the early 1990s to stop postpartum lactation. Before it was withdrawn from the market, a number of young women who had been prescribed Parlodel suffered severe circulatory system episodes (including heart attacks and strokes) shortly after taking the drug. On the basis of case reports and animal studies, and the fact that Parlodel can cause a rapid rise in blood pressure in humans, the FDA in 1985 requested that the drug’s manufacturer include warnings about hypertension, seizure, and stroke in the drug’s labeling. The evidence continued to accumulate; the FDA’s concern was so great that in 1994 it requested that Parlodel’s manufacturer stop selling the drug to lactating women.87

Yet when several women sued the drug’s manufacturers, claiming Parlodel was responsible for their illness, their cases were essentially thrown out of court for lack of scientific certainty. Judges in several jurisdictions refused to allow jurors to consider the testimony of scientists or physicians who agreed with the FDA that, on the basis of case reports, animal studies, and the way the drug works in the body, Parlodel could cause circulatory disorders. Applying the Daubert rule, the judges demanded a level of certainty that was virtually impossible to provide.86

For more than 10 years, Daubert has been the law of the land. Scholars and other authors have written on its impact and used actual judicial decisions to illustrate the disconnect between legal proof and scientific evidence.88–92 Few authors, however, have explored the organized movement to extend Daubert’s reach from the judiciary into the executive branch, in particular, into the federal rulemaking arena.

Emboldened by the success of Daubert in limiting the use of scientific evidence in the courts, antiregulatory interests are promoting the application of Daubert principles in judicial review of federal regulation.93–96 Most notably, Daubert is prominently featured in the official position on scientific information in federal rulemaking of the US Chamber of Commerce:

The same standards of relevance and reliability that safeguard the rights of litigants in federal courts should safeguard the public interest in the regulatory process. Regulations affecting business and the public should have a scientific, not political, foundation. That’s why we advocate the adoption of an Executive Order requiring all federal agencies to apply the Daubert standards in the administrative rule-making process.97

Proponents of public health protections, especially those advanced in the face of scientific uncertainty, should be wary of calls to extend Daubert to the regulatory arena. The legal, economic, and political obstacles faced by regulators will increase dramatically when Daubert-like criteria are applied to each piece of scientific evidence used to support a regulation.

The Data Quality Act

Those who oppose public health regulations or seek methods to delay health protections have a new tool in their arsenal: the Data Quality Act (DQA). The law originated as a rider on the appropriations bill for the Treasury Department, slipped into the legislation by Rep. Jo Ann Emerson (R-MO). It consisted of two short paragraphs in the 712-page Consolidated Appropriations Act of 2001,28 sandwiched between provisions to transfer ownership of land in Grand Rapids, Mich, and to settle litigation on nonforeign area cost-of-living allowances.98 There were no hearings or debate on the DQA, meaning no legislative history exists to help clarify Congress’s intentions in passing it.

The DQA authorized the Office of Management and Budget (OMB) to develop guidelines to “ensure and maximize data quality” and to establish procedures allowing formal challenges to information disseminated by federal agencies. If someone believes that information disseminated by an agency is not of sufficient “quality, objectivity, utility, or integrity,” they may request a correction to it. The DQA sounds harmless; it is difficult to argue against ensuring the quality and integrity of government-disseminated information. Yet, its devious conception suggests its intentions are not completely innocent.

It has been widely reported that Rep. Emerson inserted these provisions at the request of Jim Tozzi, an OMB economist during the 1970s and 1980s99–101 and founder of Multinational Business Services. Mr. Tozzi has been an advocate for industry-funded “regulatory reform” efforts and is the founder of the Center for Regulatory Effectiveness. He proudly boasts about the convergence of the junk science movement and the DQA: “The law,” he suggested, “will simply stop the ‘junk science’ that can lead to useless and expensive regulations.”102

A petition filed in 2003 asked the EPA to discontinue disseminating its 1986 publication Guidance for Preventing Asbestos Disease Among Auto Mechanics, asserting the booklet “is routinely used to convey the misperceptions that EPA has conducted a complete analysis of the scientific and medical literature and has concluded that brake mechanic work is in fact hazardous and that as a direct result brake mechanics are at increased risk of contracting an asbestos-related disease, including mesothelioma, from such exposure.”103

In response, EPA withdrew the publication from its Web site and announced plans to replace it with a revised publication.104 More than a year after receiving the petition, EPA has not issued a new booklet.

Every first-year public health student is taught how John Snow stopped a cholera epidemic in London. During a 10-day period in September 1854, during which more than 500 Londoners died from the disease, Snow used a city map to mark the location of each household with a case of cholera. He quickly determined that Londoners who drank from one particular water source were at the highest risk for the disease, and he recommended removing the handle of the pump supplying water from that source.105 By using the best evidence available at the time, Snow averted additional deaths. If government officials in London had demanded absolute certainty, no preventive measures would have been taken for another 30 years, until the cholera bacterium (Vibrio cholerae) was identified.

Protecting the public’s health requires regulatory policies and approaches that explicitly acknowledge uncertainty while providing parameters that support decisionmaking based on limited data in situations where significant risk to human health or the environment exists. These parameters should be based on the fundamental paradigm governing public health: decisions must be made using the best evidence currently available. Even if these parameters for decisionmaking are rigorously applied, the debate over the science underpinning public health regulation is unlikely to disappear, because protective actions often involve substantial financial costs. The debate is further complicated by the reliance of government agencies on regulated parties for much of the scientific information used to formulate regulations, a dependence made necessary by limited federal research funding.

In order to limit the impact of manufactured uncertainty and to restore scientific integrity to the regulatory process, the public health system must reestablish procedures to enable practitioners to evaluate and apply scientific evidence in a manner that assures the public’s health and environment will be adequately protected. Although there are no magic bullets to cure this problem, increased transparency concerning conflicts of interest, especially involving the financial relationship between the authors and sponsors of studies used in regulatory and legal proceedings, is clearly warranted.

Following a series of alarming instances in which the sponsors of research used their financial control to the detriment of the public’s health, a group of leading biomedical journals has established policies intended to guard their published articles against commercial bias and to require authors to accept full control of and responsibility for their work. These journals will now publish only studies done under contracts in which the investigators had the right to publish the findings without the consent or control of the sponsor. In a joint statement, the editors of the journals asserted that contractual arrangements allowing sponsor control of publication “erode the fabric of intellectual inquiry that has fostered so much high-quality clinical research.”106

Federal regulatory agencies, charged with protecting the public’s health and environment, have no requirements for “research integrity” comparable to those of medical journals. When studies are submitted to the EPA or OSHA, for example, for consideration in rulemaking, the agencies do not have the authority to ask who paid for the studies or whether the studies would have seen the light of day had the sponsor not approved the results. As a result, sponsors with clear conflicts of interest have no incentive to relinquish control over the research they sponsor on their own products and activities.

Federal agencies should adopt, at a minimum, requirements for research integrity comparable to those used by biomedical journals: Parties that submit data from research they have sponsored must disclose if the investigators had the contractual right to publish their findings without the consent or influence of the sponsor.107

Some policymakers fail to recognize that not all studies are created equal. Opponents of regulation often hire scientific consulting firms that specialize in “product defense” to reanalyze data from the studies used to support or shape public health and environmental protections. This sometimes produces what appear to be equal and opposite studies, encouraging policymakers to do nothing in the face of apparently contradictory findings.

Epidemiologists recognize that the results of post hoc analyses do not have the same validity as the findings of studies designed to test an a priori hypothesis. Regulators, jurists, and other policymakers are often called on to ascribe relative weights to different studies; although no evidence should be discarded entirely, the findings of post hoc analyses (and reanalyses) should be labeled as such and accorded less weight and significance than the findings of original research.
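The statistical intuition behind this recommendation can be illustrated with a small simulation. The sketch below, written in Python, is purely illustrative and uses hypothetical subgroup data in which no true effect exists; it is not drawn from any study discussed in this article. It shows how an after-the-fact search across many subgroups will, by chance alone, occasionally turn up an apparently “significant” association.

import math
import random

def two_proportion_z(p1, n1, p2, n2):
    """Two-sample z statistic for comparing proportions (pooled standard error)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return 0.0 if se == 0 else (p1 - p2) / se

random.seed(1)
n_per_group = 500        # hypothetical exposed and unexposed workers per subgroup
baseline_risk = 0.10     # identical disease risk in both groups: no true effect
n_subgroups = 20         # number of after-the-fact subgroups examined

false_positives = 0
for _ in range(n_subgroups):
    exposed_cases = sum(random.random() < baseline_risk for _ in range(n_per_group))
    unexposed_cases = sum(random.random() < baseline_risk for _ in range(n_per_group))
    z = two_proportion_z(exposed_cases / n_per_group, n_per_group,
                         unexposed_cases / n_per_group, n_per_group)
    if abs(z) > 1.96:    # conventional two-sided "p < 0.05" threshold
        false_positives += 1

print(f"{false_positives} of {n_subgroups} null subgroups appear 'significant' post hoc")

Run repeatedly, roughly 1 in 20 such null comparisons crosses the conventional significance threshold, which is why findings generated by after-the-fact searching warrant the lesser weight recommended above.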

In our current regulatory system, debate over science has become a substitute for debate over policy. Opponents of regulation use the existence of uncertainty, no matter its magnitude or importance, as a tool to counter imposition of public health protections that may cause them financial difficulty. It is important that those charged with protecting the public’s health recognize that the desire for absolute scientific certainty is both counterproductive and futile. This recognition underlies the wise words of Sir Austin Bradford Hill, delivered in an address to the Royal Society of Medicine in 1965:

All scientific work is incomplete—whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone action that it appears to demand at a given time. . . . Who knows, asked Robert Browning, but the world may end tonight? True, but on available evidence most of us make ready to commute on the 8:30 next day.108

This work was supported by the Project on Scientific Knowledge and Public Policy (SKAPP). Major support for SKAPP is provided by the Common Benefit Trust, a fund established pursuant to a court order in the Silicone Gel Breast Implant Products Liability Litigation.

The authors appreciate the helpful comments provided by members of the SKAPP planning committee, Carl Cranor, and two other peer reviewers.

References

1. Belay E, Bresee J, Holman R, Khan A, Shahriari A, Schonberger L. Reye’s syndrome in the United States from 1981 through 1997. N Engl J Med. 1999; 340:1377–1382. Crossref, MedlineGoogle Scholar
2. Hurwitz E, Barrett M, Bregman D, et al. Public health service study of Reye’s syndrome and medications: Report of the main study. JAMA. 1987;257:1905–1911. Crossref, MedlineGoogle Scholar
3. Goldman L. Epidemiology in the regulatory arena. Am J Epidemiol. 2001;154:S18–S26. Crossref, MedlineGoogle Scholar
4. Davis D, Buffler P. Reduction of deaths after drug labeling for risk of Reye’s syndrome. Lancet. 1992; 340:1042. Crossref, MedlineGoogle Scholar
5. Monto A. The disappearance of Reye’s syndrome: a public health triumph. N Engl J Med. 1999;340:1423–1424. Crossref, MedlineGoogle Scholar
6. Hilts P. Protecting America’s Health. New York, NY: Alfred A. Knopf; 2003. Google Scholar
7. 47 Federal Register 57886 (1982). Labeling for salicylate-containing products, advanced notice of proposed rulemaking, Food and Drug Administration, December 28, 1982. Google Scholar
8. Lurie P, Wolfe S. Aspirin and Reye’s syndrome. In: Lurie P, Cary D, Tanio C, Wolfe S, Paradigms for Change: A Public Health Textbook for Medical, Dental, Pharmacy and Nursing Students. Washington, DC: Public Citizen’s Health Research Group (unpublished). Google Scholar
9. Tenth Report on Carcinogens. US Department of Health and Human Services, National Toxicology Program. Available at: http://ehp.niehs.nih.gov/roc/toc10.html. Accessed December 4, 2003. Google Scholar
10. Waddell W. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors. December 2–3, 1998. http://ntp.niehs.nih.gov/index.cfm?objectid=06F3F9F7-0DEB-D47E-C4A683A074666360. Accessed March 14, 2005. Google Scholar
11. Roth HD. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, January 20–21, 2000. http://ntp.niehs.nih.gov/index.cfm?objectid=06F5398C-F235-2FF4-F5DAF01B3FB43BD7. Accessed March 15, 2005. Google Scholar
12. Trichopoulos D. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, January 20–21, 2000. http://ntp.niehs.nih.gov/index.cfm?objectid=06F5398C-F235-2FF4-F5DAF01B3FB43BD7. Accessed March 15, 2005. Google Scholar
13. Glenn R. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 2–3, 1998. http://ntp.niehs.nih.gov/index.cfm?objectid=06F3F9F7-0DEB-D47E-C4A683A074666360, Accessed March 14, 2005. Google Scholar
14. Moll W. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 2–3, 1998. http://ntp.niehs.nih.gov/index.cfm?objectid=06F3F9F7-0DEB-D47E-C4A683A074666360, Accessed March 14, 2005. Google Scholar
15. Preston J. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 2–3, 1998. http://ntp.niehs.nih.gov/index.cfm?objectid=06F3F9F7-0DEB-D47E-C4A683A074666360, Accessed March 14, 2005. Google Scholar
16. Teta MJ. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 2–3, 1998. http://ntp.niehs.nih.gov/index.cfm?objectid=06F3F9F7-0DEB-D47E-C4A683A074666360, Accessed March 14, 2005. Google Scholar
17. Schotland S. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 2–3, 1998. http://ntp.niehs.nih.gov/index.cfm?objectid=06F3F9F7-0DEB-D47E-C4A683A074666360. Accessed March 14, 2005. Google Scholar
18. Oller A. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 2–3, 1998. http://ntp.niehs.nih.gov/index.cfm?objectid=06F3F9F7-0DEB-D47E-C4A683A074666360, Accessed March 14, 2005. Google Scholar
19. Blot W. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 13–15, 2000. Available at: http://ntp-server.niehs.nih.gov/htdocs/Liason/121300.pdf. Accessed December 3, 2003. Google Scholar
20. International Agency for Research on Cancer. Overall evaluations of carcinogenicity to humans. Available at: http://monographs.iarc.fr/monoeval/crthgr01.html. Accessed May 29, 2003. Google Scholar
21. Ross S. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 13–15, 2000. Available at: http://ntp-server.niehs.nih.gov/htdocs/Liason/121300.pdf. Accessed December 3, 2003. Google Scholar
22. Smith D. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 13–15, 2000. Available at: http://ntp-server.niehs.nih.gov/htdocs/Liason/121300.pdf. Accessed December 3, 2003. Google Scholar
23. Deveney J. Public comment, as reported in the Report of Carcinogens Subcommittee Meeting, National Toxicology Program Board of Scientific Counselors, December 13–15, 2000. Available at: http://ntp-server.niehs.nih.gov/htdocs/Liason/121300.pdf. Accessed December 3, 2003. Google Scholar
24. Ozonoff D. On being careful what we wish for: some difficulties with operationalizing the Precautionary Principle. Int J Occup Med Environ Health. 2004; 17(1):35–41. MedlineGoogle Scholar
25. Ozonoff D. Anthrax: the precautionary principle goes postal. Public Health Rep. 2002;117(6):513–520. Crossref, MedlineGoogle Scholar
26. Daubert v Merrell Dow Pharmaceuticals, Inc. 113 S.Ct. 2786 (1993). Google Scholar
27. Daubert: the most influential Supreme Court ruling you’ve never heard of. Available at: http://www.defendingscience.org. Accessed August 1, 2004. Google Scholar
28. §515 of the Consolidated Appropriations Act of 2001, P.L. 106–554. Google Scholar
29. Smoking and Health: A Report of the Advisory Committee to the Surgeon General of the Public Health Service. Washington, DC: US Department of Health, Education, and Welfare; 1964. DHEW publication 1103. Google Scholar
30. Smoking and Health: A Report of the Surgeon General. Washington, DC: US Department of Health, Education, and Welfare; 1979. DHEW publication 79-50066. Google Scholar
31. Women and Smoking 2001: A Report of the Surgeon General. Washington, DC: US Department of Health and Human Services; 2001. Google Scholar
32. Kessler D. A Question of Intent: A Great American Battle with a Deadly Industry. New York, NY: Public Affairs; 2001. Google Scholar
33. Glantz S, Slade J, Bero L, Hanauer P, Barnes D. The Cigarette Papers. Berkeley: University of California Press; 1996. Google Scholar
34. Kluger R. Ashes to Ashes: America’s Hundred-Year Cigarette War, the Public Health, and the Unabashed Triumph of Philip Morris. New York, NY: Random House; 1996. Google Scholar
35. Confidential Public Relations Report to the TIRC, November 3, 1955. Lorillard Document 82106862–82106866. Available at: http://tobaccodocuments.org/lor. Accessed December 4, 2004. Google Scholar
36. Hill and Knowlton. Public Relations Proposal for the Tobacco Industry (draft for discussion). July 9, 1959. RJ Reynolds Document 502051941-502051954. Available at: http://tobaccodocuments.org/rjr. Accessed December 4, 2004. Google Scholar
37. Thompson C. Memorandum to William Kloepfer, Jr, and the Tobacco Institute, Inc, from Hill and Knowlton, Inc; October 18, 1968. Tobacco Institute Document TIMN0071488-1491. Available at: http://tobaccodocuments.org/ti. Accessed December 15, 2004. Google Scholar
38. Smoking and Health Proposal. Brown & Williamson Document No. 332506. Available at: http://tobaccodocuments.org/bw. Accessed December 11, 2004. Google Scholar
39. Proctor RN. Should medical historians be working for the tobacco industry? Lancet. 2004;363:1174–1175. Crossref, MedlineGoogle Scholar
40. Markowitz G, Rosner D. Industry challenges to the principle of prevention in public health: the precautionary principle in historical perspective. Public Health Rep. 2002;117(6):501–512. Crossref, MedlineGoogle Scholar
41. Proctor RN. Cancer Wars: How Politics Shapes What We Know and Don’t Know About Cancer. New York, NY: Basic Books, 1995. Google Scholar
42. Markowitz G, Rosner D. Deceit and Denial: The Deadly Politics of Industrial Pollution. Berkeley: University of California Press; 2002. Google Scholar
43. Markowitz G, Rosner D. “Cater to the children”: the role of the lead industry in a public health tragedy, 1900–1955. Am J Public Health. 2000;90(1):36–46. LinkGoogle Scholar
44. Warren C. Brush with Death: A Social History of Lead Poisoning. Baltimore, Md: Johns Hopkins University Press; 2000. Google Scholar
45. Wilson BS. Legislative history of the Pesticides Residues Amendment of 1954 and the Delaney Clause of the Food Additives Amendment of 1958. In: Regulating Pesticides in Food: The Delaney Paradox. Washington, DC: National Academies Press; 1987. Google Scholar
46. Jackson M. Subject: Manufacturing Chemists’ Association; Hill and Knowlton, Inc, memo to John W. Hill. August 1, 1951. Google Scholar
47. Minutes of the Meeting of Directors of the Manufacturing Chemists’ Association, Inc 1951–1952. Chemical Industry Archives of the Environmental Working Group. Available at: http://www.chemicalindustryarchives.org, Accessed December 4, 2003. Google Scholar
48. Miller K. The Voice of Business: Hill and Knowlton and Postwar Public Relations. Chapel Hill: University of North Carolina Press; 1999. Google Scholar
49. Castleman B. Asbestos: Medical and Legal Aspects. New York, NY: Harcourt Brace Jovanovich; 1984. Google Scholar
50. Brodeur P. Outrageous Misconduct: The Asbestos Industry on Trial. New York, NY: Pantheon Books; 1985. Google Scholar
51. Ozonoff D. Failed warnings: asbestos-related disease and industrial medicine. In: Bayer R, ed., Case Studies in the Politics of Professional Responsibility. New York, NY: Oxford University Press, 1988. Google Scholar
52. Egilman D, Fehnel C, Bohme SR. Exposing the “myth” of ABC, “anything but chrysotile”: a critique of the Canadian asbestos mining industry and McGill University chrysotile studies. Am J Ind Med. 2003;44:540–57. Crossref, MedlineGoogle Scholar
53. Marder H. (senior vice president, Hill and Knowlton) Letter to James E. Gulick, vice president, Brush Wellman, Inc; Attachment entitled: Division of Scientific, Technical and Environmental Affairs; February 21, 1989. Google Scholar
54. Cranor CF. Regulating Toxic Substances: A Philosophy of Science and the Law. New York, NY: Oxford University Press, 1993. Google Scholar
55. Michaels D. Waiting for the body count: corporate decision making and bladder cancer in the U.S. dye industry. Med Anthro Q. 1988;2:215–232. CrossrefGoogle Scholar
56. Michaels D. When science isn’t enough: Wilhelm Hueper, Robert A. M. Case and the limits of scientific evidence in preventing occupational bladder cancer. Int J Occup Environ Health. 1995;1:278–288. Crossref, MedlineGoogle Scholar
57. 39 Federal Register 3756. (1974). Part 1910, Occupational Safety and Health Standards, Carcinogens. Vol 39, No. 20, January 29, 1974, page 3756. Google Scholar
58. Comments on the production and use of benzidine, SOCMA Benzidine Task Force, March 9, 1973. Google Scholar
59. Morgan DL, of Cleary, Gottlieb, Steen & Hamilton, on behalf of domestic manufacturers or users of 3,3′-Dichlorobenzidine. Comments to OSHA’s Standards Advisory Committee on Carcinogens [Letter]; June 26, 1973. Google Scholar
60. DCB Subcommittee of the Dry Color Manufacturers’ Association. Comments to OSHA on the Use of Dichlorobenzidine, March 9, 1973 [Docket H003, Exhibit 2–8]. Google Scholar
61. Hueper WC. Occupational and Environmental Cancers of the Urinary System. New Haven, Conn: Yale University Press; 1969. Google Scholar
62. Parnes W, Shuler PJ, Donaldson HM. Field Survey of Allied Chemical Corporation. Cincinnati, Ohio: National Institute for Occupational Safety and Health, Division of Field Studies and Clinical Investigations; October 16, 1972. Google Scholar
63. Murray WM. Affidavit concerning 3,3′ Dichlorobenzidine. May 15, 1973. Submitted to OSHA’s Standards Advisory Committee on Carcinogens by Donald L. Morgan of Cleary, Gottlieb, Steen & Hamilton; June 26, 1973. Google Scholar
64. Ouellet-Hellstrom R, Rench JD. Bladder cancer incidence in arylamine workers. J Occup Environ Med. 1996;38:1239–1247.
65. National Association of Manufacturers. Comments on OSHA’s emergency temporary standard for certain carcinogens [Docket H-003]; September 12, 1973.
66. Korbitz BC, on behalf of the Polyurethane Manufacturers Association. Comments on OSHA’s proposed standard on 4,4′ methylene bis (2-chloroaniline), Docket H-050A; March 10, 1975.
67. Ward E, Halperin W, Thun M, et al. Bladder tumors in two young males occupationally exposed to MBOCA. Am J Ind Med. 1988;14:267–272.
68. Angiosarcoma of the liver among polyvinyl chloride workers. Morbidity and Mortality Weekly Report, February 9, 1974. Atlanta, GA: US Department of Health, Education and Welfare; 1974.
69. Forman D, Bennett B, Stafford J, Doll R. Exposure to vinyl chloride and angiosarcomas of the liver: a report of the register of cases. Br J Ind Med. 1985;42:750–753.
70. Spirtas R, Kaminski R. Angiosarcoma of the liver in vinyl chloride/polyvinyl chloride workers: 1977 update of the NIOSH register. J Occup Med. 1978;20:427–429.
71. Delorme F, Theriault G. Ten cases of angiosarcoma of the liver in Shawinigan, Quebec. J Occup Med. 1978;20:338–340.
72. Brady J, Liberatore F, Harper P, et al. Angiosarcoma of the liver: an epidemiologic survey. J Natl Cancer Inst. 1977;59:1383–1385.
73. Wagoner JK, Infante PF. Vinyl chloride: a case for the use of laboratory bioassay in the regulatory control procedure. In: Hiatt HH, Watson JD, Winsten JA, eds. Origins of Human Cancer: Book C. Human Risk Assessment. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory; 1977.
74. 39 Federal Register 16896 (1974). Vinyl chloride proposed standard. Vol. 39, No. 92, May 10, 1974, page 16896.
75. Moore J. Letter from Hill and Knowlton, Inc, to John Spano, Monsanto Company, with accompanying Recommendations for Public Affairs Program for SPI’s Vinyl Chloride Committee. Phase I: Preparation for OSHA Hearing. Document No. 19740612_001_00004426. June 12, 1974. Available at: http://www.chemicalindustryarchives.org. Accessed December 1, 2003.
76. Ong EK, Glantz SA. Constructing “sound science” and “good epidemiology”: tobacco, lawyers, and public relations firms. Am J Public Health. 2001;91:1749–1758.
77. Huber PW. Galileo’s Revenge. New York, NY: Basic Books; 1991.
78. JunkScience.com: all the junk that’s fit to debunk. Available at: http://www.junkscience.com/define.htm. Accessed August 13, 2004.
79. Junk science at-large: junk scientists. Available at: http://www.junkscience.com/roster.html. Accessed August 13, 2004.
80. Luntz memo on the environment. Available at Environmental Working Group at: http://www.ewg.org:16080/briefings/luntzmemo. Accessed August 13, 2004.
81. National Academy of Sciences. Climate Change Science: An Analysis of Some Key Questions. Washington, DC: National Academies Press; 2001. Available at: http://books.nap.edu/books/0309075742/html. Accessed August 4, 2004.
82. National Academy of Sciences. Planning Climate and Global Change Research: A Review of the Draft U.S. Climate Change Science Program Strategic Plan. Washington, DC: National Academies Press; 2003.
83. Kennedy D. The policy drought on climate change. Science. 2003;299:309.
84. Edmond G, Mercer D. Trashing “junk science.” Stan Tech L Rev. 1998;3.
85. Herrick CN, Jamieson D. Junk science and environmental policy: obscuring public debate with misleading discourse. Philos Public Policy Q. 2001;21:11–16.
86. Kassirer JP, Cecil JS. Inconsistency in evidentiary standards for medical testimony. JAMA. 2002;288:1382–1386.
87. FDA moves to end use of bromocriptine for postpartum breast engorgement. FDA news release, August 17, 1994. Available at: http://www.fda.gov/bbs/topics/ANSWERS/ANS00594.html. Accessed August 11, 2004.
88. Berger MA. What has a decade of Daubert wrought? Am J Public Health. 2005;95:S59–S65.
89. Cecil JS. Ten years of judicial gatekeeping under Daubert. Am J Public Health. 2005;95:S74–S80.
90. Haack S. Trial and error: the Supreme Court’s philosophy of science. Am J Public Health. 2005;95:S66–S73.
91. Cranor CF, Eastmond DA. Scientific ignorance and reliable patterns of evidence in toxic tort causation: is there a need for liability reform? Law Contemp Probs. 2001;64:5–48.
92. Edmond G, Mercer D. Daubert and the exclusionary ethos: the convergence of corporate and judicial attitudes towards the admissibility of expert evidence in tort litigation. Law & Policy. 2004;26:231–257.
93. Goldsmith WJ. Occupational Safety and Health Act’s rulemaking procedures. Testimony presented before US House of Representatives, Subcommittee on Workforce Protections, House Committee on Education and the Workforce; June 14, 2001. Available at: http://edworkforce.house.gov/hearings/107th/wp/osha61401/goldsmith.htm. Accessed March 14, 2005.
94. Raul AC, Dwyer JZ. “Regulatory Daubert”: a proposal to enhance judicial review of agency science by incorporating Daubert principles into administrative law. Law Contemp Probs. 2003;66:7–44.
95. McGarity TO. On the prospect of “Daubertizing” judicial review of risk assessment. Law Contemp Probs. 2003;66:155–225.
96. Wagner WE. Importing Daubert to administrative agencies through the information quality act. J Law Policy. 2004;12:589–617.
97. US Chamber of Commerce. Scientific Information in Federal Rulemaking. Policy statement, January 2002. Available at: http://www.uschamber.com/government/issues/regulatory/scientific_rulemaking.htm. Accessed August 13, 2004.
98. §514 and §516 of the Consolidated Appropriations Act of 2001, P.L. 106–554.
99. Twohey M. Jim Tozzi: on jazz and OMB. The Federal Paper. November 18, 2002:1.
100. Tozzi J. Memorandum to Matthew Winokur, director of worldwide regulatory affairs, Philip Morris. Subject: Meeting with EPA officials on data access/data quality. January 11, 1999. Philip Morris Document 2065230935. Available at: http://tobaccodocuments.org/pm. Accessed December 4, 2003.
101. Tozzi J. Letter to Robert Elves, Philip Morris. February 29, 2000. Philip Morris Document 2072826830. Available at: http://tobaccodocuments.org/pm. Accessed December 4, 2003.
102. New Law Means More Federal Rules Can Be Challenged. The Center for Regulatory Effectiveness, CRE in the News. Available at: http://www.thecre.com/news.html. Accessed December 8, 2003.
103. Morgan, Lewis & Bockius, LLP. Letter to US Environmental Protection Agency; August 19, 2003. Available at: http://www.epa.gov/oei/qualityguidelines/afreqcorrectionsub/12467.pdf. Accessed December 8, 2003.
104. EPA notice about revising Guidance for Preventing Asbestos Disease Among Auto Mechanics (“Gold Book”). Available at: http://www.epa.gov/asbestos/auto.html. Accessed August 13, 2004.
105. Snow J. On the mode of communication of cholera [published in 1855 in London]. In: Frost WH, ed. Snow on Cholera. New York, NY: The Commonwealth Fund; 1936.
106. Davidoff F, DeAngelis CD, Drazen JM, et al. Sponsorship, authorship, and accountability. JAMA. 2001;286:1232.
107. Michaels D, Wagner W. Disclosures in regulatory science. Science. 2003;302:2073.
108. Hill AB. The environment and disease: association or causation? Proc R Soc Med. 1965;58:295–300.

David Michaels, PhD, MPH, and Celeste Monforton, MPH, are with the Department of Environmental and Occupational Health, George Washington University School of Public Health and Health Services, Washington, DC. “Manufacturing Uncertainty: Contested Science and the Protection of the Public’s Health and Environment,” American Journal of Public Health 95, no. S1 (July 1, 2005): pp. S39–S48.

https://doi.org/10.2105/AJPH.2004.043059

PMID: 16030337