Historically, quality assurance studies have received scant ethical attention. The advent of information systems capable of supporting research-grade continuous quality improvement projects demands that we clearly define how these projects differ from research and when they require external review. A critical distinction is the ethical obligation to perform quality assurance projects, with their emphasis on identifiable, immediate action for a served population. The obligation to perform continuous quality improvement is a deliverable of the social contract entered into implicitly by patients and health care providers and systems.

In this article, the authors review the ethical framework that requires these studies, evaluate the differences between quality assurance studies and classic research, and propose criteria for requiring external review.

Federal regulations intended to protect human research subjects require institutional review boards (IRBs) to review and approve the design and process of research to enhance subjects' understanding, protect autonomy, and minimize risk.1 In contrast, hospital quality assurance studies have been conducted in private settings, often explicitly shielded by state law, to encourage honest exploration of mistakes by physician-reviewers.2 Since their goal of improving patient care seems morally unambiguous, quality assurance studies have received scant ethical attention, and there has been no call for supervision external to the participants. Historically, quality assurance has consisted primarily of retrospective reviews of physician practice triggered by alarming outcomes.3 However, the public's perception of the frequency of errors in medical practice is evolving.4 Oversight agencies and the public are demanding a transition to an active process of continuous quality improvement.5

Ideally, continuous quality improvement should involve reviewing all records and being constantly vigilant. It should be a nearly concurrent retrospective surveillance of patients seen in the immediate past, while there remains the opportunity to intervene both in the patient's care and in the practitioner's perception of the events. Its data analyses should be sufficiently comprehensive to detect the systems that failed to prevent human error.6–8 No human review of paper records has these 2 characteristics. Only automated examinations of the footprints of the clinical encounter stored in electronic medical information systems can hope to accomplish these objectives.

In this article, we explore the differences between continuous quality improvement and research and suggest that some of the interventions proposed in the name of quality improvement raise competing ethical considerations that, like research, need external review for their adjudication.

Donabedian defined 3 components of health care quality: structure, process, and outcome.10 New information technologies can radically improve the surveillance of process and outcome and preclude the need for sentinel events to trigger quality assurance review.3 Computer surveillance of the electronic medical record will soon be able to identify 3 patterns of deficient care, only the first of which was identifiable in the past.

The first pattern—a dramatic deviation from the expected outcome, as in the sentinel event of unexpected death—was the subject of historical quality assurance. The second—a reasonable outcome despite a less-than-optimal process—is a warning that the system is not in control and may deteriorate further, resulting in an adverse event or a pattern of preventable morbidity. As an example, consider increasing delays in emergency department triage time. At first, the increased delays are merely disturbing; however, should a critically ill patient present, the previously disturbing delay can prove catastrophic. The delay is a latent error—an accident waiting to happen.

The third category—a reasonable outcome with an acceptable process that might be further improved beyond the prevailing standard—will soon be identifiable with computerized clinical tracking. As an example, there has been a common practice of keeping patients with uncomplicated community-acquired pneumonia in the hospital on intravenous antibiotics for 1 day after their fever has remitted. Given that few of these patients require intervention on their last day, a careful review would show that an earlier discharge would arguably be appropriate, thus lowering costs and reducing the risks of iatrogenic complications.

Because of the differing severities of consequence, each of the above scenarios carries a different urgency. All are quality improvement activities and should be monitored as technology permits.

Quality assurance efforts in the first half of the 20th century were restricted to a reactive approach—evaluating bad outcomes. With limited staff time for chart review, only the more egregiously bad outcomes could routinely be investigated, resulting in a case-by-case rather than a systemwide perspective. Only academics, with their legion of fellows, could review the experience of patient cohorts drawn from the hospital's paper records.

In the same period, researchers developed a new and powerful paradigm—the double-blind, prospective, randomized controlled trial.11,12 This archetype of clinical research shares no common characteristics with the extant “bad event–focused quality assurance” process (Table 1). Research seeks generalizable knowledge because appropriate therapy is not known. Quality assurance classically assumes that the appropriate therapy is known and that departures from the known standard should be identified and corrected.

Those who participated in quality assurance efforts, with their focus on patient safety, did not confuse their activity with clinical research. They tested no hypotheses, generalized to no larger group, assigned no therapy, and used no highly sophisticated statistical and graphing tools.7 The last distinction, more symbolic than relevant, created an important psychologic distance between the 2 activities. While it is true that chart-review case-series research superficially mimicked quality assurance chart-review activity, academic researchers did not consider members of the quality assurance community to be their peers.

With the advent of cohort creation as a quality assurance tool, the significantly enhanced sophistication of the statistical methods used, and the breadth and depth of the electronic database available, the superficial distinctions have vanished. We are now forced to confront the substantive differences and to establish rational expectations of oversight appropriate to function rather than to label.

The profession of medicine and the institutions in which it is practiced have the clear moral obligation to monitor the quality of care provided. This duty applies to provider–patient interactions in doctors' offices and to diagnostic and therapeutic interventions in hospitals, nursing homes, and ambulatory settings.

The ethical obligations of health care institutions derive from multiple sources. First, the health care institution is a natural extension of physician practice and, as such, is bound by professional oaths and ethics to promote the patient's best interests.13,14 The compact of trust between patient and physician encompasses the expectation that the care provided will be characterized by skill, judgment, attention, and concern. The health care organization demonstrates this concern through a rigorous, continuous quality improvement process.

Second, the institution is a distinct moral agent with responsibilities separate from and in addition to those of the individuals who compose it.15–18 The notion of moral agency has particular significance for the health care institution, because its coordination of the efforts of many people is essential to create the fundamental good—a health benefit. As an example, mammography screening for breast cancer is a deliverable requiring a scheduling secretary, a registration clerk, an x-ray technician, a radiologist to interpret, a secretary to type, a clerk to put the results in the chart, and a clinician with interpretive and interpersonal skills to communicate results to the patient. Without the coordinated activity of all, effective screening would not be accomplished. The organization takes the moral responsibility19 for maintaining quality at each of the distinct steps that, in the aggregate, lead to the health good—breast cancer screening.

A third argument derives from the construct of the social contract. Society has traditionally granted medicine hegemony in the guardianship of patient health, in exchange for which physicians are obligated to practice according to accepted standards and promote patient well-being through quality assurance review.20 A similar social contract with health care organizations must be made explicit. Without this organizational acceptance of ethical responsibility, less ethically committed health care organizations will exploit the natural variation in the medical practice of individual clinicians, selecting the cheap and potentially substandard while off-loading responsibility to the individual clinician when accountability for bad outcomes is demanded. This ethically dubious risk transfer scheme shifts the responsibility for detection and remediation to the party least able to collect and analyze the data—the individual physician.

With rare exceptions,21,22 individual physicians are incapable of monitoring events in the aggregate among their patient population to detect unacceptable practice patterns and outcomes. The information technology infrastructure of hospitals or managed care consortiums has this unique capability, from which flows a distinct ethical responsibility to monitor.

Despite traditional reluctance, both the medical and lay communities now acknowledge risk and error as inherent in medical practice.4 The quality assurance process has been renamed “continuous quality improvement,” reflecting the recognition that the process is one of continued improvement in the context of error rather than an idealized, unachievable notion of perfection. Continuous quality improvement is expected by patients and required by certifying agencies such as the Joint Commission on Accreditation of Healthcare Organizations23 and the National Committee for Quality Assurance.24 In an implicit modern reflection of the social contract, the patient consents to and pays for treatment and the medical care community obligates itself to prevent errors, identify them when they occur, learn from them, and preclude their repetition. Unlike research, an optional external activity imposed on the physician–patient relationship, continuous quality improvement is ethically intrinsic to providing care. The notion of a formal consent process has thus been considered irrelevant, although the issue has never been debated vigorously.

Experimental interventions intrude upon the usual assumptions of the physician–patient relationship. In daily practice, physicians are ethically required to choose the most efficacious, least harmful therapies on the basis of knowledge, judgment, and intuition. In the experimental research paradigm, with the appropriate consent of the patient and within well-defined constraints, both the patient and the physician subordinate this process to the rigid methodologies of the study. The potentially conflicting ethical values inherent in this radically changed relationship make the jurisdiction of an independent review panel mandatory. This requirement to review extends beyond clinical trials, as even chart-review research, with its far less intrusive process, requires a balancing of ethical values of privacy against potential societal benefit.25,26

Clinical research falls into the category of an ethically permissible rather than a morally and legally mandatory activity. Whereas society supports research to advance knowledge, no particular individual or institution is obligated to perform research, and subject participation is optional.

In contrast to medical practice, where discussion of risks and benefits may be subsumed in physician advice,27 the risks inherent in clinical research are clearly identified in the review process of the IRB; the IRB must evaluate scientific merit (since a protocol that will not produce useful data can support no risk), determine that risks of the protocol are reasonable given the anticipated benefits, and review the process of informed consent.28 The IRB is charged with sensitizing hypothesis-focused investigators to the issues of protecting human subjects and counterbalancing the potential for exploiting the doctor–patient relationship.

Clinical research derives much of its legitimacy from its generalizability. Many researchers see ethical responsibility as ending at publication. This lack of connectedness to an immediately benefited patient population is the very reason that IRBs evaluate scientific merit, since others must incorporate the findings into their practice for there to be an ultimate public benefit.

The continuous quality improvement process is embedded in an organizational matrix committed to using the results of the study to inform immediately the process of care. It is this feedback loop, with its expectation of responsiveness, that motivates and legitimates continuous quality improvement reviews. Statistical rigor and sound methodology further enhance the ethical legitimacy of continuous quality improvement.

Quality improvement projects that review practice for conformance with accepted norms require no external oversight. Their activities are mandatory and inherently legitimate. Their major ethical challenge has been maintaining patient privacy: identifiable data should be retained only for the shortest period necessary for evaluation, with the subsequent elimination of identifiers. Future ethical challenges will include preventing the continuous quality improvement process from succumbing to the single-minded focus of cost containment, with the potential of jeopardizing patient well-being.

The random assignment of standard or alternative treatments or placebos—the hallmark of the gold standard, the double-blind, randomized controlled trial—is not a component of classic quality improvement projects. With the assignment of therapy comes a whole range of responsibilities and considerations that, for clinical research, have been in the scientific and ethical purview of a review body external to the researchers—the IRB.29–32 The IRB must be convinced that the better practice is not known at the time the study is initiated. Clinical equipoise33 must be established at the outset of the study and must be continuously present as the study progresses. Monitoring the accumulated data for statistical significance and for the shattering of equipoise is the ongoing responsibility of the data and safety monitoring boards that are increasingly required by good research practice.

Quality improvement projects can be of 2 distinct types: retrospective review or prospective interventional. For the retrospective review of records, the critical determinant of nonresearch status is the commitment, in advance of data collection, to a corrective action plan given any one of a number of possible outcomes. The sponsor of this review must have both clinical supervisory responsibility and the authority to impose change. Even the creation of pseudocohorts by the random review of charts with specific characteristics, the use of advanced statistical models, or the extraction of generalizable knowledge for publication does not change the essential character of the work. However, the same record review performed without this commitment is research and subject to external review by an IRB.

Prospective interventional studies, even when sponsored by the clinical authority responsible for quality improvement, require external scrutiny. We present 3 examples.

Example 1

A health maintenance organization (HMO) would like to test the efficacy and cost of 2 established therapies for hypertension. Under present practice standards, the HMO can differentially reimburse patients or reward physicians for compliance with the cost-saving standard. Suppose the HMO wished to rigorously evaluate its approach to care by randomizing patients to 1 of 2 strategies. It is clear that such a study requires review by an IRB and informed consent. Even though the primary endpoint, hypertension control, can be achieved through either therapy, the side-effect profiles are different. Patients have a right to expect that their physicians will optimize their treatment both for the primary endpoint and with consideration for their preferences for side effects. At the very least, patients expect an honest relationship with the physician and expect to be told if some motivation other than their best interest is driving the decision. The fact that the HMO can virtually order the use of a particular regimen by restricting its formulary is immaterial. If it differentially assigns therapy to explore outcome, it engages in research.

Example 2

An HMO wishes to test whether the provision of 5 follow-up in-home nursing evaluations of patients discharged from the hospital with a diagnosis of congestive heart failure will ultimately reduce rehospitalizations or emergency room visits enough to offset the cost of the program. No one will be denied the standard package of services, and this service is believed to provide benefit without risk. We argue that external review is required. We are chastened by the history of research that has demonstrated our inability to distinguish the purely helpful from the possibly dangerous.34,35 We can envision a scenario where the increased nursing services might bring iatrogenic risk—wrong advice from the visiting nurse or miscommunication between the physician and the visiting nurse. More important, the patient and family may have privacy concerns and may object to having strangers imposed on them. In addition to these theoretic concerns, we believe that requiring oversight of this "experiment," even though it is an extremely benign example, will create a preemptive protective zone of prohibition that provides a useful precedent. Informed consent should be required either at the outset or after randomization for all those who will be offered the intervention.

Example 3

A continuous quality improvement department is trying to decide whether investing the resources for the ongoing review of care of AIDS patients is worth the effort. It has the technical computer capability to review automatically, on a monthly basis, the CD4 counts, viral loads, and medications of all of its patients. The costly part of the intervention would be convening case reviews and patient interviews to attempt to improve the outcome of care. To answer the question rigorously, the study design must create 2 de facto standards of care in the clinic. Were the clinic to prospectively assign intervention differentially, there would be no question that this would be research and IRB review would be required. In a clever manipulation, however, the continuous quality improvement department randomizes one half of the patients to monthly computer quality improvement review. The restriction of quality improvement review to a randomly selected subset is clearly within the quality improvement tradition. Those randomized to surveillance who are found to be failing their therapies are reported to the medical director.

Once presented with evidence that individuals are not achieving desired endpoints, the director of the AIDS clinic decides to convene 3 senior physicians to discuss those patients. The team reviews the charts for evidence of medical errors, examining the appointment log and prescription pickup record for evidence of noncompliance. The director then develops a program to ensure improved compliance or improved attention to standards of care by physicians for the identified patients.

The intervention is generated by the clinical director independently of the randomization. The collection of data morally compels the director to provide “the best standard of care” but does not formally assign the rest of the clinic to an inferior standard. The standard care of the other patients in the clinic is not functionally interfered with by the medical director, who is ignorant of any flaws in their care. After a year of this creative manipulation, the computer system is asked to review the clinical outcomes of both groups.

At no point in this process has there been a formal violation of continuous quality improvement procedure or practice, nor has there been specific assignment of intervention; however, this is clearly a technical manipulation to avoid the designation of research and to sidestep external review. While the study itself might be legitimate, the manipulation makes it a potentially dangerous model. In addition, the lack of a formal review process and of an honest design precludes the use of interim statistical analyses, through which the department might find benefit early, end the “trial,” and provide the intervention to the “control group.”

In this last example, the line between research and continuous quality improvement is the most permeable. Prudence and a respect for the history of research and the excesses of medical researchers demand protection for patients and accountability for clinician-researchers, whether within or outside of the continuous quality improvement process. The open discussion of the protocol, even if the dangers to any patient randomized to care are exceedingly remote, promotes the values of transparency and patient-informed choice, which hidden assignment would preclude.

The line between quality improvement and research is rapidly being effaced. With new medical information systems that can identify patients with particular medical problems, characterize the intervention, and evaluate the care delivered against an agreed-upon standard, there is a clear ethical imperative to advance the quality improvement process to lessen mistakes and prevent substandard care. The relevant ethical issue is whether there is a need for some oversight mechanism, external to the quality improvement process, to protect patients. We say that there is.

In the history of research, the abuse of human subjects led to the creation of clear guidelines for the review of research protocols.36–40 IRBs examine the possible risks and benefits for the subjects and approve the informed consent process that is designed to educate and empower prospective subjects. This review is necessitated by the experience that research protocols can either provide benefit or actually harm subjects and by the ethical premise that voluntary informed consent must precede the assumption of risk.

Prospective quality improvement evaluations that allocate treatment with or without randomization to different cohorts—generally to identify the most cost-effective care but sometimes to identify best practice—should, like research, be subject to review and should trigger considerations of informed consent. This rule should apply whether or not generalizable information is created for public presentation or dissemination.

We suggest that each institution create a collaborative process with the IRB to establish a standing committee on quality improvement. It is hoped that, with experience, the IRB and the continuous quality improvement departments would come to joint agreement on which designs are quality improvement that may proceed without IRB review and which are sufficiently hybridized to require review. We make this suggestion knowing full well that IRBs are presently under enhanced scrutiny regarding their standards of practice, but we hope that any reexamination of IRB responsibility and authority will include this enhanced agenda. The augmented workload should hasten enhanced financial and administrative support for these deliberative panels.

In the days before managed care, the conflicts addressed by the IRB were those that might pit the interests of the researcher against those of the patient-subject. At present, the greatest threat to patient well-being may be those administrative decisions that intrude on physician discretion as a way of cutting the costs of care. Much of the quality improvement process is retrospective or concurrent surveillance designed to improve health care delivery against a defined agreed-upon standard. Some activities, however, intervene prospectively and should be reviewed by an IRB to ensure that they do not compromise patient autonomy or safety.

TABLE 1—Comparison of Criterion Standard Research and Bad Outcome–Focused Quality Assurance (QA)

Research (randomized controlled trial): The study assigns treatment to the patient.
QA: The legitimate patient–physician relationship has determined a therapy for the patient's direct and immediate benefit. The study, performed "after the fact of care," has no impact on therapy assignment.

Research: The assignment of therapy is made randomly, permitted by clinical uncertainty (equipoise) as to whether the investigational drug, an alternative, or a placebo is advantageous for the individual patient.
QA: Therapeutic assignment is independent of the study and made for the purpose of producing the best outcome, given the available medical knowledge.

Research: A group is assigned "control status," with the therapeutic intervention intentionally withheld to allow assessment of efficacy.
QA: No control group is intentionally generated at the time of care delivery for the purpose of the study. A pseudocontrol group might be assembled after the fact, taking advantage of natural variation in clinical practice.

Research: Therapy is delivered in a blinded fashion; neither the investigator nor the patient knows who received the active agent or the placebo.
QA: The physician and patient are both aware of the drugs administered, their side effects, and their putative utility.

Research: Assessment of outcome is blinded to therapy for the purpose of establishing the efficacy of the intervention.
QA: The process that led to the known "bad outcome" is assessed for the purpose of discovering errors in practice compared with the "community standard of practice." Should errors in practice be found, corrective measures for the involved physicians and other health care workers will follow.

Research: Society is the beneficiary; the new knowledge developed is generalizable to other patient populations that will benefit.
QA: Society is the beneficiary, since "dangerous doctors" will correct their errors or be denied access to patients. The index patient cannot be helped or harmed, because the study assesses what led to the bad outcome, which cannot now be reversed. New generalizable medical knowledge is not an expected outcome of the review.


1. Code of Federal Regulations. Public Welfare (Protection of Human Subjects), 45 CFR §46 (1991).
2. NY Pub Health Law §2805-M (2000).
3. What Every Hospital Should Know About Sentinel Events. Oakbrook Terrace, Ill: Joint Commission on Accreditation of Healthcare Organizations; 2000.
4. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.
5. President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry. Quality First: Better Health Care for All Americans: Final Report to the President of the United States. Washington, DC: US Government Printing Office; 1998.
6. Kazandjian VA, Lied TR. Healthcare Performance Measurement: Systems Design and Evaluation. Milwaukee, Wis: Quality Press; 1999.
7. Kelley LD. How to Use Control Charts for Healthcare. Milwaukee, Wis: Quality Press; 1999.
8. Altman L. Big doses of chemotherapy drug killed patient, hurt 2nd. New York Times. April 24, 1995:A18.
9. Code of Federal Regulations. Public Welfare (Protection of Human Subjects), 45 CFR §46.102(d) (1991).
10. Donabedian A. The Definition of Quality and Approaches to Its Assessment. Ann Arbor, Mich: Health Administration Press; 1980.
11. Feinstein AR. Current problems and future challenges in randomized clinical trials. Circulation. 1984;705:767–774.
12. Byar DP, Simon RM, Friedewald WT, et al. Randomized clinical trials: perspectives on some recent ideas. N Engl J Med. 1976;295:74–80.
13. Ahronheim JC, Moreno J, Zuckerman C. Ethics in Clinical Practice. New York, NY: Little Brown & Co; 1994.
14. Pellegrino ED, Thomasma DC. For the Patient's Good: The Restoration of Beneficence in Health Care. New York, NY: Oxford University Press; 1998.
15. French P. The corporation as a moral person. Am Philos Q. 1979;3:207–215.
16. Spencer EM. Organizational Ethics in Health Care. New York, NY: Oxford University Press; 2000.
17. Blustein JB, Post LF, Dubler NN. A Handbook on Organizational Ethics: Theory, Cases and Tools for Implementation. New York, NY: United Hospital Fund. In press.
18. Spencer EM, Whitley EM, Healey GF. A corporate approach to health care ethics. HEC Forum. 1995;75:296–301.
19. Goodpaster K, Matthews JB Jr. Can a corporation have a conscience? In: Donaldson T, Werhane P, eds. Ethical Issues in Business. 3rd ed. Englewood Cliffs, NJ: Prentice Hall; 1982:139–149.
20. Arras JD. The fragile web of responsibility: AIDS and the duty to treat. Hastings Cent Rep. 1988;18:10–20.
21. Codman EA. A Study in Hospital Efficiency: As Demonstrated by the Case Report of the First Five Years of a Private Hospital. Boston, Mass: Th Todd Co; 1918.
22. Pickles WN. Epidemiology in Country Practice. Bristol, England: John Wright & Sons Ltd; 1939.
23. 1999 Hospital Accreditation Standards. Oakbrook Terrace, Ill: Dept of Publications, Joint Commission on Accreditation of Healthcare Organizations; 1999.
24. Iglehart JK. The National Committee for Quality Assurance. N Engl J Med. 1996;335:995–999.
25. Hyman SE. The need for database research and for privacy collide. Am J Psychiatry. 2000;157:1723–1724.
26. Simon GE, Unutzer J, Young BE, Pincus HA. Large medical databases, population-based research, and patient confidentiality. Am J Psychiatry. 2000;157:1731–1737.
27. Katz J. Informed consent: must it remain a fairy tale? J Contemp Health Law Policy. 2000;10:67.
28. Code of Federal Regulations. Public Welfare (Protection of Human Subjects), 45 CFR §46.108 (1991).
29. Brett A, Grodin M. Ethical aspects of human experimentation in health services research. JAMA. 1991;265:1854–1857.
30. Doyal L, Tobias JS, Warnock M, Power L, Goodare H. Informed consent in medical research. BMJ. 1998;316:1000–1005.
31. Smith R. Informed consent: the intricacies. BMJ. 1997;314:1059–1060.
32. Lynn J, Johnson J, Levine RJ. The ethical conduct of health services research: a case study of 55 institutions' applications to the SUPPORT project. Clin Res. 1994;421:3–10.
33. Freedman B. Equipoise and the ethics of clinical research. N Engl J Med. 1987;317:141–145.
34. Rogers WJ, Epstein AE, Arciniegas JG, et al. Preliminary report: effect of encainide and flecainide on mortality in a randomized trial of arrhythmia suppression after myocardial infarction. The Cardiac Arrhythmia Suppression Trial (CAST) Investigators. N Engl J Med. 1989;321:406–412.
35. Ruskin JN. The Cardiac Arrhythmia Suppression Trial (CAST). N Engl J Med. 1989;321:386–388.
36. Code of Federal Regulations. Public Welfare (Protection of Human Subjects), 45 CFR §46.405 (1991).
37. World Medical Association Declaration of Helsinki. Recommendations guiding physicians in biomedical research involving human subjects. Cardiovasc Res. 1997;351:2–3.
38. Shuster E. The Nuremberg Code: Hippocratic ethics and human rights. Lancet. 1998;351:974–977.
39. King PA. Twenty years after: the legacy of the Tuskegee Syphilis Study: the dangers of differences. Hastings Cent Rep. 1992;22:35–38.
40. Yankauer A. The neglected lesson of the Tuskegee Study [letter]. Am J Public Health. 1998;88:1406.

Eran Bellin, MD, and Nancy Neveloff Dubler, LLB. Eran Bellin is with the Department of Outcomes Analysis and Decision Support, Montefiore Medical Center, and the Department of Epidemiology and Social Medicine, Albert Einstein College of Medicine, Bronx, NY. Nancy Neveloff Dubler is with the Division of Bioethics, Montefiore Medical Center, and the Department of Bioethics, Albert Einstein College of Medicine, Bronx, NY. "The Quality Improvement–Research Divide and the Need for External Oversight", American Journal of Public Health 91, no. 9 (September 1, 2001): pp. 1512–1517.


PMID: 11527790