Abstract
Doctors typically think about medical errors as potential causes of malpractice litigation, as failures by individuals, and as evidence of personal incompetence that may deserve sanctions. Other professions take a different view: the design of safer systems, rather than criticism and punishment, is the way to prevent unintentional mishaps. In his article, Jeffrey Janofsky shows how psychiatrists can think about making care systems safer for patients. He also provides a splendid example of how forensic psychiatrists should conceptualize legal and medical problems encountered in clinical practice.
When I was a boy, auto fatalities occurred at a rate of 6 deaths per 100 million vehicle miles traveled.1 Corporate executives of the major U.S. automakers publicly insisted that the cause of all these deaths was bad drivers. As one vice president of General Motors told the New York Times in 1965, “If the drivers do everything they should, there wouldn't be any accidents, would there?” (Ref. 2, p 227). But that same year, in his classic, Unsafe at Any Speed,3 Ralph Nader articulated a different view: cars had “designed-in dangers” that could be eliminated, and requiring cars to incorporate already-developed safety features (e.g., seat belts) would save many lives.
In 1966, Congress authorized the federal government to set standards for motor vehicle safety. The first director of the National Highway Safety Bureau, William Haddon, Jr., was a physician who recognized that standard public health measures could prevent motor vehicle injuries.1 Over the next several years, vehicles were required to have all the safety features we now take for granted: seat belts, energy-absorbing steering wheels, shatter-resistant windshields, cushioned interior surfaces, and air bags. After four decades of vehicle improvements, plus safer road design, better public awareness of auto safety, and sensible driving requirements (e.g., child safety seats), the auto fatality rate has fallen to 1.28 deaths per 100 million vehicle miles,4 roughly one-fifth of the rate in the 1960s.
A significant number of patients are harmed by medical errors,5 but physicians and policy-makers have only recently recognized that health care-induced deaths and injuries are a public health problem. Like other physicians, forensic psychiatrists often look at health care errors and adverse medical events through the lens of malpractice law, which explicitly focuses on failures by individuals. As the Mississippi Supreme Court put it, “Medical malpractice is legal fault by a physician or surgeon. It arises from the failure of a physician [a single individual] to provide the quality of care required by law” (Ref. 6, p 866).
Physicians may know intellectually that “to err is human,” but we don't feel that way about our professional actions. The professional socialization of physicians instills an ideal of error-free practice,7 from which it follows that good physicians should be virtually infallible. Legalistic and self-critical thinking has led physicians to believe that medical error occurs only because of negligence. Physicians often personalize this even further, concluding (consciously or unconsciously) that medical errors reflect underlying character flaws.8
When medical errors occur, the consequences are sometimes tragic, because clearly innocent victims (that is, patients) pay for those errors with their bodies and lives. The reaction, in both legal and medical settings, is to find those individuals who are to blame and punish them. Although this response is understandable, it is ultimately counterproductive. Modern medical practice is a complex affair, and we now know, from looking at how safety improvements have occurred in other high-risk enterprises, that:

…fear, reprisal, and punishment produce not safety, but rather defensiveness, secrecy, and enormous human anguish. Scientific studies…make it clear that, in complex systems, safety depends not on exhortation, but rather on the proper design of equipment, jobs, support systems, and organisations. If we truly want safer care we will have to design safer care systems [Ref. 9, p 136].
Jeffrey Janofsky's presidential address10 contains a vivid, clear description of efforts to design a safer care system. It also serves as a splendid example of the intellectual contributions that forensic psychiatrists can make concerning the legal and medical problems that we encounter in our practice.
Suicide is the most frequently identified impetus for psychiatric malpractice litigation11 and the second most frequent “sentinel event” reported to the Joint Commission on Accreditation of Healthcare Organizations (JCAHO).12 Texts and articles that address prevention of malpractice lawsuits13,14 usually focus on methods of assessment and individualized interventions—that is, potential actions and decisions by individual caregivers that might avert suicide attempts. Recently, however, other perspectives on suicide have entered forensic psychiatry's intellectual arena. These perspectives recognize a clash between the still-prevalent, blame-the-individual ethos of courts and medical organizations, and the systems-oriented ethos of fields that study error scientifically.15,16
The program that Janofsky describes is designed to reduce handoff errors that arise when patient care data are imperfectly transmitted from one caregiver to the next. Harm to patients frequently results from faulty communication,17 and a few years ago, the JCAHO began requiring hospitals to develop standards for improving handoffs, recognizing that poor handoffs are the single largest source of medical error.18
A focus on better communication as an anti-suicide strategy makes sense, both from what research on hospital errors tells us in general and from the discovery by Janofsky and his colleagues that in implementing observation practices, “most critical observation failure modes were caused by communication failures” (Ref. 10, p 21). Janofsky also notes that suicide observation practices are plagued by a fundamental communication problem: a striking lack of consistency in the terms used to define and describe the type of observation taking place. To address this, Janofsky and his colleagues have adopted an easy-to-understand, clearly defined set of labels for four potential levels (or intensities) of observation.
Notwithstanding my enthusiasm for Janofsky's contribution, I wish I were more confident that the enterprise he describes will reduce inpatient suicides. The following five comments summarize my reservations, which in many cases relate to concerns and problems that Janofsky explicitly acknowledges.
First, we know that a simple procedure used for years to reduce aviation accidents, the checklist, can also reduce medical mishaps and complications from anesthesia,19 central line placement,20 and surgery.21 But procedures in anesthesia and the mechanics of central line placement are united across all care sites by similarities in equipment and human anatomy. Are psychiatric units and the patients who occupy them similar enough to permit generalizations about useful, error-reducing processes? How readily would the workflow diagram that Janofsky has produced adapt to other psychiatric inpatient settings?
Second, some hospital adverse events (e.g., certain types of infections) are so frequent that one can measure the impact of an error-reducing intervention at a single institution in just a few months. At any given psychiatric inpatient unit, however, suicides are rare. To find out whether implementing better communication and clearer nomenclature for observation levels would really reduce inpatient suicides, one might need to conduct a study that monitors outcomes at a large number of institutions. Is such a study feasible? To have a good chance of detecting a benefit (i.e., to have adequate statistical power), how large might the study have to be, and how long would it have to last?
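To make the power question concrete, consider a minimal back-of-the-envelope sketch in Python. All figures here are hypothetical assumptions, not data from Janofsky's article: a baseline rate of one inpatient suicide per 20 unit-years (0.05 per unit-year) and an intervention that cuts that rate by 25 percent. The calculation approximates a two-arm comparison of rare-event (Poisson) rates using a standard variance-stabilizing normal approximation; the function name and default parameters are mine, for illustration only.

from math import sqrt

def unit_years_per_arm(rate_control, rate_intervention,
                       z_alpha=1.96, z_beta=0.84):
    # Unit-years of observation needed in EACH arm to distinguish two
    # Poisson event rates (defaults: two-sided alpha = 0.05, power = 0.80).
    # Uses the square-root variance-stabilizing approximation: if
    # X ~ Poisson(mu), then sqrt(X) is roughly Normal(sqrt(mu), 1/4).
    diff = sqrt(rate_control) - sqrt(rate_intervention)
    return ((z_alpha + z_beta) ** 2 / 2) / diff ** 2

# Hypothetical rates: 0.05 suicides per unit-year at baseline versus
# 0.0375 per unit-year (a 25% reduction) under the intervention.
print(round(unit_years_per_arm(0.05, 0.0375)))  # prints 4368

On these assumptions, each arm would need more than 4,000 unit-years of observation, for example, roughly 875 psychiatric units followed for five years, which is one way of seeing why a definitive multi-site study would be so demanding.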
Third, Janofsky describes the limitations of current publications on inpatient suicides and the inability of most would-be investigators to obtain data that might illuminate why inpatient suicides occur. Is there any prospect for this to change? Might a government-initiated effort provide a framework within which data on suicide (along with many other adverse hospital events) might be available for examination by independent researchers?
Fourth, though many physicians might come to appreciate the insights of human factors analysis, few possess the expertise to apply human factors techniques to their own workplaces. Human factors researchers have taken an interest in the activities of some medical specialists.22 Might psychiatrists find ways to interest these researchers in the dilemmas of our specialty?
Finally, as Janofsky notes, suicide attempts are intentional behavior, and the inpatient who attempts to harm himself is trying to undermine or sabotage staff members’ efforts. Yet the human factors literature assumes that all personnel involved want the system to work and want to prevent adverse outcomes. This raises the question of whether the techniques used in human factors analysis offer the right approach to inpatient suicide. If “sabotage” is the right metaphor for inpatient suicide, would some other analytic or conceptual framework—one drawn from the criminology literature, perhaps—be better suited for preventing inpatient suicide?
Good scientific articles both provide useful ideas and inspire new questions. By these criteria, Janofsky's contribution is one that The Journal is rightly proud to publish.