
Saturday, August 22, 2009

Patient safety -- Canada and France


Patient safety is a key issue in managing and assessing a regional or national health system. There are very sizable variations in patient safety statistics across hospitals, with significantly higher rates of infection and mortality in some institutions than others. Why is this? And what can be done in order to improve the safety performance of low-safety institutions, and to improve the overall safety performance of the hospital environment nationally?

Previous posts have made the point that safety is the net effect of a complex system within a hospital or chemical plant, including institutions, rules, practices, training, supervision, and day-to-day behavior by staff and supervisors (post, post). And experts on hospital safety agree that improvements in safety require careful analysis of patient processes in order to redesign processes so as to make infections, falls, improper medications, and unnecessary mortality less likely. Institutional design and workplace culture have to change if safety performance is to improve consistently and sustainably. (Here is a posting providing a bit more discussion of the institutions of a hospital; post.)

But here is an important question: what are the features of the social and legal environment that will make it most likely that hospital administrators will commit themselves to a thorough-going culture and management of safety? What incentives or constraints need to exist to offset the impulses of cost-cutting and status quo management that threaten to undermine patient safety? What will drive the institutional change in a health system that improving patient safety requires?

Several measures seem clear. One is state regulation of hospitals. This exists in every state; but the effectiveness of regulatory regimes varies widely across contexts. So understanding the dynamics of regulation and enforcement is a crucial step toward improving hospital quality and patient safety. The oversight of rigorous hospital accreditation agencies is another important factor for improvement. For example, the Joint Commission accredits thousands of hospitals in the United States (web page) through dozens of accreditation and certification programs. Patient safety is the highest priority underlying Joint Commission standards of accreditation. So regulation and the formulation of standards are part of the answer. But a particularly important policy tool for improving safety performance is the mandatory collection and publication of safety statistics, so that potential patients can decide between hospitals on the basis of their safety performance. Publicity and transparency are crucial parts of good management behavior; and secrecy is a refuge of poor performance in areas of public concern such as safety, corruption, or rule-setting. (See an earlier post on the relationship between publicity and corruption.)

But here we have a bit of a conundrum: achieving mandatory publication of safety statistics is politically difficult, because hospitals have a business interest in keeping these data private. So there has been a great deal of resistance to mandatory reporting of basic patient safety data in the US over the past twenty years. Fortunately, the public interest in having these data readily available has largely prevailed, and hospitals are now required to publish a broader and broader range of data on patient safety, including hospital-acquired infection rates, ventilator-associated pneumonias, patient falls, and mortality rates. Here is a useful tool from USA Today that lets patients gather information about their hospital options and see how these compare with other hospitals regionally and nationally. This is an effective accountability mechanism that inevitably drives hospitals towards better performance.

Canada has been very active in this area. Here is a website published by the Ontario Ministry of Health and Long-Term Care. The province requires hospitals to report a number of indicators of patient safety: several kinds of hospital-acquired infections; central-line primary bloodstream infection and ventilator-associated pneumonia; surgical-site infection prevention activity; and the hospital standardized mortality ratio. The user can explore the site and find that there are in fact wide variations across hospitals in the province. This is likely to influence patient choice; but it also serves as an immediate guide for regulatory agencies and local hospital administrators as they attempt to focus attention on poor management practices and institutional arrangements. (It would be helpful for the purpose of comparison if the data could be easily downloaded into a spreadsheet.)
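
Once indicators like these have been assembled into a simple table, hospital-to-hospital comparison is straightforward. Here is a minimal sketch, assuming a hand-assembled CSV file; the file name and column names are hypothetical, since the Ontario site does not currently offer such a download.

```python
# Minimal sketch: rank hospitals by a reported safety indicator once the figures
# have been collected into a CSV file. File name and column names are hypothetical.
import csv

with open("ontario_hospital_safety.csv", newline="") as f:
    hospitals = list(csv.DictReader(f))
    # expected columns: hospital, cdiff_rate, vap_rate, hsmr

# Rank hospitals by C. difficile infection rate (cases per 1,000 patient-days).
by_cdiff = sorted(hospitals, key=lambda row: float(row["cdiff_rate"]))

print("Lowest rate: ", by_cdiff[0]["hospital"], by_cdiff[0]["cdiff_rate"])
print("Highest rate:", by_cdiff[-1]["hospital"], by_cdiff[-1]["cdiff_rate"])

# A crude summary of the spread across institutions.
rates = [float(row["cdiff_rate"]) for row in hospitals]
print(f"Province-wide range: {min(rates):.2f} to {max(rates):.2f}")
```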

On first principles, it seems likely that any country that has a hospital system in which the safety performance of each hospital is kept secret will also show a wide distribution of patient safety outcomes across institutions, and will have an overall safety record that is much poorer than it could be. This is because secrecy gives hospital administrators the ability to conceal the risks their institutions impose on patients through bad practices. So publicity and regular publication of patient safety information seem to be a necessary precondition for maintaining a high-safety hospital system.

But here is the crucial point: many countries continue to permit secrecy when it comes to hospital safety. In particular, this seems to be true in France, where the medical and hospital system continues to display a very high degree of secrecy and opacity when it comes to patient safety. In fact, anecdotal information about French hospitals suggests wide variation in rates of hospital-acquired infections across institutions. Hospital-acquired infections (infections nosocomiales) are an important and rising cause of patient morbidity and mortality. And there are well-known practices and technologies that substantially reduce the incidence of these infections. But the implementation of these practices requires strong commitment and dedication at the unit level; and this degree of commitment is unlikely to occur in an environment of secrecy.

In fact, I have not been able to find French counterparts to any of the tools now available for measuring patient safety in North American hospitals. But without this regular reporting, there is no mechanism through which institutions with bad safety performance can be "ratcheted" up into better practices and better safety outcomes. The impression given by the French medical system is that doctors and medical authorities are sacrosanct; patients are not expected to question their judgment, and the state appears not to require institutions to report and publish fundamental safety information. Patients have very little power, and the media so far seem to have paid little attention to the issue of patient safety in French hospitals. This 2007 article in Le Point seems to be a first for France in that it provides quantitative rankings of a large number of hospitals in their treatment of a number of diseases. But it does not provide the kinds of safety information -- infections, falls, pneumonias -- that are core measures of patient safety.

There is a French state agency, the Office National d'Indemnisation des Accidents Médicaux (ONIAM), that provides compensation to patients who can demonstrate that their injuries are the result of hospital-induced causes, including especially hospital-associated infections. But it appears that this agency is restricted to after-the-fact recognition of hospital errors rather than pro-active programs designed to reduce hospital errors. And here is a French government web site devoted to the issue of hospital infections. It announces a multi-pronged strategy for controlling the problem of infections nosocomiales, including the establishment of a national program of surveillance of the rates of these infections. So far, however, I have not been able to locate web resources that would provide hospital-level data about infection rates.

So I am offering a hypothesis that I would be very happy to find to be refuted: that the French medical establishment continues to be bureaucratically administered with very little public exposure of actual performance when it comes to patient safety. And without this system of publicity, it seems very likely that there are wide and tragic variations across French hospitals with regard to patient safety.

Are there French medical sociologists and public health researchers who are working on the issue of patient safety in French hospitals? Can good contemporary French sociologists like Céline Béraud, Baptiste Coulmont, and Philippe Masson offer some guidance on this topic (post)? If readers are aware of databases and patient safety research programs in France that are relevant to these topics, I would be very happy to hear about them.

Update: Baptiste Coulmont (blog) passes on this link to the Réseau d'alerte, d'investigation et de surveillance des infections nosocomiales (RAISIN) within the Institut de veille sanitaire. The site provides research reports and regional assessments of the incidence of nosocomial infections. It does not appear to provide data at the level of specific hospitals and medical centers. Baptiste refers also to work by Jean Peneff, a French medical sociologist and author of La France malade de ses médecins. Here is a link to a subsequent research report by Peneff. Thanks, Baptiste.

Tuesday, July 15, 2008

Safety as a social effect


Some organizations pose large safety issues for the public because of the technologies and processes they encompass. Industrial factories, chemical and nuclear plants, farms, mines, and aviation all represent sectors where safety issues are critically important because of the inherent risks of the processes they involve. However, "safety" is not primarily a technological characteristic; instead, it is an aggregate outcome that depends as much on the social organization and management of the processes involved as it does on the technologies they employ. (See an earlier posting on technology failure.)

We can define safety by relating it to the concept of "harmful incident". A harmful incident is an occurrence that leads to injury or death of one or more persons. Safety is a relative concept, in that it involves analysis and comparison of the frequencies of harmful incidents relative to some measure of the volume of activity. If the claim is made that interstate highways are safer than county roads, this amounts to the assertion that there are fewer accidents per vehicle-mile on the former than the latter. If it is held that commercial aviation is safer than automobile transportation, this amounts to the claim that there are fewer harms per passenger-mile in air travel than auto travel. And if it is observed that the computer assembly industry is safer than the mining industry, this can be understood to mean that there are fewer harms per person-day in the one sector than the other. (We might give a parallel analysis of the concept of a healthy workplace.)
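
To make the relative notion concrete, here is a minimal sketch of the kind of rate comparison these claims involve; the figures are invented for illustration only.

```python
# Toy illustration of safety as a rate: harmful incidents per unit of exposure.
# All numbers are invented and are not real accident statistics.

def incident_rate(incidents, exposure):
    """Harmful incidents per unit of exposure (here, per billion passenger-miles)."""
    return incidents / exposure

# Hypothetical annual figures for two modes of travel.
air_rate  = incident_rate(incidents=50,     exposure=600)    # 600 billion passenger-miles
auto_rate = incident_rate(incidents=40_000, exposure=3_000)  # 3,000 billion passenger-miles

print(f"air:  {air_rate:.2f} incidents per billion passenger-miles")
print(f"auto: {auto_rate:.2f} incidents per billion passenger-miles")
print("air is safer per passenger-mile" if air_rate < auto_rate else "auto is safer per passenger-mile")
```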

This analysis highlights two dimensions of industrial safety: the inherent capacity for creating harms associated with the technology and processes in use (heavy machinery, blasting, and uncertain tunnel stability in mining, in contrast to a computer and a red pencil in the editorial offices of a newspaper), and the processes and systems that are in place to guard against harm. The first set of factors is roughly "technological," while the second set is social and organizational.

Variations in safety records across industries and across sites within a given industry provide an excellent tool for analyzing the effects of various institutional arrangements. It is often possible to pinpoint a crucial difference in organization -- supervision, training, internal procedures, inspection protocols, etc. -- that can account for a high accident rate in one factory and a low rate in an otherwise similar factory in a different state.

One of the most important findings of safety engineering is that organization and culture play critical roles in enhancing the safety characteristics of a given activity -- that is to say, safety is strongly influenced by social factors that define and organize the behaviors of workers, users, or managers. (See Charles Perrow, Normal Accidents: Living with High-Risk Technologies and Nancy Leveson, Safeware: System Safety and Computers, for a couple of excellent treatments of the sociological dimensions of safety.)

This isn't to say that only social factors can influence safety performance within an activity or industry. In fact, a central effort by safety engineers involves modifying the technology or process so as to remove the source of harm completely -- what we might call "passive" safety. So, for example, if it is possible to design a nuclear reactor in such a way that a loss of coolant leads automatically to shutdown of the fission reaction, then we have designed out of the system the possibility of catastrophic meltdown and escape of radioactive material. This might be called "design for soft landings".

However, most safety experts agree that the social and organizational characteristics of the dangerous activity are the most common causes of bad safety performance. Poor supervision and inspection of maintenance operations leads to mechanical failures, potentially harming workers or the public. A workplace culture that discourages disclosure of unsafe conditions makes the likelihood of accidental harm much greater. A communications system that permits ambiguous or unclear messages to occur can lead to air crashes and wrong-site surgeries.

This brings us at last to the point of this posting: the observation that safety data in a variety of industries and locations permit us to probe organizational features and their effects with quite a bit of precision. This is a place where institutions and organizations make a big difference in observable outcomes; safety is a consequence of a specific combination of technology, behaviors, and organizational practices. This is a good opportunity for combining comparative and statistical research methods in support of causal inquiry, and it invites us to probe for the social mechanisms that underlie the patterns of high or low safety performance that we discover.

Consider one example. Suppose we are interested in discovering some of the determinants of safety records in deep mining operations. We might approach the question from several points of view.
  • We might select five mines with "best in class" safety records and compare them in detail with five "worst in class" mines. Are there organizational or technology features that distinguish the cases?
  • We might do the large-N version of this study: examine a sample of mines from "best in class" and "worst in class" and test whether there are observed features that explain the differences in safety records. (For example, we may find that 75% of the former group but only 10% of the latter group are subject to frequent unannounced safety inspection. This supports the notion that inspections enhance safety.)
  • We might compare national records for mine safety--say, Poland and Britain. We might then attempt to identify the general characteristics that describe mines in the two countries and attempt to explain observed differences in safety records on the basis of these characteristics. Possible candidates might include degree of regulatory authority, capital investment per mine, workers per mine, ...
  • We might form a hypothesis about a factor that should be expected to enhance safety -- a company-endorsed safety education program, let's say -- and then randomly assign a group of mines to "treated" and "untreated" groups and compare safety records. (This is a quasi-experiment; see an earlier posting for a discussion of this mode of reasoning.) If we find that the treated group differs significantly in average safety performance, this supports the claim that the treatment is causally relevant to the safety outcome. (A small numerical sketch of this kind of comparison follows the list.)
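
To illustrate the last of these approaches, here is a minimal sketch of the "treated" versus "untreated" comparison. All of the figures are invented, and the permutation test is simply one convenient way of asking whether the observed difference in accident rates could plausibly be due to chance.

```python
# Minimal sketch of the quasi-experimental comparison described above.
# "treated" mines participated in a (hypothetical) safety education program;
# "untreated" mines did not. All figures are invented for illustration.
import random

treated   = [2.1, 1.8, 2.5, 1.2, 2.0, 1.6, 1.9, 2.2]   # accidents per 100,000 worker-hours
untreated = [3.0, 2.8, 3.5, 2.6, 3.1, 2.4, 3.3, 2.9]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(untreated) - mean(treated)

# Permutation test: reshuffle the pooled rates many times and count how often
# a difference at least as large as the observed one arises by chance.
pooled = treated + untreated
random.seed(0)
n_permutations = 10_000
n_extreme = 0
for _ in range(n_permutations):
    random.shuffle(pooled)
    diff = mean(pooled[len(treated):]) - mean(pooled[:len(treated)])
    if diff >= observed_diff:
        n_extreme += 1

print(f"observed difference in mean accident rate: {observed_diff:.2f}")
print(f"approximate one-sided p-value: {n_extreme / n_permutations:.4f}")
```

A small p-value here would support, but not by itself establish, the claim that the program matters for safety; as the next paragraph notes, we still need an account of the mechanisms at work.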

Investigations along these lines can establish an empirical basis for judging that one or more organizational features A, B, C have consequences for safety performance. In order to be confident in these judgments, however, we need to supplement the empirical analysis with a theory of the mechanisms through which features like A, B, C influence behavior in such a way as to make accidents more or less likely.

Safety, then, seems to be a good area of investigation for researchers within the general framework of the new institutionalism, because the effects of institutional and organizational differences emerge as observable differences in the rates of accidents in comparable industrial settings. (See Mary Brinton and Victor Nee, The New Institutionalism in Sociology, for a collection of essays on this approach.)


Wednesday, March 26, 2008

Explaining technology failure

Technology failure is often spectacular and devastating -- witness Bhopal, Three Mile Island, Chernobyl, the Challenger disaster, and the DC-10 failures of the 1970s. But in addition to being a particularly important cause of human suffering, technology failures are often very complicated social outcomes that involve a number of different kinds of factors. And this makes them interesting topics for social science study.

It is fairly common to attribute spectacular failures to a small number of causes -- for example, faulty design, operator error, or a conjunction of unfortunate but singly non-fatal accidents. What sociologists who have studied technology failures have been able to add is the fact that the root causes of disastrous failures can often be traced back to deficiencies of the social organizations in which they are designed, used, or controlled (Charles Perrow, Normal Accidents: Living with High-Risk Technologies). Technology failures are commonly the result of specific social organizational defects; so technology failure is often or usually a social outcome, not simply a technical or mechanical misadventure. (Dietrich Dörner's The Logic of Failure: Recognizing and Avoiding Error in Complex Situations is a fascinating treatment of a number of cases of failure; Eliot Cohen's Military Misfortunes: The Anatomy of Failure in War provides an equally interesting treatment of military failures; for example, the American failure to suppress submarine attacks on merchant shipping off the US coast in the early part of World War II.)

First, a few examples. The Challenger space shuttle was destroyed as a result of O-rings in the rocket booster units that became brittle because of the low launch temperature -- evidently an example of faulty design. But various observers have asked the more fundamental question: what features of the science-engineering-launch command process that was in place within NASA and between NASA and its aerospace suppliers led it to break down so profoundly (Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA)? What organizational defects made it possible for this extended group of talented scientists and engineers to come to the decision to launch over the specific warnings that were brought forward by the rocket provider's team about the danger of a cold-temperature launch? Edward Tufte attributes the failure to poor scientific communication (Visual Explanations: Images and Quantities, Evidence and Narrative); Morton Thiokol engineer Roger Boisjoly attributes it to an excessively hierarchical and deferential relation between the engineers and the launch decision-makers. Either way, features of the NASA decision-making process -- social-organizational features -- played a critical role.

Bhopal represents another important case. Catastrophic failure of a Union Carbide pesticide plant in Bhopal, India in 1984 led to the release of highly toxic methyl isocyanate gas. The toxic cloud passed into the densely populated city of Bhopal. Half a million people were affected, and between 16 and 30 thousand people died as a result. A chemical plant is a complex physical system. But even more, it is operated and maintained by a complex social organization, involving training, supervision, and operational assessment and oversight. In his careful case study of Bhopal, Paul Shrivastava maintains that this disaster was caused by a set of persistent and recurring organizational failures, especially in the areas of training and supervision of operators (Bhopal: Anatomy of Crisis).

Close studies of the nuclear disasters at Chernobyl and Three Mile Island have been equally fruitful in terms of shedding light on the characteristics of social, political, and business organization that have played a role in causing these great disasters. The stories are different in the two cases; but in each case, it turns out that social factors, including both organizational features internal to the nuclear plants and political features in the surrounding environment, played a role in the occurrence and eventual degree of destruction associated with the disasters.

These cases illustrate several important points. First, technology failures and disasters almost always involve a crucial social dimension -- in the form of the organizations and systems through which the technology is developed, deployed, and maintained and the larger social environment within which the technology is situated. Technology systems are social systems. Second, technology failures therefore constitute an important subject matter for sociological and organizational research. Sociologists can shed light on the ways in which a complex technology might fail. And third, and most importantly, the design of safe systems -- particularly systems that have the potential for creating great harms -- needs to be an interdisciplinary effort. The perspectives of sociologists and organizational theorists need to be incorporated as deeply as those of industrial and systems engineers into the design of systems that will preserve a high degree of safety. This is an important realization for the high-profile risky industries -- aviation, chemicals, nuclear power. But it is also fundamental for other important social institutions, including especially hospitals and health systems. Safe technologies will only exist when they are embedded in safe, fault-tolerant organizations and institutions. And all of this means, in turn, that there is an urgent need for a sociology of safety.