Thursday, August 25, 2022

Organizational factors and nuclear power plant safety

image: Peach Bottom Nuclear Plant

The Nuclear Regulatory Commission has responsibility for ensuring the safe operation of the roughly 100 nuclear power reactors in the United States. There are significant reasons to doubt whether its regulatory regime is up to the task. Part of the challenge is the technical issue of how to evaluate and measure the risks created by complex technology systems. Part is the seemingly inescapable fact that organizational and management factors play key roles in nuclear accidents -- factors the NRC is ill-prepared to evaluate. And the third component of the challenge is the fact that the nuclear industry is a formidable adversary when it comes to "intrusive" regulation of its activities.

Thomas Wellock is the official historian of the NRC, and his work shows an admirable degree of independence from the "company line" that the NRC wishes to present to the public. Wellock's book, Safe Enough?: A History of Nuclear Power and Accident Risk, is the closest thing we have to a detailed analysis of the workings of the commission and its relationships to the industry that it regulates. A central focus in Safe Enough? is the historical development of the key tool used by the NRC in assessing nuclear safety, the methodology of "probabilistic risk assessment" (PRA). This is a method for aggregating the risks associated with the multiple devices and activities involved in a complex technology system, based on failure rates and estimates of the harm associated with failure.
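To see what this involves, here is a minimal sketch of the kind of calculation PRA performs. The component names, failure probabilities, initiating-event frequency, and consequence figure are all invented for illustration; a real PRA involves thousands of basic events organized into event trees and fault trees.

```python
# Toy illustration of probabilistic risk assessment (PRA).
# All component names, probabilities, and consequence values
# below are invented for illustration only.

# Per-demand failure probabilities for individual components.
p_fail = {
    "pump_a": 1e-3,
    "pump_b": 1e-3,
    "valve": 5e-4,
    "operator_error": 1e-2,
}

def or_gate(*probs):
    """Subsystem fails if ANY input fails (independent failures assumed)."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*probs):
    """Subsystem fails only if ALL inputs fail (e.g., redundant trains)."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Fault tree: cooling fails if both redundant pumps fail,
# or the valve sticks, or the operator misdiagnoses the event.
p_pumps = and_gate(p_fail["pump_a"], p_fail["pump_b"])
p_cooling_failure = or_gate(p_pumps, p_fail["valve"], p_fail["operator_error"])

# Aggregate risk: frequency of the accident sequence (per
# reactor-year) times an estimate of the harm if it occurs.
initiating_event_freq = 0.1   # transients per reactor-year (invented)
sequence_freq = initiating_event_freq * p_cooling_failure
harm_estimate = 1e9           # notional consequence measure (invented)

print(f"P(cooling failure | transient) = {p_cooling_failure:.2e}")
print(f"Accident frequency per reactor-year = {sequence_freq:.2e}")
print(f"Expected harm per reactor-year = {sequence_freq * harm_estimate:.2e}")
```

The point of the formalism is that every input must be a number -- which is exactly why factors like management quality and safety culture, which resist credible quantification, tend to fall outside the model.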

This preoccupation with developing a single quantitative estimate of reactor safety reflects the engineering approach to technology failure. However, Charles Perrow, Diane Vaughan, Scott Sagan, and numerous other social scientists who have studied technology hazards and disasters have made clear that organizational and managerial failures almost always play a key role in the occurrence of a major accident such as Three Mile Island, Fukushima, or Bhopal. This is the thrust of Perrow's "normal accident" theory and Vaughan's "normalization of deviance" theory. And organizational effectiveness and organizational failures are difficult to measure and quantify. Crucially, these factors are difficult to incorporate into the methodology of probabilistic risk assessment. As a result, the NRC has almost no ability to oversee and enforce standards of safety culture and managerial effectiveness.

Wellock addresses this aspect of an incomplete regulatory system in "Social Scientists in an Adversarial Environment: The Nuclear Regulatory Commission and Organizational Factors Research" (link). The problem of assessing "human factors" has been an important element of the history of the NRC's efforts to regulate the powerful nuclear industry, and failure in this area has left the NRC handicapped in its ability to address pervasive ongoing organizational faults in the nuclear industry. Wellock's article provides a detailed history of efforts by the NRC to incorporate managerial assessment and human-factors analysis into its safety program -- to date, with very little success. And, ironically, the article demonstrates a key dysfunction in the organization and setting of the NRC itself; because of the adversarial relationship that exists with the nuclear industry, and the influence that the industry has with key legislators, the NRC is largely blocked from taking commonsense steps to include evaluation of safety culture and management competence into its regulatory regime.

Wellock makes it clear that both the NRC and the public have been aware of the importance of organizational dysfunctions in the management of nuclear plants since the Three Mile Island accident in 1979. However, the culture of the organization itself makes it difficult to address these dysfunctions. Wellock cites the experience of Valerie Barnes, a research psychologist on staff at the NRC, who championed the importance of focusing attention on organizational factors and safety culture. "She recalled her engineering colleagues did not understand that she was an industrial psychologist, not a therapist who saw patients. They dismissed her disciplinary methods and insights into human behavior and culture as 'fluffy,' unquantifiable, and of limited value in regulation compared to the hard quantification bent of engineering disciplines" (1395). 

The NRC took the position that organizational factors and safety culture could be properly included in the regulatory regime only if they could be measured, validated, and incorporated into the PRA methodology. The demand for quantifiability and statistical validity in human-factors research and safety-culture research turned out to be an insuperable obstacle -- largely because these were the wrong standards for evaluating the findings of these areas of the social sciences. "In the new program [in the 1990s], the agency avoided direct evaluation of unquantifiable factors such as licensee safety culture" (1395). (It is worth noting that this presumption reflects a thoroughly positivistic and erroneous view of scientific knowledge; link, link. There are valid methods of sociological investigation that do not involve quantitative measurement.)

After the Three Mile Island disaster, both the NRC and external experts on nuclear safety had a renewed interest in organizational effectiveness and safety culture. Analysis of the TMI disaster made organizational dysfunctions impossible to ignore. Studies by the Battelle Human Affairs Research Center were commissioned in 1982 (1397), to permit design of a regulatory regime that would evaluate management effectiveness. Here again, however, the demand for quantification and "correlations" blocked the creation of a regulatory standard for management effectiveness and safety culture. Moreover, the nuclear industry was able to resist efforts to create "intrusive" inspection regimes involving assessment of management practices. "In the mid-1980s, the NRC deferred to self-regulating initiatives under the leadership of the Institute for Nuclear Power Operations (INPO). This was not the first time the NRC leaned on INPO to avoid friction with industry" (1397). 

A serious event at the Davis-Besse plant in Ohio in 1985 focused attention on the importance of management, organizational dysfunction, and safety culture, and a National Academy of Sciences report in 1988 once again recommended that the NRC give high priority to these factors -- quantifiable or not (Human Factors Research and Nuclear Safety; link).

The panel called on the NRC to prioritize research into organizational and management factors. “Management can make or break a plant,” Moray told the NRC’s Advisory Committee for Reactor Safeguards. Even more than the man-machine interface, he said, it was essential that the NRC identify what made for a positive organizational culture of reliability and safety and develop appropriate regulatory feedback mechanisms that would reduce accident risk. (1400)

These recommendations led the NRC to commission an extensive research consultancy with a group of behavioral scientists at Brookhaven National Laboratory. The goal of this research, once again, was to identify observable and measurable features of organization and safety culture that would permit quantification of these intangible aspects of nuclear plants -- and ultimately their incorporation into PRA models.

Investigators identified over 20 promising organizational factors under five broad categories of control systems, communications, culture, decision making, and personnel systems. Brookhaven concluded the best measurement methodologies included research surveys, behavioral checklists, structured interview protocols, and behavioral-anchored rating scales. (1401)

However, this research foundered on three problems: the cost of evaluating a nuclear operator on this basis; the "intrusiveness" of the methods needed to evaluate these organizational systems; and the intransigent, adversarial opposition of the operators of nuclear plants to these kinds of assessment. It also emerged that it was difficult to establish correlations between the organizational factors identified and the safety performance of a range of plants. The NRC backed down from its effort to directly assess organizational effectiveness and safety culture, and instead opted for a new "Reactor Oversight Process" (ROP) that made use only of quantitative indicators of safety performance (1403).
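The validation problem is easy to see in statistical terms: with only about a hundred plants, even a genuine relationship between a culture measure and safety performance yields a noisy correlation estimate. The following sketch uses simulated data -- every number is invented, nothing is drawn from NRC records -- to show how wide the uncertainty remains at that sample size.

```python
# Why validating organizational factors against safety performance
# is statistically fragile: with ~100 plants and noisy outcomes,
# correlation estimates carry wide error bars. All data simulated.
import numpy as np

rng = np.random.default_rng(0)
n_plants = 100

# Simulated "safety culture" score and a noisy safety-performance
# measure with a modest true relationship (effect size invented).
culture = rng.normal(size=n_plants)
performance = 0.3 * culture + rng.normal(size=n_plants)

r = np.corrcoef(culture, performance)[0, 1]

# Approximate 95% confidence interval via the Fisher z-transform.
z = np.arctanh(r)
se = 1.0 / np.sqrt(n_plants - 3)
lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)

print(f"estimated r = {r:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# The interval spans a wide range of effect sizes -- and with 20+
# candidate factors, multiple-comparison problems compound the issue.
```

Against a regulatory standard demanding validated, quantified predictors, results like these were bound to look inconclusive, whatever the underlying reality.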

A second and more serious incident at the Davis-Besse nuclear plant in 2002 resulted in a near-miss loss-of-coolant accident (link), and investigations by the NRC and GAO compelled the NRC to bring safety culture back onto the regulatory agenda. Executives, managers, operators, and inspectors were all found to have behaved in ways that greatly increased the risk of a highly damaging LOCA at Davis-Besse. The NRC imposed more extensive organizational and managerial requirements on the operators of the Davis-Besse plant, but these protocols were not extended to other plants.

It is evident from Wellock's 2021 survey of the NRC's history of human-factors and organizational research that the commission is currently incapable of taking seriously the risks to reactor safety created by the kinds of organizational failures documented by Charles Perrow, Diane Vaughan, Andrew Hopkins, Scott Sagan, and many others. The NRC has shown that it is aware of these social-science studies of technology system safety. But its intellectual commitment to a purely quantitative methodology for risk assessment, combined with the persistent ability of the nuclear operators to prevent forms of "intrusive" evaluation that they don't like, leads to a system in which major disasters remain a distinct possibility. And this is very bad news for anyone who lives within a hundred miles of a nuclear power plant.

