Monday, August 12, 2019

Testing the NRC


Serious nuclear accidents are rare but potentially devastating to people, land, and agriculture. (It appears that minor to moderate nuclear accidents are not nearly so rare, as James Mahaffey shows in Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima.) Three Mile Island, Chernobyl, and Fukushima are disasters that have given the public a better idea of how nuclear power reactors can go wrong, with serious and long-lasting effects. Reactors are also among the most complex industrial systems around, and accidents are common in complex, tightly coupled industrial systems. So how can we have reasonable confidence in the safety of nuclear reactors?

One possible answer is that we cannot have reasonable confidence at all. However, there are hundreds of large nuclear reactors in the world, and 98 active nuclear reactors in the United States alone. So it is critical to have highly effective safety regulation and oversight of the nuclear power industry. In the United States that regulatory authority rests with the Nuclear Regulatory Commission. So we need to ask the question: how good is the NRC at regulating, inspecting, and overseeing the safety of nuclear reactors in our country?

One would suppose that there would be excellent and detailed studies within the public administration literature that attempt to answer this question, and we might expect that researchers in the field of science and technology studies would have addressed it as well. But this seems not to be the case. I have yet to find a full-length study of the NRC as a regulatory agency, and the NRC is mentioned only twice in the 600-plus-page Oxford Handbook of Regulation. We can, however, get an oblique view of the workings of the NRC through other sources. One set of observers in a position to evaluate the strengths and weaknesses of the NRC are nuclear experts who are independent of the nuclear industry. For example, publications from the Bulletin of the Atomic Scientists include many detailed reports on the operations and malfunctions of nuclear power plants that permit a degree of assessment of the quality of oversight provided by the NRC (link). And a detailed (and scathing) report by the General Accounting Office on the near-disaster at the Davis-Besse nuclear power plant is another expert assessment of NRC functioning (link).

David Lochbaum, Edwin Lyman, and Susan Stranahan fit the description of highly qualified independent scientists and observers, and their detailed case history of the Fukushima disaster provides a degree of insight into the workings of the NRC as well as the Japanese nuclear safety agency. Their book, Fukushima: The Story of a Nuclear Disaster, was written jointly by the authors under the auspices of the Union of Concerned Scientists, one of the best informed networks of nuclear experts in the United States. Lochbaum is director of the UCS Nuclear Safety Project and author of Nuclear Waste Disposal Crisis. The book provides a careful and scientific treatment of the unfolding of the Fukushima disaster hour by hour, and it highlights the background errors made by regulators and owners in the design and operation of the Fukushima plant. It also makes numerous comparisons to the current workings of the NRC that permit a degree of assessment of the US regulatory agency.

In brief, Lochbaum and his co-authors appear to have a reasonably high opinion of the technical staff, scientists, and advisors who prepare recommendations for NRC consideration, but a low opinion of the willingness of the five commissioners to adopt costly recommendations that are strongly opposed by the nuclear industry. The authors express frustration that the nuclear safety agencies in both countries appear to have failed to learn important lessons from the Fukushima disaster:
“The [Japanese] government simply seems in denial about the very real potential for another catastrophic accident.... In the United States, the NRC has also continued operating in denial mode. It turned down a petition requesting that it expand emergency evacuation planning to twenty-five miles from nuclear reactors despite the evidence at Fukushima that dangerous levels of radiation can extend at least that far if a meltdown occurs. It decided to do nothing about the risk of fire at over-stuffed spent fuel pools. And it rejected the main recommendation of its own Near-Term Task Force to revise its regulatory framework. The NRC and the industry instead are relying on the flawed FLEX program as a panacea for any and all safety vulnerabilities that go beyond the “design basis.” (kl 117)
They believe that the NRC is excessively vulnerable to influence by the nuclear power industry and to elected officials who favor economic growth over hypothetical safety concerns, with the result that it tends to err in favor of the economic interests of the industry.
Like many regulatory agencies, the NRC occupies uneasy ground between the need to guard public safety and the pressure from the industry it regulates to get off its back. When push comes to shove in that balancing act, the nuclear industry knows it can count on a sympathetic hearing in Congress; with millions of customers, the nation’s nuclear utilities are an influential lobbying group. (36)
They note that the NRC has consistently declined to undertake more substantial reform of its approach to safety, as recommended by its own panel of experts. The key recommendation of the Near-Term Task Force (NTTF) was that the regulatory framework should be anchored in a more strenuous standard of accident prevention, requiring plant owners to address "beyond-design-basis accidents". The Fukushima earthquake and tsunami events were "beyond-design-basis"; nonetheless, they occurred, and the NTTF recommended that safety planning should incorporate consideration of these unlikely but possible events.
The task force members believed that once the first proposal was implemented, establishing a well-defined framework for decision making, their other recommendations would fall neatly into place. Absent that implementation, each recommendation would become bogged down as equipment quality specifications, maintenance requirements, and training protocols got hashed out on a case-by-case basis. But when the majority of the commissioners directed the staff in 2011 to postpone addressing the first recommendation and focus on the remaining recommendations, the game was lost even before the opening kickoff. The NTTF’s Recommendation 1 was akin to the severe accident rulemaking effort scuttled nearly three decades earlier, when the NRC considered expanding the scope of its regulations to address beyond-design accidents. Then, as now, the perceived need for regulatory “discipline,” as well as industry opposition to an expansion of the NRC’s enforcement powers, limited the scope of reform. The commission seemed to be ignoring a major lesson of Fukushima Daiichi: namely, that the “fighting the last war” approach taken after Three Mile Island was simply not good enough. (kl 253)
As a result, "regulatory discipline" (essentially the pro-business ideology that holds that regulation should be kept to a minimum) prevailed, and the primary recommendation was tabled. The issue was of great importance, in that it involved setting the standard of risk and accident severity for which the owner needed to plan. By staying with the lower standard, the NRC left the door open to the most severe kinds of accidents.

The NTTF also addressed the issue of "delegated regulation," in which the agency defers to the industry on many issues of certification and risk assessment. (Here is the FAA's definition of delegated regulation; link.)
The task force also wanted the NRC to reduce its reliance on industry voluntary initiatives, which were largely outside of regulatory control, and instead develop its own “strong program for dealing with the unexpected, including severe accidents.” (252)
Other more detail-oriented recommendations were refused as well -- for example, a requirement that reliable hardened containment vents be installed in boiling water reactors, and that these vents incorporate filters to remove radioactive gas before venting.
But what might seem a simple, logical decision—install a $15 million filter to reduce the chance of tens of billions of dollars’ worth of land contamination as well as harm to the public—got complicated. The nuclear industry launched a campaign to persuade the NRC commissioners that filters weren’t necessary. A key part of the industry’s argument was that plant owners could reduce radioactive releases more effectively by using FLEX equipment.... In March 2013, they voted 3–2 to delay a requirement that filters be installed, and recommended that the staff consider other alternatives to prevent the release of radiation during an accident. (254)
The NRC voted against requiring filters on containment vents, a decision based on industry arguments that the filters were unnecessary and their cost excessive.

The authors argue that the NRC needs to significantly rethink its standards of safety and foreseeable risk.
What is needed is a new, commonsense approach to safety, one that realistically weighs risks and counterbalances them with proven, not theoretical, safety requirements. The NRC must protect against severe accidents, not merely pretend they cannot occur. (257)
Their recommendation is to make use of an existing and rigorous plan for reactor safety incorporating the results of "severe accident mitigation alternatives" (SAMA) analysis already performed -- but largely disregarded.

However, they are not optimistic that the NRC will be willing to undertake these substantial changes that would significantly enhance safety and make a Fukushima-scale disaster less likely. Reporting on a post-Fukushima conference sponsored by the NRC, they write:
But by now it was apparent that little sentiment existed within the NRC for major changes, including those urged by the commission’s own Near-Term Task Force to expand the realm of “adequate protection.”
Lochbaum and his co-authors also make an intriguing series of points about the use of modeling and simulation in the effort to evaluate safety in nuclear plants. They agree that simulation methods are an essential part of the toolkit for nuclear engineers seeking to evaluate accident scenarios; but they argue that the simulation tools currently available (or perhaps ever available) fall far short of the precision sometimes attributed to them. So simulation tools sometimes give a false sense of confidence in the existing safety arrangements in a particular setting.
Even so, the computer simulations could not reproduce numerous important aspects of the accidents. And in many cases, different computer codes gave different results. Sometimes the same code gave different results depending on who was using it. The inability of these state-of-the-art modeling codes to explain even some of the basic elements of the accident revealed their inherent weaknesses—and the hazards of putting too much faith in them. (263)
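To make this point concrete, consider how sensitive even a very simple accident model is to a single analyst-chosen parameter. The following is a toy sketch, not any real severe-accident code: a lumped-parameter model of fuel heat-up after loss of cooling, in which two effective heat-loss coefficients differing by only 30 percent yield qualitatively different conclusions. All parameter values are hypothetical, chosen purely for illustration.

```python
# Toy illustration only -- not a real severe-accident code. A lumped-parameter
# model of fuel heat-up after loss of cooling: dT/dt = (Q - k*(T - T_amb)) / C.
# All parameter values are hypothetical and chosen purely for illustration.

import math

def hours_to_threshold(decay_heat_mw, k_mw_per_kelvin, heat_capacity_mj_per_k,
                       t_start_k=550.0, t_threshold_k=1500.0, t_amb_k=300.0,
                       dt_s=10.0):
    """Euler-integrate the heat balance until the damage threshold is reached."""
    temp, elapsed = t_start_k, 0.0
    while temp < t_threshold_k:
        dtemp_dt = (decay_heat_mw - k_mw_per_kelvin * (temp - t_amb_k)) / heat_capacity_mj_per_k
        if dtemp_dt <= 0.0:
            return math.inf          # equilibrium lies below the threshold
        temp += dtemp_dt * dt_s
        elapsed += dt_s
    return elapsed / 3600.0

# Two analysts choose effective heat-loss coefficients that differ by 30%.
for k in (0.004, 0.0052):            # MW per kelvin (hypothetical)
    h = hours_to_threshold(decay_heat_mw=6.0, k_mw_per_kelvin=k,
                           heat_capacity_mj_per_k=400.0)
    if math.isinf(h):
        print(f"k = {k}: equilibrium below threshold; no damage predicted")
    else:
        print(f"k = {k}: damage threshold reached after about {h:.0f} hours")
```

The first parameter choice predicts fuel damage in about 40 hours; the second predicts no damage at all. Real codes contain hundreds of such parameters, which suggests one way the same code can "give different results depending on who was using it."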
In addition to specific observations about the functioning of the NRC, the authors identify chronic failures in the Japanese nuclear power system that should be of concern in the United States as well: conflict of interest, falsification of records, and punishment of whistleblowers were part of the culture of nuclear power and nuclear regulation in Japan, and these problems can arise in the United States too. Here are examples of the problems they identify in the Japanese system; it is a valuable exercise to ask whether the same issues arise in the US regulatory environment.

Non-compliance and falsification of records in Japan
Headlines scattered over the decades built a disturbing picture. Reactor owners falsified reports. Regulators failed to scrutinize safety claims. Nuclear boosters dominated safety panels. Rules were buried for years in endless committee reviews. “Independent” experts were financially beholden to the nuclear industry for jobs or research funding. “Public” meetings were padded with industry shills posing as ordinary citizens. Between 2005 and 2009, as local officials sponsored a series of meetings to gauge constituents’ views on nuclear power development in their communities, NISA encouraged the operators of five nuclear plants to send employees to the sessions, posing as members of the public, to sing the praises of nuclear technology. (46)
The authors do not provide evidence about similar practices in the United States, though the history of the Davis-Besse nuclear plant in Ohio suggests that similar things happen in the US industry. Charles Perrow treats the Davis-Besse near-disaster in a fair amount of detail; link. Descriptions of the Davis-Besse nuclear incident can be found here, here, here, and here.
Conflict of interest
Shortly after the Fukushima accident, Japan’s Yomiuri Shimbun reported that thirteen former officials of government agencies that regulate energy companies were currently working for TEPCO or other power firms. Another practice, known as amaagari, “ascent to heaven,” spins the revolving door in the opposite direction. Here, the nuclear industry sends retired nuclear utility officials to government agencies overseeing the nuclear industry. Again, ferreting out safety problems is not a high priority.
Punishment of whistle-blowers
In 2000, Kei Sugaoka, a nuclear inspector working for GE at Fukushima Daiichi, noticed a crack in a reactor’s steam dryer, which extracts excess moisture to prevent harm to the turbine. TEPCO directed Sugaoka to cover up the evidence. Eventually, Sugaoka notified government regulators of the problem. They ordered TEPCO to handle the matter on its own. Sugaoka was fired. (47)
There is a similar story in the Davis-Besse plant history.

Factors that interfere with effective regulation

In summary: there appear to be several structural factors that make nuclear regulation less effective than it needs to be.

First is the political power and influence of the nuclear industry itself. This was a major factor in the background of the Chernobyl disaster as well, where generals and party officials pushed incessantly for rapid completion of reactors; Serhii Plokhy, Chernobyl: The History of a Nuclear Catastrophe. Lochbaum and his collaborators demonstrate the power that TEPCO had in shaping the regulations under which it built the Fukushima complex, including the assumptions that were incorporated about earthquake and tsunami risk. Charles Perrow documents a comparable ability by the nuclear industry in the United States to shape the rules and procedures that govern its use of nuclear power (link). This influence permits the owners of nuclear power plants to shape both the content of regulation and the systems of inspection and oversight that the agency adopts.

A related factor is the set of influences and lobbying points that come from the needs of the economy and the production pressures of the energy industry. (Interestingly enough, this was also a major influence on Soviet decision-making in choosing the graphite-moderated light water reactor for use at Chernobyl and numerous other plants in the 1960s; Serhii Plokhy, Chernobyl: The History of a Nuclear Catastrophe.)

Third is the fact emphasized by Charles Perrow that the NRC is primarily governed by Congress, and legislators are themselves vulnerable to the pressures and blandishments of the industry and demands for a low-regulation business environment. This makes it difficult for the NRC to carry out its role as independent guarantor of the health and safety of the public. Here is Perrow's description of the problem in The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters (quoting Lochbaum from a 2004 Union of Concerned Scientists report):
With utilities profits falling when the NRC got tough after the Time story, the industry not only argued that excessive regulation was the problem, it did something about what it perceived as harassment. The industry used the Senate subcommittee that controls the agency’s budget, headed by a pro-nuclear Republican senator from New Mexico, Pete Domenici. Using the committee’s funds, he commissioned a special study by a consulting group that was used by the nuclear industry. It recommended cutting back on the agency’s budget and size. Using the consultant’s report, Domenici “declared that the NRC could get by just fine with a $90 million budget cut, 700 fewer employees, and a greatly reduced inspection effort.” (italics supplied) The beefed-up inspections ended soon after the threat of budget cuts for the agency. (Mangels 2003) And the possibility for public comment was also curtailed, just for good measure. Public participation in safety issues once was responsible for several important changes in NRC regulations, says David Lochbaum, a nuclear safety engineer with the Union of Concerned Scientists, but in 2004, the NRC, bowed to industry pressure and virtually eliminated public participation. (Lochbaum 2004) As Lochbaum told reporter Mangels, “The NRC is as good a regulator as Congress permits it to be. Right now, Congress doesn’t want a good regulator.”  (The Next Catastrophe, kl 2799)
A fourth important factor is a pervasive complacency within the professional nuclear community about the inherent safety of nuclear power. This is a factor mentioned by Lochbaum:
Although the accident involved a failure of technology, even more worrisome was the role of the worldwide nuclear establishment: the close-knit culture that has championed nuclear energy—politically, economically, socially—while refusing to acknowledge and reduce the risks that accompany its operation. Time and again, warning signs were ignored and near misses with calamity written off. (kl 87)
This is what we might call an ideological or cultural factor, in that it describes a mental framework for thinking about the technology and the public. It is a very real factor in decision-making, both within the industry and in the regulatory world. Senior nuclear engineering experts at major research universities seem to share the view that the public "fear" of nuclear power is entirely misplaced, given the safety record of the industry. They believe the technical problems of nuclear power generation have been solved, and that a rational society would embrace nuclear power without anxiety. For a rebuttal of this complacency, see Rose and Sweeting's report in the Bulletin of the Atomic Scientists, "How safe is nuclear power? A statistical study suggests less than expected" (link). Here is the abstract to their paper:
After the Fukushima disaster, the authors analyzed all past core-melt accidents and estimated a failure rate of 1 per 3704 reactor years. This rate indicates that more than one such accident could occur somewhere in the world within the next decade. The authors also analyzed the role that learning from past accidents can play over time. This analysis showed few or no learning effects occurring, depending on the database used. Because the International Atomic Energy Agency (IAEA) has no publicly available list of nuclear accidents, the authors used data compiled by the Guardian newspaper and the energy researcher Benjamin Sovacool. The results suggest that there are likely to be more severe nuclear accidents than have been expected and support Charles Perrow’s “normal accidents” theory that nuclear power reactors cannot be operated without major accidents. However, a more detailed analysis of nuclear accident probabilities needs more transparency from the IAEA. Public support for nuclear power cannot currently be based on full knowledge simply because important information is not available.
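The arithmetic behind the claim that more than one such accident could occur within the next decade is easy to reproduce. Here is a minimal sketch, assuming a worldwide fleet of roughly 440 operating reactors (my approximation, not a figure from the paper) and treating core melts as a Poisson process at the authors' estimated rate:

```python
import math

rate_per_reactor_year = 1 / 3704   # Rose & Sweeting's estimated core-melt rate
reactors = 440                     # assumed worldwide fleet size (approximate)
years = 10

expected = rate_per_reactor_year * reactors * years
p_at_least_one = 1 - math.exp(-expected)   # Poisson probability of one or more

print(f"Expected core melts over {years} years: {expected:.2f}")           # ~1.19
print(f"Probability of at least one such accident: {p_at_least_one:.0%}")  # ~70%
```

On these assumptions the expected number of core melts over a decade is just above one, consistent with the abstract's claim.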
Lee Clarke's book on planning for disaster on the basis of unrealistic models and simulations is relevant here. In Mission Improbable: Using Fantasy Documents to Tame Disaster Clarke argues that much of the planning currently in place for large-scale disasters depends upon models, simulations, and scenario-building tools in which we should have very little confidence.

The complacency about nuclear safety mentioned here makes safety regulation more difficult and, paradoxically, makes the safe use of nuclear power less likely. Only when the risks are confronted with complete transparency and honesty will it be possible to design regulatory systems that do an acceptable job of ensuring the safety and health of the public.

In short, Lochbaum and his co-authors seem to provide evidence for the conclusion that the NRC is not in a position to perform its primary function: to establish a rational and scientifically well grounded set of standards for safe reactor design and operation. Further, its ability to enforce through inspection seems impaired as well by the power and influence the nuclear industry can deploy through Congress to resist its regulatory efforts. Good expert knowledge is canvassed through the NRC's processes; but the policy recommendations that flow from this scientific analysis are all too often short-circuited by the ability of the industry to fend off new regulatory requirements. Lochbaum's comment quoted by Perrow above seems all too true: “The NRC is as good a regulator as Congress permits it to be. Right now, Congress doesn’t want a good regulator.” 

It is very interesting to read the transcript of a 2014 hearing of the Senate Committee on Environment and Public Works titled "NRC'S IMPLEMENTATION OF THE FUKUSHIMA NEAR-TERM TASK FORCE RECOMMENDATIONS AND OTHER ACTIONS TO ENHANCE AND MAINTAIN NUCLEAR SAFETY" (link). Senator Barbara Boxer, California Democrat and chair of the committee, opened the meeting with these words:
Although Chairman Macfarlane said, when she announced her resignation, she had assured that “the agency implemented lessons learned from the tragic accident at Fukushima.” She said, “the American people can be confident that such an accident will never take place here.”

I say the reality is not a single one of the 12 key safety recommendations made by the Fukushima Near-Term Task Force has been implemented. Some reactor operators are still not in compliance with the safety requirements that were in place before the Fukushima disaster. The NRC has only completed its own action on 4 of the 12 task force recommendations.
This is an alarming assessment, and one that is entirely in accord with the observations made by Lochbaum above.

Sunday, August 11, 2019

Hegel on labor and freedom



Hegel provided a powerful conception of human beings in the world and a rich conception of freedom. Key to that conception is the idea of self-creation through labor. Hegel had an "aesthetic" conception of labor: human beings confront the raw given of nature and transform it through intelligent effort into objects they imagine will satisfy their needs and desires.

Alexandre Kojève's reading of Hegel is especially clear on this conception of labor and freedom, developed in his analysis of the Master-Slave section of Hegel's Phenomenology in Introduction to the Reading of Hegel. The key idea is expressed in these terms:
The product of work is the worker's production. It is the realization of his project, of his idea; hence, it is he that is realized in and by this product, and consequently he contemplates himself when he contemplates it.... Therefore, it is by work, and only by work, that man realizes himself objectively as man. (Kojève, Introduction to the Reading of Hegel)
It seems to me that this framework of thought provides an interesting basis for a philosophy of technology as well. We might think of technology as collective and distributed labor: the processes through which human beings jointly transform the world around them to better satisfy human needs. Through intelligence and initiative, individuals and organizations create new possibilities for the satisfaction of human needs. In this sense labor and technology are emancipating and self-creating; they help to embody the conditions of freedom.

However, this assessment is only one side of the issue. Technologies are created for a range of reasons by a heterogeneous collection of actors: generating profits, buttressing power relations, serving corporate and political interests. It is true that new technologies often serve to extend the powers of the human beings who use them, or to satisfy their needs and wants more fully and efficiently. Profit motives and the market help to ensure that this is true to some extent; technologies and products need to be "desired" if they are to be sold and to generate profits for the businesses that produce them. But given the conflicts of interest that exist in human society, technologies also serve to extend the capacity of some individuals and groups to wield power over others.

This means that there is a dark side to labor and technology as well. There is the labor of un-freedom. Not all labor allows the worker to fulfill him- or herself through free exercise of talents. Instead the wage laborer is regulated by the time clock and the logic of cost reduction. This constitutes Marx's most fundamental critique of capitalism, as a system of alienation and exploitation of the worker as a human being. Here are a few paragraphs on alienated labor from Marx's Economic and Philosophical Manuscripts:
The worker becomes all the poorer the more wealth he produces, the more his production increases in power and size. The worker becomes an ever cheaper commodity the more commodities he creates. The devaluation of the world of men is in direct proportion to the increasing value of the world of things. Labor produces not only commodities; it produces itself and the worker as a commodity – and this at the same rate at which it produces commodities in general. 
This fact expresses merely that the object which labor produces – labor’s product – confronts it as something alien, as a power independent of the producer. The product of labor is labor which has been embodied in an object, which has become material: it is the objectification of labor. Labor’s realization is its objectification. Under these economic conditions this realization of labor appears as loss of realization for the workers; objectification as loss of the object and bondage to it; appropriation as estrangement, as alienation.
So much does labor’s realization appear as loss of realization that the worker loses realization to the point of starving to death. So much does objectification appear as loss of the object that the worker is robbed of the objects most necessary not only for his life but for his work. Indeed, labor itself becomes an object which he can obtain only with the greatest effort and with the most irregular interruptions. So much does the appropriation of the object appear as estrangement that the more objects the worker produces the less he can possess and the more he falls under the sway of his product, capital.
All these consequences are implied in the statement that the worker is related to the product of labor as to an alien object. For on this premise it is clear that the more the worker spends himself, the more powerful becomes the alien world of objects which he creates over and against himself, the poorer he himself – his inner world – becomes, the less belongs to him as his own. It is the same in religion. The more man puts into God, the less he retains in himself. The worker puts his life into the object; but now his life no longer belongs to him but to the object. Hence, the greater this activity, the more the worker lacks objects. Whatever the product of his labor is, he is not. Therefore, the greater this product, the less is he himself. The alienation of the worker in his product means not only that his labor becomes an object, an external existence, but that it exists outside him, independently, as something alien to him, and that it becomes a power on its own confronting him. It means that the life which he has conferred on the object confronts him as something hostile and alien.
So does labor fulfill freedom or create alienation? Likewise, does technology emancipate and fulfill us, or does it enthrall and disempower us? Marx's answer to the first question is that it does both, depending on the social relations within which it is defined, managed, and controlled.

It would seem that we can answer the second question for ourselves, in much the same terms. Technology both extends freedom and constricts it. It is indeed true that technology can extend human freedom and realize human capacities. The use of technology and science in agriculture means that only a small percentage of people in advanced countries are farmers, and those who remain enjoy a high standard of living compared to the peasants of the past. Communication and transportation technologies create new possibilities for education, personal development, and self-expression. The enhancements to economic productivity created by technological advances have permitted a huge increase in the wellbeing of ordinary people in the past century -- a fact that permits us to pursue the things we care about more freely. But new technologies can also be used to control people, to monitor their thoughts and actions, and to wage war against them. More insidiously, new technologies may "alienate" us in new ways -- making us less social, less creative, and less independent in mind and thought.

So technology is, on its face, both favorable and unfavorable to the expansion of freedom and the exercise of human capacities. It is the social relations through which technology is exercised and controlled that make the primary difference in which effect predominates.

Friday, August 9, 2019

The sociology of scientific discipline formation


There was a time in the philosophy of science when it was widely believed that scientific knowledge develops in a logical, linear way from observation and experiment to finished theory. This was something like the view presupposed by the founding logical positivists like Carnap and Reichenbach. But we now understand that the creation of a field of science is a social process with a great deal of contingency and path-dependence. The institutions through which science proceeds -- journals, funding agencies, academic departments, Ph.D. programs -- are all influenced by the particular interests and goals of a variety of actors, with the result that a field of science develops (or fails to develop) in a highly contingent way. Researchers in the history of science and the sociology of science and technology approach this problem in fairly different ways.

Scott Frickel's 2004 book Chemical Consequences: Environmental Mutagens, Scientist Activism, and the Rise of Genetic Toxicology represents an effort to trace out the circumstances of the emergence of a new scientific sub-discipline, genetic toxicology. "This book is a historical sociological account of the rise of genetic toxicology and the scientists' social movement that created it" (kl 37).

Frickel identifies two large families of approaches to the study of scientific disciplines: "institutionalist accounts of discipline and specialty formation" and "cultural studies of 'disciplinarity' [that] make few epistemological distinctions between the cognitive core of scientific knowledge and the social structures, practices, and processes that advance and suspend it" (kl 63). He identifies himself primarily with the former approach:
I draw from both modes of analysis, but I am less concerned with what postmodernist science studies call the micropolitics of meaning than I am with the institutional politics of knowledge. This perspective views discipline building as a political process that involves alliance building, role definition, and resource allocation. ... My main focus is on the structures and processes of decision making in science that influence who is authorized to make knowledge, what groups are given access to that knowledge, and how and where that knowledge is implemented (or not). (kl 71)
Crucial for Frickel's study of genetic toxicology is this family of questions: "How is knowledge produced, organized, and made credible 'in-between' existing disciplines? What institutional conditions nurture interdisciplinary work? How are porous boundaries controlled? Genetic toxicology's advocates pondered similar questions. Some complained that disciplinary ethnocentrism prevented many biologists' appreciation for the broader ecological implications of their own investigations.... " (kl 99).

The account Frickel provides involves all of the institutional contingency that we might hope for; at the same time, it is an encouraging account for anyone committed to the importance of scientific research in charting a set of solutions to the enormous problems humanity currently faces.
Led by geneticists, these innovations were also intensely interdisciplinary, reflecting the efforts of scientists working in academic, government, and industry settings whose training was rooted in more than thirty disciplines and departments ranging across the biological, agricultural, environmental, and health sciences. Although falling short of some scientists' personal visions of what this new science could become, their campaign had lasting impacts. Chief among these outcomes have been the emergence of a set of institutions, professional roles, and laboratory practices known collectively as "genetic toxicology." (kl 37)
Frickel gives prominence to the politics of environmental activism in the emergence and directions of the new discipline of genetic toxicology. Activists on campus and in the broader society gave impetus to the need for new scientific research on the various toxic effects of pesticides and industrial chemicals; but they also affected the formation of the scientists themselves.

Also of interest is a volume on interdisciplinary research in the sciences edited by Frickel, Mathieu Albert, and Barbara Prainsack, Investigating Interdisciplinary Collaboration: Theory and Practice across Disciplines. The book takes special notice of some of the failures of interdisciplinarity, and calls for a careful assessment of the successes and failures of interdisciplinary research projects.
We think that these celebratory accounts give insufficient analytical attention to the insistent and sustained push from administrators, policy makers, and funding agencies to engineer new research collaborations across disciplines. In our view, the stakes of these efforts to seed interdisciplinary research and teaching "from above" are sufficiently high to warrant a rigorous empirical examination of the academic and social value of interdisciplinarity. (kl 187)
In their excellent introduction Frickel, Albert, and Prainsack write:
A major problem that one confronts in assuming the superiority of interdisciplinary research is a basic lack of studies that use comparative designs to establish that measurable differences in fact exist and to demonstrate the value of interdisciplinarity relative to disciplinary research. (kl 303)
They believe that the appreciation of "interdisciplinary research projects" for their own sake depends on several uncertain presuppositions: that interdisciplinary knowledge is better knowledge, that disciplines constrain interdisciplinary knowledge, and that interdisciplinary interactions are unconstrained by hierarchies. Each of these assumptions, they argue, is dubious.

Both books are highly interesting to anyone concerned with the development and growth of scientific knowledge. Once we abandoned the premises of logical positivism, we needed a more sophisticated understanding of how the domain of scientific research, empirical and theoretical, is constituted in actual social institutional settings. How is it that Western biology did better than Lysenko? How can environmental science re-establish its credentials for credibility with an increasingly skeptical public?  How are we to cope with the proliferation of pseudo-science in crucial areas -- health and medicine, climate, the feasibility of human habitation on Mars? Why should we be confident that the institutions of university science, peer review, tier-one journals, and National Academy selection committees succeed in guiding us to better, more veridical understandings of the empirical world around us?

(Earlier posts have addressed topics concerning social studies of science; link, link, link.)

Thursday, August 1, 2019

Pervasive organizational and regulatory failures


It is intriguing to observe how pervasive organizational and regulatory failures are in our collective lives. Once you are sensitized to these factors, you see them everywhere. A good example is in the business section of today's print version of the New York Times, August 1, 2019. There are at least five stories in this section that reflect the consequences of organizational and regulatory failure.

The first and most obvious story is one that has received frequent mention in Understanding Society, the Boeing 737 Max disaster. In a story titled “FAA oversight of Boeing scrutinized,” the reporters describe a Senate hearing on FAA oversight earlier this week. Members of the Senate Appropriations Committee questioned the process of certification of new aircraft currently in use by the FAA.
Citing the Times story, Ms. Collins raised concerns over “instances in which FAA managers appeared to be more concerned with Boeing’s production timeline, rather than the safety recommendations of its own engineers.”
Senator Jack Reed referred to the need for a culture change to rebalance the relationship between regulator and industry. Agency officials continued to defend the certification process, which delegates 96% of the work of certification to the manufacturer.

This story highlights two common sources of organizational and regulatory failure. First is the fact of “production pressure” coming from the owner of a risky process, involving timing, supply of product, and profitability. This pressure pushes the organization hard to achieve its goals -- often at the cost of safety and design failures. The second factor identified here is the structural imbalance between powerful companies running complex and costly processes and the safety agencies tasked with overseeing and regulating their behavior. The regulatory agency, in this case the FAA, is under-resourced and lacks the expert staff needed to carry out serious, in-depth technical oversight. The article does not identify a third factor, noted in prior posts on the Boeing disaster: the influence that Boeing has on legislators, government officials, and the executive branch.

A second relevant story (on the same page as the Boeing story) reports charges filed in Germany against the former CEO of Audi concerning his role in the vehicle emissions scandal. This is part of the long-standing deliberate effort by Volkswagen to deceive regulators about the emissions characteristics of its diesel engine and exhaust systems. The charges against the Audi executive involve his ordering the development of software designed to cheat diesel emissions testing. This ongoing story is primarily one of corporate dysfunction, in which corporate leaders were involved in unethical and dishonest activities on behalf of the company. Regulatory failure is not a prominent part of the story, because the efforts at deception were so carefully calculated that it is difficult to see how normal standards of regulatory testing could have defeated them. Here the pressing problem is to understand how professional, experienced executives could have been led to undertake such actions, and how the corporation was vulnerable to this kind of improper behavior at multiple levels. Presumably there were staff at multiple levels within these automobile companies who were aware of the improper behavior; the story quotes a mid-level staff person who writes in an email that “we won’t make it without a few dirty tricks.” So the difficult question for these corporations is how their internal systems failed to take note of dangerously improper behavior. The costs to Volkswagen and Audi in liability judgments and government penalties are truly vast, exceeding $22 billion in the United States alone, and surely outweigh the possible gains of the deception.

A similar story, this time from the tech industry, concerns Cisco Systems' settlement of civil claims “that it sold video surveillance technology that it knew had a significant security flaw to federal, state and local government agencies.” Here again we find a case of corporate dishonesty concerning some of the company's central products, leading to a public finding of malfeasance. The hard question is: what systems are in place at companies like Cisco to ensure ethical and honest presentation of the characteristics and potential defects of the products they sell? The imperative of always maximizing profits and reducing costs leads to many kinds of dysfunction within organizations, but this is a well understood hazard. So profit-based companies need to have active and effective programs in place that encourage and enforce honest and safe practices by managers, executives, and frontline workers. Plainly those programs broke down at Cisco, Volkswagen, and Audi. (One of the very useful features of Tom Beauchamp's book Case Studies in Business, Society, and Ethics is the light Beauchamp sheds through case studies on the genesis of unethical and dishonest behavior within a corporate setting.)

Now we go on to Christopher Flavelle's story about home-building in flood zones. From a social point of view, it makes no sense to continue to build homes, hotels, and resorts in flood zones. The increasing destructiveness of violent storms and extreme weather events has been evident at least since the devastation of Hurricane Katrina. Flavelle writes:
There is overwhelming scientific consensus that rising temperatures will increase the frequency and severity of coastal flooding caused by hurricanes, storm surges, heavy rain and tidal floods. At the same time there is the long-term threat of rising seas pushing the high-tide line inexorably inland.
However, Flavelle reports research by Climate Central showing that in eight states the rate of home-building in flood zones since 2010 has exceeded the rate of home-building in non-flood zones. So what are the institutional and behavioral factors that produce this amazingly perverse outcome? The article points to the incentives of local municipalities to generate property-tax revenue, and of potential homeowners drawn by urban sprawl and the desire for second homes on the water. Here is a tragically short-sighted development official in Galveston who finds that "the city has been able to deal with the encroaching water, through the installation of pumps and other infrastructure upgrades": "You can build around it, at least for the circumstances today. It's really not affected the vitality of things here on the island at all." The factor not emphasized in this article is the role played by the National Flood Insurance Program in the problem of coastal (and riverine) development. If flood insurance rates were calculated in terms of the true riskiness of the proposed residence, hotel, or resort, then much of this development would no longer be economically attractive. But, as the article makes clear, local officials do not like that answer because it interferes with "development" and property-tax growth. ProPublica has an excellent 2013 story on the perverse incentives created by the National Flood Insurance Program and its inequitable impact on wealthier home-owners and developers (link). Here is an article by Christine Klein and Sandra Zellmer in the SMU Law Review on the dysfunctions of federal flood policy (link):
Taken together, the stories reveal important lessons, including the inadequacy of engineered flood control structures such as levees and dams, the perverse incentives created by the national flood insurance program, and the need to reform federal leadership over flood hazard control, particularly as delegated to the Army Corps of Engineers.
Here is a final story from the business section of the New York Times illustrating organizational and regulatory dysfunctions -- this time from the interface between the health industry and big tech. The story describes an effort by DeepMind researchers to use artificial intelligence techniques to provide early diagnosis of otherwise mysterious medical conditions like "acute kidney injury" (AKI). The approach proceeds by analyzing large numbers of patient medical records and attempting to identify precursor conditions that predict the occurrence of AKI. The primary analytical tool mentioned in the article is the set of algorithms associated with neural networks. In this instance the organizational and regulatory dysfunction is latent rather than explicit, and it has to do with patient privacy. DeepMind is a business unit within Alphabet, Google's parent company, and DeepMind researchers gained access to large volumes of patient data from the UK National Health Service. There is now regulatory concern in the UK and the US about the privacy of patients whose data may wind up in the DeepMind analysis and ultimately in Google's direct control. "Some critics question whether corporate labs like DeepMind are the right organization to handle the development of technology with such broad implications for the public." The issue is a complicated one. It is of course a good thing to be able to diagnose disorders like AKI in time to correct them. But the misuse and careless custody of user data by numerous big tech companies, including especially Facebook, suggests that sensitive personal data like medical files need to be carefully secured by effective legislation and regulation. So far the regulatory system appears inadequate for the protection of individual privacy in a world of massive databases and large-scale computing capabilities. The recent FTC $5 billion settlement imposed on Facebook, large as it is, may not suffice to change the business practices of Facebook (link).
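For readers unfamiliar with the kind of approach the article describes, here is a highly simplified sketch of the idea: train a neural network on features extracted from patient records to estimate each patient's risk of AKI. Everything here (the features, the synthetic data, and the small network) is an assumption for illustration only; DeepMind's actual system is far more sophisticated and was trained on real, regulated clinical data.

```python
# Illustrative sketch only: a small neural-network classifier that flags
# elevated AKI risk from (synthetic) patient-record features. Not DeepMind's
# model; the features and the data-generating process are hypothetical.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000

# Hypothetical precursor features drawn from a patient record.
creatinine = rng.normal(1.0, 0.4, n).clip(0.3, 6.0)   # serum creatinine, mg/dL
bun = rng.normal(15.0, 6.0, n).clip(3.0, 80.0)        # blood urea nitrogen, mg/dL
age = rng.integers(18, 95, n).astype(float)
creat_trend = rng.normal(0.0, 0.2, n)                 # recent change in creatinine

# Synthetic ground truth: risk rises with creatinine level and trend, age, BUN.
logit = -6.0 + 2.5 * creatinine + 4.0 * creat_trend + 0.03 * age + 0.05 * bun
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([creatinine, bun, age, creat_trend])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16),
                                    max_iter=1000, random_state=0))
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]   # estimated per-patient AKI risk
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
print(f"Patients flagged at >50% estimated risk: {int((risk > 0.5).sum())}")
```

Even this toy makes the privacy issue visible: the model is only as good as the volume and detail of the patient records used to train it, which is exactly why access to NHS data was so valuable to DeepMind, and why its custody matters.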

(I didn't find anything in the sports section today that illustrates organizational and regulatory dysfunction, but of course these kinds of failures occur in professional and college sports as well. Think of doping scandals in baseball, cycling, and track and field, sexual abuse scandals in gymnastics and swimming, and efforts by top college football programs to evade NCAA regulations on practice time and academic performance.)