Thursday, October 10, 2019

Organizational culture


It is of both intellectual and practical interest to understand how organizations function and how the actors within them choose the actions that they pursue. A common answer to these questions is to refer to the rules and incentives of the organization, and then to attempt to understand the actors' choices through the lens of rational preference theory. However, it is now increasingly clear that organizations embody distinctive "cultures" that significantly affect the actions of the individuals who operate within their scope. Edgar Schein is a leading expert on organizational culture. In Organizational Culture and Leadership he defines the concept as a set of "basic assumptions about the correct way to perceive, think, feel, and behave, driven by (implicit and explicit) values, norms, and ideals" (Schein, 1990).
Culture is both a dynamic phenomenon that surrounds us at all times, being constantly enacted and created by our interactions with others and shaped by leadership behavior, and a set of structures, routines, rules, and norms that guide and constrain behavior. When one brings culture to the level of the organization and even down to groups within the organization, one can see clearly how culture is created, embedded, evolved, and ultimately manipulated, and, at the same time, how culture constrains, stabilizes, and provides structure and meaning to the group members. These dynamic processes of culture creation and management are the essence of leadership and make one realize that leadership and culture are two sides of the same coin. (3rd edition, p. 1)
According to Schein, there is a cognitive and affective component of action within an organization that has little to do with rational calculation of interests and more to do with how the actors frame their choices. The values and expectations of the organization help to shape the actions of the participants. And one crucial aspect of leaders, according to Schein, is the role they play in helping to shape the culture of the organizations they lead.

It is intriguing that several pressing organizational problems have been found to revolve around the culture of the organization within which behavior takes place. The prevalence of sexual and gender harassment appears to depend a great deal on the culture of respect and civility that an organization has embodied -- or has failed to embody. The ways in which accidents occur in large industrial systems seem to depend in part on the culture of safety that has been established within the organization. And the incidence of corrupt and dishonest practices within businesses seems to be influenced by the culture of integrity that the organization has managed to create. In each instance, experience seems to demonstrate that "good" culture leads to less socially harmful behavior, while "bad" culture leads to more such behavior.

Consider first the prominence that the idea of safety culture has come to have in the nuclear industry since Three Mile Island and Chernobyl. Here are a few passages from a review document authored by the Advisory Committee on Reactor Safeguards (link).
There also seems to be a general agreement in the nuclear community on the elements of safety culture. Elements commonly included at the organization level are senior management commitment to safety, organizational effectiveness, effective communications, organizational learning, and a working environment that rewards identifying safety issues. Elements commonly identified at the individual level include personal accountability, questioning attitude, and procedural adherence. Financial health of the organization and the impact of regulatory bodies are occasionally identified as external factors potentially affecting safety culture. 
The working paper goes on to consider two issues: has research validated the causal relationship between safety culture and safe performance? And should the NRC create regulatory requirements aimed at observing and enhancing the safety culture in a nuclear plant? They note that current safety statistics do not permit measurement of the association between safety culture and safe performance, but that experience in the industry suggests that the answers to both questions are probably affirmative:
On the other hand, even at the current level of industry maturity, we are confronted with events such as the recent reactor vessel head corrosion identified so belatedly at the Davis-Besse Nuclear Power Plant. Problems subsequently identified in other programmatic areas suggest that these may not be isolated events, but the result of a generally degraded plant safety culture. The head degradation was so severe that a major accident could have resulted and was possibly imminent. If, indeed, the true cause of such an event proves to be degradation of the facility's safety culture, is it acceptable that the reactor oversight program has to wait for an event of such significance to occur before its true root cause, degraded culture, is identified? This event seems to make the case for the need to better understand the issues driving the culture of nuclear power plants and to strive to identify effective performance indicators of resulting latent conditions that would provide leading, rather than lagging, indications of future plant problems. (7-8)
Researchers in the area of sexual harassment have devoted quite a bit of attention to the topic of workplace culture as well. This theme is emphasized in the National Academy study on sexual and gender harassment (link); the authors make the point that gender harassment is chiefly aimed at expressing disrespect towards the target rather than sexual exploitation. This has an important implication for institutional change. An institution that creates a strong core set of values emphasizing civility and respect is less conducive to gender harassment. They summarize this analysis in the statement of findings as well:
Organizational climate is, by far, the greatest predictor of the occurrence of sexual harassment, and ameliorating it can prevent people from sexually harassing others. A person more likely to engage in harassing behaviors is significantly less likely to do so in an environment that does not support harassing behaviors and/or has strong, clear, transparent consequences for these behaviors. (50)
Ben Walsh is representative of this approach. Here is the abstract of a research article by Walsh, Lee, Jensen, McGonagle, and Samnani on workplace incivility (link):
Scholars have called for research on the antecedents of mistreatment in organizations such as workplace incivility, as well as the theoretical mechanisms that explain their linkage. To address this call, the present study draws upon social information processing and social cognitive theories to investigate the relationship between positive leader behaviors—those associated with charismatic leadership and ethical leadership—and workers’ experiences of workplace incivility through their perceptions of norms for respect. Relationships were separately examined in two field studies using multi-source data (employees and coworkers in study 1, employees and supervisors in study 2). Results suggest that charismatic leadership (study 1) and ethical leadership (study 2) are negatively related to employee experiences of workplace incivility through employee perceptions of norms for respect. Norms for respect appear to operate as a mediating mechanism through which positive forms of leadership may negatively relate to workplace incivility. The paper concludes with a discussion of implications for organizations regarding leader behaviors that foster norms for respect and curb uncivil behaviors at work.
David Hess, an expert on corporate corruption, takes a similar approach to the problem of corruption and bribery by officials of multinational corporations (link). Hess argues that bribery often has to do with organizational culture and individual behavior, and that effective steps to reduce the incidence of bribery must proceed on the basis of an adequate analysis of both culture and behavior. And he links this issue to fundamental problems in the area of corporate social responsibility.
Corporations must combat corruption. By allowing their employees to pay bribes they are contributing to a system that prevents the realization of basic human rights in many countries. Ensuring that employees do not pay bribes is not accomplished by simply adopting a compliance and ethics program, however. This essay provided a brief overview of why otherwise good employees pay bribes in the wrong organizational environment, and what corporations must focus on to prevent those situations from arising. In short, preventing bribe payments must be treated as an ethical issue, not just a legal compliance issue, and the corporation must actively manage its corporate culture to ensure it supports the ethical behavior of employees.
As this passage emphasizes, Hess believes that controlling corrupt practices requires changing incentives within the corporation while equally changing the ethical culture of the corporation; he believes that the ethical culture of a company can have effects on the degree to which employees engage in bribery and other corrupt practices.

What these examples have in common -- and other examples are available as well -- is that intangible features of the work environment are likely to influence the behavior of the actors in that environment, and thereby to affect the favorable and unfavorable outcomes of the organization's functioning as well. Moreover, if we follow Schein's lead and work on the assumption that leaders can influence culture through their advocacy for the values that the organization embodies, then leadership has a core responsibility to cultivate a work culture that supports these favorable outcomes. Work culture can be cultivated to encourage safety and to discourage bad outcomes like sexual harassment and corruption.

Monday, September 30, 2019

The functionality of artifacts


We think of artifacts as being "functional" in a specific sense: their characteristics are well designed and adjusted for their "intended" use. Sometimes this is because of the explicit design process through which they were created, and sometimes it is the result of a long period of small adjustments by artisan-producers and users who recognize a potential improvement in shape, material, or internal workings that would lead to superior performance. Jon Elster described these processes in his groundbreaking 1983 book, Explaining Technical Change: A Case Study in the Philosophy of Science.

Here is how I described the gradual process of refinement of technical practice with respect to artisanal winegrowing in a 2009 post (link):
First, consider the social reality of a practice like wine-making. Pre-modern artisanal wine makers possess an ensemble of techniques through which they grow grapes and transform them into wine. These ensembles are complex and developed; different wine "traditions" handle the tasks of cultivation and fermentation differently, and the results are different as well (fine and ordinary burgundies, the sweet gewurztraminers of Alsace versus Germany). The novice artisan doesn't reinvent the art of winemaking; instead, he/she learns the techniques and traditions of the elders. But at the same time, the artisan wine maker may also introduce innovations into his/her practice -- a wrinkle in the cultivation techniques, a different timing in the fermentation process, the introduction of a novel ingredient into the mix.
Over time the art of grape cultivation and wine fermentation improves.

But in a way this expectation of "artifact functionality" is too simple and direct. In the development of a technology or technical practice there are multiple actors who are in a position to influence the development of the outcome, and they often have divergent interests. These differences of interest may lead to substantial differences in performance for the technology or technique. Technologies reflect social interests, and this is as evident in the history of technology as it is in the current world of high tech. In the winemaking case, for example, landlords may have interests that favor dense planting, whereas the wine maker may favor sparser planting because of the superior taste this pattern creates in the grape. More generally, the owner's interest in sales and profitability exerts a pressure on the characteristics of the product that runs contrary to the interest of the artisan-producer who gives primacy to the quality of the product, and both may have interests that are somewhat inconsistent with the broader social good.

Imagine the situation that would result if a grain harvesting machine were continually redesigned by both the profit-seeking landowner and the agricultural workers. Innovations that enhance profits may be harmful to the safety and welfare of agricultural workers, and vice versa. So we might imagine a see-saw of technological development, as the landowner and the worker alternately gain more influence over the development of the technology.

As an undergraduate at the University of Illinois in the late 1960s I heard the radical political scientist Michael Parenti tell just such a story about his father's struggle to maintain artisanal quality in the Italian bread he baked in New York City in the 1950s. Here is an online version of the story (link). Michael Parenti's story begins like this:
Years ago, my father drove a delivery truck for the Italian bakery owned by his uncle Torino. When Zi Torino returned to Italy in 1956, my father took over the entire business. The bread he made was the same bread that had been made in Gravina, Italy, for generations. After a whole day standing, it was fresh as ever, the crust having grown hard and crisp while the inside remained soft, solid, and moist. People used to say that our bread was a meal in itself.... 
Pressure from low-cost commercial bread companies forced his father into more and more cost-saving adulteration of the bread. And the story ends badly ...
But no matter what he did, things became more difficult. Some of our old family customers complained about the change in the quality of the bread and began to drop their accounts. And a couple of the big stores decided it was more profitable to carry the commercial brands. 
Not long after, my father disbanded the bakery and went to work driving a cab for one of the big taxi fleets in New York City. In all the years that followed, he never mentioned the bread business again.
Parenti's message to activist students in the 1960s was stark: this is the logic of capitalism at work.

Of course market pressures do not always lead to the eventual ruin of the products we buy; there is also an economic incentive, created by consumers who favor higher performance and more features, that leads businesses to improve their products. So the dynamic that ruined Michael Parenti's father's bread is only one direction that market competition can take. The crucial point is this: there is nothing in the development of technology and technique that guarantees outcomes that are more and more favorable for the public.

Sunday, September 29, 2019

Flood plains and land use


An increasingly pressing consequence of climate change is the rising threat of flood in coastal and riverine communities. And yet a combination of Federal and local policies has created land use incentives that have led to increasing development in flood plains since the major floods of the 1990s and 2000s (Mississippi River 1993, Hurricane Katrina 2005, Hurricane Sandy 2012, ...), with the result that economic losses from flooding have risen sharply. Many of those costs are borne by taxpayers through Federal disaster relief and subsidies to the Federal flood insurance program.

Christine Klein and Sandra Zellmer provide a highly detailed and useful review of these issues in their brilliant SMU Law Review article, "Mississippi River Stories: Lessons from a Century of Unnatural Disasters" (link). These arguments are developed more fully in their 2014 book Mississippi River Tragedies: A Century of Unnatural Disaster. Klein and Zellmer believe that current flood insurance policies and disaster assistance policies at the federal level continue to support perverse incentives for developers and homeowners and need to be changed. Projects and development within 100-year flood plains need to be subject to mandatory flood insurance coverage; flood insurance policies should be rated by degree of risk; and government units should have the legal ability to prohibit development in flood plains. Here are their central recommendations for future Federal policy reform:
Substantive requirements for watershed planning and management would effectuate the Progressive Era objective underlying the original Flood Control Act of 1928: treating the river and its floodplain as an integrated unit from source to mouth, "systematically and consistently," with coordination of navigation, flood control, irrigation, hydropower, and ecosystem services. To accomplish this objective, the proposed organic act must embrace five basic principles:
(1) Adopt sustainable, ecologically resilient standards and objectives;
(2) Employ comprehensive environmental analysis of individual and cumulative effects of floodplain construction (including wetlands fill); (3) Enhance federal leadership and competency by providing the Corps with primary responsibility for flood control measures, cabined by clear standards, continuing monitoring responsibilities, and oversight through probing judicial review, and supported by a secure, non-partisan funding source; (4) Stop wetlands losses and restore damaged floodplains by re-establishing natural areas that are essential for floodwater retention; and (5) Recognize that land and water policies are inextricably linked and plan for both open space and appropriate land use in the floodplain. (1535-36)
Here is Klein and Zellmer's description of the US government's response to flood catastrophes in the 1920s:
Flood control was the most pressing issue before the Seventieth Congress, which sat from 1927 to 1929. Congressional members quickly recognized that the problems were two-fold. First, Congressman Edward Denison of Illinois criticized the absence of federal leadership: "the Federal Government has allowed the people. . . to follow their own course and build their own levees as they choose and where they choose until the action of the people of one State has thrown the waters back upon the people of another State, and vice versa." Moreover, as Congressman Robert Crosser of Ohio noted, the federal government's "levees only" policy--a "monumental blunder"--was not the right sort of federal guidance. (1482-83)
In passing the Flood Control Act of 1928, congressional members were influenced by Progressive Era objectives. Comprehensive planning and multiple-use management were hallmarks of the time. The goal was nothing less than a unified, planned society. In the early 1900s, many federal agencies, including the Bureau of Reclamation and the U.S. Geological Survey, had agreed that each river must be treated as an integrated unit from source to mouth. Rivers were to be developed "systematically and consistently," with coordination of navigation, flood control, irrigation, and hydro-power. But the Corps of Engineers refused to join the movement toward watershed planning, instead preferring to conduct river management in a piecemeal fashion for the benefit of myriad local interests. (1484)
But perverse incentives were created by Federal flood policies in the 1920s that persist to the present:
Only a few decades after the 1927 flood, the Mississippi River rose up out of its banks once again, teaching a new lesson: federal structural responses plus disaster relief payouts had incentivized ever more daring incursions into the floodplain. The floodwater evaded federal efforts to control it with engineered structures, and those same structures prevented the river from finding its natural retention areas--wetlands, oxbows, and meanders--that had previously provided safe storage for floodwater. The resulting damage to affected areas was increased by orders of magnitude. The federal response to this lesson was the adoption of a nationwide flood insurance program intended to discourage unwise floodplain development and to limit the need for disaster relief. Both lessons are detailed in this section. (1486)
Paradoxically, navigational structures and floodplain constriction by levees, highway embankments, and development projects exacerbated the flood damage all along the rivers in 1951 and 1952. Flood-control engineering works not only enhanced the danger of floods, but actually contributed to higher flood losses. Flood losses were, in turn, used to justify more extensive control structures, creating a vicious cycle of ever-increasing flood losses and control structures. The mid-century floods demonstrated the need for additional risk-management measures. (1489)
Only five years after the program was enacted, Gilbert White's admonition was validated. Congress found that flood losses were continuing to increase due to the accelerating development of floodplains. Ironically, both federal flood control infrastructure and the availability of federal flood insurance were at fault. To address the problem, Congress passed the Flood Disaster Protection Act of 1973, which made federal assistance for construction in flood hazard areas, including loans from federally insured banks, contingent upon the purchase of flood insurance, which is only made available to participating communities. (1491)
But development and building in the floodplains of the rivers of the United States has continued and even accelerated since the 1990s.

Government policy comes into this set of disasters at several levels. First, climate policy -- the evidence has been clear for at least two decades that the human production of greenhouse gases is creating rapid climate change, including rising temperatures in atmosphere and oceans, severe storms, and rising ocean levels. A fundamental responsibility of government is to regulate and direct activities that create public harms, and the US government has failed abjectly to change the policy environment in ways that substantially reduce the production of CO2 and other greenhouse gases. Second, as Klein and Zellmer document, the policies adopted by the US government in the early part of the twentieth century intended to prevent major flood disasters were ill conceived. The efforts by the US government and regional governments to control flooding through levees, reservoirs, dams, and other infrastructure interventions have failed, and have probably made the problems of flooding along major US rivers worse. Third, human activities in flood plains -- residences, businesses, hotels and resorts -- have worsened the consequences of floods, elevating the cost in lives and property through reckless development in flood zones. Governments have failed to discourage or prevent these forms of development, and the consequences have proven to be extreme (and worsening).

It is evident that storms, floods, and sea-level rise will be vastly more destructive in the decades to come. Here is a projection of the effects on the Florida coastline after a sustained period of sea-level rise resulting from a 2-degree Centigrade rise in global temperature (link):


We seem to have passed the point where it will be possible to avoid catastrophic warming. Our governments need to take strong actions now to ameliorate the severity of global warming, and to prepare us for the damage when it inevitably comes.

Kojève on freedom


An earlier post highlighted Alexandre Kojève's presentation of Hegel's rich conception of labor, freedom, and human self-creation. This account is contained in Kojève's analysis of the Master-Slave section of Hegel's Phenomenology in Kojève's Introduction to the Reading of Hegel: Lectures on the "Phenomenology of Spirit"; link.

Here are the key passages from Hegel's Phenomenology on which Kojève's account depends, from Terry Pinkard's translation in Georg Wilhelm Friedrich Hegel: The Phenomenology of Spirit:

Hegel on the Master-Slave relation

195. However, the feeling of absolute power as such, and in the particularities of service, is only dissolution in itself, and, although the fear of the lord is the beginning of wisdom, in that fear consciousness is what it is that is for it itself, but it is not being-for-itself. However, through work, this servile consciousness comes round to itself. In the moment corresponding to desire in the master’s consciousness, the aspect of the non-essential relation to the thing seemed to fall to the lot of the servant, as the thing there retained its self-sufficiency. Desire has reserved to itself the pure negating of the object, and, as a result, it has reserved to itself that unmixed feeling for its own self. However, for that reason, this satisfaction is itself only a vanishing, for it lacks the objective aspect, or stable existence. In contrast, work is desire held in check, it is vanishing staved off, or: work cultivates and educates. The negative relation to the object becomes the form of the object; it becomes something that endures because it is just for the laborer himself that the object has self-sufficiency. This negative mediating middle, this formative doing, is at the same time singularity, or the pure being-for-itself of consciousness, which in the work external to it now enters into the element of lasting. Thus, by those means, the working consciousness comes to an intuition of self-sufficient being as its own self.

196. However, what the formative activity means is not only that the serving consciousness as pure being-for-itself becomes, to itself, an existing being within that formative activity. It also has the negative meaning of the first moment, that of fear. For in forming the thing, his own negativity, or his being-for-itself, only as a result becomes an object to himself in that he sublates the opposed existing form. However, this objective negative is precisely the alien essence before which he trembled, but now he destroys this alien negative and posits himself as such a negative within the element of continuance. He thereby becomes for himself an existing-being-for-itself. Being-for-itself in the master is to the servant an other, or it is only for him. In fear, being-for-itself is in its own self. In culturally formative activity, being-for-itself becomes for him his own being-for-itself, and he attains the consciousness that he himself is in and for himself. As a result, the form, by being posited as external, becomes to him not something other than himself, for his pure being-for-itself is that very form, which to him therein becomes the truth. Therefore, through this retrieval, he comes to acquire through himself a mind of his own, and he does this precisely in the work in which there had seemed to be only some outsider’s mind. – For this reflection, the two moments of fear and service, as well as the moments of culturally formative activity are both necessary, and both are necessary in a universal way. Without the discipline of service and obedience, fear is mired in formality and does not diffuse itself over the conscious actuality of existence. Without culturally formative activity, fear remains inward and mute, and consciousness will not become for it [consciousness] itself.
If consciousness engages in formative activity without that first, absolute fear, then it has a mind of its own which is only vanity, for its form, or its negativity, is not negativity in itself, and his formative activity thus cannot to himself give him the consciousness of himself as consciousness of the essence. If he has not been tried and tested by absolute fear but only by a few anxieties, then the negative essence will have remained an externality to himself, and his substance will not have been infected all the way through by it. While not each and every one of the ways in which his natural consciousness was brought to fulfillment was shaken to the core, he is still attached in himself to determinate being. His having a mind of his own is then only stubbornness, a freedom that remains bogged down within the bounds of servility. To the servile consciousness, pure form can as little become the essence as can the pure form – when it is taken as extending itself beyond the singular individual – be a universal culturally formative activity, an absolute concept. Rather, the form is a skill which, while it has dominance over some things, has dominance over neither the universal power nor the entire objective essence. (Hegel, Phenomenology, 115-116)

Kojève's interpretation of Hegel

Here are the primary passages that represent the heart of Kojève's interpretation of this section.

Work, on the other hand, is repressed Desire, an arrested passing phase; or, in other words, it forms-and-educates. Work transforms the World and civilizes, educates, Man. The man who wants to work -- or who must work -- must repress the instinct that drives him "to consume" "immediately" the "raw" object. And the Slave can work for the Master -- that is, for another than himself -- only by repressing his own desires. Hence he transcends himself by working -- or perhaps better, he educates himself, he "cultivates" and "sublimates" his instincts by repressing them. On the other hand, he does not destroy the thing as it is given. He postpones the destruction of the thing by first transforming it through work; he prepares it for consumption -- that is to say, he "forms" it. In his work, he transforms things and transforms himself at the same time: he forms things and the World by transforming himself, by educating himself; and he educates himself, he forms himself, by transforming things and the World. Thus, the negative-or-negating relation to the object becomes a form of this object and gains permanence, precisely because, for the worker, the object has autonomy.... The product of work is the worker's production. It is the realization of his project, of his idea; hence, it is he that is realized in and by this product, and consequently he contemplates himself when he contemplates it.... Therefore, it is by work, and only by work, that man realizes himself objectively as man. Only after producing an artificial object is man himself really and objectively more than and different from a natural being; and only in this real and objective product does he become truly conscious of his subjective human reality. (Kojève, 24-25)

The Master can never detach himself from the World in which he lives, and if this World perishes, he perishes with it. Only the Slave can transcend the given world (which is subjugated by the Master) and not perish. Only the Slave can transform the World that forms him and fixes him in slavery and create a World that he has formed in which he will be free. And the Slave achieves this only through forced and terrified work carried out in the Master's service. To be sure, this work by itself does not free him. But in transforming the World by this work, the Slave transforms himself too, and thus creates the new objective conditions that permit him to take up once more the liberating Fight for recognition that he refused in the beginning for fear of death. And thus in the long run, all slavish work realizes not the Master's will, but the will -- at first unconscious -- of the Slave, who -- finally -- succeeds where the Master -- necessarily -- fails. Therefore, it is indeed originally dependent, serving, and slavish Consciousness that in the end realizes and reveals the ideal of autonomous Self-Consciousness and is thus its "truth." (Kojève, 29-30)

However, to understand the edifice of universal history and the process of its construction, one must know the materials that were used to construct it. These materials are men. To know what History is, one must therefore know what Man who realizes it is. Most certainly, man is something quite different from a brick. In the first place, if we want to compare universal history to the construction of an edifice, we must point out that men are not only the bricks that are used in the construction; they are also the masons who build it and the architects who conceive the plan for it, a plan, moreover, which is progressively elaborated during the construction itself. Furthermore, even as "brick," man is essentially different from a material brick: even the human brick changes during the construction, just as the human mason and the human architect do. Nevertheless, there is something in Man, in every man, that makes him suited to participate--passively or actively--in the realization of universal history. At the beginning of this History, which ends finally in absolute Knowledge, there are, so to speak, the necessary and sufficient conditions. And Hegel studies these conditions in the first four chapters of the Phenomenology.

Finally, Man is not only the material, the builder, and the architect of the historical edifice.  He is also the one for whom this edifice is constructed: he lives in it, he sees and understands it, he describes and criticizes it. There is a whole category of men who do not actively participate in the historical construction and who are content to live in the constructed edifice and to talk about it. These men, who live somehow "above the battle," who are content to talk about things that they do not create by their Action, are Intellectuals who produce intellectuals' ideologies, which they take for philosophy (and pass off as such). Hegel describes and criticizes these ideologies in Chapter V. (32-33)

The central ideas here are --
  • Work transforms and educates the worker.
  • Work requires the delay of consumption.
  • Work transforms the world and the environment. 
  • The self-creation of the human being through work is essential to his or her reality as a human being.
  • By merely directing and commanding work, the master fails to engage in self-creation.
  • The master cannot be truly free.
  • Human beings create history through their creative labor.
  • Human beings create and transform themselves through labor.
  • History is human-centered. History is "subject" as well as "object".
  • Those who merely think and reflect upon history are sterile and contribute nothing to the course of history.
These comments add up to a substantive theory of the human being in the world -- one that emphasizes creativity, transformation, and self-creation. It stands in stark contrast to the liberal utilitarian view of Adam Smith and Jeremy Bentham of human nature as consumer and rational optimizer of a given set of choices; instead, on Kojève’s (and Hegel's) view, the human being becomes fully human through creative engagement with the natural world, through labor.

It is interesting to realize that, though Kojève was a philosopher, he was not primarily an academic. Instead, he was a high-placed civil servant and statesman in the French state, a man whose thinking and actions were intended to create a new path for France. He is credited with being one of the early theorists of the European Union.

Kojève's account of labor and freedom is, of course, influenced by his own immersion in the writings of the early Marx; so the philosophy of labor, freedom, and self-creation articulated here is neither pure Hegel nor pure Marx. We might say that it is pure Kojève.

Jeff Love's biography of Kojève, The Black Circle: A Life of Alexandre Kojève, is also of interest, emphasizing the Russian roots of Kojève's thought. Love confirms the importance of the richer theory of human freedom and self-realization offered in Kojève's account, and notes a parallel with themes in nineteenth-century Russian literature.
Kojève’s critique of self-interest merits renewal in a day when consumer capitalism and the reign of self-interest are hardly in question, either implicitly or explicitly, and where the key precincts of critique have been hobbled by their own reliance on elements of the modern conception of the human being as the free historical individual that have not been sufficiently clarified. Kojève’s thought is thus anodyne: far from being “philosophically” mad or the learned jocularity of a jaded, extravagant genius, it expresses a probing inquiry into the nature of human being that returns us to questions that reach down to the roots of the free historical individual. Moreover, it extends a critique of self-interest deeply rooted in Russian thought, and Kojève does so, no doubt with trenchant irony, in the very capital of the modern bourgeoisie decried violently by Dostoevsky in his Winter Notes on Summer Impressions.
(Here is an interesting reflection on Kojève as philosopher by Stanley Rosen; link.)

Tuesday, September 3, 2019

The US Chemical Safety Board


The Federal agency responsible for investigating chemical and petrochemical accidents in the United States is the Chemical Safety Board (link). The mission of the Board is described in these terms:
The CSB is an independent federal agency charged with investigating industrial chemical accidents. Headquartered in Washington, DC, the agency’s board members are appointed by the President and confirmed by the Senate.
The CSB’s mission is to “drive chemical safety change through independent investigation to protect people and the environment.”
The CSB’s vision is “a nation safe from chemical disasters.”
The CSB conducts root cause investigations of chemical accidents at fixed industrial facilities. Root causes are usually deficiencies in safety management systems, but can be any factor that would have prevented the accident if that factor had not occurred. Other accident causes often involve equipment failures, human errors, unforeseen chemical reactions or other hazards. The agency does not issue fines or citations, but does make recommendations to plants, regulatory agencies such as the Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency (EPA), industry organizations, and labor groups. Congress designed the CSB to be non-regulatory and independent of other agencies so that its investigations might, where appropriate, review the effectiveness of regulations and regulatory enforcement.
CSB was legislatively modeled on the National Transportation Safety Board, and its sole responsibility is to conduct investigations of major chemical accidents in the United States and report its findings to the public. It is not subordinate to OSHA or EPA, but it collaborates with those (and other) Federal agencies as appropriate (link). It has no enforcement powers; its function is to investigate, report, and recommend when serious chemical or petrochemical accidents have occurred.

One of its most important investigations concerned the March 23, 2005 Texas City BP refinery explosion. A massive explosion resulted in the deaths of 15 workers, injuries to over 170 workers, and substantial destruction of the refinery infrastructure. CSB conducted an extensive investigation into the “root causes” of the accident, and assigned substantial responsibility to BP’s corporate management of the facility. Here is the final report of that investigation (link), and here is a video prepared by CSB summarizing its main findings (link).

The key findings of the CSB report focus on the responsibility of BP management for the accident. Here is a summary of the CSB assessment of root causes:

The BP Texas City tragedy is an accident with organizational causes embedded in the refinery’s culture. The CSB investigation found that organizational causes linked the numerous safety system failures that extended beyond the ISOM unit. The organizational causes of the March 23, 2005, ISOM explosion are

  • BP Texas City lacked a reporting and learning culture. Reporting bad news was not encouraged, and often Texas City managers did not effectively investigate incidents or take appropriate corrective action.
  • BP Group lacked focus on controlling major hazard risk. BP management paid attention to, measured, and rewarded personal safety rather than process safety.
  • BP Group and Texas City managers provided ineffective leadership and oversight. BP management did not implement adequate safety oversight, provide needed human and economic resources, or consistently model adherence to safety rules and procedures.
  • BP Group and Texas City did not effectively evaluate the safety implications of major organizational, personnel, and policy changes.
Underlying almost all of these failures to manage this complex process with a priority on “process safety” rather than simply personal safety is a corporate mandate for cost reduction:
In late 2004, BP Group refining leadership ordered a 25 percent budget reduction “challenge” for 2005. The Texas City Business Unit Leader asked for more funds based on the conditions of the Texas City plant, but the Group refining managers did not, at first, agree to his request. Initial budget documents for 2005 reflect a proposed 25 percent cutback in capital expenditures, including on compliance, HSE, and capital expenditures needed to maintain safe plant operations.[208] The Texas City Business Unit Leader told the Group refining executives that the 25 percent cut was too deep, and argued for restoration of the HSE and maintenance-related capital to sustain existing assets in the 2005 budget. The Business Unit Leader was able to negotiate a restoration of less than half the 25 percent cut; however, he indicated that the news of the budget cut negatively affected workforce morale and the belief that the BP Group and Texas City managers were sincere about culture change. (176)
And what about corporate accountability? What did BP have to pay in recompense for its faulty management of the Texas City refinery and the subsequent damages to workers and local residents? The answer is, remarkably little. OSHA assessed a fine of $50.6 million for BP's violations of safety regulations (link, link), and BP committed to spend at least $500 million on corrective steps within the plant to protect the safety of workers. This was a record fine at the time; and yet it might very well be seen by BP corporate executives as a modest cost of doing business in this industry. It does not seem to be of the magnitude that would lead to fundamental change of culture, action, and management within the company.

BP commissioned a major review of safety in all five of its US-based refineries following release of the CSB report. This study became the Baker Panel Report of the BP U.S. Refineries Independent Safety Review Panel (January 2007) (link). The Baker Panel consisted of fully qualified experts on industrial and technological safety who were well positioned to assess the safety management and culture of BP's operations. The Panel was specifically directed to refrain from analyzing responsibility for the Texas City disaster and to focus instead on the safety culture and management practices currently in place in BP's five refineries. Here are some central findings:
  • Based on its review, the Panel believes that BP has not provided effective process safety leadership and has not adequately established process safety as a core value across all its five U.S. refineries.
  • BP has not always ensured that it identified and provided the resources required for strong process safety performance at its U.S. refineries. Despite having numerous staff at different levels of the organization that support process safety, BP does not have a designated, high-ranking leader for process safety dedicated to its refining business.
  • The Panel also found that BP did not effectively incorporate process safety into management decision-making. BP tended to have a short-term focus, and its decentralized management system and entrepreneurial culture have delegated substantial discretion to U.S. refinery plant managers without clearly defining process safety expectations, responsibilities, or accountabilities.
  • BP has not instilled a common, unifying process safety culture among its U.S. refineries.
  • While all of BP’s U.S. refineries have active programs to analyze process hazards, the system as a whole does not ensure adequate identification and rigorous analysis of those hazards.
  • The Panel’s technical consultants and the Panel observed that BP does have internal standards and programs for managing process risks. However, the Panel’s examination found that BP’s corporate safety management system does not ensure timely compliance with internal process safety standards and programs at BP’s five U.S. refineries.
  • The Panel also found that BP’s corporate safety management system does not ensure timely implementation of external good engineering practices that support and could improve process safety performance at BP’s five U.S. refineries. (Summary of findings, xii-xiii)
These findings largely validate and support the critical assessment of BP's safety management practices in the CSB report.

It seems clear that an important part of the substantial improvement in aviation safety over the past fifty years is the effective investigation and reporting provided by the NTSB. The NTSB is an authoritative and respected bureau of experts whom the public trusts to discover the causes of aviation disasters. The CSB has a much shorter institutional history -- it was authorized in 1990 and began operations in 1998 -- but we need to ask a parallel question here as well: does the CSB provide a strong lever for improving safety practices in the chemical and petrochemical industries through its accident investigations, or are industry actors largely free to continue their poor management practices indefinitely, secure in the knowledge that large chemical accidents are rare and the costs of occasional liability judgments are manageable?

Monday, August 12, 2019

Testing the NRC


Serious nuclear accidents are rare but potentially devastating to people, land, and agriculture. (It appears that minor to moderate nuclear accidents are not nearly so rare, as James Mahaffey shows in Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima.) Three Mile Island, Chernobyl, and Fukushima are disasters that have given the public a better idea of how nuclear power reactors can go wrong, with serious and long-lasting effects. Reactors are also among the most complex industrial systems around, and accidents are common in complex, tightly coupled industrial systems. So how can we have reasonable confidence in the safety of nuclear reactors?

One possible answer is that we cannot have reasonable confidence at all. However, there are hundreds of large nuclear reactors in the world, and 98 active nuclear reactors in the United States alone. So it is critical to have highly effective safety regulation and oversight of the nuclear power industry. In the United States that regulatory authority rests with the Nuclear Regulatory Commission. So we need to ask the question: how good is the NRC at regulating, inspecting, and overseeing the safety of nuclear reactors in our country?

One would suppose that there would be excellent and detailed studies within the public administration literature that attempt to answer this question, and we might expect that researchers within the field of science and technology studies might have addressed it as well. However, this seems not to be the case. I have yet to find a full-length study of the NRC as a regulatory agency, and the NRC is mentioned only twice in the 600-plus page Oxford Handbook of Regulation. However, we can get an oblique view of the workings of the NRC through other sources. One set of observers who are in a position to evaluate the strengths and weaknesses of the NRC are nuclear experts who are independent of the nuclear industry. For example, publications from the Bulletin of the Atomic Scientists include many detailed reports on the operations and malfunctions of nuclear power plants that permit a degree of assessment of the quality of oversight provided by the NRC (link). And a detailed (and scathing) report by the General Accounting Office on the near-disaster at the Davis-Besse nuclear power plant is another expert assessment of NRC functioning (link).

David Lochbaum, Edwin Lyman, and Susan Stranahan fit the description of highly qualified independent scientists and observers, and their detailed case history of the Fukushima disaster provides a degree of insight into the workings of the NRC as well as the Japanese nuclear safety agency. Their book, Fukushima: The Story of a Nuclear Disaster, is jointly written by the authors under the auspices of the Union of Concerned Scientists, one of the best informed networks of nuclear experts we have in the United States. Lochbaum is director of the UCS Nuclear Safety Project and author of Nuclear Waste Disposal Crisis. The book provides a careful and scientific treatment of the unfolding of the Fukushima disaster hour by hour, and highlights the background errors that were made by regulators and owners in the design and operation of the Fukushima plant as well. The book makes numerous comparisons to the current workings of the NRC which permit a degree of assessment of the US regulatory agency.

In brief, Lochbaum and his co-authors appear to have a reasonably high opinion of the technical staff, scientists, and advisors who prepare recommendations for NRC consideration, but a low opinion of the willingness of the five commissioners to adopt costly recommendations that are strongly opposed by the nuclear industry. The authors express frustration that the nuclear safety agencies in both countries appear to have failed to have learned important lessons from the Fukushima disaster:
“The [Japanese] government simply seems in denial about the very real potential for another catastrophic accident.... In the United States, the NRC has also continued operating in denial mode. It turned down a petition requesting that it expand emergency evacuation planning to twenty-five miles from nuclear reactors despite the evidence at Fukushima that dangerous levels of radiation can extend at least that far if a meltdown occurs. It decided to do nothing about the risk of fire at over-stuffed spent fuel pools. And it rejected the main recommendation of its own Near-Term Task Force to revise its regulatory framework. The NRC and the industry instead are relying on the flawed FLEX program as a panacea for any and all safety vulnerabilities that go beyond the “design basis.” (kl 117)
They believe that the NRC is excessively vulnerable to influence by the nuclear power industry and to elected officials who favor economic growth over hypothetical safety concerns, with the result that it tends to err in favor of the economic interests of the industry.
Like many regulatory agencies, the NRC occupies uneasy ground between the need to guard public safety and the pressure from the industry it regulates to get off its back. When push comes to shove in that balancing act, the nuclear industry knows it can count on a sympathetic hearing in Congress; with millions of customers, the nation’s nuclear utilities are an influential lobbying group. (36)
They note that the NRC has consistently declined to undertake more substantial reform of its approach to safety, as recommended by its own panel of experts. The key recommendation of the Near-Term Task Force (NTTF) was that the regulatory framework should be anchored in a more strenuous standard of accident prevention, requiring plant owners to address "beyond-design-basis accidents". The Fukushima earthquake and tsunami events were "beyond-design-basis"; nonetheless, they occurred, and the NTTF recommended that safety planning should incorporate consideration of these unlikely but possible events.
The task force members believed that once the first proposal was implemented, establishing a well-defined framework for decision making, their other recommendations would fall neatly into place. Absent that implementation, each recommendation would become bogged down as equipment quality specifications, maintenance requirements, and training protocols got hashed out on a case-by-case basis. But when the majority of the commissioners directed the staff in 2011 to postpone addressing the first recommendation and focus on the remaining recommendations, the game was lost even before the opening kickoff. The NTTF’s Recommendation 1 was akin to the severe accident rulemaking effort scuttled nearly three decades earlier, when the NRC considered expanding the scope of its regulations to address beyond-design accidents. Then, as now, the perceived need for regulatory “discipline,” as well as industry opposition to an expansion of the NRC’s enforcement powers, limited the scope of reform. The commission seemed to be ignoring a major lesson of Fukushima Daiichi: namely, that the “fighting the last war” approach taken after Three Mile Island was simply not good enough. (kl 253)
As a result, "regulatory discipline" (essentially the pro-business ideology that holds that regulation should be kept to a minimum) prevailed, and the primary recommendation was tabled. The issue was of great importance, in that it involved setting the standard of risk and accident severity for which the owner needed to plan. By staying with the lower standard, the NRC left the door open to the most severe kinds of accidents.

The task force also addressed the issue of "delegated regulation," in which the agency defers to the industry on many issues of certification and risk assessment. (Here is the FAA's definition of delegated regulation; link.)
The task force also wanted the NRC to reduce its reliance on industry voluntary initiatives, which were largely outside of regulatory control, and instead develop its own “strong program for dealing with the unexpected, including severe accidents.” (252)
Other more detail-oriented recommendations were refused as well -- for example, a requirement that boiling water reactors be fitted with reliable hardened containment vents, incorporating filters to remove radioactive gas before venting.
But what might seem a simple, logical decision—install a $15 million filter to reduce the chance of tens of billions of dollars’ worth of land contamination as well as harm to the public—got complicated. The nuclear industry launched a campaign to persuade the NRC commissioners that filters weren’t necessary. A key part of the industry’s argument was that plant owners could reduce radioactive releases more effectively by using FLEX equipment.... In March 2013, they voted 3–2 to delay a requirement that filters be installed, and recommended that the staff consider other alternatives to prevent the release of radiation during an accident. (254)
The NRC voted against requiring filters on containment vents, a decision based on industry arguments that the filters were excessively costly and unnecessary.

The authors argue that the NRC needs to significantly rethink its standards of safety and foreseeable risk.
What is needed is a new, commonsense approach to safety, one that realistically weighs risks and counterbalances them with proven, not theoretical, safety requirements. The NRC must protect against severe accidents, not merely pretend they cannot occur. (257)
Their recommendation is to make use of an existing and rigorous plan for reactor safety incorporating the results of "severe accident mitigation alternatives" (SAMA) analysis already performed -- but largely disregarded.

However, they are not optimistic that the NRC will be willing to undertake these substantial changes that would significantly enhance safety and make a Fukushima-scale disaster less likely. Reporting on a post-Fukushima conference sponsored by the NRC, they write:
But by now it was apparent that little sentiment existed within the NRC for major changes, including those urged by the commission’s own Near-Term Task Force to expand the realm of “adequate protection.”
Lochbaum and his co-authors also make an intriguing series of points about the use of modeling and simulation in the effort to evaluate safety in nuclear plants. They agree that simulation methods are an essential part of the toolkit for nuclear engineers seeking to evaluate accident scenarios; but they argue that the simulation tools currently available (or perhaps ever available) fall far short of the precision sometimes attributed to them. So simulation tools sometimes give a false sense of confidence in the existing safety arrangements in a particular setting.
Even so, the computer simulations could not reproduce numerous important aspects of the accidents. And in many cases, different computer codes gave different results. Sometimes the same code gave different results depending on who was using it. The inability of these state-of-the-art modeling codes to explain even some of the basic elements of the accident revealed their inherent weaknesses—and the hazards of putting too much faith in them. (263)
In addition to specific observations about the functioning of the NRC, the authors identify chronic failures in the Japanese nuclear power system that should be of concern in the United States as well. Conflict of interest, falsification of records, and punishment of whistleblowers were part of the culture of nuclear power and nuclear regulation in Japan, and these problems can arise in the United States too. Here are examples of the problems they identify in the Japanese nuclear power system; it is a valuable exercise to ask whether the same issues arise in the US regulatory environment.

Non-compliance and falsification of records in Japan
Headlines scattered over the decades built a disturbing picture. Reactor owners falsified reports. Regulators failed to scrutinize safety claims. Nuclear boosters dominated safety panels. Rules were buried for years in endless committee reviews. “Independent” experts were financially beholden to the nuclear industry for jobs or research funding. “Public” meetings were padded with industry shills posing as ordinary citizens. Between 2005 and 2009, as local officials sponsored a series of meetings to gauge constituents’ views on nuclear power development in their communities, NISA encouraged the operators of five nuclear plants to send employees to the sessions, posing as members of the public, to sing the praises of nuclear technology. (46)
The authors do not provide evidence about similar practices in the United States, though the history of the Davis-Besse nuclear plant in Ohio suggests that similar things happen in the US industry. Charles Perrow treats the Davis-Besse near-disaster in a fair amount of detail; link. Descriptions of the Davis-Besse nuclear incident can be found here, here, here, and here.
Conflict of interest
Shortly after the Fukushima accident, Japan’s Yomiuri Shimbun reported that thirteen former officials of government agencies that regulate energy companies were currently working for TEPCO or other power firms. Another practice, known as amaagari, “ascent to heaven,” spins the revolving door in the opposite direction. Here, the nuclear industry sends retired nuclear utility officials to government agencies overseeing the nuclear industry. Again, ferreting out safety problems is not a high priority.
Punishment of whistle-blowers
In 2000, Kei Sugaoka, a nuclear inspector working for GE at Fukushima Daiichi, noticed a crack in a reactor’s steam dryer, which extracts excess moisture to prevent harm to the turbine. TEPCO directed Sugaoka to cover up the evidence. Eventually, Sugaoka notified government regulators of the problem. They ordered TEPCO to handle the matter on its own. Sugaoka was fired. (47)
There is a similar story in the Davis-Besse plant history.

Factors that interfere with effective regulation

In summary: there appear to be several structural factors that make nuclear regulation less effective than it needs to be.

First is the fact of the political power and influence of the nuclear industry itself. This was a major factor in the background of the Chernobyl disaster as well, where generals and party officials pushed incessantly for rapid completion of reactors; Serhii Plokhy, Chernobyl: The History of a Nuclear Catastrophe. Lochbaum and his collaborators demonstrate the power that TEPCO had in shaping the regulations under which it built the Fukushima complex, including the assumptions that were incorporated about earthquake risk and tsunami risk. Charles Perrow demonstrates a comparable ability by the nuclear industry in the United States to influence the rules and procedures that govern its use of nuclear power (link). This influence permits the owners of nuclear power plants to shape the content of regulation as well as the systems of inspection and oversight that the agency adopts.

A related factor is the set of influences and lobbying points that come from the needs of the economy and the production pressures of the energy industry. (Interestingly enough, this was also a major influence on Soviet decision-making in choosing the graphite-moderated light water reactor for use at Chernobyl and numerous other plants in the 1960s; Serhii Plokhy, Chernobyl: The History of a Nuclear Catastrophe.)

Third is the fact emphasized by Charles Perrow that the NRC is primarily governed by Congress, and legislators are themselves vulnerable to the pressures and blandishments of the industry and demands for a low-regulation business environment. This makes it difficult for the NRC to carry out its role as independent guarantor of the health and safety of the public. Here is Perrow's description of the problem in The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters (quoting Lochbaum from a 2004 Union of Concerned Scientists report):
With utilities profits falling when the NRC got tough after the Time story, the industry not only argued that excessive regulation was the problem, it did something about what it perceived as harassment. The industry used the Senate subcommittee that controls the agency’s budget, headed by a pro-nuclear Republican senator from New Mexico, Pete Domenici. Using the committee’s funds, he commissioned a special study by a consulting group that was used by the nuclear industry. It recommended cutting back on the agency’s budget and size. Using the consultant’s report, Domenici “declared that the NRC could get by just fine with a $90 million budget cut, 700 fewer employees, and a greatly reduced inspection effort.” (italics supplied) The beefed-up inspections ended soon after the threat of budget cuts for the agency. (Mangels 2003) And the possibility for public comment was also curtailed, just for good measure. Public participation in safety issues once was responsible for several important changes in NRC regulations, says David Lochbaum, a nuclear safety engineer with the Union of Concerned Scientists, but in 2004, the NRC, bowed to industry pressure and virtually eliminated public participation. (Lochbaum 2004) As Lochbaum told reporter Mangels, “The NRC is as good a regulator as Congress permits it to be. Right now, Congress doesn’t want a good regulator.”  (The Next Catastrophe, kl 2799)
A fourth important factor is a pervasive complacency within the professional nuclear community about the inherent safety of nuclear power. This is a factor mentioned by Lochbaum:
Although the accident involved a failure of technology, even more worrisome was the role of the worldwide nuclear establishment: the close-knit culture that has championed nuclear energy—politically, economically, socially—while refusing to acknowledge and reduce the risks that accompany its operation. Time and again, warning signs were ignored and near misses with calamity written off. (kl 87)
This is what we might call an ideological or cultural factor, in that it describes a mental framework for thinking about the technology and the public. It is a very real factor in decision-making, both within the industry and in the regulatory world. Senior nuclear engineering experts at major research universities seem to share the view that the public "fear" of nuclear power is entirely misplaced, given the safety record of the industry. They believe the technical problems of nuclear power generation have been solved, and that a rational society would embrace nuclear power without anxiety. For a rebuttal to this complacency, see Rose and Sweeting's report in the Bulletin of the Atomic Scientists, "How safe is nuclear power? A statistical study suggests less than expected" (link). Here is the abstract to their paper:
After the Fukushima disaster, the authors analyzed all past core-melt accidents and estimated a failure rate of 1 per 3704 reactor years. This rate indicates that more than one such accident could occur somewhere in the world within the next decade. The authors also analyzed the role that learning from past accidents can play over time. This analysis showed few or no learning effects occurring, depending on the database used. Because the International Atomic Energy Agency (IAEA) has no publicly available list of nuclear accidents, the authors used data compiled by the Guardian newspaper and the energy researcher Benjamin Sovacool. The results suggest that there are likely to be more severe nuclear accidents than have been expected and support Charles Perrow’s “normal accidents” theory that nuclear power reactors cannot be operated without major accidents. However, a more detailed analysis of nuclear accident probabilities needs more transparency from the IAEA. Public support for nuclear power cannot currently be based on full knowledge simply because important information is not available.
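To make concrete what a failure rate of 1 per 3704 reactor years implies, we can treat core-melt accidents as a Poisson process. The figure of roughly 440 operating reactors worldwide is my own assumption for illustration, not a number taken from the paper:

```python
import math

failure_rate = 1 / 3704  # core-melt accidents per reactor-year (Rose and Sweeting's estimate)
reactors = 440           # assumed number of operating reactors worldwide (illustrative)
years = 10

# Poisson mean: expected number of core-melt accidents over the period
expected = failure_rate * reactors * years

# Probability of at least one accident under the Poisson model
p_at_least_one = 1 - math.exp(-expected)

print(f"Expected core-melt accidents in {years} years: {expected:.2f}")
print(f"Probability of at least one: {p_at_least_one:.0%}")
```

Under these assumptions the expected number of accidents per decade comes out slightly above one, which is consistent with the abstract's claim that "more than one such accident could occur somewhere in the world within the next decade."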
Lee Clarke's book on planning for disaster on the basis of unrealistic models and simulations is relevant here. In Mission Improbable: Using Fantasy Documents to Tame Disaster, Clarke argues that much of the planning currently in place for large-scale disasters depends upon models, simulations, and scenario-building tools in which we should have very little confidence.

The complacency about nuclear safety mentioned here makes safety regulation more difficult and, paradoxically, makes the safe use of nuclear power less likely. Only when the risks are confronted with complete transparency and honesty will it be possible to design regulatory systems that do an acceptable job of ensuring the safety and health of the public.

In short, Lochbaum and his co-authors seem to provide evidence for the conclusion that the NRC is not in a position to perform its primary function: to establish a rational and scientifically well grounded set of standards for safe reactor design and operation. Further, its ability to enforce through inspection seems impaired as well by the power and influence the nuclear industry can deploy through Congress to resist its regulatory efforts. Good expert knowledge is canvassed through the NRC's processes; but the policy recommendations that flow from this scientific analysis are all too often short-circuited by the ability of the industry to fend off new regulatory requirements. Lochbaum's comment quoted by Perrow above seems all too true: “The NRC is as good a regulator as Congress permits it to be. Right now, Congress doesn’t want a good regulator.” 

It is very interesting to read the transcript of a 2014 hearing of the Senate Committee on Environment and Public Works titled "NRC'S IMPLEMENTATION OF THE FUKUSHIMA NEAR-TERM TASK FORCE RECOMMENDATIONS AND OTHER ACTIONS TO ENHANCE AND MAINTAIN NUCLEAR SAFETY" (link). Senator Barbara Boxer, California Democrat and chair of the committee, opened the meeting with these words:
Although Chairman Macfarlane said, when she announced her resignation, she had assured that ‘‘the agency implemented lessons learned from the tragic accident at Fukushima.’’ She said, ‘‘the American people can be confident that such an accident will never take place here.’’

I say the reality is not a single one of the 12 key safety recommendations made by the Fukushima Near-Term Task Force has been implemented. Some reactor operators are still not in compliance with the safety requirements that were in place before the Fukushima disaster. The NRC itself has completed action on only 4 of the 12 task force recommendations.
This is an alarming assessment, and one that is entirely in accord with the observations made by Lochbaum above.

Sunday, August 11, 2019

Hegel on labor and freedom



Hegel provided a powerful conception of human beings in the world and a rich conception of freedom. Key to that conception is the idea of self-creation through labor. Hegel had an "aesthetic" conception of labor: human beings confront the raw given of nature and transform it through intelligent effort into things they imagine will satisfy their needs and desires.

Alexandre Kojève's reading of Hegel is especially clear on Hegel's conception of labor and freedom. This is provided in Kojève's analysis of the Master-Slave section of Hegel's Phenomenology in his Introduction to the Reading of Hegel. The key idea is expressed in these terms:
The product of work is the worker's production. It is the realization of his project, of his idea; hence, it is he that is realized in and by this product, and consequently he contemplates himself when he contemplates it.... Therefore, it is by work, and only by work, that man realizes himself objectively as man. (Kojève, Introduction to the Reading of Hegel)
It seems to me that this framework of thought provides an interesting basis for a philosophy of technology as well. We might think of technology as collective and distributed labor: the processes through which human beings and organizations, through intelligence and initiative, collectively transform the world around them to better satisfy human needs. Labor and technology are emancipating and self-creating; they help to embody the conditions of freedom.

However, this assessment is only one side of the issue. Technologies are created for a range of reasons by a heterogeneous collection of actors: generating profits, buttressing power relations, serving corporate and political interests. It is true that new technologies often serve to extend the powers of the human beings who use them, or to satisfy their needs and wants more fully and efficiently. Profit motives and the market help to ensure that this is true to some extent; technologies and products need to be "desired" if they are to be sold and to generate profits for the businesses that produce them. But given the conflicts of interest that exist in human society, technologies also serve to extend the capacity of some individuals and groups to wield power over others.

This means that there is a dark side to labor and technology as well. There is the labor of un-freedom. Not all labor allows the worker to fulfill him- or herself through free exercise of talents. Instead the wage laborer is regulated by the time clock and the logic of cost reduction. This constitutes Marx's most fundamental critique of capitalism, as a system of alienation and exploitation of the worker as a human being. Here are a few paragraphs on alienated labor from Marx's Economic and Philosophical Manuscripts:
The worker becomes all the poorer the more wealth he produces, the more his production increases in power and size. The worker becomes an ever cheaper commodity the more commodities he creates. The devaluation of the world of men is in direct proportion to the increasing value of the world of things. Labor produces not only commodities; it produces itself and the worker as a commodity – and this at the same rate at which it produces commodities in general. 
This fact expresses merely that the object which labor produces – labor’s product – confronts it as something alien, as a power independent of the producer. The product of labor is labor which has been embodied in an object, which has become material: it is the objectification of labor. Labor’s realization is its objectification. Under these economic conditions this realization of labor appears as loss of realization for the workers; objectification as loss of the object and bondage to it; appropriation as estrangement, as alienation. 
So much does the labor’s realization appear as loss of realization that the worker loses realization to the point of starving to death. So much does objectification appear as loss of the object that the worker is robbed of the objects most necessary not only for his life but for his work. Indeed, labor itself becomes an object which he can obtain only with the greatest effort and with the most irregular interruptions. So much does the appropriation of the object appear as estrangement that the more objects the worker produces the less he can possess and the more he falls under the sway of his product, capital. 
All these consequences are implied in the statement that the worker is related to the product of labor as to an alien object. For on this premise it is clear that the more the worker spends himself, the more powerful becomes the alien world of objects which he creates over and against himself, the poorer he himself – his inner world – becomes, the less belongs to him as his own. It is the same in religion. The more man puts into God, the less he retains in himself. The worker puts his life into the object; but now his life no longer belongs to him but to the object. Hence, the greater this activity, the more the worker lacks objects. Whatever the product of his labor is, he is not. Therefore, the greater this product, the less is he himself. The alienation of the worker in his product means not only that his labor becomes an object, an external existence, but that it exists outside him, independently, as something alien to him, and that it becomes a power on its own confronting him. It means that the life which he has conferred on the object confronts him as something hostile and alien.
So does labor fulfill freedom or create alienation? Likewise, does technology emancipate and fulfill us, or does it enthrall and disempower us? Marx's answer to the first question is that it does both, depending on the social relations within which it is defined, managed, and controlled.

It would seem that we can answer the second question for ourselves, in much the same terms. Technology both extends freedom and constricts it. It is indeed true that technology can extend human freedom and realize human capacities. The use of technology and science in agriculture means that only a small percentage of people in advanced countries are farmers, and those who remain enjoy a high standard of living compared to the peasants of the past. Communication and transportation technologies create new possibilities for education, personal development, and self-expression. The enhancements to economic productivity created by technological advances have permitted a huge increase in the wellbeing of ordinary people in the past century -- a fact that permits us to pursue the things we care about more freely. But new technologies also can be used to control people, to monitor their thoughts and actions, and to wage war against them. More insidiously, new technologies may "alienate" us in new ways -- make us less social, less creative, and less independent of mind and thought.

So it seems clear on its face that technology is both favorable to the expansion of freedom and the exercise of human capacities, and unfavorable. It is the social relations through which technology is exercised and controlled that make the primary difference in which effect is more prominent.

Friday, August 9, 2019

The sociology of scientific discipline formation


There was a time in the philosophy of science when it was widely believed that scientific knowledge develops in a logical, linear way from observation and experiment to finished theory. This was something like the view presupposed by the founding logical positivists like Carnap and Reichenbach. But we now understand that the creation of a field of science is a social process with a great deal of contingency and path-dependence. The institutions through which science proceeds -- journals, funding agencies, academic departments, Ph.D. programs -- are all influenced by the particular interests and goals of a variety of actors, with the result that a field of science develops (or fails to develop) along highly contingent paths. Researchers in the history of science and the sociology of science and technology approach this problem in fairly different ways.

Scott Frickel's 2004 book Chemical Consequences: Environmental Mutagens, Scientist Activism, and the Rise of Genetic Toxicology represents an effort to trace out the circumstances of the emergence of a new scientific sub-discipline, genetic toxicology. "This book is a historical sociological account of the rise of genetic toxicology and the scientists' social movement that created it" (kl 37).

Frickel identifies two large families of approaches to the study of scientific disciplines: "institutionalist accounts of discipline and specialty formation" and "cultural studies of 'disciplinarity' [that] make few epistemological distinctions between the cognitive core of scientific knowledge and the social structures, practices, and processes that advance and suspend it" (kl 63). He identifies himself primarily with the former approach:
I draw from both modes of analysis, but I am less concerned with what postmodernist science studies call the micropolitics of meaning than I am with the institutional politics of knowledge. This perspective views discipline building as a political process that involves alliance building, role definition, and resource allocation. ... My main focus is on the structures and processes of decision making in science that influence who is authorized to make knowledge, what groups are given access to that knowledge, and how and where that knowledge is implemented (or not). (kl 71)
Crucial for Frickel's study of genetic toxicology is this family of questions: "How is knowledge produced, organized, and made credible 'in-between' existing disciplines? What institutional conditions nurture interdisciplinary work? How are porous boundaries controlled? Genetic toxicology's advocates pondered similar questions. Some complained that disciplinary ethnocentrism prevented many biologists' appreciation for the broader ecological implications of their own investigations.... " (kl 99).

The account Frickel provides involves all of the institutional contingency that we might hope for; at the same time, it is an encouraging account for anyone committed to the importance of scientific research in charting a set of solutions to the enormous problems humanity currently faces.
Led by geneticists, these innovations were also intensely interdisciplinary, reflecting the efforts of scientists working in academic, government, and industry settings whose training was rooted in more than thirty disciplines and departments ranging across the biological, agricultural, environmental, and health sciences. Although falling short of some scientists' personal visions of what this new science could become, their campaign had lasting impacts. Chief among these outcomes have been the emergence of a set of institutions, professional roles, and laboratory practices known collectively as "genetic toxicology." (kl 37)
Frickel gives prominence to the politics of environmental activism in the emergence and directions of the new discipline of genetic toxicology. Activists on campus and in the broader society gave impetus to the need for new scientific research on the various toxic effects of pesticides and industrial chemicals; but they also affected the formation of the scientists themselves.

Also of interest is an edited volume on interdisciplinary research in the sciences edited by Frickel, Mathieu Albert, and Barbara Prainsack, Investigating Interdisciplinary Collaboration: Theory and Practice across Disciplines. The book takes special notice of some of the failures of interdisciplinarity, and calls for a careful assessment of the successes and failures of interdisciplinary research projects.
 We think that these celebratory accounts give insufficient analytical attention to the insistent and sustained push from administrators, policy makers, and funding agencies to engineer new research collaborations across disciplines. In our view, the stakes of these efforts to seed interdisciplinary research and teaching "from above" are sufficiently high to warrant a rigorous empirical examination of the academic and social value of interdisciplinarity. (kl 187)
In their excellent introduction Frickel, Albert, and Prainsack write:
A major problem that one confronts in assuming the superiority of interdisciplinary research is a basic lack of studies that use comparative designs to establish that measurable differences in fact exist and to demonstrate the value of interdisciplinarity relative to disciplinary research. (kl 303)
They believe that valuing interdisciplinary research projects for their own sake depends on several uncertain presuppositions: that interdisciplinary knowledge is better knowledge, that disciplines constrain interdisciplinary knowledge, and that interdisciplinary interactions are unconstrained by hierarchies. They believe that each of these assumptions is dubious.

Both books are highly interesting to anyone concerned with the development and growth of scientific knowledge. Once we abandoned the premises of logical positivism, we needed a more sophisticated understanding of how the domain of scientific research, empirical and theoretical, is constituted in actual social institutional settings. How is it that Western biology did better than Lysenko? How can environmental science re-establish its credentials for credibility with an increasingly skeptical public?  How are we to cope with the proliferation of pseudo-science in crucial areas -- health and medicine, climate, the feasibility of human habitation on Mars? Why should we be confident that the institutions of university science, peer review, tier-one journals, and National Academy selection committees succeed in guiding us to better, more veridical understandings of the empirical world around us?

Earlier posts have addressed topics concerning social studies of science: link, link, link.