Tuesday, October 23, 2018

Sexual harassment in academic contexts


Sexual harassment of women in academic settings is regrettably common and pervasive, and its consequences are grave. At the same time, it is a remarkably difficult problem to solve. The #MeToo movement has shed welcome light on specific individual offenders and has generated more awareness of some aspects of the problem of sexual harassment and misconduct. But we have not yet come to a public awareness of the changes needed to create a genuinely inclusive and non-harassing environment for women across the spectrum of mistreatment that has been documented. The most common institutional response following an incident is to create a program of training and reporting, with a public commitment to investigating complaints and enforcing university or institutional policies rigorously and transparently. These efforts are often well intentioned, but by themselves they are insufficient. They do not address the underlying institutional and cultural features that make sexual harassment so prevalent.

The problem of sexual harassment in institutional contexts is a difficult one because it derives from multiple features of the organization. The ambient culture of the organization is often an important facilitator of harassing behavior -- often enough a patriarchal culture that is deferential to the status of higher-powered individuals at the expense of lower-powered targets. Executive leadership in many institutions continues to be predominantly male, and these leaders bring with them a set of gendered assumptions that they often fail to recognize. The hierarchical nature of the power relations of an academic institution is conducive to mistreatment of many kinds, including sexual harassment. Bosses to administrative assistants, research directors to post-docs, thesis advisors to PhD candidates -- these unequal relations of power create a conducive environment for many varieties of sexual harassment. In each case the superior actor has enormous power and influence over the career prospects and work lives of the women over whom they exercise power. And then there are the habits of behavior that individuals bring to the workplace and the learning environment -- sometimes habits of masculine entitlement, sometimes disdainful attitudes towards female scholars or scientists, sometimes an underlying willingness to bully others that finds expression in an academic environment. (A recent issue of the Journal of Social Issues (link) devotes substantial research to the topic of toxic leadership in the tech sector and the "masculinity contest culture" that this group of researchers finds to be a root cause of the toxicity this sector displays for women professionals. Research by Jennifer Berdahl, Peter Glick, Natalya Alonso, and more than a dozen other scholars provides in-depth analysis of this common feature of work environments.)

The scope and urgency of the problem of sexual harassment in academic contexts is documented in excellent and expert detail in a recent study report by the National Academies of Sciences, Engineering, and Medicine (link). This report deserves prominent discussion at every university.

The study documents the frequency of sexual harassment in academic and scientific research contexts, and the data are sobering. Here are the results of two indicative studies at Penn State University System and the University of Texas System:




The Penn State survey indicates that 43.4% of undergraduates, 58.9% of graduate students, and 72.8% of medical students have experienced gender harassment, while 5.1% of undergraduates, 6.0% of graduate students, and 5.7% of medical students report having experienced unwanted sexual attention and sexual coercion. These are staggering results, both in terms of the absolute number of students who were affected and the negative effects that these experiences had on their ability to fulfill their educational potential. The University of Texas study shows a similar pattern, but also permits us to see meaningful differences across fields of study. Engineering and medicine provide significantly more harmful environments for female students than non-STEM and science disciplines. The authors make a particularly worrisome observation about medicine in this context:
The interviews conducted by RTI International revealed that unique settings such as medical residencies were described as breeding grounds for abusive behavior by superiors. Respondents expressed that this was largely because at this stage of the medical career, expectation of this behavior was widely accepted. The expectations of abusive, grueling conditions in training settings caused several respondents to view sexual harassment as a part of the continuum of what they were expected to endure. (63-64)
The report also does an excellent job of defining the scope of sexual harassment. Media discussion of sexual harassment and misconduct focuses primarily on egregious acts of sexual coercion. However, the authors of the NAS study note that experts currently include sexual coercion, unwanted sexual attention, and gender harassment under this category of harmful interpersonal behavior. The largest sub-category is gender harassment:
"a broad range of verbal and nonverbal behaviors not aimed at sexual cooperation but that convey insulting, hostile, and degrading attitudes about" members of one gender (Fitzgerald, Gelfand, and Drasgow 1995, 430). (25)
The "iceberg" diagram (p. 32) captures the range of behaviors encompassed by the concept of sexual harassment. (See Leskinen, Cortina, and Kabat 2011 for extensive discussion of the varieties of sexual harassment and the harms associated with gender harassment.)


The report emphasizes organizational features as a root cause of a harassment-friendly environment.
By far, the greatest predictors of the occurrence of sexual harassment are organizational. Individual-level factors (e.g., sexist attitudes, beliefs that rationalize or justify harassment, etc.) that might make someone decide to harass a work colleague, student, or peer are surely important. However, a person that has proclivities for sexual harassment will have those behaviors greatly inhibited when exposed to role models who behave in a professional way as compared with role models who behave in a harassing way, or when in an environment that does not support harassing behaviors and/or has strong consequences for these behaviors. Thus, this section considers some of the organizational and environmental variables that increase the risk of sexual harassment perpetration. (46)
Some of the organizational factors that they refer to include the extreme gender imbalance that exists in many professional work environments, the perceived absence of organizational sanctions for harassing behavior, work environments where sexist views and sexually harassing behavior are modeled, and power differentials (47-49). The authors make the point that gender harassment is chiefly aimed at indicating disrespect towards the target rather than sexual exploitation. This has an important implication for institutional change. An institution that creates a strong core set of values emphasizing civility and respect is less conducive to gender harassment. They summarize this analysis in the statement of findings as well:
Organizational climate is, by far, the greatest predictor of the occurrence of sexual harassment, and ameliorating it can prevent people from sexually harassing others. A person more likely to engage in harassing behaviors is significantly less likely to do so in an environment that does not support harassing behaviors and/or has strong, clear, transparent consequences for these behaviors. (50)
So what can a university or research institution do to reduce and eliminate the likelihood of sexual harassment for women within the institution? Several remedies seem fairly obvious, though difficult.
  • Establish a pervasive expectation of civility and respect in the workplace and the learning environment
  • Diffuse the concentrations of power that give potential harassers the opportunity to harass women within their domains
  • Ensure that the institution honors its values by rejecting the "star culture" common in universities that makes high-prestige university members untouchable
  • Be vigilant and transparent about the processes of investigation and adjudication through which complaints are considered
  • Create effective processes that ensure that complainants do not suffer retaliation
  • Consider candidates' receptivity to the values of a respectful, civil, and non-harassing environment during the hiring and appointment process (including research directors, department and program chairs, and other positions of authority)
  • Address the gender imbalance that may exist in leadership circles
As the authors put the point in the final chapter of the report:
Preventing and effectively addressing sexual harassment of women in colleges and universities is a significant challenge, but we are optimistic that academic institutions can meet that challenge--if they demonstrate the will to do so. This is because the research shows what will work to prevent sexual harassment and why it will work. A systemwide change to the culture and climate in our nation's colleges and universities can stop the pattern of harassing behavior from impacting the next generation of women entering science, engineering, and medicine. (169)

Sunday, October 21, 2018

System effects


Quite a few posts here have focused on the question of emergence in social ontology, the idea that there are causal processes and powers at work at the level of social entities that do not correspond to similar properties at the individual level. Here I want to raise a related question, the notion that an important aspect of the workings of the social world derives from "system effects" of the organizations and institutions through which social life transpires. A system accident or effect is one that derives importantly from the organization and configuration of the system itself, rather than the specific properties of the units.

What are some examples of system effects? Consider these phenomena:
  • Flash crashes in stock markets as a result of automated trading
  • Under-reporting of land values in agrarian fiscal regimes 
  • Grade inflation in elite universities 
  • Increase in product defect frequency following a reduction in inspections 
  • Rising frequency of industrial errors at the end of work shifts 
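Each of these examples has the same logic: hold the actors fixed, change a system parameter, and the aggregate outcome shifts. Here is a minimal simulation sketch of the inspection example; the parameters (error rate, inspection coverage) are invented for illustration:

```python
import random

def shipped_defects(inspection_rate, n_items=10_000, error_rate=0.02, seed=0):
    """Count defective items that reach customers: each item may acquire a
    defect, and inspection catches a defective item with probability
    inspection_rate."""
    rng = random.Random(seed)  # same seed: identical "workers" in every run
    shipped = 0
    for _ in range(n_items):
        defective = rng.random() < error_rate
        inspected = rng.random() < inspection_rate
        if defective and not inspected:
            shipped += 1
    return shipped

high_coverage = shipped_defects(inspection_rate=0.9)
low_coverage = shipped_defects(inspection_rate=0.3)
```

The workers' error rate is identical in both runs; only the system's inspection configuration differs, and yet the defect frequency reaching customers rises. That is the sense in which the outcome belongs to the system rather than to the units.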
Here is how Nancy Leveson describes systems causation in Engineering a Safer World: Systems Thinking Applied to Safety:
Safety approaches based on systems theory consider accidents as arising from the interactions among system components and usually do not specify single causal variables or factors. Whereas industrial (occupational) safety models and event chain models focus on unsafe acts or conditions, classic system safety models instead look at what went wrong with the system's operation or organization to allow the accident to take place. (KL 977)
Charles Perrow offers a taxonomy of systems as a hierarchy of composition in Normal Accidents: Living with High-Risk Technologies:
Consider a nuclear plant as the system. A part will be the first level -- say a valve. This is the smallest component of the system that is likely to be identified in analyzing an accident. A functionally related collection of parts, as, for example, those that make up the steam generator, will be called a unit, the second level. An array of units, such as the steam generator and the water return system that includes the condensate polishers and associated motors, pumps, and piping, will make up a subsystem, in this case the secondary cooling system. This is the third level. A nuclear plant has around two dozen subsystems under this rough scheme. They all come together in the fourth level, the nuclear plant or system. Beyond this is the environment. (65)
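Perrow's compositional hierarchy can be rendered as a simple nested data structure. The sketch below is my own illustration of his nuclear-plant example, not code from the book:

```python
# Perrow's levels: part -> unit -> subsystem -> system, with the
# environment beyond. Names follow the quoted passage.
plant = {
    "system": "nuclear plant",
    "subsystems": [
        {
            "name": "secondary cooling system",
            "units": [
                {"name": "steam generator", "parts": ["valve"]},
                {"name": "water return system",
                 "parts": ["condensate polishers", "motors", "pumps", "piping"]},
            ],
        },
        # ... roughly two dozen subsystems in all, per Perrow
    ],
}

def all_parts(system):
    """Flatten to the part level -- the smallest component likely to be
    identified in analyzing an accident."""
    return [part
            for sub in system["subsystems"]
            for unit in sub["units"]
            for part in unit["parts"]]
```

A unit-failure analysis stops at the output of `all_parts`; a systems analysis asks how the levels above interact.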
Large socioeconomic systems like capitalism and collectivized socialism have system effects -- chronic patterns of low productivity and corruption in the latter case, a tendency to inequality and immiseration in the former case. In each case the observed effect is the result of embedded features of property and labor in the two systems that result in specific kinds of outcomes. And an important dimension of social analysis is to uncover the ways in which ordinary actors pursuing ordinary goals within the two systems produce quite different outcomes at the level of the "mode of production". And these effects do not depend on there being a distinctive kind of actor in each system; in fact, one could interchange the actors and still find the same macro-level outcomes.

Here is a preliminary effort at a definition for this concept in application to social organizations:
A system effect is an outcome that derives from the incentives and opportunities embedded in a social arrangement, which lead ordinary actors to behave in ways that produce the hypothesized aggregate effect.
Once we see what the incentive and opportunity structures are, we can readily see why some fraction of actors modify their behavior in ways that lead to the outcome. In this respect the system is the salient causal factor rather than the specific properties of the actors -- change the system properties and you will change the social outcome.
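A toy model of the land-value under-reporting example listed above makes this concrete. All the numbers below are invented; the model only illustrates that permuting the actors leaves the aggregate outcome unchanged, while changing the system's enforcement parameter changes it:

```python
import random

def underreporting_rate(audit_prob, honesty_traits, tax_saved=100, fine=400):
    """An actor under-reports iff the expected gain from cheating exceeds
    the expected penalty by more than a personal honesty threshold."""
    cheaters = sum(1 for honesty in honesty_traits
                   if tax_saved - audit_prob * fine > honesty)
    return cheaters / len(honesty_traits)

rng = random.Random(1)
traits = [rng.uniform(-50, 50) for _ in range(1000)]  # heterogeneous actors

lax = underreporting_rate(audit_prob=0.2, honesty_traits=traits)
strict = underreporting_rate(audit_prob=0.3, honesty_traits=traits)

# "Interchange the actors": the same individuals, permuted across positions.
shuffled = traits[:]
random.Random(2).shuffle(shuffled)
```

Shuffling which actor sits where does nothing to the aggregate; raising the audit probability does. The system parameter, not the cast of actors, is the salient causal factor.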

When we refer to system effects we often have unintended consequences in mind -- unintended both by the individual actors and the architects of the organization or practice. But this is not essential; we can also think of examples of organizational arrangements that were deliberately chosen or designed to bring about the given outcome. In particular, a given system effect may be intended by the designer and unintended by the individual actors. But when the outcomes in question are clearly dysfunctional or "catastrophic", it is natural to assume that they are unintended. (This, however, is one of the specific areas of insight that comes out of the new institutionalism: the dysfunctional outcome may be favorable for some sets of actors even as it is unfavorable for the workings of the system as a whole.)
 
Another common assumption about system effects is that they are remarkably stable through changes of actors and efforts to reverse the given outcome. In this sense they are thought to be somewhat beyond the control of the individuals who make up the system. The only promising way of undoing the effect is to change the incentives and opportunities that bring it about. But to the extent that a given configuration has emerged along with supporting mechanisms protecting it from deformation, changing the configuration may be frustratingly difficult.

Safety and its converse are often described as system effects. By this is often meant two things. First, there is the important insight that traditional accident analysis favors "unit failure" at the expense of more systemic factors. And second, there is the idea that accidents and failures often result from "tightly linked" features of systems, both social and technical, in which variation in one component of a system can have unexpected consequences for the operation of other components of the system. Charles Perrow describes loose and tight coupling in social systems in Normal Accidents (89 ff.).

Friday, October 5, 2018

Social mobility disaggregated


There is an exciting and valuable new contribution from the research group around Raj Chetty, Nathan Hendren, and John Friedman, this time on the topic of neighborhood-level social mobility. (Earlier work measured the contribution that university education across the country makes to social mobility. This work is presented on the Opportunity Insights website; link, link. Here is an earlier post on that work; link.) In the recently released work Chetty and his colleagues have used census data to compare incomes of parents and children across the country by neighborhood of birth, with the ability to disaggregate by race and gender, and the results are genuinely staggering. Here is a report on the project on the US Census website; link. The interactive dataset and mapping app are provided here (link). The study identifies neighborhoods of origin; characteristics of parents and neighborhoods; and characteristics of children.

Here are screenshots of metropolitan Detroit representing the individual incomes of the children (as adults) based on their neighborhoods of origin for all children, black children, and white children. (Of course a percentage of these individuals no longer live in the original neighborhood.) There are 24 outcome variables included as well as 13 neighborhood characteristics, and it is possible to create maps based on multiple combinations of these variables. It is also possible to download the data.




Children born in Highland Park, Michigan earned an average individual income as adults in 2014-15 of $18K; children born in Plymouth, Michigan earned an average individual income as adults of $42K. It is evident that these differences in economic outcomes are highly racialized; in many of the tracts in the Detroit area there are "insufficient data" for either black or white individuals to provide average data for these sub-populations in the given areas. This reflects the substantial degree of racial segregation that exists in the Detroit metropolitan area. (The project provides a special study of opportunity in Detroit, "Finding Opportunity in Detroit".)

This dataset is genuinely eye-opening for anyone interested in the workings of economic opportunity in the United States. It is also valuable for public policy makers at the local and higher levels who have an interest in improving outcomes for children in poverty. It is possible to use the many parameters included in the data to probe for obstacles to socioeconomic progress that might be addressed through targeted programs of opportunity enhancement.
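As a sketch of the kind of probing this paragraph describes: the rows and field names below are hypothetical stand-ins for the tract-level export (the real download has its own codebook), but the aggregation pattern -- group children's adult incomes by subgroup and compare -- is the relevant one:

```python
from statistics import mean

# Hypothetical rows shaped like a tract-level export: tract identifier,
# subgroup, and mean adult individual income of children born there.
rows = [
    {"tract": "A", "subgroup": "black", "kid_income": 17_500},
    {"tract": "A", "subgroup": "white", "kid_income": 21_000},
    {"tract": "B", "subgroup": "black", "kid_income": 19_000},
    {"tract": "B", "subgroup": "white", "kid_income": 43_500},
]

def mean_income_by_subgroup(rows):
    """Average the children's adult incomes within each subgroup."""
    groups = {}
    for row in rows:
        groups.setdefault(row["subgroup"], []).append(row["kid_income"])
    return {g: mean(incomes) for g, incomes in groups.items()}

gaps = mean_income_by_subgroup(rows)
```

The same pattern extends to any of the other outcome variables or neighborhood characteristics in the dataset, which is what makes it useful for targeting opportunity-enhancement programs.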

(Here is a Brookings description of the social mobility project's central discoveries; link.)


Wednesday, October 3, 2018

Emotions as neurophysiological constructs


Are emotions real? Are they hardwired to our physiology? Are they pre-cognitive and purely affective? Was Darwin right in speculating that facial expressions are human universals that accurately represent a small repertoire of emotional experiences (The Expression of the Emotions in Man and Animals)? Or instead are emotions a part of the cognitive output of the brain, influenced by context, experience, expectation, and mental framework? Lisa Feldman Barrett is an accomplished neuroscientist who addresses all of these questions in her recent book How Emotions Are Made: The Secret Life of the Brain, based on several decades of research on the emotions. The book is highly interesting, and has important implications for the social sciences more broadly.

Barrett's core view is that the received theory of the emotions -- that they are hardwired and correspond to specific if unknown neurological groups, connected to specific physiological and motor responses -- is fundamentally wrong. She marshals a great deal of experimental evidence against that theory. In its place she argues that emotional responses and experiences are the result of mental, conceptual, and cognitive construction by our central nervous system, entirely analogous to our ability to find meaning in a visual field of light and dark areas in order to resolve it as a bee (her example). The emotions are like perception more generally -- they result from an active process in which the brain attempts to impose order and pattern on sensory stimulation, a process she refers to as "simulation". She refers to this as the theory of constructed emotion (30). In brief:
Emotions are not reactions to the world. You are not a passive receiver of sensory input but an active constructor of your emotions. From sensory input and past experience, your brain constructs meaning and prescribes action. If you didn't have concepts that represent your past experience, all your sensory inputs would just be noise. (31)
And further:
Particular concepts like "Anger" and "Distrust" are not genetically determined. Your familiar emotion concepts are built-in only because you grew up in a particular social context where those emotion concepts are meaningful and useful, and your brain applies them outside your awareness to construct your experiences. (33)
This theory has much in common with theorizing about the nature of perception and thought within cognitive psychology, where the constructive nature of perception and representation has been a core tenet. Paul Kolers' motion perception experiments in the 1960s and 1970s established that perception is an active and constructive process, not a simple rendering of information from the retina into visual diagrams in the mind (Aspects of Motion Perception). And Daniel Dennett's Consciousness Explained argues for a "multiple drafts" theory of conscious experience which once again emphasizes the active and constructive nature of consciousness.

One implication of Barrett's theory is that emotions are concept-dependent. We need to learn the terms for emotions in our ambient language community before we can experience them. The emotions we experience are conceptually loaded and structured.
People who exhibit low emotional granularity will have only a few emotion concepts. In English, they might have words in their vocabulary like "sadness," "fear," "guilt," "shame," "embarrassment," "irritation," "anger," and "contempt," but those words all correspond to the same concept whose goal is something like "feeling unpleasant." This person has a few tools -- a hammer and Swiss Army knife. (106)
In a later chapter Barrett takes her theory in a Searle-like direction by emphasizing the inherent and irreducible constructedness of social facts and social relations (chapter 7). Without appropriate concepts we cannot understand or represent the behaviors and interactions of people around us; and their interactions depend inherently on the conceptual systems or frames within which we place their actions. Language, conceptual frames, and collective intentionality are crucial constituents of social facts, according to this perspective. I find Searle's arguments on this subject less than convincing (link), and I'm tempted to think that Barrett is going out on a limb by embracing his views more extensively than needed for her own theory of the emotions.

I find Barrett's work interesting for a number of reasons. One is the illustration it provides of human plasticity and heterogeneity. "Any category of emotion such as "Happiness" or "Guilt" is filled with variety" (35). Another is the methodological sophistication Barrett demonstrates in her refutation of two thousand years of received wisdom about the emotions, from Aristotle and Plato to Paul Ekman and colleagues. This sophistication extends to her effort to avoid language in describing emotions and research strategies that embeds the ontology of the old view -- an ontology that reifies particular emotions in the head and body of the other human being (40). She correctly observes that language like "detecting emotion X in the subject" implies that the psychological condition exists as a fixed reality in the subject; whereas the whole point of her theory is that the experience of disgust or happiness is a transient and complex construction by the brain behind the scenes of our conscious experience. She is "anti-realist" in her treatment of emotion. "We don't recognize emotions or identify emotions: we construct our own emotional experiences, and our perceptions of others' emotions, on the spot, as needed, through a complex interplay of systems" (40). And finally, her theory of emotion as a neurophysiological construct has a great deal of credibility -- its internal logic, its fit with current understandings of the central nervous system, its convergence with cognitive psychology and perception theory, and the range of experimental evidence that Barrett brings to bear.

Sunday, September 30, 2018

Philosophy and the study of technology failure

image: Adolf von Menzel, The Iron Rolling Mill (Modern Cyclopes)

Readers may have noticed that my current research interests have to do with organizational dysfunction and large-scale technology failures. I am interested in probing the ways in which organizational failures and dysfunctions have contributed to large accidents like Bhopal, Fukushima, and the Deepwater Horizon disaster. I've had to confront an important question in taking on this research interest: what can philosophy bring to the topic that would not be better handled by engineers, organizational specialists, or public policy experts?

One answer is the diversity of viewpoint that a philosopher can bring to the discussion. It is evident that technology failures invite analysis from all of these specialized experts, and more. But there is room for productive contribution from reflective observers who are not committed to any of these disciplines. Philosophers have a long history of taking on big topics outside the defined canon of "philosophical problems", and often those engagements have proven fruitful. In this particular instance, philosophy can look at organizations and technology in a way that is more likely to be interdisciplinary, and perhaps can help to see dimensions of the problem that are less apparent from a purely disciplinary perspective.

There is also a rationale based on the terrain of the philosophy of science. Philosophers of biology have usually attempted to learn as much about the science of biology as they can manage, but they lack the level of expertise of a research biologist, and it is rare for a philosopher to make an original contribution to the scientific biological literature. Nonetheless it is clear that philosophers have a great deal to add to scientific research in biology. They can contribute to better reasoning about the implications of various theories, they can probe the assumptions about confirmation and explanation that are in use, and they can contribute to important conceptual disagreements. Biology is in a better state because of the work of philosophers like David Hull and Elliott Sober.

Philosophers have also made valuable contributions to science and technology studies, bringing a viewpoint that incorporates insights from the philosophy of science and a sensitivity to the social groundedness of technology. STS studies have proven to be a fruitful place for interaction between historians, sociologists, and philosophers. Here again, the concrete study of the causes and context of large technology failure may be assisted by a philosophical perspective.

There is also a normative dimension to these questions about technology failure for which philosophy is well prepared. Accidents hurt people, and sometimes the causes of accidents involve culpable behavior by individuals and corporations. Philosophers have a long history of contribution to these kinds of problems of fault, law, and just management of risks and harms.

Finally, it is realistic to say that philosophy has an ability to contribute to social theory. Philosophers can offer imagination and critical attention to the problem of creating new conceptual schemes for understanding the social world. This capacity seems relevant to the problem of describing, analyzing, and explaining large-scale failures and disasters.

The situation of organizational studies and accidents is in some ways more hospitable for contributions by a philosopher than other "wicked problems" in the world around us. An accident is complicated and complex but not particularly obscure. The field is unlike quantum mechanics or climate dynamics, which are inherently difficult for non-specialists to understand. The challenge with accidents is to identify a multi-layered analysis of the causes of the accident that permits observers to have a balanced and operative understanding of the event. And this is a situation where the philosopher's perspective is most useful. We can offer higher-level descriptions of the relative importance of different kinds of causal factors. Perhaps the role here is analogous to messenger RNA, providing a cross-disciplinary kind of communications flow. Or it is analogous to the role of philosophers of history who have offered gentle critique of the cliometrics school for its over-dependence on a purely statistical approach to economic history.

So it seems reasonable enough for a philosopher to attempt to contribute to this set of topics, even if the disciplinary expertise a philosopher brings is more weighted towards conceptual and theoretical discussions than undertaking original empirical research in the domain.

What I expect to be the central finding of this research is the idea that a pervasive and often unrecognized cause of accidents is a systemic organizational defect of some sort, and that it is enormously important to have a better understanding of common forms of these deficiencies. This is a bit analogous to a paradigm shift in the study of accidents. And this view has important policy implications. We can make disasters less frequent by improving the organizations through which technology processes are designed and managed.

Thursday, September 27, 2018

James Scott on the earliest states


In 2011 James Scott gave a pair of Tanner Lectures at Harvard. He had chosen a topic for which he felt he had a fairly good understanding, having taught on early agrarian societies throughout much of his career. The topic was the origins of the earliest states in human history. But as he explains in the preface to the 2017 book Against the Grain: A Deep History of the Earliest States, preparation for the lectures led him into brand new debates, bodies of evidence, and theories which were pretty much off his personal map. The resulting book is his effort to bring his own understanding up to date, and it is a terrific and engaging book.

Scott gives a quick summary of the view of early states, nutrition, agriculture, and towns that he shared with most historians of early civilizations up through a few decades ago. Hunter-gatherer human groups were the primary mode of living for tens of thousands of years at the dawn of civilization. Humanity learned to domesticate plants and animals, creating a basis for sedentary agriculture in hamlets and villages. With the increase in productivity associated with settled agriculture, it was possible for nascent political authorities to collect taxes and create political institutions. Agriculture and politics created the conditions that conduced to the establishment of larger towns, and eventually cities. And humanity surged forward in terms of population size and quality of life.

But, as Scott summarizes, none of these sequences has held up to current scholarship.
We thought ... that the domestication of plants and animals led directly to sedentism and fixed-field agriculture. It turns out that sedentism long preceded evidence of plant and animal domestication and that both sedentism and domestication were in place at least four millennia before anything like agricultural villages appeared. (xi)
...
The early states were fragile and liable to collapse, but the ensuing "dark ages" may often have marked an actual improvement in human welfare. Finally, there is a strong case to be made that life outside the state -- life as a "barbarian" -- may often have been materially easier, freer, and healthier than life at least for nonelites inside civilization. (xii)
There is an element of "who are we?" in the topic -- that is, what features define modern humanity? Here is Scott's most general answer:
A sense, then, for how we came to be sedentary, cereal-growing, livestock-rearing subjects governed by the novel institution we now call the state requires an excursion into deep history. (3)
Who we are, in this telling of the story, is a species of hominids who are sedentary, town-living, agriculture-dependent subjects of the state. But this characterization is partial (as of course Scott knows); we are also meaning-makers, power-wielders, war-fighters, family-cultivators, and sometimes rebels. And each of these other qualities of humanity leads us in the direction of a different kind of history, requiring a Clifford Geertz, a Michael Mann, a Tolstoy or a Marx to tell the story.

A particularly interesting part of the novel story about these early origins of human civilization that Scott provides has to do with the use of fire in the material lives of pre-technology humans -- hunters, foragers, and gatherers -- in a deliberate effort to sculpt the natural environment around them to concentrate food resources. According to Scott's readings of recent archeology and pre-agriculture history, human communities used fire to create the specific habitats that would entice their prey to make themselves readily available for the season's meals. He uses a striking phrase to capture the goal here -- reducing the radius of a meal. Early foragers literally reshaped the natural environments in which they lived.
What we have here is a deliberate disturbance ecology in which hominids create, over time, a mosaic of biodiversity and a distribution of desirable resources more to their liking. (40)
Most strikingly, Scott suggests a link between massive Native American use of fire to reduce forests, the sudden decline in their population from disease following contact with Europeans and consequent decline in burning, and the onset of the Little Ice Age (1500-1850) as a result of reduced CO2 production (39). Wow!

Using fire for cooking further reduced this "radius of the meal" by permitting early humans to consume a wider range of potential foods. And Scott argues that this innovation had evolutionary consequences for our hominid ancestors: human populations developed a digestive gut only one-third the length of that of other non-fire-using hominids. "We are a fire-adapted species" (42).

Scott makes an intriguing connection between grain-based agriculture and early states. The traditional narrative has it that pre-farming society was too low in food productivity to allow for sedentary life and dense populations. According to Scott this assumption is no longer supported by the evidence. Sedentary life based on foraging, gathering, and hunting was established several thousand years earlier than the development of agriculture. Gathering, farming, settled residence, and state power are all somewhat independent. In fact, Scott argues that these foraging communities were too well situated in their material environment to be vulnerable to a predatory state. "There was no single dominant resource that could be monopolized or controlled from the center, let alone taxed" (57). These communities generally were supported by three or four "food webs" that gave them substantial independence from both climate fluctuation and domination by powerful outsiders (49). Cereal-based civilizations, by contrast, were vulnerable to both threats, and powerful authorities had the ability to confiscate grain at the point of harvest or in storage. Grain made taxation possible.

We often think of hunter-gatherers in terms of game hunters and the feast-or-famine material life described by Marshall Sahlins in Stone Age Economics. But Scott makes the point that there are substantial ecological niches in wetlands where nutrition comes to the gatherers rather than the hunter. And in the early millennia of the lower Nile -- what Scott refers to as the southern alluvium -- the wetland ecological zone was ample for a very satisfactory and regular level of wellbeing. And, of special interest to Scott, "the wetlands are ungovernable" (56). (Notice the parallel with Scott's treatment of Zomia in The Art of Not Being Governed: An Anarchist History of Upland Southeast Asia.)

So who are these early humans who navigated their material worlds so exquisitely well and yet left so little archeological record because they built their homes with sticks, mud, and papyrus?
It makes most sense to see them as agile and astute navigators of a diverse but also changeable and potentially dangerous environment.... We can see this long period as one of continuous experimentation and management of this environment. Rather than relying on only a small bandwidth of food resources, they seem to have been opportunistic generalists with a large portfolio of subsistence options spread across several food webs. (59)
Later chapters offer similarly iconoclastic accounts of the inherent instability of the early states (like a pyramid of tumblers on the stage), the advantages of barbarian civilization, the epidemiology of sedentary life, and other intriguing topics in the early history of humanity. And pervasively, there is the undercurrent of themes that recur often in Scott's work -- the validity and dignity of the hidden players in history, the resourcefulness of ordinary hominids, and the importance of questioning the received wisdom about humanity's history.

Scott is telling a new story here about where we came from, and it is a fascinating one.

Tuesday, September 25, 2018

System safety


An ongoing thread of posts here is concerned with organizational causes of large technology failures. The driving idea is that failures, accidents, and disasters usually have a dimension of organizational causation behind them. The corporation, research office, shop floor, supervisory system, intra-organizational information flow, and other social elements often play a key role in the occurrence of a gas plant fire, a nuclear power plant malfunction, or a military disaster. There is a tendency to look first and foremost for one or more individuals who made a mistake in order to explain the occurrence of an accident or technology failure; but researchers such as Perrow, Vaughan, Tierney, and Hopkins have demonstrated in detail the importance of broadening the lens to seek out the social and organizational background of an accident.

It seems important to distinguish between system flaws and organizational dysfunction in considering all of the kinds of accidents mentioned here. We might specify system safety along these lines. Any complex process has the potential for malfunction. Good system design means creating a flow of events and processes that makes accidents inherently less likely. Part of the task of the designer and engineer is to identify the chief sources of harm inherent in the process -- release of energy, contamination of food or drugs, unplanned fission in a nuclear plant -- and to design fail-safe processes so that these events are as unlikely as possible. Further, given the complexity of contemporary technology systems, it is critical to attempt to anticipate unintended interactions among subsystems -- each of which may function correctly on its own and yet combine with the others to produce disaster in unusual but possible scenarios.

In a nuclear processing plant, for example, there is the hazard of radioactive materials being brought into proximity with each other in a way that creates unintended critical mass. Jim Mahaffey's Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima offers numerous examples of such unintended events, from the careless handling of plutonium scrap in a machining process to the transfer of a fissionable liquid from a vessel of one shape to another. We might try to handle these risks as an organizational problem: more and better training for operatives about the importance of handling nuclear materials according to established protocols, and effective supervision and oversight to ensure that the protocols are observed on a regular basis. But it is also possible to design the material processes within a nuclear plant in a way that makes unintended criticality virtually impossible -- for example, by storing radioactive solutions in containers that simply cannot be brought into close proximity with each other.

Nancy Leveson is a national expert on defining and applying principles of system safety. Her book Engineering a Safer World: Systems Thinking Applied to Safety is a thorough treatment of her thinking about this subject. She offers a handful of compelling reasons for believing that safety is a system-level characteristic that requires a systems approach: the fast pace of technological change, reduced ability to learn from experience, the changing nature of accidents, new types of hazards, increasing complexity and coupling, decreasing tolerance for single accidents, difficulty in selecting priorities and making tradeoffs, more complex relationships between humans and automation, and changing regulatory and public views of safety (kl 130 ff.). Particularly important in this list is the comment about complexity and coupling: "The operation of some systems is so complex that it defies the understanding of all but a few experts, and sometimes even they have incomplete information about the system's potential behavior" (kl 137).

Given the fact that safety and accidents are products of whole systems, she is critical of the accident methodology generally applied to serious industrial, aerospace, and chemical accidents. This methodology involves tracing the series of events that led to the outcome, and identifying one or more events as the critical cause of the accident. However, she writes:
In general, event-based models are poor at representing systemic accident factors such as structural deficiencies in the organization, management decision making, and flaws in the safety culture of the company or industry. An accident model should encourage a broad view of accident mechanisms that expands the investigation beyond the proximate events. A narrow focus on technological components and pure engineering activities or a similar narrow focus on operator errors may lead to ignoring some of the most important factors in terms of preventing future accidents. (kl 452)
Here is a definition of system safety offered later in ESW in her discussion of the emergence of the concept within the defense and aerospace fields in the 1960s:
System Safety ... is a subdiscipline of system engineering. It was created at the same time and for the same reasons. The defense community tried using the standard safety engineering techniques on their complex new systems, but the limitations became clear when interface and component interaction problems went unnoticed until it was too late, resulting in many losses and near misses. When these early aerospace accidents were investigated, the causes of a large percentage of them were traced to deficiencies in design, operations, and management. Clearly, big changes were needed. System engineering along with its subdiscipline, System Safety, were developed to tackle these problems. (kl 1007)
Here Leveson mixes system design and organizational dysfunctions as system-level causes of accidents. But much of her work in this book and her earlier Safeware: System Safety and Computers gives extensive attention to the design faults and component interactions that lead to accidents -- what we might call system safety in the narrow or technical sense.
A systems engineering approach to safety starts with the basic assumption that some properties of systems, in this case safety, can only be treated adequately in the context of the social and technical system as a whole. A basic assumption of systems engineering is that optimization of individual components or subsystems will not in general lead to a system optimum; in fact, improvement of a particular subsystem may actually worsen the overall system performance because of complex, nonlinear interactions among the components. (kl 1007)
Overall, then, it seems clear that Leveson believes that both organizational features and technical system characteristics are part of the systems that created the possibility for accidents like Bhopal, Fukushima, and Three Mile Island. Her own accident model, STAMP (Systems-Theoretic Accident Model and Processes), designed to help identify the causes of accidents, emphasizes both kinds of system properties.
Using this new causality model ... changes the emphasis in system safety from preventing failures to enforcing behavioral safety constraints. Component failure accidents are still included, but our conception of causality is extended to include component interaction accidents. Safety is reformulated as a control problem rather than a reliability problem. (kl 1062)
In this framework, understanding why an accident occurred requires determining why the control was ineffective. Preventing future accidents requires shifting from a focus on preventing failures to the broader goal of designing and implementing controls that will enforce the necessary constraints. (kl 1084)
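Leveson's reformulation -- safety as a control problem of enforcing constraints, rather than a reliability problem of preventing component failures -- can be made concrete with a toy sketch. Everything here is invented for illustration (the tank, controller, thresholds, and sensor bias are hypothetical, and this is not Leveson's STAMP notation); the point is only to show how a constraint can be violated even though no individual component "fails":

```python
# Illustrative sketch: safety as constraint enforcement (a control problem).
# All names and numbers are hypothetical, not drawn from Leveson's tooling.

class TankController:
    """Controller enforcing the safety constraint:
    tank pressure must never exceed MAX_SAFE_PRESSURE."""
    MAX_SAFE_PRESSURE = 100.0

    def __init__(self):
        self.vent_open = False

    def control_action(self, measured_pressure):
        # The controller acts on its *process model* (the measurement);
        # flawed control arises when that model diverges from reality.
        if measured_pressure >= 0.9 * self.MAX_SAFE_PRESSURE:
            self.vent_open = True     # enforce the constraint early
        elif measured_pressure < 0.5 * self.MAX_SAFE_PRESSURE:
            self.vent_open = False
        return self.vent_open

def simulate(pressures, sensor_bias=0.0):
    """Return the actual pressures at which the safety constraint was violated."""
    controller = TankController()
    violations = []
    for actual in pressures:
        # The controller sees a (possibly biased) sensor reading, not reality.
        controller.control_action(actual - sensor_bias)
        if actual > TankController.MAX_SAFE_PRESSURE and not controller.vent_open:
            violations.append(actual)
    return violations

# With an accurate sensor, the vent opens before the limit: constraint enforced.
print(simulate([50, 80, 95, 105, 70]))        # -> []
# With a biased sensor, every component behaves exactly as designed,
# yet the constraint is violated -- an interaction accident, not a failure.
print(simulate([50, 80, 95, 105, 70], 20.0))  # -> [105]
```

The second run is the STAMP-style moral: an accident analysis that hunts for a broken part finds nothing, while an analysis of the control structure finds ineffective control rooted in an inaccurate process model.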
Leveson's brief analysis of the Bhopal disaster in 1984 (kl 384 ff.) emphasizes the organizational dysfunctions that led to the accident -- and that were completely ignored by the official accident investigation in India: out-of-service gauges, alarm deficiencies, inadequate response to prior safety audits, shortage of oxygen masks, failure to inform the police or surrounding community of the accident, and an environment of cost cutting that impaired maintenance and staffing. "When all the factors, including indirect and systemic ones, are considered, it becomes clear that the maintenance worker was, in fact, only a minor and somewhat irrelevant player in the loss. Instead, degradation in the safety margin occurred over time and without any particular single decision to do so but simply as a series of decisions that moved the plant slowly toward a situation where any slight error would lead to a major accident" (kl 447).

Saturday, September 15, 2018

Patient safety


An issue which is of concern to anyone who receives treatment in a hospital is the topic of patient safety. How likely is it that there will be a serious mistake in treatment -- wrong-site surgery, incorrect medication or radiation dose, exposure to a hospital-acquired infection? The current evidence is alarming. (Martin Makary et al estimate that over 250,000 deaths per year result from medical mistakes -- making medical error now the third leading cause of mortality in the United States (link).) And when these events occur, where should we look for assigning responsibility -- at the individual providers, at the systems that have been implemented for patient care, at the regulatory agencies responsible for overseeing patient safety?

Medical accidents commonly demonstrate a complex interaction of factors, from the individual provider to the technologies in use to failures of regulation and oversight. We can look at a hospital as a place where caring professionals do their best to improve the health of their patients while scrupulously avoiding errors. Or we can look at it as an intricate system involving the recording and dissemination of information about patients and the administration of procedures to patients (surgery, medication, radiation therapy). In this sense a hospital is similar to a factory with multiple intersecting locations of activity. Finally, we can look at it as an organization -- a system of division of labor, cooperation, and supervision by large numbers of staff whose joint efforts lead to health and accidents alike. Obviously each of these perspectives is partially correct. Doctors, nurses, and technicians are carefully and extensively trained to diagnose and treat their patients. The technology of the hospital -- the digital patient record system, the devices that administer drugs, the surgical robots -- can be designed better or worse from a safety point of view. And the social organization of the hospital can be effective and safe, or it can be dysfunctional and unsafe. So all three aspects are relevant both to safe operations and the possibility of chronic lack of safety.

So how should we analyze the phenomenon of patient safety? What factors distinguish high-safety hospitals from low-safety ones? What lessons can be learned from the study of the accidents and mistakes that cumulatively make up a hospital's patient safety record?

The view that primarily emphasizes expertise and training of individual practitioners is very common in the healthcare industry, and yet this approach is not particularly useful as a basis for improving the safety of healthcare systems. Skill and expertise are necessary conditions for effective medical treatment; but the other two zones of accident space are probably more important for reducing accidents -- the design of treatment systems and the organizational features that coordinate the activities of the various individuals within the system.

Dr. James Bagian is a strong advocate for the perspective of treating healthcare institutions as systems. Bagian considers both technical systems characteristics of processes and the organizational forms through which these processes are carried out and monitored. And he is very skilled at teasing out some of the ways in which features of both system and organization lead to avoidable accidents and failures. I recall his description of a safety walkthrough he had done in a major hospital. He said that during the tour he noticed a number of nurses' stations which were covered with yellow sticky notes. He observed that this is both a symptom and a cause of an accident-prone organization. It means that individual caregivers were obligated to remind themselves of tasks and exceptions that needed to be observed. Far better was to have a set of systems and protocols that made sticky notes unnecessary. Here is the abstract from a short summary article by Bagian on the current state of patient safety:
Abstract The traditional approach to patient safety in health care has ranged from reticence to outward denial of serious flaws. This undermines the otherwise remarkable advances in technology and information that have characterized the specialty of medical practice. In addition, lessons learned in industries outside health care, such as in aviation, provide opportunities for improvements that successfully reduce mishaps and errors while maintaining a standard of excellence. This is precisely the call in medicine prompted by the 1999 Institute of Medicine report “To Err Is Human: Building a Safer Health System.” However, to effect these changes, key components of a successful safety system must include: (1) communication, (2) a shift from a posture of reliance on human infallibility (hence “shame and blame”) to checklists that recognize the contribution of the system and account for human limitations, and (3) a cultivation of non-punitive open and/or de-identified/anonymous reporting of safety concerns, including close calls, in addition to adverse events.
(Here is the Institute of Medicine study to which Bagian refers; link.)

Nancy Leveson is an aeronautical and software engineer who has spent most of her career devoted to designing safe systems. Her book Engineering a Safer World: Systems Thinking Applied to Safety is a recent presentation of her theories of systems safety. She applies these approaches to problems of patient safety with several co-authors in "A Systems Approach to Analyzing and Preventing Hospital Adverse Events" (link). Here is the abstract and summary of findings for that article:
Objective: This study aimed to demonstrate the use of a systems theory-based accident analysis technique in health care applications as a more powerful alternative to the chain-of-event accident models currently underpinning root cause analysis methods.
Method: A new accident analysis technique, CAST [Causal Analysis based on Systems Theory], is described and illustrated on a set of adverse cardiovascular surgery events at a large medical center. The lessons that can be learned from the analysis are compared with those that can be derived from the typical root cause analysis techniques used today.
Results: The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals. With the use of the system-theoretic analysis results, recommendations can be generated to change the context in which decisions are made and thus improve decision making and reduce the risk of an accident.
Conclusions: The use of a systems-theoretic accident analysis technique can assist in identifying causal factors at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved. Identification of these causal factors in accidents will help health care systems learn from mistakes and design system-level changes to prevent them in the future.
Key Words: patient safety, systems theory, cardiac surgical procedures, adverse event causal analysis (J Patient Saf 2016;00: 00–00)
Crucial in this article is this research group's effort to identify causes "at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved". The key result is this: "The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals."

Bagian, Leveson, and others make a crucial point: in order to substantially increase the performance of hospitals and the healthcare system more generally when it comes to patient safety, it will be necessary to extend the focus of safety analysis from individual incidents and agents to the systems and organizations through which these accidents were possible. In other words, attention to systems and organizations is crucial if we are to significantly reduce the frequency of medical and hospital mistakes.

(The Makary et al estimate of 250,000 deaths caused by medical error has been questioned on methodological grounds. See Aaron Carroll's thoughtful rebuttal (NYT 8/15/16; link).)

Friday, August 31, 2018

Turing's journey


A recent post comments on the value of biography as a source of insight into history and thought. Currently I am reading Andrew Hodges' Alan Turing: The Enigma (1983), which I am finding fascinating, both for its portrayal of the evolution of a brilliant and unconventional mathematician and for the honest effort Hodges makes to describe Turing's sexual evolution and the tragedy in which it eventuated. Hodges makes a serious effort to give the reader some understanding of Turing's important contributions, including his enormously important "computable numbers" paper. (Here is a nice discussion of computability in the Stanford Encyclopedia of Philosophy; link.) The book also offers a reasonably technical account of the Enigma code-breaking process.

Hilbert's mathematical imagination plays an important role in Turing's development. Hilbert's speculation that every mathematical statement would turn out to be provable or refutable turned out to be wrong: Gödel's incompleteness theorems, together with the undecidability results of Church and of Turing's computable numbers paper, showed that Hilbert's program could not succeed. But it was Hilbert's formulation of the idea that permitted the precise and conclusive refutations that came later. (Here is Richard Zach's account in the Stanford Encyclopedia of Philosophy of Hilbert's program; link.)

And then there were the machines. I had always thought of the Turing machine as a pure thought experiment designed to give specific meaning to the idea of computability. It has been eye-opening to learn of the innovative and path-breaking work that Turing did at Bletchley Park, Bell Labs, and other places in developing real computational machines. Turing's development of real computing machines and his invention of the activity of "programming" ("construction of tables") make his contributions to the development of digital computing machines much more advanced and technical than I had previously understood. His work late in the war on the difficult problem of encrypting speech for secure telephone conversation was also very interesting and innovative. Further, his understanding of the priority of creating a technology that would support "random access memory" was especially prescient. Here is Hodges' summary of Turing's view in 1947:
Considering the storage problem, he listed every form of discrete store that he and Don Bayley had thought of, including film, plugboards, wheels, relays, paper tape, punched cards, magnetic tape, and ‘cerebral cortex’, each with an estimate, in some cases obviously fanciful, of access time, and of the number of digits that could be stored per pound sterling. At one extreme, the storage could all be on electronic valves, giving access within a microsecond, but this would be prohibitively expensive. As he put it in his 1947 elaboration, ‘To store the content of an ordinary novel by such means would cost many millions of pounds.’ It was necessary to make a trade-off between cost and speed of access. He agreed with von Neumann, who in the EDVAC report had referred to the future possibility of developing a special ‘Iconoscope’ or television screen, for storing digits in the form of a pattern of spots. This he described as ‘much the most hopeful scheme, for economy combined with speed.’ (403)
These contributions are no doubt well known by experts on the history of computing. But for me it was eye-opening to learn how directly Turing was involved in the design and implementation of various automatic computing engines, including the British ACE machine itself at the National Physical Laboratory (link). Here is Turing's description of the evolution of his thinking on this topic, extracted from a lecture in 1947:
Some years ago I was researching on what might now be described as an investigation of the theoretical possibilities and limitations of digital computing machines. I considered a type of machine which had a central mechanism and an infinite memory which was contained on an infinite tape. This type of machine appeared to be sufficiently general. One of my conclusions was that the idea of a ‘rule of thumb’ process and a ‘machine process’ were synonymous. The expression ‘machine process’ of course means one which could be carried out by the type of machine I was considering…. Machines such as the ACE may be regarded as practical versions of this same type of machine. There is at least a very close analogy. (399)
At the same time his clear logical understanding of the implications of a universal computing machine was genuinely visionary. He was evangelical in his advocacy of the goal of creating a machine with a minimalist and simple architecture where all the complexity and specificity of the use of the machine derives from its instructions (programming), not its specialized hardware.
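Turing's central insight -- a minimal, general-purpose machine whose specific behavior comes entirely from its instruction table rather than its hardware -- can be illustrated with a tiny simulator. This is a generic modern reconstruction, not Turing's own notation; the increment machine and its state names are invented for the example:

```python
# A minimal Turing machine simulator. The "hardware" (run) is completely
# generic; all specific behavior comes from the transition table -- the
# "instructions" -- echoing Turing's separation of machine and program.

def run(table, tape, state="start", head=0, max_steps=1000):
    """table maps (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))       # sparse tape: "infinite" in both directions
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")   # "_" is the blank symbol
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Instruction table for binary increment (head starts at the leftmost bit):
# scan right to the end of the number, then propagate the carry leftward.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, carry continues
    ("carry", "0"): ("1", "L", "done"),   # absorb the carry
    ("carry", "_"): ("1", "L", "done"),   # number was all 1s: prepend a 1
    ("done",  "0"): ("0", "L", "done"),
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run(increment, "1011"))  # 11 + 1 = 12 -> "1100"
print(run(increment, "11"))    #  3 + 1 = 4  -> "100"
```

To compute something else, one changes only the table, never `run` -- which is exactly the architectural principle Turing advocated for the ACE: simple universal hardware, with all complexity in the instructions.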

Also interesting is the fact that Turing had a literary impulse (not often exercised), and wrote at least one semi-autobiographical short story about a sexual encounter. Only a few pages survive. Here is a paragraph quoted by Hodges:
Alec had been working rather hard until two or three weeks before. It was about interplanetary travel. Alec had always been rather keen on such crackpot problems, but although he rather liked to let himself go rather wildly to newspapermen or on the Third Programme when he got the chance, when he wrote for technically trained readers, his work was quite sound, or had been when he was younger. This last paper was real good stuff, better than he'd done since his mid twenties when he had introduced the idea which is now becoming known as 'Pryce's buoy'. Alec always felt a glow of pride when this phrase was used. The rather obvious double-entendre rather pleased him too. He always liked to parade his homosexuality, and in suitable company Alec could pretend that the word was spelt without the 'u'. It was quite some time now since he had 'had' anyone, in fact not since he had met that soldier in Paris last summer. Now that his paper was finished he might justifiably consider that he had earned another gay man, and he knew where he might find one who might be suitable. (564)
The passage is striking for several reasons; but most obviously, it brings together the two leading themes of his life, his scientific imagination and his sexuality.

This biography of Turing reinforces for me the value of the genre more generally. The reader gets a better understanding of the important developments in mathematics and computing that Turing achieved; the book presents a vivid view of the high stakes of the secret conflict in which Turing played a crucial part, the use of cryptographic advances to defeat the Nazi submarine threat; and it gives personal insight into the singular individual who developed into such a world-changing logician, engineer, and scientist.

Wednesday, August 29, 2018

The insights of biography


I have always found biographies a particularly interesting source of learning and stimulation. A good example is a biography and celebration of Muthuvel Kalaignar Karunanidhi published in a recent issue of the Indian semi-weekly Frontline. Karunanidhi was an enormously important social and political leader in India for over sixty years in the Dravidian movement in southern India and Tamil Nadu, and his passing earlier this month was the occasion for a special issue of Frontline. Karunanidhi was president of the Dravidian political party Dravida Munnetra Kazhagam (DMK) for more than fifty years. And he is an individual I had never heard of before opening up Frontline. In his early life he was a script writer and film maker who was able to use his artistic gifts to create characters who inspired political activism among young Tamil men and women. And in the bulk of his career he was an activist, orator, and official who had great influence on politics and social movements in southern India. The recollection and biography by A.S. Panneerselvan is excellent. (This article derives from Panneerselvan's forthcoming biography of Karunanidhi.) Here is how Panneerselvan frames his narrative:
In a State where language, empowerment, self-respect, art, literary forms and films coalesce to lend political vibrancy, Karunanidhi's life becomes a sort of natural metaphor of modern Tamil Nadu. His multifaceted personality helps to understand the organic evolution of the Dravidian Movement. To understand how he came to the position to wield the pen and his tongue for his politics, rather than bombs and rifles for revolution, one has to look at his early life. (7)
I assume that Karunanidhi and the Dravidian political movement would be common currency for Indian intellectuals and political activists. For an American with only a superficial understanding of Indian politics and history, his life story opens a whole new aspect of India's post-independence experience. I think of the primary dynamic of Indian politics since Independence as being a struggle between the non-sectarian political ideas of Congress, the Hindu nationalism of the BJP, and the secular and leftist position of India's Communist movement. But the Dravidian movement diverges in specific ways from each of these currents. In brief, the central thread of the Dravidian movement is the rejection of the cultural hegemony of Hindi language, status, and culture, and an expression of pride and affirmation in the cultures and traditions of Tamil India. Panneerselvan describes an internal difference of emphasis on the topic of language and culture within the early stage of the Dravidian movement:
The duality of the Self-Respect Movement emerged very clearly during this phase. While Periyar and Annadurai were in total agreement in the diagnosis of the social milieu, their prognoses were quite opposite: For Periyar, language was an instrument for communication; for Annadurai, language was an organic socio-cultural oeuvre that lends a distinct identity and a sense of pride and belonging to the people. (13).
The Dravidian Movement was broadly speaking a movement for social justice, and it was fundamentally supportive of the rights and status of dalits. The tribute by K. Veeramani expresses the social justice commitments of DMK and Karunanidhi very well:
The goal of dispensation of social justice is possible only through reservation in education and public employment, giving adequate representation to the Scheduled Castes, the Scheduled Tribes and Other Backward Classes. Dispensation of social justice continues to be the core principle of the Dravidian movement, founded by South Indian Liberal Federation (SILF), popularly known as the Justice Party. (36) ... The core of Periyar's philosophy is to bring about equality through equal opportunities in a society rife with birth-based discrimination. Periyar strengthened the reservation mode as a compensation for birth-based inequalities. In that way, reservation has to be implemented as a mode of compensatory discrimination. (38)
Also important in the political agenda of the Dravidian Movement was a sustained effort to improve the conditions of tenants and agricultural workers through narrowing of the rights of landlords. J. Jeyaranjan observes:
The power relation between the landlord and the tenant is completely reversed, with the tenant enjoying certain powers to negotiate compensation for giving up the right to cultivate. Mobilisations by the undivided Communist Party of India (CPI) and the Dravidian movement, the Dravidar Kazhagam in particular, have been critical to the creation of a culture of collective action and resistance to landlord power. Further, the coming to power of the Dravida Munnetra Kazhagam (DMK) in 1967 created conditions for consolidating the power of lower-caste tenants who benefited both from a set of State initiatives launched by the DMK and the culture of collective action against Brahmin landlords. (52)
What can be learned from a detailed biography of a figure like Karunanidhi? For me, such a piece of scholarship offers the opportunity to significantly broaden my understanding of the nuances of philosophy, policy, values, and institutions through which the political developments of a relatively unfamiliar region of the world have unfolded. Such a biography allows the reader to gain a vivid experience of the issues and passions that motivated people, both intellectuals and laborers, in the 1920s, the 1960s, and the 1990s. And it gives a bit of insight into the complicated question of how talented individuals develop into impactful, committed, and dedicated leaders and thinkers.

(Here is a collection of snippets from Karunanidhi's films; link.)



Sunday, August 19, 2018

Safety culture or safety behavior?


Andrew Hopkins is a much-published expert on industrial safety who has an important set of insights into the causes of industrial accidents. Much of his career has focused on the oil and gas industry, but he has written on other sectors as well. Particularly interesting are several books: Failure to Learn: The BP Texas City Refinery Disaster; Disastrous Decisions: The Human and Organisational Causes of the Gulf of Mexico Blowout; and Lessons from Longford: The ESSO Gas Plant Explosion. He also provides a number of interesting working papers here.

One of his interesting working papers is on the topic of safety culture in the drilling industry, "Why safety cultures don't work" (link).
Companies that set out to create a “safety culture” often expend huge amounts of resource trying to change the way operatives, foremen and supervisory staff think and feel about safety. The results are often disappointing. (1)
Changing the way people think is nigh impossible, but setting up organizational structures that monitor compliance with procedure, even if that procedure is seen as redundant or unnecessary, is doable. (3)
Hopkins' central point is that safety requires changing routine behavior, not, in the first instance, changing culture or thought. This means that management and regulatory agencies need to establish safe practices and then enforce compliance through internal and external measures. He uses the example of seat belt usage: campaigns to encourage the use of seat belts had little effect, but behavior changed when fines were imposed on drivers who failed to wear them.

His central focus here, as in most of his books, is on the processes involved in the drilling industry. He makes the point that the incentives that are established in oil and gas drilling are almost entirely oriented towards maximizing speed and production. Exhortations towards "safe practices" are ineffectual in this context.

Much of his argument here comes down to the contrast between high-likelihood, low-harm accidents and low-likelihood, high-harm accidents. The steps required to prevent low-likelihood, high-harm accidents are generally not visible in the workplace, precisely because the sequences that lead to them are highly uncommon. Routine safety procedures will not reduce the likelihood of occurrence of the high-harm accident.

Hopkins offers the example of the air traffic control industry. The ultimate disaster in air traffic control is a mid-air collision. Very few such incidents have occurred. The incident Hopkins refers to was a mid-air collision over Überlingen, Germany in 2002. But procedures in air traffic control give absolute priority to preventing such disasters, and the solution is to identify a key precursor event to a mid-air collision and ensure that these precursor events are recorded, investigated, and reacted to when they occur. The relevant precursor event in air traffic control is a proximity of two aircraft at a distance of 1.5 miles or less. The required separation is 2 miles. Air traffic control regulations and processes require a full investigation and reaction for all incidents in which separation falls to 1.5 miles or less. Air traffic control is a high-reliability industry precisely because it gives priority and resources to the prevention, not only of the disastrous incidents themselves, but of the precursors that may lead to them. "This is a clear example of the way a high-reliability organization operates. It works out what the most catastrophic event is likely to be, regardless of how rare such events are in recent experience, and devises good indicators of how well the prevention of that catastrophe is being managed. It is a way of thinking that is highly unusual in the oil and gas industry" (2).
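The precursor-monitoring logic Hopkins describes can be made concrete with a minimal sketch. This is purely illustrative, not any real air traffic control system; the function and constant names are my own inventions, and only the two thresholds (2-mile required separation, 1.5-mile investigation trigger) come from Hopkins' example as quoted above.

```python
# Illustrative sketch of precursor-event monitoring in a high-reliability
# organization, using the separation thresholds from Hopkins' example.
# All names here are hypothetical, not from any real ATC system.

REQUIRED_SEPARATION_MILES = 2.0   # normal minimum separation standard
PRECURSOR_THRESHOLD_MILES = 1.5   # breach triggers a mandatory investigation

def classify_separation_event(distance_miles: float) -> str:
    """Classify an observed aircraft separation against the two thresholds."""
    if distance_miles <= PRECURSOR_THRESHOLD_MILES:
        return "investigate"      # precursor to a mid-air collision
    if distance_miles < REQUIRED_SEPARATION_MILES:
        return "record"           # below standard; logged but not escalated
    return "normal"

def triage(separations):
    """Partition a stream of observed separation distances by required response."""
    out = {"investigate": [], "record": [], "normal": []}
    for d in separations:
        out[classify_separation_event(d)].append(d)
    return out
```

The point of the sketch is the design choice Hopkins emphasizes: the organization commits in advance to a bright-line rule for what counts as a precursor, so that investigation is automatic rather than left to managerial discretion.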

The drilling industry does not commonly follow similar high-level safety management. A drilling blowout is the incident of greatest concern in the drilling industry. There are, according to Hopkins, several obvious precursor events to a well blowout: well kicks and cementing failures. It is Hopkins' contention that safety in the drilling industry would be greatly enhanced (with respect to the catastrophic events that are both low-probability and high-harm) if procedures were reoriented so that priority attention and tracking were given to these kinds of precursor events. By reducing or eliminating the occurrence of the precursor events, major accidents would be prevented.

Another organizational factor that Hopkins highlights is the role that safety officers play within the organization. In high-reliability organizations, safety officers have an organizationally privileged role; in low-reliability organizations their voices seem to disappear in the competition among many managerial voices with other interests (speed, production, public relations). (This point is explored in an earlier post; link.)
Prior to Macondo [the Deepwater Horizon oil spill], BP’s process safety structure was decentralized. The safety experts had very little power. They lacked strong reporting lines to the centre and answered to commercial managers who tended to put production ahead of engineering excellence. After Macondo, BP reversed this. Now, what I call the “voices of safety” are powerful and heard loud and clear in the boardroom. (3)
Ominously, Hopkins makes a prescient point about the crucial role played by regulatory agencies in enhancing safety in high-risk industries.
Many regulatory regimes, however, particularly that of the US, are not functioning as they ought to. Regulators need to be highly skilled and resourced and must be able to match the best minds in industry in order to have competent discussions about the risk-management strategies of the corporations. In the US they're not doing that yet. The best practice recognized worldwide is the safety case regime, in use in UK and Norway. (4)
Given the militantly anti-regulatory stance of the current US federal administration and the aggressive lack of attention its administrators pay to scientific and technical expertise, this is a very sobering source of worry about the future of industrial, chemical, and nuclear safety in the US.

Saturday, July 28, 2018

Rob Sellers on recent social psychology



Scientific fields are shaped by many apparently contingent and capricious facts. This is one of the key insights of science and technology studies. And yet eventually it seems that scientific communities succeed in going beyond the limitations of these somewhat arbitrary starting points. The human sciences are especially vulnerable to this kind of arbitrariness, and assumptions about race, gender, and sexuality have created arbitrary starting points in various fields of the social and human sciences.

A case in point is the discipline of social psychology. Social psychology studies how individual human beings are shaped in their behavior by the social arrangements in which they mature and live. And yet all too often it has emerged that researchers in this discipline have brought with them a lot of baggage in the form of their own social assumptions which have distorted the theories and methods they have developed.

Rob Sellers is an accomplished social psychologist at the University of Michigan who has thought deeply about the intersections of race and academic life. He also has an unusual and deep appreciation of the history of his discipline. In this recent interview he discusses the legacies of four important African American social psychologists and their impact on the discipline. His subjects are Claude Steele, James Jackson, James Jones, and Jim Sidanius. He argues that these men, all of the same generation and born in the late 1940s, brought about a crucial reorientation in the ways that social psychologists thought about and studied the lives of black people. They have each had distinguished careers and have overseen large numbers of PhD students. Their influence on social psychology has been very substantial.

The interview is worth watching in its entirety -- I hope there will also be a second interview that pursues some of these issues more fully -- but here are some highlights.

There was an assumption among earlier generations of social psychologists that white behavior and experience were normal, and that other identities were abnormal. James Jackson provided a fundamental reset to this presupposition by demonstrating how normal black lives were. This represented something like a paradigm change for the discipline, in that it brought about a fundamental reorientation of the perspectives social psychologists brought to their research.

A parallel assumption in earlier research in social psychology, according to Sellers, was that black lives were somehow "damaged" -- low self-esteem, low ability to cope. Jackson demonstrated that this assumption too was fundamentally wrong. Black individuals performed similarly to whites in accepted tests of self-esteem. And the premise of damage underestimates the dignity and persistent success of African American communities.

Claude Steele contributed to an understanding of differences in performance across major social categories through his theory of stereotype threat (link). As Rob Sellers observes, Steele's experimental research on the effects of stereotypes and presuppositions about differences in capacity between groups has made a very large contribution to both social psychology and the field of education. At the same time, Sellers signals in the interview that he has some hesitations about the magnitude of the effect of stereotype threat (19:45).

Sellers credits James Jones's research on prejudice with making a large difference in how we understand contemporary racism and the experience of being black within a racially divided society. Jones also made highly original contributions to the study of African-American culture, finding linkages back to West African cultural meanings and practices. Sellers accepts the idea that cultural assumptions and practices can persist for many generations beyond their original setting.

Another common assumption in social psychology was that intergroup conflict (for example, racism) was cultural and historically contingent. Jim Sidanius advanced a general theory, social dominance theory (along with Felicia Pratto), which undertook to explain racism and other forms of intergroup oppression as an evolutionary consequence of competition for resources, including access to reproduction.

Another important observation Sellers makes in the interview is that the men described here, for all their heterodoxy, were pretty mainstream in their scientific behavior. They established their reputations and careers through research that found acceptance in the main journals and institutions of the time. By contrast, another group of black psychologists rejected the mainstream more directly. Sellers described the revolt in 1969 of the Association of Black Psychologists and the competition this engendered between the mainstream APA and the more activist ABP.

One interesting point that comes out of this interview is the depth of Rob Sellers' own knowledge of the social psychology of high-level athletes. His comments about Jackie Robinson are particularly interesting.

The question I hope to pursue in my next conversation with Rob is whether the particular experiences of race that these men had in America in the 1950s as children (in the Midwest) and the 1960s as young adults shaped their scientific ideas in any direct ways. It seems intuitively likely that this was the case. But it isn't possible to easily read off of their work the imprint of the experience of racism in earlier stages of their lives. And yet when we look closely at the biographies of a range of black intellectuals we find a clear imprint of the early experiences on contemporary consciousness. (For illustrations see posts on Ahmad Rahman and Phil Richards; link, link).


Wednesday, July 25, 2018

Cyber threats


David Sanger's very interesting recent book, The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age, is a timely read this month, following the indictments of twelve Russian intelligence officers for hacking the DNC during the 2016 election. Sanger is a national security writer for the New York Times, and has covered cyber security issues for a number of years. He and William Broad and John Markoff were among the first journalists to piece together the story behind the Stuxnet attack on Iran's nuclear fuel program (the secret program called Olympic Games), and the book also offers some intriguing hints about the possibility of "left of launch" intrusions by US agencies into the North Korean missile program. This is a book that everyone should read. It greatly broadens the scope of what most of us think about under the category of "hacking". We tend to think of invasions of privacy and identity theft when we think of nefarious uses of the internet; but Sanger makes it clear that the stakes are much greater. The capabilities of current cyber-warfare tools have the possibility of bringing down whole national infrastructures, leading to massive civilian hardship.

There are several important takeaways from Sanger's book. One is the pervasiveness and power of the offensive cyber tools available to nation-state actors in penetrating and potentially disrupting or destroying the infrastructures of their potential opponents. Russia, China, North Korea, Iran, and the United States are all shown to possess tools of intrusion, data extraction, and system destruction that are extremely difficult for targeted countries and systems to defend against. The Sony attack (North Korea), the Office of Personnel Management (China), the attack on the Ukraine electric grid (Russia), the attack on Saudi Arabia's massive oil company Aramco (Iran), and the attack on the US electoral system (Russia) all proceeded with massive effect and without evident response from their victims or the United States. At this moment in time the balance of capability appears to favor the offense rather than the defense.

A second important theme is the extreme level of secrecy that the US intelligence establishment has imposed on the capabilities it possesses for conducting cyber conflict. Sanger makes it clear that he believes that a greater level of public understanding of the capabilities and risks created by cyber weapons like Stuxnet would be beneficial in the United States and other countries, by permitting a more serious public debate about means and ends, risks and rewards of the use of cyber weapons. He likens it to the evolution of the Obama administration's eventual willingness to make a public case for the use of unmanned drone strikes against its enemies.

Third, Sanger makes it clear that the classic logic of deterrence that was successful in maintaining nuclear peace is less potent when it comes to cyber warfare and escalation. State-level adversaries have selected strategies of cyber attack precisely because of the relatively low cost of developing this technology, the relative anonymity of an attack once it occurs, and the difficulties faced by victims in selecting appropriate and effective counter-strikes that would deter the attacker in the future.

The National Security Agency gets a lot of attention in the book. The Office of Tailored Access Operations gets extensive discussion, based on revelations from the Snowden materials and other sources. Sanger makes it clear that the NSA had developed a substantial toolkit for intercepting communications and penetrating computer systems to capture data files of security interest. But according to Sanger it has also developed strong cyber tools for offensive use against potential adversaries. Part of the evidence for this judgment comes from the Snowden revelations (which are also discussed extensively). Part comes from what Sanger and others were able to discover about the workings of Stuxnet in targeting Iranian nuclear centrifuges over a many-month period. And part comes from suggestive reporting about the odd fact that North Korea's medium range missile tests were so spectacularly unsuccessful for a series of launches.

The book leads to worrisome conclusions and questions. US infrastructure and counter-cyber programs were highly vulnerable to attacks that have already taken place in our country. The extraction by Chinese military intelligence of millions of confidential personal records of US citizens from the Office of Personnel Management took place over months and was uncovered only after the damage was done. The effectiveness of Russian attacks on the Ukraine electric power grid suggest that similar attacks would be possible in other advanced countries, including the United States. All of these incidents suggest a level of vulnerability and potential for devastating attack that the public is not prepared for.