Saturday, September 15, 2018

Patient safety


An issue of concern to anyone who receives treatment in a hospital is patient safety. How likely is it that there will be a serious mistake in treatment -- wrong-site surgery, incorrect medication or radiation dose, exposure to a hospital-acquired infection? The current evidence is alarming. (Martin Makary et al. estimate that over 250,000 deaths per year result from medical mistakes -- making medical error the third leading cause of mortality in the United States (link).) And when these events occur, where should we look in assigning responsibility -- at the individual providers, at the systems that have been implemented for patient care, or at the regulatory agencies responsible for overseeing patient safety?

Medical accidents commonly demonstrate a complex interaction of factors, from the individual provider to the technologies in use to failures of regulation and oversight. We can look at a hospital as a place where caring professionals do their best to improve the health of their patients while scrupulously avoiding errors. Or we can look at it as an intricate system involving the recording and dissemination of information about patients; the administration of procedures to patients (surgery, medication, radiation therapy). In this sense a hospital is similar to a factory with multiple intersecting locations of activity. Finally, we can look at it as an organization -- a system of division of labor, cooperation, and supervision by large numbers of staff whose joint efforts lead to health and accidents alike. Obviously each of these perspectives is partially correct. Doctors, nurses, and technicians are carefully and extensively trained to diagnose and treat their patients. The technology of the hospital -- the digital patient record system, the devices that administer drugs, the surgical robots -- can be designed better or worse from a safety point of view. And the social organization of the hospital can be effective and safe, or it can be dysfunctional and unsafe. So all three aspects are relevant both to safe operations and the possibility of chronic lack of safety.

So how should we analyze the phenomenon of patient safety? What factors distinguish high-safety hospitals from low-safety hospitals? What lessons can be learned from the study of the accidents and mistakes that cumulatively make up a hospital's patient safety record?

The view that primarily emphasizes expertise and training of individual practitioners is very common in the healthcare industry, and yet this approach is not particularly useful as a basis for improving the safety of healthcare systems. Skill and expertise are necessary conditions for effective medical treatment; but the other two zones of accident space are probably more important for reducing accidents -- the design of treatment systems and the organizational features that coordinate the activities of the various individuals within the system.

Dr. James Bagian is a strong advocate for the perspective of treating healthcare institutions as systems. Bagian considers both the technical characteristics of care processes and the organizational forms through which those processes are carried out and monitored. And he is very skilled at teasing out some of the ways in which features of both system and organization lead to avoidable accidents and failures. I recall his description of a safety walkthrough he had done in a major hospital. He said that during the tour he noticed a number of nurses' stations that were covered with yellow sticky notes. He observed that this was both a symptom and a cause of an accident-prone organization: it meant that individual caregivers were obliged to remind themselves of tasks and exceptions that needed to be observed. Far better would be a set of systems and protocols that made the sticky notes unnecessary. Here is the abstract from a short summary article by Bagian on the current state of patient safety:
Abstract The traditional approach to patient safety in health care has ranged from reticence to outward denial of serious flaws. This undermines the otherwise remarkable advances in technology and information that have characterized the specialty of medical practice. In addition, lessons learned in industries outside health care, such as in aviation, provide opportunities for improvements that successfully reduce mishaps and errors while maintaining a standard of excellence. This is precisely the call in medicine prompted by the 1999 Institute of Medicine report “To Err Is Human: Building a Safer Health System.” However, to effect these changes, key components of a successful safety system must include: (1) communication, (2) a shift from a posture of reliance on human infallibility (hence “shame and blame”) to checklists that recognize the contribution of the system and account for human limitations, and (3) a cultivation of non-punitive open and/or de-identified/anonymous reporting of safety concerns, including close calls, in addition to adverse events.
(Here is the Institute of Medicine study to which Bagian refers; link.)

Nancy Leveson is an aeronautical and software engineer who has spent most of her career devoted to designing safe systems. Her book Engineering a Safer World: Systems Thinking Applied to Safety is a recent presentation of her theories of systems safety. She applies these approaches to problems of patient safety with several co-authors in "A Systems Approach to Analyzing and Preventing Hospital Adverse Events" (link). Here is the abstract and summary of findings for that article:
Objective: This study aimed to demonstrate the use of a systems theory-based accident analysis technique in health care applications as a more powerful alternative to the chain-of-event accident models currently underpinning root cause analysis methods.
Method: A new accident analysis technique, CAST [Causal Analysis based on Systems Theory], is described and illustrated on a set of adverse cardiovascular surgery events at a large medical center. The lessons that can be learned from the analysis are compared with those that can be derived from the typical root cause analysis techniques used today.
Results: The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals. With the use of the system-theoretic analysis results, recommendations can be generated to change the context in which decisions are made and thus improve decision making and reduce the risk of an accident.
Conclusions: The use of a systems-theoretic accident analysis technique can assist in identifying causal factors at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved. Identification of these causal factors in accidents will help health care systems learn from mistakes and design system-level changes to prevent them in the future.
Key Words: patient safety, systems theory, cardiac surgical procedures, adverse event causal analysis (J Patient Saf 2016;00: 00–00)
Crucial in this article is this research group's effort to identify causes "at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved". The key result is this: "The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals."

Bagian, Leveson, and others make a crucial point: in order to substantially improve the performance of hospitals and the healthcare system more generally when it comes to patient safety, it will be necessary to extend the focus of safety analysis from individual incidents and agents to the systems and organizations that make these accidents possible. In other words, attention to systems and organizations is crucial if we are to significantly reduce the frequency of medical and hospital mistakes.

Friday, August 31, 2018

Turing's journey


A recent post comments on the value of biography as a source of insight into history and thought. Currently I am reading Andrew Hodges' Alan Turing: The Enigma (1983), which I am finding fascinating both for its portrayal of the evolution of a brilliant and unconventional mathematician and for the honest effort Hodges makes to describe Turing's sexual evolution and the tragedy in which it eventuated. Hodges makes a serious effort to give the reader some understanding of Turing's important contributions, including his enormously important "computable numbers" paper. (Here is a nice discussion of computability in the Stanford Encyclopedia of Philosophy; link.) The book also offers a reasonably technical account of the Enigma code-breaking process.

Hilbert's mathematical imagination plays an important role in Turing's development. Hilbert's conjecture that every mathematical statement would prove to be either derivable or refutable turned out to be wrong: Gödel demonstrated the incompleteness of arithmetic, and Church and Turing (in the computable numbers paper) showed that the decision problem for formal systems is unsolvable. But it was Hilbert's precise formulation of the question that made these conclusive refutations possible. (Here is Richard Zach's account in the Stanford Encyclopedia of Philosophy of Hilbert's program; link.)

And then there were the machines. I had always thought of the Turing machine as a pure thought experiment designed to give specific meaning to the idea of computability. It has been eye-opening to learn of the innovative and path-breaking work that Turing did at Bletchley Park, Bell Labs, and elsewhere in developing real computational machines. His work on actual computing machines and his invention of the activity of "programming" ("the construction of tables") made his contributions to digital computing much more direct and technical than I had previously understood. His work late in the war on the difficult problem of encrypting speech for secure telephone conversation was also very interesting and innovative. Further, his understanding of the importance of creating a technology that would support "random access memory" was especially prescient. Here is Hodges' summary of Turing's view in 1947:
Considering the storage problem, he listed every form of discrete store that he and Don Bayley had thought of, including film, plugboards, wheels, relays, paper tape, punched cards, magnetic tape, and ‘cerebral cortex’, each with an estimate, in some cases obviously fanciful, of access time, and of the number of digits that could be stored per pound sterling. At one extreme, the storage could all be on electronic valves, giving access within a microsecond, but this would be prohibitively expensive. As he put it in his 1947 elaboration, ‘To store the content of an ordinary novel by such means would cost many millions of pounds.’ It was necessary to make a trade-off between cost and speed of access. He agreed with von Neumann, who in the EDVAC report had referred to the future possibility of developing a special ‘Iconoscope’ or television screen, for storing digits in the form of a pattern of spots. This he described as ‘much the most hopeful scheme, for economy combined with speed.’ (403)
These contributions are no doubt well known by experts on the history of computing. But for me it was eye-opening to learn how directly Turing was involved in the design and implementation of various automatic computing engines, including the British ACE machine itself at the National Physical Laboratory (link). Here is Turing's description of the evolution of his thinking on this topic, extracted from a lecture in 1947:
Some years ago I was researching on what might now be described as an investigation of the theoretical possibilities and limitations of digital computing machines. I considered a type of machine which had a central mechanism and an infinite memory which was contained on an infinite tape. This type of machine appeared to be sufficiently general. One of my conclusions was that the idea of a ‘rule of thumb’ process and a ‘machine process’ were synonymous. The expression ‘machine process’ of course means one which could be carried out by the type of machine I was considering…. Machines such as the ACE may be regarded as practical versions of this same type of machine. There is at least a very close analogy. (399)
At the same time his clear logical understanding of the implications of a universal computing machine was genuinely visionary. He was evangelical in advocating for a machine with a minimal, simple architecture, in which all the complexity and specificity of the machine's use derive from its instructions (programming) rather than from specialized hardware.
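
To make that idea concrete, here is a minimal sketch of a Turing-style machine in Python. This is purely my own illustration, not Turing's notation or any historical design: the "hardware" is nothing but a tape, a head, and a current state, and everything specific the machine does comes from the instruction table passed to it -- the "table" that Turing's notion of programming refers to.

```python
# A minimal Turing-style machine: the "hardware" is only a tape, a head,
# and a current state. All specific behavior comes from the instruction
# table -- the "construction of tables" is the programming.

def run(table, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example instruction table: flip every bit in a binary string, then halt.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(invert, "10110"))   # -> "01001_"
```

A different table turns the same machine into a copier or an adder; the hardware never changes, which is exactly the point about programmability that Turing was pressing.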

Also interesting is the fact that Turing had a literary impulse (not often exercised), and wrote at least one semi-autobiographical short story about a sexual encounter. Only a few pages survive. Here is a paragraph quoted by Hodges:
Alec had been working rather hard until two or three weeks before. It was about interplanetary travel. Alec had always been rather keen on such crackpot problems, but although he rather liked to let himself go rather wildly to newspapermen or on the Third Programme when he got the chance, when he wrote for technically trained readers, his work was quite sound, or had been when he was younger. This last paper was real good stuff, better than he'd done since his mid twenties when he had introduced the idea which is now becoming known as 'Pryce's buoy'. Alec always felt a glow of pride when this phrase was used. The rather obvious double-entendre rather pleased him too. He always liked to parade his homosexuality, and in suitable company Alec could pretend that the word was spelt without the 'u'. It was quite some time now since he had 'had' anyone, in fact not since he had met that soldier in Paris last summer. Now that his paper was finished he might justifiably consider that he had earned another gay man, and he knew where he might find one who might be suitable. (564)
The passage is striking for several reasons; but most obviously, it brings together the two leading themes of his life, his scientific imagination and his sexuality.

This biography of Turing reinforces for me the value of the genre more generally. The reader gains a better understanding of the important developments in mathematics and computing that Turing achieved; a vivid sense of the high stakes of the secret conflict in which Turing played a crucial part, using cryptographic advances to defeat the Nazi submarine threat; and personal insight into the singular individual who developed into such a world-changing logician, engineer, and scientist.

Wednesday, August 29, 2018

The insights of biography


I have always found biographies a particularly interesting source of learning and stimulation. A recent example is a biography and celebration of Muthuvel Kalaignar Karunanidhi published in a special issue of the Indian fortnightly Frontline. Karunanidhi was an enormously important social and political leader in the Dravidian movement in southern India and Tamil Nadu for over sixty years, and his passing earlier this month was the occasion for the special issue. Karunanidhi was president of the Dravidian political party Dravida Munnetra Kazhagam (DMK) for more than fifty years. And he is an individual I had never heard of before opening up Frontline. In his early life he was a scriptwriter and filmmaker who used his artistic gifts to create characters that inspired political activism among young Tamil men and women. And in the bulk of his career he was an activist, orator, and official who had great influence on politics and social movements in southern India. The recollection and biography by A.S. Panneerselvan is excellent. (The article derives from Panneerselvan's forthcoming biography of Karunanidhi.) Here is how Panneerselvan frames his narrative:
In a State where language, empowerment, self-respect, art, literary forms and films coalesce to lend political vibrancy, Karunanidhi's life becomes a sort of natural metaphor of modern Tamil Nadu. His multifaceted personality helps to understand the organic evolution of the Dravidian Movement. To understand how he came to the position to wield the pen and his tongue for his politics, rather than bombs and rifles for revolution, one has to look at his early life. (7)
I assume that Karunanidhi and the Dravidian political movement are common currency for Indian intellectuals and political activists. For an American with only a superficial understanding of Indian politics and history, his life story opens a whole new aspect of India's post-independence experience. I think of the primary dynamic of Indian politics since Independence as a struggle among the non-sectarian political ideas of Congress, the Hindu nationalism of the BJP, and the secular, leftist position of India's Communist movement. But the Dravidian movement diverges in specific ways from each of these currents. In brief, its central thread is the rejection of the cultural hegemony of Hindi language, status, and culture, and an expression of pride and affirmation in the cultures and traditions of Tamil India. Panneerselvan describes an internal difference of emphasis on the topic of language and culture within the early stage of the Dravidian movement:
The duality of the Self-Respect Movement emerged very clearly during this phase. While Periyar and Annadurai were in total agreement in the diagnosis of the social milieu, their prognoses were quite opposite: For Periyar, language was an instrument for communication; for Annadurai, language was an organic socio-cultural oeuvre that lends a distinct identity and a sense of pride and belonging to the people. (13).
The Dravidian Movement was, broadly speaking, a movement for social justice, and it was fundamentally supportive of the rights and status of Dalits. The tribute by K. Veeramani expresses the social justice commitments of the DMK and Karunanidhi very well:
The goal of dispensation of social justice is possible only through reservation in education and public employment, giving adequate representation to the Scheduled Castes, the Scheduled Tribes and Other Backward Classes. Dispensation of social justice continues to be the core principle of the Dravidian movement, founded by South Indian Liberal Federation (SILF), popularly known as the Justice Party. (36) ... The core of Periyar's philosophy is to bring about equality through equal opportunities in a society rife with birth-based discrimination. Periyar strengthened the reservation mode as a compensation for birth-based inequalities. In that way, reservation has to be implemented as a mode of compensatory discrimination. (38)
Also important in the political agenda of the Dravidian Movement was a sustained effort to improve the conditions of tenants and agricultural workers through narrowing of the rights of landlords. J. Jeyaranjan observes:
The power relation between the landlord and the tenant is completely reversed, with the tenant enjoying certain powers to negotiate compensation for giving up the right to cultivate. Mobilisations by the undivided Communist Party of India (CPI) and the Dravidian movement, the Dravidar Kazhagam in particular, have been critical to the creation of a culture of collective action and resistance to landlord power. Further, the coming to power of the Dravida Munnetra Kazhagam (DMK) in 1967 created conditions for consolidating the power of lower-caste tenants who benefited both from a set of State initiatives launched by the DMK and the culture of collective action against Brahmin landlords. (52)
What can be learned from a detailed biography of a figure like Karunanidhi? For myself the opportunity such a piece of scholarship permits is to significantly broaden my own understanding of the nuances of philosophy, policy, values, and institutions through which the political developments of a relatively unfamiliar region of the world have developed. Such a biography allows the reader to gain a vivid experience of the issues and passions that motivated people, both intellectuals and laborers, in the 1920s, the 1960s, and the 1990s. And it gives a bit of insight into the complicated question of how talented individuals develop into impactful, committed, and dedicated leaders and thinkers.

(Here is a collection of snippets from Karunanidhi's films; link.)



Sunday, August 19, 2018

Safety culture or safety behavior?


Andrew Hopkins is a much-published expert on industrial safety who has an important set of insights into the causes of industrial accidents. Much of his career has focused on the oil and gas industry, but he has written on other sectors as well. Particularly interesting are several books: Failure to Learn: The BP Texas City Refinery Disaster; Disastrous Decisions: The Human and Organisational Causes of the Gulf of Mexico Blowout; and Lessons from Longford: The ESSO Gas Plant Explosion. He also provides a number of interesting working papers here.

One of his interesting working papers is on the topic of safety culture in the drilling industry, "Why safety cultures don't work" (link).
Companies that set out to create a “safety culture” often expend huge amounts of resource trying to change the way operatives, foremen and supervisory staff think and feel about safety. The results are often disappointing. (1)
Changing the way people think is nigh impossible, but setting up organizational structures that monitor compliance with procedure, even if that procedure is seen as redundant or unnecessary, is doable. (3)
Hopkins' central point is that safety requires changing routine behavior, not, in the first instance, changing culture or thought. This means that management and regulatory agencies need to establish safe practices and then enforce compliance through internal and external measures. He uses the example of seat belt usage: campaigns to encourage the use of seat belts had little effect, but behavior changed when fines were imposed on drivers who did not wear them.

His central focus here, as in most of his books, is on the processes involved in the drilling industry. He makes the point that the incentives that are established in oil and gas drilling are almost entirely oriented towards maximizing speed and production. Exhortations towards "safe practices" are ineffectual in this context.

Much of his argument here comes down to the contrast between high-likelihood, low-harm accidents and low-likelihood, high-harm accidents. The steps required to prevent low-likelihood, high-harm accidents are generally not visible in the workplace, precisely because the sequences that lead to them are highly uncommon. Routine safety procedures will not reduce the likelihood of occurrence of the high-harm accident.

Hopkins offers the example of the air traffic control industry. The ultimate disaster in air traffic control is a mid-air collision, and very few such incidents have occurred. (The incident Hopkins refers to is the mid-air collision over Überlingen, Germany, in 2002.) Procedures in air traffic control give absolute priority to preventing such disasters, and the solution is to identify a key precursor event to a mid-air collision and ensure that these precursor events are recorded, investigated, and reacted to when they occur. The required separation between aircraft is 2 miles; the relevant precursor event is a proximity of two aircraft at a distance of 1.5 miles or less. Air traffic control regulations and processes require a full investigation and response for every incident in which separation falls to 1.5 miles or less. Air traffic control is a high-reliability industry precisely because it gives priority and resources to the prevention not only of the disastrous incidents themselves but also of the precursors that may lead to them. "This is a clear example of the way a high-reliability organization operates. It works out what the most catastrophic event is likely to be, regardless of how rare such events are in recent experience, and devises good indicators of how well the prevention of that catastrophe is being managed. It is a way of thinking that is highly unusual in the oil and gas industry" (2).
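
Purely as an illustration (the separation thresholds are the ones Hopkins cites; the data and function names are invented for this sketch), the logic of precursor monitoring is simple enough to state in a few lines: every loss-of-separation incident is recorded, and any incident at or below the precursor threshold is automatically flagged for full investigation, whether or not any harm resulted.

```python
# Toy sketch of precursor-event monitoring, using the separation figures
# Hopkins cites for air traffic control: 2 miles required, anything at or
# below 1.5 miles treated as a precursor that must be investigated.

REQUIRED_SEPARATION = 2.0   # miles
PRECURSOR_THRESHOLD = 1.5   # miles

def classify(separation_miles):
    """Classify a loss-of-separation incident by its closest approach."""
    if separation_miles <= PRECURSOR_THRESHOLD:
        return "investigate"   # precursor to a mid-air collision
    if separation_miles < REQUIRED_SEPARATION:
        return "log"           # below standard, but not a precursor
    return "ok"

incidents = [1.9, 1.4, 2.3, 1.5]
for sep in incidents:
    print(sep, classify(sep))
```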

The drilling industry does not commonly follow similar high-level safety management. A drilling blowout is the incident of greatest concern in the drilling industry. There are, according to Hopkins, several obvious precursor events to a well blowout: well kicks and cementing failures. It is Hopkins' contention that safety in the drilling industry would be greatly enhanced (with respect to the catastrophic events that are both low-probability and high-harm) if procedures were reoriented so that priority attention and tracking were given to these kinds of precursor events. By reducing or eliminating the occurrence of the precursor events, major accidents would be prevented.

Another organizational factor that Hopkins highlights is the role that safety officers play within the organization. In high-reliability organizations, safety officers have an organizationally privileged role; in low-reliability organizations their voices seem to disappear in the competition among many managerial voices with other interests (speed, production, public relations). (This point is explored in an earlier post; link.)
Prior to Macondo [the Deepwater Horizon oil spill], BP’s process safety structure was decentralized. The safety experts had very little power. They lacked strong reporting lines to the centre and answered to commercial managers who tended to put production ahead of engineering excellence. After Macondo, BP reversed this. Now, what I call the “voices of safety” are powerful and heard loud and clear in the boardroom. (3)
Ominously, Hopkins makes a prescient point about the crucial role played by regulatory agencies in enhancing safety in high-risk industries.
Many regulatory regimes, however, particularly that of the US, are not functioning as they ought to. Regulators need to be highly skilled and resourced and must be able to match the best minds in industry in order to have competent discussions about the risk-management strategies of the corporations. In the US they're not doing that yet. The best practice recognized worldwide is the safety case regime, in use in UK and Norway. (4)
Given the militantly anti-regulatory stance of the current US federal administration and the aggressive lack of attention its administrators pay to scientific and technical expertise, this is a very sobering source of worry about the future of industrial, chemical, and nuclear safety in the US.

Saturday, July 28, 2018

Rob Sellers on recent social psychology



Scientific fields are shaped by many apparently contingent and capricious facts. This is one of the key insights of science and technology studies. And yet eventually, it seems, scientific communities succeed in going beyond the limitations of these somewhat arbitrary starting points. The human sciences are especially vulnerable to this kind of arbitrariness, and assumptions about race, gender, and sexuality have created arbitrary starting points in various fields of the social and human sciences.

A case in point is the discipline of social psychology. Social psychology studies how individual human beings are shaped in their behavior by the social arrangements in which they mature and live. And yet all too often it has emerged that researchers in this discipline have brought with them a lot of baggage in the form of their own social assumptions which have distorted the theories and methods they have developed.

Rob Sellers is an accomplished social psychologist at the University of Michigan who has thought deeply about the intersections of race and academic life. He also has an unusual and deep appreciation of the history of his discipline. In this recent interview he discusses the legacies of four important African American social psychologists and their impact on the discipline. His subjects are Claude Steele, James Jackson, James Jones, and Jim Sidanius. He argues that these men, all of the same generation and born in the late 1940s, brought about a crucial reorientation in the ways that social psychologists thought about and studied the lives of black people. They have each had distinguished careers and have overseen large numbers of PhD students. Their influence on social psychology has been very substantial.

The interview is worth watching in its entirety -- I hope there will also be a second interview that pursues some of these issues more fully -- but here are some highlights.

There was an assumption among earlier generations of social psychologists that white behavior and experience were normal, and that other identities were abnormal. James Jackson provided a fundamental reset of this presupposition by demonstrating how normal black lives were. This represented something like a paradigm change for the discipline, in that it brought about a fundamental reorientation of the perspectives social psychologists brought to their research.

A parallel assumption in earlier research in social psychology, according to Sellers, was that black lives were somehow "damaged" -- low self-esteem, low ability to cope. Jackson demonstrated that this assumption too was fundamentally wrong. Black individuals performed similarly to whites in accepted tests of self-esteem. And the premise of damage underestimates the dignity and persistent success of African American communities.

Claude Steele contributed to an understanding of differences in performance across major social categories through his theory of stereotype threat (link). As Rob Sellers observes, Steele's experimental research on the effects of stereotypes and presuppositions about differences in capacity between groups has made a very large contribution to both social psychology and the field of education. At the same time, Sellers signals in the interview that he has some hesitations about the magnitude of the effect of stereotype threat (19:45).

Sellers credits James Jones's research on prejudice with making a large difference in how we understand contemporary racism and the experience of being black within a racially divided society. Jones also made highly original contributions to the study of African-American culture, finding linkages back to West African cultural meanings and practices. Sellers accepts the idea that cultural assumptions and practices can persist for many generations beyond their original setting.

Another common assumption in social psychology was that intergroup conflict (for example, racism) was cultural and historically contingent. Jim Sidanius advanced a general theory, social dominance theory (along with Felicia Pratto), which undertook to explain racism and other forms of intergroup oppression as an evolutionary consequence of competition for resources, including access to reproduction.

Another important observation Sellers makes in the interview is that the men described here, for all their heterodoxy, were pretty mainstream in their scientific behavior. They established their reputations and careers through research that found acceptance in the main journals and institutions of the time. By contrast, another group of black psychologists rejected the mainstream more directly. Sellers described the revolt in 1969 of the Association of Black Psychologists and the competition this engendered between the mainstream APA and the more activist ABP.

One interesting point that comes out of this interview is the depth of Rob Sellers' own knowledge of the social psychology of high-level athletes. His comments about Jackie Robinson are particularly interesting.

The question I hope to pursue in my next conversation with Rob is whether the particular experiences of race that these men had in America in the 1950s as children (in the Midwest) and the 1960s as young adults shaped their scientific ideas in any direct ways. It seems intuitively likely that this was the case. But it isn't possible to easily read off of their work the imprint of the experience of racism in earlier stages of their lives. And yet when we look closely at the biographies of a range of black intellectuals we find a clear imprint of the early experiences on contemporary consciousness. (For illustrations see posts on Ahmad Rahman and Phil Richards; link, link).


Wednesday, July 25, 2018

Cyber threats


David Sanger's very interesting recent book, The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age, is a timely read this month, following the indictments of twelve Russian intelligence officers for hacking the DNC in 2016. Sanger is a national security writer for the New York Times and has covered cyber security issues for a number of years. He, William Broad, and John Markoff were among the first journalists to piece together the story behind the Stuxnet attack on Iran's nuclear fuel program (the secret program called Olympic Games), and the book also offers some intriguing hints about the possibility of "left of launch" intrusions by US agencies into the North Korean missile program. This is a book that everyone should read. It greatly broadens the scope of what most of us think about under the category of "hacking". We tend to think of invasions of privacy and identity theft when we think of nefarious uses of the internet; but Sanger makes it clear that the stakes are much greater. Current cyber-warfare tools are capable of bringing down whole national infrastructures, leading to massive civilian hardship.

There are several important takeaways from Sanger's book. One is the pervasiveness and power of the offensive cyber tools available to nation-state actors in penetrating and potentially disrupting or destroying the infrastructures of their potential opponents. Russia, China, North Korea, Iran, and the United States are all shown to possess tools of intrusion, data extraction, and system destruction that are extremely difficult for targeted countries and systems to defend against. The Sony attack (North Korea), the Office of Personnel Management breach (China), the attack on Ukraine's electric grid (Russia), the attack on Saudi Arabia's massive oil company Aramco (Iran), and the attack on the US electoral system (Russia) all proceeded with massive effect and without evident response from their victims or the United States. At the moment the balance of capability appears to favor the offense rather than the defense.

A second important theme is the extreme level of secrecy that the US intelligence establishment has imposed on the capabilities it possesses for conducting cyber conflict. Sanger makes it clear that he believes that a greater level of public understanding of the capabilities and risks created by cyber weapons like Stuxnet would be beneficial in the United States and other countries, by permitting a more serious public debate about the means and ends, risks and rewards of the use of cyber weapons. He likens it to the evolution of the Obama administration's eventual willingness to make a public case for the use of unmanned drone strikes against its enemies.

Third, Sanger makes it clear that the classic logic of deterrence that was successful in maintaining nuclear peace is less potent when it comes to cyber warfare and escalation. State-level adversaries have selected strategies of cyber attack precisely because of the relatively low cost of developing this technology, the relative anonymity of an attack once it occurs, and the difficulties faced by victims in selecting appropriate and effective counter-strikes that would deter the attacker in the future.

The National Security Agency gets a lot of attention in the book. The Office of Tailored Access Operations gets extensive discussion, based on revelations from the Snowden materials and other sources. Sanger makes it clear that the NSA had developed a substantial toolkit for intercepting communications and penetrating computer systems to capture data files of security interest. But according to Sanger it has also developed strong cyber tools for offensive use against potential adversaries. Part of the evidence for this judgment comes from the Snowden revelations (which are also discussed extensively). Part comes from what Sanger and others were able to discover about the workings of Stuxnet in targeting Iranian nuclear centrifuges over a many-month period. And part comes from suggestive reporting about the odd fact that North Korea's medium range missile tests were so spectacularly unsuccessful for a series of launches.

The book leads to worrisome conclusions and questions. US infrastructure and cyber defenses have proven highly vulnerable to attacks that have already taken place in our country. The extraction by Chinese military intelligence of millions of confidential personal records of US citizens from the Office of Personnel Management took place over months and was uncovered only after the damage was done. The effectiveness of Russian attacks on Ukraine's electric power grid suggests that similar attacks would be possible in other advanced countries, including the United States. All of these incidents suggest a level of vulnerability and potential for devastating attack that the public is not prepared for.

Tuesday, July 17, 2018

Downward causation


I've argued for the idea that social phenomena are generated by the actions, thoughts, and mental frameworks of myriad actors (link). This expresses the idea of ontological individualism. But I also believe that social arrangements -- structures, ideologies, institutions -- have genuine effects on the actions of individual actors and populations of actors and on intermediate-level social structures. There is real downward and lateral causation in the social world. Are these two views compatible?

I believe they are compatible.

The negative view holds that what appears to be downward causation is really just the workings of the lower-level components through their aggregation dynamics -- the lower struts of Coleman's boat (link). So when we say "the ideology of nationalism causes the rise of ultraconservative political leaders", this is just a shorthand for "many voters share the values of nationalism and elect candidates who propose radical solutions to issues like immigration." This seems to be the view of analytical-sociology purists.

But consider the alternative view -- that higher-level entities sometimes come to possess stable causal powers that influence the behavior and even the constitution of the entities of which they are composed. This seems like an implausible idea in the natural sciences -- it is hard to imagine a world in which electrons have different physical properties as an effect of the lattice arrangement of atoms in a metal. But human actors are different from electrons and atoms, in that their behavior and constitution are in fact plastic to an important degree. In one social environment actors are disposed to be highly attentive to costs and benefits; in another they are more disposed to conform to locally expressed norms. And we can say quite a bit about the mechanisms of social psychology through which the cognitive and normative frameworks of actors are influenced by features of their social environments. This has an important implication: features of the higher-level social reality can change the dispositions and workings of the lower-level actors. And these changes may in turn lead to the emergence of new higher-level factors (new institutions, new normative systems, new social practices of solidarity, ...). So enduring social arrangements can cause changes in the dynamic properties of the actors who live within them.
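
A toy simulation may make the claim more concrete. The sketch below is entirely illustrative, with invented parameters: each agent has a disposition to comply with a norm; the aggregate level of compliance in the population then feeds back into those dispositions. A macro-level fact (how widespread the norm is) reshapes the micro-level actors, who in turn regenerate the macro-level pattern.

```python
import random

# Toy model of downward causation: a macro-level fact (overall compliance
# with a norm) feeds back into each agent's individual disposition, and the
# changed dispositions then regenerate the macro-level pattern.

random.seed(0)
N_AGENTS, ROUNDS, FEEDBACK = 200, 30, 0.1

# Each agent starts with a moderate, idiosyncratic disposition to comply.
dispositions = [random.uniform(0.4, 0.8) for _ in range(N_AGENTS)]

for t in range(ROUNDS):
    # Micro -> macro: individual choices generate an aggregate compliance rate.
    choices = [random.random() < p for p in dispositions]
    compliance = sum(choices) / N_AGENTS

    # Macro -> micro: the visible prevalence of the norm nudges each agent's
    # disposition toward (or away from) conformity.
    dispositions = [
        min(1.0, max(0.0, p + FEEDBACK * (compliance - 0.5)))
        for p in dispositions
    ]

print(f"final compliance rate: {compliance:.2f}")
```

Nothing mysterious is happening here -- the feedback works entirely through individual-level mechanisms -- but the macro-level fact genuinely alters the properties of the actors over time.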

Could we even say, more radically and counter-intuitively, that a normative structure like extremist populism "generates" behavior at the individual level? So rather than holding that individual actions generate higher-level structures, might we hold that higher-level normative structures generate patterns of behavior? For example, we might say that the normative strictures of patriarchy generate patterns of domination and deference among men and women at the individual level; or the normative strictures of Jim-Crow race relations generate individual-level patterns of subordination and domination among white and black individuals. There is a sense in which this statement about the direction of generation is obviously true; broadly shared knowledge frameworks or normative commitments "generate" typical forms of behavior in stylized circumstances of choice.

Does this way of thinking about the process of "generation" suggest that we need to rethink the directionality implied by the micro-macro distinction? Might we say that normative systems and social structures are as fundamental as patterns of individual behavior?

Consider the social reality depicted in the photograph above. Here we see the coordinated action of a number of soldiers climbing out of a trench in World War I to cross the killing field of no man's land. The dozen or so soldiers depicted here are part of a vast army at war (3.8 million by 1918), deployed over a front extending hundreds of miles. The majority of the soldiers depicted here are about to receive grievous or mortal wounds. And yet they go over the trench. What can we say about the cause of this collective action at a specific moment in time? First, an order was conveyed through a communications system extending from commander to sergeant to enlisted man: "attack at 7:00 am". Second, the industrial wealth of Great Britain permitted the state to equip and field a vast infantry army. Third, a system of international competition broke down into violent confrontation and war, leading numerous participant nations to organize and fund armies to defeat their enemies. Fourth, the morale of the troops was maintained at a sufficiently high level to avoid mass desertion and refusal to fight. Fifth, an infantry training regime existed which gave ordinary farmhands, workers, accountants, and lords the habits and skills of infantry soldiers. All of these factors are part of the causal background of this simple episode in World War I; and most of them exist at a meso- or macro-level of social organization. Clearly this particular group of social actors was influenced by higher-level social factors. But equally clearly, the mechanisms through which these higher-level social factors work are straightforward to identify through reference to systems of individual actors.

Think for a minute about materials science. The hardness of titanium causes a titanium nail to scratch glass. It is true that material properties like hardness depend upon their microstructures. Nonetheless we are perfectly comfortable in attributing real causal powers to titanium at the level of a macro-material. And this attribution is not merely a way of summarizing a long story about the micro-structure of metallic titanium.

I've generally tried to think about these kinds of causal stories in terms of the idea of microfoundations. The hardness of titanium derives from its microfoundations at the level of atomic and subatomic causation. And the causal powers of patriarchy derive from the fact that the normative principles of patriarchy are embedded in the minds and behavior of many individuals, who become exemplars, enforcers, and encouragers of compliant behavior. The processes through which individuals acquire normative principles and the processes through which they behaviorally reflect these principles constitute the microfoundations of the meso- and macro-level power of patriarchy.

So the question of whether there is downward causation seems almost too easy. Of course there is downward causation in the social world. Individuals are influenced in their choices and behavior by structural and normative factors beyond their control. And more fundamentally, individuals are changed in their fundamental dispositions to behavior through their immersion in social arrangements.

Saturday, June 23, 2018

Shakespeare on tyranny


Stephen Greenblatt is a literary critic and historian whose insights into philosophy and the contemporary world are genuinely and consistently profound. His most recent book returns to his primary expertise, the corpus of Shakespeare's plays. But it is -- by intention or otherwise -- an  important reflection on the presidency of Donald Trump as well. The book is Tyrant: Shakespeare on Politics, and it traces in fascinating detail the evolution and fates of tyrants through Shakespeare's plays. Richard III gets a great deal of attention, as do Lear and Macbeth. Greenblatt makes it clear that Shakespeare was interested both in the institutions of governance within which tyrants seized power, and the psychology of the tyrant. The parallels with the behavior and psychology of the current US President are striking.

Here is how Greenblatt frames his book.
“A king rules over willing subjects,” wrote the influential sixteenth-century Scottish scholar George Buchanan, “a tyrant over unwilling.” The institutions of a free society are designed to ward off those who would govern, as Buchanan put it, “not for their country but for themselves, who take account not of the public interest but of their own pleasure.” Under what circumstances, Shakespeare asked himself, do such cherished institutions, seemingly deep-rooted and impregnable, suddenly prove fragile? Why do large numbers of people knowingly accept being lied to? How does a figure like Richard III or Macbeth ascend to the throne? (1)
So who is the tyrant? What is his typical psychology?
Shakespeare's Richard III brilliantly develops the personality features of the aspiring tyrant already sketched in the Henry VI trilogy: the limitless self-regard, the lawbreaking, the pleasure in inflicting pain, the compulsive desire to dominate. He is pathologically narcissistic and supremely arrogant. He has a grotesque sense of entitlement, never doubting that he can do whatever he chooses. He loves to bark orders and to watch underlings scurry to carry them out. He expects absolute loyalty, but he is incapable of gratitude. The feelings of others mean nothing to him. He has no natural grace, no sense of shared humanity, no decency. He is not merely indifferent to the law; he hates it and takes pleasure in breaking it. He hates it because it gets in his way and because it stands for a notion of the public good that he holds in contempt. He divides the world into winners and losers. The winners arouse his regard insofar as he can use them for his own ends; the losers arouse only his scorn. The public good is something only losers like to talk about. What he likes to talk about is winning. (53)
One of Richard’s uncanny skills—and, in Shakespeare’s view, one of the tyrant’s most characteristic qualities—is the ability to force his way into the minds of those around him, whether they wish him there or not. (64)
Greenblatt has a lot to say about the enablers of the tyrant -- those who facilitate and those who silently consent.
Another group is composed of those who do not quite forget that Richard is a miserable piece of work but who nonetheless trust that everything will continue in a normal way. They persuade themselves that there will always be enough adults in the room, as it were, to ensure that promises will be kept, alliances honored, and core institutions respected. Richard is so obviously and grotesquely unqualified for the supreme position of power that they dismiss him from their minds. Their focus is always on someone else, until it is too late. They fail to realize quickly enough that what seemed impossible is actually happening. They have relied on a structure that proves unexpectedly fragile. (67)
One of the topics that appears in Shakespeare's corpus is a class-based populism arising from the under-classes. Consider Jack Cade, the lying and violent foil to the Duke of York.
Cade himself, for all we know, may think that what he is so obviously making up as he goes along will actually come to pass. Drawing on an indifference to the truth, shamelessness, and hyperinflated self-confidence, the loudmouthed demagogue is entering a fantasyland—“ When I am king, as king I will be”—and he invites his listeners to enter the same magical space with him. In that space, two and two do not have to equal four, and the most recent assertion need not remember the contradictory assertion that was made a few seconds earlier. (37)
And what about the fascination tyrants have with secret alliances with hostile foreign powers?
Third, the political party determined to seize power at any cost makes secret contact with the country’s traditional enemy. England’s enmity with the nation across the Channel—constantly fanned by all the overheated patriotic talk of recovering its territories there, and fueled by all the treasure and blood spilled in the attempt to do so—suddenly vanishes. The Yorkists—who, in the person of Cade, had pretended to consider it an act of treason even to speak French—enter into a set of secret negotiations with France. Nominally, the negotiations aim to end hostilities between the two countries by arranging a dynastic marriage, but they actually spring, as Queen Margaret cynically observes, “from deceit, bred by necessity” (3 Henry VI 3.3.68).
How does the tyrant rule? In a word, badly.
The tyrant’s triumph is based on lies and fraudulent promises braided around the violent elimination of rivals. The cunning strategy that brings him to the throne hardly constitutes a vision for the realm; nor has he assembled counselors who can help him formulate one. He can count—for the moment, at least—on the acquiescence of such suggestible officials as the London mayor and frightened clerks like the scribe. But the new ruler possesses neither administrative ability nor diplomatic skill, and no one in his entourage can supply what he manifestly lacks. His own mother despises him. His wife, Anne, fears and hates him. (84)
Several things seem apparent, both from Greenblatt's reading of Shakespeare and from the recent American experience. One is that freedom and the rule of law are inextricably entangled. It is not an exaggeration to say that freedom simply is the situation of living in a society in which the rule of law is respected (and laws establish individual rights and impersonal procedures). When strongmen are able to use the organs of the state or their private henchmen to enact their personal will, the freedom and liberties of the whole of society are compromised.

Second, the rule of law is a normative commitment; but it is also an institutional reality. Institutions like the Constitution, the division of powers, the independence of the judiciary, and the codification of government ethics are preventive checks against arbitrary exercises of power by individuals in positions of authority. But as Greenblatt's examples show, the critical positions within the institutions of law and government are occupied by ordinary men and women. And when they are venal, timid, and bent to the will of the sovereign, they present no barrier against tyranny. This is why fidelity to the rule of law and to the independence of the justice system is the most fundamental and irreplaceable ethical commitment we must demand of officials. Conversely, when an elected official demonstrates a lack of commitment to these principles, we must be very anxious for the fate of our democracy.

Greenblatt's book is fascinating for the historical context it provides for Shakespeare's plays. But it is even more interesting for the critical light it sheds on our current politics. And it makes clear that the moral choices posed by politicians determined to undermine the institutions of democracy are perennial, whether in Shakespeare's time or our own.

Tuesday, May 22, 2018

Social generativity and complexity


The idea of generativity in the realm of the social world expresses the notion that social phenomena are generated by the actions and thoughts of the individuals who constitute them, and nothing else (link, link). More specifically, the principle of generativity postulates that the properties and dynamic characteristics of social entities like structures, ideologies, knowledge systems, institutions, and economic systems are produced by the actions, thoughts, and dispositions of the set of individuals who make them up. There is no other kind of influence that contributes to the causal and dynamic properties of social entities. Begin with a population of individuals with such-and-so mental and behavioral characteristics; allow them to interact with each other over time; and the structures we observe emerge as a determinate consequence of these interactions.

This view of the social world lends great ontological support to the methods associated with agent-based models (link). Here is how Joshua Epstein puts the idea in Generative Social Science: Studies in Agent-Based Computational Modeling:
Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest.... Rather, the generativist wants an account of the configuration's attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn't grow it, you didn't explain its emergence. (42)
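
A minimal agent-based sketch shows what "growing" a macrostructure means in practice. The following is a stripped-down, purely illustrative version of Schelling's segregation model (my own toy code, not Epstein's): the only inputs are individual-level rules about when an agent relocates, yet a macro-level pattern of clustering emerges that no agent intends or aims at.

```python
import random

# A stripped-down Schelling-style model: agents of two types relocate when
# too few of their neighbors are like them. Segregated clusters emerge at
# the macro level from purely individual-level rules.

random.seed(1)
SIZE, EMPTY, THRESHOLD = 20, 0.1, 0.3   # grid size, share empty, tolerance

def neighbors(grid, x, y):
    cells = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                cells.append(grid[(x + dx) % SIZE][(y + dy) % SIZE])
    return [c for c in cells if c is not None]

def unhappy(grid, x, y):
    nbrs = neighbors(grid, x, y)
    return bool(nbrs) and sum(n == grid[x][y] for n in nbrs) / len(nbrs) < THRESHOLD

# Populate the grid with two types of agents and some empty cells.
grid = [[None if random.random() < EMPTY else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]

for step in range(50):
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
              if grid[x][y] is not None and unhappy(grid, x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE)
               if grid[x][y] is None]
    random.shuffle(movers)
    for (x, y) in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empties.append((x, y))

# Crude segregation measure: average share of like-type neighbors.
scores = [sum(n == grid[x][y] for n in neighbors(grid, x, y)) / len(neighbors(grid, x, y))
          for x in range(SIZE) for y in range(SIZE)
          if grid[x][y] is not None and neighbors(grid, x, y)]
print(f"average share of like neighbors: {sum(scores) / len(scores):.2f}")
```

Nothing in the rules mentions clusters; the clustering is generated, in Epstein's sense, by the interaction of many agents each following a local rule.
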
Consider an analogy with cooking. The properties of the cake are generated by the properties of the ingredients, their chemical properties, and the sequence of steps that are applied to the assemblage of the mixture from the mixing bowl to the oven to the cooling board. The final characteristics of the cake are simply the consequence of the chemistry of the ingredients and the series of physical influences that were applied in a given sequence.

Now consider the concept of a complex system. A complex system is one in which there is a multiplicity of causal factors contributing to the dynamics of the system, in which there are causal interactions among the underlying causal factors, and in which causal interactions are often non-linear. Non-linearity is important here, because it implies that a small change in one or more factors may lead to very large changes in the outcome. We like to think of causal systems as consisting of causal factors whose effects are independent of each other and whose influence is linear and additive.

A gardener is justified in thinking of growing tomatoes in this way: a little more fertilizer, a little more water, and a little more sunlight each lead to a little more tomato growth. But imagine a garden in which the effect of fertilizer on tomato growth is dependent on the recent gradient of water provision, and the effects of both positive influencers depend substantially on the recent amount of sunlight available. Under these circumstances it is difficult to predict the aggregate size of the tomato given information about the quantities of the inputs.

One of the key insights of complexity science is that generativity is fully compatible with a wicked level of complexity. The tomato's size is generated by its history of growth, determined by the sequence of inputs over time. But for the reason just mentioned, the complexity of interactions between water, sunlight, and fertilizer in their effects on growth mean that the overall dynamics of tomato growth are difficult to reconstruct.
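
To see why interaction effects defeat simple prediction, compare an additive model of growth with one in which the same inputs interact. Both functions below are invented purely for illustration: in the additive case a small change in fertilizer produces a proportionally small change in the outcome, while in the interactive case the same small change can be amplified or nearly nullified by the recent history of water and sunlight.

```python
# Two invented growth models for the tomato example: one additive, one with
# interaction terms. The same small change in fertilizer has a predictable
# effect in the first and a history-dependent effect in the second.

def additive(water, sun, fert):
    return 2.0 * water + 1.5 * sun + 1.0 * fert

def interactive(water, sun, fert):
    # fertilizer only helps when watering has been adequate, and both
    # effects are scaled non-linearly by available sunlight
    return (2.0 * water + 1.0 * fert * min(water, 1.0)) * sun ** 2

for inputs in [(1.0, 1.0, 1.0), (1.0, 1.0, 1.2), (0.4, 0.7, 1.2)]:
    w, s, f = inputs
    print(inputs, round(additive(w, s, f), 2), round(interactive(w, s, f), 2))
```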

Now consider the idea of strong emergence -- the idea that some aggregates possess properties that cannot in principle be explained by reference to the causal properties of the constituents of the aggregate. This means that the properties of the aggregate are not generated by the workings of the constituents; otherwise we would be able in principle to explain the properties of the aggregate by demonstrating how they derive from the (complex) pathways leading from the constituents to the aggregate. This version of the absolute autonomy of some higher-level properties is inherently mysterious. It implies that the aggregate does not supervene upon the properties of the constituents; there could be different aggregate properties with identical constituent properties. And this seems ontologically untenable.

The idea of ontological individualism captures this intuition in the setting of social phenomena: social entities are ultimately composed of and constituted by the properties of the individuals who make them up, and nothing else. This does not imply methodological individualism; for reasons of complexity or computational limitations it may be practically impossible to reconstruct the pathways through which the social entity is generated out of the properties of individuals. But ontological individualism places an ontological constraint on the way that we conceptualize the social world. And it gives a concrete meaning to the idea of the microfoundations for a social entity. The microfoundations of a social entity are the pathways and mechanisms, known or unknown, through which the social entity is generated by the actions and intentionality of the individuals who constitute it.

Monday, May 7, 2018

What the boss wants to hear ...


According to David Halberstam in his outstanding history of the war in Vietnam, The Best and the Brightest, a prime cause of disastrous decision-making by Presidents Kennedy and Johnson was an institutional imperative in the Defense Department to come up with a set of facts that conformed to what the President wanted to hear. Robert McNamara and McGeorge Bundy were among the highest-level miscreants in Halberstam's account; they were determined to craft an assessment of the situation on the ground in Vietnam that conformed best with their strategic advice to the President.

Ironically, a very similar dynamic led to one of modern China's greatest disasters, the Great Leap Forward famine in 1959. The Great Helmsman was certain that collective agriculture would be vastly more productive than private agriculture; and following the collectivization of agriculture, party officials in many provinces obliged this assumption by reporting inflated grain statistics throughout 1958 and 1959. The result was a famine that led to at least twenty million excess deaths during a two-year period as the central state shifted resources away from agriculture (Frank Dikötter, Mao's Great Famine: The History of China's Most Devastating Catastrophe, 1958-62).

More mundane examples are available as well. When information about possible sexual harassment in a given department is suppressed because "it won't look good for the organization" and "the boss will be unhappy", the organization is on a collision course with serious problems. When concerns about product safety or reliability are suppressed within the organization for similar reasons, the results can be equally damaging, to consumers and to the corporation itself. General Motors, Volkswagen, and Michigan State University all seem to have suffered from these deficiencies of organizational behavior. This is a serious cause of organizational mistakes and failures. It is impossible to make wise decisions -- individual or collective -- without accurate and truthful information from the field. And yet the knowledge of higher-level executives depends upon the truthful and full reporting of subordinates, who sometimes have career incentives that work against honesty.

So how can this unhappy situation be avoided? Part of the answer has to do with the behavior of the leaders themselves. It is important for leaders to explicitly and implicitly invite the truth -- whether it is good news or bad news. Subordinates must be encouraged to be forthcoming and truthful; and bearers of bad news must not be subject to retaliation. Boards of directors, both private and public, need to make clear their own expectations on this score as well: that they expect leading executives to invite and welcome truthful reporting, and that they expect individuals throughout the organization to provide truthful reporting. A culture of honesty and transparency is a powerful antidote to the disease of fabrications to please the boss.

Anonymous hotlines and formal protection of whistle-blowers are other institutional arrangements that lead to greater honesty and transparency within an organization. These avenues have the advantage of being largely outside the control of the upper executives, and therefore can serve as a somewhat independent check on dishonest reporting.

A reliable practice of accountability is also a deterrent to dishonest or partial reporting within an organization. The truth eventually comes out -- whether about sexual harassment, about hidden defects in a product, or about workplace safety failures. When boards of directors and organizational policies make it clear that there will be negative consequences for dishonest behavior, this creates an ongoing prudential incentive for individuals to honor their duties of honesty within the organization.

This topic falls within the broader question of how individual behavior throughout an organization has the potential for giving rise to important failures that harm the public and harm the organization itself.


Monday, April 23, 2018

Regulatory failure


When we think of the issues of health and safety that exist in a modern complex economy, it is impossible to imagine that these social goods will be produced in sufficient quantity and quality by market forces alone. Safety and health hazards are typically regarded as "externalities" by private companies -- if they can be "dumped" on the public without cost, this is good for the profitability of the company. And state regulation is the appropriate remedy for this tendency of a market-based economy to chronically produce hazards and harms, whether in the form of environmental pollution, unsafe foods and drugs, or unsafe industrial processes. David Moss and John Cisternino's New Perspectives on Regulation provides some genuinely important perspectives on the role and effectiveness of government regulation in an epoch which has been shaped by virulent efforts to reduce or eliminate regulations on private activity. This volume is a report from the Tobin Project.

It is poignant to read the optimism that the editors and contributors have -- in 2009 -- about the resurgence of support for government regulation. The financial crisis of 2008 had stimulated a vigorous round of regulation of financial institutions, and most of the contributors took this as a harbinger of a fresh public support for regulation more generally. Of course events have shown this confidence to be sadly mistaken; the dismantling of Federal regulatory regimes by the Trump administration threatens to take the country back to the period described by Upton Sinclair in the early part of the prior century. But what this demonstrates is the great importance of the Tobin Project. We need to build a public understanding and consensus around the unavoidable necessity of effective and pervasive regulatory regimes in environment, health, product safety, and industrial safety.

Here is how Mitchell Weiss, Executive Director of the Tobin Project, describes the project culminating in this volume:
To this end, in the fall of 2008 the Tobin Project approached leading scholars in the social sciences with an unusual request: we asked them to think about the topic of economic regulation and share key insights from their fields in a manner that would be accessible to both policymakers and the public. Because we were concerned that a conventional literature survey might obscure as much as it revealed, we asked instead that the writers provide a broad sketch of the most promising research in their fields pertaining to regulation; that they identify guiding principles for policymakers wherever possible; that they animate these principles with concrete policy proposals; and, in general, that they keep academic language and footnotes to a minimum. (5)
The lead essay is provided by Joseph Stiglitz, who looks more closely than previous decades of economists had done at the real consequences of market failure. Stiglitz puts the point about market failure very crisply:
Only under certain ideal circumstances may individuals, acting on their own, obtain “pareto efficient” outcomes, that is, situations in which no one can be made better off without making another worse off. These individuals involved must be rational and well informed, and must operate in competitive marketplaces that encompass a full range of insurance and credit markets. In the absence of these ideal circumstances, there exist government interventions that can potentially increase societal efficiency and/or equity. (11)
And regulation is unpopular -- with the businesses, landowners, and other powerful agents whose actions are constrained.
By its nature, a regulation restricts an individual or firm from doing what it otherwise would have done. Those whose behavior is so restricted may complain about, say, their loss of profits and potential adverse effects on innovation. But the purpose of government intervention is to address potential consequences that go beyond the parties directly involved, in situations in which private profit is not a good measure of social impact. Appropriate regulation may even advance welfare-enhancing innovations. (13)
Stiglitz pays attention to the pervasive problem of "regulatory capture":
The current system has made regulatory capture too easy. The voices of those who have benefited from lax regulation are strong; the perspectives of the investment community have been well represented. Among those whose perspectives need to be better represented are the laborers whose jobs would be lost by macro-mismanagement, and the pension holders whose pension funds would be eviscerated by excessive risk taking.

One of the arguments for a financial products safety commission, which would assess the efficacy and risks of new products and ascertain appropriate usage, is that it would have a clear mandate, and be staffed by people whose only concern would be protecting the safety and efficacy of the products being sold. It would be focused on the interests of the ordinary consumer and investors, not the interests of the financial institutions selling the products. (18)
It is very interesting to read Stiglitz's essay with attention to the economic focus he offers. His examples all come from the financial industry -- the risk at hand in 2008-2009. But the arguments apply equally profoundly to manufacturing, the pharmaceutical and food industries, energy industries, farming and ranching, and the for-profit education sector. At the same time the institutional details are different, and an essay on this subject with a focus on nuclear or chemical plants would probably identify a different set of institutional barriers to effective regulation.

Also particularly interesting is the contribution by Michael Barr, Eldar Shafir, and Sendhil Mullainathan on how behavioral perspectives on "rational action" can lead to more effective regulatory regimes. This essay pays close attention to the findings of experimental economics and behavioral economics, and the deviations from "pure economic rationality" that are pervasive in ordinary economic decision making. These features of decision-making are likely to be relevant to the effectiveness of a regulatory regime as well. Further, it suggests important areas of consumer behavior that are particularly subject to exploitative practices by financial companies -- creating a new need for regulation of these kinds of practices. Here is how they summarize their approach:
We propose a different approach to regulation. Whereas the classical perspective assumes that people generally know what is important and knowable, plan with insight and patience, and carry out their plans with wisdom and self-control, the central gist of the behavioral perspective is that people often fail to know and understand things that matter; that they misperceive, misallocate, and fail to carry out their intended plans; and that the context in which people function has great impact on their behavior, and, consequently, merits careful attention and constructive work. In our framework, successful regulation requires integrating this richer view of human behavior with our understanding of markets. Firms will operate on the contour defined by this psychology and will respond strategically to regulations. As we describe above, because firms have a great deal of latitude in issue framing, product design, and so on, they have the capacity to affect behavior and circumvent or pervert regulatory constraints. Ironically, firms’ capacity to do so is enhanced by their interaction with “behavioral” consumers (as opposed to the hypothetically rational actors of neoclassical economic theory), since so many of the things a regulator would find very hard to control (for example, frames, design, complexity, etc.) can greatly influence consumers’ behavior. The challenge of behaviorally informed regulation, therefore, is to be well designed and insightful both about human behavior and about the behaviors that firms are likely to exhibit in response to both consumer behavior and regulation. (55)
The contributions to this volume are very suggestive with regard to the issues of product safety, manufacturing safety, food and drug safety, and the like which constitute the larger core of the need for regulatory regimes. And the challenges faced in the areas of financial regulation discussed here are likely to be found to be illuminating in other sectors as well.

Thursday, April 5, 2018

Empowering the safety officer?


How can industries involving processes that create large risks of harm for individuals or populations be modified so they are more capable of detecting and eliminating the precursors of harmful accidents? How can nuclear accidents, aviation crashes, chemical plant explosions, and medical errors be reduced, given that each of these activities involves large bureaucratic organizations conducting complex operations and with substantial inter-system linkages? How can organizations be reformed to enhance safety and to minimize the likelihood of harmful accidents?

One of the lessons learned from the Challenger space shuttle disaster is the importance of a strongly empowered safety officer in organizations that deal in high-risk activities. This means the creation of a position dedicated to ensuring safe operations that falls outside the normal chain of command. The idea is that the normal decision-making hierarchy of a large organization has a built-in tendency to maintain production schedules and avoid costly delays. In other words, there is a built-in incentive to treat safety issues with lower priority than most people would expect.

If there had been an empowered safety officer in the launch hierarchy for the Challenger launch in 1986, there is a good chance this officer would have listened more carefully to the Morton-Thiokol engineering team's concerns about low-temperature damage to O-rings and would have ordered a halt to the launch sequence until temperatures in Florida rose to the critical value. The Rogers Commission faulted the decision-making process leading to the launch decision in its final report on the accident (The Report of the Presidential Commission on the Space Shuttle Challenger Accident - The Tragedy of Mission 51-L in 1986 - Volume One, Volume Two, Volume Three).

This approach is productive because empowering a safety officer creates a different set of interests in the management of a risky process. The safety officer's interest is in safety, whereas other decision makers are concerned about revenues and costs, public relations, reputation, and other instrumental goods. So a dedicated safety officer is empowered to raise safety concerns that other officers might be hesitant to raise. Ordinary bureaucratic incentives may lead to underestimating risks or concealing faults; so lowering the accident rate requires giving some individuals the incentive and power to act effectively to reduce risks.

Similar findings have emerged in the study of medical and hospital errors. It has been recognized that high-risk activities are made less risky by empowering all members of the team to call a halt in an activity when they perceive a safety issue. When all members of the surgical team are empowered to halt a procedure when they note an apparent error, serious operating-room errors are reduced. (Here is a report from the American College of Obstetricians and Gynecologists on surgical patient safety; link. And here is a 1999 National Academy report on medical error; link.)

The effectiveness of a team-based approach to safety depends on one central fact. There is a high level of expertise embodied in the staff operating a surgical suite, an engineering laboratory, or a drug manufacturing facility. Empowering these individuals to stop a procedure when they judge that an unrecognized error is in play greatly extends the amount of embodied knowledge brought to bear on the process. The surgeon, the commanding officer, or the lab director is no longer the sole expert whose judgments count.

But it also seems clear that these innovations don't work equally well in all circumstances. Take nuclear power plant operations. In Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima James Mahaffey documents multiple examples of nuclear accidents that resulted from the efforts of mid-level workers to address an emerging problem in an improvised way. In the case of nuclear power plant safety, it appears that the best prescription for safety is to insist on rigid adherence to pre-established protocols. In this case the function of a safety officer is to monitor operations to ensure protocol conformance -- not to exercise independent judgment about the best way to respond to an unfavorable reactor event.

It is in fact an interesting exercise to try to identify the kinds of operations in which these innovations are likely to be effective.

Here is a fascinating interview in Slate with Jim Bagian, a former astronaut, one-time director of the Veterans Administration's National Center for Patient Safety, and distinguished safety expert; link. Bagian emphasizes the importance of taking a system-based approach to safety. Rather than focusing on finding blame for specific individuals whose actions led to an accident, Bagian emphasizes the importance of tracing back to the institutional, organizational, or logistic background of the accident. What can be changed in the process -- of delivering medications to patients, of fueling a rocket, or of moving nuclear solutions around in a laboratory -- that makes the likelihood of an accident substantially lower? (Here is a co-authored piece by Bagian and others on the topic of team-based patient safety in the operating room; link.)

The safety principles involved here seem fairly simple: cultivate a culture in which errors and near-misses are reported and investigated without blame; empower individuals within risky processes to halt the process if their expertise and experience indicates the possibility of a significant risky error; create roles within organizations whose occupants' interests are defined in terms of the identification and resolution of unsafe practices or conditions; and share information about safety within the industry and with the public.

Sunday, March 25, 2018

Mechanisms, singular and general


Let's think again about the semantics of causal ascriptions. Suppose that we want to know what caused a building crane to collapse during a windstorm. We might arrive at an account something like this:
  • An unusually heavy gust of wind at 3:20 pm, in the presence of this crane's specific material and structural properties, with the occurrence of the operator's effort to adjust the crane's extension at 3:21 pm, brought about cascading failures of structural elements of the crane, leading to collapse at 3:25 pm.
The process described here proceeds from the "gust of wind striking the crane" through an account of the material and structural properties of the device, incorporating the untimely effort by the operator to readjust the device's extension, leading to a cascade from small failures to a large failure. And we can identify the features of causal necessity that were operative at the several links of the chain.

Notice that there are few causal regularities or necessary and constant conjunctions in this account. Wind does not usually bring about the collapse of cranes; if the operator's intervention had occurred a few minutes earlier or later, perhaps the failure would not have occurred; and small failures do not always lead to large failures. Nonetheless, in the circumstances described here there is causal necessity extending from the antecedent situation at 3:15 pm to the full catastrophic collapse at 3:25 pm.

Does this narrative identify a causal mechanism? Are we better off describing this as a series of cause-effect sequences, none of which represents a causal mechanism per se? Or, on the contrary, can we look at the whole sequence as a single causal mechanism -- though one that is never to be repeated? Does a causal mechanism need to be a recurring and robust chain of events, or can it be a highly unique and contingent chain?

Most mechanisms theorists insist on a degree of repeatability in the sequences that they describe as "mechanisms". A causal mechanism is the triggering pathway through which one event leads to the production of another event in a range of circumstances in an environment. Fundamentally a causal mechanism is a "molecule" of causal process which can recur in a range of different social settings.

For example:
  • X typically brings about O.
Whenever X occurs, with the appropriate timing and in the appropriate circumstances, the outcome O is produced. This ensemble of events {X, O} constitutes a single mechanism.

And here is the crucial point: to call this a mechanism requires that this sequence recurs in multiple instances across a range of background conditions.

This suggests an answer to the question about the collapsing crane: the sequence from gust to operator error to crane collapse is not a mechanism, but is rather a unique causal sequence. Each part of the sequence has a causal explanation available; each conveys a form of causal necessity in the circumstances. But the aggregation of these cause-effect connections falls short of constituting a causal mechanism because the circumstances in which it works are all but unique. A satisfactory causal explanation of the internal cause-effect pairs will refer to real repeatable mechanisms -- for example, "twisting a steel frame leads to a loss of support strength". But the concatenation does not add up to another, more complex, mechanism.

Contrast this with "stuck valve" accidents in nuclear power reactors. Valves control the flow of cooling fluids around the critical fuel. If the fuel is deprived of coolant it rapidly overheats and melts. A "stuck valve-loss of fluid-critical overheating" sequence is a recognized mechanism of nuclear meltdown, and has been observed in a range of nuclear-plant crises. It is therefore appropriate to describe this sequence as a genuine causal mechanism in the creation of a nuclear plant failure.

(Stuart Glennan takes up a similar question in "Singular and General Causal Relations: A Mechanist Perspective"; link.)

Friday, March 23, 2018

Machine learning


The Center for the Study of Complex Systems at the University of Michigan hosted an intensive day-long training on some of the basics of machine learning for graduate students and interested faculty and staff. Jake Hofman, a Microsoft researcher who also teaches this subject at Columbia University, was the instructor, and the session was both rigorous and accessible (link). Participants were asked to load a copy of R, a software package designed for the computations involved in machine learning and applied statistics, and numerous data sets were used as examples throughout the day. (Here is a brief description of R; link.) Thanks, Jake, for an exceptionally stimulating workshop.

So what is machine learning? Most crudely, it is a handful of methods through which researchers can sift through a large collection of events or objects, each of which has a very large number of properties, in order to arrive at a predictive sorting of the events or objects into a set of categories. The objects may be email texts or hand-printed numerals (the examples offered in the workshop), the properties may be the presence/absence of a long list of words or the presence of a mark in a bitmap grid, and the categories may be "spam/not spam" or the numerals between 0 and 9. But equally, the objects may be Facebook users, the properties "likes/dislikes" for a very large list of webpages, and the categories "Trump voter/Clinton voter". There is certainly a lot more to machine learning -- for example, these techniques don't shed light on the ways that AI Go systems improve their play. But it's good to start with the basics. (Here is a simple presentation of the basics of machine learning; link.)

Two intuitive techniques form the core of basic machine learning theory. The first uses measured conditional probabilities, in conjunction with Bayes' theorem, to assign a probability that an object belongs to category Phi given the presence of properties xi. The second uses massively multi-factor regressions to calculate a probability that an event belongs to category Phi given the regression coefficients ci.
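As a hedged illustration of the first technique, here is a minimal naive Bayes classifier written from scratch on an invented four-message training set; it is a sketch of the general idea, not the code used in the workshop. It estimates P(word | class) from counts, applies Bayes' theorem with the usual conditional-independence simplification, and scores a new message as "spam" or "ham".

import math
from collections import Counter

# Minimal naive Bayes text classifier; the training data are invented toy examples.
train = [
    ("win cash prize now", "spam"),
    ("cheap meds win now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

def fit(examples):
    class_counts = Counter(label for _, label in examples)
    word_counts = {label: Counter() for label in class_counts}
    vocab = set()
    for text, label in examples:
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict(text, class_counts, word_counts, vocab):
    total = sum(class_counts.values())
    scores = {}
    for label, n in class_counts.items():
        # log P(class) plus the sum of log P(word | class), with Laplace smoothing.
        score = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            if w in vocab:
                score += math.log((word_counts[label][w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

if __name__ == "__main__":
    model = fit(train)
    print(predict("win a cash prize", *model))             # expected: spam
    print(predict("agenda for the team meeting", *model))  # expected: ham

A real spam filter does the same thing with a vocabulary of many thousands of words and a training corpus of millions of messages, but the logic of the calculation is unchanged.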

Another basic technique is to treat the classification problem spatially. Use the large number of variables to define an n-dimensional space; then classify the object according to the average or majority value of its m-closest neighbors. (The neighbor number m might range from 1 to some manageable number such as 10.)
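A correspondingly minimal sketch of the spatial technique, again on invented two-dimensional toy points: represent each object as a feature vector, compute distances to the labeled examples, and take a majority vote among the m nearest neighbors (m=3 here, an arbitrary choice).

import math
from collections import Counter

# Minimal m-nearest-neighbor classifier on invented two-dimensional toy data.
labeled = [
    ((1.0, 1.2), "A"), ((0.8, 0.9), "A"), ((1.3, 1.1), "A"),
    ((4.0, 4.2), "B"), ((3.8, 3.9), "B"), ((4.3, 4.1), "B"),
]

def classify(point, examples, m=3):
    # Sort labeled examples by Euclidean distance to the query point,
    # then let the m closest examples vote on the category.
    by_distance = sorted(examples, key=lambda ex: math.dist(point, ex[0]))
    votes = Counter(label for _, label in by_distance[:m])
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    print(classify((1.1, 1.0), labeled))  # expected: A
    print(classify((3.9, 4.0), labeled))  # expected: B

In realistic applications the "space" has thousands of dimensions -- one per word, pixel, or liked page -- but the classification rule is the same majority vote among nearby labeled cases.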


There are many issues of methodology and computational technique raised by this approach to knowledge. But these are matters of technique, and smart data science researchers have made great progress on them. More interesting here are epistemological issues: how good and how reliable are the findings produced by these approaches to the algorithmic treatment of large data sets? How good is the spam filter or the Trump voter detector when applied to novel data sets? What kind of errors would we anticipate this approach to be vulnerable to?

One important observation is that these methods are explicitly anti-theoretical. There is no place for discovery of causal mechanisms or underlying explanatory processes in these calculations. The researcher is not expected to provide a theoretical hypothesis about how this system of phenomena works. Rather, the techniques are entirely devoted to the discovery of persistent statistical associations among variables and the categories of the desired sorting. This is as close to Baconian induction as we get in the sciences (link). The approach is concerned about classification and prediction, not explanation. (Here is an interesting essay where Jake Hofman addresses the issues of prediction versus explanation of social data; link.)

A more specific epistemic concern that arises is the possibility that the training set of data may have had characteristics that are importantly different from comparable future data sets. This is the familiar problem of induction: will the future resemble the past sufficiently to support predictions based on past data? Spam filters developed in one email community may work poorly in an email community in another region or profession. We can label this as the problem of robustness.

Another limitation of this approach has to do with problems where our primary concern is with a singular event or object rather than a population. If we want to know whether NSA employee John Doe is a Russian mole, it isn't especially useful to know that his nearest neighbors in a multi-dimensional space of characteristics are moles; we need to know more specifically whether Doe himself has been corrupted by the Russians. If we want to know whether North Korea will explode a nuclear weapon against a neighbor in the next six months the techniques of machine learning seem to be irrelevant.

The statistical and computational tools of machine learning are indeed powerful, and seem to lead to results that are both useful and sometimes surprising. One should not imagine, however, that machine learning is a replacement for all other forms of research methodology in the social and behavioral sciences.

(Here is a brief introduction to a handful of the algorithms currently in use in machine-learning applications; link.)