Friday, December 14, 2018

The mind of government


We often speak of government as if it has intentions, beliefs, fears, plans, and phobias. This sounds a lot like a mind. But this impression is fundamentally misleading. "Government" is not a conscious entity with a unified apperception of the world and its own intentions. So it is worth teasing out the ways in which government nonetheless arrives at "beliefs", "intentions", and "decisions".

Let's first address the question of the mythical unity of government. In brief, government is not one unified thing. Rather, it is an extended network of offices, bureaus, departments, analysts, decision-makers, and authority structures, each of which has its own reticulated internal structure.

This has an important consequence. Instead of asking "what is the policy of the United States government towards Africa?", we are driven to ask subordinate questions: what are the policies towards Africa of the State Department, the Department of Defense, the Department of Commerce, the Central Intelligence Agency, or the Agency for International Development? And we are forced to recognize that each of these departments is itself a large bureaucracy, with sub-units that have chosen or adapted their own working policy objectives and priorities. There are chief executives at a range of levels -- President of the United States, Secretary of State, Secretary of Defense, Director of CIA -- and each often has the aspiration of directing his or her organization as a tightly unified and purposive unit. But it is perfectly plain that the behavior of functional units within agencies is only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory.

This said, organizational units at all levels arrive at something analogous to beliefs (assessments of fact and probable future outcomes), assessments of priorities and their interactions, plans, and decisions (actions to take in the near and intermediate future). And governments make decisions at the highest level (leave the EU, raise taxes on fuel, prohibit immigration from certain countries, ...). How does the analytical and factual part of this process proceed? And how does the decision-making part unfold?

One factor is particularly evident in the current political environment in the United States. Sometimes the analysis and decision-making activities of government are short-circuited and taken by individual executives without an underlying organizational process. A president arrives at his view of the facts of global climate change based on his "gut instincts" rather than an objective and disinterested assessment of the scientific analysis available to him. An Administrator of the EPA acts to eliminate long-standing environmental protections based on his own particular ideological and personal interests. A Secretary of the Department of Energy takes leadership of the department without requesting a briefing on any of its current projects. These are instances of the dictator strategy (in the social-choice sense), where a single actor substitutes his will for the collective aggregation of beliefs and desires associated with both bureaucracy and democracy. In this instance the answer to our question is a simple one: in cases like these government has beliefs and intentions because particular actors have beliefs and intentions and those actors have the power and authority to impose their beliefs and intentions on government.

The more interesting cases involve situations where there is a genuine collective process through which analysis and assessment take place (of facts and priorities), and through which strategies are considered and ultimately adopted. Agencies usually make decisions through extended and formalized processes. There is generally an organized process of fact gathering and scientific assessment, followed by an assessment of various policy options with public exposure. Finally, a policy is adopted (the moment of decision).

The decision by the EPA to ban DDT in 1972 is illustrative (link, link, link). This was a decision of government which thereby became the will of government. It was the result of several important sub-processes: citizen and NGO activism about the possible toxic harms created by DDT, non-governmental scientific research assessing the toxicity of DDT, an internal EPA process designed to assess the scientific conclusions about the environmental and human-health effects of DDT, an analysis of the competing priorities involved in this issue (farming, forestry, and malaria control versus public health), and a recommendation to the Administrator, ultimately adopted, concluding that the priority of public health and environmental safety was weightier than the economic interests served by the use of the pesticide.

Other examples of agency decision-making follow a similar pattern. The development of policy concerning science and technology is particularly interesting in this context. Consider, for example, Susan Wright (link) on the politics of regulation of recombinant DNA. This issue is explored more fully in her book Molecular Politics: Developing American and British Regulatory Policy for Genetic Engineering, 1972-1982. This is a good case study of "government making up its mind". Another interesting case study is the development of US policy concerning ozone depletion; link.

These cases of science and technology policy illustrate two dimensions of the processes through which a government agency "makes up its mind" about a complex issue. There is an analytical component in which the scientific facts and the policy goals and priorities are gathered and assessed. And there is a decision-making component in which these analytical findings are crafted into a decision -- a policy, a set of regulations, or a funding program, for example. It is routine in science and technology policy studies to observe that there is commonly a substantial degree of intertwining between factual judgments and political preferences and influences brought to bear by powerful outsiders. (Here is an earlier discussion of these processes; link.)

Ideally we would like to imagine a process of government decision-making that proceeds along these lines: careful gathering and assessment of the best available scientific evidence about an issue through expert specialist panels and sections; careful analysis of the consequences of available policy choices measured against a clear understanding of goals and priorities of the government; and selection of a policy or action that is best, all things considered, for forwarding the public interest and minimizing public harms. Unfortunately, as the experience of government policies concerning climate change in both the Bush administration and the Trump administration illustrates, ideology and private interest distort every phase of this idealized process.

(Philip Tetlock's Superforecasting: The Art and Science of Prediction offers an interesting analysis of the process of expert factual assessment and prediction. Particularly interesting is his treatment of intelligence estimates.)

Monday, December 3, 2018

Is corruption a social thing?


When we discuss the ontology of various aspects of the social world, we are often thinking of such things as institutions, organizations, social networks, value systems, and the like. These examples pick out features of the world that are relatively stable and functional. Where does an imperfection or dysfunction of social life like corruption fit into our social ontology?

We might say that “corruption” is a descriptive category that is aimed at capturing a particular range of behavior, like stealing, gossiping, or asceticism. This makes corruption a kind of individual behavior, or even a characteristic of some individuals. “Mayor X is corrupt.”

This initial effort does not seem satisfactory, however. The idea of corruption is tied to institutions, roles, and rules in a very direct way, and therefore we cannot really present the concept accurately without articulating these institutional features of the concept of corruption. Corruption might be paraphrased in these terms:
  • Individual X plays a role Y in institution Z; role Y prescribes honest and impersonal performance of duties; individual X accepts private benefits to take actions that are contrary to the prescriptions of Y. In virtue of these facts X behaves corruptly.
Corruption, then, involves actions taken by officials that deviate from the rules governing their role, in order to receive private benefits from the subjects of those actions. Absent the rules and role, corruption cannot exist. So corruption is a feature that presupposes certain social facts about institutions. (Perhaps there is a link to Searle’s social ontology here; link.)

We might consider that corruption is analogous to friction in physical systems. Friction is a factor that affects the performance of virtually all mechanical systems, but that is a second-order factor within classical mechanics. And it is possible to give mechanical explanations of the ubiquity of friction, in terms of the geometry of adjoining physical surfaces, the strength of inter-molecular attractions, and the like. Analogously, we can offer theories of the frequency with which corruption occurs in organizations, public and private, in terms of the interests and decision-making frameworks of variously situated actors (e.g. real estate developers, land value assessors, tax assessors, zoning authorities …). Developers have a business interest in favorable rulings from assessors and zoning authorities; some officials have an interest in accepting gifts and favors to increase personal income and wealth; each makes an estimate of the likelihood of detection and punishment; and a certain rate of corrupt exchanges is the result.
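The logic of this frictional account can be made explicit with a toy calculation. Here is a minimal sketch in Python, with invented parameters, of the expected-utility reasoning just described: each official weighs the offered private benefit against the expected cost of detection, and the aggregate rate of corrupt exchanges falls out of the distribution of those calculations.

    import random

    def corruption_rate(n_officials=10_000,
                        detection_prob=0.10,  # assumed probability a corrupt act is detected
                        penalty=50_000,       # assumed cost of detection (fines, lost career)
                        seed=42):
        """Fraction of officials who accept a bribe, where each official accepts
        when the offered private benefit exceeds the expected penalty."""
        rng = random.Random(seed)
        corrupt = 0
        for _ in range(n_officials):
            bribe = rng.uniform(0, 20_000)        # heterogeneous bribe offers
            if bribe > detection_prob * penalty:  # expected-utility comparison
                corrupt += 1
        return corrupt / n_officials

    # The "frictional" rate of corruption responds to system-level parameters:
    for p in (0.05, 0.10, 0.25):
        print(f"detection prob {p:.2f} -> corruption rate {corruption_rate(detection_prob=p):.2%}")

Raising the detection probability or the penalty lowers the resulting rate, which anticipates the institutional point made in the next paragraph.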

This line of thought once again makes corruption a feature of the actors and their calculations. But it is important to note that organizations themselves have features that make corrupt exchanges either more likely or less likely (link, link). Some organizations are corruption-resistant in ways in which others are corruption-neutral or corruption-enhancing. These features include internal accounting and auditing procedures; whistle-blowing practices; executive and supervisor vigilance; and other organizational features. Further, governments and systems of law can make arrangements that discourage corruption; the incidence of corruption is influenced by public policy. For example, legal requirements on transparency in financial practices by firms, investment in investigatory resources in oversight agencies, and weighty penalties to companies found guilty of corrupt practices can affect the incidence of corruption. (Robert Klitgaard’s treatment of corruption is relevant here; he provides careful analysis of some of the institutional and governmental measures that can be taken that discourage corrupt practices; link, link. And there are cross-country indices of corruption (e.g. Transparency International) that demonstrate the causal effectiveness of anti-corruption measures at the state level. Finland, Norway, and Switzerland rank well on the Transparency International index.)

So -- is corruption a thing? Does corruption need to be included in a social ontology? Does a realist ontology of government and business organization have a place for corruption? Yes, yes, and yes. Corruption is a real property of individual actors’ behavior, observable in social life. It is a consequence of strategic rationality by various actors. Corruption is a social practice with its own supporting or inhibiting culture. Some organizations effectively espouse a core set of values of honesty and correct performance that make corruption less frequent. And corruption is a feature of the design of an organization or bureau, analogous to “mean-time-between-failure” as a feature of a mechanical design. Organizations can adopt institutional protections and cultural commitments that minimize corrupt behavior, while other organizations fail to do so and thereby encourage corrupt behavior. So “corruption-vulnerability” is a real feature of organizations and corruption has a social reality.

Saturday, December 1, 2018

Exercising government's will


Since the beginning of the industrial age, regulation of private activity for the public good has been essential for the health and safety of the public. The economics of externalities and public harms are too powerful to permit private actors to conduct their affairs purely according to the dictates of profit and private interest. The desolation of the River Irk described in Engels' The Condition of the Working-Class in England in 1844 was powerful evidence of this dynamic in the nineteenth century, and the need for the protection of health and safety in the food industry, the protection of air and water quality, and the establishment of regulations ensuring safe operation of industrial, chemical, and nuclear plants became evident in the middle of the twentieth century. (Of course it goes without saying that our current administration no longer concedes this point.)

A fundamental problem for understanding the mechanics of government is the question of how the will and intentions of government (policies and regulatory regimes) are conveyed from the sites of decision-making to the behavior of the actors whom these policies are meant to influence.

The familiar principal-agent problem designates precisely this complex of issues. Applying a government policy or regulation requires a chain of behaviors by multiple agents within an extended network of governmental and non-governmental offices. It is all too evident that actors at various levels have interests and intentions that are important to their choices; and blind obedience to commands from above is not a common practice within any organization. Instead, actors within an office or bureau have some degree of freedom to act strategically with regard to their own preferences and interests. What, then, are the arrangements that the principal can put in place that make conformance by the agent more complete?
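The structure of the problem can be put in a few lines. The sketch below is a toy expected-utility model with invented payoffs, not a formulation drawn from the principal-agent literature: the agent compares the cost of compliance against the gain from shirking, discounted by the principal's monitoring arrangements.

    def agent_best_response(compliance_cost, shirk_gain, monitor_prob, sanction):
        """The agent's preferred action under simple expected-utility reasoning.

        compliance_cost: effort cost of faithfully executing the policy
        shirk_gain:      private benefit of pursuing the agent's own agenda
        monitor_prob:    probability the principal detects shirking
        sanction:        penalty imposed on a detected shirker
        """
        payoff_comply = -compliance_cost
        payoff_shirk = shirk_gain - monitor_prob * sanction
        return "comply" if payoff_comply >= payoff_shirk else "shirk"

    # Weak monitoring invites shirking; stronger arrangements flip the best response.
    print(agent_best_response(compliance_cost=2, shirk_gain=5, monitor_prob=0.1, sanction=10))  # shirk
    print(agent_best_response(compliance_cost=2, shirk_gain=5, monitor_prob=0.5, sanction=20))  # comply

On this rendering, the principal's levers are exactly the monitoring probability and the sanction; the next paragraph complicates the picture by adding strategic outsiders.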

Further, there are commonly a range of non-governmental entities and actors who are affected by governmental policies and regulations. They too have the ability to act strategically in consideration of their preferences and interests. And some of the actions that are available to non-governmental actors have the capacity to significantly influence the impact and form of various governmental policies and regulations. The corporations that own nuclear power plants, for example, have an ability to constrain and deflect the inspection schedules to which their properties are subject through influence on legislators, and the regulatory agency may be seriously hampered in its ability to apply existing safety regulations.

This is a problem of social ontology: what kind of thing is a governmental agency, how does it work internally, and through what kinds of mechanisms does it influence the world around it (firms, criminals, citizens, local government, …)?

Two related ideas about the nature of organizations are relevant in this context. The idea of organizations as “strategic action fields” that is developed by Fligstein and McAdam (A Theory of Fields) fits the situation of a governmental agency. And the earlier work by Michel Crozier and Erhard Friedberg offers a similar account of the strategic action that jointly determines the workings of an organization. Here is a representative passage from Crozier and Friedberg:
The reader should not misconstrue the significance of this theoretical bet. We have not sought to formulate a set of general laws concerning the substance, the properties and the stages of development of organizations and systems. We do not have the advantage of being able to furnish normative precepts like those offered by management specialists who always believe they can elaborate a model of “good organization” and present a guide to the means and measures necessary to realize it. We present a series of simple propositions on the problems raised by the existence of these complex but integrated ensembles that we call organizations, and on the means and instruments that people have invented to surmount these problems; that is to say, to assure and develop their cooperation in view of the common goals. (L’acteur et le système, p. 11)
(Here are some earlier discussions of these theories; link, link, link.  And here is a related discussion of Mayer Zald's treatment of organizations; link.)

Also relevant from the point of view of the ontology of government organization is the new theory of institutional logics. Patricia Thornton, William Ocasio, and Michael Lounsbury describe new theoretical developments within the general framework of new institutionalism in The Institutional Logics Perspective: A New Approach to Culture, Structure and Process. Here is how they define their understanding of "institutional logic":
... as the socially constructed, historical patterns of cultural symbols and material practices, including assumptions, values, and beliefs, by which individuals and organizations provide meaning to their daily activity, organize time and space, and reproduce their lives and experiences. (2)
The institutional logics perspective is a metatheoretical framework for analyzing the interrelationships among institutions, individuals, and organizations in social systems. It aids researchers in questions of how individual and organizational actors are influenced by their situation in multiple social locations in an interinstitutional system, for example the institutional orders of the family, religion, state, market, professions, and corporations. Conceptualized as a theoretical model, each institutional order of the interinstitutional system distinguishes unique organizing principles, practices, and symbols that influence individual and organizational behavior. Institutional logics represent frames of reference that condition actors' choices for sensemaking, the vocabulary they use to motivate action, and their sense of self and identity. The principles, practices, and symbols of each institutional order differentially shape how reasoning takes place and how rationality is perceived and experienced. (2)
Here is a discussion of institutional logics; link.

So what can we say about the ontology of policy implementation, compliance, and executive decisions? We can say that --
  • it proceeds through individual actors in particular circumstances guided by particular interests and preferences; 
  • implementation is likely to be imperfect in the best of circumstances and entirely ineffectual in other circumstances; 
  • implementation is affected by the strategic non-governmental actors and organizations it is designed to influence, leading to further distortion and incompleteness. 
We can also, more positively, identify specific mechanisms that governments and executives introduce to increase the effectiveness of implementation of their policies. These include --
  • internal audit and discipline functions, 
  • communications and training strategies designed at enhancing conformance by intermediate actors, 
  • periodic purges of non-conformant sub-officials and powerful non-governmental actors, 
  • and dozens of other strategies and mechanisms of conformance.
Most fundamentally we can say that any model of government that postulates frictionless application and implementation of policy is flawed at its core. Such a model overlooks an ontological fundamental about government and other organizations, large and small: that organizational action is never automatic, algorithmic, or exact; that it is always conveyed by intermediate actors who have their own understandings and preferences about policy; and that it works in an environment where powerful non-governmental actors are almost always in positions to blunt the effectiveness of “the will of government”.

This topic unavoidably introduces the idea of corruption into the discussion (link, link). Sometimes the contrarian behavior of internal actors derives from private benefits offered them by outsiders influenced by the actions of government. (Hotels in Moscow?) More generally, however, it raises the question of conflicts of commitment, mission, role obligations, and organizational ethics.

Friday, November 30, 2018

Modeling the social


One of the most interesting authorities on social models and simulations is Scott Page. This month he published a major book on this topic, The Model Thinker: What You Need to Know to Make Data Work for You, and it is a highly valuable contribution. The book corresponds roughly to the content of Page's very successful Coursera course on models and simulations, and it serves as an excellent introduction to many different kinds of mathematical models in the social sciences.

Page's fundamental premise in the book is that we need many models, and many intellectual perspectives, to make sense of the social world. Mathematical modeling is a way of getting disciplined about the logic of our theories and hypotheses about various processes in the world, including the physical, biological, and social realms. No single approach will be adequate to understanding the complexity of the world; rather, we need multiple hypotheses and models to disentangle the many concurrent causal and systemic processes that are under way at a single time. As Page puts the point:
As powerful as single models can be, a collection of models accomplishes even more. With many models, we avoid the narrowness inherent in each individual model. A many-models approach illuminates each component model's blind spots. Policy choices made based on single models may ignore important features of the world such as income disparity, identity diversity, and interdependencies with other systems. (2)
Social ontology supports this approach in a fundamental way. The way I would put the point is this: social processes are almost invariably heterogeneous in their causes, temporal characters, and effects. So we need to have a way of theorizing society that is well suited to the forms of heterogeneity, and the many-models approach does exactly that.

Page proposes that there are multiple reasons why we might turn to models of a situation (physical, ecological, social, ...): to "reason, explain, design, communicate, act, predict, and explore" (15). We might simplify this list by saying that models can enhance theoretical understanding of complex phenomena (explanation, discovery of truth, exploration of hypotheses) and they may also serve practical purposes involving prediction and control.



Especially interesting are topics taken up in later chapters of the book, including the discussion of network models and broadcast, diffusion, and contagion models (chapters 9-10). These are all interesting because they represent different approaches to a common social phenomenon, the spread of a property through a population (ideas, disease, rebellion, hate and intolerance). These are among the most fundamental mechanisms of social change and stability, and Page's discussion of relevant models is insightful and accessible.
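To give a flavor of the shared mechanism in these chapters, here is a minimal sketch of a contagion process on a random network (Python; the network and parameters are invented for illustration, not taken from Page's text). A single "infected" node passes a property to neighbors with some transmission probability -- the common skeleton of models of disease, rumor, and rebellion.

    import random

    def simulate_contagion(n=200, avg_degree=4, p_transmit=0.3, seed=1):
        """SI-style contagion on an Erdos-Renyi random graph; returns the
        fraction of nodes eventually reached and the number of rounds."""
        rng = random.Random(seed)
        p_edge = avg_degree / (n - 1)
        neighbors = {i: set() for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p_edge:
                    neighbors[i].add(j)
                    neighbors[j].add(i)

        infected = {0}   # seed the process at a single node
        frontier = {0}
        rounds = 0
        while frontier:
            new = set()
            for node in frontier:
                for nb in neighbors[node]:
                    if nb not in infected and rng.random() < p_transmit:
                        new.add(nb)
            infected |= new
            frontier = new
            rounds += 1
        return len(infected) / n, rounds

    share, rounds = simulate_contagion()
    print(f"{share:.0%} of the population reached after {rounds} rounds")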

Page describes the constructs he considers as models, or abstract representations analogous to mathematical expressions. But we might also think of them as mini-theories of social mechanisms. Many of these examples illustrate a single kind of process that is found in real social situations, though rarely in a pure form. Games of coordination are a good example (chapter 15): the challenge of coordinating behavior with another purposive actor in order to bring about a beneficial outcome for both is a common social circumstance. Game theory provides an abstract analysis of how coordination can be achieved between rational agents; and the situation is more complicated when we consider imperfectly rational actors.
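A pure coordination game can be written down directly. The payoff matrix below is a standard textbook example rather than one of Page's: both players do well when their actions match and badly when they mismatch, so there are two pure-strategy Nash equilibria, and the players' problem is to converge on one of them.

    # Pure coordination game: both players want to choose the same action.
    # "A"/"B" might be driving sides, technology standards, or meeting places.
    payoffs = {
        ("A", "A"): (1, 1),
        ("A", "B"): (0, 0),
        ("B", "A"): (0, 0),
        ("B", "B"): (1, 1),
    }

    def is_nash(row, col):
        """Is (row, col) a pure-strategy Nash equilibrium? Neither player
        can gain by unilaterally switching actions."""
        r, c = payoffs[(row, col)]
        row_ok = all(payoffs[(alt, col)][0] <= r for alt in ("A", "B"))
        col_ok = all(payoffs[(row, alt)][1] <= c for alt in ("A", "B"))
        return row_ok and col_ok

    for profile in payoffs:
        print(profile, "equilibrium" if is_nash(*profile) else "")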

Another distinction that might be relevant in sorting the models that Page describes is that between "micro" and "macro". Some of the models Page presents have to do with individual-level behavior (and interactions between individuals); whereas others have to do with transitions among aggregated social states (market states, political regimes, ecological populations). The majority of the models considered have to do with individual choice, decision rules, and information sharing -- a micro-level approach comparable to agent-based modeling techniques. Several of the systems-dynamics models fall at the macro-end of the spectrum. Page treats this issue with the concept of "granularity": the level of structure and action at which the model's abstraction is couched (222).

The book closes with two very interesting examples of important social phenomena that can be analyzed using some of the models in the book. The first is the opioid epidemic in the United States, and the second is the last four decades' rapid increase in economic inequality. Thomas Schelling's memorable phrase, "the inescapable mathematics of musical chairs", is relevant to both problems. Once we recognize the changing rates of prescription of opioids, the clustering of opioid users, and the probability of transitioning from usage to addiction, the explosion of addiction rates and mortality is inevitable.
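The musical-chairs logic can be made vivid with a toy transition model (the rates below are invented for illustration, not drawn from Page's chapter). Once the inflow of new users and the usage-to-addiction transition probability are fixed, the growth of the addicted population follows mechanically.

    def project_addiction(years=10,
                          new_users_per_year=100_000,  # assumed inflow from prescriptions
                          p_use_to_addiction=0.08,     # assumed annual transition rate
                          p_recovery=0.02):            # assumed annual exit from addiction
        """Print a year-by-year projection of users and addicted individuals."""
        users, addicted = 0.0, 0.0
        for year in range(1, years + 1):
            new_addicted = users * p_use_to_addiction
            users += new_users_per_year - new_addicted
            addicted += new_addicted - addicted * p_recovery
            print(f"year {year:2d}: users {users:10,.0f}  addicted {addicted:10,.0f}")

    project_addiction()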

Early in the book Page notes the current vogue for "big data" as a solution to the problem of understanding and forecasting large social trends and changes. He rightly argues that the data do not speak for themselves. Instead, it is necessary to bring analytical techniques to bear in order to identify relevant patterns, and we need to use imagination and rigor in creating hypotheses about the social mechanisms that underlie the patterns we discover. The Model Thinker is indeed a model of an approach to analyzing and understanding the complex world of social action and interaction that we inhabit.

Wednesday, November 21, 2018

Eleven years of Understanding Society


This month marks the end of the eleventh year of publication of Understanding Society. Thanks to all the readers and visitors who have made the blog so rewarding. The audience continues to be international, with roughly half of visits coming from the United States and the rest from the UK, the Philippines, India, Australia, and a number of European countries. There are a surprising number of visits from Ukraine.

Topics in the past year have been diverse. The most frequent topic is my current research interest, organizational dysfunction and technology failure. Also represented are topics in the philosophy of social science (causal mechanisms, computational modeling), philosophy of history, China, and the politics of hate and division. The post with the largest number of views was "Is history probabilistic?", posted on December 30, and the least-read post was "The insights of biography", posted on August 29. Not surprisingly, the content of the blog follows the topics which I'm currently thinking about, including most recently the issue of sexual harassment of women in university settings.

Writing the blog has been a good intellectual experience for me. Taking an hour or two to think intensively about a particular idea -- large or small -- and trying to figure out what I think about it is genuinely stimulating for me. It makes me think of the description that Richard Schacht gave in an undergraduate course on nineteenth-century philosophy of Hegel's theory of creativity and labor. A sculptor begins with an indefinite idea of a physical form, a block of stone, and a hammer and chisel, and through interaction with the materials, tools, and hands he or she creates something new. The initial vision, inchoate as it is, is not enough, and the block of stone is mute. But the sculptor gives material expression to his or her visions through concrete interaction with the materials at hand. This is not a bad analogy for the process of thinking and writing itself. It is interesting that Marx's conception of the creativity of labor derives from this Hegelian metaphor.

This is what I had hoped for when I began the blog in 2007. I wanted to have a challenging form of expression that would allow me to develop ideas about how society and the social sciences work, and I hoped that this activity would draw me into new ideas, new thinking, and new approaches to problems already of interest. This has certainly materialized for me -- perhaps in the same way that a sculptor develops new capacities by contending with the resistance and contingency of the stone. There are issues, perspectives, and complexities that I have come to find very interesting that would not have come up in a more linear kind of academic writing.

It is also interesting for me to reflect on the role that "audience" plays for the writer. Since the first year of the blog I have felt that I understood the level of knowledge, questions, and interests that brought visitors to read a post or two, and sometimes to leave a comment. This is a smart, sophisticated audience. I have felt complete freedom in treating my subjects in the way that I think about them, without needing to simplify or reduce the problems I am considering to a more "public" level. This contrasts with the experience I had in blogging for the Huffington Post a number of years ago. Huff Post was a much more visible platform, but I never felt a connection with the audience, and I never felt the sense of intellectual comfort that I have in producing Understanding Society. As a result it was difficult to formulate my ideas in a way that seemed both authentic and original.

So thank you, to all the visitors and readers who have made the blog so satisfying for me over such a long time.

Tuesday, October 23, 2018

Sexual harassment in academic contexts


Sexual harassment of women in academic settings is regrettably common and pervasive, and its consequences are grave. At the same time, it is a remarkably difficult problem to solve. The "me-too" movement has shed welcome light on specific individual offenders and has generated more awareness of some aspects of the problem of sexual harassment and misconduct. But we have not yet come to a public awareness of the changes needed to create a genuinely inclusive and non-harassing environment for women across the spectrum of mistreatment that has been documented. The most common institutional response following an incident is to create a program of training and reporting, with a public commitment to investigating complaints and enforcing university or institutional policies rigorously and transparently. These efforts are often well intentioned, but by themselves they are insufficient. They do not address the underlying institutional and cultural features that make sexual harassment so prevalent.

The problem of sexual harassment in institutional contexts is a difficult one because it derives from multiple features of the organization. The ambient culture of the organization is often an important facilitator of harassing behavior -- often enough a patriarchal culture that is deferential to the status of higher-powered individuals at the expense of lower-powered targets. Executive leadership in many institutions continues to be predominantly male, and these leaders often bring with them a set of gendered assumptions that they fail to recognize. The hierarchical nature of the power relations of an academic institution is conducive to mistreatment of many kinds, including sexual harassment. Bosses to administrative assistants, research directors to post-docs, thesis advisors to PhD candidates -- these unequal relations of power create a conducive environment for sexual harassment in many varieties. In each case the superior actor has enormous power and influence over the career prospects and work lives of the women over whom they exercise power. And then there are the habits of behavior that individuals bring to the workplace and the learning environment -- sometimes habits of masculine entitlement, sometimes disdainful attitudes towards female scholars or scientists, sometimes an underlying willingness to bully others that finds expression in an academic environment. (A recent issue of the Journal of Social Issues (link) devotes substantial research to the topic of toxic leadership in the tech sector and the "masculinity contest culture" that this group of researchers finds to be a root cause of the toxicity this sector displays for women professionals. Research by Jennifer Berdahl, Peter Glick, Natalya Alonso, and more than a dozen other scholars provides in-depth analysis of this common feature of work environments.)

The scope and urgency of the problem of sexual harassment in academic contexts is documented in excellent and expert detail in a recent study report by the National Academies of Sciences, Engineering, and Medicine (link). This report deserves prominent discussion at every university.

The study documents the frequency of sexual harassment in academic and scientific research contexts, and the data are sobering. Here are the results of two indicative studies at Penn State University System and the University of Texas System:




The Penn State survey indicates that 43.4% of undergraduates, 58.9% of graduate students, and 72.8% of medical students have experienced gender harassment, while 5.1% of undergraduates, 6.0% of graduate students, and 5.7% of medical students report having experienced unwanted sexual attention and sexual coercion. These are staggering results, both in terms of the absolute number of students who were affected and the negative effects that these experiences had on their ability to fulfill their educational potential. The University of Texas study shows a similar pattern, but also permits us to see meaningful differences across fields of study. Engineering and medicine provide significantly more harmful environments for female students than non-STEM and science disciplines. The authors make a particularly worrisome observation about medicine in this context:
The interviews conducted by RTI International revealed that unique settings such as medical residencies were described as breeding grounds for abusive behavior by superiors. Respondents expressed that this was largely because at this stage of the medical career, expectation of this behavior was widely accepted. The expectations of abusive, grueling conditions in training settings caused several respondents to view sexual harassment as a part of the continuum of what they were expected to endure. (63-64)
The report also does an excellent job of defining the scope of sexual harassment. Media discussion of sexual harassment and misconduct focuses primarily on egregious acts of sexual coercion. However, the authors of the NAS study note that experts currently encompass sexual coercion, unwanted sexual attention, and gender harassment under this category of harmful interpersonal behavior. The largest sub-category is gender harassment:
"a broad range of verbal and nonverbal behaviors not aimed at sexual cooperation but that convey insulting, hostile, and degrading attitudes about" members of one gender (Fitzgerald, Gelfand, and Drasgow 1995, 430). (25)
The "iceberg" diagram (p. 32) captures the range of behaviors encompassed by the concept of sexual harassment. (See Leskinen, Cortina, and Kabat 2011 for extensive discussion of the varieties of sexual harassment and the harms associated with gender harassment.)


The report emphasizes organizational features as a root cause of a harassment-friendly environment.
By far, the greatest predictors of the occurrence of sexual harassment are organizational. Individual-level factors (e.g., sexist attitudes, beliefs that rationalize or justify harassment, etc.) that might make someone decide to harass a work colleague, student, or peer are surely important. However, a person that has proclivities for sexual harassment will have those behaviors greatly inhibited when exposed to role models who behave in a professional way as compared with role models who behave in a harassing way, or when in an environment that does not support harassing behaviors and/or has strong consequences for these behaviors. Thus, this section considers some of the organizational and environmental variables that increase the risk of sexual harassment perpetration. (46)
Some of the organizational factors that they refer to include the extreme gender imbalance that exists in many professional work environments, the perceived absence of organizational sanctions for harassing behavior, work environments where sexist views and sexually harassing behavior are modeled, and power differentials (47-49). The authors make the point that gender harassment is chiefly aimed at indicating disrespect towards the target rather than sexual exploitation. This has an important implication for institutional change. An institution that creates a strong core set of values emphasizing civility and respect is less conducive to gender harassment. They summarize this analysis in the statement of findings as well:
Organizational climate is, by far, the greatest predictor of the occurrence of sexual harassment, and ameliorating it can prevent people from sexually harassing others. A person more likely to engage in harassing behaviors is significantly less likely to do so in an environment that does not support harassing behaviors and/or has strong, clear, transparent consequences for these behaviors. (50)
So what can a university or research institution do to reduce and eliminate the likelihood of sexual harassment for women within the institution? Several remedies seem fairly obvious, though difficult.
  • Establish a pervasive expectation of civility and respect in the workplace and the learning environment
  • Diffuse the concentrations of power that give potential harassers the opportunity to harass women within their domains
  • Ensure that the institution honors its values by refusing the "star culture" common in universities that makes high-prestige university members untouchable
  • Be vigilant and transparent about the processes of investigation and adjudication through which complaints are considered
  • Create effective processes that ensure that complainants do not suffer retaliation
  • Consider candidates' receptivity to the values of a respectful, civil, and non-harassing environment during the hiring and appointment process (including research directors, department and program chairs, and other positions of authority)
  • Address the gender imbalance that may exist in leadership circles
As the authors put the point in the final chapter of the report:
Preventing and effectively addressing sexual harassment of women in colleges and universities is a significant challenge, but we are optimistic that academic institutions can meet that challenge--if they demonstrate the will to do so. This is because the research shows what will work to prevent sexual harassment and why it will work. A systemwide change to the culture and climate in our nation's colleges and universities can stop the pattern of harassing behavior from impacting the next generation of women entering science, engineering, and medicine. (169)

Sunday, October 21, 2018

System effects


Quite a few posts here have focused on the question of emergence in social ontology, the idea that there are causal processes and powers at work at the level of social entities that do not correspond to similar properties at the individual level. Here I want to raise a related question, the notion that an important aspect of the workings of the social world derives from "system effects" of the organizations and institutions through which social life transpires. A system accident or effect is one that derives importantly from the organization and configuration of the system itself, rather than from the specific properties of the units.

What are some examples of system effects? Consider these phenomena:
  • Flash crashes in stock markets as a result of automated trading
  • Under-reporting of land values in agrarian fiscal regimes 
  • Grade inflation in elite universities 
  • Increase in product defect frequency following a reduction in inspections 
  • Rising frequency of industrial errors at the end of work shifts 
Here is how Nancy Leveson describes systems causation in Engineering a Safer World: Systems Thinking Applied to Safety:
Safety approaches based on systems theory consider accidents as arising from the interactions among system components and usually do not specify single causal variables or factors. Whereas industrial (occupational) safety models and event chain models focus on unsafe acts or conditions, classic system safety models instead look at what went wrong with the system's operation or organization to allow the accident to take place. (KL 977)
Charles Perrow offers a taxonomy of systems as a hierarchy of composition in Normal Accidents: Living with High-Risk Technologies:
Consider a nuclear plant as the system. A part will be the first level -- say a valve. This is the smallest component of the system that is likely to be identified in analyzing an accident. A functionally related collection of parts, as, for example, those that make up the steam generator, will be called a unit, the second level. An array of units, such as the steam generator and the water return system that includes the condensate polishers and associated motors, pumps, and piping, will make up a subsystem, in this case the secondary cooling system. This is the third level. A nuclear plant has around two dozen subsystems under this rough scheme. They all come together in the fourth level, the nuclear plant or system. Beyond this is the environment. (65)
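Perrow's hierarchy of composition is, in effect, a nested data structure. Here is a minimal rendering in Python (the component names simply echo his nuclear-plant example):

    # Level 4 (the system) contains level-3 subsystems, which contain
    # level-2 units, which are made of level-1 parts.
    nuclear_plant = {
        "secondary_cooling_system": {
            "steam_generator": ["valve", "piping"],
            "water_return_system": ["condensate_polisher", "pump", "motor"],
        },
        # ... roughly two dozen subsystems in all, on Perrow's rough scheme
    }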
Large socioeconomic systems like capitalism and collectivized socialism have system effects -- chronic patterns of low productivity and corruption in the latter case, a tendency to inequality and immiseration in the former case. In each case the observed effect is the result of embedded features of property and labor in the two systems that result in specific kinds of outcomes. And an important dimension of social analysis is to uncover the ways in which ordinary actors, pursuing ordinary goals within the context of the two systems, produce quite different outcomes at the level of the "mode of production". And these effects do not depend on there being a distinctive kind of actor in each system; in fact, one could interchange the actors and still find the same macro-level outcomes.

Here is a preliminary effort at a definition for this concept in application to social organizations:
A system effect is an outcome that derives from the embedded characteristics of incentive and opportunity within a social arrangement that lead normal actors to engage in activity leading to the hypothesized aggregate effect.
Once we see what the incentive and opportunity structures are, we can readily see why some fraction of actors modify their behavior in ways that lead to the outcome. In this respect the system is the salient causal factor rather than the specific properties of the actors -- change the system properties and you will change the social outcome.
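One way to render this definition computationally is to hold a population of actors fixed and vary only the incentive structure. The sketch below uses invented numbers and is keyed to the land-value example above: the aggregate outcome changes even though the actors do not, which is what it means for the system rather than the actors to be the salient cause.

    import random

    rng = random.Random(0)
    # A fixed population of actors, each with a private gain from under-reporting.
    actors = [rng.uniform(0, 1) for _ in range(10_000)]

    def aggregate_outcome(audit_rate, fine):
        """Fraction of the SAME actors who under-report under a given regime."""
        return sum(gain > audit_rate * fine for gain in actors) / len(actors)

    # Nothing about the actors changes between regimes -- only system parameters:
    print("lax regime:   ", aggregate_outcome(audit_rate=0.05, fine=2))  # high under-reporting
    print("strict regime:", aggregate_outcome(audit_rate=0.30, fine=2))  # low under-reporting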

When we refer to system effects we often have unintended consequences in mind -- unintended both by the individual actors and the architects of the organization or practice. But this is not essential; we can also think of examples of organizational arrangements that were deliberately chosen or designed to bring about the given outcome. In particular, a given system effect may be intended by the designer and unintended by the individual actors. But when the outcomes in question are clearly dysfunctional or "catastrophic", it is natural to assume that they are unintended. (This, however, is one of the specific areas of insight that comes out of the new institutionalism: the dysfunctional outcome may be favorable for some sets of actors even as they are unfavorable for the workings of the system as a whole.)
 
Another common assumption about system effects is that they are remarkably stable through changes of actors and efforts to reverse the given outcome. In this sense they are thought to be somewhat beyond the control of the individuals who make up the system. The only promising way of undoing the effect is to change the incentives and opportunities that bring it about. But to the extent that a given configuration has emerged along with supporting mechanisms protecting it from deformation, changing the configuration may be frustratingly difficult.

Safety and its converse are often described as system effects. Two things are often meant by this. First, there is the important insight that traditional accident analysis favors "unit failure" at the expense of more systemic factors. And second, there is the idea that accidents and failures often result from "tightly linked" features of systems, both social and technical, in which variation in one component of a system can have unexpected consequences for the operation of other components of the system. Charles Perrow describes the topic of loose and tight coupling in social systems in Normal Accidents (89 ff.).

Friday, October 5, 2018

Social mobility disaggregated


There is an exciting and valuable new contribution from the research group around Raj Chetty, Nathan Hendren, and John Friedman, this time on the topic of neighborhood-level social mobility. (Earlier work highlighted measures of the impact on social mobility contributed by university education across the country. This work is presented on the Opportunity Insights website; link, link. Here is an earlier post on that work; link.) In the recently released work Chetty and his colleagues have used census data to compare incomes of parents and children across the country by neighborhood of birth, with the ability to disaggregate by race and gender, and the results are genuinely staggering. Here is a report on the project on the US Census website; link. The interactive dataset and mapping app are provided here (link). The study identifies neighborhoods of origin; characteristics of parents and neighborhoods; and characteristics of children.

Here are screenshots of metropolitan Detroit representing the individual incomes of the children (as adults) based on their neighborhoods of origin for all children, black children, and white children. (Of course a percentage of these individuals no longer live in the original neighborhood.) There are 24 outcome variables included as well as 13 neighborhood characteristics, and it is possible to create maps based on multiple combinations of these variables. It is also possible to download the data.




Children born in Highland Park, Michigan earned an average individual income as adults in 2014-15 of $18K; children born in Plymouth, Michigan earned an average individual income as adults of $42K. It is evident that these differences in economic outcomes are highly racialized; in many of the tracts in the Detroit area there are "insufficient data" for either black or white individuals to provide average data for these sub-populations in the given areas. This reflects the substantial degree of racial segregation that exists in the Detroit metropolitan area. (The project provides a special study of opportunity in Detroit, "Finding Opportunity in Detroit".)
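For readers who download the data, a comparison like the Highland Park/Plymouth one can be reproduced in a few lines. The sketch below is hypothetical: the file name and column names are placeholders, not the actual schema of the Opportunity Insights tract-level files, which should be checked against the project's codebook.

    import pandas as pd

    # Hypothetical schema -- the column names here are placeholders.
    tracts = pd.read_csv("tract_outcomes.csv")  # downloaded dataset (path assumed)

    detroit = tracts[tracts["cz_name"] == "Detroit"]           # assumed geography column
    by_tract = detroit.groupby("tract_id")["kid_mean_income"]  # assumed outcome column
    print(by_tract.mean().sort_values().head(10))              # lowest-outcome tracts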

This dataset is genuinely eye-opening for anyone interested in the workings of economic opportunity in the United States. It is also valuable for public policy makers at the local and higher levels who have an interest in improving outcomes for children in poverty. It is possible to use the many parameters included in the data to probe for obstacles to socioeconomic progress that might be addressed through targeted programs of opportunity enhancement.

(Here is a Brookings description of the social mobility project's central discoveries; link.)


Wednesday, October 3, 2018

Emotions as neurophysiological constructs


Are emotions real? Are they hardwired to our physiology? Are they pre-cognitive and purely affective? Was Darwin right in speculating that facial expressions are human universals that accurately represent a small repertoire of emotional experiences (The Expression of the Emotions in Man and Animals)? Or instead are emotions a part of the cognitive output of the brain, influenced by context, experience, expectation, and mental framework? Lisa Feldman Barrett is an accomplished neuroscientist who addresses all of these questions in her recent book How Emotions Are Made: The Secret Life of the Brain, based on several decades of research on the emotions. The book is highly interesting, and has important implications for the social sciences more broadly.

Barrett's core view is that the received theory of the emotions -- that they are hardwired and correspond to specific if unknown neurological groups, connected to specific physiological and motor responses -- is fundamentally wrong. She marshals a great deal of experimental evidence demonstrating the incorrectness of that theory. In its place she argues that emotional responses and experiences are the result of mental, conceptual, and cognitive construction by our central nervous system, entirely analogous to our ability to find meaning in a visual field of light and dark areas in order to resolve it as a bee (her example). The emotions are like perception more generally -- they result from an active process in which the brain attempts to impose order and pattern on sensory stimulation, a process she refers to as "simulation". She refers to this as the theory of constructed emotion (30). In brief:
Emotions are not reactions to the world. You are not a passive receiver of sensory input but an active constructor of your emotions. From sensory input and past experience, your brain constructs meaning and prescribes action. If you didn't have concepts that represent your past experience, all your sensory inputs would just be noise. (31)
And further:
Particular concepts like "Anger" and "Distrust" are not genetically determined. Your familiar emotion concepts are built-in only because you grew up in a particular social context where those emotion concepts are meaningful and useful, and your brain applies them outside your awareness to construct your experiences. (33)
This theory has much in common with theorizing about the nature of perception and thought within cognitive psychology, where the constructive nature of perception and representation has been a core tenet. Paul Kolers' motion perception experiments in the 1960s and 1970s established that perception is an active and constructive process, not a simple rendering of information from the retina into visual diagrams in the mind (Aspects of Motion Perception). And Daniel Dennett's Consciousness Explained argues for a "multiple drafts" theory of conscious experience which once again emphasizes the active and constructive nature of consciousness.

One implication of Barrett's theory is that emotions are concept-dependent. We need to learn the terms for emotions in our ambient language community before we can experience them. The emotions we experience are conceptually loaded and structured.
People who exhibit low emotional granularity will have only a few emotion concepts. In English, they might have words in their vocabulary like "sadness," "fear," "guilt," "shame," "embarrassment," "irritation," "anger," and "contempt," but those words all correspond to the same concept whose goal is something like "feeling unpleasant." This person has a few tools -- a hammer and Swiss Army knife. (106)
In a later chapter Barrett takes her theory in a Searle-like direction by emphasizing the inherent and irreducible constructedness of social facts and social relations (chapter 7). Without appropriate concepts we cannot understand or represent the behaviors and interactions of people around us; and their interactions depend inherently on the conceptual systems or frames within which we place their actions. Language, conceptual frames, and collective intentionality are crucial constituents of social facts, according to this perspective. I find Searle's arguments on this subject less than convincing (link), and I'm tempted to think that Barrett is going out on a limb by embracing his views more extensively than needed for her own theory of the emotions.

I find Barrett's work interesting for a number of reasons. One is the illustration it provides of human plasticity and heterogeneity. "Any category of emotion such as "Happiness" or "Guilt" is filled with variety" (35). Another is the methodological sophistication Barrett demonstrates in her refutation of two thousand years of received wisdom about the emotions, from Aristotle and Plato to Paul Ekman and colleagues. This sophistication extends to her effort to avoid language in describing emotions and research strategies that embeds the ontology of the old view -- an ontology that reifies particular emotions in the head and body of the other human being (40). She correctly observes that language like "detecting emotion X in the subject" implies that the psychological condition exists as a fixed reality in the subject; whereas the whole point of her theory is that the experience of disgust or happiness is a transient and complex construction by the brain behind the scenes of our conscious experience. She is "anti-realist" in her treatment of emotion. "We don't recognize emotions or identify emotions: we construct our own emotional experiences, and our perceptions of others' emotions, on the spot, as needed, through a complex interplay of systems" (40). And finally, her theory of emotion as a neurophysiological construct has a great deal of credibility -- its internal logic, its fit with current understandings of the central nervous system, its convergence with cognitive psychology and perception theory, and the range of experimental evidence that Barrett brings to bear.

Sunday, September 30, 2018

Philosophy and the study of technology failure

image: Adolf von Menzel, The Iron Rolling Mill (Modern Cyclopes)

Readers may have noticed that my current research interests have to do with organizational dysfunction and large-scale technology failures. I am interested in probing the ways in which organizational failures and dysfunctions have contributed to large accidents like Bhopal, Fukushima, and the Deepwater Horizon disaster. I've had to confront an important question in taking on this research interest: what can philosophy bring to the topic that would not be better handled by engineers, organizational specialists, or public policy experts?

One answer is the diversity of viewpoint that a philosopher can bring to the discussion. It is evident that technology failures invite analysis from all of these specialized experts, and more. But there is room for productive contribution from reflective observers who are not committed to any of these disciplines. Philosophers have a long history of taking on big topics outside the defined canon of "philosophical problems", and often those engagements have proven fruitful. In this particular instance, philosophy can look at organizations and technology in a way that is more likely to be interdisciplinary, and perhaps can help to see dimensions of the problem that are less apparent from a purely disciplinary perspective.

There is also a rationale based on the terrain of the philosophy of science. Philosophers of biology have usually attempted to learn as much about the science of biology as they can manage, but they lack the level of expertise of a research biologist, and it is rare for a philosopher to make an original contribution to the scientific biological literature. Nonetheless it is clear that philosophers have a great deal to add to scientific research in biology. They can contribute to better reasoning about the implications of various theories, they can probe the assumptions about confirmation and explanation that are in use, and they can contribute to important conceptual disagreements. Biology is in a better state because of the work of philosophers like David Hull and Elliott Sober.

Philosophers have also made valuable contributions to science and technology studies, bringing a viewpoint that incorporates insights from the philosophy of science and a sensitivity to the social groundedness of technology. STS studies have proven to be a fruitful place for interaction between historians, sociologists, and philosophers. Here again, the concrete study of the causes and context of large technology failure may be assisted by a philosophical perspective.

There is also a normative dimension to these questions about technology failure for which philosophy is well prepared. Accidents hurt people, and sometimes the causes of accidents involve culpable behavior by individuals and corporations. Philosophers have a long history of contribution to these kinds of problems of fault, law, and just management of risks and harms.

Finally, it is realistic to say that philosophy has an ability to contribute to social theory. Philosophers can offer imagination and critical attention to the problem of creating new conceptual schemes for understanding the social world. This capacity seems relevant to the problem of describing, analyzing, and explaining large-scale failures and disasters.

The situation of organizational studies and accidents is in some ways more hospitable for contributions by a philosopher than other "wicked problems" in the world around us. An accident is complicated and complex but not particularly obscure. The field is unlike quantum mechanics or climate dynamics, which are inherently difficult for non-specialists to understand. The challenge with accidents is to arrive at a multi-layered analysis of the causes of the accident that permits observers to have a balanced and operative understanding of the event. And this is where the philosopher's perspective is most useful. We can offer higher-level descriptions of the relative importance of different kinds of causal factors. Perhaps the role here is analogous to messenger RNA, providing a cross-disciplinary flow of communication. Or it is analogous to the role of philosophers of history who have offered gentle critique of the cliometrics school for its over-dependence on a purely statistical approach to economic history.

So it seems reasonable enough for a philosopher to attempt to contribute to this set of topics, even if the disciplinary expertise a philosopher brings is more weighted towards conceptual and theoretical discussions than undertaking original empirical research in the domain.

What I expect to be the central finding of this research is the idea that a pervasive and often unrecognized cause of accidents is a systemic organizational defect of some sort, and that it is enormously important to have a better understanding of common forms of these deficiencies. This is a bit analogous to a paradigm shift in the study of accidents. And this view has important policy implications. We can make disasters less frequent by improving the organizations through which technology processes are designed and managed.

Thursday, September 27, 2018

James Scott on the earliest states


In 2011 James Scott gave a pair of Tanner Lectures at Harvard. He had chosen a topic of which he felt he had a fairly good understanding, having taught on early agrarian societies throughout much of his career. The topic was the origins of the earliest states in human history. But as he explains in the preface to the 2017 book Against the Grain: A Deep History of the Earliest States, preparation for the lectures led him into brand new debates, bodies of evidence, and theories that were pretty much off his personal map. The resulting book is his effort to bring his own understanding up to date, and it is a terrific and engaging book.

Scott gives a quick summary of the view of early states, nutrition, agriculture, and towns that he shared with most historians of early civilizations up through a few decades ago. Hunting and gathering was humanity's primary mode of living for tens of thousands of years before the dawn of civilization. Humanity learned to domesticate plants and animals, creating a basis for sedentary agriculture in hamlets and villages. With the increase in productivity associated with settled agriculture, it was possible for nascent political authorities to collect taxes and create political institutions. Agriculture and politics created the conditions that conduced to the establishment of larger towns, and eventually cities. And humanity surged forward in terms of population size and quality of life.

But, as Scott summarizes, none of these sequences has held up to current scholarship.
We thought ... that the domestication of plants and animals led directly to sedentism and fixed-field agriculture. It turns out that sedentism long preceded evidence of plant and animal domestication and that both sedentism and domestication were in place at least four millennia before anything like agricultural villages appeared. (xi)
...
The early states were fragile and liable to collapse, but the ensuing "dark ages" may often have marked an actual improvement in human welfare. Finally, there is a strong case to be made that life outside the state -- life as a "barbarian" -- may often have been materially easier, freer, and healthier than life at least for nonelites inside civilization. (xii)
There is an element of "who are we?" in the topic -- that is, what features define modern humanity? Here is Scott's most general answer:
A sense, then, for how we came to be sedentary, cereal-growing, livestock-rearing subjects governed by the novel institution we now call the state requires an excursion into deep history. (3)
Who we are, in this telling of the story, is a species of hominids who are sedentary, town-living, agriculture-dependent subjects of the state. But this characterization is partial (as of course Scott knows); we are also meaning-makers, power-wielders, war-fighters, family-cultivators, and sometimes rebels. And each of these other qualities of humanity leads us in the direction of a different kind of history, requiring a Clifford Geertz, a Michael Mann, a Tolstoy, or a Marx to tell the story.

A particularly interesting part of Scott's new story about the early origins of human civilization has to do with the use of fire in the material lives of pre-technology humans -- hunters, foragers, and gatherers -- in a deliberate effort to sculpt the natural environment around them so as to concentrate food resources. According to Scott's reading of recent archeology and pre-agriculture history, human communities used fire to create the specific habitats that would entice their prey to make themselves readily available for the season's meals. He uses a striking phrase to capture the goal here -- reducing the radius of a meal. Early foragers literally reshaped the natural environments in which they lived.
What we have here is a deliberate disturbance ecology in which hominids create, over time, a mosaic of biodiversity and a distribution of desirable resources more to their liking. (40)
Most strikingly, Scott suggests a link between massive Native American use of fire to reduce forests, the sudden decline in their population from disease following contact with Europeans and consequent decline in burning, and the onset of the Little Ice Age (1500-1850) as a result of reduced CO2 production (39). Wow!

Using fire for cooking further reduced this "radius of the meal" by permitting early humans to consume a wider range of potential foods. And Scott argues that this innovation had evolutionary consequences for our hominid ancestors: human populations developed a digestive gut only one-third the length of that of other non-fire-using hominids. "We are a fire-adapted species" (42).

Scott makes an intriguing connection between grain-based agriculture and early states. The traditional narrative has it that pre-farming society was too low in food productivity to allow for sedentary life and dense populations. According to Scott this assumption is no longer supported by the evidence. Sedentary life based on foraging, gathering, and hunting was established several thousand years earlier than the development of agriculture. Gathering, farming, settled residence, and state power are all somewhat independent. In fact, Scott argues that these foraging communities were too well situated in their material environment to be vulnerable to a predatory state. "There was no single dominant resource that could be monopolized or controlled from the center, let alone taxed" (57). These communities generally were supported by three or four "food webs" that gave them substantial independence from both climate fluctuation and domination by powerful outsiders (49). Cereal-based civilizations, by contrast, were vulnerable to both threats, and powerful authorities had the ability to confiscate grain at the point of harvest or in storage. Grain made taxation possible.

We often think of hunter-gatherers in terms of game hunters and the feast-or-famine material life described by Marshall Sahlins in Stone Age Economics. But Scott makes the point that there are substantial ecological niches in wetlands where nutrition comes to the gatherers rather than to the hunters. And in the early millennia of the lower Tigris and Euphrates -- what Scott refers to as the southern alluvium -- the wetland ecological zone supported a very satisfactory and regular level of wellbeing. And, of special interest to Scott, "the wetlands are ungovernable" (56). (Notice the parallel with Scott's treatment of Zomia in The Art of Not Being Governed: An Anarchist History of Upland Southeast Asia.)

So who are these early humans who navigated their material worlds so exquisitely well and yet left so little archeological record because they built their homes with sticks, mud, and papyrus?
It makes most sense to see them as agile and astute navigators of a diverse but also changeable and potentially dangerous environment.... We can see this long period as one of continuous experimentation and management of this environment. Rather than relying on only a small bandwidth of food resources, they seem to have been opportunistic generalists with a large portfolio of subsistence options spread across several food webs. (59)
Later chapters offer similarly iconoclastic accounts of the inherent instability of the early states (like a pyramid of tumblers on the stage), the advantages of barbarian civilization, the epidemiology of sedentary life, and other intriguing topics in the early history of humanity. And pervasively, there is the undercurrent of themes that recur often in Scott's work -- the validity and dignity of the hidden players in history, the resourcefulness of ordinary hominids, and the importance of questioning the received wisdom about humanity's history.

Scott is telling a new story here about where we came from, and it is a fascinating one.

Tuesday, September 25, 2018

System safety


An ongoing thread of posts here is concerned with organizational causes of large technology failures. The driving idea is that failures, accidents, and disasters usually have a dimension of organizational causation behind them. The corporation, research office, shop floor, supervisory system, intra-organizational information flow, and other social elements often play a key role in the occurrence of a gas plant fire, a nuclear power plant malfunction, or a military disaster. There is a tendency to look first and foremost for one or more individuals who made a mistake in order to explain the occurrence of an accident or technology failure; but researchers such as Perrow, Vaughan, Tierney, and Hopkins have demonstrated in detail the importance of broadening the lens to seek out the social and organizational background of an accident.

It seems important to distinguish between system flaws and organizational dysfunction in considering all of the kinds of accidents mentioned here. We might specify system safety along these lines. Any complex process has the potential for malfunction. Good system design means creating a flow of events and processes that makes accidents inherently less likely. Part of the task of the designer and engineer is to identify the chief sources of harm inherent in the process -- release of energy, contamination of food or drugs, unplanned fission in a nuclear plant -- and to design fail-safe processes so that these events are as unlikely as possible. Further, given the complexity of contemporary technology systems, it is critical to attempt to anticipate unintended interactions among subsystems -- scenarios in which each subsystem functions correctly on its own, yet unusual but possible interactions among them lead to disaster.
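To make the fail-safe idea concrete, here is a minimal sketch (in Python) of a "fail safe by default" control rule. Everything in it -- the valve, the pressure threshold, the sensor flag -- is invented for illustration; it is not drawn from any real plant or from the works discussed here.

    from enum import Enum

    class ValveState(Enum):
        OPEN = "open"      # safe state: vents pressure
        CLOSED = "closed"  # operating state: holds pressure

    def command_valve(pressure_kpa, sensor_ok):
        """Return the commanded valve state.

        The design principle: any fault or out-of-range reading drives
        the process to its safe state, rather than relying on an
        operator to notice the anomaly and intervene.
        """
        if not sensor_ok:
            return ValveState.OPEN   # sensor fault -> fail safe
        if pressure_kpa > 800.0:     # illustrative threshold
            return ValveState.OPEN   # overpressure -> vent
        return ValveState.CLOSED

    # The safe outcome does not depend on correct operator action:
    assert command_valve(500.0, sensor_ok=False) is ValveState.OPEN
    assert command_valve(900.0, sensor_ok=True) is ValveState.OPEN

The point of the pattern is that the hazardous outcome requires the active satisfaction of every precondition, while any failure, however unanticipated, lands the process in the safe state.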

In a nuclear processing plant, for example, there is the hazard of radioactive materials being brought into proximity with each other in a way that creates an unintended critical mass. Jim Mahaffey's Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima offers numerous examples of such unintended events, from the careless handling of plutonium scrap in a machining process to the transfer of a fissionable liquid from a vessel of one shape to another. We might try to handle these risks as an organizational problem: more and better training for operators about the importance of handling nuclear materials according to established protocols, and effective supervision and oversight to ensure that the protocols are observed on a regular basis. But it is also possible to design the material processes within a nuclear plant in a way that makes unintended criticality virtually impossible -- for example, by storing radioactive solutions in containers that simply cannot be brought into close proximity with each other.
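The contrast between procedural safeguards and hazards that are designed out of the process can be put in a toy example. The sketch below is entirely hypothetical (the spacing rule and all names are invented): a storage rack whose interface simply refuses any placement that violates a separation constraint, so that no sequence of routine operator actions, however careless, can produce an unsafe configuration.

    import math

    MIN_SEPARATION_M = 2.0  # hypothetical safe spacing between vessels

    class StorageRack:
        def __init__(self):
            self.positions = []  # (x, y) coordinates of vessels already placed

        def place_vessel(self, x, y):
            """Accept a vessel only if the spacing rule holds.

            The rack itself enforces the constraint, so operator
            training and supervision are no longer the only barrier
            against an unsafe configuration.
            """
            for (px, py) in self.positions:
                if math.hypot(x - px, y - py) < MIN_SEPARATION_M:
                    raise ValueError("placement would violate the spacing rule")
            self.positions.append((x, y))

    rack = StorageRack()
    rack.place_vessel(0.0, 0.0)
    try:
        rack.place_vessel(1.0, 0.0)  # too close: rejected by design
    except ValueError as err:
        print(err)

In the real case the constraint is enforced by geometry and container design rather than software, but the logic is the same: the unsafe state is made unreachable rather than merely forbidden.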

Nancy Leveson is a national expert on defining and applying principles of system safety. Her book Engineering a Safer World: Systems Thinking Applied to Safety is a thorough treatment of her thinking about this subject. She offers a series of compelling reasons for believing that safety is a system-level characteristic that requires a systems approach: the fast pace of technological change, reduced ability to learn from experience, the changing nature of accidents, new types of hazards, increasing complexity and coupling, decreasing tolerance for single accidents, difficulty in selecting priorities and making tradeoffs, more complex relationships between humans and automation, and changing regulatory and public views of safety (kl 130 ff.). Particularly important in this list is the comment about complexity and coupling: "The operation of some systems is so complex that it defies the understanding of all but a few experts, and sometimes even they have incomplete information about the system's potential behavior" (kl 137).

Given the fact that safety and accidents are products of whole systems, she is critical of the accident methodology generally applied to serious industrial, aerospace, and chemical accidents. This methodology involves tracing the series of events that led to the outcome, and identifying one or more events as the critical cause of the accident. However, she writes:
In general, event-based models are poor at representing systemic accident factors such as structural deficiencies in the organization, management decision making, and flaws in the safety culture of the company or industry. An accident model should encourage a broad view of accident mechanisms that expands the investigation beyond the proximate events. A narrow focus on technological components and pure engineering activities or a similar narrow focus on operator errors may lead to ignoring some of the most important factors in terms of preventing future accidents. (kl 452)
Here is a definition of system safety offered later in ESW in her discussion of the emergence of the concept within the defense and aerospace fields in the 1960s:
System Safety ... is a subdiscipline of system engineering. It was created at the same time and for the same reasons. The defense community tried using the standard safety engineering techniques on their complex new systems, but the limitations became clear when interface and component interaction problems went unnoticed until it was too late, resulting in many losses and near misses. When these early aerospace accidents were investigated, the causes of a large percentage of them were traced to deficiencies in design, operations, and management. Clearly, big changes were needed. System engineering along with its subdiscipline, System Safety, were developed to tackle these problems. (kl 1007)
Here Leveson mixes system design and organizational dysfunctions as system-level causes of accidents. But much of her work in this book and her earlier Safeware: System Safety and Computers gives extensive attention to the design faults and component interactions that lead to accidents -- what we might call system safety in the narrow or technical sense.
A systems engineering approach to safety starts with the basic assumption that some properties of systems, in this case safety, can only be treated adequately in the context of the social and technical system as a whole. A basic assumption of systems engineering is that optimization of individual components or subsystems will not in general lead to a system optimum; in fact, improvement of a particular subsystem may actually worsen the overall system performance because of complex, nonlinear interactions among the components. (kl 1007)
Overall, then, it seems clear that Leveson believes that both organizational features and technical system characteristics are part of the systems that create the possibility of accidents like Bhopal, Fukushima, and Three Mile Island. Her own accident model, STAMP (Systems-Theoretic Accident Model and Processes), designed to help identify the causes of accidents, emphasizes both kinds of system properties.
Using this new causality model ... changes the emphasis in system safety from preventing failures to enforcing behavioral safety constraints. Component failure accidents are still included, but our conception of causality is extended to include component interaction accidents. Safety is reformulated as a control problem rather than a reliability problem. (kl 1062)
In this framework, understanding why an accident occurred requires determining why the control was ineffective. Preventing future accidents requires shifting from a focus on preventing failures to the broader goal of designing and implementing controls that will enforce the necessary constraints. (kl 1084)
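Leveson's reformulation of safety as a control problem can be suggested with a toy control loop. This is only a schematic illustration of the idea of enforcing a safety constraint -- the tank, the levels, and the numbers are invented, and it is in no way an implementation of STAMP itself.

    SAFE_LEVEL_MAX = 100.0  # hypothetical safety constraint on tank level

    def controller(level, inflow_cmd):
        """Enforce the safety constraint on a commanded inflow.

        In STAMP terms, an accident becomes possible when this control
        action is missing, wrong, or based on a stale model of the
        process -- not only when a pump or a sensor physically fails.
        """
        if level >= 0.9 * SAFE_LEVEL_MAX:
            return 0.0  # the constraint overrides the command
        return inflow_cmd

    print(controller(level=95.0, inflow_cmd=5.0))  # -> 0.0: constraint enforced

On this view, investigating an accident means asking why the control structure failed to enforce the constraint, not merely which component broke.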
Leveson's brief analysis of the Bhopal disaster in 1984 (kl 384 ff.) emphasizes the organizational dysfunctions that led to the disaster -- and that were completely ignored by the Indian state's investigation of the accident: out-of-service gauges, alarm deficiencies, inadequate response to prior safety audits, a shortage of oxygen masks, failure to inform the police or surrounding community of the accident, and an environment of cost cutting that impaired maintenance and staffing. "When all the factors, including indirect and systemic ones, are considered, it becomes clear that the maintenance worker was, in fact, only a minor and somewhat irrelevant player in the loss. Instead, degradation in the safety margin occurred over time and without any particular single decision to do so but simply as a series of decisions that moved the plant slowly toward a situation where any slight error would lead to a major accident" (kl 447).

Saturday, September 15, 2018

Patient safety


An issue which is of concern to anyone who receives treatment in a hospital is the topic of patient safety. How likely is it that there will be a serious mistake in treatment -- wrong-site surgery, incorrect medication or radiation dose, exposure to a hospital-acquired infection? The current evidence is alarming. (Martin Makary et al estimate that over 250,000 deaths per year result from medical mistakes -- making medical error now the third leading cause of mortality in the United States (link).) And when these events occur, where should we look for assigning responsibility -- at the individual providers, at the systems that have been implemented for patient care, at the regulatory agencies responsible for overseeing patient safety?

Medical accidents commonly demonstrate a complex interaction of factors, from the individual provider to the technologies in use to failures of regulation and oversight. We can look at a hospital as a place where caring professionals do their best to improve the health of their patients while scrupulously avoiding errors. Or we can look at it as an intricate system involving the recording and dissemination of information about patients and the administration of procedures to them (surgery, medication, radiation therapy). In this sense a hospital is similar to a factory with multiple intersecting locations of activity. Finally, we can look at it as an organization -- a system of division of labor, cooperation, and supervision by large numbers of staff whose joint efforts lead to health and accidents alike. Obviously each of these perspectives is partially correct. Doctors, nurses, and technicians are carefully and extensively trained to diagnose and treat their patients. The technology of the hospital -- the digital patient record system, the devices that administer drugs, the surgical robots -- can be designed better or worse from a safety point of view. And the social organization of the hospital can be effective and safe, or it can be dysfunctional and unsafe. So all three aspects are relevant both to safe operations and to the possibility of chronic lack of safety.

So how should we analyze the phenomenon of patient safety? What factors distinguish high-safety hospitals from low-safety ones? What lessons can be learned from the study of the accidents and mistakes that cumulatively make up a hospital's patient safety record?

The view that primarily emphasizes expertise and training of individual practitioners is very common in the healthcare industry, and yet this approach is not particularly useful as a basis for improving the safety of healthcare systems. Skill and expertise are necessary conditions for effective medical treatment; but the other two zones of accident space are probably more important for reducing accidents -- the design of treatment systems and the organizational features that coordinate the activities of the various individuals within the system.

Dr. James Bagian is a strong advocate for the perspective of treating healthcare institutions as systems. Bagian considers both the technical systems characteristics of processes and the organizational forms through which these processes are carried out and monitored. And he is very skilled at teasing out some of the ways in which features of both system and organization lead to avoidable accidents and failures. I recall his description of a safety walkthrough he had done in a major hospital. He said that during the tour he noticed a number of nurses' stations that were covered with yellow sticky notes. He observed that this was both a symptom and a cause of an accident-prone organization: it meant that individual caregivers were obliged to remind themselves of tasks and exceptions that needed to be observed. Far better would be a set of systems and protocols that made the sticky notes unnecessary. Here is the abstract from a short summary article by Bagian on the current state of patient safety:
Abstract The traditional approach to patient safety in health care has ranged from reticence to outward denial of serious flaws. This undermines the otherwise remarkable advances in technology and information that have characterized the specialty of medical practice. In addition, lessons learned in industries outside health care, such as in aviation, provide opportunities for improvements that successfully reduce mishaps and errors while maintaining a standard of excellence. This is precisely the call in medicine prompted by the 1999 Institute of Medicine report “To Err Is Human: Building a Safer Health System.” However, to effect these changes, key components of a successful safety system must include: (1) communication, (2) a shift from a posture of reliance on human infallibility (hence “shame and blame”) to checklists that recognize the contribution of the system and account for human limitations, and (3) a cultivation of non-punitive open and/or de-identified/anonymous reporting of safety concerns, including close calls, in addition to adverse events.
(Here is the Institute of Medicine study to which Bagian refers; link.)

Nancy Leveson is an aeronautical and software engineer who has spent most of her career devoted to designing safe systems. Her book Engineering a Safer World: Systems Thinking Applied to Safety is a recent presentation of her theories of systems safety. She applies these approaches to problems of patient safety with several co-authors in "A Systems Approach to Analyzing and Preventing Hospital Adverse Events" (link). Here is the abstract and summary of findings for that article:
Objective: This study aimed to demonstrate the use of a systems theory-based accident analysis technique in health care applications as a more powerful alternative to the chain-of-event accident models currently underpinning root cause analysis methods.
Method: A new accident analysis technique, CAST [Causal Analysis based on Systems Theory], is described and illustrated on a set of adverse cardiovascular surgery events at a large medical center. The lessons that can be learned from the analysis are compared with those that can be derived from the typical root cause analysis techniques used today.
Results: The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals. With the use of the system-theoretic analysis results, recommendations can be generated to change the context in which decisions are made and thus improve decision making and reduce the risk of an accident.
Conclusions: The use of a systems-theoretic accident analysis technique can assist in identifying causal factors at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved. Identification of these causal factors in accidents will help health care systems learn from mistakes and design system-level changes to prevent them in the future.
Key Words: patient safety, systems theory, cardiac surgical procedures, adverse event causal analysis (J Patient Saf 2016;00: 00–00)
Crucial in this article is this research group's effort to identify causes "at all levels of the system without simply assigning blame to either the frontline clinicians or technicians involved". The key result is this: "The analysis of the 30 cardiovascular surgery adverse events using CAST revealed the reasons behind unsafe individual behavior, which were related to the design of the system involved and not negligence or incompetence on the part of individuals."

Bagian, Leveson, and others make a crucial point: in order to substantially increase the performance of hospitals and the healthcare system more generally when it comes to patient safety, it will be necessary to extend the focus of safety analysis from individual incidents and agents to the systems and organizations through which these accidents were possible. In other words, attention to systems and organizations is crucial if we are to significantly reduce the frequency of medical and hospital mistakes.

(The Makary et al estimate of 250,000 deaths caused by medical error has been questioned on methodological grounds. See Aaron Carroll's thoughtful rebuttal (NYT 8/15/16; link).)