Sunday, March 25, 2018

Mechanisms, singular and general


Let's think again about the semantics of causal ascriptions. Suppose that we want to know what caused a building crane to collapse during a windstorm. We might arrive at an account something like this:
  • An unusually heavy gust of wind at 3:20 pm, in the presence of this crane's specific material and structural properties, with the occurrence of the operator's effort to adjust the crane's extension at 3:21 pm, brought about cascading failures of structural elements of the crane, leading to collapse at 3:25 pm.
The process described here proceeds from the "gust of wind striking the crane" through an account of the material and structural properties of the device, incorporating the untimely effort by the operator to readjust the device's extension, leading to a cascade from small failures to a large failure. And we can identify the features of causal necessity that were operative at the several links of the chain.

Notice that there are few causal regularities or necessary and constant conjunctions in this account. Wind does not usually bring about the collapse of cranes; if the operator's intervention had occurred a few minutes earlier or later, perhaps the failure would not have occurred; and small failures do not always lead to large failures. Nonetheless, in the circumstances described here there is causal necessity extending from the antecedent situation at 3:15 pm to the full catastrophic collapse at 3:25 pm.

Does this narrative identify a causal mechanism? Are we better off describing this as a series of cause-effect sequences, none of which represents a causal mechanism per se? Or, on the contrary, can we look at the whole sequence as a single causal mechanism -- though one that is never to be repeated? Does a causal mechanism need to be a recurring and robust chain of events, or can it be a unique and contingent chain?

Most mechanisms theorists insist on a degree of repeatability in the sequences that they describe as "mechanisms". A causal mechanism is the triggering pathway through which one event leads to the production of another event in a range of circumstances in an environment. Fundamentally a causal mechanism is a "molecule" of causal process which can recur in a range of different social settings.

For example:
  • X typically brings about O.
Whenever this sequence of events occurs with the appropriate timing, the outcome O is produced. This ensemble of events {X, O} constitutes a single mechanism.

And here is the crucial point: to call this a mechanism requires that this sequence recurs in multiple instances across a range of background conditions.

This suggests an answer to the question about the collapsing crane: the sequence from gust to operator error to crane collapse is not a mechanism, but is rather a unique causal sequence. Each part of the sequence has a causal explanation available; each conveys a form of causal necessity in the circumstances. But the aggregation of these cause-effect connections falls short of constituting a causal mechanism because the circumstances in which it works are all but unique. A satisfactory causal explanation of the internal cause-effect pairs will refer to real repeatable mechanisms -- for example, "twisting a steel frame leads to a loss of support strength". But the concatenation does not add up to another, more complex, mechanism.

Contrast this with "stuck valve" accidents in nuclear power reactors. Valves control the flow of cooling fluids around the critical fuel. If the fuel is deprived of coolant it rapidly overheats and melts. A "stuck valve-loss of fluid-critical overheating" sequence is a recognized mechanism of nuclear meltdown, and has been observed in a range of nuclear-plant crises. It is therefore appropriate to describe this sequence as a genuine causal mechanism in the creation of a nuclear plant failure.

(Stuart Glennan takes up a similar question in "Singular and General Causal Relations: A Mechanist Perspective"; link.)

Friday, March 23, 2018

Machine learning


The Center for the Study of Complex Systems at the University of Michigan hosted an intensive day-long training on some of the basics of machine learning for graduate students and interested faculty and staff. Jake Hofman, a Microsoft researcher who also teaches this subject at Columbia University, was the instructor, and the session was both rigorous and accessible (link). Participants were asked to load a copy of R, a software package designed for the computations involved in machine learning and applied statistics, and numerous data sets were used as examples throughout the day. (Here is a brief description of R; link.) Thanks, Jake, for an exceptionally stimulating workshop.

So what is machine learning? Most crudely, it is a handful of methods through which researchers can sift through a large collection of events or objects, each of which has a very large number of properties, in order to arrive at a predictive sorting of the events or objects into a set of categories. The objects may be email texts or hand-printed numerals (the examples offered in the workshop), the properties may be the presence/absence of a long list of words or the presence of a mark in a bitmap grid, and the categories may be "spam/not spam" or the numerals between 0 and 9. But equally, the objects may be Facebook users, the properties "likes/dislikes" for a very large list of webpages, and the categories "Trump voter/Clinton voter". There is certainly a lot more to machine learning -- for example, these techniques don't shed light on the ways that AI Go systems improve their play. But it's good to start with the basics. (Here is a simple presentation of the basics of machine learning; link.)
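
To make this setup concrete, here is a minimal sketch in Python (not the R used in the workshop); the vocabulary and the example email are invented for illustration. An object (an email) is reduced to a presence/absence vector over a list of word properties, and the classification task is then to map such vectors onto categories like "spam/not spam".

    # A toy, hypothetical vocabulary; real spam filters use many thousands of words.
    VOCABULARY = ["winner", "free", "meeting", "budget", "prize"]

    def to_feature_vector(text):
        """Represent a text as a presence/absence (1/0) vector over VOCABULARY."""
        words = set(text.lower().split())
        return [1 if w in words else 0 for w in VOCABULARY]

    print(to_feature_vector("You are a winner of a free prize"))   # -> [1, 1, 0, 0, 1]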

Two intuitive techniques form the core of basic machine learning theory. The first makes use of measured conditional probabilities in conjunction with Bayes' theorem to assign a probability that an object belongs to category Φ given the presence of properties x_i. The second uses massively multi-factor regressions to calculate the probability that the object is a Φ given regression coefficients c_i.
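
As an illustration of the first technique, here is a minimal naive-Bayes-style sketch: Bayes' theorem is applied to a few word-presence properties to yield the probability that a message is spam. The conditional probabilities and the 50/50 prior are invented for illustration; in practice they would be estimated from a labeled training set, and the independence assumption in the calculation is what keeps it tractable when there are thousands of word properties.

    from math import prod

    # Hypothetical estimates of P(word present | class); in practice these come from training data.
    P_WORD_GIVEN_SPAM = {"winner": 0.7, "free": 0.6, "meeting": 0.1}
    P_WORD_GIVEN_HAM  = {"winner": 0.05, "free": 0.2, "meeting": 0.5}
    P_SPAM, P_HAM = 0.5, 0.5   # assumed prior probabilities of the two categories

    def spam_probability(words_present):
        """Posterior P(spam | words present), assuming the words are conditionally independent."""
        spam_term = P_SPAM * prod(P_WORD_GIVEN_SPAM[w] for w in words_present)
        ham_term = P_HAM * prod(P_WORD_GIVEN_HAM[w] for w in words_present)
        return spam_term / (spam_term + ham_term)

    print(round(spam_probability(["winner", "free"]), 3))   # -> 0.977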

Another basic technique is to treat the classification problem spatially. Use the large number of variables to define an n-dimensional space; then classify the object according to the average or majority value of its m-closest neighbors. (The neighbor number m might range from 1 to some manageable number such as 10.)
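
Here is a minimal sketch of that spatial idea, with a handful of invented labeled points in a two-dimensional feature space and m = 3: the new object gets the majority label of its m closest neighbors.

    from collections import Counter

    # Hypothetical labeled examples: (feature vector, category).
    TRAINING = [((1.0, 1.2), "spam"), ((0.9, 1.0), "spam"), ((1.1, 0.8), "spam"),
                ((4.0, 4.2), "not spam"), ((3.8, 4.1), "not spam"), ((4.2, 3.9), "not spam")]

    def classify(point, m=3):
        """Return the majority label among the m training points closest to point."""
        def distance(p):
            return sum((a - b) ** 2 for a, b in zip(p, point)) ** 0.5
        neighbors = sorted(TRAINING, key=lambda example: distance(example[0]))[:m]
        return Counter(label for _, label in neighbors).most_common(1)[0][0]

    print(classify((1.0, 1.0)))   # -> spam

In a real application the space would have thousands of dimensions, one per property, but the logic is the same.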


There are many issues of methodology and computational technique raised by this approach to knowledge. But these are matters of technique, and smart data science researchers have made great progress on them. More interesting here are epistemological issues: how good and how reliable are the findings produced by these approaches to the algorithmic treatment of large data sets? How good is the spam filter or the Trump voter detector when applied to novel data sets? What kinds of errors should we expect this approach to be vulnerable to?

One important observation is that these methods are explicitly anti-theoretical. There is no place for discovery of causal mechanisms or underlying explanatory processes in these calculations. The researcher is not expected to provide a theoretical hypothesis about how this system of phenomena works. Rather, the techniques are entirely devoted to the discovery of persistent statistical associations among variables and the categories of the desired sorting. This is as close to Baconian induction as we get in the sciences (link). The approach is concerned with classification and prediction, not explanation. (Here is an interesting essay in which Jake Hofman addresses the issues of prediction versus explanation of social data; link.)

A more specific epistemic concern that arises is the possibility that the training set of data may have had characteristics that are importantly different from comparable future data sets. This is the familiar problem of induction: will the future resemble the past sufficiently to support predictions based on past data? Spam filters developed in one email community may work poorly in an email community in another region or profession. We can label this as the problem of robustness.

Another limitation of this approach has to do with problems where our primary concern is with a singular event or object rather than a population. If we want to know whether NSA employee John Doe is a Russian mole, it isn't especially useful to know that his nearest neighbors in a multi-dimensional space of characteristics are moles; we need to know more specifically whether Doe himself has been corrupted by the Russians. If we want to know whether North Korea will explode a nuclear weapon against a neighbor in the next six months the techniques of machine learning seem to be irrelevant.

The statistical and computational tools of machine learning are indeed powerful, and seem to lead to results that are both useful and sometimes surprising. One should not imagine, however, that machine learning is a replacement for all other forms of research methodology in the social and behavioral sciences.

(Here is a brief introduction to a handful of the algorithms currently in use in machine-learning applications; link.)

Saturday, March 10, 2018

Technology lock-in accidents

[image: diagram of a molten salt reactor]

Organizational and regulatory features are sometimes part of the causal background of important technology failures. This is particularly true in the history of nuclear power generation. The promise of peaceful uses of atomic energy was enormously attractive at the end of World War II. In abstract terms the possibility of generating useable power from atomic reactions was quite simple. What was needed was a controllable fission reaction in which the heat produced by fission could be captured to run a steam-powered electrical generator.

The technical challenges presented by harnessing nuclear fission in a power plant were large. Fissionable material needed to be produced as useable fuel sources. A control system needed to be designed to maintain the level of fission at a desired level. And, most critically, a system for removing heat from the fissioning fuel needed to be designed so that the reactor core would not overheat and melt down, releasing energy and radioactive materials into the environment.

Early reactor designs took different approaches to the heat-removal problem. Liquid metal reactors used a metal like sodium as the fluid that would run through the core removing heat to a heat sink for dispersal; and water reactors used pressurized water to serve that function. The sodium breeder reactor design appeared to be a viable approach, but incidents like the Fermi 1 disaster near Detroit cast doubt on the wisdom of using this approach. The reactor design that emerged as the dominant choice in civilian power production was the light water reactor. But light water reactors presented their own technological challenges, including most especially the risk of a massive steam explosion in the event of a power interruption to the cooling plant. In order to obviate this risk reactor designs involved multiple levels of redundancy to ensure that no such power interruption would occur. And much of the cost of construction of a modern light water power plant is dedicated to these systems -- containment vessels, redundant power supplies, etc. In spite of these design efforts, however, light water reactors at Three Mile Island and Fukushima did in fact melt down under unusual circumstances -- with particularly devastating results in Fukushima. The nuclear power industry in the United States essentially died as a result of public fears of the possibility of meltdown of nuclear reactors near populated areas -- fears that were validated by several large nuclear disasters.

What is interesting about this story is that US nuclear scientists and engineers developed an alternative reactor design in the 1950s that involved a significantly different solution to the problem of harnessing the heat of a nuclear reaction, and that posed a dramatically lower level of risk of meltdown and radioactive release. This is the molten salt reactor, first developed at the Oak Ridge National Laboratory as part of the loopy idea of creating an atomic-powered aircraft that could remain aloft for months. This reactor design operates at atmospheric pressure, and the technological challenges of maintaining a molten salt cooling system are readily solved. Because no water is involved in the cooling system, the greatest danger in a nuclear power plant, a violent steam explosion, is eliminated entirely: molten salt will not flash to steam. Chinese nuclear energy researchers are currently developing a next generation of molten salt reactors, and they are likely to succeed in designing a reactor system that is both more efficient in terms of cost and dramatically safer in terms of low-probability, high-cost accidents (link). This technology also has the advantage of making much more efficient use of the nuclear fuel, leaving a dramatically smaller amount of radioactive waste to dispose of.

So why did the US nuclear industry abandon the molten-salt reactor design? This seems to be a situation of lock-in by an industry and a regulatory system. Once the industry settled on the light water reactor design, it was embedded by the Nuclear Regulatory Commission in the regulations and licensing requirements for new nuclear reactors. It was subsequently extremely difficult for a utility company or a private energy corporation to invest in the research, development, and construction costs that would be associated with a radical change of design. There is currently an effort by an American company to develop a new-generation molten salt reactor, and the process is inhibited by the knowledge that it will take a minimum of ten years to gain certification and licensing for a commercial plant based on the new design (link).

This story illustrates the possibility that a process of technology development may get locked into a particular approach that embodies substantial public risk, and it may be all but impossible to subsequently adopt a different approach. In another context Thomas Hughes refers to this as technological momentum, and it is clear that there are commercial, institutional, and regulatory reasons for this "stickiness" of a major technology once it is designed and adopted. In the case of nuclear power the inertia associated with light water reactors is particularly unfortunate, given that it blocked other solutions that were both safer and more economical.

(Here is a valuable review of safety issues in the nuclear power industry; link. Also relevant is Robin Cowan, "Nuclear Power Reactors: A Study in Technological Lock-in"; link -- thanks, Özgür, for the reference. And here is a critical assessment of molten salt reactor designs by Bulletin of the Atomic Scientists (link).)

Saturday, March 3, 2018

Consensus and mutual understanding


Groups make decisions through processes of discussion aimed at framing a given problem, outlining the group's objectives, and arriving at a plan for how to achieve the objectives in an intelligent way. This is true at multiple levels, from neighborhood block associations to corporate executive teams to the President's cabinet meetings. However, collective decision-making through extended discussion faces more challenges than is generally recognized. Processes of collective deliberation are often haphazard, incomplete, and indeterminate.

What is collective deliberation about? It is often the case that a collaborative group or team has a generally agreed-upon set of goals -- let's say reducing the high school dropout rate in a city or improving morale on the plant floor or deterring North Korean nuclear expansion. The group comes together to develop a strategy and a plan for achieving the goal. Comments are offered about how to think about the problem, what factors may be relevant to bringing the problem about, what interventions might have a positive effect on the problem. After a reasonable range of conversation the group arrives at a strategy for how to proceed.

An idealized version of group problem-solving makes this process both simple and logical. The group canvasses the primary facts available about the problem and its causes. The group recognizes that there may be multiple goods involved in the situation, so the primary objective needs to be considered in the context of the other valuable goods that are part of the same bundle of activity. The group canvasses these various goods as well. The group then canvasses the range of interventions that are feasible in the existing situation, along with the costs and benefits of each strategy. Finally, the group arrives at a consensus about which strategy is best, given everything we know about the dynamics of the situation.

But anyone who has been part of a strategy-oriented discussion asking diverse parties to think carefully about a problem that all participants care about will realize that the process is rarely so amenable to simple logical development. Instead, almost every statement offered in the discussion is both ambiguous to some extent and factually contestable. Outcomes are sensitive to differences in the levels of assertiveness of various participants. Opinions are advanced as facts, and there is insufficient effort expended to validate the assumptions that are being made. Outcomes are also sensitive to the order and structure of the agenda for discussion. And finally, discussions need to be summarized; but there are always interpretive choices that need to be made in summarizing a complex discussion. Points need to be assigned priority and cogency; and different scribes will have different judgments about these matters.

Here is a problem of group decision-making that is rarely recognized but seems pervasive in the real world. This is the problem of recurring misunderstandings and ambiguities within the group about the various statements and observations that are made. The parties proceed on the basis of frameworks of assumptions that differ substantially from one person to the next but are never fully exposed. One person asserts that the school day should be lengthened, imagining a Japanese model of high school. Another thinks back to her own high school experience and agrees, thinking that five hours of instruction may well be more effective for learning than four hours. They agree about the statement but they are thinking of very different changes.

The bandwidth of a collective conversation about a complicated problem is simply too narrow to permit ambiguities and factual errors to be tracked down and sorted out. The conversation is invariably incomplete, and often takes shape because of entirely irrelevant factors like who speaks first or most forcefully. It is as if the space of the discussion is in two dimensions, whereas the complexity of the problem under review is in three dimensions.

The problem is exacerbated by the fact that participants sometimes have their own agendas and hobby horses that they continually re-inject into the discussion under varying pretexts. As the group fumbles towards possible consensus these fixed points coming from a few participants either need to be ruled out or incorporated -- and neither is a fully satisfactory result. If the point is ruled out some participants will believe their inputs are not respected, but if it is incorporated then the consensus has been deformed from a more balanced view of the issue.

A common solution to the problems of group deliberation mentioned here is to assign an expert facilitator or "muse" for the group who is tasked to build up a synthesis of the discussion as it proceeds. But it is evident that the synthesis is underdetermined by the discussion. Some points will be given emphasis over others, and a very different story line could have been reached that leads to different outcomes. This is the Rashomon effect applied to group discussions.

A different solution is to think of group discussion as simply an aid to a single decision maker -- a chief executive who listens to the various points of view and then arrives at her own formulation of the problem and a solution strategy. But of course this approach abandons the idea of reaching a group consensus in favor of the simpler problem of an individual reaching his or her own interpretation of the problem and possible solutions based on input from others.

This is a problem for organizations, both formal and informal, because every organization attempts to decide what to do through some kind of exploratory discussion. It is also a problem for the theory of deliberative democracy (link, link).

This suggests that there is an important problem of collective rationality that has not been addressed either by philosophy or management studies: the problem of aggregating the beliefs, perceptions, and values held by diverse members of a group into a coherent statement of the problem, causes, and solutions for the issue under deliberation. We would like to be able to establish processes that lead to rational and effective solutions to problems that incorporate available facts and judgments. Further, we would like the outcomes to be non-arbitrary -- that is, given an antecedent set of factual and normative beliefs by the participants, we would like to imagine that there is a relatively narrow band of policy solutions that will emerge as the consensus or decision. We have theories of social choice -- the aggregation of fixed preferences. And we have theories of rational decision-making and planning. But a deliberative group discussion of an important problem is substantially more complex. We need a philosophy of the meeting!