Tuesday, February 27, 2018

Computational social science


Is it possible to elucidate complex social outcomes using computational tools? Can we overcome some of the problems for social explanation posed by heterogeneous actors and changing social environments by making use of increasingly powerful computational tools for modeling the social world? Ken Kollman, John Miller, and Scott Page make the affirmative case in their 2003 volume, Computational Models in Political Economy. The book focuses on computational approaches to political economy and social choice. Their introduction provides an excellent overview of the methodological and philosophical issues that arise in computational social science.
The subject of this book, political economy, naturally lends itself to a computational methodology. Much of political economy concerns institutions that aggregate the behavior of multiple actors, such as voters, politicians, organizations, consumers, and firms. Even when the interactions within and rules of a political or economic institution are relatively simple, the aggregate patterns that emerge can be difficult to predict and understand, particularly when there is no equilibrium. It is even more difficult to understand overlapping and interdependent institutions.... Computational methods hold the promise of enabling scholars to integrate aspects of both political and economic institutions without compromising fundamental features of either. (kl 27)
The most interesting of the approaches that they describe is the method of agent-based models (link, link, link). They summarize the approach in these terms:
The models typically have four characteristics, or methodological primitives: agents are diverse, agents interact with each other in a decentralized manner, agents are boundedly rational and adaptive, and the resulting patterns of outcomes often do not settle into equilibria.... The purpose of using computer programs in this second role is to study the aggregate patterns that emerge from the "bottom up." (kl 51)
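To make these four primitives concrete, here is a minimal agent-based model sketch in Python. It is not drawn from the volume; the payoff structure, neighborhood size, and experimentation rate are all illustrative assumptions. The point is simply to exhibit diverse agents, decentralized interaction, boundedly rational adaptation, and an aggregate pattern that need not settle into an equilibrium.

```python
# Minimal illustrative ABM (not the volume's code): heterogeneous, boundedly
# rational agents repeatedly choose between two actions and adjust by comparing
# payoffs against a small random sample of other agents. All parameters here
# are illustrative assumptions.
import random

random.seed(0)

N_AGENTS = 100
N_STEPS = 200

# Diverse agents: each has an idiosyncratic bias toward action 1 and a current action.
agents = [{"bias": random.uniform(-0.5, 0.5), "action": random.randint(0, 1)}
          for _ in range(N_AGENTS)]

def payoff(agent, others):
    """Payoff = coordination term (share of sampled others playing the same action)
    plus the agent's idiosyncratic bias if it is playing action 1."""
    share_same = sum(o["action"] == agent["action"] for o in others) / len(others)
    return share_same + (agent["bias"] if agent["action"] == 1 else 0.0)

history = []
for _ in range(N_STEPS):
    for agent in agents:
        # Decentralized interaction: a small random neighborhood, not a central planner.
        neighbors = random.sample(agents, 5)
        current = payoff(agent, neighbors)
        # Bounded rationality: try the other action against the same sample and keep
        # whichever looks better, with occasional random experimentation.
        agent["action"] = 1 - agent["action"]
        if payoff(agent, neighbors) < current and random.random() > 0.05:
            agent["action"] = 1 - agent["action"]  # revert to the original action
    # Aggregate pattern: the share of agents playing action 1 at each step.
    history.append(sum(a["action"] for a in agents) / N_AGENTS)

print(history[-10:])  # the series may keep drifting or cycling rather than converging
```

Running the sketch and inspecting `history` illustrates the "bottom up" point: the population-level series is generated entirely by local comparisons, and with heterogeneous biases and noise it need not converge to a fixed point.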
Here is how the editors summarize the strengths of computational approaches to social science.
First, computational models are flexible in their ability to encode a wide range of behaviors and institutions. Any set of assumptions about agent behavior or institutional constraints that can be encoded can be analyzed. 
Second, as stated, computational models are rigorous in that conclusions follow from computer code that forces researchers to be explicit about assumptions. 
Third, while most mathematical models include assumptions so that an equilibrium exists, a system of interacting political actors need not settle into an equilibrium point. It can also cycle, or it can traverse an unpredictable path of outcomes. 
The great strength of computational models is their ability to uncover dynamic patterns. (kl 116)
And they offer a set of criteria of adequacy for agent-based models. The model should explain the results; the researcher should check robustness; the model should build upon the past; the researcher should justify the use of the computer; and the researcher should question assumptions (kl 131).
To summarize, models should be evaluated based on their ability to give insight and understanding into old and new phenomena in the simplest way possible. Good, simple models, such as the Prisoner's Dilemma or Nash bargaining, with their ability to frame and shed light on important questions, outlast any particular tool or technique. (kl 139)
A good illustration of a computational approach to problems of political economy is the editors' own contribution to the volume, "Political institutions and sorting in a Tiebout model". A Tiebout configuration is a construct within public choice theory where citizens are permitted to choose among jurisdictions providing different bundles of goods.
In a Tiebout model, local jurisdictions compete for citizens by offering bundles of public goods. Citizens then sort themselves among jurisdictions according to their preferences. Charles M. Tiebout's (1956) original hypothesis challenged Paul Samuelson's (1954) conjecture that public goods could not be allocated efficiently. The Tiebout hypothesis has since been extended to include additional propositions. (kl 2012)
Using an agent-based model they compare different sets of political institutions at the jurisdiction level through which policy choices are made; and they find that there are unexpected outcomes at the population level that derive from differences in the institutions embodied at the jurisdiction level.
Our model departs from previous approaches in several important respects. First, with a few exceptions, our primary interest in comparing the performance of political institutions has been largely neglected in the Tiebout literature. A typical Tiebout model takes the political institution, usually majority rule, as constant. Here we vary institutions and measure performance, an approach more consistent with the literature on mechanism design. Second, aside from an example used to demonstrate the annealing phenomenon, we do not explicitly compare equilibria. (kl 2210)
And they find significant differences in collective behavior in different institutional settings.
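To make the setup concrete, here is a hedged sketch of a Tiebout-style sorting dynamic in Python. It is not Kollman, Miller, and Page's actual model; the number of jurisdictions, the one-dimensional policy space, and the use of the resident median as the "institution" are all illustrative assumptions. The institutional rule in the second stage is the piece one would swap out in order to compare institutions in the spirit of the chapter.

```python
# Illustrative Tiebout-style sorting (not the chapter's actual model): citizens with
# heterogeneous ideal policies move to the jurisdiction whose policy is closest to
# their own, and each jurisdiction then resets its policy by a simple institutional
# rule -- here, the median of its residents' ideal points. All values are toy values.
import random
import statistics

random.seed(1)

N_CITIZENS = 300
N_JURISDICTIONS = 5
N_ROUNDS = 20

ideals = [random.uniform(0, 1) for _ in range(N_CITIZENS)]         # citizen preferences
policies = [random.uniform(0, 1) for _ in range(N_JURISDICTIONS)]  # initial policies

for _ in range(N_ROUNDS):
    # Sorting stage: each citizen picks the jurisdiction nearest its ideal point.
    residents = [[] for _ in range(N_JURISDICTIONS)]
    for ideal in ideals:
        j = min(range(N_JURISDICTIONS), key=lambda k: abs(policies[k] - ideal))
        residents[j].append(ideal)
    # Institutional stage: each occupied jurisdiction updates its policy.
    # Replacing the median with another rule (a mean, a two-party contest, etc.)
    # is how one would compare institutions within this kind of model.
    for j in range(N_JURISDICTIONS):
        if residents[j]:
            policies[j] = statistics.median(residents[j])

print("final policies:", [round(p, 3) for p in policies])
print("population shares:", [round(len(r) / N_CITIZENS, 3) for r in residents])
```

Even in this toy version, the final distribution of policies and population shares depends on the rule used in the institutional stage, which is the kind of population-level sensitivity to jurisdiction-level institutions that the chapter investigates.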

ABM methodology is well suited to the kind of research problem the authors have posed here. The computational method permits intuitive illustration of the ways that individual preferences in specific settings aggregate to distinctive collective behaviors at the group level. But the approach is not so suitable to the analysis of social behavior that involves a higher degree of hierarchical coordination of individual behavior -- for example, in an army, a religious institution, or a business firm. Furthermore, the advantage of abstractness in ABM formulations is also a disadvantage, in that it leads researchers to ignore some of the complexity and nuance of local circumstances of action that lead to significant differences in outcome.


Saturday, February 24, 2018

Nuclear accidents


[Diagrams: Chernobyl reactor before and after]

Nuclear fission is one of the world-changing discoveries of the mid-twentieth century. The atomic bomb projects of the United States led to the atomic bombing of Japan in August 1945, and the hope for limitless electricity brought about the proliferation of a variety of nuclear reactors around the world in the decades following World War II. And, of course, nuclear weapons proliferated to other countries beyond the original circle of atomic powers.

Given the enormous energies associated with fission and the dangerous and toxic properties of radioactive components of fission processes, the possibility of a nuclear accident is a particularly frightening one for the modern public. The world has seen the results of several massive nuclear accidents -- Chernobyl and Fukushima in particular -- and the devastating results they have had on human populations and the social and economic wellbeing of the regions in which they occurred.

Safety is therefore a paramount priority in the nuclear industry, in research labs as well as in military and civilian applications. So what is the state of safety in the nuclear sector? Jim Mahaffey's Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima is a detailed and carefully researched attempt to answer this question. And the information he provides is not reassuring. Beyond the well-known disasters at nuclear power plants (Three Mile Island, Chernobyl, Fukushima), Mahaffey describes hundreds of less widely known accidents involving reactors, research laboratories, weapons plants, and deployed nuclear weapons. These accidents resulted in a very low number of lives lost, but their frequency is alarming. They are indeed "normal accidents" (Perrow, Normal Accidents: Living with High-Risk Technologies). For example:
  • a Japanese fishing boat is contaminated by fallout from the Castle Bravo hydrogen bomb test, and radioactive fish turn up in markets across Japan (March 1, 1954) (kl 1706)
  • one MK-6 atomic bomb is dropped on Mars Bluff, South Carolina, after a crew member accidentally pulled the emergency bomb release handle (February 5, 1958) (kl 5774)
  • Fermi 1 liquid sodium plutonium breeder reactor experiences fuel meltdown during startup trials near Detroit (October 4, 1966) (kl 4127)
Mahaffey also provides detailed accounts of the most serious nuclear accidents and meltdowns of the past forty years: Three Mile Island, Chernobyl, and Fukushima.

The safety and control of nuclear weapons is of particular interest. Here is Mahaffey's summary of "Broken Arrow" events -- the loss of atomic and fusion weapons:
Did the Air Force ever lose an A-bomb, or did they just misplace a few of them for a short time? Did they ever drop anything that could be picked up by someone else and used against us? Is humanity going to perish because of poisonous plutonium spread that was snapped up by the wrong people after being somehow misplaced? Several examples will follow. You be the judge. 
Chuck Hansen [U.S. Nuclear Weapons - The Secret History] was wrong about one thing. He counted thirty-two “Broken Arrow” accidents. There are now sixty-five documented incidents in which nuclear weapons owned by the United States were lost, destroyed, or damaged between 1945 and 1989. These bombs and warheads, which contain hundreds of pounds of high explosive, have been abused in a wide range of unfortunate events. They have been accidentally dropped from high altitude, dropped from low altitude, crashed through the bomb bay doors while standing on the runway, tumbled off a fork lift, escaped from a chain hoist, and rolled off an aircraft carrier into the ocean. Bombs have been abandoned at the bottom of a test shaft, left buried in a crater, and lost in the mud off the coast of Georgia. Nuclear devices have been pounded with artillery of a foreign nature, struck by lightning, smashed to pieces, scorched, toasted, and burned beyond recognition. Incredibly, in all this mayhem, not a single nuclear weapon has gone off accidentally, anywhere in the world. If it had, the public would know about it. That type of accident would be almost impossible to conceal. (kl 5527)
There are a few common threads in the stories of accident and malfunction that Mahaffey provides. First, there are failures of training and knowledge on the part of front-line workers. The physics of nuclear fission is often counter-intuitive, and the idea of critical mass does not fully capture the danger of a quantity of fissionable material. The geometry in which the material is stored makes a crucial difference to whether it goes critical. Fissionable material is often transported and manipulated in liquid solution, and the shape and configuration of the vessel holding the solution affect the probability of exponential growth of neutron emission -- leading to runaway fission of the material. Mahaffey documents accidents in nuclear materials processing plants that resulted from plant workers applying what they knew from industrial plumbing to basic shop-floor problems. All too often the result was a flash of blue light and the release of a great deal of heat and radioactive material.
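The point about vessel geometry can be made concrete with a little arithmetic. The sketch below is illustrative geometry only, not a criticality calculation: it compares surface-to-volume ratios for equal-volume vessels, on the standard qualitative reasoning that neutron production scales roughly with the volume of a fissile solution while neutron leakage scales roughly with its surface area. The specific volume and radii are made-up values.

```python
# Illustrative geometry only (not a criticality calculation). Neutron production in a
# fissile solution scales roughly with volume, while leakage scales roughly with surface
# area, so "favorable geometry" in criticality safety means vessels with a high
# surface-to-volume ratio. All dimensions below are made-up illustrative values.
import math

VOLUME = 0.05  # cubic meters of solution, held fixed across the shapes compared

def sphere_surface(volume):
    radius = (3 * volume / (4 * math.pi)) ** (1 / 3)
    return 4 * math.pi * radius ** 2

def cylinder_surface(volume, radius):
    height = volume / (math.pi * radius ** 2)
    return 2 * math.pi * radius * (radius + height)

shapes = {
    "sphere (worst case)": sphere_surface(VOLUME),
    "squat process tank (r = 0.25 m)": cylinder_surface(VOLUME, 0.25),
    "tall narrow pipe (r = 0.05 m)": cylinder_surface(VOLUME, 0.05),
}

for name, area in shapes.items():
    # Higher surface-to-volume ratio means relatively more neutron leakage.
    print(f"{name:32s} surface/volume = {area / VOLUME:5.1f} per meter")
```

The same quantity of solution is far "leakier" in a long narrow pipe than in a squat tank or a sphere, which is why criticality safety pays so much attention to vessel shape and not just to the total quantity of material.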

Second, there is a fault at the opposite end of the knowledge spectrum -- the tendency of expert engineers and scientists to believe that they can solve complicated reactor problems on the fly. This turned out to be a critical problem at Chernobyl (kl 6859).
The most difficult problem to handle is that the reactor operator, highly trained and educated with an active and disciplined mind, is liable to think beyond the rote procedures and carefully scheduled tasks. The operator is not a computer, and he or she cannot think like a machine. When the operator at NRX saw some untidy valve handles in the basement, he stepped outside the procedures and straightened them out, so that they were all facing the same way. (kl 2057)
There are also clear examples of inappropriate supervision in the accounts shared by Mahaffey. Here is an example from Chernobyl.
[Deputy chief engineer] Dyatlov was enraged. He paced up and down the control panel, berating the operators, cursing, spitting, threatening, and waving his arms. He demanded that the power be brought back up to 1,500 megawatts, where it was supposed to be for the test. The operators, Toptunov and Akimov, refused on grounds that it was against the rules to do so, even if they were not sure why. 
Dyatlov turned on Toptunov. “You lying idiot! If you don’t increase power, Tregub will!”  
Tregub, the Shift Foreman from the previous shift, was officially off the clock, but he had stayed around just to see the test. He tried to stay out of it. 
Toptunov, in fear of losing his job, started pulling rods. By the time he had wrestled it back to 200 megawatts, 205 of the 211 control rods were all the way out. In this unusual condition, there was danger of an emergency shutdown causing prompt supercriticality and a resulting steam explosion. At 1:22:30 a.m., a read-out from the operations computer advised that the reserve reactivity was too low for controlling the reactor, and it should be shut down immediately. Dyatlov was not worried. “Another two or three minutes, and it will be all over. Get moving, boys!” (kl 6887)
This was the turning point in the disaster.

A related fault is the intrusion of political and business interests into the design and conduct of high-risk nuclear actions. Leaders want a given outcome without understanding the technical details of the processes they are demanding; subordinates like Toptunov are eventually cajoled or coerced into taking the problematic actions. The persistence of advocates for liquid sodium breeder reactors represents a higher-level example of the same fault. Associated with this role of political and business interests is an impulse towards secrecy and concealment when accidents occur and deliberate understatement of the public dangers created by an accident -- a fault amply demonstrated in the Fukushima disaster.

Atomic Accidents provides a fascinating history of events of which most of us are unaware. The book is not primarily intended to offer an account of the causes of these accidents, but rather of the ways in which they unfolded and the consequences they had for human welfare. (Generally speaking, his view is that nuclear accidents in North America and Western Europe have had remarkably few human casualties.) And many of the accidents he describes are exactly the sorts of failures that are common in all large-scale industrial and military processes.

(Large-scale technology failure has come up frequently here. See these posts for analysis of some of the organizational causes of technology failure (link, link, link).)

Sunday, February 11, 2018

Folk psychology and Alexa


Paul Churchland made a large splash in the philosophy of mind and cognitive science several decades ago when he cast doubt on the categories of "folk psychology" -- the ordinary and commonsensical concepts we use to describe and understand each other's mental lives. In Paul Churchland and Patricia Churchland, On the Contrary: Critical Essays, 1987-1997, Paul Churchland writes:
"Folk psychology" denotes the prescientific, commonsense conceptual framework that all normally socialized humans deploy in order to comprehend, predict, explain, and manipulate the behavior of . humans and the higher animals. This framework includes concepts such as belief, desire, pain pleasure, love, hate, joy, fear, suspicion, memory, recognition, anger, sympathy, intention, and so forth.... Considered as a whole, it constitutes our conception of what a person is. (3)
Churchland does not doubt that we ordinary human beings make use of these concepts in everyday life, and that we could not dispense with them. But he is not convinced that they have a scientifically useful role to play in scientific psychology or cognitive science.

In our ordinary dealings with other human beings it is both important and plausible that the framework of folk psychology is approximately true. Our fellow human beings really do have beliefs, desires, fears, and other mental capacities, and these capacities are in fact the correct explanation of their behavior. How these capacities are realized in the central nervous system is largely unknown, though as materialists we are committed to the belief that there are such underlying neurological functionings. But eliminative materialism doesn't have a lot of credibility, and the treatment of mental states as epiphenomena of the neurological machinery isn't convincing either.

These issues have generated a great deal of discussion in the philosophy of psychology since the 1980s (link). But the topic seems all the more interesting now that tens of millions of people are interacting with Alexa, Siri, and the Google Assistant, and are often led to treat the voice as emanating from an intelligent (if not very intelligent) entity. I presume that it is clear that Alexa and her counterparts are currently "question bots" with fairly simple algorithms underlying their capabilities. But how will we think about the AI agent when the algorithms are not simple; when the agents can sustain lengthy conversations; and when the interactions give the appearance of novelty and creativity?

It turns out that this is a topic that AI researchers have thought about quite a bit. Here is the abstract of "Understanding Socially Intelligent Agents—A Multilayered Phenomenon", a fascinating 2001 article by Persson, Laaksolahti, and Lonnqvist in an IEEE journal (link):
The ultimate purpose with socially intelligent agent (SIA) technology is not to simulate social intelligence per se, but to let an agent give an impression of social intelligence. Such user-centred SIA technology must consider the everyday knowledge and expectations by which users make sense of real, fictive, or artificial social beings. This folk-theoretical understanding of other social beings involves several, rather independent levels such as expectations on behavior, expectations on primitive psychology, models of folk-psychology, understanding of traits, social roles, and empathy. The framework presented here allows one to analyze and reconstruct users' understanding of existing and future SIAs, as well as specifying the levels SIA technology models in order to achieve an impression of social intelligence.
The emphasis here is clearly on the semblance of intelligence in interaction with the AI agent, not the construction of a genuinely intelligent system capable of intentionality and desire. Early in the article they write:
As agents get more complex, they will land in the twilight zone between mechanistic and living, between dead objects and live beings. In their understanding of the system, users will be tempted to employ an intentional stance, rather than a mechanistic one. Computer scientists may choose system designs that encourage or discourage such anthropomorphism. Irrespective of which, we need to understand how and under what conditions it works.
But the key point here is that the authors favor an approach in which the user is strongly led to apply the concepts of folk psychology to the AI agent; and yet in which the underlying mechanisms generating the AI's behavior completely invalidate the application of these concepts. (This approach brings to mind Searle's Chinese room example concerning "intelligent" behavior; link.) This is clearly the approach taken by current designs of AI agents like Siri; the design of the program emphasizes ordinary language interaction in ways that lead the user to interact with the agent as an intentional "person".

The authors directly confront the likelihood of "folk-psychology" interactions elicited in users by the behavior of AI agents:
When people are trying to understand the behaviors of others, they often use the framework of folk-psychology. Moreover, people expect others to act according to it. If a person’s behavior blatantly falls out of this framework, the person would probably be judged “other” in some way, e.g., children, “crazies,” “psychopaths,” and “foreigners.” In order for SIAs to appear socially intelligent, it is important that their behavior is understandable in terms of the folk-psychological framework. People will project these expectations on SIA technology and will try to attribute mental states and processes according to it. (354)
And the authors make reference to several AI constructs that are specifically designed to elicit a folk-psychological response from the users:
In all of these cases, the autonomous agents have some model of the world, mind, emotions, and of their present internal state. This does not mean that users automatically infer the “correct” mental state of the agent or attribute the same emotion that the system wants to convey. However, with these background models regulating the agent’s behavior the system will support and encourage the user to employ her faculty of folk-psychology reasoning onto the agent. Hopefully, the models generate consistently enough behavior to make folk-psychology a framework within which to understand and act upon the interactive characters. (355)
The authors emphasize the instrumentalism of their recommended approach to SIA capacities from beginning to end:
In order to develop believable SIAs we do not have to know how beliefs-desires and intentions actually relate to each other in the real minds of real people. If we want to create the impression of an artificial social agent driven by beliefs and desires, it is enough to draw on investigations on how people in different cultures develop and use theories of mind to understand the behaviors of others. SIAs need to model the folk-theory reasoning, not the real thing. To a shallow AI approach, a model of mind based on folk-psychology is as valid as one based on cognitive theory. (349)
This way of approaching the design of AI agents suggests that the "folk psychology" interpretation of Alexa's more capable successors will be fundamentally wrong. The agent will not be conscious, intentional, or mental; but it will behave in ways that make it almost impossible not to fall into the trap of anthropomorphism. And this in turn brings us back to Churchland and the critique of folk psychology in the human-human cases. If computer-assisted AI agents can be completely persuasive as mentally structured actors, then why are we so confident that this is not the case for fellow humans as well?

Friday, February 9, 2018

Cold war history from an IR perspective


Odd Arne Westad's The Cold War: A World History is a fascinating counterpoint to Tony Judt's Postwar: A History of Europe Since 1945. There are some obvious differences -- notably, Westad takes a global approach to the Cold War, with substantial attention to the dynamics of Cold War competition in Asia, Africa, Latin America, and the Middle East, as well as Europe, whereas Judt's book is primarily focused on the politics and bi-polar competition of Communism and liberal democratic capitalism in Europe. Westad is a real expert on East Asia, so his global perspectives on the period are very well informed. Both books provide closely reasoned and authoritative interpretations of the large events of the 1950s through the 1990s. So it is very interesting to compare them from an historiographic point of view.

The feature that I'd like to focus on here is Westad's perspective on these historical developments from the point of view of an international-relations conceptual framework. Westad pays attention to the economic and social developments that were underway in the West and the Eastern bloc; but his most frequent analytical question is, what were the intentions, beliefs, and strategies of the nations which were involved in competition throughout the world in this crucial period of world history? Ideology and social philosophy play a large role in his treatment. Judt too offers interpretations of what leaders like Truman, Gorbachev, or Thatcher were trying to accomplish; but the focus of his historiographical thinking is more on the circumstances of ordinary life and the social, economic, and political changes through which ordinary people shaped their political identities across Europe. In Westad's framework there is an underlying emphasis on strategic rationality -- and failures of rationality -- by leaders and national governments that is more muted in Judt's analysis. The two perspectives are not incompatible; but they are significantly different.

Here are a few illustrative passages from Westad's book revealing the orientation of his interpretation around interest and ideology:
The Cold War originated in two processes that took place around the turn of the twentieth century. One was the transformation of the United States and Russia into two supercharged empires with a growing sense of international mission. The other was the sharpening of the ideological divide between capitalism and its critics. These came together with the American entry into World War I and with the Russian Revolution of 1917, and the creation of a Soviet state as an alternative vision to capitalism. (19)
The contest between the US and the USSR over the future of Germany is a good example.
The reasons why Stalin wanted a united Germany were exactly the same reasons why the United States, by 1947, did not. A functional German state would have to be integrated with western Europe in order to succeed, Washington found. And that could not be achieved if Soviet influence grew throughout the country. This was not only a point about security. It was also about economic progress. The Marshall Plan was intended to stimulate western European growth through market integration, and the western occupation zones in Germany were crucial for this project to succeed. Better, then, to keep the eastern zone (and thereby Soviet pressure) out of the equation. After two meetings of the allied foreign ministers in 1947 had failed to agree on the principles for a peace treaty with Germany (and thereby German reunification), the Americans called a conference in London in February 1948 to which the Soviets were not invited. (109)
And the use of development aid during reconstruction was equally strategic:
For Americans and western European governments alike, a major part of the Marshall Plan was combatting local Communist parties. Some of it was done directly, through propaganda. Other effects on the political balance were secondary or even coincidental. A main reason why Soviet-style Communism lost out in France or Italy was simply that their working classes began to have a better life, at first more through government social schemes than through salary increases. The political miscalculations of the Communist parties and the pressure they were under from Moscow to disregard the local political situation in order to support the Soviet Union also contributed. When even the self-inflicted damage was not enough, such as in Italy, the United States experimented with covert operations to break Communist influence. (112)
Soviet miscalculations were critical in the development of east-west power relations. Westad treats the Berlin blockade in these terms:
The Berlin blockade, which lasted for almost a year, was a Soviet political failure from start to finish. It failed to make west Berlin destitute; a US and British air-bridge provided enough supplies to keep the western sectors going. On some days aircraft landed at Tempelhof Airport at three minute intervals. Moscow did not take the risk of ordering them to be shot down. But worse for Stalin: the long-drawn-out standoff confirmed even to those Germans who had previously been in doubt that the Soviet Union could not be a vehicle for their betterment. The perception was that Stalin was trying to starve the Berliners, while the Americans were trying to save them. On the streets of Berlin more than half a million protested Soviet policies. (116)
I don't want to give the impression that Westad's book ignores non-strategic aspects of the period. His treatment of McCarthyism, for example, is quite astute:
The series of hearings and investigations, which accusations such as McCarthy’s gave rise to, destroyed people’s lives and careers. Even for those who were cleared, such as the famous central Asia scholar Owen Lattimore, some of the accusations stuck and made it difficult to find employment. It was, as Lattimore said in his book title from 1950, Ordeal by Slander. For many of the lesser known who were targeted—workers, actors, teachers, lawyers—it was a Kafkaesque world, where their words were twisted and used against them during public hearings by people who had no knowledge of the victims or their activities. Behind all of it was the political purpose of harming the Administration, though even some Democrats were caught up in the frenzy and the president himself straddled the issue instead of publicly confronting McCarthy. McCarthyism, as it was soon called, reduced the US standing in the world and greatly helped Soviet propaganda, especially in western Europe. (120)
It is interesting too to find areas of disagreement between the two historians. Westad's treatment of Leonid Brezhnev is sympathetic:
Brezhnev and his colleagues’ mandate was therefore quite clear. Those who had helped put them in power wanted more emphasis on planning, productivity growth, and welfare. They wanted a leadership that avoided unnecessary crises with the West, but also stood up for Soviet gains and those of Communism globally. Brezhnev was the ideal man for the purpose. As a leader, he liked to consult with others, even if only to bring them onboard with decisions already taken. After the menacing Stalin and the volatile Khrushchev, Brezhnev was likeable and “comradely”; he remembered colleagues’ birthdays and the names of their wives and children. His favorite phrases were “normal development” and “according to plan.” And the new leader was easily forgiven a certain vagueness in terms of overall reform plans as long as he emphasized stability and year-on-year growth in the Soviet economy.... Contrary to what is often believed, the Soviet economy was not a disaster zone during the long reign of Leonid Brezhnev and the leadership cohort who came into power with him. The evidence points to slow and limited but continuous growth, within the framework provided by the planned economy system. The best estimates that we have is that the Soviet economy as a whole grew on average 2.5 to 3 percent per year during the 1960s and ’70s. (367)
By contrast, Judt treats Brezhnev less sympathetically and as a more minor figure:
The economic reforms of the fifties and sixties were from the start a fitful attempt to patch up a structurally dysfunctional system. To the extent that they implied a half-hearted willingness to decentralize economic decisions or authorize de facto private production, they were offensive to hardliners among the old guard. But otherwise the liberalizations undertaken by Khrushchev, and after him Brezhnev, presented no immediate threat to the network of power and patronage on which the Soviet system depended. Indeed, it was just because economic improvements in the Soviet bloc were always subordinate to political priorities that they achieved so very little. (Judt, 424)
Perhaps the most striking contrast between these two books is the scope that each provides. Judt is focused on the development of postwar Europe, and he does an unparalleled job of providing both detail and interpretation of the developments over these decades in well over a dozen countries. Westad is interested in providing a global history of the Cold War, and his expertise on Asian history and politics during this period, as well as his wide-ranging knowledge of developments in Africa, the Middle East, and Latin America, permits him to succeed in this goal. His representation of this history is nuanced and insightful at every turn. The Cold War unavoidably involves a focus on the USSR and the US and their blocs as central players; but Westad's account is by no means eurocentric. His treatments of India, China, and Southeast Asia are particularly excellent, and his account of turbulence and faulty diplomacy in the Middle East is especially timely for the challenges we face today.

*        *         *

Here are a couple of interesting video lectures by Westad and Judt.