Saturday, July 27, 2019

Soviet nuclear disasters: Kyshtym


The 1986 meltdown of reactor number 4 at the Chernobyl Nuclear Power Plant was the greatest nuclear disaster the world has yet seen. Less well known is the Kyshtym disaster of 1957: a catastrophic explosion at an underground nuclear waste storage facility near the Mayak plutonium production plant in the Eastern Ural region of the USSR, which released a massive amount of radioactive material. Information about the disaster was tightly restricted by Soviet authorities, with predictably bad consequences.

Zhores Medvedev was one of the first qualified scientists to provide information and hypotheses about the Kyshtym disaster. His book Nuclear Disaster in the Urals was written while he was in exile in Great Britain and appeared in 1980. It is fascinating to learn that his reasoning is based on his study of ecological, biological, and environmental research done by Soviet scientists between 1957 and 1980. Medvedev was able to piece together the extent of contamination and the general nature of the cause of the event from basic information about radioactive contamination in the region's lakes and streams that was included incidentally in scientific reports from the period.

It is very interesting to find that scientists in the United States were deeply skeptical about Medvedev's assertions. W. Stratton et al published a review analysis in Science in 1979 (link) that found Medvedev's reasoning unpersuasive:
A steam explosion of one tank is not inconceivable but is most improbable, because the heat generation rate from a given amount of fission products is known precisely and is predictable. Means to dissipate this heat would be a part of the design and could be made highly reliable. (423)
They offer an alternative hypothesis about any possible radioactive contamination in the Kyshtym region -- the handful of multimegaton nuclear weapons tests conducted by the USSR in the Novaya Zemlya area.
We suggest that the observed data can be satisfied by postulating localized fallout (perhaps with precipitation) from explosion of a large nuclear weapon, or even from more than one explosion, because we have no limits on the length of time that fallout continued. (425)
And they consider weather patterns during the relevant time period to argue that these tests could have been the source of the radiation contamination identified by Medvedev. Given that Novaya Zemlya is over 1000 miles north of Kyshtym (20 degrees of latitude), fallout from the nuclear tests is a possible alternative hypothesis, but a farfetched one. They conclude:
We can only conclude that, though a radiation release incident may well be supported by the available evidence, the magnitude of the incident may have been grossly exaggerated, the source chosen uncritically, and the dispersal mechanism ignored. Even so we find it hard to believe that an area of this magnitude could become contaminated and the event not discussed in detail or by more than one individual for more than 20 years. (425)
The heart of their skepticism rests on an entirely indefensible assumption: that Soviet science, engineering, and management were fully capable of designing and implementing a safe system for nuclear waste storage. They were perhaps right about the scientific and engineering capabilities of the Soviet system; but the management systems in place were woefully inadequate. Their account assumed a straightforward application of engineering knowledge to the problem, and it failed to take into account the defects of organization and oversight that were rampant within Soviet industrial systems. And in the end the core of Medvedev's claims has been validated.

Another report, compiled by Los Alamos scientists and released in 1982, concluded unambiguously that Medvedev was mistaken, and that the widespread ecological devastation in the region resulted from small and gradual processes of contamination rather than a massive explosion of waste materials (link). Here is the conclusion put forward by the study's authors:
What then did happen at Kyshtym? A disastrous nuclear accident that killed hundreds, injured thousands, and contaminated thousands of square miles of land? Or, a series of relatively minor incidents, embellished by rumor, and severely compounded by a history of sloppy practices associated with the complex? The latter seems more highly probable.
So Medvedev is dismissed.

After the collapse of the USSR voluminous records about the Kyshtym disaster became available from secret Soviet files, and those records make it plain that US scientists badly misjudged the nature of the Kyshtym disaster. Medvedev was much closer to the truth than were Stratton and his colleagues or the authors of the Los Alamos report.

A scientific report based on Soviet-era documents that were released after the fall of the Soviet Union appeared in the Journal of Radiological Protection in 2017 (A V Akleyev et al 2017; link). Here is their brief description of the accident:
Starting in the earliest period of Mayak PA activities, large amounts of liquid high-level radioactive waste from the radiochemical facility were placed into long-term controlled storage in metal tanks installed in concrete vaults. Each full tank contained 70–80 tons of radioactive wastes, mainly in the form of nitrate compounds. The tanks were water-cooled and equipped with temperature and liquid-level measurement devices. In September 1957, as a result of a failure of the temperature-control system of tank #14, cooling-water delivery became insufficient and radioactive decay caused an increase in temperature followed by complete evaporation of the water, and the nitrate salt deposits were heated to 330 °C–350 °C. The thermal explosion of tank #14 occurred on 29 September 1957 at 4:20 pm local time. At the time of the explosion the activity of the wastes contained in the tank was about 740 PBq [5, 6]. About 90% of the total activity settled in the immediate vicinity of the explosion site (within distances less than 5 km), primarily in the form of coarse particles. The explosion gave rise to a radioactive plume which dispersed into the atmosphere. About 2 × 10^6 Ci (74 PBq) was dispersed by the wind (north-northeast direction with wind velocity of 5–10 m s^-1) and caused the radioactive trace along the path of the plume [5]. Table 1 presents the latest estimates of radionuclide composition of the release used for reconstruction of doses in the EURT area. The mixture corresponded to uranium fission products formed in a nuclear reactor after a decay time of about 1 year, with depletion in ^137Cs due to a special treatment of the radioactive waste involving the extraction of ^137Cs [6]. (R20-21)
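The activity figures in this passage are internally consistent and easy to check: a curie is 3.7 × 10^10 becquerels by definition, so the 2 × 10^6 Ci carried off by the wind is 74 PBq, about a tenth of the tank's 740 PBq inventory -- which matches the statement that about 90% of the activity settled near the explosion site. Here is a minimal sketch of the arithmetic, using only figures quoted above:

```python
CI_TO_BQ = 3.7e10  # becquerels per curie, by definition

total_activity_pbq = 740.0  # total activity of the wastes in tank #14 (PBq)
dispersed_ci = 2e6          # activity dispersed by the wind (Ci)

# Convert curies to petabecquerels: Ci -> Bq -> PBq
dispersed_pbq = dispersed_ci * CI_TO_BQ / 1e15

print(f"Dispersed activity: {dispersed_pbq:.0f} PBq")                        # 74 PBq
print(f"Share of tank inventory: {dispersed_pbq / total_activity_pbq:.0%}")  # 10%
```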
Here is the region of radiation contamination (EURT) that Akleyev et al identify:

This region is a large area encompassing 23,000 square kilometers (8,880 square miles). Plainly Akleyev et al describe a massive disaster, including a very large explosion in an underground nuclear waste storage facility, large-scale dispersal of nuclear materials, and evacuation of the population throughout a large region. This is very close to the description provided by Medvedev.

A somewhat surprising finding of the Akleyev study is that the exposed population did not show dramatically worse health outcomes and mortality relative to unexposed populations. For example, "Leukemia mortality rates over a 30-year period after the accident did not differ from those in the group of unexposed people" (R30). Their epidemiological study for cancers overall likewise indicates only a small effect of accidental radiation exposure on cancer incidence:
The attributable risk (AR) of solid cancer incidence in the EURTC, which gives the proportion of excess cancer cases out of the sum of excess and baseline cases, calculated according to the linear model, made up 1.9% over the whole follow-up period. Therefore, only 27 cancer cases out of 1426 could be associated with accidental radiation exposure of the EURT population. AR is highest in the highest dose groups (250–500 mGy and >500 mGy) and exceeds 17%.
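The attributable-risk figure can be reproduced directly from the case counts in the quotation. A minimal sketch (the function is simply the definition given in the passage, not code from the study):

```python
def attributable_risk(excess: int, baseline: int) -> float:
    """Proportion of excess cases out of the sum of excess and baseline cases."""
    return excess / (excess + baseline)

total_cases = 1426   # solid cancer cases in the EURT cohort
excess_cases = 27    # cases attributable to accidental radiation exposure
baseline_cases = total_cases - excess_cases

ar = attributable_risk(excess_cases, baseline_cases)
print(f"AR over the whole follow-up period: {ar:.1%}")  # 1.9%
```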
So why did the explosion occur? James Mahaffey examines the case in detail in Atomic Accidents: A History of Nuclear Meltdowns and Disasters: From the Ozark Mountains to Fukushima. Here is his account:
In the crash program to produce fissile bomb material, a great deal of plutonium was wasted in the crude separation process. Production officials decided that instead of being dumped irretrievably into the river, the plutonium that had failed to precipitate out, remaining in the extraction solution, should be saved for future processing. A big underground tank farm was built in 1953 to hold processed fission waste. Round steel tanks were installed in banks of 20, sitting on one large concrete slab poured at the bottom of an excavation, 27 feet deep. Each bank was equipped with a heat exchanger, removing the heat buildup from fission-product decay using water pipes wrapped around the tanks. The tanks were then buried under a backfill of dirt. The tanks began immediately to fill with various waste solutions from the extraction plant, with no particular distinction among the vessels. The tanks contained all the undesirable fission products, including cobalt-60, strontium-90, and cesium-137, along with unseparated plutonium and uranium, with both acetate and nitrate solutions pumped into the same volume. One tank could hold probably 100 tons of waste product. 
In 1956, a cooling-water pipe broke leading to one of the tanks. It would be a lot of work to dig up the tank, find the leak, and replace the pipe, so instead of going to all that trouble, the engineers in charge just turned off the water and forgot about it. 
A year passed. Not having any coolant flow and being insulated from the harsh Siberian winter by the fill dirt, the tank retained heat from the fission-product decay. Temperature inside reached 660° Fahrenheit, hot enough to melt lead and cast bullets. Under this condition, the nitrate solutions degraded into ammonium nitrate, or fertilizer, mixed with acetates. The water all boiled away, and what was left was enough solidified ANFO explosive to blow up Sterling Hall several times, being heated to the detonation point and laced with dangerous nuclides. [189]
Sometime before 11:00 P.M. on Sunday, September 29, 1957, the bomb went off, throwing a column of black smoke and debris reaching a kilometer into the sky, accented with larger fragments burning orange-red. The 160-ton concrete lid on the tank tumbled upward into the night like a badly thrown discus, and the ground thump was felt many miles away. Residents of Chelyabinsk rushed outside and looked at the lighted display to the northwest, as 20 million curies of radioactive dust spread out over everything sticking above ground. The high-level wind that night was blowing northeast, and a radioactive plume dusted the Earth in a tight line, about 300 kilometers long. This accident had not been a runaway explosion in an overworked Soviet production reactor. It was the world’s first “dirty bomb,” a powerful chemical explosive spreading radioactive nuclides having unusually high body burdens and guaranteed to cause havoc in the biosphere. The accidentally derived explosive in the tank was the equivalent of up to 100 tons of TNT, and there were probably 70 to 80 tons of radioactive waste thrown skyward. (KL 5295)
So what were the primary organizational and social causes of this disaster? One is the haste in nuclear design and construction created by Stalin's insistence on moving the Soviet nuclear weapons program forward as rapidly as possible. As is evident in the Chernobyl case as well, the political pressures on engineers and managers that followed from these political priorities often led to disastrous decisions and actions. A second is the institutionalized system of secrecy that surrounded industry generally, the military specifically, and the nuclear industry most especially. A third is the casual attitude taken by Soviet officials towards the health and wellbeing of the population. And a final cause highlighted by Mahaffey's account is the low level of attention given at the plant level to safety and maintenance of highly risky facilities. Stratton et al based their analysis on the fact that the heat-generating characteristics of nuclear waste were well understood and that effective means existed for controlling those risks. That may be, but what they failed to anticipate is that these risks would be fundamentally disregarded on the ground and in the supervisory system above the Kyshtym storage complex.

(It is interesting to note that Mahaffey himself underestimates the amount of information that is now available about the effects of the disaster. He writes that "studies of the effects of this disaster are extremely difficult, as records do not exist, and previous residents are hard to track down" (KL 5330). But the Akleyev study mentioned above provides extensive health details about the affected population, made possible by data that were collected during Soviet times and long concealed.)

Thursday, July 18, 2019

Safety and accident analysis: Longford


Andrew Hopkins has written a number of fascinating case studies of industrial accidents, usually in the field of petrochemicals. These books are crucial reading for anyone interested in arriving at a better understanding of technological safety in the context of complex systems involving high-energy and tightly-coupled processes. Especially interesting is his Lessons from Longford: The ESSO Gas Plant Explosion. The Longford gas plant suffered an explosion and fire in 1998 that killed two workers, badly injured others, and interrupted the supply of natural gas to the state of Victoria for two weeks. Hopkins is a sociologist, but he has developed substantial expertise in the technical details of petrochemical processing plants. He served as an expert witness in the Royal Commission hearings that investigated the accident. The accounts he offers of these disasters are genuinely fascinating to read.

Hopkins makes the now-familiar point that companies often seek to lay responsibility for a major industrial accident on operator error or malfeasance. This was Esso's defense concerning its corporate liability in the Longford disaster. But, as Hopkins points out, the larger causes of failure go far beyond the individual operators whose decisions and actions were proximate to the event. Training, operating plans, hazard analysis, availability of appropriate onsite technical expertise -- these are all the responsibility of the owners and managers of the enterprise. And regulation and oversight of safety practices are the responsibility of state agencies. So it is critical to examine the operations of a complex and dangerous technological system at all these levels.

A crucial part of management's responsibility is to engage in formal "hazard and operability" (HAZOP) analysis. "A HAZOP involves systematically imagining everything that might go wrong in a processing plant and developing procedures or engineering solutions to avoid these potential problems" (26). This kind of analysis is especially critical in high-risk industries including chemical plants, petrochemical refineries, and nuclear reactors. It emerged during the Longford accident investigation that HAZOP analyses had been conducted for some aspects of risk but not for all -- even in areas where the parent company Exxon was itself already fully engaged in analysis of those risky scenarios. The risk of embrittlement of processing equipment when exposed to super-chilled conditions was one that Exxon had already drawn attention to at the corporate level because of prior incidents.
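To make the method concrete, here is a minimal sketch of the bookkeeping a HAZOP involves: standard guidewords are applied to each process parameter to enumerate deviations, and each deviation must then be ruled out or covered by a procedure or engineering solution. The guidewords and parameters below are illustrative only, not drawn from Hopkins's book or from Esso's actual studies:

```python
from itertools import product

# Standard HAZOP guidewords, applied in turn to each process parameter
guidewords = ["no", "more", "less", "reverse", "other than"]

# Illustrative parameters for one node of a gas-processing plant
parameters = ["flow", "temperature", "pressure", "level"]

# Every (parameter, guideword) pair is a candidate deviation that the
# team must either rule out or cover with a procedure or engineering fix.
hazop_worksheet = [
    {
        "deviation": f"{word} {param}",
        "causes": [],        # to be filled in by the review team
        "consequences": [],  # e.g. "less temperature" -> cold-metal embrittlement
        "safeguards": [],    # procedures or engineering solutions
    }
    for param, word in product(parameters, guidewords)
]

print(f"{len(hazop_worksheet)} deviations to assess for this node")
```

Even this toy worksheet shows why the gap at Longford mattered: "less temperature" on a heat exchanger is exactly the deviation -- embrittlement under super-chilled conditions -- that a completed HAZOP for the relevant plant would have forced onto the table.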

A factor that Hopkins judges to be crucial to the occurrence of the Longford Esso disaster is the decision made by management to relocate engineering staff from the plant to a central location where they could serve a larger number of facilities "more efficiently".
A second relevant change was the relocation to Melbourne in 1992 of all the engineering staff who had previously worked at Longford, leaving the Longford operators without the engineering backup to which they were accustomed. Following their removal from Longford, engineers were expected to monitor the plant from a distance and operators were expected to telephone the engineers when they felt a need to. Perhaps predictably, these arrangements did not work effectively, and I shall argue in the next chapter that the absence of engineering expertise had certain long-term consequences which contributed to the accident. (34)
One result of this decision was that when the Longford incident began, there were no engineering experts on site who could correctly identify the risks it created. Technicians therefore restarted the process by reintroducing warm oil into the super-chilled heat exchanger. The metal, embrittled by the extremely low temperatures, cracked, leading to the release of fuel and the subsequent explosion and fire. As Hopkins points out, Exxon experts had long been aware of the hazards of embrittlement. However, it appears that the operating procedures developed by Esso at Longford ignored this risk, and operators and supervisors lacked the technical/scientific knowledge to recognize the hazard when it arose.

The topic of "tight coupling" (the tight interconnection across different parts of a complex technological system) comes up frequently in discussions of technology accidents. Hopkins shows that the Longford case gives a new spin to this idea. In the case of the explosion and fire at Longford it turned out to be very important that plant 1 was interconnected by numerous plumbing connections to plants 2 and 3. This meant that fuel from plants 2 and 3 continued to flow into plant 1 and greatly extended the length of time it took to extinguish the fire. Plant 1 had to be fully isolated from plants 2 and 3 before the fire could be extinguished (or plants 2 and 3 could be restarted), and there were enough plumbing connections among them, poorly understood at the time of the fire, that took a great deal of time to disconnect (32).

Hopkins addresses the issue of government regulation of high-risk industries in connection with the Longford disaster. Writing in 1999 or so, he recognizes the trend towards "self-regulation" in place of government rules stipulating the operation of various industries. He contrasts this approach with deregulation -- the effort to allow safe operation to be governed by the market rather than by law.
Whereas the old-style legislation required employers to comply with precise, often quite technical rules, the new style imposes an overarching requirement on employers that they provide a safe and healthy workplace for their employees, as far as practicable. (92)
He notes that this approach does not necessarily reduce the need for government inspections; but the goal of regulatory inspection will be different. Inspectors will seek to satisfy themselves that the industry has done a responsible job of identifying hazards and planning accordingly, rather than looking for violations of specific rules. (This parallels to some extent his discussion of two different philosophies of audit, one of which is much more conducive to increasing the systems-safety of high-risk industries; chapter 7.) But his preferred regulatory approach is what he describes as "safety case regulation". (Hopkins provides more detail about the workings of a safety case regime in Disastrous Decisions: The Human and Organisational Causes of the Gulf of Mexico Blowout, chapter 10.)
The essence of the new approach is that the operator of a major hazard installation is required to make a case or demonstrate to the relevant authority that safety is being or will be effectively managed at the installation. Whereas under the self-regulatory approach, the facility operator is normally left to its own devices in deciding how to manage safety, under the safety case approach it must lay out its procedures for examination by the regulatory authority. (96)
The preparation of a safety case would presumably include a comprehensive HAZOP analysis, along with procedures for preventing or responding to the occurrence of possible hazards. Hopkins reports that the safety case approach to regulation is being adopted by the EU, Australia, and the UK with respect to a number of high-risk industries. This discussion is highly relevant to the current debate over aircraft manufacturing safety and the role of the FAA in overseeing manufacturers.

It is interesting to realize that Hopkins is implicitly critical of another of my favorite authors on the topic of accidents and technology safety, Charles Perrow. Perrow's central idea of "normal accidents" brings along with it a certain pessimism about the ability to increase safety in complex industrial and technological systems; accidents are inevitable and normal (Normal Accidents: Living with High-Risk Technologies). Hopkins takes a more pragmatic approach and argues that there are engineering and management methodologies that can significantly reduce the likelihood and harm of accidents like the Esso gas plant explosion. His central point is that we don't need to be able to anticipate a long chain of unlikely events in order to identify the hazards in which such chains may eventuate -- for example, loss of coolant in a nuclear reactor or loss of warm oil in a refinery process. Numerous different accident scenarios converge on a small set of such final events, and procedures can be put in place to guide the responses of engineers and technicians when "normal accidents" occur (33).

Hopkins highlights the challenge to safety created by the ongoing modification of a power plant or chemical plant; later modifications may create hazards not anticipated by the rigorous accident analysis performed on the original design.
Processing plants evolve and grow over time. A study of petroleum refineries in the US has shown that "the largest and most complex refineries in the sample are also the oldest ... Their complexity emerged as a result of historical accretion. Processes were modified, added, linked, enhanced and replaced over a history that greatly exceeded the memories of those who worked in the refinery". (33)
This is one of the chief reasons why Perrow believes technological accidents are inevitable. However, Hopkins draws a different conclusion:
However, those who are committed to accident prevention draw a different conclusion, namely, that it is important that every time physical changes are made to plant these changes be subjected to a systematic hazard identification process. ...  Esso's own management of change philosophy recognises this. It notes that "changes potentially invalidate prior risk assessments and can create new risks, if not managed diligently." (33)
(I believe this recommendation conforms to Nancy Leveson's theories of system safety engineering as well; link.)

Here is the causal diagram that Hopkins offers for the occurrence of the explosion at Longford (122).


The lowest level of the diagram represents the sequence of physical events and operator actions leading to the explosion, fatalities, and loss of gas supply. The next level represents the organizational factors identified in Hopkins's analysis of the event and its background. Central among these factors are the decision to withdraw engineers from the plant; a safety philosophy that focused on lost-time injuries rather than system hazards and processes; failures in the incident reporting system; failure to perform a HAZOP for plant 1; poor maintenance practices; inadequate audit practices; inadequate training for operators and supervisors; and a failure to identify the hazard created by interconnections with plants 2 and 3. The next level identifies the causes of the management failures -- Esso's overriding focus on cost-cutting and a failure by Exxon, the parent company, to adequately oversee safety planning and share information from accidents at other plants. The final two levels of causation concern governmental and societal factors that contributed to the corporate behavior leading to the accident.
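Hopkins's diagram is, in effect, a layered directed graph running from societal causes at the top down to physical events at the bottom. Here is a minimal sketch of that structure in code; the node names and edges are my own simplification of the description above, not Hopkins's exact diagram:

```python
# Layers from Hopkins's causal diagram, top (societal) to bottom (physical);
# node names paraphrase factors described in the text.
causal_layers = {
    "societal/governmental": ["self-regulatory environment"],
    "corporate": ["cost-cutting focus", "inadequate Exxon oversight"],
    "organizational": ["engineers withdrawn from plant", "no HAZOP for plant 1",
                       "lost-time-injury safety focus", "inadequate training"],
    "physical/operational": ["embrittlement hazard unrecognized",
                             "warm oil reintroduced", "explosion and fire"],
}

# Edges point from cause to effect, crossing layers downward
causal_edges = [
    ("self-regulatory environment", "cost-cutting focus"),
    ("cost-cutting focus", "engineers withdrawn from plant"),
    ("inadequate Exxon oversight", "no HAZOP for plant 1"),
    ("engineers withdrawn from plant", "embrittlement hazard unrecognized"),
    ("no HAZOP for plant 1", "embrittlement hazard unrecognized"),
    ("embrittlement hazard unrecognized", "warm oil reintroduced"),
    ("warm oil reintroduced", "explosion and fire"),
]

def upstream_causes(event: str) -> set:
    """Collect every factor with a causal path down to the given event."""
    direct = {cause for cause, effect in causal_edges if effect == event}
    return direct | {a for c in direct for a in upstream_causes(c)}

print(upstream_causes("explosion and fire"))
```

Tracing upstream from the explosion recovers the multi-level story: the physical event sits at the end of causal paths that begin in corporate priorities and the regulatory environment.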

(Here is a list of major industrial disasters; link.)


Wednesday, July 17, 2019

Thai politics and Thai perspectives


One of the benefits of attending international conferences is meeting interesting scholars from different countries and traditions. I had that pleasure while participating in the Asian Conference on Philosophy of the Social Sciences at Nankai University in June, where I met Chaiyan Rajchagool. Chaiyan is a Thai academic in the social sciences. He earned his undergraduate degree in Thailand and received a PhD in sociology from the University of Manchester (UK). He has served as a senior academic administrator in Thailand and is now an associate professor of political science at the University of Phayao in northern Thailand. He is an experienced observer and analyst of Thai society, and he is one of dozens of Thai academics summoned by the military following the coup in 2014 (link). I had numerous conversations with Chaiyan in Tianjin, which I enjoyed very much. He generously shared with me his 1994 book, The rise and fall of the Thai absolute monarchy: Foundations of the modern Thai state from feudalism to peripheral capitalism, and the book is interesting in many different ways. Fundamentally it provides a detailed account of the political and economic transition that Siam / Thailand underwent in the nineteenth century, and it makes innovative use of the best parts of the political sociology of the 1970s and 1980s to account for these processes.

The book places the expansion of European colonialism in Southeast Asia at the center of the story of the emergence of the modern Thai state, from the mid-nineteenth century to the end of the absolute monarchy in the 1930s. Chaiyan seeks to understand the emergence of the modern Siamese and Thai state as a transition from "feudal" state formation to "peripheral capitalist" state formation. He puts this development from the 1850s to the end of the nineteenth century succinctly in the preface:
In the mid-nineteenth century Siam was a conglomerate of petty states and principalities and did not exist as a single political entity.... Economically Siam was transformed into what may be termed, in accordance with our theoretical framework, peripheral capitalism. Accompanying this development was the transformation of old classes and the rise of new classes.... At the political level a struggle took place within the ruling class in Bangkok, and new institutional structures of power began to emerge. As a result the previously fragmented systems of power and administration were brought under unified centralized command in the form of the absolute monarchy. (xiii-xiv)
This is a subtle, substantive, and rigorous account of the politics and economy of modern Siam / Thailand from the high point of western imperialism and colonialism in Asia to the twentieth century. The narrative is never dogmatic, and it offers an original and compelling account of the processes and causes leading to the formation of the Thai state and the absolutist monarchy. Chaiyan demonstrates a deep knowledge of the economic and political realities that existed on the ground in this region in the nineteenth century, and equally he shows expert knowledge about the institutions and strategies of the colonial powers in the region (Britain, France, Germany). For its theoretical insights about state formation, I would compare the book to the work of Michael Mann, Charles Tilly, and Fernand Braudel.

Chaiyan's account of the development of the Thai state emphasizes the role of economic and political interests, both domestic and international. Fundamentally he argues that British interests in teak (and later tin) prompted a British strategy that would today be called "neo-colonial": using its influence in the mid-nineteenth century to create a regime and state favorable to British interests, without directly annexing these territories into its colonial empire. But there were internal determinants of political change as well, deriving from the conflicts between powerful families and townships over the control of taxes.
The year 1873-4 marks the beginning of a period in which the royalty attempted to take political control at the expense of the Bunnag nobility and its royal/noble allies. I have already noted that the Finance Office, founded in 1873, was to unify the collection of tax money from the various tax farmers under different ministries into a single office. To attain this goal, political enforcement and systematic administration were required. The Privy Council and the Council of State, established in June and August 1874, were the first high level state organizations. ... With the creation of a professional military force of 15,000 troops and 3,000 marines ... the decline of the nobility's power was irreversible, whereas the rise of the monarchy had never before had so promising a prospect. (85, 86)
Part of the development of the monarchy involved a transition from the personal politics of the earlier feudal period to a more bureaucratic-administrative system of governance:
Of interest in this regard was the fact that the state was moving away from the direct exercise of personal authority by members of the ruling class. This raises questions about the manner of articulation between the crown and the state. If direct personal control of the state characterizes a feudal state, the ruling class control of the state in a peripheral capitalist society takes the form of a more impersonal rule of law and administrative practice through which are mediated the influences of the politico-economic system and class interests. (88)
Chaiyan makes superb use of some of the conceptual tools of materialist social science and non-doctrinaire Marxism, framing his account in terms of the changes that were underway in Southeast Asia with respect to the economic structure of everyday life (property, labor, class) as well as the imperatives of British imperialism. The references include some of the very best sources in social and historical theory that were available in the 1980s: for example, Ben Anderson, Perry Anderson, Norberto Bobbio, Fernand Braudel, Gene Genovese, Michael Mann, Ralph Miliband, Nicos Poulantzas, James Scott, Theda Skocpol, Charles Tilly, and Immanuel Wallerstein, to name a few out of the hundreds of authors cited in the bibliography. Chaiyan offers a relevant quotation from Fernand Braudel that I hadn't seen before but that is more profound than any of Marx's own pronouncements about "base and superstructure":
Any highly developed society can be broken down into several "ensembles": the economy, politics, culture, and the social hierarchy. The economy can only be understood in terms of the other "ensembles", for it both spreads itself about and opens its own doors to its neighbours. There is action and interaction. That rather special and partial form of the economy that is capitalism can only be fully explained in light of these contiguous "ensembles" and their encroachments; only then will it reveal its true face. 
Thus the modern state, which did not create capitalism but only inherited it, sometimes acts in its favor and at other times acts against it; it sometimes allows capitalism to expand and at other times destroys its mainspring. Capitalism only triumphs when it becomes identified with the state, when it is the state.... 
So the state was either favorable or hostile to the financial world according to its own equilibrium and its own ability to stand firm. (Braudel Afterthoughts on Material Civilization and Capitalism, 64-65)
It is interesting to me that Chaiyan's treatment of the formation of a unified Thai state is attentive to the spatial and geophysical realities that were crucial to the expansion of central state power in the late nineteenth century.
A geophysical map, not a political one, would serve us better, for such a map reveals the mainly geophysical barriers that imposed constraints on the extension of Bangkok power. The natural waterways, mountains, forests and so on all helped determine how effectively Bangkok could claim and exert its power over townships. (2)
In actual fact, owing to geographical barriers and the consequent difficulty of communication, Bangkok exercised direct rule only over an area within a radius of two days travelling (by boat, cart, horse, or on foot). (3) 
Here is a map showing the geophysical features of the region that eventually became unified Thailand, demonstrating the stark delineation between lowlands and highlands:


This is a topic that received much prominence in the more recent writings of James Scott on the politics of Southeast Asia, and in his concept of "Zomia" as a way of singling out the particular challenges of exercising central state power in the highlands of Southeast Asia (link, link). Here is a map created by Martin Lewis (link) intended to indicate the scope of the highland population (Zomia). The map is discussed in an earlier post.


And here is a map of the distribution of ethnic and language groups in Thailand (link), another important element in Chaiyan's account of the consolidation of the Thai monarchy:


It is an interesting commentary on the privilege, priorities, and limitations of the global academic world that Chaiyan's book has almost no visibility in western scholarship. In its own way it is the equal of some of Charles Tilly's writings about the origins of the French state; and yet Tilly is on all reading lists in comparative politics and Chaiyan is not. The book is not cited in one of the main English-language sources on the history of Thailand, A History of Thailand by Chris Baker and Pasuk Phongpaichit, published by Cambridge University Press, even though that book covers exactly the same period in chapter 3. Online academic citation databases turned up only one academic article that provides substantive discussion or use of the book ("Autonomy and subordination in Thai history: the case for semicolonial analysis", Inter‐Asia Cultural Studies, 2007 8:3, 329-348; link). The author of that article, Peter Jackson, is an emeritus professor of Thai history and culture at the Australian National University. The book itself is on the shelves at the University of Michigan library, and I would expect it is available in many research libraries in the United States as well.

So the very interesting theoretical and historical treatment that Chaiyan provides of state formation in Thailand seems not to have received much notice in western academic circles. Why is this? It is hard to avoid the inference that academic prestige and impact follow nations, languages, universities, and publishing houses. A history of a small developing nation, authored by a Thai intellectual at a small university and published by a somewhat obscure Thai publishing company, is not destined to make a large splash in elite European and North American academic worlds. But this works to the great disadvantage of precisely those worlds of thought and knowledge: if we are unlikely to be exposed to the writings of insightful scholars like Chaiyan Rajchagool, we are unlikely as well to understand the large historical changes our world has undergone over the past two centuries.

Amazon comes in for a lot of criticism these days; but one thing it has contributed in a very positive way is the easy availability of books like this one to readers who would otherwise never encounter them. How many other intellectuals with the insights of a Tilly or a Braudel are there in India, Côte d'Ivoire, Thailand, Bolivia, or Barbados with whom we will never interact in a serious way because of the status barriers that exist in the academic world?

*   *   *

(It is fascinating to me that one of the influences on Chaiyan at the University of Manchester was Teodor Shanin. Shanin is a scholar whom I came to admire greatly at roughly the same time, when I was engaged in research in peasant studies in connection with Understanding Peasant China: Case Studies in the Philosophy of Social Science.)

Saturday, July 13, 2019

How things seem and why


The idea that there is a stark separation between many of our ideas of the social world, on the one hand, and the realities of the social world in which we live, on the other, is an old one. We think "fairness and equality", but what we get is exploitation, domination, and opportunity-capture. And there is a reasonable suspicion that this gap is in some sense intentional: interested parties have deceived us. In some sense it was the lesson of Plato's allegory of the cave; it is the view that Marx expresses in his ideas of ideology and false consciousness; Gramsci's theory of hegemony expresses the view; Nietzsche seems to have this separation in mind in much of his writing; and the Frankfurt School made much of it as well. The antidote to these forms of illusion, according to many of these theorists, is critique: careful, penetrating analysis and criticism of the presuppositions and claims of the ideological theory. (Here are several efforts within Understanding Society to engage in this kind of work; link, link, link.)

Peter Baehr's recent book The Unmasking Style in Social Theory takes on this intellectual attitude of "unmasking" with a critical and generally skeptical eye. Baehr is an expert on the history of sociological theory who has written extensively on Hannah Arendt, Max Weber, and other fundamental contributors to contemporary social theory, and the book shows a deep knowledge of the history and intellectual traditions of social thought.

The book picks out one particular aspect of the sociological tradition, the "style" of unmasking that he finds to be common in that history (and in current practice). So what does Baehr mean by a style?
A style, in the sense used here, is a distinctive way of talking and writing. It is epitomized by characteristic words, images, metaphors, concepts and, especially, techniques. I refer to these collectively as elements or ingredients. (2)
The elements of the unmasking style that he identifies include rhetorical tools such as weaponization, reduction and positioning, inversion, deflation, hyperbole and excess, and exclusive claims of emancipation (chapter 1).

The idea of an intellectual style is innocuous enough -- we can recognize the styles of analytic philosophy, contemporary literary criticism, and right-wing political commentary when we read or hear them. But there is a hidden question here: is there more than style to these traditions of thought? Are there methods of inquiry and reasoning, traditions of assessment of belief, and habits of scholarly interaction that underlie these various traditions? In much of Baehr's book he ignores these questions when it comes to the content of Marxist analysis, feminist theory, or the sociology of race in America. The impression he gives is that it is all style and rhetoric, with no rigorous research and analysis to support the claims.

In fact the overarching impression given by the book is that Baehr believes that much "unmasking" is itself biased, unfair, and dogmatic. He writes:
Unmasking aspires to create this roused awareness. The kind of analysis it requires is never conveyed to the reader as an interpretation of events, hypothetical and contestable. Nor does it allow scientific refutation or principled disagreement. True as fiat, unmasking statements brook no contradiction. (3)
Such an approach to theory and politics is problematic for several reasons. Its authoritarianism is obvious. So is its exclusivity: I am right, you can shut up. Yet ongoing discord, unlike slapdash accusation, is a good thing. (131)
Part of Baehr's suspicion of the "style" of unmasking seems to derive from an allergy to the language of post-modernism in the humanities and some areas of social theory:
To be sure, unmask is a common term in social theory and political and cultural criticism. Find it consorting with illusion, disguise, fiction, hieroglyph, critique, mystification, fantasy, reversal, hegemony, myth, real interest, objective interest, semantic violence, symbolic violence, alienation, domination, revolution and emancipation. The denser this cluster, the more unmasking obtrudes from it. (5)
And he also associates the unmasking "style" with a culture of political correctness and a demand for compliance with a "progressive" agenda of political culture:
Rarely a day passes on Twitter without someone, somewhere, being upbraided for wickedness. When even a gesture or an intonation is potentially offensive to an aggrieved constituency on high alert, the opportunities for unmasking are endless. Some targets of censure are cowed. They apologize for an offense they were not conscious of committing. Publicly chastened, they resolve to be better behaved henceforth. (7)
A third salient difference between unmasking in popular culture and in academic social theory is that in the academy unmasking is considered progressive. Detecting concealed racism, white privilege, patriarchy, trans-gender phobia and colonial exploitation is the stock in trade of several disciplines, sub-disciplines and pseudo-disciplines across the humanities and social sciences. The common thread is the ubiquity of domination. (8)
Marxism lives on in sociology, in the humanities and social sciences, and in pockets of the wider culture. And wherever one finds Marxism, typically combined today to race and gender politics, and to postcolonial critique, one finds aspects of the unmasking template. (91)
These are currents of thought -- memes, theoretical frameworks, apperceptions of the true nature of contemporary society -- with which Baehr appears to have little patience.

But here are a few considerations in favor of unmasking in the world of politics, economics, and culture in which we now live.

First, Baehr's aversion to active efforts to reveal the pernicious assumptions and motives of specific voices in social media is misplaced. When those voices traffic in the language of hate, white supremacy, and denigration of Muslims, gays, and audacious women, and in memes that seem to derive directly from the fascist and neo-Nazi toolbox, is it not entirely appropriate to call them to task? Is it not important, even vital, to unmask the voices of hate that challenge the basis of a liberal and inclusive democracy (link)? Is it the unmaskers or the trolls conveying aggressive hate and division who most warrant our disapproval?

And likewise in the area of the thought-frameworks surrounding the facts of modern market society. In some sense the claim that class interest (corporate interest, business interest, elite interest) strives hard to create public understandings of the world that are at odds with the real power relations that govern us is too obviously true to debate. This is the purpose of much corporate public relations and advertising, self-serving think-tanking, and other concrete mechanisms of shifting the terms of public understanding in a direction more favorable to the interests of the powerful. (Here is an article in the New York Times describing research documenting sustained efforts by ExxonMobil to cast doubt in public opinion about the reality of global warming and climate change; link.) And there is no barrier to conducting careful, rigorous, and intellectually responsible "decoding" of these corporate efforts at composing a fantasy; this is precisely what Oreskes and Conway do with such force in Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming, in the case of corporate efforts to distort scientific reality concerning their products and their effects (link).

Baehr's statements about the unavoidable dogmatism of "unmasking" analysis and criticism are also surprisingly categorical. "The kind of analysis it requires is never conveyed to the reader as an interpretation of events, hypothetical and contestable." Really? Are there no honest scholars in the field of critical race theory, or in feminist epistemology and philosophy of science, or in the sociology of science and technology? What is this statement other than precisely the kind of wholesale rejection of the intellectual honesty of one's opponents that otherwise seems to animate Baehr's critique?

The Unmasking Style is a bit of a paradox, in my view. It denounces the "style" of unmasking, and yet it reads as its own kind of wholesale discrediting of an intellectual orientation for which Baehr plainly has no patience. This is the orientation that takes seriously the facts of power, privilege, wealth, and racial and gender domination that continue to constitute the skeleton of our world. It is fine, of course, to disagree with this fundamental diagnosis of the dynamics of power, domination, and exploitation in the current world. But Baehr's book has many of the features of tone and rhetoric that the author vigorously criticizes in others. It is perplexing to find that the book offers so little of what the author seems to be calling for -- an intellectually open effort to discern the legitimate foundations of one's opponent's positions. Readers of The Unmasking Style would be well advised to read as well one or two books by scholars like Frédéric Vandenberghe, including A Philosophical History of German Sociology, to gain a more sympathetic view of critical sociological theory and its efforts to discern the underlying power relations of the modern world (link).

In general, I find that there is much more intellectual substance to efforts to uncover the interest-bias of various depictions of the capitalist world than Baehr is willing to recognize. How do energy companies shape the debate over climate change? How did Cold War ideologies influence the development of the social sciences in the 1950s? How has pro-business, anti-regulation propaganda made the roll-back of protections of the health and safety of the public possible? What is the meaning of the current administration's persistent language about "dangerous immigrants" in terms of racial prejudice? These are questions that invoke some kind of "demystifying" analysis that would seem to fall in the category of what Baehr classifies as "unmasking"; and yet it is urgent that we undertake those inquiries.

A companion essay by Baehr, "The image of the veil in social theory", appears in Theory and Society this month (link), and takes a nuanced approach to the question of "mask" and "veil". The essay has little of the polemical excess that seems to permeate the book itself. Here is the abstract to the essay:
Social theory draws energy not just from the concepts it articulates but also from the images it invokes. This article explores the image of the veil in social theory. Unlike the mask, which suggests a binary account of human conduct (what is covered can be uncovered), the veil summons a wide range of human experiences. Of special importance is the veil’s association with religion. In radical social thought, some writers ironize this association by “unveiling” religion as fraudulent (a move indistinguishable from unmasking it.) Baron d’Holbach and Marx offer classic examples of this stratagem. But other writers, notably Du Bois and Fanon, take a more nuanced and more theoretically productive approach to both religion and the veil. Refusing to debunk religion, these authors treat the veil—symbol and material culture—as a resource to theorize about social conflict. Proceeding in three stages, I, first, contrast the meanings of mask and unmasking with more supple veil imagery; second, identify anti-religious unveiling that is tantamount to unmasking; and, third, examine social theories of the veil that clarify the stakes of social adversity and political struggle. Du Bois’s and Fanon’s contributions to veil imagery receive special attention.
The Unmasking Style is erudite and interesting, and plainly designed to provoke debate. I only wish that it gave more consideration to the very real need we have to confront the lies and misrepresentations that currently pervade our contemporary world.

Tuesday, July 9, 2019

ABM fundamentalism


I've just had the singular opportunity of participating in the habilitation examination of Gianluca Manzo at the Sorbonne, based on his excellent manuscript on the relevance of agent-based models for justifying causal claims in the social sciences. Manzo is currently a research fellow in sociology at CNRS in Paris (Centre National de la Recherche Scientifique), and is a prolific contributor to analytical sociology and computational social science. The habilitation essay is an excellent piece of work, and I trust it will be published as an influential monograph. Manzo has the distinction of being both an expert on the philosophical and theoretical debates underway about social causation and an active researcher in the field of ABM simulations. Pierre Demeulenaere served as a generous and sympathetic mentor. The committee consisted of Anouk Barberousse, Ivan Ermakoff, Andreas Flache, Olivier Godechot, and myself, and the comments and observations were of the highest quality and rigor. It was a highly stimulating session.

One element of our conversation was especially enlightening to me. I have written a number of times in Understanding Society and elsewhere about the utility of ABM models, and one line of thought I have developed is a critique of what I have labeled "ABM fundamentalism" -- the view that ABM models are the best possible technique for constructing social explanations for every possible subject in the social sciences (link). This view is expressed in Joshua Epstein's slogan, "If you didn't grow it, you didn't explain it." I maintain that ABM is a useful technique, but only one of many methods appropriate to the problem of constructing explanations of interesting sociological outcomes (link). So I advocate for theoretical and methodological pluralism when it comes to the ABM program.

I asked Gianluca whether he would agree that ABM fundamentalism is incorrect, and was surprised to find that he defends the universal applicability of ABM as a tool for implementing any sociological theory. According to him, it is a perfectly general modeling platform that can in principle be applied to any sociological problem. He also made it clear that he does not maintain that ABM methods are optimal for every problem of sociological explanation; his defense of their universal applicability does not imply that he privileges these techniques as best for every sociological problem. But as a formal matter, he holds that ABM technology possesses the resources necessary to represent any fully specified social theory within a simulation.

The subsequent conversation succeeded in clarifying the underlying source of disagreement for me. What I realized in the discussion that ensued is that I was conflating two things in my label of ABM fundamentalism: the simulation technology and the substantive doctrine of generative social science. Epstein is a generativist, in the sense that he believes that social outcomes need in principle to be generated from a representation of facts about the individuals who make up the relevant population (Generative Social Science: Studies in Agent-Based Computational Modeling). Epstein is also an advocate of ABM techniques because they represent a particularly direct way of implementing a generativist explanation. But what Gianluca showed me is that ABM is not formally committed to the generativist dogma, and that an ABM simulation can in principle incorporate factors at any social level. The insight that I gained, then, is that I should separate the substantive view of generativism from the formal mathematical tools of ABM simulation techniques.

I am still unclear how this would work -- that is, how an ABM simulation might be created that did an adequate job of representing features at a wide range of levels -- actors, organizations, states, structures, and ideologies. For example, how could an ABM simulation be designed that could capture a complex sociological analysis such as Tilly's treatment of the Vendée, with peasants, protests, and merchants, the church, winegrowers' associations, and the strategies of the state? Tilly's historical narrative seems inherently multi-stranded and irreducible to a simulation. Similar points could be made about Michael Mann's comparative historical account of fascisms or Theda Skocpol's analysis of social revolutions.
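At a toy scale, though, the formal point is easy to illustrate. The sketch below is my own minimal example, not anything drawn from Manzo's manuscript: agents imitate randomly chosen others (a micro-level mechanism), while a population-level variable -- a crude stand-in for a policy or ideology -- is updated from the aggregate state and feeds back into each agent's behavior. The simulation thus incorporates a factor that is not itself an individual-level fact, which is the formal possibility at issue; whether such devices could scale up to something like Tilly's narrative is another matter.

```python
import random

N_AGENTS, N_STEPS = 200, 50
random.seed(1)

# Micro level: each agent holds a binary practice (0 or 1)
agents = [random.randint(0, 1) for _ in range(N_AGENTS)]

# Macro level: population-wide "policy pressure" toward practice 1,
# updated from the aggregate state rather than from any single agent
policy_pressure = 0.1

for _ in range(N_STEPS):
    prevalence = sum(agents) / N_AGENTS
    policy_pressure = 0.9 * policy_pressure + 0.1 * prevalence  # macro feedback

    for i in range(N_AGENTS):
        neighbor = agents[random.randrange(N_AGENTS)]
        # Micro mechanism (imitation) modulated by the macro-level variable
        p_adopt = 0.8 * neighbor + 0.2 * policy_pressure
        agents[i] = 1 if random.random() < p_adopt else 0

print(f"Final prevalence of practice 1: {sum(agents) / N_AGENTS:.2f}")
```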

So there is still an open question for me in this topic. But I think I am persuaded that the fundamentalism to which I object is the substantive premise of generativism, not the formal computational methods of ABM simulations themselves. And if Gianluca is correct in saying that ABM is a universal simulation platform (as a Turing machine is a universal computational device), then the objection is misplaced.

So this habilitation examination in Paris had exactly the effect for me that we would hope for in an academic interaction -- it led me to look at an important issue in a somewhat different way. Thank you, Gianluca!