
Saturday, June 4, 2022

Dysfunctions of Soviet economic ministries


In my book A New Social Ontology of Government (2020) I tried to provide an analytical inventory of the sources of "dysfunctions" in large organizations and government agencies. Why do agencies like FEMA or the NRC so often do such a poor job in carrying out their missions? The book proposed that we can better understand the failures of agencies and corporations based on a "social ontology" of actors and networks of actors within large organizations. The book discusses principal-agent problems, failures of communication across an organization, inconsistent priorities and agendas in sub-agencies within an organization, corruption, and "capture" of the organization's decision-making process by powerful outsiders (industry groups, interest groups, advocacy organizations).

It is very interesting to see a similar analysis by Paul Gregory and Andrei Markevich of the sources of dysfunction and organizational failure in the classic Soviet economic agencies in the 1930s-1950s. Their article "Creating Soviet Industry: The House That Stalin Built" (link) provides a good indication of the limitations of "command" even within a totalitarian dictatorship, and many of the conclusions converge with ideas presented in A New Social Ontology. Stalin's economic agencies and central planning apparatus showed many of the failures identified in other large organizations in the democratic capitalist West.

First, a little background. In the 1930s and 1940s there was an idealized conception of economic organization current in socialist thought (both communist and non-communist) according to which a socialist economy could be rationally and scientifically organized, without the "chaos" of a disorganized capitalist economy. The socialist economy would be vertically organized, with a "chief executive" (boss of bosses) presiding over ministries representing major sectors of the economy and giving commands concerning basic economic factors. The chief executive would set the targets for final outputs of capital goods and consumer goods to be produced. Each ministry would be responsible for production, investment, and labor use for the industries and firms in its sector. The input needs of the overall economy and all sectors and enterprises would be represented in the form of vast input-output tables that capture the interdependency of industries throughout the economy. The professional staff of the chief executive would set final needs for each commodity -- refrigerators, tanks, miles of railway tracks, ... Each industry has "input" requirements for primary goods (steel, coal, labor, metals, machines, ...), and an equilibrium economy requires that the right quantity of final goods and production goods should be calculated and produced to satisfy the needs of each industry as well as final demand. Wassily Leontief proposed a computational solution for this problem in the form of a large multi-sector input-output table -- an NxN model for representing the input-output relationships among N industries. Suppose there are 100 basic industries, and each industry requires some quantity of the inputs provided by every other industry. We can now compute the quantity of iron ore, coal, electricity, and labor needed to produce the desired end products in one time period. 
The results of the I-O model permit the development of plans and quotas for each industry: how much product they need to produce, and how much raw material and other inputs they will need to consume to complete their quota. Now there is the apparently simple problem of organization and management: bosses, managers, and supervisors are recruited for each industry to implement the sub-plan for the various industries and enterprises, and to ensure that the production process is efficiently organized, waste is minimized, and quotas are reached. Production by each enterprise is managed by plans originating with the central economic ministry. Orders and quotas begin with the central ministry; master plans are broken out into sub-plans for each industry; and each industry is monitored to ensure that it succeeds in assembling its resources into the specified quota of output. And the I-O methodology eliminates waste: it is possible to plan for the amount of steel needed for all producers and the number of refrigerators needed for all consumers, so there is no surplus (or deficit) of steel or refrigerators.
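The planning arithmetic behind this scheme can be sketched in a few lines. The sketch below is purely illustrative: the three industries and their input coefficients are made up, not drawn from any Soviet data. Leontief's insight is that gross output x must cover both inter-industry use and final demand, x = Ax + d, which the planner solves as x = (I - A)^(-1) d.

```python
import numpy as np

# Hypothetical 3-industry economy: steel, coal, electricity.
# A[i, j] = units of industry i's output consumed to produce
# one unit of industry j's output (illustrative coefficients).
A = np.array([
    [0.1, 0.3, 0.2],   # steel used per unit of steel, coal, electricity
    [0.4, 0.1, 0.3],   # coal used per unit of each
    [0.2, 0.2, 0.1],   # electricity used per unit of each
])

# Final demand targets set by the planners (e.g., millions of units).
d = np.array([100.0, 50.0, 80.0])

# Gross output must satisfy x = A @ x + d, so solve (I - A) x = d.
x = np.linalg.solve(np.eye(3) - A, d)
print(x)  # gross output each industry must produce to meet final demand
```

Note that each industry's required gross output exceeds its final-demand target, since part of its production is consumed as inputs by the other industries; this is exactly the interdependency the I-O table is meant to capture.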

This is a vertical conception of economic organization based on a command theory of organization. It is dependent on determination of final output targets at the top and implementation at the bottom. And it is coordinated by the modeling permitted by Leontief tables or something similar. Resource constraints are incorporated into the system by inspection of the final output targets and the associated levels of raw material inputs: if the total plan including capital goods and consumer goods results in a need for ten times the amount of iron ore or coal available to the nation, then output targets must be reduced, new sources of iron ore and coal (mining) need to be developed, or international trade must make up the deficit. International trade presents a new problem, however: it requires that a surplus of goods be available (consumer goods, capital goods, or raw materials) that can generate currency reserves capable of funding purchases from other countries. This in turn requires readjustment of the overall system of plans.

This description is incomplete in several important aspects. First, this account focuses on quantity rather than quality by setting quotas in terms of total output rather than output at a given level of quality. This means that directors and managers have the option of producing more low-quality steel or bread rather than a smaller quantity of high-quality product. Much as a commercial bakery on Main Street in Fargo can reach market goals by adulterating the bread it produces, so the railway wagon enterprise in Chelyabinsk can substitute inferior inputs in order to achieve output quotas. (Here is a critical assessment of product quality in the late Soviet economy and the last-ditch efforts made by Mikhail Gorbachev to address the issue of quality control; link.) But the problem is systemic: managing to quota does not reward high standards of quality control, and there is no way for consumers to "punish" producers for low-quality products in the system described here because price and demand play no role in the process.

A second shortcoming of this concept of a planned economy is that it takes no account of technological change and process improvement; implicitly this conception of production and investment assumes a static production process. To be sure, the planning process described here can record technological change after the fact: a new technique shifts the quantities of inputs required to produce a unit of output, and the new coefficients can be entered into the I-O table for the industries affected. But the model itself has no mechanism for encouraging technological innovation.

However, there is a more fundamental problem with the vertical description provided here: it makes assumptions about the capacity to implement a command system across a vast network of organizations that are impossible to satisfy. It is simply not the case that Stalin could decree "10 million toasters needed in 1935"; that his ministry of "Small Electrical Appliances" could take this decree and convert it into sub-plans and commands for regional authorities; and that plant bosses could convert their directives into working orders, smoothly implemented by their 1,000 toaster assemblers. Instead, at each juncture we can expect to find conflicting interests, priorities, problems, and accommodations that diverge from the idealized sub-plan delivered by telegram from the Ministry of Small Electrical Appliances. We may find, then, that firms and sub-ministry offices fail to meet their quotas of toasters; or they lie about production figures; or they build one-slice toasters at lower cost; or they deliver the correct number of completely useless and non-functional toasters; or they deliver the toasters commanded, but at the cost of shifting production away from electric borscht cookers, leaving great numbers of Soviet consumers short of their favorite soup. And in fact, these sorts of opportunistic adjustments are exactly what Gregory and Markevich find in the Soviet archives. So let's turn now to the very interesting analysis these researchers provide of the organizational dysfunctions that can now be detected in those archives.

Here is the approach taken by Gregory and Markevich:

The textbook stereotype has focused on the powerful State Planning Commission (Gosplan) as the allocator of resources, but most actual planning and resource management was carried out by the commissariats and more specifically by their branch administrations (glavk). This study considers the internal workings of the commissariats, rather than their dealings with such organizations as Gosplan and the Commissariat of Finance. (789)

So, to start, Gregory and Markevich propose to disaggregate the Soviet organizational decision-making process, from the high-level planning agency to the commissariats and branch administrations -- the more proximate levels of economic organization. In other words, they implicitly adopt the perspective taken by current organizational theorists in western organizational studies: the idea that large organizations consist of networks of more or less loosely connected centers of decision-making (link, link, link, link).

In the three-tiered Soviet system, the industrial commissariats occupied the intermediate level between the "dictator" (assisted by functional agencies such as Gosplan or the Commissariat of Finance) at the top, and enterprises subordinated to the industrial commissariat (at the bottom). The "dictator" was an interlocking directorate of officials from the Politburo and the Council of People's Commissars (Sovnarkom). Notably, the most important industrial commissars, such as Ordzhonikidze and later Kaganovich, were also members of the interlocking directorate, allowing them to plead their cases both within the dictatorship and as part of the system's vertical hierarchy. (790)

The idealized view of the command economy emphasizes "vertical" relations of authority; whereas Gregory and Markevich pay much more attention to "horizontal" relations among managers, firms, and other economic actors. Horizontal agreements among managers within firms and across firms may act contrary to vertical commands; and because of the lack of accurate information, it may be impossible for higher-level bosses to punish those horizontal actors.

A perfectly informed dictator could impose vertical discipline, but the agent will always possess superior information (asymmetric information), and thus be left with the choice to obey or to engage in opportunistic behavior. Opportunism is promoted by the fact that the superior must hold the agent responsible, in this case, for production and delivery, and must mete out punishment for plan failure. The agent has an incentive to use its information advantage to obtain easy production and delivery plans and to provide inaccurate information in the case of plan failure. (792)

This feature of a command economy derives from "information asymmetry". Another situational feature involves the fact that "plans" in the Soviet economic system were rarely exact or specific, which meant that managers could evade their responsibilities (perhaps excepting the quotas imposed on their units). Further, central planning ministries and offices were generally very poorly staffed, and therefore had little capacity to genuinely oversee and manage the enterprises within their formal scope. Moreover, the strategy of using increasing levels of punishment and threat against managers who failed to reach quotas and targets had perverse consequences for the "vertical" command structure; punishment tactics had the effect of incentivizing local managers to make separate horizontal deals with other actors and to withhold the truth about production from their superiors (799). (It is worth recalling that China's Great Leap Forward famine largely resulted from the fact that collective farm directors and regional economic authorities withheld information from Beijing about the terrible economic consequences of agricultural collectivization.)

As described earlier, in the nested Soviet dictatorship, the superior issues vertical orders to subordinates, which the subordinate either obeys or disobeys. In extreme cases, the subordinate might disobey the order outright; or the subordinate might disobey the order by engaging in a horizontal transaction while concealing this fact from the superior. In addition, the subordinate could lobby to influence the superior's vertical orders, to shape them to be more suitable. The archives provide a wealth of information on all these dealings between superiors and subordinates. (801)

Gregory and Markevich's analysis often turns on pervasive principal-agent problems within and across agencies and firms: "A persistent principal/agent conflict characterized the relationship between dictator and commissariat that followed from the commissariat's requirement to 'fulfill the plan' and from the commissariat's information advantage" (813). But numerous other sources of "loose-connectedness" among agencies and firms appear in their analysis as well. And it is striking that there is a great deal in common across the organizational problems posed in running the Environmental Protection Agency (US), Gosplan (USSR), and the General Motors Corporation.

Were reforms possible in the Communist economic systems?

In historical context, it is interesting to speculate whether some of the ideas associated with "market socialism" could have been incorporated into the Soviet economy in a way that enhanced quality, resource allocation, and technological and process innovation. Could the system of state-owned enterprises be reconciled with a system of market-determined prices? Could a state-owned economy become less centralized and more guided by "consumer preferences" and market conditions? Reforms along these lines would address some of the sources of systemic weakness in the Soviet economy -- imbalanced investment decisions, poor quality of both consumer and capital goods, and slow technological and process innovation. But this kind of reform would have a fatal flaw from the point of view of the Soviet dictatorship: it would substantially reduce the power of the party and the dictator over the economy, over the use of labor, and over the questions of what is produced and in what quantities.

During 1989-1991 I had the special opportunity to have several lengthy conversations with Hungarian socialist economist Janos Kornai at Harvard's Center for International Affairs, at the time of the collapse of communism in Hungary and the pending collapse of the USSR. It was highly interesting to hear this astute observer's assessment of the economic failures of the command economy in the USSR and its satellites. From notes I took at the time, Kornai had in mind a package of reforms of socialism that might be referred to as "radical reform market socialism". (1) Price reforms should be undertaken in order to establish a system of market-clearing prices, reflecting relative scarcities and real opportunity costs. (2) Enterprises should be regulated by the principle of profit-maximization, and they should be subject to a hard budget constraint; unprofitable enterprises should be allowed to go bankrupt. (3) Barriers to competition should be eliminated in commodity markets, labor markets, and capital markets. (4) The skewed size distribution of enterprises in socialist economies should be redressed, with a larger proportion of middle- and small-scale enterprises. (5) International trade should be encouraged and exchange rates should be realistic. (6) The state should enact strong and credible legal protections of the new economic institutions: land-use arrangements should be formalized, private businesses should be protected, and the right to accumulate property should be assured. Kornai was also aware of the negative economic and political consequences that reforms like these could have for countries like Hungary, Poland, or the USSR. A hard budget constraint on enterprises would be likely to lead to waves of bankruptcies among inefficient enterprises, producing large numbers of unemployed workers. Price reforms would be likely to significantly alter the pattern of income distribution across sectors and regions, including a rebalancing of urban-rural incomes.
And substantial price reform might lead to high rates of inflation in the medium term, again leading to unpredictable political consequences. These are consequences that might be politically unacceptable for socialist states. I don't recall that Kornai was favorable towards even deeper structural reforms of the socialist economies, including a transition to worker-owned cooperatives in place of state-owned enterprises.


Saturday, January 1, 2022

Strange defeat


One of the consequential puzzles of the Second World War was the sudden, catastrophic collapse of the French army following the German invasion in 1940. This is the subject of Marc Bloch's Strange Defeat, written in 1940, and it is an event of major historical importance and mystery. The mystery is this: France was a powerful military force; it had declared war against Germany following the Nazi invasion of Poland; it had ample warning that Germany would soon wage war against it as well; and it had invested heavily in defensive materiel against an anticipated German attack. And yet when the attack came in May 1940, France was surprised, French armies were quickly defeated, and France capitulated after only six weeks of fighting.

Most people who have written on Bloch's account have focused on the high-level hypotheses to be found in the book: incompetence in the French high command, political dysfunction within the French elite, and a predilection for "Hitler rather than Blum" among the elites. However, upon rereading, it is evident that Bloch has other ideas about the failure of the French military besides these large conflicts within French politics and society. As a staff officer with responsibilities for the organization of logistics, Bloch had ample opportunity to observe the behavior and decision-making of line officers and staff officers. And he focuses a great deal of attention on issues having to do with the mindset and expectations of French military men: what they understand about the battle situation, how they anticipate future needs, and how they communicate with other important actors.

The 'thinking oneself into the other fellow's shoes' is always a very difficult form of mental gymnastics, and it is not confined to men who occupy a special position in the military hierarchy. But it would be foolish to deny that staff officers as a whole have been a good deal to blame in this matter of sympathetic understanding. Their failure, when they did fail, was, however, due--I feel pretty sure--not so much to contempt as to lack of imagination and a tendency to take refuge from the urgency of fact in abstractions. (34)

So cognitive and mental framework shortcomings rise to the very top in Bloch's analysis of French army failures in the conduct of the war:

What drove our armies to disaster was the cumulative effect of a great number of different mistakes. One glaring characteristic is, however, common to all of them. Our leaders, or those who acted for them, were incapable of thinking in terms of a new war. In other words, the German triumph was, essentially, a triumph of intellect -- and it is that which makes it so peculiarly serious. (36)

These limitations of imagination and worldview were worsened by what Bloch identifies as a crippling organizational deficiency in the French army -- the strict separation between line officers and staff officers. This separation produced a wide gulf between the two groups in their worldviews, expectations, and ways of thinking about military matters. Neither group knew what the other was thinking or presupposing about the complex conditions of war in which they operated.

One simple and obvious remedy for this state of affairs would have been to establish a system which would have made it possible for small groups of officers to serve, turn and turn about, in the front line and at H.Q. But senior generals dislike having the personnel of their staffs changed too often. It should be remembered that in 1915 and 1916 their opposition to any reform along these lines led to an almost complete divorce between the outlook of the regimental and the staff officer. (35)

Associated with these cognitive framework failures was the French military's failure to adjust to the new "tempo of war" created by German tactics. Bloch recognized through his own experience that the German strategy relied on a tempo of action that outpaced the ability of the French high command and army to react effectively. "From the beginning to the end of the war, the metronome at headquarters was always set at too slow a beat" (43).

This "tempo" problem was not restricted only to the high command:

But it would not be fair to confine these criticisms to the High Command. Generally speaking, the combatant troops were no more successful than the staff in adjusting their movements or their tactical appreciations to the speed at which the Germans moved. (47)

This too can be unpacked into an organizational point: the line officers throughout the chain of command had too little training and readiness for initiative and adaptation; and when plans went wrong, chaos ensued. "They [the Germans] relied on action and on improvisation. We, on the other hand, believed in doing nothing and in behaving as we always had behaved" (49). Bloch is explicit in recognizing that initiative and improvisation could have substantially improved the French position:

That is why the Germans, true to their doctrine of speed, tended more and more to move their shock elements along the main arteries. It was, therefore, absolutely unnecessary to cover our front with a line extending for hundreds of kilometres, almost impossible to man, and terribly easy to pierce. On the other hand, the invader might have been badly mauled by a few islands of resistance well sited along the main roads, adequately camouflaged, sufficiently mobile, and armed with a few machine-guns and anti-tank artillery, or even with the humble 75! (50-51)

And the battlefield consequences of the French military's organizational discouragement of initiative and adaptation were severe:

But I am very much afraid that where this sort of self-government and mutual understanding did not exist, contacts between units and their senior formations, or, on the same level, between one unit and another, left a good deal to be desired. I have more than once heard regimental officers complain that they were left too long without orders, and it is very certain--as I have already shown by citing notorious examples--that the staff was imperfectly informed about what was happening on their section of the front. (66)

Bloch's account has many other strands of organizational observation concerning features of the French army that led to poor performance -- about "discipline of troops", about the intelligence organization, and about the poor liaison relationships that had been developed between French and British army staff.

There is an important lesson to draw here: Bloch's account is usually read as an indictment of French politics and society in the 1930s, but not as a detailed military and organizational study of failure. And yet, it is clear that Bloch has provided a great deal of content that contributes to exactly this kind of micro-level analysis of military dysfunction. Bloch, it turns out, was an astute organizational observer.

There is an interesting parallel between the collapse in 1940 and the comparably dramatic collapse of French armies in 1870 in the Franco-Prussian war (link). In both wars there was an obstinate rigidity in the French general staff that impeded adaptiveness to the changing and unexpected circumstances of the war that quickly engulfed them. Michael Howard's excellent history, The Franco-Prussian War provides extensive details about the sources of military failure in 1870.

It is worth observing that the defeat of France was not the only "strange defeat" that occurred between 1939 and 1941. Poland's very weak defense against Hitler's invasion in 1939, Stalin's faltering invasion of Finland in 1939, and the stunning successes of Hitler's Barbarossa invasion of the Soviet Union in 1941 all represent military catastrophes that were on their face unlikely. In the Barbarossa case, much of the explanation falls on Stalin directly: his murderous purges of the Red Army officer corps in 1937, his mulish refusal to accept intelligence about a likely German invasion in summer 1941, his disastrous interference in strategy and the placement of armies, and his unconditional orders that made maneuver impossible all combined to produce catastrophe in Ukraine, the Baltic states, and Russia itself in the first six months of the invasion. This suggests that explaining successful large-scale military undertakings may be harder than explaining failure and defeat. There are many ways to fail in a large, complex, and highly coordinated activity like an invasion, and only a few ways to succeed.

Thursday, August 1, 2019

Pervasive organizational and regulatory failures


It is intriguing to observe how pervasive organizational and regulatory failures are in our collective lives. Once you are sensitized to these factors, you see them everywhere. A good example is in the business section of today's print version of the New York Times, August 1, 2019. There are at least five stories in this section that reflect the consequences of organizational and regulatory failure.

The first and most obvious story is one that has received frequent mention in Understanding Society, the Boeing 737 Max disaster. In a story titled "FAA oversight of Boeing scrutinized", the reporters describe a Senate hearing on FAA oversight earlier this week. Members of the Senate Appropriations Committee questioned the process of certification of new aircraft currently in use by the FAA.
Citing the Times story, Ms. Collins raised concerns over “instances in which FAA managers appeared to be more concerned with Boeing’s production timeline, rather than the safety recommendations of its own engineers.”
Senator Jack Reed referred to the need for a culture change to rebalance the relationship between regulator and industry. Agency officials continued to defend the certification process, which delegates 96% of the work of certification to the manufacturer.

This story highlights two common sources of organizational and regulatory failure. There is first the fact of "production pressure" coming from the owner of a risky process, involving timing, supply of product, and profitability. This pressure leads the owner to push the organization hard in an effort to achieve goals -- often leading to safety and design failures. The second factor identified here is the structural imbalance that exists between powerful companies running complex and costly processes, and the safety agencies tasked to oversee and regulate their behavior. The regulatory agency, in this case the FAA, is under-resourced and lacks the expert staff needed to carry out serious, in-depth technical oversight. The article does not identify a third factor that has been noted in prior posts on the Boeing disaster: the influence that Boeing has on legislators, government officials, and the executive branch.

A second relevant story (on the same page as the Boeing story) concerns charges filed in Germany against the former CEO of Audi for his role in the vehicle emissions scandal. This is part of the long-standing deliberate effort by Volkswagen to deceive regulators about the emissions characteristics of their diesel engine and exhaust systems. The charges against the Audi executive involved ordering the development of software designed to cheat diesel emissions testing for their vehicles. This ongoing story is primarily a story about corporate dysfunction, in which corporate leaders were involved in unethical and dishonest activities on behalf of the company. Regulatory failure is not a prominent part of this story, because the efforts at deception were so carefully calculated that it is difficult to see how normal standards of regulatory testing could have defeated them. Here the pressing problem is to understand how professional, experienced executives could have been led to undertake such actions, and how the corporation was vulnerable to this kind of improper behavior at multiple levels. Presumably there were staff at multiple levels within these automobile companies who were aware of improper behavior. The story quotes a mid-level staff person who writes in an email that "we won't make it without a few dirty tricks." So the difficult question for these corporations is how their internal systems were inadequate to take note of dangerously improper behavior. The costs to Volkswagen and Audi in liability judgments and government penalties are truly vast, and surely outweigh the possible gains of the deception. These costs in the United States alone exceed $22 billion.

A similar story, this time from the tech industry, concerns a settlement of civil claims against Cisco Systems to settle claims “that it sold video surveillance technology that it knew had a significant security flaw to federal, state and local government agencies.” Here again we find a case of corporate dishonesty concerning some of its central products, leading to a public finding of malfeasance. The hard question is, what systems are in place for companies like Cisco that ensure ethical and honest presentation of the characteristics and potential defects of the products that they sell? The imperatives of working always to maximize profits and reduce costs lead to many kinds of dysfunctions within organizations, but this is a well understood hazard. So profit-based companies need to have active and effective programs in place that encourage and enforce honest and safe practices by managers, executives, and frontline workers. Plainly those programs broke down at Cisco, Volkswagen, and Audi. (One of the very useful features of Tom Beauchamp's book Case Studies in Business, Society, and Ethics is the light Beauchamp sheds through case studies on the genesis of unethical and dishonest behavior within a corporate setting.)

Now we go on to Christopher Flavelle's story about home-building in flood zones. From a social point of view, it makes no sense to continue to build homes, hotels, and resorts in flood zones. The increasing destruction of violent storms and extreme weather events has been evident at least since the devastation of Hurricane Katrina. Flavelle writes:
There is overwhelming scientific consensus that rising temperatures will increase the frequency and severity of coastal flooding caused by hurricanes, storm surges, heavy rain and tidal floods. At the same time there is the long-term threat of rising seas pushing the high-tide line inexorably inland.
However, Flavelle reports research by Climate Central that shows that the rate of home-building in flood zones since 2010 exceeds the rate of home-building in non-flood zones in eight states. So what are the institutional and behavioral factors that produce this amazingly perverse outcome? The article refers to incentives of local municipalities in generating property-tax revenues and of potential homeowners subject to urban sprawl and desires for second-home properties on the water. Here is a tragically short-sighted development official in Galveston who finds that "the city has been able to deal with the encroaching water, through the installation of pumps and other infrastructure upgrades": "You can build around it, at least for the circumstances today. It's really not affected the vitality of things here on the island at all." The factor that is not emphasized in this article is the role played by the National Flood Insurance Program in the problem of coastal (and riverine) development. If flood insurance rates were calculated in terms of the true riskiness of the proposed residence, hotel, or resort, then it would no longer be economically attractive to do the development. But, as the article makes clear, local officials do not like that answer because it interferes with "development" and property tax growth. ProPublica has an excellent 2013 story on the perverse incentives created by the National Flood Insurance Program, and its inequitable impact on wealthier home-owners and developers (link). Here is an article by Christine Klein and Sandra Zellmer in the SMU Law Review on the dysfunctions of Federal flood policy (link):
Taken together, the stories reveal important lessons, including the inadequacy of engineered flood control structures such as levees and dams, the perverse incentives created by the national flood insurance program, and the need to reform federal leadership over flood hazard control, particularly as delegated to the Army Corps of Engineers.
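The actuarial point about risk-based pricing can be made concrete with a small sketch. All figures below are hypothetical assumptions chosen for illustration, not actual NFIP rates or real actuarial data; the point is only that a premium tied to expected annual loss makes high-risk construction visibly expensive, while a flat subsidized premium hides that cost:

```python
# Illustrative sketch of risk-based vs. subsidized flood insurance pricing.
# All numbers are hypothetical assumptions, not NFIP data.

def fair_premium(annual_flood_prob: float, expected_damage: float,
                 load_factor: float = 1.2) -> float:
    """Risk-based premium: expected annual loss plus an administrative load."""
    return annual_flood_prob * expected_damage * load_factor

home_value = 400_000      # hypothetical coastal home
flood_prob = 0.04         # assumed 4% annual chance of flooding ("25-year" zone)
damage_share = 0.30       # assumed fraction of home value lost per flood event

risk_based = fair_premium(flood_prob, home_value * damage_share)
subsidized = 1_200        # hypothetical flat subsidized premium

print(f"risk-based premium:      ${risk_based:,.0f}/yr")
print(f"subsidized premium:      ${subsidized:,.0f}/yr")
print(f"implicit annual subsidy: ${risk_based - subsidized:,.0f}")
```

Under these made-up assumptions the risk-based premium is several thousand dollars a year; the gap between it and the flat premium is the implicit subsidy that keeps flood-zone development economically attractive.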
Here is a final story from the business section of the New York Times illustrating organizational and regulatory dysfunctions -- this time from the interface between the health industry and big tech. The story concerns an effort by DeepMind researchers to use artificial intelligence techniques to provide early diagnosis of otherwise mysterious medical conditions like "acute kidney injury" (AKI). The approach proceeds by analyzing large numbers of patient medical records and attempting to identify precursor conditions that would predict the occurrence of AKI. The primary analytical tool mentioned in the article is the family of algorithms associated with neural networks. In this instance the organizational / regulatory dysfunction is latent rather than explicit and has to do with patient privacy. DeepMind is a business unit within Alphabet, Google's parent company. DeepMind researchers gained access to large volumes of patient data from the UK National Health Service. There is now regulatory concern in the UK and the US about the privacy of patients whose data may wind up in the DeepMind analysis and ultimately in Google's direct control. "Some critics question whether corporate labs like DeepMind are the right organization to handle the development of technology with such broad implications for the public." The issue here is a complicated one. It is of course a good thing to be able to diagnose disorders like AKI in time to correct them. But the misuse and careless custody of user data by numerous big tech companies, including especially Facebook, suggest that sensitive personal data like medical files need to be carefully secured by effective legislation and regulation. And so far the regulatory system appears inadequate to the protection of individual privacy in a world of massive databases and large-scale computing capabilities.
The recent FTC $5 billion settlement imposed on Facebook, large as it is, may not suffice to change the business practices of Facebook (link).

(I didn't find anything in the sports section today that illustrates organizational and regulatory dysfunction, but of course these kinds of failures occur in professional and college sports as well. Think of doping scandals in baseball, cycling, and track and field, sexual abuse scandals in gymnastics and swimming, and efforts by top college football programs to evade NCAA regulations on practice time and academic performance.)

Wednesday, April 3, 2019

Organizations and dysfunction


A recurring theme in recent months in Understanding Society is organizational dysfunction and the organizational causes of technology failure. Helmut Anheier's volume When Things Go Wrong: Organizational Failures and Breakdowns is highly relevant to this topic, and it makes for very interesting reading. The volume includes contributions by a number of leading scholars in the sociology of organizations.

And yet the volume seems to miss the mark in some important ways. For one thing, it is unduly focused on the question of "mortality" of firms and other organizations. Bankruptcy and organizational death are frequent synonyms for "failure" here. This frame is evident in the summary the introduction offers of existing approaches in the field: organizational aspects, political aspects, cognitive aspects, and structural aspects. All bring us back to the causes of extinction and bankruptcy in a business organization. Further, the approach highlights the importance of internal conflict within an organization as a source of eventual failure. But it gives little insight into the internal workings of the organization itself -- the ways in which behavior and structure combine to systematically produce certain kinds of outcomes that we can identify as dysfunctional.

Significantly, however, dysfunction does not routinely lead to the death of a firm. (Seibel's contribution to the volume raises this possibility, which he refers to as "successful failures".) This is a familiar observation from political science: what looks dysfunctional from the outside may be perfectly well tuned to a different set of interests (for example, in Robert Bates's account of pricing boards in Africa in Markets and States in Tropical Africa: The Political Basis of Agricultural Policies). In their introduction to this volume Anheier and Moulton refer to this possibility as a direction for future research: "successful for whom, a failure for whom?" (14).

The volume tends to look at success and failure in terms of profitability and the satisfaction of stakeholders. But we can define dysfunction in a more granular way by linking characteristics of performance to the perceived "purposes and goals" of the organization. A regulatory agency exists in order to effectively protect the health and safety of the public. In this kind of case, failure is any outcome in which the agency flagrantly and avoidably fails to prevent a serious harm -- a release of radioactive material, contamination of food, a building fire resulting from defects that should have been detected by inspection. If the agency fails to do this as well as it reasonably could, it is dysfunctional.

Why do dysfunctions persist in organizations? It is possible to identify several possible causes. The first is that a dysfunction from one point of view may well be a desirable feature from another point of view. The lack of an authoritative safety officer in a chemical plant may be thought to be dysfunctional if we are thinking about the safety of workers and the public as a primary goal of the plant (link). But if profitability and cost-savings are the primary goals from the point of view of the stakeholders, then the cost-benefit analysis may favor the lack of the safety officer.

Second, there may be internal failures within an organization that are beyond the reach of any executive or manager who might want to correct them. The complexity and loose-coupling of large organizations militate against house cleaning on a large scale.

Third, there may be powerful factions within an organization for whom the "dysfunctional" feature is an important component of their own set of purposes and goals. Fligstein and McAdam argue for this kind of disaggregation with their theory of strategic action fields (link). By disaggregating purposes and goals to the various actors who figure in the life cycle of the organization – founders, stakeholders, executives, managers, experts, frontline workers, labor organizers – it is possible to see the organization as a whole as simply the aggregation of the multiple actions and purposes of the actors within and adjacent to the organization. This aggregation does not imply that the organization is carefully adjusted to serve the public good or to maximize efficiency or to protect the health and safety of the public. Rather, it suggests that the resultant organizational structure serves the interests of the various actors to the fullest extent each actor is able to manage.

Consider the account offered by Thomas Misa of the decline of the steel industry in the United States in the first part of the twentieth century in A Nation of Steel: The Making of Modern America, 1865-1925. Misa's account points to a massive dysfunction in the steel corporations of the inter-war period: a deliberate and sustained failure to invest in research on new steel technologies in metallurgy and production. Misa argues that the great steel corporations -- US Steel in particular -- failed to remain competitive in the early years of the twentieth century because management persistently pursued short-term profits and financial advantage through domination of the market, relying on that market power rather than research and development as its source of revenue and profits.
In short, U.S. Steel was big but not illegal. Its price leadership resulted from its complete dominance in the core markets for steel.... Indeed, many steelmakers had grown comfortable with U.S. Steel's overriding policy of price and technical stability, which permitted them to create or develop markets where the combine chose not to compete, and they testified to the court in favor of the combine. The real price of stability ... was the stifling of technological innovation. (255)
The result was that the modernized steel industries of Europe leap-frogged the previous US advantage, eventually leaving American producers with an unviable production technology.
At the periphery of the newest and most promising alloy steels, dismissive of continuous-sheet rolling, actively hostile to new structural shapes, a price leader but not a technical leader: this was U.S. Steel. What was the company doing with technological innovation? (257)
Misa is interested in arriving at a better way of understanding the imperatives leading to technical change -- better than the accounts offered by neoclassical economics and labor history. His solution highlights the changing relationships that developed between industrial consumers and producers in the steel industry.
We now possess a series of powerful insights into the dynamics of technology and social change. Together, these insights offer the realistic promise of being better able, if we choose, to modulate the complex process of technical change. We can now locate the range of sites for technical decision making, including private companies, trade organizations, engineering societies, and government agencies. We can suggest a typology of user-producer interactions, including centralized, multicentered, decentralized, and direct-consumer interactions, that will enable certain kinds of actions while constraining others. We can even suggest a range of activities that are likely to effect technical change, including standards setting, building and zoning codes, and government procurement. Furthermore, we can also suggest a range of strategies by which citizens supposedly on the "outside" may be able to influence decisions supposedly made on the "inside" about technical change, including credibility pressure, forced technology choice, and regulatory issues. (277-278)
In fact Misa places the dynamic of relationship between producer and large consumer at the center of the imperatives towards technological innovation:
In retrospect, what was wrong with U.S. Steel was not its size or even its market power but its policy of isolating itself from the new demands from users that might have spurred technical change. The resulting technological torpidity that doomed the industry was not primarily a matter of industrial concentration, outrageous behavior on the part of white- and blue-collar employees, or even dysfunctional relations among management, labor, and government. What went wrong was the industry's relations with its consumers. (278)
This "callous treatment of consumers" became profoundly harmful when international competition gave large industrial users of steel a choice. When US Steel had market dominance, large industrial users had little choice; but this situation changed after WWII. "This favorable balance of trade eroded during the 1950s as German and Japanese steelmakers rebuilt their bombed-out plants with a new production technology, the basic oxygen furnace (BOF), which American steelmakers had dismissed as unproven and unworkable" (279). Misa quotes the president of a small steel producer: "The Big Steel companies tend to resist new technologies as long as they can ... They only accept a new technology when they need it to survive" (280).

***

Here is an interesting table from Misa's book that sheds light on some of the economic and political history of the United States since the post-war period, leading right up to the populist politics of 2016 in the Midwest. The table provides mute testimony to the decline of the rustbelt industrial cities. Michigan, Illinois, Ohio, Pennsylvania, and western New York account for 83% of the steel production in this table. When American producers lost the competitive battle for steel production in the 1980s, the Rustbelt suffered disproportionately, and eventually blue-collar workers lost their places in the affluent economy.

Monday, April 1, 2019

Ethical disasters


Many examples of technical disasters have been provided in Understanding Society, along with efforts to understand the systemic dysfunctions that contributed to their occurrence. Frequently those dysfunctions fall within the business organizations that manage large, complex technology systems, and often enough those dysfunctions derive from the imperatives of profit-maximization and cost avoidance. Andrew Hopkins' account of the business decisions contributing to the explosion of the ESSO gas plant in Longford, Australia illustrates this dynamic in Lessons from Longford: The ESSO Gas Plant Explosion. The withdrawal of engineering experts from the plant to a remote corporate headquarters was a cost-saving move that, according to Hopkins, contributed to the eventual disaster.

A topic we have not addressed in detail is the occurrence of ethical disasters -- terrible outcomes that are the result of deliberate choices by decision-makers within an organization that are, upon inspection, clearly and profoundly unethical and immoral. The collapse of Enron is probably one such disaster; the Bernie Madoff scandal is another. But it seems increasingly likely that Purdue Pharma and the Sackler family's business leadership of the corporation represent another major example. Recent reporting by ProPublica, the Atlantic, and the New York Times relies on documents collected in the course of litigation against Purdue Pharma and members of the Sackler family in Massachusetts and New York. (Here are the unredacted court documents on which much of this reporting depends; link.) These documents make it hard to avoid the ethical conclusion that the Sackler family actively participated in business strategies for their company Purdue Pharma that treated the OxyContin addiction epidemic as an expanding business opportunity. And this seems to be a huge ethical breach.

These issues are currently before the courts, and it rests with the legal system to resolve the facts and the questions of legal culpability. But as citizens we all have the ability to read the documents and make our own judgments about the ethical status of the decisions and strategies made by the family and the corporation over the course of this disaster. The point here is simply to ask these key questions: how should we think about the ethical status of decisions and strategies of owners and managers that lead to terrible harms, and harms that could reasonably have been anticipated? How should a company or a set of owners respond to a catastrophe in which several hundred thousand people have died, and which was facilitated in part by deliberate marketing efforts by the company and the owners? How should the company have adjusted its business when it became apparent that its product was creating addiction and widespread death?

First, here are a few details from the current reporting about the case. Here are a few paragraphs from the ProPublica story (January 30, 2019):
Not content with billions of dollars in profits from the potent painkiller OxyContin, its maker explored expanding into an “attractive market” fueled by the drug’s popularity — treatment of opioid addiction, according to previously secret passages in a court document filed by the state of Massachusetts. 
In internal correspondence beginning in 2014, Purdue Pharma executives discussed how the sale of opioids and the treatment of opioid addiction are “naturally linked” and that the company should expand across “the pain and addiction spectrum,” according to redacted sections of the lawsuit by the Massachusetts attorney general. A member of the billionaire Sackler family, which founded and controls the privately held company, joined in those discussions and urged staff in an email to give “immediate attention” to this business opportunity, the complaint alleges. (ProPublica 1/30/2019; link)
The NYT story reproduces a diagram included in the New York court filings that illustrates the company's business strategy of "Project Tango" -- the idea that the company could make money both from sales of its pain medication and from sales of treatments for the addiction it caused.


Further, according to the reporting provided by the NYT and ProPublica, members of the Sackler family used their positions on the Purdue Pharma board to press for more aggressive business exploitation of the opportunities described here:
In 2009, two years after the federal guilty plea, Mortimer D.A. Sackler, a board member, demanded to know why the company wasn't selling more opioids, email traffic cited by Massachusetts prosecutors showed. In 2011, as states looked for ways to curb opioid prescriptions, family members peppered the sales staff with questions about how to expand the market for the drugs.... The family's statement said they were just acting as responsible board members, raising questions about "business issues that were highly relevant to doctors and patients." (NYT 4/1/2019; link)
From the 1/30/2019 ProPublica story, and based on more court documents:
Citing extensive emails and internal company documents, the redacted sections allege that Purdue and the Sackler family went to extreme lengths to boost OxyContin sales and burnish the drug’s reputation in the face of increased regulation and growing public awareness of its addictive nature. Concerns about doctors improperly prescribing the drug, and patients becoming addicted, were swept aside in an aggressive effort to drive OxyContin sales ever higher, the complaint alleges. (link)
And ProPublica underlines the fact that prosecutors believe that family members have personal responsibility for the management of the corporation:
The redacted paragraphs leave little doubt about the dominant role of the Sackler family in Purdue’s management. The five Purdue directors who are not Sacklers always voted with the family, according to the complaint. The family-controlled board approves everything from the number of sales staff to be hired to details of their bonus incentives, which have been tied to sales volume, the complaint says. In May 2017, when longtime employee Craig Landau was seeking to become Purdue’s chief executive, he wrote that the board acted as “de-facto CEO.” He was named CEO a few weeks later. (link)
The courts will resolve the question of legal culpability. The question here is one of the ethical standards that should govern the actions and strategies of owners and managers. Here are several simple ethical observations that seem relevant to this case.

First, it is obvious that pain medication is a good thing when used appropriately under the supervision of expert and well-informed physicians. Pain management enhances quality of life for people experiencing pain.

Second, addiction is plainly a bad thing, and it is worse when it leads to predictable death or disability for its victims. A company has a duty of concern for the quality of life of human beings affected by its product, and this extends to a duty to take all possible precautions to minimize the likelihood that human beings will be harmed by the product.

Third, given the risks of addiction that were known about this product, the company has a moral obligation to treat its relations with physicians and other health providers as occasions for accurate and truthful education about the product, not opportunities for persuasion, inducement, and marketing. Rather than a sales force of representatives whose incomes are determined by the quantity of the product they sell, the company has a moral obligation to train and incentivize its representatives to function as honest educators providing full information about the risks as well as the benefits of the product. And, of course, it has an obligation not to immerse itself in the dynamics of "conflict of interest" discussed elsewhere (link) -- which means there should be no incentives provided to the physicians who agree to prescribe the product.

Fourth, it might be argued that the profits generated by the business of a given pharmaceutical product should be used proportionally to ameliorate the unavoidable harms it creates. Rather than making billions in profits from the sale of the product, and then additional hundreds of millions on products that offset the addictions and illness created by dissemination of the product (this was the plan advanced as "Project Tango"), the company and its owners should hold themselves accountable for the harms created by their product. (That is, the social and human costs of addiction should not be treated as "externalities" or even additional sources of profit for the company.)

Finally, there is an important question at a more individual scale. How should we think about super-rich owners of a company who seem to lose sight entirely of the human tragedies created by their company's product and simply demand more profits, more timely distribution of the profits, and more control of the management decisions of the company? These are individual human beings, and surely they have a responsibility to think rigorously about their own moral responsibilities. The documents released in these court proceedings seem to display an amazing blindness to moral responsibility on the part of some of these owners.

(There are other important cases illustrating the clash between moral responsibility, corporate profits, and corporate decision-making, having to do with the likelihood of collaboration between American companies, their German and Polish subsidiaries, and the Nazi regime during World War II. Edwin Black argues in IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America's Most Powerful Corporation that the US-based computer company provided important support for Germany's extermination strategy. Here is a 2002 piece from the Guardian on the updated edition of Black's book, providing more documentary evidence for this claim; link. And here is a piece from the Washington Post on American car companies in Nazi Germany; link.)

(Stephen Arbogast's Resisting Corporate Corruption: Cases in Practical Ethics From Enron Through The Financial Crisis is an interesting source on corporate ethics.)

Thursday, March 28, 2019

Social ontology of government



I am currently writing a book on the topic of the "social ontology of government". My goal is to provide a short treatment of the social mechanisms and entities that constitute the workings of government. The book will ask some important basic questions: what kind of thing is "government"? (I suggest it is an agglomeration of organizations, social networks, and rules and practices, with no overriding unity.) What does government do? (I simplify and suggest that governments create the conditions of social order and formulate policies and rules aimed at bringing about various social priorities that have been selected through the governmental process.) How does government work -- what do we know about the social and institutional processes that constitute its metabolism? (How do government entities make decisions, gather needed information, and enforce the policies they construct?)

In my treatment of the topic of the workings of government I treat the idea of "dysfunction" with the same seriousness as I do topics concerning the effective and functional aspects of governmental action. Examples of dysfunctions include principal-agent problems, conflict of interest, loose coupling of agencies, corruption, bribery, and the corrosive influence of powerful outsiders. It is interesting to me that this topic -- the ontology of government -- has unexpectedly crossed over with another of my interests, the organizational causes of large-scale accidents.

If there are guiding perspectives in my treatment, they are eclectic: Neil Fligstein and Doug McAdam, Manuel DeLanda, Nicos Poulantzas, Charles Perrow, Nancy Leveson, and Lyndon B. Johnson, for example.

In light of these interests, I find the front page of the New York Times on March 28, 2019 to be a truly fascinating amalgam of the social ontology of government, with a heavy dose of dysfunction. Every story on the front page highlights one feature or another of the workings and failures of government. Let's briefly name these features. (The item numbers flow roughly from upper right to lower left.)

Item 1 is the latest installment of the Boeing 737 MAX story. Failures of regulation and a growing regime of "collaborative regulation" in which the FAA delegates much of the work of certification of aircraft safety to the manufacturer appear at this early stage to be a part of the explanation of this systems failure. This was the topic of a recent post (link).

Items 2 and 3 feature the processes and consequences of failed government -- the social crisis in Venezuela created in part by the breakdown of legitimate government, and the fundamental and continuing inability of the British government and its prime minister to arrive at a rational and acceptable policy on an issue of the greatest importance for the country. Given that decision-making and effective administration of law are fundamental functions of government, these two examples are key contributions to the ontology of government. The Brexit story also highlights the dysfunctions that flow from the shameful self-dealing of politicians and leaders who privilege their own political interests over the public good. Boris Johnson, this one's for you!

Item 4 turns us to the dynamics of presidential political competition. This item falls on the favorable side of the ledger, illustrating the important role a strong independent press plays in helping to inform the public about the past performance and behavior of candidates for high office. It is an important example of in-depth journalism, providing the public with accurate, nuanced information about an appealing candidate whose policy history as mayor many may find unpalatable. The story also highlights the role that non-governmental organizations play in politics and government action, in this instance the ACLU.

Item 5 brings us inside the White House and gives the reader a look at the dynamics and mechanisms through which a small circle of presidential advisors are able to steer policy toward an approach they favor. It displays the vulnerability of the office of the president to privileged insiders' advice concerning policies they personally favor. Whether it is Mick Mulvaney, acting chief of staff to the current president, or Robert McNamara advising JFK and LBJ toward escalation in Vietnam, the process permits ideologically committed insiders to wield extraordinary policy power.

Item 6 turns to the legislative process, this time in the New Jersey legislature, on the topic of the legalization of marijuana. This story too falls on the positive side of the "function-dysfunction" spectrum, in that it describes a fairly rational and publicly visible process of fact-gathering and policy assessment by a number of New Jersey legislators, leading to the withdrawal of the legislation.

Item 7 turns to the mechanisms of private influence on government, in a particularly unsavory but revealing way. The story reveals details of a high-end dinner "to pay tribute to the guest of honor, Gov. Andrew M. Cuomo." The article reports that "Lobbyists told their clients that the event would be a good thing to go to", at a minimum ticket price of $25,000 per couple. This story connects the dots between private interests and efforts to influence governmental policy. In this case the dots are not very far apart.

With a little effort all these items could be mapped onto the diagram of the interconnections within and across government and external social groups provided above.

Wednesday, March 27, 2019

Regulatory failure and the 737 MAX disasters


The recent crashes of two Boeing 737 MAX aircraft raise questions about the safety certification process through which this modified airframe was approved for use by the FAA. Recent accounts of the design and manufacture of the aircraft reveal enormous pressure for speed and for cost reduction. Attention has focused on a software system, MCAS, a feature needed to compensate for the changed aerodynamics created by the repositioning of larger engines on the existing 737 body. The software was designed to intervene automatically to prevent a stall if a single sensor in the nose indicated an excessive angle of attack. The crash investigations are not complete, but current suspicions are that the pilots of the two aircraft were unable to control or disable the nose-down response of the system in the roughly 40 seconds they had to recover control of the aircraft. (James Fallows provides a good and accessible account of the details of the development of the 737 MAX in a story in the Atlantic; link.)
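The fragility of a single-sensor trigger can be illustrated with a deliberately simplified sketch. This is not Boeing's actual MCAS logic; the threshold, the function name, and the command strings are all invented for illustration. The point is structural: when an automatic intervention is keyed to one sensor, a single faulty reading is enough to command repeated nose-down trim even in normal flight.

```python
# Highly simplified illustration of single-sensor fragility.
# Not Boeing's actual MCAS algorithm; all values are hypothetical.

AOA_THRESHOLD_DEG = 15.0   # hypothetical angle-of-attack threshold

def mcas_like_command(aoa_sensor_deg: float) -> str:
    """Trigger nose-down trim whenever the single sensor exceeds the threshold."""
    if aoa_sensor_deg > AOA_THRESHOLD_DEG:
        return "nose-down trim"
    return "no action"

# A failed sensor reporting an absurd value triggers intervention
# even though the aircraft's actual angle of attack is normal:
print(mcas_like_command(40.0))   # faulty sensor reading
print(mcas_like_command(2.0))    # normal flight
```

A standard safety-engineering remedy is redundancy: compare two or more independent sensors and disable the automatic intervention when they disagree, so that no single instrument failure can drive the flight controls.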

The question here concerns the regulatory background of the aircraft: was the certification process through which the 737 MAX was certified to fly a sufficiently rigorous and independent one?

In a New York Times article (NYT 3/27/19), Thomas Kaplan details the division of responsibility in the certification process that has developed between the FAA and the manufacturer over the past several decades. Under this arrangement, the FAA delegates a substantial part of the work of certification evaluation to the manufacturer and its engineering staff. Kaplan writes:
In theory, delegating much of the day-to-day regulatory work to Boeing allows the FAA to focus its limited resources on the most critical safety work, taps into existing industry technical expertise at a time when airliners are becoming increasingly complex, and allows Boeing in particular to bring out new planes faster at a time of intense global competition with its European rival Airbus.
However, it is apparent to both outsiders and insiders that this creates the possibility of impairing the certification process by placing crucial parts of the evaluation in the hands of experts whose interests and careers lie in the hands of the corporation whose product they are evaluating. This is an inherent conflict of interest for the employee, and it is potentially a critical flaw in the process from the point of view of safety. (See an earlier post on the need for an independent and empowered safety officer within complex and risky processes; link.)

Senator Richard Blumenthal (Connecticut) highlighted this concern when he wrote to the inspector general last week: "The staff responsible for regulating aircraft safety are answerable to the manufacturers who profit from cutting corners, not the American people who may be put at risk."

A 2011 audit report from the Transportation Department's inspector general's office highlighted exactly this kind of issue: "The report cited an instance where FAA engineers were concerned about the 'integrity' of an employee acting on the agency's behalf at an unnamed manufacturer because the employee was 'advocating a position that directly opposed FAA rules on an aircraft fuel system in favor of the manufacturer'." The article makes the point that Congress has encouraged this program of delegation in order to constrain budget requirements for the federal agency.

Kaplan notes that there is also a worrisome degree of exchange of executive staff between the FAA and the airline industry, raising the possibility that the industry's priorities about cost and efficiency may unduly influence the regulatory agency:
The part of the FAA under scrutiny, the Transport Airplane Directorate, was led at the time by an aerospace engineer named Ali Bahrami. The next year, he took a job at the Aerospace Industries Association, a trade group whose members include Boeing. In that position, he urged his former agency to allow manufacturers like Boeing to perform as much of the work of certifying new planes as possible. Mr. Bahrami is now back at the FAA as its top safety official.
This episode illustrates one of the key dysfunctions of organizations highlighted elsewhere here: the workings of conflicts of commitment and interest within an organization, and the power that an organization's executives have to impose behavior and judgments on their employees that are at odds with the responsibilities those individuals have to other important social goods, including airline safety. The episode has a lot in common with the sequence of events leading to the launch of Space Shuttle Challenger (Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA).

Charles Perrow has studied system failure extensively, from the publication of his important book Normal Accidents: Living with High-Risk Technologies through his 2011 book The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters. In a 2015 article, "Cracks in the 'regulatory state'" (link), he summarizes some of his concerns about the effectiveness of the regulatory enterprise. The abstract of the article shows its relevance to the current case:
Over the last 30 years, the U.S. state has retreated from its regulatory responsibility over private-sector economic activities. Over the same period, a celebratory literature, mostly in political science, has developed, characterizing the current period as the rise of the regulatory state or regulatory capitalism. The notion of regulation in this literature, however, is a perverse one—one in which regulators mostly advise rather than direct, and industry and firm self-regulation is the norm. As a result, new and potentially dangerous technologies such as fracking or mortgage backed derivatives are left unregulated, and older necessary regulations such as prohibitions are weakened. This article provides a joint criticism of the celebratory literature and the deregulation reality, and strongly advocates for a new sociology of regulation that both recognizes and documents these failures. (203)
The 2015 article highlights some of the precise sources of failure that seem to be evident in the 737 MAX case. "Government assumes a coordinating rather than a directive role, in this account, as regulators draw upon industry best practices, its standard-setting proclamations, and encourage self-monitoring" (203). This is precisely what current reporting demonstrates about the FAA relationship to the manufacturers.

One of the key flaws of self-monitoring is the lack of truly independent inspectors:
Part of the problem stems from the failure of firms to select truly independent inspectors. Firms can, in fact, select their own inspectors—for example, firemen or police from the local areas who are quite conscious of the economic power of the local chemical firm they are to inspect. (205)
Here again, the Boeing 737 MAX certification story seems to illustrate this defect as well.

How serious are these "cracked regulatory institutions"? According to Perrow they are deadly serious. Here is his summary assessment of the relationship between regulatory failure and catastrophe:
Almost every major industrial accident in recent times has involved either regulatory failure or the deregulation demanded by business and industry. For more examples, see Perrow (2011). It is hard to make the case that the industries involved have failed to innovate because of federal regulation; in particular, I know of no innovations in the safety area that were stifled by regulation. Instead, we have a deregulated state and deregulated capitalism, and rising environmental problems accompanied by growing income and wealth inequality. (210)
In short, we seem to be at the beginning of an important revelation of the costs of neoliberal efforts to minimize regulation and to shift significant responsibility for safety onto the manufacturers.

(Baldwin, Cave, and Lodge provide a good introduction to current thinking about government regulation in Understanding Regulation: Theory, Strategy, and Practice, 2nd Edition. Their Oxford Handbook of Regulation also provides excellent resources on this topic.)

Monday, May 7, 2018

What the boss wants to hear ...


According to David Halberstam in his outstanding history of the war in Vietnam, The Best and the Brightest, a prime cause of disastrous decision-making by Presidents Kennedy and Johnson was an institutional imperative in the Defense Department to come up with a set of facts that conformed to what the President wanted to hear. Robert McNamara and McGeorge Bundy were among the highest-level miscreants in Halberstam's account; they were determined to craft an assessment of the situation on the ground in Vietnam that conformed best with their strategic advice to the President.

Ironically, a very similar dynamic led to one of modern China's greatest disasters, the Great Leap Forward famine in 1959. The Great Helmsman was certain that collective agriculture would be vastly more productive than private agriculture; and following the collectivization of agriculture, party officials in many provinces obliged this assumption by reporting inflated grain statistics throughout 1958 and 1959. The result was a famine that led to at least twenty million excess deaths during a two-year period as the central state shifted resources away from agriculture (Frank Dikötter, Mao's Great Famine: The History of China's Most Devastating Catastrophe, 1958-62).

More mundane examples are available as well. When information about possible sexual harassment in a given department is suppressed because "it won't look good for the organization" and "the boss will be unhappy", the organization is on a collision course with serious problems. When concerns about product safety or reliability are suppressed within the organization for similar reasons, the results can be equally damaging, to consumers and to the corporation itself. General Motors, Volkswagen, and Michigan State University all seem to have suffered from these deficiencies of organizational behavior. This is a serious cause of organizational mistakes and failures. It is impossible to make wise decisions -- individual or collective -- without accurate and truthful information from the field. And yet the knowledge of higher-level executives depends upon the truthful and full reporting of subordinates, who sometimes have career incentives that work against honesty.

So how can this unhappy situation be avoided? Part of the answer has to do with the behavior of the leaders themselves. It is important for leaders to explicitly and implicitly invite the truth -- whether it is good news or bad news. Subordinates must be encouraged to be forthcoming and truthful; and bearers of bad news must not be subject to retaliation. Boards of directors, both private and public, need to make clear their own expectations on this score as well: that they expect leading executives to invite and welcome truthful reporting, and that they expect individuals throughout the organization to provide truthful reporting. A culture of honesty and transparency is a powerful antidote to the disease of fabrications to please the boss.

Anonymous hotlines and formal protection of whistle-blowers are other institutional arrangements that lead to greater honesty and transparency within an organization. These avenues have the advantage of being largely outside the control of the upper executives, and therefore can serve as a somewhat independent check on dishonest reporting.

A reliable practice of accountability is also a deterrent to dishonest or partial reporting within an organization. The truth eventually comes out -- whether about sexual harassment, about hidden defects in a product, or about workplace safety failures. When boards of directors and organizational policies make it clear that there will be negative consequences for dishonest behavior, this gives an ongoing incentive of prudence for individuals to honor their duties of honesty within the organization.

This topic falls within the broader question of how individual behavior throughout an organization can give rise to important failures that harm both the public and the organization itself.


Wednesday, December 13, 2017

Varieties of organizational dysfunction


Several earlier posts have made the point that important technology failures often include organizational faults in their causal background.

It is certainly true that most important accidents have multiple causes, and it is crucial to have as good an understanding as possible of the range of causal pathways that have led to air crashes, chemical plant explosions, or drug contamination incidents. But in the background we almost always find organizations and practices through which complex technical activities are designed, implemented, and regulated. Human actors, organized into patterns of cooperation, collaboration, competition, and command, are as crucial to technical processes as are power lines, cooling towers, and control systems in computers. So it is imperative that we follow the lead of researchers like Charles Perrow (The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters), Kathleen Tierney (The Social Roots of Risk: Producing Disasters, Promoting Resilience), or Diane Vaughan (The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA) and give close attention to the social- and organization-level failures that sometimes lead to massive technological failures.

It is useful to have a few examples in mind as we undertake to probe this question more deeply. Here are a number of important accidents and failures that have been carefully studied.
  • Three Mile Island, Chernobyl nuclear disasters
  • Challenger and Columbia space shuttle disasters
  • Failure of United States anti-submarine warfare in 1942-43
  • Flawed policy and decision-making in US leading to escalation of Vietnam War
  • Flawed policy and decision-making in France leading to Dien Bien Phu defeat
  • Failure of Nuclear Regulatory Commission to ensure reactor safety
  • DC-10 design process
  • Osprey design process
  • Failure of federal flood insurance to appropriately guide rational land use
  • FEMA failure in Katrina aftermath
  • Design and manufacture of the Edsel sedan
  • High rates of hospital-acquired infections in some hospitals
Examples like these allow us to begin to create an inventory of organizational flaws that sometimes lead to failures and accidents:
  • siloed decision-making (design division, marketing division, manufacturing division all have different priorities and interests)
  • lax implementation of formal processes
  • strategic bureaucratic manipulation of outcomes 
    • information withholding, lying
    • corrupt practices, conflicts of interest and commitment
  • short-term calculation of costs and benefits
  • indifference to public goods
  • poor evaluation of data; misinterpretation of data
  • lack of high-level officials responsible for compliance and safety
These deficiencies may be analyzed in terms of a more abstract list of organizational failures:
  • Poor decisions given existing priorities and facts
    • poor priority-setting processes
    • poor information-gathering and analysis
  • failure to learn and adapt from changing circumstances
  • internal capture of decision-making; corruption, conflict of interest
  • vulnerability of decision-making to external pressures (external capture)
  • faulty or ineffective implementation of policies, procedures, and regulations

******

Nancy Leveson is a leading authority on the systems-level causes of accidents and failures. A recent white paper can be found here. Here is the abstract for that paper:
New technology is making fundamental changes in the etiology of accidents and is creating a need for changes in the explanatory mechanisms used. We need better and less subjective understanding of why accidents occur and how to prevent future ones. The most effective models will go beyond assigning blame and instead help engineers to learn as much as possible about all the factors involved, including those related to social and organizational structures. This paper presents a new accident model founded on basic systems theory concepts. The use of such a model provides a theoretical foundation for the introduction of unique new types of accident analysis, hazard analysis, accident prevention strategies including new approaches to designing for safety, risk assessment techniques, and approaches to designing performance monitoring and safety metrics. (1; italics added)
Here is what Leveson has to say about the social and organizational causes of accidents:

2.1 Social and Organizational Factors

Event-based models are poor at representing systemic accident factors such as structural deficiencies in the organization, management deficiencies, and flaws in the safety culture of the company or industry. An accident model should encourage a broad view of accident mechanisms that expands the investigation beyond the proximate events.

Ralph Miles Jr., in describing the basic concepts of systems theory, noted that:

Underlying every technology is at least one basic science, although the technology may be well developed long before the science emerges. Overlying every technical or civil system is a social system that provides purpose, goals, and decision criteria (Miles, 1973, p. 1).

Effectively preventing accidents in complex systems requires using accident models that include that social system as well as the technology and its underlying science. Without understanding the purpose, goals, and decision criteria used to construct and operate systems, it is not possible to completely understand and most effectively prevent accidents. (6)