Showing posts with label organizational change.

Saturday, July 29, 2017

Dynamics of medieval cities


Cities provide a good illustration of the ontology of the theory of assemblage (link). Many forms of association, production, logistics, governance, and population processes came together from independent origins and with different causal properties. So one might imagine that unexpected dynamics of change are likely to be found in all urban settings.

The medieval period is not known for its propensity for innovation, out-of-the-box thinking, or dynamic tendencies towards change. One thinks rather of the placid, continuing social relations of the English countryside, the French village, or the Italian town. There is the idea that a stultifying social order made innovation and change difficult. However, studies of medieval cities over the past century have cast some doubt on this stereotype. Henri Pirenne's lectures on the medieval city in 1923 were collected in Medieval Cities: Their Origins and the Revival of Trade, and there are numerous clues indicating that Pirenne found ample features of dynamic change in the medieval city from the eleventh century forward. Here are a few examples:
The eleventh century, in fact, brings us face to face with a real commercial revival. This revival received its impetus from two centers of activity, one located in the south and the other in the north: Venice on one side and the Flemish coast on the other. (82) 
Trade was thus forced upon them [Venice] by the very conditions under which they lived. And they had the energy and the genius to turn to profit the unlimited possibilities which trade offered them. (83)
 Constantinople, even in the eleventh century, appears not only as a great city, but as the greatest city of the whole Mediterranean basin. Her population was not far from reaching the figure of a million inhabitants, and that population was singularly active. She was not content, as had been the population of Rome under the Republic and the Empire, to consume without producing. She gave herself over, with a zeal which the fiscal system shackled but did not choke, not only to trading but to industry. (84)
The geographical situation of Flanders, indeed, put her in a splendid position to become the western focus for the commerce of the seas of the north. It formed the natural terminus of the voyage for ships arriving from Northern England or which, having crossed the Sound after coming out of the Baltic, were on their way to the south. (97)
It was only in the twelfth century that, gradually but definitely, Western Europe was transformed. The economic development freed her from the traditional immobility to which a social organization, depending solely on the relations of man to the soil, had condemned her. Commerce and industry did not merely find a place alongside of agriculture; they reacted upon it.... The rigid confines of the demesnial system, which had up to now hemmed in all economic activity, were broken down and the whole social order was patterned along more flexible, more active and more varied lines. (101-102)
Large or small, [cities] were to be met everywhere; one was to be found, on the average, in every twenty-five square leagues of land. They had, in fact, become indispensable to society. They had introduced into it a division of labor which it could no longer do without. Between them and the country was established a reciprocal exchange of services. (102)
So trade, finance, manufacturing, and flexible labor led to a dynamic of change that resulted in real economic and urban development in medieval European cities. Pirenne emphatically does not give a rendering of the medieval city that features a rigid social order impeding social and economic change.

A recent study provides quantitative evidence that the stereotyped impression of social stasis in the urban world of the middle ages is incorrect (link). Rudolf Cesaretti and his co-authors of "Population-Area Relationship for Medieval European Cities" provide a strikingly novel view of the medieval city (link). Their key finding is that medieval cities show an unexpected similarity to modern urban centers in the relationship between their populations and their spatial extents. They collected data on 173 medieval cities across Western Europe:


Here is how they frame their finding in the Introduction:
This research suggests that, at a fundamental level, cities consist of overlapping social and physical networks that are self-consistently bounded by settled physical space [55–57]. Here, we investigate whether the relationships between settlement population and settled land area predicted by scaling theory—and observed in contemporary cities—also characterized medieval European cities. In this paper, we analyze the relationship between the extent of built-up area and resident populations of 173 settlements located in present-day Belgium, France, England, Switzerland, Germany, and Italy, ca. AD 1300. Previous scholarship has produced population estimates for a large number of medieval European cities [58,59]. We build on this work by linking population estimates with estimates for the built-up area compiled from historical and archaeological sources.
The authors focus on a common belief about medieval cities -- the idea that social interactions among residents are structured by powerful social institutions. Guilds, ethnicities, family groups, and religion provide examples of such institutions. If the net effect of social institutions like these is to reduce the likelihood of interaction of pairs of individuals, then medieval cities should display different patterns of spatial distribution of population and growth; if this effect is not significant, then medieval cities should resemble modern cities in these respects. This study finds the latter to be the case. Fundamentally they are interested in the topic of "scaling of settlement area with population size". Here is a plot of area and population for the cities they studied, separated by region:


Their central finding is that the data about population density and spatial distribution do not support the hypothesis that medieval social institutions substantially inhibited social interactions to an extent that hindered urban growth and development. Rather, medieval cities look very similar to modern cities in their population and spatial structures.
Table 1 shows that the point estimates of the scaling coefficients for all four regional groups and for the pooled dataset fall within the 2/3 ≤ a ≤ 5/6 range predicted by the social reactor model... Thus, medieval cities across Western Europe exhibit, on average, economies of scale with respect to spatial agglomeration such that larger cities were denser on average. This pattern is similar to that observed for modern cities.
Even though medieval cities were structured by hierarchical institutions that are ostensibly not so dominant today, we interpret this finding as excluding a strongly segregating role for medieval social institutions. This would suggest that the institutions of Western European urban systems ca. 1300 did not substantially constrain social mixing, economic integration, or the free flow of people, ideas, and information. We take these findings as an indication that the underlying micro-level social dynamics of medieval cities were fundamentally similar to those of contemporary cities. (discussion)
This study presents a fascinating contemporary test of a thesis that would surely have interested Pirenne almost a century ago: did medieval cities develop spatially in ways that reflect a reasonable degree of freedom of choice among residents about where they lived and worked? And the data seem to confirm a "yes" for this question.
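The core relationship at stake can be written as A ∝ N^a, where A is built-up area and N is population. As a rough illustration only (a minimal sketch using invented numbers, not the authors' dataset or code), the exponent can be estimated by ordinary least squares on log-transformed values:

    import numpy as np

    # Hypothetical population and built-up-area figures for a handful of cities;
    # these values are invented for illustration, not taken from Cesaretti et al.
    population = np.array([2000, 5000, 10000, 25000, 60000, 100000])
    area_ha = np.array([40, 80, 140, 290, 560, 840])

    # Scaling model: A = c * N**a, i.e. log A = log c + a * log N.
    # Ordinary least squares on the log-transformed values estimates the exponent a.
    a, log_c = np.polyfit(np.log(population), np.log(area_ha), 1)

    print(f"estimated scaling exponent a = {a:.3f}")
    print(f"estimated prefactor c = {np.exp(log_c):.1f}")

    # The social reactor model predicts 2/3 <= a <= 5/6: larger settlements
    # are denser on average (economies of scale in the use of space).
    print("within predicted range:", 2/3 <= a <= 5/6)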

(I haven't attempted to summarize the methods used in this study, and the full article bears reading for anyone interested in the question of interpreting urban history from a quantitative point of view.)

Friday, July 21, 2017

A new model of organization?


In Team of Teams: New Rules of Engagement for a Complex World, General Stanley McChrystal (with Tantum Collins, David Silverman, and Chris Fussell) describes a new, 21st-century conception of organization for large, complex activities involving thousands of individuals and hundreds of major sub-tasks. His concept is grounded in his experience in counter-insurgency warfare in Iraq. McChrystal argues that modern counter-terrorism should not be constructed as a centrally organized, bureaucratic, hierarchical process with commanders and scripted agents; instead it requires a more decentralized and flexible system of action, which he refers to as a "team of teams". Information is shared freely, local commanders have ready access to resources and knowledge from other experts, and they make decisions in a more flexible way. The model hopes to capture the benefits of improvisation, flexibility, and a much higher level of trust and communication than is characteristic of typical military and corporate organizations.

One place where the "team of teams" structure is plausible is in the context of a focused technology startup company, where the whole group of participants needs to be in regular and frequent collaboration. Indeed, Paul Rabinow's 1996 ethnography of the Cetus Corporation in its pursuit of PCR (polymerase chain reaction), Making PCR: A Story of Biotechnology, reflects a very similar topology of information flows and collaboration links across and within working subgroups (link). But the vision does not fit the organizational and operational needs of a large hospital, a railroad company, or a research university very well. It seems plausible that the challenges the US military faced in fighting Al-Qaeda and ISIL are not really analogous to those faced by less dramatic organizations like hospitals, universities, and corporations. The decentralized and improvisational circumstances of urban warfare against loosely organized terrorists may be sui generis.

McChrystal proposes an organizational structure that is more decentralized, more open to local decision-making, and more flexible and resilient. These are unmistakable virtues in some circumstances, but not in all circumstances and all organizations. And arguably such a structure would have been impossible in the planning and execution of the French defense of Dien Bien Phu or the US decision to wage war against the Vietnamese insurgency ten years later. These were situations where central decisions needed to be made, and the decisions needed to be implemented through well-organized bureaucracies. The problem in both instances is that the wrong decisions were made, based on the wrong information and assessments. What was needed, it would appear, was better executive leadership and decision-making -- not a fundamentally decentralized pattern of response and counter-response.

One thing that deserves comment in the context of McChrystal's book is the history of bad organization, bad intelligence, and bad decision-making the world has witnessed in the military experiences of the past century. The radical miscalculations and failures of planning involved in the first months of the Korean War, the painful and tragic misjudgments made by the French military in preparing for Dien Bien Phu, the equally bad thinking and planning done by Robert McNamara and the whiz kids leading to the Vietnam War -- these examples stand out as sentinel illustrations of the failures of large organizations that have been tasked to carry out large, complex activities involving numerous operational units. The military and the national security establishments were good at some tasks, and disastrously bad at others. And the things they were bad at were both systemic and devastating. Bernard Fall illustrates these failures in Hell In A Very Small Place: The Siege Of Dien Bien Phu, and David Halberstam does so for the decision-making that led to the war in Vietnam in The Best and the Brightest.

So devising new ideas about command, planning, intelligence gathering and analysis, and priority-setting that are more effective would be a big contribution to humanity. But the deficiencies in Dien Bien Phu, Korea, or Vietnam seem different from those McChrystal identifies in Iraq. What was needed in these portentous moments of policy choice was clear-eyed establishment of appropriate priorities and goals, honest collection of intelligence and sources of information, and disinterested implementation of policies and plans that served the highest interests of the country. The "team of teams" approach doesn't seem to be a general solution to the wide range of military and political challenges nations face.

What one would have wanted to see in the French military or the US national security apparatus is something different from the kind of teamwork described by McChrystal: greater honesty on all parts, a commitment to taking seriously the assessments of experts and participants in the field, an openness to questioning strongly held assumptions, and a greater capacity for institutional wisdom in arriving at decisions of this magnitude. We would have wanted to see a process that was not dominated by large egos, self-interest, and fixed ideas. We would have wanted French generals and their civilian masters to soberly assess the military function that a fortress camp at Dien Bien Phu could satisfy and the realistic military requirements that would need to be met in order to defend the location, and to make an honest effort to solicit the very best information and judgment from experienced commanders and officials about what a Viet-Minh siege might look like. Instead, the French military was guided by complacent assumptions about French military superiority, which led to a genuine catastrophe for the soldiers assigned to the task and for French society more broadly.

There are valid insights contained in McChrystal's book about the urgency of breaking down obstacles to communication and action within sprawling organizations as they confront a changing environment. But it doesn't add up to a model that is well designed for most contexts in which large organizations actually function.

Wednesday, June 14, 2017

Organizational learning


 

I've posed the question of organizational learning several times in recent months: are there forces that push organizations towards changes leading to improvements in performance over time? Is there a process of organizational evolution in the social world? So where do we stand on this question?

There are only two general theories that would lead us to answer these questions affirmatively. One is a selection theory. According to this approach, organizations undergo random changes over time, and the environment of action favors those organizations whose changes are functional with respect to performance. The selection theory itself has two variants, depending on how we think about the unit of selection. It might be hypothesized that the firm itself is the unit of selection, so firms survive or fail based on their own fitness. Over time the average level of performance rises through the extinction of low-performance organizations. Or it might be maintained that the unit is at a lower level -- the individual alternative arrangements for performing various kinds of work, which are evaluated and selected on the basis of some metric of performance. On this approach, individual innovations are the object of selection.
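As a toy illustration of the firm-level variant (my own sketch with arbitrary parameters, not a model drawn from the literature), undirected random variation plus the periodic extinction of the weakest performers is enough to raise average performance over time:

    import random

    # Toy model: firms vary at random; the environment removes the bottom 10%
    # each period and replaces them with copies of surviving firms.
    random.seed(0)
    firms = [1.0] * 100                      # initial performance scores

    for period in range(200):
        # undirected variation: small random drift in each firm's performance
        firms = [f + random.gauss(0.0, 0.05) for f in firms]
        # selection: the weakest tenth fail and are replaced by survivors
        firms.sort()
        cutoff = len(firms) // 10
        survivors = firms[cutoff:]
        firms = survivors + [random.choice(survivors) for _ in range(cutoff)]

    print("mean performance after selection:", round(sum(firms) / len(firms), 2))

Run without the selection step, the same zero-mean drift leaves average performance hovering around its starting value; the rise comes entirely from extinction and replacement, which is the point of the selection theory.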

The other large mechanism of organizational learning is quasi-intentional. We postulate that intelligent actors control various aspects of the functioning of an organization; these actors have a set of interests that drive their behavior; and actors fine-tune the arrangements of the organization so as to serve their interests. This is a process I describe as quasi-intentional to convey that the organization itself has no intentionality, but its behavior and arrangements are under the control of a loosely connected set of actors who are individually intentional and purposive. 

In a highly idealized representation of organizations at work, these quasi-intentional processes may indeed push the organization towards higher functioning. Governance processes -- boards of directors, executives -- have a degree of influence over the activities of other actors within and adjacent to the organization, and they are able to push some subordinate behavior in the direction of higher performance and innovation if they have an interest in doing so. And sometimes these governance actors do in fact have an interest in higher performance -- more revenue, less environmental harm, greater safety, gender and racial equity. Under these circumstances it is reasonable to expect that arrangements will be modified to improve performance, and the organization will "evolve".

However, two forms of counter-intentionality arise. First, the interests of the governing actors are not perfectly aligned with increasing performance. Substantial opportunities for conflict of interest exist at every level, including the executive level (e.g. Enron). So the actions of executives are not always in concert with the goal of improving performance. Second, other actors within the organization are often beyond the control of executive actors and are motivated by interests that are quite separate from the goal of increasing performance. Their actions may often lead to status quo performance or even a degradation of performance.

So the question of whether a given organization will change in the direction of higher performance is highly sensitive to (i) the alignment of mission interest and personal interest for executive actors, (ii) the scope of control executive actors are able to exercise over subordinates, and (iii) the strength and pervasiveness of personal interests among subordinates within the organization and the capacity these subordinates have to select and maintain arrangements that favor their interests.

This represents a highly contingent and unpredictable situation for the question of organizational learning. We might regard the process as an ongoing struggle between local private interests and the embodiment of mission-defined interests. And there is no reason at all to believe that this struggle is biased in the direction of enhancement of performance. Some organizations will progress, others will be static, and yet others will decline over time. There is no process of evolution, guided or invisible, that leads inexorably towards improvement of arrangements and performance.

So we might formulate this conclusion in a fairly stark way. If organizations improve in capacity and performance over time in a changing environment, this is entirely the result of intelligent actors undertaking to implement innovations that will lead to these outcomes, at a variety of levels of action within the organization. There is no hidden process that can be expected to generate an evolutionary tendency towards higher organizational performance. 

(The images above are of NASA headquarters and Enron headquarters -- two organizations whose histories reflect the kinds of dysfunctions mentioned here.)


Thursday, June 1, 2017

Social change and leadership


Historians pay a lot of attention to important periods of social change -- the emergence of new political movements, the development of a great city, the end of Jim Crow segregation. There is an inclination to give a lot of weight to the importance of leaders, visionaries, and change-makers in driving these processes to successful outcomes. And, indeed, history correctly records the impact of charismatic and visionary leaders. But consider the larger question: are large social changes amenable to design by a small number of actors?

My inclination is to think that the capacity of calculated design to bring about large, complex social changes is very much more limited than we often imagine. Instead, change more often emerges from the independent strategies and actions of numerous actors, only loosely coordinated with each other, and proceeding from their own interests and framing assumptions. The large outcomes -- the emergence of Chicago as the major metropolis of the Midwest, the forging of the EU and the monetary union, the coalescence of nationalist movements in France and Germany -- are the resultants of multiple actors and causes. Big outcomes are contingent products of multiple streams of action, mobilization, business decisions, political parties, etc.

There are exceptions, of course. Italy's political history would have been radically different without Mussolini, and the American Civil War would probably have had a different course if Douglas had won the 1860 presidential election. 

But these are exceptions, I believe. More common is the history of Chicago, the surge of right-wing nationalism, or the collapse of the USSR. These are all multi-causal and multi-actor outcomes, and there is no single, unified process of development. And there is no author, no architect, of the outcome. 

So what does this imply about individual leaders and organizations who want to change the social and political environment facing them? Are their aspirations for creating change simply illusions? I don't think so. To deny that single visionaries can write the future does not imply that they cannot nudge it in a desirable direction. And these nudges can indeed alter the future, sometimes in the desired direction. An anti-racist politician can influence voters and institutions in ways that inflect the arc of his or her society in a less racist way. This doesn't permanently solve the problem, but it helps. And with good fortune, other actors will have made similar efforts, and gradually the situation of racism changes.

This framework for thinking about large social change raises large questions about how we should think about improving the world around us. It seems to imply the importance of local and decentralized social change. We should perhaps adjust our aspirations for social progress around the idea of slow, incremental change through many actors, organizations, and coalitions. As Marx once wrote, "men make their own history, but not in circumstances of their own choosing." And we can add a qualification Marx would not have appreciated: change makers are best advised to construct their plans around long, slow, and incremental change instead of blueprints for unified, utopian change. 



Saturday, May 20, 2017

Is there a new capitalism?



An earlier post considered Dave Elder-Vass’s very interesting treatment of the contemporary digital economy. In Profit and Gift in the Digital Economy Elder-Vass argues that the vast economic significance of companies like Google, Facebook, and Amazon in today's economy is difficult to assimilate within the conceptual framework of Marx’s foundational ideas about capitalism, constructed as they were around manufacturing, labor, and ownership of capital, and that we need some new conceptual tools in order to make sense of the economic system we now confront. (Elder-Vass responded to my earlier post here.)

A new book by Nick Srnicek looks at this problem from a different point of view. In Platform Capitalism Srnicek proposes to understand the realities of our current “digital economy” according to traditional ideas about capitalism and profit. Here is a preliminary statement of his approach:
The simple wager of the book is that we can learn a lot about major tech companies by taking them to be economic actors within a capitalist mode of production. This means abstracting from them as cultural actors defined by the values of the Californian ideology, or as political actors seeking to wield power. By contrast, these actors are compelled to seek out profits in order to fend off competition. This places strict limits on what constitutes possible and predictable expectations of what is likely to occur. Most notably, capitalism demands that firms constantly seek out new avenues for profit, new markets, new commodities, and new means of exploitation. For some, this focus on capital rather than labour may suggest a vulgar economism; but, in a world where the labour movement has been significantly weakened, giving capital a priority of agency seems only to reflect reality. (Kindle Locations 156-162)
In other words, there is not a major break from General Motors, with its assembly lines, corporate management, and vehicles, to IBM, with its products, software, and innovations, to Google, with its purely abstract and information-intensive products. All are similar in their basic corporate navigation systems: make decisions today that will support or increase profits tomorrow. In fact, each of these companies falls within the orbit of the new digital economy, according to Srnicek:
As a preliminary definition, we can say that the digital economy refers to those businesses that increasingly rely upon information technology, data, and the internet for their business models. This is an area that cuts across traditional sectors – including manufacturing, services, transportation, mining, and telecommunications – and is in fact becoming essential to much of the economy today. (Kindle Locations 175-177).
What has changed, according to the economic history constructed by Srnicek, is that the creation and control of data has suddenly become a vast and dynamic source of potential profit, and capitalist firms have adapted quickly to capture these profits.

The restructuring associated with the rise of information-intensive economic activity has greatly changed the nature of work:
Simultaneously, the generalised deindustrialisation of the high-income economies means that the product of work becomes immaterial: cultural content, knowledge, affects, and services. This includes media content like YouTube and blogs, as well as broader contributions in the form of creating websites, participating in online forums, and producing software. (Kindle Locations 556-559)
But equally it takes the form of specialized data-intensive work within traditional companies: design experts, marketing analysis of “big data” on consumer trends, the use of large simulations to guide business decision-making, the use of automatically generated data from vehicles to guide future engineering changes.

In order to capture the profit opportunities associated with the availability of big data, something else was needed: an organizational basis for aggregating and monetizing the data that exist around us. This is the innovation that comes in for Srnicek's greatest focus of attention: the platform.
This chapter argues that the new business model that eventually emerged is a powerful new type of firm: the platform. Often arising out of internal needs to handle data, platforms became an efficient way to monopolise, extract, analyse, and use the increasingly large amounts of data that were being recorded. Now this model has come to expand across the economy, as numerous companies incorporate platforms: powerful technology companies (Google, Facebook, and Amazon), dynamic start-ups (Uber, Airbnb), industrial leaders (GE, Siemens), and agricultural powerhouses (John Deere, Monsanto), to name just a few. (Kindle Locations 602-607).
What are platforms? At the most general level, platforms are digital infrastructures that enable two or more groups to interact. They therefore position themselves as intermediaries that bring together different users: customers, advertisers, service providers, producers, suppliers, and even physical objects. More often than not, these platforms also come with a series of tools that enable their users to build their own products, services, and marketplaces. Microsoft’s Windows operating system enables software developers to create applications for it and sell them to consumers; Apple’s App Store and its associated ecosystem (XCode and the iOS SDK) enable developers to build and sell new apps to users; Google’s search engine provides a platform for advertisers and content providers to target people searching for information; and Uber’s taxi app enables drivers and passengers to exchange rides for cash. (Kindle Locations 607-616)
Srnicek distinguishes five broad types of digital data platforms that have been built out as business models: advertising, cloud, industrial, product, and "lean" platforms (the last exemplified by Uber).

Srnicek believes that firms organized around digital platforms are subject to several important dynamics and tendencies: "expansion of extraction, positioning as a gatekeeper, convergence of markets, and enclosure of ecosystems" (kl 1298). These tendencies are created by the platform-based firm's imperative to generate profits. Profits depend upon monetizing data; and data has little value in small volume. So the most fundamental imperative is -- mass collection of data from individual consumers.
If data collection is a key task of platforms, analysis is the necessary correlate. The proliferation of data-generating devices creates a vast new repository of data, which requires increasingly large and sophisticated storage and analysis tools, further driving the centralisation of these platforms. (kl 1337-1339)
So privacy threats emerging from the new digital economy are not a bug; they are an inherent feature of design.

This appears to lead us to Srnicek's most basic conclusion: the new digital economy is just like the old industrial economy in one important respect. Firms are wholly focused on generating profits, and they design intelligent strategies to permit themselves to appropriate ever-larger profits from the raw materials they process. In the case of the digital economy the raw material is data, and the profits come from centralizing and monopolizing access to data, and deploying data to generate profits for other firms (who in turn pay for access to the data). And revenues and profits have no correspondence to the size of the firm's workforce:
Tech companies are notoriously small. Google has around 60,000 direct employees, Facebook has 12,000, while WhatsApp had 55 employees when it was sold to Facebook for $19 billion and Instagram had 13 when it was purchased for $1 billion. By comparison, in 1962 the most significant companies employed far larger numbers of workers: AT&T had 564,000 employees, Exxon had 150,000 workers, and GM had 605,000 employees. Thus, when we discuss the digital economy, we should bear in mind that it is something broader than just the tech sector defined according to standard classifications. (Kindle Locations 169-174)
Marx's theory of capitalism fundamentally originates in a theory of conflict of interest and a theory of exploitation. In Capital that conflict exists between capitalists and workers, and consumers are essentially ignored (except when Marx sometimes refers to the deleterious effects of competition on public health; link). But in Srnicek's reading of the contemporary digital economy (and Elder-Vass's as well) the focus shifts away from labor and towards the consumer. The primary conflict in the digital economy is between the platform firm that seeks to acquire our data and the consumers who want the digital services but who are only dimly aware of the cost to their privacy. And here it is more difficult to make an argument about exploitation. Are consumers being exploited in this exchange? Or are they getting fair value through extensive and valuable digital services, for the surrender of their privacy in the form of data collection of clicks, purchases, travel, phone usage, and the countless other ways in which individual data winds up in the aggregation engines?

In an unexpected way, this analysis leads us back to a question that seems to belong in the nineteenth century: what after all is the source of value and wealth? And who has a valid claim on a share? What principles of justice should govern the distribution of the wealth of society? The labor theory of value had an answer to the question, but it is an answer that didn't have a lot of validity in 1850 and has none today. But in that case we need to address the question again. The soaring inequalities of income and wealth that capitalism has produced since 1980 suggest that our economy has lost its control mechanisms for equity; and perhaps this has something to do with the fact that a great deal of the money being generated in capitalism today comes from control of data rather than the adding of value to products through labor. Oddly enough, perhaps Marx's other big idea is relevant here: social ownership of the means of production. If there were a substantial slice of public-sector ownership of big data firms, including financial institutions, the resulting flow of income and wealth might be expected to begin to correct the hyper-inequalities our economy is currently generating.

Friday, March 31, 2017

Science policy and the Cold War


The marriage of science, technology, and national security took a major step forward during and following World War II. The secret Manhattan Project, marshaling the energies and time of thousands of scientists and engineers, showed that it was possible for military needs to effectively mobilize and conduct coordinated research into fundamental and applied topics, leading to the development of the plutonium bomb and eventually the hydrogen bomb. (Richard Rhodes' memorable The Making of the Atomic Bomb provides a fascinating telling of that history.) But also noteworthy are the coordinated efforts made in advanced computing, cryptography, radar, operations research, and aviation. (Interesting books on several of these areas include Stephen Budiansky's Code Warriors: NSA's Codebreakers and the Secret Intelligence War Against the Soviet Union and Blackett's War: The Men Who Defeated the Nazi U-Boats and Brought Science to the Art of Warfare, and George Dyson's Turing's Cathedral: The Origins of the Digital Universe.) Scientists served the war effort, and their work made a material difference in the outcome. More significantly, the US developed effective systems for organizing and directing the process of scientific research -- decision-making processes to determine which avenues should be pursued, bureaucracies for allocating funds for research and development, and motivational structures that kept the participants involved with a high level of commitment. Tom Hughes' very interesting Rescuing Prometheus: Four Monumental Projects That Changed the Modern World tells part of this story.

But what about the peace?

During the Cold War there was a new global antagonism, between the US and the USSR. The terms of this competition included both conventional weapons and nuclear weapons, and it was clear on all sides that the stakes were high. So what happened to the institutions of scientific and technical research and development from the 1950s forward?

Stuart Leslie addressed these questions in a valuable 1993 book, The Cold War and American Science: The Military-Industrial-Academic Complex at MIT and Stanford. Defense funding maintained and expanded the volume of university-based research aimed at what were deemed important military priorities.
The armed forces supplemented existing university contracts with massive appropriations for applied and classified research, and established entire new laboratories under university management: MIT's Lincoln Laboratory (air defense); Berkeley's Lawrence Livermore Laboratory (nuclear weapons); and Stanford's Applied Electronics Laboratory (electronic communications and countermeasures). (8)
In many disciplines, the military set the paradigm for postwar American science. Just as the technologies of empire (specifically submarine telegraphy and steam power) once defined the relevant research programs for Victorian scientists and engineers, so the military-driven technologies of the Cold War defined the critical problems for the postwar generation of American scientists and engineers.... These new challenges defined what scientists and engineers studied, what they designed and built, where they went to work, and what they did when they got there. (9)
And Leslie offers an institutional prediction about knowledge production in this context:
Just as Veblen could have predicted, as American science became increasingly bound up in a web of military institutions, so did its character, scope, and methods take on new, and often disturbing, forms. (9)
The evidence for this prediction is offered in the specialized chapters that follow. Leslie traces in detail the development of major research laboratories at both universities, involving tens of millions of dollars in funding, thousands of graduate students and scientists, and very carefully focused on the development of sensitive technologies in radio, computing, materials, aviation, and weaponry.
No one denied that MIT had profited enormously in those first decades after the war from its military connections and from the unprecedented funding sources they provided. With those resources the Institute put together an impressive number of highly regarded engineering programs, successful both financially and intellectually. There was at the same time, however, a growing awareness, even among those who had benefited most, that the price of that success might be higher than anyone had imagined -- a pattern for engineering education set, organizationally and conceptually, by the requirements of the national security state. (43)
In the closing chapter of the book Leslie gives some attention to the counter-pressures to the military's dominance in research universities that can arise within a democracy, when the anti-Vietnam War movement raised opposition to military research on university campuses and eventually led to the end of classified research at many universities. He highlights the protests that occurred at MIT and Stanford during the 1960s; but equally radical protests against classified and military research happened in Madison, Urbana, and Berkeley.

This is a set of issues that resonates strongly with Science, Technology and Society studies (STS). Leslie is indeed a historian of science and technology, but his approach does not fully share the social constructivism that characterizes the field today. His emphasis is on the implications of the funding sources for the direction that research in basic science and technology took in the 1950s and 1960s in leading universities like MIT and Stanford. And his basic caution is that the military and security priorities associated with this structure all but guaranteed that the course of research was distorted in directions that would not have been chosen in a more traditional university research environment.

The book raises a number of important questions about the organization of knowledge and the appropriate role of universities in scientific research. In one sense the Vietnam War is a red herring, because the opposition it generated in the United States was very specific to that particular war. But most people would probably understand and support the idea that universities played a crucial role in World War II by discovering and developing new military technologies, and that this was an enormously important and proper role for scientists in universities to play. Defeating fascism and dictatorship was an existential need for the whole country. So the idea that university research is sometimes used and directed towards the interests of national security is not inherently improper.

A different kind of worry arises on the topic of what kind of system is best for guiding research in science and technology towards improving the human condition. In grand terms, one might consider whether some large fraction of the billions of dollars spent in military research between 1950 and 1980 might have been better spent on finding ways of addressing human needs directly -- and therefore reducing the likely future causes of war. Is it possible that we would today be in a situation in which famine, disease, global warming, and ethnic and racial conflict were substantially eliminated if we had dedicated as much attention to these issues as we did to advanced nuclear weapons and stealth aircraft?

Leslie addresses STS directly in "Reestablishing a Conversation in STS: Who’s Talking? Who’s Listening? Who Cares?" (link). Donald MacKenzie's Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance tells part of the same story with a greater emphasis on the social construction of knowledge throughout the process.

(I recall a demonstration at the University of Illinois against a super-computing lab in 1968 or 1969. The demonstrators were appeased when it was explained that the computer was being used for weather research. It was later widely rumored on the campus that the weather research in question was in fact directed towards considering whether the weather of Vietnam could be manipulated in a militarily useful way.)

Friday, February 24, 2017

How organizations adapt


Organizations do things; they depend upon the coordinated efforts of numerous individuals; and they exist in environments that affect their ongoing success or failure. Moreover, organizations are to some extent plastic: the practices and rules that make them up can change over time. Sometimes these changes happen as the result of deliberate design choices by individuals inside or outside the organization; a manager may, for example, alter the rules through which decisions are made about hiring new staff in order to improve the quality of work. And sometimes they happen through gradual processes over time that no one is specifically aware of. The question arises, then, whether organizations evolve toward higher functioning based on signals from the environments in which they operate, or whether, on the contrary, organizational change is stochastic, with no gradient towards more effective functioning. Do changes within an organization add up over time to improved functioning? What kinds of social mechanisms might bring about such an outcome?

One way of addressing this topic is to consider organizations as mid-level social entities that are potentially capable of adaptation and learning. An organization has identifiable internal processes of functioning as well as a delineated boundary of activity. It has a degree of control over its functioning. And it is situated in an environment that signals differential success/failure through a variety of means (profitability, success in gaining adherents, improvement in market share, number of patents issued, ...). So the environment responds favorably or unfavorably, and change occurs.

Is there anything in this specification of the structure, composition, and environmental location of an organization that suggests the possibility or likelihood of adaptation over time in the direction of improvement of some measure of organizational success? Do institutions and organizations get better as a result of their interactions with their environments and their internal structure and actors?

There are a few possible social mechanisms that would support the possibility of adaptation towards higher functioning. One is the fact that purposive agents are involved in maintaining and changing institutional practices. Those agents are capable of perceiving inefficiencies and potential gains from innovation, and are sometimes in a position to introduce appropriate innovations. This is true at various levels within an organization, from the supervisor of a custodial staff to a vice president for marketing to a CEO. If the incentives presented to these agents are aligned with the important needs of the organization, then we can expect that they will introduce innovations that enhance functioning. So one mechanism through which we might expect that organizations will get better over time is the fact that some agents within an organization have the knowledge and power necessary to enact changes that will improve performance, and they sometimes have an interest in doing so. In other words, there is a degree of intelligent intentionality within an organization that might work in favor of enhancement.

This line of thought should not be over-emphasized, however, because there are competing forces and interests within most organizations. Previous posts have focused on current organizational theory based on the idea of a "strategic action field" of insiders and outsiders who determine the activities of the organization (Fligstein and McAdam, Crozier; link, link). This framework suggests that the structure and functioning of an organization is not wholly determined by a single intelligent actor ("the founder"), but is rather the temporally extended result of interactions among actors in the pursuit of diverse aims. This heterogeneity of purposive actions by actors within an institution means that the direction of change is indeterminate; it is possible that the coalitions that form will bring about positive change, but the reverse is possible as well.

And in fact, many authors and participants have pointed out that agents' interests are often not aligned with the priorities and needs of the organization. Jack Knight offers a persuasive critique of the idea that organizations and institutions tend to increase in their ability to provide collective benefits in Institutions and Social Conflict. CEOs who have a financial interest in a rapid stock price increase may take steps that worsen functioning for short-term market gain; supervisors may avoid work-flow innovations because they don't want the headache of an extended change process; vice presidents may deny information to other divisions in order to enhance appreciation of the efforts of their own division. Here is a short description from Knight's book of the way that institutional adjustment occurs as a result of conflict among players of unequal powers:
Individual bargaining is resolved by the commitments of those who enjoy a relative advantage in substantive resources. Through a series of interactions with various members of the group, actors with similar resources establish a pattern of successful action in a particular type of interaction. As others recognize that they are interacting with one of the actors who possess these resources, they adjust their strategies to achieve their best outcome given the anticipated commitments of others. Over time rational actors continue to adjust their strategies until an equilibrium is reached. As this becomes recognized as the socially expected combination of equilibrium strategies, a self-enforcing social institution is established. (Knight, 143)
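The adjustment process Knight describes can be made concrete with a deliberately crude sketch (my own illustration with invented numbers, not Knight's formal model): two actors repeatedly state demands over a divisible surplus, and the actor with the weaker fallback position concedes first, so the demands settle into an asymmetric but self-enforcing split.

    # Two actors split a surplus of 100 units. The resource-advantaged actor has
    # a better fallback payoff if bargaining breaks down, so the weaker actor
    # concedes first. All numbers are arbitrary.
    SURPLUS = 100
    fallback_strong, fallback_weak = 60, 10   # payoffs if no agreement is reached
    demand_strong, demand_weak = 90, 90       # opening demands
    STEP = 5                                  # size of each concession

    while demand_strong + demand_weak > SURPLUS:
        if demand_weak - STEP > fallback_weak:
            demand_weak -= STEP               # weaker party still prefers conceding
        elif demand_strong - STEP > fallback_strong:
            demand_strong -= STEP             # stronger party concedes only late
        else:
            break                             # neither side prefers further concession

    print("equilibrium split:", demand_strong, demand_weak)   # -> 85 15

The resting point (85/15 here) reflects the initial inequality of resources rather than any improvement in collective benefit, which is the thrust of Knight's critique.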
A very different possible mechanism is unit selection, through which more successful innovations or firms survive and less successful innovations and firms fail. This is the premise of the evolutionary theory of the firm (Nelson and Winter, An Evolutionary Theory of Economic Change). In a competitive market, firms with low internal efficiency will have a difficult time competing on price with more efficient firms; so low-efficiency firms will occasionally go out of business. Here the question of "units of selection" arises: is it firms over which selection operates, or is it lower-level innovations that are the object of selection?

Geoffrey Hodgson provides a thoughtful review of this set of theories here, as part of what he calls "competence-based theories of the firm". Here is Hodgson's diagram of the relationships that exist among several different approaches to the study of the firm.



The market mechanism does not work very well as a selection mechanism for some important categories of organizations -- government agencies, legislative systems, or non-profit organizations. This is so because the criterion of selection is "profitability / efficiency within a competitive market", and government and non-profit organizations are not significantly subject to the workings of a market.

In short, the answer to the fundamental question here is mixed. There are factors that unquestionably work to enhance effectiveness in an organization. But these factors are weak and defeasible, and the countervailing factors (internal conflict, divided interests of actors, slackness of the corporate marketplace) leave open the possibility that institutions change but do not evolve in a consistent direction. And the glaring dysfunctions that have afflicted many organizations, both corporate and governmental, make this conclusion even more persuasive. Perhaps what demands explanation is the rare case where an organization achieves a high level of effectiveness and consistency in its actions, rather than the many cases that come to mind of dysfunctional organizational activity.

(The examples of organizational dysfunction that come to mind are many -- the failures of regulation of the civilian nuclear industry (Perrow, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters); the failure of US anti-submarine warfare in World War II (Cohen, Military Misfortunes: The Anatomy of Failure in War); and the failure of chemical companies to ensure safe operations of their plants (Shrivastava, Bhopal: Anatomy of a Crisis). Here is an earlier post that addresses some of these examples; link. And here are several earlier posts on the topic of institutional change and organizational behavior; link, link.)

Sunday, February 19, 2017

Designing and managing large technologies


What is involved in designing, implementing, coordinating, and managing the deployment of a large new technology system in a real social, political, and organizational environment? Here I am thinking of projects like the development of the SAGE early warning system, the Affordable Care Act, or the introduction of nuclear power into the civilian power industry.

Tom Hughes described several such projects in Rescuing Prometheus: Four Monumental Projects That Changed the Modern World. Here is how he describes his focus in that book:
Telling the story of this ongoing creation since 1945 carries us into a human-built world far more complex than that populated earlier by heroic inventors such as Thomas Edison and by firms such as the Ford Motor Company. Post-World War II cultural history of technology and science introduces us to system builders and the military-industrial-university complex. Our focus will be on massive research and development projects rather than on the invention and development of individual machines, devices, and processes. In short, we shall be dealing with collective creative endeavors that have produced the communications, information, transportation, and defense systems that structure our world and shape the way we live our lives. (3)
The emphasis here is on size, complexity, and multi-dimensionality. The projects that Hughes describes include the SAGE air defense system, the Atlas ICBM, Boston's Central Artery/Tunnel project, and the development of ARPANET. Here is an encapsulated description of the SAGE process:
The history of the SAGE Project contains a number of features that became commonplace in the development of large-scale technologies. Transdisciplinary committees, summer study groups, mission-oriented laboratories, government agencies, private corporations, and systems-engineering organizations were involved in the creation of SAGE. More than providing an example of system building from heterogeneous technical and organizational components, the project showed the world how a digital computer could function as a real-time information-processing center for a complex command and control system. SAGE demonstrated that computers could be more than arithmetic calculators, that they could function as automated control centers for industrial as well as military processes. (16)
Mega-projects like these require coordinated efforts in multiple areas -- technical and engineering challenges, business and financial issues, regulatory issues, and numerous other areas where innovation, discovery, and implementation are required. In order to be successful, the organization needs to make realistic judgments about questions for which there can be no certainty -- the future development of technology, the needs and preferences of future businesses and consumers, and the pricing structure that will exist for the goods and services of the industry in the future. And because circumstances change over time, the process needs to be able to adapt to important new elements in the planning environment.

There are multiple dimensions of projects like these. There is the problem of establishing the fundamental specifications of the project -- capacity, quality, functionality. There is the problem of coordinating the efforts of a very large team of geographically dispersed scientists and engineers, whose work is deployed across various parts of the problem. There is the problem of fitting the cost and scope of the project into the budgetary envelope that exists for it. And there is the problem of adapting to changing circumstances during the period of development and implementation -- new technology choices, new economic circumstances, significant changes in demand or social need for the product, large shifts in the costs of inputs into the technology. Obstacles in any of these diverse areas can lead to impairment or failure of the project.

Most of the cases mentioned here involve engineering projects sponsored by the government or the military. And the complexities of these cases are instructive. But there are equally complex cases that are implemented in a private corporate environment -- for example, the development of next-generation space vehicles by SpaceX. And the same issues of planning, coordination, and oversight arise in the private sector as well.

The most obvious thing to note in projects like these -- and many other contemporary projects of similar scope -- is that they require large teams of people with widely different areas of expertise and an ability to collaborate across disciplines. So a key part of leadership and management is to solve the problem of securing coordination around an overall plan across the numerous groups; updating plans in face of changing circumstances; and ensuring that the work products of the several groups are compatible with each other. Moreover, there is the perennial challenge of creating arrangements and incentives in the work environment -- laboratory, design office, budget division, logistics planning -- that stimulate the participants to high-level creativity and achievement.

This topic is of interest for practical reasons -- as a society we need to be confident in the effectiveness and responsiveness of the planning and development that goes into large projects like these. But it is also of interest for a deeper reason: the challenge of attributing rational planning and action to a very large and distributed organization at all. When an individual scientist or engineer leads a laboratory focused on a particular set of research problems, it is possible for that individual (with assistance from the program and lab managers hired for the effort) to keep the important scientific and logistical details in mind. It is an individual effort. But the projects described here are sufficiently complex that there is no individual leader who has the whole plan in mind. Instead, the "organizational intentionality" is embodied in the working committees, communications processes, and assessment mechanisms that have been established.

It is interesting to consider how students, both undergraduate and graduate, can come to have a better appreciation of the organizational challenges raised by large projects like these. Almost by definition, study of these problem areas in a traditional university curriculum proceeds from the point of view of a specialized discipline -- accounting, electrical engineering, environmental policy. But the view provided from a discipline is insufficient to give the student a rich understanding of the complexity of the real-world problems associated with projects like these. It is tempting to think that advanced courses for engineering and management students could be devised making extensive use of detailed case studies as well as simulation tools that would allow students to gain a more adequate understanding of what is needed to organize and implement a large new system. And interestingly enough, this is a place where the skills of humanists and social scientists are perhaps even more essential than the expertise of technology and management specialists. Historians and sociologists have a great deal to add to a student's understanding of these complex, messy processes.

(Martin Filler's review in the New York Review of Books of three recent books on the massive project in lower Manhattan to rebuild the World Trade Center illustrates some of the political and organizational challenges that stand in the way of large, complex projects; link.)

Tuesday, January 17, 2017

Signals intelligence and the management of military competition


In the past few years many observers have been alarmed by the high-tech realities of cyber-security, cyber-spying, and cyber-warfare. Current attention focuses on the apparent impunity with which government-sponsored intruders have managed to penetrate and exploit the computer systems of government and corporate organizations -- often extracting vast quantities of sensitive or classified information over extended periods of time. The Sony intrusion and the Office of Personnel Management intrusion represent clear examples of each (link, link). Gordon Corera's Cyberspies: The Secret History of Surveillance, Hacking, and Digital Espionage provides a very interesting description of the contemporary realities of cyber-spying by governments and private intruders.

It is very interesting to realize that the cat-and-mouse game of using cryptography, electronic signals collection, and intelligence analysis to read an adversary's intentions and communications has a long history, and that it gave rise to problems strikingly similar to those we currently face. A very good recent book that conveys a detailed narrative of the development of signals intelligence and cryptography since World War II is Stephen Budiansky's Code Warriors: NSA's Codebreakers and the Secret Intelligence War Against the Soviet Union. The book offers a surprisingly detailed account of the formation and management of the National Security Agency during the Truman presidency and the sophisticated efforts devoted to penetrating military and diplomatic codes after the Enigma successes of Bletchley Park.

There are several particularly interesting lessons to be learned from Code Warriors. One is a recognition of the remarkable resourcefulness and technical sophistication that were built into the signals intelligence establishment in the 1940s and 1950s. Many of us think primarily of the achievements of Bletchley Park and the breaking of code systems like Enigma during World War II. But signals intelligence went far beyond cryptography. For example, a great deal of valuable intelligence resulted from "traffic analysis" -- the study of the timing, origin, and volume of encrypted messages rather than their content. Even without being able to read the messages themselves, analysts could draw inferences about military activity. This was an early version of the metadata analysis of email and phone calls.
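To make the logic concrete, here is a minimal sketch of volume-and-origin analysis. The call signs, locations, dates, and threshold below are invented for illustration (they are not drawn from Budiansky); the point is only that activity can be inferred without reading a single message:

```python
# Toy traffic-analysis sketch: message *content* is never read; only the
# time, origin, and volume of intercepted transmissions are used.
from collections import defaultdict
from datetime import datetime

# Hypothetical intercept log: (timestamp, transmitter call sign, DF location)
intercepts = [
    ("1951-03-01 02:10", "KRG7", "Vladivostok"),
    ("1951-03-01 02:55", "KRG7", "Vladivostok"),
    ("1951-03-02 01:40", "KRG7", "Vladivostok"),
    ("1951-03-03 01:15", "KRG7", "Vladivostok"),
    ("1951-03-03 01:30", "KRG7", "Vladivostok"),
    ("1951-03-03 01:45", "KRG7", "Vladivostok"),
    ("1951-03-03 02:00", "KRG7", "Vladivostok"),
]

# Count traffic volume per transmitter per day.
volume = defaultdict(int)
for ts, callsign, location in intercepts:
    day = datetime.strptime(ts, "%Y-%m-%d %H:%M").date()
    volume[(callsign, location, day)] += 1

# Group the daily counts by station, then flag days whose volume is well
# above that station's average -- a surge that might suggest exercises or
# troop movements, even though every message remains unreadable.
by_station = defaultdict(list)
for (callsign, location, day), n in volume.items():
    by_station[(callsign, location)].append((day, n))

for (callsign, location), days in by_station.items():
    avg = sum(n for _, n in days) / len(days)
    for day, n in days:
        if n >= 1.5 * avg:
            print(f"{day}: unusual traffic surge from {callsign} ({location}): "
                  f"{n} messages (average {avg:.1f})")
```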

Another surprise was the ability of communications experts in the intelligence establishment of the 1950s to mount "side-channel" attacks on adversaries' communications equipment (multi-channel radio teletype machines, for example). By recording the electromagnetic emissions, power fluctuations, and acoustic patterns of code machines, typewriters, and teletype machines, it was possible to reconstruct the plain text passing through these devices.
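The underlying logic can be sketched in a deliberately simplified way. The per-character "signatures" below are invented, and real attacks of the period exploited far richer timing, frequency, and acoustic information; the sketch only illustrates the template-matching idea of mapping observed emissions back to plaintext characters:

```python
# Toy emission-matching sketch (invented numbers): build a reference library
# of per-character signatures from a machine you control, then match each
# intercepted signature to its nearest reference character.

# Hypothetical reference library: one characteristic emission feature per character.
reference = {"A": 0.91, "T": 0.47, "K": 0.66, " ": 0.12, "C": 0.33}

def match(feature, library):
    """Return the library character whose signature is closest to the observation."""
    return min(library, key=lambda ch: abs(library[ch] - feature))

# Hypothetical intercepted sequence of noisy emission features.
observed = [0.90, 0.48, 0.45, 0.93, 0.31, 0.65]

plaintext = "".join(match(f, reference) for f in observed)
print(plaintext)  # -> "ATTACK" when the noise is small enough
```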

Most interesting for readers of Understanding Society, however, is the large number of problems of organization, management, and leadership that an effective intelligence service had to solve. Several proved particularly intractable. Inter-service rivalries were an enormous obstacle to the effective collection, analysis, and use of signals intelligence. Motivating and retaining civilian experts within a large military research organization was a second. And defending against the misappropriation of documents and secrets by trusted insiders was a third.

The problem of inter-agency rivalries and competition was debilitating and intractable. Army and Navy intelligence bureaus were enormously reluctant to subordinate their efforts to a single prioritized central agency. And this failure to cooperate and share information and processes led to substantial intelligence shortfalls.
The 1946 agreement between the Army and Navy to “coordinate” their separate signals intelligence operations had merely sidestepped glaring deficiencies in the entire arrangement, which was quickly proving itself unequal to the new technical and intelligence challenges they faced in attacking the Russian problem. (lc 1933)
But AFSA’s seventy-six-hundred-person staff and $35 million budget remained a small share of the total enterprise, and both the Army and Air Force cryptologic agencies continued to grab important projects for themselves. ASAPAC and USAFSS both duplicated AFSA’s work on Soviet and Chinese codes throughout the Korean War, and simply ignored attempts by AFSA to take charge of field processing within the theater. The Air Force had meanwhile established its headquarters of USAFSS at Brooks Air Force Base in Texas, a not too subtle attempt to escape from the Washington orbit altogether. (lc 2933)

AFSA was powerless to prevent even the most obvious duplication of effort: for over a year the Army and the Air Force both insisted on intercepting Russian and Chinese air communications, and it was not until March 1952, after months of negotiations, that ASA finally agreed to leave the job to the Air Force. The Navy meanwhile flatly refused to put its worldwide network of direction-finding stations—which provided the single most important source of information on the location and movement of Soviet surface ships and submarines—under central control. (lc 2949)
Also challenging was the problem of incorporating smart, innovative civilian experts into what had become rigid, hierarchical military organizations. Keeping these civilians -- often PhDs in mathematics -- motivated and productive within the strictures of a post-war military bureaucracy was exceptionally difficult. During WWII the atmosphere was conducive to innovative work:
At GC&CS and Arlington Hall in particular, formal lines of authority had never counted for much during the war; getting the job done was what mattered, and in large part because no one planned to make a career of the work, no one was very career-minded about office politics or promotion or pay or protecting their bureaucratic turf. Cecil Phillips remembered wartime Arlington Hall as a true “meritocracy” where a sergeant, who in a considerable number of cases might have a degree from MIT or Harvard or some other top school, and a lieutenant might work side by side as equals on the same problem and no one thought much about it. (lc 1417)
But after the war the bureaucratic military routines became a crushing burden:
At ASA, peace brought a flood of pettifogging orders, policy directives, and procedural instructions, accompanied by a succession of martinet junior officers who rotated in and out and often knew nothing about cryptanalysis but were sticklers for organization, military protocol, and the chain of command. Lengthy interoffice memoranda circulated dissecting the merits of developing a personnel handbook, or analyzing whether a proposed change in policy that would allow civilian employees of Arlington Hall to be admitted to the post movie theater was consistent with Paragraph 10, AR 210-389 of the Army Regulations. “Low pay and too many military bosses” would be a recurring complaint from ASA’s civilian workforce over the next few years, along with a sense that no matter how much experience they had or how qualified they were, the top positions in each division would always go to a less qualified Army officer. (lc 1430)
The problem of coordinating, directing, and managing these high-talent scientists proved to be an ever-challenging task for NSA as well:
Among the administrative nightmares of the explosively growing, disjointed, and highly technical top-secret organization that Canine inherited was a notable lack of skilled managers. That was a failing common to creative and technical enterprises, which always tended to attract people more at home dealing with abstract ideas than with their fellow human beings, but it was especially acute in the very abstract world of cryptanalysis. “I had a terrible time finding people that could manage,” Canine related. “We were long on technical brains at NSA and we were very short on management brains.” 50 The splintering of the work into hundreds of separate problems, each isolated technically and for security reasons from one another, exacerbated the difficulties of trying to assert managerial control on an organization made up of thousands of individualistic thinkers who marched to no identifiable drum known to management science. (lc 3582)
And of course the problem of insider spying turned out to be essentially insurmountable, from the defection of NSA employees William Martin and Bernon Mitchell in 1960, to the spy ring run by John Walker from the late 1960s to 1985, to the secret collection and publication of documents by Edward Snowden in 2013. Kim Philby comes into the story as well, having managed to position himself in Washington in a job that allowed him to collect and pass on the intelligence community's most intimate secrets (including the current status of its ability to decrypt Soviet codes and the progress being made in identifying Soviet agents within the US).

The agency's commitment to the polygraph as a way of evaluating employees' loyalty was, according to Budiansky, another source of organizational failure: the polygraph had no scientific validity, and the confidence it inspired permitted the agency's security infrastructure to forgo other, more reliable ways of combating insider spying.
As subsequent events would make all too clear, the touching faith that a piece of Edwardian pseudoscientific electrical gadgetry could safeguard the nation’s most important secrets would prove farcically mistaken, for almost every one of the real spies to betray NSA in the ensuing years passed a polygraph interview with flying colors, while obvious signs that in retrospect should have set off alarm bells about their behavior were blithely ignored, largely due to such misplaced confidence in hocus-pocus. (kl 3355)
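Quite apart from the polygraph's lack of scientific validity, any screening test for something as rare as insider espionage faces a base-rate problem. A back-of-the-envelope calculation with invented numbers (not figures from Budiansky) shows why confidence in such screening would be misplaced even if the test were assumed to be fairly accurate:

```python
# Bayes calculation with invented numbers: even a test that is right 90% of
# the time is nearly useless when the condition it screens for (an insider
# spy) is very rare.

prior = 10 / 50_000          # assumption: 10 actual spies in a 50,000-person workforce
sensitivity = 0.90           # assumption: P(test flags | spy)
false_positive_rate = 0.10   # assumption: P(test flags | loyal employee)

p_flag = sensitivity * prior + false_positive_rate * (1 - prior)
p_spy_given_flag = sensitivity * prior / p_flag

print(f"Fraction of flagged employees who are actually spies: {p_spy_given_flag:.3%}")
# ~0.2%: nearly every flagged employee is loyal, while 1 spy in 10 still slips through.
```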
Budiansky makes it clear that the extreme secrecy embedded within NSA was one of the agency's organizational and political weaknesses. Its activities were kept secret from the country's political authorities, and the agency was sometimes used to conceal intelligence considered harmful to those authorities. The handling of intelligence during the Tonkin Gulf crisis is a particularly clear example: intercept data were selectively used to support the administration's need for an incident that could serve as a cause for war.
A classified, searingly honest accounting by NSA historian Robert J. Hanyok in 2001 found that in bolstering the administration’s version of events, NSA summary reports made use of only 15 of the relevant intercepts in its files, suppressing 122 others that all flatly contradicted the now “official” version of the August 4 events. Translations were altered; in one case two unrelated messages were combined to make them appear to have been from the same message; one of the NSA summary reports that did include a mention of signals relating to a North Vietnamese salvage operation obfuscated the timing to hide the fact that one of the recovered boats was being taken under tow at the very instant it was supposedly attacking the Maddox and Turner Joy. The original Vietnamese-language version of the August 4 attack message that had triggered the Critic alert meanwhile mysteriously vanished from NSA’s files. (kl 5096)
Budiansky is forthright in identifying the weaknesses and excesses of NSA and the intelligence services. But he also makes it clear how essential these capabilities are, from allowing the US to assess Soviet intentions during the Cuban Missile Crisis to directing aircraft toward hostile fighters in Korea and Vietnam on the basis of penetrated air-to-air radio networks. So the hard question for Budiansky, and for us as citizens, is how to structure and constrain the collection of intelligence so that it serves the goal of defending the country against attack without deviating into administrative chaos and politicized misdirection. Many other expert organizations exhibit very similar dysfunctions, from advanced civilian scientific laboratories to modern corporate IT organizations. (Here is a discussion of Paul Rabinow's ethnography of the Cetus Corporation, the biotech research firm that invented PCR; link.)

Tuesday, October 25, 2016

Rational choice institutionalism


Where do institutions come from? And what kinds of social forces are at work to stabilize them once they are up and running? These are questions that historical institutionalists like Kathleen Thelen have considered in substantial depth (link, link, link). But the rational-choice paradigm has also offered answers to these questions. The basic idea presented by the RCT paradigm is that institutions are the result of purposive agents coping with existential problems, forming alliances, and pursuing their interests in a rational way. James Coleman is one of the exponents of this approach in Foundations of Social Theory, where he treats institutions and norms as coordinated and mutually reinforcing patterns of individual behavior (link).

An actor-centered theory of institutions requires a substantial amount of bootstrapping: we need an account of how a set of rules and practices could have emerged from the purposive but often conflictual activities of individuals, and we need a similar account of how those rules are stabilized and enforced by individuals who have no inherent interest in the stability of the rules within which they act. Further, we need to take account of well-known conflicts between private and public benefits, short-term and long-term benefits, and intended and unintended benefits. Rational-choice theorists since Mancur Olson in The Logic of Collective Action: Public Goods and the Theory of Groups have made it clear that we cannot explain social arrangements on the basis of the collective benefits that they provide; rather, we need to show how those arrangements result from relatively myopic, relatively self-interested actors with a bounded ability to foresee consequences.
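Olson's logic can be made concrete with the standard linear public-goods game from the textbook literature. This is a sketch with invented numbers, not Olson's own formalism:

```python
# Linear public-goods game (invented parameters): each unit contributed costs
# the contributor 1 but returns only `mpcr` to the contributor, even though it
# returns n * mpcr > 1 to the group as a whole.

n = 10          # group size
endowment = 100
mpcr = 0.3      # marginal per-capita return from the public good

def payoff(my_contribution, others_total):
    # Keep what is not contributed, plus mpcr times total group contributions.
    return endowment - my_contribution + mpcr * (my_contribution + others_total)

others = (n - 1) * endowment      # suppose everyone else contributes fully
print(payoff(0, others))          # free rider:   100 + 0.3 * 900  = 370.0
print(payoff(endowment, others))  # contributor:    0 + 0.3 * 1000 = 300.0

# Yet universal contribution beats universal free riding for every member:
print(payoff(endowment, others))  # all contribute:     300.0 each
print(payoff(0, 0))               # no one contributes: 100.0 each
```

Whatever the others do, each narrowly rational actor does better by free riding, even though universal contribution would leave everyone better off; this is why the existence of a collective benefit cannot by itself explain the emergence of the cooperative arrangement.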

Ken Shepsle is a leading advocate for a rational-choice theory of institutions within political science. He offers an exposition of his thinking in his contribution to The Oxford Handbook of Political Institutions (link). He distinguishes between institutions as exogenous and institutions as endogenous. The first conception takes the rules and practices of an institution as fixed and external to the individuals who operate within them, while the second looks at the rules and practices as being the net result of the intentions and actions of those individuals themselves. On the second view, it is open to the individuals within an activity to attempt to change the rules; and one set of rules will perhaps have better results for one set of interests than another. So the choice of rules in an activity is not a matter of indifference to the participants. (For example, untenured faculty might undertake a campaign to change the way the university evaluates teaching in the context of the tenure process, or to change the relative weights assigned to teaching and research.) Shepsle also distinguishes between structured and unstructured institutions -- a distinction that other authors characterize as "formal/informal". The distinction has to do with the degree to which the rules of the activity are codified and reinforced by strong external pressures. Shepsle encompasses various informal solutions to collective action problems under the rubric of unstructured institutions -- fluid solutions to a transient problem.

This description of institutions begins to frame the problem, but it doesn't go very far. In particular, it doesn't provide much insight into the dynamics of conflict over rule-setting among parties with different interests in a process. Other scholars have pushed the analysis further.

French sociologists Crozier and Friedberg address this problem in Actors and Systems: The Politics of Collective Action (1980 [1977]). Their premise is that actors within organizations have substantially more agency and freedom than they are generally afforded by orthodox organization theory, and we can best understand the workings and evolution of the organization as (partially) the result of the strategic actions of the participants (instead of understanding the conduct of the participants as a function of the rules of the organization). They look at institutions as solutions to collective action problems -- tasks or performances that allow attainment of a goal that is of interest to a broad public, but for which there are no antecedent private incentives for cooperation. Organized solutions to collective problems -- of which organizations are key examples -- do not emerge spontaneously; instead, "they consist of nothing other than solutions, always specific, that relatively autonomous actors have created, invented, established, with their particular resources and capacities, to solve these challenges for collective action" (15). And Crozier and Friedberg emphasize the inherent contingency of these particular solutions; there are always alternative solutions, neither better nor worse. This is a rational-choice analysis, though couched in sociological terms rather than economists' terms. (Here is a more extensive discussion of Crozier and Friedberg; link.)

Jack Knight brings conflict and power into the rational-choice analysis of the emergence of institutions in Institutions and Social Conflict.
I argue that the emphasis on collective benefits in theories of social institutions fails to capture crucial features of institutional development and change. I further argue that our explanations should invoke the distributional effects of such institutions and the conflict inherent in those effects. This requires an investigation of those factors that determine how these distributional conflicts are resolved. (13-14)
Institutions are not created to constrain groups or societies in an effort to avoid suboptimal outcomes but, rather, are the by-product of substantive conflicts over the distributions inherent in social outcomes. (40)
Knight believes that we need microfoundations for the ways in which institutions emerge and behave (14), and he seeks those mechanisms in the rational choices of the participants in the field of interaction from which the institution emerges.
Actors choose their strategies under various circumstances. In some situations individuals regard the rest of their environment, including the actions of others, as given. They calculate their optimal strategy within the constraints of fixed parameters.... But actors are often confronted by situations characterized by an interdependence between other actors and themselves.... Under these circumstances individuals must choose strategically by incorporating the expectations of the actions of others into their own decision making. (17)
This implies, in particular, that we should not expect socially optimal or efficient outcomes in the emergence of institutions; rather, we should expect institutions that differentially favor the interests of some groups and disfavor those of others -- even if the social total is lower than it would be under a more egalitarian arrangement.
I conclude that social efficiency cannot provide the substantive content of institutional rules. Rational self-interested actors will not be the initiators of such rules if they diminish their own utility. Therefore rational-choice explanations of social institutions based on gains in social efficiency fail as long as they are grounded in the intentions of social actors. (34)
Knight's work explicitly refutes the Panglossian (or Smithian) assumption sometimes associated with rational choice theory and microeconomics: the idea that individually rational action leads to a collectively efficient outcome (the invisible hand). This may be true in the context of certain kinds of markets, but it is not generally true in the social and political world. And Knight shows in detail how the assumption fails in the case of the emergence and ongoing functioning of institutions.
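Knight's distributional point can be illustrated with a toy "choice of rules" game -- a sketch with invented payoffs, not Knight's own model. Both rules are stable conventions once in place, but they distribute the gains differently, and nothing guarantees that the convention that prevails is the one with the higher joint total:

```python
# Toy two-group coordination game over which rule becomes the institution.
# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    ("rule_A", "rule_A"): (5, 2),   # rule A favors the row group
    ("rule_B", "rule_B"): (3, 6),   # rule B favors the column group
    ("rule_A", "rule_B"): (0, 0),   # no shared rule: coordination fails
    ("rule_B", "rule_A"): (0, 0),
}
strategies = ["rule_A", "rule_B"]

def pure_nash_equilibria(payoffs, strategies):
    """Enumerate pure-strategy Nash equilibria of a two-player game."""
    eq = []
    for r in strategies:
        for c in strategies:
            u_r, u_c = payoffs[(r, c)]
            row_ok = all(payoffs[(r2, c)][0] <= u_r for r2 in strategies)
            col_ok = all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)
            if row_ok and col_ok:
                eq.append((r, c))
    return eq

for r, c in pure_nash_equilibria(payoffs, strategies):
    u_r, u_c = payoffs[(r, c)]
    print(f"stable convention: ({r}, {c}), payoffs {u_r}/{u_c}, joint total {u_r + u_c}")
# Both conventions are equilibria. (rule_A, rule_A) yields a lower joint total (7)
# than (rule_B, rule_B) (9), yet it can still prevail if the row group has the
# bargaining power to hold out for it -- efficiency does not decide the outcome.
```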

Rational choice theory is one particular and specialized version of actor-centered social science (link). It differs from other approaches in the narrowness of its assumptions about the actor's form of agency: it posits thin economic rationality rather than a broader conception of agency or practical rationality (link). What seems clear to me is that we need to take an actor-centered approach if we want to understand institutions -- either their emergence or their continuing functioning and change. So the approach taken by rational-choice theorists is ontologically correct. If RCT fails to provide an adequate analysis of institutions, it is because its underlying theory of agency is fundamentally unrealistic about human actors.