As we have noted in previous posts, social outcomes are highly path-dependent and contingent (link, link, link, link). This implies that it is difficult to predict the consequences of even a single causal intervention within a complex social environment involving numerous actors -- say, a new land use policy, a new state tax on services, or a sweeping cap-and-trade policy on CO2 emissions. And yet policy changes are specifically designed and chosen in order to bring about certain kinds of outcomes. We care about the future; we adopt policies to improve this or that feature of the future; and yet we have a hard time providing a justified forecast of the consequences of those policies.
This difficulty doesn't only affect policy choices; it also pertains to large-scale processes of social change like the democracy uprisings in the Middle East and North Africa. There are too many imponderable factors -- the behavior of the military, the reactions of other governments, the consequent strategies of internal political actors and parties (the Muslim Brotherhood in Egypt) -- so activists and academic experts alike are forced to concede that they don't really know what the consequences will be.
One part of this imponderability derives from the fact that social changes are conveyed through sets of individual and collective actors. The actors have a variety of motives and modes of reasoning, and the collective actors are forced to somehow aggregate the actions and wants of subordinate actors. And it isn't possible to anticipate with confidence the choices that the actors will make in response to changing circumstances. At a very high level of abstraction, it is the task of game theory to model strategic decision-making over a sequence of choices (problems of strategic rationality); but the tools of game theory are too abstract to allow modeling of specific complex social interactions.
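To see both the power and the abstraction of the game-theoretic idiom, here is a minimal sketch in Python of an iterated prisoner's dilemma between two stylized actors. The payoff values and the two strategies are illustrative assumptions of mine, not a model of any particular social interaction discussed here:

```python
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """Defect unconditionally, whatever the opponent has done."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []   # moves made by A and by B, respectively
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy sees the other's past
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))   # -> (9, 14) over ten rounds
```

The model captures strategic anticipation precisely; but everything sociologically interesting -- shifting motives, organizational context, changing identities -- has been abstracted away, which is exactly the limitation noted above.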
A second feature of unpredictability in extended social processes derives from the fact that the agents themselves are not fixed and constant throughout the process. The experience of democracy activism potentially changes the agent profoundly -- so the expectations we would have had of his/her choices at the beginning may be very poorly grounded by the middle and end. Some possible changes may make a very large difference in outcomes -- actors may become more committed, more open to violence, more ready to compromise, more understanding of the grievances of other groups, ... This is sometimes described as endogeneity -- the causal components themselves change their characteristics as a consequence of the process.
So the actors change through the social process; but the same is often true of the social organizations and institutions that are involved in the process. Take contentious politics -- it may be that a round of protests begins around a couple of loose pre-existing organizations. As actors seek to achieve their political goals through collective action, they make use of the organizations for their communications and mobilization resources. But some actors may then also attempt to transform the organization itself -- to make it more effective or to make it more accommodating to the political objectives of this particular group of activists. (Think of Lenin as a revolutionary organizational innovator.) And through their struggles, they may elicit changes in the organizations of the "forces of order" -- the police may develop new tactics (kettling) and new sub-organizations (specialized intelligence units). So the process of change is likely enough to transform all the causal components as well -- the agents and their motivations as well as the surrounding institutions of mobilization and control. Rather than a set of billiard balls and iron rods with fixed properties and predictable aggregate consequences, we find a fluid situation in which the causal properties of each of the components of the process are themselves changing.
One way of trying to handle the indeterminacy and complexity of these sorts of causal processes is to give up on the goal of arriving at specific "point" predictions about outcomes and instead concentrate on tracing out a large number of possible scenarios, beginning with the circumstances, actors, and structures on the ground. In some circumstances we may find that there is a very wide range of possible outcomes; in others, a large percentage of the feasible scenarios or pathways may fall within a much narrower range. This kind of reasoning is familiar to economists and financial analysts in the form of Monte Carlo simulations. And it is possible that the approach can be used for modeling likely outcomes in more complex social processes as well -- war and peace, ethnic conflict, climate change, or democracy movements.
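As a hedged illustration of the scenario-ensemble idea, here is a toy Monte Carlo sketch in Python. The "outcome model" is entirely invented -- a single index driven by a few uncertain factors whose distributions are my own guesses -- but it shows the characteristic move: generate many scenarios and examine the distribution of outcomes rather than a single point prediction:

```python
import random

def simulate_outcome(rng):
    """One scenario: an invented outcome index driven by uncertain factors."""
    mobilization = rng.uniform(0.0, 1.0)   # strength of popular mobilization
    repression = rng.uniform(0.0, 1.0)     # intensity of the state's response
    shock = rng.gauss(0.0, 0.1)            # unmodeled contingencies
    return mobilization - 0.5 * repression + shock

rng = random.Random(42)
outcomes = sorted(simulate_outcome(rng) for _ in range(100_000))

# Compare the full range of simulated futures with the central 90%.
print(f"full range of outcomes:   {outcomes[0]:+.2f} to {outcomes[-1]:+.2f}")
print(f"90% of scenarios fall in: {outcomes[5_000]:+.2f} to {outcomes[94_999]:+.2f}")
```

Even though the full range of simulated outcomes is wide, the large majority of scenarios fall within a considerably narrower band -- the kind of conclusion the ensemble approach is after.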
Agent-based modeling is one component of approaches like these (link). This means taking into account a wide range of social factors -- agents, groups, organizations, institutions, states, popular movements -- and then modeling the consequences of these initial assumptions. Robert Axelrod and colleagues have applied a variety of modeling techniques to these efforts (link).
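To give a flavor of what a minimal agent-based model looks like, here is a sketch in the spirit of Granovetter's threshold model of collective behavior. The population size, the number of unconditional initiators, and the threshold distribution are illustrative assumptions:

```python
import random

def run_cascade(thresholds):
    """Iterate to a fixed point: an agent joins the protest once the
    share of current participants reaches the agent's own threshold."""
    n = len(thresholds)
    participating = 0
    while True:
        share = participating / n
        now = sum(1 for t in thresholds if t <= share)
        if now == participating:   # no new joiners: equilibrium reached
            return participating
        participating = now

rng = random.Random(0)
for trial in range(5):
    # ten agents with threshold 0.0 act unconditionally and may seed a cascade
    thresholds = [0.0] * 10 + [rng.random() for _ in range(990)]
    print(f"trial {trial}: {run_cascade(thresholds)} of 1000 agents mobilize")
```

Runs with different threshold draws stall at very different levels of mobilization -- a toy version of the path dependence and contingency discussed above. One could go further and let the thresholds themselves change as agents participate, capturing the endogeneity of the actors noted earlier.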
Another interesting example of this approach is underway at the RAND Pardee Center, summarized in a white paper called Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. Here is how the lead investigators describe the overall strategy of the effort:
This report describes and demonstrates a new, quantitative approach to long-term policy analysis (LTPA). These robust decisionmaking methods aim to greatly enhance and support humans’ innate decisionmaking capabilities with powerful quantitative analytic tools similar to those that have demonstrated unparalleled effectiveness when applied to more circumscribed decision problems. By reframing the question “What will the long-term future bring?” as “How can we choose actions today that will be consistent with our long-term interests?” robust decisionmaking can harness the heretofore unavailable capabilities of modern computers to grapple directly with the inherent difficulty of accurate long-term prediction that has bedeviled previous approaches to LTPA. (iii)
LTPA is an important example of a class of problems requiring decisionmaking under conditions of deep uncertainty—that is, where analysts do not know, or the parties to a decision cannot agree on, (1) the appropriate conceptual models that describe the relationships among the key driving forces that will shape the long-term future, (2) the probability distributions used to represent uncertainty about key variables and parameters in the mathematical representations of these conceptual models, and/or (3) how to value the desirability of alternative outcomes. (iii)

And here, in a nutshell, is how the approach is supposed to work:
This study proposes four key elements of successful LTPA:
• Consider large ensembles (hundreds to millions) of scenarios.
• Seek robust, not optimal, strategies.
• Achieve robustness with adaptivity.
• Design analysis for interactive exploration of the multiplicity of plausible futures.
These elements are implemented through an iterative process in which the computer helps humans create a large ensemble of plausible scenarios, where each scenario represents one guess about how the world works (a future state of the world) and one choice of many alternative strategies that might be adopted to influence outcomes. Ideally, such ensembles will contain a sufficiently wide range of plausible futures that one will match whatever future, surprising or not, does occur—at least close enough for the purposes of crafting policies robust against it. (xiii)
Thus, computer-guided exploration of scenario and decision spaces can provide a prosthesis for the imagination, helping humans, working individually or in groups, to discover adaptive near-term strategies that are robust over large ensembles of plausible futures. (xiv)

The hard work of this approach is to identify the characteristics of the policy levers, exogenous uncertainties, measures, and relationships (XLRM) in play. Then the analysis turns to identifying a very large number of possible scenarios, depending on the initial conditions and the properties of the actors and organizations. (This aspect of the analysis is analogous to multiple plays of a simulation game like SimCity.) Finally, the approach requires aggregating the large number of scenarios to allow the analysis to reach some conclusions about the distribution of futures entailed by the starting position and the characteristics of the actors and institutions. And the method attempts to assign a measure of "regret" to outcomes, in order to assess which policy steps taken today would lead to the least regrettable outcomes in the distant future.
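To make the regret idea concrete, here is a deliberately tiny sketch in Python. The three policies, three scenarios, and payoff numbers are invented placeholders for model outputs; the actual RAND analysis runs over enormous scenario ensembles, but the minimax-regret logic is the same in spirit:

```python
payoffs = {
    # policy: [payoff in boom, payoff in stagnation, payoff in crisis]
    "aggressive": [10, 2, -8],
    "moderate":   [ 6, 4,  0],
    "hedged":     [ 3, 3,  2],
}

n_scenarios = 3
# The best achievable payoff in each scenario, across all policies.
best = [max(row[i] for row in payoffs.values()) for i in range(n_scenarios)]

def max_regret(policy):
    """Worst-case regret: the largest shortfall from the best achievable
    payoff, taken over all scenarios."""
    return max(b - got for b, got in zip(best, payoffs[policy]))

for policy in sorted(payoffs, key=max_regret):
    print(f"{policy:>10}: max regret = {max_regret(policy)}")
# -> moderate: 4, hedged: 7, aggressive: 10; "moderate" is most robust
```

Note that the "aggressive" policy is best in the boom scenario yet carries the largest worst-case regret; the minimax-regret choice trades peak performance for protection against the worst futures -- the distinction between optimal and robust strategies that the report emphasizes.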
It appears, then, that there are computational tools and methods that may prove useful for social explanation and social prediction -- not of single outcomes, but of the range of outcomes that may be associated with a set of interventions, actors, and institutions.