Tuesday, February 1, 2011

Decision-making in complex systems

[Image source: The Financial Ninja]

How should we make intelligent decisions in contexts where the outcome depends jointly on the choices of other agents and is itself unpredictable?  Robert Axelrod and Michael Cohen address these issues in Harnessing Complexity: Organizational Implications of a Scientific Frontier.  They define a complex adaptive system in something like these terms: a body of causal processes and agents whose interactions lead to unpredictable outcomes.  The interactions among agents often have unforeseen consequences, and the agents themselves adapt their behavior based on past experience: "They interact in intricate ways that continually reshape their collective future."  Here is how Axelrod and Cohen put their question:
In a world where many players are all adapting to each other and where the emerging future is extremely hard to predict, what actions should you take? (xi)
This book is about designing organizations and strategies in complex settings, where the full consequences of actions may be hard -- even impossible -- to predict. (2)
Complexity and chaos are often used interchangeably; but Axelrod and Cohen distinguish sharply between them in these terms:
Chaos deals with situations such as turbulence that rapidly become highly disordered and unmanageable.  On the other hand, complexity deals with systems composed of many interacting agents.  While complex systems may be hard to predict, they may also have a good deal of structure and permit improvement by thoughtful intervention. (xv)
Here is a simple current example -- an assembly of 1000 Egyptian citizens in January 2011, interested in figuring out what to do in light of their longstanding grievances and the example of Tunisia. Will the group erupt into defiant demonstration or dissolve into private strategies of self-preservation?  The dynamics of the situation are fundamentally undetermined; the outcome depends on things like who speaks first, how later speakers are influenced by earlier speakers, whether the PA system is working adequately, which positions happen to have a critical mass of supporters, the degree to which the government can make credible threats of retaliation, the presence of experienced organizers, and a dozen other factors.  So we cannot predict whether this group will move towards resistance or accommodation, even when we assume that all present have serious grievances against the Egyptian state.  

The fact of path dependence comes into this understanding of complexity, in that the order of actions by the agents can influence the outcome.  So we could run the Egypt scenario forward multiple times and arrive at different outcomes each time.  We might imagine a tool along the lines of a Monte Carlo simulation that models the range of possible outcomes; and in the sorts of systems Axelrod and Cohen are interested in, the range of outcomes is very wide, with no single modal, most probable outcome at the core.
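
To make the thought experiment concrete, here is a minimal sketch (my illustration, not anything from Axelrod and Cohen): a threshold model in the spirit of Granovetter, in which each citizen joins the demonstration once enough others already have.  Holding the population of thresholds fixed and varying only the order of encounters reproduces the path dependence just described, and the Monte Carlo spread of outcomes is very wide.

    # Toy threshold model of the assembly (illustrative sketch only).
    # Citizen i joins once at least thresholds[i] others are demonstrating.
    import random

    rng = random.Random(42)
    n = 1000
    thresholds = [rng.randint(0, n) for _ in range(n)]  # one fixed population

    def run_once(order_rng):
        order = thresholds[:]
        order_rng.shuffle(order)      # only the order of encounters varies
        joined = 0
        for t in order:
            if t <= joined:           # join once enough others are visible
                joined += 1
        return joined

    # Same grievances, same people; different sequences, different outcomes.
    outcomes = [run_once(random.Random(s)) for s in range(500)]
    print(min(outcomes), max(outcomes))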

The difficulty of predicting the future development of a complex system derives in part from the adaptiveness of the agents who make it up; but it also derives from the non-linearity of causation in complex systems.  Small influences can have large effects; there is often a disproportion between the magnitude and direction of a cause and those of its effect.
What makes prediction especially difficult in these settings is that the forces shaping the future do not add up in a simple, systemwide manner.  Instead, their effects include nonlinear interactions among the components of the system.  The conjunction of a few small events can produce a big effect if their impacts multiply rather than add. (14)
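
The arithmetic behind that last sentence is easy to check in a toy calculation (mine, not the authors'): ten modest influences, each worth fifty percent, amount to a sixfold change if their effects add, but nearly a sixtyfold change if they multiply.

    # Small impacts that multiply rather than add produce a big joint effect.
    impacts = [1.5] * 10                        # ten modest influences, +50% each

    additive = 1 + sum(x - 1 for x in impacts)  # effects add: 6.0
    multiplicative = 1.0
    for x in impacts:
        multiplicative *= x                     # effects multiply: ~57.7
    print(additive, round(multiplicative, 1))
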
Decision theorists distinguish between situations of parametric rationality and strategic rationality.  In the former the decision maker is playing against nature, with a fixed set of probabilities and causal properties; in the latter the decision maker is playing against and with other rational agents, and the outcome for each depends upon the choices made by all. Game theory offers a mathematical framework for analyzing strategic rationality, while expected utility theory is advanced as a framework for analyzing choice under risk and uncertainty.  A fundamental finding of game theory is that every game that can be formulated in the canonical matrix of agents' strategies and joint outcomes, zero-sum or non-zero-sum, has at least one equilibrium (possibly in mixed strategies).  Whether those equilibria are discoverable by ordinary strategic reasoners is a separate question, so the behavioral relevance of the existence of an equilibrium set of strategies is limited.  And here is the key point: neither parametric rationality nor equilibrium-based strategic rationality helps much with the problem of decision-making within a complex adaptive system.
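
To fix ideas about what an "equilibrium" is, here is a minimal sketch (my example, not the book's): a brute-force search for pure-strategy Nash equilibria in a 2x2 prisoner's dilemma.  An equilibrium is a cell from which neither player can gain by deviating unilaterally; Nash's theorem guarantees an equilibrium in mixed strategies for every finite game, which this simple pure-strategy search may miss.

    # Brute-force pure-strategy Nash equilibria in a 2x2 prisoner's dilemma.
    from itertools import product

    C, D = "cooperate", "defect"
    # payoffs[(row, col)] = (row player's payoff, column player's payoff)
    payoffs = {(C, C): (3, 3), (C, D): (0, 5),
               (D, C): (5, 0), (D, D): (1, 1)}

    def is_nash(r, c):
        # Neither player can gain by deviating unilaterally.
        row_ok = all(payoffs[(r2, c)][0] <= payoffs[(r, c)][0] for r2 in (C, D))
        col_ok = all(payoffs[(r, c2)][1] <= payoffs[(r, c)][1] for c2 in (C, D))
        return row_ok and col_ok

    print([cell for cell in product((C, D), repeat=2) if is_nash(*cell)])
    # -> [('defect', 'defect')]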

The situation that Axelrod and Cohen describe here is an instance of strategic rationality, but it doesn't yield to the framework of mathematical game theory.  This is because, first, we can't attach determinate payoffs to combinations of strategies for the separate agents, a consequence of the unpredictability built into the idea of complexity; and, second, complex adaptive systems are usually in a dynamic process of change, so the system never attains an equilibrium state.

Axelrod and Cohen are hoping to provide counsel for how decision makers can "harness" complexity -- that is, how they can design policies and strategies that perhaps push a complex situation in a favorable direction, or that insulate an organization from the worst outcomes that the complex system may produce.
Harnessing complexity ... means deliberately changing the structure of a system in order to increase some measure of performance, and to do so by exploiting an understanding that the system itself is complex. (9)
Axelrod and Cohen make use of three high-level concepts to describe the development of complex adaptive systems: variation, interaction, and selection.  Variation is critical here, as it is in evolutionary biology, because it provides a source of potentially successful innovation -- in strategies, in organizations, in rules of action.  The idea of adaptation is central to their analysis -- in this case, the adaptation and modification of strategies by agents in light of current and past success.  Interaction occurs when agents and organizations intersect in the application of their strategies, often producing unforeseen consequences.  (Open-source software development is one example they examine, including the interactions that occurred as open-source innovations encountered closed-source ones.)  An organization or a population is best served, they argue, when there is a regular source of innovations (variation); when these innovations are implemented in the form of variant strategies; and when it is possible to cultivate the more successful variations and damp out the less successful ones (selection).  Here is how they summarize their view:
Agents, of a variety of types, use their strategies, in patterned interaction, with each other and with artifacts.  Performance measures on the resulting events drive the selection of agents and/or strategies through processes of error-prone copying and recombination, thus changing the frequencies of the types within the system.
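
That summary is close to a standard evolutionary algorithm, and a minimal sketch makes the mechanism concrete (my toy version, with an arbitrary stand-in performance measure, not anything from the book): better-performing strategies are copied with errors, shifting the frequencies of types over time.

    # Variation / interaction / selection in miniature: strategies are
    # bit-strings, copied in proportion to performance, with copying errors.
    import random

    rng = random.Random(0)
    POP, BITS, GENS, MUT = 100, 8, 50, 0.02

    def fitness(s):
        return sum(s)                # arbitrary placeholder performance measure

    pop = [[rng.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
    for gen in range(GENS):
        weights = [fitness(s) + 1 for s in pop]          # selection pressure
        parents = rng.choices(pop, weights=weights, k=POP)
        pop = [[b ^ (rng.random() < MUT) for b in s]     # error-prone copying
               for s in parents]

    print(sum(fitness(s) for s in pop) / POP)            # mean rises toward 8
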
And they arrive at eight rules of thumb for "harnessing complexity" when it comes to organizations and social policies:
  • Arrange organizational routines to generate a good balance between exploration and exploitation.
  • Link processes that generate extreme variation to processes that select with few mistakes in the attribution of credit.
  • Build networks of reciprocal interaction that foster trust and cooperation. 
  • Assess strategies in light of how their consequences can spread.
  • Promote effective neighborhoods.
  • Do not sow large failures when reaping small efficiencies.
  • Use social activity to support the growth and spread of valued criteria.
  • Look for shorter-term, finer-grained measures of success that can usefully stand in for longer-run, broader aims. (156-158)
So how should we understand these heuristics?  They function as an "operating manual" for leaders and policy makers attempting to bring about good effects within a population of adaptive agents.  And perhaps they are plausible meta-strategies for intervening in a complex social system.
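
The first of these heuristics, the balance between exploration and exploitation, has a standard formalization in the multi-armed bandit problem.  Here is a minimal epsilon-greedy sketch (my illustration, not the authors'): mostly exploit the best-known option, but keep exploring with a small probability so that better options can still be discovered.

    # Epsilon-greedy bandit: a standard model of exploration vs. exploitation.
    import random

    def epsilon_greedy(true_rates, steps=10000, epsilon=0.1, seed=0):
        rng = random.Random(seed)
        counts = [0] * len(true_rates)    # pulls per arm
        values = [0.0] * len(true_rates)  # running mean reward per arm
        total = 0.0
        for _ in range(steps):
            if rng.random() < epsilon:                    # explore
                arm = rng.randrange(len(true_rates))
            else:                                         # exploit
                arm = values.index(max(values))
            reward = 1.0 if rng.random() < true_rates[arm] else 0.0
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]
            total += reward
        return total / steps

    print(epsilon_greedy([0.2, 0.5, 0.8]))  # approaches roughly 0.8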

What is worrisome, though, is the implicit functionalism that seems to underlie the book: the idea that the rules are being made by agents of good will who have the long-term best interests of the population in mind.  But what happened to the predators -- the organized crime figures, the drug lords, the conspirators, the predatory businesses, the anti-democrats?  Won't they too be looking to exploit (harness) the workings of complexity?  Axelrod's earlier work on repeated prisoners' dilemmas explicitly took into account the availability of strategies designed to exploit the cooperators; and his work on cooperation emphatically makes the point that cooperation is often deployed for anti-social and predatory purposes (cartels, extortion rackets, and the like) (The Evolution of Cooperation: Revised Edition).  Shouldn't this counter-social agency be incorporated into this analysis of complex adaptive systems as well?  As Charles Tilly points out, crime and piracy also depend upon "trust networks" and innovative forms of predation (Trust and Rule).
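
A toy iterated prisoners' dilemma, in the spirit of Axelrod's tournaments (my sketch, not his code), shows how a predatory strategy fares against naive and conditional cooperators: the unconditional cooperator is exploited round after round, while tit-for-tat contains the damage after the first move.

    # Iterated prisoners' dilemma: a predator against two kinds of cooperator.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def always_cooperate(history):  return "C"
    def always_defect(history):     return "D"
    def tit_for_tat(history):       return history[-1] if history else "C"

    def play(a, b, rounds=200):
        ha, hb, sa, sb = [], [], 0, 0
        for _ in range(rounds):
            ma, mb = a(hb), b(ha)        # each sees the other's past moves
            pa, pb = PAYOFF[(ma, mb)]
            sa, sb = sa + pa, sb + pb
            ha.append(ma); hb.append(mb)
        return sa, sb

    print(play(always_cooperate, always_defect))  # (0, 1000): pure exploitation
    print(play(tit_for_tat, always_defect))       # (199, 204): damage contained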

During the 1980s the Reagan administration wanted to create a "Star Wars" anti-missile shield, and some of its policy makers argued that we could solve the technical challenges because the U.S. had succeeded in putting a man on the moon.  But critics of this military space strategy rejoined that "the moon didn't fight back": Soviet scientists and engineers were fully capable of adapting their ICBM technologies to evade the defensive capabilities of a missile shield.  There seems to be something of the same blind spot in this analysis of social complexity; predation and the common good are in competition with each other, and neither has a decisive advantage.

3 comments:

  1. Off topic, but I thought you would like to see the online cliodynamics journal

    http://escholarship.org/uc/irows_cliodynamics

    Jack Goldstone gives a glowing (except when it disagrees with him, LOL) review of one of your favorite books.

  2. On topic this time:

    Based on your summary, it sounds like Axelrod and Cohen were channeling Napoleon:

    *Set up a flexible modular organization,
    *Set up a standardized doctrine,
    *Train your units in this doctrine,
    *Push aggressively to limit your opponent’s options,
    *At the point of contact, take what they give you.

    A later expansion of the last point was Boyd’s OODA loop (observe, orient, decide, act): http://en.wikipedia.org/wiki/OODA_loop

    The ability to limit your opponent’s options (preemption) is a major tactic that I don’t see them discussing. Of course not every complex system has clear-cut opponents, but they often do. It also means (unfortunately, as you point out) that aggressive tactics by predatory groups can be very effective.

  3. You may be interested in the Decision Making for a Social World webconference that has just started online at the International Cognition and Culture Institute:

    http://www.cognitionandculture.net/Social-decisions-workshop/workshop-decision-making-for-a-social-world.html
