Sunday, December 9, 2012

Simulating social mechanisms



A key premise of complexity theory is that a population of units has "emergent" properties that result from the interactions of units with dynamic characteristics. Call these units "agents": the term signals that the elements (persons) are self-directed. Social ensembles are referred to as "complex adaptive systems" -- systems in which outcomes result from complex interactions among the units AND in which the units themselves modify their behavior as a result of prior history.

Scott Page's Complex Adaptive Systems: An Introduction to Computational Models of Social Life provides an excellent introduction. Here is how Page describes an adaptive social system:
Adaptive social systems are composed of interacting, thoughtful (but perhaps not brilliant) agents. It would be difficult to date the exact moment that such systems first arose on our planet -- perhaps it was when early single-celled organisms began to compete with one another for resources.... What it takes to move from an adaptive system to a complex adaptive system is an open question and one that can engender endless debate. At the most basic level, the field of complex systems challenges the notion that by perfectly understanding the behavior of each component part of a system we will then understand the system as a whole. (kl 151)
Herbert Simon added a new chapter on complexity to the third edition of The Sciences of the Artificial (1996).
By adopting this weak interpretation of emergence, we can adhere (and I will adhere) to reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (172).
This formulation amounts to the claim of what I earlier referred to as "relative explanatory autonomy" (link). It is a further articulation of Simon's view of "pragmatic holism," first expressed in 1962 (link).

So how could agent-based models (ABMs) be applied to mechanical systems? Mechanisms are not intentional units; they are not "thoughtful," in Page's terms. In the most abstract version, a mechanism is an input-output relation, possibly with governing conditions and probabilistic outcomes -- perhaps something like this:
[diagram: inputs A, B, D; blocking condition C; outcomes R (with probability PROB) and S]
In this diagram A, B, and D are jointly sufficient for the working of the mechanism, and C is a "blocking condition" for the mechanism. When A, B, C, and D are configured as represented, the mechanism does its work, leading to R with probability PROB and to S the rest of the time.
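
To make the abstraction concrete, here is a minimal sketch in Python. The function signature, the argument names, and the sample probability 0.7 (standing in for PROB) are my own illustrative assumptions, not part of the diagram:

```python
import random

def mechanism(a, b, d, c, prob=0.7):
    """Abstract mechanism: fires when A, B, and D all hold
    and the blocking condition C is absent."""
    if c or not (a and b and d):
        return None  # the mechanism does not operate
    # When it operates: R with probability prob, otherwise S
    return "R" if random.random() < prob else "S"

print(mechanism(a=True, b=True, d=True, c=False))  # "R" or "S"
print(mechanism(a=True, b=True, d=True, c=True))   # None -- blocked by C
```

Returning None for the blocked case is just one way of representing "the mechanism does not fire"; the point is only that the input-output relation is explicit and the outcome is probabilistic.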

So how do we get complexity, emergence, or unpredictability out of a mechanical system consisting of a group of separate mechanisms? If mechanisms are determinate and exact, then it would seem that a mechanical system should not display "complexity" in Simon's sense; we should be able to compute the future state of the system from its starting conditions.

There seem to be several key factors that create indeterminacy or emergence within complex systems. One is causal interdependency, where the state of one mechanism influences the state of another mechanism that is itself a precursor to the first. This is the issue of feedback loops or "coupled" causal processes. Second is non-linearity: small differences in input conditions sometimes bring about large differences in outputs. Whenever an outcome is subject to a threshold effect we will observe this feature; small changes short of the threshold make no change in the output, whereas small changes at the threshold bring about large changes. And third is the adaptability of the agent itself. If the agent changes its behavioral characteristics in response to earlier experience (through intention, evolution, or some other mechanism), then we can expect outcomes that surprise us, relative to similar earlier sequences. And in fact, mechanisms display each of these features: they are generally probabilistic, they are often non-linear, they are sensitive to initial conditions, and at least sometimes they "evolve" over time.
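
The first two factors can be exhibited in a few lines of Python. In this toy sketch two mechanisms are coupled (each takes the other's previous output as its input); the logistic update rule and the constant 3.9 are arbitrary assumptions chosen only to produce sensitive dependence on initial conditions, and a separate threshold function shows how a tiny input difference can flip the output:

```python
def coupled_step(x, y, r=3.9):
    # Feedback loop: each mechanism's output becomes the other's input
    return r * y * (1 - y), r * x * (1 - x)

# Sensitivity to initial conditions: a 0.001 difference blows up
a, b = (0.300, 0.600), (0.301, 0.600)
for _ in range(25):
    a, b = coupled_step(*a), coupled_step(*b)
print(a)  # the two trajectories are now wildly different
print(b)

# Threshold effect: nearly identical inputs, very different outputs
def response(x, cutoff=0.5):
    return "large response" if x > cutoff else "no response"

print(response(0.499), "|", response(0.501))
```

Nothing here depends on the particular functional form; any coupled, non-linear update combined with a threshold will behave in qualitatively the same way.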

So here is an interesting question: how do these considerations play into the topic of understanding social outcomes on the basis of an analysis of underlying social mechanisms? Assume we have a theory of organizations that involves a number of lesser institutional mechanisms that affect the behavior of the organization. Is it possible to develop an agent-based model of the organization in which the institutional mechanisms are the units? Are meso-level theories of organizations and institutions amenable to implementation within ABM simulation techniques?
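
The question is open, but one can at least sketch what such a model might look like. In this hypothetical Python fragment the "agents" are institutional mechanisms rather than persons; the class name, the state variable, and the adaptation rule are all my own stand-ins, intended only to show that the ABM machinery does not require intentional units:

```python
import random

class InstitutionalMechanism:
    """An ABM unit that is a mechanism, not a person (hypothetical)."""
    def __init__(self):
        self.state = random.random()  # current contribution to the organization
        self.gain = 0.1               # responsiveness to meso-level feedback

    def update(self, org_outcome):
        # Adjust state toward the organization-level outcome (feedback loop)
        self.state += self.gain * (org_outcome - self.state)
        # Crude adaptation: responsiveness itself drifts over time
        self.gain = min(0.5, self.gain * 1.02)

mechanisms = [InstitutionalMechanism() for _ in range(6)]
for t in range(50):
    # The organization's outcome emerges from the interacting mechanisms
    org_outcome = sum(m.state for m in mechanisms) / len(mechanisms)
    for m in mechanisms:
        m.update(org_outcome)
print(round(org_outcome, 3))
```

Whether such a model would capture anything of sociological interest depends, of course, on whether the update rules faithfully represent the institutional mechanisms that a meso-level theory actually postulates.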

Here is a Google Talk by Adrien Treuille on "Modeling and Control of Complex Dynamics".
[embedded video]
The talk provides an interesting analysis of "crowd behavior" based on a new way of representing a crowd.

1 comment:

  1. Hi Dr. Little --

    Thank you for this and other posts. I have been reading bits and pieces of your multiple blogs since the colloquium talk you gave at George Mason University a few months ago, which I attended.

    I find this "relative explanatory autonomy" idea you raise here and elsewhere very reasonable. But something has been nagging at me about your writing: if I understand you correctly, you have not sufficiently clarified that relative explanatory autonomy is a necessity not because of what actually exists in the world but because of the limits of human cognition.

    In other words, a superintelligent scientific inquirer (who, let's say, might be many orders of magnitude more intelligent than the smartest human) might be able to understand sprawling complex systems containing multiple levels of emergence without resort to simplifying abstractions; they would possess an unfathomable ability to reason about pure complexity. On the other hand, a barely sentient being (say, a highly intelligent chimpanzee) might be best served by simple models of entities and causal mechanisms that seem trivial to us but provide good-enough abstractions with which to engage a complex system. What I am getting at is that it is worth bringing the cognitive science of the human mind (and, for that matter, notions of distributed social intelligence within scientific communities?) into the discussion about what constitutes good explanations. Because ultimately the object is to usefully model the world within one or many actually existing trained human minds, is it not?

    Curious about your thoughts!
    Josh
