Monday, October 24, 2011

Social complexity

Social ensembles are often said to be "complex". What does this mean?

Herbert Simon is one of the seminal thinkers in the study of complexity. His 1962 article, "The Architecture of Complexity" (link), put forward several ideas that have become core to the conceptual frameworks of people who now study social complexity. So it is worthwhile highlighting a few of the key ideas that were put forward in that article. Here is Simon's definition of complexity:
Roughly, by a complex system I mean one made up of a large number of parts that interact in a nonsimple way. In such systems, the whole is more than the sum of the parts, not in an ultimate, metaphysical sense, but in the important pragmatic sense that, given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole. In the face of complexity, an in-principle reductionist may be at the same time a pragmatic holist. (468)
Notice several key ideas contained here, as well as several things that are not said. First, the complexity of a system derives from the "nonsimple" nature of the interaction of its parts (subsystems). A watch is a simple system, because it has many parts but the behavior of the whole is the simple sum of the direct mechanical interactions of the parts. The watchspring provides an (approximately) constant impulse to the gearwheel, producing a temporally regular motion in the gears. This motion pushes forward the time registers (second, minute, hour) in a fully predictable way. If the spring's tension influenced not only the gearwheel, but also the size of the step taken by the minute hand; or if the impulse provided by the spring varied significantly according to the alignment of the hour and second hands and the orientation of the spring -- then the behavior of the watch would be "complex". It would be difficult or impossible to predict the state of the time registers by counting the ticks in the watch gearwheel. So this is a first statement of the idea of complexity: the fact of multiple causal interactions among the many parts (subsystems) that make up the whole system.
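The watch contrast can be sketched in a few lines of code. This is my own toy illustration, not anything from Simon's article: the "simple" watch's registers are a pure function of the tick count, while the "complex" variant adds a hypothetical coupling in which spring tension feeds back into the minute hand's step size, so the registers can no longer be inferred just by counting ticks.

```python
def simple_watch(ticks):
    # Each register is a simple aggregate of the ticks -- fully predictable.
    seconds = ticks % 60
    minutes = (ticks // 60) % 60
    hours = (ticks // 3600) % 12
    return hours, minutes, seconds

def complex_watch(ticks):
    # Hypothetical "nonsimple" coupling: the spring winds down with use,
    # and its tension (together with the gear's position) perturbs the
    # minute hand's step, so the hand's position depends on the whole
    # interaction history, not just the tick count.
    tension = 1.0
    minute_angle = 0.0
    for t in range(ticks):
        tension *= 0.999                                   # spring winds down
        step = (1.0 / 60.0) * (1.0 + 0.5 * tension * (t % 60) / 60.0)
        minute_angle = (minute_angle + step) % 60
    return minute_angle

print(simple_watch(7325))   # (2, 2, 5): trivially derived from the parts
print(complex_watch(7325))  # must be computed by replaying the interactions
```

The point is not the particular coupling chosen, but that once the parts influence one another, the state of the whole can only be obtained by tracing the interactions forward.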

A second main idea here is that the behavior of the system is difficult to predict as a result of the nonsimple interactions among the parts. In a complex system we cannot provide a simple aggregation model of the system that adds up the independent behaviors of the parts; rather, the parts are influenced in their behaviors by the behaviors of other components. The state of the system is fixed by interdependent subsystems, which implies that the system's behavior can oscillate wildly given apparently similar initial conditions. (This is one explanation of the Chernobyl nuclear meltdown: engineers attempted to "steer" the reactor to a safe shutdown by manipulating several control systems at once; but these control systems had complex effects on each other, with the result that the engineers catastrophically lost control of the system.)
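The sensitivity to initial conditions can be demonstrated with a standard textbook example (the logistic map in its chaotic regime; this is my illustration, not a model of any system Simon discusses): two runs started from nearly identical states remain close at first, then diverge dramatically.

```python
def trajectory(x0, r=3.9, steps=60):
    """Iterate the logistic map x -> r * x * (1 - x), a minimal nonlinear system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.4)
b = trajectory(0.4 + 1e-6)   # differs only in the sixth decimal place

print(abs(a[1] - b[1]))      # still tiny after one step
print(abs(a[60] - b[60]))    # typically order one after sixty steps
```

A one-part-in-a-million difference in the starting state is amplified by the nonlinear feedback until the two trajectories are no more alike than two random states -- which is what makes prediction by simple aggregation impossible.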

A third important point here is Simon's distinction between "metaphysical reducibility" and "pragmatic holism." He accepts what we would today call the principle of supervenience: the state of the system supervenes upon the states of the parts. But he rejects the feasibility of performing a reduction of the behavior of the system to an account of the properties of the parts. He does not use the concept of "emergence" here, but this would be another way of putting his point: a metaphysically emergent property of a system is one that cannot in principle be derived from the characteristics of the parts. A pragmatically emergent property is one that supervenes upon the properties of the parts, but where it is computationally difficult or impossible to map the function from the state of the parts to the state of the system. This point has some relevance to the idea of "relative explanatory autonomy" mentioned in an earlier posting (link). The latter idea postulates that we can sometimes discover system properties (causal powers) of a complex system that are in principle fixed by the underlying parts, but where it is either impossible or unnecessary to discover the specific causal sequences through which the system's properties come to be as they are.

Another key idea in this article is Simon's idea of a hierarchic system.
By a hierarchic system, or hierarchy, I mean a system that is composed of interrelated subsystems, each of the latter being, in turn, hierarchic in structure until we reach some lowest level of elementary subsystem. (468) 
I have already given an example of one kind of hierarchy that is frequently encountered in the social sciences: a formal organization. Business firms, governments, universities all have a clearly visible parts-within-parts structure. (469)
The idea here is also an important one. It is a formal specification of a particular kind of ensemble in which structures at one level of aggregation are composed of structures or subsystems at a lower level of aggregation. Simon offers the example of a biological cell that can be analyzed into a set of exhaustive and mutually independent subsystems nested within each other. It is essential that there is a relation of enclosure as we descend the hierarchy of structures: the substructures of level S are entirely contained within it and do not serve as substructures of some other system S'.

It is difficult to think of biological examples that violate the conditions of hierarchy -- though we might ask whether an organism and its symbiote might be best understood as a non-hierarchical system. But examples are readily available in the social world. Labor unions and corporate PACs play significant causal roles in modern democracies. But they are not subsystems of the political process in a hierarchical sense: they are not contained within the state, and they play roles in non-state systems as well. (A business lobby group may influence both the policies chosen by a unit of government and the business strategy of a healthcare system.)

Simon appears to believe that hierarchies reduce the complexity of systems, and that they support what we would now call "modularity": we can treat the workings of a subsystem as a self-enclosed unit that works roughly the same no matter what changes occur in other subsystems.

Simon puts this point in his own language of "decomposability." A system is decomposable if we can disaggregate its behavior into the sum of the independent behaviors of its parts. A system is "nearly decomposable" if the parts of the system have some effects on each other, but these effects are small relative to the overall workings of the system.

At least some kinds of hierarchic systems can be approximated successfully as nearly decomposable systems. The main theoretical findings from the approach can be summed up in two propositions:
(a) in a nearly decomposable system, the short-run behavior of each of the component subsystems is approximately independent of the short-run behavior of the other components; (b) in the long run, the behavior of any one of the components depends in only an aggregate way on the behavior of the other components. (474)
He illustrates this point in the case of social systems in these terms:
In the dynamics of social systems, where members of a system communicate with and influence other members, near decomposability is generally very prominent. This is most obvious in formal organizations, where the formal authority relation connects each member of the organization with one immediate superior and with a small number of subordinates. Of course many communications in organizations follow other channels than the lines of formal authority. But most of these channels lead from any particular individual to a very limited number of his superiors, subordinates, and associates. Hence, departmental boundaries play very much the same role as the walls in our heat example. (475)
And in summary:
We have seen that hierarchies have the property of near-decomposability. Intra-component linkages are generally stronger than intercomponent linkages. This fact has the effect of separating the high-frequency dynamics of a hierarchy -- involving the internal structure of the components -- from the low-frequency dynamics -- involving interaction among components. (477)
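Simon's two propositions, and the heat example his quotation alludes to, can be sketched numerically. The specific coefficients below are my own toy choices, not Simon's: two "rooms" of two cells each, with strong heat exchange inside a room and a much weaker exchange through the wall between rooms.

```python
INTRA = 0.3    # coupling between cells in the same room (strong)
INTER = 0.005  # coupling through the wall between rooms (weak)

def step(temps):
    """One round of heat exchange; only cells 1 and 3 touch the wall."""
    t1, t2, t3, t4 = temps
    return (
        t1 + INTRA * (t2 - t1) + INTER * (t3 - t1),
        t2 + INTRA * (t1 - t2),
        t3 + INTRA * (t4 - t3) + INTER * (t1 - t3),
        t4 + INTRA * (t3 - t4),
    )

temps = (100.0, 0.0, 50.0, 0.0)     # total heat 150, overall mean 37.5
short_run = None
for i in range(1000):
    temps = step(temps)
    if i == 10:
        short_run = temps

# Short run: each room has nearly equilibrated internally, almost as
# if it were isolated (proposition a).
print(short_run)
# Long run: the weak wall coupling slowly pulls both rooms toward the
# common mean, and each room feels the other only through its aggregate
# temperature (proposition b).
print(temps)
```

The high-frequency dynamics (within-room equilibration) and the low-frequency dynamics (between-room equilibration) separate cleanly because INTRA dwarfs INTER -- exactly the structure Simon attributes to departmental boundaries in organizations.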
So why does Simon expect that systems will generally be hierarchical, and hierarchies will generally be near-decomposable? This expectation derives from the notion that systems were either created by designers (who would certainly favor these features because they make the system predictable and understandable) or evolved through some process of natural selection from simpler to more complex agglomerations. So we might expect that hydroelectric plants and motion detector circuits in frogs' visual systems are hierarchical and near-decomposable.

But here is an important point about social complexity.  Neither of these expectations is likely to be satisfied in the case of social systems.  Take the causal processes (sub-systems) that make up a city. And consider some aggregate properties we may be interested in -- emigration, resettlement, crime rates, school truancy, real estate values.  Some of the processes that influence these properties are designed (zoning boards, school management systems), but many are not.  Instead, they are the result of separate and non-teleological processes leading to the present.  And there is often a high degree of causal interaction among these separate processes.  As a result, it might be more reasonable to expect, contrary to Simon's line of thought here, that social systems are likely to embody greater complexity and less decomposability than the systems he uses as examples.

(A recent visit to the Center for Social Complexity at George Mason University (link) was very instructive for me.  There is a great deal of very interesting work underway at the Center using agent-based modeling techniques to understand large, complicated social processes: population movements, housing markets, deforestation, and more.  Particularly interesting is a blog by Andrew Crooks at the Center on various aspects of agent-based modeling of spatial processes.)
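To give a flavor of the agent-based style mentioned above, here is a minimal Schelling-style segregation model -- a classic example of this kind of modeling, written as my own illustration and not code from the Center. Agents of two types sit on a grid; any agent with too few like-typed neighbors relocates to a random empty cell; segregated clusters emerge from purely local decisions.

```python
import random

random.seed(0)
SIZE, THRESHOLD = 20, 0.3   # grid side; minimum fraction of like neighbors

# 0 = empty cell, 1 and 2 = the two agent types
grid = [[random.choice([0, 1, 2]) for _ in range(SIZE)] for _ in range(SIZE)]
n_agents = sum(c != 0 for row in grid for c in row)

def unhappy(r, c):
    """An agent is unhappy if too few of its 8 (toroidal) neighbors share its type."""
    me = grid[r][c]
    like = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            n = grid[(r + dr) % SIZE][(c + dc) % SIZE]
            if n:
                total += 1
                like += (n == me)
    return total > 0 and like / total < THRESHOLD

for _ in range(50):                     # 50 rounds of relocation
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] == 0]
    random.shuffle(empties)
    for (r, c), (er, ec) in zip(movers, empties):
        grid[er][ec], grid[r][c] = grid[r][c], 0
```

Even this toy model exhibits the point of the post: the macro-level pattern (segregation) supervenes on the micro-level rules, yet it is far easier to discover by running the interactions than by deriving it analytically from the agents' properties.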