Tuesday, May 22, 2018

Social generativity and complexity


The idea of generativity in the realm of the social world expresses the notion that social phenomena are generated by the actions and thoughts of the individuals who constitute them, and nothing else (link, link). More specifically, the principle of generativity postulates that the properties and dynamic characteristics of social entities like structures, ideologies, knowledge systems, institutions, and economic systems are produced by the actions, thoughts, and dispositions of the set of individuals who make them up. There is no other kind of influence that contributes to the causal and dynamic properties of social entities. Begin with a population of individuals with such-and-so mental and behavioral characteristics; allow them to interact with each other over time; and the structures we observe emerge as a determinate consequence of these interactions.

This view of the social world lends great ontological support to the methods associated with agent-based models (link). Here is how Joshua Epstein puts the idea in Generative Social Science: Studies in Agent-Based Computational Modeling:
Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest.... Rather, the generativist wants an account of the configuration's attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn't grow it, you didn't explain its emergence. (42)
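
In this spirit, here is a minimal, deliberately stylized sketch of the generativist strategy in Python. It is not one of Epstein's own models; the one-dimensional geometry, the 40 percent contentment threshold, and the other parameters are arbitrary assumptions, chosen only to show how a macro-pattern can be "grown" from a micro-specification of agents and local rules. Agents of two types relocate when too few of their neighbors resemble them, and a macro-level measure of segregation emerges from purely local decisions.

    import random

    # A minimal, stylized sketch (not a model from Epstein's book): two types of
    # agents on a ring relocate when too few of their neighbors resemble them.
    # All parameter values are arbitrary assumptions for illustration.
    random.seed(1)

    N = 200                 # number of cells on a ring
    EMPTY_FRACTION = 0.1    # share of vacant cells
    RADIUS = 3              # neighborhood radius
    THRESHOLD = 0.4         # an agent is content if >= 40% of its neighbors match it

    # Populate the ring with equal numbers of 'A' and 'B' agents, then carve out vacancies.
    cells = (['A', 'B'] * (N // 2))[:N]
    for i in random.sample(range(N), int(EMPTY_FRACTION * N)):
        cells[i] = None
    random.shuffle(cells)

    def like_fraction(i):
        """Fraction of occupied neighbors within RADIUS that share cell i's type."""
        neighbors = [cells[(i + d) % N] for d in range(-RADIUS, RADIUS + 1) if d != 0]
        occupied = [c for c in neighbors if c is not None]
        return 1.0 if not occupied else sum(c == cells[i] for c in occupied) / len(occupied)

    def segregation():
        """Macro-level statistic: mean like-neighbor fraction over all agents."""
        agents = [i for i in range(N) if cells[i] is not None]
        return sum(like_fraction(i) for i in agents) / len(agents)

    print("initial segregation:", round(segregation(), 3))

    # Micro-dynamics: a discontented agent moves to a randomly chosen vacant cell.
    for _ in range(20000):
        i = random.randrange(N)
        if cells[i] is None or like_fraction(i) >= THRESHOLD:
            continue
        j = random.choice([k for k in range(N) if cells[k] is None])
        cells[j], cells[i] = cells[i], None

    print("final segregation:  ", round(segregation(), 3))

Running the script, the like-neighbor statistic ordinarily ends up well above its initial value even though no agent aims at segregation as such; in Epstein's terms, the macrostructure has been grown from the microspecification.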
Consider an analogy with cooking. The properties of the cake are generated by the ingredients, their chemical properties, and the sequence of steps applied to the mixture as it goes from the mixing bowl to the oven to the cooling board. The final characteristics of the cake are simply the consequence of the chemistry of the ingredients and the series of physical influences that were applied in a given sequence.

Now consider the concept of a complex system. A complex system is one in which there is a multiplicity of causal factors contributing to the dynamics of the system, in which there are causal interactions among the underlying causal factors, and in which causal interactions are often non-linear. Non-linearity is important here, because it implies that a small change in one or more factors may lead to very large changes in the outcome. We like to think of causal systems as consisting of causal factors whose effects are independent of each other and whose influence is linear and additive; complex systems violate both of these assumptions.

A gardener is justified in thinking of growing tomatoes in this way: a little more fertilizer, a little more water, and a little more sunlight each lead to a little more tomato growth. But imagine a garden in which the effect of fertilizer on tomato growth depends on the recent gradient of water provision, and the effects of both of these positive inputs depend substantially on the recent amount of sunlight available. Under these circumstances it is difficult to predict the eventual size of the tomato crop given information about the quantities of the inputs.
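
A toy calculation makes the point. The functional forms and coefficients below are invented purely for illustration, not an agronomic model: under an additive model, small changes in the inputs produce small changes in the output, while under an interacting, non-linear model the very same changes can produce a disproportionate swing.

    # Invented functional forms and coefficients, purely to illustrate the contrast;
    # this is not an agronomic model. Assumes fertilizer > 0.
    def additive_growth(water, fertilizer, sunlight):
        """The gardener's mental model: each input helps a little, independently."""
        return 2.0 * water + 1.5 * fertilizer + 1.0 * sunlight

    def interactive_growth(water, fertilizer, sunlight):
        """Fertilizer only helps insofar as watering has kept pace, and both effects
        are modulated (non-linearly) by the recent amount of sunlight."""
        fertilizer_effect = 1.5 * fertilizer * min(water / fertilizer, 1.0)
        return (2.0 * water + fertilizer_effect) * (0.2 + 0.8 * sunlight ** 2)

    for water, fertilizer, sunlight in [(1.0, 1.0, 1.0), (1.0, 1.2, 1.0), (0.8, 1.2, 0.9)]:
        print(f"inputs {water, fertilizer, sunlight}:",
              f"additive {additive_growth(water, fertilizer, sunlight):.2f},",
              f"interactive {interactive_growth(water, fertilizer, sunlight):.2f}")

Under the additive model the second and third input combinations barely move the output; under the interacting model the same small perturbations leave the extra fertilizer useless in one case and produce an outsized drop in the other. That is the gardener's predicament.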

One of the key insights of complexity science is that generativity is fully compatible with a wicked level of complexity. The tomato's size is generated by its history of growth, determined by the sequence of inputs over time. But for the reason just mentioned, the complexity of interactions between water, sunlight, and fertilizer in their effects on growth means that the overall dynamics of tomato growth are difficult to reconstruct.

Now consider the idea of strong emergence -- the idea that some aggregates possess properties that cannot in principle be explained by reference to the causal properties of the constituents of the aggregate. This means that the properties of the aggregate are not generated by the workings of the constituents; otherwise we would be able in principle to explain the properties of the aggregate by demonstrating how they derive from the (complex) pathways leading from the constituents to the aggregate. This version of the absolute autonomy of some higher-level properties is inherently mysterious. It implies that the aggregate does not supervene upon the properties of the constituents; there could be different aggregate properties with identical constituent properties. And this seems ontologically untenable.

The idea of ontological individualism captures this intuition in the setting of social phenomena: social entities are ultimately composed of and constituted by the properties of the individuals who make them up, and nothing else. This does not imply methodological individualism; for reasons of complexity or computational limitations it may be practically impossible to reconstruct the pathways through which the social entity is generated out of the properties of individuals. But ontological individualism places an ontological constraint on the way that we conceptualize the social world. And it gives a concrete meaning to the idea of the microfoundations for a social entity. The microfoundations of a social entity are the pathways and mechanisms, known or unknown, through which the social entity is generated by the actions and intentionality of the individuals who constitute it.

1 comment:

  1. Thanks for this. It crystallized a question for me that I can illustrate with a simple example. Suppose we have a very lightly settled territory with a river, and the river has a couple of convenient landing spots a few miles apart. Suppose that in simulations a city will grow around one of these spots, but not both (growth around one inhibits growth around the other). And suppose that for "reasonable" parameters, which one gets chosen is evenly balanced, just a function of multiple random events -- e.g. in simulations the choice depends on the random number seed. (A toy version of this dynamic is sketched at the end of this comment.)

    In some sense, then, the choice of city site is totally mechanistic and micro-founded, so only weakly emergent. Conversely, however, we can never isolate any property of the agents that "picks" one city center or the other.

    One key difference here from your examples (e.g. a cake) is that in the choice of city site each agent's choices depend on other agents' choices. In a cake the interaction of flour, sugar, baking powder, liquid, etc. is pretty much independent across the whole cake.

    This is at least a big factor in the intuition of strong emergence: INTERdependence of micro-choices means that the system will often generate macro-properties that don't arise in any predictable way from the micro-foundations. This avoids "spooky" emergence or epiphenomenalism, but it leaves us with an ontological category that is at best hypothetically reducible since we can never know some systems at a fine enough resolution to predict what macro-structure they will generate. (This is related to the inherent unpredictability of deterministic chaos, but we can skip that tarpit.)

    I think this kind of quasi-strong emergence is in fact pervasive in the social world. It shows up in relatively neutral choices such as the city location above, but also in the generation of innovations, in which leaders and celebrities get "critical mass", in the timing and form of social crises and responses to those crises, and so forth.

    Furthermore, as social structure is generated more and more by intangible processes with no fixed physical base (i.e. social interaction hosted in the cloud), this sort of emergence is becoming more powerful, less constrained by the physical distribution and concrete situations of social actors. As we have seen, unexpected new institutions, political configurations, and crises can suddenly emerge in our increasingly intangible world.
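
    A toy version of the landing-spot dynamic above might look like this in Python (the increasing-returns rule and all parameters are invented purely for illustration):

        import random

        # Toy version of the two-landing-spot scenario: each arriving settler chooses
        # a spot with probability proportional to a superlinear function of its current
        # population, so early random luck is amplified and one spot becomes "the city".
        # The rule, the exponent, and the parameters are invented for illustration.
        def settle(seed, settlers=10_000, base=1.0, exponent=1.5):
            rng = random.Random(seed)
            population = [0, 0]          # populations at landing spots A and B
            for _ in range(settlers):
                weights = [base + p ** exponent for p in population]
                spot = 0 if rng.random() < weights[0] / sum(weights) else 1
                population[spot] += 1
            return population

        # Identical agents, identical rules, identical parameters: which spot "wins"
        # is determined by nothing in the micro-specification except the random seed.
        for seed in range(5):
            pop = settle(seed)
            winner = "A" if pop[0] > pop[1] else "B"
            print(f"seed {seed}: populations {pop} -> city at spot {winner}")

    Across seeds the winning spot typically flips back and forth even though nothing in the micro-specification changes -- which is exactly the sense in which the outcome is micro-founded yet not recoverable from any property of the agents.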
