
Friday, October 16, 2015

ABM approaches to social conflict

Source: Pfautz and Salwen (link)

An earlier post addressed the question of the dynamics through which a stable community consisting of multiple groups may begin to polarize and fission into antagonisms and conflict. I speculated there that the tools of agent-based modeling might be of use here. What I had in mind was something like this. Suppose we have an urban population spread across space in a distribution that reflects a degree of differentiation of residence by income, religion, and race. Suppose religion is more segregated than either income or race across the region. And suppose we have some background theoretical beliefs about social networks, civic associations, communication processes and other factors influencing a disposition to mobilize. Perhaps ABM methods could allow us to probe different scenarios to see what effects these different settings produce for polarization and conflict.

There has been a fair amount of effort at modeling this kind of social phenomenon within the field of social simulation. Carlos Lemos et al. provide an overview of applications of ABM techniques to social conflict and civil violence in "Agent-based modeling of social conflict, civil violence and revolution: state-of-the-art-review and further prospects" (link). Here is an overview statement of their findings about one specific approach, the threshold-based approach:
Social conflict, civil violence and revolution ABM are inspired on classical models that use simple threshold-based rules to represent collective behavior and contagion effects, such as Schelling’s model of segregation [7] and Granovetter’s model of collective behavior [15]. Granovetter’s model is a theoretical description of social contagion or peer effects: each agent a has a threshold Ta and decides to turn “active” – e.g. join a protest or riot – when the number of other agents joining exceeds its threshold. Granovetter showed that certain initial distributions of the threshold can precipitate a chain reaction that leads to the activation of the entire population, whereas with other distributions only a few agents turn active. (section 3.1)
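Granovetter's threshold dynamic is simple enough to sketch in a few lines of code (a schematic illustration, not the implementation used in the papers discussed here):

```python
def threshold_cascade(thresholds):
    """Iterate Granovetter's threshold rule until no new agent activates.

    Each agent activates when the number of already-active agents
    meets or exceeds its personal threshold."""
    n = len(thresholds)
    active = [t == 0 for t in thresholds]   # zero-threshold agents start active
    changed = True
    while changed:
        changed = False
        count = sum(active)
        for i in range(n):
            if not active[i] and count >= thresholds[i]:
                active[i] = True
                changed = True
    return sum(active)

# Uniform thresholds 0..99: a full chain reaction activates everyone.
print(threshold_cascade(list(range(100))))           # 100
# Remove the single agent with threshold 1: the cascade stalls at once.
print(threshold_cascade([0] + list(range(2, 101))))  # 1
```

The two runs differ in a single agent's threshold, yet one produces a complete cascade and the other activates only one agent. That is Granovetter's central point: the distribution of thresholds, not the average disposition, determines the aggregate outcome.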
Here is a diagram of their way of conceptualizing the actors and the processes of social conflict into which they are sometimes mobilized.


Armano Srbljinovic and colleagues attempt to model the emergence of ethnic conflict in "An Agent-Based Model of Ethnic Mobilisation" (link). Their original impulse is to better explain the emergence of polarized and antagonistic ethnic conflict in the former Yugoslavia; their method of approach is to develop an agent-based model that might capture some of the parameters that induce or inhibit ethnic mobilization. They refer to the embracing project as "Social Correlates of the Homeland War". They believe an ABM can potentially illuminate the messy and complex processes of ethnic mobilization observed on the ground:
Our more moderate goals are based on a seemingly reasonable assumption that the results observed in a simplified, artificial society could give us some clues of what is going on, or perhaps show us where to centre our attention in further and more detailed examination of a more complex real-world society. (paragraph 1.4)
They describe the eighties and nineties in this region in these terms:
So, by the end of the eighties and the beginning of the nineties, the ethnic roles in the society of the former Yugoslavia, that were kept toward the middle of Banton's social roles-scale for more than forty years, now under the influence of political entrepreneurs, increased in importance. (paragraph 2.5)
And they would like to explain some aspects of the dynamics of this transition. They single out a handful of important social characteristics of individuals in the region: (a) ethnic membership, (b) ethnic mobilization, (c) civic mobilization, (d) grievance degree, (e) social network, (f) environmental conditions, and (g) appeals to action. Each actor in the model is assigned a value for factors a-e; environmental conditions are specified; and various patterns of appeals are inserted into the system over a number of trials.

The algorithm of the model calculates the degree of mobilization intensity for all the agents as a function of the frequency of appeals, the antecedent grievance level of the agent, and a few features of the agents' social networks. If we add a substantive hypothesis about the threshold of M after which group action arises, we then have a model of the occurrence of ethnic strife.

The model uses a "SWARM" methodology. It postulates 200 agents, half red and half blue; and it calculates for each agent a level of mobilization intensity for a sequence of times, according to the following formula:
  • m_i(t+1) = m_i(t) + Δm_i,app + Δm_i,socnet + Δm_i,cool    (paragraph 3.8)
This formula calculates the i^th individual's new level of mobilization intensity m depending on the prior intensity, the delta created by the appeal, the delta created by the social network, and the "cooling" for the current period. (It is assumed that mobilization intensity decays over time unless re-stimulated by appeals and social network effects.)
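The update rule can be made concrete in a short sketch. The functional forms below are illustrative assumptions for the purpose of the example; in particular, the "cooling" term is modeled here as simple proportional decay, whereas the paper specifies its own forms for each delta:

```python
def update_mobilization(m, appeal_delta, socnet_delta, cooling_rate=0.05):
    """One step of the update: m(t+1) = m(t) + delta_app + delta_socnet + delta_cool.

    The cooling delta is modeled as proportional decay (an assumption of
    this sketch, not the paper's specification)."""
    cool_delta = -cooling_rate * m
    m_next = m + appeal_delta + socnet_delta + cool_delta
    return max(0.0, min(1.0, m_next))    # keep intensity within [0, 1]

# With no appeals and no social-network stimulation, intensity decays.
m = 0.8
for _ in range(3):
    m = update_mobilization(m, appeal_delta=0.0, socnet_delta=0.0)
print(round(m, 4))    # 0.6859
```

This makes the decay assumption explicit: absent new appeals or network effects, mobilization intensity shrinks by a fixed fraction each period.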

This is a very interesting experiment in modeling of a complex interactive social process. But it also raises several important issues. One thing that is apparent from careful scrutiny of this model is that it is difficult to separate "veridical" results from artifacts. For example, consider this diagram:


Is the periodicity shown by Red and Blue mobilization intensities a real effect, or is it an artifact of the design of the model?

Second, it is important to notice the range of factors the simulation does not consider, which theorists like Tilly would think to be crucial: quality of leadership, quality and intensity of organization, content of appeals, differential pathways of appeals, and the variety of political psychologies across agents. This simulation captures several important aspects of this particular kind of collective action. But it omits many substantive factors that theorists of collective action would take to be critical elements of the dynamics of the situation.

Here is a second example of an attempt to simulate aspects of ethnic mobilization provided by Stacey Pfautz and Michael Salwen, "A Hybrid Model of Ethnic Conflict, Repression, Insurgency and Social Strife" (link). Pfautz and Salwen describe their work in these terms:
Ethnic Conflict, Repression, Insurgency and Social Strife (ERIS) is a comprehensive, multi-level model of ethnic conflict that simulates how population dynamics impact state decision making and, in turn, respond to state actions and policies. Population pressures (e.g., relocation, civil unrest) affect and are affected by state actions. The long term goal of ERIS is to support operations development and analyses, enabling military planners to evaluate evolving situations, anticipate the emergence of ethnic conflict and its negative consequences, develop courses of action to defuse ethnic conflict, and mitigate the second and third order effects of U.S. actions on ethnic conflict. (211)
They refer to theirs as a hybrid model, incorporating a macro-level "systems dynamics" model and a micro-level ABM model. Their model thus attempts to represent both micro and macro causal forces on ethnic mobilization, illustrated in the diagram at the top. This model increases the level of "realism" in the assumptions represented in the simulation. Agents are heterogeneous, and their decision-making is contextualized to location on a GIS grid.
Agents represent 1000 individuals and are uniform with respect to religious affiliation. Agents are sampled with respect to age and sex ratio; however, skew sampling is used to create agents with different demographic profiles with respect to these attributes. Agents also have attributes to capture propensities to conflict and tolerance, which affect agent behavior and interact in the aggregate with the macro-level model to localize reports of conflict. (212)
Key variables in their simulation are religious identity, demographic change, population density, the history of recent inter-group conflict, and geographical location. The action space for individuals is: move location, mobilize for violence. And their model is calibrated to real data drawn from four states in Northwest India. Their basic finding is this: "Conflict is predicted in this model where islands or peninsulas of one ethnicity are surrounded by a sea of another (Figure 2.1)."

Kent McClelland offers a computational model that responds to Randall Collins's concepts of "C-Escalation" and "D-Escalation" in inter-group conflict. McClelland's piece is "Cycles of Conflict: A Computational Modeling Alternative to Collins's Theory of Conflict Escalation" (link). Here is how he describes his approach:
In this paper, I use a variation of systems theory to construct a multi-agent computational model of dynamic social interaction that shows how the conflict-escalation processes described by Collins can be generated in computer simulations. Like his, my model relies on feedback loops, but the mathematical formulas in my model use negative feedback loops, rather than positive feedback loops, to generate the collective processes of positive feedback described in Collins’s model of conflict escalation. My analysis relies on perceptual control theory (PCT), a dynamic-systems model of human behavior, which proposes that neural circuits in the brain are organized into hierarchies of negative-feedback control systems, and that individuals use these control systems to manipulate their own environments in order to control the flow of perceptual input in accordance with their internally generated preferences and expectations. (6)
Lars-Erik Cederman uses an ABM approach to model geopolitical boundaries (link). Here is how he describes his goals:
A decade ago, the Soviet Union ceased to exist, Yugoslavia started to disintegrate, and Germany reunified. Marking the end of the Cold War, these epochal events illustrate vividly that change in world politics features not just policy shifts but also can affect states' boundaries and, sometimes, their very existence. Clearly, any theory aspiring to explain such transformations or, more generally, the longue durée of history, must endogenize the actors themselves.

The current paper describes how agent based modeling can be used to capture transformations of this boundary transforming kind. This is a different argument from that advanced by most agent-based modelers, who resort to computational methods because they lend themselves to exploring heterogeneous and boundedly rational, but otherwise fixed, actors in complex social environments (1, 2). Without discounting the importance of this research, I will use illustrations from my own modeling framework to illustrate how it is possible to go beyond this mostly behavioral agenda. The main emphasis will be on the contribution of specific computational techniques to conceptualization of difficult to grasp notions such as agency, culture, and identity. Although a complete specification of the models goes beyond the current scope, the paper closes with a discussion of some of their key findings.
Cederman's model incorporates three primary dynamics: "Emergent Polarity" (the idea that boundaries result from a process of conquest); "Democratic Cooperation" (the idea that "Democracy" functions as a tag facilitating cooperation among subsets of actors); and "Nationalist Systems Change" (the idea that boundaries result from actors seeking locations placing them in proximity to other actors possessing the same ethnic identity).

Here is a diagram representing stylized results of the simulation.


Epstein, Steinbruner, and Parker offer a model of civil violence (link). Here are the parameters that are assigned to all actors (population and cops): grievance, hardship, perceived legitimacy, risk aversiveness, field of vision, net risk, location, and decision to act. This is a very simple analysis of collective action, plainly derivative from a rational-choice approach. Each actor decides to act or not depending on his/her calculation of risk and hardship/grievance. These assumptions are vastly weaker than those offered by students of contentious politics like McAdam, Tarrow, and Tilly; but they generate interesting collective results when embodied in a generative ABM.
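The activation rule at the heart of this model can be sketched briefly. The structure below follows Epstein's published rule (grievance = hardship × (1 − legitimacy); net risk = risk aversion × estimated arrest probability; act when grievance minus net risk exceeds a threshold), but the parameter values here are illustrative, not taken from the paper:

```python
import math

def decides_to_rebel(hardship, legitimacy, risk_aversion,
                     cops_in_vision, actives_in_vision,
                     k=2.3, threshold=0.1):
    """Epstein-style activation: an agent turns 'active' when its
    grievance minus its net risk exceeds a fixed threshold."""
    grievance = hardship * (1.0 - legitimacy)
    # Estimated arrest probability rises with the local cop/active ratio.
    arrest_prob = 1.0 - math.exp(-k * cops_in_vision / max(1, actives_in_vision))
    net_risk = risk_aversion * arrest_prob
    return grievance - net_risk > threshold

# An aggrieved agent with no cops in view rebels.
print(decides_to_rebel(0.9, 0.2, 0.9, cops_in_vision=0, actives_in_vision=1))  # True
# The same agent with five cops in view stays quiescent.
print(decides_to_rebel(0.9, 0.2, 0.9, cops_in_vision=5, actives_in_vision=1))  # False
```

Everything of sociological interest in the simulation (waves of rebellion, deceptive calm, sudden outbursts) emerges from iterating this one local calculation across a spatial grid of heterogeneous agents and cops.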

This research is specifically interesting in the context of the question posed here about fissioning. Consider this series of frames from an animation reflecting the results of random fluctuation of densities in an ethnically mixed community:

Peaceful coexistence
Animation of process leading to ethnic separation / ethnic cleansing

With "peace-keepers" the results are different:


These are interesting results. Plainly the presence or absence of peace-enforcers is relevant to the extent of ethnic violence that occurs. But notice once again how sparse the behavioral assumptions are. The simulations essentially serve to calculate the interactive effects of this particular set of assumptions about agents' behavior -- with no ability to represent organizations, communication, variations in motivation, etc.

All these models warrant study. They attempt to codify the behavior of individuals within geographic and social space and to work out the dynamics of interaction that result. But it is very important to recognize the limitations of these models as predictors of outcomes in specific periods and locations of unrest. These simulation models probably don't shed much light on particular episodes of contention in Egypt or Tunisia during the Arab Spring. The "qualitative" theories of contention that have been developed probably shed more light on the dynamics of contention than the simulations do at this point in their development.

Saturday, September 12, 2015

A survey of agent-based models


Federico Bianchi and Flaminio Squazzoni have published a very useful survey of the development and uses of agent-based models in the social sciences over the past twenty-five years in WIREs Comput Stat 2015 (link). The article is a valuable reference and discussion for anyone interested in the applicability of ABM within sociology.

Here is their general definition of an ABM:
Agent-based models (ABMs) are computer simulations of social interaction between heterogeneous agents (e.g., individuals, firms, or states), embedded in social structures (e.g., social networks, spatial neighborhoods, or institutional scaffolds). These are built to observe and analyze the emergence of aggregate outcomes. By manipulating behavioral or interaction model parameters, whether guided by empirical evidence or theory, micro-generative mechanisms can be explored that can account for macro-scale system behavior, that is, an existing time series of aggregate data or certain stylized facts. (284)
This definition highlights several important features of the ABM approach:
  • unlike traditional rational choice theory and microeconomics, it considers heterogeneous agents
  • it explicitly attempts to represent concrete particulars of the social environment within which agents act
  • it is a micro to macro strategy, deriving macro outcomes from micro activities
  • it permits a substantial degree of "experimentation" in the form of modification of base assumptions
  • it is possible to provide empirical evidence to validate or invalidate the ABM simulation of a given aggregate outcome 
Bianchi and Squazzoni note that the primary areas of application of agent-based models in social-science research fall within a relatively limited range of topics. The first of these is uncoordinated cooperation, reciprocity, and altruism. Robert Axelrod's work on repeated prisoners' dilemmas represents a key example of modeling efforts in this area (link).

A peculiar form of altruism is punishment: imposition of a cost on non-cooperators by other actors. Without punishment the exploitation strategy generally extinguishes the cooperation strategy in a range of situations. A "reciprocator" is an actor who is open to cooperation but who punishes previous non-cooperators on the next interaction. Bianchi and Squazzoni spend time describing an ABM developed by Bowles and Gintis (link) to evaluate the three strategies of Selfish, Reciprocator, and Cooperator, and a derived Shirking rate in a hypothetical and heterogeneous population of hunter-gatherers. Here is Bowles and Gintis' hypothesis:
The hypothesis we explore is that cooperation is maintained because many humans have a predisposition to punish those who violate group-beneficial norms, even when this reduces their fitness relative to other group members. Compelling evidence for the existence and importance of such altruistic punishment comes from controlled laboratory experiments, particularly the study of public goods, common pool resource, ultimatum, and other games.
And here is their central finding, according to Bianchi and Squazzoni:
They found that the robustness of cooperation depended on the coexistence of these behaviors at a group level and that strong reciprocators were functional in keeping the level of cheating under control in each group (see the shirking rate as a measure of resources lost by the group due to cheating in Figure 1). This was due to the fact that the higher the number of cooperators in a group without reciprocators, the higher the chance that the group disbanded due to high payoffs for shirking. (288)
Here is the graph of the incidence of the three strategies over the first 3000 periods of the simulation published in the Bowles and Gintis article:
 
This graph represents the relative frequency of the three types of hunter-gatherers in the population, along with a calculated shirking rate for each period. The Selfish profile remains the most frequent (between 40% and 50%), but Reciprocators and Cooperators reach relatively stable levels of frequency as well (between 30% and 40%, and between 20% and 30%, respectively). As Bowles and Gintis argue, it is the robust presence of Reciprocators that keeps the Selfish group in check; the willingness of Reciprocators to punish Selfish actors keeps the latter group from rising to full domination.

In this simulation the frequencies of Selfish and Shirking begin high (>85%) and quickly decline to a relatively stable rate. After 1000 iterations the three strategies attain relatively stable frequencies, with Selfish at about 38%, Reciprocator at 37%, Cooperator at 25%, and a shirking rate at about 11%.
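The incentive logic behind this finding — punishment by reciprocators flips the payoff advantage of shirking — can be illustrated with a toy one-round payoff calculation. The payoff structure and parameter values below are simplifications invented for the illustration, not those of the Bowles-Gintis model:

```python
def round_payoffs(n_selfish, n_reciprocator, n_cooperator,
                  benefit=2.0, cost=1.0, fine=1.5, punish_cost=0.3):
    """One foraging round in a toy group. Cooperators and reciprocators
    work (paying `cost`); selfish members shirk. Each reciprocator fines
    every shirker, at a small cost to itself; the product is shared equally."""
    n = n_selfish + n_reciprocator + n_cooperator
    workers = n_reciprocator + n_cooperator
    share = benefit * workers / n
    payoff_selfish = share - fine * n_reciprocator
    payoff_cooperator = share - cost
    payoff_reciprocator = share - cost - punish_cost * n_selfish
    return payoff_selfish, payoff_cooperator, payoff_reciprocator

# Without reciprocators, shirking strictly dominates working:
s, c, _ = round_payoffs(3, 0, 7)
print(s > c)    # True
# With reciprocators present, the shirkers do worst:
s, c, r = round_payoffs(3, 4, 3)
print(s < c and s < r)    # True
```

Iterated over many rounds with selection among strategies, this is the force that prevents the Selfish type from sweeping the population.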

It is tempting to read the study as representing a population that reaches a rough equilibrium. However, the appearance of equilibrium conveyed by the graph above may be deceptive. Work on other complex phenomena raises the possibility that this is not a long-term equilibrium, but rather that some future combination of percentages of the three strategies may set off a chaotic redistribution of success rates. This is the key characteristic of a chaotic system: small fluctuations in parameters can lead to major deviations in outcomes.

Also interesting in Bianchi and Squazzoni's review is their treatment of efforts to use ABMs to model the diffusion of cultural and normative attitudes (293ff.). Attitudes are treated as local "contagion" factors, and the goal of the simulation is to model how different adjacencies influence the pattern of spread of the cultural features.
Agents interacted with neighbors with a probability dependent on the number of identical cultural features they shared. A mechanism of interpersonal influence was added to align one randomly selected dissimilar cultural feature of an agent to that of the partner, after interaction. (294ff.)
Social network characteristics have been incorporated into ABMs in this area.
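The underlying mechanism here is Axelrod's culture-dissemination model, which can be sketched in a minimal version on a plain grid (without the network extensions just mentioned):

```python
import random

def axelrod_step(grid, rng):
    """One interaction event in Axelrod's culture model: a random agent
    interacts with a random neighbor with probability equal to the share
    of cultural features they have in common, then copies one feature on
    which they still differ."""
    size = len(grid)
    x, y = rng.randrange(size), rng.randrange(size)
    neighbors = [(x + dx, y + dy)
                 for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= x + dx < size and 0 <= y + dy < size]
    nx, ny = rng.choice(neighbors)
    a, b = grid[x][y], grid[nx][ny]
    shared = sum(fa == fb for fa, fb in zip(a, b))
    if 0 < shared < len(a) and rng.random() < shared / len(a):
        differing = [i for i in range(len(a)) if a[i] != b[i]]
        i = rng.choice(differing)
        a[i] = b[i]                      # interpersonal influence

rng = random.Random(0)
# A 10x10 grid of agents, 5 cultural features, 10 possible traits each.
grid = [[[rng.randrange(10) for _ in range(5)] for _ in range(10)]
        for _ in range(10)]
for _ in range(50_000):
    axelrod_step(grid, rng)
# After many events, neighboring agents converge into homogeneous regions.
```

The interesting result is that local convergence coexists with global polarization: interaction makes neighbors more alike, yet the grid typically settles into several mutually alien cultural zones rather than one.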

Bianchi and Squazzoni also consider ABMs in the topic areas of collective behavior and social inequality. They draw a number of useful conclusions about the potential role that ABMs can play in sociology, including especially the importance of considering heterogeneous agents:
At a substantive level, these examples show that exploring the fundamental heterogeneity of individual behavior is of paramount importance to understand the emergence of social patterns. Cross-fertilization between experimental and computational research is a useful process. It shows us that by conflating the concept of rationality with that of self-interest, as in standard game theory and economics, we cannot account for the subtle social nuances that characterize individual behavior in social contexts. (298)
And they believe -- perhaps unexpectedly -- that the experience of building ABMs in a range of sociological contexts underlines the importance of institutions, norms, and social context:
Moreover, these ABM studies can help us to understand the importance of social contexts even when looking at individual behavior in a more micro-oriented perspective. The role of social influence and the fact that we are embedded in complex social networks have implications for the type of information we access and the types of behavior we are exposed to. (301)
This is a useful contribution for sociologists, as a foundation for a third alternative between statistical studies of sociological phenomena and high-level deductive theories of those phenomena. ABMs have the potential of allowing us to derive large social patterns from well-chosen and empirically validated behavioral assumptions about actors.

I mentioned the common finding in complexity studies that even fairly simple systems possess the capacity for sudden instability. Here is a simulation of a three-body gravitational system which illustrates periods of relative stability and then abrupt destabilization.



ABMs permit us to model populations of interactive adaptive agents, and often the simulation produces important and representative patterns at the aggregate level. Here is an interesting predator-prey simulation on YouTube using an ABM approach by SSmithy87:



The author makes a key point at 2:15: the pattern of variation of predator and prey presented in the simulation is a well-known characteristic of predator-prey populations. (Red is predator and blue is prey.)



But the equations representing this relationship were not built into the model; instead, this characteristic pattern is generated by the model based on the simple behavioral assumptions made about prey and predators. This is a vivid demonstration of the novelty and importance of ABM simulations.
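A minimal non-spatial sketch conveys the idea (this is not SSmithy87's model; the rates and interaction rules here are illustrative assumptions). Individual birth, starvation, and predation events are simulated directly, with no population-level equations anywhere in the code:

```python
import random

def simulate(steps=400, prey=200, predators=50, seed=1):
    """Individual-level predator-prey dynamics: prey breed at a fixed
    per-capita rate; each predator starves with a fixed probability and,
    on encountering a prey, eats it and may reproduce."""
    rng = random.Random(seed)
    birth, starve, meet, convert = 0.06, 0.05, 0.0005, 0.5
    history = []
    for _ in range(steps):
        prey += sum(rng.random() < birth for _ in range(prey))
        predators -= sum(rng.random() < starve for _ in range(predators))
        # Each predator's chance of a kill rises with prey density.
        kills = sum(rng.random() < min(1.0, meet * prey)
                    for _ in range(predators))
        kills = min(kills, prey)
        prey -= kills
        predators += sum(rng.random() < convert for _ in range(kills))
        prey = min(prey, 5000)           # crude carrying capacity
        history.append((prey, predators))
    return history

history = simulate()
```

Plotting `history` for suitable parameter choices shows the familiar out-of-phase rise and fall of the two populations, even though nothing resembling the Lotka-Volterra equations appears in the rules.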

Tuesday, October 21, 2014

Social mechanisms and ABM methods


One particularly appealing aspect of agent-based models is the role they can play in demonstrating the inner workings of a major class of social mechanisms, the group we might refer to as mechanisms of aggregation. An ABM is designed to work out how a field of actors of a certain description, in specified kinds of interaction, leads through time to a certain kind of aggregate effect. This class of mechanisms corresponds to the upward strut of Coleman's boat. This is certainly a causal story; it is a generative answer to the question, how does it work?

However, anyone who thinks carefully about causation will realize that there are causal sequences that occur only once. Consider this scenario: X occurs, conditions Ci take place in a chronological sequence, and Y is the result. So X caused Y through the causal steps instigated by Ci. We wouldn't want to describe the complex of interactions and causal links associated with the progress of the system through Ci as a mechanism linking X to Y; rather, this ensemble is the particular (in this case unique) causal pathway from X to Y. But when we think about mechanisms, we generally have in mind the idea of "recurring causal linkages", not simply a true story about how X caused Y in these particular circumstances. In other words, for a causal story to represent a mechanism, it needs to be a story that can be found to hold in an indefinite number of cases. Mechanisms are recurring complexes of causal sequences.

An agent-based model serves to demonstrate how a set of actors gives rise to a certain aggregate outcome. This is plainly a species of causal argument. But it is possible to apply ABM methods to circumstances that are unique and singular. This kind of ABM lacks an important feature generally included in the definition of a mechanism -- the idea of recurrence across a number of cases. So we might single out for special attention those ABMs that identify and analyze processes that recur across multiple social settings. Here we might refer, for example, to the "Schelling mechanism" of residential segregation. There are certainly other unrelated mechanisms associated with urban segregation -- mortgage lending practices or real estate steering practices, for example. But the Schelling mechanism is one contributing factor in a range of empirical and historical cases. And it is a factor that works through the local preferences of individual actors.
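The Schelling mechanism itself can be reproduced in a few dozen lines (a minimal sketch with illustrative parameters; unhappy agents relocate to a random empty cell, a common simplification of Schelling's original movement rule):

```python
import random

def schelling(size=20, frac_red=0.45, frac_blue=0.45,
              tolerance=0.3, steps=10_000, seed=2):
    """Minimal Schelling segregation model: an agent is unhappy if fewer
    than `tolerance` of its occupied neighbors share its color, and an
    unhappy agent moves to a random empty cell."""
    rng = random.Random(seed)
    cells = (['R'] * int(size * size * frac_red) +
             ['B'] * int(size * size * frac_blue))
    cells += [None] * (size * size - len(cells))
    rng.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def neighbors(x, y):
        return [grid[x + dx][y + dy]
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)
                and 0 <= x + dx < size and 0 <= y + dy < size]

    def unhappy(x, y):
        occupied = [n for n in neighbors(x, y) if n is not None]
        return bool(occupied) and \
            sum(n == grid[x][y] for n in occupied) / len(occupied) < tolerance

    for _ in range(steps):
        x, y = rng.randrange(size), rng.randrange(size)
        if grid[x][y] is not None and unhappy(x, y):
            empties = [(i, j) for i in range(size) for j in range(size)
                       if grid[i][j] is None]
            i, j = rng.choice(empties)
            grid[i][j], grid[x][y] = grid[x][y], None
    return grid
```

Even though each agent tolerates being a local minority (it requires only 30% same-color neighbors), runs of this model tend toward configurations markedly more segregated than the random start. That gap between mild individual preferences and a strongly sorted aggregate outcome is precisely the recurring mechanism at issue.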

So this seems to answer one important question: in what ways can ABM simulations be said to describe social mechanisms? They do so when (i) they describe an aggregative process through which a given meso-level outcome arises, and (ii) the sequence they describe can be said to recur in multiple instances of social process.

A question that naturally arises here is whether there are social mechanisms that fall outside this group. Are there social mechanisms that could not be represented by an ABM model? Or would we want to say that mechanisms are necessarily aggregative, so all mechanisms should be amenable to representation by an ABM?

This is a complicated question. One possible response seems easily refuted: there are mechanisms that work from meso level (organizations) to macro level (rise of fascism) that do not invoke the features of individual actors. Therefore there are mechanisms that do not conform strictly to the requirements of methodological individualism. However, there is nothing in the ABM methodology that requires that the actors should be biological individuals. Certainly it is possible to design an ABM representing the results of competition among firms with different behavioral characteristics. This example still involves an aggregative construction, a generation of the macro behavior on the basis of careful specification of the behavioral characteristics of the units.

Another possible candidate for mechanisms not amenable to ABM analysis might include the use of network analysis to incorporate knowledge-diffusion characteristics into analysis of civil unrest and other kinds of social change. It is sometimes argued that there are structural features of networks that are independent of actor characteristics and choices. But given that ABM theorists often incorporate aspects of network theory into their formal representations of a social process, it is hard to maintain that facts about networks cannot be incorporated into ABM methods.

Another candidate is what Chuck Tilly and pragmatist sociologists (Gross, Abbott, Joas) refer to as the "relational characteristics" of a social situation. Abbott puts the point this way: often a social outcome isn't the result of an ensemble of individuals making discrete choices, but rather is a dance of interaction in which each individual's moves both inform and self-inform later stages of the interaction. This line of thought seems effective as a rebuttal to methodological individualism, or perhaps even analytical sociology, but I don't think it demonstrates a limitation of the applicability of ABM modeling. ABM methods are agnostic about the nature of the actors and their interactions. So it is fully possible for an ABM theorist to attempt to produce a representation of the iterative process just described; or to begin the analysis with an abstraction of the resultant behavioral characteristics found in the group.

I've argued here that it is legitimate to postulate meso-to-meso causal mechanisms. Meso-level things can have causal powers that allow them to play a role in causal stories about social outcomes. I continue to believe that is so. But considerations brought forward here make me think that even in cases where a theorist singles out a meso-meso causal mechanism, he or she is still offering some variety of disaggregative analysis of the item to be explained. It seems that providing a mechanism is always a process of delving below the level of the explananda to uncover the underlying processes and causal powers that bring it about.

So the considerations raised here seem to lead to a strong conclusion -- that all social mechanisms can be represented within the framework of an ABM (stipulating that ABM methods are agnostic about the kinds of agents they postulate). Agent-based models are to social processes as molecular biology is to the workings of the cell.

In fact, we might say that ABM methods simply provide a syntax for constructing social explanations: to explain a phenomenon, identify some of the constituents of the phenomenon, arrive at specifications of the properties of those constituents, and demonstrate how the behavior of the constituents aggregates to the phenomenon in question.

(It needs to be recognized that identifying agent-based social mechanisms isn't the sole use of ABM models, of course. Other uses include prediction of the future behavior of a complex system, "what if" experimentation, and data-informed explanations of complex social outcomes. But these methods certainly constitute a particularly clear and rigorous way of specifying the mechanism that underlies some kinds of social processes.)

Sunday, October 12, 2014

Emergentism and generationism


media: lecture by Stanford Professor Robert Sapolsky on chaos and reduction

Several recent posts have focused on the topic of simulations in the social sciences. An interesting question here is whether these simulation models shed light on the questions of emergence and reduction that frequently arise in the philosophy of the social sciences. In most cases the models I've mentioned are "aggregation" models, in which the simulation attempts to capture the chief dynamics and interaction effects of the units and then work out the behavior and evolution of the ensemble. This is visibly clear when it comes to agent-based models. However, some of the scholars whose work I admire are "complexity" theorists, and a common view within complexity studies is the idea that the system has properties that are difficult or impossible to derive from the features of the units.

So does this body of work give weight to the idea of emergence, or does it incline us more in the direction of supervenience and ontological unit-ism?

John Miller and Scott Page provide an accessible framework within which to consider these kinds of problems in Complex Adaptive Systems: An Introduction to Computational Models of Social Life. They look at certain kinds of social phenomena as constituting what they call "complex adaptive systems," and they try to demonstrate how some of the computational tools developed in the sciences of complex systems can be deployed to analyze and explain complex social outcomes. Here is how they characterize the key concepts:
Adaptive social systems are composed of interacting, thoughtful (but perhaps not brilliant) agents. (kl 151)
Page and Miller believe that social phenomena often display "emergence" in a way that we can make sense of. Here is the umbrella notion they begin with:
The usual notion put forth underlying emergence is that individual, localized behavior aggregates into global behavior that is, in some sense, disconnected from its origins. Such a disconnection implies that, within limits, the details of the local behavior do not matter to the aggregate outcome. (kl 826)
And they believe that the notion of emergence has "deep intuitive appeal". They find emergence to be applicable at several levels of description, including "disorganized complexity" (the central limit theorem, the law of large numbers) and "organized complexity" (the behavior of sand piles when grains have a small amount of control).
Under organized complexity, the relationships among the agents are such that through various feedbacks and structural contingencies, agent variations no longer cancel one another out but, rather, become reinforcing. In such a world, we leave the realm of the Law of Large Numbers and instead embark down paths unknown. While we have ample evidence, both empirical and experimental, that under organized complexity, systems can exhibit aggregate properties that are not directly tied to agent details, a sound theoretical foothold from which to leverage this observation is only now being constructed. (kl 976)
Organized complexity, in their view, is a substantive and important kind of emergence in social systems, and this concept plays a key role in their view of complex adaptive systems.

Another -- and contrarian -- contribution to this field is provided by Joshua Epstein. His three books on agent-based modeling are foundational texts for the field. Here are the titles:

Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science
Growing Artificial Societies: Social Science From the Bottom Up
Generative Social Science: Studies in Agent-Based Computational Modeling

Chapter 1 of Generative Social Science, "Agent-based Computational Models and Generative Social Science", provides an overview of Epstein's approach and is a superb place to begin (link). Here is how Epstein defines generativity:
Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest.... Rather, the generativist wants an account of the configuration's attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn't grow it, you didn't explain its emergence. (42)
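Epstein's motto -- "if you didn't grow it, you didn't explain its emergence" -- can be illustrated with a toy version of Schelling's segregation model, the classic example of a micro-specification that generates a macro-pattern. The following is a minimal sketch, not Epstein's own code; the one-dimensional ring layout, tolerance value, and other parameters are illustrative choices.

```python
import random

def schelling_1d(n=100, frac_empty=0.1, tolerance=0.5, steps=500, seed=1):
    """A minimal 1-D Schelling segregation model on a ring.

    Cells hold type 0, type 1, or None (empty). An agent is "unhappy"
    when fewer than `tolerance` of its occupied neighbors share its
    type, and an unhappy agent relocates to a random empty cell. The
    ring layout and all parameter values are illustrative choices.
    """
    rng = random.Random(seed)
    n_empty = int(n * frac_empty)
    n_agents = n - n_empty
    cells = [0] * (n_agents // 2) + [1] * (n_agents - n_agents // 2)
    cells += [None] * n_empty
    rng.shuffle(cells)

    def like_fraction(i):
        nbrs = [cells[(i - 1) % n], cells[(i + 1) % n]]
        occupied = [x for x in nbrs if x is not None]
        if not occupied:
            return 1.0  # an isolated agent counts as satisfied
        return sum(1 for x in occupied if x == cells[i]) / len(occupied)

    for _ in range(steps):
        i = rng.randrange(n)
        if cells[i] is None or like_fraction(i) >= tolerance:
            continue  # empty cell, or an already-satisfied agent
        j = rng.choice([k for k, x in enumerate(cells) if x is None])
        cells[j], cells[i] = cells[i], None
    return cells
```

After a few hundred moves, average neighborhood homogeneity typically rises well above the agents' modest 50% tolerance -- a segregated macro-pattern "grown" from mildly homophilous micro-rules, which is exactly the generativist's sense of explanation.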
Epstein describes an extensive attempt to model a historical population using agent-based modeling techniques, the Artificial Anasazi project (link). This work is presented in Dean, Gumerman, Epstein, Axtell, Swedlund, McCarroll, and Parker, "Understanding Anasazi Culture Change through Agent-Based Modeling" in Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes. The model takes a time series of fundamental environmental, climate, and agricultural data as given, and Epstein and his team attempt to reconstruct (generate) the pattern of habitation that would result.

Generativity seems to be directly incompatible with the idea of emergence, and in fact Epstein takes pains to cast doubt on that idea.
I have always been uncomfortable with the vagueness--and occasional mysticism--surrounding this word and, accordingly, tried to define it quite narrowly.... There, we defined "emergent phenomena" to be simply "stable macroscopic patterns arising from local interaction of agents." (53)
So Epstein and Page both make use of the methods of agent-based modeling, but they disagree about the idea of emergence. Page believes that complex adaptive systems give rise to properties that are emergent and irreducible, whereas Epstein doesn't think the idea makes much sense. Rather, Epstein's view depends on the idea that we can reproduce (generate) the macro phenomena from a model involving the agents and their interactions. For Epstein, macro phenomena are generated by the interactions of the units; for Page and Miller, macro phenomena in some systems have properties that cannot be easily derived from the activities of the units.

At the moment, anyway, I find myself attracted to Herbert Simon's effort to split the difference by referring to "weak emergence" (link):
... reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (Sciences of the Artificial 3rd edition 172)
This view emphasizes the computational and epistemic limits that sometimes preclude generating the phenomena in question -- for example, the problems raised by non-linear causal relations and causal interdependence. Many observers have noted that the behavior of tightly linked causal systems may be impossible to predict, even when we are confident that the system outcomes are the result of "nothing but" the interactions of the units and sub-systems.

Tuesday, October 7, 2014

Verisimilitude in models and simulations


Modeling always requires abstraction and simplification. We need to arrive at a system for representing the components of a system, the laws of action that describe their evolution and interaction, and a way of aggregating the results of the representation of the components and their interactions. Simplifications are required in order to permit us to arrive at computationally feasible representations of the reality in question; but deciding which simplifications are legitimate is a deeply pragmatic and contextual question. Ignoring air resistance is a reasonable simplification when we are modeling the trajectories of dense, massive projectiles through the atmosphere; it is wholly unreasonable if we are interested in modeling the fall of a leaf or a feather under the influence of gravity (link).

Modeling the social world is particularly challenging for a number of reasons. Not all social actors are the same; actors interact with each other in ways that are difficult to represent formally; and actors change their propensities for behavior as a result of their interactions. They learn, adapt, and reconfigure; they acquire new preferences and new ways of weighing their circumstances; and they sometimes change the frames within which they deliberate and choose.
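The last point -- that actors change their propensities for behavior through experience -- can be sketched with a simple reinforcement learner in the Roth-Erev style. This is a toy sketch, not a claim about any particular model in the literature; the action names and payoff values are purely illustrative.

```python
import random

def reinforcement_agent(payoffs, rounds=1000, seed=1):
    """A toy Roth-Erev reinforcement learner: the propensity for each
    action grows by the payoff realized when that action is chosen, so
    the agent's behavioral dispositions drift with its experience.
    Action names and payoff values are illustrative."""
    rng = random.Random(seed)
    actions = list(payoffs)
    propensity = {a: 1.0 for a in actions}
    for _ in range(rounds):
        # Roulette-wheel choice proportional to current propensities.
        r = rng.random() * sum(propensity.values())
        for action in actions:
            r -= propensity[action]
            if r <= 0:
                break
        propensity[action] += payoffs[action]
    return propensity
```

An agent like this starts indifferent between its options and ends strongly disposed toward the historically rewarding one -- a minimal instance of the adaptation that makes social modeling harder than modeling fixed-propensity units.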

Modeling the social world certainly requires the use of simplifying assumptions. There is no such thing as what we might call a Borges-class model -- one that represents every feature of the terrain. This means that the scientist needs to balance realism, tractability, and empirical adequacy in arriving at a set of assumptions about the actor and the environment, both natural and social. These judgments are influenced by several factors, including the explanatory and theoretical goals of the analysis. Is the analysis intended to serve as an empirical representation of an actual domain of social action -- the effects on habitat of the grazing strategies of a vast number of independent herders, say? Or is it intended to isolate the central tendency of a few key factors -- short term cost-benefit analysis in a context of a limited horizon of environmental opportunities, say?

If the goal of the simulation is to provide an empirically adequate reconstruction of the complex social situation, permitting adjustment of parameters in order to answer "what-if" questions, then it is reasonable to expect that the baseline model needs to be fairly detailed. We need to build in enough realism about the intentions and modes of reasoning of the actors, and we need a fair amount of detail concerning the natural, social, and policy environments in which they choose.

The discipline of economic geography provides good examples of both extremes of abstraction and realism of assumptions. At one extreme we have the work of von Thunen in his treatment of the Isolated State, producing a model of habitation, agriculture, and urbanization that reflects the economic rationality of the actors.


At the other extreme we have calibrated agent-based models of land use that build in more differentiated assumptions about the intentions of the actors and the legal and natural environment in which they make their plans and decisions. A very good and up-to-date volume dedicated to the application of calibrated agent-based models in economic geography is Alison Heppenstall, Andrew Crooks, Linda See, and Michael Batty, Agent-Based Models of Geographical Systems. The contribution by Crooks and Heppenstall provides an especially good introduction to the approach ("Introduction to Agent-Based Modelling"). Crooks and Heppenstall describe the distinguishing features of the approach in these terms:
To understand geographical problems such as sprawl, congestion and segregation, researchers have begun to focus on bottom-up approaches to simulating human systems, specifically researching the reasoning on which individual decisions are made. One such approach is agent-based modelling (ABM) which allows one to simulate the individual actions of diverse agents, and to measure the resulting system behaviour and outcomes over time. The distinction between these new approaches and the more aggregate, static conceptions and representations that they seek to complement, if not replace, is that they facilitate the exploration of system processes at the level of their constituent elements. (86)
The volume also pays a good deal of attention to the problem of validation and testing of simulations. Here is how Manson, Sun, and Bonsal approach the problem of validation of ABMs in their contribution, "Agent-Based Modeling and Complexity":
Agent-based complexity models require careful and thorough evaluation, which is comprised of calibration, verification, and validation (Manson 2003). Calibration is the adjustment of model parameters and specifications to fit certain theories or actual data. Verification determines whether the model runs in accordance with design and intention, as ABMs rely on computer code susceptible to programming errors. Model verification is usually carried out by running the model with simulated data and with sensitivity testing to determine if output data are in line with expectations. Validation involves comparing model outputs with real-world situations or the results of other models, often via statistical and geovisualization analysis. Model evaluation has more recently included the challenge of handling enormous data sets, both for the incorporation of empirical data and the production of simulation data. Modelers must also deal with questions concerning the relationship between pattern and process at all stages of calibration, verification, and validation. Ngo and See (2012) discuss these stages in ABM development in more detail. (125)
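The calibration and validation steps described here can be sketched in a few lines: fit a model parameter to observed data by grid search, scoring each candidate run with a summary statistic such as root-mean-square error. The `model` interface and the choice of RMSE below are illustrative assumptions, not details from the chapter.

```python
import math

def rmse(simulated, observed):
    """Root-mean-square error between a simulated and an observed
    series -- one simple statistic for scoring model output against
    data during validation."""
    pairs = list(zip(simulated, observed))
    return math.sqrt(sum((s - o) ** 2 for s, o in pairs) / len(pairs))

def calibrate(model, parameter_grid, observed):
    """Naive grid-search calibration: return the parameter value whose
    simulated output best fits the observed series under RMSE. `model`
    is any callable mapping a parameter to a simulated series."""
    return min(parameter_grid, key=lambda p: rmse(model(p), observed))

# Toy example: a linear-growth "model" calibrated to three observations.
observed = [0.2, 0.4, 0.6]
model = lambda rate: [rate * t for t in (1, 2, 3)]
best = calibrate(model, [0.1, 0.2, 0.3], observed)
```

Real ABM calibration is of course far more demanding -- many parameters, stochastic runs, and geovisualization rather than a single error score -- but the logic of comparing simulated output to empirical data is the same.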
An interesting current illustration of the value of agent-based modeling in analysis and explanation of historical data is presented by Kenneth Sylvester, Daniel Brown, Susan Leonard, Emily Merchant, and Meghan Hutchins in "Exploring agent-level calculations of risk and return in relation to observed land-use changes in the US Great Plains, 1870-1940" (link). Their goal is to see whether it is possible to reproduce important features of land use in several Kansas counties by making specific assumptions about decision-making by the farmers, and specific information about the changing weather and policy circumstances within which choices were made. 

Here is how Sylvester and co-authors describe the problem of formulating a representation of the actors in their simulation:
Understanding the processes by which farming households made their land-use decisions is challenging because of the complexity of interactions between people and the places in which they lived and worked, and the often insufficient resolution of observed information. Complexity characterizes land-use processes because observed historical behaviors often represent accumulated decisions of heterogeneous actors who were affected by a wide range of environmental and human factors, and by specific social and spatial interactions. (1)
Here is a graph of the results of the Sylvester et al agent-based model, simulating the allocation of crop land across five different crops given empirical weather and rainfall data.
So how well does this calibrated agent-based model do as a simulation of the observed land use patterns? Not particularly well, according to the authors' concluding remarks; their key finding is sobering:
Our base model, assuming profit maximization as the motive for land-use decision making, reproduced the historical record rather poorly in terms of both land use shares and farm size distributions in each township. We attribute the differences to deviations in decision making from profit-maximizing behavior. Each of the subsequent experiments illustrates how relatively simple changes in micro-level processes lead to different aggregate outcomes. With only minor adjustments to simple mechanisms, the pace, timing, and trajectories of land use can be dramatically altered.
However, they argue that this lack of fit does not discredit the ABM approach, but rather disconfirms the behavioral assumption that farmers are simple maximizers of earnings. They argue, as sociologists would likely agree, that "trajectories of land-use depended not just on economic returns, but other slow processes of change, demographic, cultural, and ecological feedbacks, which shaped the decisions of farmers before and long after the middle of the twentieth century." And therefore it is necessary to provide more nuanced representations of actor intentionality if the model is to do a good job of reproducing the historical results and the medium-term behavior of the system.

(In an earlier post I discussed a set of formal features that have been used to assess the adequacy of formal models in economics and other mathematized social sciences (link). These criteria are discussed more fully in On the Reliability of Economic Models: Essays in the Philosophy of Economics.)

(Above I mentioned the whimsical idea of "Borges-class models" -- the unrealizable ideal of a model that reproduces every aspect of the phenomena that it seeks to simulate. Here is the relevant quotation from Jorge Borges.

On Exactitude in Science
Jorge Luis Borges, Collected Fictions, translated by Andrew Hurley.

…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.
—Borges, quoting Suárez Miranda, Viajes de varones prudentes, Libro IV, Cap. XLV, Lérida, 1658)

Thursday, October 2, 2014

Computational models for social phenomena


There is a very lively body of work emerging in the intersection between computational mathematics and various fields of the social sciences. This emerging synergy is possible, in part, because of the way that social phenomena emerge from the actions and thoughts of individual actors in relationship to each other. This is what allows us to join mathematics to methodology and explanation. Essentially we can focus on the upward strut of Coleman’s boat — the part of the story that has to do with the “aggregation dynamics” of a set of actors — and try to create models that simulate the effects of these actions and interactions.

source: Hedstrom and Ylikoski (2010) "Causal Mechanisms in the Social Sciences" (link)
 

Here is an interesting example in the form of a research paper by Rahul Narain and colleagues on the topic of modeling crowd behavior ("Aggregate Dynamics for Dense Crowd Simulation", link). Here is their abstract:

Large dense crowds show aggregate behavior with reduced individual freedom of movement. We present a novel, scalable approach for simulating such crowds, using a dual representation both as discrete agents and as a single continuous system. In the continuous setting, we introduce a novel variational constraint called unilateral incompressibility, to model the large-scale behavior of the crowd, and accelerate inter-agent collision avoidance in dense scenarios. This approach makes it possible to simulate very large, dense crowds composed of up to a hundred thousand agents at near-interactive rates on desktop computers.

Federico Bianchi takes up this intersection between computational mathematics and social behavior in a useful short paper called "From Micro to Macro and Back Again: Agent-based Models for Sociology" (link). His paper focuses on one class of computational models, the domain of agent-based models. Here is how he describes this group of approaches to social explanation:

An Agent-Based Model (ABM) is a computational method which enables to study a social phenomenon by representing a set of agents acting upon micro-level behavioural rules and interacting within environmental macro-level (spatial, structural, or institutional) constraints. Agent-Based Social Simulation (ABSS) gives social scientists the possibility to test formal models of social phenomena, generating a virtual representation of the model in silico through computer programming, simulating its systemic evolution over time and comparing it with the observed empirical phenomenon. (1) 

 And here is how he characterizes the role of what I called "aggregation dynamics" above:

Solving the complexity by dissecting the macro-level facts to its micro-level components and reconstructing the mechanism through which interacting actors produce a macro-level social outcome. In other words, reconstructing the micro-macro link from interacting actors to supervenient macrosociological facts. (2)

Or in other words, the task of analysis is to provide a testable model that can account for the way the behaviors and interactions at the individual level can aggregate to the observed patterns at the macro level.

Another more extensive example of work in this area is Gianluca Manzo, Analytical Sociology: Actions and Networks. Manzo's volume proceeds from the perspective of analytical sociology and agent-based models. Manzo provides a very useful introduction to the approach, and Peter Hedstrom and Petri Ylikoski extend the introduction to the field with a chapter examining the role of rational-choice theory within this approach. The remainder of the volume takes the form of essays by more than a dozen sociologists who have used the approach to probe and explain specific kinds of social phenomena.

Manzo provides an account of explanation that highlights the importance of "generating" the phenomena to be explained. Here are several principles of methodology on this topic:

  • P4: in order to formulate the "generative model," provide a realistic description of the relevant micro-level entities (P4a) and activities (P4b) assumed to be at work, as well as of the structural interdependencies (P4c) in which these entities are embedded and their  activities unfold;
  • P5: in order rigorously to assess the internal consistency of the "generative model" and to determine its high-level consequences, translate the "generative model" into an agent-based computational model;
  • P6: in order to assess the generative sufficiency of the mechanisms postulated, compare the agent-based computational model's high-level consequences with the empirical description of the facts to be explained (9)

So agent-based modeling simulations are a crucial part of Manzo's understanding of the logic of analytical sociology. As agent-based modelers sometimes put the point, "you haven't explained a phenomenon until you've shown how it works on the basis of a detailed ABM." But the ABM is not the sole focus of sociological research, on Manzo's approach. Rather, Manzo points out that there are distinct sets of questions that need to be investigated: how do the actors make their choices? What are the structural constraints within which the actors exist? What kinds of interactions and relations exist among the actors? Answers to all these kinds of question are needed if we are to be able to design realistic and illuminating agent-based models of concrete phenomena.

Here is Manzo's summary table of the research cycle (8). And he suggests that each segment of this representation warrants a specific kind of analysis and simulation.

This elaborate diagram indicates that there are different locations within a complex social phenomenon where different kinds of analysis and models are needed. (In this respect the approach Manzo presents parallels the idea of structuring research methodology around the zones of activity singled out by the idea of methodological localism; link.) This is methodologically useful, because it emphasizes to the researcher that there are quite a few different kinds of questions that need to be addressed in order to successfully explain a given domain of phenomena.

The content-specific essays in the volume focus on one or another of the elements of this description of methodology. For example, Per-Olof Wikstrom offers a "situational action theory" account of criminal behavior; this line of research focuses on the "Logics of Action" principle 4b.

People commit acts of crime because they perceive and choose (habitually or after some deliberation) a particular kind of act of crime as an action alternative in response to a specific motivation (a temptation or a provocation). People are the source of their actions but the causes of their actions are situational. (75)
SAT proposes that people with a weak law-relevant personal morality and weak ability to exercise self-control are more likely to engage in acts of crime because they are more likely to see and choose crime as an option. (87)

Wikstrom attempts to apply these ideas by using a causal model to reproduce crime hotspots based on situational factors (90).

The contribution of Gonzalez-Bailon et al, "Online networks and the diffusion of protest," focuses on the "Structural Interdependency" principle 4c.

One of the programmatic aims of analytical sociology is to uncover the individual-level mechanisms that generate aggregated patterns of behaviour.... The connection between these two levels of analysis, often referred to as the micro-macro link, is characterised by the complexity and nonlinearity that arises from interdependence; that is, from the influence that actors exert on each other when taking a course of action. (263)

Their contribution attempts to provide a basis for capturing the processes of diffusion that are common to a wide variety of types of social behavior, based on formal analysis of interpersonal networks.

Networks play a key role in diffusion processes because they facilitate threshold activation at the local level. Individual actors are not always able to monitor accurately the behavior of everyone else (as global thresholds assume) or they might be more responsive to a small group of people, represented in their personal networks. (271)

They demonstrate that the structure of the local network matters for the diffusion of an action and the activation of individual actors.
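The local-threshold mechanism they describe can be sketched directly: an agent activates once the fraction of its own network neighbors who are already active reaches its threshold, rather than monitoring the global population. The graph and threshold values below are illustrative, not taken from the chapter.

```python
def diffuse(neighbors, thresholds, seeds):
    """Local-threshold diffusion on a network: an agent activates once
    the fraction of its own neighbors already active reaches its
    threshold. Iterates to a fixed point and returns the active set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for agent, nbrs in neighbors.items():
            if agent in active or not nbrs:
                continue
            exposure = sum(1 for j in nbrs if j in active) / len(nbrs)
            if exposure >= thresholds[agent]:
                active.add(agent)
                changed = True
    return active

# A six-person ring: each agent monitors only two personal contacts.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
full = diffuse(ring, {i: 0.5 for i in range(6)}, seeds={0})
stuck = diffuse(ring, {i: 0.6 for i in range(6)}, seeds={0})
```

On this ring, seeding one protester with uniform thresholds of 0.5 lets the protest sweep the whole network, while thresholds of 0.6 leave it confined to the seed -- the structure of local ties, and not just the threshold distribution, decides the outcome.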

In short, Analytical Sociology: Actions and Networks illustrates a number of points of intersection between computational mathematics, simulation systems, and concrete sociological research. This is a very useful effort as social scientists attempt to bring more complex modeling tools to bear on concrete social phenomena.

Sunday, December 9, 2012

Simulating social mechanisms



A key premise of complexity theory is that a population of units has "emergent" properties that result from the interactions of units with dynamic characteristics. Call these units "agents".  The "agent" part of the description refers to the fact that the elements (persons) are self-directed units.  Social ensembles are referred to as "complex adaptive systems" -- systems in which outcomes are the result of complex interactions among the units AND in which the units themselves modify their behavior as a result of prior history.

John Miller and Scott Page's Complex Adaptive Systems: An Introduction to Computational Models of Social Life provides an excellent introduction. Here is how they describe an adaptive social system:
Adaptive social systems are composed of interacting, thoughtful (but perhaps not brilliant) agents. It would be difficult to date the exact moment that such systems first arose on our planet -- perhaps it was when early single-celled organisms began to compete with one another for resources.... What it takes to move from an adaptive system to a complex adaptive system is an open question and one that can engender endless debate. At the most basic level, the field of complex systems challenges the notion that by perfectly understanding the behavior of each component part of a system we will then understand the system as a whole. (kl 151)
Herbert Simon added a new chapter on complexity to the third edition of The Sciences of the Artificial - 3rd Edition in 1996.
By adopting this weak interpretation of emergence, we can adhere (and I will adhere) to reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (172).
This formulation amounts to the claim of what I referred to earlier as "relative explanatory autonomy" (link). It is a further articulation of Simon's view of "pragmatic holism" first expressed in 1962 (link).

So how would agent-based models (ABM) be applied to mechanical systems? Mechanisms are not intentional units. They are not "thoughtful", in Page's terms. In the most abstract version, a mechanism is an input-output relation, perhaps with governing conditions and probabilistic outcomes -- something like this:


In this diagram A, B, and D are jointly sufficient for the working of the mechanism, and C is a "blocking condition" for the mechanism. When A, B, C, and D are configured as represented, the mechanism does its work, leading with probability PROB to R and otherwise to S.
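Read as code, such a mechanism is just a guarded stochastic function: it fires only when A, B, and D hold and C does not, and then yields R with probability PROB and S otherwise. This is a sketch whose interface is invented for illustration; the names follow the diagram.

```python
import random

def mechanism(a, b, d, c_blocking, prob, rng=None):
    """A mechanism as an input-output relation: A, B, and D are jointly
    sufficient, C is a blocking condition, and when the mechanism fires
    it yields R with probability `prob` and S otherwise. The function
    interface is invented for illustration; names follow the diagram."""
    rng = rng or random.Random()
    if c_blocking or not (a and b and d):
        return None  # the mechanism does not fire
    return "R" if rng.random() < prob else "S"
```

A "mechanical system" in the sense discussed below would then be a collection of such functions wired together, with the outputs of some serving as the enabling or blocking conditions of others.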

So how do we get complexity, emergence, or unpredictability out of a mechanical system consisting of a group of separate mechanisms? If mechanisms are determinate and exact, then it would seem that a mechanical system should not display "complexity" in Simon's sense; we should be able to compute the state of the system in the future given the starting conditions.

There seem to be several key factors that create indeterminacy or emergence within complex systems. One is the fact of causal interdependency, where the state of one mechanism influences the state of another mechanism which is itself a precursor to the first mechanism.  This is the issue of feedback loops or "coupled" causal processes. Second is non-linearity: small differences in input conditions sometimes bring about large differences in outputs. Whenever an outcome is subject to a threshold effect, we will observe this feature; small changes short of the threshold make no change in the output, whereas small changes at the threshold bring about large changes. And third is the adaptability of the agent itself.  If the agent changes behavioral characteristics in response to earlier experience (through intention, evolution, or some other mechanism) then we can expect outcomes that surprise us, relative to similar earlier sequences. And in fact, mechanisms display features of each of these characteristics. They are generally probabilistic, they are often non-linear, they are sensitive to initial conditions, and at least sometimes they "evolve" over time.
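The threshold point is easy to exhibit with Granovetter's classic collective-behavior model, in which each agent activates once enough others have. In this minimal sketch, a population with thresholds 0, 1, 2, ..., 99 cascades to full activation, while raising a single threshold by one halts the cascade at a single agent -- a small change in input conditions producing a large difference in output.

```python
def cascade_size(thresholds):
    """Granovetter-style cascade: an agent activates once the number of
    already-active agents meets its threshold. Iterates to a fixed
    point and returns the final number of active agents."""
    active = 0
    while True:
        newly = sum(1 for t in thresholds if t <= active)
        if newly == active:
            return active
        active = newly

# With thresholds 0, 1, 2, ..., 99, each activation tips one more agent:
assert cascade_size(list(range(100))) == 100
# Raise one threshold from 1 to 2 and the chain reaction never starts:
riot = list(range(100))
riot[1] = 2
assert cascade_size(riot) == 1
```

The discontinuity is the point: nothing about the individual-level rule changed, yet the aggregate outcome flipped from universal activation to almost none.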

So here is an interesting question: how do these considerations play into the topic of understanding social outcomes on the basis of an analysis of underlying social mechanisms? Assume we have a theory of organizations that involves a number of lesser institutional mechanisms that affect the behavior of the organization. Is it possible to develop an agent-based model of the organization in which the institutional mechanisms are the units? Are meso-level theories of organizations and institutions amenable to implementation within ABM simulation techniques?

Here is a Google Talk by Adrien Treuille on "Modeling and Control of Complex Dynamics".



The talk provides an interesting analysis of "crowd behavior" based on a new way of representing a crowd.