Showing posts with label CAT_methodology. Show all posts

Wednesday, August 22, 2012

Political polarization?

Is the American electorate "polarized" with regard to sets of political issues? McCarty, Rosenthal, and Poole accept the common view that we have in fact become more polarized in our politics over the past twenty years, and they offer an interesting theory of what is causing this polarization in Polarized America: The Dance of Ideology and Unequal Riches (Walras-Pareto Lectures). This theory was discussed in an earlier post. Delia Baldassarri and Peter Bearman take a different perspective, however, in a 2007 article, "Dynamics of Political Polarization" (American Sociological Review 72 (October 2007): 784–811). Here is how Baldassarri and Bearman frame their research:
In this article we provide a parsimonious account for two puzzling empirical outcomes. The first is the simultaneous presence and absence of political polarization—the fact that attitudes rarely polarize, even though people believe polarization to be common. The second is the simultaneous presence and absence of social polarization—the fact that while individuals experience attitude homogeneity in their interpersonal networks, these networks retain attitude heterogeneity overall. We do this by investigating the joint effects of personal influence on attitudes and social relations. (784)
Baldassarri and Bearman cite a range of studies finding that the mass of the US population is not polarized in the bulk of its political attitudes, and that its polarization has not increased in the past decade. "The evidence suggests that, aside from a small set of takeoff issues, 'the policy preferences of different social groupings generally move in parallel with each other'” (784). They resolve part of the paradox by distinguishing between activist opinions and public opinion:
In the same vein, Fiorina and colleagues (2005) dispute “The Myth of a Polarized America” and suggest that the “culture war” commonly conjured up in the media is a fictive construction. According to their analysis, there is no popular polarization, but simply partisan polarization—“those who affiliate with a party are more likely to affiliate with the ‘correct' party today than they were in earlier periods” (p. 25). It is the political elite and a small number of party activists that are polarized.
This all seems a little paradoxical, so it's worth looking at the assumptions these two groups of researchers are making about "polarization".

To start, what is meant by polarization with respect to a given issue -- say, gay marriage? Essentially the concept characterizes a population's distribution across an attitudinal scale, from strongly support to strongly oppose, with respect to the issue in question. A population is homogeneous if the distribution of scores has a single peak and a small standard deviation, and polarized if it has two (or more) peaks. Here is a diagram representing the results of their agent-based model of attitude diffusion. Each issue eventually shows a pronounced degree of polarization after several hundred iterations, with about half the population distributed around a positive attitude and the other half distributed around a negative attitude. Presumably we can define an increase in polarization as a widening of the distance between the two peaks and perhaps a decrease in the dispersion around each peak.


Theoretically a population could be segmented into three distinct groups -- perhaps one-third who cluster around the zero point of indifference and two extreme groups on the left and right.
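We can make this definition concrete with a toy calculation. This is not the authors' measure; the attitude values and the index itself are invented for illustration. The index grows with the distance between the two camps' mean attitudes and shrinks with the spread within each camp:

```python
import numpy as np

rng = np.random.default_rng(0)

def polarization(attitudes):
    """Crude polarization index: distance between the mean attitudes of
    the pro and con camps, discounted by the spread within each camp."""
    pro, con = attitudes[attitudes >= 0], attitudes[attitudes < 0]
    if len(pro) == 0 or len(con) == 0:
        return 0.0
    between = pro.mean() - con.mean()            # distance between the two peaks
    within = (pro.std() + con.std()) / 2 + 1e-9  # average spread around each peak
    return between / within

# A homogeneous population: one peak centered near indifference (0).
homogeneous = rng.normal(0.0, 0.3, size=1000)
# A polarized population: two peaks, half around +0.7 and half around -0.7.
polarized = np.concatenate([rng.normal(0.7, 0.15, 500),
                            rng.normal(-0.7, 0.15, 500)])

print(polarization(homogeneous) < polarization(polarized))  # True
```

On this index, shifting the peaks apart or tightening the clusters around them both register as an increase in polarization, which matches the verbal definition above.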

The most original part of their work here is an effort to model the emergence of issue polarization based on a theory of how social interactions in networks and small groups influence individuals' attitudes. They offer a sociological theory of inter-personal influence to explain how attitude diffusion occurs within a population, and they report the results of network simulations to illustrate the consequences of this theory. They argue that this model explains how members of society can perceive polarity while actually embodying a high degree of homogeneity.
In more general terms, we show that simple mechanisms of social interaction and personal influence can lead to both social segregation and ideological polarization. (785)
Our goal has been to deploy a model of inter-personal influence sensitive to dynamics of political discussion, where actors hold multiple opinions on diverse issues, interact with others relative to the intensity and orientation of their political preferences, and through evolving discussion networks shape their own and others' political contexts. In the model, opinion change depends on two factors: the selection of interaction partners, which determines the aggregate structure of the discussion network, and the process of interpersonal influence, which determines the dynamics of opinion change. In the next section, we organize the description of the model around these two elements. Table 1 summarizes the simulation algorithm. (788)
The simulations are very interesting. The authors specify assumptions about the structure of interactions; they specify how an individual's attitude is affected by the interaction; and they create an initial distribution of attitudes for the 100 actors in the simulation. They then run the set of actors and interactions through 500 iterations and observe the resulting patterns of distribution of attitudes.
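The paper's actual algorithm (their Table 1) is considerably richer, but a stripped-down sketch conveys the two moving parts: intensity-biased partner selection and interpersonal influence. All parameter values and update rules here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

N_ACTORS, N_ITER = 100, 500
MU = 0.05  # influence strength (assumed; the paper's parameters differ)

# Each actor holds one attitude in [-1, 1]; start from a uniform spread.
attitudes = rng.uniform(-1, 1, N_ACTORS)

for _ in range(N_ITER):
    # Partner selection biased toward intense (strongly opinionated) actors,
    # echoing the idea that interaction tracks preference intensity.
    weights = np.abs(attitudes) + 0.1
    i, j = rng.choice(N_ACTORS, size=2, replace=False, p=weights / weights.sum())
    if attitudes[i] * attitudes[j] > 0:
        # Agreeing pairs converge toward each other.
        delta = MU * (attitudes[j] - attitudes[i])
        attitudes[i] += delta
        attitudes[j] -= delta
    else:
        # Disagreeing pairs repel, each hardening its own position.
        attitudes[i] = np.clip(attitudes[i] + MU * np.sign(attitudes[i]), -1, 1)
        attitudes[j] = np.clip(attitudes[j] + MU * np.sign(attitudes[j]), -1, 1)
```

Depending on the seed and parameters, a run either stays dispersed or "takes off" into two camps, which loosely mirrors the two regimes the authors report.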

The cases resolve into two large groups: non-takeoff, where polarization does not emerge, and takeoff, where polarization does occur. The first group is much more common, consistent with the prior finding that public opinion is not becoming more polarized. The "takeoff" group is less common but important: for some initial distributions of attitudes and interaction pathways, the population does develop into two sharply divided sub-groups. These two diagrams illustrate the two possibilities.



In the second figure the population is moving strongly towards polarization around the issue, whereas the first figure represents a population with no pattern of polarization.

Several things are striking about this work. First is the degree to which it presents a picture of public opinion that seems highly counterintuitive in 2012. The first half of their paradox seems even more compelling today than five years ago -- the American public does seem to be very divided in its opinions about social and moral issues. The second striking thing is perhaps an omission in the foundations of their theory of attitude formation. Their model works through 1-1 interactions. But it seems evident that a lot of attitude formation is happening through exposure to the media -- television, radio, Internet, social media. There doesn't appear to be an obvious way to incorporate these powerful influences into their model. And yet these may be much more influential than 1-1 interactions.

This research is of interest for two important reasons. First, it is a sustained effort to account for how issue separation occurs in real social groups. And second, it provides an excellent and detailed example of a microfoundational approach to an important social process, using a variety of agent-based modeling techniques to work out the consequences of the theory of social influence with which they begin. The models allow Baldassarri and Bearman to carefully probe the assumptions of the theory of individual-level attitude dynamics that they postulate. So the work is both substantively and methodologically rewarding. It is analytical sociology at its best.

Wednesday, April 25, 2012

Actor-centered sociology


I've advocated many times here for the advantages of what I've referred to as "actor-centered" sociology. Let's see here whether it is possible to say fairly specifically what that means. Here is an elliptical description of three aspects of what I mean by "actor-centered sociology":

First, it reflects a view of social ontology: Social things are composed, constituted, and propertied by the activities and interactions of individual actors -- perhaps 2, perhaps 300M. Second, it puts forward a constraint on theorizing: Our social theories need to be compatible with the ontology. The way I put the point is this: social theories, hypotheses, and assertions need microfoundations. Third, "actor-centered sociology" represents a heuristic about where to focus at least some of our research energy and attention: at the ordinary processes and relations through which social processes take place, the ordinary people who bring them about, and the ordinary processes through which the effects of action and interaction aggregate to higher levels of social organization.

(a) This means that sociological theory needs to recognize and incorporate the idea that all social facts and structures supervene on the activities and interactions of socially constructed individual actors. It is meta-theoretically improper to bring forward hypotheses about social structures that cannot be appropriately related to the actions and interactions of individuals. Or in other words, claims about social structures require microfoundations.

(b) The meta-theory of actor-centered sociology requires that all social theories, at whatever level, include a theory of the actor. Economics and ethnomethodology differ in the level of specificity they offer for their theories of the actor; but both have such a theory. They both put forward fundamental ideas about how actors think and the mental processes that influence their actions.

(c) Actor-centered sociology suggests that careful study of local social mechanisms and behaviors is a worthwhile exercise for sociological research.  Ethnomethodology and the careful, place-based investigations offered by Goffman and Garfinkel move from the wings to the stage itself.

(d) It appears to imply that we may be able to provide an explanation of at least some higher-level social facts by showing how they emerge as a result of the workings of actors and their structured interactions. This is the aggregation-dynamics methodology (link).  Or in terms discussed elsewhere here, it is the micro-to-macro link of Coleman's boat (link).

(e) The actor-based sociology approach seems to imply that the regularities that may exist at the level of macro-social phenomena are bound to be weak and exception-laden. Heterogeneity within and across actors -- across history and across social settings -- seems to imply multiple sets of attainable aggregate outcomes.  Would fascist organizations flourish in Italy after World War I? The answer is indeterminate.  There were numerous groups of social actors with important differences in their states of agency, and these groups in turn were influenced by organizations of varying characteristics. So it would be impossible to say in advance with confidence either that fascism was likely to emerge or that it was unlikely to emerge (link).

(f) The actor-centered approach suggests that we can do better sociology by being more attentive to subtle differences in agency in specific groups and times. George Steinmetz's careful attention to the processes of formation through which colonial administrators took shape in nineteenth-century Germany illustrates the value of paying attention to the historical particulars of various groups of actors, and the historically specific circumstances in which their frames of agency were created (link). It implies that context and historical processes are crucial to sociological explanation.

(g) The actor-centered approach highlights the importance of careful analysis of the mechanisms of communication and interaction through which individuals influence each other and through which their actions aggregate to higher level social outcomes and structures.  Social networks, competitive markets, mass communications systems, and civic associations all represent important inter-actor linkages that have massively important consequences for aggregate social outcomes.

(h) Finally, the actor-centered approach has some of the advantages of the spotlight in a three-ring circus. The idea of actor-centered sociology points the spotlight to the parts of the arena where the action is happening: to the formation of the actor, to the concrete setting of the actor, to the interactions that occur among actors, to the aggregative processes that lead to larger outcomes, and to the causal properties that those larger structures come to have.

One thing that is somewhat troubling for anyone who has been reading this blog over time is that there seems to be a glaring inconsistency in two lines of thought emphasized repeatedly here: first, that social facts require microfoundations; and second, that meso-structures can have autonomous causal properties. Are these two ideas consistent?

In particular, one might interpret the imperative of actor-centered sociology as a particularly restrictive view of social causation: from configurations of actors to meso-level social facts.  So all the causal "action" is happening at the level of the actors, not the structures.  Dave Elder-Vass attempts to avoid this implication by arguing for emergent social causal properties (link); I've approached the problem by talking about relatively autonomous causal properties at the meso-level (link).  I continue to think the latter view works reasonably well.  In a post on "University as a causal structure," for example, I think a plausible case is made for both ideas: the tenure system is causally effective in constraining individual faculty members' behavior as well as being causally effective in influencing other structural features of the university; and every aspect of this system has microfoundations in the form of the structured circumstances of action and acculturation through which the bureaucratic agents in the system behave. Or in other words: it is consistent to maintain both parts of the dilemma, actor-centered sociology and relatively autonomous meso-level social causation (link).

Sunday, April 17, 2011

Scenario-based projections of social processes


As we have noted in previous posts, social outcomes are highly path-dependent and contingent (link, link, link, link). This implies that it is difficult to predict the consequences of even a single causal intervention within a complex social environment including numerous actors -- say, a new land use policy, a new state tax on services, or a sweeping cap-and-trade policy on CO2 emissions. And yet policy changes are specifically designed and chosen in order to bring about certain kinds of outcomes. We care about the future; we adopt policies to improve this or that feature of the future; and yet we have a hard time providing a justified forecast of the consequences of the policy.

This difficulty doesn't only affect policy choices; it also pertains to large interventions like the democracy uprisings in the Middle East and North Africa. There are too many imponderable factors -- the behavior of the military, the reactions of other governments, the consequent strategies of internal political actors and parties (the Muslim Brotherhood in Egypt) -- so activists and academic experts alike are forced to concede that they don't really know what the consequences will be.

One part of this imponderability derives from the fact that social changes are conveyed through sets of individual and collective actors. The actors have a variety of motives and modes of reasoning, and the collective actors are forced to somehow aggregate the actions and wants of subordinate actors. And it isn't possible to anticipate with confidence the choices that the actors will make in response to changing circumstances. At a very high level of abstraction, it is the task of game theory to model strategic decision-making over a sequence of choices (problems of strategic rationality); but the tools of game theory are too abstract to allow modeling of specific complex social interactions.

A second feature of unpredictability in extended social processes derives from the fact that the agents themselves are not fixed and constant throughout the process. The experience of democracy activism potentially changes the agent profoundly -- so the expectations we would have had of his/her choices at the beginning may be very poorly grounded by the middle and end. Some possible changes may make a very large difference in outcomes -- actors may become more committed, more open to violence, more ready to compromise, more understanding of the grievances of other groups, ... This is sometimes described as endogeneity -- the causal components themselves change their characteristics as a consequence of the process.

So the actors change through the social process; but the same is often true of the social organizations and institutions that are involved in the process. Take contentious politics -- it may be that a round of protests begins around a couple of loose pre-existing organizations. As actors seek to achieve their political goals through collective action, they make use of the organizations for their communications and mobilization resources. But some actors may then also attempt to transform the organization itself -- to make it more effective or to make it more accommodating to the political objectives of this particular group of activists. (Think of Lenin as a revolutionary organization innovator.) And through their struggles, they may elicit changes in the organizations of the "forces of order" -- the police may create new tactics (kettling) and new sub-organizations (specialized intelligence units). So the process of change is likely enough to transform all the causal components as well -- the agents and their motivations as well as the surrounding institutions of mobilization and control. Rather than a set of billiard balls and iron rods with fixed properties and predictable aggregate consequences, we find a fluid situation in which the causal properties of each of the components of the process are themselves changing.

One way of trying to handle the indeterminacy and causal complexity of these sorts of causal processes is to give up on the goal of arriving at specific "point" predictions about outcomes and instead concentrate on tracing out a large number of possible scenarios, beginning with the circumstances, actors, and structures on the ground. In some circumstances we may find that there is a very wide range of possible outcomes; but we may find that a large percentage of the feasible scenarios or pathways fall within a much narrower range. This kind of reasoning is familiar to economists and financial analysts in the form of Monte Carlo simulations. And it is possible that the approach can be used for modeling likely outcomes in more complex social processes as well -- war and peace, ethnic conflict, climate change, or democracy movements.
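As a hedged illustration of the scenario-ensemble idea (the pathway model and probabilities below are invented, not drawn from any real policy analysis), one can generate many path-dependent trajectories and report the range containing most of them, rather than a point prediction:

```python
import random

random.seed(1)

def one_scenario():
    """One hypothetical pathway: an outcome built from a sequence of
    contingent junctures.  The drift parameters are invented."""
    support = 0.5
    for _ in range(20):                       # twenty contingent junctures
        support += random.gauss(0, 0.05)      # path-dependent drift
        support = min(max(support, 0.0), 1.0) # outcome bounded in [0, 1]
    return support

outcomes = sorted(one_scenario() for _ in range(10_000))

# No point prediction -- instead, the band that most feasible pathways fall in.
low, high = outcomes[500], outcomes[9500]     # middle 90% of scenarios
print(f"90% of scenarios end between {low:.2f} and {high:.2f}")
```

The individual endpoint of any one run is unpredictable, but the distribution over the ensemble is informative, which is exactly the shift in explanatory goal described above.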

Agent-based modeling is one component of approaches like these (link).  This means taking into account a wide range of social factors -- agents, groups, organizations, institutions, states, popular movements, and then modeling the consequences of these initial assumptions. Robert Axelrod and colleagues have applied a variety of modeling techniques to these efforts (link).

Another interesting effort to carry out such an effort is underway at the RAND Pardee Center, summarized in a white paper called Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis. Here is how the lead investigators describe the overall strategy of the effort:
This report describes and demonstrates a new, quantitative approach to long-term policy analysis (LTPA).  These robust decisionmaking methods aim to greatly enhance and support humans’ innate decisionmaking capabilities with powerful quantitative analytic tools similar to those that have demonstrated unparalleled effectiveness when applied to more circumscribed decision problems.  By reframing the question “What will the long-term future bring?” as “How can we choose actions today that will be consistent with our long-term interests?” robust decisionmaking can harness the heretofore unavailable capabilities of modern computers to grapple directly with the inherent difficulty of accurate long-term prediction that has bedeviled previous approaches to LTPA. (iii)
LTPA is an important example of a class of problems requiring decisionmaking under conditions of  deep uncertainty—that is, where analysts do not know, or the parties to a decision cannot agree on, (1) the appropriate conceptual models that describe the relationships among the key driving forces that will shape the long-term future, (2) the probability distributions used to represent uncertainty about key variables and parameters in the mathematical representations of these conceptual models, and/or (3) how to value the desirability of alternative outcomes. (iii)
And here, in a nutshell, is how the approach is supposed to work:
This study proposes four key elements of successful LTPA: 
• Consider large ensembles (hundreds to millions) of scenarios.
• Seek robust, not optimal, strategies.
• Achieve robustness with adaptivity.
• Design analysis for interactive exploration of the multiplicity of plausible futures.
 
These elements are implemented through an iterative process in which the computer helps humans create a large ensemble of plausible scenarios, where each scenario represents one guess about how the world works (a future state of the world) and one choice of many alternative strategies that might be adopted to influence outcomes. Ideally, such ensembles will contain a sufficiently wide range of plausible futures that one will match whatever future, surprising or not, does occur—at least close enough for the purposes of crafting policies robust against it.  (xiii)
Thus, computer-guided exploration of scenario and decision spaces can provide a prosthesis for the imagination, helping humans, working individually or in groups, to discover adaptive near-term strategies that are robust over large ensembles of plausible futures. (xiv)
The hard work of this approach is to identify the characteristics of policy levers, exogenous uncertainties, measures, and relationships (XLRM).  Then the analysis turns to identifying a very large number of possible scenarios, depending on the initial conditions and the properties of the actors and organizations. (This aspect of the analysis is analogous to multiple plays of a simulation game like SimCity.) Finally, the approach requires aggregating the large number of scenarios so that the analysis can reach some conclusions about the distribution of futures entailed by the starting position and the characteristics of the actors and institutions.  And the method attempts to assign a measure of "regret" to outcomes, in order to identify the policy steps that might be taken today that lead to the least regrettable outcomes in the distant future.
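The regret logic can be sketched with a toy payoff table. The strategies, future states, and payoffs below are invented; the point is only the minimax-regret computation itself:

```python
# Rows: candidate strategies; columns: plausible future states of the world.
payoff = {
    "aggressive": [9, 2, 1],
    "moderate":   [6, 5, 4],
    "hedged":     [5, 5, 5],
}
n_states = 3

# Best achievable payoff in each future state, over all strategies.
best_per_state = [max(payoff[s][k] for s in payoff) for k in range(n_states)]

def max_regret(strategy):
    """Worst-case regret: the largest shortfall from the best achievable
    payoff, across all plausible futures."""
    return max(best_per_state[k] - payoff[strategy][k] for k in range(n_states))

# A robust strategy minimizes worst-case regret across the ensemble,
# rather than maximizing payoff in any single predicted future.
robust = min(payoff, key=max_regret)
print(robust)  # "moderate" -- worst-case regret 3, vs 4 for the others
```

Note that "aggressive" is best in one future and worst in the others; the robust choice sacrifices the best case to avoid deep regret anywhere in the ensemble.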

It appears, then, that there are computational tools and methods that may prove useful for social explanation and social prediction -- not of single outcomes, but of the range of outcomes that may be associated with a set of interventions, actors, and institutions.

Monday, December 13, 2010

Diagrams and economic thought

source: The Paretian System (link)

The most vivid part of any undergraduate student's study of economics is probably the diagrams.  Economists since Walras, Pareto, and Marshall have found it useful to express their theories and hypotheses making use of two-axis diagrams, allowing for very economical formulation of fundamental relationships. Supply-demand curves, production functions, and a graph of diminishing marginal product all provide a way of making geometrical sense of a given economic principle or hypothesis.  They allow us to visualize the relationships that are postulated among a set of factors.

Mark Blaug has made a long and fruitful career out of his remarkable ability to place economic thought into its context (Economic Theory in Retrospect (1962), The Methodology of Economics: Or, How Economists Explain (1992)).  Now he has collaborated with Peter Lloyd to produce Famous Figures and Diagrams in Economics (2010), and the book is a marvelous contribution.

The book is organized into several large sections: Demand and supply curve analysis; Welfare economics; Special markets; General equilibrium analysis; Open economies; Macroeconomic analysis; and Growth and income distribution.  Experts have been recruited to write short, technical but accessible essays on some 58 topics, including discussion of about 150 diagrams.  

The figures that the book considers pretty much reproduce the history of modern economic thought.  And, indeed, some figures have been repeatedly rediscovered; Blaug attributes the "Marshallian cross" to Cournot (1838), Rau (1841), Dupuit (1844), Mangoldt (1863), and Jenkin (1870).  Almost all the examples are drawn from the history of orthodox neo-classical economics; rare exceptions are Joan Robinson's "graph of discrimination" and August Lösch's "market areas".  The main insights of classical economics are equally amenable to presentation through diagrams, so it is interesting that the classical economists (including Marx) were not particularly inclined to use them.  Here is a diagram not included in the book, representing Michio Morishima's effort to express some of Ricardo's central economic intuitions:


It is worth thinking a bit about what a diagram is, from a cognitive point of view.  To start, it is not a data graph; a diagram does not generally provide a summary of actual economic variables over time, such as unemployment.  But generally an economic diagram is not simply a graph of a given mathematical function either, plotting the value of a function over part of the domain of the independent variable.  We need more than a graphing calculator to create a useful economic diagram.

Rather, an economic diagram is a stylized representation of the behavior and interaction of (often) several variables in a range of interest.  Take the most fundamental diagram of neoclassical economics, the supply-demand diagram.  We are asked to consider "supply" and "demand" over a range of "price".  One curve represents the quantity of the good that will be produced at each price, from low to high; the other curve represents the quantity that will be purchased over the same range of prices.  The intersection of the curves is the point of interest; it is the equilibrium at which quantity demanded equals quantity supplied.  The shape of the curve is significant; a straight line represents the view that supply and demand are linear with respect to price, whereas a curved line represents a non-linear relation between quantity and price.  (Each increment in price stimulates a smaller change in quantity.)
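The equilibrium reading of the diagram amounts to solving for the price at which the two curves intersect. With made-up linear coefficients:

```python
# Linear sketch (coefficients invented): supply Qs = -2 + 3P, demand Qd = 10 - 2P.
def supply(p):
    return -2 + 3 * p

def demand(p):
    return 10 - 2 * p

# At equilibrium Qs = Qd:  -2 + 3P = 10 - 2P  ->  5P = 12  ->  P* = 2.4
p_star = 12 / 5
q_star = supply(p_star)

assert abs(supply(p_star) - demand(p_star)) < 1e-12
print(p_star, q_star)  # 2.4 5.2
```

The diagram conveys this same intersection at a glance, and, unlike the algebra, shows immediately how the equilibrium shifts when either curve moves.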

Here are some of the uses of diagrams in economics that Blaug and Lloyd mention in their introduction:
Figures and diagrams have been used in economic theory in several ways.  They have been used as a device to discover economic results; theorems or properties of models; or comparative static propositions and dynamic propositions. They have been used to prove some results. And they have been used as an expository device. (5)
They go on to quote Marshall:
It happens with a few unimportant exceptions all the results which have been obtained by the application of mathematical methods to pure economic theory can be obtained independently by the method of diagrams.  Diagrams represent simultaneously to the eye the chief forces which are at work, laid out, as it were, in a map; and thereby suggest results to which attention has not been directed by the use of methods of mathematical analysis. (5)
We might imagine that economic diagrams are purely mathematical constructs, and we might suppose that we have little choice in the way that a diagram is constructed.  But Edward Tufte has quite a bit to say on this subject in a series of books beginning with The Visual Display of Quantitative Information.  Essentially Tufte's message is that quantitative ideas can be conveyed in better and worse ways, and that much of the communication we do about quantities is misleading.  Conveying a quantitative relationship through a diagram can be done more or less insightfully; it is up to the economist to find a concise way of representing the relationships he/she is interested in exploring.

source: EJ Marey's train schedule, Paris to Lyons, in Edward Tufte,  The Visual Display of Quantitative Information

Blaug and Lloyd take some note of the "presentation aesthetics" of economic diagrams when they discuss modern methods of presentation:
In many areas of economic theory, the way in which economists understand economic concepts and propositions is through figures and diagrams.  What teacher of economic theory has not seen the dawn of understanding come over students when, failing to understand an exposition of some complex model in algebra or calculus, they are presented with a simple illustration? ... 
One can comprehend relationships among a number of variables (as in the box diagrams) or the effects of shifting curves or multiple equilibria more readily than in the corresponding algebra. This advantage has been increased by modern technologies.  Textbooks today use multi-coloured diagrams to great effect and the delivery of diagrams in classroom from computer-based programs allow overlays and other graphical techniques that aid the exposition of complex ideas. (8-9)
One of my favorite economic diagrams is the one introduced by Mark Elvin to represent his theory of a high-level equilibrium trap in agricultural development in The Pattern of the Chinese Past.


This diagram represents several different kinds of historical change in one compact figure: gradual technical progress along a production curve, shift of production curves through technical innovation, and the maximum production possibility curve that lies above each of these.  The axes represent "total output" and "rural population." The concave shape of each curve has a very specific economic and demographic meaning: as population grows within a given mix of techniques, output grows more slowly; so average output per capita approaches the subsistence line OS.  The HLET is graphically and laconically indicated on the upper right quadrant of the graph; there is no further room for technical improvement, and population has increased to the point where there is no surplus to fund radical technological innovation.  (Elvin's theory of the high-level equilibrium trap is discussed in my Microfoundations, Methods, and Causation; link.)
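A toy calculation captures the logic of the trap; the square-root production function and all numbers here are invented, chosen only so that per-capita output falls along a concave curve as population grows until it meets the subsistence line:

```python
import math

SUBSISTENCE = 1.0  # output per capita needed to survive (the line OS)

def output(pop, tech):
    """Concave production curve: diminishing returns to labor at a fixed
    mix of techniques (one curve in Elvin's figure)."""
    return tech * math.sqrt(pop)

# Along one curve, per-capita output falls as population grows...
per_capita = [output(n, tech=10) / n for n in (25, 64, 100)]
assert per_capita == sorted(per_capita, reverse=True)

# ...until, at pop = 100 with tech = 10, output per head reaches subsistence.
surplus = output(100, 10) - SUBSISTENCE * 100
print(surplus)  # 0.0 -- no surplus remains to fund radical innovation
```

This is the upper-right quadrant of Elvin's figure in miniature: population growth has absorbed the whole product, so escape requires shifting to an entirely new production curve rather than moving along the old one.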

Saturday, March 27, 2010

Skinner's spatial imagination


images: presentations of Skinner's data by Center for Geographic Analysis, Harvard University, AAS 2010


G. William Skinner was a remarkably generous scholar who inspired and assisted several generations of China specialists.  (Here is a link to a remembrance of Bill.)  He was prolific and fertile, and there is much to learn from rereading his work. There is quite a corpus of unpublished work in the form of research reports and conference papers.  Rereading this work is profoundly stimulating. It holds up very well as a source of ideas about social science analysis of concrete historical and social data, and there are many avenues of research that remain to be further explored.

Skinner is best known for his efforts to provide regional systems analysis of spatial patterns in China.   He thought of a social-economic region as a system of flows of people, goods, and ideas.  He argued for the crucial role that water transport played in knitting together the economic activities of a region in the circumstances of pre-modern transport.   

Skinner's work demonstrated the great value of spatial analysis.  Patterns emerge visually once we’ve selected the appropriate level of scope.  Mapping social and economic data is tremendously insightful.  He was also highly sensitive to the social and cultural consequences of these flows of activity.  For example, patterns of gender ratios show a pronounced regional pattern; Skinner demonstrates the relevance of core-periphery structure to social-cultural variables such as this one. 

Skinner plainly anticipated the historical GIS revolution conceptually.  And this is a feature of imagination, not technology.

A classic series of articles on the spatial structure of the Chinese countryside in the 1960s provided an important basis for rethinking “village” society. They also provided a rigorous application of central place theory to the concrete specificity of China.  Here are several maps drawn from these essays ("Marketing and Social Structure in Rural China." Journal of Asian Studies 24 (1-3), 1964-65). Here Skinner is testing central place theory and its prediction that economic space is structured as a system of nested hexagons, with places linked by roads.





Another key contribution of Skinner's work is his analysis of China in terms of a set of eight or nine “macroregions”.  He argues that China was neither a single national economic system nor a set of separate provincial economies.  Instead, it consisted of a small number of “macroregions” of trade, commerce, and population activity, linked by water transport.  And macroregions were internally differentiated into core and periphery.  

Skinner used meticulous county-level databases to map the economic and demographic boundaries of each macroregion.  Skinner identified core and periphery in terms of population density, agricultural use, and other key variables.  And he then measured a host of other variables – female literacy, for example – and showed that these vary systematically from core to periphery.  There is also an important ecological dimension to the argument; Skinner demonstrated that there is a flow of fertility from periphery to core as a result of the transfer of food and fuel from forests to urban cores.  (This analysis is developed in "Regional Urbanization in Nineteenth-Century China" in The City in Late Imperial China, edited by G. W. Skinner, Stanford University Press, 1977.)  Here are three maps developed by Skinner and his collaborators on the basis of the macroregions analysis.



This is a particularly expressive map of the Lower Yangzi macroregion, differentiated into four levels of core and periphery.  This is pretty much the full development of the macroregional analysis.


Another key idea in Skinner's work is his analysis of city systems into a spatial and functional hierarchy. He argued that it is possible to distinguish clearly between higher-level and lower-level urban places, and that there is an orderly arrangement of economic functions and marketing scope associated with the various urban places in a macroregion.


So regional analysis of China is a key contribution in Skinner's work. But Skinner did not restrict his research to China alone. He also did significant work on Japanese demography and family structure and female infanticide in the 1980s (for example, "Reproductive Strategies and the Domestic Cycle among Tokugawa Villagers," an AAS presentation in 1988).

And he brought his regional systems analysis to bear on France in an extended piece of research in the late 1980s. The maps that follow are drawn from an unpublished conference paper titled "Regional Systems and the Modernization of Agrarian Societies: France, Japan, China," dated 1991. This paper builds upon a 1988 paper titled "The Population Geography of Agrarian Societies: Regional Systems in Eurasia."

This analysis builds a view of France as a set of interrelated regions with core-periphery structure.  Through the series of working maps Skinner painstakingly constructs an empirically based analysis of the economic regions of France in the mid-nineteenth century.  And Skinner then asks one of his typically foundational questions: how do these geographical features play a causal role in cultural and demographic characteristics?





This map of never-married/married female ratios is one illustration of Skinner's effort to relate social, cultural, and demographic variables to the core-periphery structure of a region.  Across the map of France, the pattern of high ratios corresponds fairly well to the regions identified by demographic and agricultural factors.  And this serves to confirm the underlying idea -- that economic regionalization has major consequences for cultural and demographic behavior.


The same is true of patterns of female life expectancy and net migration; here again we find the kind of regionalization of important social variables that Skinner documented in great detail in late imperial China.


Finally, Skinner also played an important role as a “macro-historian” of China.  His 1985 Presidential Address to the Association for Asian Studies was a tour-de-force, bringing his macroregional analysis into a temporal framework (Skinner, G. William. 1985. Presidential Address: The Structure of Chinese History. Journal of Asian Studies XLIV (2):271-92).  In this piece he demonstrates a “long-wave” set of patterns of economic growth and contraction, and argues that we understand China’s economic history better when we see these sub-national patterns.  He analyzes the economic and population history of North China and the Southeast Coast, two widely separated macroregions, over several centuries, and demonstrates that the two regions display dramatically different economic trajectories over the longue durée.  Skinner brings Braudel to China.

Here is the pattern he finds for two macroregions over a centuries-long expanse of time.  And significantly, if these patterns were aggregated into a “national” pattern, the result would be essentially flat, since the two macroregions are significantly out of phase in their boom and bust cycles.
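The aggregation point is easy to see in a toy calculation (the series below are invented sine waves, not Skinner's data): two regional cycles of equal amplitude that are fully out of phase sum to a flat "national" series, which is exactly why the sub-national view is informative.

```python
import numpy as np

years = np.arange(400)   # a centuries-long span (illustrative)
period = 200             # hypothetical length of one macrocycle, in years

# Two macroregional series booming and busting exactly out of phase.
phase = 2 * np.pi * years / period
north_china = 100 + 30 * np.sin(phase)
southeast_coast = 100 + 30 * np.sin(phase + np.pi)

# Aggregating into a "national" series washes the cycles out entirely.
national = north_china + southeast_coast
```

Each regional series swings over a range of 60 units, while the national sum stays at 200 throughout; the regional dynamics are invisible at the national level.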



Finally, an enduring contribution that Skinner made is his cheerful disregard of discipline. Economic anthropology, regional studies, demography, urban studies, history … Skinner moved freely among all these and more. It was topics and questions, not disciplinary strictures, that guided Skinner’s fertile and rigorous imagination.  And area specialists and social scientists alike can gain much from continued study of his research.  Fortunately, work is underway to make Skinner's unpublished research and data available to other scholars.  Here are some major projects:
  • Data and maps are being curated and presented at Harvard. Here is a beta site and here is the platform the China GIS team is using at AfricaMap.
  • The Skinner Archive at Harvard (link)
  • Skinner's unpublished papers and research materials are being digitized and presented at the University of Washington.  Here is a link.
  • The China Historical GIS project at Fudan University is presenting an ambitious digital mapping collection as well (link). 
(Presented at the Association for Asian Studies, Philadelphia, March 2010; panel on Skinner's legacy.)

Thursday, December 3, 2009

Current historical sociology: George Steinmetz




George Steinmetz, professor of sociology at the University of Michigan, is a leading scholar in the contemporary field of historical sociology.  His most recent book is The Devil's Handwriting: Precoloniality and the German Colonial State in Qingdao, Samoa, and Southwest Africa, and his volume The Politics of Method in the Human Sciences: Positivism and Its Epistemological Others is a major contribution to current debates on the methods and aims of the social sciences today.  Here is a video interview and conversation I conducted with George at the University of Michigan this month.  Quite a few of George's academic papers are available on his website.

This is part of an ongoing series of interviews I am conducting with innovative social scientists.  The goal of the series is to provide a forum in which some very productive thinkers reflect on the ideas and perspectives that are driving innovative work in sociology today.  Visit my research page for links to prior interviews.  All the videos are available on YouTube.

Tuesday, December 1, 2009

Styles of epistemology in world sociology



One of the basic organizing premises of the sociology of science is that there are meaningful differences in the conduct of a given area of science across separate communities, all the way down.  There is no pure language and method of science into which diverse research traditions ought to be translated.  Rather, there are complex webs of assumptions about ontology, evidence, observation, theory, method, and reasoning; and there are highly significant differences in the institutions through which scientific activities are undertaken and young scientists are trained.  Sociologists and philosophers such as Thomas Kuhn, David Bloor, Paul Feyerabend, Bruno Latour, and Wiebe Bijker have attempted to lay out the reasons for thinking these forms of difference are likely to be ubiquitous, and several of them have done detailed work in specific areas of scientific knowledge to demonstrate some of these differences.  (Here is a post on Kuhn's approach to the history of science.)

In this vein is a genuinely fascinating and important article by Gabriel Abend with the evocative title, "Styles of Sociological Thought: Sociologies, Epistemologies, and the Mexican and U.S. Quests for Truth" (Sociological Theory 24:1, 2006).  Abend attempts to take the measure of a particularly profound form of difference that might be postulated within the domain of world sociology: the idea that different national traditions of sociology may embody different epistemological frameworks that make their results genuinely incommensurable.  Abend offers an empirical analysis of the possibility that the academic disciplines of Mexican and U.S. sociology embody significantly different assumptions when it comes to articulating the role and relationships between "evidence" and "theory."  Here is how he summarizes his findings:
[The] main argument is that the discourses of Mexican and U.S. sociologies are consistently underlain by significantly different epistemological assumptions. In fact, these two Denkgemeinschaften are notably dissimilar in at least four clusters of variables ... : their thematic, theoretical, and methodological preferences; their historical development and intellectual influences; the society, culture, and institutions in which they are embedded; and the language they normally use. (2)
The core of Abend's analysis is an empirical study involving content analysis of four leading sociology journals in Mexico and the U.S. and sixty articles, randomly chosen through a constrained process.  He analyzes the articles with respect to the practices that each represents when it comes to the use of empirical evidence and sociological theory.  And he finds that the differences between the Mexican articles and the U.S. articles are striking.  Consider this tabulation of results on the question of the role of evidence and theory taken by the two sets of articles:



His central findings include these:
  • U.S. and Mexican sociologists have very different understandings of "theory" and the ways in which theories relate to data.  U.S. sociologists conform to Merton's idea of "theories of the middle range" in which a theory relates fairly directly to empirical observations through its deductive consequences.  Mexican sociologists tend to use theories and theoretical concepts as ways of interpreting or thematizing large social phenomena.
  • U.S. sociologists see the burden of their work to fall in the category of testing or confirming sociological hypotheses.  Mexican sociologists see the burden of their work in detailing and analyzing complex social phenomena at a fairly factual level.  "93 percent of M-ART are principally driven by the comprehension of an empirical problem" (10).
  • U.S. sociologists are strongly wedded to the hypothetico-deductive model of confirmation and explanation.  This model plays very little role in the arguments presented in the sample of articles from Mexican sociologists.
  • U.S. and Mexican sociologists have very different assumptions about "scientific objectivity".  U.S. authors aspire to impersonal neutrality, whereas Mexican authors embrace the fact that their analysis proceeds from a particular perspective.
  • U.S. authors attempt to exclude value judgments; Mexican authors incorporate value judgments into their empirical analysis.  "Among U.S. sociologists, the standard reference is Weber’s purportedly sharp distinctions between value and fact, Wertfreiheit (value freedom or ethical neutrality) and Wertbezogenheit (value relevance or value relatedness), and context of discovery and context of justification" (22).   Concepts such as "oppression," "exploitation," and "domination" are used as descriptive terms in many of the Mexican research articles.
Here is a striking tabulation of epistemic differences between the two samples:



Abend believes that these basic epistemological differences between U.S. and Mexican sociology imply a fundamental incommensurability of results:
To consider the epistemological thesis, let us pose the following thought experiment. Suppose a Mexican sociologist claims p and a U.S. sociologist claims not-p.  Carnap’s or Popper’s epistemology would have the empirical world arbitrate between these two theoretical claims. But, as we have seen, sociologists in Mexico and the United States hold different stances regarding what a theory should be, what an explanation should look like, what rules of inference and standards of proof should be stipulated, what role evidence should play, and so on. The empirical world could only adjudicate the dispute if an agreement on these epistemological presuppositions could be reached (and there are good reasons to expect that in such a situation neither side would be willing to give up its epistemology). Furthermore, it seems to me that my thought experiment to some degree misses the point. For it imagines a situation in which a Mexican sociologist claims p and a U.S. sociologist claims not-p, failing to realize that that would only be possible if the problem were articulated in similar terms. However, we have seen that Mexican and U.S. sociologies also differ in how problems are articulated—rather than p and not-p, one should probably speak of p and q.  I believe that Mexican and U.S. sociologies are perceptually and semantically incommensurable as well. (27)
Though Abend's analysis is comparative, I find his analysis of the epistemological assumptions underlying the U.S. cases to be highly insightful all by itself.  In just a few pages he captures what seem to me to be the core epistemological assumptions of the conduct of sociological research in the U.S.  These include:
  • the assumption of "general regular reality" (the assumption that social phenomena are "really" governed by underlying regularities)
  • deductivism 
  • epistemic objectivity
  • a preference for quantification and abstract vocabulary
  • separation of fact and value; value neutrality
Abend is deliberately agnostic about the epistemic value of the two approaches; he is explicit in saying that he is interested in discovering the differences, not assessing the relative truthfulness of the two approaches.  But we cannot really escape the most basic question: where does truth fall in this analysis?  What is the status of truth claims in the two traditions?  Are there rational grounds for preferring one body of statements over the other, or for favoring one of these epistemologies over its alternative north or south?

This is important and original work.  Abend's research on this topic is an effort well worth emulating; it adds a great deal of depth and nuance to the effort to provide a philosophy of sociology.  I hope there will be further analysis along these lines by Abend and others.

(There is a lot of social observation and theory in the image above -- and no pretense of academic objectivity.  Class opposition, global property systems, and a general impression of deep social conflict pervade the image.)

Friday, October 30, 2009

Causal realism for sociology



The subject of causal explanation in the social sciences has been a recurring thread here (thread). Here are some summary thoughts about social causation.

First, there is such a thing as social causation. Causal realism is a defensible position when it comes to the social world: there are real social relations among social factors (structures, institutions, groups, norms, and salient social characteristics like race or gender). We can give a rigorous interpretation to claims like "racial discrimination causes health disparities in the United States" or "rail networks cause changes in patterns of habitation".

Second, it is crucial to recognize that causal relations depend on the existence of real social-causal mechanisms linking cause to effect. Discovery of correlations among factors does not constitute the whole meaning of a causal statement. Rather, it is necessary to have a theory of the mechanisms and processes that give rise to the correlation. Moreover, it is defensible to attribute a causal relation to a pair of factors even in the absence of a correlation between them, if we can provide evidence supporting the claim that there are specific mechanisms connecting them. So mechanisms are more fundamental than regularities.

Third, there is a key intellectual obligation that goes along with postulating real social mechanisms: to provide an account of the ontology or substrate within which these mechanisms operate. This I have attempted to provide through the theory of methodological localism (post) -- the idea that the causal nexus of the social world is constituted by the behaviors of socially situated and socially constructed individuals. To put the claim in its extreme form, every social mechanism derives from facts about institutional context, the features of the social construction and development of individuals, and the factors governing purposive agency in specific sorts of settings. And different research programs target different aspects of this nexus.

Fourth, the discovery of social mechanisms often requires the formulation of mid-level theories and models of these mechanisms and processes -- for example, the theory of free-riders. By mid-level theory I mean essentially the same thing that Robert Merton meant to convey when he introduced the term: an account of the real social processes that take place above the level of isolated individual action but below the level of full theories of whole social systems. Marx's theory of capitalism illustrates the latter; Jevons's theory of the individual consumer as a utility maximizer illustrates the former. Coase's theory of transaction costs is a good example of a mid-level theory (The Firm, the Market, and the Law): general enough to apply across a wide range of institutional settings, but modest enough in its claim of comprehensiveness to admit of careful empirical investigation. Significantly, the theory of transaction costs has spawned major new developments in the new institutionalism in sociology (Mary Brinton and Victor Nee, eds., The New Institutionalism in Sociology).

And finally, it is important to look at a variety of typical forms of sociological reasoning in detail, in order to see how the postulation and discovery of social mechanisms play into mainstream sociological research. Properly understood, there is no contradiction between the effort to use quantitative tools to chart the empirical outlines of a complex social reality, and the use of theory, comparison, case studies, process-tracing, and other research approaches aimed at uncovering the salient social mechanisms that hold this empirical reality together.

Tuesday, September 29, 2009

Alternative economists


Traditional neoclassical economics has missed the mark quite a bit in the past two years. There is the financial and banking crisis, of course; neoclassical economists haven't exactly succeeded in explaining or "post-dicting" the crisis and recession through which we've traveled over the past year and more. But perhaps more fundamentally, neoclassical economics has failed to provide a basis for understanding the nuance and range of our economic institutions -- nationally or globally. Contemporary academic economics selects a pretty narrow range of questions as being legitimate subjects for economists to study; so topics such as hunger, labor unions, alternative economic institutions, and the history of economic thought generally get fairly short shrift. Don't expect to see the perspectives of Steven Marglin or Samuel Bowles in Economics 101 in most U.S. universities! The profession has a pretty narrow conception of what "economics" is.

And yet, when intelligent citizens think about the key problems of economics in a broader sense -- the problems that we really care about, the problems that will really influence our quality of life -- we certainly think of something broader than the mathematics of supply and demand or the solution of a general equilibrium model. We're ultimately not as interested in the formalisms of market equilibrium as we are in an analysis of the institutions that define the context of economic activity. We want to know more about the ways in which features of economic organization and the basic institutions of our economy influence individual behavior; we are curious about how our institutions create distributive outcomes that fundamentally affect people's lives differently across social groups. We would like to have a clearer understanding of some of the ways that non-economic factors -- race, gender, age, city -- influence people's economic outcomes. We want to know how the institutions and incentives defined by our economic system bring about effects on the natural environment. And we are often curious about how it might be possible to reform our basic economic institutions in ways that are more favorable to human development. In other words, we are often brought to think along the lines of some of the great dissenters in the economics tradition -- Polanyi, Dobb, Marx, Sen, McCloskey, and Dasgupta (An Inquiry into Well-Being and Destitution), for example. (In a very contemporary and topical way, Richard Florida takes on a lot of these issues; see his blog, the Creative Class.)

It is therefore pleasing to find that some publishers like Routledge are bringing out serious academic works in what they refer to as "social economics". The Routledge series, Advances in Social Economics, has a list of titles representing recent work that is rigorous and insightful but that explores other points of the compass within the field of political economy. I certainly hope that university libraries around the world are paying attention to this series; these are titles that can add a lot to the debate.

One book in the series in particular catches my eye. My colleague Bruce Pietrykowski raises an important set of "alternative" economic issues in his recent book, The Political Economy of Consumer Behavior: Contesting Consumption. (Here is a preview of the book from Routledge.) The book is a valuable contribution and very much worth reading.

Pietrykowski has two intertwined goals in the book. First, he wants to provide a broader basis for understanding consumer behavior and psychology than is presupposed by orthodox economists. And second, he wants to help contribute to a broader understanding of the scope, methods, and content of political economy than is provided by mainstream economics departments today.

Here is his preliminary statement of his goal:
I argue that in order to arrive at a more compelling account of consumer behavior we need to transform the discipline of economics by opening up the borders between economics and sociology, geography, feminist social theory, science studies and cultural studies. (2)
The fact of consumption is a crucial economic reality in any economy. How do individuals make choices about what and how to consume? Pietrykowski makes the point that consumption behavior shows enormous heterogeneity across groups defined in terms of ethnicity, gender, region, and time -- a point made here as well (post). So a single abstraction representing the universal consumer won't do the job. The standard economic assumption of the rationally self-interested consumer with consistent and complete preference rankings is seriously inadequate; instead, we need to develop a more nuanced set of views about the psychological and social factors that influence consumer preferences and choices.

So it is important to develop alternative theoretical tools in terms of which to analyze consumer psychology. Here Pietrykowski draws on ideas from Karl Marx (fetishism of commodities), Amartya Sen, and other political economists who have attempted to provide "thick" descriptions of economic behavior. The point here is not that we cannot usefully investigate and theorize about consumer behavior; rather, Pietrykowski is looking for an analytical approach that operates at the "middle range" between complete formal abstraction and the writing of many individual biographies.

Second, Pietrykowski is interested in contributing to a "re-mapping" of the knowledge system of economic thought, by exploring some of the alternative constructions that have been bypassed by the profession since World War II. (These arguments are largely developed in Chapter Two.) Pietrykowski begins with the assumption that the discipline and profession of economics is itself socially constructed and contingent; it took shape in response to a fairly specific set of theoretical and methodological ideas, it was subject to a variety of social and political pressures, and there were viable alternatives at every turn. Here is how he formulates the social construction perspective:
The claim that economic knowledge is socially constructed allows for an understanding of the field as the outcome of interpretation, negotiation and contestation over the constituents of economic knowledge and the legitimacy of particular practices, methods, and techniques of analysis. (19)
Like Marion Fourcade, Pietrykowski argues that there is a great deal of path dependence in the development of economics as a discipline and profession; and there are identifiable turning points where we can judge with confidence that themes that were eliminated at a certain time would have led to a substantially different intellectual system had they persisted. Pietrykowski's analysis of the fifty years of development of professional economics in the first half of the twentieth century is a very nice contribution to a contemporary history of science, and very compatible with Fourcade's important work in Economists and Societies: Discipline and Profession in the United States, Britain, and France, 1890s to 1990s.

The discipline of "home economics" in the 1920s and 1930s is the example that Pietrykowski examines in detail. "This task ... of defining economics as a distinct professional discipline involved both recruitment and exclusion" (28). Here is how Pietrykowski describes home economics:
Departments of home economics were quite diverse in the early twentieth century. Commonly associated with maintaining and preserving the cult of domesticity, home economics programs emerged from multiple sources including progressive political reform of public health, labor conditions, and household management. (35)
And, of course, home economics did not long remain a part of the professional discipline of economics. Pietrykowski looks in detail at the way in which home economics developed as an academic discipline at Cornell University; and he documents some ways in which the discipline of economics was constructed in a gendered way to exclude this way of understanding scientific economics: "The decision was made that women involved in the emerging field of home economics were to be excluded from the AEA.... Economics was to be concerned neither with women's activities in the home nor with women's activities in the workplace" (28-29).

Pietrykowski develops his full analysis of consumption by focusing on three heterodox approaches to understanding consumption: home economics and feminist analysis, psychological and behavioral research on consumer behavior (George Katona), and Fordism and the theory of mass consumption. He also gives some attention to the emerging importance of experimental economics as a tool for better understanding real economic decision-making and behavior (20-25).

After discussing these heterodox theories, Pietrykowski illustrates the value of the broader framework by examining three fascinating cases of consumption: the complex motivations that bring consumers to purchase the Toyota Prius, the motivations behind the Slow Food movement, and the choice that people in some communities make to use alternative local currencies. These are each substantial examples of arenas where consumers are choosing products in ways that make it plain that their choices are influenced by culture, values, and commitments no less than calculations of utilities and preferences.

Between the theories and the cases, Pietrykowski offers a remarkably rich rethinking of how people choose to consume. He makes real sense of the idea that consumption is socially constructed (drawing sometimes on the social construction of technology (SCOT) literature). He demonstrates that models based on the theory of the universal consumer are not likely to fit well with actual economic outcomes. And he makes a strong and persuasive case for the need for academic economics to expand its horizons.

I find it interesting to notice that Pietrykowski's account of the ascendancy of neoclassical economics since the 1950s converges closely with prior postings on positivist philosophy of science. One of the explicit appeals made by neoclassical economists was a methodological argument: they argued that their deductive, formal, and axiomatic treatments of economic fundamentals were more "scientific" than case studies and thick descriptions of economic behavior. So many of the failings of mainstream economic thought today can be traced to the shortcomings of the positivist program for the social sciences that was articulated in the middle of the twentieth century.

Tuesday, August 25, 2009

Revisiting Popper


Karl Popper's most commonly cited contribution to philosophy and the philosophy of science is his theory of falsifiability (The Logic of Scientific Discovery, Conjectures and Refutations: The Growth of Scientific Knowledge). (Stephen Thornton has a very nice essay on Popper's philosophy in the Stanford Encyclopedia of Philosophy.) In its essence, this theory is an alternative to "confirmation theory." Contrary to positivist philosophy of science, Popper doesn't think that scientific theories can be confirmed by more and more positive empirical evidence. Instead, he argues that the logic of scientific research is a critical method in which scientists do their best to "falsify" their hypotheses and theories. And we are rationally justified in accepting theories that have been severely tested through an effort to show they are false -- rather than accepting theories for which we have accumulated a body of corroborative evidence. Basically, he argues that scientists are in the business of asking this question: what is the most unlikely consequence of this hypothesis? How can I find evidence in nature that would demonstrate that the hypothesis is false? Popper criticizes theorists like Marx and Freud who attempt to accumulate evidence that corroborates their theories (historical materialism, ego transference) and praises theorists like Einstein who honestly confront the unlikely consequences their theories appear to have (perihelion of Mercury).

At bottom, I think many philosophers of science have drawn their own conclusions about both falsifiability and confirmation theory: there is no recipe for measuring the empirical credibility of a given scientific theory, and there is no codifiable "inductive logic" that might replace the forms of empirical reasoning that we find throughout the history of science. Instead, we need to look in greater detail at the epistemic practices of real research communities in order to see the nuanced forms of empirical reasoning that are brought forward for the evaluation of scientific theories. Popper's student, Imre Lakatos, makes one effort at this (Methodology of Scientific Research Programmes; Criticism and the Growth of Knowledge); so does William Newton-Smith (The Rationality of Science), and much of the philosophy of science that has proceeded under the rubrics of philosophy of physics, biology, or economics is equally attentive to the specific epistemic practices of real working scientific traditions. So "falsifiability" doesn't seem to have a lot to add to a theory of scientific rationality at this point in the philosophy of science. In particular, Popper's grand critique of Marx's social science on the grounds that it is "unfalsifiable" just seems to miss the point; surely Marx, Durkheim, Weber, Simmel, or Tocqueville have important social science insights that can't be refuted by deriding them as "unfalsifiable". And Popper's impatience with Marxism makes one doubt his objectivity as a sympathetic reader of Marx's work.

Of greater interest is another celebrated idea that Popper put forward, his critique of “historicism” in The Poverty of Historicism (1957). Unlike the theory of falsifiability, this discussion contains important insights about the nature of the social sciences that are even more useful today than they were in 1957. Readers who are a little dismissive of Popper may find that there are ideas here worth reconsidering.

Popper characterizes historicism as “an approach to the social sciences which assumes that historical prediction is their principal aim, and which assumes that this aim is attainable by discovering the ‘rhythms’ or the ‘patterns’, the ‘laws’ or the ‘trends’ that underlie the evolution of history” (3). Historicists differ from naturalists, however, in that they believe that the laws that govern history are themselves historically changeable. So a given historical epoch has its own laws and generalizations – unlike the laws of nature that are uniform across time and space. So historicism involves combining two ideas: prediction of historical change based on a formulation of general laws or patterns; and a recognition that historical laws and patterns are themselves variable over time, in reaction to human agency.

Popper’s central conclusion is that large predictions of historical or social outcomes are inherently unjustifiable -- a position taken up several times here (post, post). He finds that “holistic” or “utopian” historical predictions depend upon assumptions that simply cannot be justified; instead, he prefers “piecemeal” predictions and interventions (21). What Popper calls “historicism” amounts to the aspiration that there should be a comprehensive science of society that permits prediction of whole future states of the social system, and also supports re-engineering of the social system if we choose. In other words, historicism in his description sounds quite a bit like social physics: the aspiration of finding a theory that describes and predicts the total state of society.
The kind of history with which historicists wish to identify sociology looks not only backwards to the past but also forwards to the future. It is the study of the operative forces and, above all, of the laws of social development. (45)
Popper rejects the feasibility or appropriateness of this vision of social knowledge, and he is right to do so. The social world is not amenable to this kind of general theoretical representation.

The social thinker who serves as Popper’s example of this kind of holistic social theory is Karl Marx. According to Popper, Marx’s Capital (Marx 1977 [1867]) is intended to be a general theory of capitalist society, providing a basis for predicting its future and its specific internal changes over time. Marx’s theory of historical materialism (“History is a history of class conflict,” “History is the unfolding of the contradictions between the forces and relations of production”; (Communist Manifesto, Preface to a Contribution to Political Economy)) is Popper’s central example of a holistic theory of history, and Marx’s theory of revolution provides his central example of utopian social engineering. In The Scientific Marx I argue that Popper’s representation of Marx’s social science contribution is flawed; rather, Marx's ideas about capitalism take the form of an eclectic combination of sociology, economic theory, historical description, and institutional analysis. It is also true, however, that Marx writes in Capital that he is looking to identify the laws of motion of the capitalist mode of production.

Whatever the accuracy of Popper's interpretation of Marx, his more general point is certainly correct. Sociology and economics cannot provide us with general theories that permit the prediction of large historical change. Popper’s critique of historicism, then, can be rephrased as a compelling critique of the model of the natural sciences as a meta-theory for the social and historical sciences. History and society are not law-governed systems for which we might eventually hope to find exact and comprehensive theories. Instead, they are the heterogeneous, plastic, and contingent compound of actions, structures, causal mechanisms, and conjunctures that elude systematization and prediction. And this conclusion brings us back to the centrality of agent-centered explanations of historical outcomes.

I chose the planetary photo above because it raises a number of complexities about theoretical systems, comprehensive models, and prediction that need sorting out. Popper observes that metaphors from astronomy have had a great deal of sway with historicists: "Modern historicists have been greatly impressed by the success of Newtonian theory, and especially by its power of forecasting the position of the planets a long time ahead" (36). The photo is of a distant planetary system in the making. The amount of debris in orbit makes it clear that it would be impossible to model and predict the behavior of this system over time; this is an n-body gravitational problem that even Newton despaired of solving. What physics does succeed in doing is identifying the processes and forces that are relevant to the evolution of this system over time -- without being able to predict its course in even gross form. This is a good example of a complex, chaotic system where prediction is impossible.
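The unpredictability of n-body gravitation is easy to demonstrate numerically. The sketch below is a toy model of my own construction (not anything from Popper): it integrates a softened three-body system twice, with one body's starting position nudged by one part in a million, and measures how far apart the two runs end up. In a chaotic regime that tiny difference in initial conditions is amplified enormously -- which is exactly why identifying the operative forces does not yield long-range prediction. All masses, positions, and the softening length here are arbitrary illustrative choices.

```python
import math

G = 1.0      # gravitational constant, arbitrary units
SOFT = 0.05  # softening length to avoid singular close encounters

def accelerations(pos, masses):
    """Pairwise softened Newtonian accelerations in two dimensions."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + SOFT * SOFT
            f = G * masses[j] / r2 ** 1.5
            acc[i][0] += f * dx
            acc[i][1] += f * dy
    return acc

def step(pos, vel, masses, dt):
    """One velocity-Verlet (leapfrog) step, mutating pos and vel in place."""
    acc = accelerations(pos, masses)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
        pos[i][0] += dt * vel[i][0]
        pos[i][1] += dt * vel[i][1]
    acc = accelerations(pos, masses)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]

def run(perturb, steps=20000, dt=0.001):
    """Integrate three equal masses; perturb nudges one starting coordinate."""
    masses = [1.0, 1.0, 1.0]
    pos = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5 + perturb]]
    vel = [[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]]
    for _ in range(steps):
        step(pos, vel, masses, dt)
    return pos

# Two runs differing by one part in a million in a single coordinate.
a = run(0.0)
b = run(1e-6)
gap = max(math.hypot(pa[0] - pb[0], pa[1] - pb[1]) for pa, pb in zip(a, b))
print("largest positional divergence after integration:", gap)
```

The point is not the particular number printed but its scale: the final divergence dwarfs the millionth-of-a-unit nudge that produced it, so any uncertainty in the initial data swamps the forecast long before the forecast horizon of interest.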