
Monday, February 18, 2019

Is the Xerox Corporation supervenient?


Supervenience is the view that the properties of some composite entity B are wholly fixed by the properties and relations of the items A of which it is composed (link, link). The transparency of glass, for example, supervenes upon the properties and arrangement of the silicon and oxygen atoms of which the glass is composed.

Can the same be said of a business firm like Xerox when we consider its constituents to be its employees, stakeholders, and other influential actors and their relations and actions? (Call that total field of factors S.) Or is it possible that exactly these actors at exactly the same time could have manifested a corporation with different characteristics?

Let's say the organizational properties we are interested in include internal organizational structure, innovativeness, market adaptability, and level of internal trust among employees. And S consists of the specific individuals and their properties and relations that make up the corporation at a given time. Could this same S have manifested with different properties for Xerox?

One thing is clear. If a highly similar group of individuals had been involved in the creation and development of Xerox, it is entirely possible that the organization would have been substantially different today. We could expect that contingent events and a high level of path dependency would have led to substantial differences in organization, functioning, and internal structure. So the company does not supervene upon a generic group of actors defined in terms of a certain set of beliefs, goals, and modes of decision making over the history of its founding and development. I have sometimes thought this path dependency itself is enough to refute supervenience.

But the claim of supervenience is not a temporal or diachronic claim, but instead a synchronic claim: the current features of structure, causal powers, functioning, etc., of the higher-level entity today are thought to be entirely fixed by the supervenience base (in this case, the particular individuals and their relations and actions). Putting the idea in terms of possible-world theory, there is no possible world in which exactly similar individuals in exactly similar states of relationship and action would underlie a business firm Xerox* which had properties different from the current Xerox firm.

One way in which this counterfactual might be true is if a property P of the corporation depended on the states of the agents plus something else -- say, the conductivity of copper in its pure state. In the real world W copper is highly conductive, while in W* copper is non-conductive. And in W*, let's suppose, Xerox has property P* rather than P. On this scenario Xerox does not supervene upon the states of the actors, since those states are identical in W and W*; the dependence on the conductivity of copper makes a difference that is not reflected in any difference in the states of the actors.
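The copper scenario can be rendered as a toy model. (The decision rule and all names below are purely illustrative assumptions of mine, not anything claimed in the thought experiment beyond its bare structure.)

```python
# Toy rendering of the copper counterexample. A corporate property is
# computed from the actor-states S *plus* one external fact: whether
# copper is conductive in that world.

def corporate_property(actor_states, copper_conductive):
    # Hypothetical rule: the firm exhibits property P only if its actors
    # are coordinated AND copper conducts; otherwise it exhibits P*.
    coordinated = all(actor_states)
    return "P" if (coordinated and copper_conductive) else "P*"

S = (True, True, True)  # identical actor-states in world W and world W*

property_in_W = corporate_property(S, copper_conductive=True)       # -> "P"
property_in_W_star = corporate_property(S, copper_conductive=False) # -> "P*"

# Same supervenience base S, different corporate property: on this
# scenario the property fails to supervene on S alone.
assert property_in_W != property_in_W_star
```

The point of the sketch is only structural: if any external fact enters the function that fixes the property, then fixing the actor-states does not fix the property.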

But this is a pretty hypothetical case. We would only be justified in thinking Xerox does not supervene on S if we had a credible candidate for another property that would make a difference, and I'm hard pressed to identify one.

There is another possible line of response for the hardcore supervenience advocate in this case. I've assumed the conductivity of copper makes a difference to the corporation without making a difference for the actors. But I suppose it might be maintained that this is impossible: only the states of the actors affect the corporation, since they constitute the corporation; so the scenario I describe is impossible. 

The upshot seems to be this: there is no way of resolving the question at the level of pure philosophy. The best we can do is concrete empirical work on the actual causal and organizational processes through which the properties of the whole are constituted by the actions and thoughts of the individuals who make it up.

But here is a deeper concern. What makes supervenience minimally plausible in the case of social entities is the insistence on synchronic dependence. But generally speaking, we are always interested in the diachronic behavior and evolution of a social entity. And here the idea of path dependence is more credible than the idea of moment-to-moment dependency on the "supervenience base". We might say that the property of "innovativeness" displayed by the Xerox Corporation at some periods in its history supervenes moment-to-moment on the actions and thoughts of its constituent individuals; but we might also say that this fact does not explain the higher-level property of innovativeness. Instead, some set of events in the past set the corporation on a path that favored innovation; this corporate culture or climate influenced the selection and behavior of the individuals who make it up; and the day-to-day behavior reflects both the path-dependent history of its higher-level properties and the current configuration of its parts.

(Thanks, Raphael van Riel, for your warm welcome to the Institute of Philosophy at the University of Duisburg-Essen during my visit, and for the many stimulating conversations we had on the topics of supervenience, generativity, and functionalism.)


Friday, May 12, 2017

Brian Epstein's radical metaphysics


Brian Epstein is adamant that the social sciences need to think very differently about the nature of the social world. In The Ant Trap: Rebuilding the Foundations of the Social Sciences he sets out to blow up our conventional thinking about the relation between individuals and social facts. In particular, he is fundamentally skeptical about any conception of the social world that depends on the idea of ontological individualism, directly or indirectly. Here is the plainest statement of his view:
When we look more closely at the social world, however, this analogy [of composition of wholes out of independent parts] falls apart. We often think of social facts as depending on people, as being created by people, as the actions of people. We think of them as products of the mental processes, intentions, beliefs, habits, and practices of individual people. But none of this is quite right. Research programs in the social sciences are built on a shaky understanding of the most fundamental question of all: What are the social sciences about? Or, more specifically: What are social facts, social objects, and social phenomena—these things that the social sciences aim to model and explain? 
My aim in this book is to take a first step in challenging what has come to be the settled view on these questions. That is, to demonstrate that philosophers and social scientists have an overly anthropocentric picture of the social world. How the social world is built is not a mystery, not magical or inscrutable or beyond us. But it turns out to be not nearly as people-centered as is widely assumed. (p. 7)
Here is one key example Epstein provides to give intuitive grasp of the anti-reductionist metaphysics he has in mind -- the relationship between "the Supreme Court" and the nine individuals who make it up.
One of the examples I will be discussing in some detail is the United States Supreme Court. It is small— nine members— and very familiar, so there are lots of facts about it we can easily consider. Even a moment’s reflection is enough to see that a great many facts about the Supreme Court depend on much more than those nine people. The powers of the Supreme Court are not determined by the nine justices, nor do the nine justices even determine who the members of the Supreme Court are. Even more basic, the very existence of the Supreme Court is not determined by those nine people. In all, knowing all kinds of things about the people that constitute the Supreme Court gives us very little information about what that group is, or about even the most basic facts about that group. (p. 10)
Epstein makes an important observation when he notes that there are two "consensus" views of the individual-level substrate of the social world, not just one. The first is garden-variety individualism: it is individuals and their properties (psychological, bodily) involved in external relations with each other that constitute the individual-level substrate of the social. In this case it is reasonable to apply the supervenience relation to the relation between individuals and higher-level social facts (link).

The second view is more of a social-constructivist orientation towards individuals: individuals are constituted by their representations of themselves and others; the individual-level is inherently semiotic and relational. Epstein associates this view with Searle (50 ff.); but it seems to characterize a range of other theorists, from Geertz to Goffman and Garfinkel. Epstein refers to this approach as the "Standard Model" of social ontology. Fundamental to the Standard Model is the idea of institutional facts -- the rules of a game, the boundaries of a village, the persistence of a paper currency. Institutional facts are held in place by the attitudes and performances of the individuals who inhabit them; but they are not reducible to an ensemble of individual-level psychological facts. And the constructionist part of the approach is the idea that actors jointly constitute various social realities -- a demonstration against the government, a celebration, or a game of bridge. And Epstein believes that supervenience fails in the constructivist ontology of the Standard Model (57).

Both views are anti-dualistic (no inherent social "stuff"); but on Epstein's approach they are ultimately incompatible with each other.

But here is the critical point: Epstein doesn't believe that either of these views is adequate as a basis for social metaphysics. We need a new beginning in the metaphysics of the social world. Where to start this radical work? Epstein offers several new concepts to help reshape our metaphysical language about social facts -- what he refers to as "grounding" and "anchoring" of social facts. "Grounding" facts for a social fact M are lower-level facts that help to constitute the truth of M. "Bob and Jane ran down Howe Street" partially grounds the fact "the mob ran down Howe Street" (M). The fact about Bob and Jane is one of the features of the world that contributes to the truth and meaning of M. "Full grounding" is a specification of all the facts needed in order to account for M. "Anchoring" facts are facts that characterize the constructivist aspect of the social world -- conformance to meanings, rules, or institutional structures. An anchoring fact is one that sets the "frame" for a social fact. (An earlier post offered reflections on anchor individualism; link.)

Epstein suggests that "grounding" corresponds to classic ontological individualism, while "anchoring" corresponds to the Standard Model (the constructivist view).
What I will call "anchor individualism" is a claim about how frame principles can be anchored. Ontological individualism, in contrast, is best understood as a claim about how social facts can be grounded. (100)
And he believes that a more adequate social ontology is one that incorporates both grounding and anchoring relations. "Anchoring and grounding fit together into a single model of social ontology" (82).

Here is an illustrative diagram of how the two kinds of relations work in a particular social fact (Epstein 94):


So Epstein has done what he set out to do: he has taken the metaphysics of the social world as seriously as contemporary metaphysicians do on other important topics, and he has teased out a large body of difficult questions about constitution, causation, formation, grounding, and anchoring. This is a valuable and innovative contribution to the philosophy of social science.

But does this exercise add significantly to our ability to conduct social science research and theory? Do James Coleman, Sam Popkin, Jim Scott, George Steinmetz, or Chuck Tilly need to fundamentally rethink their approach to the social problems they attempted to understand in their work? Do the metaphysics of "frame", "ground", and "anchor" make for better social research?

My inclination is to think that this is not an advantage we can attribute to The Ant Trap. Clarity, precision, surprising conceptual formulations, yes; these are all virtues of the book. But I am not convinced that these conceptual innovations will actually make the work of explaining industrial actions, rebellious behavior, organizational failures, educational systems that fail, or the rise of hate-based extremism more effective or insightful.

In order to do good social research we do of course need to have a background ontology. But after working through The Ant Trap several times, I'm still not persuaded that we need to move beyond a fairly commonsensical set of ideas about the social world:
  • individuals have mental representations of the world they inhabit
  • institutional arrangements exist through which individuals develop, form, and act
  • individuals form meaningful relationships with other individuals
  • individuals have complicated motivations, including self-interest, commitment, emotional attachment, political passion
  • institutions and norms are embodied in the thoughts, actions, artifacts, and traces of individuals (grounded and anchored, in Epstein's terms)
  • social causation proceeds through the substrate of individuals thinking, acting, re-acting, and engaging with other individuals
These are the assumptions that I have in mind when I refer to "actor-centered sociology" (link). This is not a sophisticated philosophical theory of social metaphysics; but it is fully adequate for grounding a realist and empirically informed effort to understand the social world around us. And nothing in The Ant Trap leads me to believe that there are fundamental conceptual impossibilities embedded in these simple, mundane individualistic ideas about the social world.

And this leads me to one other conclusion: Epstein argues that the social sciences need to think fundamentally differently. But actually, I think he has shown at best that philosophers can usefully think differently -- in ways that may in the end not have much impact on the way that inventive social theorists conceive of their work.

(The photo at the top is chosen deliberately to embody the view of the social world that I advocate: contingent, institutionally constrained, multi-layered, ordinary, subject to historical influences, constituted by indefinite numbers of independent actors, demonstrating patterns of coordination and competition. All these features are illustrated in this snapshot of life in Copenhagen -- the independent individuals depicted, the traffic laws that constrain their behavior, the polite norms leading to conformance to the crossing signal, the sustained effort by municipal actors and community based organizations to encourage bicycle travel, and perhaps the lack of diversity in the crowd.)

Thursday, November 24, 2016

Coarse-graining of complex systems


The question of the relationship between micro-level and macro-level is just as important in physics as it is in sociology. Is it possible to derive the macro-states of a system from information about the micro-states of the system? It turns out that there are some surprising aspects of the relationship between micro and macro that physical systems display. The mathematical technique of "coarse-graining" represents an interesting wrinkle on this question. So what is coarse-graining? Fundamentally it is the idea that we can replace micro-level specifics with local averages, without losing our ability to calculate the macro-level dynamics of the system.

A 2004 article by Israeli and Goldenfeld, "Coarse-graining of cellular automata, emergence, and the predictability of complex systems" (link) provides a brief description of the method of coarse-graining. (Here is a Wolfram demonstration of the way that coarse graining works in the field of cellular automata; link.) Israeli and Goldenfeld also provide physical examples of phenomena with what they refer to as emergent characteristics. Let's see what this approach adds to the topic of emergence and reduction. Here is the abstract of their paper:
We study the predictability of emergent phenomena in complex systems. Using nearest neighbor, one-dimensional Cellular Automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram's classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing devices and are therefore undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable and even decidable at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally we argue that the large scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large scale simplicity as a pattern formation mechanism in which large scale patterns are forced upon the system by the simplicity of the rules that govern the large scale dynamics.
This paragraph involves several interesting ideas. One is that the micro-level details do not matter to the macro outcome (the coarse-grained CA emulate the large-scale behavior of the original systems "without accounting for small-scale details"). Another related idea is that macro-level patterns are (sometimes) forced by the "rules that govern the large scale dynamics" -- rather than by the micro-level states.

Coarse-graining methodology is a family of computational techniques that permits "averaging" of values (intensities) from the micro-level to a higher level of organization. Such computational models have primarily been applied to the properties of heterogeneous materials, large molecules, and other physical systems. For example, consider a two-dimensional array of iron atoms as a grid with randomly distributed magnetic orientations (up, down). A coarse-grained description of this system would be constructed by taking each 3x3 square of the grid and assigning it the up-down value corresponding to the majority of atoms in that square. Now the information about nine atoms has been reduced to a single piece of information for the 3x3 square. Analogously, we might consider a city of Democrats and Republicans. Suppose we know the affiliation of each household on every street. We might "coarse-grain" this information by replacing the household-level data with the majority representation of 3x3 grids of households. We might take another step of aggregation by considering 3x3 grids of grids, and representing the larger composite by the majority value of the component grids.
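The 3x3 majority procedure just described can be sketched in a few lines of Python. (This is a toy illustration of the idea, not the authors' code.)

```python
import random

random.seed(0)

# Micro-level: a 9x9 grid of "spins" (+1 = up, -1 = down), standing in for
# the magnetic orientations of individual iron atoms -- or the party
# affiliation of individual households.
micro = [[random.choice([-1, 1]) for _ in range(9)] for _ in range(9)]

def coarse_grain(grid, block=3):
    """Replace each block x block square by the majority value of its cells."""
    n = len(grid) // block
    coarse = []
    for bi in range(n):
        row = []
        for bj in range(n):
            total = sum(grid[bi * block + i][bj * block + j]
                        for i in range(block) for j in range(block))
            row.append(1 if total > 0 else -1)  # odd block size: no ties
        coarse.append(row)
    return coarse

level1 = coarse_grain(micro)   # 9x9 -> 3x3: nine atoms -> one value
level2 = coarse_grain(level1)  # 3x3 -> 1x1: majority of "grids of grids"
```

Each step discards micro-level detail (nine values become one) while preserving the aggregate quantity that the next level up cares about.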

How does the methodology of coarse-graining interact with other inter-level questions we have considered elsewhere in Understanding Society (emergence, generativity, supervenience)? Israeli and Goldenfeld connect their work to the idea of emergence in complex systems. Here is how they describe emergence:
Emergent properties are those which arise spontaneously from the collective dynamics of a large assemblage of interacting parts. A basic question one asks in this context is how to derive and predict the emergent properties from the behavior of the individual parts. In other words, the central issue is how to extract large-scale, global properties from the underlying or microscopic degrees of freedom. (1)
Note that this is the weak form of emergence (link); Israeli and Goldenfeld explicitly postulate that the higher-level properties can be derived ("extracted") from the micro level properties of the system. So the calculations associated with coarse-graining do not imply that there are system-level properties that are non-derivable from the micro-level of the system; or in other words, the success of coarse-graining methods does not support the idea that physical systems possess strongly emergent properties.

Does the success of coarse-graining for some systems have implications for supervenience? If the states of S can be derived from a coarse-grained description C of M (the underlying micro-level), does this imply that S does not supervene upon M? It does not. A coarse-grained description corresponds to multiple distinct micro-states, so there is a many-one relationship between M and C. But this is consistent with the fundamental requirement of supervenience: no difference at the higher level without some difference at the micro level. So supervenience is consistent with the facts of successful coarse-graining of complex systems.
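The many-one relationship, and its compatibility with supervenience, can be made concrete with a toy example of my own (a six-cell micro-state coarse-grained into half-row majorities):

```python
from itertools import product
from collections import defaultdict

# Micro-level M: a row of six binary cells.
# Coarse-grained description C: the majority value of each half of the row.

def coarse(m):
    return (int(sum(m[:3]) >= 2), int(sum(m[3:]) >= 2))

micro_states = list(product([0, 1], repeat=6))  # all 64 micro-states

# Many-one: each coarse description covers several distinct micro-states.
fibres = defaultdict(list)
for m in micro_states:
    fibres[coarse(m)].append(m)
assert all(len(fibre) > 1 for fibre in fibres.values())

# The supervenience requirement -- no difference at the higher level
# without some difference at the micro level -- is untouched by this:
# whenever two micro-states receive different coarse descriptions, the
# micro-states themselves differ.
assert all(m1 != m2
           for m1, m2 in product(micro_states, repeat=2)
           if coarse(m1) != coarse(m2))
```

The second assertion holds automatically once the coarse description is computed as a function of the micro-state; that functional dependence is exactly what supervenience demands, and the many-one fibres are exactly what coarse-graining exploits.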

What coarse-graining is inconsistent with is the idea that we need exact information about M in order to explain or predict S. Instead, we can eliminate a lot of information about M by replacing M with C, and still do a perfectly satisfactory job of explaining and predicting S.

There is an intellectual wrinkle in the Israeli and Goldenfeld article that I haven't yet addressed here. This is their connection between complex physical systems and cellular automata. A cellular automaton is a simulation governed by simple rules that determine the behavior of each cell within the simulation. The Game of Life is an example of a cellular automaton (link). Here is what they say about the connection between physical systems and their simulations as a system of algorithms:
The problem of predicting emergent properties is most severe in systems which are modelled or described by undecidable mathematical algorithms[1, 2]. For such systems there exists no computationally efficient way of predicting their long time evolution. In order to know the system’s state after (e.g.) one million time steps one must evolve the system a million time steps or perform a computation of equivalent complexity. Wolfram has termed such systems computationally irreducible and suggested that their existence in nature is at the root of our apparent inability to model and understand complex systems [1, 3, 4, 5]. (1)
Suppose we are interested in simulating the physical process through which a pot of boiling water undergoes sudden turbulence shortly before reaching 100 degrees C (the transition point between water and steam). There seem to be two large alternatives raised by Israeli and Goldenfeld: there may be a set of thermodynamic processes that permit derivation of the turbulence directly from the physical parameters present during the short interval of time; or it may be that the only way of deriving the turbulence phenomenon is to provide a molecule-level simulation based on the fundamental laws (algorithms) that govern the molecules. If the latter is the case, then simulating the process will prove computationally intractable.
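For concreteness, here is a minimal implementation of the kind of nearest-neighbor, one-dimensional cellular automaton Israeli and Goldenfeld study. (The choice of rule 110 and the grid size are arbitrary illustrations; the rule number's binary digits give the new state for each of the eight possible neighborhoods.)

```python
# Elementary (one-dimensional, nearest-neighbour) cellular automaton.
# Each cell is updated from its own state and its two neighbours' states
# by a fixed lookup rule, encoded as a Wolfram rule number.

def step(cells, rule=110):
    n = len(cells)
    return [
        # Neighbourhood (left, centre, right) read as a 3-bit index into
        # the rule number's binary expansion; periodic boundary conditions.
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 20 + [1] + [0] * 20   # a single live cell
for _ in range(5):
    row = step(row)
```

To know the state after a million steps of a computationally irreducible rule, one must run a loop like this a million times (or do equivalent work); that is the predicament the quoted passage describes, and what the coarse-graining construction partially circumvents.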

Here is an extension of this approach in an article by Krzysztof Magiera and Witold Dzwinel, "Novel Algorithm for Coarse-Graining of Cellular Automata" (link). They describe "coarse-graining" in their abstract in these terms:
The coarse-graining is an approximation procedure widely used for simplification of mathematical and numerical models of multiscale systems. It reduces superfluous – microscopic – degrees of freedom. Israeli and Goldenfeld demonstrated in [1,2] that the coarse-graining can be employed for elementary cellular automata (CA), producing interesting interdependences between them. However, extending their investigation on more complex CA rules appeared to be impossible due to the high computational complexity of the coarse-graining algorithm. We demonstrate here that this complexity can be substantially decreased. It allows for scrutinizing much broader class of cellular automata in terms of their coarse graining. By using our algorithm we found out that the ratio of the numbers of elementary CAs having coarse grained representation to “degenerate” – irreducible – cellular automata, strongly increases with increasing the “grain” size of the approximation procedure. This rises principal questions about the formal limits in modeling of realistic multiscale systems.
Here K&D seem to be expressing the view that the approach to coarse-graining as a technique for simplifying the expected behavior of a complex system offered by Israeli and Goldenfeld will fail in the case of more extensive and complex systems (perhaps including the pre-boil turbulence example mentioned above).

I am not sure whether these debates have relevance for the modeling of social phenomena. Recall my earlier discussion of the modeling of rebellion using agent-based modeling simulations (link, link, link). These models work from the unit level -- the level of the individuals who interact with each other. A coarse-graining approach would perhaps replace the individual-level description with a set of groups with homogeneous properties, and then attempt to model the likelihood of an outbreak of rebellion based on the coarse-grained level of description. Would this be feasible?

Sunday, November 22, 2015

Are emergence and microfoundations contraries?

image: micro-structure of a nanomaterial (link)

Are there strong logical relationships among the ideas of emergence, microfoundations, generative dependency, and supervenience? It appears that there are.


The diagram represents the social world as a laminated set of layers of entities, processes, powers, and laws. Entities at L2 are composed of or caused by some set of entities and forces at L1. Likewise L3 and L4. Arrows indicate microfoundations for L2 facts based on L1 facts. Diamond-tipped arrows indicate the relation of generative dependence from one level to another. Square-tipped lines indicate the presence of strongly emergent facts at the higher level relative to the lower level. The solid line (L4) represents the possibility of a level of social fact that is not generatively dependent upon lower levels. The vertical ellipse at the right indicates the possibility of microfoundations narratives involving elements at different levels of the social world (individual and organizational, for example).

We might think of these levels as "individuals," "organization, value communities, social networks," "large aggregate institutions like states," etc.

This is only one way of trying to represent the structure of the social world. The notion of a "flat" ontology was considered in an earlier post (link). Another structure that is excluded by this diagram is one in which there is multi-directional causation across levels, both upwards and downwards. For example, the diagram excludes the possibility that L3 entities have causal powers that are original and independent from the powers of L2 or L1 entities. The laminated view described here is the assumption built into debates about microfoundations, supervenience, and emergence. It reflects the language of micro, meso, and macro levels of social action and organization.

Here are definitions for several of the primary concepts.
  • Microfoundations of facts in L2 based on facts in L1 : accounts of the causal pathways through which entities, processes, powers, and laws of L1 bring about specific outcomes in L2. Microfoundations are small causal theories linking lower-level entities to higher-level outcomes.
  • Generative dependence of L2 upon L1: the entities, processes, powers, and laws of L2 are generated by the properties of level L1 and nothing else. Alternatively, the entities, processes, powers, and laws of L1 suffice to generate all the properties of L2. A full theory of L1 suffices to derive the entities, processes, powers, and laws of L2.
  • Reducibility of y to x : it is possible to provide a theoretical or formal derivation of the properties of y based solely on facts about x.
  • Strong emergence of properties in L2 with respect to the properties of L1: L2 possesses some properties that do not depend wholly upon the properties of L1.
  • Weak emergence of properties in L2 with respect to the properties of L1: L2 possesses some properties for which we cannot (now or in the future) provide derivations based wholly upon the properties of L1.
  • Supervenience of L2 with respect to properties of L1: all the properties of L2 depend strictly upon the properties of L1 and nothing else.
We can also make an effort to define some of these concepts more formally in terms of the diagram.


Consider these statements about facts at levels L1 and L2:
  1. UM: all facts at L2 possess microfoundations at L1. 
  2. XM: some facts at L2 possess inferred but unknown microfoundations at L1. 
  3. SM: some facts at L2 do not possess any microfoundations at L1. 
  4. SE: L2 is strongly emergent from L1. 
  5. WE: L2 is weakly emergent from L1. 
  6. GD: L2 is generatively dependent upon L1. 
  7. R: L2 is reducible to L1. 
  8. D: L2 is determined by L1. 
  9. SS: L2 supervenes upon L1. 
Here are some of the logical relations that appear to exist among these statements.
  1. UM => GD 
  2. UM => ~SE 
  3. XM => WE 
  4. SE => ~UM 
  5. SE => ~GD 
  6. GD => R 
  7. GD => D 
  8. SM => SE 
  9. UM => SS 
  10. GD => SS 
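These implications can be checked mechanically by enumerating truth assignments. The following sketch is my own formalization of the list above:

```python
from itertools import product

VARS = ["UM", "XM", "SM", "SE", "WE", "GD", "R", "D", "SS"]

def consistent(w):
    um, xm, sm, se, we, gd, r, d, ss = (w[v] for v in VARS)
    return all([
        not um or gd,        # 1.  UM => GD
        not um or not se,    # 2.  UM => ~SE
        not xm or we,        # 3.  XM => WE
        not se or not um,    # 4.  SE => ~UM
        not se or not gd,    # 5.  SE => ~GD
        not gd or r,         # 6.  GD => R
        not gd or d,         # 7.  GD => D
        not sm or se,        # 8.  SM => SE
        not um or ss,        # 9.  UM => SS
        not gd or ss,        # 10. GD => SS
    ])

# All truth assignments compatible with the ten implications.
worlds = [w for w in (dict(zip(VARS, vals))
                      for vals in product([False, True], repeat=len(VARS)))
          if consistent(w)]

# Strong emergence and generative dependence are contraries: never both true...
assert not any(w["SE"] and w["GD"] for w in worlds)
# ...but not contradictories: both can be false together.
assert any(not w["SE"] and not w["GD"] for w in worlds)
# UM and SM are likewise jointly unsatisfiable (via implications 2 and 8).
assert not any(w["UM"] and w["SM"] for w in worlds)
```

The enumeration confirms the contrariness claim below, and brings out that the pair SE/GD, like UM/SM, excludes joint truth without forcing either member.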
On this analysis, the question of the availability of microfoundations for social facts can be understood to be central to all the other issues: reducibility, emergence, generativity, and supervenience. There are several positions that we can take with respect to the availability of microfoundations for higher-level social facts.
  1. If we have convincing reason to believe that all social facts possess microfoundations at a lower level (known or unknown) then we know that the social world supervenes upon the micro-level; strong emergence is ruled out; weak emergence is true only so long as some microfoundations remain unknown; and higher-level social facts are generatively dependent upon the micro-level.   
  2. If we take a pragmatic view of the social sciences and conclude that any given stage of knowledge provides information about only a subset of possible microfoundations for higher-level facts, then we are at liberty to take the view that each level of social ontology is at least weakly emergent from lower levels -- basically, the point of view advocated under the banner of "relative explanatory autonomy" (link). This also appears to be roughly the position taken by Herbert Simon (link). 
  3. If we believe that it is impossible in principle to fully specify the microfoundations of all social facts, then weak emergence is true; supervenience is false; and generativity is false. (For example, we might believe this to be true because of the difficulty of modeling and calculating a sufficiently large and complex domain of units.) This is the situation that Fodor believes to be the case for many of the special sciences. 
  4. If we have reason to believe that some higher-level facts simply do not possess microfoundations at a lower level, then strong emergence is true; the social world is not generatively dependent upon the micro-world; and the social world does not supervene upon the micro-world. 
In other words, it appears that each of the concepts of supervenience, reduction, emergence, and generative dependence can be defined in terms of the availability or unavailability of microfoundations for some or all of the facts at a higher level based on facts at the lower level. Strong emergence and generative dependence turn out to be logical contraries (witness implication 5 above).

Tuesday, July 28, 2015

Supervenience, isomers, and social isomers


A prior post focused on the question of whether chemistry supervenes upon physics, and I relied heavily on R. F. Hendry's treatment of the way that quantum chemistry attempts to explain the properties of various molecules based on the fundamentals of quantum mechanics. The piece prompted a good deal of discussion, both from people who agree with Hendry and from those who disagree.

It occurs to me that there is a simpler reason for thinking that chemistry fails to supervene upon the physics of atoms, however, which does not involve the subtleties of quantum mechanics. This is the existence of isomers for various molecules. An isomer is a molecule with the same chemical composition as another but a different geometry and different chemical properties. From the facts about the constituent atoms we cannot infer uniquely what geometry a molecule consisting of these atoms will take. Instead, we need more information external to the physics of the atoms involved; we need an account of the path of interactions that the atoms took in "folding" into one isomer or the other. Therefore chemistry does not supervene upon the quantum-mechanical or physical properties of atoms alone.

For example, the properties of the normal prion protein and its isomer, the infectious prion protein, are not fixed by the constituent elements; the geometries associated with these two compounds result from other causal influences. The constituent elements are compatible with both non-equivalent expressions. The prion molecules do not supervene upon the properties of the constituent elements. Which isomer emerges is a matter of a contingent, path-dependent process.

It is evident that this is not an argument that chemistry does not supervene upon physics more generally, since the history of interactions through which a given isomer emerges is itself a history of physical interactions. But it does appear to be a rock-solid refutation of the idea that molecules supervene upon the atoms of which they are constituted.

Significantly, this example appears to have direct implications for the relation between social facts and individual actors. If we consider the possibility of "social isomers" -- social structures consisting of exactly similar actors but with different histories, different configurations, and different causal properties in the present -- then we also have a refutation of the idea that social facts supervene upon the actors of which they are constituted. Instead, we would need to incorporate the "path-dependent" series of interactions that led to the formation of one "geometry" of social arrangements rather than another, as well as the full suite of properties associated with each individual actor. So QED -- social structures do not supervene on the features of the actors. And if some of the events that influence the emergence of one social structure rather than another -- one social isomer instead of its compositional equivalent -- are stochastic or random, then at best social structures supervene on individuals conjoined with chance events in a path-dependent process.

There has been much discussion of the question of multiple realizability -- that one higher-level structure may correspond to multiple underlying configurations of components and processes. But so far as I have been able to see, there has been no discussion of the converse possibility -- multiple higher-level structures corresponding to a single underlying configuration. And yet this is precisely what is the case in chemistry for isomers and in the hypothetical but plausible possibility sketched here for "social isomers". This is indeed the key finding of the discovery of path-dependencies in social outcomes.
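The path-dependent "lock-in" at the heart of this argument can be made vivid with a Polya urn, a standard toy model of path dependence (my illustration, not drawn from the posts discussed): systems that start from exactly the same components are sent to durably different configurations by chance early events.

```python
import random

def polya_share(steps, seed):
    """Polya urn: start with one red and one blue ball; each draw adds
    another ball of the drawn color. Early chance draws get reinforced,
    so identical starting compositions end at very different shares."""
    rng = random.Random(seed)
    red, blue = 1, 1  # exactly the same initial components every run
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Five histories from exactly the same initial composition:
shares = [round(polya_share(10_000, seed), 3) for seed in range(5)]
print(shares)  # the limiting shares are scattered, not identical
```

The point of the sketch is that the long-run "share" (the analogue of a social structure's current configuration) is not fixed by the initial components at all; it is fixed by the components plus the contingent history of interactions.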

Sunday, July 26, 2015

Is chemistry supervenient upon physics?


Many philosophers of science and physicists take it for granted that "physics" determines "chemistry". Or in terms of the theory of supervenience, it is commonly supposed that the domain of chemistry supervenes upon the domain of fundamental physics. This is the thesis of physicalism: the idea that all causation ultimately depends on the causal powers of the phenomena described by fundamental physics.

R. F. Hendry takes up this issue in his contribution to Davis Baird, Eric Scerri, and Lee McIntyre's very interesting volume, Philosophy of Chemistry. Hendry takes the position that this relation of supervenience does not obtain; chemistry does not supervene upon fundamental physics.

Hendry points out that the dependence claim depends crucially on two things: what aspects of physics are to be considered? And second, what kind of dependency do we have in mind between higher and lower levels? For the first question, he proposes that we think about fundamental physics -- quantum mechanics and relativity theory (174). For the second question, he enumerates several different kinds of dependency: supervenience, realization, token identity, reducibility, and derivability (175). In discussing the macro-property of transparency in glass, he cites Jaegwon Kim in maintaining that transparency in glass is "nothing more" than the features of the microstructure of glass that permit it to transmit light. But here is a crucial qualification:
But as Kim admits, this last implication only follows if it is accepted that “the microstructure of a system determines its causal/nomic properties” (283), for the functional role is specified causally, and so the realizer’s realizing the functional property that it does (i.e., the realizer–role relation itself) depends on how things in fact go in a particular kind of system. For a microstructure to determine the possession of a functional property, it must completely determine the causal/nomic properties of that system. (175)
Hendry argues that the key issue underlying claims of dependence of B upon A is whether there is downward causation from the level of chemistry (B) to the physical level (A); or, on the contrary, is physics "causally complete". If the causal properties of the higher level are fully fixed by the causal properties of the underlying level, then supervenience is possible; but if the higher level has causal properties that permit influence on the lower level, then supervenience is not possible.

In order to gain insight into the specific issues arising concerning chemistry and physics, Hendry makes use of the "emergentist" thinking associated with C.D. Broad. He finds that Broad offers convincing arguments against "Pure Mechanism", the view that all material things are determined by the micro-physical level (177). Here are Broad's two contrasting possibilities for understanding the relations between higher levels and the physical micro-level:
(i) On the first form of the theory the characteristic behavior of the whole could not, even in theory, be deduced from the most complete knowledge of the behavior of its components, taken separately or in other combinations, and of their proportions and arrangements in this whole . . .
(ii) On the second form of the theory the characteristic behavior of the whole is not only completely determined by the nature and arrangements of its components; in addition to this it is held that the behavior of the whole could, in theory at least, be deduced from a sufficient knowledge of how the components behave in isolation or in other wholes of a simpler kind (1925, 59). [Hendry, 178]
The first formulation describes "emergence", whereas the second is "mechanism". In order to give more contemporary expression to the two views Hendry introduces the key concept of quantum chemistry, the Hamiltonian for a molecule. A Hamiltonian is an operator describing the total energy of a system. A "resultant" Hamiltonian is the operator that results from identifying and summing up all forces within a system; a configurational Hamiltonian is one that has been observationally adjusted to represent the observed energies of the system. The first version is "fundamental", whereas the second version is descriptive.
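As a concrete gloss (my notation, not Hendry's), the resultant Hamiltonian for an isolated molecule is simply the sum of kinetic-energy operators and Coulomb interactions over all of its electrons and nuclei:

```latex
\hat{H}_{\mathrm{res}} \;=\; -\sum_{i} \frac{\hbar^{2}}{2 m_{i}} \nabla_{i}^{2}
\;+\; \sum_{i<j} \frac{q_{i} q_{j}}{4 \pi \varepsilon_{0}\, r_{ij}}
```

where the indices run over every electron and nucleus. Nothing in this operator mentions molecular geometry. A configurational Hamiltonian, by contrast, is written down only after the nuclei have been clamped into an assumed geometry and treated as slow relative to the electrons -- which is precisely the chemistry-level information at issue in Hendry's argument.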

Now we can pose the question of whether chemistry (behavior of molecules) is fixed by the resultant Hamiltonian for the components of the atoms involved (electrons, protons, neutrons) and the forces that they exert on each other. Or, on the other hand, does quantum chemistry achieve its goals by arriving at configurational Hamiltonians for molecules, and deriving properties from these descriptive operators? Hendry finds that the latter is the case for existing derivations; and this means that quantum chemistry (as it is currently practiced) does not derive chemical properties from fundamental quantum theory. Moreover, the configuration of the Hamiltonians used requires abstractive description of the hypothesized geometry of the molecule and the assumption of the relatively slow motion of the nucleus. But this is information at the level of chemistry, not fundamental physics. And it implies downward causation from the level of chemical structure to the level of fundamental physics.
Furthermore, to the extent that the behavior of any subsystem is affected by the supersystems in which it participates, the emergent behavior of complex systems must be viewed as determining, but not being fully determined by, the behavior of their constituent parts. And that is downward causation. (180)
So chemistry does not derive from fundamental physics. Here is Hendry's conclusion, supporting pluralism and anti-reductionism in the case of chemistry and physics:
On the other hand is the pluralist version, in which physical law does not fully determine the behavior of the kinds of systems studied by the special sciences. On this view, although the very abstractness of the physical theories seems to indicate that they could, in principle, be regarded as applying to special science systems, their applicability is either trivial (and correspondingly uninformative), or if non-trivial, the nature of scientific inquiry is such that there is no particular reason to expect the relevant applications to be accurate in their predictions.... The burden of my argument has been that strict physicalism fails, because it misrepresents the details of physical explanation (187)
Hendry's argument has a lot in common with Herbert Simon's arguments about system complexity (link) and with Nancy Cartwright's arguments about the limitations of (real) physics' capability of representing and calculating the behavior of complex physical systems based on first principles (link). In each case we get a pragmatic argument against reductionism, and a weakened basis for assuming a strict supervenience relation between higher-level structures and a limited set of supposedly fundamental building blocks. What is striking is that Hendry's arguments undercut the reductionist impulse at what looks like its most persuasive juncture -- the relationship between quantum physics and quantum chemistry.


Wednesday, July 15, 2015

Supervenience and the social: Epstein's critique



Does the social world supervene upon facts about individuals and the physical environment of action? Brian Epstein argues not in several places, most notably in "Ontological Individualism Reconsidered" (2009; link). (I plan to treat Epstein's more recent arguments in his very interesting book The Ant Trap: Rebuilding the Foundations of the Social Sciences in a later post.) The core of his argument is the idea that there are other factors influencing social facts besides facts about individuals. Social facts then fail to supervene in the strict sense: they depend on facts other than facts about individuals. There are indeed differences at the level of the social that do not correspond to a difference in the facts at the level of the individual. Here is how Epstein puts the core of his argument:
My aim in this paper is to challenge this [the idea that individualism is simply the denial of spooky social autonomy]. But ontological individualism is a stronger thesis than this, and on any plausible interpretation, it is false. The reason is not that social properties are determined by something other than physical properties of the world. Instead it is that social properties are often determined by physical ones that cannot plausibly be taken to be individualistic properties of persons. Only if the thesis of ontological individualism is weakened to the point that it is equivalent to physicalism can it be true, but then it fails to be a thesis about the determination of social properties by individualistic ones. (3)
And here is how Epstein formulates the claim of weakly local supervenience of social properties upon individual properties:
Social properties weakly locally supervene on individualistic properties if and only if for any possible world w and any entities x and y in w, if x and y are individualistically indiscernible in w, then they are socially indiscernible in w. Two objects are individualistically- or socially-indiscernible if and only if they are exactly like with respect to every individualistic property or every social property, respectively. (9)
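Epstein's condition can be read as a functional constraint: within a world, the map from individualistic profiles to social profiles must be well defined, so that no two individualistically indiscernible entities differ socially. A minimal sketch in Python (the toy "firms" and their properties are hypothetical, purely for illustration):

```python
def weakly_supervenes(entities, indiv_props, social_props):
    """True iff any two entities with the same individualistic profile
    also have the same social profile (weak local supervenience,
    restricted to a single 'world' of entities)."""
    seen = {}  # individualistic profile -> social profile
    for e in entities:
        base = tuple(sorted(indiv_props(e).items()))
        social = tuple(sorted(social_props(e).items()))
        if base in seen and seen[base] != social:
            return False  # indiscernible base, discernible social facts
        seen[base] = social
    return True

# Two "social isomers": identical membership, different structure.
firms = [
    {"members": ("a", "b"), "structure": "hierarchical"},
    {"members": ("a", "b"), "structure": "egalitarian"},
]
print(weakly_supervenes(firms,
                        lambda f: {"members": f["members"]},
                        lambda f: {"structure": f["structure"]}))
```

On this toy data the check fails, which is just Epstein's point restated: if "other facts" can vary while the individualistic facts stay fixed, the supervenience map is not a function.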
The causal story for supervenience of the social upon the individual perhaps looks like this:

[diagram: individual-level facts (I) determine social facts (S)]

The causal story for non-supervenience that Epstein tells looks like this:

[diagram: individual-level facts (I) together with other facts (O) determine social facts (S)]

In this case supervenience fails because there can be differences in S without any difference in I (because of differences in O).

But maybe the situation is even worse, as emergentists want to hold:

[diagram: social facts (S) are determined by individual facts (I), other facts (O), and prior social facts themselves]

Here supervenience fails because social facts may be partially "auto-causal" -- social outcomes are partially influenced by differences in social facts that do not depend on differences in individual facts and other facts.

In one sense Epstein's line of thought is fairly easy to grasp. The outcome of a game of baseball between the New York Yankees and the Boston Red Sox depends largely on the actions of the players on the field and in the dugout; but not entirely and strictly. There are background facts and circumstances that also influence the outcome but are not present in the motions and thoughts of the players. The rules of baseball are not embodied on the field or in the minds of the players; so there may be possible worlds in which the same pitches, swings, impacts of bats on balls, catches, etc., occur; and yet the outcome of the game is different. The Boston pitcher may be subsequently found to be ineligible to play that day, and the Red Sox held to forfeit the game. The rule in our world holds that "tie goes to the runner"; whereas in alt-world it may be that the tie goes to the defensive team; and this means that the two-run homer in the ninth does not result in two runs, but rather in the final out. So the game does not depend on the actions of the players alone, but on distant and abstract facts about the rules of the game.

So what are some examples of "other facts" that might be causally relevant to social outcomes? The scenario offered here captures some of the key "extra-individual" facts that Epstein highlights, and that play a key role in the social ontology of John Searle: situating rules and interpretations that give semantic meaning to behaviors. Epstein highlights facts that determine "membership" in meaningful social contexts: being President, being the catcher on the Boston Red Sox. Both Epstein and Searle emphasize that there are a wide range of dispersed facts that must be true in order for Barack Obama to be President and Ryan Hanigan to be catcher. This is not a strictly "individual-level" fact about either man. Epstein quotes Gregory Currie on this point: "My being Prime Minister ... is not just a matter of what I think and do; it depends on what others think and do as well. So my social characteristics are clearly not determined by my individual characteristics alone" (11).

So, according to Epstein, local supervenience of the social upon the individual fails. What about global supervenience? He believes that this relation fails as well. And this is because, for Epstein, "social properties are determined by physical properties that are not plausibly the properties of individuals" (20). These are the "other facts" in the diagrams above. His simplest illustration is this: without cellos there can be no cellists (24). And without hanging chads, George W. Bush would not have been President. And, later, one can be an environmental criminal because of a set of facts that were both distant and unknown to the individual at the time of a certain action (33).

Epstein's analysis is careful and convincing in its own terms. Given the modal specification of the meaning of supervenience (as offered by Jaegwon Kim and successors), Epstein makes a powerful case for believing that the social does not supervene upon the individual in a technical and specifiable sense. However, I'm not sure that very much follows from this finding. For researchers within the general school of thought of "actor-centered sociology", their research strategy is likely to remain one that seeks to sort out the mechanisms through which social outcomes of interest are created as a result of the actions and interactions of individuals. If Epstein's arguments are accepted, that implies that we should not couch that research strategy in terms of the idea of supervenience. But this does not invalidate the strategy, or the broad intuition about the relation between the social and the actions of locally situated actors upon which it rests. These are the intuitions that I try to express through the idea of "methodological localism"; link, link. And since I also want to argue for the possibility of "relative explanatory autonomy" for facts at the level of the social (for example, features of an organization; link), I am not too troubled by the failure of a view of the social and individual that denies strict determination of the former by the latter. (Here is an earlier post where I wrestled with the idea of supervenience; link.)

Sunday, October 12, 2014

Emergentism and generationism


media: lecture by Stanford Professor Robert Sapolsky on chaos and reduction

Several recent posts have focused on the topic of simulations in the social sciences. An interesting question here is whether these simulation models shed light on the questions of emergence and reduction that frequently arise in the philosophy of the social sciences. In most cases the models I've mentioned are "aggregation" models, in which the simulation attempts to capture the chief dynamics and interaction effects of the units and then work out the behavior and evolution of the ensemble. This is visibly clear when it comes to agent-based models. However, some of the scholars whose work I admire are "complexity" theorists, and a common view within complexity studies is the idea that the system has properties that are difficult or impossible to derive from the features of the units.

So does this body of work give weight to the idea of emergence, or does it incline us more in the direction of supervenience and ontological unit-ism?

John Miller and Scott Page provide an accessible framework within which to consider these kinds of problems in Complex Adaptive Systems: An Introduction to Computational Models of Social Life. They look at certain kinds of social phenomena as constituting what they call "complex adaptive systems," and they try to demonstrate how some of the computational tools developed in the sciences of complex systems can be deployed to analyze and explain complex social outcomes. Here is how they characterize the key concepts:
Adaptive social systems are composed of interacting, thoughtful (but perhaps not brilliant) agents. (kl 151)
Page and Miller believe that social phenomena often display "emergence" in a way that we can make sense of. Here is the umbrella notion they begin with:
The usual notion put forth underlying emergence is that individual, localized behavior aggregates into global behavior that is, in some sense, disconnected from its origins. Such a disconnection implies that, within limits, the details of the local behavior do not matter to the aggregate outcome. (kl 826)
And they believe that the notion of emergence has "deep intuitive appeal". They find emergence to be applicable at several levels of description, including "disorganized complexity" (the central limit theorem, the law of large numbers) and "organized complexity" (the behavior of sand piles when grains have a small amount of control).
Under organized complexity, the relationships among the agents are such that through various feedbacks and structural contingencies, agent variations no longer cancel one another out but, rather, become reinforcing. In such a world, we leave the realm of the Law of Large Numbers and instead embark down paths unknown. While we have ample evidence, both empirical and experimental, that under organized complexity, systems can exhibit aggregate properties that are not directly tied to agent details, a sound theoretical foothold from which to leverage this observation is only now being constructed. (kl 976)
Organized complexity, in their view, is a substantive and important kind of emergence in social systems, and this concept plays a key role in their view of complex adaptive systems.
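Miller and Page's "disorganized complexity" case is easy to see computationally: by the law of large numbers and the central limit theorem, averages over many independent agents look alike regardless of unit-level detail. A small sketch (the two toy populations are my own, purely illustrative):

```python
import random
import statistics

def aggregate(draw, n_agents=1000, n_samples=500, seed=0):
    """Sample the mean of many independent agent-level variables.
    At the aggregate level, the unit details wash out."""
    rng = random.Random(seed)
    return [statistics.fmean(draw(rng) for _ in range(n_agents))
            for _ in range(n_samples)]

# Two populations with very different micro-level behavior...
coin_flips = aggregate(lambda r: r.choice([0.0, 1.0]))  # all-or-nothing agents
uniform = aggregate(lambda r: r.random())               # continuously varying agents

# ...yield aggregates centered on the same value, with tight, bell-shaped spread.
print(round(statistics.fmean(coin_flips), 2),
      round(statistics.fmean(uniform), 2))
```

This is the precise sense in which the aggregate is "disconnected from its origins": swapping one micro-level specification for another leaves the macro-level regularity intact. Organized complexity is exactly the case where this cancellation fails.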

Another -- and contrarian -- contribution to this field is provided by Joshua Epstein. His three-volume work on agent-based models is a fundamental textbook for the field. Here are the titles:

Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science
Growing Artificial Societies: Social Science From the Bottom Up
Generative Social Science: Studies in Agent-Based Computational Modeling

An overview of Epstein's approach is provided in Chapter 1 of Generative Social Science, "Agent-based Computational Models and Generative Social Science", and this is a superb place to begin (link). Here is how Epstein defines generativity:
Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest.... Rather, the generativist wants an account of the configuration's attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn't grow it, you didn't explain its emergence. (42)
Epstein describes an extensive attempt to model a historical population using agent-based modeling techniques, the Artificial Anasazi project (link). This work is presented in Dean, Gumerman, Epstein, Axtell, Swedlund, McCarroll, and Parker, "Understanding Anasazi Culture Change through Agent-Based Modeling," in Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes. The model takes a time series of fundamental environmental, climate, and agricultural data as given, and Epstein and his team attempt to reconstruct (generate) the pattern of habitation that would result.

Generativity seems to be directly incompatible with the idea of emergence, and in fact Epstein takes pains to cast doubt on that idea.
I have always been uncomfortable with the vagueness--and occasional mysticism--surrounding this word and, accordingly, tried to define it quite narrowly.... There, we defined "emergent phenomena" to be simply "stable macroscopic patterns arising from local interaction of agents." (53)
So Epstein and Page both make use of the methods of agent based modeling, but they disagree about the idea of emergence. Page believes that complex adaptive systems give rise to properties that are emergent and irreducible; whereas Epstein doesn't think the idea makes a lot of sense. Rather, Epstein's view depends on the idea that we can reproduce (generate) the macro phenomena based on a model involving the agents and their interactions. Macro phenomena are generated by the interactions of the units; whereas for Page and Miller, macro phenomena in some systems have properties that cannot be easily derived from the activities of the units.
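Epstein's "grow it" motto can be illustrated with a minimal generative model (my own toy example, not one of Epstein's): agents on a ring repeatedly copy a random neighbor, and large homogeneous blocks -- a macro pattern stipulated nowhere in the local rule -- are "grown" from decentralized interaction alone.

```python
import random

def grow(n=60, steps=3000, seed=1):
    """Minimal generative model (a voter-model variant): each step, one
    agent adopts the state of a randomly chosen neighbor on a ring."""
    rng = random.Random(seed)
    state = [rng.choice("AB") for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + rng.choice([-1, 1])) % n  # a neighbor on the ring
        state[i] = state[j]
    return "".join(state)

def longest_block(s):
    """Length of the longest run of identical states -- a macro property."""
    best = run = 1
    for a, b in zip(s, s[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

final = grow()
print(final)
print("longest homogeneous block:", longest_block(final))
```

In Epstein's terms, the microspecification (copy a neighbor) is demonstrated to be sufficient to generate the macrostructure (clustering); no "emergent" vocabulary beyond "stable macroscopic patterns arising from local interaction" is needed to describe it.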

At the moment, anyway, I find myself attracted to Herbert Simon's effort to split the difference by referring to "weak emergence" (link):
... reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (Sciences of the Artificial 3rd edition 172)
This view emphasizes the computational and epistemic limits that sometimes preclude generating the phenomena in question -- for example, the problems raised by non-linear causal relations and causal interdependence. Many observers have noted that the behavior of tightly linked causal systems may be impossible to predict, even when we are confident that the system outcomes are the result of "nothing but" the interactions of the units and sub-systems.

Monday, February 18, 2013

Supervenience of the social?


I have found it appealing to try to think of the macro-micro relation in terms of the idea of supervenience (link).  Supervenience is a concept that was developed in the context of physicalism and psychology, as a way of specifying a non-reductionist but still constraining relationship between psychological properties and physical states of the brain. Physicalism and ontological individualism are both ontological theories about the relationship between higher and lower levels of entities in several different domains. But neither doctrine dictates how explanations in these domains need to proceed; i.e., neither forces us to be reductionist in either psychology or sociology.

The supervenience relation holds that --
  • X supervenes on Y =df no difference in X without some difference in the states of Y
Analogously, to say that the "social" supervenes upon "the totality of individuals making up a social arrangement" seems to have a superficial plausibility, without requiring that we attempt to reduce the social characteristics to ensembles of facts about individuals.

I'm no longer so sure that this is a helpful move, however, for the purposes of the macro-micro relationship.  Suppose we are considering a statement along these lines:
  • The causal properties of organization X supervene on the states of the individuals who make up X and who interact with X.
There seem to be quite a few problems that arise when we try to make use of this idea.

(a) First, what are we thinking of when we specify "the states of the individuals"? Is it all characteristics, known and unknown? Or is it a specific list of characteristics? If it is all characteristics of the individual, including as-yet unknown characteristics, then the supervenience relation is impossible to apply in practice. We would never know whether two substrate populations were identical all the way down. This represents a kind of "twin-earth" thought experiment that doesn't shed light on real sociological questions.

In the psychology-neurophysiology examples out of which supervenience theory originated, these problems don't seem so troubling. First, we think we know which properties of nerve cells are relevant to their functioning: electrical properties and network connections. So our supervenience claim for psychological states is narrower:
  • The causal properties of a psychological process supervene on the functional properties of the states of the nerve cells of the corresponding brain. 
The nerve cells may differ in other ways that are irrelevant to the psychological processes at the higher level: they may be a little larger or smaller, they may have a slightly different content of trace metals, they may be of different ages. But our physicalist claim is generally more refined than this; it ignores these "irrelevant" differences across cells and specifies identity among the key functional characteristics of the cells. Put this way, the supervenience claim is an empirical theory; it says that electrical properties and network connections are causally relevant to psychological processes, but cell mass and cell age are not (within broad parameters).

(b) Second and relatedly, there are always some differences between two groups of people, no matter how similar; and if the two groups are different in the slightest degree -- say, one member likes ice cream and the corresponding other does not -- then the supervenience relation says nothing about the causal properties of X. The organizational features may be as widely divergent as could be imagined; supervenience is silent about the epsilon-delta relations from substrate to higher level. It specifies only that identical substrates produce identical higher-level properties. More useful here would be something like the continuity concept in calculus: small deviations in lower-level properties result in small deviations in higher-level properties. But it is not clear that this is true in the social case.

(c) Also problematic for the properties of social structures is an issue that depends upon the idea of path dependence. Let's say that we are working with the idea that a currently existing institution depends for its workings (its properties) on the individuals who make it up at present. And suppose that the institution has emerged through a fifty-year process of incremental change, while populated at each step by approximately similar individuals. The well-established fact of path dependence in the evolution of institutions (Thelen, How Institutions Evolve: The Political Economy of Skills in Germany, Britain, the United States, and Japan) entails that the properties of the institution today are not uniquely determined by the features of the individuals currently involved in the institution. Rather, there were shaping events that pushed the evolution of the institution in this direction or that at various points in time. This means that the current properties of the institution are best explained not by the current properties of the substrate individuals, but rather by the history of development that led this population to this point.

It will still be true that the workings of the institution at present are dependent on the features of the individuals at present; but the path-dependency argument says that those individuals will have adjusted in small ways so as to embody the regulative system of the institution in its current form, without becoming fundamentally different kinds of individuals. Chiefly they will have internalized slightly different systems of rules that embody the current institution, and this is what gives the institution its characteristic mode of functioning in the present.

So explanation of the features of the institution in the present is not best couched in terms of the current characteristics of the individuals who make it up, but rather by an historical account of the path that led to this point (and the minute changes in individual beliefs and behaviors that went along with this).

These concerns make me less satisfied with the general idea of supervenience as a way of specifying the relation between social structures and substrate individuals. What would satisfy me more would be something like this:
  • Social structures supervene upon the states of individuals in the substrate described at a given level of granularity corresponding to our current theory of the actor.
  • Small differences in the substrate will produce only small differences in the social structure.
These add up to a strong claim; they entail that any organization with similar rules of behavior involving roughly similar actors (according to the terms of our best theory of the actor) will have roughly similar causal properties. And this in turn invites empirical investigation through comparative methods.

As for the path-dependency issue raised in comment (c), perhaps this is the best we can say: the substrate analysis of the behavior of the individuals tells us how the institution works, but the historical account of the path-dependent process through which the institution came to have the characteristics it currently has tells us why it works this way. And these are different kinds of explanations.

Thursday, November 1, 2012

Methodological individualism today

Is it possible to draw a few conclusions on the topic of methodological individualism after decades of debate? (Lars Udehn's Methodological Individualism: Background, History and Meaning is a great study of the long history of the debate over this issue. It is unfortunate there isn't an affordable digital edition of the book. Joseph Heath's entry on the subject in the Stanford Encyclopedia of Philosophy gives a very good overview; link.) Here is Jon Elster's formulation of the concept in Nuts and Bolts for the Social Sciences (1989):
The elementary unit of social life is the individual human action. To explain social institutions and social change is to show how they arise as the result of the actions and interaction of individuals. This view, often referred to as methodological individualism, is in my view trivially true. (13)
Max Weber is often identified as the modern originator of the theory of methodological individualism. (Weber's student Joseph Schumpeter was the first to use the concept in print.) Weber's reason for advocating MI derived from his view of action as purposive behavior, and his view that social outcomes need to be explained on the basis of the purposive actions of the individual actors who constitute them. So MI began with a presupposition about the unique importance of rational-intentional behavior in social life. Weber insisted on a rational actor foundation for the social sciences. And this prepared the ground for a joining of forces between methodological individualism and rational choice theory.

The emphasis on methodological individualism sometimes reflected a strong disposition towards eliminative reductionism with respect to social entities and properties: mid-twentieth-century exponents like J.W.N. Watkins wanted to find logical formulations through which social terms could be eliminated in favor of a logical compound of statements about individuals. And what was the motivation for this effort? It appears to be a version of the physicist’s preference for reduction to ensembles of simple homogeneous "atoms", transported to the social and behavioral sciences. This demand for reduction might take the form of conceptual reduction or compositional reduction. The latter takes the form of demonstrations of how higher-level properties are made up of lower-level systems. The conceptual reduction program didn't work out well, any more than Carnap's phenomenalist reconstruction of physics did.

In addition to this bias derived from positivist philosophy of science, there was also a political subtext in some formulations of the theory in the 1950s. Karl Popper and J.W.N. Watkins advocated for MI because they thought this methodology was less conducive to the "collectivist" theories of Marx and the socialists. If collectivities don't exist, then collectivism is foolish.

Another phase of thinking was more ontological than conceptual. These thinkers wanted to make it clear that social things, causes, and structures depended on the activities of individuals and nothing else. Another way of putting the point is to say that social entities are composed of ensembles of individuals and nothing else. Their concern was to avoid the social analogue of vitalism -- the idea in the life sciences that there is some special "sauce" of life activity that is wholly independent of the molecular and physical structures that make up the organism. Essentially this crowd wants to hold that the properties of the whole are fixed solely and completely by the physical structures that make it up. The theory of supervenience pretty well captures this ontological position: no differences at the upper level without some difference at the lower level. (This position doesn't imply its converse: if two physical systems differ, then their upper-level systems must differ too. This is the point of multiple realizability.) The position does rule out some forms of emergentism, however.

The idea of microfoundations comes into this line of thought as well. If we make a claim about the structural or causal properties of an upper-level thing, we need to be confident that there are microfoundations that would show how this feature comes about. In the strongest case, we need to actually provide the microfoundations.
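The slogan "no differences at the upper level without some difference at the lower level," together with the point about multiple realizability, can be put schematically. (The descriptions $A$ and $B$ are my own shorthand, not the original formulation.)

```latex
% B(x): the complete lower-level (individual-level) description of x
% A(x): the upper-level (social) description of x
% Supervenience: lower-level indiscernibility entails
% upper-level indiscernibility:
B(x) = B(y) \;\Longrightarrow\; A(x) = A(y)
% Multiple realizability: the converse does NOT hold; the same
% upper-level state may be realized by different lower-level states:
A(x) = A(y) \;\not\Longrightarrow\; B(x) = B(y)
```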

There is another important stream of MI thinking that derives from a set of ideas about how higher-level facts ought to be explained: they should be explained on the basis of demonstrations of how the upper-level entity is given its properties by the organized system of elements of which it is composed. This is essentially what the analytical sociologists seem to demand, by insisting on the logic of Coleman's boat. This approach privileges a certain kind of explanation -- constructive or compositional explanations.

There is one aspect of the tradition that I haven't mentioned yet: the idea that we can carve out the individual as separate from and prior to the social -- a view sometimes referred to as "atomistic". In classical physics the analogous claim is supportable. Sodium atoms are homogeneous and interchangeable. But it is not plausible in the human world. Social facts intertwine with the mind and actions of individuals all the way down. So from the start, it would seem that the program of MI should be formulated in terms of reduction from the big-social to the small-social, not the non-social.

So what kinds of social claims do these various formulations rule out?

All of them rule out spooky holism, those social theories that claim that social entities exist that are wholly independent of the features of individuals.

Several of them rule out strong emergentism -- the view that there are social properties that could not in principle be derived from full knowledge about the states and properties of the constituent individuals.

They by and large rule out explanatory autonomy for the social level. This is the idea that there might be fully satisfactory causal arguments that proceed from statements about the properties of one set of social factors and explain another set of social outcomes on that basis. (The ontological thesis does not have this implication.)

As Heath argues in his SEP essay, they rule out macro-level statistical explanations and what he calls micro-level sub-intentional explanations.

In my view, the only claims about methodological individualism that seem unequivocally plausible today are the ontological requirements -- the various formulations of the notion that social things are composed of the actions and thoughts of individuals and nothing else. These requirements imply that the supervenience claim and the microfoundations claim are plausible as well.

But to concede that x's are composed of y's does not entail the need for any kind of reductionism from x to y. And this extends to the idea of explanatory reduction as well. So methodological individualism does not create valid limits on the structure of social explanations, and meso-level explanations are not excluded.

So it seems as though we can now draw several conclusions about the field of methodological individualism. The ontological thesis is roughly true, but it is compatible with a range of different ideas about within- and cross-level explanation. So reductionism doesn't follow. The micro-level can't be a hypothetical pre-social or non-social individual. Finally, there is no reason to associate the plausible core of MI theory with one specific theory of action, the rational-intentional theory. As pragmatist sociologists are now arguing, there are compelling theories of the actor that do not privilege the model of conscious deliberative choice.

Friday, January 6, 2012

Emergence


photos: Niklas Luhmann (top), Mario Bunge (bottom)

One view that has been taken about the causal properties of social structures is that they are emergent: they are properties that appear only at a certain level of complexity, and do not pertain to the items of which the social structure is composed. This view has a couple of important problems, not least of which is one of definition. What specifically is the idea of emergence supposed to mean? And do we have any good reasons to believe that it applies to the social world?

An important recent exponent of the view in question is David Elder-Vass in The Causal Power of Social Structures: Emergence, Structure and Agency. Elder-Vass is in fact specific about what he means by the concept. He defines a property of a compound entity or structure as emergent when the property applies only to the structure itself and not to any of its components.
A thing ... can have properties or capabilities that are not possessed by its parts. Such properties are called emergent properties. (4)
An emergent property is one that is not possessed by any of the parts individually and that would not be possessed by the full set of parts in the absence of a structuring set of relations between them. (17)
But, as I argued in an earlier post, this is such a tame version of emergence that it doesn't seem to add much. By E-V's criterion, most properties are emergent -- the sweetness of sugar, the flammability of woven cotton, the hardness of bronze.

What gives the idea of emergence real bite -- but also makes it fundamentally mysterious -- is the additional idea that the property cannot be derived from facts about the components and their arrangements within the structure in question. By this criterion, none of the properties just mentioned are emergent, because their characteristics can in principle be derived from what we know about their components in interaction with each other.

This is the concept of emergence that is associated with holism and anti-reductionism. Essentially it requires us to do our scientific work entirely at the level of the structure itself -- discover system-level properties and powers, and turn our backs on the impulse to explain through analysis.

A kind of compromise view is offered by Herbert Simon in his conception of a complex system in a 1962 article, "The Architecture of Complexity" (link). Here is how he defines the relevant notion of complexity:
Roughly, by a complex system I mean one made up of a large number of parts that interact in a nonsimple way. In such systems, the whole is more than the sum of the parts, not in an ultimate, metaphysical sense, but in the important pragmatic sense that, given the properties of the parts and the laws of their interaction, it is not a trivial matter to infer the properties of the whole. In the face of complexity, an in-principle reductionist may be at the same time a pragmatic holist. (468)
Here Simon favors a view that does not assert ontological independence of system characteristics from individual characteristics, but does assert pragmatic and explanatory independence. In fact, his position seems equivalent to the supervenience thesis: social facts supervene upon facts about individuals. But the implication for research is plain: it is useless to pursue a reductionist strategy for understanding system-level properties of complex systems.

A recent issue of Philosophy of the Social Sciences contains three interesting contributions to different aspects of this topic. Mariam Thalos ("Two Conceptions of Fundamentality") and Shiping Tang ("Foundational Paradigms of Social Sciences") are both worth reading. But Poe Yu-ze Wan's "Emergence à la Systems Theory: Epistemological Totalausschluss or Ontological Novelty?" is directly relevant to the question of emergence, so here I'll focus on his analysis.

Wan distinguishes between two schools of thought about emergence, associated with Niklas Luhmann and Mario Bunge.  Luhmann's conception is extravagantly holistic, whereas Bunge's conception is entirely consistent with the idea that emergent characteristics are nonetheless fixed by properties of the constituents. Wan argues that Luhmann has an "epistemological" understanding of emergence -- the status of a property as emergent is a feature of its derivability or explicability on the basis of lower-level facts.  Bunge's approach, on the other hand, is ontological: even if we can fully explain the higher-level phenomenon in terms of the properties of the lower level, the property itself is still emergent.  So for Bunge, "emergence" is a fact about being, not about knowledge.  Wan also notes that Luhmann wants to replace the "part-whole" distinction with the "environment-system" distinction -- which Wan believes is insupportable (180).  Here is a statement from Luhmann quoted by Wan:
Whenever there is an emergent order, we find the elements of a presupposed materiality- or energy-continuum … are excluded.  Total exclusion (Totalausschluss) is the condition of emergence. (Luhmann, Niklas. 1992. Wer kennt Wil Martens? Eine Anmerkung zum Problem der Emergenz sozialer Systeme. Kölner Zeitschrift für Soziologie und Sozialpsychologie 44(1): 139-42, 141)
And here is Bunge's definition of emergence, quoted by Wan:
To say that P is an emergent property of systems of kind K is short for "P is a global (or collective or non-distributive) property of a system of kind K, none of whose components or precursors possesses P." (Emergence and Convergence: Qualitative Novelty and the Unity of Knowledge, 15)
Bunge's position here is exactly the same as the conception offered by Elder-Vass above.  It defines emergence as novelty at the higher level -- whether or not that novelty can be explained by facts about the constituents.  Bunge's conception is consistent with the supervenience principle, in my reading, whereas Luhmann's is not.
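On this reading, the shared Bunge/Elder-Vass notion of emergence can be stated compactly. (The predicate notation is my own gloss on the two definitions quoted above.)

```latex
% Comp(s): the components of system s; P: a candidate property.
% P is emergent for s iff s possesses P while no component does:
\mathrm{Emergent}(P, s) \;\iff\;
  P(s) \,\wedge\, \forall c \in \mathrm{Comp}(s) : \neg P(c)
% Nothing in this definition requires that P(s) be underivable from
% facts about the components and their relations -- hence its
% compatibility with supervenience.
```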

Wan provides an excellent review of the history of thinking about this concept, and his assessment of the issues is one that I for one agree with.  In particular, his endorsement of Bunge's position of "rational emergentism" seems to me to get the balance exactly right: social properties are in some sense fixed by the properties of the constituents; they are nonetheless distinct from those underlying properties; and good scientific theories are justified in referring to these emergent properties without the need of reducing them or replacing them with properties at the lower level.  This is what Simon seems to be getting at in his definition of complex systems, quoted above; and it seems to be equivalent to the idea of explanatory autonomy argued in an earlier post.

My own strategy on this issue is to avoid use of the concept of emergence and to favor instead the idea of explanatory autonomy. This is the idea that mid-level system properties are often sufficiently stable that we can pursue causal explanations at that level, without providing derivations of those explanations from some more fundamental level (link).

The explanatory point is clear: we can explain meso-level outcomes by reference to emergent system characteristics. But we need to have good replicable knowledge of the causal properties of the emergent features in order to develop explanations of other kinds of outcomes based on the workings of the system characteristics. I would also add that we need to have confidence that the hypothesized system-level characteristics do in fact possess microfoundations at the level of the individual and social actions that underlie them; or, in other words, we need to have reason for confidence that the emergent properties our explanations hypothesize do in fact conform to the supervenience relation.

A couple of Wan's sources are particularly valuable for investigators who are interested in pursuing the idea of emergence further:

David Blitz, Emergent Evolution: Qualitative Novelty and the Levels of Reality (Episteme)
Richard Jones, Reductionism: Analysis and the Fullness of Reality
Keith Sawyer, Social Emergence: Societies As Complex Systems