Showing posts with label reductionism. Show all posts

Friday, May 12, 2017

Brian Epstein's radical metaphysics


Brian Epstein is adamant that the social sciences need to think very differently about the nature of the social world. In The Ant Trap: Rebuilding the Foundations of the Social Sciences he sets out to blow up our conventional thinking about the relation between individuals and social facts. In particular, he is fundamentally skeptical about any conception of the social world that depends on the idea of ontological individualism, directly or indirectly. Here is the plainest statement of his view:
When we look more closely at the social world, however, this analogy [of composition of wholes out of independent parts] falls apart. We often think of social facts as depending on people, as being created by people, as the actions of people. We think of them as products of the mental processes, intentions, beliefs, habits, and practices of individual people. But none of this is quite right. Research programs in the social sciences are built on a shaky understanding of the most fundamental question of all: What are the social sciences about? Or, more specifically: What are social facts, social objects, and social phenomena—these things that the social sciences aim to model and explain? 
My aim in this book is to take a first step in challenging what has come to be the settled view on these questions. That is, to demonstrate that philosophers and social scientists have an overly anthropocentric picture of the social world. How the social world is built is not a mystery, not magical or inscrutable or beyond us. But it turns out to be not nearly as people-centered as is widely assumed. (p. 7)
Here is one key example Epstein provides to give an intuitive grasp of the anti-reductionist metaphysics he has in mind -- the relationship between "the Supreme Court" and the nine individuals who make it up.
One of the examples I will be discussing in some detail is the United States Supreme Court. It is small— nine members— and very familiar, so there are lots of facts about it we can easily consider. Even a moment’s reflection is enough to see that a great many facts about the Supreme Court depend on much more than those nine people. The powers of the Supreme Court are not determined by the nine justices, nor do the nine justices even determine who the members of the Supreme Court are. Even more basic, the very existence of the Supreme Court is not determined by those nine people. In all, knowing all kinds of things about the people that constitute the Supreme Court gives us very little information about what that group is, or about even the most basic facts about that group. (p. 10)
Epstein makes an important observation when he notes that there are two "consensus" views of the individual-level substrate of the social world, not just one. The first is garden-variety individualism: it is individuals and their properties (psychological, bodily) involved in external relations with each other that constitute the individual-level substrate of the social. In this case it is reasonable to apply the supervenience relation to the relation between individuals and higher-level social facts (link).

The second view is more of a social-constructivist orientation towards individuals: individuals are constituted by their representations of themselves and others; the individual-level is inherently semiotic and relational. Epstein associates this view with Searle (50 ff.); but it seems to characterize a range of other theorists, from Geertz to Goffman and Garfinkel. Epstein refers to this approach as the "Standard Model" of social ontology. Fundamental to the Standard Model is the idea of institutional facts -- the rules of a game, the boundaries of a village, the persistence of a paper currency. Institutional facts are held in place by the attitudes and performances of the individuals who inhabit them; but they are not reducible to an ensemble of individual-level psychological facts. And the constructionist part of the approach is the idea that actors jointly constitute various social realities -- a demonstration against the government, a celebration, or a game of bridge. And Epstein believes that supervenience fails in the constructivist ontology of the Standard Model (57).

Both views are anti-dualistic (no inherent social "stuff"); but on Epstein's approach they are ultimately incompatible with each other.

But here is the critical point: Epstein doesn't believe that either of these views is adequate as a basis for social metaphysics. We need a new beginning in the metaphysics of the social world. Where to start this radical work? Epstein offers several new concepts to help reshape our metaphysical language about social facts -- what he refers to as "grounding" and "anchoring" of social facts. "Grounding" facts for a social fact M are lower-level facts that help to constitute the truth of M. "Bob and Jane ran down Howe Street" partially grounds the fact "the mob ran down Howe Street" (M). The fact about Bob and Jane is one of the features of the world that contributes to the truth and meaning of M. "Full grounding" is a specification of all the facts needed in order to account for M. "Anchoring" facts are facts that characterize the constructivist aspect of the social world -- conformance to meanings, rules, or institutional structures. An anchoring fact is one that sets the "frame" for a social fact. (An earlier post offered reflections on anchor individualism; link.)

Epstein suggests that "grounding" corresponds to classic ontological individualism, while "anchoring" corresponds to the Standard View (the constructivist view).
What I will call "anchor individualism" is a claim about how frame principles can be anchored. Ontological individualism, in contrast, is best understood as a claim about how social facts can be grounded. (100)
And he believes that a more adequate social ontology is one that incorporates both grounding and anchoring relations. "Anchoring and grounding fit together into a single model of social ontology" (82).

Here is an illustrative diagram of how the two kinds of relations work in a particular social fact (Epstein 94):


So Epstein has done what he set out to do: he has taken the metaphysics of the social world as seriously as contemporary metaphysicians do on other important topics, and he has teased out a large body of difficult questions about constitution, causation, formation, grounding, and anchoring. This is a valuable and innovative contribution to the philosophy of social science.

But does this exercise add significantly to our ability to conduct social science research and theory? Do James Coleman, Sam Popkin, Jim Scott, George Steinmetz, or Chuck Tilly need to fundamentally rethink their approach to the social problems they attempted to understand in their work? Do the metaphysics of "frame", "ground", and "anchor" make for better social research?

My inclination is to think that this is not an advantage we can attribute to The Ant Trap. Clarity, precision, surprising conceptual formulations, yes; these are all virtues of the book. But I am not convinced that these conceptual innovations will actually make the work of explaining industrial actions, rebellious behavior, organizational failures, educational systems that fail, or the rise of hate-based extremism more effective or insightful.

In order to do good social research we do of course need to have a background ontology. But after working through The Ant Trap several times, I'm still not persuaded that we need to move beyond a fairly commonsensical set of ideas about the social world:
  • individuals have mental representations of the world they inhabit
  • institutional arrangements exist through which individuals develop, form, and act
  • individuals form meaningful relationships with other individuals
  • individuals have complicated motivations, including self-interest, commitment, emotional attachment, political passion
  • institutions and norms are embodied in the thoughts, actions, artifacts, and traces of individuals (grounded and anchored, in Epstein's terms)
  • social causation proceeds through the substrate of individuals thinking, acting, re-acting, and engaging with other individuals
These are the assumptions that I have in mind when I refer to "actor-centered sociology" (link). This is not a sophisticated philosophical theory of social metaphysics; but it is fully adequate for grounding a realist and empirically informed effort to understand the social world around us. And nothing in The Ant Trap leads me to believe that there are fundamental conceptual impossibilities embedded in these simple, mundane individualistic ideas about the social world.

And this leads me to one other conclusion: Epstein argues the social sciences need to think fundamentally differently. But actually, I think he has shown at best that philosophers can usefully think differently -- but in ways that may in the end not have a lot of impact on the way that inventive social theorists need to conceive of their work.

(The photo at the top is chosen deliberately to embody the view of the social world that I advocate: contingent, institutionally constrained, multi-layered, ordinary, subject to historical influences, constituted by indefinite numbers of independent actors, demonstrating patterns of coordination and competition. All these features are illustrated in this snapshot of life in Copenhagen -- the independent individuals depicted, the traffic laws that constrain their behavior, the polite norms leading to conformance to the crossing signal, the sustained effort by municipal actors and community based organizations to encourage bicycle travel, and perhaps the lack of diversity in the crowd.)

Sunday, July 10, 2016

Elias on figurational sociology




A premise shared by all actor-centered versions of sociology is that individuals and their actions are the rock-bottom level of the social world. Every other social fact derives from facts at this level. Norbert Elias raises a strong and credible challenge to this ontological assumption in his work, offering a view of social action that makes "figurations" of actors just as real as individual actors themselves. By figuration he means something like an interlocking set of individuals whose actions are a fluid series of reactions to and expectations about others. Figurations include both conflict and cooperation. And he is insistent that figurations cannot be reduced to the sum of a collection of independent actors and their choices. "Imagine the interlocking of the plans and actions, not of two, but of two thousand or two million interdependent players. The ongoing process which one encounters in this case does not take place independently of individual people whose plans and actions keep it going. Yet it has a structure and demands an explanation sui generis. It cannot be explained in terms of the ‘ideas’ or the ‘actions’ of individual people" (52). So good sociology needs to pay attention to figurations, not just individuals and their mental states.

Elias's most vivid illustration of what he means by a figuration comes from his reflections on the game of soccer and the flow of action across two teams and twenty-two individual players over extended episodes of play. These arguments constitute the primary topic of volume 7 of his collected writings, Elias and Dunning, Quest for Excitement: Sport and Leisure in the Civilising Process. (This is particularly relevant at a time when millions of people are viewing the Euro Cup.)
The observation of an ongoing game of football can be of considerable help as an introduction to the understanding of such terms as interlocking plans and actions. Each team may have planned its strategy in accordance with the knowledge of their own and their opponents’ skills and foibles. However, as the game proceeds, it often produces constellations which were not intended or foreseen by either side. In fact, the flowing pattern formed by players and ball in a football game can serve as a graphic illustration not only of the concept of ‘figurations’ but also of that of ‘social process’. The game-process is precisely that, a flowing figuration of human beings whose actions and experiences continuously interlock, a social process in miniature. One of the most instructive aspects of the fast-changing pattern of a football game is the fact that this pattern is formed by the moving players of both sides. If one concentrated one’s attention only on the activities of the players of one team and turned a blind eye to the activities of the other, one could not follow the game. The actions and experiences of the members of the team which one tried to observe in isolation and independently of the actions and perceptions of the other would remain incomprehensible. In an ongoing game, the two teams form with each other a single figuration. It requires a capacity for distancing oneself from the game to recognize that the actions of each side constantly interlock with those of their opponents and thus that the two opposing sides form a single figuration. So do antagonistic states. Social processes are often uncontrollable because they are fuelled by enmity. Partisanship for one side or another can easily blur that fact. (51-52; italics mine)
Here is a more theoretical formulation from Elias, from "Dynamics of sports groups" in the same volume.
Let us start with the concept of ‘figuration’. It has already been said that a game is the changing figuration of the players on the field. This means that the figuration is not only an aspect of the players. It is not as one sometimes seems to believe if one uses related expressions such as ‘social pattern’, ‘social group’, or ‘society’, something abstracted from individual people. Figurations are formed by individuals, as it were ‘body and soul’. If one watches the players standing and moving on the field in constant inter-dependence, one can actually see them forming a continuously changing figuration. If groups or societies are large, one usually cannot see the figurations their individual members form with one another. Nevertheless, in these cases too people form figurations with each other — a city, a church, a political party, a state — which are no less real than the one formed by players on a football field, even though one cannot take them in at a glance.

To envisage groupings of people as figurations in this sense, with their dynamics, their problems of tension and of tension control and many others, even though one cannot see them here and now, requires a specific training. This is one of the tasks of figurational sociology, of which the present essay is an example. At present, a good deal of uncertainty still exists with regard to the nature of that phenomenon to which one refers as ‘society’. Sociological theories often appear to start from the assumption that ‘groups’ or ‘societies’, and ‘social phenomena’ in general, are something abstracted from individual people, or at least that they are not quite as ‘real’ as individuals, whatever that may mean. The game of football — as a small-scale model — can help to correct this view. It shows that figurations of individuals are neither more nor less real than the individuals who form them. Figurational sociology is based on observations such as this. In contrast to sociological theories which treat societies as if they were mere names, an ‘ideal type’, a sociologist’s construction, and which are in that sense representative of sociological nominalism, it represents a sociological realism. Individuals always come in figurations and figurations are always formed by individuals. (199)
This ontological position converges closely with the "relational" view of social action advocated by the new pragmatists as well as Chuck Tilly. The pragmatists' idea that individual actions derive from the flow of opportunities and reactions instigated by the movements of others is particularly relevant. But Elias's view also seems to have some resonance with the idea of methodological localism: "individuals in local social interactions are the molecule of the social world."

What seems correct here is an insight into the "descriptive ontology" of the social world. Elias credibly establishes the fact of entangled, flowing patterns of action by individuals during an episode, and shows persuasively that these collective patterns don't derive fully or directly from the individual intentions of the participants. "Figurations are just as real as individuals." So the sociologist's ontology needs to include figurations. Moreover the insight seems to cast doubt on the analytical sociologists' strategy of "dissection". These points suggest that Elias provides a basis for a critique of ontological individualism. And Elias can be understood as calling for more realism in sociological description.

What this analysis does not provide is any hint about how to use this idea in constructing explanations of larger-scale social outcomes or patterns. Are we forced to stop with the discovery of a set of figurations in play in a given social occurrence? Are we unable to provide any underlying explanation of the emergence of the figuration itself? Answers to these questions are not clear in Elias's text. And yet this is after all the purpose of explanatory sociology.

It is also not completely convincing to me that the figurations described by Elias could not be derived through something like an agent-based simulation. The derivation of flocking and swarming behavior in fish and birds seems to be exactly this -- a generative account of the emergence of a collective phenomenon (figuration) from assumptions about the decision-making of the individuals. So it seems possible to read Elias's position as posing a challenge to actor-based sociology that can now be addressed, rather than a refutation.
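Something like this generative derivation can be sketched directly. The following toy boids-style simulation is my illustration, not anything Elias proposed: the three steering rules loosely follow Craig Reynolds' classic flocking model, and all the weights and parameters are arbitrary choices for the sketch. It shows a flock-level pattern -- convergence of headings -- emerging solely from each agent's local reactions to the others.

```python
import random

def step(boids, align_w=0.05, cohere_w=0.01, separate_w=0.05, min_dist=1.0):
    """Advance the flock one tick. Each boid is a dict with position (x, y)
    and velocity (vx, vy). No agent represents the global pattern; each
    reacts only to the positions and headings of the others."""
    new = []
    for b in boids:
        others = [o for o in boids if o is not b]
        n = len(others)
        # Rule 1: align with the average heading of the others
        avg_vx = sum(o["vx"] for o in others) / n
        avg_vy = sum(o["vy"] for o in others) / n
        # Rule 2: cohere toward the others' center of mass
        cx = sum(o["x"] for o in others) / n
        cy = sum(o["y"] for o in others) / n
        # Rule 3: separate from boids that are too close (Manhattan distance)
        sep_x = sum(b["x"] - o["x"] for o in others
                    if abs(b["x"] - o["x"]) + abs(b["y"] - o["y"]) < min_dist)
        sep_y = sum(b["y"] - o["y"] for o in others
                    if abs(b["x"] - o["x"]) + abs(b["y"] - o["y"]) < min_dist)
        vx = b["vx"] + align_w * (avg_vx - b["vx"]) + cohere_w * (cx - b["x"]) + separate_w * sep_x
        vy = b["vy"] + align_w * (avg_vy - b["vy"]) + cohere_w * (cy - b["y"]) + separate_w * sep_y
        new.append({"x": b["x"] + vx, "y": b["y"] + vy, "vx": vx, "vy": vy})
    return new

def heading_spread(boids):
    """Disorder measure: variance of the velocity components across the flock."""
    n = len(boids)
    mvx = sum(b["vx"] for b in boids) / n
    mvy = sum(b["vy"] for b in boids) / n
    return sum((b["vx"] - mvx) ** 2 + (b["vy"] - mvy) ** 2 for b in boids) / n

random.seed(0)
flock = [{"x": random.uniform(0, 10), "y": random.uniform(0, 10),
          "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
         for _ in range(30)]

before = heading_spread(flock)
for _ in range(200):
    flock = step(flock)
after = heading_spread(flock)
```

The point of the sketch is that the collective pattern is generated by, but not summarized in, any individual's plan -- which is roughly the explanatory move a generative reading of Elias's figurations would require.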

In this sense it appears that figurational sociology is in the same position as various versions of microsociology considered elsewhere (e.g. Goffman): it identifies a theoretical lacuna in rational choice theory and thin theories of the actor, but it does not provide recommendations for how to proceed with a more adequate explanatory theory.

(Recall the earlier discussion on non-generative social facts and ontological individualism; link. That post makes a related argument for the existence of social facts that cannot be derived from facts about the individual actors involved. In each case the problem derives from the highly path-dependent nature of social outcomes.)

Tuesday, March 8, 2016

Reduction and generativeness


Providing an ontology of complex entities seems to force us to refer to some notion of higher-level and lower-level things. Proteins consist of atoms; atoms consist of protons, electrons, and neutrons; and cells are agglomerations of many things, including proteins. This describes a relation of composition between a set of lower-level things and the higher-level thing. And this in turn seems to involve some kind of notion of "levels" of things in the world. Things at each level have relations and properties constituting the domain of facts at that level, and the properties of the higher-level thing are sometimes different from the properties of the lower-level things. (Not all the properties, of course -- proteins and atoms alike have mass and momentum.) But for the properties that differ, we have an important question to answer: what explains or determines the properties of the higher-level thing? Several positions have been considered:

(In what follows, read A as the higher-level domain and B as the lower-level domain.)
  • Facts about things and properties of A are generated by facts of B
  • Facts about things and properties of A can be reduced to facts of B
  • Facts about things and properties of A supervene upon properties of B
I want to discuss these relations here, but it's worth recalling the other important relations across levels that are sometimes invoked.
  • Facts about things and properties of A are weakly emergent from properties of B
  • Facts about things and properties of A are strongly emergent from properties of B
  • Facts about things and properties of A are in part independent from the properties of B
  • Facts about things and properties of A causally influence the properties of B

So let's focus here on reduction and generation. These are sometimes thought to be equivalent notions; but they are not. Let's grant that the facts about B jointly serve to generate the facts about A. Then A supervenes upon B, by definition. Do these facts imply that A is reducible to B, or that facts of A can be or should be reduced to B? Emphatically not. Reducibility is a feature of the relationship between bodies of knowledge or theories -- our knowledge of A and our knowledge of B. To reduce A to B means deriving what we know about A from what we know about B. For example, the laws of planetary motion are derivable from the law of universal gravitation: by working through the mathematics of gravity it is possible to derive the orbits of the planets around the sun. So the laws of planetary motion are reducible to the law of universal gravitation.
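The planetary-motion example can be made concrete in a few lines. This sketch is my illustration, with units chosen so that GM = 1 and purely numerical (not symbolic) derivation: it integrates Newton's inverse-square law for two circular orbits and recovers Kepler's third law, T² ∝ a³ -- the sort of derivation that underwrites the reduction claim.

```python
import math

def orbital_period(r, gm=1.0, dt=1e-3):
    """Integrate Newtonian gravity for a circular orbit of radius r and
    measure the period directly, using velocity-Verlet integration."""
    x, y = r, 0.0
    vx, vy = 0.0, math.sqrt(gm / r)   # speed required for a circular orbit

    def accel(x, y):
        d3 = (x * x + y * y) ** 1.5
        return -gm * x / d3, -gm * y / d3

    ax, ay = accel(x, y)
    t = 0.0
    prev_y = y
    while True:
        x += vx * dt + 0.5 * ax * dt * dt
        y += vy * dt + 0.5 * ay * dt * dt
        nax, nay = accel(x, y)
        vx += 0.5 * (ax + nax) * dt
        vy += 0.5 * (ay + nay) * dt
        ax, ay = nax, nay
        t += dt
        prev_y_was_negative = prev_y < 0.0
        prev_y = y
        # one full revolution: y crosses zero upward near the starting point
        if prev_y_was_negative and y >= 0.0 and x > 0:
            return t

t1 = orbital_period(1.0)
t2 = orbital_period(2.0)
ratio = t2 / t1   # Kepler III predicts (2/1)**1.5, about 2.828
```

Nothing about Kepler's laws is assumed by the integrator; the regularity falls out of the force law, which is what "derivability" amounts to here.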

Generativity is not a feature of theories; instead, it is an ontological feature of the world. Physicalism is one such generative thesis: it maintains that facts about the physical body, including the nervous system, jointly generate all mental phenomena. Generativity involves the idea that, taking the full reality of the properties and powers of B, the properties of A result. The properties of the entities at level B suffice to generate all the properties of the entities at level A. But there is no assurance that our current knowledge about B permits a mathematical derivation of A. Further, there is no assurance that a "full and complete theory" of B would permit such a derivation -- because there is no assurance that such a theory exists at all. And then there is the issue of computability: it may be radically infeasible to perform the calculations necessary to derive A from B.

And so it is clear that reducibility does not follow from generativeness.

There is a second argument separating generativeness from reducibility as well. This is the fact that there are numerous scientific purposes for which reduction is unnecessary even if it were feasible. It might be possible to derive the motion of a cannonball from a calculation of the motions of its component molecules. But this would be silly. We have no scientific interest or need in doing so.

So it is fully consistent for us to take the position of generativeness and anti-reductionism. And this position makes very good sense in the case of macro and micro social facts. We can take the view that all social entities are embodied in facts about various individuals, their social interactions, and their states of mind. This implies that social facts are generated by facts at the actor level, or that the facts of A supervene upon the facts of B. And yet we can also be emphatic in affirming that there is no need for, and no general possibility of, reduction from the one level to the other.

Or in other words, the generativeness of the situation is wholly uninformative about the prospects for reduction.

Saturday, January 2, 2016

Is the mind/body problem relevant to social science?


Is solving the mind-body problem crucial to providing a satisfactory sociological theory?

No, it isn't, in my opinion. But Alex Wendt thinks otherwise in Quantum Mind and Social Science: Unifying Physical and Social Ontology. In fact, he thinks a solution to the mind-body problem is crucial to a coherent social science. Which is to say, in Wendt's words:
Some of the deepest philosophical controversies in the social sciences are just local manifestations of the mind–body problem. So if the theory of quantum consciousness can solve that problem then it may solve fundamental problems of social science as well. (5)
Why so? There are two core problems in the philosophy of mind that Wendt thinks are unavoidable and must be confronted by the social sciences. The first is the problem of consciousness and intentionality; the second is the problem of freedom of the will. How is it possible for a physical, material system (a computer, a brain, a vacuum cleaner) to possess any of these mental properties?

Experts refer to the "hard problem" in the philosophy of mind. We might also call this the discontinuity problem: the unavoidable necessity of a radical break between a non-conscious substrate and a conscious superstrate. How is it possible for an amalgamation of inherently non-conscious things (neurons, transistors, routines in an AI software package) to create an ensemble that possesses consciousness? Isn't this as mysterious as imagining a world in which matter is composed of photons, where the constituents lack mass and the ensemble possesses mass? In such a case we would get mass out of non-mass; in the case of consciousness we get consciousness out of non-consciousness. "Pan-massism" would be a solution: all things, from stars to boulders to tables and chairs to subatomic components, possess mass.

But physicalist philosophers of mind are not persuaded by the discontinuity argument. As we have noted many times in this place, there are abundant examples of properties that are emergent in a non-spooky way. It simply is not the case that the sciences need to proceed in a Cartesian, foundationalist fashion. We do not need to reduce each level of the world to the workings of a lower level of things and processes.

Consider a parallel problem: is solving the question of the fundamental mechanisms of quantum mechanics crucial for understanding chemistry and the material properties of medium-scale objects? Here it seems evident that we can't require this level of ontological continuity from micro to macro -- in fact, there may be reasons for believing the task cannot be carried out in principle. (See the earlier post on the question of whether chemistry supervenes upon quantum theory; link.)

Here is the solution to the mind-body problem that Wendt favors: panpsychism. Panpsychism is the notion that consciousness is a characteristic of the world all the way down -- from human beings to sub-atomic particles.
Panpsychism takes a known effect at the macroscopic level–that we are conscious–and scales it downward to the sub-atomic level, meaning that matter is intrinsically minded. (30) 
Exploiting this possibility, quantum consciousness theorists have identified mechanisms in the brain that might allow this sub-atomic proto-consciousness to be amplified to the macroscopic level. (5)
Quantum consciousness theory builds on these intuitions by combining two propositions: (1) the physical claim of quantum brain theory that the brain is capable of sustaining coherent quantum states ( Chapter 5 ), and (2) the metaphysical claim of panpsychism that consciousness inheres in the very structure of matter ( Chapter 6 ). (92)
Panpsychism strikes me as an extravagant and unhelpful theoretical approach, however. Why should we attempt to analyze "Robert is planning to embarrass the prime minister" into a vast ensemble of psychic bits associated with the sub-atomic particles of his body? How does it even make sense to imagine a "sub-atomic bit of consciousness"? And how does the postulation of sub-atomic characteristics of consciousness give us any advantage in understanding ordinary human consciousness, deliberation, and intentionality?

Another supposedly important issue in the domain of the mind-body problem is the problem of freedom of the will. As ordinary human beings in the world we work on the assumption that individuals make intentional choices among feasible alternatives; their behavior is not causally determined by any set of background conditions. But if individuals are composed of physically deterministic parts (classical physics) then how is it possible for the organism to be "free"? And equally, if individuals are composed of physically indeterministic parts (probabilistic sub-particles) then how is it possible for the organism to be intentional (since chance doesn't produce intentionality)? So neither classical physics nor quantum physics seems to leave room for intentional free choice among alternatives.

Consider the route of the Roomba robotic vacuum cleaner through the cluttered living room (link): its course may appear either random or strategic, but in fact it is neither. Instead, the Roomba's algorithms dictate the turns and trajectories that the device takes in either an unobstructed run or an obstructed run. The behavior of the Roomba is determined by its algorithms and the inputs of its sensors; there is no room for freedom of choice in the Roomba. How can it be different for a dog or a human being, given that we too are composed of algorithmic computing systems?
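A toy version makes the point concrete. The sketch below is purely illustrative -- iRobot's actual navigation algorithms are proprietary and far more sophisticated -- but it implements a fixed bump-and-turn rule on a grid, and the resulting trajectory, however meandering it looks, is fully determined by algorithm plus sensor input: identical inputs always produce the identical path.

```python
def roomba_path(grid, start, heading, steps):
    """Trace a toy bump-and-turn vacuum over a grid of 0 (open) and 1 (obstacle).

    The rule is fixed: move forward; on sensing an obstacle or wall, turn
    right and try again. There is no randomness and no choice anywhere."""
    # headings: 0 = north, 1 = east, 2 = south, 3 = west
    moves = [(-1, 0), (0, 1), (1, 0), (0, -1)]
    r, c = start
    path = [(r, c)]
    for _ in range(steps):
        for _turn in range(4):                      # try up to four headings
            dr, dc = moves[heading]
            nr, nc = r + dr, c + dc
            in_bounds = 0 <= nr < len(grid) and 0 <= nc < len(grid[0])
            if in_bounds and grid[nr][nc] == 0:     # open floor: advance
                r, c = nr, nc
                break
            heading = (heading + 1) % 4             # "bump" sensed: turn right
        path.append((r, c))
    return path

room = [[0, 0, 0],
        [0, 1, 0],   # one obstacle in the middle of the room
        [0, 0, 0]]
run1 = roomba_path(room, start=(0, 0), heading=1, steps=8)
run2 = roomba_path(room, start=(0, 0), heading=1, steps=8)
# Determinism: run1 and run2 are identical trajectories.
```

The question in the text is whether the dog or the human differs from this in kind, or only in the complexity of the algorithm and its inputs.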


Social theory presupposes intentional actors; but our current theories of neuroscience don't permit us to explain how intentionality, consciousness, and freedom are possible. So don't we need to solve the problem of freedom of the will before we can construct valid sociological theories that depend upon conscious, intentional and free actors?

Again, my answer is negative. It is an interesting question, to be sure, how freedom, consciousness, and intentionality can emerge from the wetware of the brain. But it is not necessary to solve this problem before we proceed with social science. Instead, we can begin with phenomenological truisms: we are conscious, we are intentional, and we are (in a variety of conditioned senses) free. How the organism achieves these higher-level capabilities is intriguing to study; but we don't have to premise our sociological theories on any particular answer to this question.

So the position I want to take here is that we don't have to solve the mysteries of quantum mechanics in order to understand social processes and social causation. We can bracket the metaphysics of the quantum world -- much as the Copenhagen interpretation sought to do -- without abandoning the goal of providing a good explanation of aspects of the social world and social actors. Wendt doesn't like this approach:
Notwithstanding its attractions to some, this refusal to deal with ontological issues also underlies the main objection to the Copenhagen approach: that it is essentially incomplete. (75)
But why is incompleteness a problem for the higher-level science (psychology or sociology, for example)? Why are we not better served by a kind of middle-level theory of human action and the social world, a special science, that refrains altogether from the impulse of reductionism? This middle-level approach would certainly leave open the research question of how various capabilities of the conscious, intentional organism are embodied in neurophysiology. But it would not require providing such an account in order to validate the human-level or social-level theory.


Sunday, November 22, 2015

Are emergence and microfoundations contraries?

image: micro-structure of a nanomaterial (link)

Are there strong logical relationships among the ideas of emergence, microfoundations, generative dependency, and supervenience? It appears that there are.


The diagram represents the social world as a laminated set of layers of entities, processes, powers, and laws. Entities at L2 are composed of or caused by some set of entities and forces at L1. Likewise L3 and L4. Arrows indicate microfoundations for L2 facts based on L1 facts. Diamond-tipped arrows indicate the relation of generative dependence from one level to another. Square-tipped lines indicate the presence of strongly emergent facts at the higher level relative to the lower level. The solid line (L4) represents the possibility of a level of social fact that is not generatively dependent upon lower levels. The vertical ellipse at the right indicates the possibility of microfoundations narratives involving elements at different levels of the social world (individual and organizational, for example).

We might think of these levels as "individuals," "organizations, value communities, social networks," "large aggregate institutions like states," etc.

This is only one way of trying to represent the structure of the social world. The notion of a "flat" ontology was considered in an earlier post (link). Another structure that is excluded by this diagram is one in which there is multi-directional causation across levels, both upwards and downwards. For example, the diagram excludes the possibility that L3 entities have causal powers that are original and independent from the powers of L2 or L1 entities. The laminated view described here is the assumption built into debates about microfoundations, supervenience, and emergence. It reflects the language of micro, meso, and macro levels of social action and organization.

Here are definitions for several of the primary concepts.
  • Microfoundations of facts in L2 based on facts in L1: accounts of the causal pathways through which entities, processes, powers, and laws of L1 bring about specific outcomes in L2. Microfoundations are small causal theories linking lower-level entities to higher-level outcomes.
  • Generative dependence of L2 upon L1: the entities, processes, powers, and laws of L2 are generated by the properties of level L1 and nothing else. Alternatively, the entities, processes, powers, and laws of L1 suffice to generate all the properties of L2. A full theory of L1 suffices to derive the entities, processes, powers, and laws of L2.
  • Reducibility of y to x: it is possible to provide a theoretical or formal derivation of the properties of y based solely on facts about x.
  • Strong emergence of properties in L2 with respect to the properties of L1: L2 possesses some properties that do not depend wholly upon the properties of L1.
  • Weak emergence of properties in L2 with respect to the properties of L1: L2 possesses some properties for which we cannot (now or in the future) provide derivations based wholly upon the properties of L1.
  • Supervenience of L2 with respect to properties of L1: all the properties of L2 depend strictly upon the properties of L1 and nothing else.
We also can make an effort to define some of these concepts more formally in terms of the diagram.


Consider these statements about facts at levels L1 and L2:
  1. UM: all facts at L2 possess microfoundations at L1. 
  2. XM: some facts at L2 possess inferred but unknown microfoundations at L1. 
  3. SM: some facts at L2 do not possess any microfoundations at L1. 
  4. SE: L2 is strongly emergent from L1. 
  5. WE: L2 is weakly emergent from L1. 
  6. GD: L2 is generatively dependent upon L1. 
  7. R: L2 is reducible to L1. 
  8. D: L2 is determined by L1. 
  9. SS: L2 supervenes upon L1. 
Here are some of the logical relations that appear to exist among these statements.
  1. UM => GD 
  2. UM => ~SE 
  3. XM => WE 
  4. SE => ~UM 
  5. SE => ~GD 
  6. GD => R 
  7. GD => D 
  8. SM => SE 
  9. UM => SS 
  10. GD => SS 
On this analysis, the question of the availability of microfoundations for social facts can be understood to be central to all the other issues: reducibility, emergence, generativity, and supervenience. There are several positions that we can take with respect to the availability of microfoundations for higher-level social facts.
  1. If we have convincing reason to believe that all social facts possess microfoundations at a lower level (known or unknown) then we know that the social world supervenes upon the micro-level; strong emergence is ruled out; weak emergence is true only so long as some microfoundations remain unknown; and higher-level social facts are generatively dependent upon the micro-level.   
  2. If we take a pragmatic view of the social sciences and conclude that any given stage of knowledge provides information about only a subset of possible microfoundations for higher-level facts, then we are at liberty to take the view that each level of social ontology is at least weakly emergent from lower levels -- basically, the point of view advocated under the banner of "relative explanatory autonomy" (link). This also appears to be roughly the position taken by Herbert Simon (link). 
  3. If we believe that it is impossible in principle to fully specify the microfoundations of all social facts, then weak emergence is true; supervenience is false; and generativity is false. (For example, we might believe this to be true because of the difficulty of modeling and calculating a sufficiently large and complex domain of units.) This is the situation that Fodor believes to be the case for many of the special sciences. 
  4. If we have reason to believe that some higher-level facts simply do not possess microfoundations at a lower level, then strong emergence is true; the social world is not generatively dependent upon the micro-world; and the social world does not supervene upon the micro-world. 
In other words, it appears that each of the concepts of supervenience, reduction, emergence, and generative dependence can be defined in terms of the availability or unavailability of microfoundations for some or all of the facts at a higher level based on facts at the lower level. Strong emergence and generative dependence turn out to be logical contraries (witness the final two definitions above).
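These relations can be checked mechanically. Below is a minimal propositional sketch (the labels and implications are taken from the two lists above; the encoding itself is mine) that enumerates all truth assignments consistent with implications 1-10 and confirms that SE and GD are contraries but not contradictories: no admissible assignment makes both true, though both can be false together.

```python
from itertools import product

props = ["UM", "XM", "SM", "SE", "WE", "GD", "R", "D", "SS"]

# The ten implications listed above, encoded as constraints on a
# truth assignment v (an implication P => Q holds iff ~P or Q).
axioms = [
    lambda v: (not v["UM"]) or v["GD"],        # 1. UM => GD
    lambda v: (not v["UM"]) or (not v["SE"]),  # 2. UM => ~SE
    lambda v: (not v["XM"]) or v["WE"],        # 3. XM => WE
    lambda v: (not v["SE"]) or (not v["UM"]),  # 4. SE => ~UM
    lambda v: (not v["SE"]) or (not v["GD"]),  # 5. SE => ~GD
    lambda v: (not v["GD"]) or v["R"],         # 6. GD => R
    lambda v: (not v["GD"]) or v["D"],         # 7. GD => D
    lambda v: (not v["SM"]) or v["SE"],        # 8. SM => SE
    lambda v: (not v["UM"]) or v["SS"],        # 9. UM => SS
    lambda v: (not v["GD"]) or v["SS"],        # 10. GD => SS
]

# Enumerate all 2^9 assignments; keep those satisfying every axiom.
models = [dict(zip(props, vals))
          for vals in product([True, False], repeat=len(props))
          if all(ax(dict(zip(props, vals))) for ax in axioms)]

# Contraries: SE and GD are never jointly true in any admissible model...
assert not any(v["SE"] and v["GD"] for v in models)
# ...but not contradictories: both can be false together.
assert any(not v["SE"] and not v["GD"] for v in models)
```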

Saturday, August 1, 2015

Microfoundations 2.0?


Figure. An orderly ontological hierarchy (University of Leeds; link)


Figure. Complex non-reductionist social outcome -- blight

The idea that hypotheses about social structures and forces require microfoundations has been around for at least 40 years. Maarten Janssen’s New Palgrave essay on microfoundations documents the history of the concept in economics; link. E. Roy Weintraub was among the first to emphasize the term within economics, with his 1979 Microfoundations: The Compatibility of Microeconomics and Macroeconomics. During the early 1980s the contributors to analytical Marxism used the idea to attempt to give greater grip to some of Marx's key explanations (falling rate of profit, industrial reserve army, tendency towards crisis). Several such strategies are represented in John Roemer's Analytical Marxism. My own The Scientific Marx (1986) and Varieties of Social Explanation (1991) took up the topic in detail and relied on it as a basic tenet of social research strategy. The concept is strongly compatible with Jon Elster's approach to social explanation in Nuts and Bolts for the Social Sciences (1989), though the term itself does not appear in this book or in the 2007 revised edition.

Here is Janssen's description in the New Palgrave of the idea of microfoundations in economics:
The quest to understand microfoundations is an effort to understand aggregate economic phenomena in terms of the behavior of individual economic entities and their interactions. These interactions can involve both market and non-market interactions.  
In The Scientific Marx the idea was formulated along these lines:
Marxist social scientists have recently argued, however, that macro-explanations stand in need of microfoundations; detailed accounts of the pathways by which macro-level social patterns come about. (1986: 127)
The requirement of microfoundations is both metaphysical -- our statements about the social world need to admit of microfoundations -- and methodological -- it suggests a research strategy along the lines of Coleman's boat (link). This is a strategy of disaggregation, a "dissecting" strategy, and a non-threatening strategy of reduction. (I am thinking here of the very sensible ideas about the scientific status of reduction advanced in William Wimsatt's "Reductive Explanation: A Functional Account"; link).

The emphasis on the need for microfoundations is a very logical implication of the position of "ontological individualism" -- the idea that social entities and powers depend upon facts about individual actors in social interactions and nothing else. (My own version of this idea is the notion of methodological localism; link.) It is unsupportable to postulate disembodied social entities, powers, or properties for which we cannot imagine an individual-level substrate. So it is natural to infer that claims about social entities need to be accompanied in some fashion by an account of how they are embodied at the individual level; and this is a call for microfoundations. (As noted in an earlier post, Brian Epstein has mounted a very challenging argument against ontological individualism; link.)

Another reason that the microfoundations idea is appealing is that it is a very natural way of formulating a core scientific question about the social world: "How does it work?" To provide microfoundations for a high-level social process or structure (for example, the falling rate of profit), we are looking for a set of mechanisms at the level of a set of actors within a set of social arrangements that result in the observed social-level fact. A call for microfoundations is a call for mechanisms at a lower level, answering the question, "How does this process work?"

In fact, the demand for microfoundations appears to be analogous to the question, why is glass transparent? We want to know what it is about the substrate at the individual level that constitutes the macro-fact of glass transmitting light. Organization type A is prone to normal accidents. What is it about the circumstances and actions of individuals in A-organizations that increases the likelihood of normal accidents?

One reason why the microfoundations concept was specifically appealing in application to Marx's social theories in the 1970s was the fact that great advances were being made in the field of collective action theory. Then-current interpretations of Marx's theories were couched at a highly structural level; but it seemed clear that it was necessary to identify the processes through which class interest, class conflict, ideologies, or states emerged in concrete terms at the individual level. (This is one reason I found E. P. Thompson's The Making of the English Working Class (1966) so enlightening.) Advances in game theory (assurance games, prisoners' dilemmas), Mancur Olson's demonstration of the gap between group interest and individual interest in The Logic of Collective Action: Public Goods and the Theory of Groups (1965), Thomas Schelling's brilliant unpacking of puzzling collective behavior onto underlying individual behavior in Micromotives and Macrobehavior (1978), Russell Hardin's further exposition of collective action problems in Collective Action (1982), and Robert Axelrod's discovery of the underlying individual behaviors that produce cooperation in The Evolution of Cooperation (1984) provided social scientists with new tools for reconstructing complex collective phenomena based on simple assumptions about individual actors. These were very concrete analytical resources that promised to further the explanation of complex social behavior. They provided a degree of confidence that important sociological questions could be addressed using a microfoundations framework.

There are several important recent challenges to aspects of the microfoundations approach, however.

First, there is the idea that social properties are sometimes emergent in a strong sense: not derivable from facts about the components. This would seem to imply that microfoundations are not possible for such properties.

Second, there is the idea that some meso entities have stable causal properties that do not require explicit microfoundations in order to be scientifically useful. (An example would be Perrow's claim that certain forms of organizations are more conducive to normal accidents than others.) If we take this idea very seriously, then perhaps microfoundations are not crucial in such theories.

Third, there is the idea that meso entities may sometimes exert downward causation: they may influence events in the substrate which in turn influence other meso states, implying that there will be some meso-level outcomes for which there cannot be microfoundations exclusively located at the substrate level.

All of this implies that we need to take a fresh look at the theory of microfoundations. Is there a role for this concept in a research metaphysics in which only a very weak version of ontological individualism is postulated; where we give some degree of autonomy to meso-level causes; where we countenance either a weak or strong claim of emergence; and where we admit of full downward causation from some meso-level structures to patterns of individual behavior?

In one sense my own thinking about microfoundations has already incorporated some of these concerns; I've arrived at "microfoundations 1.1" in my own formulations. In particular, I have put aside the idea that explanations must incorporate microfoundations and instead embraced the weaker requirement of availability of microfoundations (link). Essentially I relaxed the requirement to stipulate only that we must be confident that microfoundations exist, without actually producing them. And I've relied on the idea of "relative explanatory autonomy" to excuse the sociologist from the need to reproduce the microfoundations underlying the claim he or she advances (link).

But is this enough? There are weaker positions that could serve to replace the MF thesis. For now, the question is this: does the concept of microfoundations continue to do important work in the meta-theory of the social sciences?

Sunday, July 26, 2015

Is chemistry supervenient upon physics?


Many philosophers of science and physicists take it for granted that "physics" determines "chemistry". Or in terms of the theory of supervenience, it is commonly supposed that the domain of chemistry supervenes upon the domain of fundamental physics. This is the thesis of physicalism: the idea that all causation ultimately depends on the causal powers of the phenomena described by fundamental physics.

R. F. Hendry takes up this issue in his contribution to Davis Baird, Eric Scerri, and Lee McIntyre's very interesting volume, Philosophy of Chemistry. Hendry takes the position that this relation of supervenience does not obtain; chemistry does not supervene upon fundamental physics.

Hendry points out that the dependence claim depends crucially on two things: what aspects of physics are to be considered? And second, what kind of dependency do we have in mind between higher and lower levels? For the first question, he proposes that we think about fundamental physics -- quantum mechanics and relativity theory (174). For the second question, he enumerates several different kinds of dependency: supervenience, realization, token identity, reducibility, and derivability (175). In discussing the macro-property of transparency in glass, he cites Jaegwon Kim in maintaining that transparency in glass is "nothing more" than the features of the microstructure of glass that permit it to transmit light. But here is a crucial qualification:
But as Kim admits, this last implication only follows if it is accepted that “the microstructure of a system determines its causal/nomic properties” (283), for the functional role is specified causally, and so the realizer’s realizing the functional property that it does (i.e., the realizer–role relation itself) depends on how things in fact go in a particular kind of system. For a microstructure to determine the possession of a functional property, it must completely determine the causal/nomic properties of that system. (175)
Hendry argues that the key issue underlying claims of dependence of B upon A is whether there is downward causation from the level of chemistry (B) to the physical level (A); or, on the contrary, is physics "causally complete". If the causal properties of the higher level are fully fixed by the causal properties of the underlying level, then supervenience is possible; but if the higher level has causal properties that permit influence on the lower level, then supervenience is not possible.

In order to gain insight into the specific issues arising concerning chemistry and physics, Hendry makes use of the "emergentist" thinking associated with C.D. Broad. He finds that Broad offers convincing arguments against "Pure Mechanism", the view that all material things are determined by the micro-physical level (177). Here are Broad's two contrasting possibilities for understanding the relations between higher levels and the physical micro-level:
(i) On the first form of the theory the characteristic behavior of the whole could not, even in theory, be deduced from the most complete knowledge of the behavior of its components, taken separately or in other combinations, and of their proportions and arrangements in this whole . . .
(ii) On the second form of the theory the characteristic behavior of the whole is not only completely determined by the nature and arrangements of its components; in addition to this it is held that the behavior of the whole could, in theory at least, be deduced from a sufficient knowledge of how the components behave in isolation or in other wholes of a simpler kind (1925, 59). [Hendry, 178]
The first formulation describes "emergence", whereas the second is "mechanism". In order to give more contemporary expression to the two views Hendry introduces the key concept of quantum chemistry, the Hamiltonian for a molecule. A Hamiltonian is an operator describing the total energy of a system. A "resultant" Hamiltonian is the operator that results from identifying and summing up all forces within a system; a configurational Hamiltonian is one that has been observationally adjusted to represent the observed energies of the system. The first version is "fundamental", whereas the second version is descriptive.
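Hendry's distinction can be made concrete with the standard molecular Coulomb Hamiltonian. The following is a textbook sketch (in Gaussian units; the notation is mine, not Hendry's). The "resultant" Hamiltonian treats every particle dynamically; the "configurational" (clamped-nucleus, Born-Oppenheimer) Hamiltonian fixes the nuclear positions by an assumed molecular geometry -- which is precisely the imported chemical information Hendry points to.

```latex
% "Resultant" Hamiltonian: kinetic energy of electrons (i, j) and
% nuclei (A, B), plus all pairwise Coulomb interactions.
\hat{H} = -\sum_i \frac{\hbar^2}{2m_e}\nabla_i^2
          -\sum_A \frac{\hbar^2}{2M_A}\nabla_A^2
          +\sum_{i<j}\frac{e^2}{r_{ij}}
          -\sum_{i,A}\frac{Z_A e^2}{r_{iA}}
          +\sum_{A<B}\frac{Z_A Z_B e^2}{R_{AB}}

% "Configurational" Hamiltonian: the nuclear positions are clamped at an
% assumed geometry R, so the nuclear kinetic term drops out and the
% internuclear distances R_{AB} become fixed parameters, not dynamical
% variables.
\hat{H}_{\mathrm{conf}}(R) = -\sum_i \frac{\hbar^2}{2m_e}\nabla_i^2
          +\sum_{i<j}\frac{e^2}{r_{ij}}
          -\sum_{i,A}\frac{Z_A e^2}{r_{iA}}
          +\sum_{A<B}\frac{Z_A Z_B e^2}{R_{AB}}
```

The assumed geometry R is exactly the point at issue: it is supplied from chemistry, not derived from the resultant Hamiltonian.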

Now we can pose the question of whether chemistry (behavior of molecules) is fixed by the resultant Hamiltonian for the components of the atoms involved (electrons, protons, neutrons) and the forces that they exert on each other. Or, on the other hand, does quantum chemistry achieve its goals by arriving at configurational Hamiltonians for molecules, and deriving properties from these descriptive operators? Hendry finds that the latter is the case for existing derivations; and this means that quantum chemistry (as it is currently practiced) does not derive chemical properties from fundamental quantum theory. Moreover, the configuration of the Hamiltonians used requires abstractive description of the hypothesized geometry of the molecule and the assumption of the relatively slow motion of the nucleus. But this is information at the level of chemistry, not fundamental physics. And it implies downward causation from the level of chemical structure to the level of fundamental physics.
Furthermore, to the extent that the behavior of any subsystem is affected by the supersystems in which it participates, the emergent behavior of complex systems must be viewed as determining, but not being fully determined by, the behavior of their constituent parts. And that is downward causation. (180)
So chemistry does not derive from fundamental physics. Here is Hendry's conclusion, supporting pluralism and anti-reductionism in the case of chemistry and physics:
On the other hand is the pluralist version, in which physical law does not fully determine the behavior of the kinds of systems studied by the special sciences. On this view, although the very abstractness of the physical theories seems to indicate that they could, in principle, be regarded as applying to special science systems, their applicability is either trivial (and correspondingly uninformative), or if non-trivial, the nature of scientific inquiry is such that there is no particular reason to expect the relevant applications to be accurate in their predictions.... The burden of my argument has been that strict physicalism fails, because it misrepresents the details of physical explanation (187)
Hendry's argument has a lot in common with Herbert Simon's arguments about system complexity (link) and with Nancy Cartwright's arguments about the limitations of (real) physics' capability of representing and calculating the behavior of complex physical systems based on first principles (link). In each case we get a pragmatic argument against reductionism, and a weakened basis for assuming a strict supervenience relation between higher-level structures and a limited set of supposedly fundamental building blocks. What is striking is that Hendry's arguments undercut the reductionist impulse at what looks like its most persuasive juncture -- the relationship between quantum physics and quantum chemistry.


Monday, June 29, 2015

Quantum mental processes?


One of the pleasant aspects of a long career in philosophy is the occasional experience of a genuinely novel approach to familiar problems. Sometimes one's reaction is skeptical at first -- "that's a crazy idea!". And sometimes the approach turns out to have genuine promise. I've had that experience of moving from profound doubt to appreciation several times over the years, and it is an uplifting learning experience. (Most recently, I've made that progression with respect to some of the ideas of assemblage and actor-network theory advanced by thinkers such as Bruno Latour; link, link.)

I'm having that experience of unexpected dissonance as I begin to read Alexander Wendt's Quantum Mind and Social Science: Unifying Physical and Social Ontology. Wendt's book addresses many of the issues with which philosophers of social science have grappled for decades. But Wendt suggests a fundamental switch in the way that we think of the relation between the human sciences and the natural world. He suggests that an emerging paradigm of research on consciousness, advanced by Giuseppe Vitiello, John Eccles, Roger Penrose, Henry Stapp, and others, may have important implications for our understanding of the social world as well. This is the field of "quantum neuropsychology" -- the body of theory that maintains that puzzles surrounding the mind-body problem may be resolved by examining the workings of quantum behavior in the central nervous system. I'm not sure which category to put the idea of quantum consciousness in yet, but it's interesting enough to pursue further.

The familiar problem in this case is the relation between the mental and the physical. Like all physicalists, I work on the assumption that mental phenomena are embodied in the physical infrastructure of the central nervous system, and that the central nervous system works according to familiar principles of electrochemistry. Thought and consciousness are somehow the "emergent" result of the workings of the complex physical structure of the brain (in a safe and bounded sense of emergence). The novel approach is the idea that somehow quantum physics may play a strikingly different role in this topic than ever had been imagined. Theorists in the field of quantum consciousness speculate that perhaps the peculiar characteristics of quantum events at the sub-atomic level (e.g. quantum randomness, complementarity, entanglement) are close enough to the action of neural networks that they serve to give a neural structure radically different properties from those expected by a classical-physics view of the brain. (This idea isn't precisely new; when I was an undergraduate in the 1960s it was sometimes speculated that freedom of the will was possible because of the indeterminacy created by quantum physics. But this wasn't a very compelling idea.)

Wendt's further contribution is to immerse himself in some of this work, and then to formulate the question of how these perspectives on intentionality and mentality might affect key topics in the philosophy of society. For example, how do the longstanding concepts of structure and agency look when we begin with a quantum perspective on mental activity?

A good place to start in preparing to read Wendt's book is Harald Atmanspacher's excellent article in the Stanford Encyclopedia of Philosophy (link). Atmanspacher organizes his treatment into three large areas of application of quantum physics to the problem of consciousness: metaphorical applications of the concepts of quantum physics; applications of the current state of knowledge in quantum physics; and applications of possible future advances in knowledge in quantum physics.
Among these [status quo] approaches, the one with the longest history was initiated by von Neumann in the 1930s.... It can be roughly characterized as the proposal to consider intentional conscious acts as intrinsically correlated with physical state reductions. (13)
A physical state reduction is the event that occurs when a quantum probability field resolves into a discrete particle or event upon having been measured. Some theorists (e.g. Henry Stapp) speculate that conscious human intention may influence the physical state reduction -- thus a "mental" event causes a "physical" event. And some process along these lines is applied to the "activation" of a neuronal assembly:
The activation of a neuronal assembly is necessary to make the encoded content consciously accessible. This activation is considered to be initiated by external stimuli. Unless the assembly is activated, its content remains unconscious, unaccessed memory. (20)
Also of interest in Atmanspacher's account is the idea of emergence: are mental phenomena emergent from physical phenomena, and in what sense? Atmanspacher specifies a clear but strong definition of emergence, and considers whether mental phenomena are emergent in this sense:
Mental states and/or properties can be considered as emergent if the material brain is not necessary or not sufficient to explore and understand them. (6)
This is a strong conception in a very specific way; it specifies that material facts are not sufficient to explain "emergent" mental properties. This implies that we need to know some additional facts beyond facts about the material brain in order to explain mental states; and it is natural to ask what the nature of those additional facts might be.

The reason this collection of ideas is initially shocking to me is the difference in scale between the sub-atomic level and macro-scale entities and events. There is something spooky about postulating causal links across that range of scales. It would be wholly crazy to speculate that we need to invoke the mathematics and theories of quantum physics to explain billiards. It is pretty well agreed by physicists that quantum mechanics reduces to Newtonian physics at this scale. Even though the component pieces of a billiard ball are quantum entities with peculiar properties, as an ensemble of 10^25 of these particles the behavior of the ball is safely classical. The peculiarities of the quantum level wash out for systems with multiple Avogadro's numbers of particles through the reliable workings of statistical mechanics. And the intuitions of most people comfortable with physics would lead them to assume that neurons are subject to the same independence; the scale of activity of a neuron (both spatial and temporal) is orders of magnitude too large to reflect quantum effects. (Sorry, Schrödinger's cat!)

In a Science magazine article, "Cold Numbers Unmake the Quantum Mind" (link), Charles Seife reports a set of fundamental physical computations conducted by Max Tegmark that are intended to demonstrate this point. Tegmark's analysis focuses on the speculations offered by Penrose and others on the possible quantum behavior of "microtubules." Tegmark purports to demonstrate that the time and space scales of quantum effects are too short by orders of magnitude to account for the neural mechanisms that can be observed (link). Here is Tegmark's abstract:
Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. We find that the decoherence time scales (∼10^−13–10^−20s) are typically much shorter than the relevant dynamical time scales (∼10^−3–10^−1s), both for regular neuron firing and for kinklike polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way. (link)
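The force of Tegmark's argument is essentially arithmetic on the quoted time scales. A quick sketch, taking the values directly from the abstract above:

```python
import math

# Time scales quoted in Tegmark's abstract: decoherence times of
# ~1e-13 to 1e-20 s versus relevant dynamical times of ~1e-3 to 1e-1 s.
slowest_decoherence = 1e-13  # seconds: the most favorable case for coherence
fastest_dynamics = 1e-3      # seconds: the fastest relevant neural process

gap_in_orders_of_magnitude = math.log10(fastest_dynamics / slowest_decoherence)
# Even on the numbers most favorable to quantum consciousness, coherence
# decays about ten orders of magnitude faster than neural dynamics unfold.
```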
I am grateful to Atmanspacher for providing such a clear and logical presentation of some of the main ideas of quantum consciousness; but I continue to find myself sceptical. There is a risk in this field of succumbing to the temptation of unbounded speculation: "Maybe if X's could influence Y's, then we could explain Z" without any knowledge of how X, Y, and Z are related through causal pathways. And the field seems sometimes to be prey to this impulse: "If quantum events were partially mental, then perhaps mental events could influence quantum states (and from there influence macro-scale effects)."

In an upcoming post I'll look closely at what Alex Wendt makes of this body of theory in application to the level of social behavior and structure.

Saturday, October 5, 2013

Issues about microfoundations


I believe that hypotheses, theories, and explanations in the social sciences need to be subject to the requirement of microfoundationalism. This requirement can be understood in a weak and a strong version, and sometimes people understand the idea as a requirement of reductionism. In brief, I defend the position in a weak form that does not imply a reductionist theory of social explanation. Recent discussions with Julie Zahle have led me to sharpen my understanding of the requirement of microfoundations in social theorizing and explanation. Here I would like to clarify my own thinking about the role and scope of the principle of microfoundationalism.

A microfoundation is something like this: an account of the mechanisms at the individual actor level (and perhaps at levels intermediate between actors and the current level -- e.g. institutions) that work to create the structural and causal properties that we observe at the meso or macro level. A fully specified microfoundational account of a meso-level feature consists of an account that traces out (1) the features of the actors and (2) the characteristics of the action environment (including norms and institutions) which jointly lead to (3) the social pattern or causal power we are interested in. A microfoundation specifies the individual-level mechanisms that lead to the macro- or meso-level social fact. This is the kind of account that Thomas Schelling illustrates so well in Micromotives and Macrobehavior.
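As a miniature illustration of what such an account looks like, here is a Schelling-style segregation sketch in the spirit of Micromotives and Macrobehavior (the grid size, thresholds, and update rule are my own toy choices, not Schelling's original model): a mild individual-level preference about neighbors (1), in a simple action environment (2), generates a macro-level pattern of segregation (3).

```python
import random

random.seed(0)
N = 20            # grid side length
EMPTY_FRAC = 0.1  # fraction of vacant cells
THRESHOLD = 0.4   # an agent wants at least 40% like-typed neighbors

def neighbors(grid, r, c):
    """Occupants of the 8 surrounding cells (toroidal wrap)."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            occ = grid[(r + dr) % N][(c + dc) % N]
            if occ is not None:
                out.append(occ)
    return out

def like_fraction(grid, r, c):
    """Fraction of an agent's neighbors who share its type."""
    nb = neighbors(grid, r, c)
    if not nb:
        return 1.0
    return sum(1 for x in nb if x == grid[r][c]) / len(nb)

def segregation(grid):
    """Macro-level measure: average same-type neighbor fraction."""
    fracs = [like_fraction(grid, r, c)
             for r in range(N) for c in range(N)
             if grid[r][c] is not None]
    return sum(fracs) / len(fracs)

# Random initial placement of two types of agents, 'A' and 'B'.
grid = [[None if random.random() < EMPTY_FRAC else random.choice("AB")
         for _ in range(N)] for _ in range(N)]

initial = segregation(grid)
for _ in range(50):  # sweeps: each unhappy agent moves to a vacant cell
    movers = [(r, c) for r in range(N) for c in range(N)
              if grid[r][c] is not None
              and like_fraction(grid, r, c) < THRESHOLD]
    empties = [(r, c) for r in range(N) for c in range(N)
               if grid[r][c] is None]
    for (r, c) in movers:
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))
final = segregation(grid)
# Mild individual preferences typically produce marked aggregate segregation.
```

The point of the sketch is the explanatory direction: nothing about "segregation" appears in the individual-level rule, yet the macro-level pattern is fully accounted for by it.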

My thinking about the need for microfoundations has changed over the years from a more narrow requirement ("we need to have a pretty good idea of what the individual-level mechanisms are for a macro-property") to a less restrictive requirement ("we need to have reason to believe that there are individual-level mechanisms for a macro-property"). In The Scientific Marx I liked the idea of “aggregative explanations”, which are really explanations that move from features of individual actors and their interactions, to derivations about social and collective behavior. In Varieties of Social Explanation I relaxed the idea along these lines:

This doctrine [of microfoundationalism] may be put in both a weak and a strong version. Weakly, social explanations must be compatible with the existence of microfoundations of the postulated social regularities, which may, however, be entirely unknown. More strongly, social explanations must be explicitly grounded on an account of the microfoundations that produce them. I will argue for an intermediate form—that we must have at least an approximate idea of the underlying mechanisms at the individual level if we are to have a credible hypothesis about explanatory social regularities at all. A putative explanation couched in terms of high-level social factors whose underlying individual-level mechanisms are entirely unknown is no explanation at all. (kl 4746)

My adherence to microfoundationalism today is a little bit weaker still. I now advocate a version of microfoundationalism that specifies only that we must be confident (an epistemic concept) that such micro-to-macro relations exist. We must be confident there are such mechanisms but not obliged to specify them. (I also hold that the best ground for having that confidence is being able to gesture plausibly towards roughly how they might work.) Another way to put it is this requirement: "No magical thinking!" That is, we exclude explanations that would only be possible if we assumed action at a distance, blocks of wood that have complicated mental lives, or intelligent beings with infinite computational faculties. A convincing way of discrediting a meso-level assertion is to give an argument that it is unlikely that real human agents would in fact act in ways that lead to this meso-level situation. (Example: Chinese planners who created the collective farming system in the Great Leap Forward assumed that collective farms would be highly productive because a "new socialist man" would emerge. This was unlikely, and therefore the individual behavior to be expected on collective farms would lead to "easy riding" and low productivity.)

Here is an effort to simplify these issues into a series of assertions:

  1. All social forces, powers, structures, processes, and laws (social features) are ultimately constituted by mechanisms at the level of individual actors. (ontological principle)
  2. When we assert the reality or causal powers of a social entity, we need to be confident that there are microfoundations that cause this social entity to have the properties we attribute to it. (microfoundations principle)
    1. A description of the microfoundations of a social entity S is an account of the circumstances and individual mechanisms that bring about patterns of individual activity resulting in the properties of S.
    2. Strong version: we must provide a credible statement of the microfoundations.
    3. Intermediate version: we must have a back-of-envelope sketch of possible microfoundations.
    4. Weak version: we must have confidence that there are microfoundations, but we don’t have to have any specific ideas about what they are.
  3. A "vertical" social explanation of the features of a social entity S is a derivation of S from facts about the individual level. This is equivalent to providing a specification of the microfoundations of S; a derivation of the properties of S from a model of the action situation of the individuals involved; an agent-based model. This is what JZ calls an individualist explanation.
  4. A "horizontal" social explanation is one in which we explain a social entity or structure S by referring to the causal properties of other meso-level entities and conditions. This is what we call a meso-level explanation. (The diagram above illustrates these ideas.)
    1. Horizontal explanations are likewise subject to the microfoundations requirement 2: the entities and powers postulated need to be such that we have good reason to believe that there are microfoundations available for these entities and properties. (Epistemic requirement)
    2. Or slightly stronger: we need to be able to offer at least a plausible sketch of the microfoundations / individual-level mechanisms that would support the postulated entities. (Epistemic+ requirement)
  5. Providing or hypothesizing about microfoundations always involves modeling the behaviors and interactions of individuals; so it requires assuming a theory of the actor. So when we try to specify or hypothesize about microfoundations for something we are obliged to make use of some theory of the actor.
  6. Traditional theories of the actor are generally too abstract and too committed to a rational-choice model.
  7. Social scientists will be better able to hypothesize microfoundations when they have richer theories of the actor. (heuristic principle)

So the ontological principle is simply that social entities are wholly fixed by the properties and dynamics of the actions of the actors that constitute them. The requirement of microfoundations simply reproduces the ontological principle, ruling out ontologically impossible relations among social entities. The requirement of microfoundations is not a requirement on what an explanation needs to look like; rather, it is a requirement about certain beliefs we need to be justified in accepting when we advance a claim about social entities. It is what JZ calls a “confirmation” requirement (or perhaps better, a justificatory requirement). A better theory of the actor supports the discovery of microfoundations for social assertions. Further, it provides a richer "sociological imagination" for macro- and meso-level sociologists. So the requirement of microfoundations and the recommendation that social scientists seek out better theories of the actor are also valuable as heuristics for social research: they provide intellectual resources that help social researchers decide where to look for explanatory links, and what kinds of mechanisms might turn out to be relevant.

Wednesday, September 25, 2013

What is reduction?

[Screenshot of an animation of a three-body gravitational system]

The topics of methodological individualism and microfoundationalism unavoidably intersect with the idea of reductionism -- the notion that higher level entities and structures need somehow to be "reduced" to facts or properties having to do with lower level structures. In the social sciences, this amounts to something along these lines: the properties and dynamics of social entities need to be explained by the properties and interactions of the individuals who constitute them. Social facts need to reduce to a set of individual-level facts and laws. Similar positions arise in psychology ("psychological properties and dynamics need to reduce to facts about the activities and properties of the central nervous system") and biology ("complex biological systems like genes and cells need to reduce to the biochemistry of the interacting systems of molecules that make them up").

Reductionism has a bad flavor within much of philosophy, but it is worth dwelling on the concept a bit more fully.

Why would the strategy of reduction be appealing within a scientific research tradition? Here is one reason: there is evident explanatory gain that results from showing how the complex properties and functionings of a higher-level entity are the result of the properties and interactions of its lower level constituents. This kind of demonstration serves to explain the upper level system's properties in terms of the entities that make it up. This is the rationale for Peter Hedstrom's metaphor of "dissecting the social" (Dissecting the Social: On the Principles of Analytical Sociology); in his words,

To dissect, as the term is used here, is to decompose a complex totality into its constituent entities and activities and then to bring into focus what is believed to be its most essential elements. (kl 76)

Aggregate or macro-level patterns usually say surprisingly little about why we observe particular aggregate patterns, and our explanations must therefore focus on the micro-level processes that brought them about. (kl 141)

The explanatory strategy illustrated by Thomas Schelling in Micromotives and Macrobehavior proceeds in a similar fashion. Schelling wants to show how a complex social phenomenon (say, residential segregation) can be the result of a set of preferences and beliefs of the independent individuals who make up the relevant population. And this is also the approach that is taken by researchers who develop agent-based models (link).

Why is the appeal to reduction sometimes frustrating to other scientists and philosophers? Because it often seems to be a way of changing the subject away from our original scientific interest. We started out, let's say, with an interest in motion perception, looking at the perceiver as an information-processing system, and the reductionist keeps insisting that we turn our attention to the organization of a set of nerve cells. But we weren't interested in nerve cells; we were interested in the computational systems associated with motion perception.

Another reason to be frustrated with "methodological reductionism" is the conviction that mid-level entities have stable properties of their own. So it isn't necessary to reduce those properties to their underlying constituents; rather, we can investigate those properties in their own terms, and then make use of this knowledge to explain other things at that level.

Finally, it is often the case that it is simply impossible to reconstruct with any useful precision the micro-level processes that give rise to a given higher-level structure. The mathematical properties of complex systems come in here: even relatively simple physical systems, governed by deterministic mechanical laws, exhibit behavior that cannot be calculated on the basis of information about the starting conditions of the system. A solar system with a massive star at the center and a handful of relatively low-mass planets produces a regular set of elliptical orbits. But a three-body gravitational system creates computational challenges that make it impossible to predict the future state of the system; even small errors of measurement or intruding forces can significantly shift the evolution of the system. (Here is an interesting animation of a three-body gravitational system; the image at the top is a screenshot.)
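The computational point can be made with any chaotic system; rather than writing out a gravitational integrator, the sketch below uses the logistic map at r = 4 as a stand-in (my choice for brevity, not anything in the three-body literature). Two trajectories that begin one part in ten billion apart become completely decorrelated within a hundred iterations, which is the same sensitivity to initial conditions that defeats long-run prediction of the three-body system.

```python
def logistic_trajectory(x0, steps=100, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), a standard chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)  # a measurement error of one part in ten billion
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {divergence[0]:.1e}, largest gap reached: {max(divergence):.3f}")
```

The tiny initial gap roughly doubles with each iteration, so after a few dozen steps the two trajectories carry no usable information about one another; no refinement of measurement short of infinite precision removes the problem.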

We might capture part of this set of ideas by noting that we can distinguish broadly between vertical and lateral explanatory strategies. Reduction is a vertical strategy. The discovery of the causal powers of a mid-level entity and use of those properties to explain the behavior of other mid-level entities and processes is a lateral or horizontal strategy. It remains within a given level of structure rather than moving up and down over two or more levels.

William Wimsatt is a philosopher of biology whose writings about reduction have illuminated the topic significantly. His article "Reductionism and its heuristics: Making methodological reductionism honest" is particularly useful (link). Wimsatt distinguishes among three varieties of reductionism in the philosophy of science: inter-level reductive explanations, same-level reductive theory succession, and eliminative reduction (448). He finds that eliminative reduction is a non-starter; virtually no scientists see value in attempting to eliminate references to the higher-level domain in favor of a lower-level domain. Inter-level reduction is essentially what was described above. And theory-succession reduction is a mapping from one theory to the next of the ontologies that they depend upon. Here is his description of "successional reduction":

Successional reductions commonly relate theories or models of entities which are either at the same compositional level or they relate theories that aren't level-specific.... They are relationships between theoretical structures where one theory or model is transformed into another ... to localize similarities and differences between them. (449)

I suppose an example of this kind of reduction is the mapping of the quantum theory of the atom onto the classical theory of the atom.

Here is Wimsatt's description of inter-level reductive explanation:

Inter-level reductions explain phenomena (entities, relations, causal regularities) at one level via operations of often qualitatively different mechanisms at lower levels. (450)

Here is an example he offers of the "reduction" of Mendel's factors in biology:

Mendel's factors are successively localized through mechanistic accounts (1) on chromosomes by the Boveri–Sutton hypothesis (Darden, 1991), (2) relative to other genes in the chromosomes by linkage mapping (Wimsatt, 1992), (3) to bands in the physical chromosomes by deletion mapping (Carlson, 1967), and finally (4) to specific sites in chromosomal DNA thru various methods using PCR (polymerase chain reaction) to amplify the number of copies of targeted segments of DNA to identify and localize them (Waters, 1994).

What I find useful about Wimsatt's approach is the fact that he succeeds in de-dramatizing this issue. He puts aside the comprehensive and general claims that have sometimes been made on behalf of "methodological reductionism" in the past, and considers specific instances in biology where scientists have found it very useful to investigate the vertical relations that exist between higher-level and lower-level structures. This takes reductionism out of the domain of a general philosophical principle and into that of a particular research heuristic.

Monday, February 18, 2013

Supervenience of the social?


I have found it appealing to try to think of the macro-micro relation in terms of the idea of supervenience (link).  Supervenience is a concept that was developed in the context of physicalism and psychology, as a way of specifying a non-reductionist but still constraining relationship between psychological properties and physical states of the brain. Physicalism and ontological individualism are both ontological theories about the relationship between higher and lower levels of entities in several different domains. But neither doctrine dictates how explanations in these domains need to proceed; i.e., neither forces us to be reductionist in either psychology or sociology.

The supervenience relation holds that --
  • X supervenes on Y =df no difference in X without some difference in the states of Y
Analogously, to say that the "social" supervenes upon "the totality of individuals making up a social arrangement" seems to have a superficial plausibility, without requiring that we attempt to reduce the social characteristics to ensembles of facts about individuals.
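Stated formally (my paraphrase of the standard schema, with w and w' ranging over possible configurations):

```latex
% Supervenience: base-level indiscernibility entails higher-level indiscernibility
X \text{ supervenes on } Y \;\equiv_{\mathrm{df}}\;
  \forall w, w' : \big( Y(w) = Y(w') \rightarrow X(w) = X(w') \big)

% Contrapositive: no difference in X without some difference in Y
\forall w, w' : \big( X(w) \neq X(w') \rightarrow Y(w) \neq Y(w') \big)
```

The contrapositive form is the one quoted in the definition above: any difference at the supervening level must be accompanied by some difference at the base level.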

I'm no longer so sure that this is a helpful move, however, for the purposes of the macro-micro relationship.  Suppose we are considering a statement along these lines:
  • The causal properties of organization X supervene on the states of the individuals who make up X and who interact with X.
There seem to be quite a few problems that arise when we try to make use of this idea.

(a) First, what are we thinking of when we specify "the states of the individuals"? Is it all characteristics, known and unknown? Or is it a specific list of characteristics? If it is all characteristics of the individual, including as-yet unknown characteristics, then the supervenience relation is impossible to apply in practice. We would never know whether two substrate populations were identical all the way down. This represents a kind of "twin-earth" thought experiment that doesn't shed light on real sociological questions.

In the psychology-neurophysiology examples out of which supervenience theory originated these problems don't seem so troubling. First, we think we know which properties of nerve cells are relevant to their functioning: electrical properties and network connections. So our supervenience claim for psychological states is more narrow:
  • The causal properties of a psychological process supervene on the functional properties of the states of the nerve cells of the corresponding brain. 
The nerve cells may differ in other ways that are irrelevant to the psychological processes at the higher level: they may be a little larger or smaller, they may have a slightly different content of trace metals, they may be of different ages. But our physicalist claim is generally more refined than this; it ignores these "irrelevant" differences across cells and specifies identity among the key functional characteristics of the cells. Put this way, the supervenience claim is an empirical theory; it says that electrical properties and network connections are causally relevant to psychological processes, but cell mass and cell age are not (within broad parameters).

(b) Second and relatedly, there are always some differences between two groups of people, no matter how similar; and if the two groups differ in the slightest degree -- say, one member likes ice cream and the corresponding other does not -- then the supervenience relation says nothing about the causal properties of X. The organizational features may be as widely divergent as could be imagined; supervenience is silent about the delta-to-epsilon relations from substrate to higher level. It specifies only that identical substrates produce identical higher level properties. It would be more useful to have something like the continuity concept from calculus: small deviations in lower-level properties result in small deviations in higher-level properties. But it is not clear that this is true in the social case.
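The continuity condition gestured at here would read, schematically (this assumes, contrary to what the paragraph doubts, that we can define similarity metrics d_Y on substrate states and d_X on macro-properties, and a determination map F from the one to the other; all three symbols are my own notation):

```latex
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall y, y' :\;
  d_Y(y, y') < \delta \;\rightarrow\; d_X\big(F(y), F(y')\big) < \varepsilon
```

Threshold effects of the Schelling type are the obvious reason for doubt: an arbitrarily small change in individual tolerances can tip a population from stable mixing to near-complete segregation, which is exactly a discontinuity in F.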

(c) Also problematic for the properties of social structures is an issue that depends upon the idea of path dependence. Let's say that we are working with the idea that a currently existing institution depends for its workings (its properties) on the individuals who make it up at present. And suppose that the institution has emerged through a fifty-year process of incremental change, while populated at each step by approximately similar individuals. The well-established fact of path dependence in the evolution of institutions (Thelen, How Institutions Evolve: The Political Economy of Skills in Germany, Britain, the United States, and Japan) entails that the properties of the institution today are not uniquely determined by the features of the individuals currently involved in it. Rather, there were shaping events that pushed the evolution of the institution in this direction or that at various points in time. This means that the current properties of the institution are not best explained by the current properties of the substrate individuals, but rather by the history of development that led this population to this point.

It will still be true that the workings of the institution at present are dependent on the features of the individuals at present; but the path-dependency argument says that those individuals will have adjusted in small ways so as to embody the regulative system of the institution in its current form, without becoming fundamentally different kinds of individuals. Chiefly they will have internalized slightly different systems of rules that embody the current institution, and this is what gives the institution its characteristic mode of functioning in the present.

So explanation of the features of the institution in the present is not best couched in terms of the current characteristics of the individuals who make it up, but rather by an historical account of the path that led to this point (and the minute changes in individual beliefs and behaviors that went along with this).

These concerns make me less satisfied with the general idea of supervenience as a way of specifying the relation between social structures and substrate individuals. What would satisfy me more would be something like this:
  • Social structures supervene upon the states of individuals in the substrate described at a given level of granularity corresponding to our current theory of the actor.
  • Small differences in the substrate will produce only small differences in the social structure.
These add up to a strong claim; they entail that any organization with similar rules of behavior involving roughly similar actors (according to the terms of our best theory of the actor) will have roughly similar causal properties. And this in turn invites empirical investigation through comparative methods.

As for the path-dependency issue raised in comment (c), perhaps this is the best we can say: the substrate analysis of the behavior of the individuals tells us how the institution works, but the historical account of the path-dependent process through which the institution came to have the characteristics it currently has tells us why it works this way. And these are different kinds of explanations.