Wednesday, September 25, 2013

What is reduction?

[Screenshot: animation of a three-body gravitational system]

The topics of methodological individualism and microfoundationalism unavoidably intersect with the idea of reductionism -- the notion that higher-level entities and structures need somehow to be "reduced" to facts or properties having to do with lower-level structures. In the social sciences, this amounts to something along these lines: the properties and dynamics of social entities need to be explained by the properties and interactions of the individuals who constitute them. Social facts need to reduce to a set of individual-level facts and laws. Similar positions arise in psychology ("psychological properties and dynamics need to reduce to facts about the activities and properties of the central nervous system") and biology ("complex biological systems like genes and cells need to reduce to the biochemistry of the interacting systems of molecules that make them up").

Reductionism has a bad flavor within much of philosophy, but it is worth dwelling on the concept a bit more fully.

Why would the strategy of reduction be appealing within a scientific research tradition? Here is one reason: there is evident explanatory gain in showing how the complex properties and functioning of a higher-level entity result from the properties and interactions of its lower-level constituents. This kind of demonstration serves to explain the upper-level system's properties in terms of the entities that make it up. This is the rationale for Peter Hedstrom's metaphor of "dissecting the social" (Dissecting the Social: On the Principles of Analytical Sociology); in his words,

To dissect, as the term is used here, is to decompose a complex totality into its constituent entities and activities and then to bring into focus what is believed to be its most essential elements. (kl 76)

Aggregate or macro-level patterns usually say surprisingly little about why we observe particular aggregate patterns, and our explanations must therefore focus on the micro-level processes that brought them about. (kl 141)

The explanatory strategy illustrated by Thomas Schelling in Micromotives and Macrobehavior proceeds in a similar fashion. Schelling wants to show how a complex social phenomenon (say, residential segregation) can be the result of a set of preferences and beliefs of the independent individuals who make up the relevant population. And this is also the approach that is taken by researchers who develop agent-based models (link).
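
To make this strategy concrete, here is a minimal sketch, in Python, of a Schelling-style checkerboard model. All of the particulars (grid size, vacancy rate, the 30% tolerance threshold, wrap-around edges) are illustrative assumptions rather than Schelling's own specification; the point is only to show how mild individual preferences can aggregate into a stark macro-level pattern.

    import random

    # A toy Schelling-style segregation model. Parameters are hypothetical
    # choices for illustration, not Schelling's own specification.
    SIZE = 20          # grid dimension
    EMPTY = 0.1        # fraction of vacant cells
    THRESHOLD = 0.3    # an agent is content if >= 30% of its neighbors match it

    def make_grid():
        """Randomly populate a SIZE x SIZE grid with 'A's, 'B's, and vacancies."""
        cells = []
        for _ in range(SIZE * SIZE):
            if random.random() < EMPTY:
                cells.append(None)
            else:
                cells.append('A' if random.random() < 0.5 else 'B')
        return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

    def unhappy(grid, x, y):
        """True if too few of this agent's occupied neighbors share its type."""
        me = grid[y][x]
        same = other = 0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nx, ny = (x + dx) % SIZE, (y + dy) % SIZE  # wrap-around edges
                neighbor = grid[ny][nx]
                if neighbor is None:
                    continue
                if neighbor == me:
                    same += 1
                else:
                    other += 1
        total = same + other
        return total > 0 and same / total < THRESHOLD

    def step(grid):
        """One sweep: move each currently unhappy agent to a random vacancy."""
        movers = [(x, y) for y in range(SIZE) for x in range(SIZE)
                  if grid[y][x] is not None and unhappy(grid, x, y)]
        vacancies = [(x, y) for y in range(SIZE) for x in range(SIZE)
                     if grid[y][x] is None]
        for (x, y) in movers:
            i = random.randrange(len(vacancies))
            vx, vy = vacancies[i]
            vacancies[i] = (x, y)        # the mover's old cell becomes vacant
            grid[vy][vx], grid[y][x] = grid[y][x], None
        return len(movers)

    grid = make_grid()
    for _ in range(200):
        if step(grid) == 0:              # stop when every agent is content
            break

Even though no agent in this sketch demands a majority of like neighbors, repeated rounds of relocation typically produce large single-type clusters -- exactly the kind of micro-to-macro demonstration that the dissection strategy aims at.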

Why is the appeal to reduction sometimes frustrating to other scientists and philosophers? Because it often seems to be a way of changing the subject away from our original scientific interest. We started out, let's say, with an interest in motion perception, looking at the perceiver as an information-processing system, and the reductionist keeps insisting that we turn our attention to the organization of a set of nerve cells. But we weren't interested in nerve cells; we were interested in the computational systems associated with motion perception.

Another reason to be frustrated with "methodological reductionism" is the conviction that mid-level entities have stable properties of their own. So it isn't necessary to reduce those properties to their underlying constituents; rather, we can investigate those properties in their own terms, and then make use of this knowledge to explain other things at that level.

Finally, it is often simply impossible to reconstruct with any useful precision the micro-level processes that give rise to a given higher-level structure. The mathematical properties of complex systems come in here: even relatively simple physical systems, governed by deterministic mechanical laws, exhibit behavior that cannot be calculated from information about the system's starting conditions. A solar system with a massive star at the center and a handful of relatively low-mass planets produces a regular set of elliptical orbits. But a three-body gravitational system presents computational challenges that make it impossible to predict the system's future state reliably; even small errors of measurement or intruding forces can significantly shift the evolution of the system. (Here is an interesting animation of a three-body gravitational system; the image at the top is a screenshot.)
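
The three-body point can be made concrete with a short simulation. The sketch below (masses, initial conditions, and the crude fixed-step integrator are all arbitrary assumptions made for illustration) runs the same planar three-body system twice, with initial positions differing by one part in a million in a single coordinate; printing the final coordinate differences typically shows the two runs drifting measurably apart, which is why no feasible precision of measurement pins down the long-run trajectory.

    import math

    # A toy planar three-body simulation. Masses, initial conditions, and the
    # crude fixed-step integrator are arbitrary assumptions for illustration.
    G = 1.0
    MASSES = [1.0, 1.0, 1.0]

    def accelerations(pos):
        """Newtonian gravitational acceleration on each body from the other two."""
        acc = [[0.0, 0.0] for _ in pos]
        for i in range(3):
            for j in range(3):
                if i == j:
                    continue
                dx = pos[j][0] - pos[i][0]
                dy = pos[j][1] - pos[i][1]
                r = math.hypot(dx, dy)
                acc[i][0] += G * MASSES[j] * dx / r ** 3
                acc[i][1] += G * MASSES[j] * dy / r ** 3
        return acc

    def simulate(pos, vel, dt=0.001, steps=20000):
        """Semi-implicit Euler integration; adequate for a qualitative sketch."""
        pos = [p[:] for p in pos]
        vel = [v[:] for v in vel]
        for _ in range(steps):
            acc = accelerations(pos)
            for i in range(3):
                vel[i][0] += acc[i][0] * dt
                vel[i][1] += acc[i][1] * dt
                pos[i][0] += vel[i][0] * dt
                pos[i][1] += vel[i][1] * dt
        return pos

    positions = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
    velocities = [[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]]

    run_a = simulate(positions, velocities)

    # Nudge one coordinate by one part in a million and rerun.
    perturbed = [p[:] for p in positions]
    perturbed[2][0] += 1e-6
    run_b = simulate(perturbed, velocities)

    for i in range(3):
        print(i, run_a[i][0] - run_b[i][0], run_a[i][1] - run_b[i][1])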

We might capture part of this set of ideas by distinguishing broadly between vertical and lateral explanatory strategies. Reduction is a vertical strategy. The discovery of the causal powers of a mid-level entity, and the use of those powers to explain the behavior of other mid-level entities and processes, is a lateral or horizontal strategy. It remains within a given level of structure rather than moving up and down across two or more levels.

William Wimsatt is a philosopher of biology whose writings about reduction have illuminated the topic significantly. His article "Reductionism and its heuristics: Making methodological reductionism honest" is particularly useful (link). Wimsatt distinguishes among three varieties of reductionism in the philosophy of science: inter-level reductive explanations, same-level reductive theory succession, and eliminative reduction (448). He finds that eliminative reduction is a non-starter; virtually no scientists see value in attempting to eliminate references to the higher-level domain in favor of a lower-level domain. Inter-level reduction is essentially what was described above. And successional reduction is a mapping from one theory to its successor of the ontologies they depend upon. Here is his description of "successional reduction":

Successional reductions commonly relate theories or models of entities which are either at the same compositional level or they relate theories that aren't level-specific.... They are relationships between theoretical structures where one theory or model is transformed into another ... to localize similarities and differences between them. (449)

I suppose an example of this kind of reduction is the mapping of the quantum theory of the atom onto the classical theory of the atom.

Here is Wimsatt's description of inter-level reductive explanation:

Inter-level reductions explain phenomena (entities, relations, causal regularities) at one level via operations of often qualitatively different mechanisms at lower levels. (450)

Here is an example he offers of the "reduction" of Mendel's factors in biology:

Mendel's factors are successively localized through mechanistic accounts (1) on chromosomes by the Boveri–Sutton hypothesis (Darden, 1991), (2) relative to other genes in the chromosomes by linkage mapping (Wimsatt, 1992), (3) to bands in the physical chromosomes by deletion mapping (Carlson, 1967), and finally (4) to specific sites in chromosomal DNA thru various methods using PCR (polymerase chain reaction) to amplify the number of copies of targeted segments of DNA to identify and localize them (Waters, 1994).

What I find useful about Wimsatt's approach is the fact that he succeeds in de-dramatizing this issue. He puts aside the comprehensive and general claims that have sometimes been made on behalf of "methodological reductionism" in the past, and considers specific instances in biology where scientists have found it very useful to investigate the vertical relations that exist between higher-level and lower-level structures. This takes reductionism out of the domain of a general philosophical principle and into that of a particular research heuristic.

4 comments:

p9 said...

"Another reason to be frustrated with "methodological reductionism" is the conviction that mid-level entities have stable properties of their own"

So where do those properties come from? I agree that we have to treat 'mid-level entities' as if they have stable properties of their own, but that cannot be the case. It can't even be close to being the case, because that would be tantamount to magic.

"He finds that eliminative reduction is a non-starter; virtually no scientists see value in attempting to eliminate references to the higher-level domain in favor of a lower-level domain."

And yet it seems to be the case that all phenomena *actually* reduce to elementary particles in just this way. It's a terrible explanatory strategy, but it appears to be the way the universe is: there is nothing but elementary particles, and any other position has to assume that there are some properties that are not determined by elementary particles - i.e., the things that actually exist and constitute everything. The alternative to eliminative reductionism is magic.

I'm not saying that we have to explain everything in terms of constituent elementary particles, because that cannot work. We cannot gather the data to do this and simply do not have the cognitive processing capability to handle it. But that's a human failing, not a problem with the concept. I'd say that even if we never do it, explanation must always be potentially relatable to smaller and smaller events until we reach the elementary particles that actually constitute everything.

Lee A. Arnold said...

I think that part of the problem is that social science is missing the proper fundamental unit. I think it is "relationship, in a context": two actors with two different types of connections at the same time: 1) the immediate transaction to each other, and 2) the lines of responsibility and judgment to their context, a shared understanding based on prior agreement or their adoption of a pre-existing social construction ("we will have a market-style transaction" or "we are going to a party"). There are always two things to look at, epistemologically. This appears to be against eliminative reductionism, but a lot of very different structures can be examined in terms of a different scientific fundamental. This may only ever be a non-mathematical entity, however:
http://www.youtube.com/watch?v=UKXlqRIA92U&list=PLT-vY3f9uw3AcZVEOpeL89YNb9kYdhz3p

and search for "New Chart, for Descartes".

(Part of the problem is that modern social science must be couched in terms of math, but this may not always be appropriate, for formal reasons.)

Doug Blum said...

I think there are two issues here, one of which is conceptual, and the other which has to do with the researcher’s objectives.

The first has to do with how we approach the concept of emergence. As AJ claims, "I agree that we have to treat 'mid-level entities' as if they have stable properties of their own, but that cannot be the case. It can't even be close to being the case, because that would be tantamount to magic." As far as it goes, this is correct; any complex entity can be deconstructed into its constituent parts and their interactions. That is precisely what we have in mind when we invoke the idea of emergence; i.e., qualitative changes which result when constituent parts interact. But at the same time, to use Dan’s phrase, resulting entities do indeed come to have “stable properties of their own.” “Stable” in this sense ought not imply that such properties have always existed in constant form. Instead, properties are produced at a particular time through the interaction of parts (and may change again through subsequent interactions). Nevertheless, properties may well be stable in the sense of being concrete, consistent, and definable. Second, insofar as we can conceive of entities as distinct things, it follows that we can talk of “their own” properties in the sense of inherent characteristics. But this in no way implies existential autonomy, since their properties are ultimately derivative from the interaction of parts. In short, there is no necessary contradiction between the two (seemingly opposing) assertions.

The second issue has to do with how we approach reductionism. For Dan, reductionism in the social sciences is the position that "the properties and dynamics of social entities need to be explained by the properties and interactions of the individuals who constitute them." The operative word in this sentence is “need.” AJ’s approach to reductionism is quite different: "I'd say that even if we never do it, explanation must always be potentially relatable to smaller and smaller events until we reach the elementary particles that actually constitute everything." The key word here is “potentially.”

I strongly agree with AJ that it is desirable to be able to relate meso-level properties to the discrete components and interactions which combine to produce them. Absent this ability, our explanations will always remain incomplete. Tracing events allows us to fully understand the mechanisms that underlie interaction, and thereby to gauge specific interaction effects. But once these properties have emerged and become (provisionally) stabilized, then, as Dan claims, “it isn't necessary to reduce those properties to their underlying constituents.” Instead, if what we wish to do is explain the causal significance of emergent properties themselves, “we can investigate those properties in their own terms, and then make use of this knowledge to explain other things at that level.”

This understanding indeed “takes reductionism out of the domain of a general philosophical principle and into that of a particular research heuristic.” It all depends on what one wants to achieve.

Lee A. Arnold said...

You have to consider, however, that the idea that "any complex entity can be deconstructed into its constituent parts and their interactions" and "absent this ability, our explanations will always remain incomplete" (Doug Blum), as well as the idea that the failure to do this is "a human failing, not a problem with the concept" (A.J West), are themselves unproven beliefs and intellectual prejudices. They may or may not be correct. Certainly they are not correct with regard to mathematics itself, which Gödel showed will always remain incomplete and inexhaustible (his words). We cannot even decide whether numbers are metaphysical objects that exist independently of us (i.e. mathematical Platonism, which many mathematicians believe) or whether they are high-level human cognitive constructions with curiously penetrating, but not universal, applicability. In either case, Frege's idea that numbers are reducible to more basic logical constructions was discarded a long time ago, and numbers are not presently considered to be reducible to something else. Within physics, it is not even clear whether everything is "in principle" reducible to simpler physics: Stuart Kauffman's most recent book contains a chapter on physicists' current discussions of the evidence, and some are beginning to think that, for example, the Navier-Stokes equations are not reducible to particle motions. Irreducibility of higher-level things and concepts goes on and on. Social scientists are wise not to adduce magic, but that ought to include any fallback position that things are "in principle" reducible to smaller units, while avoiding other sorts of magical explanations as well.