Showing posts with label emergence. Show all posts

Monday, June 29, 2015

Quantum mental processes?


One of the pleasant aspects of a long career in philosophy is the occasional experience of a genuinely novel approach to familiar problems. Sometimes one's reaction is skeptical at first -- "that's a crazy idea!". And sometimes the approach turns out to have genuine promise. I've had that experience of moving from profound doubt to appreciation several times over the years, and it is an uplifting learning experience. (Most recently, I've made that progression with respect to some of the ideas of assemblage and actor-network theory advanced by thinkers such as Bruno Latour; link, link.)

I'm having that experience of unexpected dissonance as I begin to read Alexander Wendt's Quantum Mind and Social Science: Unifying Physical and Social Ontology. Wendt's book addresses many of the issues with which philosophers of social science have grappled for decades. But Wendt suggests a fundamental switch in the way that we think of the relation between the human sciences and the natural world. He suggests that an emerging paradigm of research on consciousness, advanced by Giuseppe Vitiello, John Eccles, Roger Penrose, Henry Stapp, and others, may have important implications for our understanding of the social world as well. This is the field of "quantum neuropsychology" -- the body of theory that maintains that puzzles surrounding the mind-body problem may be resolved by examining the workings of quantum behavior in the central nervous system. I'm not sure yet which category the idea of quantum consciousness belongs in, but it's interesting enough to pursue further.

The familiar problem in this case is the relation between the mental and the physical. Like all physicalists, I work on the assumption that mental phenomena are embodied in the physical infrastructure of the central nervous system, and that the central nervous system works according to familiar principles of electrochemistry. Thought and consciousness are somehow the "emergent" result of the workings of the complex physical structure of the brain (in a safe and bounded sense of emergence). The novel approach is the idea that somehow quantum physics may play a strikingly different role in this topic than had ever been imagined. Theorists in the field of quantum consciousness speculate that perhaps the peculiar characteristics of quantum events at the sub-atomic level (e.g. quantum randomness, complementarity, entanglement) are close enough to the action of neural networks that they serve to give a neural structure radically different properties from those expected by a classical-physics view of the brain. (This idea isn't precisely new; when I was an undergraduate in the 1960s it was sometimes speculated that freedom of the will was possible because of the indeterminacy created by quantum physics. But this wasn't a very compelling idea.)

Wendt's further contribution is to immerse himself in some of this work, and then to formulate the question of how these perspectives on intentionality and mentality might affect key topics in the philosophy of society. For example, how do the longstanding concepts of structure and agency look when we begin with a quantum perspective on mental activity?

A good place to start in preparing to read Wendt's book is Harald Atmanspacher's excellent article in the Stanford Encyclopedia of Philosophy (link). Atmanspacher organizes his treatment into three large areas of application of quantum physics to the problem of consciousness: metaphorical applications of the concepts of quantum physics; applications of the current state of knowledge in quantum physics; and applications of possible future advances in knowledge in quantum physics.
Among these [status quo] approaches, the one with the longest history was initiated by von Neumann in the 1930s.... It can be roughly characterized as the proposal to consider intentional conscious acts as intrinsically correlated with physical state reductions. (13)
A physical state reduction is the event that occurs when a quantum probability field resolves into a discrete particle or event upon having been measured. Some theorists (e.g. Henry Stapp) speculate that conscious human intention may influence the physical state reduction -- thus a "mental" event causes a "physical" event. And some process along these lines is applied to the "activation" of a neuronal assembly:
The activation of a neuronal assembly is necessary to make the encoded content consciously accessible. This activation is considered to be initiated by external stimuli. Unless the assembly is activated, its content remains unconscious, unaccessed memory. (20)
Also of interest in Atmanspacher's account is the idea of emergence: are mental phenomena emergent from physical phenomena, and in what sense? Atmanspacher specifies a clear but strong definition of emergence, and considers whether mental phenomena are emergent in this sense:
Mental states and/or properties can be considered as emergent if the material brain is not necessary or not sufficient to explore and understand them. (6)
This is a strong conception in a very specific way; it specifies that material facts are not sufficient to explain "emergent" mental properties. This implies that we need to know some additional facts beyond facts about the material brain in order to explain mental states; and it is natural to ask what the nature of those additional facts might be.

The reason this collection of ideas is initially shocking to me is the difference in scale between the sub-atomic level and macro-scale entities and events. There is something spooky about postulating causal links across that range of scales. It would be wholly crazy to speculate that we need to invoke the mathematics and theories of quantum physics to explain billiards. It is pretty well agreed by physicists that quantum mechanics reduces to Newtonian physics at this scale. Even though the component pieces of a billiard ball are quantum entities with peculiar properties, as an ensemble of 10^25 of these particles the behavior of the ball is safely classical. The peculiarities of the quantum level wash out for systems with multiple Avogadro's numbers of particles through the reliable workings of statistical mechanics. And the intuitions of most people comfortable with physics would lead them to assume that neurons enjoy the same independence from quantum effects; the scale of activity of a neuron (both spatial and temporal) is orders of magnitude too large to reflect quantum effects. (Sorry, Schrödinger's cat!)
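The "washing out" is easy to illustrate with a toy simulation (a sketch of my own, not a physical model): give each of N particles an independent ±1 fluctuation, and the relative size of the aggregate fluctuation shrinks like 1/√N, which is why ensembles of 10^25 particles behave classically.

```python
import random

# Toy illustration: microscopic randomness "washes out" at macroscopic scale.
# Each particle contributes a random +1/-1 fluctuation; the *relative* size
# of the aggregate fluctuation shrinks like 1/sqrt(N).
def relative_fluctuation(n_particles, trials=200, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n_particles))
        total += abs(s) / n_particles
    return total / trials

small = relative_fluctuation(10)      # fluctuations are a large share of the total
large = relative_fluctuation(10_000)  # fluctuations are a tiny share of the total
print(small, large)
```

With real Avogadro-scale numbers the relative fluctuation is unmeasurably small, which is the statistical-mechanical basis for expecting neurons, let alone billiard balls, to behave classically.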

In a recent article in Science Magazine, "Cold Numbers Unmake the Quantum Mind" (link), Charles Seife reports a set of fundamental physical computations conducted by Max Tegmark that were intended to demonstrate this point. Tegmark's analysis focuses on the speculations offered by Penrose and others on the possible quantum behavior of "microtubules." Tegmark purports to demonstrate that the time and space scales of quantum effects are too short by orders of magnitude to account for the neural mechanisms that can be observed (link). Here is Tegmark's abstract:
Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. We find that the decoherence time scales (∼10^−13–10^−20s) are typically much shorter than the relevant dynamical time scales (∼10^−3–10^−1s), both for regular neuron firing and for kinklike polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way. (link)
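The force of Tegmark's point comes through in simple arithmetic on the numbers quoted in the abstract. Even taking the slowest decoherence estimate and the fastest neural timescale, the gap is ten orders of magnitude:

```python
import math

# Back-of-envelope comparison of the timescales quoted in Tegmark's abstract.
# Decoherence: ~1e-13 to 1e-20 s; neural dynamics: ~1e-3 to 1e-1 s.
decoherence_slowest = 1e-13   # the case most favorable to quantum effects
dynamics_fastest = 1e-3       # the fastest relevant neural timescale

gap_orders = math.log10(dynamics_fastest / decoherence_slowest)
print(f"Coherence is lost at least {gap_orders:.0f} orders of magnitude "
      f"faster than neurons operate.")
```

In the least favorable comparison (10^-20 s against 10^-1 s) the gap is nineteen orders of magnitude, which is why Tegmark concludes that the brain's cognitive degrees of freedom are safely classical.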
I am grateful to Atmanspacher for providing such a clear and logical presentation of some of the main ideas of quantum consciousness; but I continue to find myself skeptical. There is a risk in this field of succumbing to the temptation of unbounded speculation: "Maybe if X's could influence Y's, then we could explain Z" without any knowledge of how X, Y, and Z are related through causal pathways. And the field seems sometimes to be prey to this impulse: "If quantum events were partially mental, then perhaps mental events could influence quantum states (and from there influence macro-scale effects)."

In an upcoming post I'll look closely at what Alex Wendt makes of this body of theory in application to the level of social behavior and structure.

Sunday, April 26, 2015

Gorski on critical realism


Earlier posts have considered the reception of critical realism by two major historical sociologists, Peggy Somers and George Steinmetz (link, link). These researchers believe that the philosophical issues and positions raised by Roy Bhaskar in his evolving theory of critical realism are insightful and useful for the conduct of post-positivist sociology. Philip Gorski is a third contemporary sociologist who has weighed in on the relevance of critical realism for the practice of sociology. Gorski's primary research divides between sociological theory and comparative historical sociology, and he is the most explicit of the three in his advocacy for the philosophy of science associated with critical realism. Several articles are particularly direct in this regard, including What is critical realism? (2013) in Contemporary Sociology, The poverty of deductivism (2004) in Sociological Methodology, and the concluding chapter in his recently edited and very interesting volume, Bourdieu and Historical Analysis (2013) (discussed earlier here).

The 2013 article offers both exposition and polemic. Gorski wants to make two cases: that sociologists benefit from the philosophy of science because it serves as an intellectual support for research and theory -- there is a reason for sociologists to be concerned about debates in the philosophy of science; and that the alternatives on offer -- positivism, interpretivism, constructivism -- are woefully inadequate.

Gorski makes the important point that sociological research needs to be pluralistic when it comes to methodology. Interpretation requires contextualization (662); causal hypotheses are often supported by statistical evidence; it is reasonable for realists about structures to look for the constitutive actions of lower-level powers that make them up. There isn't a primary method of inquiry or empirical reasoning that works best for all social research; instead, sociologists need to define significant research topics and then craft methods of inquiry and inference that are best suited to those topics. Quantitative, interpretive, comparative, deductive, inductive, abductive, descriptive, and explanatory approaches are all appropriate methods for some problems of social research. So it is important for sociologists to "unlearn" some of the dogmas of positivist methodology that have often been thought to be constitutive of the scientific warrant of sociology.

Against the dominant philosophies of social science of positivism, interpretivism, and constructivism, Gorski argues that there is only one credible alternative, the philosophical theory of critical realism. And this means, largely, the writings of Roy Bhaskar. So the bulk of this piece is an exposition of Bhaskar's central ideas. Gorski provides a precis of Bhaskar's thoughts in three stages of his work, A Realist Theory of Science, The Possibility of Naturalism: A philosophical critique of the contemporary human sciences, and Scientific Realism and Human Emancipation. And he argues that the ontological framework embodied in these aspects of a specific philosophy of science -- realism about underlying structures, powers, and processes -- is the best available as a general starting point for the sciences.

Other philosophies of social science place epistemology or methodology at the center of what a philosophy of the sciences should resolve. Gorski, on the other hand, emphasizes the importance of the ontology of critical realism -- what CR conveys about the nature and stratification of the social world.

Crucial within that ontology is the notion of the reality of social structures and the idea that social properties are emergent. Here are a few comments about emergence:
[CR] does not deny that reduction can sometimes be illuminating, but it insists that the social is an emergent reality with its own specific powers and properties. (659) 
The genesis of the social sciences hinged on the discovery of emergence, and major advances in them have typically involved the discovery of emergent properties (e.g., of economic markets, social classes, collective conscience, value spheres, social fields, and so on). (662) 
But they invariably fall short of their epistemic goals: to explain one strata of reality in terms of a lower-order one. Why? Because of ‘‘emergence.’’ The combination and interaction of entities and properties at one level of reality generates ‘‘emergent’’ entities and properties at others. (664)
As I've observed elsewhere here (link, link, link), the concept of emergence is elusive, and it seems to shift back and forth between "fundamentally independent of lower level phenomena" and "dependent upon but autonomous from lower level phenomena". The latter is the version that I favor with the concept of "relative explanatory autonomy" (link). Like Tuukka Kaidesoja, I'm not yet satisfied with the treatment of emergence that is associated with critical realism (link).

A related and more novel idea about social ontology employed by Gorski is the concept of "lamination" first introduced in Bhaskar's Dialectic: The Pulse of Freedom to describe the nature of the social world and the relations among levels of social action and structure:
Because there are multiple layers of agents and powers, moreover, observable events will have a "laminated" character; they are simultaneously governed by normic laws at various levels. This has important consequences for causal inference.... It is simultaneously and jointly determined by all of them. It is a "laminated" process. (665)
This is a vivid metaphor to use in conveying the idea of the "levels" of the social. But it has implications that may be unintended. For example, the idea of lamination suggests a sharp separation between layers; whereas many social domains seem to be better described as a continuous flow from lower to higher levels (and from higher to lower levels).

I find Gorski's exposition of critical realism to be useful and clear. However, there is a large omission in Gorski's account of the philosophical grounds of realism: Gorski assumes that there is no other serious tradition of post-positivist realism that serves as an alternative to critical realism. He dismisses what he calls the "conventional" realism of analytical sociology (659). He faults this version for its commitment to a form of individualism, structural individualism. Gorski insists on the "emergent" character of social structures and properties. And he presumes that there is no other viable form of realism available to the philosophy of science.

This binary and exclusive opposition between empiricism and critical realism as the only possibilities is misleading. There is a very substantial literature in the philosophy of science that rejects positivism; that maintains that theories are potentially good guides to the real "ontological" properties of the world; and that does not share the particular philosophical assumptions that Bhaskar brings into critical realism. I am thinking of Hilary Putnam, Dick Boyd, and Jarrett Leplin in particular. But there are dozens of other examples that could be considered.

A particularly clear exposition of this other tradition of post-positivist realism can be found in Richard Boyd's 1990 contribution in Wade Savage's Scientific Theories, "Realism, Approximate Truth, and Philosophical Method" (link). Here is how Boyd opens this piece:
Scientific realists hold that the characteristic product of successful scientific research is knowledge of largely theory-independent phenomena and that such knowledge is possible (indeed actual) even in those cases in which the relevant phenomena are not, in any non-question-begging sense, observable. The characteristic philosophical arguments for scientific realism embody the claim that certain central principles of scientific methodology require a realist explication. In its most completely developed form, this sort of abductive argument embodies the claim that a realist conception of scientific inquiry is required in order to justify, or to explain the reliability with respect to instrumental knowledge of, all of the basic methodological principles of mature scientific inquiry. (355)
This is a form of realism that reaches similar conclusions to those advocated by Bhaskar, but derives from an independent path of philosophical investigation following the collapse of positivist philosophy of science in the 1960s and 1970s. And this version of realism makes substantially fewer philosophical assumptions than Bhaskar's system puts forward.

(See also Jarrett Leplin, A Novel Defense of Scientific Realism.)

Sunday, October 12, 2014

Emergentism and generationism


media: lecture by Stanford Professor Robert Sapolsky on chaos and reduction

Several recent posts have focused on the topic of simulations in the social sciences. An interesting question here is whether these simulation models shed light on the questions of emergence and reduction that frequently arise in the philosophy of the social sciences. In most cases the models I've mentioned are "aggregation" models, in which the simulation attempts to capture the chief dynamics and interaction effects of the units and then work out the behavior and evolution of the ensemble. This is visibly clear when it comes to agent-based models. However, some of the scholars whose work I admire are "complexity" theorists, and a common view within complexity studies is the idea that the system has properties that are difficult or impossible to derive from the features of the units.

So does this body of work give weight to the idea of emergence, or does it incline us more in the direction of supervenience and ontological unit-ism?

John Miller and Scott Page provide an accessible framework within which to consider these kinds of problems in Complex Adaptive Systems: An Introduction to Computational Models of Social Life. They look at certain kinds of social phenomena as constituting what they call "complex adaptive systems," and they try to demonstrate how some of the computational tools developed in the sciences of complex systems can be deployed to analyze and explain complex social outcomes. Here is how they characterize the key concepts:
Adaptive social systems are composed of interacting, thoughtful (but perhaps not brilliant) agents. (kl 151)
Page and Miller believe that social phenomena often display "emergence" in a way that we can make sense of. Here is the umbrella notion they begin with:
The usual notion put forth underlying emergence is that individual, localized behavior aggregates into global behavior that is, in some sense, disconnected from its origins. Such a disconnection implies that, within limits, the details of the local behavior do not matter to the aggregate outcome. (kl 826)
And they believe that the notion of emergence has "deep intuitive appeal". They find emergence to be applicable at several levels of description, including "disorganized complexity" (the central limit theorem, the law of large numbers) and "organized complexity" (the behavior of sand piles when grains have a small amount of control).
Under organized complexity, the relationships among the agents are such that through various feedbacks and structural contingencies, agent variations no longer cancel one another out but, rather, become reinforcing. In such a world, we leave the realm of the Law of Large Numbers and instead embark down paths unknown. While we have ample evidence, both empirical and experimental, that under organized complexity, systems can exhibit aggregate properties that are not directly tied to agent details, a sound theoretical foothold from which to leverage this observation is only now being constructed. (kl 976)
Organized complexity, in their view, is a substantive and important kind of emergence in social systems, and this concept plays a key role in their view of complex adaptive systems.
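A Pólya urn is perhaps the simplest toy model of what Miller and Page describe (the sketch is my own illustration, not theirs): each draw of a color adds another ball of that color, so feedback makes early accidents reinforcing rather than self-cancelling.

```python
import random

# A Polya urn: start with one red and one blue ball; each draw returns the
# ball plus one more of the same color. Early accidents are amplified by
# the feedback instead of being averaged away.
def polya_share(steps, seed):
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Independent coin flips would converge on 0.5 by the law of large numbers;
# the urn instead settles on a different, path-dependent share on each run.
print([round(polya_share(5000, seed=s), 2) for s in range(5)])
```

This is exactly the departure from the Law of Large Numbers that Miller and Page have in mind: the aggregate outcome is stable within a run, yet it is not recoverable from the (identical) specifications of the individual draws.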

Another -- and contrarian -- contribution to this field is provided by Joshua Epstein. His three-volume work on agent-based models is a fundamental textbook for the field. Here are the titles:

Agent_Zero: Toward Neurocognitive Foundations for Generative Social Science
Growing Artificial Societies: Social Science From the Bottom Up
Generative Social Science: Studies in Agent-Based Computational Modeling

An overview of Epstein's approach is provided in Chapter 1 of Generative Social Science, "Agent-based Computational Models and Generative Social Science", and this is a superb place to begin (link). Here is how Epstein defines generativity:
Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest.... Rather, the generativist wants an account of the configuration's attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn't grow it, you didn't explain its emergence. (42)
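The flavor of a "grow it" demonstration can be conveyed with a much simpler toy than a full historical model (the following is a standard coin-exchange illustration, not Epstein's own code): specify only a micro-rule of random pairwise transfers and see what macro-distribution it generates.

```python
import random

# A minimal "generative" exercise in Epstein's sense: the microspecification
# is a symmetric rule (a random agent hands one coin to another random
# agent), and we check what macrostructure it suffices to grow.
def grow_wealth(n_agents=200, rounds=20_000, seed=1):
    rng = random.Random(seed)
    wealth = [5] * n_agents                  # perfectly equal start
    for _ in range(rounds):
        a, b = rng.sample(range(n_agents), 2)
        if wealth[a] > 0:                    # a hands one coin to b
            wealth[a] -= 1
            wealth[b] += 1
    return sorted(wealth)

w = grow_wealth()
# The micro-rule treats all agents identically, yet repeated exchange
# typically grows a markedly skewed distribution of holdings.
print("poorest:", w[0], "richest:", w[-1],
      "top-10% share:", round(sum(w[-20:]) / sum(w), 2))
```

In Epstein's idiom, the run demonstrates that this microspecification is sufficient to generate inequality from an equal start; nothing over and above the agents and their interactions is invoked.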
Epstein describes an extensive attempt to model a historical population using agent-based modeling techniques, the Artificial Anasazi project (link). This work is presented in Dean, Gumerman, Epstein, Axtell, Swedlund, McCarroll, and Parker, "Understanding Anasazi Culture Change through Agent-Based Modeling" in Dynamics in Human and Primate Societies: Agent-Based Modeling of Social and Spatial Processes. Taking a time series of fundamental environmental, climate, and agricultural data as given, Epstein and his team attempt to reconstruct (generate) the pattern of habitation that would result.

Generativity seems to be directly incompatible with the idea of emergence, and in fact Epstein takes pains to cast doubt on that idea.
I have always been uncomfortable with the vagueness--and occasional mysticism--surrounding this word and, accordingly, tried to define it quite narrowly.... There, we defined "emergent phenomena" to be simply "stable macroscopic patterns arising from local interaction of agents." (53)
So Epstein and Page both make use of the methods of agent-based modeling, but they disagree about the idea of emergence. Page believes that complex adaptive systems give rise to properties that are emergent and irreducible, whereas Epstein doesn't think the idea makes a lot of sense. Rather, Epstein's view depends on the idea that we can reproduce (generate) the macro phenomena based on a model involving the agents and their interactions. Macro phenomena are generated by the interactions of the units; whereas for Page and Miller, macro phenomena in some systems have properties that cannot be easily derived from the activities of the units.

At the moment, anyway, I find myself attracted to Herbert Simon's effort to split the difference by referring to "weak emergence" (link):
... reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (Sciences of the Artificial 3rd edition 172)
This view emphasizes the computational and epistemic limits that sometimes preclude generating the phenomena in question -- for example, the problems raised by non-linear causal relations and causal interdependence. Many observers have noted that the behavior of tightly linked causal systems may be impossible to predict, even when we are confident that the system outcomes are the result of "nothing but" the interactions of the units and sub-systems.
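The computational limits at issue are easy to exhibit (this is my own illustration, not Simon's): the logistic map is a one-line deterministic rule, yet at its chaotic parameter value nearby starting points diverge exponentially, so long-horizon prediction fails even with complete knowledge of the "parts" and their law.

```python
# The logistic map at r = 4: a fully known, deterministic micro-rule whose
# long-run behavior is nonetheless unpredictable in practice, because tiny
# errors in the initial condition grow exponentially.
def logistic_orbit(x0, steps, r=4.0):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_orbit(0.3, 50)
b = logistic_orbit(0.3 + 1e-12, 50)   # same rule, nearly identical start
# After ~50 steps the two trajectories are typically fully decorrelated,
# so forecasting fails despite "nothing but" the rule being at work.
print(a, b)
```

This is weak emergence in miniature: reductionism in principle, combined with an in-practice inability to infer the trajectory of the whole from the exactly known behavior of the parts.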

Sunday, March 9, 2014

Why emergence?



It is a fair question to ask, whether the concept of emergence is perhaps less important than it initially appears to be. Part of the interest in emergence seems to derive from the impulse by sociologists and philosophers to try to show that there is a legitimate level of the world that is "social", and to reject the more extreme versions of reductionism.

Social scientists have a few concrete and important interests in this set of issues. One is a concern for the autonomy of the social science disciplines. Is there a domain of the social that warrants scientific study? Or can we make do with really good microeconomic theories, agent-based modeling techniques, and a dollop of social psychology, and do without strong theories of the causal powers of social entities?

Another concern is apparently related, but on the ontology side of the story: are there social entities that can be studied for their empirical and causal characteristics independently from the individual activities that make them up? Do social entities really exist? Or are there compelling reasons to conclude that social entities are too fluid and plastic to admit of possessing stable empirical properties?

It seems to me that these concerns can be fully satisfied without appealing to a strong conception of emergence. We have perfectly good concepts that individuate entities at a social level, and we have fairly ordinary but compelling reasons for believing that these sorts of things are causally active in the world. But perhaps we can frame some simple ideas about the social world that will allow us to be more relaxed about whether these properties can be reduced to or explained by facts about actors (methodological individualism), or derived from facts about actors, or are instead strongly independent from the level of actors upon which they rest.

Consider the following background propositions about the social world. These are not trivial assumptions, but it would appear that a broad range of social thinkers would accept them, from enlightened analytical sociologists to many critical realists.
  1. Social phenomena are constituted by the actions and thoughts of situated social actors. ("No pure social stuff, no ineffable special sauce")
  2. Actors are causally influenced by a variety of social structures and entities. ("Actors are socially constituted and socially situated.") 
  3. Ensembles have properties that derive from the interactions of the composing entities (actors). ("System properties derive from complex and dynamic relations and structures among constituents.") 
  4. There are social properties that are not the simple aggregation of the properties of the actors. ("System properties are not simply the sum of constituent properties.") 
  5. Ensembles sometimes have system-level properties that exert causal powers with regard to their own constituents. ("Systems exert downward causation on their constituents.") 
  6. The computational challenges involved in modeling large complex systems are often overwhelming. ("The properties and behavior of complex systems are sometimes incalculable based simply on information about constituents and their arrangements.") 
These assumptions would serve to establish quite a bit of autonomy for social science investigation and explanation, without requiring us to debate whether social entities are nonetheless emergent. And the ontologically cautious among us may be more comfortable with these limited and reasonably clear assumptions than they are with an open-ended concept of emergent phenomena and properties. Assumption 6 suggests that it is not feasible (and likely will never be) to deduce social patterns from individual-level facts. Assumptions 3 and 4 establish that social properties are "autonomous" from individual-level facts. Assumptions 1 and 2 establish the ontological foundation of social entities -- the socially constituted individuals whose thoughts and actions constitute them. And assumption 5 establishes that the causal powers of social entities are in fact important and autonomous from facts about individuals, in the very important respect that higher-level properties play a causal role in the constitution of lower-level entities (individuals). This assumption is reflected in assumption 2 as well.
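Granovetter's threshold model gives a concrete instance of assumptions 3 and 4 (the sketch below is my own illustration): whether a cascade of participation occurs depends on the whole distribution of individual thresholds, so two populations with nearly identical individual properties can produce opposite aggregate outcomes.

```python
# Granovetter's threshold model: an agent joins once the number already
# participating reaches its personal threshold. The aggregate outcome
# depends on the *distribution* of thresholds, not just their average.
def cascade_size(thresholds):
    active = 0
    while True:
        newly = sum(1 for t in thresholds if t <= active)
        if newly == active:          # fixed point reached
            return active
        active = newly

pop_a = list(range(100))              # thresholds 0, 1, 2, ..., 99
pop_b = [0, 2] + list(range(2, 100))  # identical except one "1" raised to "2"
print(cascade_size(pop_a), cascade_size(pop_b))   # prints: 100 1
```

Changing a single agent's threshold by one unit flips the ensemble from full participation to almost none; the system property is manifestly not a simple sum of constituent properties, even though it is wholly constituted by them.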

So perhaps we might conclude that not much turns on whether social properties and powers are emergent or not. Instead, we might be better advised to try to capture the issues in this area in different terms. And the alternative that I favor is the idea of relative explanatory autonomy (link). The six core assumptions mentioned above serve to capture the heart of this approach.

Saturday, March 8, 2014

Kaidesoja on emergence


Tuukka Kaidesoja's recent book Naturalizing Critical Realist Social Ontology devotes a chapter to the topic of emergence as it is treated within critical realism. Roy Bhaskar insisted that the assumption of emergence was crucial to the theory of critical realism. Kaidesoja sorts out what Bhaskar means by emergence, which turns out to be ambiguous and inconsistent, and offers his own position on the concept.

Kaidesoja quotes an important passage from Bhaskar's Scientific Realism and Human Emancipation (1986):
It is only if social phenomena are genuinely emergent [. . .] that realist explanations in the human sciences are justified; and it is only if these conditions are satisfied that there is any possibility of human self-emancipation worthy of the name. But, conversely, emergent phenomena require realist explanation and realist explanations possess emancipatory implications. Emancipation depends upon explanation depends upon emergence. Given the phenomena of emergence, an emancipatory politics (or more generally transformative or therapeutic practice) depends upon a realist science. But, if and only if emergence is real, the development of both science and politics are up to us. [quoted by TK, 178]
Kaidesoja invokes a very basic issue about emergence by asking whether a claim of emergence for a given property is a claim about epistemology or about ontology. Is the phenomenon emergent because, given our current state of knowledge, it is impossible to derive the property from the properties of the lower level constituents; or do we mean that the property is really (ontologically) independent from the features of the lower level? Kaidesoja makes it clear that Bhaskar and the critical realists have the stronger ontological thesis in mind when they assert that social entities are emergent or have emergent properties. The emergent feature is ontologically irreducible to the composing elements. But it is really unclear what this means.

TK argues that Bhaskar intertwines three different kinds of emergence without clearly distinguishing them: compositional, transcendentally realist, and global-level (179).
  • Compositional emergence: A particular complex whole sometimes has properties that are not properties of any of its parts and not merely "aggregative" effects of the ensemble of parts (179-180).
  • Transcendentally realist emergence: Abstract social structures, as distinct from social particulars, have properties that cannot be derived from the activities of individuals. "Transcendentally real emergent powers of social structures differ from the causal powers of concrete social systems composed of interacting persons" (182).
  • Global-level emergence: Levels of reality (e.g. society, mind, matter) have emergent properties not derivable from the properties of lower levels of reality. "Each emergent level has its own synchronically emergent properties which are autonomous with respect to those of other levels" (186).
The three sets of ideas are successively more demanding, and TK finds that they are inconsistent with each other. Moreover, there is a crucial complication: within the compositional version (but not within the other two versions), Bhaskar allows that the emergent factor is amenable to "micro-reductive explanation". This is essentially the position taken by Herbert Simon (link) and Mario Bunge (link), and it appears to be consistent with Dave Elder-Vass's position in The Causal Power of Social Structures (link) as well. It is a reasonable position. The other two versions, by contrast, are explicitly not compatible with micro-reductive explanation, and do not appear reasonable.

In fact, Kaidesoja finds that there are insoluble problems with the "transcendentally realist" and "global-level" versions of the theory of emergence, and he concludes that they are unsupportable. Kaidesoja therefore focuses his attention on the compositional version as the sole version of emergence that can be coherently asserted within critical realism.
Since Bhaskar and his followers deny the possibility of analysing emergent powers of social structures in compositional terms, their notion of transcendentally realist emergent powers of social structures is incompatible with the compositional account of emergent powers. (184)
I further tried to show that the attribution of transcendentally real emergent powers to social structures is problematic, since it leaves the ontological relation between social structures and concrete social systems (composed of interacting people and their artifacts) obscure and/or construes social structures as abstract entities. (187)
This discussion has an important consequence within TK's naturalizing strategy. It implies that a naturalized critical realism will need to surrender the two more extensive versions of emergence and make do with the compositional form. And that would bring a naturalized critical realism into much closer alignment with mainstream thinking about the relation between higher-level and lower-level systems than critical realism is usually thought to allow.

So the argument TK has constructed in Naturalizing Critical Realist Social Ontology does not limit itself to criticizing the scheme of philosophical reasoning that Bhaskar and other CR theorists have pursued, but also extends to some of the substantive conclusions they have sought to derive.

Sunday, December 9, 2012

Simulating social mechanisms



A key premise of complexity theory is that a population of units has "emergent" properties that result from the interactions of units with dynamic characteristics. Call these units "agents".  The "agent" part of the description refers to the fact that the elements (persons) are self-directed units.  Social ensembles are referred to as "complex adaptive systems" -- systems in which outcomes are the result of complex interactions among the units AND in which the units themselves modify their behavior as a result of prior history.
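The idea can be made concrete with a minimal sketch (the payoff numbers and imitation rule here are invented for illustration, not drawn from Page): a population of agents repeatedly chooses between two behaviors, and each round every agent adopts whichever behavior currently pays better, with a little random exploration. The aggregate pattern that settles out is not written into any individual agent; it emerges from interaction plus adaptation to prior history.

```python
import random

def payoff(choice, cooperators, n):
    # an agent's payoff depends on what the rest of the population did;
    # these particular numbers are arbitrary illustrative values
    share = cooperators / n
    return 3 * share if choice == "C" else 1 + 2 * share

def step(choices):
    n = len(choices)
    coop = choices.count("C")
    avg = {c: payoff(c, coop, n) for c in ("C", "D")}
    # each agent adapts: adopt the currently better-paying behavior,
    # with 5% noise so the population keeps exploring
    return [max(avg, key=avg.get) if random.random() > 0.05
            else random.choice(["C", "D"]) for _ in choices]

random.seed(0)
pop = [random.choice(["C", "D"]) for _ in range(100)]
for _ in range(20):
    pop = step(pop)
print(pop.count("C"), "cooperators after 20 rounds")
```

Even this toy system shows the signature of a complex adaptive system: the population-level outcome is produced by the interaction rule and the agents' history-dependent adjustments, not by any single agent's properties.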

Scott Page's Complex Adaptive Systems: An Introduction to Computational Models of Social Life provides an excellent introduction. Here is how Page describes an adaptive social system:
Adaptive social systems are composed of interacting, thoughtful (but perhaps not brilliant) agents. It would be difficult to date the exact moment that such systems first arose on our planet -- perhaps it was when early single-celled organisms began to compete with one another for resources.... What it takes to move from an adaptive system to a complex adaptive system is an open question and one that can engender endless debate. At the most basic level, the field of complex systems challenges the notion that by perfectly understanding the behavior of each component part of a system we will then understand the system as a whole. (kl 151)
Herbert Simon added a new chapter on complexity to the third edition of The Sciences of the Artificial in 1996.
By adopting this weak interpretation of emergence, we can adhere (and I will adhere) to reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (172).
This formulation amounts to the claim of what I earlier referred to as "relative explanatory autonomy"; link. It is a further articulation of Simon's view of "pragmatic holism" first expressed in 1962 (link).

So how would agent-based models (ABM) be applied to mechanical systems? Mechanisms are not intentional units. They are not "thoughtful", in Page's terms. In the most abstract version, a mechanism is an input-output relation, perhaps with governing conditions and probabilistic outcomes -- something like this:


In this diagram A, B, and D are jointly sufficient for the working of the mechanism, and C is a "blocking condition" for the mechanism. When A, B, C, and D are configured as represented, the mechanism does its work, leading to R with probability PROB and to S the rest of the time.
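The abstract mechanism just described can be rendered as a short function. The condition names A, B, C, D and the outcomes R and S follow the diagram; the particular probability value and the function itself are illustrative assumptions.

```python
import random

def mechanism(A, B, C, D, prob=0.7, rng=random):
    # A, B, and D are jointly sufficient; C is the blocking condition
    if A and B and D and not C:
        # the mechanism fires: R with probability prob, else S
        return "R" if rng.random() < prob else "S"
    # conditions not configured as represented: the mechanism is inert
    return None
```

So `mechanism(True, True, False, True)` yields R roughly 70% of the time on this (stipulated) probability, while setting C to True blocks the mechanism entirely.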

So how do we get complexity, emergence, or unpredictability out of a mechanical system consisting of a group of separate mechanisms? If mechanisms are determinate and exact, then it would seem that a mechanical system should not display "complexity" in Simon's sense; we should be able to compute the state of the system in the future given the starting conditions.

There seem to be several key factors that create indeterminacy or emergence within complex systems. One is the fact of causal interdependency, where the state of one mechanism influences the state of another mechanism, which is itself a precursor to the first mechanism. This is the issue of feedback loops or "coupled" causal processes. Second is non-linearity: small differences in input conditions sometimes bring about large differences in outputs. Whenever an outcome is subject to a threshold effect, we will observe this feature; small changes short of the threshold make no change in the output, whereas small changes at the threshold bring about large changes. And third is the adaptability of the agent itself. If the agent changes behavioral characteristics in response to earlier experience (through intention, evolution, or some other mechanism), then we can expect outcomes that surprise us, relative to similar earlier sequences. And in fact, mechanisms display features of each of these characteristics. They are generally probabilistic, they are often non-linear, they are sensitive to initial conditions, and at least sometimes they "evolve" over time.
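The first two factors, feedback coupling and threshold non-linearity, can be combined in a tiny numerical sketch (the specific coefficients and cutoff are invented for illustration). Two mechanisms each take the other's state as input, and the response regime switches at a threshold; starting conditions that differ only slightly end up in radically different places.

```python
def coupled_step(x, y, threshold=1.0):
    # each mechanism's next state depends on the other's current state
    # (a feedback loop); crossing the threshold switches the regime
    # from damping (x0.5) to amplification (x2.0)
    fx = 2.0 * y if y >= threshold else 0.5 * y
    fy = 2.0 * x if x >= threshold else 0.5 * x
    return fx, fy

def run(x0, y0, steps=10):
    x, y = x0, y0
    for _ in range(steps):
        x, y = coupled_step(x, y)
    return x, y

below = run(0.99, 0.99)   # just under the threshold: decays toward zero
at = run(1.00, 1.00)      # at the threshold: grows explosively
```

A difference of 0.01 in the starting state separates a trajectory that collapses toward zero from one that grows a thousandfold in ten steps, which is the sensitivity to initial conditions described above.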

So here is an interesting question: how do these considerations play into the topic of understanding social outcomes on the basis of an analysis of underlying social mechanisms? Assume we have a theory of organizations that involves a number of lesser institutional mechanisms that affect the behavior of the organization. Is it possible to develop an agent-based model of the organization in which the institutional mechanisms are the units? Are meso-level theories of organizations and institutions amenable to implementation within ABM simulation techniques?
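One speculative way the question might be pursued is to let the units of the simulation be mechanism-objects rather than persons, with the organization's outcome emerging from how they are wired together. Everything in the sketch below is hypothetical: the mechanism names, the linear-with-noise behavior, and the sequential wiring are stand-ins, not a claim about how real institutional mechanisms compose.

```python
import random

class MechanismUnit:
    """A meso-level institutional mechanism treated as an ABM unit:
    it transforms an input signal, with some stochastic slippage."""
    def __init__(self, name, gain, noise=0.1):
        self.name, self.gain, self.noise = name, gain, noise

    def act(self, signal):
        # probabilistic input-output relation: scale the input,
        # plus Gaussian noise standing in for contingency
        return self.gain * signal + random.gauss(0, self.noise)

random.seed(1)

# three invented institutional mechanisms chained in sequence
units = [MechanismUnit("hiring", gain=1.2),
         MechanismUnit("training", gain=0.9),
         MechanismUnit("promotion", gain=1.1)]

signal = 1.0
for unit in units:
    signal = unit.act(signal)
print(f"organizational outcome: {signal:.2f}")
```

Whether this kind of construction captures anything important about meso-level theories is exactly the open question posed above; the sketch only shows that the ABM machinery does not, in itself, require the units to be intentional agents.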

Here is a Google Talk by Adrien Treuille on "Modeling and Control of Complex Dynamics".



The talk provides an interesting analysis of "crowd behavior" based on a new way of representing a crowd.