Thursday, April 7, 2016

Social progress in India?


A few days in Bangalore, Kerala, and Mumbai have been very interesting, from a social-change point of view. There is an election cycle underway, with prospects for a strong showing for the secular Left in several states. There is a resurgence of bigotry of various forms, including both anti-Muslim activism and violence against student activists, cattle herders, and Dalit teenagers. There is the deplorable fact that the BJP leadership and Prime Minister Modi are not owning up to their own role in encouraging these forms of inter-group hatred, and certainly are not taking the leadership role that is the responsibility of a governing party in denouncing hatred and violence. And there are mounting problems of vehicular traffic, road accidents, drought, garbage removal, and sanitation that absolutely must be solved if ordinary Indians are to have reasonable levels of health, safety, and comfort. It is very interesting to me that the leading appeals coming from Left politicians and candidates in Kerala are for rejecting bigotry and encouraging scientific composting of household trash. These may seem to lie at opposite ends of the spectrum, but they reflect some of India's most pressing challenges today.

At the same time there are encouraging signs of progress on almost all these issues. For example, the Centre for the Study of Social Exclusion and Inclusive Policy at the National Law School of India University in Bangalore brings together a strong cadre of activist scholars committed to ending the persistently low status and restricted opportunities of Dalits. One project in particular is impactful -- bringing the writings of B.R. Ambedkar, author of Annihilation of Caste, to pre-university students through mobile classes. More than 50,000 high school students in Karnataka have already been exposed to this course. Another scholar at the law school has spent his career addressing the continuing problem of bonded labor in India; it is not a problem of the past. He spoke powerfully of the difficulty of suppressing this practice through legislation, given the social power of the landlords and business owners who circumvent these laws. The Centre has been at work for about 13 years and provides a national focus for these important social issues.

The Center for Agrarian Research at the Indian Statistical Institute in Bangalore is another good example of an intellectual force for positive change in rural India (link). Under the leadership of V.K. Ramachandran and Madhura Swaminathan, the Center organizes a group of young researchers to conduct village-level studies in various parts of India. The scientific directors of the Center work closely with grassroots rural organizations (for example, the All India Agricultural Workers Union and the All India Democratic Women's Association) to ensure that the focus of the research aligns as closely as possible with the knowledge needs of rural people in their struggles for social progress. The Center's journal, the Review of Agrarian Studies, is now open-access online, and it is worthwhile for readers of Understanding Society to become frequent visitors. The Center has also published a toolbox presenting the methods of survey and analysis that guide its work, and it has published a valuable series of books and monographs as well. This work serves to highlight features of local village life that are overlooked in national surveys. One important point their work documents is the value of measuring income directly rather than estimating it through consumption: when income is measured with precision, the range of inequalities in rural society turns out to be much greater than national and international estimates indicate.

India is hungry for change. And there is an appetite for large theoretical frameworks that can be used to guide that change. For the right and much of the business leadership of the country the preferred theory is a form of neo-liberalism -- a preference for low state regulation, laissez-faire markets, and unbounded entrepreneurship. The BJP embraces this ideology and adds a virulent form of Hindu nationalism to the mix. For the left, classical Marxism and an admiration of the progress of China since its revolution are dominant ideas. Many conversations come back to the question of social ownership of the means of production. But perhaps India needs a program for change that is less polarized and less ideological. The political economy of social democracy seems to fit the bill better than either Smith or Marx, USA or China.

Three points seem apparent. First, social progress in India requires overturning the many forms of inequality and deprivation that burden Dalits and the rural poor in particular, but also the tens of millions of migrants who barely survive in India's largest cities. And there are structures of power and privilege that support the status quo. So a powerful political movement will be required, one that expresses a strong and realistic program of change on poverty and systemic discrimination. The Dalit problem is crucial.

Second, India needs strong institutions at every level of government, from the municipality to the state to the national government. Laissez-faire theories of the self-regulating virtues of the market and private activity will simply not solve the problems that exist, from persistent discrimination and violence to environmental pollution, unlicensed development, garbage disposal, and traffic safety. Strong and effective regulation of the private activity of persons and corporations will be needed, or India will be overwhelmed by exploitation, pollution, and inequality.

Third, these processes of change must move forward through democratic practice. The progressive left must find ways to make its program appealing to the masses of Indian citizens who currently support more conservative approaches to government and greater quiescence about India's underlying structures of oppression. Markets and private ownership of land are not inherently inimical to progress. But they require regulation, and the fruits of economic success need to be shared in an equitable way. Redistributive taxation is morally and socially mandatory -- exactly as it is, in varying degrees, in every democracy in Europe and North America. This means a substantial use of the power of taxation to provide for crucial social services -- health, education, nutrition, housing -- at levels that permit all Indians, rich and poor, to have reasonable chances of success in the resulting economy. Or in other words, India needs a strong welfare state with effective market regulation and voice for the poor and dispossessed. 

Where is Nordic socialism when India most needs it (link)?

Wednesday, March 16, 2016

ABM fundamentalism

image: Chernobyl control room

Quite a few recent posts have examined the power and flexibility of agent-based models (ABMs) as platforms for simulating a wide range of social phenomena. Joshua Epstein is one of the high-profile contributors to this field, and he is famous for making a particularly strong claim on behalf of ABM methods. He argues that "generative" explanations are the uniquely best form of social explanation. A generative explanation is one that demonstrates how an upper-level structure or causal power comes about as a consequence of the operations of the units that make it up. Epstein's slogan puts the point as an aphorism: "If you didn't grow it, you didn't explain it."

Here is how he puts the point in a Brookings working paper, “Remarks on the foundations of agent-based generative social science” (link; also chapter 1 of Generative Social Science: Studies in Agent-Based Computational Modeling):

"To the generativist, explaining macroscopic social regularities, such as norms, spatial patterns, contagion dynamics, or institutions requires that one answer the following question:
"How could the autonomous local interactions of heterogeneous boundedly rational agents generate the given regularity?
"Accordingly, to explain macroscopic social patterns, we generate—or “grow”—them in agent models." (1)

And Epstein is quite explicit in saying that this formulation represents a necessary condition on all putative social explanations: "In summary, generative sufficiency is a necessary, but not sufficient condition for explanation." (5).
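
To see what the generativist recipe looks like in practice, here is a minimal sketch in Python, loosely in the spirit of Epstein's civil-violence model. The parameter values and the threshold rule are illustrative assumptions of mine, not Epstein's specification; the point is only the method of positing local rules for heterogeneous, boundedly rational agents and running them forward to see whether a macro-regularity "grows":

```python
import random

# Toy "generativist" experiment, loosely in the spirit of Epstein's
# civil-violence model. All parameters and the threshold rule are
# illustrative assumptions, not Epstein's specification.
random.seed(42)

N = 1000            # number of agents
LEGITIMACY = 0.6    # perceived legitimacy of the regime
THRESHOLD = 0.05    # net grievance required before an agent turns active

class Agent:
    def __init__(self):
        self.hardship = random.random()        # private grievance input
        self.risk_aversion = random.random()   # heterogeneity across agents

    def is_active(self, share_active):
        grievance = self.hardship * (1 - LEGITIMACY)
        # Perceived risk of joining falls as more others are already active.
        perceived_risk = self.risk_aversion * (1 - share_active)
        return grievance - perceived_risk > THRESHOLD

agents = [Agent() for _ in range(N)]
share_active = 0.0
for step in range(10):
    active = sum(a.is_active(share_active) for a in agents)
    share_active = active / N
    print(f"step {step}: {active} agents active")
```

If iterated runs of such a model reproduce the stylized facts of unrest (bandwagon cascades, punctuated outbursts), the generativist counts the macro-regularity as explained. The question raised below is whether this recipe can be mandatory for all social explanation.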

There is an apparent logic to this view of explanation. However, several earlier posts cast doubt on the conclusion. First, we have seen that all ABMs necessarily make abstractive assumptions about the behavioral features of the actors, and they have a difficult time incorporating "structural" factors like organizations. We found that the ABM simulations of ethnic and civil conflict (including Epstein's own model) are radically over-simplified representations of the field of civil conflict (link).  So it is problematic to assume the general applicability and superiority of ABM approaches for all issues of social explanation.

Second, we have also emphasized the importance of distinguishing between "generativeness" and "reducibility" (link). The former is a claim about ontology -- the notion that the features of the lower level suffice to determine the features of the upper level through pathways we may not understand at all. The latter is a claim about inter-theoretic deductive relationships -- relationships between our formalized beliefs about the lower level and the feasibility of deriving the features of the upper level from these beliefs. But I argued in the earlier post that the fact that A is generated by B does not imply that A is reducible to B. 

So there seem to be two distinct ways in which Epstein is over-reaching here: first, he assumes that agent-based models can be sufficiently detailed to reproduce complex social phenomena like civil unrest; and second, he assumes without justification that only reductive explanations are scientifically acceptable.

Consider an explanation of collective behavior that has explanatory weight, is not generative, and probably could not be fully reproduced as an ABM: Charles Perrow's analysis of technology failure as a consequence of organizational properties (Normal Accidents: Living with High-Risk Technologies). An earlier post considered these kinds of examples in more detail (link). Here is my summary of organizational approaches to the explanation of the incidence of accidents and system safety:
However, most safety experts agree that the social and organizational characteristics of the dangerous activity are the most common causes of bad safety performance. Poor supervision and inspection of maintenance operations leads to mechanical failures, potentially harming workers or the public. A workplace culture that discourages disclosure of unsafe conditions makes the likelihood of accidental harm much greater. A communications system that permits ambiguous or unclear messages to occur can lead to air crashes and wrong-site surgeries. (link)
I would say that this organizational approach is a legitimate schema for social explanation of an important effect (the occurrence of large technology failures). Further, it is not a generativist explanation; it does not start from a simplification of a particular kind of failure and demonstrate through iterated runs that failures occur X% of the time. Rather, it rests on a different kind of scientific reasoning: causal analysis grounded in careful examination and comparison of cases. Process tracing (starting with a failure and working backwards to find the key causal branches that led to the failure) and small-N comparison of cases allow the researcher to arrive at confident judgments about the causes of technology failure. And this kind of analysis can refute competing hypotheses: "operator error generally causes technology failure", "poor technology design generally causes technology failure", or even "technological over-confidence causes technology failure". All these hypotheses have defenders; so it is a substantive empirical hypothesis to argue that certain features of organizational deficiency (training, supervision, communications processes) are the most common causes of technological accidents.

Other examples from sociology could be provided as well: Michael Mann's explanation of the causes of European fascism (Fascists), George Steinmetz's explanation of variations in the characteristics of German colonial rule (The Devil's Handwriting: Precoloniality and the German Colonial State in Qingdao, Samoa, and Southwest Africa), or Kathleen Thelen's explanation of persistence and change in training regimes in capitalist economies (How Institutions Evolve: The Political Economy of Skills in Germany, Britain, the United States, and Japan). Each is explanatory, each identifies causal factors that are genuinely explanatory of the phenomena in question, and none is generativist in Epstein's sense. These are examples drawn from historical sociology and institutional sociology; but examples from other parts of the discipline are available as well.

I certainly believe that ABMs sometimes provide convincing and scientifically valuable explanations. The fundamentalism that I'm taking issue with here is the idea that all convincing and scientifically valuable social explanations must take this form -- a much stronger view and one that is not well supported by the practice of a range of social science research programs.

Or in other words, the over-reach of the ABM camp comes down to this: the claims of exclusivity and general adequacy of the simulation-based approach to explanation. ABM fundamentalists claim that only simulations from units to wholes will be satisfactory (exclusivity), and they claim that, for any problem, ABM simulations can be designed that are adequate to ground an explanation (general adequacy). Neither proposition can be embraced as a general or universal claim. Instead, we need to recognize the plurality of legitimate forms of causal reasoning in the social sciences, and we need to recognize, along with their strengths, some of the common weaknesses of the ABM approach for some kinds of problems.

Tuesday, March 15, 2016

What is anchor individualism?


Brian Epstein has attempted to shake up some of our fundamental assumptions about the social world in the past several years by challenging the idea of "ontological individualism" -- the idea that social things consist of facts about individuals in action, thought, and interaction, and nothing else. Here is how he puts the idea in "Ontological Individualism Reconsidered": "Ontological individualism is the thesis that facts about individuals exhaustively determine social facts” (link). He believes this ontological concept is false; he disputes the idea that the social world supervenes upon facts about individuals; and he argues that there are some social facts or circumstances that cannot be parsed in terms of facts about combinations of individuals. His arguments are pulled together in a very coherent way in The Ant Trap: Rebuilding the Foundations of the Social Sciences, but he has made the case in earlier articles as well (link).

Epstein's primary reason for doubting ontological individualism is a notion he shares with John Searle: that social action often involves a setting of law, convention, interpretation, presupposition, implicature, or rule that cannot be "reduced" to facts interior to the individuals involved in an activity. Searle's concept of a "status fact" is an example (link): the fact that John is an Imam is not a purely individual-level fact about John. Instead, it presupposes a structure of religious institutions, rules, procedures, and beliefs, in light of which John's history of interactions with other individuals and settings qualifies him as "Imam".

There is another kind of individualism that Epstein considers a more adequate version -- what he refers to as "anchor individualism." The diagram below represents his graphical explanation of the relationship between anchor individualism and ontological individualism. What does he mean by this idea?

image: Epstein's diagram of anchor individualism and ontological individualism
Here is one of his efforts to explain the point:
What I will call "anchor individualism" is a claim about how frame principles can be anchored. Ontological individualism, in contrast, is best understood as a claim about how social facts can be grounded. (101)
Frames, evidently, are institutional contexts, or contexts of meaning, in terms of which individual actions are situated. They constitute the difference between a bare set of behaviors and a full-blooded social action. Alfred lifts his right hand to his cap; this is a bodily motion. Alfred salutes his superior officer; this is an institutionally defined action that depends upon a frame of military authority and obligation, in the context of which the behavior constitutes a certain kind of social action. (This sounds rather similar, incidentally, to Ryle and Geertz on the "wink" and the distinction between thin and thick description; Geertz, "Thick Description" in The Interpretation of Cultures.) A frame principle is a stipulation of how an action, performance, or symbolic artifact is constituted, what makes it the socially meaningful thing that it is -- a hundred dollar bill, a first-degree murder, or an Orthodox rabbi. Plainly a frame principle looks a lot like a rule or a constitutive declaration: "any person who has received the degree of Bachelor of Science in Accounting, completed 150 credit hours of study, and passed the CPA exam counts as a certified public accountant."

But a mere stipulation of status is not sufficient. If one person individually decides that a university president shall henceforward be understood to have the authority to perform marriage ceremonies, this private declaration does not change the status definition of "university president." Rather, the stipulation must itself have some sort of social validity. It must be "anchored". We can say specifically what would be required to anchor the status definition of university president considered here: a valid act of legislation creating this power, together with widespread recognition of the political legitimacy and bindingness of the new legislation.

Epstein observes that Searle believes that anchoring of a frame principle always comes down to "collective acceptance" (103). But Epstein notes that other theorists have a broader conception of anchoring: attitudes, conforming behaviors, conventions, shared values about political legitimacy, acts of legislatures, and so on. What anchor individualism asserts is that each of these forms of anchoring can be related to the attitudes, beliefs, and performances of individuals and groups of individuals.

So on Epstein's view, there are two complementary versions of individualism. Ontological individualism is a thesis about what is required for grounding a social fact; it maintains that social facts are grounded in the behaviors and thoughts of individuals. But Epstein thinks there is still something else to represent in our picture of social ontology. We need to be able to specify what circumstances anchor the frame principles themselves: the circumstances that make an action or performance the kind of action it is. To call a performance a "marriage" brings with it a long set of presuppositions about history, status, and validity. These presuppositions constitute a certain kind of frame principle. But we can then ask: what makes the frame principle binding in the circumstances? This is where anchoring comes in; anchoring is the set of facts that create or document the "bindingness" of the frame principles in question.
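
The logical structure can be made vivid with a toy sketch (my own illustration, using hypothetical predicates; this is not Epstein's formal apparatus). Grounding runs from facts about a person to her status; anchoring runs from facts about legislation and recognition to the validity of the status rule itself:

```python
# A toy rendering of the grounding/anchoring distinction. The predicates,
# facts, and names here are hypothetical illustrations.

# Ground-level facts about an individual:
alice = {"degree": "BS Accounting", "credit_hours": 150, "passed_cpa_exam": True}

# A frame principle: a constitutive rule of the form "X counts as Y".
def counts_as_cpa(person):
    return (person["degree"] == "BS Accounting"
            and person["credit_hours"] >= 150
            and person["passed_cpa_exam"])

# Anchoring facts: what puts the frame principle itself in force.
# A private stipulation would not do; these conditions must hold socially.
anchors = {
    "enacted_by_valid_legislation": True,
    "widely_recognized_as_binding": True,
}

def frame_is_anchored(anchors):
    return all(anchors.values())

if frame_is_anchored(anchors) and counts_as_cpa(alice):
    print("Alice has the status 'certified public accountant'.")
```

The point of separating the two functions is that they answer different questions: counts_as_cpa encodes the frame principle, while frame_is_anchored records the facts in virtue of which that principle is in force at all.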

In my reading, what makes this view distinctive relative to traditional thinking about the relationship between individuals and social facts is its effort to formalize the logical standing of circumstances that are intuitively crucial in social interactions: the significance, rule-abidingness, legitimacy, and conventionality of a given individual-level behavior. And these circumstances are necessarily distributed across a large group of people, involving the kinds of socially reflexive ideas that Searle thinks are constitutive of the social world: presuppositions, implicatures, rules, rituals, conventions, meanings, and practices. There is no private language, and there is no private practice. (There are things we do purely individually and privately; but these do not constitute "practices" in the socially meaningful sense.) So the kinds of things that an anchor analysis calls out are social things.

But it also seems fair to observe that the facts that anchor a practice, convention, or rule are indeed facts that depend upon states of mind and action of individual actors. So anchor individualism remains a coherent kind of individualism. These anchoring facts have microfoundations in the thoughts, behavior, habits, and practices of socially situated individuals.

Saturday, March 12, 2016

Wendt's strong claims about quantum consciousness


Alex Wendt takes a provocative step in Quantum Mind and Social Science: Unifying Physical and Social Ontology by proposing that quantum mechanics plays a role at all levels of the human and social world (as well as in all life). And he doesn't mean this in the trivial sense that all of nature is constituted by quantum-mechanical micro-realities (or unrealities). Instead, he means that we need to treat human beings and social structures as quantum-mechanical wave functions. He wants to see whether some of the peculiarities of social (and individual) phenomena might be explained on the hypothesis that mental phenomena are deeply and actively quantum phenomena. This is a very large pill to swallow, since much considered judgment across the sciences concurs that macroscopic objects -- billiard balls, viruses, neurons -- are on a physical and temporal scale at which quantum effects have undergone "decoherence" and behave as strictly classical entities.

Wendt’s work rests upon a small but serious body of scholarship in physics, the neurosciences, and philosophy on the topics of “quantum consciousness” and “quantum biology”. An earlier post described some tangible but non-controversial progress that has been made on the biology side, where physicists and chemists have explored a possible pathway accounting for birds’ ability to sense the earth’s magnetic field directly through a chemical process that depends upon entangled electrons.

Here I’d like to probe Alex’s argument a bit more deeply by taking an inventory of the strong claims that he considers in the book. (He doesn’t endorse all these claims, but regards them as potentially true and worth investigating.)
  1. Walking wave functions: "I argue that human beings and therefore social life exhibit quantum coherence – in effect, that we are walking wave functions. I intend the argument not as an analogy or metaphor, but as a realist claim about what people really are." (3) ... "My claim is that life is a macroscopic instantiation of quantum coherence." (137) ... "Quantum consciousness theory suggests that human beings are literally walking wave functions." (154)
  2. "The central claim of this book is that all intentional phenomena are quantum mechanical." (149) ... "The basic directive of a quantum social science, its positive heuristic if you will, is to re-think human behavior through the lens of quantum theory." (32)
  3. "I argued that a very different picture emerges if we imagine ourselves under a quantum constraint with a panpsychist ontology. Quantum Man is physical but not wholly material, conscious, in superposed rather than well-defined states, subject to and also a source of non-local causation, free, purposeful, and very much alive." (207)
  4. "Quantum consciousness theory builds on these intuitions by combining two propositions: (1) the physical claim of quantum brain theory that the brain is capable of sustaining coherent quantum states (Chapter 5), and (2) the metaphysical claim of panpsychism that consciousness inheres in the very structure of matter (Chapter 6)." (92)
  5. Quantum decision theory: "[There is] growing experimental evidence that long-standing anomalies of human behavior can be predicted by “quantum decision theory”." (4)
  6. Panpsychism: "Quantum theory actually implies a panpsychist ontology: that consciousness goes “all the way down” to the sub-atomic level. Exploiting this possibility, quantum consciousness theorists have identified mechanisms in the brain that might allow this sub-atomic proto-consciousness to be amplified to the macroscopic level." (5)
  7. Consciousness: "The hard problem, in contrast, is explaining consciousness." (15) ... "As long as the brain is assumed to be a classical system, there is no reason to think even future neuroscience will give us “the slightest idea how anything material could be conscious”." (17) ... "Hence the central question(s) of this book: (a) how might a quantum theoretic approach explain consciousness and by extension intentional phenomena, and thereby unify physical and social ontology, and (b) what are some implications of the result for contemporary debates in social theory?" (29)
  8. The quantum brain: "Quantum brain theory hypothesizes that the brain is able to sustain quantum coherence – a wave function – at the macro, whole-organism level." (30) ... "Quantum brain theory challenges this assumption by proposing that the mind is actually a quantum computer. Classical computers are based on binary digits or “bits” with well-defined values (0 or 1), which are transformed in serial operations by a program into an output. Quantum computers in contrast are based on “qubits” that can be in superpositions of 0 and 1 at the same time and also interact non-locally, enabling every qubit to be operated on simultaneously." (95)
  9. Weak and strong quantum minds: "In parsing quantum brain theory an initial distinction should be made between two different arguments that are often discussed under this heading. What might be called the “weak” argument hypothesizes that the firing of individual neurons is affected by quantum processes, but it does not posit quantum effects at the level of the whole brain." (97)
  10. Vitalism: "Principally, because my argument is vitalist, though the issue is complicated by the variety of forms vitalism has taken historically, some of which overlap with other doctrines." (144)
  11. Will and decision: "In Chapter 6, I equated this power with an aspect of wave function collapse, viewed as a process of temporal symmetry-breaking, in which advanced action moves through Will and retarded action through Experience." (174) ... "Will controls the direction of the body's movement over time by harnessing temporal non-locality, potentially over long “distances.” As advanced action, Will projects itself into what will become the future and creates a destiny state there that, through the enforcement of correlations with what will become the past, steers us purposefully toward that end." (182)
  12. Entangled people: "It is the burden of my argument to show that despite its strong intuitive appeal, the separability assumption does not hold in social life. The burden only extends so far, since I am not going to defend the opposite assumption, that human beings are completely inseparable. This is not true even at the sub-atomic level, where entangled particles retain some individuality. Rather, what characterizes people entangled in social structures is that they are not fully separable." (208-209)
  13. Quantum semantics: "This suggests that the “ground state” of a concept may be represented as a superposition of potential meanings, with each of the latter a distinct “vector” within its wave function." (216)
  14. Social structure: "If the physical basis of the mind and language is quantum mechanical, then, given this definition, that is true of social structures as well. Which is to say, what social structures actually are, physically, are superpositions of shared mental states – social wave functions." (258) ... "A quantum social ontology suggests – as structuration theorists and critical realists alike have long argued – that agents and social structures are “mutually constitutive.” I should emphasize that this does not mean “reciprocal causation” or “co-determination,” with which “mutual constitution” is often conflated in social theory. As quantum entanglement, the relationship of agents and social structures is not a process of causal interaction over time, but a non-local, synchronic state from which both are emergent." (260) ... "First, a social wave function constitutes a different probability distribution for agents’ actions than would exist in its absence. Being entangled in a social structure makes certain practices more likely than others, which I take to involve formal causation." (264-265)
  15. The state and other structures: "The answer is that the state is a kind of hologram. This hologram is different from those created artificially by scientists in the lab, and also from the holographic projection that I argued in Chapter 11 enables us to see ordinary material objects, since in these cases there is something there visible to the naked eye." (271) ... Collective consciousness: "A quantum interpretation of extended consciousness takes us part way toward collective consciousness, but only part, because even extended consciousness is still centered in individual brains and thus solipsistic. A plausible second step therefore would be to invoke the concept of ‘We-feeling,’ which seems to get at something like ‘collective consciousness,’ and is not only widely used by philosophers of collective intentionality, but has been studied empirically by social psychologists as well." (277)
In my view the key premise here is the quantum interpretation of the brain and consciousness that Alex advocates. He wants us to consider that the operations of the brain -- the input-output relations and the intervening mechanisms -- are not "classical" but rather quantum-mechanical. And this is a very, very strong claim. It is vastly stronger than the idea that neurons may be affected by quantum-level events (considered in an earlier post and subject to active research by people interested in how microtubules work within neurons). But Alex would not be satisfied with the idea that "neurons are quantum machines" (point 9 above); he wants to make the vastly stronger argument that "brains are quantum computers". And even stronger than that -- he wants to claim that the brain itself is a wave function, which implies that we cannot understand its working by understanding the workings of its (quantum) components. (I don't think that computer engineers who are designing real quantum computers believe that the device itself is a wave function; only that the components (qubits) behave according to quantum mathematics.) Here is his brain-holism:
Quantum brain theory hypothesizes that quantum processes at the elementary level are amplified and kept in superposition at the level of the organism, and then, through downward causation constrain what is going on deep within the brain. (95)
So the brain as a whole is in superposition, and it resolves into perception or will only through a collapse of its wave function as a whole. (He sometimes refers to "a decoherence-free sub-space of the brain within which quantum computational processes are performed" (95), which implies that the brain as a whole is perhaps a classical thing encompassing "quantum sub-regions".) But whether it is the whole brain (implied by "walking wave function") or a relatively voluminous sub-region, the conjurer's move occurs here: extending known though kinky properties of very special isolated systems of micro-entities (a handful of electrons, photons, or atoms) to a description of macro-sized entities maintaining those same kinky properties.

So the "brain as wave function" theory is very implausible given current knowledge. But if this view of the brain and thought cannot be made more credible than it currently is -- both empirically and theoretically -- then Wendt's whole system falls apart: entangled individuals involved in structures and meanings, life as a quantum-vital state, and panpsychism all have no inherent credibility by themselves.

There are many eye-widening claims here -- and yet Alex is clear enough, and well-versed enough in the relevant areas of research in neuroscience and philosophy of mind, to give his case some credibility. He lays out his case with calm good humor and rational care. Alex relies heavily on the fact that there are difficult unresolved problems in the philosophy of mind and the philosophy of physics (the nature of consciousness, freedom of the will, the interpretation of the quantum wave function). This gives impetus to his call for a fresh way of approaching the whole field -- a move suggested by philosophers of science like Kuhn and Lakatos. However, failing to answer the question "How is freedom of the will possible?" does not warrant jumping to highly questionable assumptions about neurophysiology.

But really -- in the end this just is not a plausible theory in my mind. I'm not ready to accept the ideas of quantum brains, quantum meanings, or quantum societies. The idea of entanglement has a specific meaning when it comes to electrons and photons; the metaphorical extension of the idea to pairs or groups of individuals seems like a stretch. I'm not persuaded that we are "walking wave functions" or that entanglement accounts for the workings of social institutions. The ideas of structures and meanings as entangled wave functions of individuals strike me as entirely speculative, depending as they do on granting the possibility that the brain itself is a single extended wave function. And this is a lot to grant.

(Here is a brief description of the engineering goals of developing a quantum computer (link):
Quantum computing differs fundamentally from classical computing, in that it is based on the generation and processing of qubits. Unlike classical bits, which can have a state of either 1 or 0, qubits allow a superposition of the 1 and 0 states (both simultaneously). Strikingly, multiple qubits can be linked in so-called 'entangled' states, in which the manipulation of a single qubit changes the entire system, even if individual qubits are physically distant. This property is the basis for quantum information processing, with the goal of building superfast quantum computers and transferring information in a completely secure way.
See the referenced research article in Science for a current advance in optical quantum computing; link.)
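
For readers who want the notation behind that description, here is the standard textbook picture of a qubit and of an entangled pair; nothing here is specific to Wendt or to the Science article:

```latex
% One qubit: a normalized superposition of the basis states.
|\psi\rangle \;=\; \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1.

% Two entangled qubits (a Bell state): the joint state does not factor
% into separate states for the individual qubits.
|\Phi^{+}\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr).
```

Measuring either qubit of the Bell state yields 0 or 1 at random, but the two outcomes always agree. Wendt's wager is that states of this non-factorizable kind can be sustained at the scale of brains and even dyads of persons.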

(The image above is from a research report by a team that succeeded in creating entanglement of a record number of atoms -- 3,000. Compare that to the roughly hundred billion neurons in the brain, and once again the implausibility of the "walking wave function" idea becomes overwhelming. And note the extreme conditions required to create this entangled group: the atoms were cooled to 10-millionths of a kelvin, trapped between two mirrors, and subjected to exposure by a single photon (link). And yet presumably decoherence occurs if the temperature rises substantially.)

Here is an interesting lecture on quantum computing by Microsoft scientist Krysta Svore, presented at the Institute for Quantum Computing at the University of Waterloo.


Quantum biology?



I have discussed several times an emerging literature on "quantum consciousness", focusing on Alex Wendt's provocative book Quantum Mind and Social Science: Unifying Physical and Social Ontology. Is it possible in theory for cognitive processes, or neuroanatomical functioning, to be affected by events at the quantum level? Are there known quantum effects within biological systems? Here is one interesting case that is currently being explored by biologists: an explanation of the ability of birds to navigate by the earth's magnetic field in terms of the chemistry of entangled electrons.

Quantum entanglement is a relation between two or more micro-particles (photons, electrons, ...) in which the quantum state of each particle cannot be specified independently of the states of the others; only the joint state of the pair or ensemble is well defined. When a measurement on one member of an entangled pair fixes its quantum state, quantum theory entails that the state of the other member is fixed as well, however far apart the two may be.
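
The canonical example, in standard textbook notation, is the spin singlet of two electrons -- the very state that figures in the radical-pair hypothesis discussed below:

```latex
% The spin singlet of two electrons: measuring one spin "up" along an
% axis guarantees the other is found "down" along the same axis.
|S\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(
  |{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle \bigr).
```

No assignment of definite individual spins to the two electrons reproduces the correlations this joint state predicts; that is what makes the pair "entangled" rather than merely correlated.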

It has been hypothesized that the ability of birds to navigate by reference to the earth's magnetic field may be explained by quantum effects of electrons in molecules (cryptochromes) in the bird's retina. Thorsten Ritz is a leader in this area of research. In "Magnetic Compass of Birds Is Based on a Molecule with Optimal Directional Sensitivity" he and his co-authors describe the hypothesis in these terms (link):
The radical-pair model (7,8) assumes that these properties of the avian magnetic compass—light-dependence and insensitivity to polarity—directly reflect characteristics of the primary processes of magnetoreception. It postulates a crucial role for specialized photopigments in the retina. A light-induced electron-transfer reaction creates a spin-correlated radical pair with singlet and triplet states. (3451)
Here is the chemistry from the same article (3452):

image: reaction scheme for the radical-pair mechanism
Markus Tiersch and Hans Briegel address these findings in "Decoherence in the chemical compass: the role of decoherence for avian magnetoreception". They describe the hypothesized paired-electron chemistry through which birds may detect magnetic fields (link):
Certain birds, including the European robin, have the remarkable ability to orient themselves, during migration, with the help of the Earth's magnetic field [3-6]. Responsible for this 'magnetic sense' of the robin, according to one of the main hypotheses, seems to be a molecular process called the radical pair mechanism [7,8] (also, see [9,10] for reviews that include the historical development and the detailed facts leading to the hypothesis). It involves a photo-induced spatial separation of two electrons, whose spins interact with the Earth's magnetic field until they recombine and give rise to chemical products depending on their spin state upon recombination, and thereby to a different neural signal. The spin, as a genuine quantum mechanical degree of freedom, thereby controls in a non-trivial way a chemical reaction that gives rise to a macroscopic signal on the retina of the robin, which in turn influences the behaviour of the bird. When inspected from the viewpoint of decoherence, it is an intriguing interplay of the coherence (and entanglement) of the initial electron state and the environmentally induced decoherence in the radical pair mechanism that plays an essential role for the working of the magnetic compass. (4518)
So the hypothesis is that birds (and possibly other organisms) have evolved ways of exploiting "spin chemistry" to gain a signal from the presence of a magnetic field. What is spin chemistry? Here is a definition from the spin chemistry website (yes, spin chemistry has its own website!) (link):
Broadly defined, Spin Chemistry deals with the effects of electron and nuclear spins in particular, and magnetic interactions in general, on the rates and yields of chemical reactions. It is manifested as spin polarization in EPR and NMR spectra and the magnetic field dependence of chemical processes. Applications include studies of the mechanisms and kinetics of free radical and biradical reactions in solution, the energetics of photosynthetic electron transfer reactions, and various magnetokinetic effects, including possible biological effects of extremely low frequency and radiofrequency electromagnetic fields, the mechanisms by which animals can sense the Earth’s magnetic field for orientation and navigation, and the possibility of manipulating radical lifetimes so as to control the outcome of their reactions. (link)
Tiersch and Briegel go through the quantum-mathematical details of how this process might work for the kinds of molecules found in birds' retinas. Here is their conclusion:
It seems that the radical pair mechanism provides an instructive example of how the behaviour of macroscopic entities, like the European robin, may indeed remain connected, in an intriguing way, to quantum processes on the molecular level. (4538)
This line of thought is still unconfirmed, as Ritz as well as Tiersch and Briegel are careful to emphasize. If confirmed, it would provide an affirmative answer to the question posed above -- are there biological effects of quantum-mechanical events? But even if confirmed, it doesn't seem like an enormously surprising result. It traces out a chemical reaction that proceeds differently depending on whether entangled electrons in molecules stimulated by a photon have been influenced by a magnetic field; this gives the biological system a signal about the presence of a magnetic field that does in fact depend on the quantum states of a pair of electrons. Entanglement is now well confirmed, so this line of thought isn't particularly radical. But it is far less weird than the idea that quantum particles are "conscious", or that consciousness extends all the way down to the quantum level (quantum interactive dualism, as Henry Stapp calls it; link). And it is nowhere nearly as perplexing as the claim that "making up one's mind" is the collapse of a quantum state represented by a part of the brain.

(Of interest on this set of topics is a recent collection, Quantum Physics Meets the Philosophy of Mind, edited by Antonella Corradini and Uwe Meixner. Here is a video in which Hans Briegel discusses research on modeling quantum effects on agents: https://phaidra.univie.ac.at/detail_object/o:300666.)

Thursday, March 10, 2016

Non-generative social facts


Is every social process generated by facts about individuals? For example, consider a television advertising campaign for a General Motors truck. This is a complicated sequence of events, actions, contracts, relationships, and interactions among organizational units as well as individuals. The campaign itself is constituted by the schedule of television spots on which the adverts are broadcast. The nuances of this temporally extended production need to be traced to both individual choices and interactions among organizational units. So in this context, let's ask the question: is this complex social production "generated" by a set of individual-level facts? Not exactly.

Behind the finished campaign lie the organization and entrepreneur who designed the campaign and the company that paid for it (social facts). Simultaneous with the campaign is the suite of facts about the media and the public in virtue of which it makes sense to purchase the spots and broadcast the adverts (also social facts). And subsequent to the campaign are its reception and the consequences of the campaign (once again, social). This description involves social-level facts. These social facts are embodied through the actions and thoughts of individuals; but the causal action is at the level of the organization, not the individuals. It is hard to see the logic in saying that the campaign was generated by the antecedent states of the individuals in their situations. Rather, the campaign was generated by the quasi-intentional activities of several interlocking and interacting organizations, as well as the known social properties of the public and the media.

How does this example fit into the scheme of generativeness and emergence? There is nothing mysterious about this scenario; each of the social units mentioned here has microfoundations at the level of the individuals whose actions and thoughts contribute to its operations. The public is constituted by the part of the population that views the media. Clearly the public's properties supervene on the attitudes and states of mind and action of the individuals. General Motors is a giant corporation, a business organization consisting of many semi-autonomous divisions, located within a market and a regulatory environment. The Marketing division is one of those semi-autonomous divisions. It is broadly commissioned to help position the company in public awareness and to promote sales of the vehicles. The Marketing division may be regarded as an agent with a performance space both within and outside the corporation. If Marketing performs badly -- develops bad content, places ads in front of the wrong demographic, fails to produce the surge of new sales -- the manager is likely to lose his/her job. The sales department of the media organization is similar, with an imperative to sell advertising time slots.

It seems inapt to say that this scenario is generated by antecedent states of individuals. Rather, it is part of the play of the agents and institutions within this kind of social environment, with actors at various levels doing things and being influenced by the play of events and doings, both individual and collective. It is certainly not explanatorily interesting that GM, its Marketing division, the television station WXYZ, WXYZ Sales, and the public are all composed of individual actors. Instead, we want to know what it is about the circumstances facing these various social actors (the corporation, its divisions, the PR firm, ...) in virtue of which they do the things they do: design the graphics, purchase specific packages of time, demand a given price for the time, and react with a new desire to buy a Chevy truck. In other words, we need a social, semiotic, economic, competitive, and organizational account of the activities that transpire here to bring about the advertising campaign.

The logic of this scenario seems quite different from that of Schelling's residential segregation model, where the patterns of segregation are in fact generated by the preferences and decision rules of the participants. In this case the outcome is fundamentally structured at the social level; the individuals merely play their roles within the corporation, the marketing department, the PR firm, etc. It is certainly hard to see how an ABM might help to reproduce the sequence of social activities identified here.
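
For contrast, it is worth recalling how direct the generative logic is in Schelling's case. Here is a minimal one-dimensional sketch in Python (the tolerance parameter and lattice size are illustrative choices of mine, not Schelling's original values):

```python
import random

# Minimal one-dimensional Schelling model: two types of agents on a line,
# each moving to a random empty cell whenever fewer than 30% of its
# occupied neighbors share its type.
SIZE, N_AGENTS, TOLERANCE = 100, 80, 0.3
cells = ["A"] * (N_AGENTS // 2) + ["B"] * (N_AGENTS // 2) + [None] * (SIZE - N_AGENTS)
random.shuffle(cells)

def unhappy(i):
    neighbors = [cells[j] for j in (i - 1, i + 1) if 0 <= j < SIZE and cells[j]]
    return neighbors and sum(n == cells[i] for n in neighbors) / len(neighbors) < TOLERANCE

for _ in range(10000):
    movers = [i for i, c in enumerate(cells) if c and unhappy(i)]
    if not movers:
        break
    i = random.choice(movers)                                   # an unhappy agent
    j = random.choice([k for k, c in enumerate(cells) if c is None])  # an empty cell
    cells[j], cells[i] = cells[i], None                          # relocate

print("".join(c or "." for c in cells))  # long same-type runs emerge
```

The segregated clusters in the printed output are generated by nothing but the agents' preferences and movement rule. The advertising campaign has no analogous micro-rule description; the causal work is done by organizational facts that the individuals merely enact.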

Further, it is evident that there is downward causation in this story. The whole point of the marketing campaign is to change the attitudes of the public through the instrumentality of the media. But likewise, the actors within the various organizations are affected by their roles. They act and choose differently because of their location and history in the corporation. And the structure of WXYZ as a corporation is affected by something higher still -- the market relations within which it exists and the government regulatory environment under which it operates.

This example makes it seem that there is some space between "A is generated by facts at level B" and "A has microfoundations in facts at level B".

So this complicated example of a fairly routine social process seems to be one that directs attention to the causal and intentional properties of the meso-level social structures rather than to the states of agency of the individuals who constitute those structures. And this in turn suggests that it is not the case that all social events are "generated" by the states of mind and action of the individuals who constitute them, even though each of the subordinate events in the sequence possesses microfoundations at the level of the individual actors.

Tuesday, March 8, 2016

Reduction and generativeness


Providing an ontology of complex entities seems to force us to refer to some notion of higher-level and lower-level things. Proteins consist of atoms; atoms consist of protons, electrons, and neutrons; and cells are agglomerations of many things, including proteins. This describes a relation of composition between a set of lower-level things and the higher-level thing. And this in turn seems to involve some kind of notion of "levels" of things in the world. Things at each level have relations and properties constituting the domain of facts at that level, and the properties of the higher-level thing are sometimes different from the properties of the lower-level things. (Not all the properties, of course -- proteins and atoms alike have mass and momentum.) But for the properties that differ, we have an important question to answer: what explains or determines the properties of the higher-level thing? Several positions have been considered:

  • Facts about things and properties of A are generated by facts of B
  • Facts about things and properties of A can be reduced to facts of B
  • Facts about things and properties of A supervene upon properties of B
I want to discuss these relations here, but it's worth recalling the other important relations across levels that are sometimes invoked.
  • Facts about things and properties of A are weakly emergent from properties of B
  • Facts about things and properties of A are strongly emergent from properties of B
  • Facts about things and properties of A are in part independent from the properties of B
  • Facts about things and properties of A causally influence the properties of B

So let's focus here on reduction and generation. These are sometimes thought to be equivalent notions; but they are not. Let's grant that the facts about B jointly serve to generate the facts about A. Then A supervenes upon B, by definition. Do these facts imply that A is reducible to B, or that facts of A can be or should be reduced to facts of B? Emphatically not. Reducibility is a feature of the relationship between bodies of knowledge or theories -- our knowledge of A and our knowledge of B. To reduce A to B means deriving what we know about A from what we know about B. For example, the laws of planetary motion are derivable from the law of universal gravitation: by working through the mathematics of gravity it is possible to derive the orbits of the planets around the sun. So the laws of planetary motion are reducible to the law of universal gravitation.
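
The planetary case can be made concrete in two lines for the textbook special case of circular orbits:

```latex
% Circular-orbit special case: gravity supplies the centripetal force.
\frac{G M m}{r^{2}} \;=\; \frac{m v^{2}}{r}
\qquad\Longrightarrow\qquad
v^{2} = \frac{G M}{r}.

% Substituting the orbital period T = 2\pi r / v gives Kepler's third law:
T^{2} \;=\; \frac{4 \pi^{2}}{G M}\, r^{3}.
```

Kepler's third law is thereby deduced from the lower-level theory; reduction in this sense is an achievement relating theories, which is precisely why it can fail even where generation holds.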

Generativity is not a feature of theories; instead, it is an ontological feature of the world. Physicalism is such a conception: it maintains that facts about the physical body, including the nervous system, jointly generate all mental phenomena. Generativity involves the idea that, given the full reality of the properties and powers of B, the properties of A result. The properties of the entities at level B suffice to generate all the properties of the entities at level A. But there is no assurance that our current knowledge about B permits a mathematical derivation of A. Further, there is no assurance that a "full and complete theory" of B would permit such a derivation -- because there is no assurance that such a theory exists at all. And then there is the issue of computability: it may be radically infeasible to perform the calculations necessary to derive A from B.

And so it is clear that reducibility does not follow from generativeness.

There is a second argument separating generativeness from reducibility as well. This is the fact that there are numerous scientific purposes for which reduction would be unnecessary even if it were feasible. It might be possible to derive the motion of a cannonball from a calculation of the motions of its component molecules. But this would be silly. We have no scientific need or interest in doing so.

So it is fully consistent for us to embrace generativeness and anti-reductionism together. And this position makes very good sense in the case of macro and micro social facts. We can take the view that all social entities are embodied in facts about various individuals, their social interactions, and their states of mind. This implies that social facts are generated by facts at the actor level, or that the facts of A supervene upon the facts of B. And yet we can also be emphatic in affirming that there is no need for, and no general possibility of, reduction from the one level to the other.

Or in other words, the generativeness of the situation is wholly uninformative about the prospects for reduction.

Sunday, March 6, 2016

Critical realism meets peasant studies


Critical realism is a philosophical theory of social ontology and social science knowledge. This philosophy has been expressed through the writings of systematic thinkers such as Roy Bhaskar and Margaret Archer, along with other philosophers and sociologists, over the past 40 years. Most of the leaders have emphasized the systematic nature of the theory of critical realism. It builds on a philosophical base developed by Roy Bhaskar through the application of the transcendental method of philosophy. The theory is now being recommended within sociology as a better way of thinking about sociological method and theory.

Critical realism has a number of very positive aspects for consideration by social scientists. It is inspired by a deep critique of the philosophy of science associated with logical positivism, it offers a clear defense of the idea that there is a social and natural reality which it is the task of scientific inquiry to learn about, and it gives valuable attention and priority to the challenge of discovering concrete causal mechanisms which lead to real outcomes in the natural and social world. There is, however, some tendency for this tradition to express itself in an inward-looking and even dogmatic fashion.

So how can the fields of sociological method and critical realism progress today? One thing is clear: the value and relevance of critical realism do not lie in providing a template for scientific research or for the form that a good scientific research project should take. There are no such templates. Mechanical application of any philosophy, whether critical realism, positivism, or any other theory of science, is not a fruitful way of proceeding as a scientist. With this point understood, however, it is in fact valuable for sociologists and other social scientists to think reflectively and seriously about some of the assumptions about the social world and the nature of social explanation that are involved in critical realism. The advice to look for real and persistent structures and processes underlying observable phenomena, the idea that "generative causal mechanisms" are crucial to processes of change and stability, the ideas associated with morphogenesis, and the idea that causation is not simply a summary of constant conjunction -- these are valuable contributions to social science thinking.

This answers one half of the question raised here: sociological method can benefit from involvement in some open-minded debates inspired by the field of critical realism.

But what about the field of critical realism itself? How can this research community move forward? It would seem that the process of textual argumentation -- "what would Roy say about this question or that question?" -- is not a good way of making progress in critical realism or any other field of philosophy of science. More constructive would be for philosophers and social scientists within the field of critical realism to think open-mindedly about some of the shortcomings and blind spots of this field. And an open-minded consideration of some complementary or competing visions of the social world would strengthen the field as well -- the ideas of heterogeneity, plasticity, the social construction of the self, and assemblage, for example.

I think that one good way of posing this challenge to critical realism might be to undertake careful, rigorous study of very strong examples of social research that involve good inquiry and good theoretical models. The field of critical realism has tended to be too self-contained, with the result that its debates are increasingly sealed off from actual research problems in the social sciences. Careful and non-dogmatic study of extended, clear examples of social inquiry would be very productive.

As a first step, it would be very stimulating to take the empirical and explanatory work of a genuinely innovative social scientist like James Scott and undertake a careful, reflective, and serious investigation of the definition of the research problem, the research methods used, the central theoretical or explanatory ideas introduced, and the overall trajectory and development of this thinker's thought.

Scott's key ideas include moral economy, hidden transcripts, Zomia, weapons of the weak, seeing like a state, and the social reality of anarchism. And Scott attempts to explain social phenomena as diverse as peasant rebellion, resistance to agricultural modernization, the ways in which English novelists represent class conflict, the strategies of the state and its elusive opponents in southeast Asia, and many other topics of rural society. Many of Scott's narratives can be analyzed in terms of the discovery of novel social mechanisms, strategies of resistance and domination, and the workings of large social forces like taxation and conscription. Scott's social worlds are populated by real social actors engaged in concrete social mechanisms and processes that can be known through research. Scott is a realist, but a realist in his own terms: he discovers real social relations, social mechanisms and processes, and modes of social change at the local and national levels, and he puts substantial empirical detail on these things. His way of thinking about peasant society is relational -- he pays close attention to the relationships that exist within a village, across lines of property and kinship, in cooperation towards collective action. He gives a role to the important powers of the state, but always with an understanding that the power of the state must be conveyed through a set of capillaries -- agents in positions extending down to the village level. And in fact, his treatments of anarchism and of seeing like a state sum up many of the mechanisms of control and supervision that traditional states have used to control rural populations. (Scott's work has been discussed frequently in earlier posts.)

In fact, I could imagine a series of carefully chosen case studies of innovative, insightful social researchers who have changed the terms of debate and understanding in a particular field. Other examples might include researchers such as Robert Putnam, Robert Axelrod, Charles Tilly, Michael Mann, Clifford Geertz, Albert Soboul, Simon Schama, Bin Wong, Robert Darnton, and Benedict Anderson.

Studies like these would have the potential to significantly broaden the terms of discussion and debate within the field of CR and to help it engage more deeply with social scientists in several disciplines. This kind of inquiry might help open up some of the blind spots as well. These discussions might give greater importance to processes leading to the social construction of the self, greater awareness of the heterogeneity of social processes, and a bit more openness to philosophical ideas outside the field's own corpus. No philosophy can proceed solely on the basis of its own premises; interaction with the practices of innovative scientists can significantly broaden the approach in a positive way.

Saturday, February 27, 2016

Values, directions, and action

Several earlier posts have raised the question of rational life planning. What is involved in orchestrating one's goals and activities in such a way as to rationally create a good life in the fullness of time?

We have seen that there is something wildly unlikely about the idea of a fully developed, calculated life plan. Here is a different way of thinking about this question, framed around directionality and values rather than goals and outcomes. We might think of life planning in these terms:
  • The actor frames a high-level life conception -- how he/she wants to live, what to achieve, what activities are most valued, what kind of person he/she wants to be. It is a work in progress.
  • The actor confronts the normal developmental issues of life through limited moments in time: choice of education, choice of spouse, choice of career, strategies within the career space, involvement with family, level of involvement in civic and religious institutions, time and activities spent with friends, ... These are week-to-week and year-to-year choices, some more deliberate than others.
  • The actor makes choices in the moment in a way that combines short-term and long-term considerations, reflecting the high-level conception but not dictated by it.
  • The actor reviews, assesses, and updates the life conception. Some goals are reformulated; some are adjusted in terms of priority; others are abandoned.
This picture looks quite a bit different from more architectural schemes for creating and implementing a life plan considered in earlier posts, including the view that Rawls offers for conceiving of a rational plan of life. Instead of modeling life planning after a vacation trip assisted by an AAA TripTik (turn-by-turn instructions for how to reach your goal), this scheme looks more like the preparation and planning that might have guided a great voyage of exploration in the sixteenth century. There were no maps, the destination was unknown, the hazards along the way could only be imagined. But there were a few guiding principles of navigation -- "Keep making your way west," "Sail around the biggest storms," "Strive to keep reserves for unanticipated disasters," "Maintain humane relations with the crew." And, with a modicum of good fortune, these maxims might be enough to lead to discovery.

This scheme is organized around directionality and regular course correction, rather than a blueprint for arriving at a specific destination. And it appears to be, all around, a more genuine understanding of what is involved in making reflective life choices. Fundamentally this conception involves having in the present a vision of the dimensions of an extended life that is specifically one's own -- a philosophy, a scheme of values, a direction-setting self-understanding, and the basics needed for making near-term decisions chosen for their compatibility with the guiding life philosophy. And it incorporates the idea of continual correction and emendation of the plan, as life experience brings new values and directions into prominence.

The advantage of this conception of rational life planning is that it is not heroic in its assumptions about the scope of planning and anticipation. It is a scheme that makes sense of the situation of a person in the limited circumstances of a particular point in time. It doesn't require that the individual have a comprehensive grasp of the whole -- the many contingencies that will arise, the balancing of goods that must be adjusted in thought over the whole of the journey, the tradeoffs that are demanded across multiple activities and outcomes, and the specifics of the destination. And yet it permits the person to travel through life by making choices that conform in important ways to the high-level conception that guides him or her. And somehow it brings to mind the outlooks offered by those great philosophers of life, Montaigne and Lucretius.


Thursday, February 25, 2016

Guest post by Gianluca Pozzoni on political entities


Gianluca Pozzoni is a PhD Candidate in Political Studies at the University of Milan, Italy. His interests span the foundations of the social sciences, and he has written on Marxism, methodological individualism, and the status of social structures. Thank you, Gianluca, for contributing this stimulating guest post.

BY GIANLUCA POZZONI

Daniel Little’s recent post on Assemblage theory as heuristic raises important and thought-provoking issues for social theorizing.

As Little understands it, Manuel DeLanda’s theory put forth in A New Philosophy of Society can be seen as a manifesto for non-essentialist and non-reductionist social theories. In this view, social entities do not have a fixed place in a vertical hierarchy that moves from the building blocks of the social world (e.g. agents) all the way up to large-scale structures (e.g. global markets).

Instead, the components of the social world differ according to the “assemblages” under consideration. So, for instance, organizations are assemblages of people; nation-states are assemblages of cities, people, and organizations; cities are assemblages of people, organizations, as well as buildings and infrastructures; and so on. Thus understood, assemblage theory allows for no basic constituents of the social because it does away with the very idea of society as a structured totality with a “basis” and a “summit”.

This seems to go in the direction of a “flat social ontology” as described by Daniel Little here. A flat model of social reality would position actors and their interaction networks (e.g. organizations) all at the same level, without further assumptions as to how some causal powers are “emergent” from others (if at all).

This perspective has undeniable attractiveness. For one thing, it is theoretically parsimonious. It does away with somewhat obscure or ill-defined notions of “emergence” while at the same time allowing for aggregate social entities to be irreducible to their components, for instance by assuming that they have at least an independent functioning. Furthermore, it does not presuppose an abstract layering of the social world, let alone one metaphysically pre-ordered into “higher” and “lower” levels.

Nonetheless, Little has already detailed some limitations of the flat-ontology perspective. For Little, conceiving social reality as flat ultimately boils down to assuming an ontology like the one associated with spare versions of methodological individualism.

Here I wish to point out a different problematic aspect of this approach.

Even setting aside the depth ordering of social aggregates, social theory often makes use of broad categories that help group together heterogeneous classes of phenomena for explanatory purposes. These are, for instance, “the polity”, “the economy”, “the military”, and the like.

What ontological status do these categories have? Let us consider politics. In his Contentious Performances (2008), Charles Tilly writes:
We enter the realm of politics when we interact with agents of governments, either dealing with them directly or engaging in activities bearing on governmental rights, regulations, and interests. (p. 6)
However restrictive this definition may be, it is able to identify the political domain in terms of the typology of networks involved – namely, governmental institutions. Accordingly, Tilly et al. (2001) could use it to group together and explain phenomena as diverse as the Watergate scandal and the Mau Mau rebellion in 1950s Kenya under the common label of “contentious politics”.

In a layered social ontology, governmental institutions can be seen as making up a level of the social in its own right. As Little puts it:
For example, might the state be a level-2 entity, in that it encompasses organizations and individuals and it possesses new causal properties not present at level 1? In principle this seems possible. The state is a complex network of organizations and individuals. And it is logically possible that new causal powers emerge that depend on both base and level 1, but that do not require reduction to those lower-level properties.
If this can be granted, politics can be ontologically grounded in a level of the social. Specifically, we may confine political ontology to all phenomena that involve government-level causal powers.

Flat social ontologies, however, reject the very notion of “levels” of the social. At best, compound social entities exist as relations among their components; they may well have an independent functioning (link), but no “higher-level” or “emergent” causal powers. The same holds for states, governments, and the like.

What then would a flat political ontology look like? Perhaps it may be helpful to refer to Adrian Leftwich’s distinction in What is Politics: The Activity and its Study between politics as “process” and politics as “arena”:
The latter, or arena, approach tends to have a narrower and sharper focus (normally the state and the institutions of government and local government – sometimes, in a more comparative context, including kings, chiefs or emperors and their courts and their relations with the public). (p. 13)
The theory of politics as arena seems to fit quite nicely the stratified approach to social ontology. The arena metaphor suggests that governmental institutions – national or otherwise – make up a relatively autonomous set of interactions quite distinct from other forms of interaction. This can be supported from an ontological point of view by assuming that such institutions are endowed with causal powers that are all situated at one specific level.

Leftwich contrasts the arena model of politics with the idea of politics as “a general process [or set of processes – G.P.] which is not confined to certain institutional arenas or sites” (p. 14). This dynamic idea may seem appealing to those who do not wish to build political ontology as a “regional ontology”. The processual definition of politics does not make reference to the causal powers of any specific entities and therefore does not necessarily require an ontological layering of them.

However, the obvious question then arises as to what makes processes political as opposed to, say, economic. This question seems to be of crucial relevance if the category of politics is to serve explanatory purposes. As Leftwich puts it:
… does such an encompassing view mean that every human interaction is political in some respect? If so, and if politics is thus so broadly defined, what is left that is distinctive about it? (p. 14)
A catch-all definition of politics, then, seems to be redundant and to have little explanatory use. Unfortunately, the flat theory of society alone hardly provides any guidance for further theorizing along this line.

[Query from Dan Little in response to the final question from Leftwich: Is it possible that the difference between economic and political processes is not after all an ontological difference, but rather a pragmatic difference of classification for us? In other words, it is not a fact about the world but about our interests that "constitution" and "wage labor" are classified as "political/legal" and "economic".] 

Friday, February 19, 2016

Causal diagrams and causal mechanisms


There is a long history of the use of directed causal diagrams to represent hypotheses about causation. Can the mathematics and graphical systems created for statistical causal modeling be adapted to represent and evaluate hypotheses about causal mechanisms and outcomes?

In the causal modeling literature the structure of a causal hypothesis is something like this: variable T increases/decreases the probability of the occurrence of outcome E. This is the causal relevance criterion described by Wesley Salmon in Scientific Explanation and the Causal Structure of the World. It is a fundamentally statistical understanding of causality.
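To make the statistical reading concrete, here is a minimal simulation sketch in Python; the variable names and probabilities are invented for illustration and are not drawn from Salmon:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical binary cause T and outcome E; by construction T raises
# the chance of E from 0.2 to 0.5 (made-up numbers).
T = rng.random(n) < 0.3
E = np.where(T, rng.random(n) < 0.5, rng.random(n) < 0.2)

print("P(E)     =", E.mean())     # unconditional probability, about 0.29
print("P(E | T) =", E[T].mean())  # conditional on T, about 0.50
```

On this criterion, T counts as causally relevant to E simply because P(E | T) > P(E).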

Here is a classic causal path model by Blau and Duncan indicating the relationships among a number of causal factors in bringing about an outcome of interest -- "respondent's first job".


This construction aims at joining a qualitative hypothesis about the causal relations among a set of factors with quantitative measurements of the correlations and conditional probabilities that support these causal relations. The whole construction often takes its origin in a multivariate regression model.
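Here, as a sketch only, is how a toy path model of this kind might be simulated and estimated in Python; the variables and coefficients are stand-ins, not Blau and Duncan's actual measures:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Stipulated toy path structure (hypothetical coefficients):
# background -> education -> first_job, plus a direct background path.
background = rng.normal(size=n)
education = 0.5 * background + rng.normal(size=n)
first_job = 0.4 * education + 0.2 * background + rng.normal(size=n)

# Each structural equation is estimated by regressing the outcome on
# its parent variables; the fitted coefficients are the path coefficients.
X = np.column_stack([education, background, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, first_job, rcond=None)
print(coef[:2])  # approximately [0.4, 0.2], the stipulated paths
```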

Aage Sørensen describes the underlying methodological premise of quantitative causal research in these terms in his contribution to Frontiers of Sociology (Annals of the International Institute of Sociology Vol. 11):
Understanding the association between observed variables is what most of us believe research is about. However, we rarely worry about the functional form of the relationship. The main reason is that we rarely worry about how we get from our ideas about how change is brought about, or the mechanisms of social processes, to empirical observation. In other words, sociologists rarely model mechanisms explicitly. In the few cases where they do model mechanisms, they are labeled mathematical sociologists, not a very large or important specialty in sociology. (370)
My question here is whether this scheme of representation of causal relationships and the graphical schemes that have developed around it are useful for the analytics of causal mechanisms.

The background metaphysics assumed in the causal modeling literature is Humean and "causal-factor" based: such-and-so factor increases the probability of occurrence of an outcome or an intermediate variable, the simultaneous occurrence of A and B increases the probability of the outcome, and so on. Quoting Peter Hedström on causal modeling:
In the words of Lazarsfeld (1955: 124-5), "If we have a relationship between x and y; and if for any antecedent test factor the partial relationships between x and y do not disappear, then the original relationship should be called a causal one." (Dissecting the Social: On the Principles of Analytical Sociology)
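Lazarsfeld's test is easy to illustrate with a small simulation (again with invented variables): let an antecedent factor z drive both x and y, with no effect of x on y at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Antecedent factor z is a common cause of x and y; x does not cause y.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)

print(np.corrcoef(x, y)[0, 1])  # raw x-y association, about 0.5

# Partial out the antecedent z from both variables; what remains of
# the x-y relationship is essentially zero.
rx = x - np.polyfit(z, x, 1)[0] * z
ry = y - np.polyfit(z, y, 1)[0] * z
print(np.corrcoef(rx, ry)[0, 1])  # about 0.0
```

Because the partial relationship disappears once the antecedent test factor is controlled, Lazarsfeld's criterion correctly declines to call the original x-y association causal.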
The current iteration of causal modeling centers on the directed acyclic graph (DAG). Felix Elwert provides an accessible introduction to directed acyclic graphs in his contribution to Handbook of Causal Analysis for Social Research (link). Here is a short description provided by Elwert:
DAGs are visual representations of qualitative causal assumptions: They encode researchers’ expert knowledge and beliefs about how the world works. Simple rules then map these causal assumptions onto statements about probability distributions: They reveal the structure of associations and independencies that could be observed if the data were generated according to the causal assumptions encoded in the DAG. This translation between causal assumptions and observable associations underlies the two primary uses for DAGs. First, DAGs can be used to prove or disprove the identification of causal effects, that is, the possibility of computing causal effects from observable data. Since identification is always conditional on the validity of the assumed causal model, it is fortunate that the second main use of DAGs is to present those assumptions explicitly and reveal their testable implications, if any. (246)
A DAG can be interpreted as a non-parametric structural equation model, according to Elwert. (Non-parametric here means that no particular functional form or distribution -- normality, for example -- is assumed for the relationships among the variables.) Elwert credits the development of the logic of DAGs to Judea Pearl and Peter Spirtes, along with other researchers within the causal modeling community.
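The rule that maps a DAG's structure onto its implied independencies is d-separation. Here is a compact Python sketch of one standard formulation (separation in the moralized ancestral graph), using NetworkX; it is an illustration of the logic, not code from Pearl, Spirtes, or Elwert:

```python
import networkx as nx
from itertools import combinations

def d_separated(G, x, y, z):
    """Test whether x and y are d-separated given the set z in DAG G,
    via the moralization criterion: take the ancestral subgraph of
    {x, y} | z, marry co-parents, drop edge directions, delete z, and
    check whether x and y are still connected."""
    nodes = {x, y} | set(z)
    ancestral = set(nodes)
    for n in nodes:
        ancestral |= nx.ancestors(G, n)
    H = G.subgraph(ancestral)
    # Moralize: connect every pair of parents of a common child.
    M = nx.Graph()
    M.add_nodes_from(H.nodes())
    M.add_edges_from(H.edges())
    for n in H.nodes():
        for p, q in combinations(sorted(H.predecessors(n)), 2):
            M.add_edge(p, q)
    M.remove_nodes_from(set(z))
    return not nx.has_path(M, x, y)

# A chain X -> M -> Y: conditioning on M blocks the association.
chain_dag = nx.DiGraph([("X", "M"), ("M", "Y")])
print(d_separated(chain_dag, "X", "Y", set()))   # False
print(d_separated(chain_dag, "X", "Y", {"M"}))   # True

# A collider X -> C <- Y: conditioning on C *opens* the path.
collider = nx.DiGraph([("X", "C"), ("Y", "C")])
print(d_separated(collider, "X", "Y", set()))    # True
print(d_separated(collider, "X", "Y", {"C"}))    # False
```

The collider case is the instructive one: conditioning on a common effect creates an association where none existed, which is exactly the kind of non-obvious implication the graphical machinery is designed to expose.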

Johannes Textor and a team of researchers have implemented DAGitty, a platform for creating and using DAGs in appropriate fields, including especially epidemiology (link). A crucial feature of DAGitty is that it is not solely a graphical program for drawing graphs of possible causal relationships; rather, it embodies an underlying logic which generates expected statistical relationships among variables given the stipulated relationships on the graph. Here is a screenshot from the platform:



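Reusing the d_separated helper sketched above, one can mimic in miniature DAGitty's enumeration of a model's testable implications; the example graph here is invented:

```python
from itertools import chain, combinations

def implied_independencies(G):
    """List (x, y, z) triples with x d-separated from y given z --
    roughly the testable implications a tool like DAGitty reports."""
    nodes = list(G.nodes())
    for x, y in combinations(nodes, 2):
        if G.has_edge(x, y) or G.has_edge(y, x):
            continue  # adjacent variables are never independent
        rest = [n for n in nodes if n not in (x, y)]
        subsets = chain.from_iterable(
            combinations(rest, k) for k in range(len(rest) + 1))
        for z in subsets:
            if d_separated(G, x, y, set(z)):
                yield x, y, set(z)

# Invented example: Z confounds X and Y, and X affects Y through W.
G = nx.DiGraph([("Z", "X"), ("Z", "Y"), ("X", "W"), ("W", "Y")])
for x, y, z in implied_independencies(G):
    print(x, "_||_", y, "given", z or "{}")
```

For this graph the sketch prints two implications (Z independent of W given X; X independent of Y given Z and W), which is the "structure of associations and independencies" Elwert describes, made mechanical.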
The question to consider here is whether there is a relationship between the methodology of causal mechanisms and the causal theory reflected in these causal diagrams. 

It is apparent that the underlying ontological assumptions associated with the two approaches are quite different. Causal mechanisms theory is generally associated with a realist approach to the social world, and generally rejects the Humean theory of causation. The causal diagram approach, by contrast, is premised on the Humean and statistical approach to causation. A causal mechanisms hypothesis is not fundamentally evaluated in terms of the statistical relationships among a set of variables, whereas a standard causal model is wholly intertwined with the mathematics of conditional correlation.

Consider a few examples. Here is a complex graphical representation of a process understood in terms of causal mechanisms from McGinnes and Elandy, "Unintended Behavioural Consequences of Publishing Performance Data: Is More Always Better?" (link):



Plainly this model is impossible to evaluate statistically by attempting to measure each of the variables; instead, the researchers proceed by validating the individual mechanisms identified here as well as the direction of influence they have on other intermediate outcomes. The outcome of interest is "quality of learning" at the center of the graph; and the diagram attempts to represent the complex structure of causal influences that exist among several dozen mechanisms or causal factors.

Here is another example of a causal mechanisms path diagram, this time representing the causal system involved in drought and mental health by Vins, Bell, Saha, and Hess (link).


Here too the model is not offered as a statistical representation of covariance among variables; rather, it is a hypothetical sketch of the factors at play in the mechanisms leading from drought to depression and anxiety in a population. And the assessment of the model should not take the form of a statistical evaluation (a non-parametric structural equation model), but rather a piecemeal verification of the validity of the specific mechanisms cited. (John Gerring argues that this is a major weakness in causal mechanisms theory, however, in "Causal Mechanisms? Yes, But ..." (link).)

It seems, therefore, that the similarity between a causal model graph (a DAG) and a causal mechanisms diagram is only skin-deep. Fundamentally the two approaches make very different assumptions about both ontology (what a causal relationship is) and epistemology (how we should empirically evaluate a causal claim). So it seems unlikely that it will be fruitful for causal-mechanisms theorists to adapt methods like DAGs to represent the causal claims they want to advance and evaluate.

Thursday, February 11, 2016

Assemblage theory as heuristic


In A New Philosophy of Society: Assemblage Theory and Social Complexity Manuel DeLanda takes up one of Deleuze's key ideas. This is the idea of "assemblage", and it has been discussed here several times previously (link). (See DeLanda's extensive EGS lecture on assemblage theory below.) Here is a preliminary discussion of assemblage in New Philosophy of Society.
Today, the main theoretical alternative to organic [Hegelian] totalities is what the philosopher Gilles Deleuze calls assemblages, wholes characterized by relations of exteriority. These relations imply, first of all, that a component part of an assemblage may be detached from it and plugged into a different assemblage in which its interactions are different. In other words, the exteriority of relations implies a certain autonomy for the terms they relate, or as Deleuze puts it, it implies that 'a relation may change without the terms changing'. Relations of exteriority also imply that the properties of the component parts can never explain the relations which constitute a whole, that is, 'relations do not have as their causes established ...' although they may be caused by the exercise of a component's capacities. In fact, the reason why the properties of a whole cannot be reduced to those of its parts is that they are the result not of any aggregation of the components' own properties but of the actual exercise of their capacities. These capacities do depend on a component's properties but cannot be reduced to them since they involve reference to the properties of other interacting entities. Relations of exteriority guarantee that assemblages may be taken apart while at the same time allowing that the interactions between parts may result in a true synthesis. (10-11)
In addition to the exteriority of relations, the concept of assemblage is defined along two dimensions. One dimension or axis defines the variable roles which an assemblage's components may play, from a purely material role at one extreme of the axis, to a purely expressive role at the other extreme.... The other dimension defines variable processes in which these components become involved and that either stabilize the identity of an assemblage, by increasing its degree of internal homogeneity or the degree of sharpness of its boundaries, or destabilize it. (12)
In an illuminating discussion of some of Fernand Braudel's comments about medieval villages, DeLanda writes:
This brief description yields a very clear picture of a series of differently scaled assemblages, some of which are component parts of others which, in turn, become parts of even larger ones. (18)
What does this mean in practical terms? As a first approximation, the core idea of assemblage is that social things (cities, structures, ideologies) are composed of overlapping and contingent collections of heterogeneous social activities and practices. The relations among these activities and practices are contingent, and the properties of the composite thing -- the assemblage -- are likewise a contingent and "emergent" sum of the properties of the component threads. The composite has no "essence" -- just a contingent and changeable set of properties. Here is the thumbnail description I provided in the earlier post:
Fundamentally the idea is that there does not exist a fixed and stable ontology for the social world that proceeds from "atoms" to "molecules" to "materials". Rather, social formations are assemblages of other complex configurations, and they in turn play roles in other, more extended configurations. (link)
Here I want to ask a very simple preliminary question: What is the intellectual role of assemblage theory for sociology and for the philosophy of social science? Is assemblage theory a substantive social theory? Is it a guide to research and methodology? Or is it an ontology?

I think we do best to understand assemblage theory as a high-level and abstract ontological framework, an abstract description of the nature of the social world. It highlights the pervasive heterogeneity of phenomena in the social world. But it does not provide a substantive theory of what those component threads are; that is for concrete sociological theory to work out. Unlike rational choice theory, Marxist theory, or pragmatist action theory -- each of which rests on a substantive core set of ideas about the fundamentals of social action and structure -- assemblage theory is neutral with respect to these topics.

So assemblage theory is not a guide to the constituents of the social world; it is nothing like atomic theory or Mendeleev's periodic table of the elements. However, I believe the theory is indeed methodologically helpful. Exploring assemblage theory is a potentially valuable activity for social scientists and philosophers, because the theory encourages us to study component systems and underlying social processes rather than looking for unified theories of large unified social objects. In this way it gives value and direction to multi-theoretical, inter-disciplinary approaches.

Moreover, this approach encourages social scientists to arrive at partial explanations of social features by discerning the dynamics of some of the components. These accounts are necessarily incomplete, because they ignore many other constituents of the assembled whole. And yet they are potentially explanatory, when the dynamics being studied have the ability to generate trans-assemblage characteristics (continuity, crisis). (This seems to have some resonance with Roy Bhaskar's idea that the social world is an "open" system of causation; A Realist Theory of Science.)

So assemblage theory is not a substantive social theory. It doesn't prescribe any specific ideas about the components, layers, laminations, or threads out of which social phenomena are composed. Instead, it offers a vision of how we should think of all such constructions in the social world. We should be skeptical about the appearance of unity and coherence in an extended social entity (e.g. the Justice Department or the Muslim world), and look instead to discover some of the heterogeneous and independent processes that underlie the surface appearance. And it gives ontological support for some of the theoretical inclinations of comparative historical sociology (Tilly, Steinmetz, Mann): look for the diversity of social arrangements and the context-dependent conjunctural causes that underlie complex historical events.

Here is a lecture by DeLanda on assemblage theory.



(I chose the illustration of a circus at the top because a circus illustrates some of the layered compositionality that assemblage theory postulates: multiple agents playing multiple roles; transportation activities and business procedures; marketing ploys and aesthetic creativity; and many things happening in the three rings at the same time.)