Tuesday, July 28, 2015

Supervenience, isomers, and social isomers


A prior post focused on the question of whether chemistry supervenes upon physics, and I relied heavily on R. F. Hendry's treatment of the way that quantum chemistry attempts to explain the properties of various molecules based on fundamentals of quantum mechanics. The post prompted a good deal of valuable discussion, both from readers who agree with Hendry and from those who do not.

It occurs to me that there is a simpler reason for thinking that chemistry fails to supervene upon the physics of atoms, however, which does not involve the subtleties of quantum mechanics. This is the existence of isomers for various molecules. An isomer is a molecule with the same chemical composition as another but a different geometry and different chemical properties. From the facts about the constituent atoms we cannot infer uniquely what geometry a molecule consisting of these atoms will take. Instead, we need more information external to the physics of the atoms involved; we need an account of the path of interactions that the atoms took in "folding" into one isomer or the other. Therefore chemistry does not supervene upon the quantum-mechanical or physical properties of atoms alone.

For example, the properties of the normal prion protein and its isomer, the infectious prion protein, are not fixed by the constituent elements; the geometries associated with these two compounds result from other causal influences. The constituent elements are compatible with both non-equivalent expressions. The prion molecules do not supervene upon the properties of the constituent elements. Which isomer emerges is a matter of a contingent, path-dependent process.
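A more familiar textbook pair makes the same one-to-many point schematically. Here is a minimal sketch in Python (the compounds and approximate boiling points are standard chemistry facts, included only for illustration; nothing here is drawn from Hendry's text):

```python
# Two distinct molecules share the molecular formula C2H6O but have
# different bonding geometries and very different chemical properties.
isomers = {
    "C2H6O": [
        {"name": "ethanol",        "boiling_point_C": 78.4,  "class": "alcohol"},
        {"name": "dimethyl ether", "boiling_point_C": -24.0, "class": "ether"},
    ],
}

# Composition alone does not determine the molecule: the map from
# formula to structure (and hence to chemical properties) is one-to-many.
assert len(isomers["C2H6O"]) > 1
```

If molecular properties supervened on composition alone, this map would have to be one-to-one.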

It is evident that this is not an argument that chemistry does not supervene upon physics more generally, since the history of interactions through which a given isomer emerges is itself a history of physical interactions. But it does appear to be a rock-solid refutation of the idea that molecules supervene upon the atoms of which they are constituted.

Significantly, this example appears to have direct implications for the relation between social facts and individual actors. If we consider the possibility of "social isomers" -- social structures consisting of exactly similar actors but different histories and different configurations and causal properties in the present -- then we also have a refutation of the idea that social facts supervene upon the actors of which they are constituted. Instead, we would need to incorporate the "path-dependent" series of interactions that led to the formation of one "geometry" of social arrangements rather than another, in addition to the full suite of properties associated with each individual actor. So QED -- social structures do not supervene on the features of the actors. And if some of the events that influence the emergence of one social structure rather than another are stochastic or random -- one social isomer instead of its compositional equivalent -- then at best social structures supervene on individuals conjoined with chance events in a path-dependent process.

There has been much discussion of the question of multiple realizability -- the idea that one higher-level structure may correspond to multiple underlying configurations of components and processes. But so far as I have been able to see, there has been no discussion of the converse possibility -- multiple higher-level structures corresponding to a single underlying configuration. And yet this is precisely what is the case in chemistry for isomers, and in the hypothetical but plausible possibility sketched here for "social isomers". This, indeed, is the central lesson of the discovery of path dependence in social outcomes.

Sunday, July 26, 2015

Is chemistry supervenient upon physics?


Many philosophers of science and physicists take it for granted that "physics" determines "chemistry". Or in terms of the theory of supervenience, it is commonly supposed that the domain of chemistry supervenes upon the domain of fundamental physics. This is the thesis of physicalism: the idea that all causation ultimately depends on the causal powers of the phenomena described by fundamental physics.

R. F. Hendry takes up this issue in his contribution to Davis Baird, Eric Scerri, and Lee McIntyre's very interesting volume, Philosophy of Chemistry. Hendry takes the position that this relation of supervenience does not obtain; chemistry does not supervene upon fundamental physics.

Hendry points out that the dependence claim turns crucially on two things: first, what aspects of physics are to be considered; and second, what kind of dependency we have in mind between higher and lower levels. For the first question, he proposes that we think about fundamental physics -- quantum mechanics and relativity theory (174). For the second question, he enumerates several different kinds of dependency: supervenience, realization, token identity, reducibility, and derivability (175). In discussing the macro-property of transparency in glass, he follows Jaegwon Kim in maintaining that transparency is "nothing more" than the features of the microstructure of glass that permit it to transmit light. But here is a crucial qualification:
But as Kim admits, this last implication only follows if it is accepted that “the microstructure of a system determines its causal/nomic properties” (283), for the functional role is specified causally, and so the realizer’s realizing the functional property that it does (i.e., the realizer–role relation itself) depends on how things in fact go in a particular kind of system. For a microstructure to determine the possession of a functional property, it must completely determine the causal/nomic properties of that system. (175)
Hendry argues that the key issue underlying claims of dependence of B upon A is whether there is downward causation from the level of chemistry (B) to the physical level (A), or whether, on the contrary, physics is "causally complete". If the causal properties of the higher level are fully fixed by the causal properties of the underlying level, then supervenience is possible; but if the higher level has causal properties that permit influence on the lower level, then supervenience is not possible.

In order to gain insight into the specific issues arising concerning chemistry and physics, Hendry makes use of the "emergentist" thinking associated with C.D. Broad. He finds that Broad offers convincing arguments against "Pure Mechanism", the view that all material things are determined by the micro-physical level (177). Here are Broad's two contrasting possibilities for understanding the relations between higher levels and the physical micro-level:
(i) On the first form of the theory the characteristic behavior of the whole could not, even in theory, be deduced from the most complete knowledge of the behavior of its components, taken separately or in other combinations, and of their proportions and arrangements in this whole . . .
(ii) On the second form of the theory the characteristic behavior of the whole is not only completely determined by the nature and arrangements of its components; in addition to this it is held that the behavior of the whole could, in theory at least, be deduced from a sufficient knowledge of how the components behave in isolation or in other wholes of a simpler kind (1925, 59). [Hendry, 178]
The first formulation describes "emergence", whereas the second is "mechanism". In order to give more contemporary expression to the two views Hendry introduces the key concept of quantum chemistry, the Hamiltonian for a molecule. A Hamiltonian is an operator describing the total energy of a system. A "resultant" Hamiltonian is the operator that results from identifying and summing up all forces within a system; a configurational Hamiltonian is one that has been observationally adjusted to represent the observed energies of the system. The first version is "fundamental", whereas the second version is descriptive.
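To fix ideas, here is a schematic version of a resultant Hamiltonian for an isolated molecule, in standard textbook notation (this is not Hendry's own formula, just the familiar Coulomb Hamiltonian): the kinetic energies of the electrons and nuclei plus all of their pairwise electrostatic interactions.

```latex
\hat{H} = -\sum_i \frac{\hbar^2}{2 m_e}\nabla_i^2
          -\sum_\alpha \frac{\hbar^2}{2 M_\alpha}\nabla_\alpha^2
          +\sum_{i<j} \frac{e^2}{4\pi\varepsilon_0 r_{ij}}
          -\sum_{i,\alpha} \frac{Z_\alpha e^2}{4\pi\varepsilon_0 r_{i\alpha}}
          +\sum_{\alpha<\beta} \frac{Z_\alpha Z_\beta e^2}{4\pi\varepsilon_0 r_{\alpha\beta}}
```

Here i and j range over electrons and α and β over nuclei. A configurational Hamiltonian, by contrast, clamps the nuclear coordinates at an assumed molecular geometry and treats them as fixed parameters rather than dynamical variables.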

Now we can pose the question of whether chemistry (the behavior of molecules) is fixed by the resultant Hamiltonian for the components of the atoms involved (electrons, protons, neutrons) and the forces that they exert on each other. Or, on the other hand, does quantum chemistry achieve its goals by arriving at configurational Hamiltonians for molecules, and deriving properties from these descriptive operators? Hendry finds that the latter is the case for existing derivations; and this means that quantum chemistry (as it is currently practiced) does not derive chemical properties from fundamental quantum theory. Moreover, constructing the configurational Hamiltonians requires a stipulated description of the hypothesized geometry of the molecule and the assumption that the nuclei move slowly relative to the electrons (the Born-Oppenheimer approximation). But this is information at the level of chemistry, not fundamental physics. And it implies downward causation from the level of chemical structure to the level of fundamental physics.
Furthermore, to the extent that the behavior of any subsystem is affected by the supersystems in which it participates, the emergent behavior of complex systems must be viewed as determining, but not being fully determined by, the behavior of their constituent parts. And that is downward causation. (180)
So chemistry does not derive from fundamental physics. Here is Hendry's conclusion, supporting pluralism and anti-reductionism in the case of chemistry and physics:
On the other hand is the pluralist version, in which physical law does not fully determine the behavior of the kinds of systems studied by the special sciences. On this view, although the very abstractness of the physical theories seems to indicate that they could, in principle, be regarded as applying to special science systems, their applicability is either trivial (and correspondingly uninformative), or if non-trivial, the nature of scientific inquiry is such that there is no particular reason to expect the relevant applications to be accurate in their predictions.... The burden of my argument has been that strict physicalism fails, because it misrepresents the details of physical explanation (187)
Hendry's argument has a lot in common with Herbert Simon's arguments about system complexity (link) and with Nancy Cartwright's arguments about the limitations of (real) physics' capability of representing and calculating the behavior of complex physical systems based on first principles (link). In each case we get a pragmatic argument against reductionism, and a weakened basis for assuming a strict supervenience relation between higher-level structures and a limited set of supposedly fundamental building blocks. What is striking is that Hendry's arguments undercut the reductionist impulse at what looks like its most persuasive juncture -- the relationship between quantum physics and quantum chemistry.


Thursday, July 23, 2015

Microfoundations for rules and ascriptions




One of the more convincing arguments for the existence of social facts that lie above the level of individual actors is the social reality of rules and ascriptive identities. Bob and Alice are married by Reverend Green at 7 pm, July 1, 2015. The social fact that Bob and Alice are now married is not simply a concatenation of facts about their previous motions, beliefs, and utterances. Rather, it depends also on several trans-individual circumstances: first, that their behaviors and performances conform to a set of legal rules governing marriage (e.g., that neither was married at the time of their marriage to each other, and that they had secured a valid marriage license from the county clerk); and second, that various actors in the event possess a legal identity and qualification that transcend the psychological and observational properties they possess. (Reverend Green is in fact a legally qualified agent of a denomination that gives him the legal authority to perform the act of marriage between two qualified adults.) If Bob has permanently forgotten an earlier marriage to Francine, contracted in a moment of intoxication, or if Reverend Green is an imposter, then the correct performance of each of the actions of the ceremony nonetheless fails to secure the legal act of "marriage". Bob and Alice are not married if these prior conditions are not satisfied. So the social fact that Bob and Alice are married does not depend exclusively on their performance of a specific set of actions and utterances.

Is this kind of example a compelling refutation of the thesis of ontological individualism (as Brian Epstein believes it is; link)? John Searle thinks that facts like these are fundamentally important in the social world; he refers to them as "status functions" (link). And Epstein's central examples of supra-individual social facts have to do with membership and ascriptive status. However, several considerations suggest to me that the logical status of rules and ascriptions does not have a lot of importance for our understanding of the ontology of the social world.

First, ascriptive properties are ontologically peculiar. They are dependent upon presuppositions and implicatures that cannot be fully validated in the present. Consider the contrast between these two statements about Song Taizu, founder of the Song Dynasty: "Song was a military and political mastermind" and "Song was legitimate emperor of China." The former statement is a factual statement about Song's embodied characteristics and talents. The latter is a complex historical statement with debatable presuppositions. The truth of the statement turns on our interpretation of the legal status of the seven-year-old "Emperor" whom he replaced. It is an historical fact that Song ruled long and effectively as chief executive; it is a legal abstraction to assert that he was "legitimate emperor".

Second, it is clear that systems of rules have microfoundations if they are causally influential. There are individuals and concrete institutions who convey and interpret the rules; there are prosecutors who take offenders to task; there are libraries of legal codes and supporting interpretations that constitute the ultimate standard of adjudication when rules and behavior come into conflict. And individuals have (imperfect) grasp of the systems of rules within which they live and act -- including the rule that specifies that ignorance is no excuse for breach of law. So it is in fact feasible to sketch out the way that a system of law or a set of normative rules acquires social reality and becomes capable of affecting behavior.

Most fundamentally, I would like to argue that our interest is not in social facts simpliciter, but in facts that have causal and behavioral consequences. We want to know how social agglomerates behave, and in order to explain these kinds of facts, we need to know how the actors who make them up think, deliberate, and act. Whether Alice and Bob are really married is irrelevant to their behavior and that of the individuals who surround them. Instead, what matters is how they and others represent themselves. So the behaviorally relevant question is this: do Alice, Bob, Reverend Green, and the others with whom they interact believe that they are married? So the behaviorally relevant content of "x is married to y" is restricted to the beliefs and attitudes of the individuals involved -- not the legalistic question of whether their marriage satisfied current marriage laws.

To be sure, if a reasonable doubt is raised about the legal validity of their marriage, then their beliefs (and those of others) will change. Assuming they understand marriage in the same way as we do -- "two rationally competent individuals have undertaken legally specified commitments to each other, through a procedurally qualified enactment" -- then doubts about the presuppositions will lead them to recalculate their current beliefs and status as well. They will now behave differently than they would have behaved absent the reasonable doubts. But what is causally active here is not the fact that they were not legally married after all; it is their knowledge that they were not legally married.

So is the fact that Bob and Alice are really married a social fact? Or is it sufficient to refer to the fact that they and their neighbors and family believe that they are married in order to explain their behavior? In other words, is it the logical fact or the epistemic fact that does the causal work? I think the latter is the case, and that the purely ascriptive and procedural fact is not itself causally powerful. So we might turn the tables on Epstein and Searle, and consider the idea that only those social properties that have appropriate foundations at the level of socially situated individuals should be counted as real social properties.


Wednesday, July 15, 2015

Supervenience and the social: Epstein's critique



Does the social world supervene upon facts about individuals and the physical environment of action? Brian Epstein argues, in several places, that it does not -- most notably in "Ontological Individualism Reconsidered" (2009; link). (I plan to treat Epstein's more recent arguments in his very interesting book The Ant Trap: Rebuilding the Foundations of the Social Sciences in a later post.) The core of his argument is the idea that there are other factors influencing social facts besides facts about individuals. Social facts then fail to supervene in the strict sense: they depend on facts other than facts about individuals. There are indeed differences at the level of the social that do not correspond to a difference in the facts at the level of the individual. Here is how Epstein puts the core of his argument:
My aim in this paper is to challenge this [the idea that individualism is simply the denial of spooky social autonomy]. But ontological individualism is a stronger thesis than this, and on any plausible interpretation, it is false. The reason is not that social properties are determined by something other than physical properties of the world. Instead it is that social properties are often determined by physical ones that cannot plausibly be taken to be individualistic properties of persons. Only if the thesis of ontological individualism is weakened to the point that it is equivalent to physicalism can it be true, but then it fails to be a thesis about the determination of social properties by individualistic ones. (3)
And here is how Epstein formulates the claim of weakly local supervenience of social properties upon individual properties:
Social properties weakly locally supervene on individualistic properties if and only if for any possible world w and any entities x and y in w, if x and y are individualistically indiscernible in w, then they are socially indiscernible in w. Two objects are individualistically- or socially-indiscernible if and only if they are exactly alike with respect to every individualistic property or every social property, respectively. (9)
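Rendered schematically (this is just a notational restatement of the quoted definition, with I standing for the set of individualistic properties and S for the set of social properties):

```latex
\forall w \,\forall x, y \in w:\;
\big(\forall F \in I:\; Fx \leftrightarrow Fy\big) \;\rightarrow\;
\big(\forall G \in S:\; Gx \leftrightarrow Gy\big)
```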
The causal story for supervenience of the social upon the individual perhaps looks like this:




The causal story for non-supervenience that Epstein tells looks like this:


In this case supervenience fails because there can be differences in S without any difference in I (because of differences in O).

But maybe the situation is even worse, as emergentists want to hold:


Here supervenience fails because social facts may be partially "auto-causal" -- social outcomes are partially influenced by differences in social facts that do not depend on differences in individual facts and other facts.

In one sense Epstein's line of thought is fairly easy to grasp. The outcome of a game of baseball between the New York Yankees and the Boston Red Sox depends largely on the actions of the players on the field and in the dugout; but not entirely and strictly. There are background facts and circumstances that also influence the outcome but are not present in the motions and thoughts of the players. The rules of baseball are not embodied on the field or in the minds of the players; so there may be possible worlds in which the same pitches, swings, impacts of bats on balls, catches, etc., occur; and yet the outcome of the game is different. The Boston pitcher may be subsequently found to be ineligible to play that day, and the Red Sox are held to forfeit the game. The rule in our world holds that "tie goes to the runner"; whereas in alto-world it may be that the tie goes to the defensive team; and this means that the two-run homer in the ninth does not result in two runs, but rather the final out. So the game does not depend on the actions of the players alone, but on distant and abstract facts about the rules of the game.

So what are some examples of "other facts" that might be causally relevant to social outcomes? The scenario offered here captures some of the key "extra-individual" facts that Epstein highlights, and that play a key role in the social ontology of John Searle: situating rules and interpretations that give semantic meaning to behaviors. Epstein highlights facts that determine "membership" in meaningful social contexts: being President, being the catcher on the Boston Red Sox. Both Epstein and Searle emphasize that there are a wide range of dispersed facts that must be true in order for Barack Obama to be President and Ryan Hanigan to be catcher. This is not a strictly "individual-level" fact about either man. Epstein quotes Gregory Currie on this point: "My being Prime Minister ... is not just a matter of what I think and do; it depends on what others think and do as well. So my social characteristics are clearly not determined by my individual characteristics alone" (11).

So, according to Epstein, local supervenience of the social upon the individual fails. What about global supervenience? He believes that this relation fails as well. And this is because, for Epstein, "social properties are determined by physical properties that are not plausibly the properties of individuals" (20). These are the "other facts" in the diagrams above. His simplest illustration is this: without cellos there can be no cellists (24). And without hanging chads, George W. Bush would not have been President. And, later, one can be an environmental criminal because of a set of facts that were both distant and unknown to the individual at the time of a certain action (33).

Epstein's analysis is careful and convincing in its own terms. Given the modal specification of the meaning of supervenience (as offered by Jaegwon Kim and successors), Epstein makes a powerful case for believing that the social does not supervene upon the individual in a technical and specifiable sense. However, I'm not sure that very much follows from this finding. Researchers within the general school of thought of "actor-centered sociology" are likely to retain a research strategy that seeks to sort out the mechanisms through which social outcomes of interest are created as a result of the actions and interactions of individuals. If Epstein's arguments are accepted, that implies that we should not couch that research strategy in terms of the idea of supervenience. But this does not invalidate the strategy, or the broad intuition about the relation between the social and the actions of locally situated actors upon which it rests. These are the intuitions that I try to express through the idea of "methodological localism"; link, link. And since I also want to argue for the possibility of "relative explanatory autonomy" for facts at the level of the social (for example, features of an organization; link), I am not too troubled by a view of the social and the individual that denies strict determination of the former by the latter. (Here is an earlier post where I wrestled with the idea of supervenience; link.)

Sunday, July 5, 2015

Goffman on close encounters



image: GIF from D. Witt (link)

George Herbert Mead's approach to social psychology is an important contribution to the new pragmatism in sociology (link). Mead puts forward in Mind, Self, and Society: From the Standpoint of a Social Behaviorist a conception of the self that is inherently social; the social environment is prior to the individual, in his understanding. And what this means is that individuals acquire habits, attitudes, and ways of thinking through their interactions in the social environments in which they live and grow up. The individual's social conduct is built up out of the internalized traces of the practices, norms, and orientations of the people around him or her.

Erving Goffman is one of the sociologists who has given the greatest attention to the role of social norms in ordinary social interaction. One of his central themes is a focus on face-to-face interaction. This is the central topic in his book, Interaction Ritual - Essays on Face-to-Face Behavior. So rereading Interaction Ritual is a good way to gain some concrete exposure to how some sociologists think about the internalized norms and practices that Mead describes.

Goffman's central concern in this book is how ordinary social interactions develop. How do the participants shape their contributions in such a way as to lead to a satisfactory exchange? The ideas of "line" and "face" are the central concepts in this volume. "Line" is the performative strategy the individual has within the interaction. "Face" is the way in which the individual perceives himself, and the way he perceives others in the interaction to perceive him. Maintaining face invokes pride and honor, while losing face invokes shame and embarrassment. So a great deal of the effort expended by the actor in social interactions has to do with maintaining face -- what Goffman refers to as "face-work". Here are several key descriptions of the role of face-work in ordinary social interactions:
By face-work I mean to designate the actions taken by a person to make whatever he is doing consistent with face. (12)
The members of every social circle may be expected to have some knowledge of face-work and some experience in its use. In our society, this kind of capacity is sometimes called tact, savoir-faire, diplomacy, or social skill. (13)
A person may be said to have, or be in, or maintain face when the line he effectively takes presents an image of him that is internally consistent, that is supported by judgments and evidence conveyed by other participants, and that is confirmed by evidence conveyed through impersonal agencies in the situation. (6-7)
So Goffman's view is that the vast majority of face-to-face social interactions are driven by the logic of the participants' conceptions of "face" and the "lines" that they assume for the interaction. Moreover, Goffman holds that in many circumstances, the lines available for the person in the circumstance are defined by convention and are relatively few. This entails that most interactional behavior is scripted and conventional as well. This line of thought emphasizes the coercive role played by social expectations in face to face encounters. And it dovetails with the view Goffman often expresses of action as performative, and self as dramaturgical.

The concept of self is a central focus of Mead's work in MSS. Goffman too addresses the topic of self:
So far I have implicitly been using a double definition of self: the self as an image pieced together from the expressive implications of the full flow of events in an undertaking; and the self as a kind of player in a ritual game who copes honorably or dishonorably, diplomatically or undiplomatically, with the judgmental contingencies of the situation. (31)
Fundamentally, Goffman's view inclines against the notion of a primeval or authentic self; instead, the self is a construct dictated by society and adopted and projected by the individual.
Universal human nature is not a very human thing. By acquiring it, the person becomes a kind of construct, built up not from inner psychic propensities but from moral rules that are impressed upon him from without. (45)
Moreover, Goffman highlights the scope of self-deception and manipulation that is a part of his conception of the actor:
Whatever his position in society, the person insulates himself by blindnesses, half-truths, illusions, and rationalizations. He makes an "adjustment" by convincing himself, with the tactful support of his intimate circle, that he is what he wants to be and that he would not do to gain his ends what the others have done to gain theirs. (43)
One thing that is very interesting about this book is the concluding essay, "Where the Action Is". Here Goffman considers people making choices that are neither prudent nor norm guided. He considers hapless bank robbers, a black journalist mistreated by a highway patrolman in Indiana, and other individuals making risky choices contrary to the prescribed scripts. In this setting, "action" is an opportunity for risky choice, counter-normative choice, throwing fate to the wind. And Goffman thinks there is something inherently attractive about this kind of risk-taking behavior.

Here Goffman seems to be breaking his own rules -- the theoretical ones, anyway. He seems to be allowing that action is sometimes not guided by prescriptive rules of interaction, and that there are human impulses towards risk-taking that make this kind of behavior relatively persistent in society. But this seems to point to a whole category of action that is otherwise overlooked in Goffman's work -- the actions of heroes, outlaws, counter-culture activists, saints, and ordinary men and women of integrity. In each case these actors are choosing lines of conduct that break the norms and that proceed from their own conceptions of what they should do (or want to do).  In this respect the pragmatists, and Mead in particular, seem to have the more complete conception of the actor, because they leave room for spontaneity and creativity in action, as well as a degree of independence from coercive norms of behavior. Goffman opens this door with his long concluding essay here; but plainly there is a great deal more that can be said on this subject.

The 1955 novel and 1956 film The Man in the Gray Flannel Suit seem to illustrate both parts of the theory of action in play here -- a highly constrained field of action presented to the businessman (played by Gregory Peck), punctuated by occasional episodes of behavior that break the norms and expectations of the setting. Here is Tom Rath speaking honestly to his boss. (The whole film is available on YouTube.)


Thursday, July 2, 2015

Deliberative democracy and the age of social media


Several earlier posts have focused on the theory of deliberative democracy (link, link, link). The notion is that political decision-making can be improved by finding mechanisms for permitting citizens to have extended opportunities for discussion and debate over policies and goals. The idea appeals to liberal democratic theorists in the tradition of Rousseau: people's political preferences and values can become richer and more adequate through reasoned discussion in a conversation of equals, and political decisions will be improved through such a process. This idea doesn't quite equate to the wisdom of the crowd; rather, individuals become wiser through their interactions with other thoughtful and deliberative people, and the crowd's opinions improve as a result.

Here is the definition of deliberative democracy offered by Amy Gutmann and Dennis Thompson in Why Deliberative Democracy? (2004):
Most fundamentally, deliberative democracy affirms the need to justify decisions made by citizens and their representatives. Both are expected to justify the laws they would impose on one another. In a democracy, leaders should therefore give reasons for their decisions, and respond to the reasons that citizens give in return... The reasons that deliberative democracy asks citizens and their representatives to give should appeal to principles that individuals who are trying to find fair terms of cooperation cannot reasonably reject. (3)
All political reasoning inherently involves an intermingling of goals, principles, and facts. What do we want to achieve? What moral principles do we respect as constraints on political choices? How do we think about the causal properties of the natural and social world in which we live? Political disagreement can derive from disagreements in each of these dimensions; deliberation in principle is expected to help citizens to narrow the range of disagreements they have about goals, principles, and facts. And traditional theorists of deliberative democracy, from the pre-Socratics to Gutmann, Thompson, or Fishkin, believe that it is possible for people of good will to come to realize that the beliefs and assumptions they bring to the debate may need adjustment.

But something important has changed since the 1990s, when much of the discussion of deliberative democracy took place: the emergence of social media -- blogs, comments, Twitter discussions, Facebook communities. Here we have millions of people interacting with each other and debating issues -- but we don't seem to have a surge of better or more informed thinking about the hard issues. On the one hand, we might hope that the vast bandwidth of debate and discussion of issues, involving enormous numbers of the world's citizens, would have the effect of deepening the public's understanding of complex issues and policies. On the other hand, we seem to have the evidence of continuing superficial thinking about issues, hardening of ideological positions, and reflexive habits of racism, homophobia, and xenophobia. The Internet seems to lead as often to a hardening and narrowing of attitudes as it does to a broadening and deepening of people's thinking about the serious issues we face.

So it is worth reflecting on what implications the infrastructure of social media has for our ideas about democracy. It was observed during the months of the Arab Spring that Twitter and other social media platforms played a role in the mobilization of groups of people sharing an interest in reform. And Guobin Yang describes the role that the Internet has played in some areas of popular activism in China (link). This is a little different from the theory of deliberative democracy, however, since mobilization is different from deliberative value-formation. The key question remains unanswered: can the quality of thinking and deliberation of the public be improved through the use of social media? Can the public come to a better understanding of issues like climate change, health care reform, and rising economic inequalities through the debates and discussions that occur on social media? Can our democracy be improved through the tools of Twitter, Facebook, or Google? So far the evidence is not encouraging; it is hard to find evidence suggesting a convergence of political or social attitudes deriving from massive use of social media. And the most dramatic recent example of change in public attitudes, the sudden rise in public acceptance of same-sex marriage, does not seem to have much of a connection to social media.

Here is a very interesting report by the Pew Research Center on the political segmentation of the world of Twitter (link). The heart of its findings is that Twitter discussions of politics commonly segment into largely distinct groups of individuals and websites (link).
Conversations on Twitter create networks with identifiable contours as people reply to and mention one another in their tweets. These conversational structures differ, depending on the subject and the people driving the conversation. Six structures are regularly observed: divided, unified, fragmented, clustered, and inward and outward hub and spoke structures. These are created as individuals choose whom to reply to or mention in their Twitter messages and the structures tell a story about the nature of the conversation.
If a topic is political, it is common to see two separate, polarized crowds take shape. They form two distinct discussion groups that mostly do not interact with each other. Frequently these are recognizably liberal or conservative groups. The participants within each separate group commonly mention very different collections of website URLs and use distinct hashtags and words. The split is clearly evident in many highly controversial discussions: people in clusters that we identified as liberal used URLs for mainstream news websites, while groups we identified as conservative used links to conservative news websites and commentary sources. At the center of each group are discussion leaders, the prominent people who are widely replied to or mentioned in the discussion. In polarized discussions, each group links to a different set of influential people or organizations that can be found at the center of each conversation cluster.
And here is the authors' reason for thinking that the clustering of Twitter conversations is important:
Social media is increasingly home to civil society, the place where knowledge sharing, public discussions, debates, and disputes are carried out. As the new public square, social media conversations are as important to document as any other large public gathering. Network maps of public social media discussions in services like Twitter can provide insights into the role social media plays in our society. These maps are like aerial photographs of a crowd, showing the rough size and composition of a population. These maps can be augmented with on the ground interviews with crowd participants, collecting their words and interests. Insights from network analysis and visualization can complement survey or focus group research methods and can enhance sentiment analysis of the text of messages like tweets.
Here are examples of "polarized crowds" and "tight crowds":


There is a great deal of research underway on the network graphs that can be identified within social media populations. But an early takeaway is that segmentation, rather than convergence, appears to be the most common pattern. This seems to run contrary to the goals of deliberative democracy. Rather than exposing themselves to challenging ideas from people and sources in the other community, people tend to stay in their own circle.
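The kind of clustering the Pew report describes is straightforward to probe computationally. Here is a minimal sketch using the networkx library on an invented reply/mention graph (the usernames and edges are hypothetical, purely for illustration; the actual study worked with far larger datasets):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical reply/mention edges harvested from a political hashtag.
edges = [
    ("@lib_blog", "@lib_news"), ("@lib_news", "@lib_activist"),
    ("@lib_activist", "@lib_blog"),
    ("@con_blog", "@con_news"), ("@con_news", "@con_activist"),
    ("@con_activist", "@con_blog"),
    ("@lib_blog", "@con_blog"),   # a rare cross-cluster exchange
]

G = nx.Graph(edges)

# Modularity-based community detection; with data like those the Pew maps
# describe, the partition typically resolves into two dense, largely
# disconnected crowds.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"cluster {i}: {sorted(community)}")
```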

So this is how social media seem to work if left to their own devices. Are there promising examples of more intentional uses of social media to engage the public in deeper conversations about the issues of the day? Certainly there are political organizations across the spectrum that are making large efforts to use social media as a platform for their messages and values. But this is not exactly "deliberative". What is more intriguing is whether there are foundations and non-profit organizations that have specifically focused on creating a more deliberative social media community that can help build a broader consensus about difficult policy choices. And so far I haven't been able to find good examples of this kind of effort.

(Josh Cohen's discussion of Rousseau's political philosophy is interesting in the context of fresh thinking about deliberation and democracy; link. And Archon Fung and Erik Olin Wright's collection of articles on democratic innovation, Deepening Democracy: Institutional Innovations in Empowered Participatory Governance (The Real Utopias Project) (v. 4), is a very good contribution as well.)

Monday, June 29, 2015

Quantum mental processes?


One of the pleasant aspects of a long career in philosophy is the occasional experience of a genuinely novel approach to familiar problems. Sometimes one's reaction is skeptical at first -- "that's a crazy idea!". And sometimes the approach turns out to have genuine promise. I've had that experience of moving from profound doubt to appreciation several times over the years, and it is an uplifting learning experience. (Most recently, I've made that progression with respect to some of the ideas of assemblage and actor-network theory advanced by thinkers such as Bruno Latour; link, link.)

I'm having that experience of unexpected dissonance as I begin to read Alexander Wendt's Quantum Mind and Social Science: Unifying Physical and Social Ontology. Wendt's book addresses many of the issues with which philosophers of social science have grappled for decades. But Wendt suggests a fundamental switch in the way that we think of the relation between the human sciences and the natural world. He suggests that an emerging paradigm of research on consciousness, advanced by Giuseppe Vitiello, John Eccles, Roger Penrose, Henry Stapp, and others, may have important implications for our understanding of the social world as well. This is the field of "quantum neuropsychology" -- the body of theory that maintains that puzzles surrounding the mind-body problem may be resolved by examining the workings of quantum behavior in the central nervous system. I'm not sure which category to put the idea of quantum consciousness in yet, but it's interesting enough to pursue further.

The familiar problem in this case is the relation between the mental and the physical. Like all physicalists, I work on the assumption that mental phenomena are embodied in the physical infrastructure of the central nervous system, and that the central nervous system works according to familiar principles of electrochemistry. Thought and consciousness are somehow the "emergent" result of the workings of the complex physical structure of the brain (in a safe and bounded sense of emergence). The novel approach is the idea that somehow quantum physics may play a strikingly different role in this topic than ever had been imagined. Theorists in the field of quantum consciousness speculate that perhaps the peculiar characteristics of quantum events at the sub-atomic level (e.g. quantum randomness, complementarity, entanglement) are close enough to the action of neural networks that they serve to give a neural structure radically different properties from those expected by a classical-physics view of the brain. (This idea isn't precisely new; when I was an undergraduate in the 1960s it was sometimes speculated that freedom of the will was possible because of the indeterminacy created by quantum physics. But this wasn't a very compelling idea.)

Wendt's further contribution is to immerse himself in some of this work, and then to formulate the question of how these perspectives on intentionality and mentality might affect key topics in the philosophy of society. For example, how do the longstanding concepts of structure and agency look when we begin with a quantum perspective on mental activity?

A good place to start in preparing to read Wendt's book is Harald Atmanspacher's excellent article in the Stanford Encyclopedia of Philosophy (link). Atmanspacher organizes his treatment into three large areas of application of quantum physics to the problem of consciousness: metaphorical applications of the concepts of quantum physics; applications of the current state of knowledge in quantum physics; and applications of possible future advances in knowledge in quantum physics.
Among these [status quo] approaches, the one with the longest history was initiated by von Neumann in the 1930s.... It can be roughly characterized as the proposal to consider intentional conscious acts as intrinsically correlated with physical state reductions. (13)
A physical state reduction is the event that occurs when a quantum probability field resolves into a discrete particle or event upon having been measured. Some theorists (e.g. Henry Stapp) speculate that conscious human intention may influence the physical state reduction -- thus a "mental" event causes a "physical" event. And some process along these lines is applied to the "activation" of a neuronal assembly:
The activation of a neuronal assembly is necessary to make the encoded content consciously accessible. This activation is considered to be initiated by external stimuli. Unless the assembly is activated, its content remains unconscious, unaccessed memory. (20)
Also of interest in Atmanspacher's account is the idea of emergence: are mental phenomena emergent from physical phenomena, and in what sense? Atmanspacher specifies a clear but strong definition of emergence, and considers whether mental phenomena are emergent in this sense:
Mental states and/or properties can be considered as emergent if the material brain is not necessary or not sufficient to explore and understand them. (6)
This is a strong conception in a very specific way; it specifies that material facts are not sufficient to explain "emergent" mental properties. This implies that we need to know some additional facts beyond facts about the material brain in order to explain mental states; and it is natural to ask what the nature of those additional facts might be.

The reason this collection of ideas is initially shocking to me is the difference in scale between the sub-atomic level and macro-scale entities and events. There is something spooky about postulating causal links across that range of scales. It would be wholly crazy to speculate that we need to invoke the mathematics and theories of quantum physics to explain billiards. It is pretty well agreed by physicists that quantum mechanics reduces to Newtonian physics at this scale. Even though the component pieces of a billiard ball are quantum entities with peculiar properties, as an ensemble of 10^25 of these particles the behavior of the ball is safely classical. The peculiarities of the quantum level wash out for systems with multiple Avogadro's numbers of particles through the reliable workings of statistical mechanics. And the intuitions of most people comfortable with physics would lead them to assume that neurons are subject to the same independence; the scale of activity of a neuron (both spatial and temporal) is orders of magnitude too large to reflect quantum effects. (Sorry, Schrodinger's cat!)

In a recent article in Science Magazine, "Cold Numbers Unmake the Quantum Mind" (link), Charles Seife reports a set of fundamental physical computations conducted by Max Tegmark that are intended to demonstrate this. Tegmark's analysis focuses on the speculations offered by Penrose and others on the possible quantum behavior of "microtubules." Tegmark purports to demonstrate that the time and space scales of quantum effects are too short by orders of magnitude to account for the neural mechanisms that can be observed (link). Here is Tegmark's abstract:
Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. We find that the decoherence time scales (∼10^−13–10^−20s) are typically much shorter than the relevant dynamical time scales (∼10^−3–10^−1s), both for regular neuron firing and for kinklike polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way. (link)
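A quick back-of-envelope comparison, using only the numbers quoted in the abstract (the arithmetic is mine, not Tegmark's), shows the size of the mismatch. Even on the most favorable comparison,

```latex
\frac{\tau_{\text{dynamical}}}{\tau_{\text{decoherence}}}
\;\gtrsim\; \frac{10^{-3}\,\text{s}}{10^{-13}\,\text{s}} \;=\; 10^{10},
```

so any quantum coherence would be lost roughly ten billion times over before a single neural firing runs its course.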
I am grateful to Atmanspacher for providing such a clear and logical presentation of some of the main ideas of quantum consciousness; but I continue to find myself sceptical. There is a risk in this field of succumbing to the temptation of unbounded speculation: "Maybe if X's could influence Y's, then we could explain Z" without any knowledge of how X, Y, and Z are related through causal pathways. And the field seems sometimes to be prey to this impulse: "If quantum events were partially mental, then perhaps mental events could influence quantum states (and from there influence macro-scale effects)."

In an upcoming post I'll look closely at what Alex Wendt makes of this body of theory in application to the level of social behavior and structure.

Saturday, June 20, 2015

Rationality over the long term

image: Dietrich Bonhoeffer with his students

Millions of words have been written on the topic of rationality in action. Life involves choices. How should we choose between available alternatives? Where should I go to college? Which job should I accept? Should I buy a house or rent an apartment? How much time should I give my job in preference to my family? We would like to have reasons for choosing A over B; we would like to approach these choices "rationally."

These are all "one-off" choices, and rational choice theory has something like a formula to offer for the decider: gain the best knowledge available about the several courses of action; evaluate the costs, risks, and rewards of each alternative; and choose that alternative that produces the greatest expected level of satisfaction of your preferences. There are nuances to be decided, of course: should we go for "greatest expected utility" or should we protect against unlikely but terrible outcomes by using a maximin rule for deciding?
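To make the two decision rules concrete, here is a minimal sketch in Python; the alternatives, payoffs, and probabilities are invented purely for illustration, and real decision problems are rarely this tidy.

```python
# Toy decision problem: each alternative is a list of (probability, utility) outcomes.
alternatives = {
    "risky_investment": [(0.9, 100), (0.1, -500)],
    "safe_savings":     [(1.0, 20)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

def worst_case(outcomes):
    return min(u for _, u in outcomes)

# The two rules can recommend different choices for the same beliefs and preferences.
eu_choice = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
mm_choice = max(alternatives, key=lambda a: worst_case(alternatives[a]))

print(eu_choice)  # "risky_investment": expected utility 40 beats 20
print(mm_choice)  # "safe_savings": worst case 20 beats -500
```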

There are several deficiencies in this story. Most obviously, few of us actually go through the kinds of calculations specified here. We often act out of habit or semi-articulated rules of thumb. Moreover, we are often concerned about factors that don't fit into the "preferences and beliefs" framework, like moral commitments, conceptions of ourselves, loyalties to others, and the like. Pragmatists would add that much mundane action flows from a combination of habit and creativity rather than formal calculation of costs and benefits.

But my concern here is larger. What is involved in being deliberative and purposive about extended stretches of time? How do we lay out the guideposts of a life plan? And what is involved in acting deliberatively and purposively in carrying out one's life plan or other medium- and long-term goals?

Here I want to look more closely than usual at what is involved in reflecting on one's purposes and values, formulating a plan for the medium or long term, and acting in the short term in ways that further the big plan. My topic is "rationality in action", but I want to pay attention to the issues associated with large, extended purposes -- not bounded decisions like buying a house, making a financial investment, or choosing a college. I'm thinking of larger subjects for deliberation -- for example, conquering all of Europe (Napoleon), leading the United States through a war for the Union (Lincoln), or becoming a committed and active anti-Nazi (Bonhoeffer).

The scale I'm focusing on here corresponds to questions like these:
  • How did Napoleon deliberate about his ambitions in 1789? How did he carry out his thoughts, goals, and plans?
  • How did Abraham Lincoln think about slavery and the Union in 1861? How did his conduct of politics and war take shape in relation to his long term goals?
  • How did Richard Rorty plan his career in the early years? How did his choices reflect those plans? (Neil Gross considers this question in Richard Rorty: The Making of an American Philosopher; link.)
  • How did Dietrich Bonhoeffer deliberate about the choices in front of him in Germany in 1933? How did he decide to become an engaged anti-Nazi, at the eventual cost of his life?
What these examples have in common is large temporal scope; substantial uncertainties about the future; and extensive intertwining of moral and political values with more immediate concerns of self-interest, prudence, and desire. Moreover, the act of formulating plans on this scale and living them out is formative: we become different persons through these efforts.

The intriguing question for me at the moment is the issue of rational deliberation: to what extent and through what processes can individuals engage in a rational process in thinking through their decisions and plans at this level? Is it an expectation of rationality that an individual will have composed nested sets of plans and objectives, from the most global to the intermediate to the local?

Or instead, does a person's journey through large events take its shape in a more stochastic way: opportunities, short term decisions, chance involvements, and some ongoing efforts to make sense of it all in the form of a developing narrative? Here we might say that life is not planned, but rather built like Neurath's raft with materials at hand; and that rationality and deliberation come in only at a more local scale.

Here is a simple way of characterizing purposive action over a long and complex period. The actor has certain guiding goals he or she is trying to advance. It is possible to reflect upon these goals in depth and to consider their compatibility with other important considerations. This might be called "goal deliberation". These goals and values serve as the guiding landmarks for the journey -- "keep moving towards the tallest mountain on the horizon". The actor surveys the medium-term environment for actions that are available to him or her, and the changes in the environment that may be looming in that period. And he or she composes a plan for these circumstances -- "attempt to keep moderate Southern leaders from supporting secession". This is the stage of formulation of mid-range strategies and tactics, designed to move the overall purposes forward. Finally, like Odysseus, the actor seizes unforeseen opportunities of the moment in ways that appear to advance the cause even lacking a blueprint for how to proceed.

We might describe this process as one that involves local action-rationality guided by medium term strategies and oriented towards long term objectives. Rationality comes into the story at several points: assessing cause and effect, weighing the importance of various long term goals, deliberating across conflicting goals and values, working out the consequences of one scenario or another, etc.

As biologists from Darwin to Dawkins have recognized, the process of species evolution through natural selection is inherently myopic. Long term intelligent action is not so, in that it is possible for intelligent actors to consider distant solutions that are potentially achievable through orchestrated series of actions -- plans and strategies. But in order to achieve the benefits of intelligent long-term action, it is necessary to be intelligent at every stage -- formulate good and appropriate distant goals, carefully assess the terrain of action to determine as well as possible what pathways exist to move toward those goals, and act in the moment in ways that are both intelligent solutions to immediate opportunities and obstacles, and have the discipline to forego short term gain in order to stay on the path to the long term goal. But, paradoxically, it may be possible to be locally rational at every step and yet globally irrational, in the sense that the series of rational choices leads to an outcome widely divergent from the overriding goals one has selected.
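That paradox can be illustrated with a toy example. Here is a sketch in Python (the three-stage decision problem and its rewards are entirely invented) contrasting a myopic agent, which maximizes the immediate payoff at each step, with a planning agent, which evaluates whole courses of action against the long-term goal:

```python
from itertools import product

# Rewards for a three-stage choice between "a" and "b": option "a" always pays
# a bit more now, but the large final payoff is reachable only via "b", "b", "b".
def reward(stage, choice, history):
    if stage < 2:
        return 3 if choice == "a" else 1
    return 20 if history == ("b", "b") and choice == "b" else 2

# Myopic agent: locally rational, maximizing the immediate reward at each step.
history, myopic_total = (), 0
for stage in range(3):
    choice = max("ab", key=lambda c: reward(stage, c, history))
    myopic_total += reward(stage, choice, history)
    history += (choice,)

# Planning agent: compares complete three-step plans before acting.
plans = list(product("ab", repeat=3))
best_plan = max(plans, key=lambda p: sum(reward(s, c, p[:s]) for s, c in enumerate(p)))
planned_total = sum(reward(s, c, best_plan[:s]) for s, c in enumerate(best_plan))

print(myopic_total, planned_total)  # 8 vs 22: each step was "rational", the whole was not
```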

I've invoked a number of different ideas here, all contributing to the notion of rational action over an extended time: deliberation, purposiveness, reflection, calculation of consequences, intelligent problem solving, and rational choice among discrete alternatives. What is interesting to me is that each of these activities is plainly relevant to the task of "rational action"; and yet none reduces to the others. In particular, rational choice theory cannot be construed as a general and complete answer to the question, "what is involved in acting rationally over the long term?".

Michael Bratman is the philosopher who has thought most deeply about these issues; see Intention, Plans, and Practical Reason. Manuel Vargas and Gideon Yaffe's recent festschrift on Bratman's work, Rational and Social Agency: The Philosophy of Michael Bratman, is also a useful contribution on the subject. Sarah Paul provides a nice review of Rational and Social Agency here.

Tuesday, June 16, 2015

Science and decision


Science is uncertain; and yet we have no better basis for making important decisions about the future than the best scientific knowledge currently available. Moreover, there are powerful economic interests that exert themselves to undermine the confidence of the public and our policy makers in the findings of science that appear to harm those interests. How should we think about these two factors, one epistemic and the other political? The first lays out the reasons for thinking that some of our most confident theories may in fact be erroneous; the second makes us worry that even strongly credible science will be undermined by corporate and financial interests.

Naomi Oreskes and Erik Conway explore the latter dynamics in substantial detail in Merchants of Doubt. And Henry Pollack, a noted and respected climate scientist, explores the implications of the first point in Uncertain Science ... Uncertain World.

Oreskes' work on the politics and methods of science denial is substantial and convincing. She is an historian of science, and she has carefully traced the pathways through which business interests have exerted themselves to affect the outcome of a range of scientific debates: for example, the harmful effects of tobacco, the causes of acid rain, the reality of the ozone hole, and the reality of global warming. She traces the influence that conservative think tanks and corporations have had on the scientific debates over these issues. But more than that, she demonstrates that a small number of conservative nuclear scientists have played a key and recurring role in drumming up spurious attacks on the scientific credentials of researchers in a number of these fields.
Call it the “Tobacco Strategy.” Its target was science, and so it relied heavily on scientists— with guidance from industry lawyers and public relations experts— willing to hold the rifle and pull the trigger. Among the multitude of documents we found in writing this book were Bad Science: A Resource Book— a how-to handbook for fact fighters, providing example after example of successful strategies for undermining science, and a list of experts with scientific credentials available to comment on any issue about which a think tank or corporation needed a negative sound bite. (kl 170)
Here is what the tobacco strategy looked like in 1979 in the hands of R. J. Reynolds, in the words of Colin Stokes, the company's former chairman:
“Science really knows little about the causes or development mechanisms of chronic degenerative diseases imputed to cigarettes,” Stokes went on, “including lung cancer, emphysema, and cardiovascular disorders.” Many of the attacks against smoking were based on studies that were either “incomplete or … relied on dubious methods or hypotheses and faulty interpretations.” The new program would supply new data, new hypotheses, and new interpretations to develop “a strong body of scientific data or opinion in defense of the product.” ^14 Above all, it would supply witnesses. (kl 316)
The purpose of this strategy was clear to its creators:
The industry’s position was that there was “no proof” that tobacco was bad, and they fostered that position by manufacturing a “debate,” convincing the mass media that responsible journalists had an obligation to present “both sides” of it. Representatives of the Tobacco Industry Research Committee met with staff at Time, Newsweek, U.S. News and World Report, BusinessWeek, Life, and Reader’s Digest, including men and women at the very top of the American media industry. (kl 403)
Oreskes and her colleagues make a very worrisome case for the likelihood that good scientific research on controversial issues will be drowned out by money and astute public relations strategies deployed by self-interested corporations. This possibility has potentially devastating consequences for public health and our global future if the public and our policy makers succumb to this attack on science.

The attack on the scientific legitimacy and credentials of climate science is of equal concern to Henry Pollack. Pollack candidly acknowledges the uncertainty that is characteristic of all areas of science. But he strongly defends the rational confidence we have in the results of empirical and scientific inquiry into the major natural and social processes that surround us. Here are his four key ideas:
  • Uncertainty is always with us and can never be fully eliminated from our lives, either individually or collectively as a society. Our understanding of the past and our anticipation of the future will always be obscured by uncertainty.
  • Because uncertainty never disappears, decisions about the future, big and small, must always be made in the absence of certainty. Waiting until uncertainty is eliminated before making decisions is an implicit endorsement of the status quo, and often an excuse for maintaining it.
  • Predicting the long-term future is a perilous business, and seldom do the predictions fall very close to reality. As the future unfolds, 'mid-course corrections' can be made that take into account new information and new developments.
  • Uncertainty, far from being a barrier to progress, is actually a strong stimulus for, and an important ingredient of, creativity. (2-3)
Like Oreskes, Pollack finds that scientific controversies often have substantial implications for major economic interests, and that it is therefore unsurprising to find individuals and companies exerting themselves to influence the outcomes of those debates in ways that serve their interests.

Pollack urges the public and our legislators to take the time to understand the nature of the scientific enterprise more fully and to inoculate themselves against self-interested efforts to undermine the enterprise and its core findings on controversial subjects. 

Now consider a third perspective on this topic of the reliability and vulnerability of science, the point of view associated with Science and Technology Studies (STS) and the Sociology of Scientific Knowledge (SSK) (link). A good exemplar of this approach is Harry Collins and Trevor Pinch, The Golem at Large. What they mean by the "golem" is that science, like almost any other human activity, is two-sided when it comes to its effects on human wellbeing. So they are as interested in the failures of technology and science as in the successes. In this volume they focus on investigations of technological success and failure, including the effectiveness of Patriot missile defense systems in the Gulf War, the causes of the Challenger explosion, the effects of the Chernobyl radiation plume on Cumbrian sheep, tests of nuclear fuel flasks in the 1980s, and several other interesting cases. In their own way their message is similar to that of Pollack: science and technology involve investigations, inferences, and manipulations that are inherently fallible. And yet there is no better alternative on the basis of which to assess risky alternatives and solutions.

One of the signature themes of STS and SSK is attention to the non-rational and political factors that influence the conduct of science. Philosophers of science often focus on the positive ability of science to gain truths about the world. STS scholars, in contrast, are often inclined to bracket the objectivity and veridicality of science, and to focus instead on the multiple social processes that influence the development of a body of scientific thought. This leads to an interpretation of science along the lines of a "social construction" model.

Pragmatism seems to point towards the most plausible view of scientific knowledge, one that incorporates both perspectives. Nothing in the methods or practices of science guarantees success. But we have a capacity to observe, theorize, measure, and test; and these abilities are crucial to our human ability to navigate an uncertain world. So we should look at the institutions and findings of science much as pragmatists like Israel Scheffler and W. V. O. Quine did: as imperfect but valuable tools on the basis of which to learn some of the more important properties and dynamics of the world around us.

In the current context this means we should pay a lot of attention indeed to the convergence of evidence about climate change that environmental and climate scientists have painstakingly arrived at. And we should be vigilant in uncovering the secretive efforts in play to undermine those findings. 

Wednesday, June 10, 2015

Ian Hacking on natural kinds



Ian Hacking has written quite a bit on the topic of "kinds" (link), beginning with "A Tradition of Natural Kinds" in Philosophical Studies in 1991 (link) and most recently with his lecture to the Royal Institute of Philosophy in 2006 (link). He is also one of the most interesting theorists of "constructivism" -- a sort of mirror opposite to the position that the world consists of things arranged in natural kinds (The Social Construction of What?). So it is worthwhile examining his view of the status of the idea of "natural kinds".

Before we get to natural kinds, Hacking thinks it is a good idea to consider a notion that emanates from Nelson Goodman in Ways of Worldmaking, the idea of "relevant kinds". Hacking discusses this concept at length in Social Construction (128 ff.). Fundamentally the idea of a relevant kind is an ontologically non-committal interpretation of concepts; it is a contingent and interest-driven way of classifying things in one way rather than another.

So what does the idea of a natural kind add to the notion of a relevant kind? A preliminary definition might go along these lines: a natural kind is a group of things sharing a common structure or a common set of properties, capacities, or causal powers. Metal is a natural kind; the class of green things is not. In the 1991 article Hacking lists a number of characteristics that are often thought to attach to natural kinds: independence, definability, utility, and uniqueness (110-111). The final principle is the most comprehensive, and also the least plausible:
Uniqueness. There is a unique best taxonomy in terms of natural kinds, that represents nature as it is, and reflects the network of causal laws. We do not have nor could we have a final taxonomy of anything, but any objective classification is right or wrong according as it captures part of the structure of the one true taxonomy of the universe. (111)
(Hacking explicitly rejects the uniqueness thesis.)

Hacking traces the language of kinds and natural kinds to J. S. Mill and John Venn in the middle of the nineteenth century. He quotes Peirce's effort to improve upon Mill's definition of natural kinds, based on the idea that the objects encompassed within a kind have important properties that are naturally related to each other:
The following definition might be proposed [for 'real kind']: Any class which, in addition to its defining character has another that is of permanent interest, and is common and peculiar to its members, is destined to be conserved in that ultimate conception of the universe at which we aim, and is accordingly to be called 'real'. (119)
Here is how Hacking distinguishes between Mill and Peirce:
A Mill-Kind is a class of objects with a large or even apparently inexhaustible number of properties in common, and such that these properties are not implied by any known systematized body of law about things of this Kind. A Peirce-kind is such a class, but such that there is a systematized body of law about things of this kind, and is such that we may reasonably think that it provides explanation sketches of why things of this kind have many of their properties.
In the 2006 article Hacking offers a clear definition based on William Whewell's reasoning:
A kind is a class denoted by a common name about which there is the possibility of general, intelligible and consistent, and probably true assertions. (13)
And here is his reading in 2006 of John Venn's view of natural kinds:
‘There are classes of objects, each class containing a multitude of individuals more or less resembling one another [...]. The uniformity that we may trace in the [statistical] results is owing, much more than is often suspected, to this arrangement of things into natural kinds, each kind containing a large number of individuals.’ (17)
Now let's turn to Hacking's views fifteen years later in "Natural Kinds: Rosy Dawn, Scholastic Twilight" (link). This piece extends his historical analysis of the evolution of the concept, but here Hacking also lets us know more clearly what his own view is on natural kinds. He argues for two fundamental theses:
  1. Some classifications are more natural than others, but there is no such thing as a natural kind.
  2. Many philosophical research programmes have evolved around an idea about natural kinds, but the seeds of their failure (or degeneration) were built in from the start.
The first is a declaration about the world: the world does not divide into distinct categories of things, as postulated in the uniqueness principle above. The second is a declaration about a philosophical tradition: the line of thought he scrutinizes leading from Mill through Peirce and Russell to Kripke and Quine has led to irresolvable inconsistencies. The topic has become a degenerating research programme.

One of the most interesting recent views on kinds that Hacking discusses is that of Brian Ellis in Scientific Essentialism. Hacking summarizes Ellis's essentialism in these terms:
It emphasizes three types of natural kinds. Substantival natural kinds include elements, fundamental particles, inert gases, sodium salts, sodium chloride molecules, and electrons. Dynamic natural kinds include causal interactions, energy transfer processes, ionizations, diffractions, H2 +Cl2 ⇒ 2HCl, and photon emission at λ = 5461Å from an atom of mercury. Natural property kinds include dispositional properties, categorical properties, and spatial and temporal relations; mass, charge; unit mass, charge of 2e, unit field strength, and spherical shape. (27)
Also interesting is Richard Boyd's "homoeostatic property cluster kinds", a concept that seems to apply best in evolutionary biology. Boyd's view appears in "Realism, Anti-Foundationalism and the Enthusiasm for Natural Kinds" (link), a response to Hacking's 1991 article.  Hacking summarizes Boyd's view in these terms: "In his analysis, kinds, and in particular species, are groups that persist in a fairly long haul. The properties that characterize a species form a cluster. No distinctive property may be common to all members of the species, but the cluster is good for survival" (30).

So what is Hacking's view, all things considered? He is fairly consistent from 1991 to 2006. Hacking's view in 1991 seems to have a pragmatist and anti-realist orientation: things are organized into kinds so as to permit human beings to use and manipulate them. Kinds, uses, and crafts are intimately related.
It is important that some kinds are essential to some crafts. Those are the kinds that we can do things with. It is important that some kinds are important for knowing what to expect from the fauna and flora of the region in which we live. 
And in 2006 he ends the discussion with this conclusion:
Although one may judge that some classifications are more natural than others, there is neither a precise nor a vague class of classifications that may usefully be called the class of natural kinds. A stipulative definition, that picks out some precise or fuzzy class and defines it as the class of natural kinds, serves no purpose, given that there are so many competing visions of what the natural kinds are. In short, despite the honourable tradition of kinds and natural kinds that reaches back to 1840, there is no such thing as a natural kind. (35)
So Hacking's view is a kind of conceptual constructivism. We construct schemes of classification for various pragmatic purposes -- artisanship, agriculture, forest and wildlife management. Schemes have advantages and disadvantages. And there is no definable sense in which one scheme is uniquely best, given everything that nature, biology, and society present us with.

I've argued for a long time that there are no "social kinds" (link). My fundamental reason for this conclusion is somewhat different from Hacking's line of thought: I emphasize the fundamental heterogeneity and plasticity of social objects, leading to the result that there is substantial variation across the members or instances of a social concept (state, revolution, riot, financial crisis). Social things do not have essential natures, and they do not maintain their properties rigidly over time. So we are best advised to regard sociological concepts in a contingent and pragmatic way -- as nominal schemes for identifying social events and structures of interest, without presuming that they have fundamental and essential properties in common.

The Cultural Revolution through photography

image: Li Zhensheng, self portrait

Several earlier posts have highlighted how challenging it is to come to firm conclusions about some of the most basic facts about the history of the Cultural Revolution in China (link, link, link, link). The history of this important recent period is still a work in progress.

A genuinely remarkable book of documentary photography on this history appeared in 2003, with the title Red-Color News Soldier. The historian of China Jonathan Spence provides an illuminating introduction to the volume and the period. The core of the book, edited and presented by Robert Pledge, is a body of photography by Li Zhensheng. Li was a rank-and-file news photographer in Heilongjiang in the northeast of China who had received film training in the 1960s. Li provides a short but fascinating autobiographical statement of his early years during the Great Leap Forward, and he adds to this narrative in each of the main sections of the book. Li took thousands of photographs during the early years of the Cultural Revolution, some of which he knew to be politically dangerous. He succeeded in hiding thousands of these negatives for thirty-five years before making them available for publication in 2003.

The book provides genuinely new emotional insight into this period of chaos in China's recent history. The photographs convey the passions of committed Red Guards as well as the pathos of the often innocent scapegoats who were the victims of Red Guard violence. Mass emotion and individual pathos are captured in almost all the images in the book.




Several things stand out from the photos in this volume that perhaps shed light on the experience of the Cultural Revolution for ordinary Chinese people. One is the intensity, size, and rage of the crowds that are depicted. It is perhaps extreme to say this, but many of these photos evoke mass madness -- people caught up in the emotions and hatreds of the period in ways that obliterated their ordinary human impulses of pity and kindness. What we see instead is a sea of human faces, taking in the humiliation and abuse of their neighbors, while shouting support or laughing at shaming self-confessions, dunce caps, and raw physical abuse.

Related to this is the cruelty that the photos depict. There is no pity shown for the victims, who were forced to humiliate themselves, were physically tormented, and were sometimes killed. What is portrayed is a merciless public scapegoating of people, often for the most trivial or spurious of reasons. People were accused of belonging to one of the "four elements" -- landlords, rich peasants, counter-revolutionaries, or "bad characters" (55) -- and they were dealt with summarily. Physical violence was common; but so too was a deep and sustained imposition of shaming on the hapless school teacher, local party official, or slightly better-off peasant. Li describes the scene of the execution of seven men and one woman, two of whom were "counter-revolutionaries" because of a flyer they had published titled "Looking North". The scene of the execution troubled Li for many years. "All eight were put on the backs of trucks in pairs, driven through town, then out to the countryside northwest of Harbin. There, on the barren grounds of the Huang Shan Cemetery, they were lined up, hands tied behind their backs, and forced to kneel. They were all shot in the back of the head" (139). The sequence of photos Li took of this execution is harrowing. The final photos of the volume depict the execution of Wang Shouxin, a former Party branch secretary, on charges of embezzlement in 1980 (after the end of the Cultural Revolution). Li Zhensheng was present for this killing as well. (The woman kneeling in the photo above is Wang.)

Another striking feature is the cult of Mao that many of the photos demonstrate. "By the fall of 1966 Mao had become, to most Chinese, a living god" (144). Portraits of the Chairman and peasants waving their Little Red Books abound at these mass meetings. A headline in the Heilongjiang Daily in 1966 shouts its praise: "Long life to Chairman Mao, Great Leader, Great Commander in Chief, Great Helmsman" (71). This is the very same great helmsman who had led China into the Great Leap Forward and a devastating famine resulting in more than 20 million deaths, only eight years earlier. According to Li, Mao publicly greeted over 11 million Red Guards in Tiananmen Square in appearances over the first several years of the Cultural Revolution (131). The many images of Mao in these news photos were not accidental; news editors made sure that there were ample portraits of Mao in the published photos, even if they were not visible in the original scene:
Another time I made a picture of a crowded rally at a sports field from behind, so you couldn't see all the portraits held up, only their wooden frames -- and for the final image, my editor instructed me to add pictures of Mao to the back of the frames, even though this skewed the perspective and it made no sense that they were facing the wrong way. (133)
So the book provides a rich canvas through which we can begin to grasp some of the human meaning of the experience of the Cultural Revolution. It is important to be clear about the limits of the book, however. It is geographically limited to the extreme northeast of China, the province of Heilongjiang. So it is suggestive of the nature of the experience in other places -- but only suggestive. It would be striking to have other images from Suzhou, Wuhan, or Xi'an; how similar or different were the currents of rage and violence in those other parts of China?

Second, the book does not shed light on the causes or dynamics of the Cultural Revolution. Li refers to the politics of rival factions on several occasions, but we don't get much of an idea of what the shouting was about in those struggles. And there is no basis for drawing inferences about the leadership's intentions and strategies on the basis of this collection. Li's perspective is from the street: these are the demonstrations that occurred, this is how the crowds looked, here are some of the acts of humiliation and violence that occurred in my presence. It is for others to set the stage by uncovering the political dynamics of the Cultural Revolution from beginning to end.

But the questions raised by this volume are enormously important. Li's camera depicts a population gone mad; and yet these were ordinary people just like the citizens of Albany or Albuquerque or Peoria. So we are forced to ask, what are the conditions that make a populace ready for this kind of raging cruelty; and what are the sparks that unleash the outbreak of a period like the Cultural Revolution?