Saturday, February 22, 2020

Fascist attacks on democracy


The hate-based murders of at least nine young people in Hanau, Germany this week brought the world's attention once again to right-wing extremism in Germany and elsewhere. The prevalence of right-wing extremist violence in Germany today is shocking, and it presents a deadly challenge to democratic institutions in modern Germany. Here is the German justice minister, quoted in the New York Times (link):
“Far-right terror is the biggest threat to our democracy right now,” Christine Lambrecht, the justice minister, told reporters on Friday, a day after joining the country’s president at a vigil for the victims. “This is visible in the number and intensity of attacks.”
Extremist political parties like the Alternative for Germany (AfD) and the National Democratic Party (link, link) have moved from the extremist fringe to become powerful political organizations in Germany, and it is not clear that the German government has strategies that will work to reduce their power and influence. Most important, these parties, and many lesser organizations, spread a message of populist hate, division, and distrust that motivates some Germans to turn to violence against immigrants and other targeted minorities. These political messages can rightly be blamed for cultivating an atmosphere of hate and resentment that provokes violence; right-wing populist extremism is fertile ground for political and social violence. (Here is an excellent report from the BBC on the political messages and growing political influence of the AfD in Germany (link).)

Especially disturbing for the fate of democracy in Germany is the rising level of violence and threat against local elected officials over their support for refugee integration. (Here is a story in the New York Times (2/21/20) that documents this aspect of the crisis; link.) The story opens with an account of the near-fatal attack in 2015 on Henriette Reker, candidate for mayor of Cologne. She survived the attack and won the election, but has been subject to horrendous death threats ever since. And she is not alone; local officials in many towns and municipalities have been subjected to similar persistent threats. According to the story, there were 1,240 politically motivated attacks against politicians and elected officials (link). Of these attacks, about 33% were attributed to right-wing extremists, roughly double the number attributed to left-wing extremists. Here is a summary from the Times story:
The acrimony is felt in town halls and village streets, where mayors now find themselves the targets of threats and intimidation. The effect has been chilling. 
Some have stopped speaking out. Many have quit, tried to arm themselves or taken on police protection. The risks have mounted to such an extent that some German towns are unable to field candidates for leadership at all. 
“Our democracy is under attack at the grass-roots level,” Ms. Reker said in a recent interview in Cologne’s City Hall. “This is the foundation of our democracy, and it is vulnerable.” 
This is particularly toxic for the institutions of democratic governance, because the direct and obvious goal is to intimidate government officials from carrying out their duties. This is fascism.

What strategies exist that will help to reduce the appeal of right-wing extremism and the currents of hatred and resentment that these forms of populism thrive on? In practical terms, how can liberal democracies (e.g. Germany, Britain, or the United States) reduce the appeal of white supremacy, nationalism, racism, and xenophobia while enhancing citizens' commitment to the civic values of equality and rule of law?

One strategy involves strengthening the institutions of democracy and the trust and confidence that citizens have in those institutions. This is the approach developed in an important 2013 issue of Daedalus (link) devoted to civility and the common good. This approach includes efforts at improving civic education for young people. It also includes reforming political and electoral institutions in such a way as to address the obvious sources of inequality of voice that they currently involve. In the United States, for example, the prevalence of extreme and politicized practices of gerrymandering has the obvious effect of reducing citizens' confidence in their electoral institutions. Their elected officials have deliberately taken policy steps to reduce citizens' ability to affect electoral outcomes. Likewise, the erosion of voting rights in the United States through racially aimed changes to voter registration procedures, polling hours and locations, and other aspects of the institutions of voting provokes cynicism and detachment from the institutions of government. (McAdam and Kloos make these arguments in Deeply Divided: Racial Politics and Social Movements in Postwar America.)

Second, much of the appeal of right-wing extremism turns on lies about minorities (including immigrants). Mainstream and progressive parties should do a much better job of communicating the advantages to the whole of society that flow from diversity, talented immigrants, and an inclusive community. Mainstream parties need to expose and de-legitimize the lies that right-wing politicians use to stir up anger, resentment, and hatred against various other groups in society, and they need to convey a powerful and positive narrative of their own.

Another strategy to enhance civility and commitment to core democratic values is to reduce the economic inequalities that all too often provoke resentment and distrust across groups within society. Justin Gest illustrates this dynamic in The New Minority; the dis-employed workers in East London and Youngstown, Ohio have good reason to think their lives and concerns have been discarded by the economies in which they live. As John Rawls believed, a stable democracy depends upon the shared conviction that the basic institutions of society are working to the advantage of all citizens, not just the few (Justice as Fairness: A Restatement).

Finally, there is the police response. Every government has a responsibility to protect its citizens from violence. When groups actively conspire to commit violence against others -- whether it is Baader-Meinhof, radical spinoffs of AfD, or the KKK -- the state has a responsibility to uncover, punish, and disband those groups. Germany's anti-terrorist police forces are now placing higher priority on right-wing terrorism than they apparently have done in the past, and this is a clear responsibility for a government charged with ensuring the safety of the public (link). (It is worrisome to find that members of the police and military are themselves sometimes implicated in right-wing extremist groups in Germany.) Here are a few paragraphs from a recent Times article on arrests of right-wing terrorists:
BERLIN — Twelve men — one a police employee — were arrested Friday on charges of forming and supporting a far-right terrorism network planning wide-ranging attacks on politicians, asylum seekers and Muslims, the authorities said. 
The arrests come as Germany confronts both an increase in violence and an infiltration of its security services by far-right extremists. After focusing for years on the risks from Islamic extremists and foreign groups, officials are recalibrating their counterterrorism strategy to address threats from within. 
The arrests are the latest in a series of episodes that Christine Lambrecht, the justice minister, called a “very worrying right-wing extremist and right-wing terrorist threat in our country.” 
“We need to be particularly vigilant and act decisively against this threat,” she said on Twitter. (link)
The German political system is not well prepared for the onslaught of radical right-wing populism and violence. But much the same can be said of the United States, with a president who espouses many of the same hate-based doctrines that fuel the rise of radical populism in other countries, and in a national climate where hate-based crimes have accelerated in the past several years. (Here is a recent review of hate-based groups and crimes in the United States provided by the Southern Poverty Law Center; link.) And, as in Germany, the FBI has been slow to place appropriate priority on the threat of right-wing terrorism in the United States.

(This opinion piece in the New York Times by Anna Sauerbrey (link) describes one tool available to the German government that is not available in the United States -- strong legal prohibitions of neo-Nazi propaganda and incitement to hatred:
“There is the legal concept of Volksverhetzung,” the incitement to hatred: Anybody who denigrates an individual or a group based on their ethnicity or religion, or anybody who tries to rouse hatred or promotes violence against such a group or an individual, could face a sentence of up to five years in prison.
Because of the virtually unlimited protection of freedom of speech and association guaranteed by the First Amendment, such prohibitions do not exist in the United States. Here is an earlier discussion of this topic (link).)

Thursday, February 20, 2020

Slime mold intelligence



We often think of intelligent action in terms of a number of ideas: goal-directedness, belief acquisition, planning, prioritization of needs and wants, oversight and management of bodily behavior, and weighting of risks and benefits of alternative courses of action. These assumptions presuppose the existence of the rational subject who actively orchestrates goals, beliefs, and priorities into an intelligent plan of action. (Here is a series of posts on "rational life plans"; link, link, link.)

It is interesting to discover that some simple adaptive systems apparently embody an ability to modify behavior so as to achieve a specific goal without possessing a number of these cognitive and computational functions. These systems seem to embody some kind of cross-temporal intelligence. An example that is worth considering is the spatial and logistical capabilities of the slime mold. A slime mold is a multi-cellular "organism" consisting of large numbers of independent cells without a central control function or nervous system. It is perhaps more accurate to refer to the population as a colony rather than an organism. Nonetheless the slime mold has a remarkable ability to seek out and "optimize" access to food sources in the environment through the creation of a dynamic network of tubules established through space.

The slime mold lacks beliefs, it lacks a central cognitive function or executive function, it lacks "memory" -- and yet the organism (colony?) achieves a surprising level of efficiency in exploring and exploiting the food environment that surrounds it. Researchers have used slime molds to simulate the structure of logistical networks (rail and road networks, telephone and data networks), and the results are striking. A slime mold colony appears to be "intelligent" in performing the task of efficiently discovering and exploiting food sources in the environment in which it finds itself.

One of the earliest explorations of this parallel between biological networks and human-designed networks was Tero et al., "Rules for Biologically Inspired Adaptive Network Design," published in Science in 2010 (link). Here is the abstract of their article:
Abstract Transport networks are ubiquitous in both social and biological systems. Robust network performance involves a complex trade-off involving cost, transport efficiency, and fault tolerance. Biological networks have been honed by many cycles of evolutionary selection pressure and are likely to yield reasonable solutions to such combinatorial optimization problems. Furthermore, they develop without centralized control and may represent a readily scalable solution for growing networks in general. We show that the slime mold Physarum polycephalum forms networks with comparable efficiency, fault tolerance, and cost to those of real-world infrastructure networks—in this case, the Tokyo rail system. The core mechanisms needed for adaptive network formation can be captured in a biologically inspired mathematical model that may be useful to guide network construction in other domains.
Their conclusion is this:
Overall, we conclude that the Physarum networks showed characteristics similar to those of the [Japanese] rail network in terms of cost, transport efficiency, and fault tolerance. However, the Physarum networks self-organized without centralized control or explicit global information by a process of selective reinforcement of preferred routes and simultaneous removal of redundant connections. (441)
They attempt to uncover the mechanism through which this selective reinforcement of routes takes place, using a simulation "based on feedback loops between the thickness of each tube and internal protoplasmic flow in which high rates of streaming stimulate an increase in tube diameter, whereas tubes tend to decline at low flow rates" (441). The simulation is successful in approximately reproducing the observable dynamics of evolution of the slime mold networks. Here is their summary of the simulation:
Our biologically inspired mathematical model can capture the basic dynamics of network adaptability through iteration of local rules and produces solutions with properties comparable to or better than those of real-world infrastructure networks. Furthermore, the model has a number of tunable parameters that allow adjustment of the benefit-cost ratio to increase specific features, such as fault tolerance or transport efficiency, while keeping costs low. Such a model may provide a useful starting point to improve routing protocols and topology control for self-organized networks such as remote sensor arrays, mobile ad hoc networks, or wireless mesh networks. (442)
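The adaptation rule they describe can be sketched in a few lines of code. What follows is a minimal, illustrative Python version of a Physarum-style model in that spirit: node pressures are obtained from Kirchhoff's law at each step, and each tube's conductivity is reinforced in proportion to the flux it carries and decays otherwise. The random node layout, the choice of two food nodes, and all parameter values are assumptions made for illustration, not the calibration used by Tero et al.

```python
import numpy as np

# Sketch of a Physarum-style adaptive network, loosely following the feedback
# rule described by Tero et al. (2010): tubes that carry flux thicken, idle
# tubes decay. Geometry, food nodes, and parameters are illustrative assumptions.

rng = np.random.default_rng(0)
n = 30
pos = rng.random((n, 2))                                   # random node positions
L = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)  # tube lengths
np.fill_diagonal(L, np.inf)
D = np.ones((n, n))                                        # initial conductivities (dense network)
np.fill_diagonal(D, 0.0)
food = (0, n - 1)                                          # two "food sources" act as source and sink
gamma, decay, dt, I0 = 1.8, 1.0, 0.01, 1.0

for step in range(2000):
    W = D / L                                              # conductance of each tube (D_ij / L_ij)
    Lap = np.diag(W.sum(axis=1)) - W                       # graph Laplacian for Kirchhoff's law
    b = np.zeros(n)
    b[food[0]] = I0                                        # inject protoplasmic flow at one food source
    Lap[food[1], :] = 0.0                                  # ground the other food source (pressure = 0)
    Lap[food[1], food[1]] = 1.0
    p = np.linalg.solve(Lap, b)                            # node pressures

    Q = W * (p[:, None] - p[None, :])                      # flux through each tube
    growth = np.abs(Q) ** gamma / (1.0 + np.abs(Q) ** gamma)   # saturating reinforcement of used tubes
    D = np.maximum(D + dt * (growth - decay * D), 1e-6)    # reinforce used tubes, prune idle ones

surviving = int((np.triu(D, k=1) > 0.1).sum())
print(f"{surviving} tubes remain out of {n * (n - 1) // 2} initial connections")
```

Run long enough, the dense initial network is pruned down to a small set of thick tubes connecting the food sources -- the "selective reinforcement of preferred routes and simultaneous removal of redundant connections" described in the passage above.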
Here is a summary of what we might describe as the "spatial problem-solving abilities" of the slime mold, based on this research, by Katherine Harmon in a Scientific American blog post (link):
Like the humans behind a constructed network, the organism is interested in saving costs while maximizing utility. In fact, the researchers wrote that this slimy single-celled amoeboid can "find the shortest path through a maze or connect different arrays of food sources in an efficient manner with low total length yet short average minimum distances between pairs of food sources, with a high degree of fault tolerance to accidental disconnection"—and all without the benefit of "centralized control or explicit global information." In other words, it can build highly efficient connective networks without the help of a planning board.
This research has several noteworthy features. First, it seems to provide a satisfactory account of the mechanism through which slime mold "network design intelligence" is achieved. Second, the explanation depends only on locally embodied responses, without needing to appeal to any sort of central coordination or calculation. The process is entirely myopic and locally embodied, and the "global intelligence" of the colony is entirely generated by the locally embodied action states of the individual mold cells. And finally, the simulation appears to offer resources for solving real problems of network design, without the trouble of sending out a swarm of slime mold colonies to work out the most efficient array of connectors.

We might summarize this level of slime-mold intelligence as being captured by:
  • trial-and-error extension of lines of exploration
  • localized feedback on results of a given line leading to increase/decrease of the volume of that line
This system is decentralized and myopic with no ability to plan over time and no "over-the-horizon" vision of potential gains from new lines of exploration. In these respects slime-mold intelligence has a lot in common with the evolution of species in a given ecological environment. It is an example of "climbing Mt. Improbable" involving random variation and selection based on a single parameter (volume of flow rather than reproductive fitness). If this is a valid analogy, then we might be led to expect that the slime mold is capable of finding local optima in network design but not global optima. (Or the slime colony may avoid this trap by being able to fully explore the space of network configurations over time.) What the myopia of this process precludes is the possibility of strategic action and planning -- absorbing sacrifices at an early part of the process in order to achieve greater gains later in the process. Slime molds would not be very good at chess, Go, or war.

I've been tempted to offer the example of slime mold intelligence as a description of several important social processes apparently involving collective intentionality: corporate behavior and discovery of pharmaceuticals (link) and the aggregate behavior of large government agencies (link).

On pharmaceutical companies:
So here's the question for consideration here: what if we attempted to model the system of population, disease, and the pharmaceutical industry by representing pharma and its multiple research and discovery units as the slime organism and the disease space as a set of disease populations with different profitability characteristics? Would we see a major concentration of pharma slime around a few high-frequency, high profit disease-drug pairs? Would we see substantial under-investment of pharma slime on low frequency low profit "orphan" disease populations? And would we see hyper-concentrations around diseases whose incidence is responsive to marketing and diagnostic standards? (link)
On the "intelligence" of firms and agencies:
But it is perfectly plain that the behavior of functional units within agencies is only loosely controlled by the will of the executive. This does not mean that executives have no control over the activities and priorities of subordinate units. But it does reflect a simple and unavoidable fact about large organizations. An organization is more like a slime mold than it is like a control algorithm in a factory. (link)
In each instance the analogy works best when we emphasize the relative weakness of central strategic control (executives) and the solution-seeking activities of local units. But of course there is a substantial degree of executive involvement in both private and public organizations -- not fully effective, not algorithmic, but present nonetheless. So the analogy is imperfect. It might be more accurate to say that the behavior of large complex organizations incorporates both imperfect central executive control and the activities of local units with myopic search capabilities coupled with feedback mechanisms. The resulting behavior of such a system will not look at all like the idealized business-school model of "fully implemented rational business plans", but it will also not look like a purely localized resource-maximizing network of activities.

******

Here is a very interesting set of course notes in which Prof. Donglei Du from the University of New Brunswick sets the terms for a computational and heuristic solution to a similar set of logistics problems. Du asks his students to consider the optimal locations of warehouses to supply retailers in multiple locations; link. Here is how Du formulates the problem:

Assuming that plants and retailer locations are fixed, we concentrate on the following strategic decisions in terms of warehouses:
  • Pick the optimal number, location, and size of warehouses
  • Determine optimal sourcing strategy
    • Which plant/vendor should produce which product
  • Determine best distribution channels
    • Which warehouses should service which retailers

The objective is to design or reconfigure the logistics network so as to minimize annual system-wide costs, including:
  • Production/purchasing costs
  • Inventory carrying costs and facility costs (handling and fixed costs)
  • Transportation costs
    As Du demonstrates, the mathematics involved in an exact solution are challenging, and become rapidly more difficult as the number of nodes increases.
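    To make the combinatorial difficulty concrete, here is a toy version of the warehouse-location decision solved by brute force in Python. It is a deliberately simplified stand-in for Du's formulation -- an uncapacitated facility-location problem with invented random costs and no sourcing, inventory, or capacity constraints -- so the numbers mean nothing; the point is only that the number of candidate warehouse subsets grows exponentially with the number of sites.

```python
from itertools import combinations
import numpy as np

# Toy uncapacitated facility-location problem, brute-forced over every subset
# of candidate warehouse sites. All cost data are random and purely illustrative.

rng = np.random.default_rng(1)
n_sites, n_retailers = 12, 40
fixed_cost = rng.uniform(50, 100, n_sites)                 # annual cost of opening each warehouse
ship_cost = rng.uniform(1, 20, (n_sites, n_retailers))     # cost of serving each retailer from each site

best_cost, best_open = float("inf"), None
for k in range(1, n_sites + 1):
    for open_sites in combinations(range(n_sites), k):
        idx = list(open_sites)
        # Each retailer is served by its cheapest open warehouse.
        cost = fixed_cost[idx].sum() + ship_cost[idx].min(axis=0).sum()
        if cost < best_cost:
            best_cost, best_open = cost, open_sites

print(f"best annual cost {best_cost:.1f} with warehouses {best_open}")
print(f"subsets examined: {2 ** n_sites - 1}")             # exponential in the number of candidate sites
```

With twelve candidate sites the enumeration is instant; with fifty it is already hopeless, which is why practical solutions rely on mixed-integer programming or heuristics of the kind Du discusses.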

    Even though this example looks rather similar to the rail system example above, it is difficult to see how it might be modeled using a slime mold colony. The challenge seems to be that the optimization problem here is the question of placement of nodes (warehouses) rather than placement of routes (tubules).

    Tuesday, February 18, 2020

    Methods of causal inquiry


    This diagram provides a map of an extensive set of methods of causal inquiry in the social sciences. The goal here is to show that the many approaches that social scientists have taken to discovering causal relationships have an underlying order, and they can be related to a small number of ontological ideas about social causation. (Here is a higher resolution version of the image; link.)

    We begin with the idea that causation involves the production of an outcome by a prior set of conditions mediated by a mechanism. The task of causal inquiry is to discover the events, conditions, and processes that combine to bring about the outcome of interest. Given that causal relationships are often unobservable and complexly intertwined with multiple other causal processes, we need to have methods of inquiry to allow us to use observable evidence and hypothetical theories about causal mechanisms to discover valid causal relationships.

    The upper left node of the diagram reviews the basic elements of the ontology of social causation. It gives priority to the idea of causal realism -- the view that social causes are real and inhere in a substrate of social action constituted by social actors and their relations and interactions. This substrate supports the existence of causal mechanisms (and powers) through which causal relations unfold. It is noted that causes are often manifest in a set of necessary and/or sufficient conditions: if X had not occurred, Y would not have occurred. Causes support (and are supported by) counterfactual statements -- our reasoning about what would have occurred in somewhat different circumstances. The important qualification to the simple idea of exceptionless causation is the fact that much causation is probabilistic rather than exceptionless: the cause increases (or decreases) the likelihood of occurrence of its effect. Both exceptionless causation and probabilistic causation support the basic Humean idea that causal relations are often manifest in observable regularities.

    These features of real causal relations give rise to a handful of different methods of inquiry.

    First, there is a family of methods of causal inquiry that involve search for underlying causal mechanisms. These include process tracing, individual case studies, paired comparisons, comparative historical sociology, and the application of theories of the middle range.

    Second, the ontology of generative causal mechanisms suggests the possibility of simulations as a way of probing the probable workings of a hypothetical mechanism. Agent-based models and computational simulations more generally are formal attempts to identify the dynamics of the mechanisms postulated to bring about specific social outcomes.

    Third, the fact that causes produce their effects supports the use of experimental methods. Both exceptionless causation and probabilistic causation support experimentation; the researcher attempts to discern causation by creating a pair of experimental settings differing only in the presence or absence of the "treatment" (hypothetical causal agent), and observing the outcome.

    Fourth, the fact that exceptionless causation produces a set of relationships among events that illustrate the logic of necessary and sufficient conditions permits a family of methods inspired by J.S. Mill's methods of agreement and difference. If we can identify all potentially relevant causal factors for the occurrence of an outcome and if we can discover a real case illustrating every combination of presence and absence of those factors and the outcome of interest, then we can use truth-functional logic to infer the necessary and/or sufficient conditions that produce the outcome. These results constitute J.L. Mackie's INUS conditions for the causal system under study (insufficient but non-redundant parts of a condition which is itself unnecessary but sufficient for the occurrence of the effect). Charles Ragin's Boolean methods and fuzzy-set theories of causal analysis, and the method of qualitative comparative analysis (QCA), conform to the same logical structure.
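    A small sketch can make this truth-functional logic concrete. The cases, factors, and outcomes below are invented for illustration; real qualitative comparative analysis requires observed cases covering the relevant configurations and a systematic minimization procedure (such as Quine-McCluskey), which this toy version does not attempt.

```python
# Toy truth-table check of necessity and sufficiency, in the spirit of Mill's
# methods and Ragin-style Boolean analysis. Cases and factors are invented.

# Each observed case: configuration of factors (A, B, C) -> whether the outcome occurred.
cases = {
    (1, 1, 0): 1,
    (1, 1, 1): 1,
    (1, 0, 0): 0,
    (0, 1, 0): 0,
    (0, 1, 1): 0,
    (0, 0, 0): 0,
}

factors = ["A", "B", "C"]

# A factor is necessary if it is present in every case where the outcome occurs,
# and sufficient if the outcome occurs in every case where the factor is present.
for i, name in enumerate(factors):
    necessary = all(cfg[i] == 1 for cfg, out in cases.items() if out == 1)
    sufficient = all(out == 1 for cfg, out in cases.items() if cfg[i] == 1)
    print(f"{name}: necessary={necessary}, sufficient={sufficient}")

# Conjunctions can be tested the same way; here A&B turns out to be sufficient.
ab_sufficient = all(out == 1 for cfg, out in cases.items() if cfg[0] and cfg[1])
print(f"A&B sufficient: {ab_sufficient}")
```

    In this invented truth table neither A nor B is sufficient on its own, but the conjunction A&B is -- the kind of conjunctural, INUS-style result that Boolean methods are designed to expose.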

    Probabilistic causation cannot be discovered using these Boolean methods, but it is possible to apply statistical and probabilistic methods to large datasets to discover facilitating and inhibiting conditions and multifactorial and conjunctural causal relations. Statistical analysis can produce evidence of what Wesley Salmon refers to as "causal relevance" (conditional probabilities that differ from the background population probabilities). This is expressed as P(O|A&B&C) ≠ P(O).
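    Here is a minimal illustration of this idea of causal relevance on simulated data: factor A raises the probability of the outcome while factor B is irrelevant, and the estimated conditional probabilities pick up the difference. The effect sizes and sample size are assumptions made for the example, and a real analysis would also have to control for confounding.

```python
import numpy as np

# Salmon-style "causal relevance" screening on simulated data: a factor is
# causally relevant to outcome O when P(O | factor) differs from baseline P(O).
# The data-generating process below is an assumption made for illustration.

rng = np.random.default_rng(42)
n = 100_000
A = rng.random(n) < 0.4                      # candidate causal factor
B = rng.random(n) < 0.5                      # irrelevant background factor
O = rng.random(n) < np.where(A, 0.30, 0.10)  # outcome: more likely when A is present

print(f"P(O)     = {O.mean():.3f}")
print(f"P(O | A) = {O[A].mean():.3f}   # differs from baseline: A is causally relevant")
print(f"P(O | B) = {O[B].mean():.3f}   # matches baseline: B is not")
```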

    Finally, the fact that causal factors can be relied upon to give rise to some kind of statistical associations between factors and outcomes supports the application of methods of inquiry involving regression, correlation analysis, and structural equation modeling. 
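    A toy regression example, again on simulated data with a stipulated data-generating process, shows the basic move: adjusting for a confounder Z recovers the stipulated effect of X on Y, while the unadjusted slope is biased. The coefficients are invented for illustration, and regression output by itself establishes association rather than causation -- the caution developed in the next paragraph.

```python
import numpy as np

# Toy regression on simulated data: Z confounds the relation between X and Y.
# Adjusting for Z recovers the stipulated effect of X (2.0); the naive slope does not.

rng = np.random.default_rng(3)
n = 5_000
Z = rng.normal(size=n)                         # confounder
X = 0.5 * Z + rng.normal(size=n)               # hypothesized cause
Y = 2.0 * X + 1.0 * Z + rng.normal(size=n)     # outcome

design = np.column_stack([np.ones(n), X, Z])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
print(f"adjusted estimate of X's effect:  {coef[1]:.2f}")     # close to 2.0

naive_slope = np.polyfit(X, Y, 1)[0]           # regress Y on X alone, ignoring Z
print(f"unadjusted estimate (confounded): {naive_slope:.2f}") # biased upward
```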

    It is important to emphasize that none of these methods is privileged over all the others, and none permits a purely inductive or empirical study to arrive at valid claims about causation. Instead, we need to have hypotheses about the mechanisms and powers that underlie the causal relationships we identify, and the features of the causal substrate that give these mechanisms their force. In particular, it is sometimes believed that experimental methods, randomized controlled trials, or purely statistical analysis of large datasets can establish causation without reference to hypothesis and theory. However, none of these claims stands up to scrutiny. There is no "gold standard" of causal inquiry.

    This means that causal inquiry requires a plurality of methods of investigation, and it requires that we arrive at theories and hypotheses about the real underlying causal mechanisms and substrate that give rise to ("generate") the outcomes that we observe.

    Sunday, February 16, 2020

    Generativity and emergence


    Social entities and structures have properties that exercise causal influence over all of us, and over the continuing development of the society in which we live. Schools, corporations, armies, terror networks, transport networks, markets, churches, and cities all fall in this range -- they are social compounds or entities that shape the behavior of the individuals who live and work within them, and they have substantial effects on the broader society as well.

    So it is unsurprising that sociologists and ordinary observers alike refer to social structures, organizations, and practices as real components of the social world. Social entities have properties that make a difference, at the individual level and at the social and historical level. Individuals are influenced by the rules and practices of the organizations that employ them; and political movements are influenced by the competition that exists among various religious organizations. Putting the point simply, social entities have real causal properties that influence daily life and the course of history.

    What is less clear in the social sciences, and in the areas of philosophy that take an interest in such things, is where those causal properties come from. We know from physics that the causal properties of metallic silver derive from the quantum-level properties of the atoms that make it up. Is something parallel to this true in the social realm as well? Do the causal properties of a corporation derive from the properties of the individual human beings who make it up? Are social properties reducible to individual-level facts?

    John Stuart Mill was an early advocate for methodological individualism. In 1843 he wrote his System of Logic: Ratiocinative and Inductive, which contained his view of the relationships that exist between the social world and the world of individual thought and action:
    All phenomena of society are phenomena of human nature, generated by the action of outward circumstances upon masses of human beings; and if, therefore, the phenomena of human thought, feeling, and action are subject to fixed laws, the phenomena of society can not but conform to fixed laws. (Book VI, chap. VI, sect. 2)
    With this position he set the stage for much of the thinking in social science disciplines like economics and political science, with the philosophical theory of methodological individualism.

    About sixty years later Emile Durkheim took the opposite view. He believed that social properties were autonomous with respect to the individuals that underlie them. In 1901 he wrote in the preface to the second edition of Rules of Sociological Method:
    Whenever certain elements combine and thereby produce, by the fact of their combination, new phenomena, it is plain that these new phenomena reside not in the original elements but in the totality formed by their union. The living cell contains nothing but mineral particles, as society contains nothing but individuals. Yet it is patently impossible for the phenomena characteristic of life to reside in the atoms of hydrogen, oxygen, carbon, and nitrogen.... Let us apply this principle to sociology. If, as we may say, this synthesis constituting every society yields new phenomena, differing from those which take place in individual consciousness, we must, indeed, admit that these facts reside exclusively in the very society itself which produces them, and not in its parts, i.e., its members.... These new phenomena cannot be reduced to their elements. (preface to the 2nd edition)
    These ideas provided the basis for what we can call "methodological holism".

    So the issue between Mill and Durkheim is the question of whether the properties of the higher-level social entity can be derived from the properties of the individuals who make up that entity. Mill believed yes, and Durkheim believed no.

    This debate persists to the current day, and the positions are both more developed, more nuanced, and more directly relevant to social-science research. Consider first what we might call "generativist social-science modeling". This approach holds that methodological individualism is obviously true, and the central task for the social sciences is to actually perform the reduction of social properties to the actions of individuals by providing computational models that reproduce the social property based on a model of the interacting individuals. These models are called "agent-based models" (ABM). Computational social scientist Joshua Epstein is a recognized leader in this field, and his book Growing Artificial Societies: Social Science From the Bottom Up provides developed examples of ABMs designed to explain well-known social phenomena from the disappearance of the Anasazi in the American Southwest to the occurrence of social unrest. Here is his summary statement of the approach:
    To the generativist, explaining macroscopic social regularities, such as norms, spatial patterns, contagion dynamics, or institutions requires that one answer the following question: How could the autonomous local interactions of heterogeneous boundedly rational agents generate the given regularity? Accordingly, to explain macroscopic social patterns, we generate—or “grow”—them in agent models. 
    Epstein's memorable aphorism summarizes the field -- "If you didn't grow it, you didn't explain its emergence." A very clear early example of this approach is an agent-based simulation of residential segregation provided by Thomas Schelling in "Dynamic Models of Segregation" (Journal of Mathematical Sociology, 1971; link). The model shows that simple assumptions about the neighborhood-composition preferences of individuals of two groups, combined with the fact that individuals can freely move to locations that satisfy their preferences, lead almost invariably to strongly segregated urban areas.
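    A minimal Schelling-style simulation conveys how the macro-pattern is "grown" from the micro-rules. The grid size, vacancy rate, 30% same-group threshold, and number of rounds below are illustrative assumptions rather than Schelling's original parameters.

```python
import numpy as np

# Schelling-style segregation sketch: agents with a mild preference for
# same-group neighbors relocate to random vacant cells until (mostly) content.
# Grid size, vacancy rate, threshold, and round count are illustrative assumptions.

rng = np.random.default_rng(7)
size, vacancy, threshold = 50, 0.1, 0.3

# 0 = empty cell, 1 and 2 = agents of the two groups, placed at random.
grid = rng.choice([0, 1, 2], size=(size, size),
                  p=[vacancy, (1 - vacancy) / 2, (1 - vacancy) / 2])

def same_group_shares(g):
    """For each agent, the share of its occupied neighbors belonging to its own group."""
    out = {}
    for i in range(size):
        for j in range(size):
            if g[i, j] == 0:
                continue
            nb = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            occupied = (nb != 0).sum() - 1               # neighbors, excluding the agent itself
            if occupied > 0:
                out[(i, j)] = ((nb == g[i, j]).sum() - 1) / occupied
    return out

print(f"initial mean same-group neighbor share: {np.mean(list(same_group_shares(grid).values())):.2f}")

for _ in range(30):                                      # rounds of relocation
    unhappy = [cell for cell, share in same_group_shares(grid).items() if share < threshold]
    for i, j in unhappy:
        empties = np.argwhere(grid == 0)
        k, m = empties[rng.integers(len(empties))]       # move to a random vacant cell
        grid[k, m], grid[i, j] = grid[i, j], 0

print(f"final mean same-group neighbor share:   {np.mean(list(same_group_shares(grid).values())):.2f}")
```

    Even with agents content in mixed neighborhoods (a 30% same-group requirement), repeated local moves typically push the average same-group neighbor share far above that threshold: segregation emerges that no individual agent preferred.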

    There is a surface plausibility to the generativist approach, but close inspection of many of these simulations lays bare some important deficiencies. In particular, a social simulation necessarily abstracts mercilessly from the complexities of both the social environment and the dynamics of individual action. It is difficult to represent the workings of higher-level social entities within an agent-based model -- for example, organizations and social practices. And ABMs are not well designed for the task of representing dynamic social features that other researchers on social action take to be fundamental -- for example, the quality of leadership, the content of political messages, or the high degree of path dependence that most real instances of political mobilization reflect.

    So if methodological individualism is a poor guide to social research, what is the alternative? The strongest opposition to generativism and reductionism is the view that social properties are "emergent". This means that social ensembles sometimes possess properties that cannot be explained by or reduced to the properties and actions of the participants. For example, it is sometimes thought that a political movement (e.g. Egyptian activism in Tahrir Square in 2011) possessed characteristics that were different in kind from the properties of the individuals and activists who made it up.

    There are a few research communities currently advocating for a strong concept of emergence. One is the field of critical realism, a philosophy of science developed by Roy Bhaskar in A Realist Theory of Science (1975) and The Possibility of Naturalism (1979). According to Bhaskar, we need to investigate the social world by looking for the real (though usually unobservable) mechanisms that give rise to social stability and change. Bhaskar is anti-reductionist, and he maintains that social entities have properties that are different in kind from the properties of individuals. In particular, he believes that the social mechanisms that generate the social world are themselves created by the autonomous causal powers of social entities and structures. So attempting to reduce a process of social change to the actions of the individuals who make it up is a useless exercise; these individuals are themselves influenced by the autonomous causal powers of larger social forces.

    Another important current line of thought that defends the idea of emergence is the theory of assemblage, drawn from Gilles Deleuze but substantially developed by Manuel DeLanda in A New Philosophy of Society: Assemblage Theory and Social Complexity (2006) and Assemblage Theory (2016). This theory argues for a very different way of conceptualizing the social world. This approach proposes that we should understand complex social entities as compounds of heterogeneous and independent lesser entities, structures, and practices. Social entities do not have "essences". Instead, they are contingent and heterogeneous ensembles of parts that have been brought together in contingent ways. But crucially, DeLanda maintains that assemblages too have emergent properties that do not derive directly from the properties of the parts. A city has properties that cannot be explained in terms of the properties of its parts. So assemblage theory too is anti-reductionist.

    The claim of emergence too has a superficial appeal. It is clear, for one thing, that social entities have effects that are autonomous with respect to the particular individuals who compose them. And it is clear as well that there are social properties that have no counterpart at the individual level (for example, social cohesion). So there is a weak sense in which it is possible to accept a concept of emergence. However, that weak sense does not rule out either generativity or reduction in principle. It is possible to hold both generativity and weak emergence consistently. And the stronger sense -- that emergent properties are unrelated to and underivable from lower level properties -- seems flatly irrational. What could strongly emergent properties depend on, if not the individuals and social relations that make up these higher-level social entities?

    For this reason it is reasonable for social scientists to question both generativity and strong emergence. We are better off avoiding the strong claims of both generativity and emergence, in favor of a more modest social theory. Instead, it is reasonable to advocate for the idea of the relative explanatory autonomy of social properties. This position comes down to a number of related ideas. Social properties are ultimately fixed by the actions and thoughts of socially constituted individuals. Social properties are stable enough to admit of direct investigation. Social properties are relatively autonomous with respect to the specific individuals who occupy positions within these structures. And there is no compulsion to perform reductions of social properties through ABMs or any other kind of derivation. (These are ideas that were first advocated in 1974 by Jerry Fodor in "Special sciences: Or: The disunity of science as a working hypothesis" (link).)

    It is interesting to note that a new field of social science, complexity studies, has relevance to both ends of this dichotomy. Joshua Epstein himself is a complexity theorist, dedicated to discovering mathematical methods for understanding complex systems. Other complexity scientists like John Miller and Scott Page are open to the idea of weak emergence in Complex Adaptive Systems: An Introduction to Computational Models of Social Life. Here is how Miller and Page address the idea of emergence in CAS:
    The usual notion put forth underlying emergence is that individual, localized behavior aggregates into global behavior that is, in some sense, disconnected from its origins. Such a disconnection implies that, within limits, the details of the local behavior do not matter to the aggregate outcome. (CAS, p. 44)
    Herbert Simon is another key contributor to modern complexity studies. Simon believed that complex systems have properties that are irreducible to the properties of their components for pragmatic reasons, including especially computational intractability. It is therefore reasonable, in his estimation, to look at higher-level social properties as being emergent -- even though we believe in principle that these properties are ultimately determined by the properties of the components. Here is his treatment in the third edition of The Sciences of the Artificial (1996):
    [This amounts to] reductionism in principle even though it is not easy (often not even computationally feasible) to infer rigorously the properties of the whole from knowledge of the properties of the parts. In this pragmatic way, we can build nearly independent theories for each successive level of complexity, but at the same time, build bridging theories that show how each higher level can be accounted for in terms of the elements and relations of the next level down. (172)
    The debate over generativity and emergence may seem like an arcane issue that is of interest only to philosophers and the most theoretical of social scientists. But in fact, disputes like this one have real consequences for the conduct of an area of scientific research. Suppose we are interested in the sociology of hate-based social movements. If we begin with the framework of reductionism and generativism, we may be led to focus on the social psychology of adherents and the aggregative processes through which potential followers are recruited into a hate-based movement. If, on the other hand, we believe that social structures and practices have relatively autonomous causal properties, then we will be led to consider the empirical specifics of the workings of organizations like White Citizens Councils, legal structures like the laws that govern hate-based political expressions in Germany and France, and the ways that the Internet may influence the spread of hate-based values and activism. In each of these cases the empirical research is directed in important measure to the concrete workings of the higher-level social institutions that are hypothesized to influence the emergence and shape of hate-based movements. In other words, the sociological research that we conduct is guided in part by the assumptions we make about social ontology and the composition of the social world.

    Friday, February 7, 2020

    The future of our democracy


    How can the United States recover its culture of civility and mutual respect among citizens after the bitter, unlimited toxicity of the first three years of Donald Trump's presidency? Trump's political movement, and the President himself, have gone in for an unbridled rhetoric of hatred, suspicion, racism, and white supremacist ideology that seems to have created a durable constituency for these hateful ideas. Even more troublingly, the President has cast doubt on the democratic process itself and the legitimacy of our electoral and judicial institutions.

    Deeply troubling is the fact that the President consistently attempts to mobilize support purely on the basis of division, hatred, and contempt for his opponents. He has provided virtually no sustained exposition or defense of the policy positions he advocates -- anti-immigrant, anti-trade, anti-NATO, anti-Federal Reserve, anti-government. Instead, his appeals amount ultimately to no more than a call to hatred and rejection of his opponents. His current shameful threats against those who supported his impeachment (including Lt. Col. Alexander Vindman, pictured above) are simply the latest version of his politics of threat, hatred, and intimidation. This president has never understood his responsibility to serve all the people of our country -- not merely his supporters -- and to support its constitution and governing institutions faithfully and in support of the public good.

    And almost all Republican leaders (with the admirable exception of Mitt Romney) have swallowed their own principles and have accepted these political appeals -- even as some observers have noted how much the current rhetoric resembles that of Benito Mussolini (link). If even a fraction of the voters who currently support the Trump movement do so with a positive endorsement of the racism and white-supremacy that the President and his supporters project, then there are tens of millions of hate-based partisans in our polity.

    It is an urgent and pressing problem to find strategies for beginning to bring these citizens back from the brink of right wing extremism and hate.

    One possible view is that the goal is unattainable. We might judge that it is very uncommon for hate-based partisans to change their attitudes and actions. So the best we can do is to minimize the likelihood that these individuals will do harm to others, and to maximize the impact and public visibility of more liberal people and movements. (The term "liberal" here isn't grounded in left-right orientation but rather the values of open-mindedness, tolerance, mutual respect, belief in democracy, and civility. Conservatives can be liberal in this sense.)

    Another possibility is that the extremism currently visible among Trump supporters is just a short term eruption, which will subside following the 2020 election. This doesn't seem very likely, given the virulence of animosity, suspicion, and hatred currently on display among many of Trump's supporters. It seems to be easier to incite hatred than to quench it, and it seems unlikely that these activists will quietly morph into tolerant and civil citizens.

    A third possibility is that we will have to acknowledge the presence of hate-based extremists and organizations among us and work aggressively to build up a younger constituency for progressive and tolerant values to present a stronger voice in support of inclusion and democracy. This is not so different from the current situation in some Western European democracies today, where virulent extremist political organizations compete with more inclusive and democratic organizations.

    The difference between our current circumstances in the winter of 2020 and those of November 2016 is the steady degradation of our institutions that the Trump administration has successfully undertaken. Packing the Federal courts with right-wing ideologues (often rated unqualified by the American Bar Association), treating the Congress and its elected members with contempt, derision, and threat, flouting the laws and ethics surrounding the status of whistle-blowers, appointing unqualified ideologues to direct Federal agencies like the EPA, Homeland Security, and Commerce, and subverting the ethics and political neutrality of the Department of Justice -- these are harms that may never be fully repaired. The moral corruption of the leaders of the GOP -- their fundamental and all but universal unwillingness to publicly reject the outrageous and anti-democratic behavior of this President -- will never be forgotten.

    What is the future of our democracy? Can we regain the fundaments of a tolerant, institutionally stable polity in which government is regulated by institutions and politicians are motivated to work to enhance the preconditions of civility and democratic equality? Or are we headed to an even more personalized form of presidential rule -- a twenty-first century version of nationalist authoritarianism, or fascism?

    Madeleine Albright expressed just such worries about the future of our democracy almost two years ago in Fascism: A Warning, and her words are deeply worrisome, perhaps prophetic.
    Fascist attitudes take hold when there are no social anchors and when the perception grows that everybody lies, steals, and cares only about him-or herself. That is when the yearning is felt for a strong hand to protect against the evil “other”—whether Jew, Muslim, black, so-called redneck, or so-called elite. Flawed though our institutions may be, they are the best that four thousand years of civilization have produced and cannot be cast aside without opening the door to something far worse. The wise response to intolerance is not more intolerance or self-righteousness; it is a coming together across the ideological spectrum of people who want to make democracies more effective. We should remember that the heroes we cherish—Lincoln, King, Gandhi, Mandela—spoke to the best within us. The crops we’ll harvest depend on the seeds we sow. (kl 94)
    Fascism, most of the students agreed, is an extreme form of authoritarian rule. Citizens are required to do exactly what leaders say they must do, nothing more, nothing less. The doctrine is linked to rabid nationalism. It also turns the traditional social contract upside down. Instead of citizens giving power to the state in exchange for the protection of their rights, power begins with the leader, and the people have no rights. Under Fascism, the mission of citizens is to serve; the government’s job is to rule. (kl 261)
    Or consider how Steven Levitsky and Daniel Ziblatt pose their fears in How Democracies Die:
    But now we find ourselves turning to our own country. Over the past two years, we have watched politicians say and do things that are unprecedented in the United States—but that we recognize as having been the precursors of democratic crisis in other places. We feel dread, as do so many other Americans, even as we try to reassure ourselves that things can’t really be that bad here. After all, even though we know democracies are always fragile, the one in which we live has somehow managed to defy gravity. Our Constitution, our national creed of freedom and equality, our historically robust middle class, our high levels of wealth and education, and our large, diversified private sector—all these should inoculate us from the kind of democratic breakdown that has occurred elsewhere.
    Yet, we worry. American politicians now treat their rivals as enemies, intimidate the free press, and threaten to reject the results of elections. They try to weaken the institutional buffers of our democracy, including the courts, intelligence services, and ethics offices. American states, which were once praised by the great jurist Louis Brandeis as “laboratories of democracy,” are in danger of becoming laboratories of authoritarianism as those in power rewrite electoral rules, redraw constituencies, and even rescind voting rights to ensure that they do not lose. And in 2016, for the first time in U.S. history, a man with no experience in public office, little observable commitment to constitutional rights, and clear authoritarian tendencies was elected president. (1)
    Albright, Levitsky, and Ziblatt are not alarmists; they are experienced, knowledgeable, and wise observers of and participants in democratic politics. Their concerns should worry us all.

    Tuesday, February 4, 2020

    Alain Touraine on social movements


    Alain Touraine, now in his tenth decade, published Défense de la modernité in 2018 as a statement of his current thinking about the meaning of modernity, and it is a striking contribution. Touraine participated in a seminar on the book at the University of Milan last week, with discussions by Profs. Marino Regini (Milan), Elena Pulcini (Florence), Fabio Rugge (Pavia), Piero Bassetti (Milan), and Davide Cadeddu (Milan), and it was a privilege to be able to attend. Touraine is one of France's most influential sociologists, and throughout his long career he has contributed important insights into the processes of social movements and contentious politics.

    Hearing Touraine speak in Milan made me want to read more, so I've picked up Solidarity: The Analysis of a Social Movement: Poland 1980-1981, and it is a tour-de-force. By happy coincidence it complements the discussion I'm having in one of my courses on McAdam, Tarrow, and Tilly's book Dynamics of Contention, and it is a fascinating complement to the kinds of analysis that MTT apply to a number of cases of contentious politics. The book is a micro-sociology of the Solidarity Movement as a social movement, and it is based on a number of extensive sets of interviews conducted by a team of French and Polish researchers under Touraine's direction in 1981. Here are a few especially interesting snippets from the introduction to the volume.
    The most important [question for research] concerns the nature of the movement: Solidarity is a trade union but, obviously, more than a trade union. It is a workers' movement born in the factories where it is now fighting against repression, but it is also a national movement and a struggle for democratisation of society... 
    The second question is less obvious, but is of greater challenge to received notions. Is Solidarity a movement, an upsurge of collective will, with all the richness which we have just suggested, or is it in fact the instrument for the reconstruction of a whole society, for the renewal of social institutions and even of those economic and social forces which may eventually enter into conflict with Solidarity itself? ... 
    The third question follows from the brutal rupture of December 1981. At the beginning, Solidarity managed to limit itself and its demands to such an extent that the agreements signed after the strikes of August 1980 acknowledged the Party's leading role in the state and the sanctity of Poland's international alliances. But was the movement not gradually drawn into a struggle for power itself? (2-3)
    These research questions are crucial, because they frame the ways in which the investigators structure their research and their interpretation of the discussions that they engage in with participants.

    Touraine also explains and justifies the choice to conduct their research around interviews and discussions with rank-and-file members of the movement rather than ideological leaders. His answer is brilliant:
    Because, in a social movement, the participants are far more than just a base prompted by questions of immediate self-interest which it is the leaders' job to transform into a programme and a set of political strategies. Solidarity's enemies have often claimed that its worker members were straightforward trade unionists, whereas the leaders were political agitators. If one listens for a moment to the rank and file, it very quickly becomes apparent that this accusation is groundless, that in each of our research groups just as in every enterprise we visited, the big questions -- political freedoms, national independence, industrial management, social justice -- are as constantly present as they are in the debates of Solidarity's National Committee. (4-5)
    And crucially, Touraine emphasizes the self-representation and identity formation that was a key part of this movement. "The members of Solidarity are not only conscious of being downtrodden; they have a positive awareness of themselves and of their rights.... Their movement is not a mechanical reaction to oppression which has become unbearable; it manifests ideas, choices, a collective will" (5)
    Familiarity with Solidarity should convince us -- and one of the aims of this book is to help establish this belief -- that men and women are not subject to historical laws and material necessity, that they produce their own history through their cultural creations and social struggles, by fighting for the control of those changes which will affect their collective and in particular their national life. (5)
    Touraine was ahead of his time in the study of contention and social movements in his emphasis on agency and on the creative efforts of human beings to define themselves in meaningful terms.

    Touraine describes the method of the project as "sociological intervention". 
    The most immediately apparent feature of sociological intervention is that it seeks to define the meaning which the actors themselves attribute to their action. (7)
    This can be described as ethno-sociology or micro-sociology; it is a method that is designed to allow the researchers to gain a textured and accurate view of the self-understandings of the participants (both in common and sometimes diversely).

    Touraine and his colleagues regard the suppression of Solidarity by the military in Poland in 1981 as the beginning of the end for totalitarian control of the Polish people:
    Today, in Communist Central Europe, totalitarianism is dying. The Hungarian revolution, the Prague Spring and then Solidarity fuelled hopes, which lasted for a few days, a few months or more than a year, that it would be replaced by democracy. Everywhere, forces of arms won the day: Hungary and Czechoslovakia were invaded by a foreign army, while in Poland a military and political ruler acted as the Soviet Union wished, so avoiding the heavy diplomatic price which would have been paid for open intervention. But popular movements and uprisings are not the only victim of this violence. From now on, the Communist regime can no longer claim to speak in the name of society and history: its only foundation is force, and it has lost the legitimacy on which it based its totalitarian ambitions. (191)
    These were prophetic words.

    Sunday, January 26, 2020

    Responsible innovation and the philosophy of technology



    Several posts here have focused on the philosophy of technology (link, link, link, link). A simple definition of the philosophy of technology might go along these lines:
    Technology may be defined broadly as the sum of a set of tools, machines, and practical skills available at a given time in a given culture through which human needs and interests are satisfied and the interplay of power and conflict furthered. The philosophy of technology offers an interdisciplinary approach to better understanding the role of technology in society and human life. The field raises critical questions about the ways that technology intertwines with human life and the workings of society. Do human beings control technology? For whose benefit? What role does technology play in human wellbeing and freedom? What role does technology play in the exercise of power? Can we control technology? What issues of ethics and social justice are raised by various technologies? How can citizens within a democracy best ensure that the technologies we choose will lead to better human outcomes and expanded capacities in the future?
    One of the issues that arises in this field is the question of whether there are ethical principles that should govern the development and implementation of new technologies. (This issue is discussed further in an earlier post; link.)

    One principle of technology ethics seems clear: policies and regulations are needed to protect the future health and safety of the public. This is the same principle that serves as the ethical basis of government regulation of current activities, justifying coercive rules that prevent pollution, toxic effects, fires, radiation exposure, and other clear harms affecting the health and safety of the public.

    Another principle might be understood as exhortatory rather than compulsory: the general recommendation that private actors should pursue technologies that make some positive contribution to human welfare. This principle is plainly less universal and obligatory than the “avoid harm” principle; many technologies are chosen because their inventors believe they will entertain, amuse, or otherwise please members of the public, and will thereby permit generation of profits. (Here is a discussion of the value of entertainment; link.)

    A more nuanced exhortation is the idea that inventors and companies should subject their technology and product innovation research to broad principles of sustainability. Given that technological change can have very large environmental and collective effects, we might think that companies and inventors should pay attention to the large challenges our society faces, now and in the foreseeable future: addiction, obesity, CO2 production, plastic waste, erosion of privacy, spread of racist politics, fresh water depletion, and information disparities, to name several.

    These principles fall within the general zone of the ethics of corporate social responsibility. Many companies pay lip service to the social-benefits principle and the sustainability principle, though it is difficult to see evidence of the effectiveness of this motivation. Business interests often seem to trump concerns for positive social effects and sustainability -- for example, in the pharmaceutical industry and its involvement in the opioid crisis (link).

    It is in the context of these reflections about the ethics of technology that I was interested to learn of an academic and policy field in Europe called “responsible innovation”. This is a network of academics, government officials, foundations, and non-profit organizations working together to try to give more directionality to technology change (innovation). René von Schomberg and Jonathan Hankins’s recently published volume International Handbook on Responsible Innovation: A Global Resource provides an in-depth view of the thinking, research, and policy advocacy that this network has accumulated. A key actor in the advancement of this field has been the Bassetti Foundation (link) in Milan, which has made the topic of responsible innovation central to its mission for several decades. The Journal of Responsible Innovation offers a look at continuing research in the field.

    The primary locus of discussion and application in the field of responsible research and innovation (RRI) has been within the EU. There is not much evidence of involvement by United States actors in this movement, though the Virtual Institute of Responsible Innovation at Arizona State University has received support from the US National Science Foundation (link).

    Von Schomberg describes the scope and purpose of the RRI field in these terms:
    Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society). (2)
    The definition of this field overlaps quite a bit with the philosophy and ethics of technology, but the two are not synonymous. For one thing, the explicit goal of RRI is to help provide direction to the social, governmental, and business processes driving innovation. And for another, the idea of innovation isn’t exactly the same as “technology change”. There are social and business innovations that fall within the scope of the effort -- new forms of corporate management or new kinds of financial instruments, for example -- that do not fall within the domain of technological innovations.

    Von Schomberg has been a leading thinker within this field, and his writings have helped to set the agenda for the movement. In his contribution to the volume he identifies six deficits in current innovation policy in Europe (all drawn from chapter two):
    1. Exclusive focus on risk and safety issues concerning new technologies under governmental regulations
    2. Market deficits in delivering on societal desirable innovations
    3. Aligning innovations with broadly shared public values and expectations
    4. A focus on the responsible development of technology and technological potentials rather than on responsible innovations
    5. A lack of open research systems and open scholarship as a necessary, but not sufficient condition for responsible innovation
    6. Lack of foresight and anticipative governance for the alternative shaping of innovation in sectors
    Each of these statements involves very complex ideas about society-government-corporate relationships, and we may well come to judge that some of the recommendations made by von Schomberg are more convincing than others. But the clarity of this statement of the priorities and concerns of the RRI movement is enormously valuable as a way of advancing debate on the issues.

    The examples that von Schomberg and other contributors discuss largely have to do with large innovations that have sparked significant public discussion and opposition -- nuclear power, GMO foods, nanotechnology-based products. These examples focus attention on the later stages of scientific and technological development, when a technology is about to be introduced to the public. But much technological innovation takes place at a much more mundane level -- consumer electronics and software, enhancements of solar technology, improvements in electric vehicle technology, and digital personal assistants (Alexa, Siri), to name a few.

    A defining feature of the RRI field is the explicit view that innovation is not inherently good or desirable (see, for example, the contribution by Luc Soete in the volume). Contrary to the assumptions of many government economic policy experts, the RRI network is united in criticizing the idea that innovation is always or usually productive of economic and employment growth. These observers argue instead that the public should have a role in deciding which technological options ought to be pursued, and which should not.

    In reading the programmatic statements of purpose offered in the volume, it sometimes seems that there is a tendency to exaggerate the degree to which scientific and technological innovation is (or should be) a directed and collectively controlled process. The movement seems to undervalue the important role that creativity and invention play in human freedom and fulfillment. It is an important moral fact that individuals have extensive liberties concerning the ways in which they use their talents, and the presumption needs to be in favor of their right to do so without coercive interference. Much of what goes on in the search for new ideas, processes, and products falls properly on the side of liberty rather than social regulation, and the proper relation of social policy to these activities is one of respect for the freedom and creativity of the innovator rather than a prescriptive and controlling one. (Of course some regulation and oversight is needed, based on assessments of risk and harm; but von Schomberg and others dismiss this moral principle as too limited.)

    It sometimes seems as though the contributors slide too quickly from the field of government-funded research and development (where the public has a plain interest in “directing” the research at some level) to the whole ecology of innovation and discovery, whether public, corporate, or academic. As noted above, von Schomberg considers the governmental focus on harm and safety to be the “first deficit” -- in other words, an insufficient basis for “guiding innovation”. In contrast, he wants to see public mechanisms tasked with “redirecting” technology innovations and industries. However, much innovation is the result of private initiative and funding, and this part of the field appropriately falls outside of prescription by government (beyond normal harm-based regulatory oversight). Von Schomberg uses the phrase “a proper embedding of scientific and technological advances in society”; but this is a worrisome overreach, in that it appears to imply that all scientific and technological research should be guided and curated by a collective political process.

    This suggests that a more specific description of the goals of the movement would be helpful. Here is one possible specification:
    • Require government agencies to justify the funding and incentives that they offer in support of technology innovation based on an informed assessment of the public's preferences;
    • Urge corporations to adopt standards to govern their own internal innovation investments to conform to acknowledged public concerns (environmental sustainability, positive contributions to health and safety of citizens and consumers, ...);
    • Urge scientists and researchers to engage in public discussion of their priorities in scientific and technological research;
    • Create venues for open and public discussion of major technological choices facing society in the current century, leading to more articulate understanding of priorities and risks.
    There is an interesting parallel here with the Japanese government’s efforts in the 1980s to guide investment and research and development resources into the highest priority fields to advance the Japanese economy. The US National Research Council study, 21st Century Innovation Systems for Japan and the United States: Lessons from a Decade of Change: Report of a Symposium (2009) (link), provides an excellent review of the strategies adopted by the United States and Japan in their efforts to stimulate technology innovation in chip production and high-end computers from the 1960s to the 1990s. These efforts were guided entirely by the goal of maintaining commercial and economic advantage in the global marketplace. Jason Owen-Smith addresses the role of US research universities as sites of technological research in Research Universities and the Public Good: Discovery for an Uncertain Future (link).

    The "responsible research and innovation" (RRI) movement in Europe is a robust effort to pose the question, how can public values be infused into the processes of technology innovation that have such a massive potential effect on public welfare? It would seem that a major aim of the RRI network is to help to inform and motivate commitments by corporations to principles of responsible innovation within their definitions of corporate social responsibility, which is unmistakably needed. It is worthwhile for U.S. policy experts and technology ethicists alike to pay attention to these debates in Europe, and the International Handbook on Responsible Innovation is an excellent place to begin.

    Saturday, January 18, 2020

    What are the prospects for a progressive movement in the US?


    It is hard to remember that American politics has experienced times of profound reflection upon, and criticism of, the premises of modern urban, capitalist, democratic life. Engagement in progressive issues and progressive political movements has a strong history in the U.S. The period of Civil Rights and the Vietnam War was one such time, when institutionalized racism and imperialistic use of military power were the subjects of political debate and activism. An earlier period of profound reflection about our premises was the Progressive era at the beginning of the twentieth century. And the resonance that Bernie Sanders, Elizabeth Warren, and Alexandria Ocasio-Cortez have had with large numbers of younger voters suggests that we may yet experience another period of serious progressive thought. It's hard to remember today, in the grip of the most right-wing extremist government our country has seen in a century, that the temper of a time often changes in unpredictable ways.

    What would it take for a progressive political movement to become mainstream in the U.S.? For one thing, it seems unlikely that it will all come from a "youth movement". The sixties anti-war movement did in fact find a very strong base in universities, but those circumstances were probably fairly exceptional and context-specific: for example, the fact that young men faced the Selective Service focused the minds of many young people on the apparent looniness of the war in Southeast Asia. But the social and cohort composition of the Civil Rights movement seems to have been somewhat different -- a broader range of ordinary people were involved, at a variety of levels, and young people played only one part in the activism of the time. There were student-based organizations, of course; but there were also broad-based coalitions of faith-based, occupation-based, and regionally-based individuals who were ready and willing to be mobilized. And the Progressive Movement at the beginning of the twentieth century appears to have involved many hundreds of thousands of ordinary working people, farmers, and professionals. The Pullman Strike of 1894 involved at least 250,000 workers in 27 states, and in the presidential election of 1904 Eugene Debs received some 403,000 votes as the candidate of the Socialist Party of America, about 3% of the total vote.

    What issues seem to be key for building a strong and impactful progressive movement in the U.S. in the 2020s? Activism about the imperative of addressing climate change is one. The issue of extreme, unjustified, and growing inequalities of wealth and income is another. And the failures of American society in addressing the inequalities associated with race and immigration status constitute another urgent issue of concern for progressives.

    If we take as a premise that the issues most likely to stimulate activism and sustained political commitment are those perceived to be key to the future of one's group, each of these issues has an obvious constituency. Climate change affects everyone, and it affects young people the most. They will live their lives in a world of permanent environmental crisis -- intense storms, rising ocean levels, destruction of habitat -- that will create enormous disruption and hardship. Rising inequalities represent a crisis of justice and fairness; how can it possibly be justified that the greatest share of the new wealth created by innovation and economic recovery flowed to the top 1% or the top 10%? And why should the 99% or the 90% tolerate this injustice, decade after decade? And the social harm of racism affects everyone, not just people of color. The Civil Rights movement demonstrated the potency of this issue for mobilizing people across racial groups and across regions to protest and to demand change.

    And yet, these issues are not new. The Occupy movement focused on the inequalities issue, but it came and went. There is broad support in the population for policies that would slow down the processes of climate change, but this support does not appear to be easy to turn into activism and effective popular demands against our government. The government continues to roll back environmental regulation and to go out of its way to flout the global consensus about CO2 emissions and climate change. And activism about racism arises periodically, often around police shootings and the Black Lives Matter movement; but this activism is sporadic and intermittent, and doesn't seem to have created much meaningful change.

    The question of uncovering the factors that lead to a widespread shift toward engagement with a new politics is one of the key topics in Doug McAdam's account of mobilization during the Civil Rights movement in the introduction to the second edition of Political Process and the Development of Black Insurgency, 1930-1970. Consider this diagram of his view of the interactive nature of contention:


    Here is McAdam's description of the theory involved:
    The figure depicts movement emergence as a highly contingent outcome of an ongoing process of interaction involving at least one set of state actors and one challenger. In point of fact, while I focus here on state/challenger interaction, I think this perspective is applicable to episodes of contention that do not involve state actors. (KL 280)
    This implies that new political thinking and a corresponding social movement do not generally emerge on their own, but rather through contention with another group or the state over issues that matter to both. It is a dynamic process of contention and mental formulation involving both status-quo power holders and challengers. It is also an interactive process through which each party, in its dealings with the other, develops its own interpretation of the current situation and of the opportunities and threats it presents. This process leads to the formation of "organization / collective identity" -- essentially a shared vision of who "we" are and what we believe in and care about -- which in turn supports the emergence of a round of "innovative collective action". The crucial part of his theory is that there is interaction between the two groups at every stage -- interpretation, formation of collective identity, and choice of collective actions. Each party influences and shapes the identity and behavior of the other.
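
    The interactive character of this account lends itself to a simple computational illustration. The sketch below is my own toy construction, not McAdam's (or Tilly's) formalization; every variable name, threshold, and update rule is an assumption invented purely for illustration. It is meant only to show how interpretation, collective identity, and action choice can feed back between a state actor and a challenger over repeated rounds.

        import random

        # Toy sketch (illustrative only): a "state" and a "challenger" repeatedly
        # (1) interpret the other's last move, (2) update identity/resolve, and
        # (3) choose an action that shapes the other's next interpretation.
        def simulate(rounds=20, seed=1):
            rng = random.Random(seed)
            resolve = 0.3          # challenger's collective identity / willingness to act
            threat = 0.2           # state's perceived threat from the challenger
            state_action = "ignore"
            history = []
            for t in range(rounds):
                # 1. Interpretation: repression signals both danger and that the issue matters.
                if state_action == "repress":
                    resolve += rng.uniform(0.0, 0.15)
                elif state_action == "concede":
                    resolve -= rng.uniform(0.0, 0.10)
                resolve = min(max(resolve, 0.0), 1.0)
                # 2. Identity and action: stronger resolve makes collective action more likely.
                challenger_action = "mobilize" if rng.random() < resolve else "quiescent"
                # 3. The state reinterprets the situation in light of the challenger's move.
                if challenger_action == "mobilize":
                    threat = min(1.0, threat + rng.uniform(0.0, 0.2))
                else:
                    threat = max(0.0, threat - 0.05)
                # The state's strategic response (a crude stand-in for real decision-making).
                if threat > 0.8:
                    state_action = "concede"
                elif threat > 0.4:
                    state_action = "repress"
                else:
                    state_action = "ignore"
                history.append((t, challenger_action, state_action, round(resolve, 2)))
            return history

        if __name__ == "__main__":
            for step in simulate():
                print(step)

    Different seeds produce quite different trajectories from identical starting conditions, which is one (very crude) way of capturing the point that movement emergence is "a highly contingent outcome of an ongoing process of interaction".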

    So let's say that the "challengers" of the decade of the 2020s care primarily about three things: reducing the enormous economic inequalities that exist in our society, controlling climate change, and increasing the power of dispossessed groups to advocate for the issues they care about (abortion rights, Black Lives Matter, and achieving more favorable treatment of immigrants). And the forces of the status quo want three things as well: a favorable environment for corporate profits, secure control of the Federal court system, and no change in racial equality and immigrant status. How might the dynamic that McAdam describes play out?

    Some of the political mechanisms of mobilization described in Dynamics of Contention are relevant for thinking about this scenario. Brokerage, coalition formation, and escalation are strategies available to the "new progressives". They can seek to find common ground among a range of groups in society who are poorly served by the reigning conservative government. But it will also emerge that there are serious disagreements about priorities, rankings, and willingness to struggle for a common set of goals. The goal of brokerage and coalition formation is to create broader and more numerous (and therefore potentially more influential) groups who will support a common agenda. But collaboration and consensus are hard to achieve, and often are not.

    And what about the "forces of the status quo"? The strategies available to them are already visible in their actions since 2008 to entrench their blocking powers within state and federal government: retreat on voter rights and voter participation; use the primary process to ensure that extreme versions of the conservative agenda find support in candidates nominated for office; undermine the political power of labor unions; use the ideological power of government to discredit the progressive opposition (disloyal, favorable to terrorists, enemies of business, ...); and, in the extreme case, use the police and surveillance powers of the state to discredit and undermine the organizations of the progressive movement. (Think of the use of agents provocateurs against the Black Panther Party in the 1960s and 1970s through infiltration and misdirection, as well as the murder of Fred Hampton in Chicago.)

    All too often the balance of forces in coalition building seems to favor the right; the groups on the left in the United States over the past several decades seem to have been more insistent on ideological purity than those on the right, with the result that the progressive end of the spectrum is more fragmented. And the organs of the media that have the greatest influence on voters' political values seem to be in the hands of the far right -- Fox News and its commentators in particular. There is also the common background assumption on the left that only profound structural "revolutionary" change (socialism, rejection of electoral politics) will do, whereas typical voters seem to want change that proceeds through the institutions we currently have.

    Current activism in France over reforms of the pension system has several features that make effective mobilization more feasible there than it is for progressive politics in the U.S. First, it is a focused single issue whose consequences are highly visible to everyone. Second, there is a long tradition in France of using strikes, demonstrations, and street protests to apply pressure on the government. These are the "repertoires of contention" that are so important in Charles Tilly's analysis of French popular politics. Third, the "gilets jaunes" present a very recent and potent example of collective action that succeeded in applying a great deal of pressure on the government. It is possible to think of steps that the U.S. government might take that would spark similar levels of national protest (abolition of the Social Security system, for example), but many other provocations by the Trump administration have not sparked ongoing and effective protests (reversal of EPA regulations, withdrawal from the Paris climate accords, legislative attacks on the Voting Rights Act, appointments of hundreds of reactionary and unqualified hacks to seats on the Federal bench, a "feed the rich" tax reform, massive ICE roundups of immigrants, ...).

    Perhaps the identity that has the greatest potential for success in the U.S. is a movement based on "reasserting the values of democracy and equality" within the context of a market economy and a representative electoral democracy. This movement would demand tax policies that work to reduce wealth inequalities and support a progressive state; environmental policies that align the U.S. with the international scientific consensus on climate change; healthcare policies that ensure adequate universal insurance for everyone; immigration policies that make sensible accommodations to the realities of the current U.S. population and workforce, including humane treatment of Dreamers; and campaign finance restrictions that limit the political influence of corporations. The slogan might be, "Moving us all forward through social justice, economic innovation, and good government." This might be referred to as "centrist progressivism", and perhaps it is too moderate to generate the passion that a political movement needs to survive. Nonetheless, it might be a form of progressivism that aligns well with the basic pragmatism and fair-mindedness of the American public. And who might serve as a standard bearer for this progressive platform? How about someone with the political instincts and commitments of a Carl Levin, a Harris Wofford, or a Sherrod Brown?