Friday, May 28, 2010

Prosperity based on commodities


An earlier post looked at economic prosperity and standard of living from the point of view of a grain-based agricultural economy. There I singled out intensive, extensive, and technology-based growth, and the effects these scenarios had on the standard of living for a farming population. This is a particularly simple case, since it equates standard of living with food availability per capita. (This is enough, however, to arrive at credible estimates of the standard of living over long stretches of Chinese history, as Bozhong Li has demonstrated in Agricultural Development in Jiangnan, 1620-1850.)  This simplification leaves out markets, prices, and trade; so it doesn't shed much light on economies based substantially on the production of commodities (including farm products, but also including manufactured goods). So how does the situation change when we postulate production for exchange and consumption based on cash income?

Let's once again consider an isolated region, where all products consumed are produced in the region. So there is no interregional trade. And let's suppose there are three goods: grain, shirts, and beer. Every household needs some of each, and households acquire income through ownership of resources: land, capital, and labor power. The income available to a household is the net return it achieves through use of its resources. Goods are produced by "firms" and are bought and sold through competitive markets.

Now to estimate a household's standard of living we need to do a more complex estimation: we need to estimate the household's income and we need to estimate the "purchasing power" of this income in terms of the baskets of goods this income can purchase at prevailing prices. So we need an income model and a price model for the three goods.  (See Robert Allen's detailed efforts at answering these questions across Eurasia (link, link).)
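
To make the estimation problem concrete, here is a minimal sketch in Python. All of the numbers (prices, rental rates, household endowments) are hypothetical placeholders rather than estimates; the point is only the structure of the calculation: income as the return on owned resources, and purchasing power as income divided by the cost of a basket.

```python
# A minimal sketch of the estimation problem. All numbers (prices, rental
# rates, endowments) are hypothetical placeholders, not estimates.

PRICES = {"grain": 2.0, "shirts": 8.0, "beer": 1.5}          # price per unit
BASKET = {"grain": 10, "shirts": 0.5, "beer": 4}             # one period's needs
RENTAL_RATES = {"land": 5.0, "capital": 0.08, "labor": 1.0}  # return per unit owned

def income(resources):
    """Net income from renting out (or self-employing) the household's resources."""
    return sum(RENTAL_RATES[r] * q for r, q in resources.items())

def basket_cost():
    return sum(PRICES[g] * q for g, q in BASKET.items())

def standard_of_living(resources):
    """How many consumption baskets the household's income will buy."""
    return income(resources) / basket_cost()

worker = {"land": 0, "capital": 0, "labor": 30}       # landless laborer
landlord = {"land": 8, "capital": 200, "labor": 10}   # resource-rich household
print(standard_of_living(worker), standard_of_living(landlord))   # 1.0 2.2
```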

Production requires access to resources.  Each resource can be used in two basic ways: it can be used directly by its owner in production, or it can be "rented" to a firm for use by the firm in production. So there is also a competitive market for resources: rent for land, interest for capital, and wages for labor. And at any given time there is a specific distribution of resources across population; some households have dramatically more of each resource than others.

We can begin our thought experiment by taking as fixed the techniques of production that exist for the three basic commodities. In order to produce at a given level of output, the firm needs access to a known quantity of resources, in a specific proportion. Firms and households with lots of resources can begin producing shirts, beer, and grain immediately. Poor firms and households will either rent access to more resources through promise of future rents, or they will rent out the resources they currently possess, including labor time. So landless, propertyless households have no choice but to sell their labor time; they become workers. So now let's picture our region as populated by firms and households producing commodities, and all persons functioning as consumers purchasing a bundle of commodities for life needs.

So far we've provided a scene very familiar from the classical political economists and Marx. Much of subsequent economic thought went into solving various parts of this story: what determines prices, what does the distribution of income look like, and how do innovation and organizational and technological change fit into this story?  What does an equilibrium of production, consumption, and price look like with static technology?  What are the dynamic processes of adjustment that occur when there is a substantial change in the process of production?

My question here is a limited one: what needs to occur in this scenario in order for there to be a rising trend in the average and median standard of living for this society?

Let's define the standard of living as the size of the wage basket available to the median consumer: the basket of grain, shirts, and beer that the median income earner is able to purchase. In order for the standard of living to rise in this isolated region, there needs to be an overall increase in the efficiency and productivity of the production process for the three goods. And the money wage of the median consumer needs to rise.  (Amartya Sen provides quite a bit of analysis of the meaning of the standard of living in The Standard of Living.)

Let's refer to the concrete production process at a given time as the current practice; this is the specific way that inputs are organized in order to create the output. As we saw in the graph of output against time borrowed from Mark Elvin (link), we can think of progress here in two ways. First, there is refinement of practice, as producers gradually recognize small modifications that permit removal of costs from the process. And, as Marx and Smith agree, firms and households producing goods for a market have a powerful incentive to seek out these improvements: they can continue to sell their products at the old price until the rest of the producers catch up.

Second, producers can introduce substantial, revolutionary changes in technology. They may replace skilled sewing-machine operators with sewing robots that reduce each of the inputs required per unit of the good. Productivity takes a big stride forward.

There is a third mechanism of cost reduction available: the firm/household may speed up the labor process, lengthen the working day, or lower the wage. Volume I of Capital goes into detail on each of these mechanisms within a market-governed firm.  And each of these approaches is negative for the quality of life of the working class.

Now let's get back to the question of the standard of living. Does the process of competition, rising productivity, and falling prices imply an improvement in the standard of living? Or, conceivably, does it lead to a paradoxical immiseration of the bulk of the population? Both outcomes are possible. Rising efficiency and productivity have permitted our little society to produce a rising quantity of beer, grain, and shirts. And this on the basis of a fixed level of basic resources. So in principle everyone may be better off. But it is possible as well that the benefits of rising productivity have been disproportionately captured by a small advantaged group. So income may have become increasingly concentrated at the top. The average wage basket will have increased. But the median consumer's wage basket may have declined through that process of concentration.
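
A few invented numbers make the possibility concrete: total output doubles, but because the gain is captured at the top, the mean basket rises while the median basket falls.

```python
# Invented numbers: productivity growth doubles total output (12 -> 24
# baskets), but the gain is captured at the top. The mean rises; the
# median falls.
from statistics import mean, median

before = [1.0] * 9 + [3.0]              # nine ordinary households, one rich
after = [0.9] * 9 + [24 - 0.9 * 9]      # bottom squeezed, top captures the rest

print(mean(before), median(before))     # 1.2 1.0
print(mean(after), median(after))       # 2.4 0.9
```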

What this story tells us is something fairly simple: the effects of productivity improvement within a commodity economy depend critically on the prior distribution of assets and the institutions through which income and the gains of efficiency are distributed. And this in turn suggests a point much like that of Robert Brenner: the social-property relations embedded within an economy are critical in determining the fate of the median person, and they are subject to profound political struggle (post).

It would be very interesting to use agent-based modeling software to represent a series of scenarios based on this description of a commodity-based economy undergoing growth.  What do distributive outcomes look like when the prior distribution is relatively equal?  How about when they are substantially unequal?  How much difference does the timing of growth make on the eventual distributive and welfare characteristics of the scenario?
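
As a first gesture in that direction, here is a toy scenario generator in plain Python. It is not a serious agent-based model; the dynamics are a bald assumption (each period the aggregate product grows by 2%, and each household captures a share of the gain proportional to its current asset holdings, with noise), but it indicates how the question could be posed computationally.

```python
# A toy scenario generator, not a serious ABM. Assumption: each period the
# aggregate product grows by 2%, and each household captures a share of the
# gain proportional to its current asset holdings (with some noise).
import random
from statistics import median

def simulate(assets, periods=50, growth=0.02, seed=0):
    rng = random.Random(seed)
    assets = list(assets)
    for _ in range(periods):
        total = sum(assets)
        gain = growth * total
        assets = [a + gain * (a / total) * rng.uniform(0.5, 1.5) for a in assets]
    return median(assets)

N = 100
equal = [1.0] * N
unequal = [0.2] * 80 + [4.2] * 20       # same total assets, concentrated at top

print("median, equal start:  ", round(simulate(equal), 3))
print("median, unequal start:", round(simulate(unequal), 3))
```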

(Piero Sraffa's Production of Commodities by Means of Commodities: Prelude to a Critique of Economic Theory picks up some parts of this story in a neo-Ricardian way; Marxian economists have looked at Sraffa's work as also providing a novel basis for the labor theory of value.  The framework provided here also leads into an argument for a new definition of exploitation by John Roemer in A General Theory of Exploitation and Class.)

Tuesday, May 25, 2010

Varieties of economic progress



The study of economic history reveals a number of different patterns when it comes to agricultural production and the standard of living of a given population in a region.  Let's think about the issue in very simple terms.  Imagine that the standard of living for a population in a region is determined by the amount of grain that each household is able to acquire in a time period.  Grain is produced on farms using labor and technology (water, traction, fertilizer, pesticides, harvest tools).  Output is influenced by the existing agricultural technology and the quantity of labor expended in the farming process.  At a given level of technology and a given practice of labor use, a certain quantity of grain Q can be produced for the population P (farmers and their families).  If population is stable and if land area, technology, and labor use remain constant, then the total amount of grain produced remains constant as well and the standard of living remains level at Q/P.

Now several things can begin to change.  First, consider a steady population increase over time.  If land, technology and labor remain constant, then the standard of living falls, since Q remains constant while P increases.  So how can this population sustain and perhaps improve its standard of living?  It needs to increase the output of grain at a rate at least equal to the rate of increase in population.  And this can be done in several ways.
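
The baseline arithmetic can be sketched in a few lines (the numbers are hypothetical):

```python
# Hypothetical numbers: output Q fixed, population P growing at 1% per year.
Q = 1000.0      # total grain output
P = 100.0       # initial population
g = 0.01        # population growth rate

for year in (0, 25, 50, 75, 100):
    print(year, round(Q / (P * (1 + g) ** year), 2))
# 0 10.0 ... 100 3.7 -- holding Q fixed, Q/P falls geometrically;
# output must grow at the same 1% rate just to keep Q/P level.
```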

First, the population can bring more land into cultivation.  Population increase leads to more farm labor; more farmers can farm the additional land; and if agricultural technologies and practices are unchanged, then output will increase proportionally to the increase in population; so the standard of living will remain constant.  This assumes, however, that the new land is of equal productivity to existing land; but as the physiocrats observed, generally new land is of lower productivity.  So in this scenario, output would increase more slowly than population, and the standard of living would slowly decline.  We might call this extensive growth; technique and labor practices remain constant, but the arable land area increases (at the cost of deforestation and loss of common lands).
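
Here is a sketch of the extensive-growth case, with a hypothetical 1% yield decay on each successive plot standing in for the lower productivity of newly cleared land:

```python
# Extensive growth with a hypothetical 1% yield decay on each successive
# plot, standing in for the lower productivity of newly cleared land.
base_yield = 10.0     # output of the best plot
decay = 0.99

def total_output(plots):
    return sum(base_yield * decay ** k for k in range(plots))

for plots in (100, 150, 200):
    population = plots                      # one farming household per plot
    print(plots, round(total_output(plots) / population, 2))
# 100 6.34 / 150 5.19 / 200 4.33 -- output grows more slowly than population.
```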

Second, more labor can be applied to the process of cultivation to increase output, using traditional farming practices.  More frequent weeding and destruction of pests takes time, but it increases output.  So if population is rising and land extent and productivity are constant, it is possible to offset the tendency for average output to fall, by applying more labor to the process.  Family labor, including children, can be expended more and more intensively in order to achieve additional gains in output.  But, of course, the marginal product of these additional hours of labor is small.  This process is familiar from the history of agriculture; Chayanov calls it "self-exploitation" (The Theory of Peasant Economy) and Clifford Geertz calls it "agricultural involution" (Agricultural Involution: The Processes of Ecological Change in Indonesia).  The standard of living may remain fairly constant, but the work load for the farm family increases over time.  Naturally, this process reaches a limit; eight hours a day of farm labor is sustainable; twelve hours is difficult; and eighteen hours is unsupportable.  (Here is an explanation and application of Chayanov's theory to the circumstances of Sri Lanka; link.)  We can call this involutionary growth or labor-intensive growth.
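
The involutionary case turns on the diminishing marginal product of extra labor; a square-root production function, chosen purely for illustration, makes the point:

```python
# Diminishing marginal product of family labor, with a square-root
# production function chosen purely for illustration.
def daily_output(hours):
    return 30.0 * hours ** 0.5     # hypothetical units of grain per day

for hours in (8, 12, 18):
    marginal = daily_output(hours) - daily_output(hours - 1)
    print(hours, round(daily_output(hours), 1), round(marginal, 1))
# 8 84.9 5.5 / 12 103.9 4.4 / 18 127.3 3.6 -- total output keeps rising,
# but each additional hour buys less and less grain.
```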

A third possibility is somewhat more positive for the standard of living and quality of life.  Intelligent farmers can recognize opportunities for improving and refining existing techniques and practices.  A better kind of sacking material may do a better job of protecting the harvest from rats; a bicycle-powered irrigation pump may increase the amount of water available for crops, thus increasing the harvest; a different form of labor cooperation across households may permit more effective seeding during the appropriate season.  So the traditional practices can be refined, permitting an increase of output with a constant quantity of land and labor.  This is what Mark Elvin refers to as "refinement of traditional practices" in his pathbreaking analysis of the "high-level equilibrium trap" (The Pattern of the Chinese Past).  It is an incremental process through which the productivity of the traditional farming system is increased through a series of small refinements of practice and technique.  Improvement in productivity permits an improvement in output per person; but if population continues to increase, then soon these gains are erased and the standard of living begins to decline again.


A fourth possibility is even more dramatic.  The fundamental technologies in use may be qualitatively improved: manure may be replaced by bean curd, which in turn may be replaced by chemical fertilizers; seed varieties may be significantly improved through selective breeding; electric-powered pumps may improve the availability of irrigation; small tractors may replace oxen and many person-hours of labor.  This kind of improvement in productivity can be represented as a jump from one of the heavy curves above to a higher "production possibility frontier."  And this enhancement of agricultural productivity can result in massive increases in the quantity of grain relative to the farming population -- thereby permitting a significant improvement in the standard of living for the farming population.  This can be referred to as modern technological productivity growth.
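
A sketch of the contrast between the third and fourth scenarios, with hypothetical parameters: refinement compounds many small gains, while a technological revolution is a one-time jump to a higher production function.

```python
# Refinement compounds many small gains; revolution is a one-time jump to a
# higher production function. Parameters are hypothetical.
A = 1.0                      # productivity of current practice

refined = A
for _ in range(30):          # thirty years of 0.5% refinements
    refined *= 1.005

revolution = A * 2.5         # e.g., chemical fertilizer, pumps, tractors

print(round(refined, 2))     # ~1.16: real, but easily eaten by population growth
print(round(revolution, 2))  # 2.5: a qualitative jump in Q/P
```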

Two problems arise at this point, however.  First is Elvin's fundamental point about Chinese agriculture: these significant technological improvements require a significant social investment in scientific and technical research.  And if a population has already approached a subsistence trap -- a level of population at which intensive labor and existing farm technology only permit a near-subsistence diet for the population -- then there is no source of social surplus that can fund this research investment. (This is the core of his theory of the high-level equilibrium trap: farming techniques and practices have been refined to the maximum degree possible, and population has increased to the point of subsistence.)

Another problem is equally important.  The sorts of productivity improvements described here are "labor-expelling": the size of the labor force needs to fall (unless more land is available).  So the standard of living may rise for the farm population; but there will be a "surplus population" that is excluded from this improvement in productivity.  (This is a process that James Scott describes in Green-Revolution Malaysia in Weapons of the Weak: Everyday Forms of Peasant Resistance.)  And at this point, the only hope for improvement of the standard of living for this segment of the population is for economic growth in another sector -- manufacturing or service -- where the labor of displaced farmers can be productively used.

So there are three large patterns -- extensive, labor-intensive, and technology-based growth -- with several structural alternatives among the growth scenarios.


(See several earlier posts on farming, agriculture, and development; link, link, link.)

Saturday, May 22, 2010

Doug McAdam on contentious politics and the social sciences


Doug McAdam is hard at work shedding new light on the meso-dynamics of contention.  What are the specific social and psychological mechanisms that bring people into social movements; what factors and processes make mobilization more feasible when social grievances arise?  Recently he has done work on the impact of Teach for America on its participants, and he and his graduate students are now examining a set of environmental episodes that might have created local NIMBY movements -- but often didn't.

McAdam's most sustained contribution to the field of contention is his 1982 book on the dynamics of the struggle for racial equality, Political Process and the Development of Black Insurgency, 1930-1970.  The book was reissued in 1999 with a substantive new introduction, and it has set the standard for sophisticated sociological study of a large, complex movement.  McAdam collaborated with Sidney Tarrow and Chuck Tilly in articulating a new vision of how to approach the politics of contention in Dynamics of Contention.  And he has co-authored or co-edited another half dozen books on social movements and popular mobilization.  So McAdam has been one of the architects of the field of contentious politics.  Most importantly, he and his collaborators have brought innovative new thinking to the definition of problems for social research.

So it is valuable to dig into some of McAdam's thoughts and his sociological imagination as we think about how the sociology of the future might be shaped.  I conducted an extensive interview with Doug earlier this month, and it opened up quite a few interesting topics.  The full interview is posted on YouTube (link).



There are quite a few important turns to the conversation.
  1. Segment 1: Why is the study of contention a central topic within the social sciences?
  2. Segment 2: How can we approach contention without looking only at the successful cases?  How about the moments where contention might have developed but did not?  We can combine quantitative and qualitative methods -- perhaps in an order that reverses the usual approach.  Maybe we can use quantitative studies to get a general feel for a topic, and then turn to qualitative and case studies to discover the mechanisms.
  3. Segment 3: Another important theme: "We are voracious meaning-making creatures." Human beings have a cognitive-emotional-representational ability to attempt to represent meanings and their own significance within the larger order.  Rational choice theory has too narrow a conception of agency.  Why did the Black community stay off the buses in Montgomery?  Because people were strongly enmeshed in communities of meaning and commitment that framed the bus boycott in terms of meaning and identity.
  4. Segment 4: The psychology of mobilization is complex.  It's not just "rational incentives".  Organizers and leaders use the affinities and loyalties of the community to bring about collective action.  For example, an interesting strategy by SNCC to "shame" church leaders into supporting activists.  Movements happen very suddenly; this seems to reflect a process of "redefining" the situation for participants.  Another interesting issue: what is the right level of analysis -- micro, meso, or macro?  Doug favors the "meso" level.
  5. Segment 5: More on the meso level: disaggregated social activity.  McAdam argues that government actions are themselves often at the meso level.  And he makes the point that Civil Rights reform was strongly influenced in the United States by the issues created internationally through the tensions and ideological conflicts of the Cold War.  This explains why it was Truman rather than Roosevelt who endorsed the need for Civil Rights reform.  You can't explain the broad currents of the Civil Rights movement without understanding the international context that was influencing the Federal government.  (This is an example of a macro-level effect on social movements.)
  6. Segment 6: Now to mechanisms and processes.  There are no laws of civil wars.  So we need to look downward into the unfolding of the episodes of contention.  Comparative historical sociology is a very dynamic movement today.  Your work isn't quite as comparative as that of Tilly or Tarrow.  Doug indicates that he favors comparison; but he tends to choose cases that are broadly comparable with each other.  Tilly often made comparisons at a much higher level of variation.  Q: Would you have been comfortable framing your study of the American Civil Rights movement as a comparison with the Solidarity Movement in Poland?  A: no.  There is too broad a range of differences between the cases.
  7. Segment 7: McAdam offers some interesting observations about the relationship between general theory and the specific social phenomena under study.  An important point here is a strong advocacy for eclectic, broad reading as one approaches a complex social phenomenon.  We can't say in advance where the important insights are going to come from -- anthropology, political science, history, sociology, ....
  8. Segment 8: We can dig into the social features that make certain figures very successful in bringing a group of people into a readiness to engage together.  Is social status a key factor?  Is it that some people are particularly persuasive?  Doug wants to break open the black box and get a lot better understanding of the meso-level processes and mechanisms through which mobilization occurs.  A closing topic: what about protest and mobilization in Asia?  Do you think these ideas about mobilization are relevant and illuminating in China or Thailand?  Or has it developed in too specific a relationship to democratic societies? Does the current understanding of popular mobilization help us when we try to understand movements like the Redshirt movement in Thailand?  Doug believes the framework is relevant outside the democratic West.  The ideas need to be applied loosely and flexibly.
  9. Segment 9: So the theory is really a "sketch" of the space of mobilization, rather than a set of specific hypotheses about how mobilizations always work.  And in that understanding -- the field is very relevant to research on the Thailand movement.
(Note the strong connections between this discussion and a few of the earlier interviews -- Tilly, Tarrow, and Zald in particular (link).  My interview with Gloria House about her experience with SNCC in Lowndes County is very relevant as well (link).)

Thursday, May 20, 2010

The dropout crisis


The United States faces a huge dropout crisis. In some cities the high school graduation rate is less than 50% -- sometimes as low as 25%. And this means devastating poverty for the dropouts, as well as continuing social blight for their communities. We might say, though, that the graduation rate is only the symptom of the problem; the causes include high poverty neighborhoods and failed elementary and middle schools, and the effects extend far into the future.

So in a way, it is too simple to call it a dropout crisis; rather, it is a schooling crisis (extending back into the early grades) and a poverty crisis (extending forward for one or more generations for the young people who are affected and their eventual children). And it is a particularly serious national problem, at the beginning of a century where the most important resource will be educated people and talented creators. How can we be optimistic about the prospects for innovation and discovery in the American economy when we are wasting so much human talent?

The crisis itself is widely recognized (link). What we haven't figured out yet is a success strategy for resolving the current system of failure. Is it even possible to envision a system of public education in high-poverty cities that actually succeeds in achieving the 90-90-90 goal (90% graduation rate, 90% achievement at grade level, 90% continuation to post-secondary education)? Or are we forced to conclude that the problem is too great, and that 50% of inner-city children are doomed to lives of continuing poverty and social blight? If so, the future is dim for our country as a whole: rising crime, social problems, civil conflict, and increasingly gated communities are our future. And, inevitably, our economic productivity as a country will falter. So the whole country loses if we don't solve this problem.

The current environment for solving the schooling problems is unpromising. Urban school systems across the country face staggering fiscal crises -- a $300 million deficit in Detroit, $480 million in Los Angeles, and similar amounts in other cities. So school systems are forced into a cycle of cost-cutting, removing some of the critical resources that might have addressed the failure for their students. And the school systems themselves -- administrators, teachers, and unions -- are all too often resistant to change. The current Federal educational reform program, Race to the Top (link), is designed to stimulate new thinking and more successful reforms; but the jury is out.

The situation requires a whole-hearted commitment to solving this problem. Solutions will require the best available research on learning and schooling; they will require substantial resources; and they will require significant collaboration among a number of stakeholders. And the solutions can't be simply one-off demonstration projects; we need a national strategy that will work at scale. There are a million new drop-outs a year. We need to reduce that number by 80% in the next decade if we are to be successful.

These are pretty daunting challenges. So consider this proposed solution that seems to have the ability to satisfy each of these constraints. This is the Diplomas Now program that is becoming increasingly visible in education reform and the press (link). The program is a research-based strategy for helping children make academic progress at every step of the way. It recognizes the need for much more intensive adult contact for at-risk children. It acknowledges the need for providing a host of community services in high-poverty schools. And it places high academic standards at the center of the strategy.

The program is based on important research undertaken by Robert Balfanz at Johns Hopkins University (link). Balfanz finds that it is possible to identify high school drop-outs very early in their school experience. He identifies the ABC cluster of criteria as diagnostic of future high school failure: absenteeism, behavior, and course performance. Sixth-graders who show any one of these characteristics have only a 25% likelihood of completing high school. So, he reasons, let's use these early warning signs and intervene with children when there is still an opportunity to get them back on track. This requires careful tracking of each child, and it requires that schools have the resources to address the problems these children are having in the early grades. But Balfanz argues that the payoff will be exactly what we need: these children will be back on track and will have a high likelihood of graduating from high school.
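
Here is a sketch of what the ABC rule might look like as a tracking-system check. The thresholds are invented for illustration; the actual indicators are operationalized differently from district to district.

```python
# A sketch of the ABC early-warning rule; the thresholds are invented for
# illustration, since the indicators are operationalized differently from
# district to district.
def abc_flags(attendance_rate, suspensions, failed_core_courses):
    flags = []
    if attendance_rate < 0.90:
        flags.append("Absenteeism")
    if suspensions >= 1:
        flags.append("Behavior")
    if failed_core_courses >= 1:
        flags.append("Course performance")
    return flags

# Any single flag in sixth grade signals roughly a 25% chance of finishing
# high school, so the tracking system routes the student to intervention.
print(abc_flags(attendance_rate=0.86, suspensions=0, failed_core_courses=1))
```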

So what does the strategy need? First, it needs a good and well-implemented tracking system. Second, it needs teachers and principals who have the professional development needed to allow them to assist the progress of their students. But it needs two other things as well: it needs a corps of dedicated young people who will function as full-time near-peer tutors and mentors for at-risk children. And it needs a set of wrap-around social and community services that are available to children and their families in the schools.

This is where community service and stakeholder collaboration come in. CityYear is a vibrant national youth service organization within AmeriCorps (link). CityYear has always placed involvement in high-poverty schools at the core of its service agenda for the young people who give a year of their lives to change the world. Now CityYear has entered into agreements with the Diplomas Now program to support focused interventions in a growing number of schools in a number of cities. (Here is a CityYear report.) And Communities in Schools is a national organization that is able to provide the other piece (link). Communities in Schools provides several social work professionals and supervision for each DN school. Finally, the Talent Development program at Johns Hopkins provides training for DN teachers and administrators.

The Diplomas Now model has now been applied in a number of schools around the country, and the results are highly encouraging (link). Results for a sixth grade class in Feltonville School in Philadelphia are representative: from 2008 to 2009 absenteeism dropped by 80%, negative behavior dropped by 45%, and the number of students with failures in math or English dropped by about 80%. Participants and observers attribute the successes measured here to the synergies captured by the combined approach. But a key factor is the presence of caring young adults in the lives of these children. (video)

These are amazing and encouraging results. But we have to ask the question, what would it take to scale this solution for all of Los Angeles, Detroit, or Chicago? The answer is that it will require a major investment. But it will also return many times that amount in increased productivity and lower incarceration and social service costs.

Here are some estimates from CityYear planning for the challenge of scaling up the Diplomas Now solution. The goal the organization has adopted is an ambitious one: to have CityYear teams in all schools that generate 50% of dropouts in the city. In Detroit CityYear teams currently serve 8 schools and 4,600 students with 65 corps members. In order to reach the goal, CityYear Detroit will need to expand to 39 schools, serving 26,290 students, including 9,400 at-risk students, with 403 corps members. This expansion will be costly; federal, school, and private funding would increase from $3.8 million to $12.8 million. But the five-year return on investment is massive. A Northeastern University study estimates the benefit of converting one dropout into a graduate at $292,000, aggregating to a net social benefit of $686,000,000. The returns are enormous. Nationally the total annual cost of the CityYear program would be just under $200 million by 2016, with other program costs perhaps doubling this amount. But the value of success is a staggering number: net social benefits from reducing the drop-out rate estimated in the range of $10 billion.
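
The arithmetic behind the Detroit figures can be reconstructed under one explicit assumption: a conversion rate of 25% of the at-risk students reproduces the stated $686 million aggregate. That rate is an assumption chosen to match the stated total, not a figure from the study.

```python
# Reconstructing the Detroit arithmetic. The 25% conversion rate is an
# assumption chosen because it reproduces the stated $686 million total;
# it is not a figure from the study.
at_risk = 9_400
benefit_per_graduate = 292_000            # Northeastern University estimate
assumed_conversion_rate = 0.25

converted = int(at_risk * assumed_conversion_rate)    # 2,350 graduates
total_benefit = converted * benefit_per_graduate
print(f"${total_benefit:,}")                          # $686,200,000

added_annual_cost = 12.8e6 - 3.8e6                    # expanded funding per year
print(round(total_benefit / added_annual_cost, 1))    # ~76x the added annual cost
```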

So it seems that we now know that the skepticism that is often expressed about inner-city school failure is misplaced. There are intensive strategies for success that should work in any school. There is a cost to these programs. But there are many thousands of young people who are eager to pick up the responsibility. Their civic engagement and pragmatic idealism are inspiring. We need strong support from our government, foundations, and private sources in order to make school failure a thing of the past.

(Here are a couple of earlier posts on this topic; post, post, post.)

Wednesday, May 19, 2010

Red shirts as a social movement


The redshirts in Thailand have moved onto the world stage in the past several months.  Massive protests in Bangkok have stymied the Thai government and have held the army and police forces at bay for months.  Demands from redshirt leaders and posters include removal of the military-backed government of Prime Minister Abhisit and a commitment to prompt elections.  In the background seems to be a demand for a shift in the playing field in Thailand, with meaningful attention to social inequalities.  And exiled former prime minister Thaksin plays a continuing role in the background, offering video messages at protest meetings and veiled instructions to redshirt demonstrators.  Efforts at clearing the protest encampment led to dozens of deaths in April, and a major crackdown this week seems to have succeeded in breaking the protest in Bangkok with another handful of deaths and a great deal of arson in the center of the city.  But there are indications that protests and violence may spread to other parts of Thailand.

What all of this implies is the presence of a major social movement in Thailand, supported by many thousands of rural and urban Thai people, mostly from the lower end of the socioeconomic order.  This much is clear through the journalism that has developed around the current turmoil.  What we haven't yet seen, though, is a careful analysis of the dynamics and processes of this movement.  How is it organized?  How are followers recruited?  What resources are leaders able to call upon?  What are the grievances that motivate potential followers?  The time is ripe for a careful, analytical study of the movement.  And intellectual resources exist for such a study, in the form of the extensive literature on social movements and contention that exists in the current social science literature.  However, that literature largely focuses on social movements in the democratic West, and scholars in this tradition generally lack deep knowledge of the politics of Asian countries.  So we need to find ways of crossing boundaries if we are to make use of social movement theory in the context of the Redshirt movement.

One of the most important voices in the current literature on social contention is Doug McAdam.  His study of the black insurgency in the United States is a sophisticated and extensive analysis of the dynamics of the US civil rights movement in the South (Political Process and the Development of Black Insurgency, 1930-1970), and perhaps there are some parallels between the two movements.  McAdam's work is entirely focused on examples of protests and mobilization in the United States.  But in the introduction to the second edition of this work he provides a clear and powerful statement of the state of the field, and his synthesis of the best current thinking about how to analyze social movements is of general interest.  So perhaps this is one place to begin the search for an empirically and theoretically informed study of the Redshirt movement.

Here are a few of McAdam's central points.
Increasingly, one finds scholars from various countries and nominally different theoretical traditions emphasizing the importance of the same three broad sets of factors in analyzing the origins of collective action.  These three factors are: 1) the political opportunities and constraints confronting a given challenger; 2) the forms of organization (informal as well as formal) available to insurgents as sites for initial mobilization; and 3) the collective processes of interpretation, attribution and social construction that mediate between opportunity and action. (viii)
Or in short: political opportunities, mobilizing structures, and framing processes (viii-ix).  Here are brief descriptions of each of these axes of analysis.
Expanding political opportunities.  Under ordinary circumstances, excluded groups or challengers face enormous obstacles in their efforts to advance group interests....  But the particular set of power relations that define the political environment at any point in time hardly constitutes an immutable structure of political life.  Instead, the opportunities for a challenger to engage in successful collective action are expected to vary over time.  It is these variations that are held to help shape the ebb and flow of movement activity. (ix)
Extant mobilizing structures.  ... By mobilizing structures I mean those collective vehicles, informal as well as formal, through which people mobilize and engage in collective action.  This focus on the meso-level groups, organizations, and informal networks that comprise the collective building blocks of social movements constitutes the second conceptual element in this synthesis. (ix)
Framing or other interpretive processes. ... Mediating between opportunity, organization and action are the shared meanings, and cultural understandings -- including a shared collective identity -- that people bring to an instance of incipient contention.  At a minimum people need to feel both aggrieved about some aspect of their lives and optimistic that, acting collectively, they can redress the problem. (ix-x)
So how can these basic sets of questions help in forming a careful analysis of the Redshirt movement?  McAdam's general point is that these angles of analysis have emerged as key within dozens of studies of collective action and social movements.  They represent an empirically informed set of theoretical perspectives on collective action.  We shouldn't look at these three sets of factors as setting a blueprint for collective action; but it is a good bet that new instances of social movements will involve each of these factors in some way.

Putting the point another way: we can read McAdam's synthesis as posing a research framework in terms of which to investigate a new example of a social movement -- whether the Falun Gong in China, the monks' movement in Burma, the Maoist insurgency in India, or the Redshirt movement in Thailand.  It is certainly possible that a given case won't fit very well into this set of questions; but McAdam's hunch is that this is unlikely.

So it would be very interesting to initiate a careful study of the Redshirt movement along these lines.  Such a study would need to review the shifting circumstances of political power over the past ten years or so in Thailand, both at the national level and at the provincial level.  Certainly the military overthrow of the Thaksin government created "ebbs and flows" of the sort to which McAdam refers.  And the Yellowshirt demonstrations of 2008 also shifted the fields of power in Thailand.  What openings did these various events create for Redshirt mobilization?  Second, we would need to know a great deal more about the local and regional organizations through which Redshirt mobilization occurs.  What are those organizations?  What resources do they control?  How do they manage to succeed in mobilizing and transporting many tens of thousands of rural supporters to the center of Bangkok?  And how do they manage to continue to supply and motivate these supporters through several months of siege?  Finally, and most importantly, we need to know much more about the mentality and social identities of the Redshirts.  What do they care about?  What are their local grievances?  What are their most basic loyalties and motivations?  McAdam points out that most studies of successful social movements have found that activists and supporters usually possess dense social networks and deep connections to their communities; will this turn out to be true for the Redshirt movement?

There is a cynical reading of the movement that would almost certainly not stand up to this kind of careful analysis: the idea that the Redshirts are simply the pawns of Thaksin, and that Thaksin's financial support to individual followers is sufficient to explain their behavior.  This doesn't seem credible on its face; it makes the movement out to be an automaton controlled by a distant leader.  Surely Thaksin plays a role; but equally certainly, leaders and followers have their own issues, agendas, and passions.

The kind of study suggested here does not yet exist, so far as I can tell.  It would be necessary to pull together a great deal of local knowledge about the social constituencies and local organizations that are involved in the movement -- information that isn't presented in any detail in the journalism that has been offered to date about events in Thailand.  But once a researcher has pulled together preliminary answers to questions in each of these areas, he/she will be much better positioned to answer pressing questions of the day: will the movement survive the repression in Bangkok this week?  Will it spread to other locations in Thailand?  Will the government succeed in preserving the status quo?  And schematic answers to these questions would provide a much more substantial basis for understanding the movement and its location within Thai society.

Here is one small contribution to the effort.  McAdam emphasizes the importance of "identity shift" in the evolution of a social movement.  He thinks that a very substantial part of a movement's strength and staying power derives from the new forms of collective identity that it creates.  There is evidence of shifting identities along these lines within the Redshirt movement.  Consider this interesting analysis of language from Thailand's Troubles:
ไพร่, which sounds like prai, was a dusty word which rarely saw the light of day. Now on every other t-shirt worn by people of the Red movement printed large and proud is prai.
Prai has perhaps a dozen meanings including cad, citizen, plebian and proletariat. In the context of the Red movement protest, which includes an element of class conflict and rebellion over inequality, prai frequently means commoner and peasant.
This sounds quite a bit like a shift of identity, from disregarded poor person to proud member of a movement.

(Several earlier posts have focused on events in Thailand.  Here is a post from about a year ago on civil unrest in Thailand.  See also the social movements thread in UnderstandingSociety.)

Sunday, May 16, 2010

Underdetermination and truth


We say that a statement is underdetermined by available facts when it and an alternative statement or theory are equally consistent with that body of facts. It may be that two physical theories have precisely the same empirical consequences -- perhaps wave theory and particle theory represent an example of this possibility. And by stipulation there is no empirical circumstance that could occur that would distinguish between the two theories; no circumstance that would refute T1 and confirm T2. And yet we might also think that the two theories make different assertions about the world. Both statements are underdetermined by empirical facts.

Pierre Duhem further strengthened the case for the underdetermination of scientific theories through his emphasis on the crucial role that auxiliary hypotheses play in the design of experiments in The Aim and Structure of Physical Theory.  Individual theoretical hypotheses -- "light is a wave," "people behave on the basis of their class interests" -- do not have definite empirical implications of their own; rather, it is necessary to bring additional assumptions into play in order to design experiments that might support or refute the theoretical hypothesis.  And here is the crucial point: if an experimental result is inconsistent with the antecedent theory and auxiliary hypotheses, we haven't demonstrated that the hypothesis itself is false, but rather that at least one among the premises is false. And it is possible to save the system by modifying the theory or one or more of the auxiliary assumptions.  As W. V. O. Quine put the point, scientific knowledge takes the form of a "web of belief" (The Web of Belief).  

On the extreme assumption that both statements have precisely the same consequences, we might infer that the two theories must be logically equivalent, since the logical content of a theory is the full range of its deductive consequences and the two theories are stipulated to have the same deductive consequences. So it must be possible to derive each theory from the other. And if T1 and T2 are logically equivalent, then we wouldn't say that they really express different assertions about the world.

A more problematic case is one where two theories have distinct bodies of deductive consequences; but where the subset of deductive consequences that are empirically testable are the same for the two theories. In this situation it is no longer the case that the two theories are logically equivalent but rather "empirically equivalent." And here it would be credible to say that the two make genuinely different assertions about the world -- assertions that cannot be resolved empirically.

A third and still weaker case is one in which T1 and T2 have distinct consequences for both theoretical statements and a specific subset of empirical statements; but they overlap in their consequences for a second body of empirical statements. And consider this possibility: because a given level of instrumentation and inquiry limits the types of empirical statements that can be evaluated, this body of data does not permit us to differentiate between the theories. So we can distinguish between "currently testable" and "currently not testable". For this third case, we are to imagine that T1 and T2 have distinct implications for theoretical statements and for currently not testable empirical statements; but they have the same consequences for currently testable statements. In this case, T1 and T2 are currently underdetermined -- though advances in instrumentation may result in their being empirically distinguishable in the future.
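
The three cases can be rendered schematically, with theories represented by their consequence sets (the "consequences" here are opaque labels, purely for illustration):

```python
# The three cases rendered schematically: theories as consequence sets,
# with opaque labels standing in for statements (pure illustration).
T1 = {"c1", "c2", "e1", "e2", "f1"}   # theoretical / testable / future-testable
T2 = {"c3", "c4", "e1", "e2", "f2"}

testable_now = {"e1", "e2"}           # what current instruments can check
testable_ever = testable_now | {"f1", "f2"}

case1 = T1 == T2                                      # logically equivalent? False
case2 = (T1 & testable_ever) == (T2 & testable_ever)  # empirically equivalent? False
case3 = (T1 & testable_now) == (T2 & testable_now)    # currently underdetermined? True

print(case1, case2, case3)
```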

It is worth noting how heroic is the notion of "determination" of theory by evidence. If we thought that science should issue in theories that are determined by the empirical evidence, we would be committed to the idea that there is one uniquely best theory of everything. This assumption of ultimate theory uniqueness might be thought to follow from scientific and metaphysical realism: the world has a specific and determinate structure and causal properties; this structure gives rise to all observations; well-supported theories are approximately true descriptions of this hidden structure of the world; and therefore there is a uniquely best scientific theory -- the one that refers to this set of entities, processes, and structures. And if an existing theory is false in its description of this unobservable reality, then there must be observational circumstances where the false assumptions of the theory give rise to false predictions about observation.  In other words, well-confirmed theories are likely to be approximately true, and the hidden structure of the world can be expected to create observations that refute our false theories.

However, this foundational approach is implausible in virtually every area of science.  Our theories rarely purport to describe the most fundamental level of reality; instead, they are meso-level descriptions of intermediate levels of phenomena. Take the effort to understand planetary motion.  The description of the orbits of the planets as ellipses generated by the gravitational attraction of the planet and the sun turned out to be only approximately true.  Did this refute the pure theory of gravitation? Certainly not; rather, it raised the possibility of other causal processes, not yet identified, that interfere with the workings of gravitational attraction.

So how do these general considerations from the philosophy of science affect the situation of knowledge claims in the social sciences?

It would seem that social science claims are even more subject to underdetermination than the claims of mechanics and physics. In addition to the problem of unidentified interfering causes and the need for auxiliary hypotheses, we have the problems of vagueness and specification.  We commonly find that social science theories offer general statements about social causes and conditions that need to be further specified before they can be applied to a given set of circumstances. And there are almost always alternative and equally plausible ways of specifying the concept in a particular setting.

Take the idea of class conflict as a theory of political behavior.  The theory may assert that "workers act on their material interests." Before we can attempt to evaluate this statement in particular social settings, we have to specify several things: how to characterize "material interests" in the setting and how to represent the cognitive-behavioral models the worker uses as he/she deliberates about action.  Is retaining support from City Hall a material interest? Or are material interests restricted to wages and the workplace? Are workers thought to be rational maximizers of their interests, or do they also embody social commitments that modulate the dictates of maximization? And here is the crucial point: different specifications lead to different predictions about political behavior; so the general theoretical assertion is underdetermined by empirical observation.
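
A schematic rendering of the point, with both specifications and the worker's situation invented for illustration:

```python
# Invented example: two specifications of "material interests" yield
# opposite predictions from the same observed facts.
def predicts_strike(wage_cut, city_hall_support_at_risk, specification):
    if specification == "narrow":   # interests = wages and workplace only
        return wage_cut
    if specification == "broad":    # interests include political patronage
        return wage_cut and not city_hall_support_at_risk
    raise ValueError(specification)

situation = dict(wage_cut=True, city_hall_support_at_risk=True)
print(predicts_strike(specification="narrow", **situation))   # True: strike
print(predicts_strike(specification="broad", **situation))    # False: acquiesce
# The same behavior confirms one specification and disconfirms the other,
# so observation alone cannot select between them.
```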

This discussion seems to lead us into surprising territory -- not the limited question of underdetermination but the large question of truth and correspondence and the question of the rationality of scientific belief. Do we think that social assertions are true or false in the semantic sense: true by virtue of correspondence to the facts as they really are; or do we think that social assertions are simply ways of speaking about complexes of social phenomena, with no referential force? Is the language of class or ideology or ressentiment just a way of encompassing a range of social behaviors, or are there really classes and ideologies in the social world? And if we affirm the latter possibility, does the evidence of social observation permit us to unambiguously select the true theories?

I suppose one possible approach is to minimize the scope of "truth" when it comes to the social sciences. We might say that there is a limited range of social statements that are unambiguously true or false -- Jones robbed a store, Jones robbed a store because he was economically desperate, people sometimes commit crimes out of economic necessity -- but there is a broader class of statements that have a different status.  These are more akin to interpretive schemes in literary criticism or to a set of metaphors deployed to describe a complex social situation.  The language of class may fall in this category.  And we might say that these statements are not truth claims at all, but rather interpretive schemes that are judged to do a better or worse job of drawing together the complex phenomena to which they are applied.  And in this case, it seems unavoidable that statements like these are radically underdetermined by the empirical facts.

(See Kyle Stanford's essay on underdetermination in the Stanford Encyclopedia of Philosophy.)

Thursday, May 13, 2010

Social theory and the empirical social world


How can general, high-level social theory help us to better understand particular historically situated social realities? Is it helpful or insightful to "bring Weber's theory of religion to bear on Islam in Java" or to "apply Marx's theory of capitalism to the U.S. factory system in the 1950s"? Is there any real knowledge to be gained by applying theory to a set of empirical circumstances?

In the natural sciences this intellectual method is certainly a valid and insightful one. We gain knowledge when we apply the theory of fluid dynamics to the situation of air flowing across a wing, or when we apply the principles of evolutionary theory to the problem of understanding butterfly coloration. But do we have the same possibility in the case of the social world?

My general inclination is to think that "applying" general social theories to specific social circumstances is not a valid way of creating new knowledge or understanding. This is because I believe that social ensembles reflect an enormous degree of plasticity and contingency; so general theories only "fit" them in the most impressionistic and non-explanatory way. We may have a pure structural theory of feudalism; but it is only the beginning of a genuinely knowledge-producing analysis of fourteenth-century French politics and economy or the Japanese samurai polity. At best the theory highlights certain issues as being salient -- the conditions of bonded labor, the nature of military dependency between lord and vassal. But the theory of feudalism does not permit us to "derive" particular features or institutions of French or Japanese society. "Feudalism" is an ideal type, a heuristic beginning for social analysis, rather than a general deductive and comprehensive theory of all feudal societies.  And we certainly shouldn't expect that a general social theory will provide the template for understanding all of the empirical characteristics of a given instance of that theorized object.

Why is there this strong distinction between physical theory and social theory? Fundamentally, because natural phenomena really are governed by laws of nature, and natural systems are often simple enough that we can aggregate the effects of the relevant component processes into a composite description of the whole. (There are, of course, complex physical systems with non-linear composite processes that cannot be deductively represented.)  So theories can be applied to complicated natural systems with real intellectual gain.  The theory helps us to predict and explain the behavior of the natural system.

The social world lacks both properties. Component social mechanisms and processes are only loosely similar to each other in different instances; for example, "fealty" works somewhat differently in France, England, and Japan. And there is a very extensive degree of contingency in the ways that processes, mechanisms, agents, and current circumstances interact to produce social outcomes. So there is a great degree of path dependency and variation in social outcomes, even in cases where there are significant similarities in the starting points. So feudalism, capitalism, financial institutions, religions, ethnic conflicts, and revolutions can only be loosely theorized.

That is my starting point. But some social theorists take a radically different approach. A good example of a bad intellectual practice here is the work of Hindess and Hirst in Pre-Capitalist Modes Of Production, in which they attempt to deduce the characteristics of the concrete historical given from its place within the system of concepts involved in the theory of the mode of production.

Is this a legitimate and knowledge-enhancing effort? I don't think that it is. We really don't gain any real insight into a particular manor (the Burgundian manor, say), or into European feudalism, by mechanically subsuming it under a powerful and general theory -- whether Marx's, Weber's, or Pareto's.

It should be said here that it isn't the categories or hypotheses themselves that are at fault. In fact, I think Marx's analysis and categories are genuinely helpful as we attempt to arrive at a sociology of the factory, and Durkheim's concept of anomie is helpful when we consider various features of modern communities. It is the effort at derivation and subsumption that I find misguided. The reality is larger and more varied than the theory, with greater contingency and surprise.

It is worthwhile looking closely at gifted social scientists who proceed differently. One of these is Michael Burawoy, a prolific and influential sociologist of the American labor process. His book, Manufacturing Consent: Changes in the Labor Process Under Monopoly Capitalism, is a detailed study of the American factory through the lens of his micro-study of a single small machine shop in the 1940s and 1970s, the Allied/Geer factory. Burawoy proceeds very self-consciously and deliberately within the framework of Marx's theory of the capitalist labor process. He lays out the fundamental assumptions of Marx's theory of the labor process -- wage labor, surplus labor, capitalist power relations within the factory -- and he then uses these categories to analyze, investigate, and explain the Allied/Geer phenomena. But he simultaneously examines the actual institutions, practices, and behaviors of this machine shop in great participant-observer detail. He is led to pose specific questions by the Marxist theory of the labor process that he brings with him -- most importantly, what accounts for the "consent" that he observes in the Allied workers? -- but he doesn't bring a prefabricated answer to the question.  His interest in control of surplus labor and coercion and consent within the workforce is stimulated by his antecedent Marxist theory; but he is fully prepared to find novelty and surprise as he investigates these issues.  His sociological imagination is not a blank slate -- he brings a schematic understanding of some of the main parameters that he expects to arise in the context of the capitalist labor process.  But his research assumptions are open to new discovery and surprising inconsistencies between antecedent theory and observed behavior.

And in fact, the parts of Burawoy's book that I find most convincing are the many places where he allows his sociological imagination and his eye for empirical detail to break through the mechanism of the theory. His case study is an interesting and insightful one. And it is strengthened by the fact that Burawoy does not attempt to simply subsume or recast the findings within the theoretical structure of Marx's economics.

(Burawoy addresses some of these issues directly in an important article, "Two Methods in Search of Science" (link). He advocates for treating Marxist ideas as a research program for the social sciences in the sense articulated by Imre Lakatos.)

So my advice goes along these lines: allow Marxism, or Weber or Durkheim or Tilly, to function as a suggestive program of research for empirical investigation. Let it be a source of hypotheses, hunches, and avenues of inquiry. But be prepared as well for the discovery of surprising outcomes, and don't look at the theory as a prescription for the unfolding of the social reality. Most importantly, don't look to theory as a deductive basis for explaining and predicting social phenomena. (Here is an article on the role of Marxism as a method of research rather than a comprehensive theory; link.)

Sunday, May 9, 2010

System safety engineering and the Deepwater Horizon


The Deepwater Horizon oil rig explosion, fire, and uncontrolled release of oil into the Gulf is a disaster of unprecedented magnitude.  This disaster in the Gulf of Mexico appears to be more serious in objective terms than the Challenger space shuttle disaster of 1986, both in immediate loss of life and in overall harm created. And sadly, it appears likely that the investigation will reveal equally severe failures in the management of enormously hazardous processes, defects in the associated safety engineering analysis, and inadequacies in the regulatory environment within which the activity took place.  The Challenger disaster fundamentally changed the way we think about safety in the aerospace field.  It is likely that this disaster too will force radical new thinking and new procedures concerning the inherently dangerous processes associated with deep-ocean drilling.

Nancy Leveson is a leading expert in systems safety engineering, and her book, Safeware: System Safety and Computers, is a genuinely important contribution.  Leveson led the investigation of the role that software design might have played in the Challenger disaster (link).  Here is a short, readable white paper of hers on system safety engineering (link) that is highly relevant to the discussions that will need to occur about deep-ocean drilling.  The paper does a great job of laying out how safety has been analyzed in several high-hazard industries and presents a set of basic principles for systems safety design.  She discusses aviation, the nuclear industry, military aerospace, and the chemical industry, and she points out some important differences across industries when it comes to safety engineering.  Here is an instructive description of the safety situation in military aerospace in the 1950s and 1960s:
Within 18 months after the fleet of 71 Atlas F missiles became operational, four blew up in their silos during operational testing. The missiles also had an extremely low launch success rate.  An Air Force manual describes several of these accidents: 
     An ICBM silo was destroyed because the counterweights, used to balance the silo elevator on the way up and down in the silo, were designed with consideration only to raising a fueled missile to the surface for firing. There was no consideration that, when you were not firing in anger, you had to bring the fueled missile back down to defuel. 
     The first operation with a fueled missile was nearly successful. The drive mechanism held it for all but the last five feet when gravity took over and the missile dropped back. Very suddenly, the 40-foot diameter silo was altered to about 100-foot diameter. 
     During operational tests on another silo, the decision was made to continue a test against the safety engineer’s advice when all indications were that, because of high oxygen concentrations in the silo, a catastrophe was imminent. The resulting fire destroyed a missile and caused extensive silo damage. In another accident, five people were killed when a single-point failure in a hydraulic system caused a 120-ton door to fall. 
     Launch failures were caused by reversed gyros, reversed electrical plugs, bypass of procedural steps, and by management decisions to continue, in spite of contrary indications, because of schedule pressures. (from the Air Force System Safety Handbook for Acquisition Managers, Air Force Space Division, January 1984)
Leveson's illustrations from the history of these industries are fascinating.  But even more valuable are the principles of safety engineering that she recapitulates.  These principles seem to have many implications for deep-ocean drilling and associated technologies and systems.  Here is her definition of systems safety:
System safety uses systems theory and systems engineering approaches to prevent foreseeable accidents and to minimize the result of unforeseen ones.  Losses in general, not just human death or injury, are considered. Such losses may include destruction of property, loss of mission, and environmental harm. The primary concern of system safety is the management of hazards: their identification, evaluation, elimination, and control through analysis, design and management procedures.
Here are several fundamental principles of designing safe systems that she discusses:
  • System safety emphasizes building in safety, not adding it on to a completed design.
  • System safety deals with systems as a whole rather than with subsystems or components.
  • System safety takes a larger view of hazards than just failures.
  • System safety emphasizes analysis rather than past experience and standards.
  • System safety emphasizes qualitative rather than quantitative approaches.
  • System safety recognizes tradeoffs and conflicts in design.
  • System safety is more than just system engineering.
And here is an important summary observation about the complexity of safe systems:
Safety is an emergent property that arises at the system level when components are operating together. The events leading to an accident may be a complex combination of equipment failure, faulty maintenance, instrumentation and control problems, human actions, and design errors. Reliability analysis considers only the possibility of accidents related to failures; it does not investigate potential damage that could result from successful operation of the individual components.

How do these principles apply to the engineering problem of deep-ocean drilling?  Perhaps the most important implications are these: a safe system needs to be based on careful and comprehensive analysis of the hazards inherent in the process; it needs to be designed with an eye to handling those hazards safely; and the design can't be done in a piecemeal, "fly-fix-fly" fashion.

It would appear that deep-ocean drilling has been characterized by too little analysis and too much confidence in the ability of engineers to "correct" inadvertent outcomes ("fly-fix-fly").  The accident that occurred in the Gulf last month can be analyzed into two parts. First is the explosion and fire that destroyed the drilling rig and led to the tragic loss of the lives of 11 rig workers. Second is the incalculable harm caused by the uncontrolled venting of perhaps a hundred thousand barrels of crude oil to date into the Gulf of Mexico, now threatening the coasts and ecologies of several states.  Shockingly, there is currently no high-reliability method for capping the well at a depth of over 5,000 feet; so the harm may continue to worsen for a very extended period of time.

The safety systems on the platform itself will need to be examined in detail. But the bottom line will probably look something like this: the platform is a complex system vulnerable to explosion and fire, and there was always a calculable (though presumably small) probability of catastrophic fire and loss of the platform. This is closely analogous to the problem of safety in aircraft and other complex electro-mechanical systems. The loss of life in the incident is terrible but confined.  Planes crash and ships sink.

What elevates this accident to a globally important catastrophe is what happened next: destruction of the pipeline leading from the wellhead 5,000 feet below sea level to containers on the surface, and failure of the shutoff valve system on the ocean floor. These two failures have resulted in the unconstrained release of a massive and uncontrollable flow of crude oil into the Gulf, with environmental harms likely to be greater than those of the Exxon Valdez spill.

Oil wells fail on the surface too, and they are difficult to control there as well. But there is a well-developed technology that teams of oil-fire specialists like Red Adair employ to cap the flow and end the damage. We have nothing comparable for wells drilled under water at the depth of this incident; the failed wellhead is less accessible to corrective intervention than objects in space. So surface well failures conform to a sort of epsilon-delta relationship: a small (epsilon) accident leads to a bounded (delta) harm. This deep-ocean well failure in the Gulf is catastrophically different: a relatively small incident on the surface is producing an unbounded and spiraling harm.

So was this a foreseeable hazard? Of course it was. There was always a finite probability of total loss of the platform, leading to destruction of the pipeline. There was also a finite probability of failure of the massive sea-floor emergency shutoff valve. And, critically, it was certainly known that there is no high-reliability fix in the event of failure of the shutoff valve. The containment dome currently being tried by BP is unproven at this depth. The alternative of drilling a second well to relieve pressure may work, but it will take weeks or months. So when we reach the end of this failure pathway, we arrive at this conclusion: catastrophic, unbounded failure. If you reach this point in the fault tree, there is almost nothing to be done. And this is a totally irrational outcome to tolerate; how could any engineer or regulatory agency have accepted the circumstances of this activity, given that one possible failure pathway would lead predictably to unbounded harms?

There is one line of thought that might have led to the conclusion that deep-ocean drilling is acceptably safe: engineers and policy makers may have optimistically overestimated the reliability of the critical components. If we estimate the probability of failure of the platform at 1/1,000, failure of the pipeline at 1/100, and failure of the emergency shutoff valve at 1/10,000, then one might say that the probability of the nightmare scenario is vanishingly small: one in a billion. Perhaps one might reason that we can disregard scenarios with this level of likelihood. Reasoning very much like this was involved in the original safety designs of the shuttle (Safeware: System Safety and Computers). But two things are now clear. First, this disaster was not virtually impossible; it actually occurred. Second, it seems likely that the estimates of component failure probabilities were badly understated.
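
The arithmetic of this reasoning is worth making explicit. Here is a minimal sketch of the fault-tree calculation, using the illustrative probabilities above (hypothetical figures for the sake of the argument, not actual engineering estimates):

```python
# A sketch of the fault-tree arithmetic, assuming the three failures
# are independent. All probabilities are the illustrative figures from
# the text, not actual engineering estimates.

p_platform = 1 / 1_000    # total loss of the drilling platform
p_pipeline = 1 / 100      # destruction of the riser pipeline
p_shutoff  = 1 / 10_000   # failure of the sea-floor emergency shutoff valve

# Probability of the full failure pathway: the product of the three.
p_catastrophe = p_platform * p_pipeline * p_shutoff
print(f"nightmare scenario: {p_catastrophe:.0e}")   # 1e-09 -- one in a billion

# The result is exquisitely sensitive to optimism. If each component
# estimate is understated by a factor of ten, the scenario becomes a
# thousand times more likely:
p_revised = (10 * p_platform) * (10 * p_pipeline) * (10 * p_shutoff)
print(f"revised scenario:   {p_revised:.0e}")       # 1e-06
```

Note too that the multiplication assumes the three failures are independent; but the pipeline was presumably destroyed because the platform was lost, so the conditional probability along the actual pathway is far higher than the product of the marginals. This is exactly the kind of system-level interaction that Leveson warns component-by-component reliability analysis will miss.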

What does this imply about deep-ocean drilling? It seems inescapable that the current state of technology does not permit us to take the risk of this kind of total system failure. Until there is a reliable and reasonably quick technology for capping a deep-ocean well, even a small probability of this kind of failure makes use of the technology entirely unjustifiable. It makes no sense at all to play Russian roulette when the cost of failure is massive and unconstrained ecological damage.

There is another aspect of this disaster that needs to be called out, and that is the issue of regulation. Just as the nuclear industry requires close, rigorous regulation and inspection, so deep-ocean drilling must be closely regulated as well. The stakes are too high to allow the oil industry to regulate itself. And unfortunately there are clear indications of weak regulation in this industry (link).

(Here are links to a couple of earlier posts on safety and technology failure (link, link).)

Urban inequalities and social mobility


Most American cities look a lot like the poverty map of Cleveland above when it comes to the spatial distribution of poverty and affluence.  There is a high-poverty core, in which residents have low incomes, poor health, poor education, and a poor quality of life; there are rings of moderate income; and there are outer suburbs of affluent people with a high quality of life.

We can ask two different kinds of sociological questions about these facts. What factors cause the reproduction of disadvantage over multiple generations? And what policy interventions have some effect on enhancing upward social mobility within disadvantaged groups -- in short, how can we change this cycle of disadvantage?

The persistence of inequalities in urban America was addressed in a special 2008 issue of the Boston Review in a forum on "ending urban poverty."  Particularly interesting is Patrick Sharkey's article "The Inherited Ghetto." Sharkey begins with a crucial and familiar point: racial inequality has changed only very slightly since the passage of the Fair Housing Act in 1968. The concentration of black poverty in central cities has not substantially improved over that period, and the inequalities of health, education, and employment associated with this segregation have continued. And the association between neighborhood, degree of segregation, and income and quality of life is very strong: children born into a poor and segregated neighborhood are likely to live, as adults, in a poor and segregated neighborhood.

Sharkey documents these statements through his analysis of the data provided by the University of Michigan Panel Study of Income Dynamics, the first major statistical study following several generations of families in terms of residence, income, occupation, health, and other important variables. Using a computer simulation based on the two-generation data provided by the Panel Study, Sharkey estimates that it would take five generations for the descendants of a family from a poor, black neighborhood to have a normal expectation of living in a typical American neighborhood. (That's one hundred years in round numbers.)  In other words: progress towards racial equality in urban America is so slow as to be virtually undetectable.
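
To see why convergence across generations is so slow, here is a minimal sketch of the kind of two-state, multigenerational simulation Sharkey describes. The transition probabilities below are hypothetical illustrations, not the PSID-based estimates he actually uses:

```python
import numpy as np

# States: 0 = poor, segregated neighborhood; 1 = typical neighborhood.
# transition[i][j] is the probability that a child raised in state i
# lives in state j as an adult. These numbers are invented for
# illustration only.
transition = np.array([
    [0.80, 0.20],   # children of poor neighborhoods mostly stay
    [0.05, 0.95],   # children of typical neighborhoods mostly stay
])

# Start with a family living in a poor, segregated neighborhood.
dist = np.array([1.0, 0.0])
for generation in range(1, 8):
    dist = dist @ transition
    print(f"generation {generation}: P(typical neighborhood) = {dist[1]:.2f}")
```

With these invented numbers the family's chance of living in a typical neighborhood rises to only about 0.61 by the fifth generation, still well short of the long-run value of 0.80 -- a simple illustration of how multigenerational persistence can make progress "so slow as to be virtually undetectable."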

Particularly frustrating is the persistence of segregation in the forty years since the passage of the Fair Housing Act. Sharkey argues that this is partially explained by the policy choices made by federal and local authorities concerning housing, which have more or less deliberately favored segregation by race. Beginning with the initial Fair Housing legislation -- enacted without giving the federal agencies the power of enforcement -- both federal and state policies have reinforced segregation in housing. Sharkey notes that federal housing programs have subsidized the growth of largely white suburbs, while redlining and other credit-related restrictions have impeded the ability of black families to follow into these new suburban communities. Continuing informal discrimination in the housing market (as evidenced by "testers" from fair housing agencies) further reinforces segregation between the inner-city black population and the suburban, mostly white population.

So racial segregation is one important mechanism that maintains the economic and social inequalities that  our society continues to embody.

How about policies that would work to speed up social progress?  It is commonly agreed that improving access to higher education for disadvantaged people is the best way to speed their economic advancement.  The theory is that individuals within the group will benefit from higher education by enhancing their skills and knowledge; this will give them new economic opportunities and access to higher-wage jobs; the individuals will do better economically, and their children will begin life with more economic support and a set of values that encourage education. So access to higher education ought to create a virtuous circle, a positive feedback loop leading to substantial social mobility in currently disadvantaged groups.

This theory appears to be substantially true: when it is possible to prepare poor children for admission to college, their performance in college and subsequent careers is good and lays a foundation for a substantial change in quality of life for themselves and their families (link).

However, most of our cities are failing abysmally in the task of preparing poor children for college.  High school graduation rates are extremely low in many inner-city schools -- 25 to 50 percent -- and performance on verbal and math assessment tests is very low.  So a very substantial number of inner-city, high-poverty children are not being given the opportunity to develop their inherent abilities in order to move ahead in our society.  This is true in Detroit (link), and much the same is true in Cleveland, Oakland, Miami, Houston, New Orleans, and dozens of other cities.  (Here is a survey of the issues by Charles Payne in So Much Reform, So Little Change: The Persistence of Failure in Urban Schools.  And here is a striking report from 1970 prepared by the HEW Urban Education Taskforce.)  High poverty and poor education go hand in hand in American cities.

One important research question is whether there are behavioral or structural factors that predict or cause low performance by groups of students.  Here is a fascinating graph of high school graduation rates broken down by freshman-year absenteeism (MDRC report).  Important research is being done at the Center for the Social Organization of Schools at Johns Hopkins on the dropout crisis in urban schools (link).  (Here is an earlier post on CSOS and its recommendations for improving dropout rates from urban high schools.)  The topic is important because research findings like these may offer indications of the sorts of school reforms that are most likely to enhance school success and graduation.
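
The mechanics behind such a graph are easy to sketch: group students by freshman-year absenteeism and tabulate graduation rates within each band. The records below are invented for illustration; the MDRC analysis of course rests on real longitudinal data:

```python
# Hypothetical student records, invented for illustration only.
students = [
    {"days_absent": 3,  "graduated": True},
    {"days_absent": 5,  "graduated": True},
    {"days_absent": 12, "graduated": True},
    {"days_absent": 15, "graduated": False},
    {"days_absent": 28, "graduated": False},
    {"days_absent": 35, "graduated": False},
]

# Tabulate graduation rates within bands of freshman-year absenteeism.
bands = [(0, 9), (10, 19), (20, 39)]
for lo, hi in bands:
    cohort = [s for s in students if lo <= s["days_absent"] <= hi]
    if cohort:
        rate = sum(s["graduated"] for s in cohort) / len(cohort)
        print(f"{lo}-{hi} days absent: {rate:.0%} graduated (n={len(cohort)})")
```

If the real data show graduation rates falling steeply across such bands, that is the kind of structural predictor that can guide targeted interventions, which is precisely why this line of research matters for school reform.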


It is clear that finding ways of dramatically increasing the effectiveness of high-poverty schools is crucial if we are to break out of the multi-generational trap that Sharkey documents for inner-city America.  Here is a specific and promising strategy that is being pursued in Detroit by the Skillman Foundation and its partners (link), based on small schools, greater contact with caring adults, and challenging academic curricula.  This turn-around plan is based on a specific set of strategies for improving inner-city schools developed by the Institute for Student Achievement, and ISA provides assessment data that support the effectiveness of the plan in other cities.  With support from the United Way of Southeast Michigan, several large high schools are being restructured along the design principles of the ISA model.

But the reality is that this problem is immense, and a few successful experiments in school reform are unlikely to move the needle.  It seems unavoidable to conclude that only something like a Marshall Plan for urban poverty would give us any real confidence in the possibility of reversing the inequalities our cities reveal.  And none of our political leaders -- and few of our taxpayers -- seem to perceive the urgency of the problem.

Wednesday, May 5, 2010

Mental models for the social world

What is involved in being prepared to understand what is going on around you?

In a sense this is Kant's fundamental question in the Critique of Pure Reason: what intellectual resources (concepts, categories, frameworks) does a cognitive agent need in order to make sense of the contents of consciousness, the fleeting experiences and sensations that life brings us? And his answer is pretty well known: we need concepts of fixed objects in space and time, subject to causal laws.  The stream of experiences we have is organized around a set of persistent objects located in time and space with specific causal properties. Space, time, cause, and object are the fundamental categories of cognition when it comes to understanding the natural world. This line of thought leads to an esoteric philosophical idea, the notion of transcendental metaphysics. (P. F. Strawson's work on Kant is particularly helpful; The Bounds of Sense: An Essay on Kant's Critique of Pure Reason.)

But we can ask essentially the same kind of question about the ordinary person's ability to make sense of the social world around him or her. Each person is exposed to a dense stream of experiences of the social world, at various levels. We have ordinary interactions -- with friends, bus drivers, postal carriers, students -- and we want to interpret the behavior that we observe. We read news reports and tweets about happenings in the wider world -- riots in Athens, suicide attacks in Pakistan, business statements about future sales, ... -- and we want to know what these moments mean, how they hang together, and what might have caused them.  In short, we need to have a set of mental resources that permit us to organize these experiences into a representation of a coherent social reality.

So is it possible to provide a transcendental metaphysics for ordinary social experience? Can we begin to list the kinds of concepts we need to have in order to cognize the social world?

We might say that a very basic building block of social cognition is a set of scripts or schemas into which we are prepared to fit our observations and experiences. Suppose we observe two people approach each other on the street, exchange words, bow heads slightly, and part. This interaction between two strangers might be categorized as "courtesy" during a chance meeting. But it might be construed in other ways as well: ironic insult, sexual innuendo, or condescension from superior to inferior. Each of these is an alternative interpretive frame, a way of conceptualizing and "seeing" a complex series of behaviors.  So the scripts or frames that we bring to our observations impose a form of organization on what we see.

Or take the current rioting in Greece: we might construe these episodes of collective behavior as rationally directed economic protest, righteous resistance, or opportunistic anarchism. Each alternative has different implications, and each corresponds to a somewhat different set of background assumptions about how social interactions unfold -- a different social metaphysic.  Different observers bring different assumptions about how the social world works to their observations, and these frameworks lead to different constructions of the events.

Or consider the question of the social "things" around which we organize our social perceptions: nations, financial markets, cities, parties, and ideologies, for example. How much arbitrariness is there in the ontological schemes into which we organize the world? Could we have done just as well at making sense of our experience with a substantially different ontology? Is there a most basic ontology that underlies each of these and is a scheme that cannot be dispensed with?

We might try a "fundamental" ontology along these lines: we must identify individuals as purposive, intentional agents; we must recognize relations among individuals -- giving us social networks, knowledge transmission, and groups; and we must recognize social processes with causal powers, constituted by individuals within specific social relations. And we must recognize the situation of consciousness -- beliefs, desires, values, and ideologies. And, we might hypothesize, we can build up all other more specific social entities out of aggregations of these simple things.
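
Since this paragraph proposes a set of ontological primitives, it may help to make the proposal concrete. Here is a toy formalization of that candidate ontology as data types; the class names and attributes are purely illustrative choices, not a canonical scheme:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    """A purposive, intentional individual."""
    name: str
    beliefs: List[str] = field(default_factory=list)
    desires: List[str] = field(default_factory=list)

@dataclass
class Relation:
    """A social relation among individuals (kinship, exchange, authority)."""
    kind: str
    members: List[Agent]

@dataclass
class SocialProcess:
    """A causal process constituted by agents standing in specific relations."""
    description: str
    relations: List[Relation]

# More specific social entities -- networks, groups, institutions --
# would then be built up as aggregations of these simples:
a, b, c = Agent("A"), Agent("B"), Agent("C")
network = [Relation("acquaintance", [a, b]), Relation("acquaintance", [b, c])]
protest = SocialProcess("wage protest", relations=network)
```

The point of the exercise is only that such a scheme can be articulated, not that it is the right one.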

This is one possible way of formalizing a social ontology.  But there are others.  For example, we might give priority to relations rather than individuals; or we might give priority to processes rather than structures.  So it is hard to justify the notion that there is a single uniquely best way of conceptualizing the realm of the social.

An interesting collateral question has to do with the possibility of systemic error: is it possible that our metaphysical presuppositions about the social world sometimes lead us to construe our social observations in ways that systematically misrepresent reality? For example, would a "metaphysics of suspicion" (the idea that people generally conceal their true motives) lead us to a worldview along the lines of Jerry Fletcher, the central character in Conspiracy Theory?

Several things seem likely. First, there is no single and unique set of ontological "simples" for the social world. Rather, there are likely to be multiple starting points, all of which can result in a satisfactory account of the social world. So there is no transcendental metaphysics for the social world -- including the candidate sketched above.

Second, it seems that the unavoidable necessity of having a set of causal, semantic, and process schemata does not guarantee correctness. Our schemata may systematically mislead us.  So the schemata themselves amount to a large empirical hypothesis; they may be superseded by other schemata that serve better to organize our experiences.  The schemata are not uniquely determined by either a priori or empirical considerations.  And therefore our social cognitions are always a work in progress, and our conceptual frameworks are more like a paradigm than an ineluctable conceptual foundation.