Saturday, April 4, 2015

Debates about field experiments in the social sciences



Questions about the empirical validation of hypotheses about social causation have been of interest in the past several weeks here. Relevant to that question is Dawn Langan Teele's recent volume, Field Experiments and Their Critics: Essays on the Uses and Abuses of Experimentation in the Social Sciences. The essays in the book make for interesting reading for philosophers of the social sciences. But the overall impression that I take away is that the assumptions this research community makes about social causation are excessively empiricist and under-theorized. These are essentially the assumptions that come along with an econometrician's view of social reality. The researchers approach causation consistently as "empirical social arrangement," "intervention," and "net effect". But this is not a satisfactory way of capturing the workings of social causation. Instead, we need to attempt to construct adequate theories of the institutions, norms, and patterns of action through which various social arrangements work, and the causal mechanisms and processes to which these social realities give rise.

The debates considered here concern the relative evidential value of observational studies and RCT-style field experiments, with Gerber, Green, and Kaplan arguing on Bayesian statistical grounds that the epistemic weight of observation-based research is close to zero.
We find that unless researchers have prior information about the biases associated with observational research, observational findings are accorded zero weight regardless of sample size, and researchers learn about causality exclusively through experimental results. (kl 211)
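This Bayesian point can be sketched with a toy normal-normal updating calculation (my own illustration, not drawn from the Gerber, Green, and Kaplan paper; all numbers are invented). The observational estimate is modeled as the true effect plus a bias term of unknown sign and magnitude; when the prior variance on that bias is large, the weight the posterior places on the observational estimate collapses toward zero no matter how small the sampling variance is.

```python
# Toy normal-normal sketch of the zero-weight claim (illustrative numbers).
# Prior on the treatment effect: N(prior_mean, prior_var).
# Estimate = effect + bias + sampling noise, so the effective noise
# variance of the estimate is sampling_var + bias_var.

def posterior(prior_mean, prior_var, estimate, sampling_var, bias_var):
    """Conjugate update of a normal prior given a possibly biased estimate."""
    noise_var = sampling_var + bias_var
    weight = prior_var / (prior_var + noise_var)    # weight placed on the data
    post_mean = prior_mean + weight * (estimate - prior_mean)
    post_var = (1 - weight) * prior_var
    return post_mean, post_var, weight

# Observational study: huge sample (tiny sampling variance), but large
# prior uncertainty about bias.
obs_mean, obs_var, obs_w = posterior(0.0, 1.0, 0.5, sampling_var=0.0001, bias_var=100.0)

# Field experiment: same data quality, but randomization rules out bias.
exp_mean, exp_var, exp_w = posterior(0.0, 1.0, 0.5, sampling_var=0.0001, bias_var=0.0)

print(round(obs_w, 4), round(exp_w, 4))   # observational weight near 0, experimental near 1
```

The sketch makes Stokes's rejoinder easy to state as well: the conclusion turns entirely on `bias_var`; any prior information that shrinks it restores weight to the observational finding.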
A field experiment is defined as "randomized controlled trials carried out in a real-world setting" (kl 92). Observational data relevant to causation often derives from what researchers call "natural experiments", in which otherwise similar groups of subjects are exposed to different influences thought to have a causal effect. If we believe that trauma affects students' learning, we might compare a group of first-grade classrooms in a city that experienced a serious tornado with a comparable group of first-grade classrooms in a city without an abrupt and disruptive crisis. If the tornado classrooms showed lower achievement scores than the no-tornado classrooms, we might regard this as a degree of support for the causal hypothesis.
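Such an observational comparison can be sketched in a few lines (the class-average scores below are hypothetical, invented purely for illustration): a permutation test on the difference in mean achievement between the two groups of classrooms. A small p-value here supports only the existence of a difference, not its cause.

```python
# Illustrative permutation test for the hypothetical tornado study.
# All scores are made-up classroom averages.
import random
from statistics import mean

tornado    = [71, 68, 74, 65, 70, 69, 66, 72]   # city hit by the tornado
no_tornado = [75, 78, 73, 77, 74, 79, 76, 72]   # comparison city

observed = mean(no_tornado) - mean(tornado)     # gap in mean achievement

# Permutation test: how often does a random relabeling of classrooms
# produce a gap at least as large as the observed one?
random.seed(0)
pooled = tornado + no_tornado
n = len(tornado)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[n:]) - mean(pooled[:n])
    if diff >= observed:
        extreme += 1
p_value = extreme / trials
print(round(observed, 3), p_value)
```

Even a tiny p-value in this design leaves the causal question open: the two groups differ by city, not by random assignment, which is exactly the confounding problem discussed in the surrounding text.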

The radical skeptics about observational data draw strong conclusions; if we accept this line of thought, then it would appear that observational evidence about causation is rarely useful. The qualification in the GGK quote ("unless researchers have prior information about the biases associated with observational research") is crucial, however, since researchers generally do have prior information about the factors influencing outcomes and the selection of cases in the studies they undertake, as Susan Stokes argues in her response essay:
Do observational researchers "know nothing" about the processes that generate independent variables and are they hence "entirely uncertain" about bias? Is the "strong possibility" of unobserved confounding factors "always omnipresent" in observational research? Are rival hypotheses "always plausible"? Can one do nothing more than "assume nonconfoundedness"? To the extent that the answers to these questions are no, radical skepticism is undermined. (kl 751)
Stokes provides a clear exposition of how the influence of unrelated other causes Xij and confounders Zik figure in the linear causal equation for outcome Y depending on variable X (kl 693):

Yi = β0 + β1 Xi + Σj βj Xij + Σk γk Zik + ui

This model is offered as a representation of the "true" causation of Y, including both observed and unobserved factors. We might imagine that we have full observational data on Y and X, observations for some but not all Xij, and no observations for Zik.
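The threat posed by the unobserved Zik can be made concrete with a small simulation (my own sketch, with invented coefficients): when a confounder Z that drives both X and Y is omitted, the regression of Y on X alone absorbs Z's effect into the estimated slope; residualizing X on Z (the Frisch-Waugh device) recovers the true coefficient.

```python
# Omitted-variable bias in the linear model Y = 2*X + 3*Z + e,
# where the confounder Z also drives X. All coefficients are invented.
import random
from statistics import mean

random.seed(1)
n = 20000
Z = [random.gauss(0, 1) for _ in range(n)]               # confounder
X = [z + random.gauss(0, 1) for z in Z]                  # X correlated with Z
Y = [2.0 * x + 3.0 * z + random.gauss(0, 1) for x, z in zip(X, Z)]

def slope(y, x):
    """OLS slope of y on x (single regressor, with demeaning)."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

naive = slope(Y, X)          # biased upward: picks up Z's effect via Cov(X, Z)

# Controlling for Z by residualizing X on Z (Frisch-Waugh):
gamma = slope(X, Z)
x_resid = [x - gamma * z for x, z in zip(X, Z)]
controlled = slope(Y, x_resid)   # close to the true coefficient 2.0

print(round(naive, 2), round(controlled, 2))
```

The naive slope comes out near 3.5 rather than 2.0, which is the sense in which "assuming nonconfoundedness" is doing real work in an observational design.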

The logical advantage of a randomized field experiment is that random assignment of individuals to treatment and control groups ensures that, in expectation, the two groups are balanced with respect to any hidden characteristic that may be relevant to the causal workings of the treatment. In the hypothetical tornado-and-learning study mentioned above, by contrast, the treatment and control classrooms come from different cities, perhaps in different regions; and regional differences among children may themselves be relevant to learning. So the observed difference in learning may be the effect of the trauma of the tornado, or it may be the coincidental effect of the regional difference (say, between midwestern and northeastern students).
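The balancing property can be illustrated in the same toy spirit (again an invented simulation, not data from any study): even when a hidden characteristic H strongly affects the outcome, coin-flip assignment leaves H balanced across groups, so the simple difference in means tracks the true treatment effect.

```python
# Random assignment breaks the link between treatment and a hidden trait H.
# True treatment effect is 1.5; H also drives the outcome. Invented numbers.
import random
from statistics import mean

random.seed(2)
n = 20000
H = [random.gauss(0, 1) for _ in range(n)]        # hidden characteristic
T = [random.randint(0, 1) for _ in range(n)]      # coin-flip assignment
Y = [1.5 * t + 2.0 * h + random.gauss(0, 1) for t, h in zip(T, H)]

treated = [y for y, t in zip(Y, T) if t == 1]
control = [y for y, t in zip(Y, T) if t == 0]
estimate = mean(treated) - mean(control)          # close to the true 1.5

# Balance check: H has nearly the same mean in both groups.
h_treated = mean(h for h, t in zip(H, T) if t == 1)
h_control = mean(h for h, t in zip(H, T) if t == 0)
print(round(estimate, 2), round(h_treated - h_control, 2))
```

In the observational tornado design, by contrast, nothing plays the role of the coin flip, so balance on hidden characteristics has to be assumed rather than manufactured.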

Andrew Gelman takes a step back and assesses the larger significance of this debate for social-science research. Here is his general characterization of the policy and epistemic interests that motivate social scientists (along the lines of an earlier post on policy and experiment; link):
Policy analysis (and, more generally, social science) proceeds in two ways. From one direction, there are questions whose answers we seek—how can we reduce poverty, fight crime, help people live happier and healthier lives, increase the efficiency of government, better translate public preferences into policy, and so forth? From another direction, we can gather discrete bits of understanding about pieces of the puzzle: estimates of the effects of particular programs as implemented in particular places. (kl 3440)
Gelman concisely captures the assumptions about causality that underlie this paradigm of social-science research: that causal factors can take the form of pretty much any configuration of social intervention and structure, and we can always ask what the effects of a given configuration are. But this is a view of causation that most realists would reject, because it represents causes in a highly untheorized way. On this ontological mindset, anything can be a cause, and its causal significance is simply the net difference it makes in the world in contrast to its absence. But this is a faulty understanding of real social causation.

Consider an example. Some American school systems have K-8 and 9-12 systems of elementary school and high school; other systems have K-6, 7-8, and 9-12 systems. These configurations might be thought of as "causal factors", and we might ask, "what is the net effect of system A or system B on educational performance of students by grade 12" (or "juvenile delinquency rates by grade 10")? But a realist would argue that this is too coarse-grained a perspective on causation for a complex social system like education. Instead, we need to identify more granular and more pervasive causes at a more fundamental individual and institutional level, which can then perhaps be aggregated into larger system-level effects. For example, if we thought that the socialization process of children between 11 and 14 is particularly sensitive to bullying, and if we thought that high schools create an environment more hospitable to bullying, then we might have reason to expect that the middle school model would be more conducive to the educational socialization of children in these ages. But these two hypotheses can be separately investigated. And the argument that System A produces better educational outcomes than System B will now rest on reasoning about more fundamental causal processes rather than empirical and experimental findings based on examination of the outcomes associated with the two systems. Moreover, it is possible that the causal-mechanism reasoning that I've just described is valid and a good guide to policy choice, even though the observations and experiments at the level of full educational systems do not demonstrate a statistical difference between them.

More generally, arbitrary descriptions of "social factors" do not serve as causal factors whose effects we can investigate purely through experimentation and observation. Rather, as the realists argue, we need to have a theory of the workings of the social factors in which we are interested, and we then need to empirically study the causal characteristics of those underlying features of actors, norms, institutions, and structures. Only then can we have a basis for judging that this or that macro-level empirical arrangement will have specific consequences. Bhaskar is right in this key ontological prescription for the social sciences: we need to work towards finding theories of the underlying mechanisms and structures that give rise to the observable workings of the social world. And crude untheorized empirical descriptions of "factors" do not contribute to a better understanding of the social world. The framework here is "empiricist," not because it gives primacy to empirical validation, but because it elides the necessity of offering realistic accounts of underlying social mechanisms, processes, and structures.

1 comment:

  1. Sanjay Reddy has a powerful piece on why randomised controlled trials cannot address problems of structural change and development.
    The article is published in the Review of Agrarian Studies, see


    http://www.ras.org.in/randomise_this_on_poor_economics

    Madhura Swaminathan
