Friday, January 16, 2009

Unintended consequences


International relations studies offer plentiful examples of the phenomenon of unintended consequences -- for example, wars that break out unexpectedly because of actions taken by states to achieve their security, or financial crises that erupt because of steps taken to avert them. (The recent military escalations in Pakistan and India raise the specter of unintended consequences in the form of military conflict between the two states.) But technology development, city planning, and economic development policy all offer examples of the occurrence of unintended consequences deriving from complex plans as well.

Putting the concept schematically -- an actor foresees an objective to be gained or an outcome to be avoided. The actor creates a plan of action designed to achieve the objective or avert the undesired outcome. The plan is based on a theory of the causal and social processes that govern the domain in question and the actions that other parties may take. The plan of action, however, also creates an unforeseen or unintended series of developments that lead to a result that is contrary to the actor's original intentions.

It's worth thinking about this concept a bit. An unintended consequence is different from simply an undesired outcome; a train wreck or a volcano is not an unintended consequence, but rather simply an unfortunate event. Rather, the concept fits into the framework of intention and purposive action. An unintended consequence is a result that came about because of deliberate actions and policies that were set in train at an earlier time -- so an unintended consequence is the result of deliberate action. But the outcome is not one of the goals to which the plan or action was directed; it is "unintended". In other words, analysis of the concept of unintended consequences fits into what we might call the "philosophy of complex action and planning." (Unlikely as this sub-specialty of philosophy might sound, here's a good example of a work in this field by Michael Bratman, Intention, Plans, and Practical Reason. Robert Merton wrote about the phenomenon of unintended consequences quite a bit, based on his analysis of the relationships between policy and social science knowledge, in Social Theory and Social Structure.)

But there is also an element of paradox in our normal uses of the concept of an unintended consequence -- the suggestion that plans of action often contain elements that work out to defeat them. The very effort to bring about X creates a dynamic that frustrates the achievement of X. This is suggested by the phrase, the "law of unintended consequences." (I think this is what Hegel refers to as the cunning of reason.)

There is an important parallel between unintended and unforeseen consequences, but they are not the same. A harmful outcome may have occurred precisely because it was unforeseen -- it might have been easily averted if the planner had been aware of it as a possible consequence. An example might be the results of the inadvertent distribution of a contaminant in the packaging of a food product. But it is also possible that an undesired outcome is both unintended and fully foreseen. An example of this possibility is the decision of state legislators to raise the speed limit to 70 mph. Good and reliable safety statistics make it readily apparent that the accident rate will rise. Nonetheless the officials may reason that the increase in efficiency and convenience more than offsets the harm of the increase in the accident rate. In this case the harmful result is unintended but foreseen. (This is the kind of situation where cost-benefit analysis is brought to bear.)

Is it essential to the idea of unintended consequences that the outcome in question be harmful or undesirable? Or is the category of "beneficial unintended consequence" a coherent one? There does seem to be an implication that the unintended consequence is one that the actor would have avoided if possible, so a beneficial unintended consequence violates this implicature. But I suppose we could imagine a situation like this: a city planner sets out to design a park that will give teenagers a place to play safely, increase the "green" footprint of the city, and draw more families to the central city. Suppose the plan is implemented and each goal is achieved. But it is also observed that the rate of rat infestation in surrounding neighborhoods falls dramatically -- because the park creates habitat for voracious rat predators. This is an unintended but beneficial consequence. And full knowledge of this dynamic would not lead the planner to revise the plan to remove this feature.

The category of "unintended but foreseen consequences" is easy to handle from the point of view of rational planning. The planner should design the plan so as to minimize avoidable bad consequences; then do a cost-benefit analysis to assess whether the value of the intended consequences outweighs the harms associated with the unintended consequences.
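The cost-benefit step described here can be put schematically. The sketch below uses entirely hypothetical numbers for the speed-limit example above; the dollar figures and category names are illustrative assumptions, not real estimates.

```python
# A minimal sketch of the cost-benefit check described above.
# All figures are hypothetical, stated in millions of dollars per year.

def net_benefit(intended_benefits, foreseen_harms):
    """Total value of intended consequences minus total value of
    unintended-but-foreseen harms."""
    return sum(intended_benefits.values()) - sum(foreseen_harms.values())

# Hypothetical values for raising the speed limit to 70 mph.
benefits = {"time saved": 120.0, "shipping efficiency": 45.0}
harms = {"added accident costs": 90.0}

if net_benefit(benefits, harms) > 0:
    decision = "adopt the plan despite the foreseen harm"
else:
    decision = "reject the plan"

print(decision)  # prints "adopt the plan despite the foreseen harm"
```

The point of the schema is only that the foreseen harm enters the calculation explicitly rather than being ignored; the hard part in practice is estimating the values, not doing the arithmetic.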

The category of consequences of a plan that are currently unforeseen is more difficult to handle from the point of view of rational decision-making. Good planning requires that the planner make energetic efforts to canvass the consequences the plan may give rise to. But of course it isn't possible to discover all possible consequences of a line of action; so the possibility always exists that there will be persistent unforeseen negative consequences of the plan. The most we can ask, it would seem, is that the planner should exercise due diligence in exploring the most likely collateral consequences of the plan. And we might also want the planner to incorporate some sort of plan for "soft landings" in cases where unforeseen negative consequences do arise.

Finally, is there a "law of unintended consequences", along the lines of something like this:
"No matter how careful one is in estimating the probable consequences of a line of action, there is a high likelihood that the action will produce harmful unanticipated consequences that negate the purpose of the action."
No; this statement might be called "reverse teleology" or negative functionalism, and certainly goes further than empirical experience or logic would support. The problem with this statement is the inclusion of the modifier "high likelihood". Rather, what we can say is this:
"No matter how careful one is in estimating the probable consequences of a line of action, there is the residual possibility that the action will produce harmful unanticipated consequences that negate the purpose of the action."
And this statement amounts to a simple, prudent observation of theoretical modesty: we can't know all the possible results of an action undertaken. Does the possibility that any plan may have unintended harmful consequences imply that we should not act? Certainly not; rather, it implies that we should be as ingenious as possible in trying to anticipate at least the most likely consequences of the contemplated actions. And it suggests the wisdom of action plans that make allowances for soft landings rather than catastrophic failures.

(Writers about the morality of war make much of the moral significance of consequences of action that are unintended but foreseen. Some ethicists refer to the principle of double effect, and assert that moral responsibility attaches differently to intended versus unintended but foreseen consequences. The principles of military necessity and proportionality come into the discussion at this point. There is an interesting back-and-forth about the doctrine of double effect in the theory of just war in relation to Gaza on Crooked Timber and Punditry.)

2 comments:

Anonymous said...

What are your views on complexity theory? The law of unintended consequences is always raised when complexity theorists are trying to explain why the social sciences need to incorporate complexity theory more...

Daniel Little said...

This is a good question -- probably worth a posting at some point in the future. I suppose that the main point of complexity theory is this: a system involving multiple causal forces that interact with each other (especially in non-linear ways) is mathematically intractable; so we can't predict future states of the system even if we know what some of the exogenous causal forces are. The non-linearity part entails that there may be large results caused by small changes; the interactive causal part entails that the results of A&B may be very different from the results of A and the results of B.
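The two features mentioned here can be made concrete with a toy model. This is an illustrative sketch, not a real complexity-theoretic model: the function below is invented purely to show an interaction term (A&B differs from A plus B) combined with a non-linearity (small input changes yield disproportionately large output changes).

```python
# Toy non-linear, interactive system (entirely hypothetical).

def outcome(a, b):
    x = a + b + 4 * a * b   # the a*b term makes the causes interactive
    return x ** 2           # squaring makes the system non-linear

baseline = outcome(0, 0)
effect_A = outcome(1, 0) - baseline    # effect of A alone: 1
effect_B = outcome(0, 1) - baseline    # effect of B alone: 1
effect_AB = outcome(1, 1) - baseline   # joint effect: 36

# Joint effect is far from the sum of separate effects (36 vs 2).
print(effect_A + effect_B, effect_AB)  # prints "2 36"

# Non-linearity: a 10% change in inputs produces a ~38% change in output.
print(outcome(1.0, 1.0), outcome(1.1, 1.1))
```

Even in this two-variable case the separate effects tell us almost nothing about the joint effect, which is the intractability point: with many interacting causes, predicting the system from knowledge of the individual forces breaks down.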

It is sometimes argued that the Chernobyl nuclear accident occurred because the highly skilled engineers who were shutting the plant down overestimated their ability to "steer" the process.

The big question here is whether social phenomena constitute a "complex system" in the technical sense, or merely a complicated and causally dense system with seriously probabilistic causal relations.