Friday, October 24, 2008

What do polls tell us?

We're all interested in the opinions of vast numbers of strangers -- potential voters, investors, consumers, college students, or homeowners. Our interest is often a practical one -- we would like to know how the election is likely to go, whether the stock market will rebound, whether an influenza season will develop into a pandemic, or whether the shops in our cities and malls will see higher or lower demand in the holiday season. And so we turn to polls and surveys of attitudes and preferences -- consumer confidence surveys, voter preference polls, surveys of public health behaviors, surveys of investor confidence. And tools such as pollster.com aggregate and disaggregate the data to allow us to make more refined judgments about what the public's mood really is. But how valid is the knowledge that surveys and polls provide? To what extent do they accurately reflect an underlying social reality of public opinion? And to what extent does this knowledge provide a basis for projecting future collective behavior (including voting)?

There are several important factors to consider.

First is the heterogeneity of social characteristics across a population at virtually every level of scale -- including, especially, attitudes and beliefs. No matter how we slice the social demographic -- selecting for a specific age, race, religion, and income, for example -- there will be a range of opinions across the resulting group. Groups don't suddenly become homogeneous when we find the right way of partitioning the population.
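To make the point concrete, here is a toy simulation in Python (all of the slice labels and numbers are invented for illustration): even when we condition on a narrowly defined demographic slice, individual opinions within the slice remain widely dispersed around the slice average.

    import random
    import statistics

    random.seed(1)

    # Invented numbers: each demographic slice has its own average level of
    # support, but individuals within the slice still vary around it.
    slices = {"18-29 urban": 0.62, "30-44 suburban": 0.55, "65+ rural": 0.41}

    for name, slice_mean in slices.items():
        # Individual support = slice mean plus idiosyncratic variation,
        # clipped to the [0, 1] scale.
        opinions = [min(1.0, max(0.0, random.gauss(slice_mean, 0.20)))
                    for _ in range(10_000)]
        favor = sum(o > 0.5 for o in opinions) / len(opinions)
        print(f"{name}: slice mean {slice_mean:.2f}, "
              f"within-slice stdev {statistics.pstdev(opinions):.2f}, "
              f"share favoring {favor:.0%}")

However the slices are drawn, the within-slice spread stays large; partitioning changes the averages, not the dispersion.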

Second is an analogous point about plasticity over time. The attitudes and preferences of individuals and groups change over time -- often rapidly. In polling I suppose this is referred to as "voter volatility" -- the susceptibility of a group to changing its preferences in response to new information and other stimuli. And opinions and beliefs appear to change especially rapidly during periods of decision-making. So knowing that 65% of Hispanic voters preferred X over Y on October 10 doesn't imply much about the preferences of this group two weeks later. This is precisely what campaigns are trying to accomplish -- a new message or commercial that shifts the preferences of a broad group.
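A minimal sketch of the same worry, assuming (purely for illustration) that group-level support drifts as a random walk of about one point per day:

    import random
    import statistics

    random.seed(2)

    def drift_two_weeks(start=0.65, daily_sd=0.01, days=14):
        """Let group support drift randomly day by day -- an assumed toy
        dynamic for illustration, not a model of any real electorate."""
        support = start
        for _ in range(days):
            support = min(1.0, max(0.0, support + random.gauss(0.0, daily_sd)))
        return support

    outcomes = [drift_two_weeks() for _ in range(10_000)]
    print(f"October 10 reading: 0.65; two weeks later: "
          f"mean {statistics.mean(outcomes):.2f} "
          f"+/- {2 * statistics.stdev(outcomes):.2f}")

Even this mild assumed drift leaves the two-week-out picture several points wide, before any sampling error is counted.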

Third are questions having to do with the honesty of the responses that a survey or poll elicits. Do respondents answer honestly; do they conceal responses they may be ashamed of (the Bradley effect); do they exaggerate their income, their personal happiness, or their expected grade in a course? There are survey techniques intended to address these possibilities (obscuring the point of a question, returning to a question in a different way); but the possibility of untruthful responses raises an important problem when we try to assess the realism of a poll or survey.

Fourth are the standard technical issues having to do with sampling and population estimation: how large a set of observations is required to arrive at an estimate of a population value with 95% confidence? And what measures need to be taken to ensure a random sample and avoid sample bias? For example, if polling is based solely on landline phone numbers, does this introduce an age bias or an income bias, if it is true that affluent young people are more likely to have only cell phones?
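The first of these questions has a standard answer. Under simple random sampling, the usual large-sample formula is n = z^2 * p(1-p) / e^2, with p = 0.5 as the conservative choice; a short sketch:

    import math

    def required_sample_size(margin, z=1.96, p=0.5):
        """Smallest n giving a +/- `margin` interval at ~95% confidence,
        using the large-sample formula n = z^2 * p * (1 - p) / margin^2.
        p = 0.5 is the conservative worst case."""
        return math.ceil(z**2 * p * (1 - p) / margin**2)

    print(required_sample_size(0.03))  # ~1068 respondents for +/- 3 points
    print(required_sample_size(0.01))  # ~9604 respondents for +/- 1 point

Note that the formula says nothing about the second question: a large but biased sample (landline-only, say) is not rescued by its size.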

So, then, what is the status of opinion surveys and polls as a source of knowledge about social reality? Do public opinion surveys tell us something about objective underlying facts about a population? What does a finding like "65% of Iowans favor subsidies for corn ethanol" based on a telephone poll of 1000 citizens tell us about the opinions of the full population of the state of Iowa?
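Under the same simple-random-sampling assumption, the hypothetical Iowa figure comes with a computable sampling margin -- a quick check in Python:

    import math

    p, n, z = 0.65, 1000, 1.96   # observed share, sample size, 95% z-score

    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"65% +/- {100 * margin:.1f} points at 95% confidence")
    # -> about 65% +/- 3.0 points -- and that interval covers sampling
    #    error only, not the heterogeneity, volatility, or honesty
    #    problems discussed above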

The points made above lead to several important cautions. First, the reality of public opinion and preference is itself fluid and heterogeneous. The properties we're trying to measure vary substantially across any subgroup we might define -- pro/con assessments of a candidate's judgment, for example. So the measurement of a particular question is simply an average value for the group as a whole, possibly with substantial variance within the group. Second, the opinions themselves may change rapidly over time at the individual level -- with the result that an observation today may be very different from a measurement next week. Third, it is a credible hypothesis that demographic factors such as race, income, or gender affect attitudes and opinions; so there is a basis for thinking that data disaggregated by these variables may show more uniformity and consistency. Finally, the usual cautions about sample size and sample bias are always relevant; a poorly designed study tells us almost nothing about the underlying reality.

But what about the acid test: to what extent can a series of polls and surveys, performed on many subgroups over an extended period of time, help to forecast the collective behavior of the group as a whole? Can we use this kind of information to arrive at credible estimates of an election outcome two weeks out, the likely demand for automobiles in 2009, or the willingness of a whole population to accept public health guidelines in a time of pandemic flu?
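A hypothetical sketch of the simplest kind of aggregation -- weight each poll by sample size and discount older readings. The weighting scheme and the polls here are invented for illustration; this is not pollster.com's actual method.

    # Each poll: (reported share, sample size, days before the event).
    polls = [(0.52, 800, 20), (0.49, 1200, 10), (0.51, 600, 3)]

    HALF_LIFE = 7.0  # assumption: a poll's weight halves per 7 days of age

    def aggregate(polls):
        """Sample-size-weighted average with exponential recency decay."""
        weights = [n * 0.5 ** (age / HALF_LIFE) for _, n, age in polls]
        total = sum(weights)
        return sum(w * share
                   for w, (share, _, _) in zip(weights, polls)) / total

    print(f"aggregated estimate: {aggregate(polls):.3f}")

Such an aggregate becomes a forecast only to the extent that the drift between the last poll and the event itself stays small -- exactly the volatility caveat raised above.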

1 comment:

  1. Surveys concentrate only on the quantitative side of social decision-making and overlook the qualitative side, which can quickly change the structure of the quantitative side when new information arrives.

    If survey companies put more effort into discovering the decision rules of different social forces, we might gain more insight into the quantitative numbers that surveys produce, and better predict how those numbers will change when new information reaches the public.
