Tuesday, June 30, 2009
Put Yourself in a Reporter's Shoes
Before getting into what "putting yourself in a reporter's shoes" will entail, it's worth thinking about why you should bother -- and, more generally, why it's worth spending a whole unit of a course on public opinion on the role of the media. Way back at the start of the course, I suggested (though perhaps not in so many words) that it is important for students of American politics to have an appreciation of public opinion because of its pivotal role in a political system whose form of government is a "republican democracy" -- that is, a political system in which sovereignty rests ultimately with the people but the various tasks of government are carried out by elected representatives acting on their behalf. In order for that to work, there has to be some means of two-way communication between the people and the government: The people need to know what their representatives in government are up to so that they might hold them accountable, and government officials need to know what their constituents think and want (i.e. public opinion) in order to govern according to the will of the people. That's where the media come in -- they are the primary means through which political communication takes place.
When it comes to modern public opinion polling, the media play an important role in conveying pollsters' findings to both public officials and the public at large. This isn't a straightforward translation process, though. Practically whenever communication takes place through a middleman (in this case, the reporter who takes information provided by a polling organization and then presents it to his media outlet's audience of public officials and citizens), the content of the communication gets altered in some way: Some information may get lost along the way or reframed in a way that wasn't intended by its source. Nor is this necessarily a result of "media bias" (although that certainly does exist) or journalists' efforts to skew information for their own gain: Even the best-intentioned reporters face professional constraints that require them to make tough decisions about how to balance their objective of faithful and comprehensive reporting with tight deadlines, word limits, and the need to attract and maintain an audience for their product.
The purpose of this unit of the course is to give you a hands-on sense of how reporters manage that balancing act. To that end, you'll complete part of a course designed for journalists-in-training and journalism professionals at the online News University. You should also read the supplementary required text, "20 Questions a Journalist Should Ask About Poll Results," from the National Council on Public Polls. Then, you'll have an opportunity to put the information you've gleaned from these resources into practice, first by critiquing the Pantagraph's report on a recent local poll and then by producing your own 350-word article on a recent national survey about religion in the United States.
Wednesday, June 17, 2009
Interview Effects
Self-Administered Polls
Self-administered polls entail respondents reading the questionnaire on their own and recording their own answers. Online polls are a good example, as are polls completed via snail mail and the teaching evaluations you fill out at the end of each course at ISU.
The major "pro" of self-administered polling should be fairly obvious in light of what you've already learned about respondent insincerity: A pollster can generally expect more honesty from respondents when they don't have to share their answers out loud with another person whom they fear might make judgements about thier beliefs and behaviors. There are some downsides as well, though. For example, self-administered polls (especially mail-in polls) tend to have a lower response rate than polls that are administered by an interviewer (see below). In addition, open-ended questions, or questions that ask respondents to articulate answers in their own words rather than choose them off a multiple choice list, are less likely to be answered -- let alone answered at length -- in self-administered polls. This limits the amount of information that can be gathered from public opinion polls and also makes it harder to discern when respondents are offering nonattitudes rather than actually-held opinions.
Interviewer-Administered Polls
Trait of Interviewer Effects
A lot of public opinion research focuses on a polling administration problem that emerges only in interviewer-administered polls: trait of interviewer effects, or biased results due to the interviewer's personal characteristics, including especially their race and gender. Trait of interviewer effects are the topic of the last required text for this unit, a video interview with CBS Deputy Director of Surveys Sarah Dutton at last summer's AAPOR conference about race and gender of interviewer effects in the 2008 primaries:
Questionnaire Effects
There are three basic aspects of a questionnaire that can trip pollsters up: its question order, its question wording, and its response alternatives. A classic demonstration of question-order effects comes from a late-1940s experiment in which researchers asked Americans two questions:
- Do you think a Communist country like Russia should let American newspaper reporters come in and send back to America the news as they see it?
- Do you think the United States should let Communist newspaper reporters from other countries come in here and send back to their papers the news as they see it?
They found that when the questions were asked in this order, significantly more people said they'd support Communist reporters coming to the US than when the questions were asked in the reverse order. This is likely because respondents who had already said Russia should admit American reporters would have felt hypocritical refusing to support a similar policy in the United States.
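How do pollsters know such a gap reflects question order rather than chance? A standard check is a split-ballot experiment analyzed with a two-proportion z-test. Here's a minimal sketch in Python; the counts are hypothetical illustrations, not the original study's figures.

```python
# A minimal sketch of a split-ballot analysis using a two-proportion
# z-test. The counts below are hypothetical illustrations, not the
# original study's figures.
from math import sqrt

def two_proportion_z(yes_a: int, n_a: int, yes_b: int, n_b: int) -> float:
    """z-statistic for the difference between two sample proportions."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    p_pool = (yes_a + yes_b) / (n_a + n_b)           # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Ballot A: the "American reporters in Russia" question asked first;
# Ballot B: the "Communist reporters in the US" question asked first.
z = two_proportion_z(yes_a=365, n_a=500, yes_b=270, n_b=500)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the gap isn't just sampling noise
```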
Question wording can be just as treacherous. A notorious example comes from a 1992 Roper poll that asked:
- Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?
The poll yielded astonishingly high numbers of people claiming either that they didn't know (12%) or that it seemed possible the Holocaust never occurred (22%). Shortly thereafter, the Roper pollsters re-ran the poll with the question reworded as follows:
- Does it seem possible to you that the Nazi extermination of the Jews never happened, or do you feel certain that it happened?
With the confusing double negative removed, the share saying it seemed possible that the Holocaust never happened dropped to roughly 1%.
Nonattitudes and Insincerity
While, as we saw in Topic I, public opinion data may be skewed because of inappropriate sampling methodologies, sometimes polls produce inaccurate results because the people who participate in them -- even if they comprise a maximally representative sample of the whole population -- aren't quite truthful about their opinions. There are two primary facets to the "respondents lie" problem: nonattitudes and insincerity.
Nonattitudes (sometimes also known as "pseudo-opinions")
Nonattitudes is a term coined by public opinion researchers back in the early days of public opinion polling to refer to the phenomenon of people offering an opinion even when they don't actually have one. Various studies have established that nonattitudes are a real phenomenon. One of the more prominent examples is a study conducted by University of Cincinnati researchers during the late 1970s, in which they asked respondents their opinions on the "Public Affairs Act of 1975." More than 30% of the respondents offered an opinion, even though no such legislation actually exists.
As you might imagine, when it comes to the prevalence of nonattitudes, not all polling subjects are created equal. Most people do have opinions about high-profile subjects (e.g., abortion, the war in Iraq, Barack Obama). However, when it comes to relatively obscure policies, personalities, and events, nonattitudes are increasingly likely to surface. Similarly, while respondents may have opinions about an issue in general, they may not have opinions about its finer points and technicalities -- but might nevertheless offer "pseudo-opinions" about them when responding to a public opinion poll. As for why respondents report nonattitudes rather than just decline to respond at all -- there are various possibilities, ranging from wanting to appear more knowledgeable than they actually are to feeling pressured to "just answer the question" that's asked of them.
There are a few steps pollsters can take to minimize the problem of nonattitudes creeping into their poll results:
- Screening questions at the start of the poll can weed out likely nonattitudes by asking respondents from the get-go whether they are familiar with the poll's subject and/or have an opinion about it.
- Follow-up questions can also minimize the nonattitudes problem by asking respondents to elaborate on their simple "yes/no," "agree/disagree," or "favor/oppose" responses with explanations as to why they hold those opinions.
- A "mushiness index" (i.e. an index of how "mushy" -- that is, unfixed -- an individual's opinions on some topic are) can be integrated into the poll. This would consist of a series of questions (usually in the neighborhood of 3-5) that ask how much respondents know about the topic, how often they think about or discuss it, and how passionate they are about it.
- Finally, simply providing respondents with the explicit possibility of responding "no opinion" or "I don't know" can limit the number of people who offer opinionated responses that they don't actually believe in.
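As promised above, here's a minimal sketch of how a mushiness index might be scored, assuming a three-item battery with 1-4 response scales; the items and scoring are illustrative assumptions, not a standardized instrument.

```python
# A minimal sketch of scoring a "mushiness index," assuming a
# three-item follow-up battery with 1-4 response scales:
#   1. How much have you heard or read about this issue?
#   2. How often do you discuss it with family or friends?
#   3. How likely are you to change your mind about it? (reverse-coded)
# The items and scoring are illustrative assumptions, not a standard scale.

def mushiness(knowledge: int, discussion: int, changeability: int) -> float:
    """Return a 0-1 score; higher means a mushier (less firmly held) opinion."""
    firmness = knowledge + discussion + (5 - changeability)  # reverse-code item 3
    return 1 - (firmness - 3) / 9  # rescale the 3-12 raw range to 1-0

print(mushiness(knowledge=1, discussion=1, changeability=4))  # 1.0 -> very mushy
print(mushiness(knowledge=4, discussion=4, changeability=1))  # 0.0 -> firmly held
```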
Insincerity
Sometimes respondents do have an opinion about a poll's subject, but choose not to divulge it, as described in this unit's first required text, "When Voters Lie," an article that was published in the Wall Street Journal last summer.
As with nonattitudes, insincerity affects some polling subjects more than others. In contrast to nonattitudes, though, the problem surfaces less as a result of respondents' lack of knowledge about the poll's subject than their psychological urge to come off as "socially desirable" to the poll administrator. So, insincerity is often a problem when people are polled about their behaviors, whether pertaining to their hygiene, their participation in illicit activities, or even whether they vote in presidential elections. People are also more likely to provide insincere responses when asked about attitudes they hold that might be perceived by others as "socially unacceptable" or "politically incorrect." As quite a few of the required texts in this unit suggest, polling subjects that have an explicit racial angle are especially susceptible to bias stemming from voter insincerity.
The simplest way to minimize biased results stemming from insincerity is to have respondents self-administer the poll -- that is, to read the questions and record their responses themselves rather than have an interviewer do it. This isn't always possible, though, especially since telephone polling is often the most effective way to conduct a timely, large-scale poll. Therefore, coming up with new ways to get around the problem of respondent insincerity is always a hot topic in public opinion research. You can hear about one example of an alternative approach in another of this unit's required texts, a YouTube video clip of an interview with an award-winning graduate student presenter at the American Association for Public Opinion Research (AAPOR) conference last summer:
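I won't spoil the clip, but one classic family of workarounds -- not necessarily the one featured in the video -- is the randomized response technique, which protects individual respondents while still letting the pollster recover the aggregate rate. Here's a minimal simulation of a "forced response" variant:

```python
# A minimal simulation of a "forced response" variant of the randomized
# response technique (a general illustration, not necessarily the method
# in the clip). Each respondent privately rolls the dice: with probability
# P_TRUTH they answer the sensitive question honestly; otherwise they
# simply say "yes." No single "yes" is incriminating, yet the true rate
# can still be recovered in the aggregate.
import random

random.seed(42)
TRUE_RATE = 0.30   # assumed true share with the sensitive trait
P_TRUTH = 0.70     # probability the randomizer directs an honest answer

def respond(has_trait: bool) -> bool:
    if random.random() < P_TRUTH:
        return has_trait   # answer the real question
    return True            # forced "yes," regardless of the truth

answers = [respond(random.random() < TRUE_RATE) for _ in range(100_000)]
observed_yes = sum(answers) / len(answers)

# E[yes] = P_TRUTH * true_rate + (1 - P_TRUTH); invert to estimate:
estimate = (observed_yes - (1 - P_TRUTH)) / P_TRUTH
print(f"observed yes-rate: {observed_yes:.3f}; estimated true rate: {estimate:.3f}")
```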
Validity and Reliability
To recap in a nutshell, a valid instrument measures what it is intended to measure, while a reliable one yields the same measurement outcome every time it's used. Also, while they sometimes go hand-in-hand, validity and reliability in fact vary independently of each other, as the classic target-shooting analogy suggests: a tight cluster of shots far from the bull's-eye is reliable but not valid, shots scattered evenly around the bull's-eye are valid on average but not reliable, and only a tight cluster on the bull's-eye is both.
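The same point can be made numerically. In this minimal simulation (the true value and both "instruments" are hypothetical), instrument A is biased but consistent -- reliable, not valid -- while instrument B is unbiased but noisy -- valid on average, not reliable:

```python
# A minimal numeric version of the target analogy. The true value and
# both "instruments" are hypothetical.
import random
import statistics

random.seed(0)
TRUE_VALUE = 50.0  # the quantity we are trying to measure

def measure(bias: float, noise: float) -> float:
    """One measurement from an instrument with the given bias and noise."""
    return TRUE_VALUE + bias + random.gauss(0, noise)

# Instrument A: reliable but not valid (consistent, yet off target).
a = [measure(bias=8.0, noise=0.5) for _ in range(1_000)]
# Instrument B: valid but not reliable (on target on average, but noisy).
b = [measure(bias=0.0, noise=8.0) for _ in range(1_000)]

for name, data in (("A", a), ("B", b)):
    print(name,
          f"mean error = {statistics.mean(data) - TRUE_VALUE:+.2f}",
          f"spread (sd) = {statistics.stdev(data):.2f}")
```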
So, why is all this relevant to public opinion?
As I've noted before, public opinion polls are essentially an instrument or tool we use to measure public opinion. Like all measuring devices, the results they produce may or may not be valid or reliable. Indeed, because so many factors go into designing and executing a public opinion poll -- selecting a sample, writing up a questionnaire, administering it, and analyzing the responses -- there's a lot of room for invalidity and unreliability to creep in. And, while the polling industry has made considerable strides towards improving polling results over the last half-century or so (see the Zetterberg article from Topic I), pollsters are still far from having perfected their craft.
The remainder of Topic II will be devoted to considering various sources of potential inaccuracy in public opinion polling -- ranging from lying respondents to problematic questionnaire design and poll administration -- and identifying some steps pollsters can take to avoid or overcome these pitfalls.
Sunday, June 14, 2009
Sampling
Modern public opinion polling depends critically on sampling for its viability, since it rests on the assumption that we can learn about what the public as a whole thinks by asking just a fraction of its members. To get a better feel for how this works, consider these questions:
First, what is sampling? As the video clip above suggests, sampling refers simply to the selection of a subset of individuals (i.e. a "sample") from a larger population. ABC News's "Guide to Polls and Public Opinion" points out that sampling isn't unique to public opinion polling; instead, it's analogous to a blood test, in which only a little bit of blood is studied in order to draw conclusions about the health of an entire organism. Another good analogy is to a chef who tastes just a spoonful of sauce to determine whether it's ready to serve. In both examples, it would be ludicrous to suggest that the entire bloodstream or pot of sauce would need to be tested; so too in public opinion polling, only a small sample of the population is needed to make determinations about the views and attitudes of the whole.
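If you want to see the spoonful-of-sauce logic in action, here's a minimal simulation in Python; the 56% "true" approval rate is just an assumed value for illustration:

```python
# A minimal simulation of the "spoonful of sauce" logic: estimate a
# population's approval rate from a sample of just 1,000 people. The
# 56% "true" value is an assumption for illustration.
import random

random.seed(1)
TRUE_APPROVAL = 0.56

sample = [random.random() < TRUE_APPROVAL for _ in range(1_000)]
estimate = sum(sample) / len(sample)
print(f"sample estimate: {estimate:.3f} vs. population value: {TRUE_APPROVAL}")
```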
Second, what are the characteristics of a good sample? In a nutshell, good sampling accounts for two factors: size (how big the sample is) and representativeness (the extent to which the sample faithfully reflects key characteristics of the whole population). Of the two, representativeness matters more than size. Consider the contest between the Literary Digest and Gallup polls to predict the 1936 election (see here to refresh your memory). The Literary Digest poll had a significantly larger sample of millions of respondents compared to Gallup's couple of thousand. However, Gallup's quota sampling technique provided a sample that was more representative of the public as a whole than the Literary Digest's reliance on telephone and automobile registry listings, which tended to oversample voters from the wealthier end of the socioeconomic spectrum. That said, size does matter, especially when it comes to determining a poll's margin of error, as suggested by another one of our required texts for this unit: Public Agenda's "Best Estimates: A Guide to Sample Size and Margin of Error."
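For a concrete sense of how sample size drives the margin of error, here's a minimal sketch of the standard 95% formula, MOE ≈ 1.96 * sqrt(p(1-p)/n), applied to round illustrative figures for the 1936 contest:

```python
# A minimal sketch of the 95% margin-of-error formula at the worst
# case p = 0.5. The sample sizes below are round, illustrative figures
# for the 1936 Literary Digest vs. Gallup contest.
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a simple random sample of size n."""
    return z * sqrt(p * (1 - p) / n)

for label, n in (("Literary Digest (millions of returned ballots)", 2_400_000),
                 ("Gallup (a couple of thousand respondents)", 3_000)):
    print(f"{label}: +/- {margin_of_error(n):.2%}")

# The Digest's microscopic margin of error was cold comfort: because its
# sample was unrepresentative, its error was bias, not sampling noise.
```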
Finally, how should samples be selected? The final required text for this unit, "Types of Sampling" from the Columbia Center for New Media Teaching and Learning, lays out several types of sampling strategies that are commonly used in public opinion polling and the circumstances that are best suited to the different techniques.
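As a preview of that reading, here's a minimal sketch contrasting two commonly used designs, simple random sampling and proportionate stratified sampling; the toy population and its "region" strata are assumptions for illustration:

```python
# A minimal sketch contrasting simple random sampling with proportionate
# stratified sampling. The toy population and "region" strata are
# assumptions for illustration.
import random

random.seed(2)
population = [{"id": i, "region": random.choice(["urban", "rural"])}
              for i in range(10_000)]

# Simple random sampling: every member has an equal chance of selection.
srs = random.sample(population, k=500)

# Stratified sampling: draw from each stratum in proportion to its size,
# guaranteeing the sample mirrors the population's regional makeup.
strata = {}
for person in population:
    strata.setdefault(person["region"], []).append(person)

stratified = []
for region, members in strata.items():
    k = round(500 * len(members) / len(population))
    stratified.extend(random.sample(members, k))

print(f"SRS size: {len(srs)}; stratified size: {len(stratified)}")
print({region: len(members) for region, members in strata.items()})
```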
Measuring Public Opinion
To get around this problem, the predominant means for measuring public opinion is the scientific public opinion poll or survey, terms we'll use interchangeably to refer to questionnaires that are administered to a small but representative sample of the population in order to get a sense of what the population as a whole thinks. The first required text for this unit, "Guide to Polls & Public Opinion," by Gary Langer, the director of polling for ABC News, provides a pretty clear overview of what public opinion polls are, how they work, and the extent of their usefulness. Later on in this unit, we'll get more into the nitty-gritty of "how they work" (i.e. the logic and best practices of sampling); then, in Topic II, we'll turn to "the extent of their usefulness" and steps pollsters can take to improve the usefulness of their poll and survey results.
While the belief that public opinion should have a hand in determining political and policy outcomes dates back to the founding of the United States (see my last post), the emergence of scientific public opinion polling and survey research is a relatively recent development. A few of this unit's required texts drive this home:
- The segment on "George Gallup and the Scientific Public Opinion Poll" from a PBS documentary on the twentieth century traces the emergence of scientific polling back to the 1930s, when a few enterprising marketing researchers, including George Gallup, were developing new survey research methods. Prior to that time, public opinion was measured primarily through "straw polls," unscientific polls that were used to estimate likely election outcomes on a local basis (e.g. in a single town) as early as the 1820s.
- "The Black & White Beans" is a Time magazine article that was written in order to explain the logic of public opinion polling to the American public at a time when it was all still fairly new -- just a little over a decade after Gallup bested the Literary Digest poll with his prediction of the 1936 election. It's probably worth noting that this article was published in May 1948 -- several months before all the major pollsters were fundamentally off base in their forecasts of the 1948 presidential election.
- On that note, in "US Election 1948: The First Great Controversy about Polls, Media, and Social Science," a Swedish social scientist named Hans Zetterberg provides an account of the pollsters' 1948 mishap and changes in polling practices that have been made since then to prevent a similar situation in the future.