Tuesday, June 30, 2009

Put Yourself in a Reporter's Shoes

Until now, we've approached public opinion from the standpoint of the pollster -- that is, the individual whose role is to measure the public's attitudes and beliefs by designing, administering, and analyzing the results of public opinion polls. For Topic III, though, we're switching gears a bit to better appreciate the vantage point of another important actor in the politics of public opinion: the journalist, or the individual whose role is to disseminate pollsters' findings about what the public thinks about politics by reporting on it in the mass media.

Before getting into what "putting yourself in a reporter's shoes" will entail, it's worth thinking about why you should bother -- and, more generally, why it's worth spending a whole unit of a course on public opinion on the role of the media. Way back at the start of the course, I suggested (though perhaps not in so many words) that it is important for students of American politics to have an appreciation of public opinion because of its pivotal role in a political system whose form of government is a "republican democracy" -- that is, a political system in which sovereignty rests ultimately with the people but the various tasks of government are carried out by elected representatives acting on their behalf. In order for that to work, there has to be some means of two-way communication between the people and the government: The people need to know what their representatives in government are up to so that they might hold them accountable, and government officials need to know what their constituents think and want (i.e. public opinion) in order to govern according to the will of the people. That's where the media come in -- they are the primary means through which political communication takes place.


When it comes to modern public opinion polling, the media play an important role in conveying pollsters' findings to both public officials and the public at large. This isn't a straightforward translation process, though. Practically whenever communication takes place through a middleman (in this case, the reporter who takes information provided by a polling organization and then presents it to his media outlet's audience of public officials and citizens), the content of the communication gets altered in some way: Some information may get lost along the way or reframed in a way that wasn't intended by its source. Nor is this necessarily a result of "media bias" (although that certainly does exist) or journalists' efforts to skew information for their own gain: Even the best-intentioned reporters face professional constraints that require them to make tough decisions about how to balance their objective of faithful and comprehensive reporting with tight deadlines, word limits, and the need to attract and maintain an audience for their product.

The purpose of this unit of the course is to give you a hands-on sense of how reporters manage that balancing act. To that end, you'll complete part of a course designed for journalists-in-training and journalism professionals at the online News University. You should also read the supplementary required text, "20 Questions a Journalist Should Ask About Poll Results," from the National Council on Public Polls. Then, you'll have an opportunity to put the information you've gleaned from these resources into practice, first by critiquing the Pantagraph's report on a recent local poll and then by producing your own 350-word article on a recent national survey about religion in the United States.

...Read more

Wednesday, June 17, 2009

Interview Effects

The final source of potential inaccuracy in public opinion polling that we'll consider in this course is the administration of a poll -- that is, the manner in which a questionnaire is presented to respondents and their answers are recorded.

Self-Administered Polls

Self-administered polls entail respondents reading the questionnaire on their own and recording their own answers. Online polls are a good example of self-administered polls, as are polls completed via snail mail and the teaching evaluations you fill out at the end of each course at ISU.

The major "pro" of self-administered polling should be fairly obvious in light of what you've already learned about respondent insincerity: A pollster can generally expect more honesty from respondents when they don't have to share their answers out loud with another person whom they fear might make judgments about their beliefs and behaviors. There are some downsides as well, though. For example, self-administered polls (especially mail-in polls) tend to have a lower response rate than polls that are administered by an interviewer (see below). In addition, open-ended questions, or questions that ask respondents to articulate answers in their own words rather than choose them off a multiple choice list, are less likely to be answered -- let alone answered at length -- in self-administered polls. This limits the amount of information that can be gathered from public opinion polls and also makes it harder to discern when respondents are offering nonattitudes rather than actually-held opinions.

Interviewer-Administered Polls

In contrast to self-administered polls, interviewer-administered polls make use of a "middle-man" who reads the questionnaire to respondents and records answers on their behalf. Telephone polls (which are a very efficient way to conduct a large-scale poll, and so are very commonly used in the polling industry) are the most typical form of interviewer-administered polls, but there are also in-person polls (which were the "state of the art" back in polling's early days) in which the interviewer and respondent interact face-to-face.

The pros and cons of interviewer-administered polling are basically the flip side of self-administered polling. On the one hand, it heightens the likelihood of insincerity; on the other, it improves response rates and enables pollsters to gather more extensive information through open-ended questions. Well-trained interviewers can also help clarify when respondents are confused about a question's wording or response alternatives.

Trait of Interviewer Effects

A lot of public opinion research focuses on a polling administration problem that emerges only in interviewer-administered polls: trait of interviewer effects, or biased results due to the interviewer's personal characteristics, including especially their race and gender. Trait of interviewer effects are the topic of the last required text for this unit, a video interview with CBS Deputy Director of Surveys Sarah Dutton at last summer's AAPOR conference about race and gender of interviewer effects in the 2008 primaries:



...Read more

Questionnaire Effects

So far, we've seen two potential sources of invalidity and unreliability in public opinion polls: (1) using sampling techniques that result in unrepresentative samples, and (2) respondents misrepresenting their opinions, either by offering opinions when they don't in fact have any (i.e. nonattitudes) or by giving responses that they believe to be most socially desirable whether or not they're their true opinions (i.e. insincerity). This post considers a third possibility: The design of the questionnaire itself, which is addressed in the second required text for this unit, AAPOR's "Question Wording" FAQ page.

There are basically three aspects of a questionnaire that can trip pollsters up: its question order, its question wording, and its response alternatives.

Question Order

The AAPOR reading covers this pretty well; to recap in a nutshell, problems can emerge when questions that appear early on in a questionnaire affect the way in which respondents answer questions that appear later on. A good example comes from some studies conducted during the 1980s, in which researchers looked at responses to the following two questions:
  • Do you think a Communist country like Russia should let American newspaper reporters come in and send back to America the news as they see it?

  • Do you think the United States should let Communist newspaper reporters from other countries come in here and send back to their papers the news as they see it?

They found that when the questions were asked in this order, significantly more people said they'd support Communist reporters coming to the US than when the questions were asked in the reverse order. This is likely because respondents who had already said that Russia should give American reporters access would have felt hypocritical refusing Communist reporters similar access to the United States.

Question Wording

There are many ways for question wording to go wrong, as the AAPOR reading suggests. Leading questions, which (as the name suggests) lead respondents towards a particular answer, are a frequent and obvious cause for concern. At least as frequently problematic, though, are questions that are simply imprecise or confusing. For example, a 1992 Roper poll asked respondents the following question:
  • Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?

The poll yielded astonishingly high numbers of people saying either that they didn't know (12%) or that it seemed possible that the Holocaust never occurred (22%). Shortly thereafter, the Roper pollsters re-ran the poll with the question reworded as follows:

  • Does it seem possible to you that the Nazi extermination of the Jews never happened, or do you feel certain that it happened?

The result this time around was more along the lines that were initially expected (and corroborated by a similar poll by the Gallup Organization), as just 1% of respondents said it seemed possible that the Holocaust never happened.

Response Alternatives

Finally, as the cartoon on the right suggests, a questionnaire can produce skewed poll results because the "multiple choice" options offered to respondents as possible answers to the poll's questions are inappropriate.

Sometimes, as in the cartoon, the problem is glaringly obvious. More often, though, it takes more careful reflection to discern where bias may ensue -- consider, for example, the response alternatives in some of the questions presented in "War or War?" (pdf), an article that appeared in Public Perspective about a year after the 9/11 terrorist attacks.

There are yet more subtle ways for response alternatives to be problematic. Take, for example, a question that asks respondents how often they engage in some behavior -- say, working out at a gym -- and offers the following response alternatives: (a) often, (b) sometimes, (c) rarely, (d) never. The problem here is that "often" and "sometimes" mean different things to different people. So, you might have two respondents who both work out twice a week, but one of whom claims to work out "often" and the other claims to work out just "sometimes." And finally, as we saw in the case of nonattitudes, sometimes the absence of a "don't know" or "no opinion" option can lead respondents to take a side that they don't actually believe in.
...Read more

Nonattitudes and Insincerity


While, as we saw in Topic I, public opinion data may be skewed because of inappropriate sampling methodologies, sometimes polls produce inaccurate results because the people who participate in them -- even if they comprise a maximally representative sample of the whole population -- aren't quite truthful about their opinions. There are two primary facets to the "respondents lie" problem: nonattitudes and insincerity.

Nonattitudes (sometimes also known as "pseudo-opinions")

Nonattitudes is a term coined by public opinion researchers back in the early days of public opinion polling to refer to the phenomenon of people offering an opinion even when they don't actually have one. Various studies have established that nonattitudes are a real phenomenon. One of the more prominent examples is a study conducted by University of Cincinnati researchers during the late 1970s, in which they asked respondents their opinions on the "Public Affairs Act of 1975." More than 30% of the respondents offered an opinion, even though there actually is no such legislation.

As you might imagine, when it comes to the prevalence of nonattitudes, not all polling subjects are created equal. Most people do have opinions about high-profile subjects (e.g., abortion, the war in Iraq, Barack Obama). However, when it comes to relatively obscure policies, personalities, and events, nonattitudes are increasingly likely to surface. Similarly, while people may have opinions about an issue in general, they may not have opinions about its finer points and technicalities -- but might nevertheless offer "pseudo-opinions" about them when responding to a public opinion poll. As for why respondents report nonattitudes rather than just decline to respond at all -- there are various possibilities, ranging from wanting to appear more knowledgeable than they actually are to feeling pressured to "just answer the question" that's asked of them.

There are a few steps pollsters can take to minimize the problem of nonattitudes creeping into their poll results:
  • Screening questions at the start of the poll can weed out likely nonattitudes by asking respondents from the get-go whether they are familiar with the poll's subject and/or have an opinion about it.
  • Follow-up questions can also minimize the nonattitudes problem by asking respondents to elaborate on their simple "yes/no," "agree/disagree," or "favor/oppose" responses with explanations as to why they hold those opinions.
  • A "mushiness index" (i.e. an index of how "mushy" -- that is, unfixed -- an individual's opinions on some topic are) can be integrated into the poll. This would consist of a series of questions (usually in the neighborhood of 3-5) that ask how much respondents know about the topic, how often they think about or discuss it, and how passionate they are about it. (A rough sketch of how such an index might be scored appears just after this list.)
  • Finally, simply providing respondents with the explicit possibility of responding "no opinion" or "I don't know" can limit the number of people who offer opinionated responses that they don't actually believe in.
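Here's a rough, purely hypothetical sketch (in Python) of how a mushiness index along those lines might be scored. The three inputs, the 1-to-5 scale, and the cutoffs are invented for illustration; they aren't taken from any actual polling organization's index.

    def mushiness_index(knowledge, frequency, passion):
        # Hypothetical mushiness score built from three follow-up questions, each
        # answered on a 1 (low) to 5 (high) scale: how much the respondent knows
        # about the topic, how often they think or talk about it, and how strongly
        # they feel about it. Returns 0 (very firm opinion) to 1 (very "mushy").
        engagement = (knowledge + frequency + passion) / 3   # average engagement, 1-5
        return round((5 - engagement) / 4, 2)                # rescale so higher = mushier

    # A respondent who knows little, rarely thinks about the issue, and doesn't
    # care much gets a high score -- a flag that the response may be a nonattitude:
    print(mushiness_index(knowledge=1, frequency=2, passion=1))  # 0.92
    print(mushiness_index(knowledge=5, frequency=4, passion=5))  # 0.08

A pollster could then report results separately for "firm" and "mushy" respondents, or simply note how much of the headline number rests on weakly held opinions.
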
Insincerity

Sometimes respondents do have an opinion about a poll's subject, but choose not to divulge it, as described in this unit's first required text, "When Voters Lie," an article that was published in the Wall Street Journal last summer.

As with nonattitudes, insincerity affects some polling subjects more than others. In contrast to nonattitudes, though, the problem surfaces less as a result of respondents' lack of knowledge about the poll's subject than their psychological urge to come off as "socially desirable" to the poll administrator. So, insincerity is often a problem when people are polled about their behaviors, whether pertaining to their hygiene, their participation in illicit activities, or even whether they vote in presidential elections. People are also more likely to provide insincere responses when asked about attitudes they hold that might be perceived by others as "socially unacceptable" or "politically incorrect." As quite a few of the required texts in this unit suggest, polling subjects that have an explicit racial angle are especially susceptible to bias stemming from voter insincerity.

The simplest way to minimize biased results stemming from insincerity is to have respondents self-administer the poll -- that is, to read the questions and record their responses themselves rather than have an interviewer do it. This isn't always possible, though, especially since telephone polling is often the most effective way to conduct a timely, large-scale poll. Therefore, coming up with new ways to get around the problem of respondent insincerity is always a hot topic in public opinion research. You can hear about one example of an alternative approach in another of this unit's required texts, a YouTube video clip of an interview with an award-winning graduate student presenter at the American Association for Public Opinion Research (AAPOR) conference last summer:


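One widely studied technique for coaxing honest answers on sensitive topics -- not necessarily the approach featured in the video, but useful for a concrete flavor of what "getting around" insincerity can look like -- is the list experiment (also called the item count technique). Half the sample sees a list of innocuous items and reports only how many apply to them; the other half sees the same list plus the sensitive item. Since nobody says which items apply, respondents can answer honestly, and the difference in average counts between the two groups estimates how common the sensitive item is. A minimal sketch with invented numbers:

    from statistics import mean

    # Invented example data: each number is how many items on a list a respondent
    # said applied to them (respondents never say which items).
    control   = [2, 1, 3, 2, 2, 1, 3, 2]   # saw 4 innocuous items
    treatment = [2, 2, 3, 2, 3, 1, 3, 2]   # saw the same 4 items plus the sensitive one

    # The difference in means estimates the share of respondents to whom the
    # sensitive item applies -- without any individual ever admitting it.
    estimate = mean(treatment) - mean(control)
    print(f"Estimated prevalence of the sensitive item: {estimate:.0%}")  # 25%
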
...Read more

Validity and Reliability

It's become something of a fad lately for instructors to have their students create and upload YouTube videos as class assignments; this one from the University of Texas does a pretty good job explaining the concepts of validity and reliability:




To recap in a nutshell, a valid instrument measures what it is intended to measure while a reliable one yields the same measurement outcome every time it's used. Also, while they sometimes go hand-in-hand, validity and reliability in fact vary independently of each other, as this target shooting analogy suggests:

So, why is all this relevant to public opinion?

As I've noted before, public opinion polls are essentially an instrument or tool we use to measure public opinion. Like all measuring devices, the results they produce may or may not be valid or reliable. Indeed, because so many factors go into designing and executing a public opinion poll -- selecting a sample, writing up a questionnaire, administering it, and analyzing the responses -- there's a lot of room for invalidity and unreliability to creep in. And, while the polling industry has made considerable strides towards improving polling results over the last half-century or so (see the Zetterberg article from Topic I), it's still far from having perfected its craft.
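If it helps to see the target-shooting analogy in code, here's a minimal simulation sketch (Python, with entirely made-up numbers -- not drawn from the video or any of the readings). One "instrument" is reliable but not valid; the other is valid on average but not reliable:

    import random
    import statistics

    random.seed(0)
    TRUE_APPROVAL = 50.0  # the quantity we're trying to measure, in percent

    def biased_but_consistent():
        # Reliable, not valid: tight grouping, but centered 8 points too high.
        return TRUE_APPROVAL + 8 + random.gauss(0, 0.5)

    def unbiased_but_noisy():
        # Valid on average, not reliable: centered on the truth, but scattered.
        return TRUE_APPROVAL + random.gauss(0, 6)

    for name, instrument in [("reliable, not valid", biased_but_consistent),
                             ("valid, not reliable", unbiased_but_noisy)]:
        readings = [instrument() for _ in range(5)]
        print(name, [round(r, 1) for r in readings],
              "mean =", round(statistics.mean(readings), 1))

The first instrument gives nearly the same (wrong) answer every time; the second averages out near the truth, but any single reading could be well off the mark.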

The remainder of Topic II will be devoted to considering various sources of potential inaccuracy in public opinion polling -- ranging from lying respondents to problematic questionnaire design and poll administration -- and identifying some steps pollsters can take to avoid or overcome these pitfalls.
...Read more

Sunday, June 14, 2009

Sampling

Modern public opinion polling depends critically on sampling for its viability, since it rests on the assumption that we can learn about what the public as a whole thinks by asking just a fraction of its members. To get a better feel for how this works, consider these questions:

First, what is sampling? As the video clip above suggests, sampling refers simply to the selection of a subset of individuals (i.e. a "sample") from a larger population. ABC News's "Guide to Polls and Public Opinion" points out that sampling isn't unique to public opinion polling; instead, it's analogous to a blood test, in which only a little bit of blood is studied in order to draw conclusions about the health of an entire organism. Another good analogy is to a chef who tastes just a spoonful of sauce to determine whether it's ready to serve. In both examples, it would be ludicrous to suggest that the entire bloodstream or pot of sauce would need to be tested; so too in public opinion polling, only a small sample of the population is needed to make determinations about the views and attitudes of the whole.
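If you'd like to see the spoonful-of-sauce logic in action, here's a minimal simulation sketch (Python, with an invented population; nothing here comes from the readings):

    import random

    random.seed(42)

    # Invent a "population" of one million people, 54% of whom approve of some policy.
    population = [1] * 540_000 + [0] * 460_000

    # Ask only 1,000 of them, chosen completely at random.
    sample = random.sample(population, 1_000)
    estimate = sum(sample) / len(sample)

    print("True approval:   54.0%")
    print(f"Sample estimate: {estimate * 100:.1f}%")  # typically within a few points of 54%

Run it a few times with different seeds and the estimate will wobble, but it almost always lands within a few percentage points of the true figure -- provided the sample is drawn at random from the whole population, which brings us to the next question.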

Second, what are the characteristics of a good sample? In a nutshell, good sampling accounts for two factors: size (how big the sample is) and representativeness (the extent to which the sample faithfully reflects key characteristics of the whole population). Of the two, representativeness matters more than size. Consider the contest between the Literary Digest and Gallup polls to predict the 1936 election (see here to refresh your memory). The Literary Digest poll had a vastly larger sample (millions of respondents) than Gallup's. However, Gallup's quota sampling technique provided a sample that was more representative of the public as a whole than the Literary Digest's reliance on telephone and automobile registry listings, which tended to oversample voters from the wealthier end of the socioeconomic spectrum. That said, size does matter, especially when it comes to determining a poll's margin of error, as suggested by another one of our required texts for this unit: Public Agenda's "Best Estimates: A Guide to Sample Size and Margin of Error."
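To put some numbers on the size-versus-margin-of-error relationship, here's a minimal sketch using the standard 95%-confidence formula for a simple random sample (the calculation is illustrative, not taken from the Public Agenda guide):

    import math

    def margin_of_error(p, n, z=1.96):
        # 95% margin of error for a sample proportion p with sample size n.
        return z * math.sqrt(p * (1 - p) / n)

    # Worst case (p = 0.5) at several common sample sizes:
    for n in (600, 1000, 1500, 10000):
        print(f"n = {n:>6}: +/- {margin_of_error(0.5, n) * 100:.1f} percentage points")

    # n =    600: +/- 4.0 percentage points
    # n =   1000: +/- 3.1 percentage points
    # n =   1500: +/- 2.5 percentage points
    # n =  10000: +/- 1.0 percentage points

Note the diminishing returns: going from 1,000 to 10,000 respondents shaves only about two points off the margin of error, which is part of why representativeness, not sheer size, is the pollster's first concern.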

Finally, how should samples be selected? The final required text for this unit, "Types of Sampling" from the Columbia Center for New Media Teaching and Learning, lays out several types of sampling strategies that are commonly used in public opinion polling and the circumstances that are best suited to the different techniques. ...Read more

Measuring Public Opinion

Our first full unit is devoted to the question of how we know what the American public thinks about politics -- in other words, how we measure public opinion. The short answer is that we ask the people what their views and attitudes are, which seems kind of obvious -- except that it's just not practical to question everyone every time we want to know what the country is thinking.

To get around this problem, the predominant means for measuring public opinion is the scientific public opinion poll or survey, terms we'll use interchangeably to refer to questionnaires that are administered to a small but representative sample of the population in order to get a sense of what the population as a whole thinks. The first required text for this unit, "Guide to Polls & Public Opinion," by Gary Langer, the director of polling for ABC News, provides a pretty clear overview of what public opinion polls are, how they work, and the extent of their usefulness. Later on in this unit, we'll get more into the nitty-gritty of "how they work" (i.e. the logic and best practices of sampling); then, in Topic II, we'll turn to "the extent of their usefulness" and steps pollsters can take to improve the usefulness of their poll and survey results.

While the belief that public opinion should have a hand in determining political and policy outcomes dates back to the founding of the United States (see my last post), the emergence of scientific public opinion polling and survey research is a relatively recent development. A few of this unit's required texts drive this home:


  • The segment on "George Gallup and the Scientific Public Opinion Poll" from a PBS documentary on the twentieth century traces the emergence of scientific polling back to the 1930s, when a few enterprising marketing researchers, including George Gallup, were developing new survey research methods. Prior to that time, public opinion was measured primarily through "straw polls," unscientific polls that were used to estimate likely election outcomes on a local basis (e.g. in a single town) as early as the 1820s.

  • "The Black & White Beans" is a Time magazine article that was written in order to explain the logic of public opinion polling to the American public at a time when it was all still fairly new -- just a little over a decade after Gallup bested the Literary Digest poll with his prediction of the 1936 election. It's probably worth noting that this article was published in May 1948 -- several months before all the major pollsters were fundamentally off base in their forecasts of the 1948 presidential election.

  • On that note, in "US Election 1948: The First Great Controversy about Polls, Media, and Social Science," a Swedish social scientist named Hans Zetterberg provides an account of the pollsters' 1948 mishap and changes in polling practices that have been made since then to prevent a similar situation in the future.
...Read more