Tuesday, June 30, 2009
Put Yourself in a Reporter's Shoes
Before getting into what "putting yourself in a reporter's shoes" will entail, it's worth thinking about why you should bother -- and, more generally, why it's worth spending a whole unit of a course on public opinion on the role of the media. Way back at the start of the course, I suggested (though perhaps not in so many words) that it is important for students of American politics to have an appreciation of public opinion because of its pivotal role in a political system whose form of government is a "republican democracy" -- that is, a political system in which sovereignty rests ultimately with the people but the various tasks of government are carried out by elected representatives acting on their behalf. In order for that to work, there has to be some means of two-way communication between the people and the government: The people need to know what their representatives in government are up to so that they might hold them accountable, and government officials need to know what their constituents think and want (i.e. public opinion) in order to govern according to the will of the people. That's where the media come in -- they are the primary means through which political communication takes place.
When it comes to modern public opinion polling, the media play an important role in conveying pollsters' findings to both public officials and the public at large. This isn't a straightforward translation process, though. Practically whenever communication takes place through a middleman (in this case, the reporter who takes information provided by a polling organization and then presents it to his media outlet's audience of public officials and citizens), the content of the communication gets altered in some way: Some information may get lost along the way or reframed in a way that wasn't intended by its source. Nor is this necessarily a result of "media bias" (although that certainly does exist) or journalists' efforts to skew information for their own gain: Even the best-intentioned reporters face professional constraints that require them to make tough decisions about how to balance their objective of faithful and comprehensive reporting with tight deadlines, word limits, and the need to attract and maintain an audience for their product.
The purpose of this unit of the course is to give you a hands-on sense of how reporters manage that balancing act. To that end, you'll complete part of a course designed for journalists-in-training and journalism professionals at the online News University. You should also read the supplementary required text, "20 Questions a Journalist Should Ask About Poll Results," from the National Council on Public Polls. Then, you'll have an opportunity to put the information you've gleaned from these resources into practice, first by critiquing the Pantagraph's report on a recent local poll and then by producing your own 350-word article on a recent national survey about religion in the United States.
Wednesday, June 17, 2009
Interview Effects
Self-Administered Polls
Self-administered polls entail respondents reading the questionnaire on their own and recording their own answers. Online polls are a good example of self-administered polls, as are polls completed via snail mail and the teaching evaluations you fill out at the end of each course at ISU.
The major "pro" of self-administered polling should be fairly obvious in light of what you've already learned about respondent insincerity: A pollster can generally expect more honesty from respondents when they don't have to share their answers out loud with another person whom they fear might make judgments about their beliefs and behaviors. There are some downsides as well, though. For example, self-administered polls (especially mail-in polls) tend to have a lower response rate than polls that are administered by an interviewer (see below). In addition, open-ended questions, or questions that ask respondents to articulate answers in their own words rather than choose from a multiple-choice list, are less likely to be answered -- let alone answered at length -- in self-administered polls. This limits the amount of information that can be gathered from public opinion polls and also makes it harder to discern when respondents are offering nonattitudes rather than actually-held opinions.
Interviewer-Administered Polls
Trait of Interviewer Effects
A lot of public opinion research focuses on a polling administration problem that emerges only in interviewer-administered polls: trait of interviewer effects, or biased results due to the interviewer's personal characteristics, including especially their race and gender. Trait of interviewer effects are the topic of the last required text for this unit, a video interview with CBS Deputy Director of Surveys Sarah Dutton at last summer's AAPOR conference about race and gender of interviewer effects in the 2008 primaries:
Questionnaire Effects
There are basically three aspects of a questionnaire that can trip pollsters up: its question order, its question wording, and its response alternatives.
- Do you think a Communist country like Russia should let American newspaper reporters come in and send back to America the news as they see it?
- Do you think the United States should let Communist newspaper reporters from other countries come in here and send back to their papers the news as they see it?
Researchers found that when the questions were asked in this order, significantly more people said they'd support Communist reporters coming to the US than when the questions were asked in the reverse order. This is likely because respondents who had already expressed support for Russia allowing access to American reporters would have felt hypocritical if they didn't support a similar policy in the United States.
- Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?
The poll yielded astonishingly high numbers of people saying either that they didn't know (12%) or that it seemed possible the Holocaust never occurred (22%). Shortly thereafter, the Roper pollsters re-ran the poll with the question reworded as follows:
- Does it seem possible to you that the Nazi extermination of the Jews never happened, or do you feel certain that it happened?
Nonattitudes and Insincerity
While, as we saw in Topic I, public opinion data may be skewed because of inappropriate sampling methodologies, sometimes polls produce inaccurate results because the people who participate in them -- even if they comprise a maximally representative sample of the whole population -- aren't quite truthful about their opinions. There are two primary facets to the "respondents lie" problem: nonattitudes and insincerity.
Nonattitudes (sometimes also known as "pseudo-opinions")
Nonattitudes is a term coined by public opinion researchers back in the early days of public opinion polling to refer to the phenomenon of people offering an opinion even when they don't actually have one. Various studies have established that nonattitudes are a real phenomenon. One of the more prominent examples is a study conducted by University of Cincinnati researchers during the late 1970s, in which they asked respondents their opinions on the "Public Affairs Act of 1975." More than 30% of the respondents offered an opinion, even though there actually is no such legislation.
As you might imagine, when it comes to the prevalence of nonattitudes, not all polling subjects are created equal. Most people do have opinions about high-profile subjects (e.g., abortion, the war in Iraq, Barack Obama). However, when it comes to relatively obscure policies, personalities, and events, nonattitudes are increasingly likely to surface. Similarly, while people may have opinions about an issue in general, they may not have opinions about its finer points and technicalities -- but might nevertheless offer "pseudo-opinions" about them when responding to a public opinion poll. As for why respondents report nonattitudes rather than just decline to respond at all -- there are various possibilities, ranging from wanting to appear more knowledgeable than they actually are to feeling pressured to "just answer the question" that's asked of them.
There are a few steps pollsters can take to minimize the problem of nonattitudes creeping into their poll results:
- Screening questions at the start of the poll can weed out likely nonattitudes by asking respondents from the get-go whether they are familiar with the poll's subject and/or have an opinion about it.
- Follow-up questions can also minimize the nonattitudes problem by asking respondents to elaborate on their simple "yes/no," "agree/disagree," or "favor/oppose" responses with explanations as to why they hold those opinions.
- A "mushiness index" (i.e. an index of how "mushy" -- that is, unfixed -- an individual's opinions on some topic are) can be integrated into the poll. This would consist of a series of questions (usually in the neighborhood of 3-5) that ask how much respondents know about the topic, how often they think about or discuss it, and how passionate they are about it.
- Finally, simply providing respondents with the explicit possibility of responding "no opinion" or "I don't know" can limit the number of people who offer opinionated responses that they don't actually believe in.
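To make the "mushiness index" idea from the list above a bit more concrete, here is a minimal sketch in Python. The three questions, the 1-to-5 response scales, and the cutoff value are all hypothetical assumptions for illustration, not a standard instrument:

```python
# Hypothetical sketch of a "mushiness index." The question set (knowledge,
# frequency of discussion, passion), the 1-to-5 scales, and the 0.4 cutoff
# are illustrative assumptions, not a standardized survey instrument.

def mushiness_index(knowledge, frequency, passion, scale_max=5):
    """Average several 1-to-scale_max self-ratings and rescale to 0-1,
    where 0 means maximally "mushy" (unfixed) and 1 means firmly held."""
    scores = [knowledge, frequency, passion]
    return sum(s - 1 for s in scores) / (len(scores) * (scale_max - 1))

def is_mushy(index, threshold=0.4):
    """Flag opinions too loosely held to take at face value."""
    return index < threshold

# A respondent who knows little about the topic (2), rarely discusses
# it (1), and feels lukewarm about it (2) gets a low firmness score,
# so a pollster might discount their substantive answers.
idx = mushiness_index(2, 1, 2)
```

A pollster could use such a score to report opinion separately for "firm" and "mushy" respondents, rather than treating every answer as equally meaningful.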
Sometimes respondents do have an opinion about a poll's subject, but choose not to divulge it, as described in this unit's first required text, "When Voters Lie," an article that was published in the Wall Street Journal last summer.
As with nonattitudes, insincerity affects some polling subjects more than others. In contrast to nonattitudes, though, the problem surfaces less as a result of respondents' lack of knowledge about the poll's subject than their psychological urge to come off as "socially desirable" to the poll administrator. So, insincerity is often a problem when people are polled about their behaviors, whether pertaining to their hygiene, their participation in illicit activities, or even whether they vote in presidential elections. People are also more likely to provide insincere responses when asked about attitudes they hold that might be perceived by others as "socially unacceptable" or "politically incorrect." As quite a few of the required texts in this unit suggest, polling subjects that have an explicit racial angle are especially susceptible to bias stemming from voter insincerity.
The simplest way to minimize biased results stemming from insincerity is to have respondents self-administer the poll -- that is, to read the questions and record their responses themselves rather than have an interviewer do it. This isn't always possible, though, especially since telephone polling is often the most effective way to conduct a timely, large-scale poll. Therefore, coming up with new ways to get around the problem of respondent insincerity is always a hot topic in public opinion research. You can hear about one example of an alternative approach in another of this unit's required texts, a YouTube video clip of an interview with an award-winning graduate student presenter at the American Association for Public Opinion Research (AAPOR) conference last summer:
Validity and Reliability
To recap in a nutshell, a valid instrument measures what it is intended to measure while a reliable one yields the same measurement outcome every time it's used. Also, while they sometimes go hand-in-hand, validity and reliability in fact vary independently of each other, as this target shooting analogy suggests:
So, why is all this relevant to public opinion?
As I've noted before, public opinion polls are essentially an instrument or tool we use to measure public opinion. Like all measuring devices, the results they produce may or may not be valid or reliable. Indeed, because so many factors go into designing and executing a public opinion poll -- selecting a sample, writing up a questionnaire, administering it, and analyzing the responses -- there's a lot of room for invalidity and unreliability to creep in. And, while the polling industry has made considerable strides towards improving polling results over the last half-century or so (see the Zetterberg article from Topic I), they're still far from having perfected their craft.
The remainder of Topic II will be devoted to considering various sources of potential inaccuracy in public opinion polling -- ranging from lying respondents to problematic questionnaire design and poll administration -- and identifying some steps pollsters can take to avoid or overcome these pitfalls.
Sunday, June 14, 2009
Sampling
Modern public opinion polling depends critically on sampling for its viability, since it rests on the assumption that we can learn about what the public as a whole thinks by asking just a fraction of its members. To get a better feel for how this works, consider these questions:
First, what is sampling? As the video clip above suggests, sampling refers simply to the selection of a subset of individuals (i.e. a "sample") from a larger population. ABC News's "Guide to Polls and Public Opinion" points out that sampling isn't unique to public opinion polling; instead, it's analogous to a blood test, in which only a little bit of blood is studied in order to draw conclusions about the health of an entire organism. Another good analogy is to a chef who tastes just a spoonful of sauce to determine whether it's ready to serve. In both examples, it would be ludicrous to suggest that the entire bloodstream or pot of sauce would need to be tested; so too in public opinion polling, only a small sample of the population is needed to make determinations about the views and attitudes of the whole.
Second, what are the characteristics of a good sample? In a nutshell, good sampling accounts for two factors: size (how big the sample is) and representativeness (the extent to which the sample faithfully reflects key characteristics of the whole population). Of the two, representativeness matters more than size. Consider the contest between the Literary Digest and Gallup polls to predict the 1936 election (see here to refresh your memory). The Literary Digest poll had a significantly larger sample of millions of respondents compared to Gallup's much smaller sample. However, Gallup's quota sampling technique provided a sample that was more representative of the public as a whole than the Literary Digest's reliance on telephone and automobile registry listings, which tended to oversample voters from the wealthier end of the socioeconomic spectrum. That said, size does matter, especially when it comes to determining a poll's margin of error, as suggested by another one of our required texts for this unit: Public Agenda's "Best Estimates: A Guide to Sample Size and Margin of Error."
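The size-versus-representativeness tradeoff can be made concrete with the standard margin-of-error formula for a simple random sample. This is a minimal sketch (the sample sizes are illustrative, and the formula assumes probability sampling -- which the Literary Digest poll did not use, so its enormous sample bought it no real precision):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of
    size n, using the worst case p = 0.5 unless otherwise specified."""
    return z * math.sqrt(p * (1 - p) / n)

# Diminishing returns: growing the sample from 1,000 to millions only
# shrinks the margin of error from about +/-3 points to a fraction of
# a point -- and no sample size corrects an unrepresentative sample.
moe_small = margin_of_error(1000)       # about 0.031 (+/- 3.1 points)
moe_huge = margin_of_error(2_400_000)   # under a tenth of a point
```

Note how the error shrinks with the square root of n: quadrupling the sample only halves the margin of error, which is why well-drawn samples of a thousand or so respondents are usually considered good enough.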
Finally, how should samples be selected? The final required text for this unit, "Types of Sampling" from the Columbia Center for New Media Teaching and Learning, lays out several types of sampling strategies that are commonly used in public opinion polling and the circumstances that are best suited to the different techniques.
Measuring Public Opinion
To get around this problem, the predominant means for measuring public opinion is the scientific public opinion poll or survey, terms we'll use interchangeably to refer to questionnaires that are administered to a small but representative sample of the population in order to get a sense of what the population as a whole thinks. The first required text for this unit, "Guide to Polls & Public Opinion," by Gary Langer, the director of polling for ABC News, provides a pretty clear overview of what public opinion polls are, how they work, and the extent of their usefulness. Later on in this unit, we'll get more into the nitty-gritty of "how they work" (i.e. the logic and best practices of sampling); then, in Topic II, we'll turn to "the extent of their usefulness" and steps pollsters can take to improve the usefulness of their poll and survey results.
While the belief that public opinion should have a hand in determining political and policy outcomes dates back to the founding of the United States (see my last post), the emergence of scientific public opinion polling and survey research is a relatively recent development. A few of this unit's required texts drive this home:
- The segment on "George Gallup and the Scientific Public Opinion Poll" from a PBS documentary on the twentieth century traces scientific polling back to the 1930s, when a few enterprising marketing researchers, including George Gallup, were developing new survey research methods. Prior to that time, public opinion was measured primarily through "straw polls," unscientific polls that were used to estimate likely election outcomes on a local basis (e.g. in a single town) as early as the 1820s.
- "The Black & White Beans" is a Time magazine article that was written in order to explain the logic of public opinion polling to the American public at a time when it was all still fairly new -- just a little over a decade after Gallup bested the Literary Digest poll with his prediction of the 1936 election. It's probably worth noting that this article was published in May 1948 -- several months before all the major pollsters were fundamentally off base in their forecasts of the 1948 presidential election.
- On that note, in "US Election 1948: The First Great Controversy about Polls, Media, and Social Science," a Swedish social scientist named Hans Zetterberg provides an account of the pollsters' 1948 mishap and changes in polling practices that have been made since then to prevent a similar situation in the future.
Friday, June 12, 2009
Public Opinion and Its Attributes
There are both idealistic and practical reasons to try to discern and pay attention to public opinion. For example, the fact that democracy attributes ultimate sovereignty to "the people" implies that the people's views and attitudes about how things are and how they should be need to be known and translated into policy outcomes. Along these lines, the founding fathers of the United States -- including George Washington, James Madison, and Thomas Jefferson -- often mentioned public opinion in their discussions of how to design the American political system. In practice, public officials are well-served to track public opinion, not only because the will of the people should determine which policies are enacted, but also to make sure they're appealing to their constituents and to figure out how they might package themselves and their policy initiatives to maximize popular support. As this site describes, Abraham Lincoln especially appreciated this, even going so far as to say that "public sentiment is everything" in American politics and receiving inordinate numbers of visitors at the White House to provide him with what he called "public opinion baths," or opportunities to get a feel for what Civil War era Americans thought by interacting personally with members of the mass public.
4 Attributes of Public Opinion
The required text suggests that, when it comes to any given political issue, personality or event, there are various aspects (or "attributes") of public opinion that might be of interest. Sometimes it's enough just to know roughly how many people feel one way or another (i.e. the "content" of public opinion); other times, though, we want to know how strongly they hold their opinions ("intensity"), whether public opinion is relatively unchanging or constantly in flux ("stability"), and/or whether there seems to be an upward or downward trend in the making ("direction").
Content
The content of public opinion -- that is, the simple distribution of people who feel one way or another about a given personality, event, or issue -- is its most basic attribute.
For example, this graph from a recent Gallup poll tells us that a little more than half (55%) of Americans disapprove of majority government ownership of General Motors while a little less than half (41%) approve of it and only a small handful (5%) have no opinion on the matter:
It doesn't give us any more information than that: We don't know how strongly Americans approve or disapprove of majority government ownership of GM ("intensity"), whether they felt the same way a month ago ("stability"), or whether the size of the majority that disapproves is likely to grow or shrink in the coming weeks ("direction").
While the vast majority of public opinion polls measure content, not all of them do. For example, this graph from the Pew Research Center for the People and the Press's News Interest Index tells us which news stories people followed very closely and most closely during the first week of June, but not the content of their opinions on those stories:
In other words, we know from the graph that more than 25% of the public thought that Obama's Egypt speech was important enough to follow media coverage of it very closely, but not whether they approve or disapprove of what he said.
Intensity
Sometimes knowing just how many people feel positively or negatively about something isn't enough to inform political decision making; instead, it becomes important to know how strongly or intensely those positive and negative feelings are held. For example, consider this graph, which reported on American public opinion regarding same-sex marriage a few years ago:
From a purely content standpoint, it's not especially telling. About 55% of the population opposes same-sex marriage, about 35% favors it, and the remaining 10% presumably has no opinion on the matter. What is interesting, though, is the intensity (or strength) of opinion held by those who oppose same-sex marriage as compared to those who favor it. While clearly more than half of those who oppose same-sex marriage do so "strongly," less than half of those who favor it do so "strongly." A politician (or his strategists) looking at this graph would likely infer that a misstep on the same-sex marriage issue is more likely to cost him support among those who oppose it, since their opposition is comparatively intense, and would err on the side of opposition in his policy votes and campaign messages.
Stability
The attribute of stability refers to whether and how public opinion changes over time. A good example comes from another recent Gallup poll that caused a bit of a stir by its finding that, for the first time ever since Gallup pollsters have been asking about it, more Americans consider themselves to be "pro-life" than "pro-choice" in their opinion on abortion:
As you can see, public opinion on this issue has changed considerably -- by nearly 20 percentage points -- since the mid-1990s, when only 33% of Americans considered themselves pro-life. If you cover up the first few years of the graph, though, public opinion on abortion seems to have been relatively stable: During the decade spanning from 1998 until 2008, the proportion of Americans self-identifying as pro-life fluctuated only mildly around the 45% mark.
Direction
Welcome to POL 312 Online!
For the first time this summer, POL 312 is being offered as a fully online course. This blog will serve in lieu of class lectures. Throughout the four-week session, I'll be using this space to contextualize and expound upon the required texts, explain key terms and concepts, point out "breaking news" developments that relate to our subject matter, and raise questions for you to think about. You should feel free to use the "comments" feature to ask questions, request clarifications, take a stab at responding to discussion questions, and/or respond to your classmates' comments.
The course syllabus is available here. You can contact me via email at sgelbman@ilstu.edu.