Media Briefings

Covering opinion polls and surveys



The lead-up to this fall’s mid-term elections is already generating numerous polls and surveys, challenging local and regional reporters to interpret results for their audiences in ways that accurately convey civic leanings and enrich the democratic process. This briefing covered current trends in opinion polling, the strengths and weaknesses of common types of polls and surveys, and how to interpret and report on these tools skillfully while avoiding pitfalls. Two experts provided examples and advice and responded to journalist questions.


Introduction

[0:00:25]

RICK WEISS: Hello everyone. Welcome to SciLine’s media briefing on covering opinion polls and surveys. I’m SciLine’s director, Rick Weiss. And for those not familiar with us, SciLine is a philanthropically funded, editorially independent, free service for journalists and scientists based at the nonprofit American Association for the Advancement of Science. Our mission is pretty simple—it’s just to make it easier for reporters like you to get more scientifically validated evidence into your stories. That means not just stories about science, but any story that can be strengthened with some science, which, really, in our view, is about any story you can think of. Among other things, we offer a free matching service that helps connect you to scientists who are both deeply knowledgeable in their fields and are excellent communicators. We do that for you on deadline. Just go to sciline.org, click on I Need An Expert. And while you’re there, check out our other helpful reporting resources, including notification of our next media briefing after this one on the 18th which is going to be focusing on voter turnout.

Today’s briefing is a little bit different than normal. Think of it as part news briefing but also part professional training because increasingly, in the weeks ahead, polls and surveys are themselves going to be the news, and there are so many ways to get them wrong. So, today’s experts are going to show you what scientific principles really tell us about how to get these stories right and how to avoid the pitfalls that can so easily harm your reputation, the reputation of your news outlet and, maybe most importantly, mislead your readers and viewers.

A couple of quick logistical details before we start. We have three—sorry—two panelists today who are going to make short presentations of maybe six or seven minutes each before we start getting into the Q&A. To enter a question during or after their presentations, just hover over the bottom of your Zoom window, select Q&A, enter your name and news outlet and your question. And if you want to pose that question to one or the other of our panelists, just note that. The full video of this briefing should be available on our website by later today or tomorrow and a timestamped transcript within just a day or two after that. If you’d like a raw copy of the recording more immediately, just submit a request with your name and email in the Q&A box and we’ll send that to you or a link to that raw copy by the end of today. You can also use the Q&A box to alert us to any technical difficulties.

I’m not going to give full introductions to our speakers. Their bios are on the website. I’ll just say that we’ll hear first from Dr. Courtney Kennedy, who is vice president of methods and innovation at Pew Research Center. She leads the team there that’s responsible for the design of the center’s U.S. surveys and for maintaining the center’s ongoing American Trends Panel. And second, we’ll hear from Gary Langer, who is president and founder of Langer Research Associates, which produces the ongoing ABC News/Washington Post poll for ABC News, where Gary actually was previously the director of polling. I want to say also, he may be the first Emmy Award-winner we’ve ever had on a SciLine media briefing. I’m not sure about that, but that’s possibly true. OK. Over to you, Courtney, to get us started.

[0:03:48]

COURTNEY KENNEDY: All right. Thanks so much, Rick. Bear with me quickly as I get my slides up. OK, Rick, does that look all right?

[0:03:57]

RICK WEISS: Looks good.

What to know when covering pre-election polls

[0:03:58]

COURTNEY KENNEDY: OK. Thank you. Well, it’s great to be with all of you today. I’m really just going to jump right in. In preparing for this event, I thought it’d be useful to gather some new polling data to illustrate some key points. And so, what you see on the screen here is a polling average out of the Arizona Senate race. And I think a lot of people would look at this and see the Democratic candidate, Kelly, you know, at 49.0%—looks to have a healthy lead on the Republican, Masters, at 44.9%. But really, if you look at this closely, I think that this illustrates a number of ways in which the interpretation of polls and reporting of polls can go sideways.

So, one thing is that this graph is zoomed in really closely on the data, I think to a problematic extent, because what that does is it makes very small changes of one or two percentage points look meaningful here. In reality, they’re probably not, right? Polls have noise. We have sampling error. And so, we never want to overinterpret very small changes and assign them meaning that they don’t have. So, that’s an issue.

Another one—you can see here that whoever created this chart reported a decimal place. And as—you know, I devoted my career to polling. I love polls. But I’ll be the first to tell you that a public opinion poll cannot get anything right to the decimal place. That is just a level of precision that public opinion polling cannot deliver. We can’t tell you if opinion is 44.7 versus 44.8, right? And so, that’s a problem because it does suggest—those decimal points suggest to the reader that we have that level of precision when we really don’t. So, I’d really strongly discourage ever reporting decimal points in polling data—even for a polling average, I wouldn’t do it.

But probably the most fundamental point here is that I think a lot of people would look at this sort of data and conclude that, hey, the Democrat’s got a really commanding lead in this race. This maybe almost looks uncompetitive. That actually, I think, goes way beyond what these polling data support, though, because one thing a lot of people wouldn’t necessarily know is that in 2016 and in 2020, if you look at the average amount that state polls were off, they were off, on average, by five percentage points. And here, we’re looking at a race where it suggests maybe there’s a difference of about four points. So, that’s within the amount that polls are off on average.

And so, I took that same data and reported it in, I think, a much more straightforward, maybe less interesting but more accurate way. What I think these data support is a conclusion that we have a competitive race, right? That’s about it. Perhaps the Democrat has a slight advantage, but really, I don’t think one can confidently say that based on state poll data, given what we know about how they’ve performed. So, we really want to keep that in mind.
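To make that concrete, here is a minimal Python sketch of the sampling noise Kennedy describes, using hypothetical numbers (a genuinely tied race, polls of 600 respondents): it counts how often chance alone produces a lead as large as the roughly four-point gap in the Arizona average, before even adding the five-point average error of recent state polls.

    import random

    # Hypothetical illustration: a race that is actually tied (50/50), polled
    # repeatedly with a typical state-poll sample size.
    TRUE_SUPPORT = 0.50   # assumed true share for candidate A
    SAMPLE_SIZE = 600     # assumed number of respondents per poll
    POLLS = 10_000        # number of simulated polls

    big_leads = 0
    for _ in range(POLLS):
        votes_a = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
        share_a = votes_a / SAMPLE_SIZE
        lead = share_a - (1 - share_a)        # candidate A's share minus candidate B's
        if abs(lead) >= 0.04:                 # a 4-point "lead" in either direction
            big_leads += 1

    print(f"Simulated polls showing a 4+ point lead in a tied race: {big_leads / POLLS:.0%}")
    # Roughly a third of polls of a genuinely tied race show a 4-point lead from
    # sampling error alone -- before adding the ~5-point average error of state polls.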

But I also don’t want to sort of over-teach the lesson here. I don’t want to suggest that polls are useless—far from it. I do think that if you’re asking me, can I use a poll to predict a very close election decided by, like, one or two percentage points? In my opinion, the answer is no. I think that’s beyond what polling can be expected to do well. But if the question is, can polls give us a useful high-level read of where the public is at on major issues? Yeah, absolutely. So, a good example is just polling data like this, tracking how Americans have reacted to the Biden administration—right?—the trends in Biden approval over the course of his administration. Great data—tells a very rich story and a meaningful story.

I’ll give you one more example. There’s been a lot of polling, of course, after the Supreme Court Dobbs decision, which, you know, as you know, overturned Roe v. Wade as the law of the land in terms of abortion. And so, what we know from polling—you could look at Fox, you could look at CNN, New York Times, doesn’t matter. All, like, high-quality national polls tell us about 60% of the public disapproves of the Dobbs decision. You know, again, is polling going to tell you whether it’s 61% versus 63? No. But we absolutely know, with very high confidence, that it’s roughly 60%. And so, we have good data to know that overall image of public sentiment.

OK, before I turn it over to my colleague, Gary, I’m just going to give you a few colorful examples of things to watch out for when you see a preelection poll. So, one example comes from Lindsey Graham’s most recent reelection bid. So, you’ve got Lindsey Graham—you know, well-known, running for Senate as a Republican in the state of South Carolina, very Republican state. That’s the scenario. And then, surprise, one day we see a poll reported that shows the Democratic candidate, Jaime Harrison, up by two percentage points. The problem is that what was not clear in the reporting is that this poll was selectively released by the Democratic candidate’s own campaign. So, you have a clear conflict of interest, especially in midterm polls—or in midterm elections, where there’s a real focus on state and congressional district races. You can see a lot of these campaign polls get leaked. And, you know, if your goal is really painting an accurate portrait of what’s going on in these races, this can be problematic—right?—because in reality, Lindsey Graham won that race pretty comfortably, by about 10 points.

One more example comes out of Michigan. There was a poll suggesting that Kid Rock, the musician, was leading Debbie Stabenow for the Michigan Senate race. This was a very suspicious, very shady poll, though. It’s not clear at all that this was actually a real pollster, or that this poll was even conducted. There were just a number of red flags about this. To their credit, a lot of reporters saw this and threw that fish back in the water. Unfortunately, some reporters did report it. And it got way more play in the news media than it really warranted because it doesn’t even appear to have been a real poll. And again, back in reality, Stabenow beat the Republican challenger—not Kid Rock—by six points.

So, that leads me to my last point here, which is that we’re in a moment in the polling industry where a lot of our work has gone online. And there are frankly no barriers to entry in the industry. And so, we’ve got a field with a huge mix, right? We’ve got well-experienced, well-resourced practitioners using best practices—not infallible, but best practices nonetheless. We’ve got newcomers to the field using more experimental methods. And unfortunately, we do have the potential for spam and outright fraud. And so, we’ve got to be really careful vetting polls. I think Gary’s going to speak to that, and we can talk about that more in depth. So, I’ll stop sharing and pass it over.

[0:11:47]

RICK WEISS: Thanks, Courtney. Great introduction to the landscape there. And over to you, Gary.

A polling primer in 5 points

[0:11:54]

GARY LANGER: Thanks, Courtney. Thanks to SciLine for setting this up, and thanks for everybody who’s joined. I got a bit of a bug so you might hear me—forgive me being a little gravelly on this call, but I’ll do my best. Excuse me. I have a bit of an advantage—or a little different background, at least—in that I’m a recovering journalist. I spent 10 years at the Associated Press—train wrecks, plane crashes and three-alarm fires—before I went to ABC News and joined their polling unit. People ask me why I changed profession, and my reply is that I didn’t. Now, as then, I go to my best sources, ask my best questions, and report what they told me. The only thing different there in polling—or the main difference—is that I don’t get to arbitrarily choose my sources. I have to go to a random sample of them, which is something we can talk about a little more. But I do bring, I think—I’d like to think—a journalist perspective for what we’ll be discussing, a polling primer in five points.

The first point is that forecasting is not easy. A story in The Washington Post just a couple of days ago—48 hours before Hurricane Ian made landfall in Fort Myers on September 28, the American model projected that it would make landfall in Tampa on September 29. I don’t raise this to throw shade at NOAA—they do brilliant work—but to point out that there are so many variables involved that predicting the future is really easy except when you try to do it on a reliable basis. It’s just really hard. There’s lots of variables involved. We do our best estimates. But what I’d like to say about polling—and I think it may apply to NOAA forecasting as well—is this is not laser surgery on your eyeball. It’s an estimate. Keep it in mind.

Point two—in preelection polls, moreover, forecasting, I’d suggest, isn’t the point. What do we find out from good-quality preelection polls? Which issues motivate likely voters? Which don’t? Which policy preferences do they hold? Which candidate attributes matter to them? How do campaign controversies influence the contest, if at all? How and why are likely voters coming to their choices—both whether to participate and whom to support? In short, what does this election mean? There’s one thing we will always know about an election in the fullness of time, which is, who won? The things, though, that I’ve listed here are things we would not know about any election in the absence of good polling, and they are essential.

Remember the cone of uncertainty. Other polls—non-election polls—sample a known population. Preelection polls, however, have to estimate and sample an unknown population. That is, who’s going to vote? When we do these polls, we don’t know who’s going to turn out. There’s no fixed population of voters. So, this need for estimation produces additional uncertainty. This additional uncertainty is exacerbated by other externalities—just for example, holding elections in a pandemic, holding elections in the midst of a fundamental change in how people vote—for example, very germane right now, the rise of early and absentee voting and holding an election at a time of extraordinary political emotion, which can influence how people express their intentions and how they act on them.

A key point—number four—not all polls are created equal. A fundamental issue is probability or random sample surveys versus convenience sampling or non-probability, non-random surveys. A boatload of independent literature tells us that convenience sampling—that’s sampling that’s not based on a random sample—has non-ignorable inconsistencies across panels of participants within time and within a single panel across time. These surveys—convenience sample surveys, opt-in online surveys—operate outside the realm of inferential statistics, which is the fundamental basis of how polling works. Inferential statistics tells us that we can make inferences about a full set by examining a randomly selected subset. Random selection is essential. There are good versus poor practices in sampling, in questionnaire design—the forgotten stepchild of survey research—in weighting, in analysis.
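As a small illustration of the inferential point Langer is making, the following Python sketch, with entirely made-up numbers, compares a random sample to an opt-in convenience sample in which one kind of respondent is more likely to volunteer.

    import random

    random.seed(1)

    # Hypothetical population of 1,000,000 people; 48% hold opinion "yes" (coded 1).
    population = [1] * 480_000 + [0] * 520_000

    # Probability sample: every member has an equal chance of selection.
    random_sample = random.sample(population, 1000)
    print("Random sample estimate:     ", sum(random_sample) / 1000)

    # Convenience sample: suppose "yes" holders are twice as likely to opt in.
    convenience_sample = []
    while len(convenience_sample) < 1000:
        person = random.choice(population)
        opt_in_rate = 0.10 if person == 1 else 0.05   # assumed self-selection rates
        if random.random() < opt_in_rate:
            convenience_sample.append(person)
    print("Convenience sample estimate:", sum(convenience_sample) / 1000)

    # The random sample lands near the true 48%; the opt-in sample runs well above
    # it, and no margin-of-error formula corrects for that self-selection bias.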

How do we know how to pick this apart? Well, the first thing we have to insist on is transparency. When a poll is produced, we need to see a detailed description of the methodology. We need to see every question that was asked in the order in which it was asked, and we need to see the overall or topline results for each question. With these elements, we can see where they either stayed on the straight and narrow or went off the rails—poor sampling, leading biased questionnaires, cherry-picked analysis. A key point about this point, as you can hear, is that reporting polls requires reporting. I’m a recovering journalist, as noted. It’s so easy to grab a convenient data point that seems to fit your premise, slap it in your story and move on without doing due diligence. Any other data that comes in, any other story that comes in over the transom in a newsroom, we check it out before we report it because that is our responsibility, our fundamental pact with our readers and viewers. Surveys should not be any different. The challenge is that reporters, very frankly, for far too long indulged themselves in the lazy luxury of being both data-hungry and math-phobic. We’re all English majors. We got to get over that and report this stuff.

A next guide, a next essential point is that your organization needs standards. What sorts of polls will we report? What sorts of polls will we not report? Far too few organizations have enunciated standards for their poll reporting. I strongly suggest you go to your news editor or your news directors and try to get that going. Establish what counts as a poll because when you see a number and a percentage sign, it seems to speak with authority. We want to make sure that’s justified. Now, I don’t want to say that preelection polls are a lost cause. These are a lot of numbers here. I apologize. These are final preelection polls in presidential elections and in midterm elections by ABC News or ABC News and The Washington Post. And the bottom line is they are historically quite accurate.

Now, we didn’t do any in 2020—any national surveys. We did some state surveys instead. Some of them were good, and some of them were less good, to be honest with you. But there are three fundamental elements of good estimates in preelection polls. One is a very rigorous methodology. One is a large sample size. And the third is that it has to be conducted in very close proximity to Election Day. If you care about the prediction piece, these are essential elements. And you can get, over time, quite consistently good estimates. Again, though, in my view, it’s not the core purpose.

For all of its challenges—my last point—preelection polling, like forecasting hurricanes, provides essential, if sometimes imprecise, information. Campaigns and interest groups conduct polls to try to manipulate public attitudes and behavior and to manipulate media coverage of issues and candidates in order to achieve their goals. The absence of quality, independent public interest polling would leave us defenseless to this manipulation, and I think we need independent polling and good quality reporting to protect us against it. Thanks a lot.

Q&A


What is being done well in press coverage of surveys and polls, and where is there room for improvement?


[0:19:14]

RICK WEISS: Fantastic. Thank you, Gary, thank you, Courtney, for a really good lay of the land there. Lots of things to talk about. I’ll remind reporters, if you have questions, to use the Q&A box to let us know what your question is and who you’d like to address it to, if there’s someone you’d like to. But for starters, we usually take advantage of our moderator’s privilege here to ask the first question in a briefing. And I want to do that of both Courtney and Gary: from your experience as professionals in this business and watching the news media, year after year, as they do their best, sometimes succeeding and sometimes failing, is there something you can each point to—either about the way that you think the media is doing well in this area, especially lately, maybe looking at this year, or is failing at and where there’s still room for improvement? And, Courtney, I’ll start with you.

[0:20:14]

COURTNEY KENNEDY: Sure. One thing I think the media is very good at is understanding one of the core values of polling, which is that, as Gary said, it’s independent information that can be used to check and challenge people in power making sort of lazy or convenient assertions about things without any evidence. And two examples come to mind. One of them is fairly old, but back in the Clinton administration, when the Lewinsky scandal happened, there were a lot of people in Washington just convinced that that was going to sink the Clinton presidency, that Americans were going to be overwhelmed and enraged and that was the end of him. But polling, during that time, consistently showed that, you know, Americans didn’t love the scandal, sure, but did it really taint or influence their overall opinion of Clinton and the administration? Not really. Like, they were still rather fundamentally happy with what the administration was doing. So, polling was a really useful ground check on what was happening during that era. A much more recent example: a lot of discussion about defunding the police. And we’ve done polling on this. Others have. If you ask Americans, do you want to reduce spending on police enforcement in your area, it is a wildly unpopular sentiment. Now, sure, some people might want more accountability, you know, things along those lines, but literally spending less on policing is not a popular sentiment. And one of the advantages of polls is that we can pick up on that right away. And I think journalists are very well-attuned to that.

[0:21:58]

RICK WEISS: Interesting. Great. Gary.

[0:22:01]

GARY LANGER: Yeah. I’d say good and the bad. The good thing is that reporters, I think, recognize the value of trying to understand public attitudes in their coverage. That—I don’t think anyone would suggest that public attitudes should dictate public policy or should rule the day or should dominate our coverage. But they are an essential part of it, and they are a reality check to what we hear from politicians and pundits and spinmeisters and all the rest. So, reaching for good measurement of public attitudes is essential. And I think it’s a good thing that reporters do try to do this and add that as what I would call a reality check to our reporting.

The flip side, the troubling side, is that we do it with far too few standards, with far too little evaluation of what does and does not make for valid and reliable survey research. I see reporting in some of the most prominent news organizations in the country that is pretty horrifying in terms of their acceptance of convenience and non-probability samples as if they were reliably representative when they’re not, I’m sorry. Reporting a margin of sampling error with a convenience sample—there’s a word for that. It’s called fiction, which I don’t think is our business. And failing to differentiate between balanced, neutral, well-worded questions and those that are suboptimal and maybe even intentionally biasing. There’s a world of manipulators out there who are trying to influence our coverage. And I think with polling, as with all else, it’s essential for reporters to be alert to it.


Are there organizations that independently rate the quality of polling or survey groups?


[0:23:35]

RICK WEISS: Great. Lots of—lots to chew on there. We have some questions coming in that are actually relevant to some of these points you’ve each just made. And here’s one I think relevant to your last point, Gary. Is there any organization that independently rates the quality of various polling or survey groups or gives a good housekeeping seal of sorts, which would help reporters maybe cut through this question of quality?

[0:24:02]

GARY LANGER: The answer is yes, but not publicly, sorry to say. Just for example—so many years ago, I came to be horrified by the magnitude of junk polling that was making air at ABC News—where I worked as director of polling—and from other sources. And I set up, with the support of management—which I’ll always appreciate—a standards and vetting operation, where we enunciated standards—as I said earlier, everyone should—for what kind of polling we would or would not report, and then applied them to the surveys we saw coming in and vetted each one before it made air to see if it met our standards. We set up a database where reporters and producers across the organization can go online and pull up any polling organization and see how it’s rated. Now, that’s an internal application at ABC. It still exists. It’s got hundreds of polling providers on it, but it’s not public, nor should it be, I would think, because I don’t presume to set standards for others. I merely suggest that you need some.

And I think it’s incumbent on individual news organizations to learn enough about polling—we learn about all sorts of things in the field—to learn enough about polling, to understand how it’s done. The Pew Research Center, ABC News and The Washington Post go to extraordinary effort and expense to do probability-based polling, and to give this kind of serious work the same play as a quickie opt-in online poll is really problematic. And don’t take my word for it. Check it out. Do the research. You need to come to judgments and establish your standards. That’s what you need to do.

[0:25:44]

RICK WEISS: Courtney, does the—does Pew Research Center reveal anything about how you go about judging standards of other people’s polls, or do you only just deal with your own polls anyway?

[0:25:56]

COURTNEY KENNEDY: It’s pretty much the latter, yeah. But I would second Gary’s point. I mean, the other thing I would mention, just because it’s somewhat well-known, is AAPOR—that’s the American Association for Public Opinion Research, the major sort of trade association of pollsters. AAPOR does have something called the transparency initiative, which has been up and running for—I don’t know—probably 10 years or so now. And that is sort of part of this discussion. But it really only focuses on the first question of, does the pollster at least tell you what they’re doing—the transparency part? And so, there are polling organizations that abide by that and are transparent. And there are a whole lot—a lot of the more state and local ones—that tend not to participate, for whatever reason. So, there’s that transparency piece. But it’s very imperfect. I think folks like Gary and I are far from wholly satisfied because there’s not a quality judgment component to that initiative. And so, therefore, it’s quite lacking.


Should news organizations avoid covering polls that do not pass a basic transparency test?


[0:26:58]

RICK WEISS: And just a question from me on that—it seems like some news organizations could make a decision simply not to cover polls that do not pass a basic transparency test, just for starters. But that would be, perhaps, shooting themselves in the foot. Would that eliminate just too many—especially in the midterm season—too many of the polls that are out there? Or is that a practical standard?

[0:27:18]

GARY LANGER: It shouldn’t, Rick. In fact, it’s essential in our view. As I said, for years—many years at ABC, a practice that continues now with others there—we check out any poll before we report it to see how it was conducted and to see if it meets our standards for air. If we don’t get the disclosure we need to check it out, that makes the job very simple. We take it out back and shoot it. It’s not reportable if you don’t have disclosure. Now, this is challenging. It’s problematic because while we put ourselves, I would say, at an integrity advantage in our reporting of polls, we put ourselves, at the same time, potentially at a competitive disadvantage. And nobody really wants to do that. But I would argue that the only thing worse for a reporter than being second is being wrong. And if you report polls you haven’t checked out, for which you haven’t gotten disclosure and held to standards, you have a high probability of being just that—wrong.


Does exposure to stories about polling results drive people’s voting decisions?


[0:28:18]

RICK WEISS: Good point. OK. Is there good evidence, this question asks, that people change their minds about how to vote based on what they see in stories about poll results, maybe because they want to vote for the likely winner? You’ve talked about polls as a way of knowing what public sentiment is, but does it drive public sentiment?

[0:28:41]

GARY LANGER: That concept is called the bandwagon effect. Courtney can talk about it, too, I’m sure. But there’s really no good evidence that it exists. If it did, leads in polls would never change hands—right?—and campaigns would not matter. You’d have somebody who starts ahead and who would therefore be ahead and stay ahead the whole time. Ask, you know, President Giuliani how that worked out. So, there are many examples, consistently, constantly, in which leads change hands in campaigns. And that’s because voters come to their judgment. By the way, we act as if voters are just coming to a judgment on which candidate to support. That’s not at all the case. At least as important is they’re coming to a judgment on whether to vote in the first place. A lot of campaigning is not about changing minds but about either motivating or demotivating—motivating your supporters to vote, demotivating the other candidate’s supporters from voting.

[0:29:35]

COURTNEY KENNEDY: I probably agree with Gary on this. There’s thin to no evidence that the polls really change people’s votes. But I would say that is a tough thing to establish scientifically. That is a very hard causal arrow to really pin down. And my feeling is I actually think it’s understudied as a topic. And frankly, before 2016, I’d get this question a lot, and I used to brush it off. After 2016, I no longer brush off this question—not because the polls change people’s minds per se, but I think it’s legitimate to look at that election and wonder if polling, but even more so the predictions, led some people to stay home because they were told over and over again, hey, it’s not a question of whether Hillary wins. It’s a question of how much she wins by. How big is that blue wave going to be, right? Ninety-nine percent likely to win, 98%—I mean, all that stuff for weeks, if not months. I think it’s fair to wonder if that had an effect on some people and whether they stayed home. But you have to weigh, you know, the pros and the cons. There are a lot of good sides to polling. And I think pollsters, by and large, are reasonably responsible. I’m more concerned with these predictions and the models that make more definitive statements about who’s going to win.


What is the value of doing polls to predict election results?


[0:31:09]

RICK WEISS: So, that raises another question for me—I’ll just take advantage of my position and ask it. That starts to raise the whole question of, what is the value of polls about who’s going to win? You know, it’s one thing to ask people what their sentiment is, you know, what their mood is and what they favor or don’t favor generally. But what is the point, in supporting democracy, of doing polls that predict who’s going to win?

[0:31:35]

GARY LANGER: Well, again, one key point, Rick, is that the purpose of preelection polls is—the fundamental purpose, as I said in my presentation, is not to predict who’s going to win, but to predict how and why—what issues people care about. At the same time, preference polling—president—vote preference polling in the course of a campaign is kind of like the score of a game. So, imagine you’re watching a basketball game, and you hear the color commentary and you see the glorious movement of players up and down the court, but you have no idea what the score is. It’s really hard to put it in context and to really understand what’s going on. Now imagine you’re in that position and all the players and all the coaches know the score and the refs and everybody else. The only person who doesn’t know the score is you. Now you’re at a real disadvantage in trying to understand what’s going on. And I can tell you that the campaigns are going to do their own polling. And in the absence of independent polling, they’re going to leak it, they’re going to manipulate it and they’re going to use it to try to drive preferences their way in a way that we’d be really vulnerable to if we weren’t out there doing it ourselves.


What’s the minimum sample size for a reliable poll?


[0:32:49]

RICK WEISS: All right. I can buy that. I won’t try to drive you guys out of business. Question here—can you speak about sample size? What’s the minimum sample size for a reliable poll, and how does that differ on a national-level question like effects of Supreme Court decisions versus locally relevant questions such as opinions on local legislators running for office? Courtney?

[0:33:17]

COURTNEY KENNEDY: That’s a good question. I’ve got to check my own bias because I work at a place, Pew Research Center, where we have more resources than other pollsters. So, it’s easier for me to say, oh, have at least a thousand interviews, always. But easy for me to say, right? So, I want to acknowledge that. But if I’m in the position of vetting a poll—right?—that I just got, if it’s a state poll of less than 500 interviews, I would dismiss it out of hand. If you’re polling an election, I’d really want to see 800 interviews before I’d start to get comfortable. I might not throw out something at 500, but you get quite large sampling errors under that. Nationally—I mean, it really doesn’t necessarily change too much whether you’re polling at the national level or the state level, so I don’t think that’s too much of a discriminator. You know, a thousand interviews, you know, in a well-done poll is fine.

[0:34:24]

GARY LANGER: Yeah, I would add that it depends largely on how granular you want to get in your analysis. Don’t forget that a poll of a thousand members of the general public does not have a thousand likely voters in it because not everybody is going to vote. That may be well down into 60% or 50 or 40 or 30% of the sample, and the sample size therefore erodes quite rapidly. Polls typically are driven by the size of the smallest subgroup you want to reliably analyze. And in a good random sample, with an admittedly large margin of sampling error, about a hundred or so cases are generally regarded as analyzable. In fact, you hear about a thousand interviews in a national poll—it’s like the norm. Is it? Why? Is it because three zeroes look great? No, it’s because if we do a survey of a thousand adults nationally, we get about a hundred Black adults and we get a little over 100, 115, 120 Hispanic adults. And we know we want to analyze those populations, so to get adequate samples of them—if not robust, at least decent—we have to do a thousand nationally. At the state level as well—you know, a sample at the state level and a sample at the national level work the same way. It’s still a ladleful of soup out of the big pot to see what’s in there. You need a decent-sized ladle. An eyedropper won’t do it. A teaspoon won’t do it. But you don’t need to go nuts.
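As a rough illustration of the arithmetic behind these rules of thumb, the Python sketch below applies the standard 95% margin-of-sampling-error formula for a simple random sample (ignoring design effects) to a few illustrative group sizes drawn from the figures mentioned above.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of sampling error for a simple random sample of size n,
        at the most conservative proportion p = 0.5; ignores design effects."""
        return z * math.sqrt(p * (1 - p) / n)

    # Illustrative group sizes echoing the figures discussed above.
    for label, n in [("full national sample", 1000),
                     ("likely voters (~60% of the sample)", 600),
                     ("state poll minimum Kennedy mentions", 500),
                     ("a subgroup of about 100 cases", 100)]:
        print(f"{label:36s} n={n:5d}  +/- {margin_of_error(n) * 100:.1f} points")

    # The full sample carries roughly a 3-point margin; a 100-case subgroup
    # carries roughly a 10-point margin -- analyzable, but only loosely.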


Don’t organizations like FiveThirtyEight rate pollsters?


[0:35:39]

RICK WEISS: OK. Question here from Borys Krawczeniuk from the Scranton Times Tribune in Pennsylvania. Doesn’t an organization like FiveThirtyEight rate pollsters? Also, a lot of campaigns issue polling memos. Reporters should avoid reporting these without caveats, right?

[0:36:00]

COURTNEY KENNEDY: One thing I’d say on FiveThirtyEight—there’s a lot that they do that I think is wonderful. Their pollster ratings, I’m not a huge fan of because one thing that might not be clear is that they rate organizations that, frankly, aren’t even playing the game that they’re grading. So—I mean, and mine’s one of them. Gallup’s another. We, six years ago, got out of the business of doing—putting out estimates that we described as really trying to characterize the outcome of a race. So, we’ve been out of that business a while, but we still have a grade. So, we’re not playing this game that FiveThirtyEight says we’re playing. And so instead, we get judged on polls that we conducted, like, three, four weeks out from Election Day, where our goal was not to predict the outcome. But they’re judging us as though it was. So, there’s a lot I don’t love about those rankings personally.


How should reporters cover campaign memos?


[0:37:01]

GARY LANGER: Yeah. I’ll leave that one there and go to the campaign memos. We got a nice score from FiveThirtyEight, so I can’t say anything bad about them. Sure, you should look skeptically at anything that comes from a campaign. Your inclination should be to disbelieve it. Let me—can I tell a really short story? This is true. A friend of mine was a fieldwork director for the most prominent, probably, or one of the most prominent political campaign consultants of the day some years ago. And they had a candidate running for the Senate and my guy came in with the data. And the campaign consultant sitting at his desk said, how are we doing? And my friend said, not good. We’re down by 13. And if you think about it, this campaign consultant has a variety of interests in getting polling data. He wants to inform his judgment as to the contours of the race. He wants to produce numbers that his candidate can use to show reporters to maintain credibility, can use to show fundraisers to get money and even numbers that will encourage a candidate to stay in the race and keep paying the consultant. So, my guy says, not good. We’re down by 13. Without looking up from his desk, this campaign consultant said—and I quote—”make it six.” That’s what you get from political campaigns. Look out.


How can news agencies conduct reliable polls?


[0:38:25]

RICK WEISS: Sounds familiar in terms of looking for votes in Georgia some time ago. But that’s a great reminder of the incentives that may be there. Question here from Scott Morgan, South Carolina Public Radio—is there a real way news agencies themselves can conduct a reliable poll without just getting opinions from their own listeners, readers and viewers only?

[0:38:53]

COURTNEY KENNEDY: Yeah, absolutely.

[0:38:54]

GARY LANGER: Go ahead, Courtney.

[0:38:55]

COURTNEY KENNEDY: I mean, it’s just—I think the real question is, do you have the resources? But there are plenty of polling organizations out there that do great work. Gary runs one of them. But there are quite a few more. So, there’s definitely a robust polling industry of organizations that can, you know, poll in a state like South Carolina and do a good job. But, you know, it’s going to cost, you know, $50,000 to $100,000 to do a good poll, in my estimation.

[0:39:26]

RICK WEISS: Yeah. And would you say this is…

[0:39:26]

GARY LANGER: And knowing what I know about newsroom budgets, I don’t think it’s realistic. But the thing to do, I would suggest—there are two ways forward. One is to get together with a consortium of other news organizations and sponsor some research—good-quality survey research—as a group. It is expensive. Another would be to look for a good-quality research university in your state and see if they have a survey research center and if they have in-house expertise on how to conduct surveys well. Check it out and see if you can team up with a good university. Look, polling producers, like universities, really like the coverage they get when they team up with a news organization. So, you do have something to offer, but you don’t want to give away your good name to crap research. So, you want to be real careful as you go forward.


Can the methodology of how a poll was conducted skew poll results?


[0:40:12]

RICK WEISS: Great reminder of all those incentives—perfect. Here’s a question from Margaret Barthel from WAMU in Washington. Are there issues we should be examining with respect to how people who participated in a poll were contacted that could skew the results in some way? We haven’t talked really much about that—different ways of finding people.

[0:40:39]

COURTNEY KENNEDY: So, yes. I mean, I think that question can go fairly deep into the methodological weeds. But at a high level, yes. I mean, my sort of broad view is that part of the problems that polling has had over the last X number of years is that we’ve gone more and more online. And guess what? The people who you can get online with convenience-type approaches skew young, progressive, urban, Democratic. And where those polls are really missing people is folks in more rural areas, more religiously conservative folks, things like that. And so, it’s not shocking to me that polls in the U.S. and around the world, especially those done online, are showing, you know, a fairly consistent Democratic bias. I think there’s a structural reason why that’s the case. And so, I think that really demonstrates the need for and the value of polls that don’t just grab people online but make the effort, make the investment, to recruit people offline as well. We do it through the mail. Others do it through the phone. There are other ways to try to counteract that bias.

[0:41:53]

GARY LANGER: Yeah. The fundamental point, which you raised earlier, is you need a random sample. You have to randomly select. You can’t let people self-select themselves to participate in your survey. Opt-in online and convenience sample surveys are conducted among people who sign themselves up to click through questionnaires on the internet in exchange for cash and gifts. And they do so often, multiple times, under multiple assumed personalities to increase their gains, their winnings, if you will. And they often pay very little attention to the content of the survey itself and just speed through it so they get paid for it. That’s a fundamentally different enterprise than a random sample survey. And as Courtney says, random samples can be achieved through address-based sampling, through the creation of a random or probability-based panel that can still be worked online, or it can be worked with an online and offline component. And good, old-fashioned telephone sampling still gives us good data as well. We run a lot of surveys internationally. Those, believe it or not, in developing countries, are still done face-to-face. So, yeah, random selection is really critical. And that’s the fundamental difference between, as I call it, probability or random sample survey research and this convenience sampling that is pretty problematic.


How does Bayesian polling and projection work?


[0:42:58]

RICK WEISS: Great. Another methodology question here—this from Nick Gerbis from KJZZ Public Radio in Phoenix. Can you explain in basic terms how Bayesian polling and projection works and how journalists should treat them? They seem to fill in missing data with probabilities based on previous trends, which seems iffy.

[0:43:23]

COURTNEY KENNEDY: So, the words Bayesian and basic or simple are just incompatible, in my view. Bayesian is a very technical part of statistics and part of applied survey work. I guess I’m not even—I know people in the field who describe what they do as Bayesian, but I’m not sure I’m even quite tracking what the question was getting at. I don’t know, Gary, if you did.

[0:43:51]

GARY LANGER: Well, if you’re using a stable trend data set, you’re going to get variability. You’re going to get noise in any sample. And you can use a Bayesian approach to pull down that noise if you feel you’re justified in doing so. And that is—let’s say I do 10 surveys over the course of a year and they’ve got very consistent numbers of Democrats, Republicans and independents—consistent within my surveys, consistent with other surveys. Now I do another survey, and I have a different makeup of Democrats, Republicans, conservatives, and I’m not so sure about that. Maybe it’s a support (ph) sample. Survey sampling—you know, the margin of sampling error works at the 95% probability level, meaning 5% of the time results are going to be outside the margin of error, or at least they’re not reliably within it. So, you can use a Bayesian adjustment if you feel you’re justified. The key point, though, is not to, you know, be the wizard behind the curtain playing around, you know, like in “The Wizard of Oz.” What you have to do is disclose your methods. You can’t, you know, bandy about fancy words. You need to say what you’re doing. You need to disclose it. You need to explain it. That’s from the pollster’s side. From the reporter’s side, you need to ask and get good answers.
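As a hedged sketch of what this kind of adjustment can look like—not any particular pollster’s actual method—the Python example below applies a beta-binomial shrinkage to a party-identification estimate, pulling a new, out-of-line survey reading toward a stable trend. Every number, including the prior strength, is hypothetical.

    # All numbers here are hypothetical.

    # Prior: ten earlier surveys of about 1,000 adults each averaged 31% Democrats.
    # Encode that trend as a Beta(alpha0, beta0) prior; its strength reflects how
    # much trust is placed in the trend relative to one new survey.
    prior_mean = 0.31
    prior_strength = 2000            # assumed "effective" prior sample size
    alpha0 = prior_mean * prior_strength
    beta0 = (1 - prior_mean) * prior_strength

    # A new survey of 1,000 adults comes back looking out of line: 37% Democrats.
    n_new, dems_new = 1000, 370

    # Beta-binomial update: the posterior mean blends the prior and the new data.
    alpha1 = alpha0 + dems_new
    beta1 = beta0 + (n_new - dems_new)
    posterior_mean = alpha1 / (alpha1 + beta1)

    print(f"New survey alone:   {dems_new / n_new:.1%} Democrats")
    print(f"Bayesian-adjusted:  {posterior_mean:.1%} Democrats")
    # The adjusted share (about 33%) is pulled back toward the established trend.
    # As Langer stresses, any adjustment like this has to be disclosed and explained.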


What are the most important things journalists should look for when evaluating a poll’s legitimacy?


[0:45:04]

RICK WEISS: Yeah. I think your earlier advice, Gary, to remind reporters that this is not just a cut-and-paste job—this is a reporting job like any other story—is a really interesting and great bottom line to keep in mind. Question here from Alexis Wnuk from brainfacts.org, and similar questions from some others. This is addressed to you, Gary, but I think both of you might want to weigh in on it. What are the most important things journalists should look for when evaluating a poll’s legitimacy? Are there any resources to help journalists with this? And I will just mention that I’m going to put in the chat a couple of resources that we have on the SciLine website that you can look at. And I know that Courtney has a slide that wasn’t shown but will be posted on our website that has some resources as well. But it would be great—and I know you rushed through a few of them, Gary, when you were talking earlier, some of the things to look out for—but why don’t we spell out a few of the flags that you might want to point out?

[0:46:01]

GARY LANGER: Yeah, well, one thing for resources for journalists—one is there are a lot of pollsters out in the world. You can reach out, find a good one who’s willing to talk to you and get some guidance. The other is that the Poynter Institute in Florida has a guide to polling principles and does some workshops for reporters, and that may be worth taking a look at. The Knight Foundation is well-known for supporting journalists’ efforts, and they may have some resources. There’s a variety of resources out there. But reporting on polls should be like reporting on almost anything else—I’ve got to say, a lot of us, as reporters, back in the day for me, would get assigned to cover a topic we don’t know anything about, and we’d have to learn about it. We’ve got to step up to it.

So, in polling, you need three things. You need a detailed description of the survey’s methodology, not some bland assurance it’s random, that it’s got a margin of error, see you later, but a detailed layout of how the survey was done methodologically. You could go to abcnews.com or google ABC News polling standards and methodology, you’ll get the full boat. I promise you. So, a detailed description of the survey methodology. Then you have to see the questionnaire, every question from top to bottom. If you don’t have expertise in survey question design, it’s not that hard. Just read it. Is it neutral, balanced, down the middle? Does it have reasonable options that can be compared to one another and that are not tilted in one or the other direction?

And then third, you need to see the overall results to each question to see, even if you don’t have the crosstabs or the results in the groups, you want to at least be able to see that the report you’re reviewing wasn’t cherry-picked, in which inconvenient results were set aside. So, those are the three pieces you need. You need to get on and look at them. Look, I understand. A lot of reporters on this call, you probably—your heads are spinning because what I’m saying sounds totally impractical. You’re covering a campaign. You’re not the poll reporter. They’re—you don’t have one, and you’re trying to crank out a piece on deadline. And it can be really hard to do this. There can be ways, though. One way is not to just slap the latest number in every campaign story you do, but once a week, pull back and do a polling story that looks at where the polls are at in the race, not only on who’s ahead, but on how and why, on the issues and the attributes and all the rest, and learn how the polls are done and give it your attention on an individual basis, but maybe not in every report. It’s a better approach than slapping an unchecked number in a story, I got to tell you.

[0:48:31]

RICK WEISS: Great. Anything to add there, Courtney?

[0:48:33]

COURTNEY KENNEDY: Well, Gary mentioned that you need to see a statement of the methodology, but we could speak a little more in detail about, OK, once you get that, what do you look for? And I think there’s a few things. But a fundamental one is it should be clear, where did the pollster draw the sample? Where did these people come from? Was it—and specifically, was it, like, a list or database that really covers most of the people in the state or in the nation, you know, whatever the population may be? But does it cover all of them, or is it likely to be excluding some people right off the bat? So, that’s one thing to look for. A shocking number of polls these days are described as conducted online, and that’s it. And really, to me, that’s, you know, just sort of disqualifying right off the bat. You need to know a lot more.

And then the last one is much more technical. You know, just sort of insiders like Gary and I look at this stuff, but I’m fascinated by the weighting because unfortunately, we live in a world where response rates are low, even to rigorous polls. And as pollsters, we have to make a number of adjustments to try to true up the responding sample to what that population actually looks like. And so, I always look and see, you know, did the pollster adjust on a bunch of dimensions where we know bias will creep in if they don’t tamp it down?

[0:50:00]

GARY LANGER: Yeah, and I can give you a little tool there. One thing to ask the polling provider is what’s the design effect? It’s also known as the UWE, the unequal weighting effect. And if they don’t know, they may well not know what they’re doing—because the design effect is a measure of the extent of weighting done to true up a poll’s demographics with census values. And if the poll is far off, you have a large design effect which reduces the effective sample size, which increases the margin of sampling error.

So, if you see a poll of a thousand people with a three-point error margin, that’s a fiction. That would imply zero design effect, and that really doesn’t happen. But—so the design effect has to be calculated and included in the reporting of the margin of sampling error. But at least you can say to the poll producer, what’s your design effect? What’s your UWE here? And if they say 1.2, 1.1, 1.4, you know that. You got it. And, actually, on my website, at langerresearch.com, we have a margin of error calculator in which you can put in the sample size, and you can put in the design effect and see what the margin of error should be. You’re welcome to use it. Just look it up at MOE at langerresearch.com. But if it’s a big design effect, then you know that they didn’t start out with a particularly representative sample, unless they did a lot of oversampling. So, there are some complications. And if they don’t know the design effect, then they may be clueless.
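As an illustration of the arithmetic Langer describes, the Python sketch below uses Kish’s standard approximation for the unequal weighting effect and shows how it inflates the margin of error; the weights are simulated, hypothetical values, not taken from any real poll.

    import math
    import random

    def design_effect(weights):
        """Kish's approximate design effect from unequal weighting:
        n * sum(w^2) / (sum(w))^2, i.e. 1 plus the squared coefficient
        of variation of the weights."""
        n = len(weights)
        return n * sum(w * w for w in weights) / (sum(weights) ** 2)

    def margin_of_error(n, deff=1.0, p=0.5):
        """95% margin of sampling error, inflated by the design effect."""
        effective_n = n / deff
        return 1.96 * math.sqrt(p * (1 - p) / effective_n)

    # Hypothetical poll of 1,000 respondents whose weights vary after adjustment.
    random.seed(0)
    weights = [random.lognormvariate(0, 0.5) for _ in range(1000)]

    deff = design_effect(weights)
    print(f"Design effect (UWE): {deff:.2f}")
    print(f"Margin of error ignoring weighting: +/- {margin_of_error(1000) * 100:.1f} points")
    print(f"Margin of error with design effect: +/- {margin_of_error(1000, deff) * 100:.1f} points")
    # As Langer notes, quoting a 3-point margin for n=1,000 implies no design
    # effect at all, which essentially never happens in a weighted poll.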


What is weighting and how can it affect polling results?


[0:51:30]

RICK WEISS: So, let’s take a moment to unpack just the weighting thing because I’m not sure all the reporters on the line are familiar with weighting. And that’s not W-A-I-T-I-N-G. It’s weighting, W-E-I-G-H-T. Can one of you just, you know, take a minute to talk about weighting?

[0:51:46]

COURTNEY KENNEDY: Sure. So, I’ll give you an example. We recruit through the mail, and it turns out women are more likely to open the mail than men. And so, in Pew surveys these days and on our panel, we have a slight overrepresentation of women, just sort of on a raw basis when we do our data collection. So, let’s say I go do a Pew poll today and it’s maybe 55% women, 45% men. But I know from the Census Bureau, it really should only be 52% women. So, when we say weighting, we—after the data collection, we statistically sort of tamp down the influence of the women so that they—in our final data, our final estimates—they have the influence that they should have based on what we know about the population so they are represented at that 52% that we get from the Census Bureau. And we do a similar exercise on college, non-college, race, ethnicity, geography and on down the line. So, it’s making sure that your final estimates—people are represented proportional to what they should be given what we know about the population.
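A minimal Python sketch of the adjustment Kennedy just described, using her illustrative figures: each group’s weight is its population share divided by its share of the raw sample, so the weighted sample matches the benchmark on that variable.

    # Illustrative post-stratification on a single variable (sex).
    raw_sample = {"women": 0.55, "men": 0.45}      # observed shares in the poll
    census_target = {"women": 0.52, "men": 0.48}   # known population shares

    weights = {group: census_target[group] / raw_sample[group] for group in raw_sample}
    print(weights)   # women ~0.95, men ~1.07

    # Check that the weighted sample now matches the benchmark.
    weighted_shares = {group: raw_sample[group] * weights[group] for group in raw_sample}
    print(weighted_shares)   # women -> 0.52, men -> 0.48 (up to floating point)

    # Real polls repeat this across education, race, ethnicity, geography and
    # more, typically through iterative raking rather than one variable at a time.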

[0:52:59]

GARY LANGER: Yeah. Think of a bicycle wheel. Some of you who ride will know it’ll sometimes go slightly out of round, and you’ve got to true the wheel. Weighting in a survey is like truing a wheel. You simply align the key demographic values—typically age, race, sex, education, those sorts of things, sometimes interactions between them—to the census values that we know. The weights should not be excessive. And if they are, then there is some question as to what’s going on with the sampling. But there are groups that are harder to get to participate. In ABS surveys—mail surveys, M-A-I-L surveys—as Courtney points out, you get more women participating. In other surveys, it’s just really hard to get younger people, just for example, and people generally with lower socioeconomic status—it’s harder to get them to participate. You do your best. And ultimately, you’ve got to weight them up somewhat. But the weights shouldn’t be excessive, and this is something pollsters should disclose.


Are state and local polls less accurate than larger, national polls?


[0:53:53]

RICK WEISS: I seem to remember from when we talked a couple of years ago, before the election, a comment that maybe local and state polls were less diligent about weighting, and that might have been one reason why local polls or state polls tend to be a little bit less accurate than national polls, if my memory is correct. Is there an issue there? This is a midterm election, so state and local polls matter a lot. Is their weighting generally not as good, or different from that of larger polls?

[0:54:24]

GARY LANGER: It shouldn’t be, if people know what they’re doing. But there was an example in 2020 when it was shown that a lot of state polls were not including education in their weighting variables. And I’m sorry; not doing that is sort of the equivalent of polling malpractice.


Is the science of polling the same no matter the subject matter?


[0:54:39]

RICK WEISS: Right. OK. We’ve got time for just a couple more questions here, and I’ve got one here. I’m curious about surveys conducted in other contexts, such as in market or consumer research settings. Are there other standards in place or techniques used in those contexts that are wholly different from those used in the kinds of public-facing polls we’re discussing here? Or is the science of polling the same no matter the context?

[0:55:07]

COURTNEY KENNEDY: I think it can actually be a bit different for people who are doing polling with the intent of marketing or message testing. They have some different practices. One that comes to mind is something called conjoint analysis, where, you know, you’re sort of testing out a product and it’s got maybe four features, and you want to study the effect of each of the four features and use a large survey to do that. There are ways to sort of randomly change each of those features when you’re asking people questions. It’s something we as pollsters don’t really do. There are other techniques as well. Gary, I don’t know if you’re more familiar with the marketing world.

[0:55:52]

GARY LANGER: Well—yeah, one thing I’d say is that, I mean, the sort of random sample, highly accurate, probability-based research we do is little known in the market research world. The vast majority of that stuff is done by convenience sampling of the type that I’ve been throwing shade at the whole hour here. And there are a few reasons for it. It’s usually less costly. And, also, a lot of marketing research—let’s face it—is really not done to independently analyze attitudes. It’s done to confirm the executive vice president’s preset expectations. So, a lot of market research is not what I would call particularly serious, or at least certainly not particularly rigorous, research. But they’re not trying to very accurately represent population values, and therefore, they would call it fit for purpose. It’s a different purpose.


Which polls or survey questions should reporters keep an eye on in the lead-up to the midterms?


[0:56:48]

RICK WEISS: Great. All right. One question here, and it’s a good one to start wrapping up today: Are there any particular polls or specific survey questions that you are especially keeping an eye on, or that you recommend reporters keep an eye on, in the lead-up to the upcoming midterms? What’s catching your eye, either because of the topic, the nature of the poll, or the nature of the race?

[0:57:19]

COURTNEY KENNEDY: I’ll be honest. I’m always very curious about what The New York Times is doing, not because I have some special favoritism toward The New York Times, but because their approach to polling is quite interesting these days. They sample from lists of all registered voters. They try really hard on the front end to draw that sample so it represents the voters in a given state, and then, after they collect the data, they try really hard to weight the heck out of it so it represents the voters in that state. The problem was that in 2020 they did all this work, and it just did not go well; their polls were quite off. So, just as a methodologist, I’m always interested to see: when you do that as well as you possibly can, does it work, and why or why not? And they’re pretty good about talking through the strengths and weaknesses of what they’re doing and what they’re learning. So, I would highlight them.

[0:58:23]

GARY LANGER: Yeah. On a methodological point, if I may: I find RBS sampling (that’s registration-based sampling) to be pretty difficult and, frankly, pretty problematic. You go to the state and get a list of everyone who is registered to vote there. Those individuals in the state record will have an address, but they may or may not have a phone number appended. So you go to a list broker, who will try to take the name and address and append a phone number. You get enormous non-coverage. And a critical point in survey sampling we haven’t talked about is non-coverage: people who are not in your sampling frame and are therefore excluded, perhaps systematically. If they’re excluded non-systematically, you’re OK. But in RBS sampling, you get back the names of all the registered voters in the state, and hopefully the list is current, but a lot of them don’t have phone numbers, and a lot of the ones that do don’t have working phone numbers. You end up missing a lot of people. These samples often have non-coverage of 30, 40, 50% of the population, which is pretty intolerable. So, when they crash and burn at the end of the day, I’m not terribly surprised.
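As a back-of-the-envelope illustration of that non-coverage (the voter-file and phone-match counts below are hypothetical, not from any actual state), frame coverage is simply the share of registrants who end up reachable through an appended, working number:

```python
# Hypothetical registration-based sampling (RBS) frame, for illustration only.
registered_voters = 3_000_000     # names on the state voter file
with_appended_phone = 1_950_000   # records the list broker matched to a phone number
with_working_phone = 1_650_000    # matched numbers that are actually in service

coverage = with_working_phone / registered_voters
print(f"Frame coverage: {coverage:.0%}")      # Frame coverage: 55%
print(f"Non-coverage:   {1 - coverage:.0%}")  # Non-coverage:   45%

# If the excluded ~45% differ systematically from the covered registrants (say,
# younger or more mobile voters), weighting the interviews you do complete
# cannot fully repair the bias, because those people never had a chance to be
# sampled in the first place.
```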


What is one key take-home message for reporters covering this topic?


[0:59:31]

RICK WEISS: Oh. OK. We are just about at the top of the hour, so I’m going to ask each of you a wrap-up question. Before I do, I want to remind reporters that as you log off today, you’ll see a short survey. We’d really appreciate it if you would take the half a minute it takes to answer its three questions so we can keep designing media briefings that work for you as well as possible. And to wrap up, as I always do: in half a minute or less, each of you, give me one take-home point that you want reporters to keep top of mind and walk away with today. Courtney?

[1:00:07]

COURTNEY KENNEDY: Sure. Well, Gary showed some of the historical track record of the accuracy of the polling that he’s done, and it is sterling; it’s very good. Unfortunately, when I think about polling in a midterm, most of the polls are not going to be done as well as a Gary Langer ABC News poll. They’re done with a lot less money and with methods that just aren’t as high quality. So I’m pretty focused on this unfortunate fact: if we’re talking about state polls, you should expect them to be off by about five percentage points. That was the case in 2020 and in 2016. So if you’re tempted to characterize a race, based on polling, as having somebody confidently in the lead when the lead is smaller than that, keep in mind that there really is a margin there, and we have to check carefully what we’re concluding about leads heading into a midterm.
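One simple way a reporter might operationalize that caution before calling a leader: compare the poll’s margin against the roughly five-point average state-poll error Courtney cites for 2016 and 2020. The candidate numbers below are invented, and this is a rough rule of thumb, not a formal statistical test.

```python
# Hypothetical state-poll result, for illustration only.
candidate_a = 49.0   # percent support
candidate_b = 46.0
typical_state_poll_error = 5.0   # rough average error in recent state polling

lead = candidate_a - candidate_b
if lead >= typical_state_poll_error:
    print(f"A {lead:.0f}-point lead exceeds typical state-poll error; an edge is plausible.")
else:
    print(f"A {lead:.0f}-point lead is within typical state-poll error; describe the race as close.")
```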

[1:01:04]

RICK WEISS: Great.

[1:01:05]

GARY LANGER: Yeah. That’s really a good point, Courtney. And I’d just make a broader point, if I may, which is that data are compelling. A lot of our stories are full of anecdote, and they get strength and stability from numbers and percentage signs. We see them, we need them, we’ve got to have them, so we grab them and we run with them. And I like to say that running with data is like running with scissors: it’s really easy to get hurt. So you’ve got to stop and figure out, where did these numbers and percentage signs come from? How were they obtained? Were they created using neutral, unbiased, probabilistic methods that give me confidence in them and let me rely on them? It is part of our compact with our audiences that we will only report news and information we reasonably believe to be true and accurate. That’s what we’re here for. And if we’re going to do that with polling, as with everything else, we’ve got to stop and check it out.

[1:02:01]

RICK WEISS: Fantastic advice, and so much information today: great warnings and great guidance about how to get things right, as well as what to watch out for. I want to thank our guests, Courtney and Gary, for adding so much to reporters’ understanding of how to get it right when they’re covering polls and surveys.

Again, I encourage the reporters on the line to fill out the survey, follow us on Twitter (@RealSciLine), and join us for our next media briefing, just a week from now on the 18th, where we’ll have three experts talking about who votes, who doesn’t, why, and what impact that dynamic has on election outcomes. It promises to be an interesting briefing. Thanks, everyone, for attending. Thanks to our panelists, and we’ll see you next week. So long.

Dr. Courtney Kennedy

Pew Research Center

Courtney Kennedy is vice president of methods and innovation at Pew Research Center. Her team is responsible for the design of the center’s U.S. surveys and maintenance of the American Trends Panel. Kennedy conducts experimental research to improve the accuracy of public opinion polls. Her research focuses on nonresponse, weighting, modes of administration and sampling frames. She has served as a co-author on five American Association for Public Opinion Research (AAPOR) task force reports, including chairing the committee that evaluated polling in the 2016 presidential election. Prior to joining Pew Research Center, Kennedy served as vice president of the advanced methods group at Abt SRBI, where she was responsible for designing complex surveys and assessing data quality. She has served as a statistical consultant for the U.S. Census Bureau’s decennial census and panels convened by the National Academies of Sciences, Engineering, and Medicine.

Declared interests:

None.

Gary Langer

Langer Research Associates

Gary Langer is president and founder of Langer Research Associates. The company produces the ongoing ABC News/Washington Post poll for ABC News; manages international surveys for the Pew Research Center; and designs, manages and analyzes surveys for a range of other media, foundation, association, and business clients. Gary was director of polling at ABC News (1990-2010) and a newsman in the Concord, N.H., and New York bureaus of The Associated Press (1980-90), where he covered the 1984 and 1988 presidential elections and directed AP polls (1986-90). His work has been recognized with two news Emmy awards (and 10 nominations), the first and only Emmys to cite public opinion polls; as well as the 2010 Policy Impact Award of the American Association for Public Opinion Research, for a seven-year series of surveys in Iraq and Afghanistan cited by AAPOR as “a stellar example of high-impact public opinion polling at its finest.”

Declared interests:

None.


Dr. Courtney Kennedy slides

Download

Gary Langer slides

Download