SciLine and the American Statistical Association’s (ASA) briefing covered the strengths and weaknesses of common types of opinion polls and surveys, how to interpret margins of error, uses of social media for surveying the public, and pitfalls and best practices for your election coverage. A panel of polling experts provided concrete examples and advice, and responded to journalist questions.


Introduction

[0:00:08]

RICK WEISS: And welcome, everyone, to this briefing co-hosted by SciLine and the American Statistical Association focused on polling and surveys, a topic that’s inevitably going to become more dominant in journalism as November approaches. Reporting on these measures of public opinion is a significant responsibility in the lead-up to an election, but it’s very easy to do a less-than-stellar job covering them because, at best, they’re a complicated instrument and, at worst, some are designed to mislead you and the news-consuming public. So SciLine and the ASA are very happy to introduce you today to three people with deep expertise in this area who are going to help us all get our stories right.

Before we get started, I want to take just a minute to introduce Regina Nuzzo, who’s the senior adviser for statistics communication and media innovation at the American Statistical Association and a professor of stats at Gallaudet University in D.C. here, who has worked closely with SciLine over the past few months not only to prepare for this briefing, but also to help produce the fact sheet on polling and surveys that we’ve jointly released and which is available on the SciLine website’s fact sheet page, which I encourage all of you to look at and to refer to repeatedly in the months ahead. Welcome, Regina. Say hello.

[0:01:26]

REGINA NUZZO: Thanks, Rick. Thanks so much. For people who aren’t familiar, the American Statistical Association is the world’s largest community of statisticians and actually the second-oldest professional society in the U.S., actually older than AAAS, and home to about 19,000 members across the world in academia, government, industry, including journalism, and especially data journalism. So ASA is very excited to be working with you, Rick, SciLine, AAAS. And I’m very much looking forward to these three wonderful speakers today. Thanks.

[0:02:06]

RICK WEISS: Thank you. And great. Going ahead, I’m not going to take time now to give full introductions to our speakers. Their bios are on the SciLine website. But just to let you know the order of events, we’ll first hear from Courtney Kennedy, director of survey research at the Pew Research Center and chief methodologist for that widely covered survey organization. We’ll hear next from Gary Langer, president and founder of Langer Research Associates in New York, which designs and manages survey projects for a range of media outlets and other orgs and who himself is a former longtime journalist who knows what it’s like.

And third, we’ll hear from Trent Buskirk, who’s professor of data science and chair of the Applied Statistics and Operations Research Department at Bowling Green State University and former director of the Center for Survey Research at UMass Boston. Remember; you can submit questions anytime by clicking on the Q&A icon at the bottom of your screen. We’ll get to those after all three have presented. And, Courtney, please get us started.


Presentations

The Polling Landscape

[0:03:05]

COURTNEY KENNEDY: Thanks very much, Rick. All right. It’s great to be with you all today. I’m going to talk about how polls are conducted. But before I jump into the nuts and bolts, I want to make a few high-level points about the polling field writ large. You should know that polling is in a very transitional era. And by that, I mean pollsters are doing their surveys in a number of different ways. So specifically, how The Associated Press and Pew Research Center do their polls is very different from how CNN, Fox, NBC do their polls, which is very different, again, from how places like USA Today, New York Times and Reuters are doing their polls. And part of this is due to the transition from traditional live-interviewer telephone polls to surveying online.

That’s a big reason for the diversity of polls. And another thing that you should know about the polling field is something I think as journalists and reporters you can relate to, which is the barriers to being a pollster have basically disappeared. You know, years ago, you had to have a brick-and-mortar building. You had to have survey staff, a call center and training to be a pollster. But today, due to the changes in technology, anyone with a few thousand dollars in their pocket can go to a certain website and, you know, do a national poll. But that doesn’t necessarily mean that the poll was accurate or trustworthy. So for you, if you see a press release by an organization you’ve never heard of, you don’t necessarily want to trust the numbers in there. They could be accurate, but a lot more questions need to be asked. So I’m going to walk through the main types of ways that polling is done these days, and I’ll stay high-level. Really happy to talk about details if there’s questions later.

I’ll start with the most familiar, which is live telephone polling. So this is what CNN, Fox and a number of others are doing with their polling. And the great virtue of this is that you start with a truly random, representative sample of the U.S. population ’cause almost every American adult – 98% – has either a landline or a cellphone. And these polls draw random samples from both the master list of all landline numbers and the master list of all cellphone numbers. So that’s called random digit dial, which you might be familiar with. You should also know that another way that telephone polling is done these days is by sampling or recruiting people off voter files, like the state lists of all registered voters. Many, though not all, of the records of registered voters have a phone number for that person appended to them. So that technique is really popular with campaigns because it’s very effective for reaching voters, more so than RDD. And there are a few public pollsters using that methodology as well.

Another phone-based methodology is robopolling or interactive voice response, and that’s where you pick up and it’s a recorded message walking you through the survey. And the appeal of this approach is it’s very inexpensive. But it’s got some noticeable methodological limitations. If you think about who you know that reliably still answers their home landline, if they even have one, you won’t be surprised to hear that these polls skew very old and skew white. And so one thing that these pollsters often do is they take their robopoll sample and combine it with a different sample, like an online sample, which probably has more younger adults, and they put them together. So this is done. It’s very inexpensive. I’ve never seen a peer-reviewed piece of science demonstrating that this is a good idea and has good properties, but you do see it done. All right, so now I want to move to online polls. And…

[0:07:38]

RICK WEISS: Courtney, I’m going to interrupt for one moment. If you think you’re showing your slides, you’re not, so…

COURTNEY KENNEDY: Oh, OK. Thank you. Let me just take a moment here. Apologies for that.

RICK WEISS: OK.

COURTNEY KENNEDY: All right. How we doing now?

RICK WEISS: Now you got it.

COURTNEY KENNEDY: So – all right. Thank you.

RICK WEISS: Great.

COURTNEY KENNEDY: So with online polls, there’s an important distinction that you should be aware of, and it stems from the fact that there is no way to draw a random sample of the full U.S. population online. There’s no master list of email addresses. There’s no master list of internet users. And so for pollsters who want to do online surveys but have that truly random sample, we have to recruit offline. And what we do these days is we draw a random national sample from the Postal Service’s list of all residential addresses. So we sample from there. We recruit through the mail, and we recruit people to take surveys online for us on an ongoing basis. And so that’s called probability-based online polling because the probability that any American is sampled is known. And we can contrast that with online polls that are done through opt-in sampling.

Another term used is nonprobability sampling. And these are much cheaper, but they’re basically done with convenience samples. So there’s a hodgepodge of different ways that pollsters recruit using this approach. They let people just sign themselves up on survey panels to earn money. They do pop-up ads in social media feeds, or they can recruit through email – a number of different ways. So this approach is widely used ’cause it’s quite inexpensive. You can do a lot of interviewing in a short period of time. But there’s an even greater burden on the pollster to try to weight these data to make them representative because they’re not starting with a random sample. And there are a lot of studies about how well that works or doesn’t, which we’d be happy to talk about later. So that brings me to my last point, which is weighting adjustment.

Weighting is one of the final steps in doing a poll, where the pollster takes the interviews they have and does a statistical adjustment to make them as representative as possible of the U.S. population. This is a critical step because some groups in the public are just more likely to take surveys than others. So if a poll is not weighted, it’s generally going to overrepresent older Americans, whites and people with higher levels of formal education. The tricky thing is that pollsters go about this very differently. No two pollsters weight exactly the same way. Some don’t weight their data at all, which, to me, is really alarming and, frankly, disqualifying. Some pollsters weight just on a few variables, like gender, age and race. And one of the lessons from 2016 is that that can be grossly insufficient for getting good data. But other pollsters really try hard to adjust on a larger number of variables to make their sample as representative of the population as possible. And I’ll leave it there, Rick. Thank you.
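To make the weighting step Courtney describes concrete, here is a minimal Python sketch of one common approach, raking (iterative proportional fitting). The respondent data and the population targets are invented for illustration; real pollsters weight on more variables and use purpose-built survey software.

import pandas as pd

# Toy sample that overrepresents older, college-educated adults.
sample = pd.DataFrame({
    "age":  ["18-49", "18-49", "50+", "50+", "50+", "50+"],
    "educ": ["college", "no_college", "college", "college", "no_college", "college"],
})
sample["weight"] = 1.0

# Hypothetical population shares to match (e.g., from census benchmarks).
targets = {
    "age":  {"18-49": 0.55, "50+": 0.45},
    "educ": {"college": 0.35, "no_college": 0.65},
}

# Rake: repeatedly rescale the weights so each variable's weighted shares
# line up with the population targets.
for _ in range(25):
    for var, shares in targets.items():
        current = sample.groupby(var)["weight"].sum() / sample["weight"].sum()
        sample["weight"] *= sample[var].map(lambda v: shares[v] / current[v])

# After raking, weighted age and education shares match the targets, so any
# survey estimate computed with these weights is adjusted accordingly.
print(sample.groupby("age")["weight"].sum() / sample["weight"].sum())
print(sample.groupby("educ")["weight"].sum() / sample["weight"].sum())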

[0:11:36]

RICK WEISS: Fantastic. A really clear introduction. Thank you, Courtney. And over to you, Gary.

Reporting On Polls: Three Things

[0:11:42]

GARY LANGER: Thanks, Rick. Hi, everybody. Let me get my share and get us going here.

RICK WEISS: Looks good.

GARY LANGER: You guys see my slides?

RICK WEISS: Yes, perfect.

GARY LANGER: Fantastic. All right, it was a great introduction by Courtney. I’m going to take it a little farther. There’s so much to talk about when we talk about public opinion research, surveys and survey reporting, and I really needed to narrow it down, so I’m going to talk about three things, of many. Thing one is sampling. Thing two is questions. And thing three is operating principles. I think I’m OK raising some of these points with you because in addition to being a survey researcher now for 34 years, I’m also a recovering journalist. I spent a decade as a reporter at The Associated Press, followed by 20 years at ABC News, most of them as director of polling.

So I’ve got some news background as well as, I think, a reasonably strong survey research background. Let’s talk about sampling for a second first. Courtney covered it well. But one way I like to present this is to tell you an old joke that pollsters have, which is to say that if you don’t believe in random sampling, then the next time you go to your doctor and he wants to do a blood test, have him take it all. The point is that a few drops or an ampule of blood randomly drawn from all the blood in your body is adequate to test and make inferences about all the blood in your body – your red count, your white count, your cholesterol, you name it. We have a known universe – all the blood in your body. We take a random sample of it, and we can make inferences. That’s the beauty of survey research, based on the principles of inferential statistics, as it’s called – random sampling, also known as probability sampling. Now, there are other approaches.

One that’s very common these days is the opt-in approach. These are panels composed of people who’ve signed up to click through questionnaires on the internet in exchange for points redeemable for cash and gifts. There was a very detailed 80-page study done in 2010 by AAPOR, the American Association for Public Opinion Research, which found a variety of problems in these opt-in online panel data. They said that researchers should avoid these panels when one of the research objectives is to accurately estimate population values, which, from my perspective, is kind of the purpose of the enterprise. There’s no generally accepted theoretical basis on which to claim that these survey results, in fact, support inference, that they’re projectable to the general population, and thus claims of representativeness should be avoided when using these sample sources.

And lastly, the reporting of a margin of sampling error – something else we can talk about – as associated with an opt-in or self-identified sample is misleading and (unintelligible). Now, that was in 2010. There’s been a bunch of research before and since largely supporting these conclusions. As recently as 2016, Courtney’s own group at Pew did a very detailed study on the subject. They found a great deal of variability across panels. They also found – and I’ll circle the headline there for you – widespread errors for estimates based on blacks and Hispanics. And I have to tell you that I don’t think those sorts of errors are ever tolerable, but certainly in the heightened awareness we’re experiencing right now, I think it’s particularly intolerable to accept research that contains these sorts of errors. Let’s look at what some news organizations have said. This is a polling standards statement produced by The New York Times some time ago but with some really, I think, germane and important advice, some of the same advice I’ve been trying to give for many years now.

Polls must be thoroughly vetted. They must have been determined to have been done well and to be free from bias in the conclusions drawn before they’re reported. Keeping poorly done research out of the paper is just as important as getting good survey research into the paper. If we get it wrong, we’ve not only misled our readers but also damaged our own credibility. This holds true for polls on every topic in every section of the paper. Absolutely, I think these are really important guideposts for us as we think about poll reporting. We’re not just reporting somebody else’s data. We’re lending it our credibility, and we’re telling our readers and our audiences that this material is worthy of their attention. They expect us to have checked it out first because that’s our job. They also said – The New York Times polling standards – that internet and opt-in polls, so the kind I just described, do not meet the Times’ standards regardless of how many people participate and that, in fact, in order to be worthy of publication in the Times, the survey must be representative – that is, based on, as discussed, a random sample of respondents.

That was in 2006. I haven’t seen any updated polling standards document from the Times, but I certainly have seen different polling practices. This is just recently, a New York Times survey done by the online research firm Survey Monkey, which, as Courtney described, doesn’t do random sample, probability-based survey research but does a different type. I don’t know how this happens. We can talk or speculate about it; maybe it’s an example of this mismatch. But certainly, what we’ve seen said and pronounced as important polling standards and the practices that are in place don’t always match up. Let’s talk about questions, right? We talked about how a sample’s drawn. What’s being asked in these questions? Here’s a recent one that was pretty fun.

President Trump has called the special counsel’s investigation a witch hunt and said he’s been subjected to more investigations than previous presidents because of politics. Do you agree? Who would love this question? Donald Trump would love this question. He says, wow, half think that it’s a witch hunt. But let’s pick it apart for a minute. The question is triple-barreled. It asks three things in one question – whether the investigation is a witch hunt or not, whether the president is subjected to more investigations than other presidents or not and whether those investigations have been launched because of politics or not. Answers to each of these questions can differ, and therefore this is a fundamentally flawed question. That’s not all. Asking respondents if they agree without asking if they disagree is unbalanced. And even if you were to ask it as a balanced agree-disagree question, agree-disagree questions are a fundamentally biasing approach because they lack the alternative proposition and they encourage satisficing. There are some really good papers on this.

But you’re not asked evenhandedly, like, do you approve or disapprove of the way the president is doing his job? You’re asked, do you agree or disagree that the president’s doing a great job? You have to conjure up the alternative proposition. It’s cognitively burdensome, and you’re less likely to get a balanced answer. The takeaways here are that it’s important to ask one thing at a time, to ask balanced questions, to ask neutral, unbiased questions. For reporters, the takeaway is to look carefully at the questions that are being reported because this is a question brought to us and branded by USA Today. So considering the source – New York Times, a reputable newspaper; USA Today, a reputable newspaper – is not sufficient. Let’s go to operating principles.

The first, I’m sorry to say, is that we swim in a sea of unreliable data. There are a lot of good, carefully done surveys out there – Courtney’s shop, mine, I’d like to say. A variety of others spend considerable time, effort and expense to do probability-based, random-sample surveys truly representative of the population and spend a lot of time on questionnaire design and data analysis. But, again, as Courtney said, there are a lot of amateurs in the business and a lot of problematic data collected by questionable methods out there. The challenge for us as reporters is that when we see a number with a percentage sign, it is really compelling. It adds structure and substance to what otherwise may be anecdote. We want it. We need it. We’ve got to have it. We’re inclined to run with it.

And I like to say that running with data is like running with scissors. It’s really easy to get hurt. So what do we do? I would suggest we need to do what we’re trained to do as reporters. Like anything else, we’ve got to check it out. That means developing, having and holding standards, being serious about applying them to the data we’re collecting, not taking it all as just numbers but really fulfilling our responsibility to our audiences, which is to make careful judgments as to whether the data we’re about to report does or does not merit our time and their attention. That’s my piece. Thank you very much.

RICK WEISS: Fantastic. Thank you, Gary. And over to you, Trent.

Understanding Opinion Polls and Surveys

[0:20:26]

TRENT BUSKIRK: Thank you. Let me share my screen, and we’ll get going. OK. Can everyone see the slides?

RICK WEISS: Yes.

TRENT BUSKIRK: OK, excellent. I just want to give a disclaimer here. Three men were in a hot air balloon, and they found themselves lost in a canyon. One of them says, I’ve got an idea. We can call for help – the canyon will echo and carry our voices far. So he leans over the basket and yells out, hello, where are we? They hear the echo several times. Fifteen minutes pass. Then they hear a voice echoing back, hello, you’re lost. One of the men says, that must’ve been a statistician. Puzzled, the other men ask, why do you say that? And he says, well, I can tell from the reply for three reasons – he took a long time to answer, he was absolutely correct and his answer was absolutely useless.

RICK WEISS: (Laughter).

TRENT BUSKIRK: So this is my disclaimer today as a statistician on the panel. I’m trying to help you all sort these things out, and hopefully it is useful as we go through this journey. I think about surveys and data sources the way I think about onions. There are several layers that you have to think about and get to the bottom of if you really want to vet the numbers, as Gary and Courtney were mentioning earlier. There are lots of things to consider, and you’ve heard many of those already. I’m going to try to put a cap on all those things going forward. But the idea of my talk here is to really help you ask experts questions like an expert. I think part of the journalism enterprise is asking really good questions and being informed.

So hopefully, this is helpful for you all to be informed about a very complicated landscape, as Courtney mentioned, one that is really changing in front of our eyes as we move into the era of big data, nonprobability samples and crunching budgets. So I would like to just give you a couple of questions to ask yourselves about the numbers that you see. And sort of some of these things you’ve already heard, and some of them you may not have heard already. But I think you should ask questions about the questions. As Gary mentioned, not all questions are the same, and question wording matters. Do you support Trump in 2020? Do you plan to vote for Trump in 2020? A respondent might think of the first question and say, no, I didn’t give him any money.

To the second question, that same respondent might say, yes, I plan to vote for him. These two things may be different and interpreted differently and yet reported as similar. Question context also matters. If you ask a question about racism as an important issue in the United States before you ask whether or not you think the president is appropriately dealing with race-related issues in America, you might get very different results than if you had changed the order of those questions, because of the way people process the cognition that’s required to answer these questions appropriately. Was the question asked by someone or not? We’ve heard that there are many ways to ask or conduct surveys. We can use telephone. We can use online, et cetera. One thing that’s interesting about those differences in modes is that the presence of an interviewer is different across those modes. Telephone usually has an interviewer on the other side, whereas online, people can answer the questions themselves.

We saw some evidence in 2016 that when people were asked about whether or not they supported Trump, they would say no on the phone but they would answer the question differently if they were allowed to answer it in the privacy of their own home. So these kinds of things can matter and can affect the quality of the data that we see, or at least explain some of the variability. The wording of the question, though, I think provides an enormous window for you to think about how to report this. Of respondents surveyed in the USA, a panel of U.S. adults, 25% reported that they plan to vote for Trump in 2020. So being able to mirror the wording of the question in the way you report it is really important because the question wording matters, and it basically impacts the way we measure and what is actually being measured. I think we should also ask questions about who was surveyed or polled. The intended audience for a poll is technically termed the target population.

You might hear that sort of technical term. And it represents really who is being described by the poll numbers that you’re reporting. Sometimes surveys ask different questions of respondents based on their characteristics. A national poll might ask, are you a registered Democrat or Republican? If you are a Democrat, they might ask, do you approve of Biden’s approach to selecting his running mate? So the poll numbers for estimates of support of Biden’s approach refer to a subpopulation of registered Democrats. So numbers that report that estimate should carry a caveat that they refer to a specific subpopulation of the U.S. The denominator for the proportions that you report is essentially a function of this target population. Sometimes a survey speaks to a single target population.

Other times, a survey can change the target population depending on extra questions that are asked to clarify points. I also think you should ask questions about how the respondents were identified. We’ve heard that respondents can be identified through probability-based designs, where there’s a random selection. We also have nonprobability-based designs, where people self-select themselves. They respond to an advertisement. They see something in the mail. They go online, they see a pop-up. They decide to participate. This participation mechanism can sometimes alter the results or impact the quality that we see. Probability-based approaches have more control over this because we set the design, we set the sampling parameters and we have a well-designed experiment, if you will, or a controlled setting. Nonprobability approaches don’t have that by their very nature. Sometimes polling companies try to adjust for the lack of structure by using model-based approaches or, as Courtney mentioned, weighting approaches that are sometimes more involved to compensate for the fact that we don’t have a probability of selection. Also, thinking about how respondents were identified and how the survey was conducted, we heard that there are many ways to do surveys – online, over the phone, et cetera. Those have different implications for quality, and also for the way people might answer the questions.

It also provides a context that I think is really important when we go to report these things. Whether or not an interviewer is present when the questions are asked could affect whether certain people admit that they believe racism is a problem, whereas others may not admit it. And then there’s whether or not the full target population was included – for example, if we had a random sample of registered voters conducted by automatically dialing their registered landline numbers and, based on this sample, we say that the support for Trump in 2020 was estimated to be around 56%, this particular survey method misses registered voters who have only cellphones in their household. And the impact of that missingness would be different if these cellphone-only folks were different in their support for Trump than their landline counterparts. So we can very clearly see that the context of these numbers matters almost as much as the numbers themselves.

Without the context, it’s really hard to understand where the error might’ve been made or why there are differences across polls or surveys. I also think it’s really important to ask questions about how the poll numbers and survey estimates were derived. We’ve heard a little bit about this from Courtney and Gary so far about using weighting and so on. Most probability-based methods do use sample weighting to adjust or account for differential factors of nonresponse and representation in the frames that we use. Some nonprobability methods also account for this.

But here’s a very important caveat. A lot of times, nonprobability samples, because they’re cheaper to conduct, have much larger sample sizes. And on their face, those larger sample sizes might seem to promise more accuracy, but once the adjustment and modeling factors are incorporated, their margins of error may end up more comparable to those of a probability-based sample, which is likely to be smaller. And some sources may use models to make these adjustments, and they’re not all based on the design-based inferential framework that we like to refer to. Credible interval is a term that you might see that refers to something like a margin of error, but it’s different because it’s more model-based. And so asking questions about how the model was derived or how the error is being quantified is an important one, especially now as we see transition in our field and many ways to get at numbers that result from polls.

I think it’s important, along that vein, to ask about uncertainty in poll numbers. Not every poll is perfect, as we already mentioned. So the numbers that we give you are estimates. Estimates have error associated with them. It doesn’t mean they’re wrong. It means that there is a certain level of certainty that we can ascribe to that number. So suppose we randomly sample vegetarians in the U.S. about which vegetable they prefer to eat as a snack, carrots or celery. The results report that carrots were preferred over celery 46% to 43%, with a margin of error reported to be 2 percentage points. What does that mean? It means that with 95% confidence, or some other level of confidence that we can set – usually 95% is the default – our estimate of carrot preference is in the range of 44% to 48% and, likewise, the range of preference for celery is 41% to 45%. If you’re visual like me, here’s a picture. Here’s the carrot and the celery, and here’s the range. So on the point estimates alone, it looks like carrots are preferred. But if we think about the margin of error and we incorporate the uncertainty in the estimate, it looks like these preferences might be more similar.
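Here is the arithmetic behind that carrots-versus-celery example as a rough Python sketch. The sample size below is hypothetical (chosen so the formula yields roughly the 2-point margin quoted above), and the formula assumes a simple random sample with no design effect or weighting.

import math

n = 2400  # hypothetical sample size; roughly implies a 2-point margin of error
for name, p in [("carrots", 0.46), ("celery", 0.43)]:
    moe = 1.96 * math.sqrt(p * (1 - p) / n)  # 95% margin of error for a proportion
    print(f"{name}: {p:.0%} +/- {moe:.1%} -> {p - moe:.0%} to {p + moe:.0%}")

# Prints roughly: carrots 44% to 48%, celery 41% to 45%. The two ranges overlap,
# so the apparent 3-point lead is within the margin of error.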

And here’s an interpretation for margin of error. If you haven’t seen one, you can refer to these notes later on. I think also, in closing, it’s important to ask questions about not only the poll numbers you have but whether they’re from a single source or from multiple sources. Nowadays, we’re starting to see that poll estimates can come from a single poll. You’ve seen that traditionally, and those are really easy. You can actually do what Gary suggested: check out the polling source. Check out their methodology. Inquire. Make a phone call, et cetera. But nowadays, we’re also seeing other opportunities for presenting numbers from polls. Poll estimates can be the result of aggregation, where many polls that are surveying a similar outcome over a similar time period are lumped together, and those results are combined to make an estimate. Some things to think about, though – poll aggregation is a relatively new method.

It is being applied by many different outlets, as you can see here, and it basically allows them to smooth over the results from one poll to another to get a slightly more accurate estimate of the underlying outcome. But there are a few questions to ask – right? – because not everybody aggregates polls in the same way, just as not everybody conducts polls in the same way. So I think you should be asking questions about how many polls were included in the aggregation. Certainly, more polls being included potentially gives you more accurate information, but it also increases the differences in the kinds of polls that could be represented.

And what types of polls are included? Are only probability-based polls included, or are nonprobability-based polls included, and how are they combined together? Questions about how the time window is used – if you incorporate a longer time window with a particular outcome that’s volatile, like race relations in the U.S., you might get less accuracy by aggregating the polls, as opposed to better accuracy. And finally, I think you should ask questions about whether these polls are weighted according to different measures, like how accurate were they in the past, how large were these polls, whether they were probability or nonprobability.

These all impact the underlying poll aggregate numbers that you are going to report on, and they can actually change those numbers from one poll aggregator to another. So the last thing I think you should be asking questions about is the broader context during which the survey or the poll was conducted. Knowing the timeline of the poll is important, but knowing what else is going on in America or in the target population during that time period is also important. Be careful not to read into these numbers as causative measures. Sometimes we want to ascribe more strength to numbers than their cape would portend. They’re not always superheroes like we want them to be. They are just normal numbers that represent a point-in-time estimate. Support for Trump is going down as protest participation increases.

Does participation in protests cause the Trump support to go down, or is it just something that we’re seeing over time coincidentally? So here’s a slide that maybe makes this picture clear: spurious correlation is not causation. These numbers may not imply the cause of what’s going on, but they may be tracking together with something else. I think this slide is the cheesiest slide you’re going to see today, but it does show that per capita consumption of mozzarella cheese in the U.S. correlates positively with the number of civil engineering doctorates awarded. Maybe those doctorates are powered by cheese, but maybe they’re not. So these questions are really meant to get you thinking about the numbers that you are presenting. Don’t ascribe more strength to them than they really deserve. Here are a couple of resources I put together to maybe help you become better at asking questions of the experts. Thanks so much for your time. It was a pleasure to be with you today.
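One way to picture the aggregation questions Trent raises is a minimal sketch of a simple aggregator: a weighted average that counts larger and more recent polls more heavily. The polls below are invented, and real aggregators use their own, more elaborate adjustment schemes, so this is only a sketch of the general idea.

# Each entry: (estimate for candidate A, sample size, days since the poll closed).
polls = [
    (0.48, 1200, 2),
    (0.51,  800, 5),
    (0.46, 2500, 9),
]

def poll_weight(n, days_old, half_life=7.0):
    # Larger samples count more; the weight decays as the poll gets older.
    return (n ** 0.5) * 0.5 ** (days_old / half_life)

weights = [poll_weight(n, d) for _, n, d in polls]
estimate = sum(p * w for (p, _, _), w in zip(polls, weights)) / sum(weights)
print(f"aggregated estimate for candidate A: {estimate:.1%}")

Changing the half-life, dropping nonprobability polls, or grading pollsters by past accuracy would each move this number, which is one reason two aggregators can report different figures from the same set of polls.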

[0:34:30]

RICK WEISS: Thanks, Trent. That was great and very entertaining. I loved your carrots or celery. Want to remind all the reporters on the line that all these slides will be available on the SciLine website within the next day or so, so you can refer to them, and you can click on those various extra resources and take advantage of that. Also want to remind you that you can click on the Q&A function icon at the bottom of your screen to submit questions at this point. And we can start hearing some of those from our panelists.

Q&A


What is the quality of SMS, or text-based survey methodology?


RICK WEISS: I’m going to start with one right here just to get us off to the start. Can you speak to the quality of SMS/text-based methodology? It doesn’t sound like that’s random, but let’s hear. Courtney, do you want to start with that?

COURTNEY KENNEDY: Sure. Yeah, that’s a new methodology that’s just popped up in the past few years. So because it’s so new, there is not good research that informs us about the quality. There is, to my knowledge, not a single peer-reviewed article that talks about the quality of a poll that’s done just through text. And just on its face, there are some challenges that I see. This quickly becomes a legal issue, frankly, because of the laws that we have that govern dialing and use of telephone numbers. One interpretation that I’ve seen is that you could only do that with landline numbers – you know, send an automated text message. So if that’s the case, then you’re looking at what I talked about earlier with robocalls, where you’re going to get a sample that skews very white and way too old relative to the population.

But I do think that some of the polls that are polling by text also include cellphone numbers even though that’s more of a gray area legally. So it’s done. From what I’ve seen, the response rates are incredibly low, which does not necessarily invalidate a poll, but it’s something to pay attention to. So, I mean, the bottom line – it’s done. There’s not good research validating that as a good way to do polling. Response rates are very low. And everything I know about polling tells me that the kind of sample you’re going to get that way is going to skew, you know, very different from the population demographically, so a pollster would really have to apply a lot of weighting in an attempt to try to get some valid data. Gary, do you have any thoughts on that one?

[0:37:14]

GARY LANGER: Yeah. I would just add – I think you’re right, Courtney – that that’s a technique that’s at the front edge of survey methods. It’s what I would describe as experimental right now. I think it’s important for us to encourage experimentation, which is a little different than taking the data seriously. The beauty of all of these efforts is that all of our concerns are answerable and testable. We have the data. We can empirically evaluate it if we have the data. And that’s why the most important thing, I think – the takeaway for reporters – is that it all comes down to disclosure. The real question here is, how were these data collected? It’s not all just numbers and percentage signs. Some methods were used to obtain these numbers. What methods were used? How were these numbers obtained? And what was asked? And then what does it all look like, right?

And researchers – there are some really excellent researchers, particularly in academia, who spend their careers evaluating survey techniques and survey data, testing its validity and reliability and reporting their findings in peer-reviewed journals. And I think it’s really important for us to be informed about these findings and for us to not only encourage but to insist upon disclosure for any poll we’re considering reporting. Who collected these data? How were these data collected? What was asked? What was found? What did the unweighted data look like? We talk about election returns as somehow validating data. Well, you correctly predicted the outcome of an election – that’s really not a good measure at all because of the amount of modeling that goes on. But what we do know is that in any survey that’s done, demographics are included – administrative variables. You can ask any number of questions for which we have solid census data, and we can compare how we did on these demographic and other administrative variables to the known values. And that’s one way to see the representativeness or the accuracy of the estimates. All of these techniques do require this sort of assessment. It’s super important.

[0:39:13]

TRENT BUSKIRK: Oh, I’ll just pipe in here a little bit. I have some experience with doing SMS-related work, and it is a relatively new methodology. I think the other part of this that we haven’t really spoken to yet is the legality around this. Courtney mentioned this a little bit. You can generate a probability sample of cellphone numbers relatively easily in the U.S., and people do that, and that’s a very rigorous methodology. What is harder, though, is to take that random sample of cellphone numbers and send SMS surveys to that sample straight away without prior permission. So a lot of times, you will see that these SMS polls are being done with a constituency or a set of parties that have already opted in to receive communications from the pollster or from the organization who is communicating with them via text.

It might be for the purposes of information gathering, but it also might be for other purposes, like I want to get a coupon through my phone, so I’ve given you permission to text me. The thing to think about here is how those questions are asked over text, because there are many ways to gather that data and massage that data for analysis, which ultimately determines how the numbers you see are cooked. Essentially, if you’re asking people to vote – like, who are you going to support for president in 2020 – and you give them a closed-form list, like A is Trump, B is Biden, C is somebody else, people can respond very clearly, and those results can be processed quickly and aggregated. But if people type in their responses as free text, then the way that free-response information is processed can vary from one vendor to another. And that processing alone can actually impact the quality or the variability in the results that you would obtain. There is also some hesitation, I think, for people to respond to unsolicited text messages from a number they haven’t seen before. So we have a response cooperation issue as well as a coverage issue.


What went wrong with the polls in 2016?


[0:41:26]

RICK WEISS: Great. Here’s a question from Aaron Zitner, Wall Street Journal. Hi, Aaron. Been a while. He wants to know what your assessment is of what went wrong with the polls in 2016, if anything. And why do people say that state polls were more off than national polls? I’ll start with you, Courtney, ’cause I’ve heard you speak about this before, but others may want to add to that.

[0:41:50]

COURTNEY KENNEDY: Sure. The state polls were more off. I mean, if you just compute the difference between the final poll estimates and the election results, the errors in the state polls – particularly in the battleground states, which is what I’m talking about – were larger on average than in the national polls. Even in terms of calling the right winner: the national polls showed that Hillary Clinton was leading Trump by about 3 percentage points in the national popular vote, and she ended up winning it by about 2 or so percentage points. So the national polls weren’t perfect, but if you look at the historical accuracy record, they had a pretty good year, actually. But, of course, that’s completely different from how people experienced it because we don’t elect presidents with the national popular vote. It doesn’t matter.

What matters are these states. And the polling in the Upper Midwest turned out to be quite off. I mean, polls were off by 6, 7, 8 points, in many cases calling the wrong winner. So I don’t think there’s any doubt that the polls in the states were, in fact, more off. So why were they? There were two main reasons. I served on a committee that spent over a year looking at this. One reason is there is evidence that there was actually a late swing toward Donald Trump, a swing that was late enough that polls conducted in September and October didn’t catch it. But what the exit poll picked up was that late deciders, people who made up their minds in the last few days before the election, broke for Trump in Pennsylvania, Michigan and Florida by 15, 20 percentage points. And that, historically, is pretty unusual. Usually, late deciders wash out about evenly between the two major-party candidates. That was not the case in 2016. And so polls conducted a few days out just didn’t capture that.

The second major thing about 2016 that we found is that the state polls in particular – 70%, 80% of them – were not adjusting to make sure that they were representative on education. And one thing we’ve known about polling for years is that people with higher levels of formal education are more likely to take surveys. So if you take any poll, especially these state polls, they have proportionately too many college graduates because they’re more likely to do the poll. In many elections, that doesn’t matter so much. In 2016, it was fatal because education was quite associated with presidential vote preference: college graduates were more likely on average – not all of them, but on average – to vote for Clinton, whereas adults with a high school education or less, especially white adults with a high school education or less, were more likely to vote for Trump. So if you did your poll and you had too few high-school-or-less Trump voters and too many college-grad Clinton voters and you didn’t fix it, you overestimated support for Clinton and missed support for Trump. And we saw that in poll after poll. It’s not the only thing that happened, but it was an important part.
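To see how that education effect plays out in the arithmetic, here is a small illustration with made-up numbers (they are not the actual 2016 figures):

# Made-up numbers to illustrate the mechanism Courtney describes.
# Suppose college graduates split 55-45 for Clinton and non-graduates 40-60,
# and graduates are 35% of the electorate but 50% of an unweighted poll.
clinton_grad, clinton_nongrad = 0.55, 0.40
true_grad_share, poll_grad_share = 0.35, 0.50

true_support = true_grad_share * clinton_grad + (1 - true_grad_share) * clinton_nongrad
unweighted_poll = poll_grad_share * clinton_grad + (1 - poll_grad_share) * clinton_nongrad

print(f"true Clinton support:     {true_support:.1%}")     # about 45%
print(f"unweighted poll estimate: {unweighted_poll:.1%}")  # 47.5%, overstating Clinton

# Weighting the same respondents back to the 35/65 education mix removes the gap,
# which is why adjusting on education mattered so much in 2016.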


What is the most effective way to vet a poll on deadline?


[0:45:11]

RICK WEISS: Courtney. And I know everyone’s going to want to weigh in on a lot of these, but I want to move to additional questions so we can get more reporters’ questions. I’m going to move on to this question from Bakari Savage at WBRC FOX6 News in Birmingham. What’s the most effective or efficient way to vet a poll on deadline or debunk one that’s skewed in the same news cycle? We’ve talked a lot about check all these things out, but how does one do that quickly, on the fly?

GARY LANGER: Well, doing it on the fly can be tough, but reporting anything on the fly can be tough, right? We have to have disclosure. One way to go, to the extent you can, is to assess which polling organizations are producing surveys that are of interest to you and check them out in advance, right? Be in touch. Have a reporter whose responsibility this is. And be in touch with these organizations and get their disclosure. And I would suggest that nondisclosure should be fatal – right? – because if you can’t check it out, how can you reliably report it? If you know the providers of these surveys, whether you’re working at the state level or nationally, there are a lot of organizations that are known to the world.

Check them out in advance and lay down some base knowledge for yourself about who’s doing good, reliable work and how and why. Now, part of this is not only about understanding what they do but about establishing your own standards. And I would ask all the journalists on this call to go back and look up your own news organization’s standards for poll reporting. First question, do they exist? Do you have any? And then next question, what do they say? What are they based on? What’s the empirical and theoretical basis for them? If you don’t have standards, obviously, it gets really hard to apply them. I’ve got recommendations. I don’t presume to set standards for others, but I do suggest that we all have them.

[0:47:04]

TRENT BUSKIRK: Can I add to that? I think that knowing what questions to ask is important – right? – because then you don’t spend your time figuring out what to ask; you just go forward with the vetting. So there are a couple of key questions that are outlined in the segment that I just spoke about, but there are some resources as well. On the poll aggregator sites, you will often see that many of the polls are listed, and they give a track record of those polls, or some of the aggregators even grade polls. And those grades may or may not be meaningful to you, but it is at least a place for you to see how the poll that you’re trying to vet compares with other polls that you could be vetting as well. So try to get a sense of the variability in the coverage of the things you care about – that’s an important piece of it – to provide you context and maybe a sense of how confident you can be about where the needle is or what’s being measured.

And I do think, you know, having a set of questions on the ready is important, and learning what those questions should be and being sort of conversant in those things is also important ’cause it allows you to access the information through these relationships that you have, as Gary suggested.


Are the Associated Press’s exit polls the main reference in elections?


[0:48:21]

RICK WEISS: Great. And the SciLine/ASA fact sheet I mentioned earlier has a lot of those questions, as does Trent’s slides. Question from Bricio Segovia at Voice of America – why are the AP’s exit polls the reference in elections? How’d they get that?

GARY LANGER: I don’t know that that’s the case, although, to me, the standard is the NEP, the National Election Pool exit polls. The National Election Pool is a consortium made up of leading news organizations and television networks and others that produces rigorous exit polls and has for many years. The AP actually left that consortium a couple of years ago and set up a new, and I think still experimental, alternative to exit polling. Exit polling is increasingly tricky because fewer people than before are actually exiting polling places these days, right?

A lot more voting is happening by mail or absentee – I think that was more than a third of the vote in 2016. So there have to be supplements to exit polls – typically state-level telephone samples, which are common in addition to the in-person exit poll in places where there are high levels of absentee voting. So there’s a variety of ways to do it. I do think the standard is the NEP, the National Election Pool exit poll, and that the AP’s approach is something interesting and worth checking out, but it’s new and still experimental in my view.


Can sample size be used as an indicator for whether a poll is worth covering?


[0:50:00]

RICK WEISS: Great. Question from Rae Bichell based in Colorado with the Mountain West News Bureau – can we talk more sample size, specifically whether sample size can be used as a red flag for whether a poll is worth covering or not? If a poll, for example, surveyed 500 people and is making conclusions about what Coloradans think, which is 6 million people, is that a red flag? How about 50 people?

TRENT BUSKIRK: I’ll take this one first, and then I’ll pass it on to my colleagues. This is a seemingly simple question, but it is a bit more complicated. Let’s say, for example, that you wanted to survey 500 farmers about whether or not they wanted to get subsidies from the U.S. government to help them with their farms, and there were a total of 500 farmers in the U.S. Five hundred would be a great number. It would probably be too many because you’d be surveying all of them. We typically don’t do censuses because they’re expensive. So the answer to the sample size question is a relative one. Sample sizes of 50 could be really large if the population is small. Sample sizes of 500 could be inadequate if the population size is enormous.

It also depends on the kind of sampling mechanism, right? If you have a nonprobability sample of 3 million people and they’re all women, it’s not big enough to ask about men’s health. So the idea of sample size is connected to the mechanism by which the sample is drawn and the underlying population to which you want to make inference. The general rule of thumb, though, is that samples of around 1,200 or 1,400 or 1,500 give a margin of error of about 3 percentage points in national U.S. polls. So it doesn’t take an enormous amount of sample to give reasonable levels of accuracy for certain outcome measures if the sample is drawn randomly. I think this is the advantage of the random sampling piece that often gets overshadowed. Courtney? Gary?

[0:51:59]

GARY LANGER: If I can just go to my blood test analogy, I think it’s a good one. The accuracy of the sample, as long as it’s a random sample, is independent of the size of the population being sampled. So you need that much blood drawn from a body to test for white count, red count, cholesterol, you name it. It can be that much blood from a mouse, from a baby, from a man, from Godzilla to a being the size of the planet Pluto. As long as all of the blood is randomly circulating, that same sample size is adequate for your test. And there are any number of other examples.

Why do you hear so much about national polls of a thousand people? Because what often drives the sample size for a survey is the size of the smallest subgroup you want to reliably analyze. In a good-quality random sample, you want about a hundred cases for that subgroup. In a good-quality sample of a thousand Americans, you’re going to get about a hundred Black respondents and about a hundred Hispanic respondents. Because you want those groups for analytical purposes, that drives your overall sample size of 1,000. There are other ways to get there, oversampling and the rest, but with straight-on sampling, that’s what drives that sample size decision. If you don’t want to do subgroup analysis, then a sample size of 500 can be perfectly adequate. Sample size is associated with the margin of sampling error – they’re very closely linked, almost the same thing. We have a margin of error calculator at our website – langerresearch.com – where you can just put in the sample sizes. You enter the two groups you’re comparing, you click the button, and it’ll tell you the margin of error. It’s a pretty useful tool. You might want to check it out. And there are probably others out there as well. Courtney, you got something?

[0:53:40]

COURTNEY KENNEDY: Yeah, I would agree with everything Gary said. I would like to say a little bit about the margin of error. The truth is, whatever the pollster says the margin of error is, the real margin of error is bigger. And that’s because there are actually four ways that error can creep into a poll – not capturing or covering the population correctly, the fact that not everybody responds, the fact that there’s misunderstanding and misreporting in answering questions, and the fourth one is sampling – the fact that you’re not doing a census, but you’re drawing a sample. And the margin of error only speaks to one of those four error sources, but we know that the others are important and contribute error.

Another thing that a lot of people don’t know: if you see a margin of error – and especially think about an online opt-in poll where the pollster reports a margin of error or a confidence interval – that interval assumes that every estimate in that poll is 100% correct, that it has no bias whatsoever, and then that’s the interval around that. But study after study has shown that’s a completely false assumption. You know, all polls contain some error. And the average amount of error tends to be higher, especially for those online opt-in polls. So you do want to bear in mind that there’s a margin of error, but you should always know it’s a little bit larger than whatever number was reported.
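For reference, the textbook margin of error that Gary’s calculator and Trent’s rule of thumb refer to is the simple-random-sampling formula at 95% confidence, sketched below in Python; as Courtney notes, a real poll’s effective margin is somewhat larger once weighting and the other error sources are taken into account.

import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a proportion under simple random sampling.
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 1500):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# Roughly: n=500 -> +/-4.4%, n=1000 -> +/-3.1%, n=1500 -> +/-2.5%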


With many surveys emerging on race in the U.S. right now, can you speak to the quality of methodologies that oversample specific populations – for example, African Americans? (Part 1)


[0:55:16]

RICK WEISS: Really interesting to keep in mind as people report, especially on close findings. We’re going to go a little bit over 3 today. I know I have permission from our speakers to go an extra 10 minutes or so. And we do have a few more questions I want to try to squeeze in, as well as closing take-homes. So let me just get a couple more questions in here. With many surveys emerging on race in the U.S. right now, can you speak to the quality of methodologies that oversample specific populations – for example, African Americans?

COURTNEY KENNEDY: I’d say that you’d want to look at how the pollster is recruiting. If they’re recruiting from sort of a listed sample of, you know, Asian Americans based on the fact that they have an ethnic-sounding name, that kind of approach has been shown to be biased. So you could have an oversample and get what sounds like a lot of interviews, but they could be biased because you worked with a source that wasn’t representative. So you really want to look to see polls that have a robust number of interviews with those groups, at least several hundred ideally – Gary’s right that a hundred is kind of a lower bound – but also that the pollster was recruiting them from (inaudible).


How do you judge the value of a single poll in contrast to an aggregate of surveys?


[0:56:40]

RICK WEISS: We’ve lost your audio, Courtney. Are you there? I’m sorry, we – oh. Sorry, we have lost Courtney’s audio. But for now, while you perhaps try to click on that, I will get one more question squeezed in here that perhaps the other two can hit. Amy Jeffries at WUNC in North Carolina – how do you judge the value of a single poll in contrast to an aggregate of surveys?

[0:57:22]

GARY LANGER: Well, this is something I would like to talk about a little bit because aggregation can be highly problematic. I like to say that aggregating polls is kind of like aggregating champagne, Coca-Cola and turpentine. You know, drink up. What you really need to do is make individual assessments as to the validity and reliability of these surveys on the basis of the methods by which the data were collected. Aggregation does not cure sins, particularly because these days, cheap and suboptimal polls can easily flood the gates, if you will, because they are so easy to produce. So if you average all the polls you see, you may be putting in a lot of problematic polls along with a couple of good ones, and then you wonder why it didn’t work out so well. So, consider the source – that’s what I think we’re trained to do as reporters, and it certainly applies to survey research. Courtney and Pew do beautiful work.

There are plenty of other news organizations and other non-news organizations, foundations, nonprofits – you name it – that do beautiful survey work and spend the time and effort. There are others that do much lower-quality work, and averaging and aggregating them is not a solution, right? So consider the source. If you want to look at a group of surveys that are done similarly and well and that ask similar questions in a similar time frame, that’s certainly legitimate. But you have to make the judgments first about what’s going in. The problem in 2016 was not so much with the polling data, which certainly at the national level was quite accurate, but with the expectations caused by poor estimates that were produced through aggregation.
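A small simulation can make the aggregation point concrete: averaging many polls cancels random noise, but a shared bias in the cheap polls passes straight through to the average. All numbers here are made up for illustration.

```python
import random

random.seed(0)
TRUE_SUPPORT = 0.50  # hypothetical true level of support

def simulate_poll(bias, noise_sd=0.015):
    """One poll estimate = truth + systematic bias + random sampling noise."""
    return TRUE_SUPPORT + bias + random.gauss(0, noise_sd)

# Two careful polls (no bias) mixed with eight cheap polls that all
# lean 4 points the same way
polls = ([simulate_poll(0.00) for _ in range(2)]
         + [simulate_poll(0.04) for _ in range(8)])

aggregate = sum(polls) / len(polls)
careful_only = sum(polls[:2]) / 2
print(f"Aggregate of 10 mixed-quality polls: {aggregate:.3f}")
print(f"Average of the 2 careful polls:      {careful_only:.3f}")
# The aggregate typically lands a few points above the truth:
# averaging reduces the noise but simply averages in the bias.
```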


With many surveys emerging on race in the U.S. right now, can you speak to the quality of methodologies that oversample specific populations – for example, African Americans? (Part 2)


[0:58:51]

RICK WEISS: Courtney, are you there to – you want to finish your thoughts on race?

COURTNEY KENNEDY: Can you hear me?

RICK WEISS: Yeah, you’re there.

COURTNEY KENNEDY: Right. So I just wanted to make the point that you look for two things – a sufficient sample size, ideally a couple hundred, but equally important is where they’re sampling from if there are oversamples of African Americans or Hispanics or whatever the group of interest is.

[0:59:19]

TRENT BUSKIRK: If I could just add to that – we just completed a study where we looked at biases in demographic-related data measured over the last 20 years across a lot of different probability-based survey sources. And what we found is that the bias in covering certain race groups in particular did tend to go up, but that bias started to turn around as survey companies started to incorporate cellphone samples into their telephone sampling framework. And as the percentage of cellphone sample included in the overall mix of landline and cellphone increased, the biases tended to go down. The cellphone mix in telephone sampling has shown remarkably consistent capability to reach populations that are typically underrepresented in other modalities, without necessarily having to oversample.

But including the right percentage of cellphone sample in a landline-and-cellphone mix is important to consider. So whether an organization’s sample was, say, 10% cellphone versus 80% cellphone can be part of the reason you see race representation differ across two surveys. And back to the poll aggregation point, I was just going to add that it is true that averaging a bunch of things that are in the tail – a bunch of outliers – isn’t going to get you any closer to the central tendency just because you’ve used an average. Clearly, that makes sense statistically. But I do think a single poll that is more accurate is probably on par with, or better than, say, 10 polls that are not very accurate at all, and it may be less expensive overall. If you think about the costs of doing 10 of those subpar polls, it’s still going to cost you some money, and it may not be as accurate even if you aggregated them. So I agree with Gary’s and Courtney’s points on that.
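One way to see the frame-mix point is the standard coverage-bias identity: the bias from leaving a group out of the sampling frame equals that group’s share of the population times the difference between the covered and excluded groups. The numbers below are hypothetical and are not from the study described above.

```python
def coverage_bias(excluded_share, covered_mean, excluded_mean):
    """Bias of an estimate based only on the covered population:
    bias = excluded_share * (covered_mean - excluded_mean)."""
    return excluded_share * (covered_mean - excluded_mean)

# Hypothetical: 55% of adults are cellphone-only; support is 48% among
# landline-reachable adults and 56% among cellphone-only adults.
bias = coverage_bias(0.55, 0.48, 0.56)
print(f"Landline-only frame bias: {bias:+.1%}")  # about -4.4 points
# Folding more cellphone sample into the mix shrinks the excluded share,
# which pushes this bias toward zero without any oversampling.
```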

[1:01:38]

GARY LANGER: And just to add to Trent’s last point: in the ABC News-Washington Post poll that we produce, 75% of the calls are by cellphone – we actually increased that recently from 65%. And it produces a beautiful sample because, for the vast majority of interviews, we’re no longer ringing in your home where you’re not; we’re ringing in your pocket, and you can pull out the phone and answer. We have better, more representative samples than we’ve had for many years. Our design effects, which are a measure of the amount of weighting that has to be applied to the sample, are far lower now than they were a decade ago. The ability to properly incorporate cellphones into survey samples has increased their accuracy considerably, and we do it.
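For readers unfamiliar with the term, the weighting-related design effect mentioned here is often approximated with Kish’s formula, deff = n * sum(w^2) / (sum(w))^2, which equals 1 when every respondent gets the same weight and grows as the weights become more uneven. A short sketch with made-up weights:

```python
def kish_design_effect(weights):
    """Kish's approximate design effect due to unequal weighting:
    deff = n * sum(w^2) / (sum(w))^2. Equals 1.0 for equal weights."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

# Hypothetical weights for a tiny sample of eight respondents
light_weighting = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.0]
heavy_weighting = [0.3, 0.4, 2.5, 1.0, 3.0, 0.5, 0.6, 1.7]

print(f"Light weighting deff: {kish_design_effect(light_weighting):.2f}")  # ~1.00
print(f"Heavy weighting deff: {kish_design_effect(heavy_weighting):.2f}")  # 1.60
# A lower design effect means less corrective weighting was needed --
# the raw sample already looked more like the population.
```

A related way to read the same number: the effective sample size is roughly n divided by the design effect, so heavier weighting buys less precision from the same number of interviews.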

Observations on the Q&A from a statistical point of view

[1:02:27]

RICK WEISS: Well, as we start to wrap up, I want to bring Regina Nuzzo back into the discussion here and do two things. Regina, I’d love to hear any sort of closing thoughts or observations from you as you’ve listened to all these Q&As going on. Love to hear your take on some of these things from a statistical point of view. And then I’m going to quickly go around the horn after you’re done, Regina, just to get a quick half a minute from each of our three panelists of take-home messages from you. Regina, what are you hearing? And what do you want to emphasize here?

REGINA NUZZO: Thanks, Rick – so much good information here. I think the thing that really has struck me is how much the landscape has changed. Twenty years ago, I was getting my stats degree, and so much has changed since then. The barrier to entry to be a pollster is now much lower, so, as Gary said, we’re swimming in this sea of unreliable data. But I’m also seeing that, yes, these seas may be rising, but there are more lifeboats. There’s more positive in there as well. Things are changing. Things are different than they were in 2016. The field is doing research on this. There are new innovations all the time. The field learned a lot from 2016. So that’s what’s happening behind the scenes. But today I wanted to highlight a few of those lifeboat tools that have come up. Gary mentioned making sure that your newsroom has standards and, if not, making a push to have standards. Trent gave us a lot of great questions.

We’ve talked about the fact sheet that ASA and SciLine worked on together. There are information aggregators. AAPOR has a Transparency Initiative, which I don’t think has come up yet, under which pollsters commit to making all of their methods transparent. And for me, the most important is this wonderful resource that SciLine has, where reporters don’t need to be experts on polls. They simply need to know how to drop you all a line or get you on the phone and say, help put me in touch with a polling expert who can help me with all of this. That would be my take on it.


What is one key take-home message for journalists covering opinion polls and surveys?


[1:04:53]

RICK WEISS: Fantastic. Thank you, Regina. Thanks for all your help putting this together. Let me just quickly go around the horn and get some last take-homes from folks. Courtney, start with you.

COURTNEY KENNEDY: Sure. I would just say that one thing that’s confusing about polling is that talk is cheap. Every pollster pretty much says that their poll is nationally representative. It doesn’t mean anything. Anybody can say that, but it often fools people into feeling like, oh, that must be a valid poll. So as we’ve emphasized today, I really encourage you to look under the hood. Look at what mode – you know, how did they interview people and where did they get the respondents? If they just say online, that almost certainly means it was an online convenience sample, which may or may not, you know, conform with your place’s standards.

RICK WEISS: Great warning. Gary.

[1:05:38]

GARY LANGER: Yeah. Thanks, everyone, for joining in. Too many news reporters of my generation long indulged themselves in the lazy luxury of being both data hungry and math phobic. And it’s really not acceptable. What’s heartening is that there are reporters, journalists like you, taking the time and trouble to tune in today and inform your judgment about these issues, that there are the resources that outfits like Rick’s and Regina’s put together, and they’re available elsewhere as well. I do think that there’s an increased effort – even a movement, let’s hope – to instill a little more rigor, a little more care and caution in poll reporting. Anything else we report, we check it out first because that’s our responsibility to our audience, and I simply suggest that the same applies to poll reporting and that it starts with disclosure. Thanks a lot.

RICK WEISS: Thank you. And Trent.

[1:06:42]

TRENT BUSKIRK: I echo the sentiment of my co-panelists. I’m very thankful that you all have sought out some information today, and I really hope that you are becoming more confident in your ability to access information and ask questions about things you don’t understand. Math phobia shouldn’t be an issue. You should still reach out. If there’s something you don’t understand about a number, maybe it is a bad number, or maybe it just needs to be clarified. I will say that polling and surveying in general sometimes get a bad rap when they don’t work. But let’s think about an analogy: we are now trying to find a cure for COVID-19, and there have been some medicines that have been proposed and haven’t worked, but we are still trying to find medicine that works.

And so, just like that, polls will continue, and they will get better as a result of a scientific community that is trying very hard to make sense of an increasingly vast landscape of methods and approaches. Polling is an art and a science, but there is a science underneath it all. And so I really hope that you can access that science and learn to ask questions and not be afraid to speak up when you want clarification, because there are lots of people who can help you.

[1:08:04]

RICK WEISS: A great, inspiring ending. There is hope here we can do better, and I think we’re going to, especially with the help of folks like these. Thank you all so much for your contributions to the cause of solid reporting on polls and surveys. I want to remind our attending reporters that all this information will be up on the website within the next day or so at sciline.org. Please follow SciLine at @RealSciLine and the American Statistical Association.

And I also want to encourage you all as you sign off to respond to the very brief survey that I hope is designed well. It’s simply three questions. We’re not going to try to make a big deal out of it or draw any huge conclusions from a small sample, but it would be very helpful to hear from you to answer those three questions so that we can continue to give you the kinds of events like this that can be most helpful to you as reporters. Thanks again to everyone, and we’ll see you at the next SciLine media briefing.

Dr. Courtney Kennedy

Pew Research Center

Courtney Kennedy is vice president of methods and innovation at Pew Research Center. Her team is responsible for the design of the center’s U.S. surveys and maintenance of the American Trends Panel. Kennedy conducts experimental research to improve the accuracy of public opinion polls. Her research focuses on nonresponse, weighting, modes of administration, and sampling frames. She has served as a co-author on five American Association for Public Opinion Research (AAPOR) task force reports, including chairing the committee that evaluated polling in the 2016 presidential election. Prior to joining Pew Research Center, Kennedy served as vice president of the advanced methods group at Abt SRBI, where she was responsible for designing complex surveys and assessing data quality. She has served as a statistical consultant for the U.S. Census Bureau’s decennial census and panels convened by the National Academies of Sciences, Engineering, and Medicine.

Declared interests:

None.

Gary Langer

Langer Research Associates

Gary Langer is president and founder of Langer Research Associates. The company produces the ongoing ABC News/Washington Post poll for ABC News; manages international surveys for the Pew Research Center; and designs, manages and analyzes surveys for a range of other media, foundation, association, and business clients. Gary was director of polling at ABC News (1990-2010) and a newsman in the Concord, N.H., and New York bureaus of The Associated Press (1980-90), where he covered the 1984 and 1988 presidential elections and directed AP polls (1986-90). His work has been recognized with two news Emmy awards (and 10 nominations), the first and only Emmys to cite public opinion polls; as well as the 2010 Policy Impact Award of the American Association for Public Opinion Research, for a seven-year series of surveys in Iraq and Afghanistan cited by AAPOR as “a stellar example of high-impact public opinion polling at its finest.”

Declared interests:

None.

 

Dr. Trent Buskirk

Bowling Green State University

Trent D. Buskirk is the Novak Family Distinguished Professor of Data Science and the chair of the Applied Statistics and Operations Research Department at Bowling Green State University. Prior to his post at BGSU, Dr. Buskirk served as the director of the Center for Survey Research at the University of Massachusetts Boston, and before that he was the vice president for statistics and methodology at the Marketing Systems Group and held a tenured position in the department of biostatistics in the School of Public Health at Saint Louis University. Dr. Buskirk is a fellow of the American Statistical Association, and his research interests include mobile and smartphone survey designs, methods for calibrating and weighting nonprobability samples, and the use of big data and machine learning methods for social and survey science design and analysis. His work has been published in leading survey, statistics, and health-related journals such as Social Science Computer Review, Journal of Official Statistics, Public Opinion Quarterly and the Journal of Survey Statistics and Methodology.
