Quick Facts

Surveys and polling

Advice for Reporters, compiled by SciLine and the American Statistical Association


Opinion polls vary enormously in structure, style, and credibility, and are easy to mis- or overinterpret. At their best, opinion polls can give an accurate snapshot of broad public sentiment on an issue. But even well-constructed polls are not particularly good at measuring small shifts in opinion over time, and their ability to predict future voter choices is decidedly mixed. Among other complications for voter polls, people often hold off before settling on a candidate, and even then they often change their minds. The following primer provides some essentials for accurate reporting on polls and surveys.

Survey methods and their general reliability

  • Live telephone interviews with human pollsters, which reach both cell phones and landlines, are expensive but have historically been the most accurate.

  • Probability-based online polls use randomly selected postal addresses to reach a representative sample of people and recruit them to complete an online survey. (When the individuals are recruited by the pollster to contribute to numerous polls over time, they’re referred to as online panels.) These polls generally also have high statistical accuracy.

  • Nonprobability online polls typically use individuals who, in response to advertisements or other general outreach efforts, have volunteered to answer a survey. These polls are less expensive than probability-based online polls, but they do not use random sampling from the entire public and so have a high risk of bias. Increasingly, pollsters are developing statistical methods and models to address this inherent problem. But these methods are complex and still evolving, so the quality of these polls varies widely and journalists should consider appropriate caveats.

  • Automated polls (also called robopolls, or interactive voice response calls) are relatively inexpensive but are legally restricted to landlines (though some pollsters don’t abide by that restriction), which seriously undermines the representativeness of the sample. The Associated Press Stylebook recommends that the media not report on automated polls.

  • A variety of less-often-used survey methods exist, some of which can generate statistically accurate results but some of which should not be trusted. When in doubt, check with a polling expert.


Things to look for in a poll

  • Is the sampling representative? The sampled population should include individuals from all or nearly all subgroups of the population it is meant to represent.

  • What was asked (and how was it asked)? Look carefully at the actual questions asked and make sure you’re precise in how you describe them. Remember that the order of questions can influence people’s answers, too, so it can be helpful to see the full questionnaire.

  • Who conducted the poll? Several factors can help identify pollsters with reputations for trustworthiness. One is whether the organization is a member of the Transparency Initiative sponsored by the American Association for Public Opinion Research, or a contributor to the Roper Center for Public Opinion Research’s data archive at Cornell University. Keep in mind that those two organizations focus on ensuring full disclosure of survey methods but do not certify the rigor of those methods, and that there are reputable pollsters who belong to neither.

  • Who sponsored the poll? Apply special scrutiny to polls sponsored by political entities or advocacy groups, looking for evidence of bias. Polls sponsored by academic institutions and large media organizations are generally designed to minimize such bias, but they don’t have a perfectly clean record either. Bottom line: if you’re unfamiliar with the sponsor, do some reporting.

  • Did the pollster weight their results? If so, how? Weighting is a statistical process by which a pollster adjusts poll data so that the sample represents the target population overall. It corrects for the fact that it is impossible to survey everyone in a large population, and for the reality that the people actually polled may differ in important ways from the overall population whose opinions are sought. (A simple weighting sketch appears after this list.)

    • Without weighting, polls typically under-represent younger, less educated, and non-white adults, since they are less likely to respond to polls than are other groups.

    • Weighting is especially important for state-level election polls, which often lack the resources to secure a sufficiently large and representative sample. Many of the state polls that wrongly predicted the outcome of the 2016 U.S. presidential election failed in part because many people made up or changed their minds late in the campaign, but also, importantly, because those polls did not weight for the fact that college graduates are generally more likely to respond to surveys than other adults. In key states that year, formal education was strongly associated with vote choice, something pollsters had not found to be especially important in the past.

  • How many people were surveyed? The fewer respondents, the higher the statistical uncertainty in results. Generally speaking, 100 respondents is the minimum sample size necessary for a reportable result. But note that a poll result based on 100 respondents will have a margin of error of at least +/- 10 percentage points.

    • Although reputable polls typically have a sufficient overall sample size and appropriate weighting, some individual questions within a poll may have been asked of (or answered by) only a subgroup defined by demographics such as race or age. As a result, individual questions within a larger poll may have too few respondents to provide valid results, or may be weighted inappropriately.
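
To make weighting concrete, here is a minimal sketch of one common approach, post-stratification, in Python. Every number in it is invented for illustration: one weighting variable (education) with two groups, hypothetical population and sample shares, and made-up response rates. Real pollsters typically weight on several variables at once, often with more elaborate methods such as raking.

    # A minimal post-stratification sketch; all numbers are hypothetical.

    # Share of each education group in the target population (assumed).
    population_share = {"college_grad": 0.35, "no_college": 0.65}

    # Share of each group among the poll's respondents; college graduates
    # are over-represented here, as is common in raw samples.
    sample_share = {"college_grad": 0.50, "no_college": 0.50}

    # Each group's weight is population share / sample share, so
    # over-represented groups are down-weighted and vice versa.
    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    # -> college_grad: 0.7, no_college: 1.3

    # Hypothetical fraction of each group answering "yes" to some question.
    pct_yes = {"college_grad": 0.60, "no_college": 0.45}

    # Unweighted estimate treats the raw sample as if it mirrored the population.
    unweighted = sum(sample_share[g] * pct_yes[g] for g in pct_yes)

    # Weighted estimate re-balances each group to its population share.
    weighted = sum(sample_share[g] * weights[g] * pct_yes[g] for g in pct_yes)

    print(f"unweighted: {unweighted:.1%}")  # about 52.5%
    print(f"weighted:   {weighted:.1%}")    # about 50.2%

Because college graduates answer “yes” more often in this toy example, leaving the sample unweighted would overstate support by about two percentage points.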

Understanding margin of error

  • The margin of sampling error (more typically known as the margin of error) isn’t an error in the sense of being a mistake. It’s the level of uncertainty, or the price that we pay in precision for not interviewing every single person in our target population.

  • Margin of error is only one of many types of uncertainty in a poll’s results. Others might stem from how a question is worded or which questions are presented before others. Because margin of error is the one that’s easiest to pin down numerically, it gets the most attention.

  • Some polls may not report margins of error. Nonprobability polls, for example, use sampling techniques that are not suitable for generating conventional margins of error, and so use other approaches to estimate the uncertainty of their results. Some may report a “credibility interval,” which gives a range the pollsters believe is likely to contain the true value. If you don’t see a margin of error or credibility interval, or are unsure how a poll’s uncertainty was assessed, contact the pollster or talk to a polling expert.

  • Margins of error help you figure out the strongest possible conclusions you can draw from a poll’s results. But many people apply this measure incorrectly.

    • For example, in mid-January 2020, some publications reported that a “majority” or “more than half” of Americans favored the President’s impeachment, conviction, and removal from office, citing a poll showing that 51% of surveyed U.S. adults answered that question affirmatively. Yet the survey results had a margin of sampling error of +/- 3.4 percentage points, which means the true value for this population was plausibly anywhere within 3.4 percentage points on either side of the poll’s reported result. Once we account for the inherent uncertainty that comes from interviewing a sample of adults instead of the entire population, the most plausible range of values runs from 47.6% to 54.4% (that is, from 51 minus 3.4 to 51 plus 3.4). Since it’s plausible that only about 48% of all U.S. adults favored removal, we cannot conclude that the proportion was “more than half” or a “majority.”
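
The arithmetic behind these figures is easy to check. Below is a minimal Python sketch of the standard margin-of-error formula for a reported percentage at the usual 95% confidence level. The sample size of 830 is an assumption, back-solved so the output matches the +/- 3.4-point example above (the primer does not give the cited poll’s actual sample size or design effect); the last line also reproduces the roughly +/- 10-point worst case for a 100-person poll noted earlier.

    import math

    def margin_of_error(p, n, z=1.96):
        """Margin of sampling error, in percentage points, for a proportion
        p observed in a simple random sample of n respondents; z = 1.96
        corresponds to a 95% confidence level."""
        return z * math.sqrt(p * (1 - p) / n) * 100

    p, n = 0.51, 830  # 51% answered "yes"; n is assumed, chosen to give ~3.4
    moe = margin_of_error(p, n)
    print(f"margin of error: +/- {moe:.1f} points")                # +/- 3.4
    print(f"plausible range: {51 - moe:.1f}% to {51 + moe:.1f}%")  # 47.6% to 54.4%

    # The formula peaks at p = 0.5, which is why a 100-respondent poll can
    # never do better than roughly +/- 10 points.
    print(f"n=100 worst case: +/- {margin_of_error(0.5, 100):.1f} points")  # +/- 9.8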

Ten things journalists should find out and report about polls they are covering

  1. Is this poll really a poll? Some unscrupulous campaigns and advocacy groups conduct “push polls,” which are not polls at all. Rather than aiming to tally people’s opinions, they actively seek to change people’s opinions about issues or individuals. One clue: these efforts typically fail to collect any demographic information.

  2. Who sponsored the poll and who conducted it? Assuming you are not an expert, ask some pros for an assessment of the sponsor’s reputation. At a minimum, include the name of the sponsor in your story, to hold them responsible for the work they are backing.

  3. Who was the target population? This gives important context for interpreting results. For example: a poll of likely voters, or a survey of U.S. teens ages 13 to 17.

  4. How many individuals were sampled, and where? Location is important for context, and larger sample sizes help ensure—but by no means guarantee—more reliable results.

  5. How were the interviews collected? Methodology can point to the representativeness of the sample. For example: The poll was conducted by landline and cellular telephone, or interviews were conducted online and by telephone.

  6. When was the poll conducted? The date is important for interpreting results, especially in politics or other fast-changing landscapes. For example: Interviews were conducted September 15 to November 8, 2019.

  7. What was the margin of sampling error? Poll results aren’t complete without information about the uncertainty and range of plausible results. For example: The poll had a margin of sampling error of +/- 6.0 percentage points (which means that the true results are anywhere within six percentage points on either side of the given results).

  8. Was there weighting? If so, on what? For example: Results were weighted to ensure that responses accurately reflect the population’s characteristics in factors such as age, sex, race, education, and phone use.

  9. What language was used? This hints at the effort made to collect a diverse sample. For example: The poll was conducted in English and Spanish.

  10. Consider also reminding readers of reasons why polls may not perfectly reflect reality. For example: There are many potential sources of error in polls, including the use of charged wording and the order in which questions appear.

Advice from Pros

  • When reporting poll results, avoid using decimal points or tenths of a percent—that is, report 28%, not 28.4%. (The margin of error will always be at least 1 percentage point, so tenths of a percent are effectively meaningless and misleading, suggesting that the results are more accurate than they actually are.)

  • Don’t place too much weight on any one poll. It’s best to compare several similar polls.

    • Neither should you presume that an aggregate of smaller polls necessarily adds to accuracy or precision. Some aggregators use more sophisticated methods than others, and the quality of their results can vary greatly.

  • Don’t forget that even small differences in question order and choice of words can significantly alter results. (For example, asking survey participants about “euthanasia” versus “physician-assisted death”). For some topics, consider directly quoting the question in full, so readers can see how it was asked.

  • Remember that all poll estimates are inherently uncertain. Margins of error are typically calculated at a “95% confidence level,” which means that in about 5% of poll results — that is, five results out of 100 — the truth of what’s happening in the population will lie outside the margin of error’s bounds. (The simulation sketch at the end of this section shows this in action.)

  • Don’t assume that a poll with a large sample size has high statistical accuracy. The increased statistical precision achievable with large samples can be overwhelmed by the uncertainty and bias potentially introduced from such factors as poorly designed survey questions, flawed data collection, and improper statistical analysis.

    • This is especially important when looking at online polls, where large numbers of respondents can be amassed cheaply. Above 1,000 or so respondents, how many individuals were selected matters less than how they were selected and how their responses were analyzed.

  • Note that terms like “nationally representative,” “organic sampling,” “next-generation sampling,” “representative of all U.S. adults,” and “random sample” may be defined differently by different pollsters. It’s best to ask precisely what is meant in each case. Also note that for election polling, the distinction between “likely voter” and “registered voter” may be especially important.

  • Even well-designed Presidential election polls can lead you astray. One thing to watch for: nationwide polls of voters are usually designed to capture only the popular vote, not electoral college outcomes. And while poll aggregators often build electoral college weights into their models, they rely on state polls, which tend to be less well funded, smaller, and less precisely weighted than national polls. Late-deciding voters can also significantly swing elections away from poll predictions.

  • A final, important point: Surveys are done on a vast range of topics other than electoral preferences. They provide essential data on economic activity, health status, drug use, consumer behavior, and countless other measures that are critical to responsible, democratic policy making and the intelligent allocation of resources. In many of these domains, surveys are quite good at predicting behaviors and needs. When reporting on polls and surveys, treat each with the same fairness you demand from them!
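
To illustrate the “95% confidence level” point made above, here is a minimal simulation sketch. All of its numbers are invented: we posit a population in which exactly 52% hold some opinion, simulate 10,000 polls of 1,000 respondents each, and count how often the true value lands inside each poll’s margin-of-error interval.

    import random

    TRUE_P = 0.52   # the "real" population value (knowable only in a simulation)
    N = 1_000       # respondents per simulated poll
    POLLS = 10_000  # number of simulated polls
    Z = 1.96        # multiplier for a 95% confidence level

    random.seed(0)  # fixed seed so the run is reproducible
    covered = 0
    for _ in range(POLLS):
        yes = sum(random.random() < TRUE_P for _ in range(N))
        p_hat = yes / N                              # this poll's estimate
        moe = Z * (p_hat * (1 - p_hat) / N) ** 0.5   # this poll's margin of error
        if p_hat - moe <= TRUE_P <= p_hat + moe:
            covered += 1

    print(f"coverage: {covered / POLLS:.1%}")  # comes out close to 95%

Roughly 5% of these simulated polls miss the true value even though nothing went wrong with them; that is exactly the residual uncertainty the margin of error is meant to convey.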