How to know whether you can trust a poll

A close election is all about uncertainty. So it’s not surprising that many Americans are relying on polls to offer comfort, or warning. Not all polls are created equal, however — and election experts caution that some are more impartial than others.

Not only that, pollsters vary wildly in how much outreach they do and how closely they adhere to industry norms on data accuracy.

Take this poll released Wednesday by Quinnipiac University, which found Vice President Kamala Harris ahead of former President Donald Trump by three percentage points in the battleground state of Michigan. On Twitter/X, the improved result for Harris buoyed her supporters, while Trump fans challenged the poll’s veracity.


Samara Klar, Ph.D., a political science professor at the University of Arizona’s School of Government and Public Policy, stresses transparency when it comes to deciding what polls to give credence to. 

“A poll consumer should be able to clearly see how the data were collected, when it was administered, how many people are in the sample, and demographics of who they are,” Klar tells Mashable. 

A weighty matter

Pay attention to whether a poll’s results are weighted, Klar adds, referring to a statistical adjustment applied to the data after collection. Weighting aims to correct sampling imbalances by counting some responses more heavily than others, to account for groups the poll underrepresents.

For example, if few of a poll’s respondents are Gen Z, or female, the pollster may give more weight to younger women’s responses than to those of older, male participants.
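
To make the arithmetic concrete, here is a minimal sketch in Python of how that kind of weighting might work. The respondent shares and population targets below are invented for illustration and are not drawn from any actual poll.

```python
# Minimal sketch of demographic (post-stratification) weighting.
# All numbers are hypothetical, chosen only to illustrate the arithmetic.

# Share of each age group in the target population (e.g., from census figures)
population_targets = {"18-34": 0.28, "35-54": 0.33, "55+": 0.39}

# Share of each age group among the poll's actual respondents
sample_shares = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# Each respondent's weight is the ratio of the target share to the sample share,
# so underrepresented groups count for more and overrepresented groups for less.
weights = {group: population_targets[group] / sample_shares[group]
           for group in population_targets}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# 18-34 respondents would count about 1.87x; 55+ respondents about 0.78x.
```

In practice, pollsters typically weight on several variables at once (age, gender, education, region, and so on), often with iterative techniques such as raking, but the core idea is the same ratio adjustment.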

“If the data are weighted, it is helpful to know the criteria upon which the weighting was done,” says Klar.

Ideally, she adds, polls should have sample sizes close to 1,000 respondents, “as this allows for smaller margins of error and closer estimates.”

A margin of error — typically around plus or minus 3 percentage points for a sample of 1,000 respondents — is a caveat acknowledging that a sample can never provide a full picture. The American Association for Public Opinion Research (AAPOR) describes error margins as “the range that [a respondent’s] answer likely falls between if we had talked to everyone instead of just a sample.

“For example, if a statewide survey of adults with a margin of error of plus or minus 3 percentage points finds that 58% of the public approve of the job their governor is doing, we would be confident that the true value would lie somewhere between 55% and 61% if we had surveyed to the whole adult population in the state.”
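
The arithmetic behind that range is straightforward. Here is a brief Python sketch using the standard 95 percent margin-of-error formula for a sample proportion; the 58 percent approval figure is simply the AAPOR example above.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample proportion.
    p=0.5 gives the most conservative (largest) margin."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1000)
print(f"Margin of error: {moe * 100:.1f} percentage points")  # ~3.1

approval = 0.58  # the AAPOR example: 58% approve of the governor
print(f"Likely range: {(approval - moe) * 100:.0f}% to {(approval + moe) * 100:.0f}%")
# -> Likely range: 55% to 61%
```

Real polls rarely behave like textbook simple random samples, so the published margin of error is best read as a floor on the total uncertainty rather than a full accounting of it.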

‘No way to be sure a poll is reliable’

Even accounting for ideal sample sizes, weighted data, and margins of error, David Wasserman, senior editor and elections analyst at the nonpartisan Cook Political Report, paints a less rosy picture of polling accuracy.

“There is no way to be sure a poll is reliable because response rates are very low these days,” Wasserman says. “Every pollster is making a different assumption about who will turn out and vote that may or may not turn out to be accurate. You can give the same raw data set to 10 different pollsters and you might get seven or eight different top-line results of a survey based on how the pollsters assume each cohort of voters are going to comprise the electorate.”

If it seems like random polls are popping up everywhere lately — not just the ones from established pollsters like YouGov or The New York Times/Siena College — well, that’s because they are. “There are plenty of newer pollsters with no track record or very limited track record this cycle, as there were in 2022,” Wasserman says. “Democrats are fond of pointing to Republicans flooding the zone with Republican-leaning surveys.”

“There is obviously an effort by mainstream and other pollsters to correct the under-sampling of Trump’s base of support in 2016 and 2020. Pollsters are going about that in different ways but one of the most common ways is to weight their sample by how voters recall voting in the 2020 election.”

The weighting of so-called “recall votes” aims to correct for the reluctance of some voters to admit they backed a past presidential loser. So pollsters weighting by recall vote this cycle would give more weight to respondents who say they voted for Trump in 2020.
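
As a rough illustration, the sketch below weights respondents so that their reported 2020 vote matches the actual 2020 outcome. The recalled-vote shares are invented, and the 2020 figures are rounded national numbers (roughly 51 percent Biden, 47 percent Trump); a real pollster would typically use state-level results and more carefully constructed targets.

```python
# Sketch of recall-vote weighting with hypothetical sample numbers.
actual_2020 = {"Biden": 0.51, "Trump": 0.47, "Other": 0.02}     # rounded national result
recalled_2020 = {"Biden": 0.55, "Trump": 0.41, "Other": 0.04}   # invented sample recall

recall_weights = {c: actual_2020[c] / recalled_2020[c] for c in actual_2020}

for candidate, w in recall_weights.items():
    print(f"Recalled {candidate} voter: weight {w:.2f}")
# In this made-up sample, recalled Trump 2020 voters are up-weighted (~1.15x)
# and recalled Biden 2020 voters are down-weighted (~0.93x).
```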

One thing that unites all good pollsters, according to both Klar and Wasserman, is adherence to standards set by the AAPOR. Members of the group, which counts the most respected pollsters among its ranks, agree to abide by its Code of Professional Ethics and Practices. That includes standards on training, transparency, sampling methods, and weighting.

Reaching voters in the modern age

The dearth of responses to most polls requires careful consideration of how they are weighted, Wasserman says. And while the image of pollsters ringing up landlines is outdated, even reaching people through cell phones, texts, or online panels is a challenge.

Many pollsters have also started using mail to recruit respondents, according to The New York Times — often with the offer of a financial incentive to take an online poll, an approach known as a probability panel. The newer methodology is a way to counter the low response rates of randomly calling potential voters, something only one notable pollster, Quinnipiac, still does.

“It’s common for telephone polls, even if they’re overwhelmingly cell phone samples, to yield less than 1% completed responses,” Wasserman says. “For every 100 phone calls you’re making, you might get one completed survey, sometimes it’s less than that.

“Text-to-web modality is reaching younger voters. But it’s difficult to reach 18-34-year-old voters no matter what mode you’re using, so what ends up happening is pollsters up-weight the respondents they do get in that age bucket to reflect their expected share of the electorate. But pollsters have to make a judgment call about what share they expect.”
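
The scale of that response-rate problem is easy to put in numbers. A back-of-the-envelope sketch, assuming the roughly 1 percent completion rate Wasserman describes:

```python
# Back-of-the-envelope arithmetic for a ~1% telephone completion rate (illustrative).
target_completes = 1000   # respondents needed for a roughly 3-point margin of error
response_rate = 0.01      # about one completed survey per 100 calls

calls_needed = target_completes / response_rate
print(f"About {calls_needed:,.0f} calls for {target_completes:,} completed interviews")
# -> About 100,000 calls for 1,000 completed interviews
```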

The Cook Political Report features a national polling average on its website, drawing the latest data from a range of respected and diverse pollsters, such as Fox News and ABC News/Washington Post. Three times this year, Cook conducted its own battleground-state polls with a large online panel.

“We can’t be positive that our numbers reflect the true state of play, but we made our best effort to come up with an approach that our polling partners, a Democratic firm and a Republican firm, both felt comfortable with,” Wasserman says.

While imperfect, polls still serve an important purpose, Klar insists.

“Polls are great at showing us a snapshot in time: what do people think now,” she says. “Forecasting requires that polls predict the future: Who will actually show up to vote weeks, or months, or sometimes years, from now? Will people change their minds between now and then? If you’re interested in learning what people think today, then polls are tremendously valuable.”

On the other hand, “if you’re looking for a crystal ball to predict the future, you have to take poll results with a grain of salt.”
