After a Tough 2016, Many Pollsters Haven’t Changed Anything
Nearly all of the private pollsters interviewed for this article have, at minimum, begun to do something about education. There is widespread agreement that the industry failed to properly represent less-educated white voters, and that this was part of why last year’s polls were too favorable to Democrats. It’s likely to remain a big issue, at least as long as white voters split so decisively on educational lines. The postelection report of the American Association for Public Opinion Research, the nation’s leading polling association, reached a similar conclusion.
One possible fix is to weight by education, which would give more or less weight to certain respondents to ensure that less-educated voters represent the appropriate share of the poll’s final estimate. For many public polls, weighting by education would make a big difference. It could move an otherwise lightly weighted poll by as much as four percentage points toward the Republicans in the Virginia governor’s race, for example, based on an analysis of the most recent Upshot/Siena College poll. With the Democrat, Ralph Northam, holding a modest lead in most polls, the difference could be enough to flip several surveys in the state.
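The mechanics of weighting by education are straightforward post-stratification. The sketch below is a minimal illustration, not any pollster's actual method; the respondent fields and the target college-graduate share are hypothetical.

```python
# Minimal sketch of post-stratification weighting on education.
# Each respondent is a dict with "college_grad" (bool) and "vote" ("D" or "R").

def education_weights(respondents, target_college_share):
    """Return one weight per respondent so that college graduates make up
    target_college_share of the weighted sample."""
    n = len(respondents)
    college = sum(1 for r in respondents if r["college_grad"])
    noncollege = n - college
    # Weight for a group = its target share divided by its observed share.
    w_college = target_college_share / (college / n)
    w_noncollege = (1 - target_college_share) / (noncollege / n)
    return [w_college if r["college_grad"] else w_noncollege
            for r in respondents]

def weighted_margin(respondents, weights):
    """Democratic margin (D minus R) under the given weights, in points."""
    total = sum(weights)
    dem = sum(w for r, w in zip(respondents, weights) if r["vote"] == "D")
    rep = sum(w for r, w in zip(respondents, weights) if r["vote"] == "R")
    return 100 * (dem - rep) / total
```

With a toy sample that is 69 percent college graduates, reweighting to a 50 percent target down-weights each graduate and up-weights each non-graduate, and the topline margin moves toward the candidate favored by the less-educated group.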
There are a few pollsters in Virginia — like The Washington Post and Quinnipiac — that do weight by education, and did so before the 2016 presidential election. Most others weren’t weighting by education before 2016 and aren’t weighting by it today.
The disparity can be extreme. Sixty-nine percent of likely voters were listed as college graduates in the final Christopher Newport University survey of Virginia, well above the range of 49 percent to 52 percent in Upshot estimates. Mr. Northam led by seven points in the Christopher Newport University poll. A Gravis poll, which showed Mr. Northam ahead by five points, had four-year college graduates at 60 percent of the electorate. A tied Roanoke poll had the college-educated share of the electorate at 57 percent.
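How much those differing college-graduate shares matter is simple arithmetic: the overall margin is just each group's margin weighted by its assumed share of the electorate. The group-level margins below are illustrative numbers, not figures from the polls above.

```python
# Back-of-the-envelope: how the assumed college-graduate share moves a
# poll's topline. Group margins are hypothetical, for illustration only.

def overall_margin(college_share, margin_college, margin_noncollege):
    """Overall D-minus-R margin in points, given each group's margin."""
    return (college_share * margin_college
            + (1 - college_share) * margin_noncollege)

# Suppose the Democrat leads by 13 points among college graduates and
# trails by 10 among everyone else.
for share in (0.69, 0.57, 0.50):
    print(f"{share:.0%} college grads -> D{overall_margin(share, 13, -10):+.1f}")
```

Under these assumed group margins, moving the college-graduate share from 69 percent down to 50 percent cuts the Democrat's lead by more than four points, which is roughly the size of the weighting effect described above.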
The relative lack of change among public pollsters doesn’t mean that the pre-election polls in Virginia or elsewhere are doomed to miss by as much as they did in 2016. By most accounts, the 2016 polling error took a perfect storm: Just about everything that could break Mr. Trump’s way appears to have done so. Next time, it could be the Democrats who beat turnout expectations and sway undecided voters. There’s also no guarantee that the stark educational divide of the 2016 presidential election will be as prominent without Mr. Trump on the ballot, or as important with midterm voters, who tend to be more educated.
But the lack of change hints at a bleak possibility: a mismatch between the scale of the challenge facing the survey research industry and the capacity of many individual public pollsters to respond.
Education Is a Mystery
It might seem obvious that the 2016 election would drive public pollsters to adopt big changes. But many individual public pollsters were reasonably satisfied by their results, even though the industry as a whole seemed to get it wrong.
It’s not completely unreasonable. Last year’s polling error was an odd one. It was almost perfectly distributed across the battleground states to maximize the electoral consequences. There were large errors in a small number of states where Mrs. Clinton had a big lead, like Wisconsin and Michigan. But elsewhere the errors were more typical, or even nonexistent. Virginia is one such state: Mrs. Clinton led by five or six points in all the final polling averages; in the end, she won by 5.3 points.
For pollsters who didn’t take any surveys in the Midwest, most or all of their results were probably within the margin of error. Their results might have leaned toward Mrs. Clinton, like everyone else’s, but they could explain that away: There is a lot of evidence that undecided voters broke toward Mr. Trump. And most public pollsters didn’t conduct enough polls, late enough in the race, to be sure of just how well or poorly they really did.
Some pollsters did take enough surveys late enough in the race to merit a re-examination, but they didn’t think education was decisive. Patrick Murray of Monmouth University, for instance, found that weighting by education would have explained only one percentage point of bias in his surveys.
But perhaps the bigger issue is that education is a mystery: It’s hard for pollsters to even know if they’re getting it wrong, let alone to fix it.
“I don’t weight to education because it’s pretty hard to find accurate benchmarks for a likely-voter profile based on education,” said Matthew Towery of Opinion Savvy, a firm that often conducts state polls using automated calls. “That’s not something that’s necessarily easily ascertained most of the time.”
He’s right. It’s not easy to weight by education, and it’s especially tough for pollsters who rely on voter registration files — the “big data” that powers most campaign polling and analytics. Many pollsters use voter file data, rather than dialing telephone numbers at random (known as random digit dialing, or R.D.D.), to conduct their surveys and estimate the composition of the electorate.
Voter files offer pollsters rich demographic and political data to help them weight their survey and estimate the likely electorate. R.D.D. surveys have their own advantages, but the loss of this data is a considerable disadvantage, especially in a low-turnout election. Unsurprisingly, the Virginia polls that at least partly use R.D.D. have occasionally produced somewhat outlying results.
But the voter file has a weakness: Education is not reliably included on public voter files. To weight by education, pollsters using the voter file have to look elsewhere — something pollsters who trust the voter file are generally reluctant to do. Not coincidentally, only two of the nonpartisan Virginia pollsters using a voter file appear to weight by education; all but one of the pollsters using random digit dialing in Virginia do, most likely using the census.
What’s the alternative? A pollster could ask voters how much education they have in order to match estimates for the likely electorate from some other source, but there isn’t a definitive benchmark for weighting. There still isn’t universal agreement about how many voters in 2016 had a college degree: There’s a wide 10-point gap between the exit poll results and the census postelection survey.
The combination of uncertainty about the true educational composition of the electorate and reluctance to weight voter-file surveys by self-reported measures has been enough to deter many public pollsters who use voter files from weighting by education.
These same factors pose a challenge to private partisan pollsters. But the pace of change among them has nonetheless been far greater.
All but one of the 10 private partisan pollsters interviewed for this article have begun to do something about education, whether by weighting or by other means.
“This is the first time ever that I’ve weighted education, getting ready for some elections in 2018,” said Glen Bolger, a pollster for Public Opinion Strategies, a prominent Republican polling firm, “because education is a pretty damn good predictor.”
For private pollsters, there is a lot more acceptance of the problem — and more acceptance that they just got it wrong, especially on the Democratic side. The big private polling firms conducted more than enough surveys, in enough states, late enough in the race, to have no illusions about whether their results were meaningfully biased.
Some of them were already doing something about it before the election, but now they’re generally doing more. There’s still uncertainty about the exact way to do it, but most private pollsters are persuaded by the necessity of change — uncertainty is no longer an excuse to do nothing.
“It’s kind of a cop-out,” said Anna Greenberg, a Democratic pollster at Greenberg Quinlan Rosner Research. “You can’t say, ‘I can’t deal with education because I don’t know the exact weight of it.’ You wouldn’t not weight on race because you couldn’t figure it out.”
Is It Enough?
The debate among private pollsters, especially on the Democratic side, has instead advanced to a different stage: whether weighting by education is enough.
The cause for debate is simple. Even weighting by education wouldn’t have been enough to give Mr. Trump the polling edge. One possibility, often advanced by more traditional firms, is that the residual error could be explained by turnout or shifts among undecided voters.
But there’s a more ominous possibility. It wasn’t just that polls didn’t capture enough less-educated white voters; it was also that they didn’t get the right less-educated white voters.
Perhaps the polls missed less-educated voters in the most rural areas. Maybe they missed those who don’t work in office jobs, in old-economy sectors like manufacturing or agriculture. Or maybe they missed voters with low levels of social trust or a lower propensity to volunteer, who have long been less likely to respond to surveys and may have been more likely to support Mr. Trump in 2016.
If true, the problem is a lot more difficult to solve. Weighting by education would not be an adequate remedy, since “baristas aren’t exchangeable with factory workers,” as David Shor of Civis Analytics puts it. Civis is the Democratic firm that has probably been the most vocal about the possibility that the polling industry suffers from bigger nonresponse challenges than can be fixed by weighting by education.
The possibility of a deeper kind of nonresponse bias is now broadly held in the Democratic analytics world, but it’s nowhere near a consensus across the polling industry. That’s in part because there’s limited public evidence to support it at this stage. It’s also because many traditional pollsters are skeptical of analytics firms with an economic incentive to argue that traditional polling is broken or even impossible. But traditional pollsters mentioned some of these issues as well.
Research in this area seems poised to continue for the foreseeable future. But most of these efforts are occurring out of the view of public polling firms. If something is more profoundly wrong with public polling than weighting by education alone can address, it’s hard to see how many public polling firms will be able to do anything about it.