

The search for “more accurate polls” is nonsense and harms America
Can you believe it? The morning before their October 13th game, the Chicago Bears had a 57 percent chance to defeat the Washington Commanders, according to one website's analysis! But the Commanders won! I know everyone is pleased and happy that someone, somewhere crunched the numbers for a game between two teams with a combined record of 3-7 and determined that the Bears are a little better and might win, but also might not.
And that means the prediction was correct anyway, because they did tell you a Bears win was only 57% likely. If all you do is assign a likelihood to a single event, you will never be wrong.
This keen analysis is brought to you by FiveThirtyEight (a.k.a. 538), a website devoted to exhaustively predicting things based on fiddly bits of data. It is everywhere in sports these days, as statisticians crunch numbers to show “win probability” onscreen during games in order to let you know, in percentage form, how close the game is, because I guess looking at the score is not scientific.
All of it is a fabrication. The percentages are determined by simulating thousands of Bears-Commanders games and then seeing how often the Bears won (in this case, 57 percent of the time). In real life, of course, the teams only play once, so crunching the numbers to see who will win is pointless because if you wait a while, you will find out who did win.
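For the curious, here is a minimal sketch of what that kind of simulation looks like. The team "strength" ratings and the amount of game-day randomness are made up for illustration; they are not 538's actual model inputs.

```python
import random

def simulate_game(bears_strength: float, commanders_strength: float) -> bool:
    """Simulate one game as a noisy comparison of team strengths.

    Returns True if the Bears win the simulated game.
    """
    bears_score = bears_strength + random.gauss(0, 10)        # game-day randomness
    commanders_score = commanders_strength + random.gauss(0, 10)
    return bears_score > commanders_score

def win_probability(n_sims: int = 10_000) -> float:
    """Run many simulated games and report the share the Bears win."""
    # Hypothetical strength ratings, chosen so the answer lands near 57%.
    bears_wins = sum(simulate_game(21.5, 19.0) for _ in range(n_sims))
    return bears_wins / n_sims

print(f"Bears win probability: {win_probability():.0%}")  # roughly 57% for these made-up inputs
```

Run it and you get a tidy-looking percentage; play the game once and you get an actual result.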
When it comes to sports, this is merely annoying. When it comes to democracy, it is poisonous.
And 538 does politics. The site comes from Nate Silver, a sabermetrician who began by analyzing baseball numbers to find hidden patterns and then moved on to political polling. The site works like any other process driven by computer analysis of large numbers: It crunches various polling numbers, applies modeling to account for the accuracy and methodology of the polls, and spits out numbers about political races—which then appear as line graphs, tracking day-by-day performance of various candidates.
538 is very proud of its modeling. It grades polling firms A through F, and tells readers when polls questioned registered voters, likely voters, or just random adults. 538 argues again and again that what they say is based on evidence; the site claims “We’re not afraid to say we don’t know.” And what 538 says is based on evidence—but it doesn’t present that evidence in a way that shows how much guesswork went into it.
The big draw of 538 is its charts and graphs. It shows each U.S. Senate race, each governor’s race, and the overall odds of one party or another winning Congress in November. They update these graphs and charts hourly.
These graphs sure as hell don't look like a combination of data based on modeling and guesswork. They look like hard information.
Showing one poll as a bar graph is a reasonable way to present the information. Presenting shifting line graphs over time is less defensible, especially when the lines don't simply track the polls themselves. The lines tracing, say, the Nevada Senate race between Catherine Cortez-Masto and Adam Laxalt are modified by other factors: how the modeling works, and how 538 evaluates the pollsters.
On September 26, for example, 538 had Cortez-Masto up by 0.7% in the race. At that point, the last four polls had Laxalt up by one, one, four, and three points. But somehow, Cortez-Masto was still "up." Computer modeling accounts for the difference, but few people will stop to examine the evidence. They just check who's up.
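To see how that can happen, here is a minimal sketch of a weighted polling average. Every poll, grade, and weight below is invented for illustration; 538's real model also adjusts for house effects, sample sizes, and "fundamentals."

```python
# Each entry: (margin, weight). Margin is Cortez-Masto minus Laxalt, in points.
# The four most recent polls lean Laxalt, but older, highly rated polls that
# still carry weight lean Cortez-Masto. All numbers are hypothetical.
polls = [
    (-1.0, 1.0),  # recent poll, Laxalt +1
    (-1.0, 1.0),  # recent poll, Laxalt +1
    (-4.0, 0.6),  # recent poll, Laxalt +4, lower-rated pollster downweighted
    (-3.0, 0.6),  # recent poll, Laxalt +3, lower-rated pollster downweighted
    (+4.0, 0.9),  # older A-rated poll, Cortez-Masto +4
    (+5.0, 0.9),  # older A-rated poll, Cortez-Masto +5
    (+3.0, 0.8),  # older A-rated poll, Cortez-Masto +3
]

weighted_margin = sum(m * w for m, w in polls) / sum(w for _, w in polls)
print(f"Weighted average: Cortez-Masto {weighted_margin:+.1f}")  # prints +0.7
```

The last four polls all lean one way, and the average leans the other, entirely because of how the weights were chosen.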
Or take the Ohio Senate race. At one point, a Marist College poll (ranked “A”) showed J.D. Vance ahead by one point. A few days later, a Siena College poll (ranked “A”) showed Tim Ryan up by three. Any normal person would look at those results and say, “Wow! This race is really close!” 538’s conclusion on Sept. 22, however, was more technical: Ryan by 0.3%!
Campaign veterans know there is no practical difference between saying "It's really close" and saying "Ryan's up by 0.3%." Both candidates in Ohio are campaigning as though the race is a toss-up. Ryan has been reminding people that Donald Trump told J.D. Vance "all you do is kiss my ass to get my support" and that Vance was kinda into it; Republicans have been pouring millions of advertising dollars into Ohio to remind people of almost anything else.
The problem is that average folks reading the composite polls and watching the red and blue tracking lines on 538 don’t know it’s a guess. The down-to-the-decimal exactitude gives these analyses the veneer of authority. It looks extremely accurate, as though what we see is a certainty rather than an estimate. 538 is obsessed with whether polls are accurate.
The relevant question, however, is what do polls do? If the only poll that matters is the one on election day, then what is the point of trying to track a far less official sense of public opinion about the choice? The answer is obvious: It allows folks in the media and online to “follow” the popularity of each candidate, and thereby report on the race as a contest of favorability, rather than policy or basic competence. Instead of educating and informing voters about what each candidate plans to do in office, they inform voters about who is ahead.
Guess which approach leads to the kind of red and blue tribalism that makes each election more existential than the last.
Then there is the Bears-Commanders problem: If they frame things as likelihoods, there is no real way to check their work. Reassuring ourselves that the Bears are likely to win is not the same as the Bears winning.
And elections involve much more than public opinion. You can poll voters—but Election Day also turns on things like voter education, long lines, weather, and the relative proximity of a polling place. If Russian hackers blanket Nevada with misinformation campaigns two days before the election, all the fancy polling might not make a difference.
Not that our obsession with polls and numbers isn't making a difference of its own. Much has been made of the pollsters' failure to predict the 2016 election. Less attention has been paid to how sites like 538 presented that election: not as an extremely close race in which Trump and Clinton were usually within spitting distance of each other, but as a likelihood of victory. Clinton was up by a few percentage points in the polls, but her odds of winning were anywhere from 75 to 98 percent.
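The jump from "up by a few points" to "a 75 to 98 percent chance" comes from treating the polling lead as the center of a probability distribution. Here is a rough sketch, assuming a simple normal model; the three-point lead and the error sizes are stand-ins, not 538's actual parameters.

```python
import math

def win_probability(lead: float, error_sd: float) -> float:
    """Chance the leader wins, assuming the final margin is normally
    distributed around the polling lead with the given standard deviation."""
    # Normal CDF written with the error function, so no extra libraries are needed.
    return 0.5 * (1 + math.erf(lead / (error_sd * math.sqrt(2))))

# A hypothetical three-point lead under three different assumptions about polling error.
for sd in (4.5, 3.0, 1.5):
    print(f"Lead of 3 points, error sd {sd}: {win_probability(3, sd):.0%} chance of winning")
# prints roughly 75%, 84%, and 98%
```

The same three-point lead reads as a 75 percent chance or a 98 percent chance depending entirely on how much polling error you assume, which is exactly the kind of guesswork the graphs never show.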
How many Democrats saw those graphs before the 2016 election, figured Clinton was up fifty points instead of two, and decided they didn’t have to vote?
There is also a question about historical modeling. A historical model assumes that the last election is a guide for the next election. But think about what has happened since the last midterm election: a worldwide pandemic, an insurrection against the peaceful transfer of power, the end of Roe, the Russian invasion of Ukraine. Many candidates have had feelings about these issues. Indeed, some candidates are explicitly running to overturn the 2020 election and to cozy up to Putin's regime. The website actually hides information, because its graphs cannot show Vance's sidelong support for January 6 rioters, or Kevin McCarthy's plan to cut aid to Ukraine, or John Fetterman's inability to wear a suit. These positions are not on display at 538. Radicalism is hidden by the graphs.
What becomes important is not what candidates will do when they get into office but "Who's up?" The questions are not about the government or the country; they are about the campaign. The election actually becomes about those red and blue lines, not what the lines represent.
Nate Silver and his silly website fundamentally misunderstand information. They are concerned with the accuracy of the data rather than with whether it tells us anything important. It is the political equivalent of checking the footnotes on a history paper arguing that Abraham Lincoln could have been a Lizard Person.
There is the choice we will make on November 8, 2022, and that will be it. I wish 538 were more attuned to what is at stake than to who is ahead. May you be better prepared than they are.
Adam Jortner is the Goodwin-Philpott Professor of Religion in the History Department at Auburn University. He is the author of the Audible series Faith of the Founding Fathers and was part of the creative team behind Where in Time is Carmen Sandiego?