WHEN BRITISH PRIME MINISTER Benjamin Disraeli first complained about the three kinds of lies ("lies, damned lies and statistics"), polls did not yet exist. Disraeli probably couldn't have imagined how appropriately his aphorism would apply to elections a century later, when the uncertainty of statistics would combine with the fog of political war to confound us all further.
Today, polls often reduce rather than enhance electoral clarity. And they've become as controversial as the campaigns themselves. With the availability of detailed poll data on the Internet, and an army of politicos and bloggers to comb the fine print for telling arcana, each round of polling numbers is now accompanied by a chorus of commentary as to what those new numbers mean.
Case in point: recent dueling polls by the two gold standards of measuring public opinion, Gallup and the Pew Research Center, diverged widely. Gallup gave Bush a 13-point advantage. Pew's survey, released the same day, had Bush with a 1-point lead (a more recent CNN/USA Today/Gallup Poll of likely voters gave Bush an 8-point lead). Left-wing blogs quickly identified problems with the Gallup Poll. Right-wingers had their own criticism of Pew's information. The Kerry campaign pointed to the Pew numbers and told the photographers to start getting ready for a photo finish. Bush's people tried to ride the Gallup to a multi-length lead.
As usual, the Bush camp stretched the truth much further, but the reason polls have succumbed to so much spin on both sides is that they are, unfortunately, easily spinnable. Polls are imprecise. The process is susceptible to bias, both statistical and political. Results can be shaped as easily by methodology as by actual changes in opinion. Even the most carefully constructed poll yields only a range of possibility; and when the election is close, as it was in 2000 and will be again in 2004, Bush's and Kerry's respective ranges overlap, making all the soothsaying surrounding polls somewhat moot. The question is not what polls mean, but whether they have any meaning at all.
To be fair to Carl Friedrich Gauss, Disraeli's remarks were only half-true. When carefully applied, statistics is fairly reliable. Without the discipline, virtually none of modern science would be possible. It's in the realm of interpretation and faulty application that statistics gets its taint of doubt.
Here's how statistical theory works in polls. (Numerophobes: Fear not! There will be no equations or tests, and the background, I promise, is easy to follow.) Out there in the voting population is some proportion of people who at this moment are thinking of voting for Bush or Kerry. Thanks to the insights of Gauss and the magic of a mathematical principle called the Law of Large Numbers, we know that a decent-sized cross section of those people can provide a rough estimate of the voting plans for the entire population. The bigger the cross section, the better the estimate. This is the tried-and-true discipline of sampling, and incredibly it means that a thousand people or so can speak for a hundred million.
Or can they? For the theory to work in practice (always the kicker with theories), the sample has to represent the larger population it is intended to describe. If a poll has more Republicans than Democrats, or reaches an unusual number of retired people, college students, African-Americans or any other subdivision within the electorate that has a specific voting tendency, then the answer will be skewed. This is called sampling error, and it is a fact of statistical measurement.
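Both halves of the story, the power of random sampling and the damage done by a skewed one, can be seen in a small simulation. The electorate, sample sizes and skew below are invented for illustration; a truly random 1,000-person sample carries a margin of error of roughly plus or minus 3 points:

```python
import random

random.seed(1)

# A hypothetical electorate of 100,000 voters, 52% of whom back candidate A
# (1 = supports A, 0 = supports B). These numbers are illustrative.
population = [1] * 52000 + [0] * 48000

def poll(voters, n):
    """Estimate support for A from a simple random sample of n voters."""
    sample = random.sample(voters, n)
    return sum(sample) / n

# Law of Large Numbers: bigger random samples track the true 52% more tightly.
for n in (100, 1000, 10000):
    print(f"random sample of {n:>6}: estimated support = {poll(population, n):.3f}")

# A skewed 1,000-person sample: 600 respondents drawn only from B's supporters,
# plus 400 drawn at random. The estimate collapses far below the true 52%.
b_supporters = [v for v in population if v == 0]
skewed = random.sample(b_supporters, 600) + random.sample(population, 400)
print(f"skewed sample of 1,000: estimated support = {sum(skewed) / len(skewed):.3f}")
```

The point is that no amount of extra volume rescues a biased draw: making the skewed sample ten times bigger just gives a more precise wrong answer.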
"That's the difficulty," explains Lynn Vavreck, a professor of political science at UCLA who specializes in quantitative methods. Polls, which rely on calling people and getting them to answer questions, have many sampling pitfalls. The polling firms take a randomized list of 5,000 numbers and start calling in order to get their 1,000 responses. "From a scientific standpoint, you need to reach every person on that list to avoid a selection problem. But they throw out cell phones. That might have some effect. And the bigger issue is that not everyone answers. Response rates are getting lower and lower."
In the end, the respondents are the people who picked up the phone and felt like talking. And who those people are could be influenced by all kinds of things: the days covered by the poll, the wording of the questions, the timing of the calls, a holiday weekend, or whether or not Bush is on television. Take the 10-point lead Bush held in the Newsweek poll right after the Republican National Convention, which was conducted partly during the convention and may have captured more Republicans who were watching at home and eager to talk politics.
Even if pollsters reached everyone on their lists, however, more methodological trouble lies in wait in the form of the widely applied but controversial likely-voter screen.
Many polls report two sets of numbers. The headlines that trumpet dramatic figures in bold type usually refer to the response from likely voters. More quietly, the same polls also provide an additional breakdown of opinion from registered voters, and you will sometimes find those numbers tucked away somewhere down the column or well past the jump.
The distinction is critical, however, because there is an intrinsic fluctuation caused by the way polls determine who is and isn't a likely voter. "The problem is, and this is particularly true of Gallup, that they use an elaborate screening mechanism," explains Ruy Teixeira, a fellow at the Center for American Progress who has been a vocal critic of the likely-voter model. Gallup asks seven questions that try to gauge involvement and interest in the campaigns. "This captures people who are most energized by the campaign at any given time. And that has party implications, because at different times, different people are energized."
Again, the period after the RNC stands as an example: Bush supporters are excited, so more Republicans than Democrats turn up in the likely-voter samples. This means, says Teixeira, that much of the movement in the Gallup Poll "is due to who is counted as a likely voter rather than movement in public opinion."
Gallup recognizes that its likely-voter model encourages wider swings, but says that swing is what it wants to measure. "Gallup looks for movement," explains Frank Newport, the editor of the Gallup Poll. "Ours is more sensitive than other polls, so that we can see changes." Jim Norman, the polling director for USA Today, which along with CNN has an exclusive arrangement with Gallup, also acknowledges that their system at certain times exaggerates the turnout from one party or another.
Kerry partisans note that the exaggeration this year has favored the Republicans. Some have been making the conspiratorial observation that Gallup's CEO, Jim Clifton, is a Republican donor, and they suggest that the organization puts its thumb on the scales. Every polling professional I spoke with dismissed that theory out of hand. But in this election cycle there is no question that, for whatever reason, Republicans have been overrepresented by Gallup and other likely-voter polls.
Take the Gallup report with Bush's 13-point advantage, for instance. Here was the party identification among likely voters, with 7 points more Republicans than Democrats in the sample:
GOP: 305 (40 percent)
Democrat: 253 (33 percent)
Independent: 208 (28 percent)
Actual voter turnout, however, has never looked like this. In 2000, the proportion was even: 34 percent Republican, 34 percent Democrat, and 33 percent Independent. For the 20 years before that, more Democrats than Republicans voted in national elections. Adjusting Gallup's sample composition to reflect that kind of turnout in 2004 would slash Bush's lead. That would bring it more in line with Pew's poll, whose sample had slightly more Democrats than Republicans.
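The arithmetic behind that adjustment is simple enough to sketch. The party mixes below come from the figures above; the candidate-support rates by party are hypothetical, since the poll's actual crosstabs are not given here, but they are chosen so the Gallup mix produces roughly the reported double-digit lead:

```python
# Hypothetical candidate support by party ID (illustrative only; not the
# poll's published crosstabs).
support = {
    "bush":  {"gop": 0.93, "dem": 0.07, "ind": 0.55},
    "kerry": {"gop": 0.05, "dem": 0.90, "ind": 0.40},
}

def lead(composition):
    """Bush's lead in points under a given party-ID mix of the sample."""
    total = sum(composition.values())
    bush = sum(share * support["bush"][p] for p, share in composition.items())
    kerry = sum(share * support["kerry"][p] for p, share in composition.items())
    return 100 * (bush - kerry) / total

gallup_sample = {"gop": 0.40, "dem": 0.33, "ind": 0.28}  # the 13-point poll
even_turnout  = {"gop": 0.34, "dem": 0.34, "ind": 0.33}  # the 2000 turnout mix

print(f"Lead with Gallup's sample mix: {lead(gallup_sample):.1f} points")
print(f"Lead with 2000-style turnout:  {lead(even_turnout):.1f} points")
```

Holding opinions fixed and changing only the party mix cuts the lead roughly in half, which is the critics' whole point: the "movement" can live in the composition of the sample rather than in the electorate.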
Gallup's explanation for the GOP tilt in its sample is that it doesn't weight based on party identification, because it views that as a fluid attitude rather than a firm characteristic like gender. "We do carefully weight the sample for hard demographic data like age, region, ethnicity and so on," says Newport. "But people change their minds about affiliation, so we treat that as a variable that moves."
Michael Dimock, the research director for Pew, said the same about their process. "We don't weight for party ID either, because it shifts from one poll to the next," he said.
But if both the Pew and Gallup polls were conducted with similar methodologies over mostly overlapping time periods, I asked, why would they have such different samples and results? "Yes," Dimock responded, "that's a good question."
Neither did Newport have an explanation. But the discrepancy was caused by something. It could have been the likely-voter screen, or some other bias in the questions or time frame. Or the culprit may be what professor Vavreck called the reality of sampling, which is that "you just don't always get a great draw."
THIS MAY HAVE BEEN the case for the Gallup Poll, which looks like what statisticians call an outlier: a result that falls well outside the range of comparable measurements, often a sign of sampling error. Defending his poll, Frank Newport said that Gallup's numbers were not out of line with other polls, none of which tilted toward Kerry. That is true, but it is also true that no other poll showed Bush with a 13-point lead.
The true state of opinion, most likely, is somewhere in the middle. "You need to look at a preponderance of the evidence," professor Vavreck advises about gauging the information contained in polls. "Look at several and get a general sense." Newport himself suggested doing the same, and cited the Web site Real Clear Politics, which compiles an average of available polls. Their current estimate: Bush is up by 5 points.
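The averaging itself is trivial. Pooling just the four leads cited in this article gives a feel for the approach, though a compilation like Real Clear Politics averages many more surveys and filters by date, which is why its figure differs:

```python
# Bush's reported leads, in points, from the polls mentioned in this article.
leads = {
    "Gallup": 13,
    "CNN/USA Today/Gallup": 8,
    "Newsweek": 10,
    "Pew": 1,
}

average = sum(leads.values()) / len(leads)
print(f"Average lead across {len(leads)} polls: {average:.1f} points")  # 8.0
```

Even this crude mean does what a single poll cannot: the Gallup outlier and the Pew outlier pull against each other instead of each driving its own headline.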
This is not, unfortunately, how the media usually approach poll numbers. Not only do news stories never question or even describe methodology, but reporters rarely create context by noting that the poll they're reporting is just one of many.
And herein lies yet another systemic polling problem: Each network or paper commissions its own survey and then acts like it got some kind of scoop. They want to get their numbers out first, and tout them as news, so they don't mention that the other guys' numbers are different. The competition between proprietary polls forces the media, when reporting on the extremely sensitive state of national elections, to be essentially un-journalistic: They use only a single source (their own poll) while ignoring other, potentially conflicting information.
It's a phenomenon professor Vavreck calls incentive incompatibility: the candidates, the public and the media all want different things. USA Today and CNN are trying to sell their products. Candidates are looking for sympathetic coverage. And voters need information. Those things often don't line up.
So when USA Today rushes to a headline that says BUSH CLEAR LEADER IN POLL and declares a definitive Bush surge, the voter is getting shortchanged.
As a remedy, Ruy Teixeira thinks that poll reporting should include more information, with full disclosure of other polls, and perhaps also provide the reader with additional detail about sample composition, weighting and other methodology.
Newport agrees, saying it wouldn't be hard to add a sentence or two explaining the context of a particular poll or the nuance of its results. "The burden is on journalists," he says. "They should be having the discussion we're having."
Especially when the race looks like it will be another squeaker. Recall that in 2000, the vote in Florida and several other states was a statistical tie. These kinds of razor-thin margins are another reason to tread lightly with poll information. When the country becomes so evenly polarized in many places, and public opinion moves within the margin of error of statistical sampling, even the best poll is not precise enough to have much predictive power.
This is what Karl Sigman, a professor of applied mathematics at Columbia University, discovered in 2000. He and a colleague fed state-by-state poll data into a computer simulation that assigned probabilities for each candidate to win the Electoral College. Sigman's model was an innovative method for presidential forecasting, and it came closer than estimates by most of the pundits or political professionals.
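The core of such a simulation fits in a few lines. This is a toy version, not Sigman's actual model: the states, electoral votes, margins and error figures below are invented, and real forecasts would use all 50 states and more careful error modeling:

```python
import random

random.seed(0)

# Toy inputs: state -> (electoral votes, Bush's poll margin in points,
# margin of error). All figures are invented for illustration.
states = {
    "A": (25,  0.5, 4.0),
    "B": (10, -2.0, 4.0),
    "C": ( 5,  3.0, 4.0),
    "D": (15, -0.5, 4.0),
}
TOTAL_EV = sum(ev for ev, _, _ in states.values())

def simulate_once():
    """Draw one plausible election: jitter each state's margin, tally EVs."""
    ev_bush = 0
    for ev, margin, moe in states.values():
        # Treat the reported margin of error (a ~95% interval) as 2 std devs.
        if random.gauss(margin, moe / 2) > 0:
            ev_bush += ev
    return ev_bush

def win_probability(trials=10000):
    """Fraction of simulated elections in which Bush clears a majority of EVs."""
    wins = sum(simulate_once() > TOTAL_EV / 2 for _ in range(trials))
    return wins / trials

print(f"Estimated probability of a Bush Electoral College win: {win_probability():.2f}")
```

Notice what the output is: a probability, not a call. When every state margin sits inside its own margin of error, the honest answer such a model gives is "could go either way," which is exactly the lesson of 2000.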
But even with polling figures released the day before the election, the model was slightly off. Ultimately, what 2000 reminded us is that polls are not like taking samples of bacteria in a lake; they are an attempt to gauge human intentions, which are messy and resist quantification. That basic ambiguity made it impossible to capture the prospect that, at that late date, some people would still change their minds. Or not vote after telling pollsters they would. Or come out in unexpected force in one state but not another. In 1984, this wouldn't have mattered; no last-minute shift would have helped Mondale overcome a 20-point deficit. But in 2000, and probably again this year, the shift that tips the scales may not be easily detectable by polls.
That leaves voters, candidates and the media in the strange situation of poring over an endless series of numbers with only tentative analytical value. And almost no democratic value, since the constant horserace bulletins divert attention from substantive reporting about the actual issues at stake.
This is what Canada realized in the early '90s, when it prohibited poll reports in the run-up to federal elections. This year, the Canadian Broadcasting Corporation decided not to commission any polls, citing the detriment of pure horserace coverage. A legislative polling ban would never work in the United States (Canada's experiment was in fact short-lived, as its Supreme Court overturned the law on the grounds that it restricted freedom of expression, which is precisely what would happen here), and it's hard to imagine any of our news organizations following in the brave footsteps of the CBC. But maybe we voters can create our own, self-imposed collective ban. Let us avert our eyes from the polls and shun the temptations of numerology. No more playing obsessively with the L.A. Times' interactive electoral map. It just might be that if you get out the vote, the polls won't matter.