High stakes for polling companies in federal election
Canada’s federal election is approaching fast, and there’s little sign of a break in the three-way race. But it’s not just the politicians feeling the heat: pollsters are anxious too, after their credibility suffered serious blows in recent elections. Richard Johnston, Canada Research Chair in public opinion, elections, and representation in UBC’s department of political science, explains why the polling industry is in crisis.
How reliable are political polls?
The incidents of serious error by the industry seem to have gone up. In Canada we’ve had some particularly spectacular examples in the last few years, so there’s no question that confidence in the industry is shaken.
The two most spectacular were Alberta and B.C., in 2012 and 2013 respectively. In 2012, it looked as if Wildrose would win the election, but instead the Progressive Conservatives won a comfortable majority under Alison Redford. And in B.C., the polls pretty much agreed that the NDP would win the 2013 election by five or six percentage points. In fact, the Liberals won by five or six percentage points.
How important is this election for pollsters?
This is really important for them, and they’re naturally anxious. Election polls are advertising for the polling firms. Market research and policy research are what pay the bills, but only election polling gives firms a bridge to the real world, a way to show how credible the rest of their work is and to prove to clients that they are reliable.
So far, this is not an election that is enabling the pollsters to hide. In the 2006 and 2011 federal elections, pollsters were off by about the same amount each time, but in each case they did not get the parties’ places in the pecking order wrong. If enough space opens up between the three that pollsters can confidently predict that one party is going to win, they’ll probably be right and we won’t notice. But at this moment, there isn’t a clear frontrunner.
There’s also some suggestion that polling firms are starting to herd, and their predictions are converging. They’re kind of looking over their shoulder at each other. If you look at certain elections, there is less variation across polls than you would predict from sampling error. The suspicion lingers that they’re herding and following each other—and they could be following each other off a cliff.
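To make the herding idea concrete, here is a minimal sketch of the kind of check analysts run: compare how much published polls actually disagree with how much they would disagree if each were an independent random sample. The poll numbers and sample sizes below are invented for illustration, and real herding analyses are considerably more careful.

```python
# Rough herding check: compare the spread of several late-campaign polls for one
# party against the spread that sampling error alone would produce.
# All figures below are made up for illustration.
import math
import statistics

polls = [  # (reported support in %, sample size)
    (31.0, 1000),
    (31.5, 1200),
    (30.8, 900),
    (31.2, 1500),
]

shares = [pct / 100 for pct, _ in polls]
mean_share = statistics.mean(shares)

# Observed spread of the reported numbers, in percentage points.
observed_sd = statistics.stdev(shares) * 100

# Expected spread if every poll were an independent random sample:
# average of the per-poll binomial standard errors, sqrt(p(1-p)/n).
expected_sd = statistics.mean(
    math.sqrt(mean_share * (1 - mean_share) / n) * 100 for _, n in polls
)

print(f"observed spread: {observed_sd:.2f} pts; expected from sampling alone: {expected_sd:.2f} pts")
if observed_sd < expected_sd:
    print("Polls are more tightly clustered than chance alone suggests (possible herding).")
```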
What is happening when polls go awry?
The truth is, we don’t know. Polling firms themselves are very reluctant to open up because they don’t want to reveal their proprietary secrets to other people in the industry.
We do know that there are certain trends in polls. Thirty years ago, polls conducted by telephone were a pretty efficient method of sampling public opinion. In North America, at least, every household had a landline, so you could potentially reach every eligible member of a population in a place that was uniquely identified with them: their home. This got you as close as realistically possible to giving every member of the population an equal probability of selection.
That situation has broken down. Even before cellphones became important, the willingness of people to complete interviews dropped dramatically as households grew frustrated with telemarketing calls. Then, the cellphone revolution arrived.
There are many problems with cellphones, but the biggest is that you can’t uniquely identify them with a residence. Cellphone conversations are often conducted in places that are not private, they tend to be low-attention conversations, and the interviews cost more to complete than on landlines because fewer people finish them. In addition, pollsters risk reaching teenagers and preteens, who can’t vote.
Polling companies are moving away from surveys over the phone and shifting to the web. For example, the Angus Reid Forum is entirely online. Pollsters ask people to join their online panels, and panel members are rewarded for completing surveys.
Do you see any problems with newer polling methods?
Whether you’re building a panel online or engaging in random-digit dialing, all of these methods have become less representative than in the past.
In the telephone world, your likelihood of getting a representative sample increasingly rests on the willingness of prospective respondents to actually cooperate. On the web, it’s the other way around. You’re recruiting people voluntarily up front, and then you attempt to create a representative sample out of that group.
Either way, you get samples that are in and of themselves not representative of the population. Rather, they reflect how accessible people are through the medium. One of the things survey researchers often do is adjust their sample by weighting different cohorts. But when they do that, the real margins of error become larger than the ones they report.
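One common way to see why weighting widens the real margin of error is Kish’s effective sample size, which shrinks as the weights become more uneven. The sketch below uses that approximation with invented weights; the adjustments individual firms actually apply are proprietary and more involved.

```python
# Why weighting widens the real margin of error.
# Kish's effective sample size: n_eff = (sum w)^2 / sum(w^2).
# The weights below are invented for illustration.
import math

weights = [0.5] * 300 + [1.0] * 500 + [2.5] * 200  # 1,000 respondents, uneven weights

n = len(weights)
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)

def margin_of_error(sample_size, p=0.5):
    """95% margin of error for a proportion, in percentage points."""
    return 1.96 * math.sqrt(p * (1 - p) / sample_size) * 100

print(f"nominal n = {n}, effective n = {n_eff:.0f}")
print(f"claimed margin of error: +/-{margin_of_error(n):.1f} pts")
print(f"margin after weighting:  +/-{margin_of_error(n_eff):.1f} pts")
```

With these made-up weights, a poll sold as “plus or minus 3.1 points, 19 times out of 20” is really closer to plus or minus 3.6 points.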
Are there any indicators to tell which polls are accurate?
At this point, I would say, not really. I would urge people not to spend time on the interpretation of individual polls, nor to rely on the individual media reports, because in some cases there are agreements between pollsters and media outlets. Some outlets report in more depth on some polls than others.
Instead, look for analyses that pool information across the polls. Look to aggregators, who are not themselves making money from any individual poll and who have no stake in publicizing individual polls. Two good resources are www.threehundredeight.com and signal.thestar.com.
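For a sense of what pooling across polls involves, here is a toy aggregation: a weighted average that gives more weight to larger and more recent polls. Real aggregators such as ThreeHundredEight use far more elaborate models; the polls and weighting scheme below are invented for illustration.

```python
# Toy poll aggregation: a sample-size- and recency-weighted average of several polls.
# All polls and the half-life parameter are made up for illustration.
import math

polls = [  # (days before today, sample size, {party: support %})
    (1, 1200, {"A": 32.0, "B": 31.0, "C": 28.0}),
    (3, 1000, {"A": 30.5, "B": 32.5, "C": 27.5}),
    (7,  900, {"A": 31.0, "B": 31.5, "C": 29.0}),
]

def poll_weight(days_old, sample_size, half_life=5.0):
    """Older polls count for less; bigger samples count for more."""
    return math.sqrt(sample_size) * 0.5 ** (days_old / half_life)

totals, weight_sum = {}, 0.0
for days_old, n, results in polls:
    w = poll_weight(days_old, n)
    weight_sum += w
    for party, support in results.items():
        totals[party] = totals.get(party, 0.0) + w * support

average = {party: round(value / weight_sum, 1) for party, value in totals.items()}
print(average)  # {'A': 31.3, 'B': 31.6, 'C': 28.0}
```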