How accurate are MRP polls predicting huge Tory losses in next general election?

Two MRP polls this week predicted huge losses for the Conservatives at the next general election but had reasonably different headline figures. Photograph: Jessica Taylor/AFP/Getty Images

If an MP’s life wasn’t nerve-racking enough, they must now face polls predicting not just how many seats their party will win, but their own electoral fate. The bad news for anxious politicians is that MRP polling is here to stay – and getting more accurate all the time.

MRP, the handy acronym for a technique called multilevel regression and poststratification, was in the news this week after a pair of polls predicted the Conservatives plummeting to 98 and 155 seats respectively at the next election.

The first, by Survation and based on a 15,000-strong sample, forecast a Labour majority of 286, 107 more than even the 1997 landslide. A few days later, YouGov, which sampled just over 18,700 people, put the Labour majority at 154.

MRP polls also produce constituency-level predictions, allowing MPs and candidates to open a spreadsheet and view their percentage chances in black and white, with YouGov and Survation forecasting defeat for a string of ministers.

The polls did, however, come up with markedly different headline figures, and not just for the two big parties. For example, Survation forecast a Lib Dem seat tally of 22, while YouGov put it at 49.

Why the disparities? One answer is that, as ever, polls are snapshots, not a definitive template for election day. Another is that with MRP polling, arguably even more so than with traditional surveys, what you get out depends on what you put in, not just from sampling but also the complex models used to crunch the data.

MRP, in simple terms, takes polling data and adds other details about the respondents, such as their age, qualification level, income, previous voting patterns and where they live.

This is then correlated with census-type data to give the numbers of various types living in each area, with the headline polling data adjusted accordingly.

Patrick English, the director of political analytics for YouGov, describes it as a totally different approach to the “top down” method of traditional polling, and one which uses a lot more computational power.

“The logic is quite simple,” he said. “It firstly takes everyone’s background information. There is then a probability model which says, OK, based on all this information we have about people, how would each different area, which is made up of these different type of people, therefore vote? And that’s it.”
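The poststratification half of the process described above can be sketched in a few lines. This is an illustrative toy, not any pollster's actual model: the demographic cells, support figures and population counts below are all made up, and in a real MRP the per-cell estimates would come from a multilevel regression fitted to the poll sample rather than being typed in by hand.

```python
# Toy sketch of the "poststratification" step in MRP. All names and
# numbers here are invented for illustration; they are not real polling data.

# Step 1 (stand-in for the regression): estimated support for a party
# within each demographic "cell" of the population.
cell_support = {
    ("18-34", "degree"): 0.55,
    ("18-34", "no degree"): 0.48,
    ("65+", "degree"): 0.30,
    ("65+", "no degree"): 0.22,
}

# Step 2: census-style counts of how many people of each type live in a
# hypothetical constituency.
cell_population = {
    ("18-34", "degree"): 12_000,
    ("18-34", "no degree"): 18_000,
    ("65+", "degree"): 8_000,
    ("65+", "no degree"): 22_000,
}

def poststratify(support, population):
    """Weight each cell's estimate by how many of that type live in the area."""
    total = sum(population.values())
    return sum(support[cell] * population[cell] for cell in population) / total

estimate = poststratify(cell_support, cell_population)
print(f"Estimated constituency support: {estimate:.1%}")
# → Estimated constituency support: 37.5%
```

Because the weighting uses each constituency's own demographic mix, the same fitted model yields a different headline figure for every seat, which is how MRP produces the constituency-level predictions mentioned above.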

After what the polling industry recognised as a largely disastrous set of projections for the 2015 election, it was YouGov’s early MRP model that correctly forecast 2017’s surprise hung parliament, bringing the idea to prominence.

In recent years, English says, the polling industry has seen an influx of people with skills covering not just advanced data science but areas such as AI and machine learning, making MRP polls ever more sophisticated.

However clever the algorithms are, polling remains an inexact science due to unavoidable imponderables such as tactical voting and, even more prosaically, people simply changing their mind.

Chris Hanretty, a professor of politics at Royal Holloway, University of London, who works with Survation on polling, says that while models can be adjusted to take account of tactical voting, awareness of this inevitably varies significantly over time.

“The problem is that we can only work with what people tell us now,” he said. “We haven’t had millions of, say, Lib Dem leaflets sent out, so of course they’re not thinking in quite the same way about the strategic context of their constituency.

“We’re not future tellers. We’re not forecasting, and there will be some tactical voting that ends up in the election which we can’t capture because we’re not at the election.”

Another variable is people who reply to a poll by saying they do not know how they will vote. Survation and YouGov exclude these from their results.

However, YouGov’s MRP model includes what English describes as a complex understanding of how undecided voters may be expected to behave, and so can “make a really educated guess as to what those voters will do should they turn out”.

Do the pollsters ever think about the impact on MPs whenever a new MRP model is launched into the world? Inevitably, at times they do.

“It can never be harmful to give people accurate information, or at least the most accurate that we can do,” Hanretty said.

“Do I want MPs to be thinking always about their electoral prospects rather than other things? Maybe not. And maybe things were better in the 1950s when you had one poll every other month. But we’re not in that world any more.”

English says he regularly gets feedback from candidates and their agents – at times insisting YouGov has got their area totally wrong.

“If you thought about it too much, you’d probably become a nervous wreck and never do a poll again,” he said.