Back in the late 1990s a bookmaking firm came up with an interesting new game to keep the customers in its betting shops amused while they were waiting for the racing to start. It ran computer simulations of entire seasons in football’s top flight, several times each morning and each squeezed into just a few minutes, complete with “pre‑season odds” to attract fivers and tenners from fans of the big-name teams. It was a little like Wembley, the 1970s board game that did something similar for the FA Cup, and the virtual, highly accelerated Football League proved quite a hit with the regulars.
That was until the morning when, much to the surprise of all concerned, they cranked up the computer, fed in all the odds and possibilities, hit return and five minutes later watched as Charlton were installed as the newly simulated champions of England after starting their imaginary campaign completely unbacked at odds of 2,000-1.
The punters felt they had been played for mugs and yelled “Fix”, but the bookie was as genuinely astonished as any of the customers. They had all been confounded by the basic rule of probability that says that even the most unlikely eventualities will come to pass if you give them enough time and opportunity.
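The arithmetic behind that rule is worth a quick sketch. Assuming, purely for illustration, that the 2,000-1 price implies a per-season probability of roughly 1/2001 and that each simulated season is an independent trial, the chance of the freak champion turning up at least once in n runs is 1 - (1 - p)^n:

```python
# Illustrative only: probability that a 2,000-1 shot wins at least one of
# n independent simulated seasons. The per-season probability and the trial
# counts are assumptions, not figures from the bookmaker's actual game.
p = 1 / 2001  # implied probability of 2,000-1 odds, ignoring the bookie's margin

for n in (1, 100, 1000, 5000):
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:>5} simulated seasons: {at_least_once:.1%} chance of a freak champion")
```

Even at those odds, a few thousand accelerated seasons make the “impossible” outcome more likely than not.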
The computer, admittedly, had come up with some distinctly odd results to get Charlton to the top of the pile, but it did not produce any that were, in isolation, unimaginable. What was unusual was the sheer number that happened to crop up in the same simulation, not only in relation to Charlton but also for the teams that set out with – on (virtual) paper – a much better chance of success.
Jump forward about 20 years and much the same was true for Leicester in 2015-16 when they performed well above expectation and got the rub of the green while the big six sides that dominated the betting at the outset all chose the same season to do the opposite. Leicester were an excellent side, but, like the virtual Addicks, they still needed a big dollop of luck, most obviously in an unusual number of unlikely results elsewhere, to get over the line with only 81 points.
These two highly unexpected results, one virtual and one actual, came to mind while trawling through metrics on the most recent Premier League season on fivethirtyeight.com, which was founded by the American data-analysis legend (and sports fan) Nate Silver. The 2018-19 season was, on the face of it, one that largely followed the script. Manchester City, the odds-on favourites, beat the second-favourites, Liverpool, with the next four teams in the betting duly taking the next four spots in the table. Yet the data suggests there were still many dozens of matches throughout the season when the result bore little or no resemblance to the outcome predicted by the data.
FiveThirtyEight uses three metrics to analyse Premier League matches, the simplest of which is expected goals – xG – the increasingly familiar number that seeks to assess how many goals a team could have expected to score from the chances created in any given match. There were 380 matches in the 2018-19 Premier League season with the home teams scoring a total of 596 goals, very close to the xG prediction of 615. Away teams scored 476, against an xG prediction of 496. So far, so good.
Compare the results of individual matches against the xG numbers, however, and some startling discrepancies begin to appear. In all there were 64 games when the difference between the actual outcome and the result predicted by xG amounted to at least two goals – which can be enough to turn three points into none. That is 64 matches, almost exactly one in every six over the course of the season, when the xG data suggests the result was down to a large slice of luck.
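The comparison described above can be sketched in a few lines. The match records below are invented for illustration (they are not FiveThirtyEight’s dataset), and the “at least two goals” gap is measured here as the combined difference between each side’s actual and expected goals, which is one plausible reading of the metric:

```python
# Count matches where the actual scoreline and the xG "scoreline" differ by
# at least two goals in total. Match records are made up for illustration;
# the two-goal threshold follows the article's framing.
matches = [
    # (home_goals, away_goals, home_xg, away_xg)
    (6, 1, 2.49, 1.96),  # a thrashing the chances did not fully justify
    (1, 0, 1.9, 1.3),    # narrow win for the side that created less
    (1, 1, 0.6, 1.2),    # draw roughly in line with the chances
    (2, 1, 2.1, 0.9),    # result roughly in line with the chances
]

def xg_swing(home_goals, away_goals, home_xg, away_xg):
    """Total gap between the real scoreline and the xG scoreline."""
    return abs(home_goals - home_xg) + abs(away_goals - away_xg)

lucky = [m for m in matches if xg_swing(*m) >= 2]
print(f"{len(lucky)} of {len(matches)} matches swung by two goals or more")
```

Run over a full 380-match season, a filter like this is all it takes to surface the one-in-six games the article describes.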
There were a dozen matches when the scoreline and the xG forecast disagreed by at least three goals, including three where the difference was four or more. Two of these were thrashings when the final scoreline merely exaggerated the xG numbers, although Southampton fans who witnessed their 6-1 drubbing at the Etihad might – or might not – want to know that an xG of 2.49/1.96 suggests the same set of chances on another day could easily have seen them leave with a point. Nor is this the result of an obvious drawback of some xG estimates, which assess the quality of a chance but not the quality of the player presented with it. The FiveThirtyEight numbers are modified to reflect the “historic conversion rates” of City’s starting forward line of Sterling, Agüero and Sané, all of whom scored.
But there was less of an imbalance in the third match with a four-goal difference between xG and actuality. Tottenham’s 3-1 defeat of Leicester in February was, by this measure, the travesty of the season, achieved despite xG’s insistence (1.18 to 3.18) that it should really have been the other way around.
This was also Claude Puel’s penultimate game in charge of Leicester. Their previous outing, a 1-0 home defeat by Manchester United, was 1.9 to 1.3 in Leicester’s favour on xG, while the game before that, a 1-1 draw at Anfield, was 0.6/1.2 to the Foxes on xG. Yet, after shipping four at home to Crystal Palace a fortnight later (and in a match that was a rather less damning 1.1/1.9 on xG), Puel’s tenure came to what was variously described as an “inevitable” conclusion after a “dismal” sequence of results.
Inevitable? Dismal? Or a desperate run of luck at the worst possible time, in a league where pure chance plays a significant, possibly decisive, role in one match in six?
There are many fans and commentators who remain suspicious of xG and similar metrics that attempt to bring some statistical rigour to football analysis. They are often the same fans and commentators who claim that “luck evens out over the season” or that “the final table never lies” – which is, and always will be, baloney.