The Cambridge Analytica scandal isn’t a scandal: this is how Facebook works

It is easy to be misled into believing that the Cambridge Analytica story is about rogue data miners taking advantage of an innocent Facebook. Facebook's decision to suspend Cambridge Analytica's access, the use of terms like "data breach", and a good deal of the media coverage all seem to follow these lines. That, however, misses the key point. This is not a data breach by any means – nor is it something that could not have been predicted, or that could easily have been avoided. This is, in many ways, Cambridge Analytica using Facebook exactly as the platform was designed to be used. This is how Facebook works.

Three key parts of Facebook's model come into play: gathering data from people in order to profile them, both en masse and individually; designing systems that allow that data to be used to target people with advertising and content; and then allowing third parties (generally advertisers) to use the data and those targeting systems for their own purposes. The power of these systems is often underestimated, but Facebook themselves know it, and have tested it in a number of ways.
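To make that three-part pattern concrete, here is a minimal sketch in Python. Everything in it – the function names, the data structures, the "signals" – is invented for illustration; it does not describe Facebook's actual systems, only the gather-profile-expose shape described above.

```python
from collections import defaultdict

# Illustrative only: none of these names or structures reflect Facebook's
# real internals. They sketch the three-part pattern in the text.

# 1. Gather data: every interaction is logged against a user profile.
profiles: dict[str, list[str]] = defaultdict(list)

def record_interaction(user_id: str, signal: str) -> None:
    """Store a behavioural signal (a like, a share, a page view)."""
    profiles[user_id].append(signal)

# 2. Profile and target: turn raw signals into an audience segment.
def build_audience(required_signals: set[str]) -> list[str]:
    """Return every user whose recorded signals include the required ones."""
    return [uid for uid, sigs in profiles.items()
            if required_signals.issubset(sigs)]

# 3. Third-party access: an advertiser supplies criteria and content,
#    and the platform delivers the matched audience.
def run_campaign(advertiser: str, criteria: set[str], message: str) -> None:
    for uid in build_audience(criteria):
        print(f"[{advertiser}] -> {uid}: {message}")

# A toy run of the pipeline.
record_interaction("alice", "likes:hiking")
record_interaction("alice", "likes:folk_music")
record_interaction("bob", "likes:hiking")
run_campaign("TrailCo", {"likes:hiking"}, "New boots, 20% off")
```

The crucial design choice is the third step: the profiling machinery is not kept in-house, but offered as a service to anyone who can pay.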

They have demonstrated, through their "emotional contagion" experiment in 2014, that they can make people happier or sadder simply by adjusting what appears in their news feeds. They have demonstrated that they can make people more likely to vote, testing this during the 2010 US congressional elections. They can profile people based on the most mundane of information – the sheer scale of Facebook's user base and the amount of information given to them mean that "big data" analysis can make connections that might seem bizarre, revealing insights into intelligence, politics, ethnicity and religion without people ever discussing any of those things directly.
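That last point – bizarre-seeming connections from mundane data – is worth pausing on. The sketch below, using entirely invented data, shows the crude logic of such inference: likes that co-occur with a trait in a training set become predictive of that trait in people who never mention it. Real systems use far more sophisticated statistics, but the principle is the same.

```python
# Toy illustration of trait inference from "likes". All data is invented;
# the scoring rule is deliberately crude.

from collections import Counter

# Invented training data: (set of page likes, whether the user backs party X)
training = [
    ({"folk_music", "hiking"}, True),
    ({"folk_music", "craft_beer"}, True),
    ({"monster_trucks", "action_films"}, False),
    ({"monster_trucks", "hiking"}, False),
]

# Count how often each like co-occurs with the trait.
with_trait, without_trait = Counter(), Counter()
for likes, has_trait in training:
    (with_trait if has_trait else without_trait).update(likes)

def trait_score(likes: set[str]) -> int:
    """Crude score: +1 per like seen with the trait, -1 per like seen without."""
    return sum(with_trait[l] - without_trait[l] for l in likes)

# A new user who never mentioned politics still gets a political score.
print(trait_score({"folk_music", "hiking"}))  # positive -> leans towards party X
print(trait_score({"monster_trucks"}))        # negative -> leans away
```

At Facebook's scale, even a weak signal per like, summed over hundreds of likes and millions of users, becomes a usable prediction.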

They allow advertisements to be targeted at particular "racial affinity" groups, and tailored according to that "affinity". Not actual race, because that might conflict with various laws, but the race that your profile suggests you have the most "affinity" towards. Racial profiling without the name.
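Structurally, the point is that such a system needs no "race" field at all. The hypothetical targeting specification below – the field names are invented and deliberately do not mirror any real ad platform's API – shows how an inferred "affinity" segment can do the same work as an explicit racial category.

```python
# Hypothetical targeting spec, for illustration only; field names are
# invented and do not mirror any real ad platform's API.
campaign = {
    "creative_by_segment": {
        # One advertisement, tailored per inferred "affinity" segment.
        "affinity_group_x": "holiday_ad_variant_a",
        "affinity_group_y": "holiday_ad_variant_b",
    },
    "targeting": {
        "age_range": (25, 54),
        "interests": ["travel", "live_music"],
    },
}

def pick_creative(user_affinity: str) -> str:
    """Select the ad variant for a user's *inferred* affinity segment.
    No one ever declared their race; the proxy does the work."""
    return campaign["creative_by_segment"].get(user_affinity,
                                               "holiday_ad_default")

print(pick_creative("affinity_group_x"))  # holiday_ad_variant_a
```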

This all seems relatively harmless when it is restricted to advertising products – it might be a little creepy to find carefully targeted advertisements for holidays in places you like or musicians you admire – but a few changes in targeting parameters change things completely. The benefits of this profiling for electoral purposes are huge. Profiling the electorate has long been a part of political campaigning, but this makes it much more detailed, much more surreptitious, and much more effective.

Parties can target and influence voters directly; make their own supporters happier and their opponents' supporters sadder; and make their own supporters more likely to turn out and vote. They can spread stories tailored to individuals' views, focussing, both positively and negatively, on the aspects of a campaign that they know each individual cares about. When you add "fake news" to this, the situation becomes even worse.

That is the real point here. Thought of in terms of profiling and micro-targeting advertisements for products, this all sounds efficient, appropriate and harmless. It is only a tiny shift, however, to take it into politics – a shift that groups like Cambridge Analytica found easy to make. All they did was understand how Facebook works, and use it – on a big scale, and in a big way, but this is how Facebook works. Profiling, targeting and persuasive manipulation are the advertiser's tools on steroids: provably effective, and available to anyone with the money and the intent to use them. Unless Facebook changes its entire business model, it will be used in ways that interfere with our lives – and, in particular, with our politics.

What is more, it is only going to get more effective. Facebook is gathering more data all the time – including through its associated operations such as Instagram and WhatsApp. Its analyses are constantly being refined and becoming more effective – and more groups like Cambridge Analytica are becoming aware of the possibilities and of how they might be used.

How it might be stopped, or at least slowed down, is another matter. This is all based on the fundamental operations of Facebook, so as long as we rely on Facebook, it is hard to see a way out. By choosing to let ourselves become dependent on it, we have built this trap for ourselves. Until we find our way out of Facebook, more of this is inevitable.

Paul Bernal is Senior Lecturer in IT, IP and Media Law, UEA Law School