You wait ages for an AI chatbot to come along, then a whole bunch turn up. Why?

When, late last year, the editor asked me and other Observer writers what we thought 2023 would be like, my response was that it would be more like 1993 than any other year in recent history. Why? Simply this: 1993 was the year that Mosaic, the first modern web browser, launched and all of a sudden the non-technical world understood what this strange “internet” thing was for. This was despite the fact that the network had been switched on a whole decade earlier, during which time the world seemed almost entirely unaware of it; as a species, we seem to be slow on the uptake.

Much the same would happen in 2023, I thought, with ChatGPT. Machine-learning technology, misleadingly rebranded as artificial intelligence (AI), has been around for eons, but for the most part, only geeks were interested in it. And then out comes ChatGPT and suddenly “meatspace” (internet pioneer John Perry Barlow’s derisive term for the non-techie world) wakes up and exclaims: “So that’s what this AI stuff is all about. Wow!”

And then all hell breaks loose, because the tech giants, who had been obsessed with this generative AI stuff for years, realised that they had been scooped by a small US research outfit called OpenAI (cunningly funded by boring old Microsoft). Google, Meta, Amazon and co were panic-stricken by the realisation that the AI bandwagon, hauled by a Microsoft locomotive, was pulling out of the station – and they weren’t on it.

There then followed an orgy of me-too-ism. It turned out that everybody and his dog had had their own large language model (LLM) all along. It’s just that they were too high-minded to release them until OpenAI did the unthinkable and broke ranks. Those of us who follow the industry were deluged with demonstrations, press releases, earnest YouTube videos by tech bosses (who, to judge from their performances, should never be allowed in front of a video camera), unsolicited commentary about the market implications from investment bank “analysts”, email torrents from crackpot enthusiasts and so on. Trying to keep track of the madness has been like attempting to get a drink from a firehose.

But behind all the hoo-ha is a really interesting question: how had an entire industry come up with this apparently huge – but hitherto unannounced – breakthrough? The answer can be found in The Nature of Technology, an extraordinarily insightful book by Belfast-born economist W Brian Arthur, first published in 2009. In it, Arthur explains that many of the biggest technological advances arise because there comes a moment when a number of necessary but unconnected developments suddenly come together to create entirely new possibilities. Instead of the legendary eureka moment, it’s a process of what one may call combinatorial innovation.

In the case of the generative AI that the world is now obsessed with, the necessary components were four in number: the availability of truly massive cloud-computing power; unimaginable quantities of data provided by the internet for training LLMs; significant improvements in neural-network algorithms; and oodles of money, provided by insanely profitable tech giants.

So it was the combination of those four factors that got us to the ChatGPT moment. The next question is: what happens now? And here the history of the tech industry provides the playbook. All of these technologies, no matter how complex they are initially, eventually become commoditised. And once that happens, they enable lots of new products and services to be built on them. A good example is Google Maps. The company invested unconscionable amounts of money, time and talent in creating the product. And now you can’t book a restaurant online, or look up a pub, a hardware store, a nursery or anything else with a physical location, without finding an embedded Google map on its website.

Much the same will happen with generative AI. In fact it’s already under way. Earlier this month, we finally learned why Microsoft had invested $10bn in OpenAI. It turns out that users of Microsoft 365 (nee boring old Office) will soon have an LLM – called Copilot – at their beck and call. Apparently, “Copilot in Word will create a first draft for you, bringing in information from across your organisation as needed”. Copilot in Excel, meanwhile, “will reveal correlations, propose what-if scenarios, and suggest new formulas based on your questions”. And so on, ad infinitum.

The other lesson from the tech industry playbook is that the technology always escapes into the wild. And it has: you can now run a GPT-3-level AI model on your laptop and phone. Some genius even has it running (albeit slowly) on a Raspberry Pi single-board computer. And even I have the image-generating tool Stable Diffusion running on my iPhone. Time for a rethink, perhaps?
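To make that concrete, here is a minimal sketch of what “running an LLM on your laptop” can look like in practice, assuming the open-source llama-cpp-python bindings and a quantised model file already downloaded to disk; the file path and prompt are hypothetical and purely illustrative.

from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quantised model file from local disk (hypothetical path).
llm = Llama(model_path="./models/ggml-model-q4_0.bin")

# Ask for a short completion, much as a hosted chatbot would.
result = llm("Q: Why did so many AI chatbots appear at once? A:", max_tokens=64)
print(result["choices"][0]["text"])

No data leaves the machine, which is rather the point: once the model weights sit on your own hardware, the technology has well and truly escaped into the wild.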

What I’ve been reading

Acropolis now
Waiting for Brando is an entrancing essay by Edward Jay Epstein in Lapham’s Quarterly about the disastrous 1961 filming of Homer’s Iliad.

Rise of the machines
The Atlantic’s Charlie Warzel tries to figure out the implications of large language models in What Have Humans Just Unleashed?

Eastern promise
Dan Wang’s belated 2022 Letter is his unmissable annual report from inside China on his own website.