Typical, isn’t it? You wait thousands of years for a revolutionary piece of AI software and then two come along at once.
Most of us were only just coming to terms with the implications of ChatGPT, the souped-up version of the GPT-3 software by the California-based AI company OpenAI. ChatGPT responds to written commands. Tell it to write a limerick about the budget, or a school essay, or a newspaper article, and it will, with remarkable results. You can also ask it internet-search-style questions, which it answers quickly, without distracting adverts. While other companies have since released equally impressive technology, it was ChatGPT that captured the popular imagination. It has prompted much gnashing of teeth: depending on who you ask, it is either a fancy toy or a threat to our whole way of life, but nobody denies that it is impressive.
Then, earlier this week, OpenAI released GPT-3’s sequel, GPT-4. The results are even more extraordinary: everything its predecessor can do, it does better. GPT, which stands for Generative Pre-trained Transformer (not very catchy), is a type of programme known as a Large Language Model. It works by being “trained” on a vast set of text, which it uses to predict which words are most likely to come next – and thereby to answer questions. In the case of GPT-3, and ChatGPT, the training set was a wide selection of text from the internet up to 2021.
One way to think of it is as an enormous version of the predictive text function on your phone, which guesses which word you mean based on previous usage. GPT-4 is trained on a bigger data set, so it can do even more. GPT-3 was more than 100 times bigger than GPT-2. We don’t know how much bigger GPT-4 is than GPT-3 – OpenAI is being much more secretive about this version – but we can presume the jump is substantial. Here’s what it can do.
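The predictive-text analogy can be made concrete in a few lines of Python: a toy model that simply counts which word tends to follow which, then guesses the most likely next word. This is a deliberately simplified sketch – GPT uses a neural network trained on vastly more data, not simple word-pair counts – but the underlying idea of “predict the next word from what came before” is the same.

```python
# Toy "predictive text": count word pairs in a sample text, then
# predict the most likely next word. Illustrative only - this is not
# how OpenAI's models actually work internally.
from collections import Counter, defaultdict

def train(text):
    """Build a table of which words follow which, with counts."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most common word seen after `word`, or None."""
    candidates = model[word.lower()]
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept on the sofa")
print(predict_next(model, "the"))  # "cat" - it follows "the" most often
```

Scale that table up from one sentence to a large slice of the internet, and replace the counting with a neural network that can weigh context far beyond the previous word, and you have the rough shape of a Large Language Model.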
It can pass exams
Think exams are dull and unfair? GPT-4 could change the whole landscape. GPT-4’s developers reported that their new machine could outperform most humans on a wide range of tests, including the American Bar exam, answering essay and multiple-choice questions to a level that would let it practise law in most states. Surprisingly, it was worse at English exams, where it sits in the bottom half of the league table. Perhaps there is life in the English degree yet…
It can write poetry
“Thou art the fair and lovely rose, Whose beauty doth my heart and mind compose; Thy eyes, like stars that twinkle in the night, Doth shine so bright, and bring me such delight.” This was a poem that ChatGPT came up with for Valentine’s Day, when asked to write in the style of Keats. Its successor promises to be an improvement.
“From what I’ve seen of GPT-4, it’s a leap up from GPT-3 in a couple of ways,” says the Telegraph’s poetry critic, Tristram Fane Saunders. “It seems to have a reliable grasp on rhythm and complex rhyme schemes. There’s more to poetry than fixed verse forms, but with the right prompt – and silly prompts are often best – it’s more plausible than a lot of human-made doggerel: we’ve finally achieved artificial mediocrity.”
In general, the software passes many of the traditional tests – such as the Turing test – designed to gauge whether a machine can think. It is a mark of how fast things are moving that a legitimate critique of the technology is that its work is not as good as our greatest poets’, as though it weren’t a miracle that a machine can write verse in the first place.
It understands pictures
While you could only input text to GPT-3, GPT-4 responds to pictures as well. You can show it a picture of your fridge and ask it to suggest a meal you could make. In another example, it explains what’s funny about a picture of an iPhone plugged into the wrong cable. “The humour in this image comes from the absurdity of plugging a large, outdated VGA connector into a small, modern smartphone charging port,” it explains.
It is ethical
More ethical than its predecessors, anyway. Earlier competitors have had issues with users tricking them into saying harmful or malicious things. In one memorable exchange, Microsoft’s Bing chatbot appeared to go insane. Thanks to more stringent and sophisticated filters, GPT-4 will not offer instructions for committing acts of terrorism, or make racist or sexist jokes. Or so its creators hope.
It can create games
You can ask these systems to write code, as well as words. Tell one to create a website, or even a computer game, and it will do it almost instantly. One user employed it to write the code for a working version of the game Pong. Software putting a load of software engineers out of work: just the kind of ironic consequence GPT-4 would appreciate.
AI is an enormous business now. In January, Microsoft invested a reported $10bn in OpenAI. Google was sufficiently alarmed by the competition that its founders, Larry Page and Sergey Brin, have returned to help it speed up its own research. It has announced it will be rolling out AI-enhanced functions in several of its services, including Gmail.
The technology is becoming political, too: it will have wide ramifications for the military and government, as well as helping GCSE students cheat at their essays. The British entrepreneur and investor Ian Hogarth has written about the rise of “AI Nationalism”, arguing that AI policy will be “the single most important area of Government policy” over the coming decades.
Whatever else, we can be confident that GPTs 5, 6, 7 and beyond – and their competitors – will hold plenty of surprises. “GPT-4 is very impressive,” says Dr Daniel Susskind, an Oxford academic and the author of A World Without Work. “But it is important to remember that it is still the worst it is ever going to be.”
We thought robots would replace our cooks and gardeners. Instead, GPT-4 proves they are coming for the lawyers and the poets. The world is changing, one sonnet at a time.