ChatGPT, the fastest-growing app of all time, has put the power of an intelligent large language model (LLM) - an artificial intelligence (AI) trained on a massive dataset of text from the Internet - at people's fingertips.
This new power has already changed the way many people carry out their work or search for information online, with OpenAI's technology inspiring both excitement about the promise of what AI can deliver and fears about the changes it is ushering in.
One of the fears around AI technology like ChatGPT is what criminals and other bad actors will do with that power.
That’s what Europol, the European Union’s law enforcement agency, examined in its recent report on ChatGPT, titled ‘The impact of Large Language Models on Law Enforcement’.
ChatGPT, which is built on OpenAI’s GPT-3.5 large language model, could make it “significantly easier for malicious actors to better understand and subsequently carry out various types of crime,” the report states.
That’s because, while the information ChatGPT is trained on is already freely available on the Internet, the tech is able to provide step-by-step instructions on all sorts of topics when given the right contextual prompts by a user.
Here are the types of crime Europol warns that chatbots built on LLMs could potentially assist criminals with.
Fraud, impersonation, and social engineering
ChatGPT and other chatbots like Google’s Bard have astonished users with their abilities to provide human-like writing on any topic, based on user prompts.
They can imitate celebrities’ writing styles, and learn a writing style from inputted text before producing more writing in that learned style. This opens up the system to potential use by criminals who want to impersonate the writing style of a person or an organisation - a capability that could be used in phishing scams.
Europol also warns that ChatGPT could be used to give legitimacy to various types of online fraud, such as by creating masses of fake social media content to promote a fraudulent investment offer.
One of the telltale signs of potential fraud in email or social media communications is the obvious spelling or grammar mistakes made by the criminals writing the content.
With the power of LLMs at their fingertips, even criminals with little grasp of the English language would be able to generate content that no longer has these red flags.
The tech is also ripe to be used by those looking to create and spread propaganda and disinformation, as it is adept at crafting arguments and narratives at great speed.
Cybercrime for beginners
ChatGPT is not only good at writing words, but it is also proficient in a number of programming languages. According to Europol, this means it could have an impact on cybercrime.
“With the current version of ChatGPT, it is already possible to create basic tools for a variety of malicious purposes,” the report warns.
These would be basic tools - producing phishing pages, for example - but they would enable criminals with little to no coding knowledge to create things they couldn’t create before.
The inevitable improvements in LLM capabilities mean that their exploitation by criminals “provides a grim outlook” for the coming years, according to the report.
The fact that OpenAI’s latest version of its model, GPT-4, is better at understanding the context of code and correcting mistakes means it is “an invaluable resource” for criminals with little technical knowledge.
Europol warns that as AI technology improves, it could become much more advanced “and as a result dangerous”.
Deepfakes already having real world consequences
The potential misuses of ChatGPT that Europol warned about represent just one area of AI that could be exploited by criminals.
There have already been cases of AI deepfakes being used to scam and harm people. In one case, a woman said she was just 18 when she discovered pornographic pictures of her circulating online - despite never having taken or shared those images.
Her face had been digitally added to images of another person’s body. She told Euronews Next it was “a lifelong sentence”. A 2019 Deeptrace Labs report found that 96 per cent of deepfake content online is non-consensual pornography.
Another case saw AI used to mimic someone’s voice and scam a family member - an audio deepfake technique.
Europol concluded its report by stating it is important for law enforcement to “stay at the forefront of these developments,” and to anticipate and prevent criminal use of AI.