ChatGPT is making waves, but can it be trusted?

STORY: ChatGPT has taken the world by storm…

gaining widespread attention – and scrutiny – since its public release in November 2022.

The chatbot from OpenAI was designed to mimic human-like conversation and can produce poetry on demand or even answer complicated questions in a variety of styles.

(UPSOT Andrew Patel, AI Researcher at WithSecure)

"The model itself is fascinating because of what it can do. I mean, it seems like magic.”

Its creators say over a million people used it in its first week.

But can the content it generates be trusted?

ChatGPT uses so-called generative AI: technology that learns from data how to create virtually any type of content from a simple text prompt.

Andrew Patel is an artificial intelligence researcher at the cyber security company WithSecure.

"GPT stands for Generative Pre-trained Transformer. It is an artificial intelligence model that is trained on a very large dataset of text. You can give it an input and it will provide you with an output. It basically, what it does is it continues writing what you gave it. If you ask it a question, it'll answer it. If you ask it to continue what you're writing, it will do that.”

Using a machine learning technique called Reinforcement Learning from Human Feedback, or RLHF,

ChatGPT can simulate dialogue, answer follow-up questions, admit mistakes, challenge incorrect premises and reject inappropriate requests.
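OpenAI has not published ChatGPT's training code, but the human-feedback step can be loosely illustrated: annotators rank pairs of answers, and a reward model is trained to score the preferred answer higher. Below is a hypothetical sketch in Python with PyTorch using the standard pairwise (Bradley-Terry) objective; all names and numbers are illustrative, not OpenAI's actual implementation:

    import torch
    import torch.nn.functional as F

    def preference_loss(score_chosen, score_rejected):
        # Push the reward model to rate the human-preferred
        # answer above the rejected one.
        return -F.logsigmoid(score_chosen - score_rejected).mean()

    # Toy reward-model scores for a batch of answer pairs.
    score_chosen = torch.tensor([1.2, 0.3, 2.0])
    score_rejected = torch.tensor([0.4, 0.9, -0.5])
    print(preference_loss(score_chosen, score_rejected))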

"ChatGPT is a language model. So in the simplest possible term, a language model is a statistical model that predicts the next word in a sequence of words."

The technique isn’t new, according to Dr. Tim Scarfe from technology company XRAI – the first examples go back as far as the 1990s.
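A next-word predictor in that early, pre-neural spirit can be written in a few lines. The toy bigram model below simply picks the word that most often followed the current one in its training text; it is a sketch for illustration only, nothing like ChatGPT's neural network:

    from collections import Counter, defaultdict

    text = "the cat sat on the mat and the cat slept"
    words = text.split()

    # Count which word follows which in the training text.
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def predict_next(word):
        # Return the most frequent successor, or None if unseen.
        counts = follows.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # -> 'cat' (followed "the" twice)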

“…Since then, the neural networks have gotten bigger. They've gotten deeper. They've been trained on an insane amount of data. And of course, there's been this whole cloud computing revolution. So now we have insane amounts of compute power to train these models on lots of data."

"But then people started being a bit more exploratory; they said, why don't we just give it questions? Why don't we ask it about mathematics? Why don't we ask it about things that it wasn't even trained on? And then people discovered this emergent reasoning capability. Just take multiplication or arithmetic, for example: you can try it on things that it's never seen before, and the model has this emergent reasoning capability. And that was a remarkable discovery."

A tool like ChatGPT could be used in real-world applications such as digital marketing, online content creation and customer service, or, as some users have found, even to help debug code.

But the program itself acknowledges that it has the potential to be used for both good and bad purposes.

PATEL: "The way that I would use it right now, if I were an adversary, if I were doing something malicious, I would write a script that ingests tweets and then crafts toxic, harassing replies to them, or ingests political tweets and replies either in opposition to them or in support of them. And you can use a script like that to influence the perceived political landscape on a social network, for instance. So you could automate the harassment of individuals."

OpenAI has acknowledged the tool’s tendency to respond with "plausible-sounding but incorrect or nonsensical answers," an issue it considers challenging to fix.

AI technology can also perpetuate societal biases like those around race, gender, and culture.

Tech giants like Google and Amazon have previously acknowledged that some of their AI projects were “ethically dicey” and had limitations.