Majority of readers believe artificial intelligence is developing too fast

An exclusive poll conducted by this newspaper finds 72 per cent of over 39,000 readers believe artificial intelligence is developing too fast

The rapid advance of artificial intelligence (AI) technology is striking fear in everyone from Elon Musk, the CEO of Tesla and Twitter, to Steve Wozniak, a co-founder of Apple.

The sentiment is shared by many Telegraph readers, with an exclusive poll conducted by this newspaper finding that 72 per cent of over 39,000 readers believe AI is developing too fast.

It’s a complex issue, as reader Rob Jones argues: “The development of AI is inevitable and, like nuclear technology before, it will bring massive scientific advancement. But [it] will also have an equally impressive use for destruction and control.

“All this, of course, is in the hands of humankind. The other inevitability is the moment AI becomes self-developing. Will it protect or bite its owner?”

Dulan Weerasinha agrees there are both advantages and risks: “The ability to get consistent results and outcomes via a properly trained AI and/or language model… is hugely beneficial to key use cases such as risk analysis, failure detection, diagnosing diseases etc.”

But Dulan opposes “seemingly uncontrolled or unvetted research” into AI, reasoning that “if highly developed models and the ability to run those models fall into the wrong hands, or are acquired by bad actors, the consequences could be destructive”.

Wesley Storz believes that “independent thought within certain parameters by AI is what we need, if it is left to collect and collate data”. However, he worries that “allowing it to grow and learn on its own without a way to maintain control” could be “deadly”.

Meanwhile, Rod Evans is optimistic: “Humans have a limited capacity to gather data and, thus, have a limited opportunity to make the best decision possible. Whereas AI systems have no such limitations and can seek all data and, thus, make the statistically best decision possible.”

Paola Romero also strikes a positive note, labelling AI a “freedom enabling technology”.

While AI might be beneficial for decision making and reducing labour, some readers are concerned over job losses, particularly for the young.

There are fears millions of roles could be made redundant as a result of AI. Analysts at Goldman Sachs estimated that 300 million jobs could soon be done by robots thanks to the new wave of AI.

Reader Michael Johnson says: “I am 60, so it shouldn’t affect me, but I feel it will badly affect jobs for the young. I feel for the young.”

Likewise, Joe Blow notes: “Like lamp lighters, costermongers or high street bank cashiers before them, some jobs will simply disappear… the difference this time is it’s professional, not manual jobs that are threatened.”

Robert Groves predicts: “AI will decimate mass jobs, but create relatively fewer highly paid ones.

“The trend will be towards relatively few highly skilled and extremely highly paid elites who understand AI and are not so much in control of it but in control of its learning, and those who will simply be replaced by AI and in being so provide the cost savings to fund the highly paid tech elites and the subscription to the AI service.”

On another note, reader Selena Alota suggests that while the technologies we develop get more sophisticated, “our level of intellectual civilisation is going backwards”.

“The newest generations have lost the ability to learn something on their own, or to write properly to gain reliable knowledge, because the computers are doing that in their place,” she said.

However, some readers, like Alex Carnes, think we are already in too deep.

“The West might well ban AI research, but China, Russia and North Korea won’t. There’s no choice now, we have to make sure that our AI is better than their AI, or we’re screwed,” he said.

Others, such as Kagan Dougal, simply “don’t buy the hype”.

“Open GPT, deep fakes and all the others are just fancy text/image prediction software, no more fundamentally different from next word predictions on your keyboard. The difference is that it has been trained on the entire internet, not just words you type,” he said.

Meanwhile, Robin Reliant thinks that “reasonable governance” needs to be introduced. “If it is bothering Elon Musk then it should bother everyone,” he said.

Where do you stand on the debate? Join the conversation in the comments section below