Artificial intelligence is not the new Tower of Babel. We must beware of technophobia instead
Amid growing fears that increasingly ubiquitous artificial intelligence could spin out of control, it may help to begin with the following parable, told in the style of the ancients.
Once upon a time, the hugely prosperous AIcity — let's call it that — grew at an astonishing pace.
Its AImasons first built fine, sophisticated houses, then low-rises. As this proved a highly profitable undertaking, they moved on to more complicated high-rises, using more or less the same technologies.
A few cracks started appearing here and there, but nobody paid close attention. The AImasons were so fascinated by their success that they began building very tall skyscrapers, aptly named "AI towers of Babel", by simply scaling up the same construction techniques at a frantic pace.
Their AI towers could house many thousands of inhabitants. However, no AImason could really understand why such complex buildings functioned so well.
At the same time, cracks and mishaps continued happening at an alarming rate.
Nobody knows what to do, everybody expects the worst
Now, the AImasons began to worry in earnest: What was the source of the technical problems? Was there any chance these AI towers would collapse? Had they already crossed the safe height limit?
The AI tower owners had more material concerns: What would happen if the towers collapsed? Who would reimburse the victims?
What regulations and legislation would apply in such cases? What was the competition doing? How could they outsmart it?
Originally, the city's population was very fascinated by living in these wonderful AI towers. They were awed by their sheer size.
However, quite a few of them grew concerned as they saw inexplicable problems here and there and projected them into the future.
They kept asking: are we really capable of creating such huge, complex constructions, and are we safe in such a city?
The AIcity Government was too busy with other pressing problems and did not bother to address these issues.
In short: nobody knew what to do, but very many started fearing the worst.
The parable ends here — and I promise it wasn't an AI chat-generated one.
AI enthusiasm is laced with technophobia
Yet, this is the current state of affairs when it comes to generative AI and Large Language Models like ChatGPT. AI enthusiasm is, in fact, laced with technophobia.
This is natural for the general public: they like new exciting things, but they are afraid of the unknown.
What is new is that several prominent scientists have become techno-sceptics, if not technophobes themselves.
The open letter from scientists and industrialists calling for a six-month pause on AI research, and the scepticism of top AI scientist Prof Geoffrey Hinton, are two such examples.
The only related historical equivalent I can recall is the criticism of atomic and nuclear bombs by a part of the scientific community during the Cold War. Luckily, humanity managed to address these concerns in a rather satisfactory way.
Of course, everyone has the right to question the current state of AI affairs. For one, nobody knows why Large Language Models work so well and if they have a limit.
There is also a real danger that bad actors might create "AI bombs", particularly if governments remain passive bystanders when it comes to regulation.
These are legitimate concerns that fuel the fear of the unknown, even among prominent scientists. After all, they are humans themselves.
We need to maximise AI's positive impact
However, can AI research stop, even temporarily? In my view, no, as AI is the response of humanity to a global society and physical world of ever-increasing complexity.
The processes driving this increase in physical and social complexity run deep and seem relentless. AI and citizen morphosis are our only hope for a smooth transition from the current Information Society to a Knowledge Society.
Otherwise, we may face a catastrophic social implosion.
The solution is to deepen our understanding of AI advances, speed up its development, and regulate its use towards maximising its positive impact while minimising the already evident and other hidden negative effects.
AI research can and should become different: more open, democratic, scientific and ethical. And to that effect, there are ways in which we could approach the issue in a constructive manner.
For one, the first word on important AI research issues with far-reaching social impact should belong to elected parliaments and governments rather than to corporations or individual scientists.
Every effort should be made to facilitate the exploration of the positive aspects of AI in social and financial progress and to minimise its negative aspects.
The positive impact of AI systems can greatly outweigh their negative aspects if proper regulatory measures are taken. Technophobia is neither justified nor a solution.
There are dangers to democracy and progress, but that can be dealt with
In my view, the biggest current threat comes from the fact that such AI systems can remotely deceive large numbers of citizens who have limited education and/or little capacity for critical investigation.
This can be extremely dangerous to democracy and any form of socio-economic progress.
In the near future, we should counter the major threat coming from the use of LLMs and/or CANs in illegal activities (cheating in university exams is a rather benign example in the space of related criminal possibilities).
Their impact on labour and markets, on the other hand, should be very positive in the medium to long run.
To help that, in my opinion, AI systems should: a) be required by international law to be registered in an "AI system register" and b) notify their users that they are talking with or using the results of an AI system.
Given AI systems' huge societal impact, and in order to maximise benefits and socio-economic progress, key advanced AI technologies should become open.
AI-related data should be (at least partially) democratised, again towards maximising benefit and socio-economic progress.
We can allow progress while maintaining regulatory mechanisms, too
Strong financial compensation schemes must be foreseen for AI technology champions, to offset any profit loss due to the aforementioned openness and to ensure strong future investment in AI R&D, for example through technology patenting and obligatory licensing schemes.
The AI research balance between academia and industry should be reworked to maximise research output while maintaining competitiveness and rewarding the R&D risks undertaken.
Education practices should be revisited at all education levels to maximise the benefit of AI technologies while creating a new breed of creative and adaptable citizens and (AI) scientists.
And finally, proper AI regulatory, supervision, and funding mechanisms should be created and beefed up to ensure the above.
Perhaps then, the allegory above will be nothing more than just a (mildly) entertaining fable.
Dr Ioannis Pitas is a professor at the Aristotle University of Thessaloniki – AUTH and the Chair of the International AI Doctoral Academy (AIDA), a leading pan-European AI studies instrument.
At Euronews, we believe all views matter. Contact us at email@example.com to send pitches or submissions and be part of the conversation.