The consequences of our blind faith in Artificial Intelligence are catching up to us

There is growing enthusiasm for Artificial Intelligence (AI) and its capacity to drastically transform business performance and streamline outcomes in public services.

As great as that hunger for innovation sounds, in reality pivots towards AI are typically coupled with a serious lack of understanding of the new technology's dangers and limitations.

Authorities, especially, are beginning to get carried away with the potential of AI. But are they considering and introducing sufficient measures to avoid harm and injustice?

Organisations across the globe have been falling over themselves to introduce AI to projects and products. From facial and object recognition in China to machines that can diagnose diseases more accurately than doctors in America, AI has reached the UK’s shores and grown exponentially in the past few years.

Predictably, this new era of technological innovation, exciting as it is, also raises serious ethical questions, especially when applied to the most vulnerable in society.

My own PhD research project involves developing a system for the early detection of depressive disorders in prisoners, as well as analysing the ethical implications of using algorithms to diagnose something as sensitive as mental health issues in a vulnerable group. Essentially, I am asking two questions: “can I do it?” and “should I do it?”

Most engineers and data scientists have been working with a powerful tool called machine learning, which offers more sophisticated and accurate predictions than simple statistical projections. Machine learning algorithms are already commonplace – like the one Netflix employs to recommend shows to its users, or the ones that make you see “relevant” ads wherever you go online. More advanced systems, such as the computer vision used in facial recognition and the natural language processing used in virtual assistants like Alexa and Siri, are also being developed and tested at a fast pace.
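To make the idea concrete, here is a deliberately simplified sketch in Python – invented data, the open-source scikit-learn library, and a made-up “will this user watch a recommended show?” question standing in for any real recommendation system:

```python
# A simplified sketch, not any real service's code: a classifier trained on
# invented viewing data to predict whether a user will watch a recommended show.
# All features and numbers here are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

hours_watched = rng.uniform(0, 10, 200)      # hypothetical: hours watched last week
similar_finished = rng.uniform(0, 1, 200)    # hypothetical: share of similar shows finished
X = np.column_stack([hours_watched, similar_finished])

# Invented stand-in for real behaviour: engaged viewers tend to accept recommendations.
y = (0.3 * hours_watched + 2.0 * similar_finished + rng.normal(0, 1, 200) > 2.5).astype(int)

model = LogisticRegression().fit(X, y)

new_user = [[6.5, 0.8]]                      # a hypothetical new user
print(model.predict_proba(new_user)[0, 1])   # the learned probability that they will watch
```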

Slowly but surely, machine learning has also been creeping into and helping to shape public policy – in healthcare, policing, probation services and other areas. But are crucial questions being asked about the ethics of using this technology on the general population?

Imagine the potential cost of being a “false positive” in a machine’s prediction about a key aspect of life. Imagine being wrongly earmarked by a police force as someone likely to commit a crime, based on an algorithm’s learned picture of a reality it doesn’t really “understand”. Those are risks we might all be exposed to sooner than we think.
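A rough, back-of-the-envelope illustration shows why this matters. The numbers below are invented, but they capture a well-known statistical trap: when the predicted behaviour is rare, even a seemingly accurate model flags far more innocent people than genuine cases.

```python
# Invented numbers only: a "false positive paradox" illustration of crime prediction.
population = 1_000_000
base_rate = 0.001           # assume 0.1% of people would actually go on to offend
sensitivity = 0.90          # assume the model catches 90% of genuine cases
false_positive_rate = 0.05  # and wrongly flags 5% of everyone else

true_positives = population * base_rate * sensitivity
false_positives = population * (1 - base_rate) * false_positive_rate

print(f"correctly flagged: {true_positives:.0f}")
print(f"wrongly flagged:   {false_positives:.0f}")
# With these assumed figures, roughly 50,000 people are wrongly earmarked for
# every 900 correctly identified - fewer than 2 per cent of flags are right.
```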

For instance, West Midlands Police recently announced the development of a system called NAS (National Analytics Solution): a predictive model to "guess" the likelihood of someone committing a crime.

This initiative fits into the National Police Chiefs’ Council’s push to introduce data-driven policing, as set out in their plan for the next 10 years, Policing Vision 2025. Despite concerns expressed by an ethics panel from the Alan Turing Institute in a recent report, which include warnings about "surveillance and autonomy and the potential reversal of the presumption of innocence," West Midlands Police are pressing on with the system.

Similarly, the National Offender Management Service’s (NOMS) OASys tool, used to assess the risk of recidivism in offenders, has been relying increasingly on automation, although human input still takes precedence in final decisions.

The trend, however, as seen in the American justice system, is to move away from requiring human insight and to allow machines to make decisions unaided. But can data – raw, dry, technical information about a human being’s behaviour – be the sole indicator used to predict future behaviour?

A number of machine learning academics and practitioners have recently raised the issue of bias in algorithms’ “decisions,” and rightly so. If the only data available to “teach” machines about reoffending consistently shows offenders from particular ethnic backgrounds, for instance, being more likely to enter the criminal justice system, and to stay in it, a machine may treat that pattern as a universal truth to be applied to any individual who fits the demographic, regardless of context and circumstances.
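A small sketch with synthetic data – no real dataset, no deployed system – shows how easily this happens: if one group is over-represented in the historical records, a standard model reproduces that imbalance as “risk”, even for two individuals whose underlying behaviour is identical.

```python
# Illustrative synthetic data only: biased labels produce a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, n)        # 0 or 1: a stand-in demographic label
behaviour = rng.normal(0, 1, n)      # underlying behaviour, drawn identically for both groups

# Biased recording: group 1 is more likely to be *recorded* as reoffending
# for the same behaviour (e.g. through heavier policing of that group).
recorded = (behaviour + 1.5 * group + rng.normal(0, 1, n) > 1).astype(int)

model = LogisticRegression().fit(np.column_stack([group, behaviour]), recorded)

# Two individuals with identical behaviour but different group labels:
for g in (0, 1):
    p = model.predict_proba([[g, 0.0]])[0, 1]
    print(f"group {g}: predicted 'risk' = {p:.2f}")
# The model assigns higher "risk" to group 1 purely because of the biased labels,
# not because of anything the individual did differently.
```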

The lack of accountability is another conundrum afflicting the industry: with complex models there is often no practical way for humans to analyse the logic behind an algorithm’s decision – a problem known as the “black box” – so tracing a possible mistake in a machine’s prediction and correcting it is difficult.
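The contrast is easy to see in code. In the hypothetical example below, a simple linear model’s reasoning can be read directly from a handful of coefficients, while a more powerful ensemble of hundreds of decision trees offers no comparably straightforward account of why it scored one person as “high risk”.

```python
# A rough illustration of the "black box" point, on synthetic data.
# The model choices are arbitrary; the point is the difference in inspectability.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(0, 1, size=(1000, 5))
y = (X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.5, 1000) > 0).astype(int)

linear = LogisticRegression().fit(X, y)
forest = RandomForestClassifier(n_estimators=200).fit(X, y)

print("linear coefficients:", np.round(linear.coef_[0], 2))  # weights you can read directly
print("trees in the forest:", len(forest.estimators_))       # hundreds of interacting rule sets
# Explaining why the forest scored one individual as "high risk" means unpicking
# every one of those trees - the practical sense in which the decision is opaque.
```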

It is clear that algorithms cannot as yet act as a reliable substitute for human insight, and are also subject to human bias at the data collection and processing stages. Even though machine learning has been used successfully in healthcare, for example, where algorithms are capable of quickly analysing heaps of data, spotting hidden patterns and diagnosing diseases more accurately than humans, machines lack the insight and contextual knowledge to predict human behaviour.

It is vital that industry and government alike do not overlook the ethical implications of using AI. As they rush to enter the global AI race as serious players, they must not ignore the potential human cost of bad science.

Thais Portilho is a postgraduate researcher in criminology and computer science at the University of Leicester