
'Please Alexa': are we beginning to recognise the rights of intelligent machines?

Robot rights! Image: Shutterstock

Amazon has recently developed an option whereby Alexa will only activate if people address it with a “please”. This suggests that we are starting to recognise some intelligent machines in a way that was previously reserved only for humans. In fact, this could very well be the first step towards recognising the rights of machines.

Machines are becoming part of the fabric of everyday life. Whether it is the complex technology we embed inside our bodies or the machines around us, the line between human and machine is blurring. As machines grow more intelligent, we need to start discussing whether it will soon be time to recognise the rights of robots, as much for our sake as for theirs.

When someone says they have a “right” to something, they are usually claiming, or expecting, that something should be a certain way. Just as important as the rights themselves, though, are the foundations on which they rest. Rights rely on intricate frameworks such as law and morality, and sometimes those frameworks are not clear cut. In human rights law, for instance, strong moral values such as dignity and equality inform legal rights.

So rights are often founded upon human principles. This partly explains why we have recognised the rights of animals: we consider it ethically wrong to torture or starve them, so we create laws against it. As intelligent machines weave further into our lives, there is a good chance that the same human principles will force us to recognise that they too deserve rights.

But you might argue that animals differ from machines in that they have some sort of conscious experience. And it is true that consciousness and subjective experience are important, particularly to human rights. Article 1 of the Universal Declaration of Human Rights 1948, for example, says all human beings “are endowed with reason and conscience and should act towards one another in a spirit of brotherhood”.

However, consciousness is not the only possible basis for rights. In New Zealand and Ecuador, rivers have been granted legal rights because humans deemed their very existence to be important. So rights do not emerge only from consciousness; they can rest on other criteria too. There is no one correct type or form of rights, and human rights are not the only rights.

As machines become ever more complex and intelligent, simply discarding or destroying them without asking any questions about their moral and physical integrity seems ethically wrong. Like rivers, they too should receive rights because of what they mean to us.

The Whanganui river in New Zealand has been granted the same rights as humans. Image: Duane Wilkins, CC BY-SA

Imagine a complex and independent machine that has provided health care to a patient over a long period. It resembles a person and converses in natural speech, and over time the machine and the patient have built a close relationship. Then, after years of service, the company that made the machine decides it is time to switch off and discard this perfectly working carer. It seems ethically wrong to simply dispose of an intelligent machine that has kept the patient alive and built a relationship with them, without even entertaining its right to integrity, among other rights.

This might seem absurd, but imagine for a moment that it is you who has built a deep and meaningful relationship with this intelligent machine. Wouldn’t you be desperately looking for a way to stop it being turned off and your relationship being lost? It is as much for our own human sake as for the sake of intelligent machines that we ought to recognise their rights.

Sexbots are a good example. The UK’s sexual offences law exists to protect the sexual autonomy of the human victim. But it also exists to ensure that people respect sexual autonomy as a value: the right of a person to control their own body and their own sexual activity.

But the definition of consent in section 74 of the Sexual Offences Act 2003 refers specifically to “persons”, not machines. So right now a person can do whatever they wish to a sexbot, including torture it. There is something troubling about this, and not because we believe sexbots to have consciousness. Rather, it is probably because, by allowing people to torture robots, the law stops ensuring that people respect the values of personal and sexual autonomy that we consider important.

These examples show that there is a discussion to be had over the rights of intelligent machines. And as we rapidly enter an age in which such examples will no longer be hypothetical, the law must keep up.

Matter of respect

We are already recognising complex machines in a manner previously reserved for humans and animals. We feel that our children should be polite to Alexa because, if they are not, it will erode our own notions of respect and dignity. Unconsciously, we already recognise that how we communicate with and respect intelligent machines will shape how we communicate with and respect humans. If we do not extend recognition to intelligent machines, it will affect how we treat and regard one another.

Machines are weaving their way into our world. Google’s recent experiment with natural language assistants, in which the AI sounded eerily like a human, gave us a glimpse of this future. One day it may become impossible to tell whether we are interacting with a machine or a human. When that day comes, rights may have to adapt to include machines as well.

This article was originally published on The Conversation. Read the original article.


Paresh Kathrani does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.