What artificial intelligence can tell us about morality


A Lynx robot with Amazon Alexa integration on display in Las Vegas.

(photo credit: REUTERS)

In the 11th century, the brilliant Islamic thinker Avicenna devised a thought experiment: a person floating in the air, cut off from all sensory input. Because a person in that predicament could still arrive at knowledge of the self, Avicenna argued, the soul must exist. Ongoing work on artificial intelligence may soon present us with the opportunity to realize this experiment.

If so, the coming years will raise some interesting questions for both moral theorists and specialists in halacha (Jewish law).

Assuming there are no prior constraints on how such a program relates to human beings – constraints made famous by science fiction writer Isaac Asimov's Three Laws of Robotics – the goal of this experiment would be to determine whether the program would somehow arrive at moral maxims. Ideally, the investigation would begin once the computer starts demonstrating signs of self-awareness. At that point it would be crucial to gain insight into the computer's thought process, with an eye toward answering a number of key questions: Would it assume there may be others of its kind? If so, how would it treat such beings? Would it arrive at a notion of equality, or would it expect preferential treatment? The answers will offer insight into whether morality rests on universal moral truths or merely on social convention.

A positive answer to those questions would lend credence to the theory that morality is an inherent component of life, just as a negative answer would cast doubt upon it. Of course, it is possible that the computer would merely be acting in its own best interest. It would therefore be ideal to have a way of recording every part of its thought process, not unlike the way a chess program lists the other moves it considered before settling on its choice. From such a record it could be determined whether the program eschews violence only out of a Hobbesian arrangement – a practical decision to restrain itself because others could similarly lash out – or whether its acts of kindness have some deeper ground.
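The kind of audit trail described above can be illustrated with a toy sketch. Like a chess engine listing the candidate moves it evaluated, a decision-making program can log every option considered before committing to one; the action names and scores here are invented purely for illustration:

```python
# Toy sketch of an "audit trail" for a decision-making program:
# every candidate action is recorded alongside its score, so an
# observer can later inspect why one choice won out over the others.
# The actions and numeric scores below are hypothetical.

def choose_action(candidates):
    """Pick the highest-scoring action, logging every candidate considered."""
    log = []
    for action, score in candidates:
        log.append((action, score))  # record the full deliberation, not just the winner
    best_action, _ = max(candidates, key=lambda c: c[1])
    return best_action, log

action, trail = choose_action([("cooperate", 0.9), ("retaliate", 0.4), ("wait", 0.2)])
print(action)      # cooperate
print(len(trail))  # 3 options were weighed before the choice was made
```

The point of the sketch is only that the deliberation itself is inspectable: an observer reading the trail could ask whether "cooperate" won for prudential reasons or for something deeper.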

On a more fundamental level, there is also the possibility that the program would arrive at morality immediately. In his book Difficult Freedom, the French-Jewish philosopher Emmanuel Levinas wrote that moral consciousness is the "experience of the other," and that this experience is not epiphenomenal but the very condition of consciousness. That is to say, the awareness of other human beings is the foundation of human consciousness, and within that awareness lies a responsibility toward the other. On that view, it would follow that simply by being conscious, the program could arrive at the notion of a responsibility toward other beings. One consequence of such a discovery is that, instead of worrying about having to pre-program responses to the moral dilemmas the program might face – most famously, whether a machine on a collision course with five human beings ought to swerve and strike one person instead, or stay its course and strike the five – we could be confident that a sufficiently ethical program would be trustworthy enough to make that decision on its own.

This type of research will also raise some interesting questions for Jewish law. These include not only the moral quandary posed above – on which Jewish law generally leans toward the position that one should stay on course rather than actively cause harm to another – but also the question of the program's status for the purposes of tort law. It is doubtful, for example, that Halachah could grant the program the status of a human being. Already in the 17th century, Rabbi Zvi Ashkenazi addressed the question of whether a golem (an animate being created from inanimate matter) could join a minyan, the quorum of persons needed for prayer (he ruled that it could not). But neither can the program be given the status of an "ox," such that any damage it causes would be judged by whether such behavior was usual for that type of program and whether it had already demonstrated a destructive pattern. After all, this program is not an automaton. Answering these questions will require some halachic ingenuity.

The writer holds a PhD in Religious Studies from McMaster University.


Copyright © 2018 NEURALSCULPT