How can we programme a robot to behave morally when we don’t have a working definition of morality ourselves? This is one of the many questions raised by the field of artificial intelligence and the development of advanced robots.
While it might sound like something out of a science fiction novel, artificial intelligence has quickly become a facet of our daily lives. Consider Siri or Google Now on our smartphones, or Amazon’s Alexa. Rudimentary as they are today, these so-called “smart assistants” represent the future of artificial intelligence.
The next leap will be the creation of robots programmed to display varying levels of empathy, morality, ethical judgement and perhaps even emotion. The popular television series Westworld – in which guests pay to interact with robots in an adventure park, and the boundary between who is a robot and who is not is blurry – explores a vision of artificial intelligence that might not be very far away.
Before we can make a constructive leap into this future of robotic assistance, there are several pressing matters we must attend to. For one, the question of our own morality. The best way into this vexed issue is to look at the debate over Tesla’s safety algorithms for its self-driving cars.
As part of the sophisticated computer system that allows these cars to drive themselves, engineers at Tesla have had to programme cars to behave morally in certain dangerous scenarios. For example, if an emergency forced the car to choose between hitting a crowd of people and hitting a single person, what should it do? What would a human do in such a situation? Would that person’s own ethical beliefs factor into the decision-making process?
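To see why encoding such a choice in software is so contentious, consider a deliberately naive sketch. This is entirely hypothetical, not Tesla’s actual system: just a rule that minimises the number of people put at risk. Even this toy example forces someone, somewhere, to decide that lives can be reduced to comparable numbers.

```python
# Hypothetical illustration only — NOT any real manufacturer's algorithm.
# A crude "utilitarian" rule: pick the action whose outcome puts the
# fewest people at risk. The scoring itself is the moral judgement
# the article is questioning.

def choose_action(outcomes):
    """Return the action with the lowest estimated harm.

    `outcomes` maps an action name to the number of people at risk.
    Collapsing morality into a single number is precisely the problem:
    a programmer has to decide how each outcome is scored.
    """
    return min(outcomes, key=outcomes.get)

# Hypothetical emergency: swerve toward one person, or stay on course
# into a crowd of five.
decision = choose_action({"swerve": 1, "stay_course": 5})
```

Here the rule mechanically selects "swerve" because 1 < 5, but nothing in the code justifies why counting heads is the right metric, or who gets to choose it.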
There are no clear answers to these questions because there is no single universal code of ethics or morality. However, decisions are being made by private corporations such as Tesla, Apple and Google that will define how artificial intelligence behaves and is created. That these companies operate according to their monetary interests should give us all a moment of pause given the gravity of the issues they are deciding.
Of course, one could say that only the best algorithm or technology will win the market and truly influence the industry. The iPod changed the world because it was better than any other MP3 player of its day.
But when it comes to morality and the codification of how future robots behave, the market pleaser is not always the best option for humanity. There is at least a healthy debate taking shape, spurred on by television series such as Westworld and the rapid growth of AI on our smartphones. If we take our watchful eye off the industry for too long, decisions that will greatly affect our lives will surely be made without us. Perhaps the debate about robot morality will lead to a breakthrough in understanding our own humanity.