Alun Morgan

Is it possible to achieve robot ethics when humans providing the framework are inherently flawed?

It has been over 80 years since Jorge Luis Borges published his short story “The Library of Babel,” and now the virtual library is open to visit. Borges described a theoretical library of books that, together, contain every possible combination of the letters of the alphabet, with a few provisos and limited punctuation. The idea was that this library would contain every book, article, song, and play that has been, or ever could be, written, buried among an overwhelming quantity of apparently meaningless material.

It’s a mind-boggling concept, used to explore ideas of time, meaning, the human condition (behavior, frailty, the shortness of life) and our place in the universe. The library was, of course, imaginary; Borges never expected it to exist. Now, leveraging the computing power available today, the website libraryofbabel.info has brought the literary concept to life as a virtual “universe.”
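For a sense of the scale involved, the arithmetic can be sketched in a few lines of Python, using the dimensions Borges gives in the story: 410 pages per book, 40 lines per page, 80 characters per line, drawn from 25 orthographic symbols.

    import math

    SYMBOLS = 25                    # 22 letters, plus comma, period and space
    CHARS_PER_BOOK = 410 * 40 * 80  # = 1,312,000 characters per book

    # The library holds one book for every possible character sequence,
    # i.e., 25 ** 1,312,000 volumes. Rather than materialize that integer,
    # count its decimal digits with a logarithm.
    digits = math.floor(CHARS_PER_BOOK * math.log10(SYMBOLS)) + 1
    print(f"Each book holds {CHARS_PER_BOOK:,} characters.")
    print(f"The number of distinct books has {digits:,} digits.")

The count of books comes to a number with roughly 1.8 million digits; for comparison, the number of atoms in the observable universe has about 80.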

Seven years from now, the era of artificial general intelligence (AGI) will begin, according to Ray Kurzweil. AIs trained for specific tasks such as image, pattern or speech recognition are already deployed, routinely assisting with demanding tasks in industry, medicine, financial analysis, photography and more. Kurzweil has said that by 2029 a machine will be able to pass the Turing test, the so-called imitation game, in which a human interrogator questioning both a machine and a human should be unable to distinguish between the two based on their responses.

According to popular science fiction plots, the inevitable result is that superintelligent machines will outpace and outsmart humans, take over, and enslave us. Kurzweil, however, argues the dominant use case will be to augment our own intelligence. We can already see this trend in the way we use machines such as our mobile phones to accomplish tasks that would be impossible, or extremely difficult, for humans to do otherwise.

Clearly, not all AIs will exist in our pockets or in the cloud. As robots become infused with increasing levels of intelligence, solutions are needed to prevent them from harming humans, and vice versa. In his 1942 story “Runaround,” Isaac Asimov proposed built-in safety features based on his Three Laws of Robotics, putting forward an alternative to the prevailing view of robots as destroyers of humanity.

For Asimov, these rules were a commentary and a fictional plot device. We are now approaching the time when intelligent robots will enter society, and a suitable set of rules is needed. As long ago as 2007, work was announced in South Korea to establish a Robot Ethics Charter covering aspects including the design, manufacture and use of robots. The charter is expected not only to enshrine human safety and the protection of any data or knowledge a robot acquires, but also to prevent humans from abusing robots.

The European Union is also in the process of developing ethical rules and is keen to ensure robotics and AI develop in accordance with European social values. Those who write the rules for robotics will have a powerful influence on the way we live in the future.

The desire for control is intrinsic to human nature, and ceding control to intelligent machines directly challenges that aspect of ourselves. There is outrage when an autonomously driven vehicle is involved in a collision, even though human error causes far more accidents on the roads and in industry. An embarrassing driverless train crash during the construction of London’s Docklands Light Railway turned out to have occurred during unauthorized testing, with the train in manual mode and before proper buffer stops had been installed. Similarly, Waymo’s autonomous vehicles have completed millions of incident-free test miles on public roads and have been involved in relatively few accidents, the majority of which involved human-driven vehicles striking Waymo vehicles.

Once, a manufacturing team I worked with manually overrode a process-monitoring pressure sensor, which ultimately resulted in a gas escape and a subsequent system failure. Had the automated system been left to operate as intended, the failure would not have occurred.

However, the question of whether machines make better decisions than humans remains valid. We can at least ensure that intelligent machines make consistent decisions, according to the way they have been programmed. Ideally, that programming should be the result of a deeply considered design process, one that weighs multiple variables and explores the ramifications of each response to specific circumstances. The work on robot ethics seeks to give design teams a framework for ensuring the machine’s response will be appropriate in every situation. But can this really be achieved?
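To make the point about consistency concrete, consider a deliberately toy sketch in Python of machine ethics as a strict priority ordering, loosely in the spirit of Asimov’s laws. Every name and rule here is hypothetical and purely illustrative:

    from dataclasses import dataclass

    # Toy model: an action is reduced to flags that a real robot would
    # have to infer from a rich model of the world -- which is the hard part.
    @dataclass
    class Action:
        name: str
        harms_human: bool
        disobeys_order: bool
        endangers_self: bool

    def permitted(action: Action) -> bool:
        """Veto the action if it violates any rule, checked in priority order."""
        if action.harms_human:     # Rule 1: never injure a human
            return False
        if action.disobeys_order:  # Rule 2: obey human orders
            return False
        if action.endangers_self:  # Rule 3: preserve yourself
            return False
        return True

    print(permitted(Action("fetch a tool", False, False, False)))     # True
    print(permitted(Action("push a bystander", True, False, False)))  # False

A machine running such rules is perfectly consistent: the same inputs always produce the same verdict. But consistency is not adequacy. Everything difficult hides inside those booleans, such as deciding what counts as harm when every available action harms someone.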

Asimov’s rules, although elegant in their simplicity, are almost certainly inadequate to cover all the situations that will confront the intelligent machines of the 21st century. To achieve a broad consensus, any charter of robot ethics is likely to be a complex and weighty document. Human decisions can be emotional, biased, inconsistent, even negligent. It is no small challenge for us, as imperfect creatures, to create intelligent machines that enshrine human values while eliminating our own faults.

ALUN MORGAN is technology ambassador at Ventec International Group (ventec-group.com).
