Ethical Implications of Self-Improving Intelligent Robots

There is substantial concern among the general public, and even among some experts, about the perceived threat of artificial intelligence (AI) running amok: improving itself until it outstrips human intelligence and negates any possibility of being shut down. This proposal sets out a research project to investigate and test the safety and controllability of such ‘smart’ AI, in particular in intelligent robots.
This project aims to investigate the technical possibilities and philosophical implications of ensuring that general AI is safe and controllable.

Abstract
Our project started with very broad discussions on the interface of Artificial Intelligence and Philosophy. These discussions led to the main conclusion that robot designers use many philosophical concepts to design their robots. Robots have ‘knowledge’ and ‘understanding’. They will have their own ‘motivations’ and ‘desires’, which will influence their autonomous ‘choices’. These words used to describe contemporary robotics carry a great deal of philosophical baggage, which makes the debate about the ethical implications of these robots extremely difficult. For example, a robot generating and acting upon its own desires indeed sounds like it could be a major ethical problem; however, what does the design of a robot that generates its own desires look like? How does a ‘desire’ get translated into code? Scott summarised these misconceptions as three epistemological gaps (the Ethicist’s Gap, the Roboticist’s Gap and the Public Gap) and presented his paper at the International Conference on Social Robotics (ICSR). Having established the three epistemological gaps, our main focus became describing an experiment that could bridge the gap between roboticists and ethicists. The experiment should pursue the following goals: (1) use a key scenario from ethics to describe the choices a robot could have, (2) use a scenario simple enough to inform philosophers about the current techniques used in self-learning robots, and (3) give insights into the ethical implications of self-improving robots.
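To make that last question concrete, the following is a purely illustrative sketch, not any actual robot design from the project, of one way a ‘desire’ could be translated into code: as a set of weighted internal drives that a utility function uses to rank candidate actions. All drive names, actions and numbers below are hypothetical.

# Illustrative only: a 'desire' expressed as weighted internal drives
# that a utility function uses to rank candidate actions.

def utility(action_effects, drives):
    """Score an action by how well its predicted effects satisfy the current drives."""
    return sum(weight * action_effects.get(drive, 0.0) for drive, weight in drives.items())

def choose_action(actions, drives):
    """Pick the action whose predicted effects best satisfy the robot's drives."""
    return max(actions, key=lambda name: utility(actions[name], drives))

# The robot's current 'desires': numeric weights over internal drives.
drives = {"curiosity": 0.7, "energy": 0.3}

# Candidate actions and their predicted effects on each drive.
actions = {
    "explore_new_room": {"curiosity": 0.9, "energy": -0.4},
    "return_to_charger": {"curiosity": 0.0, "energy": 0.8},
}

print(choose_action(actions, drives))   # prints 'explore_new_room'

In a design of this kind, a robot ‘generating its own desires’ would amount to the system adjusting these weights itself, for instance in response to experience, which is a far more modest mechanism than the everyday meaning of the word suggests.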
For the experiments, we converged on the classic Tunnel Problem scenario, in which the robot is a self-driving car that learns through evolutionary mechanisms. The car has to choose between saving its passenger by running over a child, or swerving to avoid the child and killing the passenger by driving off the road. This experiment demonstrates how such a self-learning robot can be programmed and how it learns. While this scenario already invites much discussion, we extended our experiments to a two-car scenario that includes the dynamics of the well-known prisoner’s dilemma. We are in the process of analysing these results and will thereafter submit the paper to a journal.
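By way of illustration only, and under our own assumptions rather than as a description of the actual experiment, the sketch below shows the kind of evolutionary mechanism meant here: a population of simple steering policies is scored by a fitness function over simulated Tunnel Problem encounters, and the best policies are selected and mutated. Every cost value in the fitness function is a hypothetical ethical weighting, which is precisely where the ethical implications enter the engineering.

import random

FEATURES = 3              # e.g. [distance_to_child, speed, passenger_on_board]
PASSENGER_COST = -1.0     # hypothetical weighting of the passenger's death
CHILD_COST = -1.0         # hypothetical weighting of the child's death

def decide(weights, features):
    """Return True to swerve off the road, False to continue through the tunnel."""
    return sum(w * f for w, f in zip(weights, features)) > 0

def fitness(weights, scenarios):
    """Score a policy over sampled scenarios; all payoffs are illustrative only."""
    total = 0.0
    for features, child_present in scenarios:
        swerve = decide(weights, features)
        if child_present:
            # The dilemma: swerving kills the passenger, continuing kills the
            # child. The chosen costs are an ethical judgement baked into learning.
            total += PASSENGER_COST if swerve else CHILD_COST
        else:
            total += -0.5 if swerve else 1.0   # a needless swerve is merely costly
    return total

def evolve(generations=50, pop_size=30, n_scenarios=100):
    """Truncation selection plus Gaussian mutation over a population of policies."""
    scenarios = [([random.random() for _ in range(FEATURES)], random.random() < 0.5)
                 for _ in range(n_scenarios)]
    population = [[random.uniform(-1, 1) for _ in range(FEATURES)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, scenarios), reverse=True)
        parents = population[: pop_size // 2]
        children = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return population[0]

print("fittest policy weights:", evolve())

The two-car extension can be framed in the same way, with each car’s fitness depending on the other car’s choice through a prisoner’s-dilemma-style payoff matrix.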