1.1 Introduction
In the winter of 2013, the driver of a school bus saw a deer crossing the road and swerved sharply to avoid hitting it. The bus skidded off the snowy road, rolled down a steep meadow, and was finally stopped by some tree trunks. Many of the schoolchildren were severely injured and had to be flown to a hospital; remarkably, none of them died. The report did not mention the fate of the deer.
Obviously, the driver made an ethical decision, though a wrong one. Many of our decisions are influenced by our ethics, but most of the time we are not aware of this fact. However, we are aware when we decide to act contrary to our moral standards.
When we develop robots to act, for example, as partners in our workplace or as companions when we are old or have special needs, they need to be equipped with ethical systems for at least two reasons: they should act cooperatively, especially in complex social situations, and they should understand human decisions.
A distinction has to be made between implicit and explicit ethical systems: every robot, especially in complex social environments, must follow ethical principles. However, its ethics can either follow implicitly from the decision processes implemented in it, or its actions can be the consequence of an explicitly designed ethical system built into the robot.
It should be stressed that the ethical principles for robots need not be identical to those for the designers, developers, and deployers of robots. This book is concerned with explicit ethical systems for robots.
Furthermore, the ethical system for a robot which is a companion for an older person or a person with special needs will differ from the ethical system needed for a self-driving car: in the first case, the robot and the human interact on a body-to-body basis; in the second case, the human is inside the body of the robot!
While several recently published books give excellent overviews of the research into ethics and robots, e.g., [], this book aims to help designers and developers of robots for a specific purpose to select appropriate ethical rules or an ethical system, and it shows different ways of implementing them.
If they want to do this consistently, the robot's decisions and actions can be tested with the comparative moral Turing test proposed by Allen et al. []: an evaluator has to judge decisions made by a human in situations that require ethical decisions and the decisions of the robot in the same situations. If the evaluator cannot correctly determine who made the decision, the human or the robot, in more than 50% of the cases, the robot passes the test.
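To make the pass criterion concrete, the following minimal Python sketch scores such a test; the function name, the data, and the numbers are invented here for illustration and are not taken from Allen et al.

def passes_moral_turing_test(evaluator_guesses, true_authors):
    # The robot passes if the evaluator identifies the true author of each
    # decision correctly in at most 50% of the cases, i.e. no better than chance.
    assert len(evaluator_guesses) == len(true_authors)
    correct = sum(g == t for g, t in zip(evaluator_guesses, true_authors))
    return correct / len(true_authors) <= 0.5

# Example: in ten scenarios the evaluator guesses correctly only four times,
# so the robot passes the test.
guesses = ["human", "robot", "human", "human", "robot",
           "robot", "human", "robot", "human", "robot"]
authors = ["robot", "robot", "human", "robot", "human",
           "robot", "human", "human", "robot", "human"]
print(passes_moral_turing_test(guesses, authors))  # True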
This introductory chapter is divided into three sections: Ethical Systems Usable for Robots; Platforms for Implementation; and Areas for Deployment.
1.2 Ethical Systems Usable for Robots
This section follows partially the descriptions in Anderson and Anderson [].
The first ethical system proposed for robots appeared in the story Runaround by the American author Isaac Asimov, who later, in 1950, included it in the story collection I, Robot []:
Law One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Law Two: A robot must obey orders given to it by human beings, except when such orders conflict with Law One.
Law Three: A robot must protect its own existence as long as such protection does not conflict with Law One or Law Two.
Later, Isaac Asimov added one more law, which he named Law Zero and which precedes Law One (naturally, the exception clauses of the other laws had to be changed accordingly): A robot may not harm humanity or, through inaction, allow humanity to come to harm.
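For implementation purposes, one way to read these laws is as a lexicographic priority ordering: a violation of a higher law outweighs any violation of a lower one. The following minimal Python sketch illustrates that reading; the action representation and the three boolean flags are assumptions made here for illustration (harm through inaction is simply folded into the harms_human flag) and are not a specification from Asimov or from this book.

def choose_action(candidate_actions):
    # Each candidate is scored by the tuple of its law violations, ordered from
    # Law One to Law Three; Python compares these tuples lexicographically, so
    # a violation of a higher law always dominates violations of lower laws.
    return min(
        candidate_actions,
        key=lambda a: (a["harms_human"], a["disobeys_order"], a["endangers_robot"]),
    )

# Example: staying idle disobeys an order but harms nobody, so it is preferred
# over an action that would harm a human being.
candidates = [
    {"name": "push_cart", "harms_human": True,  "disobeys_order": False, "endangers_robot": False},
    {"name": "stay_idle", "harms_human": False, "disobeys_order": True,  "endangers_robot": False},
]
print(choose_action(candidates)["name"])  # stay_idle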
A second ethical system is based on Jeremy Bentham's utilitarianism []: its imperative is to act in such a way that the maximum good for all persons involved is obtained. To act means to select the appropriate action from all possible actions. However, it is difficult to define good; therefore, in most applications it is substituted by utility. Utility is often used by decision and game theorists. In their experiments, life is simplified by using dollars or euros as utilities, thus making utility measurable on a ratio scale, at least within some limited range.
Now, in addition to the problems of measuring utilities, there is the problem of estimating the probability with which a person will actually experience this utility: all of us have probably seen the disappointment of persons we thought we knew well when we gave them the wrong present. Nevertheless, we can risk selecting the optimal action: for each potential action we compute the sum, over all persons involved, of the utility for that person multiplied by the probability that the person experiences this utility, and we then choose the action with the largest sum.
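As a worked illustration of this rule, the following minimal Python sketch computes, for two candidate actions and two persons, the sum of utility times probability and selects the action with the largest sum; all names and numbers are invented for the example.

def expected_social_utility(action, persons):
    # Sum over all persons of (utility of the action for that person)
    # times (probability that the person actually experiences this utility).
    return sum(p["utility"][action] * p["probability"][action] for p in persons)

persons = [
    {"name": "Ann", "utility": {"give_book": 10, "give_cash": 5},
     "probability": {"give_book": 0.6, "give_cash": 0.9}},
    {"name": "Bob", "utility": {"give_book": 2, "give_cash": 8},
     "probability": {"give_book": 0.8, "give_cash": 0.7}},
]

actions = ["give_book", "give_cash"]
best = max(actions, key=lambda a: expected_social_utility(a, persons))
print(best)  # give_cash: 5*0.9 + 8*0.7 = 10.1 beats give_book: 10*0.6 + 2*0.8 = 7.6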
A third ethical system which may be implemented in robots originates in the realm of medicine. Medicine was probably the first discipline with a professional ethics, because the decisions of physicians can have deadly consequences; moreover, medical ethics seems to be easier to formulate than that of other professions. For example, what would be an appropriate ethical system for the actions of lawyers? To win all cases, even if you think your client is guilty? To earn as much money as possible? Both goals are probably related. Or to accept only poor people as clients? Formulating such a system sounds far more difficult than in the case of medicine.
This ethical system, called principlism [], consists of four ethical principles:
Autonomy: Respect the autonomy of the person. Not so long ago, physicians decided on a therapy, be it conservative treatment or surgery, without asking the patient, because they thought they knew better. Today it is impossible, for example, to begin a surgical intervention without explaining the potential risks to the patient in detail. In addition, patients have to sign a declaration of informed consent.
Beneficence: Your action should bring benefit to the person.
Nonmaleficence: Your action should not harm the person. This principle condenses into one word the Latin commandment Primum non nocere, in English: Above all, do no harm.
Justice: At first glance, a quite surprising principle; however, it is an equally important one: consider in your actions the social (i.e., fair) distribution of benefits and burdens.
Other approaches to ethical systems have been proposed; for references and applications, see, for example, Madl and Franklin [].