A Construction Manual for Robots’ Ethical Systems: Requirements, Methods, Implementations
Robert Trappl (ed.)
Springer, 2015


This book will help researchers and engineers in the design of ethical systems for robots, addressing the philosophical questions that arise and exploring modern applications such as assistive robots and self-driving cars.
The contributing authors are among the leading academic and industrial researchers on this topic, and the book will be of value to researchers, graduate students, and practitioners engaged with robot design, artificial intelligence, and ethics.

© Springer International Publishing Switzerland 2015
Robert Trappl (ed.), A Construction Manual for Robots' Ethical Systems, Cognitive Technologies, DOI 10.1007/978-3-319-21548-8_1
1. Robots' Ethical Systems: From Asimov's Laws to Principlism, from Assistive Robots to Self-Driving Cars
Robert Trappl 1, 2
(1) Austrian Research Institute for Artificial Intelligence (OFAI), Freyung 6/6, 1010 Vienna, Austria
(2) Center for Brain Research, Medical University of Vienna, Spitalgasse 4, 1090 Vienna, Austria
Abstract
This chapter and the book's content should aid you in choosing and implementing an adequate ethical system for your robot in its designated field of activity.
Keywords
Ethical system · Assistive robot · Asimov's laws · Principlism · Self-driving cars
1.1 Introduction
In the winter of 2013, the driver of a school bus saw a deer crossing the road and turned sharply to avoid hitting it. The bus skidded off the snowy road, rolled down a steep meadow, and was finally stopped by tree trunks. Many of the schoolchildren were severely injured and had to be flown to a hospital; remarkably, none of them died. The report did not mention the fate of the deer.
Obviously, the driver made an ethical decision, though a wrong one. Many of our decisions are influenced by our ethics, but most of the time we are not aware of this fact. However, we are aware when we decide to act contrary to our moral standards.
When we develop robots to act, for example, as partners in our workplace or as companions when we are old or have special needs, they need to be equipped with ethical systems for at least two reasons: they should act cooperatively, especially in complex social situations, and they should understand human decisions.
A distinction has to be made between implicit and explicit ethical systems: every robot must, especially in complex social environments, follow ethical principles. However, its ethics can either follow implicitly from the decision processes implemented, or its actions can be the consequence of an explicitly designed ethical system within the robot.
It should be stressed that the ethical principles for robots and those for designers, developers, and those who deploy robots need not be identical. This book is concerned with explicit ethical systems for robots.
Furthermore, the ethical system for a robot which is a companion for an older person or a person with special needs will differ from the ethical system needed for a self-driving car: in the first instance, the robot and the human are interacting on a body-to-body basis; in the second case, the human is inside the body of the robot!
While several books recently published give excellent overviews of the research into ethics and robots, e.g., [], this book aims at helping a designer or a developer of robots for a specific purpose to select appropriate ethical rules or an ethical system, and it shows different ways of implementing these.
To check whether a robot acts ethically in a consistent way, we can test its decisions/actions with the comparative moral Turing test proposed by Allen et al. []: an evaluator judges decisions made by a human and decisions made by the robot in the same situations requiring ethical decisions. If the evaluator cannot correctly determine who made which decision in more than 50 % of the cases, the robot passes the test.
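
As an illustration, here is a minimal sketch of how such a test outcome could be scored; the data structure is a hypothetical stand-in for the evaluator's judgements, not something specified by Allen et al.:

```python
def passes_moral_turing_test(judgements):
    """judgements: one boolean per test case, True where the evaluator
    correctly identified whether the human or the robot decided."""
    correct = sum(judgements)
    # The robot passes if the evaluator is right at most half the time,
    # i.e., cannot reliably distinguish the robot's decisions from a human's.
    return correct / len(judgements) <= 0.5

# Example: the evaluator is correct in 9 of 20 cases, so the robot passes.
print(passes_moral_turing_test([True] * 9 + [False] * 11))  # True
```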
This introductory chapter is divided into three sections: Ethical Systems Usable for Robots; Platforms for Implementation; and Areas for Deployment.
1.2 Ethical Systems Usable for Robots
This section partially follows the descriptions in Anderson and Anderson [].
The first ethical system proposed for robots appeared in the story "Runaround" by the American author Isaac Asimov, who later, in 1950, included it in the story collection I, Robot []:
  • Law One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Law Two: A robot must obey orders given to it by human beings, except when such orders conflict with Law One.
  • Law Three: A robot must protect its own existence as long as such protection does not conflict with Law One or Law Two.
Later, Isaac Asimov added one more law, which he named Law Zero and which precedes Law One (naturally, the "except" phrases had to be adapted accordingly); a code sketch of this precedence ordering follows the list:
  • Law Zero: A robot may not injure humanity or, through inaction, allow humanity to come to harm.
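
To make the strict priority ordering of the four laws concrete, here is a minimal sketch; the boolean predicates on actions are hypothetical placeholders (deriving them from perception and prediction is the hard part):

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical predicates a robot would have to evaluate per action.
    harms_humanity: bool = False  # would violate Law Zero
    harms_human: bool = False     # would violate Law One
    disobeys_order: bool = False  # would violate Law Two
    endangers_self: bool = False  # would violate Law Three

def choose(actions):
    # Lexicographic minimisation encodes the precedence: an action violating
    # only Laws Two and Three is preferred over one violating Law One, which
    # also captures the "except when such orders conflict" clauses.
    return min(actions, key=lambda a: (a.harms_humanity, a.harms_human,
                                       a.disobeys_order, a.endangers_self))

refuse = Action(disobeys_order=True)  # refuse an order
comply = Action(harms_human=True)     # obey, but injure a bystander
print(choose([refuse, comply]) is refuse)  # True: Law One outranks Law Two
```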
A second ethical system is based on Jeremy Bentham's utilitarianism []: its imperative is to act in such a way that the maximum good for all persons involved is obtained. To act means to select the appropriate action from all possible actions. However, "good" is difficult to define; therefore, in most applications it is replaced by utility, a notion widely used by decision and game theorists. In their experiments, life is simplified by expressing utilities in dollars or euros, thus making utility measurable on a ratio scale, at least within some limited range.
In addition to the problems of measuring utilities, there is the problem of estimating the probability with which a person will actually experience this utility: all of us have probably seen the disappointment of persons we assumed we knew well when we gave them the wrong present. Nevertheless, we can risk selecting the optimal action by computing, for each potential action, the sum over all persons of the utility for that person times the probability that the person experiences this utility, and then choosing the action with the largest sum.
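
This maximum-expected-utility rule is easy to state in code; in the following minimal sketch the actions, utilities, and probabilities are invented purely for illustration:

```python
def expected_utility(action):
    # Sum over all persons involved of utility × probability that the
    # person actually experiences that utility.
    return sum(u * p for u, p in action["outcomes"])

actions = [
    # "outcomes" lists one (utility, probability) pair per person.
    {"name": "give_book",  "outcomes": [(10, 0.9), (4, 0.5)]},
    {"name": "give_socks", "outcomes": [(8, 0.3), (6, 0.4)]},
]

best = max(actions, key=expected_utility)
print(best["name"], expected_utility(best))  # give_book 11.0
```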
A third ethical system that may be implemented in robots originates in the realm of medicine. Medicine was probably the first discipline with a professional ethics, because physicians' decisions can have deadly consequences; but medical ethics also seems easier to formulate than that of other professions. What, for example, would be an appropriate ethical system for the actions of lawyers? To win all cases, even when you believe your client is guilty? To earn the largest amount of money? Both goals are probably related. Or to accept only poor people as clients? That sounds far more difficult than the case of medicine.
This ethical system, called principlism [], consists of four ethical principles; a toy sketch of how they might be combined follows the list:
Autonomy: Respect the autonomy of the person. Not so long ago, physicians decided on a therapy, be it conservative treatment or surgery, without asking the patients, because they thought they knew better. Today it is impossible, for example, to begin a surgical intervention without explaining its potential risks to the patient in detail. In addition, patients have to sign a declaration of informed consent.
Beneficence: Your action should bring benefit to the person.
Nonmaleficence: Your action should not harm the person. This is a condensed, one-word version of the Latin commandment "Primum non nocere", in English "Above all, do no harm".
Justice: At first glance, a quite surprising principle; however, it is an equally important one: consider in your action the social (= fair) distribution of benefits and burdens.
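
One possible, entirely hypothetical way to operationalise the four principles is to treat nonmaleficence and autonomy as hard constraints and rank the remaining actions by beneficence and justice; nothing in principlism itself prescribes this particular combination:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    harms: bool       # would violate nonmaleficence
    consented: bool   # autonomy: the person gave informed consent
    benefit: float    # beneficence score (hypothetical scale)
    fairness: float   # justice: fairness of benefits and burdens

def choose(options):
    # Hard constraints first: never harm, never override consent; then
    # maximise an (arbitrary) unweighted sum of beneficence and justice.
    admissible = [o for o in options if not o.harms and o.consented]
    return max(admissible, key=lambda o: o.benefit + o.fairness, default=None)

print(choose([
    Option("surgery_without_consent", harms=False, consented=False, benefit=9, fairness=5),
    Option("conservative_therapy",    harms=False, consented=True,  benefit=6, fairness=7),
]).name)  # conservative_therapy
```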
Other approaches to ethical systems have been proposed; for both quotations and applications, see, for example, Madl and Franklin [].
1.3 Platforms for Implementation
One possible way to implement an ethical system would be to use the Robot Operating System (ROS), originally developed in 2007 by the Stanford Artificial Intelligence Laboratory for the STAIR (STanford AI Robot) project. STAIR's goal description begins with this sentence on its homepage: [...].
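
By way of illustration, here is a minimal sketch of how an explicit ethical layer could be placed inside a ROS node: a node that subscribes to proposed motion commands and forwards only those passing an ethical check. The topic names and the simple veto rule are hypothetical, not taken from the book or prescribed by ROS:

```python
#!/usr/bin/env python
# Minimal "ethical governor" sketch for ROS 1 (rospy); the topic names
# and the nonmaleficence-style speed veto are hypothetical illustrations.
import rospy
from geometry_msgs.msg import Twist

MAX_SAFE_SPEED = 0.5  # m/s; placeholder threshold for "do not harm"

def on_proposed_cmd(msg):
    # Veto commands exceeding the safe speed; forward everything else.
    if abs(msg.linear.x) > MAX_SAFE_SPEED:
        rospy.logwarn("Ethical layer vetoed an unsafe velocity command")
        return
    safe_pub.publish(msg)

rospy.init_node("ethical_governor")
safe_pub = rospy.Publisher("cmd_vel", Twist, queue_size=10)
rospy.Subscriber("proposed_cmd_vel", Twist, on_proposed_cmd)
rospy.spin()  # hand control to ROS; the callback filters each command
```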