This book describes and defends a method for designing and evaluating ethics algorithms for autonomous machines, such as self-driving cars and search-and-rescue drones. Leben argues that such algorithms should be designed and evaluated according to how effectively they solve the problem of cooperation among self-interested organisms. Rather than simulating the psychological systems that have evolved to solve this problem, engineers should tackle the problem itself, taking lessons from our moral psychology when needed, but not being limited by it. To do this, Leben draws on the moral theory of John Rawls, arguing that normative moral theories are attempts to develop optimal solutions to the problem of cooperation. Sometimes this goal is explicit, but more often theories are the products of consistent and well-defined rationalizations of our moral intuitions. He claims that Rawlsian Contractarianism leads to the 'Maximin' principle, which selects the action that maximizes the minimum value, and that the Maximin principle is the most effective solution to the problem of cooperation. He contrasts the Maximin principle with other principles and shows how they can often produce non-cooperative results. Leben uses the example of an autonomous vehicle facing a situation where every available action results in harm, showing how a Rawlsian algorithm would identify the lowest survival probability associated with each action and then choose the action whose minimum is highest.
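The selection rule just described can be made concrete in a few lines of code. The following is a minimal sketch in Python, not code from the book; the action names and survival probabilities are hypothetical, chosen only to illustrate the Maximin rule.

```python
def maximin_choice(actions):
    """Pick the action whose worst-off party fares best.

    `actions` maps each candidate action to a list of survival
    probabilities, one per person affected by that action.
    """
    # For each action, take the lowest survival probability it assigns
    # to any affected person; then choose the action whose lowest value
    # is highest (the Maximin rule).
    return max(actions, key=lambda action: min(actions[action]))


# Hypothetical crash scenario in which every available action harms someone.
scenario = {
    "brake_straight": [0.2, 0.9, 0.9],   # pedestrian 0.2, two passengers 0.9
    "swerve_left":    [0.5, 0.6, 0.6],
    "swerve_right":   [0.1, 0.95, 0.95],
}

print(maximin_choice(scenario))  # -> "swerve_left", whose minimum (0.5) is highest
```

In this toy scenario the rule favours the action that gives the worst-off person the best chance, even though other actions offer higher probabilities to the majority.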
He contrasts this algorithm with alternatives derived from Utilitarianism and Natural Rights Libertarianism. Further examples include home care machines and autonomous weapons systems. Ethics for Robots will be of special interest to philosophers, engineers, computer scientists, and cognitive scientists working on the problem of ethics for autonomous systems. Technical ideas are carefully introduced and explained, and chapter summaries help readers new to the topic.