Artificial Intelligence. Carl Schmitt's Theory about Preventing Machine-Led Dictatorships
Author(s): Schneider, David
ISBN No.: 9783668664739
Pages: 20
Year: 2018
Format: Trade Paper
Price: $ 26.08
Status: Out Of Print

Essay from the year 2018 in the subject Engineering - Artificial Intelligence, grade: High Merit, London School of Economics (Department of Government), language: English, abstract: Machine-led dictatorships are a popular theme in science fiction. Current developments in technology bring the idea of a robot ruler closer to reality: the Watson 2016 Foundation advocated for IBM's Watson supercomputer to run for President of the United States in the 2016 election, and the artificial intelligence Sam plans to run for office in New Zealand in 2020. To prevent the possibility of a Hitler-like machine dictator, developers of artificial intelligence must find ways to avoid inadvertently building a superintelligence able to abuse its power and harm humanity. Carl Schmitt, a German legal and political theorist who supported the Nazi regime, did the opposite: he developed a system of legal and political theory that legitimised the concentration of power which made Hitler's cruel dictatorship possible. In this paper, I argue that an update of these Schmittian theories can provide useful insights for AI developers who want to ensure that humans retain control over artificial intelligence. First, I outline the so-called value alignment problem and its subproblem of value-loading, both of which must be solved to ensure the emergence of a 'friendly' super AI. Following this, I explain the similarities between Hans Kelsen's legal positivism - Schmitt's target of criticism - and artificial intelligence, in order to justify my use of Schmittian ideas in the realm of AI and politics. Next, I adapt Schmitt's distinction between constituent power (assigned to humans) and constituted power (assigned to AI) so that, if encoded into the AI as a guiding principle, the AI will only ever be capable of acting as humanity's obedient governor and will never be able to revolt against humans, which puts an end to the value-loading problem.

Thereafter, I show two complementary ways in which humans' values and goals can be loaded in.
