Problems on the implementations of artificial moral agents and the singularity: The is-ought problem as a case study
Doctor of Philosophy in Philosophy
College of Liberal Arts
Jeremiah Joven Joaquin
Defense Panel Chair
Maxell C. Aranilla
Defense Panel Members
Napoleon Mabaquiao, Jr.
Elenita dLR. Garcia
Mark Joseph Calano
Moses Aaron Angeles
This work evaluates the ongoing efforts geared towards the creation and development of artificial moral agents (AMAs), and how these relate to the singularity. Artificial moral agents are artificially intelligent systems that are capable of moral reasoning, judgment, and decision-making. In addition, they are part and parcel of what artificial intelligence theorists have called artificial general intelligence (AGI): systems that are intelligent in most, if not all, aspects of human cognition. Given that one of the central human cognitive abilities is the capability to reason about moral issues, AGIs should, therefore, include the intellectual activity of moral reasoning. Many conceptions of AMAs have been proffered, and some theorists have identified three possible routes to model AMAs, namely: the top-down or direct programming track, bottom-up or developmental approaches, and a hybrid of the two. This work examines the philosophical tenability of these routes in light of how they account for moral reasoning. It argues that these approaches are challenged by one of the most enduring problems in moral philosophy, dubbed the is-ought problem in moral reasoning.
Archives, The Learning Commons, 12F Henry Sy Sr. Hall
143 leaves ; 28 cm. + 1 computer optical disc.
Artificial intelligence -- Moral and ethical aspects
Boyles, R. J. (2015). Problems on the implementations of artificial moral agents and the singularity: The is-ought problem as a case study. Retrieved from https://animorepository.dlsu.edu.ph/etd_doctoral/1054