Can't bottom-up artificial moral agents make moral judgements?

College

College of Liberal Arts

Department/Unit

Philosophy

Document Type

Article

Source Title

Filosofija, Sociologija

Volume

35

Issue

1

First Page

14

Last Page

22

Publication Date

2024

Abstract

This article examines whether bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume's is-ought problem. The latter underscores the notion that evaluative assertions can never be derived from purely factual propositions. Bottom-up technologies, by contrast, are those designed via evolutionary, developmental, or learning techniques. This paper examines the nature of these systems with the aim of preliminarily assessing whether there are good reasons to suspect that, at the foundational level, their moral reasoning capabilities are vulnerable to the no-ought-from-is thesis. The main hypothesis of the present work is that a conceptual analysis of the notion of bottom-up artificial moral agents reveals that their seeming moral judgements lack a proper philosophical basis. For one, such artifacts arrive at an understanding of ethically relevant ideas by culling data or facts from the environment. Thus, in relation to the is-ought problem, it may be argued that, even if bottom-up systems seem prima facie capable of generating apparent moral judgements, these actually lack proper moral grounding, if they are not empty of any ethical value altogether.

Digital Object Identifier (DOI)

10.6001/fil-soc.2024.35.1.3

Disciplines

Philosophy

Keywords

Artificial intelligence—Moral and ethical aspects; Judgment (Ethics)
