Paper title
Toward equipping Artificial Moral Agents with multiple ethical theories
Paper authors
Paper abstract
Artificial Moral Agents (AMAs) are the subject of a field in computer science whose purpose is to create autonomous machines that can make moral decisions akin to how humans do. Researchers have proposed theoretical means of creating such machines, while philosophers have argued about how these machines ought to behave, or whether they should exist at all. Of the currently theorised AMAs, all research and design has been done with either no specified normative ethical theory as a basis, or at most one. This is problematic because it narrows an AMA's functional ability and versatility, which in turn yields moral outcomes that only a limited number of people agree with (thereby undermining an AMA's ability to be moral in a human sense). As a solution, we design a three-layer model for general normative ethical theories that can be used to serialise the ethical views of people and businesses for an AMA to use during reasoning. Four specific ethical norms (Kantianism, divine command theory, utilitarianism, and egoism) were modelled and evaluated as proof of concept for normative modelling. Furthermore, all models were serialised to XML/XSD as proof of support for computerisation.
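To make the serialisation idea concrete, the following is a minimal sketch of what emitting an ethical theory as XML could look like. The element names (`EthicalTheory`, `Principle`) and the helper `serialise_theory` are illustrative assumptions for this sketch, not the schema or layer structure actually defined in the paper.

```python
# Hypothetical sketch: serialising a named normative ethical theory to XML,
# in the spirit of the abstract's XML/XSD serialisation. All element and
# attribute names here are assumptions, not the paper's actual schema.
import xml.etree.ElementTree as ET


def serialise_theory(name, principles):
    """Build a minimal XML document for a named normative theory."""
    root = ET.Element("EthicalTheory", attrib={"name": name})
    for principle in principles:
        # Each principle becomes a child element holding its description.
        ET.SubElement(root, "Principle").text = principle
    return ET.tostring(root, encoding="unicode")


xml_doc = serialise_theory(
    "Utilitarianism",
    ["Maximise aggregate well-being across all affected agents"],
)
print(xml_doc)
```

A real implementation would validate such documents against an XSD so that an AMA's reasoner can rely on a fixed structure when loading a person's or business's serialised ethical views.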