Developing a tool for eliciting users' moral theories for automated moral agents

dc.contributor.advisor: Keet, Catharina
dc.contributor.author: Seakgwa, Kyle
dc.date.accessioned: 2025-03-27T13:20:53Z
dc.date.available: 2025-03-27T13:20:53Z
dc.date.issued: 2024
dc.date.updated: 2025-03-27T13:16:47Z
dc.description.abstract: In recent work, Rautenbach and Keet have developed a model of a system, named Genet, that allows users to choose which moral theory their automated moral agent will follow. What remains unclear, however, is how users will make this choice, given that most of them lack the vocabulary to classify themselves in the moral-philosophical terms Genet uses. This thesis addresses that issue by building three high-fidelity prototypes and conducting online user evaluations of them. Each prototype implemented an algorithm based on the elicitation approach of one of three fields: cognitive science, human-computer interaction, and knowledge engineering. Each aimed to computationally determine a user's preferred moral theory through a human-in-the-loop component, using discipline-specific elicitation stimuli and rules to classify the user. The prototypes were then evaluated from a usability perspective, using the System Usability Scale (SUS), and from an accuracy perspective, to determine which most validly elicits users' moral preferences in the form Genet requires. The accuracy evaluation used validation measures based on existing approaches to validation in moral psychology. All of the prototypes performed equally well in terms of usability, each achieving an acceptable SUS score. However, all of them also performed equally inaccurately in terms of the validity of the moral theory categorizations they made. While this evaluation was carried out with only a small sample size (n=20) and thus has limited generalizability, as the first study to compare and computationally implement different moral theory elicitation approaches, the present study contributes evidence for (or at least fails to falsify) problems with making the design of automated moral agents depend on eliciting a user's single preferred moral theory. A positive claim the data collected here does support is that, at least for some potential users, even computational elicitation tools that use empirically validated measures of moral theory preferences (such as those from cognitive science) do not allow one to predict the moral judgements those users will make.
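
A note on the usability measure named in the abstract: the System Usability Scale has a fixed, published scoring rule (Brooke, 1996). The Python sketch below is not taken from the thesis; it is a minimal rendering of that standard rule, with an invented example response pattern, included only to make the "acceptable SUS score" claim concrete. The conventional "average" benchmark is roughly 68.

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects exactly 10 responses on a 1-5 scale")
    # Odd-numbered items (index 0, 2, ...) are positively worded and score
    # (response - 1); even-numbered items are negatively worded and score
    # (5 - response). The summed contributions are scaled by 2.5 to 0-100.
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Example with an invented, fairly positive response pattern.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # 80.0
```
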
dc.identifier.apacitation: Seakgwa, K. (2024). Developing a tool for eliciting users' moral theories for automated moral agents (MPhil thesis). University of Cape Town, Faculty of Science, Department of Computer Science. Retrieved from http://hdl.handle.net/11427/41289
dc.identifier.chicagocitation: Seakgwa, Kyle. "Developing a tool for eliciting users' moral theories for automated moral agents." MPhil thesis, University of Cape Town, Faculty of Science, Department of Computer Science, 2024. http://hdl.handle.net/11427/41289
dc.identifier.citation: Seakgwa, K. 2024. Developing a tool for eliciting users' moral theories for automated moral agents. University of Cape Town, Faculty of Science, Department of Computer Science. http://hdl.handle.net/11427/41289
dc.identifier.ris:
TY  - THES
AU  - Seakgwa, Kyle
DA  - 2024
DB  - OpenUCT
DP  - University of Cape Town
KW  - Information Technology
LK  - https://open.uct.ac.za
PB  - University of Cape Town
PY  - 2024
T1  - Developing a tool for eliciting users' moral theories for automated moral agents
TI  - Developing a tool for eliciting users' moral theories for automated moral agents
UR  - http://hdl.handle.net/11427/41289
ER  -
dc.identifier.uri: http://hdl.handle.net/11427/41289
dc.identifier.vancouvercitation: Seakgwa K. Developing a tool for eliciting users' moral theories for automated moral agents [MPhil thesis]. University of Cape Town, Faculty of Science, Department of Computer Science; 2024 [cited yyyy month dd]. Available from: http://hdl.handle.net/11427/41289
dc.language.rfc3066: eng
dc.publisher.department: Department of Computer Science
dc.publisher.faculty: Faculty of Science
dc.publisher.institution: University of Cape Town
dc.subject: Information Technology
dc.title: Developing a tool for eliciting users' moral theories for automated moral agents
dc.type: Thesis / Dissertation
dc.type.qualificationlevel: Masters
dc.type.qualificationlevel: MPhil
Files

Original bundle
Name: thesis_sci_2024_seakgwa kyle.pdf
Size: 1.89 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.72 KB
Format: Item-specific license agreed upon to submission