HRPYC81 Project 5 Assignment 4 (Final Research Report Answers) 2025 (596428)
Course: Research Report (HRPYC81)
Institution: University of South Africa (Unisa)
Book: The Psychology Research Handbook
Artificial Intelligence (AI) technologies are increasingly employed in medical diagnostics, mental
health and psychology, autonomous driving, criminal sentencing assessment, and wealth
management. These innovative advancements will change not only our profession as
psychologists (Krach & Corcoran, 2024; Zhang & Wang, 2024) but also our mobility (e.g.,
Hamburger et al., 2022), health behaviour (Newby et al., 2021), and financial decisions (Bhat,
2024). Most importantly, we will increasingly rely on the decision-making processes of AI
applications. Some of these decisions can be benign, like choosing flowers to be planted in a
public park to beautify the space. These decisions are mainly based on practicality, preference,
convenience, or necessity to achieve a particular goal or resolve a problem. In contrast, other decisions require the application of principles of right and wrong, ethics, and values. Such moral decisions mainly focus on upholding justice, fairness, and the well-being of
others. AI technology is applied to both benign and moral decisions. Irrespective of whether humans or AI make a decision, the decision-making process involves choosing among competing goals, values, and preferences. Thus, many decisions involve trade-offs (Shaddy et al., 2021). Some trade-offs are benign; for instance, taking public transport instead of a car might cost extra time. Other trade-offs are severe, such as sacrificing one person's life to save many. The latter is also known
as the Trolley Dilemma (Thomson, 1984; see also an interesting study on cultural differences,
Ahlenius & Tännsjö, 2012), which inspired Greene and colleagues to develop the dual-process
theory of moral judgment (Greene, 2007; 2023; Greene & Haidt, 2002; Greene et al., 2001).
Research is still in its early stages, yet it increasingly contributes to our understanding of people's attitudes towards AI and the psychological factors shaping these attitudes (e.g., De Freitas et al., 2023; Kaya et al., 2024). More specifically, a growing number of studies are being conducted to