HRPYC81 Research Project 5 Assignment 2 (LITERATURE
REVIEW) 2025 - DUE May 2025 (741881): 100% trusted,
comprehensive and reliable solution with clear explanation
Table of Contents
1. Introduction
2. Background and Theoretical Framework
3. Literature Review
4. Research Objectives
5. Research Questions
6. Methodology
   6.1 Design
   6.2 Participants
   6.3 Instrumentation
   6.4 Procedure
   6.5 Data Analysis
7. Ethical Considerations
8. Expected Findings
9. Significance of the Study
10. References
1. Introduction
Artificial Intelligence (AI) has rapidly transitioned from a
futuristic concept to an integral part of everyday life. From voice
assistants and recommendation systems to more complex
applications like autonomous vehicles, mental health chatbots,
and AI-driven hiring platforms, the capabilities of AI continue to
evolve and expand. As these systems increasingly participate in
making decisions traditionally reserved for humans, a critical
question emerges: Do people trust AI more than human
decision-makers?
The growing integration of AI into domains that involve high-
stakes decisions—such as medical diagnostics, psychological
assessments, judicial outcomes, and financial planning—raises
pressing concerns about trust, accountability, and ethical
alignment. While AI can enhance efficiency, consistency, and
objectivity, it also challenges conventional notions of empathy,
moral reasoning, and human judgment. In some cases, people
may perceive AI systems as more impartial or consistent than
humans, especially in contexts plagued by bias or error. In other
situations, the lack of human understanding or emotional
intelligence in AI may cause discomfort and distrust.
Trust is a cornerstone for the adoption and acceptance of new
technologies. In the case of AI, trust is particularly nuanced. It
may vary depending on the context of the decision (e.g.,
selecting a candidate for a job versus diagnosing depression),
the type of decision (benign vs. moral), and
individual psychological factors such as personality traits,
cognitive style, previous experiences with technology, and
levels of AI-related anxiety.
Importantly, decisions made by both humans and AI often
involve trade-offs—balancing values such as fairness, utility,
and personal well-being. The Trolley Dilemma and its variants
have long been used to study moral reasoning in humans. Now,