(COMPLETE ANSWERS)
Semester 2 2025 - DUE 25 September 2025
Internal Auditing and AI: A Report on Risks and Challenges
Question 1: What specific challenges or risks do internal auditors face when they use AI to
gather background information for audit engagements?
The integration of Artificial Intelligence (AI) into the internal audit function, particularly for
gathering background information during the planning phase of an engagement, introduces
significant opportunities but also presents a complex array of challenges and risks. These
challenges are not merely technological but span ethical, operational, and cybersecurity domains.
Internal auditors must be acutely aware of these risks to leverage AI effectively and maintain the
integrity and credibility of their work.
1. Data Integrity and Accuracy Risks
One of the foremost challenges is ensuring the integrity and accuracy of the information gathered
by AI. AI systems rely on algorithms to analyze vast amounts of data from diverse sources,
including social media, news feeds, and public company reports. The risk lies in the quality of
the source data itself. AI can unintentionally ingest and amplify inaccuracies, misinformation,
or biased data from social media platforms or unverified news outlets (Mahmood, 2021). For
example, an AI tool used to analyze public sentiment about an organization on Twitter may
misinterpret satire or sarcasm, leading to a skewed risk assessment. An internal auditor who
relies on this flawed data without critical verification may misdirect audit resources or draw
incorrect conclusions. The "black box" nature of some AI models further compounds this risk, as
it can be difficult to trace how the AI arrived at a specific piece of information or insight (The
IIA, 2023).
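One practical safeguard against relying on flawed AI-gathered data is to require independent corroboration before any item informs the risk assessment. The sketch below is purely illustrative, with hypothetical claims and source names, and is not taken from any specific audit tool; it simply shows how a minimal verification gate might look in code.

```python
# Illustrative sketch (hypothetical data): require at least two
# independent sources before an AI-gathered claim feeds the
# engagement's risk assessment.
claims = [
    {"claim": "Supplier X under regulatory investigation",
     "sources": ["news_site_a", "regulator_bulletin"]},
    {"claim": "Negative sentiment spike on social media",
     "sources": ["twitter_scrape"]},
]

def needs_verification(item, min_sources=2):
    """Flag claims supported by fewer than min_sources distinct sources."""
    return len(set(item["sources"])) < min_sources

flagged = [c["claim"] for c in claims if needs_verification(c)]
print(flagged)  # the single-source social media claim is flagged
```

In practice the corroboration rule would be richer (source reliability ratings, recency checks), but even this simple gate forces the auditor to confront single-source AI output rather than accept it at face value.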
2. Algorithmic Bias
A significant ethical and operational risk is algorithmic bias. AI models are trained on historical
data, and if that data contains inherent human or systemic biases, the AI will learn and perpetuate
them. In an audit context, this could manifest in several ways. An AI-powered tool used to
identify fraud risks might be biased against certain employee demographics, leading it to unfairly
flag transactions from specific departments or individuals while overlooking more subtle but
significant risks elsewhere (Alles et al., 2020). This not only undermines the objectivity of the
audit but could also lead to serious legal and ethical repercussions for the organization. Auditors
must not only be aware of the potential for bias but also be equipped with the skills to audit the
AI system's output for fairness and impartiality.
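Auditing an AI tool's output for fairness can begin with a simple disparity check: compare the rate at which the tool flags each group and investigate large gaps. The example below is a minimal sketch using entirely hypothetical department data, not a prescribed fairness methodology; real reviews would use established fairness metrics and statistical tests.

```python
from collections import defaultdict

# Illustrative sketch (hypothetical data): compare the AI tool's
# fraud-flag rate across departments to surface possible bias.
flags = [
    ("procurement", True), ("procurement", True), ("procurement", False),
    ("payroll", False), ("payroll", False), ("payroll", True),
    ("payroll", False),
]

counts = defaultdict(lambda: [0, 0])  # department -> [flagged, total]
for dept, flagged in flags:
    counts[dept][0] += int(flagged)
    counts[dept][1] += 1

rates = {dept: f / t for dept, (f, t) in counts.items()}
# A large gap between the highest and lowest flag rates is a signal
# that the model's behaviour warrants a deeper fairness review.
disparity = max(rates.values()) - min(rates.values())
print(rates, round(disparity, 2))
```

A gap like the one shown here would not prove bias on its own, since base rates can legitimately differ between groups, but it tells the auditor where to look first.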
3. Privacy and Data Security Risks
Using AI to gather information, especially from publicly available sources like social media,
presents significant privacy and cybersecurity challenges. Internal auditors are bound by