Portfolio (COMPLETE ANSWERS) Semester 2 2025 - DUE 23 October 2025
Question One
Generative Artificial Intelligence (GenAI) chatbots have become increasingly powerful tools for
information generation and communication. However, as Shoaib et al. (2023, p. 1) observe,
“deepfakes can be deployed as LM-based GenAI-generated misinformation that threaten the
integrity of news propagation, as they could be exploited for fabricating scandals, falsifying
records of public statements, and manipulating electoral processes.” I strongly agree that GenAI
chatbots can produce falsified content that negatively influences decision-making. My
argument is supported by three key reasons: the ease of generating convincing misinformation,
the difficulty of verifying AI-generated content, and the potential impact of such content on
public trust and democratic processes.
Firstly, GenAI chatbots can rapidly generate persuasive and realistic falsified content, including
fake news articles, doctored statements, and fabricated evidence. These outputs often appear
credible to the average reader because they are grammatically correct, logically structured, and
presented with confidence. According to Shoaib et al. (2023), large language models can create
deepfakes that mimic real people and events, making it easier to deceive audiences. When
decision-makers rely on such fabricated information, they risk making choices based on false
premises, which can have serious economic, political, and social consequences.
Secondly, verifying AI-generated content can be challenging, even for experienced users. Unlike
traditional misinformation that often reveals bias or factual inconsistencies, AI-generated text
can closely resemble legitimate reporting. This creates a significant risk when individuals or
organisations rely on chatbot responses without fact-checking. Shoaib et al. (2023) note that such
misinformation can “falsify records of public statements,” which may lead people to believe that
credible figures have said things they never did. This lack of verifiability undermines informed
decision-making in areas such as governance, healthcare, and finance.
Thirdly, falsified GenAI content can damage public trust and manipulate democratic processes.
For example, during elections, deepfake videos and fabricated statements may be used to mislead
voters or discredit political figures. As Shoaib et al. (2023) point out, this type of manipulation
can threaten the integrity of electoral systems. Once false information spreads widely, it is
difficult to retract or correct, leading to long-term harm to institutions and decision-making
structures.
Although these risks are significant, there are effective strategies that users can apply to ensure
that the information they obtain from GenAI chatbots is credible. Firstly, users should cross-
check information with reputable sources such as academic journals, government websites, or
trusted news outlets before making decisions based on chatbot outputs. Secondly, users can
employ digital verification tools to authenticate sources and detect deepfakes or other manipulated content.
Thirdly, users should cultivate critical information literacy by questioning the accuracy, source,
and intent of the content generated by GenAI. This active verification process reduces the
likelihood of making decisions based on falsified information.
In conclusion, GenAI chatbots have the capacity to produce convincing falsified content that can
seriously undermine informed decision-making. The speed and realism with which deepfakes