The growing use of generative artificial intelligence (GenAI) chatbots in education and workplaces
has sparked debates about their impact on users’ cognitive engagement and independent
reasoning. I firmly believe that excessive reliance on GenAI chatbots can diminish users’ critical
thinking skills. Although these systems provide rapid access to information and streamline many
tasks, overdependence may erode users’ ability to analyse, evaluate, and create knowledge
independently. Three major reasons support this argument: cognitive offloading, reduced
metacognitive engagement, and overconfidence in AI-generated content.
Firstly, excessive reliance on GenAI encourages cognitive offloading, where users transfer mental
effort to technology rather than exercising their own analytical skills. Sarkar et al. (2025) note that
users who frequently depend on AI for everyday or low-stakes tasks report “reductions in cognitive
effort and critical engagement”, suggesting that AI convenience may inadvertently discourage
active thinking. This passive interaction limits opportunities to practise problem-solving and reflection, skills that are essential to critical thinking. When users habitually accept AI responses without
questioning them, they risk weakening the mental processes that enable independent judgment
and creativity.
Secondly, overreliance on GenAI can reduce metacognitive engagement, meaning users may
stop reflecting on how they think or learn. According to Dwivedi et al. (2023), while GenAI
enhances productivity, it may also “narrow the user’s focus to surface-level understanding rather
than deep learning”. This reduction in self-awareness about cognitive strategies results in users
accepting answers uncritically rather than analysing the underlying reasoning. Over time, this can foster a culture of intellectual dependency in which human insight is devalued in favour of machine-generated responses.
Thirdly, overreliance on GenAI can create overconfidence in AI-generated content, reducing users’
motivation to verify information independently. Floridi and Chiriatti (2020) argue that AI systems
produce text that appears coherent and persuasive but may lack genuine understanding or factual
accuracy. This illusion of reliability encourages users to “trust outputs that merely simulate
intelligence”, potentially replacing inquiry with complacency. When individuals unquestioningly
accept AI outputs as authoritative, their ability to identify bias, inconsistency, or ethical
implications is compromised, further undermining critical thought.
However, users can actively employ strategies to preserve and enhance their critical thinking while
engaging with GenAI tools. First, they should practise active interrogation, consistently questioning
the source, logic, and implications of AI-generated responses. Asking “why” and “how” questions
promotes analytical evaluation instead of passive acceptance. Second, users can cross-verify
information by consulting credible human and academic sources, ensuring they compare AI output
against multiple perspectives. This not only builds discernment but also cultivates information
literacy. Third, users should engage in reflective learning, deliberately analysing how AI tools
influence their reasoning and identifying gaps in their own understanding. As Sarkar et al. (2025) suggest,
conscious reflection helps maintain a balance between AI assistance and cognitive independence.