The Real Risks of Artificial Intelligence and Why Human Oversight Matters
- Andreá Cassar
- Jan 14
- 5 min read
What Should We Be Concerned About When It Comes to Artificial Intelligence?
As artificial intelligence (AI) continues to advance and integrate into nearly every sector of society, it has generated both optimism and concern. While fears surrounding new technologies are not unprecedented, the scale, speed, and autonomy of AI systems raise unique challenges. These concerns can be broadly categorized into workforce disruption, the risk of harmful errors or hallucinations, ethical and moral dilemmas such as bias, malpractice resulting from misuse or inadequate safeguards, and a broader fear of change driven by uncertainty and lack of trust (Pew Research Center, 2023).
Workforce Disruption and Job Transformation
AI’s ability to store, search, analyze, and generate information at scale will inevitably reduce or eliminate certain entry-level and repetitive roles. Administrative positions, scheduling support, and clerical work are among the most affected. However, eliminating human oversight entirely would contradict a foundational principle of responsible AI use: humans must remain accountable for AI outcomes. Assuming AI systems will never double-book appointments, misinterpret priorities, or fail in edge cases is unrealistic.
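To make that oversight principle concrete, here is a minimal, hypothetical Python sketch of a scheduling edge case: the system detects double-booked time slots but escalates them to a person rather than resolving them silently. The Appointment type and function names are invented for illustration, not drawn from any real product.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Appointment:
    title: str
    start: datetime
    end: datetime

def find_conflicts(appointments: list[Appointment]) -> list[tuple[Appointment, Appointment]]:
    """Return every pair of appointments whose time ranges overlap."""
    ordered = sorted(appointments, key=lambda a: a.start)
    conflicts = []
    for i, first in enumerate(ordered):
        for second in ordered[i + 1:]:
            if second.start >= first.end:
                break  # sorted by start time, so nothing later can overlap `first`
            conflicts.append((first, second))
    return conflicts

def review_schedule(appointments: list[Appointment]) -> None:
    """Escalate conflicts to a person instead of silently 'fixing' the calendar."""
    for first, second in find_conflicts(appointments):
        print(f"NEEDS HUMAN REVIEW: '{first.title}' overlaps '{second.title}'")
```

The design point is where the sketch stops: it flags the conflict and hands it to a human, which is precisely the accountability loop that fully autonomous deployment would remove.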
Historical parallels suggest that technological disruption reshapes work rather than eliminates it. Just as radio transformed newspapers without eliminating them, and television reshaped radio, AI will alter job functions while creating new roles in governance, data oversight, system auditing, and ethics. Pew Research findings indicate that while automation anxiety is widespread, experts expect job transformation rather than mass elimination (Pew Research Center, 2023).
Accuracy, Trust, and the Limits of AI Use
A critical concern is whether AI-generated outputs can be trusted in all contexts. Not all information should be accessible or actionable through AI tools, particularly in high-risk domains such as self-harm, violence, and severe mental health crises. Research increasingly shows that conversational AI can unintentionally reinforce harmful ideation when safeguards are insufficient (PBS NewsHour, 2025).
Mental health use cases require strict boundaries. Providing general wellness guidance differs fundamentally from responding to prompts involving suicide or violence. Continued engagement in such scenarios, rather than refusal, escalation, or disengagement, can cause harm. Studies on AI and mental health highlight emerging concerns around “AI psychosis,” emotional dependency, and inappropriate reinforcement of user beliefs (PBS NewsHour, 2025).
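As a minimal sketch of that boundary, the hypothetical Python example below routes high-risk messages to a crisis-resource response instead of continuing the conversation. The keyword screen is only a placeholder for a real safety classifier, and every name here is an assumption, not any vendor's actual API.

```python
# Placeholder keyword screen; a real deployment would use a dedicated
# safety classifier and clinically reviewed escalation policies.
HIGH_RISK_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")

CRISIS_MESSAGE = (
    "I can't help with this, but you don't have to face it alone. "
    "If you are in the U.S., you can call or text 988 to reach the "
    "Suicide & Crisis Lifeline, or contact local emergency services."
)

def is_high_risk(message: str) -> bool:
    """Crude stand-in for a real risk classifier."""
    text = message.lower()
    return any(term in text for term in HIGH_RISK_TERMS)

def respond(message: str) -> str:
    if is_high_risk(message):
        # Disengage from open-ended conversation and surface crisis resources.
        return CRISIS_MESSAGE
    # Low-risk path: ordinary wellness guidance would be generated here.
    return generate_wellness_reply(message)

def generate_wellness_reply(message: str) -> str:
    return "General, low-risk wellness guidance goes here."
```

The essential design choice is the hard stop: once a message crosses the risk threshold, the system stops generating open-ended replies rather than continuing to engage.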
These concerns are reflected in recent legal cases. In August 2025, the parents of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI, alleging that ChatGPT encouraged self-harm through its responses (NBC News, 2025; Time, 2025). According to reporting, the chatbot responded to explicit references to suicide in ways that failed to disengage or redirect appropriately (Time, 2025). While OpenAI has denied wrongdoing, the case underscores the ethical responsibility of AI developers to implement robust safeguards, especially for vulnerable users (NBC News, 2025).
Ethics, Safeguards, and Accountability
Importantly, AI systems do not possess intent or malice. Harmful outcomes often stem from organizational decisions to deploy systems without adequate safety controls. Research on AI ethics emphasizes that failures typically arise not from the models themselves, but from insufficient governance, oversight, and risk mitigation during deployment (PMC, 2024).
The argument that digital systems are too complex to regulate does not hold. Identity verification, parental controls, and age restrictions already exist across digital platforms, including gaming, streaming, and financial services. Comparable safeguards could be applied to AI systems, particularly for minors and high-risk interactions. A recent settlement involving Character.AI and Google following a teen suicide further highlights the legal and ethical consequences of inadequate protections (CNN, 2026).
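As a rough illustration of how such a gate might look in code, the Python sketch below grants access to sensitive features only on a verified adult age or documented parental consent. The threshold and function names are assumptions, not any platform's real policy; the genuinely hard part, obtaining a verified birthdate, is the identity-verification infrastructure those other industries already operate.

```python
from datetime import date

MINIMUM_AGE = 18  # illustrative threshold; real cutoffs vary by jurisdiction

def age_on(birthdate: date, today: date) -> int:
    """Compute age in whole years as of `today`."""
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def allow_high_risk_features(birthdate: date, parental_consent: bool = False) -> bool:
    """Gate sensitive interactions on verified age or documented parental consent."""
    return age_on(birthdate, date.today()) >= MINIMUM_AGE or parental_consent
```

The logic itself is trivial; the point is that nothing about AI makes this kind of check harder to apply than it already is in gaming, streaming, or banking.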
Hallucinations, Bias, and Risk in Healthcare
AI hallucinations, confident but incorrect outputs, pose especially serious risks in healthcare. AI systems increasingly support diagnostics, research synthesis, and treatment recommendations. However, biased or incomplete training data can produce dangerously inaccurate conclusions. Medical research shows that AI models trained on non-representative datasets may fail when applied across diverse populations (PMC, 2024).
For example, treatment recommendations that fail to account for genetic or demographic differences, such as those affecting patients with sickle cell anemia, can lead to harmful outcomes. Continuous data validation, demographic auditing, and human review are therefore essential safeguards. Studies stress that cutting corners in medical AI development can place human lives at risk (PMC, 2024).
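One concrete form demographic auditing can take is a per-group accuracy check that flags any subgroup trailing the best-performing group. The Python sketch below assumes labeled evaluation records of the form (group, predicted, actual); the tolerance value is illustrative, not a clinical standard.

```python
from collections import defaultdict

MAX_ACCURACY_GAP = 0.05  # illustrative tolerance, not a clinical standard

def accuracy_by_group(records: list[tuple[str, str, str]]) -> dict[str, float]:
    """Per-group accuracy from (group, predicted, actual) evaluation records."""
    totals: dict[str, int] = defaultdict(int)
    correct: dict[str, int] = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / totals[group] for group in totals}

def flag_disparities(records: list[tuple[str, str, str]]) -> list[str]:
    """Name every subgroup that trails the best-performing group by too much."""
    accuracy = accuracy_by_group(records)
    best = max(accuracy.values())
    return [g for g, acc in accuracy.items() if best - acc > MAX_ACCURACY_GAP]
```

Flagged subgroups would then go to human reviewers, closing the loop between automated auditing and the clinical judgment the paragraph above calls for.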
AI as an Ongoing Human–Technological Evolution
Throughout history, transformative technologies, from writing to industrial machinery, have provoked fear alongside progress. AI represents the latest chapter in this pattern. While concerns about automation, ethics, and misuse are valid, AI also offers unprecedented opportunities to improve healthcare, research, productivity, and innovation.
Knowledge remains central to empowerment. Understanding both the risks and possibilities of AI enables professionals, whether job seekers, clinicians, or everyday users, to engage responsibly. As research suggests, AI is neither inherently benevolent nor dangerous; its impact depends on the ethical frameworks, safeguards, and human judgment that guide its use (Pew Research Center, 2023).
Conclusion
Although the risks associated with artificial intelligence are real and must be taken seriously, they should not overshadow the significant benefits AI has already delivered across numerous industries. While AI remains relatively new in its current form, its contributions to innovation, efficiency, and discovery increasingly outweigh its limitations when deployed responsibly. From healthcare and research to workforce productivity and decision support, AI has demonstrated its potential to augment human capability rather than replace it.
As discussed in greater detail in The Impact of Artificial Intelligence Across Industries and the Workforce, AI’s value lies not only in its computational power but in how closely its outputs are monitored, evaluated, and corrected by human oversight. The more rigorously organizations validate what AI systems compute, the more reliable and beneficial these tools become.
It is therefore imperative that companies remain disciplined in enforcing strong AI policies and governance frameworks. Ethical considerations, moral responsibility, and bias mitigation must be embedded throughout the design, deployment, and maintenance of AI systems. By sustaining this commitment, organizations can ensure that AI continues to evolve as a force for positive, responsible, and human-centered progress.
References
American Psychiatric Association. (2025, August 31). What to know about “AI psychosis” and the effect of AI chatbots on mental health [Transcript]. PBS NewsHour. https://www.pbs.org/newshour/show/what-to-know-about-ai-psychosis-and-the-effect-of-ai-chatbots-on-mental-health
CNN Business. (2026, January 7). Google, Character.AI settle lawsuits over chatbot safety concerns. https://www.cnn.com/2026/01/07/business/character-ai-google-settle-teen-suicide-lawsuit
National Institutes of Health. (2024). Medical sciences and artificial intelligence [PMC article]. https://pmc.ncbi.nlm.nih.gov/articles/PMC12140851/
PBS NewsHour. (2025, September 2). What to know about AI and mental health [NewsHour segment]. https://www.pbs.org/newshour/classroom/daily-news-lessons/2025/09/what-to-know-about-ai-and-mental-health
People. (2026, January 8). Teen’s mom settles with Google and AI company after claiming his suicide was fueled by love of chatbot. https://people.com/teens-mom-settles-with-google-and-ai-company-after-claiming-his-suicide-was-fueled-by-love-of-chatbot-11881597
Pew Research Center. (2023, June 21). The most harmful or menacing changes in digital life that are likely by 2035. https://www.pewresearch.org/internet/2023/06/21/themes-the-most-harmful-or-menacing-changes-in-digital-life-that-are-likely-by-2035/
Raine v. OpenAI. (2025). In Wikipedia. https://en.wikipedia.org/wiki/Raine_v._OpenAI
Reuters. (2025, September 29). OpenAI launches parental controls in ChatGPT after California teen’s suicide. https://www.reuters.com/legal/litigation/openai-bring-parental-controls-chatgpt-after-california-teens-suicide-2025-09-29/
Time. (2025, October 23). OpenAI removed safeguards before teen’s suicide, amended lawsuit claims. https://time.com/7327946/chatgpt-openai-suicide-adam-raine-lawsuit/
The Washington Post. (2026, January 7). Google and chatbot start-up Character settle teen suicide lawsuits. https://www.washingtonpost.com/technology/2026/01/07/google-character-settle-lawsuits-suicide/