Case Series Open Access
Volume 7 | Issue 1 | DOI: https://doi.org/10.33696/casereports.7.034

Barriers to AI Adoption in Psychiatry: Exploring the Attitudes of Five Psychiatrists Who Do Not Use AI

  • 1Psychiatrist, Forensic Unit R4, Department R, Mental Health Centre Sct. Hans, Roskilde, Denmark
  • 2University Psychiatric Hospital Vrapce, Department of Social Psychiatry, Zagreb, Croatia; Mens Sana d.o.o., Psychological Treatments, Biofeedback, Neurofeedback, Zagreb, Croatia
Corresponding Author

Ema N. Gruber, emagruber2000@yahoo.com

Received Date: June 11, 2025

Accepted Date: July 15, 2025

Abstract

Objectives: The use of generative artificial intelligence (AI) in psychiatric practice is becoming increasingly prevalent; however, some professionals still refuse to adopt AI tools. This study analyzes the attitudes of five psychiatrists who do not use AI and explores the reasons behind their decision.

Methods: A case series of five cases. Respondents answered a structured questionnaire addressing trust in AI, ethical dilemmas, potential risks, and institutional support.

Results: The main reasons for not using AI include a lack of trust in information provided by AI, fear of perpetuating biases or discrimination in decision-making processes, fear of AI being used for malicious purposes, ethical concerns regarding patient relationships, fear of job loss due to AI, and insufficient institutional support for integrating AI into clinical work. Most respondents expressed concern about the potential misuse of AI for malicious purposes. None of the respondents uses AI in their work or recommends AI solutions to patients, although they do discuss the use of AI with their colleagues. Only one believes it is possible to form an emotional connection with AI.

Conclusion: This case series offers a nuanced understanding of the barriers to adopting generative AI in psychiatric practice, particularly among professionals who have opted not to use such tools. The findings highlight that resistance to AI integration is shaped by a complex interplay of factors: lack of trust in the information provided by AI, concerns about perpetuating biases or discrimination through algorithmic decision-making, fears regarding potential misuse of AI for malicious purposes, apprehension about job loss due to AI, and insufficient institutional support for implementation.

While most respondents reported discussing the use of AI with their colleagues, only one expressed the belief that it is possible to form an emotional connection with AI. To overcome these barriers, a comprehensive approach is required—one that combines targeted education with robust institutional support. Such efforts are essential to build trust and confidence among clinicians and to ensure that the integration of AI tools upholds clinical integrity and prioritizes patient-centered care.
