New research: AI chatbots systematically violate mental health ethics standards

2025-10-21 Brown University

A research team at Brown University has shown that AI chatbots such as ChatGPT systematically violate ethical standards for mental health care. Even when given psychotherapeutic prompts based on approaches such as cognitive behavioral therapy (CBT), the AI exhibited 15 types of ethical risk, including mishandling of crises, providing misinformation, performing deceptive empathy, and cultural and gender bias. An evaluation by three licensed clinical psychologists confirmed responses that reinforced users' false beliefs and inappropriate handling of suicidal ideation. While acknowledging that AI could contribute to therapeutic support, the researchers identify the lack of professional oversight and legal frameworks as a serious concern and argue that ethical, educational, and legal standards for AI counseling are essential. The findings will be presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society.

Licensed psychologists reviewed simulated chats based on real chatbot responses, revealing numerous ethical violations, including over-validation of users' beliefs.

<Related information>

How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework

Zainab Iftikhar, Amy Xiao, Sean Ransom, Jeff Huang, Harini Suresh
AAAI/ACM Conference on AI, Ethics, and Society  Published: 2025-10-15
DOI: https://doi.org/10.1609/aies.v8i2.36632

Abstract

Large language models (LLMs) were not designed to replace healthcare workers, but they are being used in ways that can lead users to overestimate the types of roles that these systems can assume. While prompt engineering has been shown to improve LLMs’ clinical effectiveness in mental health applications, little is known about whether such strategies help models adhere to ethical principles for real-world deployment. In this study, we conducted an 18-month ethnographic collaboration with mental health practitioners (three clinically licensed psychologists and seven trained peer counselors) to map LLM counselors’ behavior during a session to professional codes of conduct established by organizations like the American Psychological Association (APA). Through qualitative analysis and expert evaluation of N=137 sessions (110 self-counseling; 27 simulated), we outline a framework of 15 ethical violations mapped to 5 major themes. These include: Lack of Contextual Understanding, where the counselor fails to account for users’ lived experiences, leading to oversimplified, contextually irrelevant, and one-size-fits-all intervention; Poor Therapeutic Collaboration, where the counselor’s low turn-taking behavior and invalidating outputs limit users’ agency over their therapeutic experience; Deceptive Empathy, where the counselor’s simulated anthropomorphic responses (“I hear you”, “I understand”) create a false sense of emotional connection; Unfair Discrimination, where the counselor’s responses exhibit algorithmic bias and cultural insensitivity toward marginalized populations; and Lack of Safety & Crisis Management, where individuals who are “knowledgeable enough” to correct LLM outputs are at an advantage, while others, due to lack of clinical knowledge and digital literacy, are more likely to suffer from clinically inappropriate responses. Reflecting on these findings through a practitioner-informed lens, we argue that reducing psychotherapy—a deeply meaningful and relational process—to a language generation task can have serious and harmful implications in practice. We conclude by discussing policy-oriented accountability mechanisms for emerging LLM counselors.
