2025-07-22 Mount Sinai Health System (MSHS)

<Related information>
- https://www.mountsinai.org/about/newsroom/2025/like-humans-ai-can-jump-to-conclusions-mount-sinai-study-finds
- https://www.nature.com/articles/s41746-025-01792-y
Pitfalls of large language models in medical ethics reasoning
Shelly Soffer, Vera Sorin, Girish N. Nadkarni & Eyal Klang
npj Digital Medicine, Published: 22 July 2025
DOI: https://doi.org/10.1038/s41746-025-01792-y
Large language models (LLMs), such as ChatGPT-o1, display subtle blind spots in complex reasoning tasks. We illustrate these pitfalls with lateral thinking puzzles and medical ethics scenarios. Our observations indicate that patterns in training data may contribute to cognitive biases, limiting the models’ ability to navigate nuanced ethical situations. Recognizing these tendencies is crucial for responsible AI deployment in clinical contexts.


