New Recommendations to Increase Transparency and Tackle Potential Bias in Medical AI Technologies


2024-12-18 University of Birmingham

STANDING Together, an international research team led by the University of Birmingham, has published new recommendations aimed at increasing transparency and reducing bias in medical AI technologies. The recommendations cover the use of diverse datasets that include minority groups, support from dataset publishers in identifying bias, evaluation guidance for AI developers, and ways of defining how AI should be tested for bias. The initiative aims to ensure that medical AI is fair and effective for everyone, and the recommendations were published in The Lancet Digital Health and elsewhere.

<Related Information>

Tackling algorithmic bias and promoting transparency in health datasets: the STANDING Together consensus recommendations

Joseph E Alderman, MBChB ∙ Joanne Palmer, PhD ∙ Elinor Laws, MBBCh ∙ Melissa D McCradden, PhD ∙ Johan Ordish, MA ∙ Marzyeh Ghassemi, PhD ∙ et al.
The Lancet Digital Health, Published: December 18, 2024
DOI: https://doi.org/10.1016/S2589-7500(24)00224-3

Summary

Without careful dissection of the ways in which biases can be encoded into artificial intelligence (AI) health technologies, there is a risk of perpetuating existing health inequalities at scale. One major source of bias is the data that underpins such technologies. The STANDING Together recommendations aim to encourage transparency regarding limitations of health datasets and proactive evaluation of their effect across population groups. Draft recommendation items were informed by a systematic review and stakeholder survey. The recommendations were developed using a Delphi approach, supplemented by a public consultation and international interview study. Overall, more than 350 representatives from 58 countries provided input into this initiative. 194 Delphi participants from 25 countries voted and provided comments on 32 candidate items across three electronic survey rounds and one in-person consensus meeting. The 29 STANDING Together consensus recommendations are presented here in two parts. Recommendations for Documentation of Health Datasets provide guidance for dataset curators to enable transparency around data composition and limitations. Recommendations for Use of Health Datasets aim to enable identification and mitigation of algorithmic biases that might exacerbate health inequalities. These recommendations are intended to prompt proactive inquiry rather than acting as a checklist. We hope to raise awareness that no dataset is free of limitations, so transparent communication of data limitations should be perceived as valuable, and absence of this information as a limitation. We hope that adoption of the STANDING Together recommendations by stakeholders across the AI health technology lifecycle will enable everyone in society to benefit from technologies which are safe and effective.
