AI-driven attention mechanisms aid in streamlining cancer pathology reporting


2024-03-08 Oak Ridge National Laboratory (ORNL)

ORNL researchers are using attention mechanisms to maximize AI's capabilities in the scanning of medical texts, which can drastically reduce the time needed for cancer incidence reporting. Credit: Getty Images

The MOSSAIC team at the U.S. Department of Energy's Oak Ridge National Laboratory, working with the National Cancer Institute (NCI), is tackling the information bottleneck in cancer reporting. The team has developed an approach that uses AI models to extract information from cancer pathology reports and rapidly upload the data to NCI databases. This work provides an essential foundation for cancer research and the development of prevention strategies.

<Related Information>

Attention Mechanisms in Clinical Text Classification: A Comparative Evaluation

Christoph S. Metzner; Shang Gao; Drahomira Herrmannova;…
IEEE Journal of Biomedical and Health Informatics  Published: 19 January 2024
DOI: https://doi.org/10.1109/JBHI.2024.3355951

Abstract

Attention mechanisms are now a mainstay architecture in neural networks and improve the performance of biomedical text classification tasks. In particular, models that perform automated medical encoding of clinical documents make extensive use of the label-wise attention mechanism. A label-wise attention mechanism increases a model’s discriminatory ability by using label-specific reference information. This information can either be implicitly learned during training or explicitly provided through embedded textual code descriptions or information on the code hierarchy; however, contemporary studies arbitrarily select the type of label-specific reference information. To address this shortcoming, we evaluated label-wise attention initialized with either implicit or explicit label-specific reference information against two common baseline methods—target-attention and text-encoder architecture-specific methods—to generate document embeddings across four text-encoder architectures—a convolutional neural network, two recurrent neural networks, and a transformer. We also present an extension of label-wise attention that can embed the information on the code hierarchy. We performed our experiments on the MIMIC III dataset, which is a standard dataset in the clinical text classification domain. Our experiments showed that using pretrained reference information and the hierarchical design helped improve classification performance. These performance improvements had less impact on larger datasets and label spaces across all text-encoder architectures. In our analysis, we used an attention mechanism’s energy scores to explain the perceived differences in performance and interpretability between the text-encoder architectures and types of label-attention.
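
The label-wise attention the abstract describes can be summarized in a short sketch. The PyTorch code below is a minimal illustration, not the authors' implementation: each label holds its own query vector (learned implicitly from scratch, or initialized explicitly from embedded code descriptions), attends over the encoder's token representations to build a label-specific document embedding, and is then scored against its own output weights. All names here (LabelWiseAttention, label_init, the tensor shapes) are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelWiseAttention(nn.Module):
    """Minimal label-wise attention head over text-encoder outputs.

    label_init: optional (num_labels, dim) tensor, e.g. embedded textual
    code descriptions (explicit reference information); if None, the label
    queries are learned implicitly during training.
    """
    def __init__(self, dim, num_labels, label_init=None):
        super().__init__()
        if label_init is None:
            self.label_queries = nn.Parameter(torch.randn(num_labels, dim))
        else:
            self.label_queries = nn.Parameter(label_init.clone())
        # per-label output weights: one logit per label from its own vector
        self.out_weight = nn.Parameter(torch.randn(num_labels, dim))
        self.out_bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, hidden):  # hidden: (batch, seq_len, dim)
        # energy scores: each label query attends over every token
        energy = self.label_queries @ hidden.transpose(1, 2)  # (batch, labels, seq_len)
        attn = F.softmax(energy, dim=-1)
        # label-specific document embeddings, one per label
        doc = attn @ hidden  # (batch, labels, dim)
        logits = (doc * self.out_weight).sum(-1) + self.out_bias
        return logits, attn

# usage sketch: encoder output for a small batch of clinical notes
hidden = torch.randn(2, 128, 256)            # (batch, seq_len, dim)
head = LabelWiseAttention(dim=256, num_labels=50)
logits, attn = head(hidden)                  # multi-label logits per code
probs = torch.sigmoid(logits)                # independent per-code probabilities
```

The returned attention weights (the normalized energy scores) are what makes this design interpretable: for each predicted code, they indicate which tokens in the report the model relied on, which is how the paper compares architectures and label-attention variants.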

Medicine & Health