Humans change their own behavior when training AI


2024-08-07 Washington University in St. Louis

Researchers have found people adjust their own behavior to be more fair when training AI. (Photo: Shutterstock)

A study from Washington University in St. Louis found that participants who were told their choices would be used to train an AI adjusted their behavior to be more fair, a finding with potentially significant implications for AI development. Participants who believed they were training AI through the ultimatum game tended to insist on fair monetary splits even at a cost to their own earnings. This behavioral shift persisted after the AI training had ended, and the researchers call for further investigation into the factors that shape how people behave when they are training AI.

<Related Information>

The consequences of AI training on human decision-making

Lauren S. Treiman, Chien-Ju Ho, and Wouter Kool
Proceedings of the National Academy of Sciences  Published: August 6, 2024
DOI: https://doi.org/10.1073/pnas.2408731121

Significance

In recent years, people have become more reliant on AI to help them make decisions. These models not only help us but also learn from our behavior. Therefore, it is important to understand how our interactions with AI models influence them. Current practice assumes that the human behavior used to train AI is unbiased. However, our work challenges this assumption. We show that people change their behavior when they are aware it is used to train AI. Moreover, this behavior persists days after training has ended. These findings highlight a problem with AI development: Assumptions of unbiased training data can lead to unintentionally biased models. This AI may reinforce these habits, resulting in both humans and AI deviating from optimal behavior.

Abstract

AI is now an integral part of everyday decision-making, assisting us in both routine and high-stakes choices. These AI models often learn from human behavior, assuming this training data is unbiased. However, we report five studies that show that people change their behavior to instill desired routines into AI, indicating this assumption is invalid. To show this behavioral shift, we recruited participants to play the ultimatum game, where they were asked to decide whether to accept proposals of monetary splits made by either other human participants or AI. Some participants were informed their choices would be used to train an AI proposer, while others did not receive this information. Across five experiments, we found that people modified their behavior to train AI to make fair proposals, regardless of whether they could directly benefit from the AI training. After completing this task once, participants were invited to complete this task again but were told their responses would not be used for AI training. People who had previously trained AI persisted with this behavioral shift, indicating that the new behavioral routine had become habitual. This work demonstrates that using human behavior as training data has more consequences than previously thought since it can engender AI to perpetuate human biases and cause people to form habits that deviate from how they would normally act. Therefore, this work underscores a problem for AI algorithms that aim to learn unbiased representations of human preferences.
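
To make the feedback loop concrete, here is a minimal, hypothetical Python sketch of the mechanism the paper describes: an AI proposer that learns its offer from human acceptance behavior in the ultimatum game. The pot size, the fairness thresholds, and the greedy learning rule are all illustrative assumptions, not details taken from the study.

```python
import random

# Hypothetical sketch of the ultimatum-game dynamic described above.
# Pot size, thresholds, and the learning rule are illustrative assumptions.

POT = 10  # units of money split each round

def responder_accepts(offer, threshold):
    """A responder accepts a split if their share meets their fairness threshold."""
    return offer >= threshold

def collect_training_data(threshold, n_rounds=2000, seed=0):
    """Record (offer, accepted) pairs from a responder with a fixed threshold."""
    rng = random.Random(seed)
    return [(offer, responder_accepts(offer, threshold))
            for offer in (rng.randint(1, POT - 1) for _ in range(n_rounds))]

def learned_offer(data):
    """Greedy AI proposer: choose the offer that maximizes expected payoff,
    i.e. (POT - offer) weighted by the empirical acceptance rate."""
    stats = {}
    for offer, accepted in data:
        n, k = stats.get(offer, (0, 0))
        stats[offer] = (n + 1, k + accepted)
    return max(stats, key=lambda o: (POT - o) * stats[o][1] / stats[o][0])

# A purely self-interested responder accepts any positive offer (threshold 1);
# a responder who "trains" the AI rejects unfair splits (threshold 5).
print("AI trained on self-interested play offers:", learned_offer(collect_training_data(1)))
print("AI trained on fairness-minded play offers:", learned_offer(collect_training_data(5)))
```

Under these assumptions, an AI trained on purely self-interested responders learns to make the smallest possible offer, while one trained on responders who reject unfair splits learns to offer an even share, which mirrors how the rejections of participants who knew they were training AI would propagate into the learned proposer.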

Education