2024-08-07 Washington University in St. Louis
Researchers have found people adjust their own behavior to be more fair when training AI. (Photo: Shutterstock)
<Related Information>
- https://source.wustl.edu/2024/08/humans-change-their-own-behavior-when-training-ai/
- https://www.pnas.org/doi/10.1073/pnas.2408731121
The consequences of AI training on human decision-making
Lauren S. Treiman, Chien-Ju Ho, and Wouter Kool
Proceedings of the National Academy of Sciences Published: August 6, 2024
DOI: https://doi.org/10.1073/pnas.2408731121
Significance
In recent years, people have become more reliant on AI to help them make decisions. These models not only help us but also learn from our behavior. Therefore, it is important to understand how our interactions with AI models influence them. Current practice assumes that the human behavior used as training data is unbiased. However, our work challenges this assumption. We show that people change their behavior when they are aware it is being used to train AI. Moreover, this behavioral shift persists days after training has ended. These findings highlight a problem for AI development: assuming unbiased training data can yield unintentionally biased models. The resulting AI may, in turn, reinforce these habits, causing both humans and AI to deviate from optimal behavior.
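To make the feedback loop concrete, here is a minimal sketch (not from the paper; all thresholds, noise values, and function names are illustrative assumptions) of how a proposer that optimizes its offers against an acceptance model fit to strategically shifted responses ends up with a different offer policy than one fit to participants' natural preferences.

```python
import numpy as np

def acceptance_prob(offer, threshold, noise=1.0):
    """Logistic acceptance curve: responders tend to accept offers
    above their fairness threshold (out of a $10 pie). Illustrative only."""
    return 1.0 / (1.0 + np.exp(-(offer - threshold) / noise))

def best_offer(threshold):
    """Offer (in whole dollars) maximizing the proposer's expected payoff
    (10 - offer) * P(accept) against a given responder threshold."""
    offers = np.arange(0, 11)
    expected = (10 - offers) * acceptance_prob(offers, threshold)
    return offers[np.argmax(expected)]

# Hypothetical thresholds: participants reject low offers more often
# when they know their choices are training the AI proposer.
natural_threshold = 2.0         # assumed "natural" rejection point
training_aware_threshold = 5.0  # assumed shift toward enforcing fairness

print(best_offer(natural_threshold))         # lower, more selfish offer
print(best_offer(training_aware_threshold))  # higher, "fairer" offer
```

The point is only directional: if the training data encode a deliberately stricter fairness standard, the proposer policy learned from that data inherits it.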
Abstract
AI is now an integral part of everyday decision-making, assisting us in both routine and high-stakes choices. These AI models often learn from human behavior, assuming this training data is unbiased. However, we report five studies showing that people change their behavior to instill desired routines into AI, indicating this assumption is invalid. To demonstrate this behavioral shift, we recruited participants to play the ultimatum game, in which they decided whether to accept proposals of monetary splits made by either other human participants or AI. Some participants were informed their choices would be used to train an AI proposer, while others did not receive this information. Across five experiments, we found that people modified their behavior to train AI to make fair proposals, regardless of whether they could directly benefit from the AI training. After completing this task once, participants were invited to complete it again but were told their responses would not be used for AI training. People who had previously trained AI persisted with this behavioral shift, indicating that the new behavioral routine had become habitual. This work demonstrates that using human behavior as training data has more consequences than previously thought, since it can lead AI to perpetuate human biases and cause people to form habits that deviate from how they would normally act. It therefore underscores a problem for AI algorithms that aim to learn unbiased representations of human preferences.
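As a rough illustration of the experimental design (the thresholds, trial counts, offer range, and accept/reject rule below are assumptions for the sketch, not the authors' parameters), the following simulation models responders in a $10 ultimatum game, raises the rejection threshold for the group told their choices train an AI, and keeps that raised threshold in a follow-up session to mimic the reported habit persistence.

```python
import random

PIE = 10  # dollars to split in each ultimatum-game round

def respond(offer, threshold):
    """Responder rule: accept any offer at or above a fairness threshold."""
    return offer >= threshold

def run_session(n_trials, threshold, seed=0):
    """Simulate one session of random proposals; return the acceptance rate."""
    rng = random.Random(seed)
    accepted = sum(respond(rng.randint(0, PIE // 2), threshold)
                   for _ in range(n_trials))
    return accepted / n_trials

# Assumed thresholds: the "AI training" group rejects unfair splits it
# would otherwise accept, sacrificing payoff to teach the AI fairness.
control = run_session(1000, threshold=2, seed=0)    # baseline behavior
training = run_session(1000, threshold=4, seed=1)   # while training the AI
follow_up = run_session(1000, threshold=4, seed=2)  # shift persists later

print(f"control acceptance:   {control:.2f}")
print(f"training acceptance:  {training:.2f}")
print(f"follow-up acceptance: {follow_up:.2f}")
```

Running this yields a visibly lower acceptance rate for the training-aware group than the control group, and the follow-up session mirrors the training session, which is the qualitative pattern the paper reports: a costly fairness-enforcing shift that outlasts the training context.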