Turning Beginners Into Experts With Artificial Intelligence


2025-01-29 Texas A&M University

Researchers have developed an advanced AI model that could help transfer knowledge in a scalable fashion.

A joint team led by Professor Alfredo Garcia of Texas A&M University, working with researchers at the University of Minnesota, has developed a new AI-based method for helping beginners acquire complex sensorimotor skills. The method trains an AI model on experts' movement data and then supports novices in building up the skill step by step. Specifically, expert behavior is modeled as a Markov decision process (MDP), and the AI assigns rewards to successfully completed tasks, which drives effective learning. The approach is expected to find applications in many areas, from mastering complex surgical techniques in medicine to improving technique in sports.
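The setup sketched above, in which expert behavior is modeled as an MDP and the reward driving that behavior has to be inferred, can be illustrated in a few lines. The toy example below is a minimal sketch, not the authors' code: the tabular MDP, its transition probabilities, and the linear reward are invented for illustration. It computes the soft-optimal (stochastic) policy induced by a given reward, which is the kind of expert model the papers listed below estimate from demonstrations.

```python
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)

# Toy transition probabilities P[s, a, s'] (purely illustrative).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

# A simple "structural" reward: one parameter per (state, action) pair.
theta = rng.normal(size=(n_states, n_actions))

def soft_optimal_policy(r, n_iters=200):
    """Soft value iteration; returns a stochastic policy pi[s, a]."""
    V = np.zeros(n_states)
    for _ in range(n_iters):
        Q = r + gamma * P @ V                 # Q[s, a] under the current V
        V = np.log(np.exp(Q).sum(axis=1))     # soft (log-sum-exp) backup
    return np.exp(Q - V[:, None])             # softmax over actions in each state

pi = soft_optimal_policy(theta)
print(pi.round(3))  # row s: action probabilities of the modeled "expert" in state s
```

Inverse reinforcement learning then runs this direction in reverse: given demonstrated (state, action) pairs, it searches for the reward whose soft-optimal policy makes those demonstrations most likely.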

<Related Information>

Structural Estimation of Markov Decision Processes in High-Dimensional State Space with Finite-Time Guarantees

Siliang Zeng, Mingyi Hong, Alfredo Garcia
Operations Research  Published: 19 Sep 2024
DOI: https://doi.org/10.1287/opre.2022.0511

Abstract

We consider the task of estimating a structural model of dynamic decisions by a human agent based on the observable history of implemented actions and visited states. This problem has an inherent nested structure: In the inner problem, an optimal policy for a given reward function is identified, whereas in the outer problem, a measure of fit is maximized. Several approaches have been proposed to alleviate the computational burden of this nested-loop structure, but these methods still suffer from high complexity when the state space is either discrete with large cardinality or continuous in high dimensions. Other approaches in the inverse reinforcement learning literature emphasize policy estimation at the expense of reduced reward estimation accuracy. In this paper, we propose a single-loop estimation algorithm with finite time guarantees that is equipped to deal with high-dimensional state spaces without compromising reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show the proposed algorithm converges to a stationary solution with a finite-time guarantee. Further, if the reward is parameterized linearly, the algorithm approximates the maximum likelihood estimator sublinearly.
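As a rough illustration of the single-loop idea, the sketch below alternates one soft policy-improvement step with one stochastic gradient step on a tabular reward. The gradient used here is the standard maximum-likelihood direction for a soft-optimal expert model, expert feature counts minus feature counts from a single rollout of the current policy; the toy MDP, the synthetic demonstrations, and all hyperparameters are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

n_states, n_actions, gamma, horizon = 4, 2, 0.9, 20
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # toy dynamics

# Expert demonstrations as (state, action) pairs; synthetic stand-ins here.
expert_sa = [(int(rng.integers(n_states)), int(rng.integers(n_actions)))
             for _ in range(200)]

def feature_counts(pairs):
    """Normalized one-hot feature counts over (s, a) pairs."""
    mu = np.zeros((n_states, n_actions))
    for s, a in pairs:
        mu[s, a] += 1
    return mu / len(pairs)

def rollout(pi, n_steps=horizon):
    """Sample one trajectory from the current policy in the toy MDP."""
    s, pairs = int(rng.integers(n_states)), []
    for _ in range(n_steps):
        a = int(rng.choice(n_actions, p=pi[s]))
        pairs.append((s, a))
        s = int(rng.choice(n_states, p=P[s, a]))
    return pairs

mu_expert = feature_counts(expert_sa)
r = np.zeros((n_states, n_actions))  # reward estimate, one entry per (s, a)
V = np.zeros(n_states)
lr = 0.1

for _ in range(500):
    # (1) One soft policy-improvement step under the current reward estimate.
    Q = r + gamma * P @ V
    Qmax = Q.max(axis=1, keepdims=True)
    V = (Qmax + np.log(np.exp(Q - Qmax).sum(axis=1, keepdims=True)))[:, 0]
    pi = np.exp(Q - V[:, None])
    # (2) One stochastic gradient step for likelihood maximization: expert
    # feature counts minus feature counts from a single policy rollout.
    r += lr * (mu_expert - feature_counts(rollout(pi)))

print(r.round(2))
```

Interleaving the two updates is what removes the nested loop: the policy is never solved to convergence for any intermediate reward, yet, per the abstract, the pair of iterates still converges to a stationary solution with a finite-time guarantee.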

 

When demonstrations meet generative world models: a maximum likelihood framework for offline inverse reinforcement learning

Siliang Zeng, Chenliang Li, Alfredo Garcia, Mingyi Hong
NIPS ’23: Proceedings of the 37th International Conference on Neural Information Processing Systems  Published: 10 December 2023

Abstract

Offline inverse reinforcement learning (Offline IRL) aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent. Accurate models of expertise in executing a task have applications in safety-sensitive settings such as clinical decision making and autonomous driving. However, the structure of an expert’s preferences implicit in observed actions is closely linked to the expert’s model of the environment dynamics (i.e., the “world” model). Thus, inaccurate models of the world obtained from finite data with limited coverage could compound inaccuracy in estimated rewards. To address this issue, we propose a bi-level optimization formulation of the estimation task wherein the upper level is likelihood maximization based upon a conservative model of the expert’s policy (lower level). The policy model is conservative in that it maximizes reward subject to a penalty that is increasing in the uncertainty of the estimated model of the world. We propose a new algorithmic framework to solve the bi-level optimization problem formulation and provide statistical and computational guarantees of performance for the associated optimal reward estimator. Finally, we demonstrate that the proposed algorithm outperforms the state-of-the-art offline IRL and imitation learning benchmarks by a large margin, over the continuous control tasks in MuJoCo and different datasets in the D4RL benchmark. Our implementation is available at https://github.com/Cloud0723/Offline-MLIRL
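The "conservative" lower-level policy described above maximizes reward minus a penalty that grows with the uncertainty of the estimated world model. One simple way to obtain such a penalty from offline data, sketched below, is to measure disagreement within a bootstrap ensemble of tabular dynamics models; the dataset, the ensemble construction, and the penalty weight lam are illustrative assumptions, not the released Offline-MLIRL code.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions, n_models = 4, 2, 5

# Offline dataset of transitions (s, a, s'); synthetic stand-ins here.
data = [(int(rng.integers(n_states)), int(rng.integers(n_actions)),
         int(rng.integers(n_states))) for _ in range(300)]

def fit_dynamics(transitions):
    """Tabular maximum-likelihood dynamics model P_hat[s, a, s'] with smoothing."""
    counts = np.ones((n_states, n_actions, n_states))  # Laplace prior
    for s, a, s2 in transitions:
        counts[s, a, s2] += 1
    return counts / counts.sum(axis=2, keepdims=True)

# Bootstrap ensemble: each member is fit on a resampled copy of the data.
ensemble = [fit_dynamics([data[i] for i in rng.integers(len(data), size=len(data))])
            for _ in range(n_models)]
P_mean = np.mean(ensemble, axis=0)

# Uncertainty u(s, a): average L1 disagreement of ensemble members from the mean.
u = np.mean([np.abs(P - P_mean).sum(axis=2) for P in ensemble], axis=0)

# Conservative reward: the current reward estimate minus the uncertainty penalty.
r_hat = rng.normal(size=(n_states, n_actions))  # stand-in for the learned reward
lam = 1.0
r_conservative = r_hat - lam * u

# The lower-level policy would then be optimized against r_conservative under
# P_mean, while the upper-level likelihood step updates r_hat (omitted here).
print(r_conservative.round(2))
```

Penalizing the reward where the learned dynamics are least trustworthy keeps the estimated policy from exploiting regions the offline data never covered, which is the failure mode the abstract attributes to models fitted from finite data with limited coverage.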
