How the brain creates facial expressions


2026-01-07 Rockefeller University

A research team at Rockefeller University has elucidated the neural mechanism by which the brain produces facial expressions. The study showed that the motor commands controlling the facial muscles are not mere muscle-by-muscle instructions but are generated in an integrated manner according to emotion and social context. Activation of specific neural circuits allows expressions such as smiles, fear, and anger to emerge in a natural, coherent form. The team also found that these circuits operate while anticipating the reactions of others, suggesting how facial expressions evolved as a means of communication. These findings may advance the understanding and treatment of neurological disorders involving impaired expression control, such as autism spectrum disorder and Parkinson's disease.

<Related information>

Facial gestures are enacted through a cortical hierarchy of dynamic and stable codes

Geena R. Ianni, Yuriria Vázquez, Adam G. Rouse, Marc H. Schieber, […], and Winrich A. Freiwald
Science Published: 8 Jan 2026
DOI: https://doi.org/10.1126/science.aea0890

Editor’s summary

Primates use their faces as important social communication tools. The neural mechanisms generating facial movements are largely unknown. Combining naturalistic stimuli with functional magnetic resonance imaging–guided electrophysiological recordings, Ianni et al. found that a network encompassing both lateral and medial neocortical areas is involved in the production of facial gestures, including both deliberate and emotional expressions (see the Perspective by Waller and Whitehouse). These brain regions exhibit distinct activity patterns even before a gesture is initiated and adapt their signaling based on the social context. Although these cortical regions function in parallel, they are organized along a posterior-to-anterior gradient. This organization facilitates a transition from rapidly changing to more stable gesture representations, ensuring that facial expressions remain clear and contextually appropriate during social interactions. —Peter Stern

Structured Abstract

INTRODUCTION

Faces are central to primate social life. Beyond their static features, dynamic facial gestures convey critical information about internal states, intentions, and social hierarchies. Although much is known about how the brain perceives faces, the neural mechanisms that generate facial gestures remain poorly understood. In primates, the muscles of facial expression are under direct cortical control from multiple regions. It is unclear whether gestures are governed by distinct medial and lateral circuits or by an integrated, distributed cortical network.

RATIONALE

We investigated how the primate cortex generates naturalistic facial gestures by combining functional localization (functional magnetic resonance imaging) of cortical circuitry with multichannel array recordings in four regions: primary motor (M1), ventral premotor (PMv), cingulate motor (M3), and primary somatosensory (S1) cortex. Subjects engaged in a naturalistic paradigm while we simultaneously recorded from single cells across these regions and continuously tracked ongoing movement, including ethologically recognizable facial gestures. We then investigated single-cell properties and neural population activity within each face motor region to address two questions: How do cortical regions encode facial gestures, and how are these codes organized across space and time?
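To make the first question concrete, the following is a minimal sketch, not the authors' analysis code, of how gesture category can be read out from simultaneously recorded spike counts with a cross-validated linear decoder. The data, population size, and gesture labels below are synthetic assumptions for illustration only.

```python
# Minimal sketch (not the authors' pipeline): decoding gesture category from
# a population of simultaneously recorded neurons. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_neurons = 200, 64              # hypothetical recording size
gestures = rng.integers(0, 3, n_trials)    # 3 gesture categories (hypothetical labels)

# Synthetic spike counts: each gesture category shifts the population
# firing pattern slightly, mimicking gesture-specific tuning.
tuning = rng.normal(0, 1, (3, n_neurons))
rates = np.clip(5 + tuning[gestures], 0.1, None)
spike_counts = rng.poisson(rates)

# Cross-validated linear decoder: accuracy above chance (~0.33) implies
# the population carries gesture-category information.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, spike_counts, gestures, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance ~ 0.33)")
```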

RESULTS

Face motor regions across the cortex were broadly and equally involved in all gesture types, overturning a classic view that the medial cortex encodes emotional movements, and the lateral cortex voluntary ones. Instead, every region contained both broadly tuned and gesture-specific cells. Analysis of neural population dynamics revealed that gesture category was decodable in all regions not only during movement but also well before its onset, with distinct activity trajectories underlying each gesture. Neural activity, especially in motor and somatosensory cortex, predicted moment-by-moment facial kinematics during gesture production, yet neuron-neuron correlations were gesture specific despite shared effectors. Temporal generalization decoding revealed a hierarchy of coding strategies across regions: lateral motor and somatosensory cortices used highly dynamic codes that changed over time, consistent with real-time control of movement. Medial cingulate cortex used a temporally stable code that generalized across premovement and movement epochs, and the premotor cortex showed intermediate, moderately stable dynamics.
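The temporal generalization analysis mentioned above can be sketched as follows: a classifier is trained at one time point and tested at every other, so a temporally stable code (as reported for cingulate M3) generalizes off the diagonal of the resulting accuracy matrix, whereas a dynamic code (as in M1 and S1) confines accuracy to the diagonal. This is a minimal illustration on synthetic data, not the paper's pipeline; all sizes and variable names are assumptions.

```python
# Minimal sketch of temporal generalization decoding on synthetic data:
# train at time t_train, test at every t_test, and inspect the accuracy matrix.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_neurons, n_times = 120, 40, 20
labels = rng.integers(0, 2, n_trials)

# Synthetic "stable" code: the same gesture-dependent pattern persists over time.
pattern = rng.normal(0, 1, (2, n_neurons))
X = rng.normal(0, 1, (n_trials, n_times, n_neurons)) + pattern[labels][:, None, :]

train_idx = np.arange(0, n_trials, 2)
test_idx = np.arange(1, n_trials, 2)
gen = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X[train_idx, t_train], labels[train_idx])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[test_idx, t_test], labels[test_idx])

# High accuracy far from the diagonal -> temporally stable code;
# accuracy confined to the diagonal -> dynamic, time-varying code.
print(gen.round(2))
```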

CONCLUSION

Our findings establish that facial gesture production is supported by a distributed cortical network hierarchically organized by temporal dynamics, balancing dynamic and stable coding strategies. In facial gesture production, muscle kinematics are the end product of a larger sensorimotor process wherein cognitive variables, such as visual information, internal states, and somatosensory feedback, each play a role. Intimate cortical involvement in gesture production, along with the segregation of neural states before movement onset, indicates that facial gestures are not merely reflexive outputs but arise from the integration of contextual and sensory information prior to behavior. This temporal hierarchy in cortical codes for facial gestures may provide a mechanism by which a motor system integrates these cues while producing well-timed and interpretable motor outputs. In addition to refining long-standing models of facial motor control, this updated framework has translational implications: Understanding how cortical codes generate naturalistic communication may inform brain-computer interfaces designed to restore these functions in patients.

Temporal dynamics across cortical face areas during naturalistic gestures.
Functionally localized, simultaneous multichannel recordings in four face motor areas (M1, S1, PMv, M3) during naturalistic social interaction revealed a hierarchy in temporal coding for facial gestures across the cortex. All regions encoded all gesture types; lateral regions (M1, S1) showed fast dynamics driving kinematics, whereas anterior and medial regions (M3, PMv) had stable dynamics, potentially representing social context.

Abstract

Facial gestures are one fundamental set of communicative behaviors in primates, generated through the dynamic arrangement of many fine muscles. Anatomy shows that facial muscles are under direct control from multiple cortical regions, and studies of patients with focal lesions suggest that lateral frontal cortices control voluntary movements and medial cortices emotional expressions. By directly measuring single-neuron activity from both sets of areas, we found that lateral and medial cortical face motor regions encode both types of gestures. They do so through area-specific temporal activity patterns, distinguishable well prior to movement onset. Our results show that cortical regions projecting in parallel downstream form a continuum of gesture coding, from dynamic to temporally stable, to produce socially appropriate motor outputs during communication.
