Action (physics)
Cognitive psychology
Hammer
Gaze
Perception
Motor skill
Psychology
Representation (politics)
Computer science
Artificial intelligence
Developmental psychology
Neuroscience
Engineering
Politics
Physics
Structural engineering
Quantum mechanics
Law
Political science
Authors
Ori Ossmy, Brianna E. Kaplan, Danyang Han, Melody Xu, Catherine Bianco, Roy Mukamel, Karen E. Adolph
Source
Journal: Current Biology
[Elsevier]
Date: 2021-12-01
Volume/Issue: 32 (1): 190-199.e3
Cited by: 1
Identifier
DOI: 10.1016/j.cub.2021.11.018
Abstract
Across species and ages, planning multi-step actions is a hallmark of intelligence and critical for survival. Traditionally, researchers adopt a "top-down" approach to action planning by focusing on the ability to create an internal representation of the world that guides the next step in a multi-step action. However, a top-down approach does not inform on underlying mechanisms, so researchers can only speculate about how and why improvements in planning occur. The current study takes a "bottom-up" approach by testing developmental changes in the real-time, moment-to-moment interplay among perceptual, neural, and motor components of action planning using simultaneous video, motion-tracking, head-mounted eye tracking, and electroencephalography (EEG). Preschoolers (n = 32) and adults (n = 22) grasped a hammer with their dominant hand to pound a peg when the hammer handle pointed in different directions. When the handle pointed toward their non-dominant hand, younger children ("nonadaptive planners") used a habitual overhand grip that interfered with wielding the hammer, whereas adults and older children ("adaptive planners") used an adaptive underhand grip. Adaptive and nonadaptive children differed in when and where they directed their gaze to obtain visual information, neural activation of the motor system before reaching, and straightness of their reach trajectories. Nonadaptive children immediately used a habitual overhand grip before gathering visual information, leaving insufficient time to form a plan before acting. Our novel bottom-up approach transcends mere speculation by providing converging evidence that the development of action planning depends on a real-time "tug of war" between habits and information gathering and processing.