This paper studies how peripheral vision, gaze, and temporal information contribute to human decision-making in Atari games, using eye-tracking data. Controlled ablations reverse-engineer each visual channel's contribution, finding peripheral vision to be the strongest signal: removing it drops action-prediction accuracy by 35–44%, versus only 2–3% for gaze.
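The controlled-ablation idea above can be sketched as follows: blank out either the gaze-centered (central) region or everything outside it, then compare action-prediction accuracy on the ablated frames against the baseline. The function names, the 84×84 frame size, and the disc-shaped foveal mask are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def ablate_region(frame, gaze_xy, radius, keep="central"):
    """Zero out pixels inside or outside a gaze-centered disc.

    keep="central"    -> peripheral ablation: only the foveal region survives
    keep="peripheral" -> central ablation: the foveal region is blanked
    (Disc-shaped mask is an illustrative assumption.)
    """
    h, w = frame.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    gx, gy = gaze_xy
    inside = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2
    mask = inside if keep == "central" else ~inside
    # Broadcast the 2-D mask over color channels if the frame has any.
    mask = mask[..., None] if frame.ndim == 3 else mask
    return np.where(mask, frame, 0)

def accuracy_drop(baseline_acc, ablated_acc):
    """Percentage-point drop in action-prediction accuracy after ablation."""
    return baseline_acc - ablated_acc
```

A model trained on frames from `ablate_region(..., keep="peripheral")` that loses far more accuracy than one trained on `keep="central"` frames would mirror the paper's finding that the periphery carries most of the decision-relevant signal.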
Research
Estimating Central, Peripheral, and Temporal Visual Contributions to Human Decision Making in Atari Games
Removing peripheral vision drops prediction of human Atari decisions by 35–44%, versus just 2–3% for eye gaze, suggesting current AI visual models may be optimizing attention to the wrong parts of the screen.
Tuesday, April 7, 2026 12:00 PM UTC · 2 MIN READ · SOURCE: arXiv CS.LG (Machine Learning)
Tags
research