Eye, Head and Torso Coordination During Gaze Shifts in Virtual Reality
Ludwig Sidenmark and Hans Gellersen
TOCHI: ACM Transactions on Computer-Human Interaction
Humans perform gaze shifts naturally through a combination of eye, head and body movements. Although gaze has long been studied as an input modality for interaction, prior work has largely ignored the coordination of the eyes, head and body. This article reports a study of gaze shifts in virtual reality (VR) aimed at addressing this gap and informing design. We identify general eye, head and torso coordination patterns and analyse the relative contribution and temporal alignment of these movements. We quantify effects of target distance, direction and user posture, describe preferred eye-in-head motion ranges, and identify high variability in head movement tendency. The study's insights lead us to propose gaze zones that reflect different levels of contribution from eye, head and body. We discuss design implications for HCI and VR, and in conclusion argue for treating gaze as multimodal input, and eye, head and body movement as synergetic in interaction design.
Ludwig Sidenmark and Hans Gellersen. 2019. Eye, Head and Torso Coordination During Gaze Shifts in Virtual Reality. ACM Trans. Comput.-Hum. Interact. (TOCHI) 27, 1, Article 4 (December 2019), 40 pages. https://doi.org/10.1145/3361218
BibTex
@article{10.1145/3361218,
  author = {Sidenmark, Ludwig and Gellersen, Hans},
  title = {Eye, Head and Torso Coordination During Gaze Shifts in Virtual Reality},
  year = {2019},
  issue_date = {February 2020},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {27},
  number = {1},
  issn = {1073-0516},
  url = {https://doi.org/10.1145/3361218},
  doi = {10.1145/3361218},
  journal = {ACM Trans. Comput.-Hum. Interact.},
  month = {dec},
  articleno = {4},
  numpages = {40},
  keywords = {eye-head coordination, head and body movement, gaze interaction, eye gaze, eye, multimodal interaction, eye tracking, gaze shifts}
}