Eye-Head Coordination and Interaction
Gaze shifts are fast and can extend over the entire visual field, even to objects not initially within view. Smaller shifts can be performed with the eyes alone, but gaze shifts frequently involve head movement, both in support of eye saccades and to maintain a comfortable eye-in-head position. The relationship and coordination of the movements is complex. Not all gaze involves head movement, not all head movement is in support of gaze, and when eye and head move together they do so at different speeds: their rotations add while moving toward a target, and subtract once the eyes reach the target ahead of the head.
In groundwork for the GEMINI project, we have studied eye, head and torso movement during gaze shifts in virtual reality [1]. The two plots show examples of gaze shifts that give insight into the complex temporal and spatial relationships.
The example at the top is a gaze shift performed from a central eye-in-head position, in reaction to a target appearing at 35º visual angle. The eyes responded faster than the head and reached the target (vertical line) with only minimal head contribution to the gaze amplitude. The head continued to move toward the target, compensated by VOR eye movement to maintain gaze on the target and rotate the eye back to a more central position in the head (VOR refers to the vestibulo-ocular reflex that stabilises vision).
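The decomposition above can be sketched in a few lines. This is an illustrative sketch, not the analysis code from the study: world-referenced gaze is modelled as head orientation plus eye-in-head angle, so a VOR phase shows up as head rotation with near-constant gaze. All angle values and the 1º tolerance are hypothetical illustration choices.

```python
def gaze_angle(head_deg: float, eye_in_head_deg: float) -> float:
    """World-referenced gaze = head orientation + eye rotation in the head."""
    return head_deg + eye_in_head_deg

def is_vor(prev: tuple, curr: tuple, tolerance_deg: float = 1.0) -> bool:
    """VOR phase: the head keeps rotating while gaze stays on the target,
    i.e. the eye counter-rotates so gaze changes less than the tolerance.
    Samples are (head_deg, eye_in_head_deg) pairs."""
    head_moved = abs(curr[0] - prev[0]) > tolerance_deg
    gaze_stable = abs(gaze_angle(*curr) - gaze_angle(*prev)) <= tolerance_deg
    return head_moved and gaze_stable

# Example: the head rotates from 10º to 15º while the eye counter-rotates
# from 25º to 20º in the head, keeping gaze fixed at 35º → VOR.
print(is_vor((10.0, 25.0), (15.0, 20.0)))  # True
```

The same additive model also captures the combined phase of a gaze shift, where eye and head rotations change in the same direction and gaze amplitude grows faster than either alone.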
The plot below illustrates a larger gaze shift of 100º, where the target was not initially within the field of view but was indicated by a directional cue. In contrast to the prior example, the gaze shift was not reactive to a stimulus but planned based on the cue, leading to a longer reaction time after which eye and head started simultaneously. As the eyes move faster than the head, they need to repeatedly rotate back relative to the head, but the resulting steps in the gaze signal are short and do not represent fixations.
Among the key insights gained is that the head does not follow the eyes all the way but only to within about 10º of the target, leaving a considerable offset in gaze direction from the centre of the head. This is significant for design, where head orientation is widely used to approximate gaze.
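The design implication can be made concrete with a small hedged sketch: if the head settles up to roughly 10º short of the target, head orientation alone is a biased gaze estimate. The offsets below are hypothetical illustration values, not measured data from the study.

```python
def head_pointing_error(gaze_deg: float, head_deg: float) -> float:
    """Angular error when head orientation is used to approximate gaze."""
    return abs(gaze_deg - head_deg)

def head_is_reliable_proxy(gaze_deg: float, head_deg: float,
                           max_offset_deg: float = 10.0) -> bool:
    """Whether head orientation falls within an assumed tolerance of gaze."""
    return head_pointing_error(gaze_deg, head_deg) <= max_offset_deg

# Hypothetical example: after a 35º gaze shift the head settles at 27º
# while gaze rests on the target at 35º, leaving an 8º pointing error.
print(head_pointing_error(35.0, 27.0))  # 8.0
```

For interfaces that select targets by head ray, such an offset means targets near the edge of a gaze shift can be missed unless the selection region accounts for it.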
We have designed several novel interaction techniques and interface concepts that leverage the insights we gained into eye-head coordination. These include novel concepts for gaze pointing and selection mediated by head movement [2], seamless switching of cursor control between gaze for coarse-grained and head for fine-grained positioning [3], and radial interfaces for expressive input with gaze and head movement [4].
[1] Eye, Head and Torso Coordination During Gaze Shifts in Virtual Reality
Ludwig Sidenmark and Hans Gellersen
TOCHI: ACM Transactions on Computer-Human Interaction

[2] Eye&Head: Synergetic Eye and Head Movement for Gaze Pointing and Selection
Ludwig Sidenmark and Hans Gellersen
UIST '19: ACM Symposium on User Interface Software and Technology

[3] BimodalGaze: Seamlessly Refined Pointing with Gaze and Filtered Gestural Head Movement
Ludwig Sidenmark, Diako Mardanbegi, Argenis Ramirez Gomez, Christopher Clarke, and Hans Gellersen
ETRA '20: ACM Symposium on Eye Tracking Research and Applications

[4] Radi-Eye: Hands-Free Radial Interfaces for 3D Interaction using Gaze-Activated Head-Crossing
Ludwig Sidenmark, Dominic Potts, Bill Bapisch, and Hans Gellersen
CHI '21: CHI Conference on Human Factors in Computing Systems