Gaze has been studied in human-computer interaction (HCI) since the seventies. This work has focussed on eye tracking; on saccades and fixations as eye movements associated with intent; on interfaces that are attentive to where we look; and on gaze as a natural pointer for input selection. In GEMINI, we are developing a new foundation for interaction design with gaze and eye movement, by considering how eye, head, hand and body movements work in concert.

Eye-Head Coordination and Interaction

We tend to think of ‘looking’ as something that we do with only our eyes. However, head movement is an integral part of our visual system, supporting eye saccades and maintaining a comfortable eye-in-head position. In eye-tracking applications, head movement has been treated as a problem rather than as part of gaze, and suppressed or filtered out. In contrast, we are exploring how input from eye and head can be used in synergy.
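As an illustration of treating eye and head as complementary inputs rather than filtering the head out, a gaze estimate can combine a head pose with an eye-in-head direction while keeping the two contributions separable. The sketch below is a minimal example, assuming a head-mounted eye tracker that reports both signals; the function names are hypothetical and are not a description of a specific GEMINI technique.

```python
import numpy as np

def gaze_in_world(head_rotation: np.ndarray, eye_in_head: np.ndarray) -> np.ndarray:
    """Combine head orientation (3x3 rotation matrix, world frame) with an
    eye-in-head direction (unit vector) into a gaze direction in the world."""
    return head_rotation @ eye_in_head

def split_gaze_shift(total_shift_deg: float, eye_shift_deg: float) -> tuple[float, float]:
    """Decompose a gaze shift into its eye and head contributions (degrees),
    keeping both available as separate input channels."""
    return eye_shift_deg, total_shift_deg - eye_shift_deg
```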

Pursuits: Gaze at Objects in Motion

The main types of eye movements studied for HCI are fixations, which occur when we focus on an object, and saccades, which we use to rapidly shift our attention across the visual field. Fixations are detected when the eyes are relatively still in the head, keeping an object in the line of sight. However, if we look at an object that is moving, our eyes produce a different type of movement, known as smooth pursuit, smoothly anticipating the object’s path to keep it in focus. In our group, we pioneered gaze interfaces that leverage this closed-loop behaviour of the eyes for input and interaction.
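The core idea of such pursuit-based selection is to correlate the eye trajectory with the trajectory of each moving target over a short time window, and to select the target whose motion the eyes match. The sketch below is only an illustration of this principle, not the exact algorithm behind our published work; the function names and the threshold are illustrative assumptions.

```python
import numpy as np

def motion_correlation(eye_xy: np.ndarray, obj_xy: np.ndarray) -> float:
    """Mean Pearson correlation of eye and object trajectories over x and y.
    Both inputs are (N, 2) position arrays sampled over the same time window."""
    scores = []
    for axis in range(2):
        e, o = eye_xy[:, axis], obj_xy[:, axis]
        if e.std() == 0 or o.std() == 0:      # no movement on this axis
            continue
        scores.append(np.corrcoef(e, o)[0, 1])
    return float(np.mean(scores)) if scores else 0.0

def select_moving_target(eye_xy, target_trajectories, threshold=0.8):
    """Return the index of the target whose motion best matches the eyes,
    or None if no correlation exceeds the (illustrative) threshold."""
    scores = [motion_correlation(eye_xy, t) for t in target_trajectories]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None
```

In practice such correlation is typically computed over a sliding window, so that selection emerges after the eyes have followed a target for a moment and is released when they stop.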

Gaze and Hand Multimodal Interaction

We naturally look at objects that we aim to manipulate. In HCI, gaze has therefore been viewed as an alternative to manual input, where objects are selected by gaze and dwell time instead of mouse and click. This has advanced accessibility but exposed problems of Midas Touch (unintended activation of anything we merely look at), limited accuracy and limited expressiveness. In our work, we are instead focussing on how gaze and hands can complement each other for multimodal interaction, based on natural eye-hand coordination.
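One common division of labour in gaze-and-hand interfaces, shown here only as an illustrative sketch with hypothetical names rather than as a GEMINI technique, is to let gaze indicate the target and let a manual event such as a click or pinch confirm the action, so that looking alone never triggers anything.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    x: float
    y: float
    radius: float

def gaze_hit(gaze_x: float, gaze_y: float, targets: list[Target]) -> Target | None:
    """Return the target currently under the gaze point, if any."""
    for t in targets:
        if (gaze_x - t.x) ** 2 + (gaze_y - t.y) ** 2 <= t.radius ** 2:
            return t
    return None

def on_manual_trigger(gaze_x: float, gaze_y: float, targets: list[Target]) -> None:
    """Gaze selects, the hand confirms: the action is only issued on an explicit
    manual event (e.g. click or pinch), which avoids Midas Touch."""
    target = gaze_hit(gaze_x, gaze_y, targets)
    if target is not None:
        print(f"activate {target.name}")
```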

Gaze and Synchronous Gestures

The pursuit-based gaze input paradigm we pioneered relies on motion correlation to associate a user’s eye movement with the object in the visual field that displays the corresponding motion. The same principle can be employed for gestural interaction, allowing users to dynamically “synchronise” with interactive objects to gain control. In GEMINI, we aim to investigate how synchronous gestures are guided by the eyes, to develop fast and robust techniques for spontaneous input with any body movement.
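One possible reading of eye-guided synchronous gestures, sketched below purely as an assumption rather than a description of GEMINI’s techniques, is a two-stage match: gaze narrows the candidates to the targets currently attended, and the body movement (here a wrist trajectory) is then correlated against each candidate’s motion.

```python
import numpy as np

def corr(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation of two 1D position traces (0 if either is static)."""
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def synchronous_gesture_target(attended, target_trajectories, wrist_xy, threshold=0.75):
    """Hypothetical two-stage matching: gaze restricts the candidates to the
    attended targets, then the wrist trajectory is correlated against each
    candidate's motion; the best match above the threshold wins.

    attended: set of target names under visual attention.
    target_trajectories: dict mapping name -> (N, 2) trajectory.
    wrist_xy: (N, 2) wrist trajectory over the same time window."""
    best_name, best_score = None, threshold
    for name, traj in target_trajectories.items():
        if name not in attended:
            continue
        score = min(corr(wrist_xy[:, 0], traj[:, 0]),
                    corr(wrist_xy[:, 1], traj[:, 1]))
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```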

Eye Movement and Gaze in 3D

Gaze has been widely studied for interaction with 2D interfaces, where objects appear at a fixed distance on displays designed for comfortable viewing without much head movement. 3D environments, whether physical, virtual or mixed reality, involve a wider range of gaze behaviours and design challenges. We are particularly interested in involuntary eye movements that stabilise and adapt our vision when we move and interact in 3D, such as vergence and the vestibulo-ocular reflex, and how they can be leveraged to estimate depth, infer attention and support interaction.
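As one concrete example of estimating depth from such eye movements, the vergence angle between the two eyes’ gaze directions relates to fixation depth through simple geometry: for symmetric fixation, depth ≈ (IPD / 2) / tan(vergence / 2). The sketch below is a minimal illustration, assuming a binocular eye tracker that reports per-eye gaze direction vectors; the default interpupillary distance is illustrative.

```python
import numpy as np

def depth_from_vergence(left_dir: np.ndarray, right_dir: np.ndarray,
                        ipd_m: float = 0.063) -> float:
    """Estimate fixation depth in metres from the vergence angle between the
    two eyes' gaze direction vectors, assuming symmetric fixation straight
    ahead. ipd_m is the interpupillary distance (the default is illustrative)."""
    cos_a = np.dot(left_dir, right_dir) / (
        np.linalg.norm(left_dir) * np.linalg.norm(right_dir))
    vergence = np.arccos(np.clip(cos_a, -1.0, 1.0))
    if vergence < 1e-6:                     # near-parallel rays: effectively infinite depth
        return float("inf")
    return (ipd_m / 2.0) / np.tan(vergence / 2.0)
```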