Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition

Namrata Srivastava, Joshua Newn and Eduardo Velloso

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)

Human activity recognition (HAR) is an important research area due to its potential for building context-aware interactive systems. Though movement-based activity recognition is an established area of research, recognising sedentary activities remains an open research question. Previous work has explored eye-based activity recognition as a potential approach to this challenge, focusing either on statistical measures derived from eye movement properties (low-level gaze features) or on measures that require some knowledge of the Areas-of-Interest (AOIs) of the stimulus (high-level gaze features). In this paper, we extend this body of work by introducing mid-level gaze features: features that add a level of abstraction over low-level features using some knowledge of the activity, but not of the stimulus. We evaluated our approach on a dataset collected from 24 participants performing eight desktop computing activities. We trained a classifier on 26 low-level features drawn from the existing literature, extended with 24 novel candidate mid-level gaze features. Our results show an overall classification performance of 0.72 (F1-score), with up to a 4% increase in accuracy when our mid-level gaze features are added. Finally, we discuss the implications of combining low- and mid-level gaze features, as well as future directions for eye-based activity recognition.
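The pipeline the abstract describes can be made concrete with a short sketch. The Python example below is not the authors' code: the feature definitions, thresholds, window sizes, synthetic data, and the random-forest classifier are all illustrative assumptions. It only shows the general recipe: compute statistical low-level features per gaze window, append mid-level abstractions that encode activity knowledge without stimulus knowledge, and train a classifier on the concatenation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def low_level_features(fix_durations, sac_amplitudes):
    # Statistical measures over raw eye-movement events (low-level).
    return np.array([
        fix_durations.mean(),        # mean fixation duration (ms)
        fix_durations.std(),         # fixation duration variability
        sac_amplitudes.mean(),       # mean saccade amplitude (deg)
        sac_amplitudes.max(),        # largest saccade in the window
        float(len(fix_durations)),   # fixation count in the window
    ])

def mid_level_features(fix_durations, sac_amplitudes):
    # Abstractions over low-level events that encode knowledge of the
    # activity but not of the stimulus. Both cues and their thresholds
    # are hypothetical, not the paper's 24 candidate features.
    short_saccade_ratio = float((sac_amplitudes < 2.0).mean())  # reading-like
    long_fixation_ratio = float((fix_durations > 400).mean())   # inspection-like
    return np.array([short_saccade_ratio, long_fixation_ratio])

# Synthetic stand-in for the 24-participant dataset: eight activity
# classes, each with gamma-distributed fixation durations and saccade
# amplitudes whose parameters drift slightly per class.
rng = np.random.default_rng(0)
X, y = [], []
for label in range(8):
    for _ in range(30):
        fix = rng.gamma(2.0 + 0.2 * label, 100.0, size=50)  # durations, ms
        sac = rng.gamma(2.0, 1.0 + 0.1 * label, size=49)    # amplitudes, deg
        X.append(np.concatenate([low_level_features(fix, sac),
                                 mid_level_features(fix, sac)]))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, np.array(X), np.array(y), cv=5, scoring="f1_macro")
print(f"5-fold macro F1: {scores.mean():.2f}")

On the real dataset, the paper reports an overall F1-score of 0.72 and a gain of up to 4% in accuracy from the mid-level features; the synthetic sketch above demonstrates only the feature-combination pattern, not those results.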

Namrata Srivastava, Joshua Newn, and Eduardo Velloso. 2018. Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 4, Article 189 (December 2018), 27 pages. https://doi.org/10.1145/3287067


BibTeX

@article{10.1145/3287067,
author = {Srivastava, Namrata and Newn, Joshua and Velloso, Eduardo},
title = {Combining Low and Mid-Level Gaze Features for Desktop Activity Recognition},
year = {2018},
issue_date = {December 2018},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {2},
number = {4},
url = {https://doi.org/10.1145/3287067},
doi = {10.1145/3287067},
journal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
month = dec,
articleno = {189},
numpages = {27},
keywords = {gaze features, activity recognition, eye tracking}
}