RGBDGaze: Gaze Tracking on Smartphones with RGB and Depth Data
Riku Arakawa, Mayank Goel, Chris Harrison, Karan Ahuja
Abstract
Tracking a user’s gaze on smartphones offers the potential for accessible and powerful multimodal interactions. However, phones are used in a myriad of contexts, and state-of-the-art gaze models that rely only on the front-facing RGB camera are too coarse and do not adapt adequately to changes in context. While prior research has shown the efficacy of depth maps for gaze tracking, it has been limited to desktop-grade depth cameras, which are more capable than those found in smartphones, where sensors must be thin and low-power. In this paper, we present a gaze tracking system that uses today’s smartphone depth camera technology to adapt to changes in distance and orientation relative to the user’s face. Unlike prior efforts that used depth sensors, we do not constrain users to maintain a fixed head position, and our approach works across different use contexts in unconstrained mobile settings. The results show that our multimodal ML model achieves a mean gaze error of 1.89 cm, a 16.3% improvement over using RGB data alone (2.26 cm error). Our system and dataset offer the first benchmark of gaze tracking on smartphones using RGB+Depth data under different use contexts.
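
The paper page does not include an implementation, but as a rough illustration of the idea described above, the sketch below shows how an RGB+Depth gaze regressor and the mean gaze-error metric (Euclidean distance in cm on the screen plane) might be wired up in PyTorch. All names, layer sizes, and the fusion scheme (RGBDGazeNet, conv_branch, mean_gaze_error_cm) are assumptions for exposition, not the authors’ actual architecture.

# Illustrative sketch only: a two-branch RGB + depth gaze regressor and the
# mean-Euclidean-error metric (in cm). Architecture, layer sizes, and names
# are assumptions for exposition, not the RGBDGaze model from the paper.
import torch
import torch.nn as nn


def conv_branch(in_channels: int) -> nn.Sequential:
    """Small convolutional encoder, one per input modality."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2),
        nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1),  # -> (B, 64, 1, 1)
        nn.Flatten(),             # -> (B, 64)
    )


class RGBDGazeNet(nn.Module):
    """Fuses an RGB face crop with an aligned depth map and regresses a
    2-D gaze point (x, y) on the screen plane, expressed in centimeters."""

    def __init__(self):
        super().__init__()
        self.rgb_branch = conv_branch(in_channels=3)    # RGB face crop
        self.depth_branch = conv_branch(in_channels=1)  # depth map
        self.head = nn.Sequential(
            nn.Linear(64 + 64, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 2),  # (x, y) gaze location in cm
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.head(fused)


def mean_gaze_error_cm(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Mean Euclidean distance between predicted and true gaze points (cm)."""
    return torch.linalg.norm(pred - target, dim=1).mean()


if __name__ == "__main__":
    model = RGBDGazeNet()
    rgb = torch.randn(4, 3, 128, 128)    # batch of RGB face crops
    depth = torch.randn(4, 1, 128, 128)  # matching depth maps
    target = torch.randn(4, 2)           # ground-truth gaze points (cm)
    pred = model(rgb, depth)
    print("mean gaze error (cm):", mean_gaze_error_cm(pred, target).item())

Under this reading, the reported numbers correspond to mean_gaze_error_cm evaluated over the test set: 1.89 cm for the RGB+Depth model versus 2.26 cm for an RGB-only baseline.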
Citation
Arakawa, R., Goel, M., Harrison, C., & Ahuja, K. (2022, November). RGBDGaze: Gaze tracking on smartphones with RGB and depth data. In Proceedings of the 2022 International Conference on Multimodal Interaction (pp. 329-336).
BibTeX
@inproceedings{arakawa2022rgbdgaze,
  title={{RGBDGaze}: Gaze tracking on smartphones with {RGB} and depth data},
  author={Arakawa, Riku and Goel, Mayank and Harrison, Chris and Ahuja, Karan},
  booktitle={Proceedings of the 2022 International Conference on Multimodal Interaction},
  pages={329--336},
  year={2022}
}