A Calibration Interface for 3D Gaze Depth Disambiguation in Virtual Environments

Cameron Boyd, Mia Thompson, Madeline Smith, Gokila Dorai, Jason Orlosky

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In Augmented and Virtual Reality, accurate eye tracking is a requirement for many applications. Though state-of-the-art algorithms have enabled sub-degree accuracy for line-of-sight tracking, one remaining problem is that depth tracking, i.e., the calculation of the gaze intersection at various depths, is still inaccurate. In this paper, we propose a 3D calibration method that accounts for gaze depth in addition to line of sight. By taking advantage of 3D calibration points and modeling the relationship between gaze inaccuracy and depth, we show that we can improve depth calculations and better determine the 3D position of gaze intersections in virtual environments.
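The abstract does not spell out the calibration procedure, but the depth computation it builds on, triangulating the gaze point from the two eye rays (vergence), can be sketched as follows. This is a minimal illustration, assuming the tracker reports a per-eye origin and direction; the function and variable names are ours, not the authors':

```python
import numpy as np

def gaze_point(p_l, d_l, p_r, d_r):
    """Estimate the 3D gaze intersection as the midpoint of the shortest
    segment between the left and right eye rays. The rays rarely intersect
    exactly because of tracker noise, which is one source of the depth
    inaccuracy the paper addresses.

    p_l, p_r: eye (ray origin) positions; d_l, d_r: gaze directions.
    Returns None when the rays are near-parallel and depth is undefined.
    """
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w = p_l - p_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:              # rays near-parallel: no stable depth
        return None
    s = (b * e - c * d) / denom        # parameter along the left ray
    t = (a * e - b * d) / denom        # parameter along the right ray
    return 0.5 * ((p_l + s * d_l) + (p_r + t * d_r))
```

In the paper's setting, a raw estimate like this would then be corrected by a model fitted from the 3D calibration points, e.g. a per-depth error term learned by showing targets at known distances; that correction model is the paper's contribution and is not reproduced here.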

Original language: English (US)
Title of host publication: Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 695-696
Number of pages: 2
ISBN (Electronic): 9798350374490
DOIs
State: Published - 2024
Event: 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024 - Orlando, United States
Duration: Mar 16 2024 – Mar 21 2024

Publication series

Name: Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024

Conference

Conference: 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024
Country/Territory: United States
City: Orlando
Period: 3/16/24 – 3/21/24

Keywords

  • Calibration
  • Eye Tracking
  • Gaze Depth
  • Virtual Reality

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Media Technology
  • Modeling and Simulation
