Abstract
Developing more natural and intelligent interaction methods for head-mounted displays (HMDs) has been an important goal in augmented reality for many years. Recently, small-form-factor eye tracking interfaces and wearable displays have become small enough to be used simultaneously and for extended periods of time. In this paper, we describe the combination of monocular HMDs and an eye tracking interface and show how they can be used to automatically reduce interaction requirements for displays with both single and multiple focal planes. We then present the results of preliminary and primary experiments that test the accuracy of eye tracking for several displays, including Google Glass and Brother's AiRScouter. Results show that our focal plane classification algorithm works with over 98% accuracy for classifying the correct distance of virtual objects in our multi-focal-plane display prototype and with over 90% accuracy for classifying physical and virtual objects in commercial monocular displays. Additionally, we describe a methodology for integrating our system into augmented reality applications and attentive interfaces.
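The abstract does not detail how focal plane classification is performed; as a rough illustration only, the sketch below shows one common approach, estimating fixation depth from binocular vergence and snapping it to the nearest focal plane. The plane distances, interpupillary distance, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical focal plane distances in meters (illustrative only; the
# paper's prototype distances are not specified in this abstract).
FOCAL_PLANES_M = [0.5, 1.0, 3.0]

def gaze_depth_from_vergence(ipd_m: float, vergence_rad: float) -> float:
    """Estimate fixation depth from the total vergence angle of the two eyes.

    Assumes symmetric fixation: depth = (IPD / 2) / tan(vergence / 2).
    """
    return (ipd_m / 2.0) / np.tan(vergence_rad / 2.0)

def classify_focal_plane(depth_m: float) -> int:
    """Return the index of the focal plane closest to the estimated depth."""
    return int(np.argmin([abs(depth_m - d) for d in FOCAL_PLANES_M]))

# Example: a total vergence of ~3.6 degrees with a 63 mm IPD corresponds
# to a fixation depth of roughly 1 m, so the 1.0 m plane is selected.
depth = gaze_depth_from_vergence(0.063, np.radians(3.6))
print(classify_focal_plane(depth))  # -> 1 (the 1.0 m plane)
```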
| Original language | English (US) |
|---|---|
| Pages (from-to) | 301-310 |
| Number of pages | 10 |
| Journal | KI - Künstliche Intelligenz |
| Volume | 30 |
| Issue number | 3-4 |
| DOIs | |
| State | Published - Oct 1 2016 |
| Externally published | Yes |
Keywords
- Attentive interface
- Eye tracking
- Head mounted display
- Mixed reality
- Safety
ASJC Scopus subject areas
- Artificial Intelligence