TY - GEN
T1 - Using eye tracked virtual reality to classify understanding of vocabulary in recall tasks
AU - Orlosky, Jason
AU - Huynh, Brandon
AU - Höllerer, Tobias
N1 - Funding Information:
This research was funded in part by the United States Office of Naval Research, grant #N62909-18-1-2036. We would also like to thank all of the experiment participants for their time.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/12
Y1 - 2019/12
N2 - In recent years, augmented and virtual reality (AR/VR) have started to gain a foothold in markets such as training and education. Although AR and VR have tremendous potential, current interfaces and applications are still limited in their ability to recognize context, user understanding, and intention, which can limit the options for customized individual user support and the ease of automation. This paper addresses the problem of automatically recognizing whether or not a user has an understanding of a certain term, which is directly applicable to AR/VR interfaces for language and concept learning. To do so, we first designed an interactive word recall task in VR that required non-native English speakers to assess their knowledge of English words, many of which were difficult or uncommon. Using an eye tracker integrated into the VR display, we collected a variety of eye movement metrics that might correspond to the user's knowledge or memory of a particular word. Through experimentation, we show that both eye movement and pupil radius have a high correlation to user memory, and that several other metrics can also be used to help classify the state of word understanding. This allowed us to build a support vector machine (SVM) that can predict a user's knowledge with an accuracy of 62% in the general case and 75% for easy versus medium words, which was tested using cross-fold validation. We discuss these results in the context of in-situ learning applications.
AB - In recent years, augmented and virtual reality (AR/VR) have started to gain a foothold in markets such as training and education. Although AR and VR have tremendous potential, current interfaces and applications are still limited in their ability to recognize context, user understanding, and intention, which can limit the options for customized individual user support and the ease of automation. This paper addresses the problem of automatically recognizing whether or not a user has an understanding of a certain term, which is directly applicable to AR/VR interfaces for language and concept learning. To do so, we first designed an interactive word recall task in VR that required non-native English speakers to assess their knowledge of English words, many of which were difficult or uncommon. Using an eye tracker integrated into the VR display, we collected a variety of eye movement metrics that might correspond to the user's knowledge or memory of a particular word. Through experimentation, we show that both eye movement and pupil radius have a high correlation to user memory, and that several other metrics can also be used to help classify the state of word understanding. This allowed us to build a support vector machine (SVM) that can predict a user's knowledge with an accuracy of 62% in the general case and 75% for easy versus medium words, which was tested using cross-fold validation. We discuss these results in the context of in-situ learning applications.
KW - Classification
KW - Cognition
KW - Eye Tracking
KW - Memory
KW - Pupillometry
KW - Virtual Reality
UR - http://www.scopus.com/inward/record.url?scp=85077990400&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077990400&partnerID=8YFLogxK
U2 - 10.1109/AIVR46125.2019.00019
DO - 10.1109/AIVR46125.2019.00019
M3 - Conference contribution
AN - SCOPUS:85077990400
T3 - Proceedings - 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2019
SP - 66
EP - 73
BT - Proceedings - 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2nd IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2019
Y2 - 9 December 2019 through 11 December 2019
ER -