TY - GEN
T1 - In-situ labeling for augmented reality language learning
AU - Huynh, Brandon
AU - Orlosky, Jason
AU - Hollerer, Tobias
N1 - Funding Information:
This work was funded in part by the United States Department of the Navy, Office of Naval Research, Grants #N62909-18-1-2036 and #N00014-16-1-3002. Many thanks to Takemura Lab at Osaka University and the Four Eyes Lab at the University of California, Santa Barbara for supporting this collaboration.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/3
Y1 - 2019/3
N2 - Augmented Reality is a promising interaction paradigm for learning applications. It has the potential to improve learning outcomes by merging educational content with spatial cues and semantically relevant objects within a learner's everyday environment. The impact of such an interface could be comparable to the method of loci, a well-known memory enhancement technique used by memory champions and polyglots. However, using Augmented Reality in this manner is still impractical for a number of reasons. Scalable object recognition and consistent labeling of objects is a significant challenge, and interaction with arbitrary (unmodeled) physical objects in AR scenes has consequently not been well explored. To help address these challenges, we present a framework for in-situ object labeling and selection in Augmented Reality, with a particular focus on language learning applications. Our framework uses a generalized object recognition model to identify objects in the world in real time, integrates eye tracking to facilitate selection and interaction within the interface, and incorporates a personalized learning model that dynamically adapts to a student's growth. We show our current progress in the development of this system, including preliminary tests and benchmarks. We explore challenges with using such a system in practice, and discuss our vision for the future of AR language learning applications.
AB - Augmented Reality is a promising interaction paradigm for learning applications. It has the potential to improve learning outcomes by merging educational content with spatial cues and semantically relevant objects within a learner's everyday environment. The impact of such an interface could be comparable to the method of loci, a well-known memory enhancement technique used by memory champions and polyglots. However, using Augmented Reality in this manner is still impractical for a number of reasons. Scalable object recognition and consistent labeling of objects is a significant challenge, and interaction with arbitrary (unmodeled) physical objects in AR scenes has consequently not been well explored. To help address these challenges, we present a framework for in-situ object labeling and selection in Augmented Reality, with a particular focus on language learning applications. Our framework uses a generalized object recognition model to identify objects in the world in real time, integrates eye tracking to facilitate selection and interaction within the interface, and incorporates a personalized learning model that dynamically adapts to a student's growth. We show our current progress in the development of this system, including preliminary tests and benchmarks. We explore challenges with using such a system in practice, and discuss our vision for the future of AR language learning applications.
KW - Human-centered computing
KW - Mixed and augmented reality
KW - Semi-supervised learning
KW - Theory and algorithms for application domains
UR - http://www.scopus.com/inward/record.url?scp=85071876610&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071876610&partnerID=8YFLogxK
U2 - 10.1109/VR.2019.8798358
DO - 10.1109/VR.2019.8798358
M3 - Conference contribution
AN - SCOPUS:85071876610
T3 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings
SP - 1606
EP - 1611
BT - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019
Y2 - 23 March 2019 through 27 March 2019
ER -