TY - GEN
T1 - Evaluation of pointing interfaces with an AR agent for multi-section information guidance
AU - Techasarntikul, Nattaon
AU - Mashita, Tomohiro
AU - Ratsamee, Photchara
AU - Uranishi, Yuki
AU - Takemura, Haruo
AU - Orlosky, Jason
AU - Kiyokawa, Kiyoshi
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/3
Y1 - 2019/3
N2 - In educational settings such as art galleries or museums, Augmented Reality (AR) has the potential to provide detailed information about exhibits. However, dealing with items that contain information in multiple sections or areas is still a significant challenge. For example, a large painting may contain many minute details, requiring a system that can explain its individual features rather than just provide a generic description. To address this challenge, we introduce an AR guidance system that uses an embodied agent to point out and explain each piece and part of an exhibit in detail. We also designed and tested three different pointing interfaces for the embodied agent: gesture only, gesture with a dot laser, and gesture with a line laser. To evaluate these interfaces, we conducted a user experiment simulating painting guidance to test interest and exhibit memory. During the experiment, the agent pointed to various areas of interest in the painting and provided detailed descriptions to participants. The results show that search times for target positions were fastest with the line laser. However, no particular interface outperformed the others in memory recall of exhibit content.
AB - In educational settings such as art galleries or museums, Augmented Reality (AR) has the potential to provide detailed information about exhibits. However, dealing with items that contain information in multiple sections or areas is still a significant challenge. For example, a large painting may contain many minute details, requiring a system that can explain its individual features rather than just provide a generic description. To address this challenge, we introduce an AR guidance system that uses an embodied agent to point out and explain each piece and part of an exhibit in detail. We also designed and tested three different pointing interfaces for the embodied agent: gesture only, gesture with a dot laser, and gesture with a line laser. To evaluate these interfaces, we conducted a user experiment simulating painting guidance to test interest and exhibit memory. During the experiment, the agent pointed to various areas of interest in the painting and provided detailed descriptions to participants. The results show that search times for target positions were fastest with the line laser. However, no particular interface outperformed the others in memory recall of exhibit content.
KW - Human-centered computing
KW - Computing methodologies - Mixed / augmented reality
KW - Information systems
KW - Retrieval effectiveness
KW - User interface design
UR - http://www.scopus.com/inward/record.url?scp=85071888355&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071888355&partnerID=8YFLogxK
U2 - 10.1109/VR.2019.8798061
DO - 10.1109/VR.2019.8798061
M3 - Conference contribution
AN - SCOPUS:85071888355
T3 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings
SP - 1185
EP - 1186
BT - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019
Y2 - 23 March 2019 through 27 March 2019
ER -