Project Details
Description
to reach a point where we will have the ability to continuously display virtual information in a variety of real-world situations. However, augmented reality (AR) interfaces are currently limited in their ability to interact with the wearer and environment to provide specific, safe, and useful information when needed. Moreover, many questions remain about how virtual content needs to be managed and how to make that content more relevant, especially in dynamic industrial or emergency situations. By overcoming these issues, visual perception and cognition can potentially be enhanced past innate human ability. Specific goals of this research include 1) improving the merge of augmentative information and visual spectra into the natural human field of view, 2) understanding both the environment and the user's mental and visual states to more effectively augment vision and cognition, and 3) automatically managing the retrieval and placement of content to improve enhancement and reduce distraction. To achieve these goals, we are exploring unique combinations of eye tracking and artificial intelligence (AI) to help monitor user attention and cognitive state. We hypothesize that by using these resulting states in conjunction with environmental analysis, we can better automate the retrieval and merge of virtual content into a user's view.
Methodology
The problems associated with traditional AR are well known. These include localization and mapping, image reproduction, and latency, to name a few. Solving these problems means that we can display virtual objects or information that is perceptually indistinguishable from real, physical items. AR has tremendous potential not only to add content, but to enhance vision, memory, and even cognition.
This is an area that is not as well explored, but is equally important in its potential to benefit humanity. Extraction of user state and analysis of scene context are also somewhat well-studied areas in themselves, but we still lack concrete ways to use these states to modulate or present virtual content in a logical way. For example, many systems can determine that a user is confused or engaged in visual search, but very few researchers have focused on how to overlay instructions or augmentations in response to those mental states, let alone the environment. To solve such problems, we will evaluate logical combinations of cognitive state recognition via eye tracking, scene analysis, and AI to facilitate more effective enhancement of human vision through AR. Several initial prototypes are shown in Figure 1. These designs also include neural networks and Markov chains designed to access the most relevant and probable instructions, reminders, or enhancements.
Figure 1. Parts of our vision augmentation framework, including an eye-tracked telescopic display (left), an image taken from the internal eye tracker using custom pupil detection (mid left), cognitive state analysis using saccadic quantization (mid right), and initial steps toward the automatic merge of relevant thermal information into a user's natural field of view (right).
To outline one potential example, consider a user who sometimes forgets to observe a particular object, take medication, or follow procedural instructions. Our goal is to recall the right augmentation for that object at the right time. To do so, we can first use eye tracking, sensor data, and state classification to build a model of a user's typical mental (relaxed vs. stressed) and visual (search vs. concentration) states. This model will then be linked to a temporal database of augmentations that are associated with cognitive states and events over time.
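To make the eye-tracking side of this concrete, the visual-state distinction (search vs. concentration) could be sketched with a standard velocity-threshold (I-VT) gaze event classifier followed by a simple saccade-ratio heuristic. This is a minimal illustrative sketch, not the project's actual pipeline: the threshold, sampling rate, and the 0.3 saccade-ratio cutoff are assumed values chosen for the example.

```python
import math

# Common I-VT saccade velocity threshold in deg/s (assumed, not project-specific).
SACCADE_VELOCITY_DEG_S = 30.0

def classify_events(gaze, hz):
    """Label each inter-sample interval as 'saccade' or 'fixation'.

    gaze: list of (x_deg, y_deg) gaze positions; hz: sampling rate in Hz.
    """
    labels = []
    for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
        velocity = math.hypot(x1 - x0, y1 - y0) * hz  # deg/s between samples
        labels.append("saccade" if velocity > SACCADE_VELOCITY_DEG_S else "fixation")
    return labels

def visual_state(labels):
    """Crude heuristic: frequent saccades suggest visual search."""
    if not labels:
        return "concentration"
    saccade_ratio = labels.count("saccade") / len(labels)
    return "search" if saccade_ratio > 0.3 else "concentration"
```

A real system would add blink/noise filtering and merge adjacent fixations, but the same two-stage structure (event classification, then state inference) applies.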
Much like our brains access the right information at the right time via …
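The retrieval step described above (a Markov chain over cognitive states and events that surfaces the most probable augmentation) might be sketched as a first-order transition model built from observed sequences. The class, state keys, and augmentation names below are hypothetical illustrations, not the project's implementation.

```python
from collections import defaultdict

class AugmentationChain:
    """First-order Markov model: counts which augmentation/event follows each
    (cognitive state, context) key, then retrieves the most probable successor."""

    def __init__(self):
        # counts[current][successor] = number of times `successor` followed `current`
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, current, successor):
        """Record one observed transition from `current` to `successor`."""
        self.counts[current][successor] += 1

    def most_probable(self, current):
        """Return the most frequently observed successor of `current`, or None."""
        successors = self.counts.get(current)
        if not successors:
            return None
        return max(successors, key=successors.get)

# Hypothetical usage: after repeated observations, entering the kitchen while
# relaxed would recall the medication reminder.
chain = AugmentationChain()
chain.observe(("relaxed", "enter_kitchen"), "show_medication_reminder")
```

In practice the keys would come from the state classifier and scene analysis, and the counts would be time-weighted against the temporal database rather than raw frequencies.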
| Status | Active |
|---|---|
| Effective start/end date | 12/6/17 → … |
Funding
- U.S. Navy: $372,980.00