TY - GEN
T1 - Management and manipulation of text in dynamic mixed reality workspaces
AU - Orlosky, Jason
AU - Kiyokawa, Kiyoshi
AU - Takemura, Haruo
PY - 2013
Y1 - 2013
N2 - Viewing and interacting with text-based content safely and easily while mobile has been an issue with see-through displays for many years. For example, in order to use optical see-through head-mounted displays (HMDs) effectively in constantly changing dynamic environments, variables such as lighting conditions, human or vehicular obstructions in a user's path, and scene variation must be dealt with. My PhD research focuses on answering the following questions: 1) What are appropriate methods to intelligently move digital content such as e-mail, SMS messages, and news articles throughout the real world? 2) Once a user stops moving, in what way should the dynamics of the current workspace change when migrated to a new static environment? 3) Lastly, how can users manipulate mobile content using the fewest number of interactions possible? My strategy for developing solutions to these problems primarily involves automatic or semi-automatic movement of digital content throughout the real world using camera tracking. I have already developed an intelligent text management system that actively manages the movement of text in a user's field of view while mobile [11]. I am optimizing and expanding on this type of management system, developing appropriate interaction methodology, and conducting experiments to verify effectiveness, usability, and safety when used with an HMD in various environments.
AB - Viewing and interacting with text-based content safely and easily while mobile has been an issue with see-through displays for many years. For example, in order to use optical see-through head-mounted displays (HMDs) effectively in constantly changing dynamic environments, variables such as lighting conditions, human or vehicular obstructions in a user's path, and scene variation must be dealt with. My PhD research focuses on answering the following questions: 1) What are appropriate methods to intelligently move digital content such as e-mail, SMS messages, and news articles throughout the real world? 2) Once a user stops moving, in what way should the dynamics of the current workspace change when migrated to a new static environment? 3) Lastly, how can users manipulate mobile content using the fewest number of interactions possible? My strategy for developing solutions to these problems primarily involves automatic or semi-automatic movement of digital content throughout the real world using camera tracking. I have already developed an intelligent text management system that actively manages the movement of text in a user's field of view while mobile [11]. I am optimizing and expanding on this type of management system, developing appropriate interaction methodology, and conducting experiments to verify effectiveness, usability, and safety when used with an HMD in various environments.
KW - Content Stabilization
KW - Heads Up Display
KW - Scene Analysis
KW - Text Placement
KW - View Management
KW - Wearable Display
UR - http://www.scopus.com/inward/record.url?scp=84893327657&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84893327657&partnerID=8YFLogxK
U2 - 10.1109/ISMAR.2013.6671819
DO - 10.1109/ISMAR.2013.6671819
M3 - Conference contribution
AN - SCOPUS:84893327657
SN - 9781479928699
T3 - 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013
BT - 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013
T2 - 12th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR 2013
Y2 - 1 October 2013 through 4 October 2013
ER -