Abstract
A shared interactive display (e.g., a tabletop) provides a large space for collaborative interaction. However, a public display lacks a private space for accessing sensitive information. A mobile device, on the other hand, offers a private display and a variety of modalities for personal applications, but it is limited by a small screen. We have developed a framework that supports fluid and seamless interaction between a tabletop and multiple mobile devices. The framework continuously tracks each user's actions (e.g., hand movements or gestures) on the tabletop and automatically generates a unique personal interface on the associated mobile device. This style of inter-device interaction integrates a collaborative workspace (i.e., a tabletop) and a private area (i.e., a mobile device) with multimodal feedback. To support this interaction style, an event-driven architecture is used to implement the framework on the Microsoft PixelSense tabletop. The framework hides the details of user tracking and inter-device communication, so interface designers can focus on developing domain-specific interactions by mapping a user's actions on the tabletop to a personal interface on his/her mobile device. The results of two user studies support the usability of the proposed interaction style.
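The core mechanism the abstract describes is an event-driven mapping from tracked tabletop actions to per-user interfaces on paired mobile devices. The sketch below illustrates that idea only in broad strokes; it is not the paper's implementation (which targets the PixelSense SDK), and all names (`TabletopEvent`, `MobileDevice`, `InteractionFramework`) are hypothetical placeholders.

```python
# Minimal sketch of an event-driven tabletop-to-mobile mapping (assumed names).
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class TabletopEvent:
    user_id: str   # which tracked user performed the action
    action: str    # e.g. "select", "drag", "rotate"
    payload: dict  # action-specific data (object id, position, ...)


class MobileDevice:
    """Stand-in for the user's paired phone or tablet."""

    def __init__(self, name: str):
        self.name = name

    def show_personal_interface(self, view: dict) -> None:
        # A real framework would push a UI update over the network;
        # here we just print the generated view description.
        print(f"[{self.name}] personal interface -> {view}")


class InteractionFramework:
    """Hides user tracking and inter-device communication behind events."""

    def __init__(self):
        self._devices: Dict[str, MobileDevice] = {}
        self._mappings: Dict[str, Callable[[TabletopEvent], dict]] = {}

    def pair(self, user_id: str, device: MobileDevice) -> None:
        self._devices[user_id] = device

    def on_action(self, action: str,
                  to_view: Callable[[TabletopEvent], dict]) -> None:
        # Designers register domain-specific mappings from tabletop
        # actions to personal-interface views.
        self._mappings[action] = to_view

    def dispatch(self, event: TabletopEvent) -> None:
        device = self._devices.get(event.user_id)
        mapping = self._mappings.get(event.action)
        if device and mapping:
            device.show_personal_interface(mapping(event))


if __name__ == "__main__":
    fw = InteractionFramework()
    fw.pair("alice", MobileDevice("alice-phone"))
    fw.on_action("select",
                 lambda e: {"screen": "details", "item": e.payload["item"]})
    # Simulated tracking event: Alice selects an item on the tabletop.
    fw.dispatch(TabletopEvent("alice", "select", {"item": "photo-42"}))
```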
| Original language | English (US) |
|---|---|
| Pages (from-to) | 727-737 |
| Number of pages | 11 |
| Journal | Journal of Visual Languages and Computing |
| Volume | 25 |
| Issue number | 6 |
| DOIs | |
| State | Published - 2014 |
| Externally published | Yes |
Keywords
- Bimanual interaction
- Human-computer interaction
- Multimodal interface
- Tangible interface
ASJC Scopus subject areas
- Language and Linguistics
- Human-Computer Interaction
- Computer Science Applications