Publication Type
Conference Proceeding Article
Version
publishedVersion
Publication Date
5-2017
Abstract
We present Follow-My-Lead, an alternative indoor navigation technique that uses visual information recorded on an actual navigation path as a navigational guide. Its design revealed a trade-off between the fidelity of information provided to users and the effort required to acquire it. Our first experiment showed that scrolling through a continuous image stream of the navigation path is highly informative, but becomes tedious with constant use. Discrete image checkpoints require less effort, but can be confusing. A balance may be struck by adding fast video transitions between image checkpoints, though precise control is required to handle difficult situations. Authoring still image checkpoints is also difficult, which inspired us to invent a new technique using video checkpoints. We conducted a second experiment on authoring and navigation performance and found video checkpoints plus fast video transitions to be better than both image checkpoints plus fast video transitions and traditional written instructions.
Keywords
Indoor Navigation, Mobile Computing, Smartglasses, Video, Wearable Computing
Discipline
Artificial Intelligence and Robotics | Software Engineering | Technology and Innovation
Research Areas
Software and Cyber-Physical Systems
Publication
CHI '17: Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, Denver, May 6-11
First Page
5703
Last Page
5715
ISBN
9781450346566
Identifier
10.1145/3025453.3025976
Publisher
ACM
City or Country
New York
Citation
ROY, Quentin; PERRAULT, Simon T.; ZHAO, Shengdong; DAVIS, Richard; PATTENA VANIYAR, Anuroop; VECHEV, Velko; LEE, Youngki; and MISRA, Archan.
Follow-my-lead: Intuitive indoor path creation and navigation using see-through interactive videos. (2017). CHI '17: Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, Denver, May 6-11. 5703-5715.
Available at: https://ink.library.smu.edu.sg/sis_research/3745
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Additional URL
http://doi.org/10.1145/3025453.3025976
Included in
Artificial Intelligence and Robotics Commons, Software Engineering Commons, Technology and Innovation Commons