2014 | OriginalPaper | Chapter
Narrative Map Augmentation with Automated Landmark Extraction and Path Inference
Authors : Vladimir Kulyukin, Thimma Reddy
Published in: Computers Helping People with Special Needs
Publisher: Springer International Publishing
Various technologies, including GPS, Wi-Fi localization, and infrared beacons, have been proposed to increase travel independence for visually impaired (VI) and blind travelers. Such systems take readings from sensors, localize those readings on a map, and instruct VI travelers where to move next. Unfortunately, sensor readings can be noisy or absent, which decreases the traveler's situational awareness. Localization technologies can, however, be augmented with solutions that put the traveler's cognition to use. One such solution is narrative maps, i.e., verbal descriptions of environments produced by O&M professionals for blind travelers. Producing narrative maps is costly because O&M professionals must travel to designated environments and describe large numbers of routes, and complete narrative coverage may not be feasible given the sheer size of many environments. The quality and coverage of narrative maps can nonetheless be improved by automated landmark extraction and path inference. In this paper, an algorithm is proposed that uses scalable natural language processing (NLP) techniques to extract landmarks and their connectivity from verbal route descriptions. Extracted landmarks can subsequently be annotated with sensor readings, used to find new routes, or used to track the traveler's progress on different routes.
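To make the idea concrete, the sketch below illustrates what extracting landmarks and their connectivity from a verbal route description might look like. This is a hypothetical simplification using regular-expression pattern matching, not the paper's actual NLP algorithm; the spatial-preposition patterns (`at`, `past`, `near`, `reach`) and the example route description are invented for illustration.

```python
import re
from collections import defaultdict

# Hypothetical pattern: a landmark phrase follows a spatial preposition or
# arrival verb and ends at punctuation or a connective word. The paper's
# algorithm uses scalable NLP techniques; this regex only sketches the idea.
LANDMARK_PATTERN = re.compile(
    r"\b(?:at|past|near|reach)\s+the\s+([a-z ]+?)(?=[,.]|\s+(?:and|then|to)\b)",
    re.IGNORECASE,
)

def extract_landmarks(description: str) -> list[str]:
    """Return landmark phrases in the order they appear in the description."""
    return [m.group(1).strip().lower()
            for m in LANDMARK_PATTERN.finditer(description)]

def build_connectivity(descriptions: list[str]) -> dict[str, set[str]]:
    """Link consecutive landmarks within each route description,
    producing a directed landmark-connectivity graph."""
    graph: dict[str, set[str]] = defaultdict(set)
    for d in descriptions:
        landmarks = extract_landmarks(d)
        for a, b in zip(landmarks, landmarks[1:]):
            graph[a].add(b)
    return graph

# Invented example route description:
route = ("Walk past the water fountain, then turn left at the elevator "
         "to reach the cafeteria.")
print(extract_landmarks(route))          # ordered landmark mentions
print(dict(build_connectivity([route]))) # landmark connectivity graph
```

Once such a graph is built, nodes could be annotated with sensor readings, and standard graph search over the connectivity edges could infer new routes or track a traveler's progress, as the abstract suggests.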