S. Sorce, A. Augello, A. Santangelo, G. Pilato, A. Gentile, A. Genco, S. Gaglio

A Multimodal Guide for the Augmented Campus

Human-Computer Interaction

The use of Personal Digital Assistants (PDAs) with ad-hoc built-in information retrieval and auto-localization functionalities can help people navigate an environment in a more natural manner than traditional pre-recorded audio/visual guides. In this work we propose and discuss a user-friendly, multi-modal guide system for pervasive context-aware service provision within augmented environments. The proposed system is adaptable to the mobility needs of users within a given environment; it is usable on different mobile devices, and in particular on PDAs, which are used as advanced adaptive HEI (human-environment interaction) interfaces. An information retrieval service is provided that is easily accessible through spoken language interaction in cooperation with an auto-localization service. The interaction is enabled by speech recognition and synthesis technologies, and by a ChatBot system endowed with common sense reasoning capabilities to properly interpret user speech and provide users with the requested information. This interaction mode turns out to be more natural, and users need only basic skills in the use of PDAs. The auto-localization service relies on an RFID-based framework, which resides partly on the mobile side of the system (the PDAs) and partly on the environment side. In particular, RFID technology allows the system to provide users with context-related information. An implemented case study is presented that illustrates service provision in an augmented environment within university campus settings (termed "Augmented Campus"). Lastly, a discussion of user experiences with trial services within the Augmented Campus is given.
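The following is a minimal sketch, not taken from the paper, of the interaction flow the abstract describes: an RFID read supplies location context, and a chatbot-style component combines that context with the recognized user utterance to produce a reply. All names, tag IDs, and location data here are hypothetical, and speech recognition/synthesis are abstracted away as plain strings.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical mapping of RFID tag IDs to campus locations (environment side).
TAG_TO_LOCATION = {
    "tag-001": "Engineering Building, Room 12",
    "tag-002": "Main Library, Entrance Hall",
}

# Hypothetical location-dependent knowledge base used for context-aware replies.
LOCATION_INFO = {
    "Engineering Building, Room 12": "The HCI lab is two doors down the corridor.",
    "Main Library, Entrance Hall": "The library is open until 8 pm on weekdays.",
}


@dataclass
class GuideSession:
    """Tracks the context (current location) inferred from the last RFID read."""
    location: Optional[str] = None

    def on_rfid_read(self, tag_id: str) -> None:
        # Auto-localization step: resolve the detected tag into a location.
        self.location = TAG_TO_LOCATION.get(tag_id)

    def answer(self, utterance: str) -> str:
        # Greatly simplified stand-in for the ChatBot's interpretation step.
        if self.location is None:
            return "I don't know where you are yet; please move closer to a tagged area."
        if "where" in utterance.lower() or "info" in utterance.lower():
            extra = LOCATION_INFO.get(self.location, "")
            return f"You are at {self.location}. {extra}".strip()
        return "Sorry, I didn't understand. Could you rephrase?"


if __name__ == "__main__":
    session = GuideSession()
    session.on_rfid_read("tag-001")  # tag detected by the PDA (mobile side)
    print(session.answer("Where am I? Any info?"))
```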
