In this chapter, the role of multimodality in intelligent, mobile guides for cultural heritage environments is discussed. Multimodal access to information content enables the creation of systems with a higher degree of accessibility and usability. Multimodal interaction may involve several human interaction modes, such as sight, touch, and voice for navigating content, or gestures for activating controls. We begin by presenting a timeline of cultural heritage system evolution, spanning 2001 to 2008, which highlights design issues such as intelligence and context-awareness in information delivery. Multimodal access to content is then discussed, along with common problems and their corresponding solutions, and an evaluation of several reviewed systems is presented. Lastly, a case-study multimodal framework termed MAGA is described, which combines intelligent conversational agents with speech recognition/synthesis technology in a framework employing RFID-based localization and Wi-Fi-based data exchange.