Multimodal and Multisensory Interaction as Communicative Media of Narrative Form for Experience Sharing
In the realm of digital storytelling, many kinds of digital media have been used to engage users with narrative content. Current narrative forms of storytelling increasingly rely on dynamic information systems and communication platforms. In terms of interaction, audio and visual representations enable a more engaging user experience through novel interactive forms of navigation and user-control mechanisms. Interaction in storytelling has become a determining factor in how deeply a user is engaged in a communication act. Recent interactive photo-sharing applications, such as public displays with motion sensing and multi-touch capabilities, enable users to share more than just photos. The appropriate use of interactive media adds a dynamic dimension to the design of narrative forms in experience-sharing applications. The work presented here considers multimodal and multisensory inputs, including speech, gesture, physical activity, and proximity sensing, as vehicles to improve the user’s interaction with our application, as well as communication and sharing with other users. At this stage of the research only multimodal inputs are considered; the output is rendered visually only. The paper introduces ADEO, a research prototype platform that enables multimodal and multisensory interaction and visually presents a “contextual story” for the user about her activities and the pictures taken in the past seven days, along with additional contextual information. The discussion starts with a user study and its findings about current experience-sharing practices and how users envision potential future developments. The major part of the paper deals with the design of multimodal interaction modes for experience sharing and further explores the role of multimodal inputs in social communication.
The design takes telepresence and multisensory perception as the underlying guidelines, and the implementation addresses the integration of explicit multimodal user inputs with multisensory inputs such as proximity and physical activity information. The paper ends with an outline of the next steps of the research, namely validation of the current design and potential implementation alternatives.
Keywords: Experience Sharing, Design, Multimodal Interaction, Multisensory Interaction, Storytelling, Narrative, ADEO
The International Journal of Visual Design, Volume 6, Issue 2, pp.1-15.
Co-founder and Principal; PhD Candidate, Aalto University, Helsinki, Finland; Palo Alto, CA, USA
Péter Pál Boda worked with Nokia Research Center for more than 16 years, first in Finland and then, since 2007, in California. His last position was in the Hollywood laboratory as Senior Principal Scientist. Péter has contributed to several key innovation areas, including natural language understanding, spoken dialogue systems, bilingual conversational applications, and multisensory and multimodal interaction. As part of the company’s global university relations efforts, he also worked with top universities around the globe on device-centric wireless sensor networks and participatory sensing. His motivation is innovating at the intersection of advanced interaction technologies and design, creating meaning and social value. His scientific research interests are in Human-Computer Interaction, UI and UX solutions for pervasive and ubiquitous computing, and, in general, how to make people’s lives easier by reducing the necessity of interaction with machines, systems, and applications. He has a Master of Science degree from the Technical University of Budapest and a Licentiate of Technology degree from Helsinki University of Technology. He is a Fellow of the California College of the Arts, after participating in the inaugural “Leading by Design” program in 2010. Péter is currently completing his PhD thesis at Aalto University, Helsinki, Finland, while focusing on design consultancy and storytelling-based communication as the co-founder of zoom::moon and as a principal at marzzippan.com.
PhD Candidate, Media Arts and Technology, University of California Santa Barbara, Santa Barbara, CA, USA
Yuan-Yi Fan is a Ph.D. Candidate in Media Arts and Technology at the University of California, Santa Barbara, USA. He holds an MS in Biomedical Engineering from National Yang Ming University and a BS in Mechanical and Electro-Mechanical Engineering from National Sun Yat-Sen University, both in Taiwan. During his doctoral study, he held research intern positions at Nokia Research Center Hollywood, Santa Monica, USA (Summer 2011) and Oblong Industries Inc., Los Angeles, USA (Summer 2012). As a multimedia artist, he is interested in how the form and narrative of new media influence perception. Through creating affective interfaces and interactive art installations, he explores the relations between representation, perception, and awareness. He is currently a researcher and Artist-in-Residence at the Media Neuroscience Lab, University of California, Santa Barbara, USA. His artworks have been exhibited at the ZERO1 Biennial, ISEA, NIME, Mindshare L.A., Collider 4: Spectacle, UCSB’s CREATE Concert, and the PRIMAVERA Festival of Contemporary Arts and Digital Media.