During the last decade, cell phones with multimodal interfaces based on combined new media have become the dominant computer interface worldwide. Multimodal interfaces support mobility and expand the expressive power of human input to computers, shifting the fulcrum of human-computer interaction much closer to the human. This book explains the foundations of human-centered multimodal interaction and interface design, grounded in the cognitive and neurosciences, as well as the major benefits of multimodal interfaces for human cognition and performance. It describes the data-intensive methodologies used to envision, prototype, and evaluate new multimodal interfaces. From a system-development viewpoint, it outlines major approaches to multimodal signal processing, fusion, and architectures, along with techniques for robustly interpreting users' meaning. Multimodal interfaces have been commercialized extensively for field and mobile applications during the last decade, and research is growing rapidly in areas such as multimodal data analytics, affect recognition, accessible interfaces, embedded and robotic interfaces, machine learning, and new hybrid processing approaches. The expansion of multimodal interfaces is part of the long-term evolution toward more expressively powerful input to computers, a trend that will substantially improve support for human cognition and performance.
The content of this handbook would be most appropriate for graduate students, and of primary interest to students studying computer science and information technology, human-computer interfaces, mobile and ubiquitous interfaces, and related ...
Finally, this volume discusses the societal impact of more widespread adoption of these systems, including privacy risks and how to mitigate them.
It begins by discussing why natural language understanding is a valuable component of mobile applications. ... Extensible Multimodal Annotation (EMMA) and the Multimodal Architecture and Interfaces (MMI) specification reduce the amount ...
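To make the role of EMMA concrete, the sketch below builds a minimal EMMA document wrapping one interpretation of a spoken user input. The element and attribute names (`emma:emma`, `emma:interpretation`, `emma:confidence`, `emma:medium`, `emma:mode`) and the namespace follow the W3C EMMA 1.0 Recommendation; the specific input string, confidence value, and helper function name are illustrative assumptions, not part of the spec.

```python
import xml.etree.ElementTree as ET

# Namespace defined by the W3C EMMA 1.0 Recommendation.
EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)

def emma_interpretation(token_text, confidence, mode="voice", medium="acoustic"):
    """Build a minimal EMMA document wrapping one interpretation of user input.

    This is a sketch: a real recognizer would typically emit several
    competing interpretations with different confidence scores.
    """
    root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
    interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation", {
        "id": "int1",                               # hypothetical id
        f"{{{EMMA_NS}}}confidence": str(confidence),
        f"{{{EMMA_NS}}}medium": medium,
        f"{{{EMMA_NS}}}mode": mode,
    })
    interp.text = token_text
    return ET.tostring(root, encoding="unicode")

# Example: annotate one speech-recognition hypothesis.
print(emma_interpretation("flights from boston to denver", 0.75))
```

Because every modality's output is wrapped in the same annotation vocabulary, downstream fusion and dialogue components can consume speech, pen, or gesture results uniformly, which is how such standards reduce integration effort.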
The Internet of Things (IoT) has enormous potential for adding convenience, comfort, safety, and efficiency to ... For this reason, natural language interaction through a standard API will become very important for these types of interactions.
This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.
By these means the book establishes the necessary theoretical foundations to engage productively with today’s increasingly complex combinations of multimodal artefacts and performances of all kinds.