TU Berlin




MuToAct - Multimodal Touchless Interaction

Project Description



In most human-machine systems, keyboards, mice, remote controls, and touch displays are still the main means of interaction. The current trend, however, is to use the space surrounding the user more freely to present information and to interact with it. The number and size of displays is increasing in many areas of life, and it will soon be possible to use every physical surface as an interactive display. As a consequence, it is no longer always possible to interact with these surfaces via touch. Furthermore, users have to interact with displays and projections at varying distances. This raises the question of how touchless input should be designed.

The most commonly used touchless interaction modalities are gesture, speech, and gaze. To date, these modalities have been evaluated individually, but they are seldom compared with each other and rarely evaluated from a user-centred perspective. For the design of interaction, however, it is essential to know the advantages and disadvantages of the different modalities; only then is it possible to choose the best one for a given context.

The aim of the MuToAct project is the evaluation and comparison of various input modalities from a user-centred perspective. Based on these results, design guidelines for touchless interaction will be derived.



Staff

cand. MSc Fabian Hasse (Master's thesis)
cand. Dipl.-Psych. Martin Grund
cand. Dipl.-Psych. Marcelina Sünderhauf
Dipl.-Psych. Monika Elepfandt (project lead)

Former staff

cand. BSc Tristan Kim (intern)
cand. BSc Informatik Stefan Piotrowski


Publications

Elepfandt, M., Hasse, F. & Dzaack, J. (2012). Die Magie der berührungslosen Interaktion – Vom Spiel in die Arbeitswelt. To appear in Proceedings of Useware 2012, VDI, Dec 2012.

Elepfandt, M. & Dzaack, J. (2012). Berührungslose Interaktion mit großen Displays. In Grandt, M. & Schmerwitz, S. (Eds.), Fortschrittliche Anzeigesysteme für die Fahrzeug- und Prozessführung: 54. Fachausschusssitzung Anthropotechnik der Deutschen Gesellschaft für Luft- und Raumfahrt. Koblenz: DGLR, pp. 249-261.

Elepfandt, M. & Grund, M. (2012). Move it there, or not? The design of voice commands for gaze with speech. In Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction (Gaze-In '12). ACM, New York, NY, USA, Article 12, 3 pages.

Elepfandt, M. (2012). Pointing and speech: comparison of various voice commands. In Proceedings of the 7th Nordic Conference on Human-Computer Interaction: Making Sense Through Design (NordiCHI '12). ACM, New York, NY, USA, 807-808.

Elepfandt, M. & Sünderhauf, M. (2012). Präferenz von Gesten bei der Interaktion mit großen Displays. In Reiterer, H. & Deussen, O. (Eds.), Mensch & Computer 2012 – Workshopband: interaktiv informiert – allgegenwärtig und allumfassend!?. München: Oldenbourg Verlag, pp. 451-454.

Elepfandt, M. (2011). Berührungslose Interaktion: Sprache, Gestik oder Blick? Multi- oder unimodal? In Schmid, S., Elepfandt, M., Adenauer, J. & Lichtenstein, A. (Eds.), Reflexionen und Visionen der Mensch-Maschine-Interaktion, 9. Berliner Werkstatt Mensch-Maschine-Systeme. Düsseldorf: VDI, pp. 413-416.

Elepfandt, M. & Sünderhauf, M. (2011). Multimodal, Touchless Interaction in Spatial Augmented Reality Environments. In V. Duffy (Ed.), Digital Human Modeling (Vol. 6777, pp. 263-271). Berlin, Heidelberg: Springer.

