By input modalities we mean all the ways in which we as humans can convey a message. Written text and speech are often the most efficient means, but specific movements and patterns can carry information too: waving at someone, for example, is a recognized gesture of greeting. Output modalities, on the other hand, are all the ways in which an interface can give feedback to the user. Think of visual displays (the on/off light on your TV), sound effects (the alarm of your clock radio), but also vibrations (when you receive a message on your smartphone) or even scents (with aromatherapy).
The idea of multimodal interfaces is not new, by the way. As early as 1979, MIT was working on an interface that combined speech and movement input. Today, the gaming industry is often the first to experiment with new modalities and new ways of human-machine interaction. Because the rules of a game are usually clearly defined, it is relatively easy to build a working prototype. Microsoft's Kinect, for example, allowed the Xbox to recognize gestures and movements for the first time. Thanks to this natural user interface (NUI), the gaming experience immediately became a lot more interesting.
Multimodal interfaces: 1 + 1 = 3

Apart from the fact that multimodal interfaces are often cool and futuristic, they also have a number of practical advantages. First, multitasking becomes a lot easier. Normally you can focus on only one thing at a time per sense or modality. For example, it's impossible to pay attention to the road and set up your car's navigation system at the same time, because you need your vision for both tasks. Your brain can, however, process two streams of information at once when they arrive through different senses. By controlling your navigation system with voice commands, you can keep your eyes on the road and perform both tasks simultaneously.