This wiki was created to collect, structure, and share knowledge about voice and multimodal user interfaces.
Deliver the right information on the right device and in the right modality, depending on context.
With so many interface paradigms in play (desktop-first, mobile-first, voice-first), we believe it is important to look beyond a device's technical specifications. What matters is delivering contextual experiences that are relevant to the user at a given time.
Learn more in the introduction.
- RIDR Lifecycle: An abstracted process for voice and multimodal interactions.
- Request: Gather raw user input.
- Interpretation: Turn raw input into structured meaning.
- Dialog and Logic: Handle domain logic and create structured output.
- Response: Return the response to various output channels.
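The four RIDR stages above can be sketched as a simple pipeline of functions. This is a minimal illustration only: the function names, the toy keyword-based intent matching, and the channel handling are all assumptions for the example, not part of any specific framework.

```python
def request(raw_input: str) -> dict:
    # Request: gather raw user input (here, text standing in for speech audio).
    return {"raw": raw_input}

def interpretation(req: dict) -> dict:
    # Interpretation: turn raw input into structured meaning (intent + slots).
    # Real systems use NLU models; a keyword check stands in here.
    text = req["raw"].lower()
    if "weather" in text:
        return {"intent": "GetWeather", "slots": {}}
    return {"intent": "Unknown", "slots": {}}

def dialog_and_logic(meaning: dict) -> dict:
    # Dialog and Logic: handle domain logic and create structured output.
    if meaning["intent"] == "GetWeather":
        return {"message": "It is sunny today."}
    return {"message": "Sorry, I did not understand that."}

def response(output: dict, channel: str) -> str:
    # Response: render the structured output for a given output channel,
    # e.g. wrapping it in SSML-like markup for a voice channel.
    if channel == "voice":
        return f"<speak>{output['message']}</speak>"
    return output["message"]

def ridr(raw_input: str, channel: str = "text") -> str:
    # Run one full Request -> Interpretation -> Dialog and Logic -> Response pass.
    return response(dialog_and_logic(interpretation(request(raw_input))), channel)
```

Because each stage only consumes the previous stage's structured output, any stage can be swapped (a different NLU engine, a new output channel) without touching the rest of the pipeline.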
- Interaction Stack