Here is my abstract:
Human-computer interaction has traditionally been limited to the mouse and keyboard; however, with the advent of touch screens and motion-capture hardware, the concept of the “natural user interface” (NUI) has gained traction. The natural user interface is driven primarily by gesture rather than by the precise movements and clicks of a mouse. This project will explore interactions with visualizations driven by multimodal input, in the form of motion and voice.
Sounds awesome. I'm looking forward to seeing the kinds of gestures you come up with to interact with the system.
One question: what is the goal for the user? What is the user trying to do?