While I was playing around with moving a shape across the screen using my Kinect Framework, I realized that a gesture alone does not make an interaction. When we swipe our hands left to move an object left, our interaction comprises more than just a mapping between hand position and object position.
At the root of any interaction, we have the gesture. The gesture is an explicit action performed to achieve a specific goal or result. In my interface, a gesture usually revolves around the position, velocity, or acceleration of specific skeletal joints.
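As a rough illustration of the velocity idea, here is a minimal sketch of recognizing a left swipe from hand-joint samples. The names (`JointSample`, `SWIPE_SPEED`, `is_left_swipe`) and the threshold value are my own assumptions, not part of any Kinect SDK:

```python
from dataclasses import dataclass

@dataclass
class JointSample:
    x: float  # horizontal joint position in meters
    t: float  # timestamp in seconds

SWIPE_SPEED = 1.0  # assumed threshold in m/s for a deliberate swipe

def is_left_swipe(samples):
    """Return True if the hand moved left faster than SWIPE_SPEED."""
    if len(samples) < 2:
        return False
    dx = samples[-1].x - samples[0].x
    dt = samples[-1].t - samples[0].t
    if dt <= 0:
        return False
    velocity = dx / dt  # negative velocity means moving left
    return velocity < -SWIPE_SPEED

# A hand travelling 0.6 m left in 0.4 s (-1.5 m/s) counts as a swipe:
print(is_left_swipe([JointSample(0.6, 0.0), JointSample(0.0, 0.4)]))  # True
```

In practice you would buffer the last few skeleton frames and run a check like this each frame, but the core is just thresholding a derivative of joint position.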
However, this gesture information is useless if we do not know what the gesture maps to. This is where the mode of interaction comes into play. The mode of interaction is an explicit mapping between our gestures and the state of the interface. In our simple example, panning is the mode of interaction.
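One way to make that mapping concrete is a table from (mode, gesture) to a state change. This is a hypothetical sketch with invented mode and gesture names, not the framework's actual design:

```python
def pan_left(state):
    state["x"] -= 10       # in panning mode, a swipe moves the object
    return state

def zoom_in(state):
    state["scale"] *= 1.25  # in zooming mode, the same swipe scales it
    return state

# The active mode decides what a gesture means for the interface state.
MODES = {
    "panning": {"swipe_left": pan_left},
    "zooming": {"swipe_left": zoom_in},
}

def apply_gesture(mode, gesture, state):
    handler = MODES.get(mode, {}).get(gesture)
    return handler(state) if handler else state  # unknown gesture: no-op

state = {"x": 100, "scale": 1.0}
state = apply_gesture("panning", "swipe_left", state)
print(state["x"])  # 90: the swipe panned the object left
```

The point of the table is that the same physical gesture produces different interactions depending on the active mode.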
Finally, the last element of an interaction is the context. The context comprises our relationship with the input environment. Are we sitting or standing? Are we actually facing the screen? What actions or gestures did we perform leading up to the current interaction? Context encompasses both our spatial relationship to the environment and the history of our interactions.
In conclusion: Gesture + Mode + Context = Interaction.
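The whole equation could be sketched as follows: the context gates whether a recognized gesture, interpreted through the current mode, actually fires. All names here are my own invention, meant only to show the three pieces composing:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    standing: bool = True
    facing_screen: bool = True
    history: list = field(default_factory=list)  # gestures seen so far

def interact(gesture, mode, context):
    """Gesture + Mode + Context = Interaction (a toy sketch)."""
    if not context.facing_screen:
        return None                    # context can veto the gesture
    context.history.append(gesture)    # context tracks gesture history
    return f"{mode}:{gesture}"         # the interaction the UI performs

ctx = Context()
print(interact("swipe_left", "panning", ctx))  # panning:swipe_left
ctx.facing_screen = False
print(interact("swipe_left", "panning", ctx))  # None: user looked away
```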
I am currently working on implementing this interaction scheme in my Kinect framework.
Here is the project I was referring to about representing spatial and temporal data (historical information of cities):
http://hypercities.ats.ucla.edu/