The alpha-review was great in terms of getting feedback on my current progress. Two main issues/areas of focus were repeatedly emphasized by my reviewers:
1. Gesture recognition is not trivial and can be a challenge.
2. You should define an application/use-case early on and adjust your motion recognition accordingly.
These are two really good points and I definitely agree with them.
I'm currently bringing my existing C# test framework into Unity, which will be my final production environment. This has taken some time, but I have finally resolved many issues regarding wrappers, DLLs, and Unity's inability to use the Microsoft .NET 4.0 framework (the framework that the Kinect SDK utilizes).
I found this trick online (http://www.codeproject.com/KB/dotnet/DllExport.aspx) and basically had to play around with lots of compiler settings, in both C# and C++, to get everything to work.
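To give a rough sense of the shape of this bridge, here is a minimal sketch of the two sides. The DllExport attribute is the unmanaged-exports trick from the article above, and the names (KinectBridge, GetHeadX) are illustrative placeholders rather than my actual code:

```csharp
// KinectBridge.cs: built as a separate .NET 4.0 assembly against the Kinect SDK.
using System.Runtime.InteropServices;

public static class KinectBridge
{
    // Exported as a plain C-style function via the unmanaged-exports trick.
    [DllExport("GetHeadX", CallingConvention = CallingConvention.StdCall)]
    public static float GetHeadX()
    {
        // ...query the Kinect SDK skeleton stream here (runs under .NET 4.0)...
        return 0.0f;
    }
}

// KinectClient.cs: inside Unity (Mono), the export is consumed like a native plugin.
using System.Runtime.InteropServices;
using UnityEngine;

public class KinectClient : MonoBehaviour
{
    [DllImport("KinectBridge")]  // KinectBridge.dll placed where Unity can load it
    private static extern float GetHeadX();

    void Update()
    {
        Debug.Log("Head X: " + GetHeadX());
    }
}
```

The idea is that the Kinect-facing assembly is compiled separately against .NET 4.0, while Unity's Mono runtime only ever sees plain C-style exports through P/Invoke.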
Now, getting back to my two main challenges: I have decided that my initial use case for the beta-review will be a spatial interaction environment for 3D objects. In simple terms, the user should be able to move and rotate an object in 3D space while also being able to change his/her viewpoint. Think of the CIS 277 object editor, but in Unity and with voice and gesture controls.
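Concretely, the gesture layer will end up driving something like the following (a hypothetical sketch; the method names and scaling factors are illustrative, not my actual code):

```csharp
// ObjectManipulator.cs: sketch of the beta-review use case, where tracked hand
// motion moves or rotates the currently selected object.
using UnityEngine;

public class ObjectManipulator : MonoBehaviour
{
    public Transform selected;       // the 3D object being edited
    public float moveScale = 2f;     // meters of hand motion -> scene units
    public float rotateScale = 90f;  // degrees per meter of hand motion

    // Fed each frame by the (not yet shown) Kinect gesture layer.
    public void ApplyHandDelta(Vector3 handDelta, bool rotating)
    {
        if (selected == null) return;

        if (rotating)
            // Horizontal hand motion spins the object around the world up axis.
            selected.Rotate(Vector3.up, -handDelta.x * rotateScale, Space.World);
        else
            // Otherwise the object follows the hand through 3D space.
            selected.Translate(handDelta * moveScale, Space.World);
    }
}
```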
Probably one of the most important aspects of gesture recognition doesn't actually pertain directly to the recognition of motion at all: it has to do with user feedback. How does a user know if his gesture has been registered? How does the user even know if the interface is listening for input in the first place?
This kind of user feedback is implemented religiously in most traditional GUIs (good ones, that is). Hover your mouse over a button: the button changes color/opacity/shape, letting you know that it is listening. Move your mouse away and the button returns to its previous state, letting you know that it is no longer listening for your click. Finally, if you do click the button, it changes its visual state once more to signify that your action has been successfully recorded.
For successful gesture recognition, this same interaction flow must be replicated in our NUI. If users do not know that their actions are being recorded, they will madly wave back and forth, which will lead to further misinterpretation by our recognition engine. Furthermore, the user must be told whether his gesture/motion has been correctly recognized or has been ignored because our engine is unable to parse it.
Since it does not make sense to have button feedback (the whole point of a NUI is to remove the mouse-pointer paradigm) and pop-up dialog boxes are intrusive, I've decided to use a border highlight that displays the feedback, with the response coded in the color of the border. The color then fades away after the feedback is shown. The feedback states are:
Initial "recording feedback". The user has stepped into the interaction space.
User action has been, recorded and saved. However, it might be of a longer gesture sequence so the entire gesture is not complete yet and we are waiting for more motions.
User action or sequence of actions has recognized and we have updated the state of the interface to correspond with this.
Current action or current sequence of actions cannot be recognized as a specific command. The sequence/action has been deleted from queue. Start the action again from the beginning.
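Here is a minimal sketch of the fading border, assuming Unity's immediate-mode GUI and one flat color per state (the state names and the color mapping are placeholders; the actual colors aren't decided by anything above):

```csharp
// BorderFeedback.cs: sketch of the fading border highlight described above.
using UnityEngine;

public enum GestureFeedback { Listening, Recorded, Recognized, Rejected }

public class BorderFeedback : MonoBehaviour
{
    public float borderWidth = 12f;
    public float fadeSeconds = 1.5f;

    private Color current = Color.clear;
    private Texture2D pixel;

    void Awake()
    {
        // One white pixel, tinted via GUI.color when drawn.
        pixel = new Texture2D(1, 1);
        pixel.SetPixel(0, 0, Color.white);
        pixel.Apply();
    }

    // Called by the recognition engine whenever the feedback state changes.
    public void Show(GestureFeedback state)
    {
        switch (state)
        {
            case GestureFeedback.Listening:  current = Color.white;  break;
            case GestureFeedback.Recorded:   current = Color.yellow; break;
            case GestureFeedback.Recognized: current = Color.green;  break;
            case GestureFeedback.Rejected:   current = Color.red;    break;
        }
    }

    void Update()
    {
        // Fade the border out after the feedback is shown.
        current.a = Mathf.MoveTowards(current.a, 0f, Time.deltaTime / fadeSeconds);
    }

    void OnGUI()
    {
        if (current.a <= 0f) return;
        GUI.color = current;
        float w = Screen.width, h = Screen.height, b = borderWidth;
        GUI.DrawTexture(new Rect(0, 0, w, b), pixel);      // top edge
        GUI.DrawTexture(new Rect(0, h - b, w, b), pixel);  // bottom edge
        GUI.DrawTexture(new Rect(0, 0, b, h), pixel);      // left edge
        GUI.DrawTexture(new Rect(w - b, 0, b, h), pixel);  // right edge
    }
}
```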
My rationale for this:
1. A color border is unobtrusive, yet has enough global visual scale to attract the attention of the user.
2. The differentiation between recorded gestures and actually completed gestures allows for gestures that are formed from the build-up of many atomic gestures. As we build up to these complex gestures, it is still good to know that our sequence of actions is being recorded (see the sketch after this list).
3. The initial "on-phase" provides feedback to the user that he is in the interaction space and all his motions are currently being watched.
This mode of feedback is inspired by Alan Cooper's concept of modeless feedback from his book on interaction design, About Face.