It has been an interesting journey, looking back at my proposal and seeing where I am now. First I will discuss the parts of the project I feel I did a good job on, then the parts I should have done better on.
Here are some things I liked about what I did:
-I liked that I wrote out a definition of interaction (mode + context + gesture) early on. I think it shaped the way I approached the entire problem (using motion segmentation and using 3D space as a modifier) in a way that set my work apart from the existing crop of NUI demos.
-I liked that I quickly implemented a rough prototype in C#. This allowed me to become familiar with the Kinect API and understand the capabilities of the Kinect at a very early stage.
-Building a motion-segmentation engine early on: This became the basis of all my advanced motion recognition, and it helped a lot that I developed a structured way of reading my motions. Once I had a workflow for reading one specific type of motion, I could take the skeleton of that code and port it to the other motion types.
-Iterating through different approaches to motion recognition: I first implemented a live 1-to-1 mapping, then parsed actions after they were completed, and finally used a combination of the two. Going through many approaches let me see the strengths of each one, and it was a very interesting exercise.
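The motion-segmentation idea above can be sketched as follows. This is a hypothetical reconstruction in Python (the project itself was written in C# against the Kinect API), and every name in it is illustrative: the engine splits a stream of per-frame joint speeds into "moving" segments, which individual motion readers can then consume.

```python
# Hypothetical sketch of the motion-segmentation engine: split a stream
# of joint speeds into segments where the joint is actually moving.
# Names and the threshold value are illustrative, not the project's code.

def segment_motions(speeds, threshold=0.1):
    """Return (start, end) index pairs where speed stays above threshold."""
    segments, start = [], None
    for i, s in enumerate(speeds):
        if s > threshold and start is None:
            start = i                    # a motion begins
        elif s <= threshold and start is not None:
            segments.append((start, i))  # the motion ends
            start = None
    if start is not None:
        segments.append((start, len(speeds)))  # motion ran to end of stream
    return segments

# Example: two bursts of movement separated by stillness.
print(segment_motions([0.0, 0.5, 0.6, 0.0, 0.0, 0.9]))  # [(1, 3), (5, 6)]
```

Because every motion reader consumes the same (start, end) segments, the skeleton of one reader ports cleanly to other motion types, which is the reuse described above.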
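The third, combined approach can be illustrated the same way: emit live 1-to-1 updates while a motion is in progress, then classify the completed motion once it ends. Again, this is a hedged sketch with made-up names and a toy classifier, not the project's actual code.

```python
# Hedged sketch of the hybrid recognizer: live 1-to-1 feedback while the
# hand is moving, post-hoc classification once the motion completes.

def classify(path):
    """Toy post-hoc classification of a finished motion path."""
    dx = path[-1][0] - path[0][0]
    dy = path[-1][1] - path[0][1]
    return "horizontal-swipe" if abs(dx) > abs(dy) else "vertical-swipe"

def hybrid_recognizer(frames, moving):
    """frames: list of (x, y) hand positions; moving: parallel bools."""
    events, path = [], []
    for pos, m in zip(frames, moving):
        if m:
            path.append(pos)
            events.append(("live-update", pos))  # 1-to-1 mapping while moving
        elif path:
            events.append(("gesture", classify(path)))  # parse on completion
            path = []
    return events
```

The live updates give immediate feedback (the strength of the 1-to-1 mapping), while the completed-motion parse gives reliable gesture labels (the strength of the post-hoc approach).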
Here are some things I could have done a better job on:
-Focusing more on the interaction designs throughout the entire process: At times I became too focused on implementing and debugging my code, and then had to play catch-up on designing the interactions. My interactions would be more fully fleshed out at this stage if I had maintained a continual focus on them.
-Having a clearer strategy for the actual implementation: Honestly, I should have taken Mubbasir's advice and just used our existing Kinect-to-Unity plugin. Instead I developed my own, ran into many problems, and spent more time on it than on the actual interface. I ultimately did get all the wrappers to sync up with Unity, but my solution was very buggy and leaked memory. In the end I switched to the existing plugin because I didn't want to waste any more time debugging memory leaks. If I had paid closer attention to the feasibility of the implementation at an early stage, this could have been avoided.
-Bringing in users to try out the interface and get feedback throughout my work: This is honestly something I should have done. I may have been afraid to show my work at such an early stage, but user feedback could have given me many new leads for my interaction designs. I realized this during the Beta Review, when I got a lot of great ideas in just a short 20-minute chat with Badler and Joe.
Conclusion
All in all, I feel this has been a very stimulating project. It had the right mixture of human-computer interaction design and coding to suit my tastes, and it was fun to just take a stab at making a NUI. I now realize the shortcomings of a motion-based NUI, but also some of its strengths. From my work and observations, the NUI is not at all ready to supplant the traditional GUI, but it does bring whole new functionality that has never existed before. I am really glad I focused on building my motion framework, because it gives me a strong base for other applications on top of the Kinect, and I plan to play around with this in the future.