Now that I have my base C# framework ported over to Unity, I have begun building more advanced motion recognition to power the beta of my object editor/navigator application.
There are multiple motion recognition engines that "look for" specific motions. They all consume the same atomic action data from my motion segmenter, and each recognizer is turned on or off based on the current state/context and on the output of other recognizers. For example, I implemented a motion recognizer that checks whether you have moved your hands into the "up" position (raised from your sides and pointed in the general direction of the screen). Once that state is reached, my rotation recognizer is activated and watches to see if you rotate your two hands in synchronization, the way you would if you were rotating a real-life object. A rough sketch of this chaining pattern is below.
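To make that pattern concrete, here is a minimal C# sketch. Every name in it (AtomicAction, IMotionRecognizer, HandsUpRecognizer, the threshold value) is hypothetical; it illustrates the activation chain, not my actual framework code.

```csharp
using UnityEngine;

// Every recognizer consumes the same atomic action stream from the segmenter.
public struct AtomicAction
{
    public Vector3 LeftHand;   // segmented left-hand position
    public Vector3 RightHand;  // segmented right-hand position
}

public interface IMotionRecognizer
{
    bool Active { get; set; }          // toggled by state/context or other recognizers
    void Process(AtomicAction action);
}

public class HandsUpRecognizer : IMotionRecognizer
{
    readonly IMotionRecognizer rotationRecognizer;
    public bool Active { get; set; } = true;

    public HandsUpRecognizer(IMotionRecognizer rotationRecognizer)
    {
        this.rotationRecognizer = rotationRecognizer;
    }

    public void Process(AtomicAction action)
    {
        if (!Active) return;
        // Placeholder "up" test: both hands above an arbitrary height.
        // A real check would also verify they point toward the screen.
        bool handsUp = action.LeftHand.y > 1.2f && action.RightHand.y > 1.2f;
        // Reaching the hands-up state enables the rotation recognizer.
        rotationRecognizer.Active = handsUp;
    }
}
```

The key idea is that recognizers gate each other: the hands-up recognizer never rotates anything itself, it just flips the rotation recognizer's Active flag.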
Here is a snapshot of my work in progress. I am rotating a 3D cube in three dimensions (about the x, y, and z axes) based on the motion controls. If you look at the bottom, you will see two Unity icons and a black bar. A Unity icon (a placeholder image for now) shows up if your right/left hand is in the "up" position. The black bar represents the distance between your two hands, which will affect the sensitivity of your rotation (more about that in the next post). The basic idea is to give on-screen feedback at every step of the interaction.
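As a rough illustration of the rotation mapping, here is a hedged Unity sketch. CubeRotator, the hand fields, and the sensitivity formula are all assumptions for illustration; they show one plausible way to derive a rotation from the vector between two tracked hands, not my actual implementation.

```csharp
using UnityEngine;

// Hypothetical sketch: names and the sensitivity mapping are illustrative only.
public class CubeRotator : MonoBehaviour
{
    public Transform cube;        // the cube being rotated
    public Vector3 leftHand;      // fed each frame by the tracking pipeline
    public Vector3 rightHand;

    Vector3 lastHandVector;

    void Update()
    {
        Vector3 handVector = rightHand - leftHand;
        if (lastHandVector != Vector3.zero && handVector != Vector3.zero)
        {
            // Rotation that carries last frame's hand vector onto this frame's.
            // (Twist about the hand axis itself would need a separate signal.)
            Quaternion delta = Quaternion.FromToRotation(lastHandVector, handVector);

            // Hand spacing drives sensitivity (the on-screen black bar); the
            // exact mapping is a guess here: wider hands damp the rotation.
            float sensitivity = Mathf.Clamp01(0.5f / handVector.magnitude);
            cube.rotation = Quaternion.Slerp(Quaternion.identity, delta, sensitivity)
                            * cube.rotation;
        }
        lastHandVector = handVector;
    }
}
```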
Look for a video demo once I get the rotation kinks worked out.