Wednesday, September 21, 2011

Hello all!

Now that I have finished a more complete design document, I have begun jumping into the Microsoft Kinect SDK and taking a look at the capabilities of the Kinect.

Linking the Kinect to a Windows 7 PC is actually a fairly simple process. Once the SDK has been installed, you simply plug the Kinect into the PC through the USB port and the fun begins. The first thing I looked at was the type of data the Kinect SDK provides. The SDK can track up to two skeletal models, giving each joint's location as a 3D position vector.
The values are normalized from -1 to 1 on the x, y, and z axes. The Kinect also returns a depth map stored in a byte array, but I have not decided yet whether that data is necessary. The raw video streams are also fairly easy to extract with the SDK, and this will be very useful for debugging, since we can overlay the skeletal joints on top of the video and see what the Kinect is recognizing at any given moment.
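Just to make the skeleton stream concrete, here is a minimal sketch of a polling loop written against the native NUI API. This is only an illustration rather than our actual code: header names differ between SDK releases (the beta ships the native header as MSR_NuiApi.h), the choice of joint is arbitrary, and error handling is stripped down to the bare minimum.

    // Minimal illustration: poll the Kinect skeleton stream and print the
    // head joint of any fully tracked skeleton.
    #include <windows.h>
    #include <NuiApi.h>   // the 2011 beta names this header MSR_NuiApi.h
    #include <cstdio>

    int main()
    {
        // Ask the runtime for skeletal tracking only.
        if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON)))
            return 1;

        NuiSkeletonTrackingEnable(NULL, 0);

        for (int frame = 0; frame < 300; ++frame)   // roughly ten seconds at 30 fps
        {
            NUI_SKELETON_FRAME skeletonFrame = {0};
            if (FAILED(NuiSkeletonGetNextFrame(100, &skeletonFrame)))
                continue;

            for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
            {
                const NUI_SKELETON_DATA& skeleton = skeletonFrame.SkeletonData[i];
                if (skeleton.eTrackingState != NUI_SKELETON_TRACKED)
                    continue;

                // Each joint position comes back as a Vector4 in skeleton space.
                Vector4 head = skeleton.SkeletonPositions[NUI_SKELETON_POSITION_HEAD];
                printf("head: x=%.2f  y=%.2f  z=%.2f\n", head.x, head.y, head.z);
            }
        }

        NuiShutdown();
        return 0;
    }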

I have also designed the general pipeline of our interface. The code will be divided along a very traditional model-view-controller pattern. Our controller consists of the Kinect device and its accompanying SDK; here, raw motion and voice input are captured and filtered into usable data. The model consists of our recognition engine: the current state of the interface is stored here, and the engine is responsible for changing that state in response to input when necessary. Finally, I will use the Unity engine solely for rendering (the view), and it will connect to my recognition engine through .dlls, as sketched below.
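To make the Unity connection a little more concrete, here is a rough sketch of what the .dll boundary between the recognition engine and Unity could look like. The function names are placeholders rather than the final design; the point is simply that the native engine exposes a small set of extern "C" exports, which Unity scripts can call through P/Invoke ([DllImport]).

    // Placeholder sketch of the native plugin surface (names are illustrative only).
    #define ENGINE_API extern "C" __declspec(dllexport)

    // Hypothetical interface state held by the model.
    static int g_currentState = 0;

    ENGINE_API bool InitializeEngine()
    {
        // Start the Kinect runtime and reset the recognition state here.
        return true;
    }

    ENGINE_API void PushJointSample(int jointId, float x, float y, float z)
    {
        // Controller -> model: filtered joint data comes in, and the engine
        // decides whether the interface state should change.
        (void)jointId; (void)x; (void)y; (void)z;
    }

    ENGINE_API int GetCurrentState()
    {
        // Model -> view: Unity polls this every frame and renders accordingly.
        return g_currentState;
    }

    ENGINE_API void ShutdownEngine()
    {
        // Release the Kinect and any engine resources.
    }

On the Unity side, a C# script would declare matching [DllImport] signatures for these exports and poll GetCurrentState() every frame to drive the rendering.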

I will be coding a few simple Kinect demos over the next few days, and I am beginning to design the interaction and user experience. Look for all of this in another blog post shortly.
