I showed my current interactions for menu/mode selection, zooming, panning, and rotation of a 3D object in my Unity-based interface. From the feedback, I realized that I had a good basis for interaction and motion interpretation, but I really needed to tighten the interactions so that they were intuitive and easy to perform in 3D space. I had started playing around with using space as a "value modifier" for my various interactions, and I really need to fully flesh this out across my current interactions. I got some promising ideas from Badler and Joe, and I plan to implement them in the coming weeks.
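To make the "value modifier" idea concrete, here is a minimal sketch of the kind of thing I mean, assuming a tracked hand transform as input; all of the names and constants (gestureOrigin, zoomSpeedScale, etc.) are placeholders, not final design:

```csharp
using UnityEngine;

// Sketch: use the hand's displacement from the gesture's start point as a
// "value modifier" -- the farther from the origin, the stronger the effect.
// handTransform, gestureOrigin, and the constants are placeholder names.
public class SpaceModifiedZoom : MonoBehaviour
{
    public Transform handTransform;      // tracked hand position (assumed input)
    public Camera targetCamera;          // camera being zoomed
    public float baseZoomSpeed = 1.0f;   // zoom rate at the gesture origin
    public float zoomSpeedScale = 2.0f;  // extra rate per meter of displacement

    private Vector3 gestureOrigin;       // where the zoom gesture started
    private bool zooming;

    public void BeginZoom()
    {
        gestureOrigin = handTransform.position;
        zooming = true;
    }

    public void EndZoom() { zooming = false; }

    void Update()
    {
        if (!zooming) return;

        // Displacement along the camera's forward axis picks the zoom direction;
        // the total distance from the origin amplifies the rate.
        Vector3 offset = handTransform.position - gestureOrigin;
        float direction = Vector3.Dot(offset, targetCamera.transform.forward);
        float modifier = baseZoomSpeed + zoomSpeedScale * offset.magnitude;

        targetCamera.transform.position +=
            targetCamera.transform.forward * direction * modifier * Time.deltaTime;
    }
}
```

The nice property is that a small hand motion near the origin gives fine control, while a large sweep through space gives a fast, coarse adjustment.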
User feedback was again something I had a good basis for, but it still needs more iteration. Currently, I have implemented passive feedback using a particle cloud that visualizes the user's hand velocities, along with basic feedback on whether the user is in the interaction space or not. My next step for user feedback will be to implement my original idea of displaying the states of actions and whether they ended up being accepted as gestures.
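Roughly, the particle cloud works like this (a simplified sketch, assuming a tracked hand transform and a pre-configured ParticleSystem; the actual tuning and the real in-space cue differ):

```csharp
using UnityEngine;

// Sketch of the passive feedback: emit particles at the hand position with
// velocities taken from the hand's own motion, and tint them depending on
// whether the hand is inside the interaction volume. Names are placeholders.
public class HandVelocityCloud : MonoBehaviour
{
    public Transform handTransform;        // tracked hand position (assumed input)
    public ParticleSystem cloud;           // particle system used for the cloud
    public Bounds interactionSpace =       // volume where gestures are recognized
        new Bounds(Vector3.zero, Vector3.one * 2f);

    private Vector3 lastHandPosition;

    void Start() { lastHandPosition = handTransform.position; }

    void Update()
    {
        // Estimate hand velocity by finite differences between frames.
        Vector3 velocity = (handTransform.position - lastHandPosition) / Time.deltaTime;
        lastHandPosition = handTransform.position;

        bool inSpace = interactionSpace.Contains(handTransform.position);

        var emit = new ParticleSystem.EmitParams
        {
            position = handTransform.position,
            velocity = velocity * 0.5f,                    // particles trail the motion
            startColor = inSpace ? Color.cyan : Color.red  // basic in/out cue
        };
        cloud.Emit(emit, 3);
    }
}
```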
The good thing is that I am completely done with the base motion framework, so the rest of the changes will be focused on the actual design of the interaction gestures and the flow of interaction.
Here is my timeline for the next few weeks.
11/4-11/7: Use Badler's and Joe's input to design a more intuitive rotation gesture. Tighten up the constants for the zoom and pan gestures.
11/7-11/9: Implement user feedback of current action states (see the first sketch after this list).
11/9-11/16: Develop a menu system that lets users add different objects to the scene and create new scenes, plus a selection system that lets users pick objects for editing (see the second sketch after this list).
11/17 onward: Work on presenting the work in the video.
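For the action-state feedback in the 11/7-11/9 item, I am picturing a small state machine per action. This is just a sketch of the shape it might take; the state names, the timeout, and the TextMesh readout are all placeholders:

```csharp
using UnityEngine;

// Sketch of the planned action-state feedback: each gesture candidate moves
// through a few explicit states, and the UI reflects the current one.
public enum ActionState { Idle, Tentative, Accepted, Rejected }

public class ActionStateFeedback : MonoBehaviour
{
    public TextMesh stateLabel;            // simple world-space readout
    public float rejectTimeout = 1.5f;     // seconds before a tentative action fails

    private ActionState state = ActionState.Idle;
    private float tentativeSince;

    // Called when motion starts looking like a gesture.
    public void OnGestureCandidate()
    {
        state = ActionState.Tentative;
        tentativeSince = Time.time;
    }

    // Called when the recognizer commits to the gesture.
    public void OnGestureAccepted() { state = ActionState.Accepted; }

    void Update()
    {
        // A candidate that never resolves gets rejected after a timeout.
        if (state == ActionState.Tentative && Time.time - tentativeSince > rejectTimeout)
            state = ActionState.Rejected;

        stateLabel.text = state.ToString();
        stateLabel.color = state == ActionState.Accepted ? Color.green
                         : state == ActionState.Rejected ? Color.red
                         : Color.white;
    }
}
```

The point is that the user can always see which state the recognizer is in, so a gesture that fails at least fails visibly instead of silently.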
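For the selection system in the 11/9-11/16 item, the simplest starting point I can think of is a ray cast from the hand. This is a sketch under that assumption, with the "Editable" tag as a placeholder convention:

```csharp
using UnityEngine;

// Sketch of the selection idea: cast a ray from the hand into the scene and
// mark the first editable object hit as selected, for the editing tools to use.
public class HandSelection : MonoBehaviour
{
    public Transform handTransform;        // tracked hand pose (assumed input)
    public float maxSelectDistance = 10f;  // how far the selection ray reaches

    public GameObject Selected { get; private set; }

    // Call this when the user performs the "select" gesture.
    public void TrySelect()
    {
        Ray ray = new Ray(handTransform.position, handTransform.forward);
        if (Physics.Raycast(ray, out RaycastHit hit, maxSelectDistance)
            && hit.collider.CompareTag("Editable"))
        {
            Selected = hit.collider.gameObject;   // hand off to editing tools
        }
    }
}
```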