3D Drawing Canvas
Classroom Project - Interaction Design
The objective was to liberate 3D digital sketching from the physical constraints of traditional peripherals like mice and electronic pens.
I aimed to bridge the gap between physical movement and digital rendering, allowing designers to "step into" their work and manipulate 3D space as intuitively as they would in the physical world.
The concept eventually took shape while exploring the Kinect and its various capabilities, probing to what extent it could be hacked into a 3D canvas for quick sketching.
I started off by making a small 3D canvas in Processing (used here as an e-prototyping tool) that could be manipulated with the mouse. The basics were placing points first, then selecting two of them to join with a line. I then extended the program to generate a surface when more than two points were selected.
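To give a sense of the mechanics, here is a minimal 2D sketch of that point-and-line logic. It is a simplified reconstruction, not the original code: click empty space to place a point, click two existing points to join them.

```java
ArrayList<PVector> points = new ArrayList<PVector>();
ArrayList<int[]> segments = new ArrayList<int[]>(); // pairs of point indices
int firstPick = -1; // index of the first selected point, -1 if none

void setup() {
  size(600, 400);
}

void draw() {
  background(255);
  stroke(0);
  for (int[] s : segments) {
    PVector a = points.get(s[0]), b = points.get(s[1]);
    line(a.x, a.y, b.x, b.y);
  }
  for (int i = 0; i < points.size(); i++) {
    PVector p = points.get(i);
    fill(i == firstPick ? color(255, 0, 0) : color(0)); // highlight the selection
    ellipse(p.x, p.y, 8, 8);
  }
}

void mousePressed() {
  int hit = pickPoint(mouseX, mouseY);
  if (hit == -1) {
    points.add(new PVector(mouseX, mouseY)); // place a new point
  } else if (firstPick == -1) {
    firstPick = hit;                         // first endpoint selected
  } else {
    segments.add(new int[] {firstPick, hit}); // join the two selections
    firstPick = -1;
  }
}

// return the index of a point within 10 px of (x, y), or -1
int pickPoint(float x, float y) {
  for (int i = 0; i < points.size(); i++) {
    if (dist(x, y, points.get(i).x, points.get(i).y) < 10) return i;
  }
  return -1;
}
```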
Then came the part of introducing 3D and a natural user interface (NUI) into the picture using the Kinect. The Kinect provides hand coordinates that can be mapped onto the application, and the depth of the scene in which the user is interacting supplies the third dimension.
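The mapping itself is straightforward. Below is a sketch of the idea: the raw hand position here is a hard-coded stand-in for the Kinect's hand-tracking output, and the range constants are illustrative, not the values from the project.

```java
// x, y in depth-image pixels (640x480), z in millimetres from the sensor
PVector rawHand = new PVector(320, 240, 1500);

void setup() {
  size(600, 400, P3D);
}

void draw() {
  background(255);
  // Map the depth image onto the canvas, and the hand's distance
  // from the sensor (roughly 0.5-2.5 m) onto the Z-axis.
  float x = map(rawHand.x, 0, 640, 0, width);
  float y = map(rawHand.y, 0, 480, 0, height);
  float z = map(rawHand.z, 500, 2500, 200, -200); // nearer hand = closer to viewer
  translate(x, y, z);
  fill(0);
  sphere(6); // cursor marking the hand in 3D space
}
```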
Once that was working, I needed to map how the user would select the available drawing options, i.e., making points, lines, and planes, and moving points. I started by exploring ways to switch between options using hand and finger tracking, with each digit of the hand standing for one of the options above. I explored ways to make hand and finger detection more efficient with the Kinect, borrowing examples from OpenCV and Kinect projects. I found several workable approaches, but none of them had been implemented for Processing, so I had to write the library code myself and get it working. The finger tracking was fast, but it was never reliable enough to map onto the main application, hard as I tried.
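The selection scheme itself was simple once detection was abstracted away. A sketch of the idea follows; countFingers() is a placeholder for the OpenCV-based detection, not a real library call.

```java
final int POINT = 1, LINE = 2, PLANE = 3, MOVE = 4;
int mode = POINT;

void draw() {
  updateMode();
  // ...drawing logic keyed off `mode` would go here...
}

void updateMode() {
  int fingers = countFingers();   // 0-5 extended digits
  switch (fingers) {
    case 1: mode = POINT; break;  // one finger: place points
    case 2: mode = LINE;  break;  // two fingers: join points into lines
    case 3: mode = PLANE; break;  // three fingers: fill a surface
    case 4: mode = MOVE;  break;  // four fingers: drag a point around
    default: break;               // anything else: keep the current tool
  }
}

// Stub standing in for the detector; the real version analysed
// the hand contour in the Kinect depth image.
int countFingers() {
  return 1;
}
```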
The next challenge was adding the ability to rotate the canvas about the X and Y axes to make drawing easier.
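In its simplest Processing form, the rotation I was after looks like the sketch below: dragging the mouse spins the scene about the X and Y axes (the box is a stand-in for the drawn geometry).

```java
float rotX = 0, rotY = 0;

void setup() {
  size(600, 400, P3D);
}

void draw() {
  background(255);
  translate(width / 2, height / 2); // rotate about the canvas centre
  rotateX(rotX);
  rotateY(rotY);
  box(120); // stand-in for the user's points, lines, and planes
}

void mouseDragged() {
  rotY += (mouseX - pmouseX) * 0.01;
  rotX -= (mouseY - pmouseY) * 0.01;
}
```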
The final prototype was built using TouchOSC for option selection and the Kinect for point tracking and movement. It lacks the canvas-rotation functionality.
The Technical Evolution
The project followed an iterative prototyping path, moving from basic 2D logic to a complex, multi-modal spatial system.
- Phase 1: Geometry Engine (Processing): I developed the initial engine using Processing (Java). This involved creating a system that could plot points, calculate the distance between them to form lines, and eventually generate surfaces based on three or more coordinates (see the surface sketch after this list).
- Phase 2: Depth Integration (Kinect): To transition from 2D to 3D, I integrated the Microsoft Kinect. By mapping the sensor's depth data to the Z-axis, I was able to translate a user's physical reach into digital depth, creating a true 3D coordinate system for input.
- Phase 3: Gesture Logic & Libraries: I explored OpenCV to implement finger-tracking for tool selection (e.g., mapping each digit to a different drawing function). When existing libraries proved insufficient for the precision required in Processing, I authored custom library functions to improve detection accuracy.
Prototypes
Design Challenges & Pivots
Every exploratory project hits technical ceilings. The primary challenge here was gesture reliability vs. user intent.
- The Problem: While finger-tracking was functional, it lacked the high fidelity needed for professional sketching. False positives in gesture recognition led to "jitter" in the 3D model.
- The Pivot: To maintain the "hassle-free" feel while ensuring precision, I transitioned to a hybrid interface. I utilized TouchOSC on a mobile device for stable mode selection (switching between points, lines, and planes) while keeping the Kinect dedicated to high-motion spatial tracking and point manipulation (a minimal receiver sketch follows below).
Concept Video
Final Outcome & UX Insights
The final prototype successfully demonstrated a functional 3D canvas controlled by bodily movement.
Key Takeaways:
- Ergonomics of Spatial UI: The project highlighted the "Gorilla Arm" effect and the need for simplified rotation controls in 3D space.
- Multimodal Interaction: The success of the TouchOSC/Kinect hybrid proved that for complex creative tasks, a combination of tactile feedback (touch) and spatial movement (NUI) often outperforms pure gesture-based systems.