Sunday, March 13, 2011



3D modelling with KINECT + ARDUINO + PD

i did this to check out Kinect's precision
and usability for Virtual Reality applications on one side -

and on the other

to create algorithms to SELECT / MOVE / LINK objects
for multitouch applications

which is all part of my attempts at developing
innovative human/computer interfaces.


while working with kinect, i realized pretty soon that
it's quite annoying to detect something (very essential) like a MOUSECLICK
from the skeleton data alone. i made an attempt that makes
clicks occur by pointing at an object and then straightening the arm,
but this was far too exhausting after a while.

so i decided to add comfortable clicking possibilities with datagloves.

i had an arduino and some resistors lying around, went to Lagerhaus,
bought some fancy gloves for 3 euros and soldered this together:

[images: arduino - gloves - hand states]

using 1 analog pin per glove i get 4 different readings, one for each finger state.
the arduino connects with pd using Pduino.
of course it would be much nicer to have a wireless version of this;
it could easily be done with an arduino and an XBee module that sends the data over a radio link.
and a rumble pack would be cool as well, to provide haptic feedback.
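
for illustration, here is a rough sketch of the readout logic as an Arduino program. the pin numbers and thresholds are invented, and in the actual setup Pduino/Firmata streams the raw analog values to PD, where the thresholding happens in a patch:

// reads one analog pin per glove and maps the raw value to a finger
// state. the 4 levels come from the resistor ladder in the glove
// (assumption: roughly evenly spaced; real thresholds depend on the
// resistors used).

const int LEFT_GLOVE  = A0;   // hypothetical pin assignment
const int RIGHT_GLOVE = A1;

int fingerState(int raw) {        // raw is 0..1023
  if (raw < 256) return 0;        // no finger pressed
  if (raw < 512) return 1;        // finger 1
  if (raw < 768) return 2;        // finger 2
  return 3;                       // finger 3
}

void setup() {
  Serial.begin(57600);
}

void loop() {
  Serial.print(fingerState(analogRead(LEFT_GLOVE)));
  Serial.print(' ');
  Serial.println(fingerState(analogRead(RIGHT_GLOVE)));
  delay(20);                      // ~50 updates per second
}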

HOW does it work?

[sketch 1: data flow]

Kinect sends a depth image of the user to
OpenNI / NITE, which tracks the user's skeleton and
sends the 3D positions of every joint to
Pure Data over OSC.
the arduino sends the finger states to PD.
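
the joint data coming over OSC is a stream of simple messages; Sensebloom's OSCeleton, for example, sends one /joint message per joint (name, user id, x, y, z). a minimal liblo-based receiver in C++, assuming that format and the default port 7110, could look like this:

#include <cstdio>
#include <lo/lo.h>

// handles OSCeleton-style messages: /joint <name> <user> <x> <y> <z>
int onJoint(const char *path, const char *types, lo_arg **argv,
            int argc, lo_message msg, void *user_data) {
    std::printf("%s (user %d): %.3f %.3f %.3f\n",
                &argv[0]->s, argv[1]->i,
                argv[2]->f, argv[3]->f, argv[4]->f);
    return 0;
}

int main() {
    // 7110 is assumed here as the sender's default port
    lo_server_thread st = lo_server_thread_new("7110", nullptr);
    lo_server_thread_add_method(st, "/joint", "sifff", onJoint, nullptr);
    lo_server_thread_start(st);
    std::getchar();               // run until enter is pressed
    lo_server_thread_free(st);
}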

in PD this data is streamed to various patches to serve multiple purposes:

[sketch 2: patch overview]

Navigation
Essential for 3D modelling is the virtual camera, which allows one to move around
the objects in virtual space.
the finger states 2 + 3 of either hand are reserved for this.
with finger 2 pressed the camera orbits around its target.
with finger 3 pressed, camera and target move left/right + up/down in screenspace.
i also added head tracking, which moves the camera up/down/left/right/back/front in screenspace.

[video: navigation]
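
under the hood this boils down to spherical coordinates around the target point. a small C++ sketch of the two modes (the real thing lives in a PD/GEM patch; the angles, clamping and pan convention here are assumptions):

#include <cmath>

struct Vec3 { float x, y, z; };

struct OrbitCamera {
    Vec3  target = {0, 0, 0};
    float yaw = 0, pitch = 0, dist = 5;

    // finger 2: hand movement changes the two orbit angles
    void orbit(float dYaw, float dPitch) {
        yaw   += dYaw;
        pitch += dPitch;
        if (pitch >  1.5f) pitch =  1.5f;  // don't flip over the poles
        if (pitch < -1.5f) pitch = -1.5f;
    }

    // finger 3: camera and target move together in screenspace
    void pan(float dRight, float dUp) {
        target.x += dRight * std::cos(yaw);
        target.z -= dRight * std::sin(yaw);
        target.y += dUp;
    }

    // camera position on a sphere around the target
    Vec3 position() const {
        return { target.x + dist * std::cos(pitch) * std::sin(yaw),
                 target.y + dist * std::sin(pitch),
                 target.z + dist * std::cos(pitch) * std::cos(yaw) };
    }
};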

Finger 1 is reserved for
Selection/Moving

[video: time-lapse]

Here i need to satisfy 2 different requirements.
first, i want to select by pointing at something on the screen, for the 2D menu stuff.
for this i made a calibration algorithm that works by pointing
at the top-left and bottom-right corners of the screen. then only hand positions within
this pyramid are used, and mapped to screenspace.
2 cursors are calculated, making a total of 3 with the mouse cursor.
selection is done by comparing the unit vectors of camera-->object and camera-->cursor.
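
that comparison is just a dot product of two normalized direction vectors; whichever object points in almost the same direction as the cursor wins. a sketch (the tolerance value is invented):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// returns the index of the object whose direction from the camera best
// matches the cursor direction, or -1 if nothing is within tolerance.
int pick(Vec3 cam, Vec3 cursorDir, const std::vector<Vec3>& objects) {
    int   best    = -1;
    float bestDot = 0.999f;              // ~2.5 degree cone (assumption)
    Vec3  dir     = normalize(cursorDir);
    for (int i = 0; i < (int)objects.size(); ++i) {
        Vec3 toObj = normalize({ objects[i].x - cam.x,
                                 objects[i].y - cam.y,
                                 objects[i].z - cam.z });
        float d = dot(toObj, dir);
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return best;
}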

second, for the modeling i select things by moving my hands in 3D space.
for this i fix the skeleton's hands in front of the camera
so they move and rotate with the view.
this works by multiplying the skeleton's world coordinates with the camera matrix.
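
a one-function sketch of that transform, assuming an OpenGL-style column-major 4x4 view matrix (GEM is OpenGL-based, but the exact convention is an assumption here):

struct Vec3 { float x, y, z; };

// transforms a skeleton point from world space into camera space by
// multiplying it with a column-major 4x4 view matrix m.
Vec3 worldToCamera(const float m[16], Vec3 p) {
    return { m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12],
             m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13],
             m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14] };
}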

the most demanding part was the multitouch logic, which locks a cursor out of all other objects once one is selected: if you select something and move it around, you don't want to drag everything else along with it.
so there are many tricks and hacks in here to avoid unwanted situations.
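
stripped of all the special cases, the core is an ownership table: a cursor that grabs an object keeps it, and both are off-limits for everyone else until release. a C++ sketch of that idea (names and structure are illustrative, not the actual patch):

#include <map>

class GrabManager {
    std::map<int, int> grabs;   // cursor id -> grabbed object id
public:
    // succeeds only if both the cursor and the object are still free
    bool tryGrab(int cursor, int object) {
        if (grabs.count(cursor)) return false;
        for (const auto& g : grabs)
            if (g.second == object) return false;
        grabs[cursor] = object;
        return true;
    }
    void release(int cursor) { grabs.erase(cursor); }
    bool isBusy(int cursor) const { return grabs.count(cursor) > 0; }
};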

the rest was a piece of cake:
for the modeling i made 2D menu items that switch between the 4 basic operations needed for mesh-modeling:
[create Vertex] [create QuadFace] [create TriFace] [Move]

[video: creating a box and a pyramid]

when [create Vertex] is on,
a select- and movable point with a unique ID is created at the hand's 3D position when finger 1 is pressed.

when [create QuadFace] or [create TriFace] is on,
a polygon is created and linked to 3 or 4 vertices by selecting them one after the other.
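
the underlying data can be as simple as an id-to-position map plus a list of faces. a sketch of those two operations (my guess at a minimal structure, not the actual patch internals):

#include <map>
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::map<int, Vec3>           vertices; // unique id -> position
    std::vector<std::vector<int>> faces;    // 3 or 4 vertex ids per face
    int nextId = 1;

    // [create Vertex] + finger 1: new point at the hand's 3D position
    int createVertex(Vec3 handPos) {
        vertices[nextId] = handPos;
        return nextId++;
    }

    // [create TriFace] / [create QuadFace]: link the selected vertices
    void createFace(const std::vector<int>& ids) {
        if (ids.size() == 3 || ids.size() == 4)
            faces.push_back(ids);
    }
};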

finally, i needed to save and load the results of my modeling work.
i did this using the same syntax as the .obj format,
so the model can be exchanged directly with 3dsMAX or other 3D apps.

[screenshot: the model in 3dsMAX]
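
writing that syntax is just "v x y z" lines followed by "f" lines with 1-based vertex indices. continuing the Mesh sketch from above:

#include <fstream>
#include <map>

// saves the Mesh from the sketch above in .obj-style syntax; .obj face
// indices are 1-based, so the unique ids are remapped on export.
void saveObj(const Mesh& mesh, const char* path) {
    std::ofstream out(path);
    std::map<int, int> objIndex;   // our vertex id -> 1-based .obj index
    int n = 1;
    for (const auto& v : mesh.vertices) {
        objIndex[v.first] = n++;
        out << "v " << v.second.x << ' ' << v.second.y << ' '
            << v.second.z << '\n';
    }
    for (const auto& f : mesh.faces) {
        out << 'f';
        for (int id : f) out << ' ' << objIndex.at(id);
        out << '\n';
    }
}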

and of course,
it's double the fun in stereo.

[stereo screenshot]

credits go out to
all the people
who made this possible
by developing
these wonderful pieces of
open source software

Pure Data

GEM

OpenNI

Arduino

Sensebloom

Linux

and all the Kinect Hackers