The first challenge we mentioned was to create a user experience that makes people feel like they're Iron Man. As a first step toward this, we used a Kinect to detect a user's hand and track whether it moves along the X, Y, or Z axis. This weekend we learned how the Kinect's infrared sensor works and how it measures the distance of objects. We then wrote a Processing program that detects the user's hand when it is within a specific distance range and draws an ellipse at the middle of the hand; if the user moves the hand closer, the code changes the color of the detected pixels, simulating the user shooting at a target. The code is attached.
import org.openkinect.processing.*;

Kinect2 kinect;  // Kinect v2: its depth image is 512x424, matching size() below
PImage img;

void setup() {
  size(512, 424);
  kinect = new Kinect2(this);
  kinect.initDepth();
  kinect.initDevice();
  img = createImage(kinect.depthWidth, kinect.depthHeight, RGB);
}

void draw() {
  background(0);
  img.loadPixels();
  int[] depth = kinect.getRawDepth();

  float sumX = 0;
  float sumY = 0;
  float totalPixels = 0;

  for (int x = 0; x < kinect.depthWidth; x++) {
    for (int y = 0; y < kinect.depthHeight; y++) {
      int offset = x + y * kinect.depthWidth;
      int d = depth[offset];
      if (d > 400 && d < 1000) {
        // hand at "aiming" distance: paint it pink and include it in the average
        img.pixels[offset] = color(255, 0, 150);
        sumX += x;
        sumY += y;
        totalPixels++;
      } else if (d > 200 && d < 400) {
        // hand pushed closer: switch to green to signal a "shot"
        img.pixels[offset] = color(128, 255, 0);
        sumX += x;
        sumY += y;
        totalPixels++;
      } else {
        img.pixels[offset] = color(0);
      }
    }
  }
  img.updatePixels();
  image(img, 0, 0);

  // draw the tracking ellipse at the centroid of the detected hand pixels
  if (totalPixels > 0) {
    float avgX = sumX / totalPixels;
    float avgY = sumY / totalPixels;
    fill(150, 0, 255);
    ellipse(avgX, avgY, 64, 64);
  }
}
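The heart of the sketch is the centroid trick: average the (x, y) coordinates of every depth pixel that falls inside a distance band, and that average is where the ellipse goes. Here is a minimal standalone Java sketch of the same idea, using a small synthetic depth array instead of a real Kinect; the grid size and the 400-1000 mm band are just illustrative values, not from the actual device.

```java
public class DepthCentroid {
    static final int W = 8, H = 8;  // toy depth-image dimensions

    // Average the coordinates of pixels whose depth lies in [near, far);
    // returns {avgX, avgY}, or null if no pixel is in range.
    static float[] centroid(int[] depth, int near, int far) {
        float sumX = 0, sumY = 0;
        int count = 0;
        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++) {
                int d = depth[x + y * W];
                if (d >= near && d < far) {
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        return count == 0 ? null : new float[] { sumX / count, sumY / count };
    }

    public static void main(String[] args) {
        int[] depth = new int[W * H];
        java.util.Arrays.fill(depth, 2000);  // background: everything too far away
        depth[3 + 3 * W] = 500;              // a tiny two-pixel "hand" blob
        depth[4 + 3 * W] = 500;
        float[] c = centroid(depth, 400, 1000);
        System.out.println(c[0] + "," + c[1]);  // prints "3.5,3.0"
    }
}
```

Guarding against the no-pixel case matters: in the live sketch, dividing by zero detected pixels would give NaN coordinates and send the ellipse off screen.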
Credits to Daniel Shiffman from the Coding Train for teaching us how to read Kinect data.