WaterBox Final Prototype & User-Testing

WaterBox is a musical loop station that plays different loops, either individually or simultaneously, depending on the user’s physical interaction with water. The track type also changes with the roughness of the waves and ripples on the water surface. The final prototype uses a Kinect v2 to capture the waves on the water surface through its depth data, and the Touche Advanced Touch Sensor with Arduino as the capacitive sensor for interaction with the water. Through WaterBox, I wanted to share the rich feeling of interacting with water and the fun of creating different music with the physical motion of your hands in the water.

The calibration of the depth and of the pixel counts that trigger the change of track types was the most difficult part, because of varying lighting and positions of the user’s arms. The final version will be carefully calibrated in the space, with supplemental lighting placed precisely to minimize the errors visible in some of the user-testing videos. I also had a difficult time with the Touche library code for the capacitive sensor, specifically calibrating it and separating the different user inputs.

The following videos show user testing of the prototype:

The main feedback/questions that I have received are the following:

  • What happens if there is a user that has a long blue shirt?

For the above question, I have decided to edit a portion of the code (the minThresh) so that any pixels above the height of the box are ignored, which should rule out errors from clothing such as a long sleeve.
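As a rough illustration (not the actual WaterBox code): with the Kinect looking down at the box, anything above the rim, such as a sleeve or an arm, is closer to the camera than the water surface, so raising the minimum depth threshold to the rim leaves those pixels outside the accepted band. The threshold values here are placeholders.

// Illustrative values only: minThresh is the depth of the box rim, maxThresh the
// water surface, so anything closer than the rim (a sleeve, an arm) is ignored.
int minThresh = 710;
int maxThresh = 800;

int countWaterPixels(int[] rawDepth) {
  int counter = 0;
  for (int i = 0; i < rawDepth.length; i++) {
    int d = rawDepth[i];
    if (d >= minThresh && d <= maxThresh) {
      counter++;   // only pixels between the rim and the water surface are counted
    }
  }
  return counter;
}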

  • It would be good if there were some signs of the possible interactions

For this feedback, I thought of creating a page of possible interactions in a succinct way. For instance, a page with a diagram and the text “one finger, two fingers, the whole hand, or make waves” can provide basic instructions without giving away the whole idea of what the project is about.

Update:

<Created Signifiers to aid the interaction>

The one where the drawing comes to life – final project prototype updated

import processing.video.*;
import jp.nyatla.nyar4psg.*;

Capture cam;
MultiMarker nya;
Movie movie;

void setup() {
  size(640,480,P3D);
  colorMode(RGB, 100);
  println(MultiMarker.VERSION);
  cam=new Capture(this,640,480);
  nya=new MultiMarker(this,width,height,"camera_para.dat",NyAR4PsgConfig.CONFIG_PSG);
  // use the drawing itself as the marker: image, pattern resolution, edge percentage, marker size
  nya.addARMarker(loadImage("ar marker 5.png"),16,10,80);
  movie = new Movie(this, "IMG_2953.mp4");
  movie.loop();
  cam.start();
}

void movieEvent(Movie movie) {  
  movie.read();
}


void draw()
{
  if (!cam.available()) {
    return;
  }
  cam.read();
  nya.detect(cam);
  background(0);
  nya.drawBackground(cam);
  if((!nya.isExist(0))){
    return;
  }
  nya.beginTransform(0);
  fill(0,0,255);
  translate(25,35,0);
  //box(40);
  rotate(PI);
  image(movie, 0, 0, 50, 70);
  nya.endTransform();
}

I started with the idea of using AR markers placed on the side of the page, with the drawing projected onto each page, but then I decided that I really wanted to focus on creating the feeling of each drawing coming to life, so I was determined to find a way to use the drawing itself as the marker. Through research and trial and error, I finally got it working properly. I tried out different things and learnt that AR markers don’t actually need to be square images; only the edge needs to be a square. So I inserted the final drawing marker into the data folder and changed the code, edge percentage, positioning, and size. I started from an example AR sketch that displayed a cube on top of the marker and changed it so that a video file plays on top of the marker instead.

However, I now need to test this with a webcam and projector to see whether a full drawing animation can be projected on top of the drawing, or whether I need to create a different video file with only the moving elements to be projected. I plan to include around 5 drawings and put them on different pages of a hard-paper book to avoid bending of the paper, which would affect the marker detection and projection.

 

Final Prototype?

Here’s what I have so far, including the missing parts in brackets:

-One Kinect camera connected to the display’s soundtrack. (It will be easy to add the second Kinect for the second display, seeing as I already have the code).

-A soundtrack for my display. (I would like to re-record it using professional equipment for better quality).

-Half of my Snapchat simulator/booth sketch. (I still need to add the filters and map them onto the user’s face.)

-A functioning stylus for the iPhone connected to a motor. (I still need to connect it to my IR sensor)

Since I don’t have the pedestals yet, I printed out sketches just to show how the display would be set up:

 

So for example, this is how the two pedestals would look next to each other.

The sound would be controlled based on the movements tracked by the Kinect camera to measure the person’s distance from the display. Here’s a demo of me walking towards the display (thus playing the soundtrack), and walking away from it (stopping the soundtrack):

Here’s my code so far:

Kinect + Sound (Complete) : 

// Credits: Daniel Shiffman, Depth thresholding example

import org.openkinect.freenect.*;
import org.openkinect.processing.*;
import processing.sound.*;

Kinect kinect;

SoundFile soundfile;
//SoundFile[] files;
// Depth image
PImage depthImg;

//soundtrack playing
boolean playing = false;

// pixels to be shown
int minDepth =  60;
int maxDepth = 900;

// kinect's angle
float angle;
float L;

void setup() {
  size(1280, 480);

  kinect = new Kinect(this);
  kinect.initDepth();
  angle = kinect.getTilt();

  // blank image showing everything
  depthImg = new PImage(kinect.width, kinect.height);
  
  //load soundtrack
  soundfile = new SoundFile(this, "futuremuse.aiff");
  //files = new SoundFile[2];
  //println("Duration= " + soundfile.duration() + " seconds");
  //L=soundfile.duration();
  // for (int i = 0; i < files.length; i++) {
  //  files[i] = new SoundFile(this, (i+1) + ".aif");
  //}


 
}

void draw() {
  // draw the raw image
  image(kinect.getDepthImage(), 0, 0);

  // threshold the depth image
  depthImg.loadPixels();
  int[] rawDepth = kinect.getRawDepth();
  int counter = 0;
  for (int i=0; i < rawDepth.length; i++) {
    
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      depthImg.pixels[i] = color(255);
      
      counter++;
      
    } else {
      depthImg.pixels[i] = color(0);
    }
  }
  
  // enough in-range pixels means someone is close to the display, so play the soundtrack
  if (counter > 10000){
    if (!playing)
    {
        soundfile.play();
        playing = true;
    }
  }
  else
  {
    if (playing)
    {
       soundfile.stop();
       playing = false;
    }
  }

  // Draw the thresholded image
  depthImg.updatePixels();
  image(depthImg, kinect.width, 0);

  fill(0);
  text("TILT: " + angle, 10, 20);
  text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);

}

//// Adjust the angle and the depth threshold min and max
//void keyPressed() {
//  if (key == CODED) {
//    if (keyCode == UP) {
//      angle++;
//    } else if (keyCode == DOWN) {
//      angle--;
//    }
//    angle = constrain(angle, 0, 30);
//    kinect.setTilt(angle);
//  } else if (key == 'a') {
//    minDepth = constrain(minDepth+10, 0, maxDepth);
//  } else if (key == 's') {
//    minDepth = constrain(minDepth-10, 0, maxDepth);
//  } else if (key == 'z') {
//    maxDepth = constrain(maxDepth+10, minDepth, 2047);
//  } else if (key =='x') {
//    maxDepth = constrain(maxDepth-10, minDepth, 2047);
//  }
//}

Processing Snapchat Simulator (Incomplete):

import processing.video.*;

Capture cam;

PImage img;

void setup(){
 size(1280,720);
 img = loadImage("snap4.png");
 String[] cameras = Capture.list();
 printArray(cameras);
 cam = new Capture(this, width, height); // or: new Capture(this, cameras[3]) to pick a specific camera
 cam.start();
}

void draw(){
  if (cam.available()){
    cam.read();
  }
 image(cam,0,0); 
 image(img, 0, 0);
}

//scale(-1,-1);
//image(cam,-width,-height);


//import processing.video.*;

//PImage img;

//PImage backgroundImage;

//Capture video;

//void setup() {
//  size(720,1182);
//  img = loadImage("snap.png");
//  video = new Capture(this, width, height);
//  video.start();
//  //backgroundImage = createImage(video.width, video.height, RGB);
//}

//void draw() {
//  //image(img, 0, 0);
//}

iPhone + Motor (needs IR sensor): 

#include <Servo.h>

Servo myservo;  // create servo object to control a servo
// twelve servo objects can be created on most boards

int pos = 0;    // variable to store the servo position

void setup() {
  myservo.attach(10);  // attaches the servo on pin 10 to the servo object
}

void loop() {
  for (pos = 0; pos <= 20; pos += 1) { // goes from 0 degrees to 20 degrees
    // in steps of 1 degree
    myservo.write(pos);              // tell servo to go to position in variable 'pos'
    delay(25);                       // waits 25 ms for the servo to reach the position
  }
//  for (pos = 180; pos >= 0; pos -= 1) { // goes from 180 degrees to 0 degrees
//    myservo.write(pos);              // tell servo to go to position in variable 'pos'
//    delay(15);                       // waits 15ms for the servo to reach the position
//  }
}

 

 

Final Project Prototype

After a little freak-out moment when the whole project stopped working after rewiring some buttons (which was solved by moving them off Pin 1 and starting at Pin 2 instead), we managed to bring it back to life relatively fast and finished up the prototype. We finished all of the classes, so they all draw different shapes (based on the emotion), scale them according to the volume input, and change their path and speed of movement based on the potentiometer value (which reflects the energy level of the respondent).

What also gave us a little (or a LOT of) trouble was making a shape other than rect() or ellipse() (which have a single center point that is easy to manipulate) and then moving and scaling that shape while maintaining the same proportions. pushMatrix() and popMatrix() were the solution, together with the scale() function inside the matrix block; a minimal sketch of the approach is below.
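The sketch below shows the pattern with made-up names (drawBlob(), x, y, and s are illustrative, not our actual class code): translate to the shape’s position, scale inside the matrix, and draw the custom shape around (0, 0) so it keeps its proportions.

float x = 200, y = 150;  // position of the shape (illustrative)
float s = 1.5;           // scale factor, e.g. mapped from the volume input

void setup() {
  size(400, 300);
}

void draw() {
  background(255);
  pushMatrix();          // isolate the transform so it doesn't affect other shapes
  translate(x, y);       // move the origin to the shape's position
  scale(s);              // a uniform scale keeps the proportions
  drawBlob();            // custom shape drawn around (0, 0)
  popMatrix();           // restore the previous coordinate system
}

// a non-rect, non-ellipse shape defined around its own origin
void drawBlob() {
  beginShape();
  vertex(-20, -30);
  vertex(25, -15);
  vertex(30, 20);
  vertex(0, 35);
  vertex(-30, 10);
  endShape(CLOSE);
}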

Now that the code is fully working in its intended essence, we still need to make it more visually attractive so it better represents the data. Also, the sizes of the objects are a little uneven, so we need to adjust the mapped values (and highlight the one currently being adjusted so that the respondent can see it clearly). User testing, however, is what will get us closest to the most suitable range of values.

We also finalized the design for the interface and hopefully will get to building it tomorrow so that we can get to the user testing stage as fast as possible. 

Final Prototype by Nick and me

So far we have improved the accuracy of the Kinect, although we still want to add more conditions to make it even more accurate. In the beginning we used an algorithm that simply set up depth thresholds for aiming and shooting; after realizing that this was not enough for the accuracy we wanted, we added a condition so that the aiming box is created at the pixel with the closest distance to the Kinect sensor within the aiming or shooting threshold. The code is attached below. We are now working on destroying targets once the user shoots at them.

 

We have also been working on what we think will be the toughest challenge: creating the main menu and the buttons that store the scores, so that the screen can show who has been the best Iron Man of all the participants during the IM showcase. That code is attached below as well.

import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;
PImage img;
float angle;
Kinect kinect;
boolean shoot;
float minShoot;
float minThresh;
float maxThresh;

void setup(){
  size(512,484);
  kinect = new Kinect(this);
  kinect.initDepth();
  kinect.initVideo();
  img = createImage(kinect.width, kinect.height,RGB); 
  angle = kinect.getTilt();
  shoot = false;
  minShoot = 610;
  minThresh = 710;
  maxThresh = 800;
}

void draw(){
  background(0);
  shoot = false;
  img.loadPixels();
  PImage dImg = kinect.getDepthImage();
  image(dImg,0,0);
  int[] depth = kinect.getRawDepth();
  float sumX = 0;
  float sumY = 0;
  float totalPixels = 0;
  
  int record = 4500;
  int rx = 0;
  int ry = 0;
  for (int x = 0; x<kinect.width; x++){
    for (int y =0; y< kinect.height; y++){
      int offset = x + y*kinect.width;
      int d = depth[offset];
      if(d>=minThresh && d<maxThresh){
        img.pixels[offset] = color(255,0,150);
        sumX+= x;
        sumY+=y;
        totalPixels++;
        if(d<record){
          record = d;
          rx = x;
          ry = y;
        }
      }
      
      else if(d>=minShoot && d<minThresh){
        img.pixels[offset] = color(255,255,0);
        sumX+= x;
        sumY+=y;
        totalPixels++;
        shoot = true;
         if(d<record){
          record = d;
          rx = x;
          ry = y;
        }
        
      }

      else{
      img.pixels[offset] = dImg.pixels[offset] ;
    }
    }
  }
  
  img.updatePixels();
  image(img,0,0);
  
  float avgX = sumX/totalPixels;
  float avgY =sumY/totalPixels;
  if(shoot == true){
  fill(255,0,0);
  }
  else{
    fill(150,0,255);
  }
  ellipse(rx,ry,64,64); 
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      angle++;
    } else if (keyCode == DOWN) {
      angle--;
    }
    angle = constrain(angle, 0, 30);
    kinect.setTilt(angle);  
  }
}
import controlP5.*;

PImage menupic;
PImage bpic;
PImage spic;
int state = 0;
ControlP5 a;
ControlP5 c;
ControlP5 d;
String output;
PFont b;
PFont yo; //for Scoreboard text input
final int menu = 0;
final int game = 1;
final int scoreboard = 2;
final int options = 3;
final int quit = 4;


void setup(){
  size(1920,1030);
  menupic = loadImage("menu.jpg"); //menu picture
  bpic = loadImage("background.jpg"); //background picture
  spic = loadImage("scoreboard.jpg"); //scoreboard picture
  
  //MENU INTERACTIONS
  a = new ControlP5(this); //a is for menu
  b = createFont("Verdana",30); //font for menu
  yo = createFont("Verdana",15); //font for scoreboard form
  a.addButton("Play") //name
    .setPosition(100,150) //position
    .setSize(250,200) //size
    .setFont(b) //font
    ;
  a.addButton("Scoreboard")
    .setPosition(1500,150)
    .setSize(250,200)
    .setFont(b)
    ;
  a.addButton("Options")
    .setPosition(100,750)
    .setSize(250,200)
    .setFont(b)
    ;
  a.addButton("Quit")
    .setPosition(1500,750)
    .setSize(250,200)
    .setFont(b)
    ;
    //C = Back Button
  c= new ControlP5(this); //c is for back button
  c.addButton("Back")
    .setPosition(1500,750)
    .setSize(250,200)
    .setFont(b)
    ;
    //D = Scoreboard
  d= new ControlP5(this); //d is for scoreboard screen
  
  d.addTextfield("Insert Name Here").setPosition(200,200).setSize(200,50).setAutoClear(false).setFont(yo);
  d.addBang("Submit").setPosition(400,200).setSize(200,50).setFont(yo);
    
}

void draw(){
  image(menupic,0,0);
  if(state == 0){ //Menu
    runMenu();
    c.hide();
    a.show();
    d.hide();
  }
  else if(state == 1){ //Game
    runGame();
    a.hide();
    //c.hide();
    c.show();
    d.hide();
  }
  else if(state == 2){ //Scoreboard
    image(spic,0,0);
    c.show();
    a.hide();
    d.show();
  }
  else if(state ==4){ //Quit
    exit();
  }
}
void runMenu(){
  textSize(40);
  text("MENU",50,100);
  
}
void runGame(){
  image(bpic,0,0);
}

void Play(){
  state = 1;
}
void Scoreboard(){
  state = 2;
}
void Back(){
  state = 0;
}

void Quit(){
  state = 4;
}

void Submit(){ //Submit form for text input in Scoreboard
  output = d.get(Textfield.class,"Insert Name Here").getText();
  println(output);
}

 

WaterBox Prototype Stage

The prototype stage of the final project has been going smoothly. I have tested the Kinect v2 code, with additional calibrations, on a container that has water in it, and the Kinect was able to successfully capture the ripples and waves. Since I am catching only the blue pixels, I have placed blue acrylic at the bottom of the water container as a prototype to test it out.

Also, for the sound part, I have added code that I got from Aaron that uses the Processing Sound library. In the current code I calculate the minimum number of pixels shown (to account for possible calibration errors such as reflections from the container), and I use the minimum and maximum pixel counts to map the current pixel count to an amplitude between 0 and 1. The code creates an effect where the more ripples there are in a certain section, the louder the sound becomes.
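That code is not reproduced here, but a minimal sketch of the mapping, with illustrative file names and calibration values, could look like this:

import processing.sound.*;

SoundFile loop1;
int minCount = 800;    // pixel count of a still surface (placeholder calibration value)
int maxCount = 12000;  // pixel count of a very rough surface (placeholder calibration value)

void setup() {
  size(640, 480);
  loop1 = new SoundFile(this, "loop1.aiff"); // hypothetical file name
  loop1.loop();
}

void draw() {
  int pixelCount = countRipplePixels();                  // stand-in for the Kinect pixel count
  float amp = map(pixelCount, minCount, maxCount, 0, 1); // more ripples -> louder
  loop1.amp(constrain(amp, 0, 1));
}

// placeholder: in the real sketch this number comes from the Kinect depth loop
int countRipplePixels() {
  return int(map(mouseX, 0, width, minCount, maxCount));
}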

After Aaron introduced me to the idea of capacitive touch sensing (the so-called Touche for Arduino: Advanced Touching Sensor), I have decided to include it as a part of the project and to change how the final version is going to work. The capacitive touch sensor will be the trigger for initiating different loops/sounds: for instance, one finger plays one sound, two fingers play two sounds together (or another sound alone), and so on. The pixel count from the waves detected by the Kinect v2 will then serve as the value for the reverb applied to all the loops.
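A minimal sketch of that trigger logic, assuming the Touche gesture class arrives from the Arduino as an integer (0 = no touch, 1 = one finger, 2 = two fingers) and that the Kinect pixel count is already available; the file names, thresholds, and the use of the Sound library’s Reverb are all assumptions:

import processing.sound.*;

SoundFile loopA, loopB;
Reverb reverb;     // assumption: the Sound library's Reverb effect carries the wave mapping
int gesture = 0;   // 0 = no touch, 1 = one finger, 2 = two fingers (updated from the Touche sensor over serial)

void setup() {
  size(200, 200);
  loopA = new SoundFile(this, "loopA.aiff"); // hypothetical file names
  loopB = new SoundFile(this, "loopB.aiff");
  reverb = new Reverb(this);
  reverb.process(loopA);   // in the real sketch every loop would be routed through the reverb
}

void draw() {
  // trigger loops based on the touch gesture: one finger -> loopA, two fingers -> both
  if (gesture >= 1 && !loopA.isPlaying()) loopA.loop();
  if (gesture >= 2 && !loopB.isPlaying()) loopB.loop();
  if (gesture == 0) { loopA.stop(); loopB.stop(); }

  // the Kinect wave pixel count sets how wet the reverb is (0 = dry, 1 = fully wet)
  float wet = constrain(map(wavePixelCount(), 800, 12000, 0, 1), 0, 1);
  reverb.wet(wet);
}

// stand-in for the pixel count coming from the Kinect depth loop
int wavePixelCount() { return int(map(mouseY, 0, height, 800, 12000)); }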

Below is a prototype video of using the Touche advanced touch sensor library with water and fingers as the trigger for different music loops:

Prototyping My Final Project

What I have now is definitely somewhat of a prototype, although I think I might have to work extra hard this weekend to have everything done by Monday; I do not want to pull an all-nighter on Wednesday and panic before the showcase.

Physical Aspect

I made a prototype of the physical aspect of it. It looks quite nice, but I am going to remake it so the pieces match better and to improve the overall aesthetics of the box.

Coding Aspect

The code I have right now is a major improvement over what I had back on Sunday. I can make the PNG images appear when scanning the markers, and I have managed to draw every marker separately. I also managed to load the factory image (the only one I have made into a PNG so far) and assign it to a marker (marker number 4).

Things I will need help with: how can I make it so the building stays and does not disappear once a marker is removed from the camera? Do I have to make a boolean variable for every single marker? One possible approach is sketched below.
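I have not settled on this yet, but one possible answer, assuming the NyAR4psg MultiMarker used in the AR sketch earlier in these posts and hypothetical image/marker names, is a latch per marker: a boolean that flips to true the first time the marker is detected and stays true, so the building keeps being drawn afterwards.

import jp.nyatla.nyar4psg.*;

MultiMarker nya;                               // initialized in setup() as in the AR sketch above
int numMarkers = 5;                            // assumption: five markers
boolean[] seen = new boolean[numMarkers];      // one latch per marker
PImage[] buildingImg = new PImage[numMarkers]; // hypothetical building PNGs (factory at index 4, etc.)

void drawBuildings() {
  for (int i = 0; i < numMarkers; i++) {
    if (nya.isExist(i)) {
      seen[i] = true;                          // latch: once a marker has been seen, it stays "on"
    }
    if (seen[i] && buildingImg[i] != null) {
      image(buildingImg[i], i * 100, 0);       // position per marker is illustrative
    }
  }
}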

More Progress

I’ve now connected the tilt sensors to control the Processing screen. Because I don’t have my EL wires yet, I’m substituting them with strings, and the strings are attached to placeholders (the tripod in the IM room). The tilt sensors are attached to the strings with resistors right now, and I’ll probably end up using the same attachment for the show because the resistors are small enough that people can’t see them. I’ve also added music to the strumming of the strings (mainly guitar notes). The tilt sensors aren’t too sensitive, which works perfectly because very light touches of the string won’t do much. A rough sketch of the strum-to-note logic is below.
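This is only a sketch of the idea, assuming the Arduino sends one comma-separated tilt flag per string over serial; the port index, baud rate, and note file names are placeholders:

import processing.serial.*;
import processing.sound.*;

Serial port;
SoundFile[] notes = new SoundFile[3];      // one guitar note per string for now
boolean[] wasTilted = new boolean[3];

void setup() {
  size(200, 200);
  port = new Serial(this, Serial.list()[0], 9600);       // port index is an assumption
  port.bufferUntil('\n');
  for (int i = 0; i < notes.length; i++) {
    notes[i] = new SoundFile(this, "note" + i + ".aiff"); // hypothetical file names
  }
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  int[] tilt = int(split(trim(line), ','));               // e.g. "0,1,0": one flag per tilt sensor
  for (int i = 0; i < min(tilt.length, notes.length); i++) {
    boolean tilted = tilt[i] == 1;
    if (tilted && !wasTilted[i]) notes[i].play();          // play only at the moment of the strum
    wasTilted[i] = tilted;
  }
}

void draw() {
  background(0);
}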

For the rest of the project, I mainly need the room and the EL wires to come in so I can position everything correctly. I’ll also be adding 3 more wires, so 3 more tones (acoustic guitar notes: D, A, E, G, B, low E, found on SoundCloud).

Final Project Prototype

I have continued to work on my final project. However, after attempting to solder additional components (an FSR) to a circuit board, my project no longer uses any soldered components for the time being. Aaron advised me to use capacitive touch instead of an FSR with the stuffed animal; I tested this yesterday and it works fine and responds well to touch. I eventually plan on sewing it to the animal as I move into finalizing the project this weekend.

I have refined my code slightly and have fixed a few technical issues revolving around changing stages and some of the animations.

I plan on adding another FSR, which I will attach to a block of wood. I will also create a tool (most likely a hammer) with a foam cushion at the end. If the FSR’s reading goes above a certain value, the player will harvest the resources; a rough sketch of that check is below. I plan on going into the scene shop tonight to quickly build these tools and test the FSR.
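A rough sketch of the threshold check, assuming the Arduino streams the FSR’s analog reading over serial; the port index, baud rate, and threshold are placeholders:

import processing.serial.*;

Serial port;
int fsrValue = 0;
int hitThreshold = 600;   // placeholder: tuned so only a real hammer hit counts
boolean wasHit = false;

void setup() {
  size(200, 200);
  port = new Serial(this, Serial.list()[0], 9600);  // port index is an assumption
  port.bufferUntil('\n');
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  fsrValue = int(trim(line));          // one analog reading (0-1023) per line
}

void draw() {
  background(0);
  boolean hit = fsrValue > hitThreshold;
  if (hit && !wasHit) {
    harvestResource();                 // trigger only once per hit
  }
  wasHit = hit;
}

void harvestResource() {
  println("resource harvested");       // stand-in for the actual game update
}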

The game is playable and a few of my friends have tested it out. However, some refinements still need to be made to the progression.

 

Here is a video of the prototype in action:

Three hardest parts of my final project: Coding, coding and coding…

My project is the claw arcade game: you pull out a thing on the screen, and an actual object falls behind the screen. So the hardest part is making that claw game on the computer. I have created a prototype of the game (it’s too easy for now; I will make it harder, of course). The claw picks up an object within its range and holds it until the claw is opened. Sounds basic, but it is actually hours and hours of coding 🙂 Here is the result (ignore Tori hating on Kyle in the background), followed by a minimal sketch of the pick-up logic:
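This is only an illustration of the pick-up idea (not my actual game code): the claw follows the mouse, the space bar toggles it open or closed, and an object within range of a closed claw sticks to it until the claw opens again.

float objX = 200, objY = 350;     // the object lying at the bottom of the machine
boolean clawClosed = false;
boolean holding = false;
float grabRange = 40;             // how close the claw has to be to pick the object up

void setup() {
  size(400, 400);
}

void draw() {
  background(30);
  float clawX = mouseX;           // the claw follows the mouse in this sketch
  float clawY = mouseY;

  // pick up: a closed claw near the object grabs it
  if (clawClosed && !holding && dist(clawX, clawY, objX, objY) < grabRange) {
    holding = true;
  }
  // while held, the object moves with the claw; opening the claw releases it
  if (holding) {
    if (clawClosed) {
      objX = clawX;
      objY = clawY + 20;
    } else {
      holding = false;            // in the real game the object would fall from here
    }
  }

  fill(clawClosed ? color(255, 0, 0) : color(0, 255, 0));
  ellipse(clawX, clawY, 30, 30);  // the claw
  fill(255, 200, 0);
  ellipse(objX, objY, 25, 25);    // the object
}

void keyPressed() {
  if (key == ' ') clawClosed = !clawClosed;  // space toggles the claw open/closed
}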