
User Testing

I had Dhabia, Nick, and Pangna user test my project. The key takeaways I got were the following:

  • Adding signifiers or instructions of some sort so users know when to pet the animal or harvest resources, and how to do it.
  • Featuring my (somewhat vague) instructions in a more prominent location on the screen.
  • Fixing boundary issues between the player and the resources and animals.
  • Addressing some issues with the touch sensor, possibly by making the hit location of the “hammer” smaller.

I think the most important thing I can do for the user experience is improve my instructions. People also did not engage with the game the way I expected: I wanted users to pet the animals and then harvest the resources at each level, so they could fully see how the environment changes throughout the game; instead, they often just wanted to complete the game as fast as they could.

User testing was extremely useful because a lot of what I have been doing for the past few days has become very narrow. It has been really hard to step outside of my biases, my experiences, and my intentions for the game, and as a result I neglected instructions and other very basic things. I am lucky because there is very little left to do for the project to be complete by Thursday, so I can really focus on fine-tuning the little things before the show.

Here are the videos of their user testing:

User Testing

So… I’m definitely not close to finishing my game, and I know it’s still missing a lot of things, but I decided to test what I have so far anyway. All of my users said the same things, which I kind of already knew:

1) Although collecting the droplets is pretty intuitive, there should be an initial start screen with instructions/info and a “press start” button.

2) The timer should count down instead of up (right now it gets confused for a score). A separate score could also be included.

3) There needs to be a “try again” or “you win” kind of ending, because people can’t actually tell how they’ve done. (A rough sketch of these changes is below.)
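
Here is a minimal sketch of how these three changes could fit together as a simple state machine. The names (gameDuration, score, the win condition) are placeholders, not my actual game code:

// Minimal state-machine sketch: start screen, countdown timer, and end screen.
// gameDuration, score, and timeLeft are placeholder names, not the real game code.
int state = 0;            // 0 = start screen, 1 = playing, 2 = game over
int gameDuration = 30;    // seconds per round
int startMillis;          // when the round began
int score = 0;            // droplets collected (updated elsewhere in the real game)

void setup() {
  size(640, 480);
  textAlign(CENTER, CENTER);
  textSize(32);
}

void draw() {
  background(0);
  if (state == 0) {
    text("Collect the droplets!\nPress any key to start", width/2, height/2);
  } else if (state == 1) {
    int timeLeft = gameDuration - (millis() - startMillis) / 1000;
    text("Time: " + timeLeft + "   Score: " + score, width/2, 40);
    // ...game drawing and droplet collection would go here...
    if (timeLeft <= 0) {
      state = 2;
    }
  } else {
    String msg = (score >= 10) ? "You win!" : "Try again?";
    text(msg + "\nFinal score: " + score + "\nPress any key", width/2, height/2);
  }
}

void keyPressed() {
  if (state == 0) {
    score = 0;
    startMillis = millis();
    state = 1;
  } else if (state == 2) {
    state = 0;   // back to the start screen
  }
}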

User Testing

While testing my final project, I ran into a few issues:

1. The soundtrack that plays as soon as the user steps in front of the exhibit is longer than the actual interaction.

Solution: Make the soundtracks shorter and test them again.

2. Two users began looking at the exhibit from the left, whereas one began from the right side. This is an issue because the left side is where the “first” soundtrack plays.

Solution: Fix it so that it doesn’t matter which direction the user comes from.

3. The user assumes there’s something they can touch, play with, etc. whereas the only actual interactions are based on sensors, distance, and body motion.

Solution: Improve the iPhone exhibit and add a more interactive component. The motor and sensor controlling the phone from under a glass cover is not a very straightforward interaction, and two users did not realize what was going on, or that their distance from the sensor was controlling the motor.

4. For someone who doesn’t focus on the background soundtrack, it is not clear what exactly is happening, or what the context of the whole “Future Museum” exhibit thing is. There need to be more visual cues.

Solution: Provide some form of description, or instructions? (Not sure about this one yet)

5. The webcam on the ‘Snapchat simulator’ kept lagging, and the program was running slowly. Also, the camera was flipped and a little too zoomed in, so it didn’t feel very natural or selfie-like.

Solution: I think I’ll be able to, with some help, fix the camera flip situation. However, I was told that Processing doesn’t work very fast with cameras and video, so it is possible that I won’t be able to significantly improve the speed. I’ll have to ask Jack for help.
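
For the flip, one common approach in Processing is to mirror the camera image horizontally when drawing it. A minimal sketch of that idea (the camera choice and sketch size here are placeholders, not the actual booth code):

import processing.video.*;

Capture cam;

void setup() {
  size(1280, 720);
  cam = new Capture(this, width, height);  // default camera; adjust as needed
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  // mirror horizontally so the feed behaves like a selfie camera
  pushMatrix();
  scale(-1, 1);            // flip the x-axis
  image(cam, -width, 0);   // draw shifted back into view
  popMatrix();
}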

Here’s a video demo:

https://youtu.be/L9w7QQ3ffcQ

Nisala and Jana: User Testing

Our user testing pointed out a couple of imperfections that can, however, be fixed relatively easily:

  • the users do not know they are supposed to hold the button to scream – the instructions only said “Scream as loud as you are stressed”.
  • also, the instructions overall need to be more visible and readable, and some need to be more clearly formulated (we wanted to print them out and paste them on only after user testing anyway, exactly for this reason)
  • the visuals on the screen are a little difficult to see, and it is hard to distinguish what is happening. Solution for this: adjust the colors and the mapped values that affect the speed and radius of the path (a small mapping example is sketched after this list). We need to make it clearer that people are submitting data, all of which is visualized.
  • the program is getting a little laggy with more inputs – using an iMac with a larger monitor instead of a laptop is a potential solution
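
For the mapped values, here is a minimal sketch of the kind of adjustment we mean, assuming a 10-bit potentiometer reading and placeholder output ranges (the real sketch reads these inputs from the Arduino):

// Placeholder inputs: the real sketch gets potValue and volume from the Arduino over serial.
float potValue = 512;   // 0–1023 from the potentiometer (energy level)
float volume   = 0.4;   // 0.0–1.0 from the microphone (scream loudness)

void setup() {
  size(800, 800);
}

void draw() {
  background(255);
  // map the raw inputs onto ranges that read well on screen
  float speed      = map(potValue, 0, 1023, 0.5, 5.0);   // path speed
  float radius     = map(potValue, 0, 1023, 50, 300);    // path radius
  float shapeScale = map(volume, 0, 1, 0.5, 2.0);        // shape size from volume
  float x = width/2  + cos(frameCount * 0.01 * speed) * radius;
  float y = height/2 + sin(frameCount * 0.01 * speed) * radius;
  ellipse(x, y, 40 * shapeScale, 40 * shapeScale);
}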

User Tests feedback post by Nick and Me

The first version of the video game was done, and we tested it with three users.

The feedback given by user 1 was: make the cursor brighter, avoid the mirror effect with the cursor, have fewer targets that move faster, give the targets random motion, and add instructions so the user knows how to play.

The feedback given by user 2 was: try to add more animations and special effects so it feels more like a video game, make the cursor brighter, and avoid the mirror effect.

The feedback given by user 3 (Jack) was: modify the algorithm (right now I am detecting the closest object to the Kinect within certain thresholds, but Jack told me it would be a better idea to detect motion by comparing different frames), make the cursor more noticeable, and declare the animations in setup() so the code runs more smoothly.

 

Conclusion and observations of the user tests:

  • We need to improve the design, because users take time to figure out what is going on and it is still not intuitive.
  • We will try the algorithm suggested by Jack to see if the movement detection accuracy improves (a rough sketch of the frame-comparison idea is below).
  • We will follow Jack’s advice and properly add the animations needed to improve the experience of the game.
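
Here is a rough sketch of the frame-comparison idea, assuming the Open Kinect for Processing library; the motionThreshold value is a placeholder that would still need tuning:

import org.openkinect.freenect.*;
import org.openkinect.processing.*;

Kinect kinect;
int[] previousDepth;        // raw depth values from the last frame
int motionThreshold = 30;   // placeholder: minimum per-pixel depth change to count as motion

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
  previousDepth = new int[kinect.width * kinect.height];
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();
  float sumX = 0, sumY = 0, count = 0;
  for (int x = 0; x < kinect.width; x++) {
    for (int y = 0; y < kinect.height; y++) {
      int offset = x + y * kinect.width;
      // a pixel "moved" if its depth changed enough since the previous frame
      if (abs(depth[offset] - previousDepth[offset]) > motionThreshold) {
        sumX += x;
        sumY += y;
        count++;
      }
      previousDepth[offset] = depth[offset];
    }
  }
  if (count > 0) {
    // the center of the moving region becomes the cursor position
    fill(150, 0, 255);
    ellipse(sumX / count, sumY / count, 64, 64);
  }
}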

WaterBox Final Prototype & User-Testing

WaterBox is a musical loop station that, depending on the user’s physical interaction with the water, plays different loops either individually or simultaneously. The track type also changes depending on the roughness of the waves and ripples on the water surface. The final prototype uses a Kinect v2 to capture the waves on the water surface through its depth data, and a Touche Advanced Touch Sensor with an Arduino as a capacitive sensor for interaction with the water. Through WaterBox, I wanted to share the rich feeling of interacting with water and the fun of creating different music with the physical motion of your hands in the water.

The calibration of the depth and of the pixel counts that trigger the change of track types was the most difficult part, because of different lighting conditions and positions of the user’s arms. The final version will be carefully calibrated in the space, with supplemental lighting placed accurately to minimize the errors visible in some of the user-testing videos. I also had a difficult time with the Touche library for the capacitive sensor code, calibrating it and separating the different user inputs.

The following videos show user testing of the prototype:

The main feedback/questions that I received were the following:

  • What happens if there is a user who has a long blue shirt?

For the above question, I have decided to edit a portion of the code (the minThresh) so that any pixels above the box height are ignored, which should rule out this kind of error. A rough sketch of the idea is below.
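
Here is a simplified sketch of the pixel-count logic with that minThresh guard, assuming the Open Kinect for Processing library for the Kinect v2; all threshold values are placeholders that still need to be calibrated in the space:

import org.openkinect.processing.*;

Kinect2 kinect2;

// Placeholder values – the real ones depend on the box height, lighting, and Kinect position.
int minThresh = 500;       // mm: anything closer than this (above the box rim, e.g. a long sleeve) is ignored
int maxThresh = 900;       // mm: roughly the calm water surface
int waveThreshold = 4000;  // number of "disturbed" pixels needed to switch the track type

void setup() {
  size(512, 424);
  kinect2 = new Kinect2(this);
  kinect2.initDepth();
  kinect2.initDevice();
}

void draw() {
  background(0);
  image(kinect2.getDepthImage(), 0, 0);
  int[] depth = kinect2.getRawDepth();
  int disturbed = 0;
  for (int i = 0; i < depth.length; i++) {
    // skip anything above the box (closer than minThresh), count pixels in the water band
    if (depth[i] > minThresh && depth[i] < maxThresh) {
      disturbed++;
    }
  }
  // rougher waves disturb more pixels in the band, which switches the track type
  fill(255);
  if (disturbed > waveThreshold) {
    text("rough waves: switch to the intense loop", 10, 20);
  } else {
    text("calm surface: keep the ambient loop", 10, 20);
  }
}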

  • It would be good if there were some signs of the possible interactions

For this feedback, I thought of creating a page that lays out the possible interactions succinctly. For instance, a page with a diagram and the text “one finger, two fingers, the whole hand, or make waves” can provide basic instructions without giving away the whole idea of what the project is about.

Update:

<Created Signifiers to aid the interaction>

The one where the drawing comes to life – final project prototype updated

import processing.video.*;
import jp.nyatla.nyar4psg.*;

Capture cam;
MultiMarker nya;
Movie movie;

void setup() {
  size(640,480,P3D);
  colorMode(RGB, 100);
  println(MultiMarker.VERSION);
  cam = new Capture(this,640,480);
  // marker detector using the supplied camera parameter file
  nya = new MultiMarker(this,width,height,"camera_para.dat",NyAR4PsgConfig.CONFIG_PSG);
  // register the drawing itself as the marker image
  nya.addARMarker(loadImage("ar marker 5.png"),16,10,80);
  // the animation that will be drawn on top of the marker
  movie = new Movie(this, "IMG_2953.mp4");
  movie.loop();
  cam.start();
}

void movieEvent(Movie movie) {  
  movie.read();
}


void draw()
{
  if (!cam.available()) {
    return;
  }
  cam.read();
  nya.detect(cam);           // look for the marker in the current frame
  background(0);
  nya.drawBackground(cam);   // draw the camera image as the background
  if (!nya.isExist(0)) {     // marker 0 was not found in this frame
    return;
  }
  nya.beginTransform(0);     // switch to the marker's coordinate system
  fill(0,0,255);
  translate(25,35,0);        // nudge the video into place on the drawing
  //box(40);
  rotate(PI);
  image(movie, 0, 0, 50, 70); // play the video on top of the marker
  nya.endTransform();
}

I started with the idea of using AR markers placed on the side of the page and having the drawing projected onto each page, but then I decided that I really wanted to focus on creating the feeling of each drawing coming to life, so I was determined to find a way to use the drawing itself as the marker. Through research and trial and error, I finally got it working properly. I tried out different things and learnt that AR markers don’t actually need to be square – only the edge needs to be a square. So I inserted the final drawing marker into the data folder and changed the code, edge percentage, positioning, and size. I used example AR code that displayed a cube on top of the marker and changed it to play a video file on top of the marker instead.

However, I now need to test this out with a webcam and projector to see whether it works to have the full drawing animation projected on top of the drawing, or whether I need to create a different video file with only the moving elements to be projected. I plan to include around 5 drawings and put them on different pages of a hard-paper book to avoid bending of the paper, which would affect the marker detection and projection.

 

Final Prototype?

Here’s what I have so far, including the missing parts in brackets:

-One Kinect camera connected to the display’s soundtrack. (It will be easy to add the second Kinect for the second display, seeing as I already have the code).

-A soundtrack for my display. (I would like to re-record it using professional equipment for better quality).

-Half of my Snapchat simulator/booth sketch. (I still need to add the filters and map them onto the user’s face.)

-A functioning stylus for the iPhone connected to a motor. (I still need to connect it to my IR sensor)

Since I don’t have the pedestals yet, I printed out sketches just to show how the display would be set up:

 

So for example, this is how the two pedestals would look next to each other.

The sound is controlled by the movements tracked by the Kinect camera, which measures the person’s distance from the display. Here’s a demo of me walking towards the display (which starts the soundtrack) and walking away from it (which stops it):

Here’s my code so far:

Kinect + Sound (Complete) : 

// Credits: Daniel Shiffman, Depth thresholding example

import org.openkinect.freenect.*;
import org.openkinect.processing.*;
import processing.sound.*;

Kinect kinect;

SoundFile soundfile;
//SoundFile[] files;
// Depth image
PImage depthImg;

//soundtrack playing
boolean playing = false;

// pixels to be shown
int minDepth =  60;
int maxDepth = 900;

// kinect's angle
float angle;
float L;

void setup() {
  size(1280, 480);

  kinect = new Kinect(this);
  kinect.initDepth();
  angle = kinect.getTilt();

  // blank image showing everything
  depthImg = new PImage(kinect.width, kinect.height);
  
  //load soundtrack
  soundfile = new SoundFile(this, "futuremuse.aiff");
  //files = new SoundFile[2];
  //println("Duration= " + soundfile.duration() + " seconds");
  //L=soundfile.duration();
  // for (int i = 0; i < files.length; i++) {
  //  files[i] = new SoundFile(this, (i+1) + ".aif");
  //}


 
}

void draw() {
  // draw the raw image
  image(kinect.getDepthImage(), 0, 0);

  // threshold the depth image
  int[] rawDepth = kinect.getRawDepth();
  int counter =0;
  for (int i=0; i < rawDepth.length; i++) {
    
    if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
      depthImg.pixels[i] = color(255);
      
      counter++;
      
    } else {
      depthImg.pixels[i] = color(0);
    }
  }
  
  // enough pixels in the threshold band means someone is standing close: play the soundtrack
  if (counter > 10000){
    if (!playing)
    {
        soundfile.play();
        playing = true;
    }
  }
  else
  {
    if (playing)
    {
       soundfile.stop();
       playing = false;
    }
  }

  // Draw the thresholded image
  depthImg.updatePixels();
  image(depthImg, kinect.width, 0);

  fill(0);
  text("TILT: " + angle, 10, 20);
  text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);

}

//// Adjust the angle and the depth threshold min and max
//void keyPressed() {
//  if (key == CODED) {
//    if (keyCode == UP) {
//      angle++;
//    } else if (keyCode == DOWN) {
//      angle--;
//    }
//    angle = constrain(angle, 0, 30);
//    kinect.setTilt(angle);
//  } else if (key == 'a') {
//    minDepth = constrain(minDepth+10, 0, maxDepth);
//  } else if (key == 's') {
//    minDepth = constrain(minDepth-10, 0, maxDepth);
//  } else if (key == 'z') {
//    maxDepth = constrain(maxDepth+10, minDepth, 2047);
//  } else if (key =='x') {
//    maxDepth = constrain(maxDepth-10, minDepth, 2047);
//  }
//}

Processing Snapchat Simulator (Incomplete):

import processing.video.*;

Capture cam;

PImage img;

void setup(){
  size(1280,720);
  img = loadImage("snap4.png");
  String[] cameras = Capture.list();
  printArray(cameras);
  // use the listed camera at the sketch size
  // (previously a second Capture call overwrote the first one)
  cam = new Capture(this, width, height, cameras[3]);
  cam.start();
}

void draw(){
  if (cam.available()){
    cam.read();
  }
  image(cam, 0, 0);  // live camera feed
  image(img, 0, 0);  // overlay image drawn on top of the feed
}

//scale(-1,-1);
//image(cam,-width,-height);


//import processing.video.*;

//PImage img;

//PImage backgroundImage;

//Capture video;

//void setup() {
//  size(720,1182);
//  img = loadImage("snap.png");
//  video = new Capture(this, width, height);
//  video.start();
//  //backgroundImage = createImage(video.width, video.height, RGB);
//}

//void draw() {
//  //image(img, 0, 0);
//}

iPhone + Motor (needs IR sensor): 

#include <Servo.h>

Servo myservo;  // create servo object to control a servo
// twelve servo objects can be created on most boards

int pos = 0;    // variable to store the servo position

void setup() {
  myservo.attach(10);  // attaches the servo on pin 10 to the servo object
}

void loop() {
  for (pos = 0; pos <= 20; pos += 1) { // goes from 0 degrees to 20 degrees
    // in steps of 1 degree
    myservo.write(pos);              // tell servo to go to position in variable 'pos'
    delay(25);                       // waits 25 ms for the servo to reach the position
  }
//  for (pos = 180; pos >= 0; pos -= 1) { // goes from 180 degrees to 0 degrees
//    myservo.write(pos);              // tell servo to go to position in variable 'pos'
//    delay(15);                       // waits 15ms for the servo to reach the position
//  }
}

 

 

Final Project Prototype

After a little freak-out moment when the whole project stopped working after rewiring some buttons (which was solved by moving them off Pin 1 and starting at Pin 2), we managed to bring it back to life (relatively) fast and finished the prototype. We finished all of the classes, so they all draw different shapes (based on the emotion), scale them according to the volume input, and change their path and speed of movement based on the potentiometer value (which reflects the energy level of the respondent).

What also gave us a little (or a LOT) of trouble was making a shape that is not rect() or ellipse() – which have a single center point that is easy to manipulate – and then moving and scaling that shape while still maintaining the same proportions. pushMatrix() and popMatrix() were the solution, together with the scale() function inside the matrix block (a small example is below).
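
Here is a minimal example of that pushMatrix()/scale() approach; the shape and the mouse-driven scale factor are placeholders standing in for our emotion shapes and the volume input:

// Scaling a custom multi-vertex shape around its own center with pushMatrix()/scale().
float shapeScale = 1.0;

void setup() {
  size(600, 600);
}

void draw() {
  background(255);
  // pretend the scale factor comes from the microphone volume
  shapeScale = map(mouseY, 0, height, 0.5, 2.0);
  drawEmotionShape(width/2, height/2, shapeScale);
}

void drawEmotionShape(float x, float y, float s) {
  pushMatrix();          // isolate the transformations for this shape
  translate(x, y);       // move the origin to the shape's center first...
  scale(s);              // ...so scale() grows the shape around that center
  fill(200, 80, 120);
  beginShape();          // vertices are defined relative to (0, 0)
  vertex(-40, 30);
  vertex(0, -50);
  vertex(40, 30);
  vertex(0, 10);
  endShape(CLOSE);
  popMatrix();           // restore the previous transformation state
}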

Now that the code is fully working as intended, we still need to work on making it more visually attractive so it better represents the data. Also, the sizes of the objects are a little uneven, so we need to adjust the mapped values (and highlight the ones being adjusted at the moment, so that the respondent can see it clearly). User testing, however, is what will get us closest to the most suitable range of values.

We also finalized the design for the interface and hopefully will get to building it tomorrow so that we can get to the user testing stage as fast as possible. 

Final Prototype by Nick and me

So far we have improved the accuracy of the Kinect, although we still want to add more conditions to make it even more accurate. In the beginning, we used an algorithm that set up depth thresholds for aiming and shooting; after realizing that this was not accurate enough, we added a condition so that the aiming box is created at the pixel with the closest distance to the Kinect sensor within the aiming or shooting threshold. The code is attached below. We are now working on destroying targets once the user shoots at them.

 

We have also been working on what we think will be the toughest challenge: creating the main menu and the buttons, and storing the scores, so the screen can show who has been the best Iron Man of all the participants during the IM showcase. The code is attached below.

import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;
PImage img;
float angle;
Kinect kinect;
boolean shoot;
float minShoot;
float minThresh;
float maxThresh;

void setup(){
  size(512,484);
  kinect = new Kinect(this);
  kinect.initDepth();
  kinect.initVideo();
  img = createImage(kinect.width, kinect.height, RGB);
  angle = kinect.getTilt();
  shoot = false;
  minShoot = 610;    // depths in [minShoot, minThresh) count as shooting
  minThresh = 710;   // depths in [minThresh, maxThresh) count as aiming
  maxThresh = 800;
}

void draw(){
  background(0);
  shoot = false;
  img.loadPixels();
  PImage dImg = kinect.getDepthImage();
  image(dImg,0,0);
  int[] depth = kinect.getRawDepth();
  float sumX = 0;
  float sumY = 0;
  float totalPixels = 0;
  
  int record = 4500;   // closest depth seen so far (starts above any possible raw depth value)
  int rx = 0;
  int ry = 0;
  for (int x = 0; x<kinect.width; x++){
    for (int y =0; y< kinect.height; y++){
      int offset = x + y*kinect.width;
      int d = depth[offset];
      if(d>=minThresh && d<maxThresh){
        img.pixels[offset] = color(255,0,150);
        sumX+= x;
        sumY+=y;
        totalPixels++;
        if(d<record){
          record = d;
          rx = x;
          ry = y;
        }
      }
      
      else if(d>=minShoot && d<minThresh){
        img.pixels[offset] = color(255,255,0);
        sumX+= x;
        sumY+=y;
        totalPixels++;
        shoot = true;
         if(d<record){
          record = d;
          rx = x;
          ry = y;
        }
        
      }

      else{
      img.pixels[offset] = dImg.pixels[offset] ;
    }
    }
  }
  
  img.updatePixels();
  image(img,0,0);
  
  float avgX = sumX/totalPixels;
  float avgY =sumY/totalPixels;
  if(shoot == true){
  fill(255,0,0);
  }
  else{
    fill(150,0,255);
  }
  ellipse(rx,ry,64,64); 
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP) {
      angle++;
    } else if (keyCode == DOWN) {
      angle--;
    }
    angle = constrain(angle, 0, 30);
    kinect.setTilt(angle);  
  }
}
// --- Main menu / scoreboard sketch ---
import controlP5.*;

PImage menupic;
PImage bpic;
PImage spic;
int state = 0;
ControlP5 a;
ControlP5 c;
ControlP5 d;
String output;
PFont b;
PFont yo; //for Scoreboard text input
final int menu = 0;
final int game = 1;
final int scoreboard = 2;
final int options = 3;
final int quit = 4;


void setup(){
  menupic = loadImage("menu.jpg"); //menu picture
  bpic = loadImage("background.jpg"); //background picture
  spic = loadImage("scoreboard.jpg"); //scoreboard picture
  size(1920,1030);
  
  //MENU INTERACTIONS
  a = new ControlP5(this); //a is for menu
  b = createFont("Verdana",30); //font for menu
  yo = createFont("Verdana",15); //font for scoreboard form
  a.addButton("Play") //name
    .setPosition(100,150) //position
    .setSize(250,200) //size
    .setFont(b) //font
    ;
  a.addButton("Scoreboard")
    .setPosition(1500,150)
    .setSize(250,200)
    .setFont(b)
    ;
  a.addButton("Options")
    .setPosition(100,750)
    .setSize(250,200)
    .setFont(b)
    ;
  a.addButton("Quit")
    .setPosition(1500,750)
    .setSize(250,200)
    .setFont(b)
    ;
    //C = Back Button
  c= new ControlP5(this); //c is for back button
  c.addButton("Back")
    .setPosition(1500,750)
    .setSize(250,200)
    .setFont(b)
    ;
    //D = Scoreboard
  d= new ControlP5(this); //d is for scoreboard screen
  
  d.addTextfield("Insert Name Here").setPosition(200,200).setSize(200,50).setAutoClear(false).setFont(yo);
  d.addBang("Submit").setPosition(400,200).setSize(200,50).setFont(yo);
    
}

void draw(){
  image(menupic,0,0);
  if(state == 0){ //Menu
    runMenu();
    c.hide();
    a.show();
    d.hide();
  }
  else if(state == 1){ //Game
    runGame();
    a.hide();
    //c.hide();
    c.show();
    d.hide();
  }
  else if(state == 2){ //Scoreboard
    image(spic,0,0);
    c.show();
    a.hide();
    d.show();
  }
  else if(state ==4){ //Quit
    exit();
  }
}
void runMenu(){
  textSize(40);
  text("MENU",50,100);
  
}
void runGame(){
  image(bpic,0,0);
}

void Play(){
  state = 1;
}
void Scoreboard(){
  state = 2;
}
void Back(){
  state = 0;
}

void Quit(){
  state = 4;
}

void Submit(){ //Submit form for text input in Scoreboard
  output = d.get(Textfield.class,"Insert Name Here").getText();
  println(output);
}