Final Project – Barrow Controller

Concept

Since we are learning physical computing, I am particularly interested in human–computer interaction. In my project, the relationship between the human and the machine is very bare-bones: the human is the controller, while the motor is being controlled. However, I feel that it should not always be like that. In a time of AI development, when machines are becoming aware of their surroundings, machines should not only be controlled. Therefore, for this project, I want to give some basic control to the motor so that it is not always under the human's control.

Images of the project

Design inspiration:

Physical Parts:

P5.js images:

Schematic Diagram

Screenshot

p5.js Embedded
Link to fullscreen

User Testing Videos


Implementation

Interaction Design:

The main component of the project is the handpose model from the ml5.js library. Using this model, the hands act as a controller for the motor. There are several ways users can interact with the project. The first is through simple hand poses, such as showing an open palm, making a fist, or tilting the fist 90 degrees. Each of these gives a different command to the motor: stop, go, and turn left/right, respectively.

Since the recognition of the hand joints is done by the handpose library, I only need to define conditional actions based on the positions of the fingers. It was quite difficult to recognize the correct patterns of the different hand poses at first, and it took a lot of trial and error to distinguish them reliably.
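
For reference, here is a minimal sketch of how the fingertip coordinates used in the code below might be obtained. It assumes the ml5 0.x handpose API, where each prediction exposes named finger annotations as [x, y, z] points; the variable names are chosen to match my conditionals:

let video;
let handpose;
let predictions = [];
// fingertip positions as [x, y, z], matching the variables used in the conditionals below
let thumb, indexFinger, middleFinger, ringFinger, pinky;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  // assumption: ml5 0.x handpose API
  handpose = ml5.handpose(video, () => console.log("handpose model ready"));
  handpose.on("predict", (results) => (predictions = results));
}

// call this at the top of draw(), before checking the hand poses
function updateFingers() {
  if (predictions.length > 0) {
    const hand = predictions[0].annotations;
    // index 3 of each annotation array is the fingertip
    thumb = hand.thumb[3];
    indexFinger = hand.indexFinger[3];
    middleFinger = hand.middleFinger[3];
    ringFinger = hand.ringFinger[3];
    pinky = hand.pinky[3];
  }
}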

There are many illustrations on the screen serving as instructions for the project. In addition, I made a small icon of the Arduino car as a companion for the users; it moves in the direction of the command the user is currently giving.

Below is a block of code for recognizing the hand poses and issuing the corresponding commands:

// Fist detected: all four fingertips are lower on the screen (larger y) than the thumb tip
if (indexFinger[1] > thumb[1] && middleFinger[1] > thumb[1] && ringFinger[1] > thumb[1] && pinky[1] > thumb[1]){
  if (middleFinger[0] > thumb[0] + 80){
    // fist tilted to the right: turn right
    commandRight = 1;
    commandLeft = 0;
    x++;

    push();
    imageMode(CENTER);
    image(right, 100, 100, 100, 100);
    pop();
  }
  else if (middleFinger[0] < thumb[0] - 80){
    // fist tilted to the left: turn left
    commandRight = 0;
    commandLeft = 1;
    x--;

    push();
    imageMode(CENTER);
    image(left, 100, 100, 100, 100);
    pop();
  }
  else{
    // upright fist: go straight
    commandRight = 1;
    commandLeft = 1;
    y--;

    push();
    imageMode(CENTER);
    image(fist, 100, 100, 100, 100);
    pop();
  }
}
else{
  // open palm (or no fist detected): stop
  commandLeft = 0;
  commandRight = 0;

  push();
  imageMode(CENTER);
  image(straight, 100, 100, 100, 100);
  pop();
}

However, you may notice that there is no command for going backward. That decision belongs to the motor. There is no actual machine learning algorithm in the project; it simply uses basic decision making that gives a portion of the control to the motor. The motor decides to go back only when there is an obstacle blocking its way, which is detected with an ultrasonic distance sensor: when the reading drops below a certain threshold, the motor automatically goes backward and then turns 180 degrees. Below is a portion of the code for that:

  if (rand) {
    // drive motor A only; the other wheel stays stopped, so the car turns
    digitalWrite(ain1Pin, HIGH);
    digitalWrite(ain2Pin, LOW);
    analogWrite(pwmAPin, potVal);

    analogWrite(pwmBPin, 0);
  }
  else {
    // drive motor B only, turning the car the other way
    digitalWrite(bin1Pin, LOW);
    digitalWrite(bin2Pin, HIGH);
    analogWrite(pwmBPin, potVal);

    analogWrite(pwmAPin, 0);
  }

  unsigned long currentMillis = millis();

  if (currentMillis - previousMillis >= interval) {
    // the turn duration has elapsed: hand control back to the user
    previousMillis = currentMillis;
    back = false;
  }
}

Furthermore, to give it some human-like behavior, when it encounters an obstacle it expresses itself by saying that it does not want to do that. However, since the showcase may be too noisy, the sound is played on the p5.js side through the computer's speakers.
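
As a small, hedged sketch of how that sound could be loaded on the p5.js side with the p5.sound library (the file name here is purely illustrative):

let sound1; // the "I don't want to do that" voice clip

function preload() {
  // load the audio file before setup(); the file name is an assumption
  sound1 = loadSound("complaint.mp3");
}

The actual playback (sound1.play()) appears in the communication code below.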

P5.js and Arduino Communication:

The hand poses are detected on the p5.js side. Since the car has two wheels, I keep two separate variables for the left and right wheels, and what p5.js sends to the Arduino sketch is simply the status of these left and right control variables. The Arduino then uses that information to run the corresponding action.

Below is the Arduino code for processing the information sent from p5.js (the p5.js code that sets these command variables was included in the earlier section):

if (!back){
  // normal mode: the p5.js hand-pose commands control the wheels
  if (left) { // spin the left wheel (counterclockwise)
    digitalWrite(ain1Pin, LOW);
    digitalWrite(ain2Pin, HIGH);
    analogWrite(pwmAPin, potVal);
  }
  else { // stop the left wheel
    analogWrite(pwmAPin, 0);
  }

  if (right) { // spin the right wheel (counterclockwise)
    digitalWrite(bin1Pin, HIGH);
    digitalWrite(bin2Pin, LOW);
    analogWrite(pwmBPin, potVal);
  }
  else { // stop the right wheel
    analogWrite(pwmBPin, 0);
  }
} else{
  // the motor has taken over: while the obstacle is still closer than 10 cm, back away
  if (cm < 10){
    // let p5.js know the car is backing up
    Serial.print(back);
    Serial.print(',');

    digitalWrite(ain1Pin, HIGH);
    digitalWrite(ain2Pin, LOW);
    analogWrite(pwmAPin, potVal);

    digitalWrite(bin1Pin, LOW);
    digitalWrite(bin2Pin, HIGH);
    analogWrite(pwmBPin, potVal);
  }

For the communication from the Arduino back to p5.js: since the small illustration on the screen also needs to show the backward state, this information is sent to p5.js so the sketch knows when the motor is going backward and can display it accordingly. Below is the block of code for that:

if (fromArduino[0] == "1"){
  // the Arduino reports that it is backing away from an obstacle
  y++;
  if (prev1 == "0"){
    // play the complaint sound once, on the transition into backing up
    sound1.play();
  }
}
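
For context, here is a hedged sketch of the serial handler that ties the two directions together, following the readSerial()/writeSerial() template used in the exercises later in this post (fromArduino and the command variables are assumed to be globals; prev1 is updated elsewhere, after the check above):

// Called whenever a full line arrives from the Arduino.
function readSerial(data) {
  if (data != null) {
    // split the incoming comma-separated message; fromArduino[0] is the "back" flag
    fromArduino = split(trim(data), ",");

    // reply with the current hand-pose commands to keep the handshake going
    let sendToArduino = commandLeft + "," + commandRight + "\n";
    writeSerial(sendToArduino);
  }
}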

What I am proud of:

I really like the way the motor can run by itself without any commands. It uses the state of its surroundings and makes a decision on its own. Even though this is not a machine learning project and it cannot truly think for itself, a very bare-bones form of intelligence can be built quite simply with an Arduino.

The implementation of the ml5 library was also something new to me. It took me quite a bit of time to figure out the correct thresholds for distinguishing the hand poses, and it still does not work completely smoothly due to errors in hand detection.

Resources used:

ml5 handpose documentation

ml5 examples

YouTube tutorial

Challenges:

It was quite difficult to make sure that, while the motor has taken control, the user can no longer decide which direction it moves. Initially, I thought I should handle this on the p5.js side: if the variable “back” is true, p5.js would stop sending information to the Arduino. However, that just stops the whole communication between p5.js and the Arduino, since the handshake relies on replying to every message. Therefore, I handle it on the Arduino side instead: the variable “back” controls the state, and it is only reset after the motor finishes its maneuver.

Apart from this, I needed to implement the decision to turn 180 degrees right after running backward for a while. Since I cannot use delay(), which would block the rest of the loop, I used the Blink Without Delay technique to track the state and duration of the turn. Below is an illustration of this logic:

unsigned long currentMillis = millis();

if (currentMillis - previousMillis >= interval) {
  // the turn duration has elapsed: hand control back to the user
  previousMillis = currentMillis;
  back = false;
}

Improvements:

One thing I want to do is map the illustration on the screen to the actual distance travelled by the motor. However, since the motor's power level does not translate directly into speed, I was not able to implement this.

I would also like to give the motor more decision-making ability beyond just reversing and a simple spoken complaint. I think this would also require a more complex analysis of the roles of the human and the machine.

IM Showcase:

The showcase went smoothly in general. However, there were minor problems with the motors. For example, a wire got stuck and prevented one motor from running, and one motor is for some reason significantly weaker than the other, so the device does not travel in a straight line. Also, because of the lighting in the room, the ml5 library failed to detect the hand poses multiple times. I now recognize that the environment plays a big role in keeping the project running smoothly.

Image:

Below are the videos of the user interaction during the show:

Week 12 – Design for disability

The reading reminds me of the question of whether we should make equipment for disability look normal or make it visually attractive. There are different ways such equipment can be presented, and one of them is making it part of fashion. Eyeglasses are a good example: they are a tool for vision problems, yet they have become a fashion item, an accessory to match with clothes. When people look at us, it seems the first thing they see is the glasses, because they sit right on our face.

In the seemingly similar case of hearing aids, however, their presence has become more and more invisible. Their design has become so small that if we do not look for them, we may miss them. Conversely, earbuds, which are also worn on the ear for hearing, are more and more widely used. If something like earbuds has become so common, why is there a need to hide hearing aids away?

Comparing the two examples, it seems there are some unacknowledged gaps between design for everyone and design for disability. I think it comes down to the actual needs of the majority. For example, glasses can be worn without the wearer having any visual problem, while hearing aids are only used by those who need them. There are also differences in accessibility: glasses are sold widely, whereas hearing aids are produced by specialized companies.

Week 12 – Exercises – Linh and Sihyun

Exercise 1:

Link to the video: https://youtube.com/shorts/4ek33gKQaNY?si=3ArQyArCtsPfHlGx

P5 sketch:

p5.js Code:

let xposition = 100;
function setup() {
  createCanvas(640, 480);
  textSize(18);
}
function draw() {
  background(0, 255, 0);
  fill(0);
  ellipse(xposition, height / 2, 50, 50);
}

function keyPressed() {
  if (key == " ") {
    // Calls a function to set up the serial connection
    setUpSerial();
  }
}

// Function to read data from the serial port
function readSerial(data) {
  // Check if data received is not null
  if (data != null) {
    // Maps the integer value of data from range 0-1023 to 0-640 and updates xposition
    xposition = map(int(data), 0, 1023, 0, 640);
  }
}

Arduino Code:

void setup() {
  // Start serial communication so we can send data
  // over the USB connection to our p5js sketch
  Serial.begin(9600);
}

void loop() {
  // read the potentiometer and send its value to p5.js
  int sensor = analogRead(A0);
  delay(5);
  Serial.println(sensor);
}

Circuit:

For this exercise, we used a potentiometer to change the position of the ellipse in the p5.js sketch. As shown in the code snippet, the ellipse's x position is mapped to the potentiometer value from the Arduino. First, by calling the setUpSerial() function when the space bar is pressed, p5.js sets up the serial connection with the Arduino. Then, using readSerial(data), p5.js reads the potentiometer value sent by the Arduino.

For the Arduino, we used Serial.begin(9600) to start the serial communication. Inside the loop function, Serial.println(sensor) sends the sensor value to the serial port as a line of text, ending in a newline character.

Exercise 2:

Video:

P5 sketch:

P5.js code

In the p5 code, mouseX is used to control the brightness of the LED. mouseX is mapped from the range 0 to the canvas width into the range 0 to 255. Inside the readSerial() function, we send the mapped mouseX value to the Arduino using the lines of code below:

function readSerial(data) {

  if (data != null) {
    //////////////////////////////////
    //SEND TO ARDUINO HERE (handshake)
    //////////////////////////////////
    let sendToArduino = left + "\n";
    writeSerial(sendToArduino);
  }
}
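
The mapping of mouseX itself is not shown above; a minimal sketch of how the left value might be computed (assuming left is a global variable used by readSerial()):

let left = 0; // mapped mouseX value that readSerial() sends to the Arduino

function draw() {
  background(255);
  // map the mouse position across the canvas width to the analogWrite range 0-255
  left = int(map(mouseX, 0, width, 0, 255));
}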

Arduino Code

In the Arduino code, after the initial handshake, the sketch continuously checks for any data sent from p5.js. It then parses the incoming string and uses that value to control the brightness of the LED with analogWrite. The value has already been mapped to the 0–255 range on the p5.js side, so there is no need to map it again in the Arduino code.

int leftLedPin = 3;

void setup() {
  // Start serial communication so we can send data
  // over the USB connection to our p5js sketch
  Serial.begin(9600);

  // We'll use the builtin LED as a status output.
  // We can't use the serial monitor since the serial connection is
  // used to communicate to p5js and only one application on the computer
  // can use a serial port at once.
  pinMode(LED_BUILTIN, OUTPUT);

  // Outputs on these pins
  pinMode(leftLedPin, OUTPUT);

  // Blink them so we can check the wiring
  digitalWrite(leftLedPin, HIGH);
  delay(200);
  digitalWrite(leftLedPin, LOW);

  // start the handshake
  while (Serial.available() <= 0) {
    digitalWrite(LED_BUILTIN, HIGH); // on/blink while waiting for serial data
    Serial.println("0,0"); // send a starting message
    delay(300);            // wait 1/3 second
    digitalWrite(LED_BUILTIN, LOW);
    delay(50);
  }
}

void loop() {
  // wait for data from p5 before doing something
  while (Serial.available()) {
    digitalWrite(LED_BUILTIN, HIGH); // led on while receiving data

    int left = Serial.parseInt();
    if (Serial.read() == '\n') {
      analogWrite(leftLedPin, left); // set the LED brightness from the mapped mouseX value
      Serial.println(); // reply with an empty line to keep the handshake going
    }
  }
  digitalWrite(LED_BUILTIN, LOW);
}

Circuit

Exercise 3

Video:

P5.js Sketch

For the LED to light up on every bounce, we added a flag called “ground”. Every time the ball hits the ground, we set ground = true; otherwise, it is set to false.

if (position.y > height - mass / 2) {
  velocity.y *= -0.9; // A little dampening when hitting the bottom
  position.y = height - mass / 2;
  ground = true;
}
else {
  ground = false;
}

If ground is true, we send the Arduino the value 1; otherwise, we send 0. We did this because we only use digitalWrite on the Arduino side, which only accepts LOW (0) and HIGH (1) as values.

if (ground) {
  sendToArduino = "1" + "\n";
}
else {
  sendToArduino = "0" + "\n";
}

To control the wind, we used the potentiometer to change the wind value. Since only one value comes from the Arduino, we did not need to trim and split the incoming string; instead, we directly map the received data to the wind value:

wind.x = map(int(data), 0, 1023, -1, 1);

Arduino Code:

int ledPin = 8;

void setup() {
  // Start serial communication so we can send data
  // over the USB connection to our p5js sketch
  Serial.begin(9600);

  // We'll use the builtin LED as a status output.
  // We can't use the serial monitor since the serial connection is
  // used to communicate to p5js and only one application on the computer
  // can use a serial port at once.
  pinMode(LED_BUILTIN, OUTPUT);

  // Outputs on these pins
  pinMode(ledPin, OUTPUT);

  // Blink them so we can check the wiring
  digitalWrite(ledPin, HIGH);
  delay(200);
  digitalWrite(ledPin, LOW);

  // start the handshake
  while (Serial.available() <= 0) {
    digitalWrite(LED_BUILTIN, HIGH); // on/blink while waiting for serial data
    Serial.println("0,0"); // send a starting message
    delay(300);            // wait 1/3 second
    digitalWrite(LED_BUILTIN, LOW);
    delay(50);
  }
}

void loop() {
  // wait for data from p5 before doing something
  while (Serial.available()) {
    digitalWrite(LED_BUILTIN, HIGH); // led on while receiving data

    int led = Serial.parseInt();
    if (Serial.read() == '\n') {
      digitalWrite(ledPin, led);   // light the LED when the ball bounces (1), off otherwise (0)
      int sensor = analogRead(A5); // read the potentiometer
      Serial.println(sensor);      // send it back to p5.js to control the wind
    }
  }
  digitalWrite(LED_BUILTIN, LOW);
}

In the Arduino sketch, we initiate serial communication with Serial.begin(9600) in the setup() function to set up data transfer at 9600 bits per second. The ledPin variable is set to 8, corresponding to the pin connected to the LED. We use Serial.available() to check for incoming serial data. Initially, the sketch blinks the LED on pin 8 to verify the wiring. When it begins receiving data from the serial port, the built-in LED lights up to signal that data is being processed. The received value is then used to control the state of the LED on pin 8 via digitalWrite. Additionally, the sketch reads the analog value from pin A5 (connected to the potentiometer) and sends it back to p5.js, enabling a continuous, dynamic interaction between the Arduino hardware and the p5.js sketch.

Circuit:

Reflection:

Initially, we had a bit of a problem understanding how the communication between p5.js and the Arduino works. After spending quite a while investigating the sample code, we identified a few differences between our code and the sample code that were causing ours not to work. We also learnt about mapping values between p5.js and the Arduino (for both variable resistors and analogWrite ranges).

Final Project Idea

Hands as Controllers

I am not sure how feasible this idea is. However, I want to display the silhouette of the hand's movements through LED lights. As the hand moves, the lights will light up based on the changing location of the hand. Below is an illustration of the idea:

The way the hand moves will decide how the lights behave. I intend to use the ml5 library with p5.js to track the hand movements. The data returned by the library will be mapped onto the LED lights, which will light up accordingly.
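
Purely as an illustration of the idea (the LED count and the use of a tracked hand x-coordinate are assumptions, not an implementation), the mapping could be as simple as:

const NUM_LEDS = 10; // assumed number of LEDs in the strip

// map a tracked hand x-coordinate (in pixels) to the index of the LED to light up
function handToLedIndex(handX) {
  let index = floor(map(handX, 0, width, 0, NUM_LEDS));
  return constrain(index, 0, NUM_LEDS - 1);
}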

Map Drawing

I want to use p5.js as a drawing tool that decides the path a four-wheel motorized model will follow. However, I want the path to be drawn with the hands in the air, using different p5.js libraries. The data can then be transferred to the model so the motors can follow it. The model will also have a distance sensor to detect any blockage in its path and stop accordingly.

Week 11 – Reading Reflection – Main Point of Interaction

Due to the development of technology, people have less and less contact with the actual environment. We are becoming more familiar with the feel of plastic and glass screens than with the physical world, and more used to seeing images through a glass screen (and nowadays VR/AR technologies push this even further). Hence, it seems we are starting to forget how to interact with actual objects and environments.

Take the hand as an example. In futuristic visions, the hands are used merely as an intermediate tool to produce visually satisfying results. But why should the hands have only that limited functionality (just touching)? As mentioned in the reading, the hands can sense changes in texture and changes in weight; they are very sensitive to any change. So why not make them responsible for richer interactions?

I believe the potential for changing the ways we interact with technology is significant. Interactivity does not necessarily come from the screen itself; it comes from how the user interacts with it. Technology should therefore be the intermediate tool for actions carried out by the audience. Physical computing seems to be one possible way to do that: while hardware materials act as the central technology, the actions and sensations are experienced by the users.

Finally, this is not to say that current technologies such as tablets and laptops are terrible choices. My point is that they should not be the main focus of our interaction with media and technology.

Week 11: Lullaby Box by Sihyun and Linh

Concept

While brainstorming for this assignment, we decided that we would like to make a project that utilizes the photocell and the buttons. The photocell changes its resistance value depending on the light intensity. And this inspired us to explore interactions between light presence and absence. We came up with a cute project called the “Lullaby Box”, which plays nursery rhymes when the photocell is covered and a button is pressed. This project features yellow, red, and green buttons, each corresponding to different songs: ‘Mary Had a Little Lamb,’ ‘Twinkle Twinkle Little Star,’ and ‘Happy Birthday Song,’ respectively. The rationale for the songs playing only when the photocell is covered is tied to the project’s purpose: to soothe children to sleep in the dark.

Schematics and Diagram

The circuit consists of three main parts: the light sensor, the Piezo Speaker, and the buttons. The light sensor is connected to port A5 for analog reading. It acts as a button in the circuit, allowing the song to play when the photocell is covered. The buttons are the main controller of this assignment. They are connected to ports A2, A3, and A4 for digital reading. The song will only play when the photocell is covered (its value is low) and the button is pressed. Each button is responsible for a different song that will be played by the Piezo Speaker (connected to port 8 for digital output).

Arduino Code

#include "pitches.h"
//mary had a little lamb
int melody1[] = {
  NOTE_E4, NOTE_D4, NOTE_C4, NOTE_D4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_D4, NOTE_D4, NOTE_D4,
  NOTE_E4, NOTE_G4, NOTE_G4, NOTE_E4, NOTE_D4, NOTE_C4, NOTE_D4, NOTE_E4, NOTE_E4, NOTE_E4,
  NOTE_E4, NOTE_D4, NOTE_D4, NOTE_E4, NOTE_D4, NOTE_C4
};
// twinkle twinkle little star
int melody2[]={
  NOTE_C4, NOTE_C4, NOTE_G4, NOTE_G4, NOTE_A4, NOTE_A4, NOTE_G4, NOTE_F4, NOTE_F4, NOTE_E4,
  NOTE_E4, NOTE_D4, NOTE_D4, NOTE_C4, NOTE_G4, NOTE_G4, NOTE_F4, NOTE_F4, NOTE_E4, NOTE_E4,
  NOTE_D4, NOTE_G4, NOTE_G4, NOTE_F4, NOTE_F4, NOTE_E4, NOTE_E4, NOTE_D4, NOTE_C4, NOTE_C4,
  NOTE_G4, NOTE_G4, NOTE_A4, NOTE_A4, NOTE_G4, NOTE_F4, NOTE_F4, NOTE_E4, NOTE_E4, NOTE_D4,
  NOTE_D4, NOTE_C4
};
// happy birthday
int melody3[] = {
  NOTE_G4, NOTE_G4, NOTE_A4, NOTE_G4, NOTE_C5, NOTE_B4,
  NOTE_G4, NOTE_G4, NOTE_A4, NOTE_G4, NOTE_D5, NOTE_C5,
  NOTE_G4, NOTE_G4, NOTE_G5, NOTE_E5, NOTE_C5, NOTE_B4, NOTE_A4,
  NOTE_F5, NOTE_F5, NOTE_E5, NOTE_C5, NOTE_D5, NOTE_C5
};



int noteDurations[] = {
  4, 4, 4, 4, 4, 4, 2, 4, 4, 2, 4, 4, 2, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 1
};
int noteDurations2[] = {
  4,4,4,4,4,4,2,4,4,4,4,4,4,2,4,4,4,4,4,4,2,4,4,4,4,4,4,2,4,4,4,4,4,4,2,4,4,4,4,4,4,2
};

int noteDurations3[] = {
  8, 8, 4, 4, 4, 2,
  8, 8, 4, 4, 4, 2,
  8, 8, 4, 4, 4, 4, 4,
  8, 8, 4, 4, 4, 2
};


void setup() {
  Serial.begin(9600);
  pinMode(A4, INPUT); // yellow switch
  pinMode(A3, INPUT); // red switch
  pinMode(A2, INPUT); // green switch
}

void loop() {
  int sensorValue= analogRead(A5);
  int yellowButtonState = digitalRead(A4);
  int redButtonState = digitalRead(A3);
  int greenButtonState= digitalRead(A2);
  // Serial.println(sensorValue);
  if (sensorValue<=800){
  if (yellowButtonState == HIGH) {
    // Play the melody only if the button is pressed
    for (int thisNote = 0; thisNote < 26; thisNote++) {
      // Check the button state inside the loop
      yellowButtonState = digitalRead(A4);
      if (yellowButtonState == LOW) {
        noTone(8);
        break; // Exit the loop if the button is released
      }

      int noteDuration = 1000 / noteDurations[thisNote];
      tone(8, melody1[thisNote], noteDuration);
      int pauseBetweenNotes = noteDuration * 1.30;

      // Use non-blocking delay
      unsigned long startTime = millis();
      while (millis() - startTime < pauseBetweenNotes) {
        // Update the button state
        yellowButtonState = digitalRead(A4);
        if (yellowButtonState == LOW) {
          noTone(8);
          break; // Stop the tone if the button is released
        }
      }
      noTone(8); // Ensure the tone is stopped after the loop
    }
  }
  else if (redButtonState == HIGH) {
    // Play the melody only if the button is pressed
    for (int thisNote = 0; thisNote < 42; thisNote++) {
      // Check the button state inside the loop
      redButtonState = digitalRead(A3);
      if (redButtonState == LOW) {
        noTone(8);
        break; // Exit the loop if the button is released
      }

      int noteDuration = 1000 / noteDurations2[thisNote];
      tone(8, melody2[thisNote], noteDuration);
      int pauseBetweenNotes = noteDuration * 1.30;

      // Use non-blocking delay
      unsigned long startTime = millis();
      while (millis() - startTime < pauseBetweenNotes) {
        // Update the button state
        redButtonState = digitalRead(A3);
        if (redButtonState == LOW) {
          noTone(8);
          break; // Stop the tone if the button is released
        }
      }
      noTone(8); // Ensure the tone is stopped after the loop
    }
  }
    else if (greenButtonState == HIGH) {
    // Play the melody only if the button is pressed
    for (int thisNote = 0; thisNote < 25; thisNote++) {
      // Check the button state inside the loop
      greenButtonState = digitalRead(A2);
      if (greenButtonState == LOW) {
        noTone(8);
        break; // Exit the loop if the button is released
      }

      int noteDuration = 1000 / noteDurations3[thisNote];
      tone(8, melody3[thisNote], noteDuration);
      int pauseBetweenNotes = noteDuration * 1.30;

      unsigned long startTime = millis();
      while (millis() - startTime < pauseBetweenNotes) {
        // Update the button state
        greenButtonState = digitalRead(A2);
        if (greenButtonState == LOW) {
          noTone(8);
          break; // Stop the tone if the button is released
        }
      }
      noTone(8); // Ensure the tone is stopped after the loop
    }
  }
  else {
    noTone(8); // Ensure the tone is stopped if no button is pressed
  }
  }
}

Firstly, we initialize the code with arrays for the melodies we will play. There are three different songs: ‘Mary Had a Little Lamb,’ ‘Twinkle Twinkle Little Star,’ and the ‘Happy Birthday Song.’ Each song has one array for its notes and another for its note durations.

Next, we read the inputs at A2, A3, and A4 for the buttons and A5 for the photocell. To run the songs, we continuously check the photocell value to decide whether the buttons are allowed to start a song. If the photocell is covered (its value is low), the song corresponding to the pressed button plays. Before each note, the code checks whether the button is still pressed; if it has been released, the song stops immediately.

Built Circuit Based on the Diagram

Demonstration of the Final Outcome

Challenges and Reflection

We genuinely enjoyed tackling this assignment, despite its initial challenges. Creating the melodies was particularly tricky since neither of us is knowledgeable about musical composition and notes. Eventually, we grasped what was required and made progress.

A weird issue arose while we were constructing the circuits for this project. To determine the resistance value of the photocell, we attempted to use code and the serial monitor. However, we discovered that the photocell was malfunctioning. Initially, we suspected errors in our coding or circuit assembly. Despite repeatedly consulting past lecture slides, re-uploading the code, and rebuilding the circuit, we couldn’t pinpoint any mistakes. Then, quite mysteriously, the photocell began functioning correctly after another code upload. We still don’t understand the cause of this issue. Aside from this strange incident, we encountered no significant problems throughout the project.

For improvement, we would like to add more buttons and songs for a wider variety of choices. A larger range of songs would make the Lullaby Box more entertaining to play with at children's nap time. We also think it would be nice to control the volume through a potentiometer. This would enhance user control, allowing the sound level to be easily adjusted to suit the environment and the user's preferences.


Week 10 – Reading Reflection – It’s not the artist’s performance

From the reading “Making Interactive Art: Set the Stage, Then Shut Up and Listen”, I understand that the creators are not the main performers. Interactive art allows users to act on the artwork instead of only observing it. Therefore, even with the same interactive piece, each user's experience and understanding will differ.

Hence, we need to guide users toward the experience we want them to have. As mentioned in the reading, we can give hints and use the space and the design to help users understand what actions are available to them. After that, however, it is the users' performance: it becomes their experience and their understanding, because art does not necessarily have a fixed meaning that we need others to grasp.

Finally, there is a wide variety of physical computing art. Even when pieces fall into the same category, it does not mean users will have the same experience. For example, there are many projects about music, yet each may require a different skill set and understanding of music to interact with. Therefore, it is important to keep experimenting with any project without fearing that it will be repetitive.

Week 10 – Everything, Everywhere, All at Once

Concept

I wanted to explore how different components can interact with each other in a circuit, beyond buttons simply controlling lights or variable resistors providing ranges of analog values. I wanted to see whether I could create an indirect interaction between lights and resistors, such that one light could be used to light up another.

Implementation

Using a button, I make one light turn on as the button is pushed down. For the analog input, I used the photocell to make another light turn on when it is not covered. A further special behavior is that the button can also control the second light when the photocell is covered.

Below is an estimate of the materials I used:


Below is the schematic for the circuit:

Code implementation:

int led1 = 13;
int led2 = 12;
int brightness = 0;

// the setup routine runs once when you press reset:
void setup() {
  // initialize serial communication at 9600 bits per second:
  Serial.begin(9600);
  pinMode(led1, OUTPUT);
  pinMode(A3, INPUT);
  pinMode(led2, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
  // read the photocell on analog pin A2 and the button on A3:
  int sensorValue = analogRead(A2);
  int buttonState = digitalRead(A3);

  sensorValue = constrain(sensorValue, 500, 900);
  brightness = map(sensorValue, 500, 860, 255, 0);

  analogWrite(led1, brightness);
  analogWrite(led2, brightness);

  Serial.println(sensorValue);
  delay(30);  // delay in between reads for stability


  // the button also controls the second LED
  if (buttonState == LOW){
    digitalWrite(led2, LOW);
  }
  else{
    digitalWrite(led2, HIGH);
  }
}

Video in Action

I realized that everything in my circuit affects everything else, even when there is no direct link between the components. Taking inspiration from the movie “Everything, Everywhere, All at Once”, the final result can only happen if all of the previous actions are done:

Challenges

It was quite difficult to grasp the concept of using several small circuits to serve the bigger overall circuit. I had a bit of trouble using the brightness of one LED to adjust the value read by the photocell. However, it worked out in the end once I figured out the correct range of photocell values with the LED present.

Reflection

Even though I was able to make a schematic diagram for the circuit, it is quite disorganized right now. I would like to learn more about how to arrange the wires and components so that the diagram can be read clearly.

Unusual Switch – Hat and Head

Inspiration

To get started on the project, I thought about objects I use every day, looking specifically for one that is used without any contact with my hands. The answer was my hat. Since the hat makes contact with the head rather than the hands, I decided to build a circuit around it: when I put the hat on, the light turns on.

Concept

The hat switch uses a simple circuit with the Arduino board, an LED, a resistor, some aluminum foil, and a battery, all attached to the hat. Below is the schematic of the circuit:

Below is the actual circuit after everything is connected:

Implementation

Firstly, I connected the circuit with male-to-female jumper wires so that I did not need to use the breadboard. After making sure everything worked properly, I attached aluminum foil to the wires to increase the contact surface area of the switch. Lastly, I attached everything to the hat so that the pieces of foil sit inside it and only come into contact when someone wears it. Below is how the final product looks:

Challenges

It was quite difficult to figure out how to attach the breadboard (along with the batteries and the Arduino board) to the hat. However, after looking around the IM lab's “Consumables” section to see what materials were available, I found wires that can be connected together without a breadboard.

Reflection

It was a fun assignment that made me think about a daily action I could do differently. I also explored the resources available in the IM lab's “Consumables” section.

What I want to improve is the aesthetics of the hat. The current version has tape and wires that are clearly visible at first glance; I would love to learn how to hide the wires for a cleaner design.

The video

Reading Reflection – Week 8

Design is the first thing a user will see for any type of product, whether tangible or digital. Therefore, it is important to have a design that serves the purpose of the product.

However, how can we know that a design serves the purpose of the product? As in the story of the “Three Teapots”, sometimes the teapot is chosen solely based on the user's mood. This makes me wonder how we can predict our customers' moods through design. Since we cannot predict the behavior of potential customers, we tend to design products to be as easy to understand as possible, based on their functionality, while aiming for the highest degree of aesthetics.

Nevertheless, there is another property we need to consider: usability. As mentioned in the reading, “[usability is] equal to beauty, equal to function equal but not superior” (Norman). I learnt that it is important to also consider how and in what situations a product will be used. Take the most common example: a door. In an emergency, the door should not be a stress point; it should provide access as quickly and easily as possible. On the other hand, a door that serves as a secure gate for protection should be hard to open at any point. These two doors with similar functionality have different usability and, hence, different designs.

I believe that how a product is used depends heavily on the user. It is indeed difficult to understand all users, but we can still design with how users will react to the product in mind. Similar to how Hamilton helped with NASA's Apollo missions, we need to account for every possible interaction between the users and the product itself.