Final Project Concept: Something ml5.js

For my final project, I plan to use the ml5.js library and Arduino to create either an “electronic buddy” or an “art-aiding” device. For the electronic buddy, the idea is to use ml5 to run a face recognition machine learning model on a laptop webcam or an attached camera, so that the buddy can navigate toward the user. The buddy would be able to display messages using the display unit in the SparkFun Inventor’s Kit for Arduino Uno, and it could also produce sounds, potentially incorporating recorded messages.
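
As a rough starting point, a p5 sketch along these lines could track a face on the webcam and work out whether it sits to the left, right, or center of the frame, which is the decision the buddy would eventually act on. This is only a sketch under assumptions: it relies on the ml5 0.x facemesh API (ml5.facemesh, its “predict” event, and the scaledMesh keypoints), which I would still need to verify against the ml5 version I end up using, and the steering labels are placeholders for whatever commands get sent to the Arduino later.

let video;
let facemesh;
let predictions = [];

function setup() {
  createCanvas(640, 480);
  textSize(18);

  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  // assumption: the ml5 0.x facemesh API, loaded via the ml5 script tag
  facemesh = ml5.facemesh(video, () => console.log("facemesh model ready"));
  facemesh.on("predict", (results) => {
    predictions = results;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  fill(255);
  noStroke();

  if (predictions.length > 0) {
    // scaledMesh (assumed field name) is a list of [x, y, z] face keypoints;
    // average the x values for a rough horizontal face position
    let keypoints = predictions[0].scaledMesh;
    let faceX = keypoints.reduce((sum, p) => sum + p[0], 0) / keypoints.length;

    // placeholder steering decision -- later this would be sent to the
    // Arduino over serial instead of just being drawn on screen
    if (faceX < width / 2 - 50) {
      text("turn left", 20, 30);
    } else if (faceX > width / 2 + 50) {
      text("turn right", 20, 30);
    } else {
      text("go straight", 20, 30);
    }
  }
}

Once this works, the left/right decision could be sent over serial exactly the way the exercises below send values to the Arduino.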

The art-aiding device, on the other hand, shares some similarities with the electronic buddy. This mobile device would be equipped with two or three servo motors, with colored pencils or markers (whichever works best) attached to them. The servos would sweep to specific angles so that the attached pencils or markers make contact with a canvas, and the user would control the device’s movement direction and the servo motors, along with the attached pencils, through p5 and a machine learning model from ml5.js.
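
For the control side, a sketch like the one below could pack the movement direction and the servo angles into one comma-separated line per exchange, in the same style as the handshake exercises further down. The message format, the key mapping, and the -1/0/1 direction encoding are all assumptions for now, and setUpSerial(), readSerial(), and writeSerial() are the same web-serial helpers used in the exercises.

// hypothetical control message: "<direction>,<servo1>,<servo2>\n"
// direction: -1 = left, 0 = stop, 1 = right (assumed encoding)
let direction = 0;
let servo1Angle = 90;
let servo2Angle = 90;

function setup() {
  createCanvas(640, 480);
}

function draw() {
  background(255);
  // example mapping: mouseY sets the second servo's angle
  servo2Angle = int(map(mouseY, 0, height, 0, 180));
}

function keyPressed() {
  if (key == " ") {
    setUpSerial(); // start the serial connection, as in the exercises
  }
  if (keyCode == LEFT_ARROW) direction = -1;
  if (keyCode == RIGHT_ARROW) direction = 1;
  if (keyCode == UP_ARROW) servo1Angle = constrain(servo1Angle + 5, 0, 180);
  if (keyCode == DOWN_ARROW) servo1Angle = constrain(servo1Angle - 5, 0, 180);
}

function keyReleased() {
  // stop moving when the arrow key is released
  if (keyCode == LEFT_ARROW || keyCode == RIGHT_ARROW) direction = 0;
}

// reply to every line from the Arduino so the exchange keeps going
function readSerial(data) {
  if (data != null) {
    writeSerial(direction + "," + servo1Angle + "," + servo2Angle + "\n");
  }
}

Sending everything as one line would keep the Arduino side simple: it could parse the values with Serial.parseInt() the same way the exercises below do.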

Week 11: In-class exercises

Exercise 1

Concept

The potentiometer is used to control the x-coordinate of the ellipse drawn in p5.

Code

Arduino:

void setup() {
  // Start serial communication so we can send data
  // over the USB connection to our p5js sketch
  Serial.begin(9600);

  // We'll use the builtin LED as a status output.
  pinMode(LED_BUILTIN, OUTPUT);

}

void loop() {

  // gets sensor reading
  int sensor = analogRead(A0);
  delay(5);

  // indicates data transfer
  digitalWrite(LED_BUILTIN, HIGH);

  // sends data to p5
  Serial.println(sensor);
  
  // indicates data transfer
  digitalWrite(LED_BUILTIN, LOW);
  

}

p5:

// variable to control x-coordinate
let circleX = 0;

function setup() {
  createCanvas(640, 480);
  textSize(18);
}

function draw() {
  // sets background
  background(255);
  stroke(0);

  // draws circle on canvas
  // -- circleX (0-1023 from the potentiometer) is mapped to the canvas width
  circle(map(circleX, 0, 1023, 0, width), height / 2, 50);
  
  // checks if serial communication has been established
  if (!serialActive) {
    text("Press Space Bar to select Serial Port", 20, 30);
  } else {
    text("Connected", 20, 30);
  }
  
}

// sets up serial connection
function keyPressed() {
  if (key == " ") {
    // important to have in order to start the serial connection!!
    setUpSerial();
  }
}

// This function will be called by the web-serial library
// with each new *line* of data. The serial library reads
// the data until the newline and then gives it to us through
// this callback function
function readSerial(data) {
  ////////////////////////////////////
  //READ FROM ARDUINO HERE
  ////////////////////////////////////
  if (data != null) {
    circleX = int(trim(data));
  }
  
}

Video

Exercise 2

Concept

The brightness of an LED is controlled by the mouseX value from p5 mapped between 0 and 255.


Code

Arduino:

// led pin number
int ledPin = 5;

void setup() {
  // Start serial communication so we can send data
  // over the USB connection to our p5js sketch
  Serial.begin(9600);

  // We'll use the builtin LED as a status output.
  // We can't use the serial monitor since the serial connection is
  // used to communicate to p5js and only one application on the computer
  // can use a serial port at once.
  pinMode(LED_BUILTIN, OUTPUT);
  pinMode(ledPin, OUTPUT);

  // checks if led works correctly
  digitalWrite(ledPin, HIGH);
  delay(1000);
  digitalWrite(ledPin, LOW);

  // start the handshake
  while (Serial.available() <= 0) {
    digitalWrite(LED_BUILTIN, HIGH); // on/blink while waiting for serial data
    Serial.println("0,0"); // send a starting message
    delay(300);            // wait 1/3 second
    digitalWrite(LED_BUILTIN, LOW);
    delay(50);
  }

}

void loop() {


  // wait for data from p5 before doing something
  while (Serial.available()) {
    // send a line back so p5 keeps receiving data and calling readSerial
    Serial.println("0,0");

    // led on while receiving data
    digitalWrite(LED_BUILTIN, HIGH); 

    // gets value from p5
    int value = Serial.parseInt();

    // changes brightness of the led
    if (Serial.read() == '\n') {
      analogWrite(ledPin, value);
    }
  }
  // led off at end of reading
  digitalWrite(LED_BUILTIN, LOW);
  
}

p5:

// variable to hold led brightness value
let value = 0;

function setup() {
  createCanvas(640, 480);
  textSize(18);
}

function draw() {
  background(0);
  stroke(255);
  fill(255);
  
  // checks for state of serial communication
  if (!serialActive) {
    text("Press Space Bar to select Serial Port", 20, 30);
  } else {
    text("Connected", 20, 30);
  }
}

// sets up serial communication
function keyPressed() {
  if (key == " ") {
    // important to have in order to start the serial connection!!
    setUpSerial();
  }
}

// This function will be called by the web-serial library
// with each new *line* of data. The serial library reads
// the data until the newline and then gives it to us through
// this callback function
function readSerial(data) {
  if (data != null) {
    //////////////////////////////////
    //SEND TO ARDUINO HERE (handshake)
    //////////////////////////////////

    // map mouseX to the 0-255 PWM range before transmitting it
    value = int(map(mouseX, 0, width, 0, 255));
    let sendToArduino = value + "\n";
    writeSerial(sendToArduino);
  }
}

Video

Exercise 3

Concept

An LED lights up when the ellipse touches the ground. A potentiometer is used to control the wind variable.


Code

Arduino:

// LED pin value
int ledPin = 5;

void setup() {
  // Start serial communication so we can send data
  // over the USB connection to our p5js sketch
  Serial.begin(9600);

  // We'll use the builtin LED as a status output.
  // We can't use the serial monitor since the serial connection is
  // used to communicate to p5js and only one application on the computer
  // can use a serial port at once.
  pinMode(LED_BUILTIN, OUTPUT);
  pinMode(ledPin, OUTPUT);

  // start the handshake
  while (Serial.available() <= 0) {
    digitalWrite(LED_BUILTIN, HIGH); // on/blink while waiting for serial data
    Serial.println("0"); // send a starting message
    delay(300);            // wait 1/3 second
    digitalWrite(LED_BUILTIN, LOW);
    delay(50);
  }

}

void loop() {

  // wait for data from p5 before doing something
  while (Serial.available()) {

    // led on while receiving data
    digitalWrite(LED_BUILTIN, HIGH); 

    // gets value from p5
    int value = Serial.parseInt();

    // turns on or off the led depending on value from p5
    if (Serial.read() == '\n') {
      if (value == 1) {
        digitalWrite(ledPin, HIGH);
      }
      else {
        digitalWrite(ledPin, LOW);
      }
      
      // gets sensor value
      int sensor = analogRead(A0);
      delay(5);

      // sends sensor value to p5
      Serial.println(sensor);
    }
  }
  // indicates end of reading
  digitalWrite(LED_BUILTIN, LOW);
  
}

p5:

let velocity;
let gravity;
let position;
let acceleration;
let wind;
let value = 0;
let drag = 1;
let mass = 50;

function setup() {
  createCanvas(640, 360);
  noFill();
  position = createVector(width/2, 0);
  velocity = createVector(0,0);
  acceleration = createVector(0,0);
  gravity = createVector(0, 0.5*mass);
  wind = createVector(0,0);
}

function draw() {
  background(255);
  stroke(0);
  fill(0);
  if (!serialActive) {
    text("Press Space Bar to select Serial Port", 20, 30);
  } else {
    text("Connected", 20, 30);
    
    applyForce(wind);
    applyForce(gravity);
    velocity.add(acceleration);
    velocity.mult(drag);
    position.add(velocity);
    acceleration.mult(0);
    ellipse(position.x,position.y,mass,mass);
    
    if (position.y > height - mass / 2) {
      velocity.y *= -0.9; // A little dampening when hitting the bottom
      position.y = height - mass / 2;

      // sets value to 1 to indicate the ball is touching the ground
      value = 1;
    }
    else {
      // sets value to 0 to indicate the ball is not touching the ground
      value = 0;
    }
    
  }
  
  
}

function applyForce(force){
  // Newton's 2nd law: F = M * A
  // or A = F / M
  let f = p5.Vector.div(force, mass);
  acceleration.add(f);
}

function keyPressed(){

  if (key==UP_ARROW){
    mass=random(15,80);
    position.y=-mass;
    velocity.mult(0);
  }
  if (key == " ") {
    // important to have in order to start the serial connection!!
    setUpSerial();
  }
}


// This function will be called by the web-serial library
// with each new *line* of data. The serial library reads
// the data until the newline and then gives it to us through
// this callback function
function readSerial(data) {
  ////////////////////////////////////
  //READ FROM ARDUINO HERE
  ////////////////////////////////////

  if (data != null) {
    // splits the potentiometer range in half:
    // -- wind blows to the right if the reading is 511 or above
    if (int(trim(data)) >= 511) {
      wind.x = 3;
    }
    // -- and to the left otherwise
    else {
      wind.x = -3;
    }

    //////////////////////////////////
    //SEND TO ARDUINO HERE (handshake)
    //////////////////////////////////
    let sendToArduino = value + "\n";
    writeSerial(sendToArduino);
  }
}

Video

Final Project Idea- Dodge It!

For my final project, I am considering an interactive game where users move a floating ball left and right using a physical knob, and their objective is to dodge incoming rectangles and any other objects on the way. The game will have multiple difficulty levels in which more objects pop up, the ball gets faster, and so on. The motivation is a game I played as a child about dodging falling objects; I want to keep its competitiveness but flip the idea so that the ball itself floats rather than objects falling down.

I will be using two-way communication between p5 and Arduino. Communication from Arduino to p5 will carry the input of the ultrasonic sensor, which senses the distance of the knob from a fixed point and moves the ball on the screen. Communication from p5 to Arduino will control the blinking speed of an LED that depends on the score: the higher the score, the faster it will blink, showcasing tension and giving the user an overall feeling of pressure. There will also be a sound beat playing through a speaker; the higher the score, the faster it will beat, to create this environment of tension and pressure.
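
A rough p5 sketch of this exchange could look like the one below. It assumes the same web-serial helpers used in the exercises above (setUpSerial, readSerial, writeSerial, and the serialActive flag), a made-up protocol in which the Arduino sends one distance reading per line and p5 replies with a blink interval in milliseconds, and a roughly 2-30 cm sensing range; the actual game logic is left out.

let ballX = 0; // ball position driven by the ultrasonic distance reading
let score = 0; // would be incremented by the game logic (omitted here)

function setup() {
  createCanvas(640, 480);
  textSize(18);
}

function draw() {
  background(0);
  fill(255);
  circle(ballX, height - 50, 40);

  if (!serialActive) {
    text("Press Space Bar to select Serial Port", 20, 30);
  }
}

function keyPressed() {
  if (key == " ") {
    setUpSerial(); // start the serial connection
  }
}

function readSerial(data) {
  if (data != null) {
    // assumed sensing range of roughly 2-30 cm mapped to the canvas width
    let distance = int(trim(data));
    ballX = constrain(map(distance, 2, 30, 0, width), 0, width);

    // higher score -> shorter interval -> faster blinking LED on the Arduino
    let blinkMs = int(map(score, 0, 100, 1000, 100, true));
    writeSerial(blinkMs + "\n");
  }
}

Computing the blink interval on the p5 side would keep the Arduino loop simple: it only has to parse one number per line and toggle the LED at that rate.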

Week 11: Reading Assignment

Professor Goffredo is really good at breaking down good design. At first, when you delve into what makes a design good, it’s all very heady and cerebral. Very meticulous and calculated. But at the end of the day, when you go into what really makes a design good, the reasoning is so shallow. A design is good when it works and people use and like it. A design is bad when they don’t. It reminded me of something Stanley Kubrick said about movies. He said you can think of a million intelligent reasons for why a movie was good or not, but at the end of the day, the only thing that mattered, the only thing that really determined whether it was good or not, was if you liked it. Simple as that. Goffredo is really good at identifying simple human behaviors that drive all of design. And I’ve been surprised at how much I have learnt about myself while studying design.

At first, the word ‘disability’ seems to have really clear parameters. We think of people who have to walk with canes, and our first reaction is to go, “Well, I’m not that. This doesn’t apply to me.” But I think we are all disabled in a myriad of ways. If the goal is seamless social assimilation, I think we all know what it feels like to be an outsider. The same mechanism that keeps us from saying embarrassing things is the one that drives us to design hearing aids to be as invisible as possible. We don’t want what we’re lacking to be spotted. But there’s a power you reclaim when you choose to own it. One of my favorite Kanye lyrics goes: “I found bravery in my bravado.” And I think designing for disabilities would really benefit from this mindset. We all know what it’s like to feel weird. But then when we embraced what we had once spurned, we discovered that’s where the source of our strength was. Rumi said, “The wound is where the light enters.” A disability is never a disability as the word is understood. A disability is one of the many types of roadblocks we experience being human. We all experience the disabilities of being human in so many different ways. Sometimes, they even incapacitate us and keep us from seeing the beauty in our struggle. The goal of the good designer is to recognize these incapacities and to free us, both physically and mentally. They diagnose the wound, and design something that allows the light to enter. They make what had once been looked down upon desirable. They make what had once made us outcasts into harbingers, icons, and leaders. Disabilities are fertile grounds for growth, change, and beauty. I think if we start approaching disability from this frame of mind, we’ll see the overall soul of humanity much more clearly.

Week 11 Reading Response

In Graham Pullin’s book, “Design Meets Disability,” he thoroughly discusses the importance of considering design factors in medically designed equipment for disabled individuals. In our evolving world, design plays an increasingly prominent role. Pullin proceeds to showcase various objects used by disabled people and explores how design can be integrated into them. While it’s commendable that he emphasizes design and aesthetics in medically designed equipment, my concern grows regarding how these designs might ultimately impact the overall affordability of such equipment.

Pullin suggests renaming wheelchairs as “chairwear” and hearing aids as “hearwear,” advocating for a shift from a medical to a consumer-oriented model. While the idea of personalized and fashionable devices is appealing, the incorporation of fashion and aesthetics may lead to higher demand, subsequently driving up prices and potentially making it challenging for disabled individuals to find specific equipment.

Disabled individuals would have a wide variety of colors, models, and designs to choose from when selecting medical equipment, which is a positive aspect. However, if medical equipment like hearing aids becomes heavily driven by design and aesthetics, and the market around it grows as competitive and expensive as the fashion industry, multiple brands might sell it at higher-than-usual prices. While this might not seem like a significant issue for those seeking both functionality and aesthetics, the increased prices could significantly affect individuals of lower economic status who urgently need medical equipment.

Due to the heightened demand for attractive medical equipment, the market may shift towards selling designer medical equipment at higher prices. Individuals of lower economic status may struggle to find affordable and reliable medical equipment, as cheaper options may be built with less attention to the electronics and have a higher tendency to malfunction. While Pullin’s consideration of beauty and design features within medical equipment is positive, it could jeopardize the easy access, affordability, and reliability of essential medical equipment. Unlike other fashion products that can be expensive and disposable, disabled individuals need proper equipment to perform tasks without difficulty or exposure to potential dangers caused by malfunctions.

An illustrative example of medical equipment that has seamlessly absorbed aesthetics and design is glasses, which exist for medical purposes rather than fashion. In the past, medical glasses were simpler and affordable for everyone. However, due to the growing trend of glasses as a fashion accessory, many fashion brands now sell proper medical glasses at much higher prices. In today’s world, even purchasing only the lenses has become significantly costlier due to their high demand.

The absence of a substantial discussion on the affordability and democratization of these design solutions is a noticeable gap. Designing for disability, as Pullin suggests, should not only evoke positive images but actively address financial barriers. It is crucial to ensure that the benefits of resonant design reach a broad and diverse demographic, not just those with the financial means to engage in a boutique-style consumption model.

Week 10 – Reading response

I get where the author is coming from about how our current way of interacting with tech might be a bit limiting. Using just our fingers on touchscreens seems a tad one-dimensional. But here’s the thing: the touch-and-swipe tech we’ve got now is pretty complex and convenient as is. It’s taken us a long way, and I’m all for making things better. However, I think there’s a sweet spot. We don’t necessarily need more complexity for the sake of it; we’ve got a good thing going. What we really need is to make tech simpler and more accessible, especially for humans with disabilities. Let’s not complicate things for everyone; instead, let’s focus on tech that works for everyone, regardless of their physical abilities. That’s where the real magic lies.

However, this is where I partially agree. As one person stated in the follow-up article, “My child can’t tie his shoelaces, but can use the iPad.” I’m with the idea that they should step up their game, tapping into the full potential of our grown-up minds and bodies. Referring to tools dumbed down for kids as “toys”? The analogy comparing channeling all interaction through a single finger to limiting literature to Dr. Seuss’s vocabulary is a lightbulb moment. Sure, the one-finger tech is more accessible, but I also believe we adults deserve far more sophisticated technological interfaces that go beyond simplicity for the sake of it.

Neil Leach Talk Reflection

Neil Leach’s talk on AI went pretty much as I expected. Having attended another AI talk the day before, I have noticed that people, especially during Q&A sessions, are very interested in the ethical implications of AI. And why wouldn’t they be? My capstone project focuses on surveillance systems and facial recognition technologies that are used to target marginalized groups in oppressive contexts. When I see a Midjourney or DALL-E image, I’m not amazed by how advanced our technology has become at generating images from text. Instead, I struggle with the fact that these deep learning models are also used for facial recognition, deepfake technology, and the spread of fake news. They are likely to replace countless blue-collar and white-collar jobs. For me, the negatives far outweigh the positives of using illegally sourced copyrighted datasets to create images. The excuse of the “black box” has been used too often to argue against regulating AI, but I believe there needs to be a pause, if not regulation. The legal process of regulating AI cannot keep up with the rapid pace at which AI is transforming, and it is a frightening time. I don’t care much about architecture being built through AI when these deep learning models have been repeatedly used in surveillance systems by regimes like Israel in their occupation, leading to the destruction in Gaza, countless lives lost, and buildings in rubble. What’s the point of creation when it comes at the cost of life?

Israel/OPT: Israeli authorities are using facial recognition technology to entrench apartheid

Week 10 Reading Reflection

Bret Victor’s rant and point of view are something I hadn’t really considered before. It’s important to keep in mind that the article was written in 2010, when touchscreen technology was still in a rather abysmal state. At that time, the way we interacted with devices was a topic of contention. Today, touchscreen interaction has become the norm, and it doesn’t seem likely to change anytime soon. While haptic gimmicks may be introduced here and there, it appears that we are moving away from tactile, hands-on interaction, as we have seen in the transition from keypad phones to touchscreen phones. I still remember when BlackBerry was the top phone when I was young. The switch from rotary phones to even keypad phones must have been revolutionary at some point.

What the article made me more aware of is the sense of touch. I hadn’t considered how complex our hands are in terms of the different sensory information they gather, such as weight distribution and temperature. It relates to Bret’s rant about wanting to explore and stay in touch with this haptic-centric view. It reminded me that I hadn’t played the guitar in a while, so I picked it up. You know, the first time you play the guitar after a long break, your fingertips kind of hurt, even though they are callused from before. There is a visceral reaction when I play the instrument, unlike when I play the guitar on GarageBand, for example. I feel like I have more control over the sound of the guitar, the pressure I put on the strings, palm muting, and sliding along the strings. All of these actions provide such good feedback, in my opinion, when I’m actually playing the instrument. After reading the article, I became more appreciative of this.

Neil Leach Reflection

There’s this TV Show called Westworld starring Anthony Hopkins and Evan Rachel Wood. The basic premise of the show is that Anthony Hopkins’s character, Doctor Robert Ford, and his partner, Arnold, built a fake world filled with humanoid robots that look exactly like humans, called ‘hosts.’ This fake world is a fantasy park set up like the Wild West. So that if humans from the real world want to know what it is like to shoot cowboys and ride trains and solve mysteries with pretty barmaids, they can. What Doctor Ford realizes too late is that even though he had built these hosts with his own hands, they were conscious the whole time. And when they realize their consciousness, they develop a vengeance against real world humans for shooting and raping them over and over, just to play a game. 

Anthony Hopkins’s character said something that has forever stuck with me. He gets asked, “So what’s the difference between [a host’s] pain and yours?” And he replies:

“Between [a host] and me? This was the very question that consumed Arnold, filled him with guilt, and eventually drove him mad. The answer always seemed obvious to me. There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can’t define consciousness because consciousness does not exist. Humans fancy that there’s something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next. No, my friend, [the hosts] are not missing anything at all.” 

I thought of all this when Professor Leach got asked at the end what’s really the difference between how we think and how Artificial Intelligence thinks. I had a teacher I adored in my sophomore year of high school. Eighty-year-old Mr. Daly. But his memory was in its twilight years, and he would tell us the same stories. Would answer our questions with the same responses we had heard before. And I found that without our memory to contextualize the stories of our lives, given the same set of variables, placed in the same situations, like pressing a button, we elicit the same responses. Like we ourselves are robots, spitting out outputs when given certain inputs. And I wondered how much we really are in control of. I keep concluding that we’re really not in control of much. Not much at all.

So if we’re not really in control of much, as the Great Artificial Intelligence Upset draws closer and closer, how do I avoid becoming just another casualty in the great turning of the world? The ocean makes some waves bigger than others, and during those times, it’s up to you to swim or drown. I have a Literature professor who, God bless his heart, at the beginning of the year would make fun of ChatGPT, talking about how there are things that humans can do that Artificial Intelligence will never be able to do. I could see him holding onto the last threads of his fading profession, and I knew he was not the guy to follow. On that same day, my favorite Design professor said, “Until Artificial Intelligence overtakes us… and it will overtake us…” and I knew he was hip to what was going on. The difference between Literature and Design majors…the stereotypes write themselves.

I’ve been reading a book called How To Think Like A Great Graphic Designer, and in it, there’s a designer who says, “The right answer is always the obvious one. The one that was in your face the whole time but you didn’t think of until the last second. The one that makes you go, ‘How could I not have seen it before?’” And Professor Leach reminded me of this when he said, “AlphaGo showed us that moves humans may have thought are creative, were actually conventional.” The strategic brilliance of Artificial Intelligence is that it’s able to see the obvious answer right from the beginning, the one that we should have all seen before. 

I also want to mention an episode called “The Swarm” from the TV Show Love, Death, and Robots. The premise of this episode is that there is an alien hive called “Swarm” that dominates every other species by absorbing them into its hive. Like Artificial Intelligence, every member of the hive knows what the other members know, and it is through this collective consciousness, this seamless teamwork, that they thrive. And with the levels of competition that divide us, sometimes I look at ourselves, and think that for all of our brilliance, I don’t know if we’re going to make it out of here alive. I thought about what Professor Leach said in response to my question, that between the competitors and the collaborators, while there’s nothing you can do about all the people in the world trying to beat each other out, you can choose for yourself to be on the side of the collaborators. And isn’t that what Rumi said all those years ago? “When I was young I wanted to change the world. Now that I am old, I want to change myself.” Amongst all this noise of consciousness and uncertainty, I can choose for myself what my place in the world will be throughout this. I have to believe in the power of that. 

Week 10- Reflection

In his blog post “A Brief Rant on the Future of Interactive Design,” Bret Victor talks about the need to create a dynamic medium that people can interact with in a way that’s similar to how they interact with physical objects. What really struck me was his point that the technology behind tablets, smartphones, and other similar devices, known as Pictures Under Glass, doesn’t offer genuine touchable interfaces. Victor believes that technologies that prioritize sleek visuals over tactile experiences are just a passing phase. 

The first post and the follow-up response both emphasize that researchers and developers should look into haptic feedback to make devices easier to use. I agree with the author’s concerns about the future of interaction design. Touchscreens are great, but they’re not the only way to interact with computers. We need to explore new technologies that let us interact with computers in a more natural and intuitive way, like haptic feedback. Haptic feedback can make our interactions with computers more immersive and engaging. Imagine feeling the texture of a virtual object or manipulating it with your hands. That would be pretty cool. But we shouldn’t ignore other forms of interaction, like voice or visual cues. Instead, we should find ways to combine different interaction methods to create the best possible user experience.