Week 11 – Serial Communication

Exercise 1: Make something that uses only one sensor on Arduino and makes the ellipse in p5 move on the horizontal axis, in the middle of the screen, and nothing on Arduino is controlled by p5.

Concept: When a person covers the light sensor, the reduced light makes the ball move horizontally in p5. The player controls the ball simply by changing how much light reaches the sensor.

P5 Code:

let sensorValue = 0; // latest light-sensor reading from the Arduino (0–1023)

function setup() {
  createCanvas(600, 600);
  noFill();
}

function draw() {
  background("purple");
  stroke("white");

  // Convert the incoming sensor reading (0–1023) into a horizontal screen position
  ellipse(map(sensorValue, 0, 1023, 0, width), height / 2, 100, 100);

  if (!serialActive) {
    // Show a connection screen while serial communication hasn’t started yet
    background("rgb(70,9,70)");
    fill("white");
    textSize(50);
    text("Press Space Bar to select Serial Port", 20, 30, width - 30, 200);
  }
}

function keyPressed() {
  // When the space bar is pressed, begin the setup process for the serial port
  if (key == " ") setUpSerial();
}

function readSerial(data) {
  // If valid data arrives from the Arduino, save it for use in draw()
  if (data != null) {
    sensorValue = int(data);
  }
}

Arduino Code:

int sensorPin = A0; // light sensor on analog pin A0

void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(sensorPin);
  Serial.println(sensorValue);
  delay(5);
}

Github Link

Setup and Schematic:

Video Demonstration

Exercise 2: Make something that controls the LED brightness from p5

Concept: As the player slides a finger across the trackpad, the sketch’s fill shifts from black to white, and the LED brightens in step with it.

// Holds the brightness value we will send to the Arduino
let brightness = 0;

// Stores any data received back from Arduino (not used, but required)
let latestData = "";

function setup() {
  // Create the canvas where visual feedback will appear
  createCanvas(600, 400);
  noStroke();
}

function draw() {
  // Clear the screen each frame with a black background
  background(0);

  // Convert trackpad/mouse X position (0 → width) into brightness (0 → 255)
  brightness = int(map(mouseX, 0, width, 0, 255));

  // Draw a rectangle whose fill intensity matches the brightness value
  fill(brightness);
  rect(0, 0, width, height);

  // If a serial port is active, send the brightness value to the Arduino
  if (serialActive) {
    writeSerial(brightness + "\n"); // "\n" ensures Arduino reads full numbers
  }

  // If serial is NOT open, show instructions to the user
  if (!serialActive) {
    background("purple");
    fill("white");
    textSize(28);
    text("Press SPACE to choose Serial Port", 20, 40);
  }
}

function keyPressed() {
  // Press SPACE to open the Web Serial port selection dialog
  if (key === " ") {
    setUpSerial();
  }
}

// This function is REQUIRED by p5.webserial
// It receives data sent from Arduino (even if unused)
function readSerial(data) {
  if (data) latestData = data;
}

// Sends data to Arduino if the writer is available
// (can be omitted if your serial helper library already defines writeSerial)
function writeSerial(value) {
  if (writer) {
    writer.write(value);
  }
}
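The trailing "\n" matters because serial bytes arrive in arbitrary chunks: the newline marks where one number ends and the next begins. Here is a small stand-alone JavaScript sketch of that framing idea (the function and variable names are my own, not part of the p5.webserial library):

```javascript
// Accumulate incoming chunks and emit one parsed number per complete line.
// Without the "\n" delimiter, "255" could arrive as "25" + "5" and be
// misread as two separate values.
function makeLineParser(onNumber) {
  let buffer = "";
  return function feed(chunk) {
    buffer += chunk;
    let idx;
    while ((idx = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, idx).trim();
      buffer = buffer.slice(idx + 1); // keep any partial line for later
      if (line.length) onNumber(parseInt(line, 10));
    }
  };
}
```

On the Arduino side, Serial.parseInt() plays a similar role: it consumes digits until a non-digit character such as the newline arrives, so it only ever returns complete numbers.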

Arduino:

int ledPin = 9;
int brightness = 0;

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    brightness = Serial.parseInt();
    brightness = constrain(brightness, 0, 255);
  }

  analogWrite(ledPin, brightness);
}

Github link

Setup and Schematic:

Video Demonstration

Exercise 3: Take the gravity wind example and make it so every time the ball bounces one LED lights up and then turns off, and you can control the wind from one analog sensor.

Concept: Each time the ball bounced, the red LED lit up. Using the potentiometer, the player controlled the wind, pushing the ball from one side to the other.

P5 Code:

let velocity;
let gravity;
let position;
let acceleration;
let wind;
let drag = 0.99;
let mass = 50;
let on = 0;

function setup() {
  createCanvas(640, 360);
  //noFill();
  position = createVector(width/2, 0);
  velocity = createVector(0,0);
  acceleration = createVector(0,0);
  gravity = createVector(0, 0.5*mass);
  wind = createVector(0,0);
}

function draw() {
  background(255);
  
  if (!serialActive) {
    text("Click on the Screen to select Serial Port", 20, 30);
  } else {
    text("Connected", 20, 30);
  }
  
  applyForce(wind);
  applyForce(gravity);
  velocity.add(acceleration);
  velocity.mult(drag);
  position.add(velocity);
  acceleration.mult(0);
  ellipse(position.x,position.y,mass,mass);
  if (position.y > height-mass/2) {
      velocity.y *= -0.9;  // A little dampening when hitting the bottom
      position.y = height-mass/2;
  }
  // flag a bounce: send 1 only on frames where the ball touches the ground,
  // so the LED lights up at the bounce and turns off again in the air
  if (position.y == height - mass / 2) {
    on = 1;
  } else {
    on = 0;
  }
}
function applyForce(force){
  // Newton's 2nd law: F = M * A
  // or A = F / M
  let f = p5.Vector.div(force, mass);
  acceleration.add(f);
}
function keyPressed(){
  if (key==' '){
    mass=random(15, 80);
    position.y=-mass;
    velocity.mult(0);
  }
}
function mousePressed() {
    setUpSerial();
}
function readSerial(data) {
  if (data != null) {
    // make sure there is actually a message, then use it as the wind force
    wind.x = int(data);
    //////////////////////////////////
    // SEND TO ARDUINO HERE (handshake)
    //////////////////////////////////
    let sendToArduino = on + "\n";
    writeSerial(sendToArduino);
  }
}

Arduino Code:

const int LED = 9;
const int POT = A0;

void setup() {
  Serial.begin(9600);
  pinMode(LED, OUTPUT);

  // Test the LED
  digitalWrite(LED, HIGH);
  delay(500);
  digitalWrite(LED, LOW);
}

void loop() {
  int p_value = analogRead(POT);           // read from the potentiometer
  int wind = map(p_value, 0, 1023, -1, 2); // map the value to -1, 0, or 1
  Serial.println(wind);

  if (Serial.available() > 0) {
    // read the bounce flag from p5.js
    int touch = Serial.parseInt();
    // set the LED accordingly
    if (touch == 1) {
      digitalWrite(LED, HIGH);
    } else {
      digitalWrite(LED, LOW);
    }
  }
}

Github file

Setup and Schematic:

Video Demonstration

Reflection: Across all three projects, I learned how different sensors can shape the experience through serial communication and p5. Working with light, the trackpad, and the potentiometer showed me how physical input can smoothly translate into visual changes on the screen. In the future, I would improve the responsiveness and make the serial connection more stable so the interactions feel smoother and more reliable.

 

Week 11 – Reading Reflection

Design Meets Disability:

As I was reading, what stayed with me most was how deeply design choices are entangled with social attitudes, often more than we admit. The author’s critique of “camouflage” design exposed how much stigma gets quietly built into objects that are supposed to help people. It made me reconsider how often “neutrality” in design actually reinforces harmful norms by trying to erase difference instead of valuing it.

I was also struck by the idea that design for disability shouldn’t be separated from regular, mainstream design. The examples of the Eames splint and Aimee Mullins’ prosthetics shifted my understanding of what inclusivity can look like. Instead of treating disability as a constraint to minimize, the reading frames it as a space for experimentation, a site where new forms, aesthetics, and cultural meanings can emerge. That idea felt surprisingly liberating, because it challenges designers to imagine disability not as a deviation, but as part of the full spectrum of human experience.

Also, the discussion of universal design made me question my own assumptions about what “inclusive” even means. Sometimes we equate inclusivity with adding features for everyone, but the reading suggests that true thoughtfulness might come from editing, refining, and listening to individual needs. It left me with a sense that designing responsibly requires both humility and boldness: the humility to recognize one’s blind spots, and the boldness to challenge conventions that limit how people see themselves.

Week 10 – Reading Reflection

A Brief Rant on the Future of Interaction Design:

When I was reading Bret Victor’s “A Brief Rant on the Future of Interaction Design” I thought about how I perceive technology and design. I realized how easily I accept “innovations” like touchscreens as the peak of progress, even though, as Victor argues, they often limit our potential rather than expand it. His critique of “pictures under glass” especially resonated with me, because I use my phone and laptop every day, but I rarely think about how numb those interactions actually are. There is no real feeling, no texture, no sense of connection between my hands and what I’m creating.

I think this reading challenged me to imagine interfaces that feel alive, that respond to our touch and movement in meaningful ways. Victor’s idea that tools should “amplify human capabilities” made me wonder whether I am designing for convenience or for human expression. I started thinking about how interaction could involve more of the body, maybe through gestures, pressure, or sound, so that users could experience technology in a fuller, more emotional way. I also liked Victor’s reminder that “the future is a choice.” It gave me a sense of agency and responsibility as a future designer. Instead of waiting for big tech companies to define how we interact, I can be part of shaping alternatives that are more tactile, intuitive, and human-centered. This reading did not just critique existing designs; it inspired me to dream bigger and to treat design as a way of expanding what people can truly feel and do.

A follow-up article

These responses challenged the way I think about technology and its relationship to the human body. Victor’s insistence that our current interfaces are “flat and glassy” made me realize how limited most digital experiences truly are. I started questioning how often I accept these limitations without noticing them. The idea that our tools should adapt to us, not the other way around, feels both radical and necessary. What I found most striking was his defense of our hands and bodies as essential to understanding and creativity. It made me see touch not as something trivial, but as a form of intelligence. The thought that we could lose part of that richness by constantly interacting with lifeless screens feels unsettling.

As someone studying Interactive Media, I see this as a call to design technologies that reconnect people with the physical world. Instead of chasing the newest gadget, I want to think about how digital experiences could feel more alive, how they could move, resist, or respond in ways that make us aware of our own presence. The reflections did not just critique modern design, they opened a space for imagining interaction as something deeply human, sensory, and expressive.

Week 10 – Musical instrument

Group members: Aizhan and Darmen
Concept:

As we began planning how to present our musical instrument and the sound produced by the buzzer while using both digital and analog sensors, we decided to use a button as our digital sensor and a distance-measuring sensor as our analog sensor. The main concept is that the distance sensor detects how far an object or hand is, and based on that distance, it produces different notes through the buzzer. When the button is pressed, the system pauses for 300 milliseconds (0.3 seconds) and temporarily stops reading distance values, effectively muting the instrument. Pressing the button again reactivates the sensor, allowing the instrument to continue playing notes according to the object’s position. This simple toggle system makes it easy to control when the instrument is active, giving users time to experiment with the sound and movement.
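The code excerpt shared further below focuses on the note mapping, so here is a rough stand-alone sketch of the toggle behavior described above, written in JavaScript purely for illustration (the function and variable names are my own, and the 300 ms lockout mirrors the pause mentioned in the concept):

```javascript
// Toggle state machine: each press flips sensorActive, then further
// presses are ignored for `lockoutMs` so one press registers only once.
function makeToggle(lockoutMs = 300) {
  let sensorActive = true;      // instrument starts active
  let lastToggleAt = -Infinity; // timestamp of the last accepted press
  return function onButton(pressed, nowMs) {
    if (pressed && nowMs - lastToggleAt >= lockoutMs) {
      sensorActive = !sensorActive;
      lastToggleAt = nowMs;
    }
    return sensorActive; // true → read the distance sensor and play notes
  };
}
```

On the Arduino, the same idea would be implemented with digitalRead() on the button pin and millis() for the timestamps.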

Arduino Setup and Demonstration: 

Schematic Diagram:

Setup:

Video Demonstration

Coding:

Arduino file on Github

void loop() {
  // (the button-toggle code that flips sensorActive is omitted here;
  // see the full file on Github)
  if (sensorActive) {
    int distance = getDistance();
    Serial.println(distance);
    if (1 < distance && distance < 5) {
      tone(BUZZER, NOTE_C4);
    } else if (5 < distance && distance < 10) {
      tone(BUZZER, NOTE_D4);
    } else if (10 < distance && distance < 15) {
      tone(BUZZER, NOTE_E4);
    } else if (15 < distance && distance < 20) {
      tone(BUZZER, NOTE_F4);
    } else if (20 < distance && distance < 25) {
      tone(BUZZER, NOTE_G4);
    } else if (25 < distance && distance < 30) {
      tone(BUZZER, NOTE_A4);
    } else if (30 < distance && distance < 35) {
      tone(BUZZER, NOTE_B4);
    } else {
      noTone(BUZZER);
    }
  }
}

float getDistance() {
  float echoTime;           // time it takes for a ping to bounce off an object
  float calculatedDistance; // distance calculated from the echo time
  // send out an ultrasonic pulse that's 10 microseconds long
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  echoTime = pulseIn(ECHO_PIN, HIGH);   // how long the pulse takes to bounce back to the sensor
  calculatedDistance = echoTime / 55.2; // convert echo time to distance in cm
  return calculatedDistance;
}

We would say the highlight of the coding is the part where the distance-measuring sensor interacts with the buzzer to produce different musical notes based on how close or far an object is. We were excited to experiment with the distance-measuring sensor and explore how it could be programmed to produce sound through the buzzer. To get a better understanding of how to integrate the notes, we referred to Arduino tutorials.

In the code above, the sensor continuously measures the distance of an object and converts the time it takes for the sound wave to bounce back into centimeters. Depending on the measured distance, the buzzer plays different musical notes, creating a simple melody that changes as the object moves closer or farther away.
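The if/else ladder can also be thought of as a lookup over 5 cm bands. Here is a compact JavaScript sketch of that idea (the note names stand in for the NOTE_* frequencies on the Arduino; unlike the ladder above, distances that land exactly on a multiple of 5 fall into the lower band here):

```javascript
// Map a distance in cm to a note name using 5 cm bands,
// mirroring the if/else ladder in the Arduino sketch.
const NOTES = ["C4", "D4", "E4", "F4", "G4", "A4", "B4"];

function noteForDistance(distance) {
  if (distance <= 1 || distance >= 35) return null; // null → noTone()
  const band = Math.floor((distance - 1) / 5);      // 0..6
  return NOTES[band];
}
```

A table-driven version like this makes it easy to add more notes later: extending the melody only requires appending to the array rather than writing another else-if branch.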

Reflection:

We enjoyed experimenting with the distance-measuring sensor as it taught us how precise sensor readings can be transformed into meaningful outputs like sound, and combining it with the button control helped us manage the instrument’s activation smoothly. For future improvements, we would like to expand the range of notes to create more complex melodies and add LED lights that change color with the pitch to make the instrument more visually engaging. We could also experiment with different sensors, such as touch or motion sensors, to add more layers of interactivity. Finally, refining the accuracy and response speed of the sensor would make the sound transitions smoother and enhance the overall experience.

Week 9 – Reading Reflection

Physical Computing’s Greatest Hits (and Misses)

Reading Physical Computing’s Greatest Hits (and Misses) really made me reflect on the role of human presence and movement in interactive design. The article explores recurring motifs in physical computing, such as theremin-style sensors, body-as-cursor interfaces, tilty tables, and mechanical pixels, and evaluates which ideas succeed and which fall short. What struck me most was the idea that interaction alone is not meaningful unless it is framed with intention or context.  I found this insight particularly relevant because gestures, motion, and bodily engagement only carry meaning when integrated into a space or narrative. The article also emphasizes that even commonly used ideas can be made fresh through variation and creativity.

The discussion of emotionally responsive projects, like “remote hugs”, also inspired me to think about the potential of physical computing to create connection and presence. It made me consider designing experiences where participants’ actions are not only triggers for a response but also carriers of meaning, emotion, or narrative. I found myself imagining interactive installations or performance spaces where movement, gesture, and proximity could communicate emotion or tell a story, giving participants a sense of agency and contribution. Overall, the article reinforced the importance of centering human input and intention over technical complexity. It motivated me to experiment more boldly with interactive media, blending technology, space, and human engagement in ways that feel purposeful, immersive, and emotionally resonant.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

When I was reading Making Interactive Art: Set the Stage, Then Shut Up and Listen, I noticed it frames the audience or participant as a co-creator, with the creator’s role being to design opportunities for exploration and discovery. The article encouraged setting the stage, providing affordances, and then stepping back to let participants engage on their own terms. This concept resonated deeply with me because I often feel the need to over-explain or control how people interact with my work, whether in interactive media projects, installations, or themed environments. Learning to trust participants’ curiosity and creativity is both challenging and exciting, and it made me rethink how I approach design: sometimes the most compelling experiences arise when the creator resists guiding every step and instead observes how others explore, interpret, and respond.

I also liked the idea  of “listening” to participant interaction. Observing how people engage, adapt, or even misuse an interactive installation can reveal insights the creator never intended, and these discoveries can guide future iterations. This connects to my interests in performance and immersive storytelling because, in both cases, the audience’s reactions shape the experience. It also made me reflect on how I design spaces and experiences outside of class projects, including themed parties or interactive setups, where I can experiment with encouraging participation rather than prescribing behavior. The article inspired me to embrace unpredictability, co-creation, and emergent experiences, reminding me that interaction is not just about technology or novelty, it is about creating a dynamic relationship between the participant, the space, and the narrative. Now, I want to apply this mindset to my projects, designing experiences where participants’ actions carry weight and meaning, and where discovery becomes a central part of engagement.

Week 9 – Analog input & output

Concept:

For my project, I wanted to create a system that could control multiple LEDs in both a digital and an analog fashion, and I drew inspiration from traffic lights. Instead of just two LEDs, I decided to use three LEDs to mimic a simplified traffic light sequence. When the push button is pressed, the system cycles through the lights in order: red → yellow → green, which demonstrates digital control, as each LED turns fully on or off with each button press.

For the analog aspect, I incorporated a potentiometer connected to an analog input pin on the Arduino. By adjusting the potentiometer, users can control the brightness of the LEDs, allowing for smooth, variable light intensity. Together, the push button and potentiometer create an interactive system: pressing the button cycles through the traffic light colors, while turning the potentiometer adjusts their brightness. This combination not only makes the project functional and educational but also connects to the familiar real-world concept of traffic lights.

Arduino Setup and Demonstration: 

Setup

Arduino video

Hand-drawn schematic

Coding:

Arduino File on Github

// Pins
const int POTENTIOMETER_PIN = A0; // Analog input pin for potentiometer
const int BUTTON_PIN = 2; // Digital input pin for push button
const int RED = 5;  // Digital output pin for RGB LED (red)
const int GREEN = 6;  // Digital output pin for RGB LED (green)
const int YELLOW = 9; // Digital output pin for RGB LED (yellow)

// Variables
int potentiometerValue = 0; // Potentiometer value
bool buttonState = false; // Button state
int colorIndex = 0; // Index of current color

void setup() {
  pinMode(POTENTIOMETER_PIN, INPUT);
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(RED, OUTPUT);
  pinMode(GREEN, OUTPUT);
  pinMode(YELLOW, OUTPUT);
}

void loop() {
  potentiometerValue = analogRead(POTENTIOMETER_PIN);
  
  // Map potentiometer value to LED brightness
  int brightness = map(potentiometerValue, 0, 1023, 0, 255);
  
  // Set the LED color based on the current color index
  switch (colorIndex) {
    case 0:  // Red
      analogWrite(RED, brightness);
      analogWrite(GREEN, 0);
      analogWrite(YELLOW, 0);
      break;
    case 1:  // Green
      analogWrite(RED, 0);
      analogWrite(GREEN, brightness);
      analogWrite(YELLOW, 0);
      break;
    case 2:  // Yellow
      analogWrite(RED, 0);
      analogWrite(GREEN, 0);
      analogWrite(YELLOW, brightness);
      break;
  }

  // Check button state
  if (digitalRead(BUTTON_PIN) == LOW) {
    if (!buttonState) {
      // Toggle color index
      colorIndex = (colorIndex + 1) % 3;
      buttonState = true;
      delay(200);
    }
  } else {
    buttonState = false;
  }
}

For this code, I created a circuit where I can control three LEDs—red, yellow, and green using a potentiometer and a button. The potentiometer adjusts the brightness of the active LED, while the button lets me switch between the LEDs. Each LED is connected to separate pins on the Arduino (5, 6, and 9), and the potentiometer is connected to the analog input A0. By pressing the button, I can cycle through the colors, and by turning the potentiometer, I can smoothly change how bright each LED glows.

Reflection:

Through this project, I learned how digital and analog inputs can work together to create an interactive system. It was rewarding to see how the push button and potentiometer complemented each other, one controlling the LED sequence and the other adjusting brightness. This helped me better understand how sensors and user inputs can bring a project to life.
For future improvements, I would add a short delay or debounce routine to make the button response smoother and more reliable, and use PWM pins consistently for better brightness control. In the future, I could also expand the project by adding automatic timing to simulate real traffic light behavior or incorporating sensors to make it more dynamic and realistic.

Week 8 – Reading Reflection

Her Code Got Humans on the Moon:

Margaret Hamilton’s story impressed me with how much dedication and focus one person can bring to their work. She did not see programming as a small or temporary job, she treated it as something worth building a whole discipline around. At a time when few people even understood what software was, she worked with a sense of seriousness and care that helped define the entire field. I found it admirable that she led a team in such a demanding environment, while also constantly pushing for precision and reliability in every line of code. It must have taken even more determination to do this in a male-dominated field, where her ideas were often questioned or overlooked. Yet she proved that talent and persistence can speak louder than bias.

The moment that stayed with me most was when people told her, “That would never happen.” She had warned about a potential error that others dismissed as impossible, but it did happen during the Apollo 11 mission. Because of her preparation, the software knew how to handle it, and the astronauts landed safely. This part made me reflect on how important it is to think beyond what seems likely or convenient. Her ability to imagine every possible mistake shows not only intelligence but humility, the awareness that humans and systems can fail, and that good work anticipates that. Hamilton showed that real achievement does not come from recognition, but from persistence and attention to detail. Even when others doubted her, she stayed focused on what she believed was right. That quiet confidence and responsibility are qualities I hope to develop in my own work.

Emotions & Design: Attractive things work better

This reading made me think deeply about how design affects both our emotions and our behavior. One of the main ideas is that attractive things work better because they make people feel happy and confident. The author explains that when we feel good, our minds become more open and creative, but when we are stressed, we tend to think narrowly and make mistakes. I found this especially interesting because it shows that emotions are not separate from thinking, they actually shape how we use and understand things. Another important idea the author discusses is that good design balances beauty and function. This made me reflect on how I interact with everyday objects. For example, I prefer using items that are both practical and attractive whether it’s a tea set, a notebook, or even my phone interface. When something looks nice, I automatically treat it with more care and feel more motivated to use it.

I also strongly connected with the author’s point about context and mood. He writes, “Design matters, but which design is preferable depends upon the occasion, the context, and above all, upon my mood.” This reminded me of how Kazakh families choose different tea sets depending on the situation. When it is just family, they use the simplest and fastest set. But when guests come, they always bring out the most beautiful one to show respect and hospitality. Another part that stood out to me was how the author connects pleasure with usability. He suggests that when something looks good, we are more tolerant of small problems. I realized this is true for me too, I do not mind if a pretty cup is a bit heavier or a stylish app takes a second longer to load, because its beauty gives me a pleasant feeling. I even change my tea sets every season, one for winter, spring, summer, and fall, because I enjoy drinking from something that matches the season’s atmosphere. The same goes for digital things: an attractive design makes me feel happier and more productive.

Week 8 – Unusual switch

Concept:

I designed a project inspired by one of my favorite arm exercises. The idea is to make the arm’s movement interact with light using an Arduino. I attached pieces of foil and wires on both sides of the arm at a specific distance. As the person bends their arm, the foil pieces move closer together. When the arm is fully bent, the foils touch and complete the circuit, causing both the yellow and blue LEDs to turn on. At the same time, the yellow LED starts blinking while the blue LED stays steadily lit. This setup transforms a simple arm exercise into an interactive experience. The lights provide visual feedback, it also makes the activity more engaging and helps represent effort and motion in a creative, easy-to-understand way.

Coding:

Arduino file on Github

int ledBlue = 8;      // Blue LED pin
int ledYellow = 4;    // Yellow LED pin
int foilSwitch = 2;   // Foil switch pin

void setup() {
  pinMode(ledBlue, OUTPUT);
  pinMode(ledYellow, OUTPUT);
  pinMode(foilSwitch, INPUT_PULLUP); // enable the internal pull-up resistor
}

void loop() {
  int switchState = digitalRead(foilSwitch); // read foil switch

  if (switchState == LOW) {        // foils touching: circuit closed
    digitalWrite(ledBlue, HIGH);   // blue stays on
    digitalWrite(ledYellow, HIGH); // yellow blinks every half second
    delay(500);
    digitalWrite(ledYellow, LOW);
    delay(500);
  } else {                         // foils apart: both LEDs off
    digitalWrite(ledBlue, LOW);
    digitalWrite(ledYellow, LOW);
  }
}

I defined three integers: ledBlue for the blue LED on pin 8, ledYellow for the yellow LED on pin 4, and foilSwitch for the foil sensor on pin 2. Then, in the setup part, I set the LEDs as outputs and used the internal pull-up resistor for the foil switch so it can detect when the foils touch each other. In the loop, I made the Arduino constantly read the state of the foil switch. When the foils touch, the circuit closes, and both LEDs turn on, the blue LED stays on, while the yellow LED blinks every half second. When the foils are not touching, both LEDs stay off.

Setup:

Photo Setup

Demonstration Video

Reflection:

This project taught me how simple electronics can respond to physical movement and turn a regular arm exercise into an interactive activity. I learned how to use an Arduino to read input from a foil switch and control LEDs, as well as the difference between input and output pins and how HIGH and LOW signals work with a pull-up resistor. For future improvements, I could also try different colors or numbers of LEDs to show progress in a clearer way. Another improvement is to make the device more comfortable to wear on the arm, so it can be used easily during exercises. I could also practice adjusting the code to make the lights respond faster and more smoothly to the arm movement.

Midterm Project – Chic & Click

Concept:

Chic & Click is an interactive dress-up game where players create stylish outfits by mixing and matching hats, tops, and skirts on a mannequin. The project is inspired by the dressing games I loved playing as a child, which sparked my interest in fashion and interactivity. Before building the game, I researched similar games and gathered references to refine the concept of changing outfits, making sure that each clothing piece fits properly and looks visually appealing (I attached the inspiration image below). The mission of the game is to provide a fun and creative experience that allows players to explore their personal style. Interactivity is at the core of the game: players click on the hat, top, or skirt areas to cycle through clothing options, accompanied by audio feedback and visual effects like neon glows and a camera flash when taking a photo of their final outfit. 

Inspiration from the dressing game:

Sketch:

Link to the sketch: https://editor.p5js.org/Aizhan/sketches/h5QYwQTMS

Key Elements of the Game:

Starting Page:
The game begins with a clean and minimalist start page designed to be visually pleasant, using pastel and calm colors throughout the game. On this page, players see the game cover and instructions, along with a “Start” button. When pressed, it transitions to the main playing page. The background image was designed using a ChatGPT image generator, while the button is created using shapes and text. The game can also be played in full-screen mode by pressing the “F” key, providing a bigger and more immersive experience.

Playing Page:
On the playing page, players can interact with the mannequin to change outfits. The clothing items—hats, tops, and skirts/pants were designed in Canva, with five options for each category, giving a total of 15 different clothing pieces. When a player clicks on a clothing area, they can go through the different options, and after the fifth item, it loops back to the first. The clothes come in various colors and styles, and each click plays a game click sound, making the game more engaging and interactive.

Result Page:
After clicking the “Finish” button, the chosen outfit is displayed on a result page with a photo studio background. A fun gaming song starts playing as the page appears, creating a celebratory mood. This page includes a “Take a Photo” button,  which triggers a camera sound, a visual flash effect, and automatically downloads the screenshot to the user’s computer as “MyOutfit.png.” This allows players to save and share their styled outfits. The second button of the page is “Finish”, and as the users press the button, they move on to the next page.

Restart Page:
The restart page thanks the player for playing and provides a “Restart” button. The background image remains consistent with the aesthetic of the game, maintaining the calm pastel theme, and allows players to start a new round of outfit creation easily.

Code snippet and parts I am proud of:

I am proud of how I implemented the camera effect and photo feature. The flash effect combined with the screenshot download feels polished and gives the game a fun feeling. To be honest, this was a last-minute addition, as I initially only had music and the background. The p5 references and tutorials really helped me bring it to life.

// Camera Effect
function drawCameraEffect() {
  let elapsed = millis() - flashStartTime;
  if (elapsed < 500) { // flash lasts 0.5 seconds
    fill(255, 255, 255, 150);
    rect(0, 0, width, height); // semi-transparent white overlay
    image(cameraImage, width / 2 - 450, height / 2 - 400, 900, 750); // display camera overlay
  } else if (showFlash) { // save screenshot once the flash ends
    saveCanvas(cnv, "MyOutfit", "png");
    showFlash = false;
  }
}

I am also proud of how I handled the clothing interactivity. Using arrays and a simple .next() function to cycle through hats, tops, and skirts kept the code clean and easy to manage, while keeping the gameplay smooth and responsive.

// Clothing class 
class Clothing {
  constructor(images, offsetX, offsetY) {
    this.images = images;       // array of clothing images
    this.offsetX = offsetX;     // array of X offsets for each image
    this.offsetY = offsetY;     // array of Y offsets for each image
    this.current = 0;           // index of the currently displayed clothing
    this.show = false;          // whether to display this clothing
  }

  // Loops back to the first item after the last one
  next() {
    this.current = (this.current + 1) % this.images.length;
  }

  // Display the current clothing item on the canvas
  display(x, y, w, h, extraOffsetY = 0) {
    if (this.show) { // only draw if show is true
      let img = this.images[this.current];
      let aspect = img.width / img.height;       // maintain aspect ratio
      let targetW = w;
      let targetH = targetW / aspect;
      let offsetX = this.offsetX[this.current] || 0; // use offset for proper alignment
      let offsetY = this.offsetY[this.current] || 0;
      image(img, x + offsetX, y + offsetY + extraOffsetY, targetW, targetH);
    }
  }
}
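As a sketch of how these Clothing instances might be cycled on click: the region bounds and the instance names (hats, tops, skirts) below are assumptions for illustration, not the exact values from my sketch. The hit-testing is pulled into a pure helper so it can be reasoned about separately from p5.

```javascript
// Returns which clothing slot a click lands in, or null if none.
// Region boundaries here are illustrative placeholders.
function slotAt(mx, my) {
  if (my > 50 && my < 150) return "hat";
  if (my > 150 && my < 320) return "top";
  if (my > 320 && my < 480) return "skirt";
  return null;
}

// p5 click handler: cycle the clicked slot to its next item.
// hats, tops, skirts are assumed Clothing instances created in setup().
function mouseClicked() {
  const slot = slotAt(mouseX, mouseY);
  if (slot === "hat") { hats.show = true; hats.next(); }
  else if (slot === "top") { tops.show = true; tops.next(); }
  else if (slot === "skirt") { skirts.show = true; skirts.next(); }
  // clickSound.play(); // the game click sound mentioned above
}
```

Keeping `slotAt()` separate from the p5 event handler also makes the region logic easy to adjust when clothing positions change.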

I was also inspired by other students’ work to add neon hover effects on the buttons, which were fun to code and added polish without making the program more complicated.

function applyNeonEffect(btn) {
  let x = btn.x, y = btn.y, w = btn.width, h = btn.height;
  
  // Check if mouse is over the button
  if (mouseX > x - 20 && mouseX < x + w + 20 && mouseY > y - 20 && mouseY < y + h + 20) {
    // Apply neon glow effect
    btn.style("box-shadow", "0 0 10px #fff, 0 0 20px #fff, 0 0 30px #fff, 0 0 50px #fff");
  } else {
    // Remove glow when mouse is not over
    btn.style("box-shadow", "none");
  }
}

Problems/Future Improvements:

One of the most difficult parts of this project was getting the clothes to fit correctly on the mannequin. Each clothing image had a different size and shape, so I spent several hours trying to align them properly. At first, it was tricky because using the same coordinates for every clothing item caused them to look off. I solved this by creating offset arrays for X and Y positions for each clothing item. For example, topOffsetsX, topOffsetsY, skirtOffsetsX, skirtOffsetsY, and hatOffsetsY allowed me to manually adjust the position of each item so it would sit correctly on the mannequin.
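The offset idea can be sketched like this. The numbers below are made up for illustration; the real values were tuned by hand per item.

```javascript
// Illustrative per-item offset arrays (one entry per clothing image).
const topOffsetsX = [0, -5, 3, 0, -2];
const topOffsetsY = [10, 8, 12, 10, 9];

// Mirrors the fallback used in Clothing.display():
// a missing offset simply becomes 0 instead of breaking the draw.
function offsetFor(offsets, index) {
  return offsets[index] || 0;
}
```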

Regarding future improvements, I would focus on a few key areas. One would be adding more clothing options and categories, like shoes, accessories, or jackets, to give the game more variety and customization. I could also implement a drag-and-drop feature so users can position clothes more freely instead of just clicking to cycle through them. Overall, this project was a great way for me to combine coding and design. I learned how to manage interactive elements, solve alignment issues, and create a smooth user experience. It also improved my problem-solving skills, especially when dealing with image alignment and user interactions. This midterm game gave me a better understanding of how design and programming can come together to create an engaging user experience.
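A drag-and-drop version could be sketched roughly like the class below. This is a minimal, hypothetical Draggable helper, not part of the current game; in p5, `mousePressed()`, `mouseDragged()`, and `mouseReleased()` would forward `mouseX`/`mouseY` to it.

```javascript
// Minimal drag-and-drop sketch (hypothetical future feature).
class Draggable {
  constructor(x, y, w, h) {
    this.x = x; this.y = y;
    this.w = w; this.h = h;
    this.dragging = false;
    this.dx = 0; this.dy = 0; // grab point relative to top-left corner
  }
  hits(mx, my) {
    return mx > this.x && mx < this.x + this.w &&
           my > this.y && my < this.y + this.h;
  }
  press(mx, my) {
    if (this.hits(mx, my)) {
      this.dragging = true;
      this.dx = mx - this.x;
      this.dy = my - this.y;
    }
  }
  drag(mx, my) {
    // Follow the mouse, preserving the original grab offset
    if (this.dragging) {
      this.x = mx - this.dx;
      this.y = my - this.dy;
    }
  }
  release() { this.dragging = false; }
}
// In p5: mousePressed() -> item.press(mouseX, mouseY);
//        mouseDragged() -> item.drag(mouseX, mouseY);
//        mouseReleased() -> item.release();
```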

Midterm Progress

Concept:

I was inspired by a dress-up game that I liked to play as a child; I attached the image below. In my version, players can click on a character’s shirt, pants, and shoes to change their colors and create different outfits. The game is simple and interactive, allowing players to explore many combinations and experiment with different styles. It has a start screen to begin the game, a dressing area where the character is displayed with a calm background, and an end screen with a restart option so the game can be played again. I wanted to recreate the fun and creativity I experienced as a child, while keeping the game easy to use and visually pleasing, and to capture the joy of trying new looks and making choices in a playful and colorful way.

Inspiration:

Design and User Interaction:

The game has a simple and playful design, using soft and pleasant colors to create a friendly atmosphere. Most of the visuals are based on images that I created myself and uploaded into the project. Currently, the character is represented with basic shapes that change color when the user clicks on them, giving a simple interactive experience. In the future, I plan to replace the shapes with a full mannequin, where each part of her clothing will change whenever clicked. I will also create and upload a full set of clothing images into the p5 sketch, allowing for a more detailed and visually appealing dress-up experience.

The game itself is highly interactive, allowing players to click directly on the character’s shirt, pants, and shoes to change colors. Buttons like PLAY, FINISH, and RESTART respond to clicks to move between game states. This interactivity makes the game engaging, as players can experiment with different outfit combinations and immediately see the results on the character.

function mouseClicked() {
  if (gameState === "start" && playButton.clicked(mouseX, mouseY)) {
    playButton.action();
  }
  else if (gameState === "playing") {
    // Change clothes when clicking on body parts
    if (mouseX > width / 2 - 50 && mouseX < width / 2 + 50 && mouseY > 300 && mouseY < 400) {
      currentPants = (currentPants + 1) % pantsColors.length; // Change pants
    }
    else if (mouseX > width / 2 - 50 && mouseX < width / 2 + 50 && mouseY > 150 && mouseY < 300) {
      currentShirt = (currentShirt + 1) % shirtColors.length; // Change shirt
    }
    else if (mouseX > width / 2 - 50 && mouseX < width / 2 + 50 && mouseY > 400 && mouseY < 500) {
      currentShoes = (currentShoes + 1) % shoesColors.length; // Change shoes
    }
  }
}

Sketch:

The most frightening part and what I did to reduce this risk:

One of the most challenging parts was managing the game states (start screen, playing screen, and end screen), because each screen had different buttons and interactions. It was difficult to make sure the right buttons showed at the right time and that clicking them led to the correct screen.

I solved this by creating a Button class to handle all clickable buttons in the same way and by using separate functions to draw each screen. This kept the code organized and made it easy to add or change buttons later. I also made sure the mouseClicked() function only responded to buttons for the current screen. As a result, the screens change smoothly, the game feels easy to use, and the interactions are clear for the player.
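The Button class itself is not shown in this post, but based on how it is used in the `mouseClicked()` snippet above (`playButton.clicked(mouseX, mouseY)` and `playButton.action()`), a plausible shape for it might be the following. The exact fields and any drawing code are assumptions.

```javascript
// A plausible reconstruction of the Button class described above.
class Button {
  constructor(x, y, w, h, label, action) {
    this.x = x;
    this.y = y;
    this.w = w;
    this.h = h;
    this.label = label;
    this.action = action; // callback to run when this button is pressed
  }

  // Hit test used by mouseClicked() to decide whether this button fired
  clicked(mx, my) {
    return mx > this.x && mx < this.x + this.w &&
           my > this.y && my < this.y + this.h;
  }
}
```

Because every screen's buttons share this one class, each screen-drawing function only needs to render its own buttons, and `mouseClicked()` only checks the buttons belonging to the current `gameState`.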

Reflection and Future Improvements:

So far, I really like the concept of my project and I’m excited to experiment and see the final result. For future improvements, I plan to replace the simple shapes with detailed images of the clothes, making the character and outfits more visually appealing. I also want to add sound effects that play when the user clicks on each clothing item to make the game more interactive. Additionally, I would like the character to have a speech bubble with text whenever the user clicks on her. These features will make the game more dynamic, engaging, and fun for players.