While thinking about my Final Project, I decided that I want to do either something useful or something connected to art/music. So far I have two ideas, and I will decide between them during the upcoming week.
1) Useful – a Morse code (or something similar) translator. I will build a physical device that takes Morse code as input (dashes and dots entered by pressing a button for a long or short period of time) and translates it into words on the p5.js screen. At the same time, a person will be able to type text on the laptop and have it played back as sound through p5.js (a rough timing sketch for this idea is included below).
2) Art/Music – I want to combine the two by expanding on the use of sensors. I will connect several distance/light sensors and let the user control both the music and the drawing on the p5.js canvas. Just as in the assignment where we played the Jingle Bells song, I will use the sensors to speed up the music, change its tone, and layer additional sounds on top of the song that is playing so the user can create their own music. At the same time, the input will be reflected on the canvas as a unique kind of art. This is a very ambitious project that I am not 100% sure how to accomplish, but I will do my best to think it through during the week.
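To make the Morse code idea more concrete, the following is a rough p5.js timing sketch (not final code). It assumes the Arduino simply prints the button state over serial ("1" while the button is held, "0" when released), that setUpSerial() and readSerial() come from the web-serial library we use in class, and that 250 ms is a placeholder cutoff between a dot and a dash that would need tuning:
let pressStart = 0; // millis() when the button went down
let buttonDown = false; // current button state
let symbols = ""; // accumulated dots and dashes, e.g. ".-"
function setup() {
  createCanvas(400, 200);
  textSize(24);
}
function draw() {
  background(240);
  text(symbols, 20, height / 2); // show the Morse collected so far
}
function keyPressed() {
  if (key == " ") setUpSerial(); // start the serial connection
}
function readSerial(data) {
  if (data == null) return;
  let pressed = int(trim(data)) == 1;
  if (pressed && !buttonDown) {
    pressStart = millis(); // the button just went down
  } else if (!pressed && buttonDown) {
    let held = millis() - pressStart; // the button was just released
    symbols += held < 250 ? "." : "-"; // short press = dot, long press = dash
  }
  buttonDown = pressed;
}
Translating the accumulated dots and dashes into letters would then just be a lookup table keyed on each symbol group.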
The Traffic Light Control Game is an interactive simulation where a car on the p5.js screen reacts to traffic light LEDs controlled by an Arduino Uno. Players use keyboard arrow keys to control the car’s movement, adhering to basic traffic light rules:
Red LED: Stop the car.
Green LED: Move the car.
Yellow LED: Serves as a warning with no required interaction.
The game emphasizes real-time interaction between physical components and digital visuals, showcasing the integration of hardware and software.
Key Features:
Traffic Light Simulation:
Red, yellow, and green LEDs simulate a real-world traffic light.
The lights change sequentially at predefined intervals.
Interactive Car Movement:
Players use arrow keys to control the car:
Up Arrow: Move forward.
Down Arrow: Stop the car.
The car’s behavior must match the traffic light signals.
Real-Time Feedback:
If the car moves during a red light or stops during a green light, a buzzer sounds to indicate a violation.
Game Over:
After three violations, the game ends with a “Game Over” screen.
Objective:
The goal is to follow traffic light rules accurately and avoid violations. The game offers an educational yet engaging experience, simulating real-world traffic scenarios.
Technical Components:
Hardware:
Arduino Uno:
Controls traffic light LEDs and buzzer.
3 LEDs:
Represent traffic lights (red, yellow, green).
Buzzer:
Provides auditory feedback for rule violations.
Resistors:
Limit current to the LEDs and buzzer.
Breadboard and Wires:
Connect and organize the components.
Software:
Arduino IDE:
Manages traffic light logic and sends the light states to p5.js via serial communication.
p5.js:
Displays the car and road.
Handles player input and real-time car behavior based on the light states.
Implementation Plan:
1. Traffic Light Control:
The Arduino controls the sequence of LEDs:
Green for 5 seconds.
Yellow for 3 seconds.
Red for 5 seconds.
The current light state is sent to p5.js via serial communication.
2. Car Movement:
The p5.js canvas displays:
A road with a car.
The current traffic light state using on-screen indicators.
Arrow keys control the car’s position:
Up Arrow: Move forward.
Down Arrow: Stop.
3. Feedback System:
If the car moves during a red light or doesn’t move during a green light:
A buzzer sounds via Arduino.
Violations are logged, and after three violations, the game ends with a “Game Over” message.
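To get a feel for the p5.js side of this plan, the following is a minimal, unpolished sketch. It assumes the Arduino prints the current light as a single letter (“R”, “Y”, or “G”) per line, that p5 writes back “1\n” when a violation happens so the Arduino can sound the buzzer, and that setUpSerial(), readSerial(), and writeSerial() come from the class web-serial library; the one-second cooldown between violation checks and the key mapping are placeholders:
let lightState = "R"; // last light received from the Arduino
let carX = 50; // car position on the road
let violations = 0; // rule violations so far
let lastViolation = 0; // time of the last counted violation
function setup() {
  createCanvas(640, 240);
  textSize(18);
}
function draw() {
  background(220);
  if (violations >= 3) {
    text("Game Over", width / 2 - 50, height / 2);
    return;
  }
  let moving = keyIsDown(UP_ARROW); // holding Up drives the car, releasing it stops
  if (moving) carX = (carX + 2) % width;
  // Check the traffic rules, counting at most one violation per second
  let violating = (lightState == "R" && moving) || (lightState == "G" && !moving);
  if (violating && millis() - lastViolation > 1000) {
    violations++;
    lastViolation = millis();
    writeSerial("1\n"); // tell the Arduino to sound the buzzer
  }
  fill(0);
  rect(carX, height / 2, 60, 30); // a very simple "car"
  text("Light: " + lightState + "   Violations: " + violations, 20, 30);
}
function keyPressed() {
  if (key == " ") setUpSerial(); // start the serial connection
}
function readSerial(data) {
  if (data != null && trim(data) !== "") lightState = trim(data);
}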
Expected Outcome:
Players will interact with a dynamic simulation where their actions on the keyboard directly correspond to the car’s behavior.
The integration of physical LEDs and buzzer with digital visuals will create a seamless interactive experience.
The project demonstrates a clear understanding of hardware-software integration and real-time interaction design.
Extensions (STILL THINKING ABOUT IT):
Scoring System:
Reward correct responses with points.
Dynamic Difficulty:
Reduce light duration intervals as the game progresses.
Enhanced Visuals:
Add animations for the car (e.g., smooth movement, brake effects).
My first idea was to develop an interactive game called “Connect Four.” Two players (or one player versus the computer) take turns dropping coins into a 7×6 grid displayed in p5.js. Each player is assigned a color, red or blue, and the goal is to be the first to connect four coins in a row, column, or diagonal; if the grid fills up first, the game ends in a tie. My design combines physical interaction with hardware that lets players select columns and insert coins, while the whole grid is displayed in p5.js in real time.
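To convince myself that the win detection is manageable, here is a minimal sketch of the four-in-a-row check on the p5.js side. It assumes the board is stored as a 7×6 array where board[col][row] is 0 for empty, 1 for red, and 2 for blue; the names are placeholders rather than final code:
const COLS = 7, ROWS = 6;
function checkWin(board, player) {
  // Directions to scan: along a row, along a column, and the two diagonals
  const dirs = [[1, 0], [0, 1], [1, 1], [1, -1]];
  for (let c = 0; c < COLS; c++) {
    for (let r = 0; r < ROWS; r++) {
      for (const [dc, dr] of dirs) {
        let count = 0;
        // Count up to four consecutive coins belonging to this player
        while (count < 4) {
          const cc = c + dc * count, rr = r + dr * count;
          if (cc < 0 || cc >= COLS || rr < 0 || rr >= ROWS) break;
          if (board[cc][rr] !== player) break;
          count++;
        }
        if (count === 4) return true;
      }
    }
  }
  return false;
}
// Quick check: red (1) occupies the bottom row of columns 0 to 3
let testBoard = Array.from({ length: COLS }, () => Array(ROWS).fill(0));
for (let c = 0; c < 4; c++) testBoard[c][0] = 1;
console.log(checkWin(testBoard, 1)); // true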
I hope to develop two modes:
1. Two-Player Mode: Two players alternate turns, dropping coins into the grid and racing to connect four before their opponent.
2. Single-Player Mode (vs. Computer): The player competes against the computer, which follows one of two strategies, an easy one and a difficult one, to accommodate different skill levels.
I hope that the physical interaction adds a unique experience and that the nature of the game makes it competitive and fun.
Second Idea
I am hoping to develop an interactive game called “NYUAD Puzzle Adventure.” The game involves solving digital jigsaw puzzles displayed in a p5.js sketch, controlled entirely through user input on the hardware. Players will use two adjustable controllers (potentiometers) to move puzzle pieces, one controlling horizontal movement and the other controlling vertical movement. A slide button will be used to select and lift a piece or release it into the desired position on the board (a minimal control sketch is included below).
The game will feature images of the NYU Abu Dhabi campus, and a timer will track how long players take to complete each puzzle. To make the game even more interactive, I will use hidden motors to provide physical feedback, such as vibrations, whenever players move pieces and whenever the puzzle is solved correctly.
Whenever a player completes the puzzle and sets a new record for the shortest solve time, a wooden box controlled by the hardware will open to reveal an NYU Abu Dhabi-inspired gift.
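To get a feel for the control scheme, the following is a minimal p5.js sketch. It assumes the Arduino prints one comma-separated line per loop in the form “potX,potY,button” (each potentiometer 0-1023, the slide button 0 or 1), that setUpSerial() and readSerial() come from the class web-serial library, and it uses a single placeholder piece instead of a full puzzle:
let cursorX = 0, cursorY = 0; // position set by the two potentiometers
let holding = false; // is the slide button engaged?
let piece = { x: 100, y: 100, w: 80, h: 80 }; // one placeholder puzzle piece
function setup() {
  createCanvas(640, 480);
}
function draw() {
  background(230);
  if (holding) {
    // While the slide button is engaged, the piece follows the pot-controlled cursor
    piece.x = cursorX - piece.w / 2;
    piece.y = cursorY - piece.h / 2;
  }
  fill(100, 150, 255);
  rect(piece.x, piece.y, piece.w, piece.h);
  fill(0);
  ellipse(cursorX, cursorY, 10, 10); // show where the controls are pointing
}
function keyPressed() {
  if (key == " ") setUpSerial(); // start the serial connection
}
function readSerial(data) {
  if (data == null) return;
  let parts = split(trim(data), ",");
  if (parts.length === 3) {
    cursorX = map(int(parts[0]), 0, 1023, 0, width);
    cursorY = map(int(parts[1]), 0, 1023, 0, height);
    holding = int(parts[2]) === 1;
  }
}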
I hope that with this design:
Players’ interaction with the puzzle through adjustable controls and physical feedback will make the game more engaging.
The storytelling nature of the puzzle, through the NYUAD images, and the rewarding mechanism will make it fun and relevant to players.
“The Glass Box” draws inspiration from some of the most renowned interactive installations that seamlessly blend art, technology, and human emotion. Works like Random International’s “Rain Room” and Rafael Lozano-Hemmer’s “Pulse” have redefined how art responds to and engages with the presence of its audience. These pieces demonstrate how technology can turn human interactions into immersive, deeply personal experiences. For instance, “Rain Room” creates a space where participants walk through a field of falling rain that halts as they move, making their presence an integral part of the art. Similarly, “Pulse” transforms visitors’ biometric data, like heartbeats, into mesmerizing light and sound displays, leaving an impression of their presence within the installation.
In this spirit, “The Glass Box” is conceived as an ethereal artifact—a living memory keeper that reacts to touch, gestures, sound, and even emotions. It is designed to transform fleeting human moments into tangible, evolving displays of light, motion, and sound. Inspired further by works like “Submergence” by Squidsoup, which uses suspended LEDs to create immersive, interactive environments, and TeamLab’s Borderless Museum, where visuals and projections shift dynamically in response to viewers, “The Glass Box” similarly blurs the line between viewer and art. It invites users to actively shape its form and behavior, making them co-creators of a dynamic, ever-changing narrative.
The central theme of “The Glass Box” is the idea that human presence, though transient, leaves a lasting impact. Each interaction—whether through a gesture, a clap, or an expression—is stored as a “memory” within the box. These memories, visualized as layers of light, sound, and movement, replay and evolve over time, creating a collaborative story of all the people who have interacted with it. For example, a joyful wave might create expanding spirals of light, while a gentle touch might ripple across the sculpture with a soft glow. When idle, the box “breathes” gently, mimicking life and inviting further interaction.
Key Features
Dynamic Light and Motion Response:
The Glass Box uses real-time light and motion to respond to user gestures, touch, sound, and emotions. Each interaction triggers a unique combination of glowing patterns, pulsating lights, and kinetic movements of the artifact inside the box.
The lights and motion evolve based on user input, creating a sense of personalization and engagement.
Emotion-Driven Feedback:
By analyzing the user’s facial expression using emotion recognition (via ml5.js), the box dynamically adjusts its response. For example:
A smile produces radiant, expanding spirals of warm colors.
A neutral expression triggers soft, ambient hues with gentle movements.
A sad face initiates calming blue waves and slow motion.
Memory Creation and Replay:
Each interaction leaves a “memory” stored within the box. These memories are visualized as layered patterns of light, motion, and sound.
Users can replay these memories by performing specific gestures or touching the box in certain areas, immersing them in a past interaction.
Interactive Gestural Control:
Users perform gestures (like waving, pointing, or swiping) to manipulate the box’s behavior. The ml5.js Handpose library detects these gestures and translates them into corresponding light and motion actions.
For example, a waving gesture might create rippling light effects, while a swipe can “clear” the display or shift to a new pattern (a minimal handpose sketch is included after this list).
Multi-Sensory Interactivity:
The box reacts to touch via capacitive sensors, sound via a microphone module, and visual gestures through webcam-based detection. This multi-modal interaction creates an engaging, immersive experience for users.
Dynamic Visual Narratives:
By combining input data from touch, gestures, and emotions, the box generates unique, evolving visual patterns. These patterns are displayed as 3D light canvases inside the box, blending aesthetics and interactivity.
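As a starting point for the gestural control, the following is a minimal p5.js sketch using the ml5.js handpose model (the 0.x API, loaded via a script tag in index.html). The “waving” detection here is only a rough heuristic, how far the wrist landmark moves horizontally between frames, and rippleStrength is a stand-in for whatever light and motion parameters the box would actually drive:
let video, handposeModel;
let predictions = [];
let lastPalmX = null;
let rippleStrength = 0; // stand-in for the box's light/motion response
function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  handposeModel = ml5.handpose(video, () => console.log("handpose model ready"));
  handposeModel.on("predict", (results) => (predictions = results));
}
function draw() {
  background(10);
  if (predictions.length > 0) {
    // landmarks[0] is the wrist point in the handpose output
    let palmX = predictions[0].landmarks[0][0];
    if (lastPalmX !== null) {
      rippleStrength += abs(palmX - lastPalmX) * 0.1; // faster waving, stronger ripple
    }
    lastPalmX = palmX;
  }
  rippleStrength *= 0.95; // fade back toward the idle "breathing" state
  noFill();
  stroke(255, 200, 100);
  ellipse(width / 2, height / 2, 50 + rippleStrength, 50 + rippleStrength);
}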
Reflecting on the insights from the “Design Meets Disability” document, it’s clear that not enough designs are created with disability in mind. The beginning of the book, which references San Francisco’s disability-friendly environment, reminds me of an interview between Judith Butler and Sunaura Taylor discussing how accessible San Francisco is compared to New York. This backdrop sets the stage for the book’s intriguing point about glasses. Glasses have significantly changed our views on vision impairment; they’re no longer seen as a taboo or an expensive burden. Thanks to their design evolution, people with vision impairments are not commonly tagged as disabled.
During a guest lecture at NYUAD, Professor Goffredo Puccetti, a graphic designer with visual impairments, shed light on the importance of inclusive design. His own experiences and professional expertise underscored how subtle design elements can vastly improve accessibility. He pointed out specific shortcomings at NYUAD, such as some door designs that fail to be truly disability-friendly. This gap between the institution’s inclusive intentions and their actual implementations has heightened my awareness of the practical challenges in achieving genuine accessibility.
Moreover, noise-canceling headphones have emerged as an assistive technology beneficial for individuals like my friend Shem, who has autism. She uses these headphones daily to work without distraction, showing how design can aid in overcoming some challenges posed by disabilities. However, mainstream designs often inadvertently promote inaccessibility, like buildings with stairs but no ramps, presenting significant barriers to the disabled community.
Even as NYUAD champions inclusion and accessibility, the actual campus design tells a different story. Those with permanent visual impairments struggle to access Braille signage without assistance, and the dining hall doors pose challenges for wheelchair users. This disparity prompts critical questions: What steps can institutions like NYUAD take to bridge the gap between their inclusive ideals and their physical implementations? How can designers, both current and future, better anticipate and address the diverse needs of their audience?
Understanding that there is no “one-size-fits-all” solution in design for disability, it becomes clear that more thoughtful methodologies and steps are needed. Designs should not only meet minimum standards of accessibility but should also strive to enhance autonomy and integration, ensuring that everyone can navigate spaces independently and with dignity.
This week’s reading, Design Meets Disability, made me think differently about how we design for people with disabilities. Instead of just focusing on making tools that work, the reading talks about making them look good too. One idea that stood out to me was how assistive devices, like hearing aids, can be designed to match the user’s style. This turns them from something people might feel shy about into something they’re proud to wear.
I also liked the focus on working directly with the people who will use these designs. When users are involved, the tools are not only more useful but also feel more personal and meaningful. For example, the way glasses became a fashion statement over time shows how design can change how we see things, not just how they work.
This reading made me think about my own projects and how I can use similar ideas. I want to make designs that are simple and easy to use but still look creative and fun. I also want to involve users more, so the designs feel like they belong to them, not just something made for them.
In the end, this reading reminded me that design isn’t just about fixing problems—it’s about improving lives in ways that make people feel seen and valued. It’s a small change in thinking but one that can make a big difference.
“make something that uses only one sensor on Arduino and makes the ellipse in p5 move on the horizontal axis, in the middle of the screen, and nothing on Arduino is controlled by p5”
The following code was utilized for this particular exercise:
let sensorValue = 0; // To store the sensor value from Arduino
function setup() {
createCanvas(640, 480);
textSize(18);
}
function draw() {
background(220);
if (!serialActive) {
text("Press Space Bar to select Serial Port", 20, 30);
} else {
text("Connected", 20, 30);
// Display the sensor value
text('Sensor Value: ' + sensorValue, 20, 50);
// Map the sensor value to the horizontal position of the ellipse
let ellipseX = map(sensorValue, 0, 1023, 0, width);
// Draw the ellipse in the middle of the canvas vertically
fill(255, 0, 0);
ellipse(ellipseX, height / 2, 50, 50);
}
}
function keyPressed() {
if (key == " ") {
setUpSerial(); // Start the serial connection
}
}
// This function is called by the web-serial library
function readSerial(data) {
if (data != null) {
let fromArduino = trim(data); // Trim any whitespace
if (fromArduino !== "") {
sensorValue = int(fromArduino); // Convert the sensor value to an integer
}
}
}
The following code was used in the Arduino IDE for this exercise:
// make something that uses only one sensor on Arduino and makes the ellipse in p5 move on the horizontal axis,
// in the middle of the screen, and nothing on arduino is controlled by p5
int sensorPin = A0; // Single sensor connected to A0
void setup() {
Serial.begin(9600);
}
void loop() {
int sensorValue = analogRead(sensorPin); // Read sensor value
Serial.println(sensorValue); // Send sensor value to p5.js
delay(50); // Short delay for stability
}
Exercise 2:
“make something that controls the LED brightness from p5”
The following code was used to make this exercise come to fruition:
let brightness = 0; // Brightness value to send to Arduino
let slider; // Slider element used to set the brightness
function setup() {
createCanvas(640, 480);
textSize(18);
// Create a slider to control brightness
slider = createSlider(0, 255, 0);
slider.position(20, 50);
}
function draw() {
background(220);
if (!serialActive) {
text("Press Space Bar to select Serial Port", 20, 30);
} else {
text("Connected", 20, 30);
// Display brightness value
text("Brightness: " + brightness, 20, 90);
// Update brightness from the slider
brightness = slider.value();
// Send brightness to Arduino
writeSerial(brightness + "\n");
}
}
function keyPressed() {
if (key == " ") {
setUpSerial(); // Start the serial connection
}
}
function readSerial(data) {
if (data != null) {
let fromArduino = trim(data); // Trim whitespace
brightness = int(fromArduino); // Parse data into an integer
}
}
The following Arduino code was used for this particular exercise:
//make something that controls the LED brightness from p5
int ledPin = 3;
void setup() {
Serial.begin(9600);
pinMode(ledPin, OUTPUT);
}
void loop() {
if (Serial.available()) {
int brightness = Serial.parseInt();
if (Serial.read() == '\n') {
brightness = constrain(brightness, 0, 255);
analogWrite(ledPin, brightness);
Serial.println(brightness); // Send brightness to p5.js
}
}
}
Exercise 3:
The following code, used for this exercise, is an alteration of Professor Aaron Sherwood’s gravity wind example:
let velocity;
let gravity;
let position;
let acceleration;
let wind;
let drag = 0.99;
let mass = 50;
let windSensorValue = 0; // Value from the wind sensor
let connectButton; // Button for connecting to the serial port
function setup() {
createCanvas(640, 360);
noFill();
position = createVector(width / 2, 0); // Initial position of the ball
velocity = createVector(0, 0); // Initial velocity
acceleration = createVector(0, 0); // Initial acceleration
gravity = createVector(0, 0.5 * mass); // Gravity force
wind = createVector(0, 0); // Initial wind force
// Create a button to initiate the serial connection
connectButton = createButton("Connect to Serial");
connectButton.position(10, 10);
connectButton.mousePressed(setUpSerial); // Trigger serial connection on button press
}
function draw() {
background(255);
if (!serialActive) {
text("Click 'Connect to Serial' to start", 20, 50);
return; // Exit the draw loop until the serial connection is established
}
// Map wind sensor value to wind force (affects horizontal movement)
wind.x = map(windSensorValue, 0, 1023, -1.5, 1.5); // Adjust force range as needed
// Apply forces
applyForce(wind); // Apply wind force
applyForce(gravity); // Apply gravity force
// Update velocity and position
velocity.add(acceleration);
velocity.mult(drag); // Apply drag (friction)
position.add(velocity);
acceleration.mult(0); // Reset acceleration
// Ball bounce logic (vertical boundary)
if (position.y > height - mass / 2) {
position.y = height - mass / 2; // Place the ball on the ground
velocity.y *= -0.9; // Reverse and dampen vertical velocity
// Notify Arduino to toggle the LED when the ball touches the ground
writeSerial("1\n"); // Send '1' to Arduino
} else {
// Ensure the LED is off when the ball is not touching the ground
writeSerial("0\n"); // Send '0' to Arduino
}
// Draw the ball
ellipse(position.x, position.y, mass, mass);
}
function applyForce(force) {
// Newton's 2nd law: F = M * A -> A = F / M
let f = p5.Vector.div(force, mass); // Scale force by mass
acceleration.add(f); // Add force to acceleration
}
// Reset the ball to the top of the screen when the space key is pressed
function keyPressed() {
if (key === " ") {
position.set(width / 2, 0); // Reset position to top center
velocity.set(0, 0); // Reset velocity to zero
mass = random(15, 80); // Randomize mass
gravity.set(0, 0.5 * mass); // Adjust gravity based on new mass
}
}
// Serial communication: Read sensor value from Arduino
function readSerial(data) {
if (data != null) {
let trimmedData = trim(data);
if (trimmedData !== "") {
windSensorValue = int(trimmedData); // Read wind sensor value
}
}
}
The following code was used in the Arduino IDE to bring this to life:
//gravity wind example
int ledPin = 2; // Pin connected to the LED
int windPin = A0; // Analog pin for the potentiometer (A0)
void setup() {
Serial.begin(9600); // Start serial communication
pinMode(ledPin, OUTPUT); // Set the LED pin as an output
digitalWrite(ledPin, LOW); // Turn the LED off initially
}
void loop() {
// Read the analog value from the potentiometer
int windValue = analogRead(windPin);
// Send the wind value to p5.js over serial
Serial.println(windValue);
// Check if a signal is received from p5.js for the LED
if (Serial.available()) {
char command = Serial.read(); // Read the signal from p5.js
if (command == '1') {
digitalWrite(ledPin, HIGH); // Turn on the LED when the ball touches the ground
} else if (command == '0') {
digitalWrite(ledPin, LOW); // Turn off the LED
}
}
delay(5); // Small delay for stability
}
The following schematic, provided in class, was used for all three exercises with slight modifications:
These are a few in-class activities Zavier and I did this week (and had to post), revolving around serial communication between p5 and Arduino.
# Exercise 1: Arduino Affecting p5
Task:
“Make something that uses only one sensor on Arduino and makes the ellipse in p5 move on the horizontal axis, in the middle of the screen, and nothing on arduino is controlled by p5.”
let xPos = 0;
function setup() {
createCanvas(600, 600);
noFill();
}
function draw() {
background(32, 64);
stroke("white")
ellipse(map(xPos, 0, 1023, 0, width), height/2, 100, 100);
// Turn the screen red to make it very clear that we aren't connected to the Arduino
if (!serialActive)
background(128, 0, 0);
}
function keyPressed() {
if (key == " ")
setUpSerial(); // Start the serial connection
}
function readSerial(data) {
if (data != null) // Ensure there's actually data
xPos = int(data);
}
# Exercise 2: p5 Affecting Arduino
Task:
“Make something that controls the LED brightness from p5.”
let xPos = 0;
let LEDBrightness = 0; // 0 - 255
function setup() {
createCanvas(600, 600);
}
function draw() {
if (keyIsDown(UP_ARROW) && LEDBrightness < 255)
LEDBrightness += 1;
else if (keyIsDown(DOWN_ARROW) && LEDBrightness > 0)
LEDBrightness -= 1;
// Just a visual indicator of the brightness level on p5
background(LEDBrightness);
fill(LEDBrightness < 128 ? 'white' : 'black')
text(LEDBrightness, 25, 25);
// Turn the screen red to make it very clear that we aren't connected to the Arduino
if (!serialActive)
background(128, 0, 0);
}
function keyPressed() {
if (key == " ")
setUpSerial(); // Start the serial connection
}
function readSerial(data) {
writeSerial(LEDBrightness);
}
# Exercise 3: Arduino and p5 Affecting Each Other
Demo:
Task:
“Take the gravity wind example and make it so every time the ball bounces one led lights up and then turns off, and you can control the wind from one analog sensor.”
Code:Arduino:
const int LED_PIN = 3;
const int POT_PIN = A0;
void setup() {
Serial.begin(9600);
pinMode(LED_PIN, OUTPUT);
pinMode(POT_PIN, INPUT);
// Start the handshake
while (Serial.available() <= 0) {
Serial.println("0"); // Send a starting message
delay(300); // Wait ~1/3 second
}
}
void loop() {
while (Serial.available()) {
int LEDState = Serial.parseInt(); // 0 or 1
if (Serial.read() == '\n') {
digitalWrite(LED_PIN, LEDState);
Serial.println(analogRead(POT_PIN));
}
}
}
p5:
let position, velocity, acceleration, gravity, wind; // vectors
let drag = 0.99, mass = 50, hasBounced = false;
function setup() {
createCanvas(640, 360);
noFill();
position = createVector(width/2, 0);
velocity = createVector(0,0);
acceleration = createVector(0,0);
gravity = createVector(0, 0.5*mass);
wind = createVector(0,0);
textAlign(CENTER);
textSize(18);
}
function draw() {
background(255);
applyForce(wind);
applyForce(gravity);
velocity.add(acceleration);
velocity.mult(drag);
position.add(velocity);
acceleration.mult(0);
fill(hasBounced ? 'green' : 'white')
ellipse(position.x,position.y,mass,mass);
if (position.y > height-mass/2) {
velocity.y *= -0.9; // A little dampening when hitting the bottom
position.y = height-mass/2;
if (!hasBounced && abs(velocity.y) > 1) {
hasBounced = true;
setTimeout(() => hasBounced = false, 100); // Set hasBounced to false after 0.1 s
}
}
if (!serialActive) {
background(8);
fill('white');
text("Press c to connect to the Arduino", width/2, height/2)
}
}
function applyForce(force){
// Newton's 2nd law: F = M * A
// or A = F / M
let f = p5.Vector.div(force, mass);
acceleration.add(f);
}
function keyPressed(){
if (key == ' '){
mass = random(15,80);
position.set(width/2, -mass);
velocity.mult(0);
gravity.y = 0.5*mass;
wind.mult(0);
} else if (key == 'c') {
setUpSerial(); // Start the serial connection
}
}
function readSerial(data) {
if (data != null) { // Ensure there's actually data
wind.x = map(int(data), 0, 1023, -2, 2);
writeSerial((hasBounced ? 1 : 0) + '\n');
}
}
The reading highlights some important points, especially on how we should rethink design for disability, emphasizing a balance between function and beauty. The examples given, such as the Eames’ leg splint, stood out to me because they show that disability products can be both useful and attractive. I was also inspired by the story of how glasses changed from a medical tool to a fashion item. This change made glasses a symbol of personal style rather than something to hide.
For my own designs going forward, I hope to focus on some key ideas from this text. First, I want to make designs that feel good to use—not only functional but also enjoyable and comfortable—so that even if a design is intended for people with disabilities, they feel comfortable using it. I plan to use creative solutions that combine usefulness with visual appeal. Second, I’ll seek to work with people from different backgrounds—such as artists, fashion designers, and people with disabilities—to create designs that are more thoughtful and inclusive. Lastly, I’ll avoid “one-size-fits-all” designs, instead creating products that allow people to show their unique personalities.
In the end, I believe designing for disability is a chance to make products that improve people’s lives in meaningful ways, so designers should prioritize combining beauty and function.
let circleX;
function setup() {
createCanvas(500, 500);
noFill();
}
function draw() {
background('white');
if (!serialActive) {
console.log('ARDUINO IS NOT CONNECTED'); //output to check if Arduino connected or not
}
if (serialActive) {
fill('violet')
ellipse(map(circleX, 0, 1023, 0, width), height / 2, 100, 100); // using map to make the circle move
console.log(circleX) //output position to check
}
}
function keyPressed() {
if (key == " ")
setUpSerial();
}
function readSerial(data) {
if (data != null) // make sure data actually arrived
circleX = int(data);
}
// ARDUINO CODE
/*
void setup() {
Serial.begin(9600);
pinMode(A0, INPUT);
}
void loop() {
Serial.println(analogRead(A0));
delay(5);
}
*/
Exercise 2:
let brightnessLVL = 0;
function setup() {
createCanvas(500, 500);
}
function draw() {
if (!serialActive) {
console.log('ARDUINO IS NOT CONNECTED')
}
if (keyIsDown(UP_ARROW) && brightnessLVL < 255) {
brightnessLVL += 1;
}
else if (keyIsDown(DOWN_ARROW) && brightnessLVL > 0) {
brightnessLVL -= 1;
}
console.log(brightnessLVL)
}
function keyPressed() {
if (key == " ")
setUpSerial(); // Start the serial connection
}
function readSerial(data) {
writeSerial(brightnessLVL + "\n"); // newline lets Serial.parseInt() on the Arduino stop at each value
}
// ARDUINO CODE
/*
void setup() {
Serial.begin(9600);
pinMode(10, OUTPUT);
}
void loop() {
analogWrite(10, Serial.parseInt());
Serial.println();
delay(1);
}
*/
Exercise 3:
let velocity;
let gravity;
let position;
let acceleration;
let wind;
let drag = 0.99;
let mass = 50;
let checkBounce;
let outputBounce;
function setup() {
createCanvas(640, 360);
noFill();
position = createVector(width/2, 0);
velocity = createVector(0,0);
acceleration = createVector(0,0);
gravity = createVector(0, 0.5*mass);
wind = createVector(0,0);
}
function draw() {
background(255);
applyForce(wind);
applyForce(gravity);
velocity.add(acceleration);
velocity.mult(drag);
position.add(velocity);
acceleration.mult(0);
ellipse(position.x,position.y,mass,mass);
if (!serialActive) {
console.log('ARDUINO IS NOT CONNECTED')
fill('red')
}
if (serialActive) {
textAlign(CENTER, TOP);
textSize(24);
fill('green');
text('ARDUINO IS CONNECTED', width / 2, 10);
}
if (position.y > height-mass/2) {
checkBounce = 1;
velocity.y *= -0.9; // A little dampening when hitting the bottom
position.y = height-mass/2;
}
else {
checkBounce = 0;
}
}
function applyForce(force){
// Newton's 2nd law: F = M * A
// or A = F / M
let f = p5.Vector.div(force, mass);
acceleration.add(f);
}
function keyPressed(){
if (key==' '){
mass=random(15,80);
position.y=-mass;
velocity.mult(0);
}
if (key == "h") {
setUpSerial();
}
}
function readSerial(data) {
if (data != null) {
wind.x = map(int(data), 0, 1023, -2, 2);
let sendToArduino = checkBounce + '\n';
writeSerial(sendToArduino);
}
}
// ARDUINO CODE
/*
const int LED_PIN = 10;
const int POT_PIN = A0;
void setup() {
Serial.begin(9600);
pinMode(LED_PIN, OUTPUT);
pinMode(POT_PIN, INPUT);
while (Serial.available() <= 0) {
Serial.println("0");
delay(300);
}
}
void loop() {
while (Serial.available()) {
int LEDtrigger = Serial.parseInt();
if (Serial.read() == '\n') {
digitalWrite(LED_PIN, LEDtrigger);
Serial.println(analogRead(POT_PIN));
}
}
}
*/
The articles profoundly challenged my assumptions about interaction design, particularly regarding our overreliance on flat touchscreen interfaces. While I’ve always appreciated the sleek aesthetics of modern devices, I now recognize how we’ve sacrificed the rich tactile experiences our hands are capable of.
The discussion about the trade-offs between physical and touch interfaces resonates strongly with my own experiences in technology. Like many others, I’ve noticed the satisfying feedback of mechanical keyboards versus the hollow experience of typing on glass surfaces. This observation extends beyond personal preference – it reflects a fundamental human desire for tactile feedback that current touch interfaces often fail to provide. In my own projects, I’m now exploring ways to incorporate haptic feedback and physical controls that complement, rather than replace, touch interfaces, understanding that different interaction methods serve different purposes and contexts.
The vision of future interfaces that better adapt to human capabilities has inspired me to think more boldly about interaction design. Rather than accepting the limitations of current technology, I’m now exploring how to create interfaces that engage multiple senses and leverage our natural ability to manipulate objects in three-dimensional space. This could mean developing prototypes that combine touch interfaces with physical controls, or experimenting with new forms of haptic feedback that provide more nuanced physical responses. The articles have helped me understand that the future of interaction design isn’t about choosing between physical and digital interfaces, but rather about finding innovative ways to blend them together to create more intuitive and satisfying user experiences.