I am ECSTATIC to say that I am finally done with this project! It has certainly been a dynamic experience.
As I had proposed, I thought of incorporating one of my favorite things ever into my final project – my love for penguins. Therefore, I decided to create a fun little game, inspired by Doodle Jump, where a penguin called Pengu has to hop across platforms.
A lot has changed since my previous User Proposal, as my idea is now fully fleshed out. In terms of the game itself, the primary objective is for Pengu to hop across different platforms until the timer ends — the player has to last sixty seconds. If the penguin falls off, they lose.
In terms of the physical implementation, this game has four buttons: Restart, Left, Right, and Jump.
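To show how those four buttons might reach the game, here is a minimal sketch of the p5.js serial handler, assuming the Arduino sends one comma-separated line of 0/1 button states per loop (restart, left, right, jump); the handler names restartGame, movePengu, and jump are placeholders for my actual game functions.
// Minimal sketch: map the four Arduino button states to game actions.
// Assumes each serial line looks like "0,1,0,0" (restart, left, right, jump).
function readSerial(data) {
  if (data == null) return;                  // nothing received this frame
  let states = data.trim().split(",").map(Number);
  if (states.length !== 4) return;           // ignore malformed lines
  let [restartBtn, leftBtn, rightBtn, jumpBtn] = states;
  if (restartBtn === 1) restartGame();       // Restart button
  if (leftBtn === 1) movePengu(-1);          // Left button
  if (rightBtn === 1) movePengu(1);          // Right button
  if (jumpBtn === 1) jump();                 // Jump button
}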
There were several challenges I faced, most of them to do with the game itself rather than the Arduino.
For example, I struggled with generating the actual platforms for the penguin to jump on. After I added the special ‘disappear’ platforms, the screen felt overcrowded. In addition, the penguin would sometimes start on a disappear platform and lose the game immediately, so I settled on a fixed set of three normal platforms at the start of the game.
I also struggled with removing platforms once the penguin moved up and making new ones appear. However, a friend taught me about the handy built-in concat() and filter() functions, as well as the spread operator, which I ended up finding very useful here.
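To make that recycling step concrete, here is a simplified sketch of the idea (not my exact final code), assuming each platform object stores its x/y position and type:
// Simplified platform-recycling sketch: drop platforms that have scrolled
// below the screen with filter(), then append freshly generated ones using
// the spread operator (equivalent to concat()).
function recyclePlatforms(platforms) {
  // keep only platforms still on (or just above) the screen
  let visible = platforms.filter(p => p.y < height + 20);
  // spawn one new platform above the screen for each one removed
  let fresh = [];
  for (let i = 0; i < platforms.length - visible.length; i++) {
    fresh.push({
      x: random(20, width - 80),
      y: random(-60, -10),
      type: random() < 0.25 ? 'disappear' : 'normal'
    });
  }
  return [...visible, ...fresh];   // same result as visible.concat(fresh)
}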
The initial idea for my final was to transform my midterm project from an offline PvE game into an engaging online PvP experience. Building upon that PvP framework, I realized in the first weeks of working on the final that the concept could also incorporate a physical robot that appears to operate one of the players within the game. This dual-player setup creates a dynamic competition between a human-controlled player and a robot-controlled player, leveraging the newly established online PvP mechanism. Since the physical installation is actually an illusion, the project also serves as a thought experiment to observe to what extent users discover this during the experience.
2. Project Demonstration
3. Implementation Details
Interaction Design
The key components include:
Game Logic (p5.js): Manages game states, player actions, and AI behaviors.
Robot Hand (Arduino): Translates game commands into physical movements by controlling servos that simulate key presses.
Serial Communication: Facilitates real-time data exchange between the p5.js application and the Arduino-controlled robot hand, ensuring synchronized actions.
Physical Installation and Arduino Integration
3D Printing:
Materials: PLA filaments
Process:
Experimental Print
Separate Print Of Joints and Body
Hinges Construction:
Materials: 3D-printed molds and hot glue gun.
Purpose: Form sturdy and flexible hinges for finger movements.
Process: Injected hot glue into the molds and installed the hinges between the joints.
Tendon Implementation:
Materials: Fishing lines.
Purpose: Act as tendons to control finger movements.
Process: Attached fishing lines to servos and the tips of the fingers.
Servo Control:
Components: Six 9g servo motors.
Control Mechanism: Driven by serial commands from the Arduino, allowing the robot hand to mimic key presses (`w`, `a`, `s`, `d`, `space`, `x`) by turning to specific angles.
Assembly of the Installation
Components: All listed above, plus LEDs, jumper wires, and acrylic plates.
Process:
Acrylic assembly
#include <Servo.h>
unsigned long previousMillis = 0; // Stores the last time the servos were updated
const long interval = 250; // Interval between servo updates (250 ms)
// Define servo objects for each finger
Servo indexServo;
Servo middleServo;
Servo ringServo;
Servo pinkyServo;
Servo indexServo2;
Servo ringServo2;
// Define servo pins
const int indexPin = 2;
const int middlePin = 3;
const int ringPin = 4;
const int pinkyPin = 5;
const int indexPin2 = 6;
const int ringPin2 = 7;
// Define LED pins
const int LEDPins[] = {8, 9, 10, 11, 12, 13};
// indexLEDPin, middleLEDPin, ringLEDPin, pinkyLEDPin, indexLEDPin2, ringLEDPin2
// Array to hold servo objects for easy access
Servo servos[6];
// Blink LED while waiting for serial data
const int ledPin = LED_BUILTIN;
// Array to hold default angles
const int fingerDefaultAngles[] = {0, 15, 20, 20, 60, 30};
void setup() {
// Initialize serial communication
Serial.begin(9600);
// Attach servos to their respective pins
servos[0].attach(indexPin);
servos[1].attach(middlePin);
servos[2].attach(ringPin);
servos[3].attach(pinkyPin);
servos[4].attach(indexPin2);
servos[5].attach(ringPin2);
// Set LED pins to output mode
pinMode(8, OUTPUT);
pinMode(9, OUTPUT);
pinMode(10, OUTPUT);
pinMode(11, OUTPUT);
pinMode(12, OUTPUT);
pinMode(13, OUTPUT);
// Initialize all servos to 0 degrees (open position)
for(int i = 0; i < 6; i++) {
servos[i].write(0);
delay(100);
}
// Initialize LED pin
pinMode(ledPin, OUTPUT);
// Handshake: Wait for p5.js to send initial data
while (Serial.available() <= 0) {
digitalWrite(ledPin, HIGH); // LED on while waiting
Serial.println("0,0,0,0,0,0"); // Send initial positions
delay(300);
digitalWrite(ledPin, LOW);
delay(50);
}
}
void loop() {
// Check if data is available from p5.js
while (Serial.available()) {
// digitalWrite(ledPin, HIGH); // LED on while receiving data
// Read the incoming line
String data = Serial.readStringUntil('\n');
data.trim(); // Remove any trailing whitespace
// Split the data by commas
int angles[6];
int currentIndex = 0;
int lastComma = -1;
for(int i = 0; i < data.length(); i++) {
if(data[i] == ',') {
angles[currentIndex++] = data.substring(lastComma + 1, i).toInt();
lastComma = i;
}
}
// Last value after the final comma
angles[currentIndex] = data.substring(lastComma + 1).toInt();
// Get the current time
unsigned long currentMillis = millis();
// Check if the interval has passed
if (currentMillis - previousMillis >= interval) {
// Save the last time the servos were updated
previousMillis = currentMillis;
// Update servo positions
for(int i = 0; i < 6; i++) {
servos[i].write(angles[i]); // Set servo to desired angle
}
}
for(int i = 0; i < 6; i++) {
digitalWrite(LEDPins[i], angles[i] != fingerDefaultAngles[i]? HIGH : LOW); // Light the LED accordingly
}
// Echo back the angles
Serial.print(angles[0]);
for(int i = 1; i < 6; i++) {
Serial.print(",");
Serial.print(angles[i]);
}
Serial.println();
// digitalWrite(ledPin, LOW); // Turn off LED after processing
}
}
Network Communication (Node.js and Socket.io)
Socket.io Integration:
Purpose: Establish real-time, bi-directional communication between clients.
Implementation: Set up a local server using Node.js and integrated Socket.io to handle event-based communication for synchronizing game states.
Local Server for Data Communication:
Function: Manages user connections, broadcasts game state updates, and ensures consistency across all clients.
Synchronized Game State:
Outcome: Ensures that both players have an up-to-date and consistent view of the game, enabling fair and competitive interactions.
// server.js
/* Install socket.io and config server
npm init -y
npm install express socket.io
node server.js
*/
/* Install mkcert and generate CERT for https
choco install mkcert
mkcert -install
mkcert <your_local_IP> localhost 127.0.0.1 ::1
mv <localIP>+2.pem server.pem
mv <localIP>+2-key.pem server-key.pem
mkdir certs
mv server.pem certs/
mv server-key.pem certs/
*/
const express = require('express');
const https = require('https');
const socketIo = require('socket.io');
const path = require('path');
const fs = require('fs'); // Required for reading directory contents
const app = express();
// Path to SSL certificates
const sslOptions = {
key: fs.readFileSync(path.join(__dirname, 'certs', 'server-key.pem')),
cert: fs.readFileSync(path.join(__dirname, 'certs', 'server.pem')),
};
// Create HTTPS server
const httpsServer = https.createServer(sslOptions, app);
// Initialize Socket.io
const io = socketIo(httpsServer);
// Serve static files from the 'public' directory
app.use(express.static('public'));
// Handle client connections
io.on('connection', (socket) => {
console.log(`New client connected: ${socket.id}`);
// Listen for broadcast messages from clients
socket.on('broadcast', (data) => {
// console.log(`Broadcast from ${socket.id}:`, data);
// Emit the data to all other connected clients
socket.broadcast.emit('broadcast', data);
});
// Handle client disconnections
socket.on('disconnect', () => {
console.log(`Client disconnected: ${socket.id}`);
});
});
// Start HTTPS server
const PORT = 3000; // Use desired port
httpsServer.listen(PORT, () => {
console.log(`HTTPS Server listening on port ${PORT}`);
});
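For completeness, the client side of this exchange is roughly the sketch below; the 'broadcast' event name mirrors the relay above, while the payload fields (x, y, health) are placeholders for whatever state my game actually synchronizes each frame.
// sketch.js (client side) – minimal sketch of how a p5.js client could use
// the 'broadcast' relay above. Requires <script src="/socket.io/socket.io.js">,
// which the Socket.io server serves automatically.
const socket = io();                       // connects back to the HTTPS server

function sendState(player) {
  // Send this client's latest state to everyone else
  socket.emit('broadcast', {
    x: player.x,
    y: player.y,
    health: player.health
  });
}

socket.on('broadcast', (data) => {
  // Apply the remote player's latest state to the local copy of the enemy
  game.enemy.x = data.x;
  game.enemy.y = data.y;
  game.enemy.health = data.health;
});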
Computer Player Algorithm
The computer player, controlled by AI within p5.js, uses a set of rule-based behaviors to simulate human-like play, including:
Threat Detection and Evasion:
Mechanism: Continuously scans for incoming threats (e.g., enemy lasers, objects) and calculates optimal evasion paths to avoid collisions.
Strategic Movement and Firing:
Behavior: Moves toward or away from the enemy and fires lasers when within range, balancing offensive and defensive strategies based on current game states.
Tactic Engine Activation:
Function: Activates special abilities (e.g., infinite health or energy) when certain conditions are met, enhancing strategic depth and competitiveness.
// ComputerPlayer.js
class ComputerPlayer extends Player {
constructor(model, texture, difficulty = 1, behaviorPriority = 'attack') {
super(model, texture);
this.difficulty = difficulty; // Higher values mean smarter AI
this.behaviorPriority = behaviorPriority; // 'survival' or 'attack'
this.enemy = game.enemy;
this.lastActionTime = millis();
this.actionCooldown = map(this.difficulty, 1, 10, 500, 50); // in milliseconds
this.actionQueue = []; // Queue of actions to perform
this.currentAction = null;
this.firingRange = 100; // Define firing range threshold
this.bornTime = millis();
this.difficultyTime = frameCount;
}
updateAI() {
// Set local enemy target
this.enemy = game.enemy;
// Count in frames: 1200 frames ≈ 20 s at 60 fps, then raise AI difficulty
if (frameCount - this.difficultyTime > 1200) {
this.difficulty++;
this.difficultyTime = frameCount; // reset so difficulty rises once per interval
}
let currentTime = millis();
if (currentTime - this.lastActionTime > this.actionCooldown) {
console.log(`[AI][${this.behaviorPriority.toUpperCase()}] Deciding next action...`);
this.decideNextAction();
this.lastActionTime = currentTime;
}
// Execute actions from the queue
this.executeActions();
}
decideNextAction() {
// Determine behavior based on priority
if (this.behaviorPriority === 'survival') {
this.decideSurvivalActions();
} else if (this.behaviorPriority === 'attack') {
this.decideAttackActions();
} else {
// Default behavior
this.decideAttackActions();
}
}
decideSurvivalActions() {
// Abandoned method, will not be used
// (unless another behavior mode 'Survival' is to be used)
}
decideAttackActions() {
console.log(`[AI][DECIDE] Assessing attack strategies...`);
// 1. Detect and handle threats
let threats = this.detectThreats();
if (threats.hasThreats) {
console.log(`[AI][DECIDE] Threats detected: ${threats.allThreats.length} threats.`);
if (threats.hasCriticalObjectThreat && this.energy >= 30) {
console.log(`[AI][DECIDE] Critical object threat detected. Attempting to destroy it.`);
for (let j = 0; j < 3; j++) {
this.queueAction('fireAt', threats.criticalObject);
}
}
// Evade all detected threats
let evadeDirection = this.calculateEvasionDirection(threats.allThreats);
console.log(`[AI][EVADE] Evasion direction: ${JSON.stringify(evadeDirection)}`);
this.queueMovement(evadeDirection);
} else {
console.log(`[AI][DECIDE] No immediate threats detected.`);
// 2. No immediate threats
if ((this.energy < 40) && (this.enemy.health > 15)) {
console.log(`[AI][DECIDE] Energy low (${this.energy.toFixed(2)}).`);
if (30 <= this.energy) {
console.log(`[AI][DECIDE] Energy low. Wait for replenish.`);
} else {
// Move towards the closest energyOre to gain energy
let closestEnergyOre = this.findClosestEnergyOre();
if (closestEnergyOre) {
console.log(`[AI][DECIDE] Closest energy ore at (${closestEnergyOre.x}, ${closestEnergyOre.y}). Moving towards it.`);
this.moveTowardsObject(closestEnergyOre);
for (let j = 0; j < 3; j++) {
this.queueAction('fireAt', closestEnergyOre); // Attempt to destroy it to collect energy
}
} else {
console.log(`[AI][DECIDE] No energy ore found. Proceeding to attack.`);
// Move towards the enemy and attack
this.moveTowardsEnemy();
for (let j = 0; j < 3; j++) {
this.queueAction('fireAt', this.enemy);
}
}
}
} else {
console.log(`[AI][DECIDE] Energy healthy (${this.energy.toFixed(2)}). Moving towards enemy to attack.`);
// Move towards the enemy and attack
this.moveTowardsEnemy();
for (let j = 0; j < 3; j++) {
this.queueAction('fireAt', this.enemy);
}
}
}
// 3. Utilize tactic engine if advantageous
if (this.shouldUseTacticEngineAttack()) {
console.log(`[AI][DECIDE] Activating tactic engine.`);
this.difficulty ++;
this.queueAction('activateTacticEngine');
}
}
executeActions() {
while (this.actionQueue.length > 0) {
this.currentAction = this.actionQueue.shift();
switch (this.currentAction.type) {
case 'move':
this.simulateMovement(this.currentAction.direction, this.currentAction.duration);
break;
case 'fireAt':
this.simulateFireAt(this.currentAction.target);
break;
case 'activateTacticEngine':
this.simulateTacticEngine();
break;
default:
break;
}
}
}
simulateMovement(direction, duration = 500) {
// Log the movement simulation
console.log(`[AI][MOVE] Simulating movement directions: ${JSON.stringify(direction)} for ${duration}ms.`);
// Direction is an object { up: bool, down: bool, left: bool, right: bool }
// Duration is in milliseconds; map duration to number of frames based on difficulty
const frames = Math.max(Math.floor((duration / 1000) * 60 / (11 - this.difficulty)), 1); // Higher difficulty → more frames held per command
console.log(`[AI][MOVE] Calculated frames for movement: ${frames}`);
for (let i = 0; i < frames; i++) {
if (direction.up) game.aiKeysPressed.w = true;
if (direction.down) game.aiKeysPressed.s = true;
if (direction.left) game.aiKeysPressed.a = true;
if (direction.right) game.aiKeysPressed.d = true;
}
}
simulateFire() {
let currentTime = millis();
if (currentTime - this.bornTime > stateBufferTime) {
console.log(`[AI][FIRE] Simulating space key press for firing laser.`);
// Simulate pressing the space key
game.aiKeysPressed.space = true;
} else {
console.log(`[AI][CEASEFIRE] AI Waiting For Game Loading.`);
}
}
simulateFireAt(target) {
// Calculate distance to target before deciding to fire
let distance = dist(this.x, this.y, target.x, target.y);
console.log(`[AI][FIRE_AT] Distance to target (${target.type}): ${distance.toFixed(2)}.`);
if (distance <= this.firingRange) {
console.log(`[AI][FIRE_AT] Target within firing range (${this.firingRange}). Firing laser.`);
// Target is close enough; simulate firing
this.simulateFire();
} else {
console.log(`[AI][FIRE_AT] Target out of firing range (${this.firingRange}). Skipping fire.`);
// Optional: Implement alternative actions if target is out of range
}
}
simulateTacticEngine() {
console.log(`[AI][TACTIC_ENGINE] Simulating 'x' key press for tactic engine activation.`);
// Simulate pressing the 'x' key
game.aiKeysPressed.x = true;
}
queueMovement(direction) {
// console.log(`[AI][QUEUE] Queuing movement: ${JSON.stringify(direction)}.`);
this.actionQueue.push({ type: 'move', direction: direction, duration: 500 });
}
queueAction(actionType, target = null) {
if (actionType === 'fireAt' && target) {
// console.log(`[AI][QUEUE] Queuing fireAt action for target: ${target.type} at (${target.x}, ${target.y}).`);
this.actionQueue.push({ type: actionType, target: target });
} else {
// console.log(`[AI][QUEUE] Queuing action: ${actionType}.`);
this.actionQueue.push({ type: actionType });
}
}
detectThreats() {
let threatsFound = false;
let criticalObjectThreat = null;
let allThreats = [];
const laserThreatRange = 5 * this.difficulty; // Adjustable based on difficulty
const objectThreatRange = 25 * this.difficulty; // Larger range for objects
// Detect laser threats
for (let laser of game.enemyLaser) {
let distance = dist(this.x, this.y, laser.x, laser.y);
if (distance < laserThreatRange) {
threatsFound = true;
allThreats.push(laser);
// console.log(`[AI][DETECT] Laser threat detected at (${laser.x}, ${laser.y}) within range ${laserThreatRange}.`);
}
}
// Detect object threats
for (let obj of game.objects) {
let distance = dist(this.x, this.y, obj.x, obj.y);
if (distance < objectThreatRange) {
// Additionally check z-axis proximity
if ((obj.z - this.z) < 200) { // Threshold for z-axis proximity
threatsFound = true;
criticalObjectThreat = obj;
allThreats.push(obj);
// console.log(`[AI][DETECT] Critical object threat detected: ${obj.type} at (${obj.x}, ${obj.y}) within range ${objectThreatRange} and z-proximity.`);
} else {
threatsFound = true;
allThreats.push(obj);
// console.log(`[AI][DETECT] Object threat detected: ${obj.type} at (${obj.x}, ${obj.y}) within range ${objectThreatRange}.`);
}
}
}
return {
hasThreats: threatsFound,
hasCriticalObjectThreat: criticalObjectThreat !== null,
criticalObject: criticalObjectThreat,
allThreats: allThreats
};
}
calculateEvasionDirection(threats) {
// Determine evasion direction based on all threats
let moveX = 0;
let moveY = 0;
for (let threat of threats) {
if (threat.z > -2000) {
let angle = atan2(this.y - threat.y, this.x - threat.x);
moveX += cos(angle);
moveY += sin(angle);
console.log(`[AI][EVADE] Calculating evasion for threat at (${threat.x}, ${threat.y}). Angle: ${angle.toFixed(2)} radians.`);
}
}
// Normalize and determine direction
if (moveX > 0.5) moveX = 1;
else if (moveX < -0.5) moveX = -1;
else moveX = 0;
if (moveY > 0.5) moveY = 1;
else if (moveY < -0.5) moveY = -1;
else moveY = 0;
return {
up: moveY === 1,
down: moveY === -1,
left: moveX === -1,
right: moveX === 1
};
}
findClosestEnergyOre() {
let energyOres = game.objects.filter(obj => obj.type === 'energyOre'); // Assuming objects have a 'type' property
if (energyOres.length === 0) {
console.log(`[AI][ENERGY] No energy ore available to collect.`);
return null;
}
let closest = energyOres[0];
let minDistance = dist(this.x, this.y, closest.x, closest.y);
for (let ore of energyOres) {
let distance = dist(this.x, this.y, ore.x, ore.y);
if (distance < minDistance) {
closest = ore;
minDistance = distance;
}
}
console.log(`[AI][ENERGY] Closest energy ore found at (${closest.x}, ${closest.y}) with distance ${minDistance.toFixed(2)}.`);
return closest;
}
moveTowardsObject(target) {
// Determine direction towards the target object
let dx = target.x - this.x;
let dy = target.y - this.y;
let direction = {
up: dy < 20,
down: dy > -20,
left: dx < -20,
right: dx > 20
};
console.log(`[AI][MOVE_TO_OBJECT] Moving towards ${target.type} at (${target.x}, ${target.y}). Direction: ${JSON.stringify(direction)}.`);
this.queueMovement(direction);
}
moveTowardsEnemy() {
// Determine direction towards the enemy
let dx = this.enemy.x - this.x;
let dy = this.enemy.y - this.y;
let direction = {
up: dy < 20,
down: dy > -20,
left: dx < -20,
right: dx > 20
};
console.log(`[AI][MOVE_TO_ENEMY] Moving towards enemy at (${this.enemy.x}, ${this.enemy.y}). Direction: ${JSON.stringify(direction)}.`);
this.queueMovement(direction);
}
shouldUseTacticEngineSurvival() {
// Abandoned method
}
shouldUseTacticEngineAttack() {
// Decide whether to activate tactic engine based on attack advantage
if (!this.tacticEngineUsed) {
if (this.health < 30) {
console.log(`[AI][TACTIC_ENGINE] Conditions met for tactic engine activation (Health: ${this.health}, Energy: ${this.energy}).`);
return true;
}
if (this.model === assets.models.playerShip2) {
// Additional condition: If enemy health is low and need more energy to destroy it
if (game.enemy.health < 30 && this.energy < 50) {
console.log(`[AI][TACTIC_ENGINE] Condition met for playerShip2: Enemy health is low (${game.enemy.health}).`);
return true;
}
}
}
return false;
}
render() {
// Add indicators or different visuals for ComputerPlayer
super.render();
// Draw AI status
push();
fill(255);
textFont(assets.fonts.ps2p);
textSize(12);
textAlign(LEFT, TOP);
text(`X: ${this.x.toFixed(1)} Y: ${this.y.toFixed(1)}`, this.x - 50, this.y - 75);
text(`AI Difficulty: ${this.difficulty}`, this.x - 50, this.y - 60);
if (this.currentAction != null) {
text(`Behavior: ${this.currentAction.type}`, this.x - 50, this.y - 45);
}
pop();
}
}
Servo Motion Control
Commands from the AI are translated into servo movements to control the robot hand:
Command Translation:
Process: Maps AI decisions to corresponding servo angles, ensuring accurate physical representations of game inputs.
Async Update:
Outcome: Ensures that the robot hand’s physical movements are not blocked by serial communication while staying in sync with the game’s digital state.
class RobotHandController {
constructor() {
this.lastUpdateTime = millis();
}
init() {
//
}
update() {
// Update finger bends to Arduino
this.updateFingerAngles();
}
// Update Fingers according to the virtual keys
updateFingerAngles() {
// Stop function if no serial connections
if (!serialActive) return;
let currentTime = millis();
const keys = ['w', 'a', 's', 'd', 'space', 'x'];
const angles = [30, 50, 50, 60, 75, 70]; // Different angles for each key
for (let i = 0; i < 6; i++) {
if (game.aiKeysPressed[keys[i]] === true) {
if (fingerAngles[i] != angles[i]) {
fingerAngles[i] = angles[i];
}
}
}
// Send data every 120 frames (roughly every two seconds at 60 fps)
if (frameCount % 120 === 0) {
this.sendAngles(fingerAngles);
// Schedule release back to the default angles after 1000 ms
setTimeout(() => {
console.log('reached');
this.sendAngles(fingerDefaultAngles);
}, 2000 / 2);
}
this.lastUpdateTime = currentTime;
}
// Send Current Angles to Arduino via Serial
sendAngles(angles) {
if (serialActive) {
let message = angles.join(",") + "\n";
writeSerial(message);
console.log("Sent to Arduino:", message.trim());
}
}
}
/*
function readSerial(data) {
// Handle incoming data from Arduino
// For this project, we primarily send data to Arduino
}
*/
4. Project Highlights
Network Communication
Real-Time Synchronization: Successfully implemented real-time data exchange between clients using Node.js and Socket.io.
Robust Server Setup: Developed a stable local server that handles multiple connections.
Physical Installation
Robot Hand Fabrication: Crafted a functional robot hand using 3D printing, hot-glued hinges, and fishing line tendons.
Servo Integration: Connected and controlled multiple servos via Arduino to simulate human key presses.
AI Player Algorithm
Dynamic Threat Handling: Developed an AI that intelligently detects and responds to multiple simultaneous threats, prioritizing evasion and strategic attacks based on predefined behavior modes.
5. Future Improvements
Strengthening the Robot Hand
Enhanced Strength: Upgrade the materials and servos to increase the robot hand’s strength and responsiveness, enabling actual control over the physical buttons.
Network Communication Structure
Peer-to-Peer Networking: Transition from a broadcast-based communication model to a peer-to-peer (P2P) architecture, facilitating support for more than two players and reducing server dependencies.
Thankfully, I had finished most of the project before user testing – although it turned out that as soon as users introduced uncertainty into the system, bugs followed.
Since my final project is built upon my midterm, I invited people outside the class to conduct user testing to avoid any pre-perceived knowledge about the system. The two samples also came from different backgrounds that varied in terms of their familiarity with video games, contributing to the comprehensiveness of my user testing.
On balance, I would say the tests were successful in conveying the core mechanics of the project – from the PvE experience to the presence of the robotic hand. Both participants had (almost) no issues carrying out the experience (although people still stumble over how to configure the game with the keyboard, and a bug related to flow control popped up).
Other suggestions I collected include (but are not limited to):
The purpose of the robot hand is a bit vague at the beginning. In the final presentation, with the aid of a larger screen and a closer placement of the robot hand, this should be more obvious.
The meaning of ‘X’ (the tactic engine) was unclear. A more prominent notification has been added.
The static AI difficulty may not be exciting enough (feedback from a video game player). AI difficulty now increases gradually over time.
The pace of the interaction is a bit quick – reusing the same input to control the game stages can cause accidental/undesired input. This was solved by adding buffer time between stage changes (see the sketch after this list).
The game objective could be a bit unclear, given that some players skip or skim the instructions. Another global notification has been added in the game.
The enemy may appear too small on the screen. It has now been moved closer to the player on the z-axis.
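The buffer-time fix looks roughly like the sketch below; the 1000 ms window and variable names are illustrative, not the exact values in my code.
// Minimal sketch of the stage-change buffer: ignore the confirm input for a
// short window after each stage switch so one long press cannot skip ahead.
let lastStageChange = 0;
const stageBuffer = 1000; // ms of input ignored after a stage change (illustrative)

function tryAdvanceStage() {
  if (millis() - lastStageChange < stageBuffer) return; // still inside the buffer
  game.nextStage();                 // placeholder for the real stage transition
  lastStageChange = millis();
}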
For user testing, I connected my p5.js sketch to the Arduino and tested the gameplay with a friend who had a general idea of the game. The user interacted with the Arduino buttons while the falling fruits stayed synced to the beats of the song.
The feedback highlighted that the timing between the falling fruits and button presses felt responsive and engaging. The user appreciated the dynamic note generation for each fruit, which added a musical layer to the experience. Minor improvements were suggested for button responsiveness and visual feedback when a fruit was missed.
The next steps include building the box for the basket and integrating the arcade buttons into it, which I have soldered. This test successfully validated the functionality of the game logic, Arduino integration, and user interaction, paving the way for completing the physical setup.
I have made significant progress with my work and am close to finishing. For my project, a game called Pingu Pounce, I’ve decided to add physical controls to make the overall experience much more enjoyable and interactive.
I have used four buttons — restart, left, right, and jump.
Here I have attached a video of a classmate playing my game, and I received some overall feedback. This is not the final version of my project, as I have many improvements left to make – namely implementing my buttons in a much more accessible way.
Over the remaining days, I will work on how to make my game more intuitive without me having to explain the instructions too much — perhaps I will add labels to my buttons to do so.
Today I conducted the alpha-version user testing of my final project, and I was more than happy to hear the reaction and feedback. The person did not know anything about my project except for the fact that it was connected to music. The reaction and a small part of the feedback can be found in the video below.
First of all, I want to note that I have not finalized my final project yet as some laser cutting as well as connection fixing need to be done. Nevertheless, the core functions that represent my overall idea work as intended. My friend emphasized that the controls are quite intuitive and he enjoyed the process of exploration of what each of the controls stands for. Moreover, he managed to combine the effects in a way that even I have never tried, which is certainly a great thing because it shows that the project incentivizes user’s curiosity and interest.
The thing I will work on in the next couple of days is the labels on the panel that hint at the purpose of the controls, because my friend, although he explored most of the functions, did not notice one of them. I will try to find a balance so as not to make the explanations too heavy and to leave room for exploration. I have also decided, thanks to a recommendation from my friend, to borrow a set of headphones from the equipment center and connect them to my laptop during the IM showcase to create a more ‘ambient’ experience around my project by fully immersing users in the music and the visualizations on the screen.
Overall, I loved to hear that my project is especially perfect for people who like music because it is exactly what I kept in mind when I was coming up with this idea for my final project. Music enjoyers are my target audience, so I will do my best to finish the project as I see it, and, hopefully, give a chance to many people to test it during the IM showcase.
User Testing Experience
I conducted a user test with my friend, creating a scenario where he had to interact with the clock from scratch. I completely reset the device and watched how he would handle the initial setup and configuration process.
Clock Setup
Clock interaction
What I Observed
The first hurdle came with the connection process. Since the clock needs to connect to WiFi and sync with the time server, this part proved to be a bit challenging for a first-time user. My friend struggled initially to understand that he needed to connect to the clock’s temporary WiFi network before accessing the configuration interface. This made me realize that this step needs better explanation or perhaps a simpler connection method.
However, once past the connection stage, things went much smoother. The configuration interface seemed intuitive enough – he quickly figured out how to adjust the display settings and customize the clock face. The color selection and brightness controls were particularly straightforward, and he seemed to enjoy experimenting with different combinations.
Key Insights
The most interesting part was watching how he interacted with the hexagonal display. The unusual arrangement of the ping pong balls actually made him more curious about how the numbers would appear. He mentioned that comparing different fonts was satisfying, especially with the lighting background effects.
Areas for Improvement
The initial WiFi setup process definitely needs work. I’m thinking about adding a simple QR code on the device that leads directly to the configuration page, or perhaps creating a more streamlined connection process. Also, some basic instructions printed on the device itself might help first-time users understand the setup process better.
The good news is that once configured, the clock worked exactly as intended, and my friend found the interface for making adjustments quite user-friendly. This test helped me identify where I need to focus my efforts to make the project more accessible to new users while keeping the features that already work well.
Moving Forward
This testing session was incredibly valuable. It showed me that while the core functionality of my project works well, the initial user experience needs some refinement. For the future I will focus particularly on making the setup process more intuitive for first-time users.
Reading this article really made me rethink how I interact with technology. I’ve always been fascinated by touchscreens and their simplicity, but I never stopped to consider how limiting they actually are. The author’s critique of “Pictures Under Glass” really hit me, especially when they described how flat and numb these interfaces feel. It’s true—I use my phone every day, but when I think about it, the experience of swiping and tapping feels so disconnected compared to how I interact with physical objects.
One part that really stood out to me was the comparison to tying shoelaces. It reminded me of when I was little and struggling to learn how to tie mine. My hands learned by feeling, adjusting, and figuring things out without needing to rely on my eyes all the time. That’s such a natural way for us to interact with the world, and it’s crazy to think how little that’s reflected in our technology today.
The section about making a sandwich was also a moment of realization for me. It’s such a simple, everyday task, but it involves so much coordination and subtle feedback from your hands—how the bread feels, the weight of the knife, the texture of the ingredients. None of that exists when I swipe through apps or scroll on a website. It made me wonder: why do we settle for technology that ignores so much of what our hands can do?
This article really inspired me to think differently about the future of technology. I agree with the author that we need to aim higher—to create interfaces that match the richness of our human abilities. Our hands are capable of so much more than sliding on glass, and it’s exciting to imagine what might be possible if we started designing for that.
Responses: A Brief Rant on the Future of Interaction Design
I found this follow-up just as thought-provoking as the original rant. The author’s unapologetic tone and refusal to offer a neatly packaged solution make the piece feel refreshingly honest. It’s clear that their main goal is to provoke thought and inspire research, not to dictate a specific path forward. I really appreciated the comparison to early Kodak cameras—it’s a great reminder that revolutionary tools can still be stepping stones, not destinations.
The critique of voice and gesture-based interfaces resonated with me too. I hadn’t really considered how dependent voice commands are on language, or how indirect and disconnected waving hands in the air can feel. The section on brain interfaces was particularly interesting. I’ve always thought of brain-computer connections as a futuristic dream, but the author flipped that idea on its head. Instead of bypassing our bodies, why not design technology that embraces them? The image of a future where we’re immobile, relying entirely on computers, was unsettling but eye-opening.
I love how the author frames this whole discussion as a choice. It feels empowering, like we’re all part of shaping what’s next. It’s made me more curious about haptics and dynamic materials—fields I didn’t even know existed before reading this. I’m left thinking about how we can create tools that actually respect the complexity and richness of human interaction.
Concept
Our project, “Light and Distance Harmony,” emerged from a shared interest in using technology to create expressive, interactive experiences. Inspired by the way sound changes with distance, we aimed to build a musical instrument that would react naturally to light and proximity. By combining a photoresistor and distance sensor, we crafted an instrument that lets users shape sound through simple gestures, turning basic interactions into an engaging sound experience. This project was not only a creative exploration but also a chance for us to refine our Arduino skills together.
Materials Used
Arduino Uno R3
Photoresistor: Adjusts volume based on light levels.
Ultrasonic Distance Sensor (HC-SR04): Modifies pitch according to distance from an object.
Piezo Buzzer/Speaker: Outputs the sound with controlled pitch and volume.
LED: Provides an adjustable light source for the photoresistor.
Switch: Toggles the LED light on and off.
Resistors: For the photoresistor and LED setup.
Breadboard and Jumper Wires
Code
The code was designed to control volume and pitch through the analog and digital inputs from the photoresistor and ultrasonic sensor. The complete code, as documented in the previous sections, includes clear mappings and debugging lines for easy tracking.
// Define pins for the components
const int trigPin = 5; // Trigger pin for distance sensor
const int echoPin = 6; // Echo pin for distance sensor
const int speakerPin = 10; // Speaker PWM pin (must be a PWM pin for volume control)
const int ledPin = 2; // LED pin
const int switchPin = 3; // Switch pin
const int photoResistorPin = A0; // Photoresistor analog pin
// Variables for storing sensor values
int photoResistorValue = 0;
long duration;
int distance;
void setup() {
Serial.begin(9600); // Initialize serial communication for debugging
pinMode(trigPin, OUTPUT); // Set trigger pin as output
pinMode(echoPin, INPUT); // Set echo pin as input
pinMode(speakerPin, OUTPUT); // Set speaker pin as output (PWM)
pinMode(ledPin, OUTPUT); // Set LED pin as output
pinMode(switchPin, INPUT_PULLUP); // Set switch pin as input with pull-up resistor
}
void loop() {
// Check if switch is pressed to toggle LED
if (digitalRead(switchPin) == LOW) {
digitalWrite(ledPin, HIGH); // Turn LED on
} else {
digitalWrite(ledPin, LOW); // Turn LED off
}
// Read photoresistor value to adjust volume
photoResistorValue = analogRead(photoResistorPin);
// Map photoresistor value to a range for volume control (0-255 for PWM)
// Higher light level (LED on) -> lower photoresistor reading -> higher volume
int volume = map(photoResistorValue, 1023, 0, 0, 255); // Adjust mapping for your setup
// Measure distance using the ultrasonic sensor
digitalWrite(trigPin, LOW);
delayMicroseconds(2);
digitalWrite(trigPin, HIGH);
delayMicroseconds(10);
digitalWrite(trigPin, LOW);
duration = pulseIn(echoPin, HIGH);
// Calculate distance in cm
distance = duration * 0.034 / 2;
// Set frequency based on distance in the range of 2-30 cm
int frequency = 0;
if (distance >= 2 && distance <= 30) {
frequency = map(distance, 1, 100, 20000, 2000); // Closer = higher pitch, farther = lower pitch
tone(speakerPin, frequency);
analogWrite(speakerPin, volume); // Apply the volume based on photoresistor reading
} else {
noTone(speakerPin); // Silence the speaker if the distance is out of range
}
// Debugging output
Serial.print("Photoresistor: ");
Serial.print(photoResistorValue);
Serial.print("\tVolume: ");
Serial.print(volume);
Serial.print("\tDistance: ");
Serial.print(distance);
Serial.print(" cm\tFrequency: ");
Serial.println(frequency);
delay(100); // Short delay for sensor readings
}
Video Demonstration
In our video demonstration, we showcase how the instrument responds to changes in light and proximity. We toggle the LED to adjust volume and move a hand closer or farther from the ultrasonic sensor to change pitch, demonstrating the instrument’s sensitivity and interactive potential.
Reflections
What Went Well: The project successfully combines multiple sensors to create a reactive sound device. The integration of volume and pitch control allows for intuitive, responsive sound modulation, achieving our goal of designing an engaging, interactive instrument.
Improvements: To improve this instrument, we would enhance the melody range, creating a more refined and versatile sound experience. This could involve using additional sensors or more sophisticated sound generation methods to provide a broader tonal range and a richer melody.
Overall Reaction
My user testing experience went about how I expected it to. The person who participated was already familiar with my concept so it was a bit challenging to get an authentic reaction but we still had a productive conversation about what steps I need to take to improve. Although he was aware of my concept, I did not prompt him with any steps on how to run the software. Therefore, I was quite pleased that he was able to navigate it fairly seamlessly but am considering altering a few of the prompts just so users have a better idea of what to expect when they choose a location.
Areas for Improvement
One thing he highlighted in our discussion afterwards was the quality of my mappings. Because you see the same thing on the screen (with brief instructions) as what you touch on the paper, it is clear that you should press a button to generate that location’s music. Right now my biggest concern in terms of improvements is finishing adding the switches to the map. This weekend I focused my time on finishing the software, but I now realize how that limited the user experience for testing. To help remedy this, I plan to conduct user testing again this Saturday to get a full critique of the design.
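As a rough illustration of that mapping, the sketch below shows how the p5.js side might turn a pressed map switch into that location's soundtrack; the switch index arriving over serial and the locationSounds array (filled with loadSound() in preload) are assumptions about my setup rather than the final code.
// Rough sketch: play a location's music when its map switch is pressed.
// Assumes the Arduino sends the index of the pressed switch (or -1) each line,
// and that locationSounds[] was filled with loadSound() calls in preload().
function readSerial(data) {
  if (data == null) return;
  let index = int(data.trim());
  if (index >= 0 && index < locationSounds.length) {
    locationSounds.forEach(s => s.stop());  // stop whatever was playing
    locationSounds[index].play();           // start that location's music
  }
}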
Another feature that we discussed would be beneficial in adding is a small screen where users can see themselves dancing to help promote full interaction with the piece. Given that my user already knew the intended goal of my design, he danced for the sake of full participation but I’m not sure it’s clear to users that I also want them to dance on their own. Thus, in terms of next steps, my immediate goals are: finishing the hardware, adding the video capture component, and creating another tab with specific instructions of how to use the program.