Week 12 – Final Project Proposal

Finalized concept

For my final project, I’m building something that sits somewhere between a desk object, a quiet companion, and a tiny archivist of the room it lives in. The idea came from a simple thought: most days pass without leaving much behind, and by the time I get to the end of the night, the whole thing feels like a blur. I’m not someone who journals daily, but I like the idea of having some kind of trace of the day even if it’s abstract, incomplete, or not linguistic at all.

So the artefact is basically a small desk object that listens to the atmosphere of the space throughout the day, and later turns those ambient shifts into a soft, formless visual cloud in p5.js. It’s not interested in what the user did, only in how the day felt. It just records the vibe of a day, no eavesdropping or surveillance (and definitely no productivity tracking). I want the final thing to feel almost poetic, like the object is quietly paying attention on the side while I’m working, and at the end of the day it shows me its version of the memory.

What the Arduino will do

The Arduino will handle all the sensing during the day. I’m using:

  • a photoresistor to capture light changes,
  • an ultrasonic sensor to sense presence/absence near the desk,
  • a piezo to detect general sound/vibration spikes.

The Arduino will collect these readings over time and send them to p5.js through serial. I’m keeping the Arduino’s job simple: sense → store → transmit.
I’ll also have a small physical trigger (most likely a button or dial) that the user presses at the end of the day to “reveal” the visual memory.
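
Roughly, the Arduino side could look like the sketch below. This is a minimal version with placeholder pin choices, assuming an HC-SR04-style ultrasonic sensor; the final wiring and sampling rate will almost certainly change.

// Minimal sense -> store -> transmit loop (pin numbers are placeholders).
const int LDR_PIN = A0;     // photoresistor voltage divider
const int PIEZO_PIN = A1;   // piezo element for sound/vibration spikes
const int TRIG_PIN = 9;     // HC-SR04 trigger
const int ECHO_PIN = 10;    // HC-SR04 echo

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

long readDistanceCm() {
  // Standard HC-SR04 ping: 10 us pulse, then time the echo.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // ~30 ms timeout
  return duration / 58; // microseconds to centimeters; 0 means no echo
}

void loop() {
  int light = analogRead(LDR_PIN);   // 0-1023
  int sound = analogRead(PIEZO_PIN); // spikes on knocks and noise
  long distance = readDistanceCm();  // small value = someone is near

  // One CSV line per sample; p5.js splits on commas.
  Serial.print(light);
  Serial.print(",");
  Serial.print(sound);
  Serial.print(",");
  Serial.println(distance);

  delay(500); // sample twice a second
}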

What p5.js will do

p5.js will take the day’s data and transform it into an atmospheric, slow-moving cloud. I’m aiming for visuals that sit in between abstract art and environmental “weather.” Light translates to color gradients, presence to density, and sound to softness or sharpness of the shape. I’m also considering a very light ml5.js layer just to classify general movement energy, so the cloud feels a bit more alive.
[Communication is mostly Arduino to p5, but later I might also send a message back to Arduino so the object can react in a small way when the memory is generated]
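
To make the mapping concrete, here’s a rough p5.js sketch of the cloud renderer. The dayData values are stand-ins for readings already parsed and normalized from serial; the serial-handling layer is omitted here.

// Rough cloud renderer; dayData stands in for the day's parsed readings.
let dayData = { light: 0.7, presence: 0.4, sound: 0.2 }; // normalized 0-1

function setup() {
  createCanvas(600, 600);
  noStroke();
}

function draw() {
  background(20);

  // Light -> color: dim days drift cool and blue, bright days drift warm.
  let tint = lerpColor(color(80, 100, 160), color(255, 180, 120), dayData.light);

  // Presence -> density: more time near the desk means more particles.
  let count = floor(lerp(40, 300, dayData.presence));

  // Sound -> softness: louder days get smaller, sharper blobs.
  let size = lerp(60, 15, dayData.sound);

  for (let i = 0; i < count; i++) {
    // Perlin noise keeps the cloud slow-moving and formless.
    let x = noise(i * 0.13, frameCount * 0.002) * width;
    let y = noise(i * 0.29, frameCount * 0.002 + 100) * height;
    fill(red(tint), green(tint), blue(tint), 25);
    ellipse(x, y, size, size);
  }
}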

Early progress

So far, I’ve been sketching a few versions of the cloud visualization to figure out what feels “alive” without being overwhelming. Physically, I’ll keep the build minimal – something that looks more like a desk artefact than a tech box.
I’m hoping the final result feels calm, personal, and a little bit poetic, not a gadget trying to do too much, but an object that simply notices what the day was like and gives it a shape.

Week 11 – Reading Reflection

Pullin’s argument challenges the lazy division between design as aesthetic indulgence and disability as pure function. He dismantles the notion that assistive devices must be invisible or “normalized,” suggesting instead that disability could (and should) be a space for expressive, creative design. That was a refreshing perspective. The hearing aid, the wheelchair, the prosthetic are cultural objects that communicate identity (not just medical equipment).

Yet, what I appreciate most in Pullin’s framing is how he exposes the moral vanity of “inclusive design.” The impulse to hide difference in the name of inclusion often erases individuality altogether. Still, I wonder whether his optimism about designers embracing disability aesthetics underestimates the market’s conservatism; we live in a world where even fashion struggles to tolerate imperfection. The essay makes me question whether good design serves comfort or visibility, and whether true accessibility might require celebrating discomfort, making difference not something to be hidden, but worn, literally, in style.

Week 10 – Reading Reflection

Bret Victor’s “A Brief Rant on the Future of Interaction Design” reads less like a complaint and more like a plea for imagination. His frustration with “pictures under glass” isn’t really about touchscreens so much as about how easily we mistake convenience for progress. Victor’s argument that our hands are not just pointers but thinking tools is relatable. I’d never thought of a touchscreen as “numb,” but he’s right: sliding a finger across glass is nothing like twisting, folding, or shaping an object. He’s asking designers to respect the body’s intelligence, the millions of years of evolution that made us good at feeling, gripping, and sensing the world. I agree with him almost entirely. What unsettles me is how quickly we’ve accepted numbness as normal. We’ve trained ourselves to think that friction is a flaw when it comes to UX.

The Responses piece adds an interesting layer of humility. Victor doesn’t pretend to have the solution. He admits he was ranting, not designing one. Still, his answers to critics are telling. He pushes back against gimmicks like voice control and “waving your hands in the air,” arguing that real interaction should involve the whole body and the tactile richness of touch. I found myself nodding at his line about computers needing to adapt to our bodies, not the other way around. That’s such a simple reversal, yet it cuts right through decades of design laziness. When he compares touchscreen culture to “restricting all literature to Dr. Seuss’s vocabulary,” it’s funny, but it also nails the deeper loss: we’re settling for tools built for children, not adults.

If there’s one thing I’d question, it’s Victor’s nostalgia for physicality. I agree that touch and movement matter, but I also think the human imagination adapts. The digital world is training new forms of dexterity that are more mental than physical. Coding, multitasking, navigating layered interfaces – these, too, are forms of “touch,” just less visible. Maybe the future of interaction design isn’t about replacing glass with holographic clay, but about balancing sensory depth with cognitive range. Victor’s rant reminds me that design should evolve with both the hand and the mind.

Week 9 – Analog + Digital

Concept

When driving at night, high-beam headlights often blind oncoming drivers. I wanted to explore a simple and affordable system that encourages responsible use of headlights, something that detects oncoming glare and helps the driver automatically or manually switch to low-beam mode. This prototype uses a photoresistor to sense incoming light and a button to simulate the driver’s manual response. Two LEDs represent the vehicle’s headlight (green) and alert indicator (red).

Mechanism

  • Ambient calibration: when the Arduino starts, it reads the surrounding light for about 1.5 seconds and sets that as a baseline for normal nighttime brightness. This matters in the real world because ambient light differs across places and times of day.
  • Glare detection: if the light level rises significantly above that baseline, similar to when another vehicle’s high beams hit, the red LED begins to blink as a warning.
  • Manual response: when the button is pressed, the green “headlight” LED dims to a predefined low-beam brightness, showing that the driver has dipped the lights.
  • Return to normal: releasing the button restores full brightness once the road ahead is clear.

Everything happens automatically in a simple loop, using only one analog sensor (photoresistor) and one digital input (button).
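
Condensed into code, the loop looks roughly like the sketch below (pin numbers and thresholds are placeholders for the values I tuned on the breadboard):

const int LDR_PIN = A0;
const int BUTTON_PIN = 2;   // wired to ground, using INPUT_PULLUP
const int HEAD_PIN = 9;     // green "headlight" LED (PWM pin)
const int ALERT_PIN = 8;    // red alert LED

const float ALPHA = 0.1;        // smoothing factor for the moving average
const int GLARE_MARGIN = 150;   // how far above baseline counts as glare
const int LOW_BEAM = 60;        // dimmed PWM level for the low beam

float baseline = 0;
float ema = 0;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  pinMode(HEAD_PIN, OUTPUT);
  pinMode(ALERT_PIN, OUTPUT);

  // Ambient calibration: average the first ~1.5 s of readings.
  long sum = 0;
  for (int i = 0; i < 30; i++) {
    sum += analogRead(LDR_PIN);
    delay(50);
  }
  baseline = sum / 30.0;
  ema = baseline;
}

void loop() {
  // Smooth the noisy reading before comparing it to the baseline.
  int raw = analogRead(LDR_PIN);
  ema = ALPHA * raw + (1 - ALPHA) * ema;

  // Glare detection: blink the red LED when well above baseline.
  bool glare = ema > baseline + GLARE_MARGIN;
  digitalWrite(ALERT_PIN, glare && (millis() / 250) % 2);

  // Manual response: button dips to low beam; release restores full.
  bool pressed = digitalRead(BUTTON_PIN) == LOW;
  analogWrite(HEAD_PIN, pressed ? LOW_BEAM : 255);

  delay(20);
}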

Link to video demonstration

Schematic Diagram

Code highlight

The most interesting part for me was sensing ambient light accurately. I learned how to smooth noisy analog readings using an exponential moving average,

ema = ALPHA * raw + (1 - ALPHA) * ema;

where ALPHA near 1 tracks changes quickly and ALPHA near 0 smooths more aggressively, and how to calibrate a baseline, drawing on Arduino documentation and community projects. This taught me how real-world sensors fluctuate constantly and why filtering is crucial before making decisions based on their data.

Use Cases:

  • A bike or e-scooter headlight that adjusts automatically when another vehicle approaches.
  • A low-cost training aid for driving schools, showing how to react responsibly to high beams.
  • A foundation for smart vehicle lighting projects that aim to reduce glare and improve road safety.

Further Enhancement

1. Replace the LEDs with an actual headlight driver or relay for real-world testing.
2. Use a camera or focused photodiode to distinguish streetlights from oncoming cars.
3. Add automatic dimming without manual input, fully autonomous dipping.
4. Integrate with IoT dashboards to log light data and analyze driving environments.
5. Use a microcontroller with more PWM channels to control multiple lamps independently.

Conclusion

This project started as a simple assignment, but it quickly became a meaningful experiment in safety-oriented design.
By combining analog sensing, human interaction, and basic control logic, I built a small system that mimics how real vehicles manage headlight behavior. It reminded me that even modest hardware, guided by thoughtful logic, can address real-world problems.

Week 9 – Reading Reflection

Tom Igoe’s “Making Interactive Art: Set the Stage, Then Shut Up and Listen” advocates for a shift in the role of the artist in interactive work: the artist must stop controlling the experience and instead facilitate a conversation with the audience. Igoe’s key argument is that interactivity only truly begins when the artist relinquishes interpretation and allows the audience to complete the piece. This concept challenges the creator’s ego and promotes a more humble, open-ended form of art. While I appreciate the insight, I find Igoe’s emphasis on “shut up and listen” a bit idealistic. In practice, many audiences need some level of guidance to fully engage with an interactive installation. Too much ambiguity can easily lead to frustration. Nonetheless, his metaphor of the artist as ‘a director—staging a conversation rather than dictating a lecture’ resonates strongly. At its core, this advice serves as a reminder that interaction requires mutual respect: artists must listen as much as they create.

In “Physical Computing’s Greatest Hits (and Misses),” Igoe reflects on recurring projects in interactive art, such as theremins, drum gloves, and video mirrors, and contemplates why they keep resurfacing in new forms. He doesn’t dismiss these repetitive ideas; rather, he sees their evolution as evidence of growing refinement and deeper understanding. Igoe suggests that repetition is a form of progress, not stagnation. In physical computing, each reimagining of a “classic” project offers new possibilities, whether it’s smarter sensors, more intuitive designs, or deeper contextual relevance. Igoe also rejects the notion that novelty for its own sake is the ultimate goal, calling attention to the often-overlooked value in revisiting older concepts. This stance challenges the modern fixation on innovation for innovation’s sake, emphasizing that novelty must be paired with genuine engagement and a willingness to learn from the past, not just chase aesthetics or trends.

Taken together, both essays stress the importance of humility in interactive art. Whether talking about listening to the audience or refining established ideas, Igoe places the artist’s role not in the creation of definitive, controlled experiences but in the facilitation of dialogue and discovery. The act of interacting with art, according to Igoe, is an ongoing process that requires responsiveness and openness. The artist’s task is to create the conditions that encourage curiosity, rather than rigidly scripting the conversation. In the end, good interactive art is about paying attention to what the interaction itself reveals and adjusting accordingly, facilitating a space where discovery is as important as design.

Week 8 – Unusual Switch

For my project, I created an Arduino switch that activates through physical contact, specifically, a hug. Instead of using hands, I built a simple “hug switch” using two pieces of aluminum foil connected to the Arduino. One piece was taped onto my sleeve, and the other onto a plush toy sitting on my chair. When I hugged the toy, the foil pieces touched and completed the circuit, turning on an LED on the breadboard.

This setup used digitalRead() to detect when the circuit closed, lighting up the LED as a visual indicator. It’s a very basic circuit, just two foil pads, a resistor, and an LED; but it demonstrated how the human body can act as a conductor to trigger digital inputs. I liked how small physical gestures could translate into electronic signals. The process reminded me how interaction design can make technology feel more human, even with something as simple as a hug that lights up a tiny LED.
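
The sketch itself is only a few lines. A simplified version, assuming the foil pad feeds 5V to pin 2 through the pull-down resistor and the LED sits on pin 13 (the actual pins may differ):

const int FOIL_PIN = 2;   // reads HIGH when the two foil pads touch
const int LED_PIN = 13;

void setup() {
  pinMode(FOIL_PIN, INPUT); // held LOW by the external pull-down resistor
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  // When the hug presses the pads together, 5V reaches the pin.
  bool hugged = digitalRead(FOIL_PIN) == HIGH;
  digitalWrite(LED_PIN, hugged ? HIGH : LOW);
}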

Schematic Diagram

Week 8 – Reading Response

What links Donald Norman’s “Emotion and Design” and Robert McMillan’s profile of Margaret Hamilton is that both quietly insist on humanizing design rather than treating it as a purely technical exercise. Norman’s argument that beautiful design makes people feel more in control is not just about colors or curves. It’s about the psychology of trust. He claims that people “perform better when they feel better,” suggesting that aesthetics aren’t superficial, but functional. I find this somewhat persuasive, but also a little idealistic. There’s truth to it, I do feel calmer using Notion than some clunky university portal, but sometimes “pretty” products mask poor usability. Attractive things may appear to work better, but that illusion can also hide deeper flaws. Still, Norman’s point stands: emotion isn’t a side effect of design; rather, it’s part of the system itself.

Reading “Her Code Got Humans on the Moon” right after feels like the necessary reality check. Margaret Hamilton’s code didn’t have the luxury of being “attractive”; it just had to not crash on the Moon. Yet what she achieved was, in its own way, a kind of beauty: the beauty of precision, foresight, and calm under cosmic pressure. Her work (from popularizing the term “software engineering” to helping avert an abort of the Apollo 11 landing) captures design stripped to its core: solving human problems with clarity and empathy. I love how she described her philosophy as “preparing for the unexpected.” That’s the emotional intelligence of a designer without ever calling it that.

If Norman celebrates how design makes us feel, Hamilton reminds us what design must do: sustain life when it matters. My takeaway is that emotion in design isn’t always about pleasure; sometimes it’s about responsibility. The most beautiful designs are the ones that don’t panic when everything else does.

Midterm Project – Hogwarts Experience

Concept

Hogwarts Experience is an interactive web experience inspired by the whimsical world of Hogwarts.
It blends a classic sorting quiz, a maze challenge, and wand selection into one compact game built entirely in JavaScript (using p5.js).

The idea was to explore how storytelling, visuals, and interactivity could merge to create something that feels alive; something more than just a quiz or a mini-game.

Inspiration

I’ve always been fascinated by how J.K. Rowling uses symbols (a hat, a house, a wand) in Harry Potter to explore identity and choice. I wanted to capture that feeling of “who am I?” in a lightweight browser experience.

Technically, this project was also a personal experiment:
how far can I go with only vanilla p5.js, with minimal frameworks and assets, and with everything drawn or generated in code?

Visual Elements

The visuals are all hand-coded with p5.js shapes and color palettes chosen to reflect the four houses:

  • Gryffindor: warm reds and golds, courage in motion
  • Ravenclaw: deep blues and calm precision
  • Hufflepuff: mellow yellows and earthy tones
  • Slytherin: sleek greens and silvers, a hint of ambition

[I got the color codes from codepen.io]

The wand selection features small glowing particle bursts when you find your correct match, a simplified particle system I built directly into the Wand class.
It’s minimal but expressive: circles of light that rise and fade like sparks from Ollivander’s wand shop.

Interaction & Controls

  • The quiz is fully clickable — each answer dynamically updates your house “weight” (see the sketch after this list).
  • Once sorted, you navigate a small maze using arrow keys (or WASD).
  • You can activate your house ability with a single keypress (spacebar).
  • The wand test responds to clicks, showing visual feedback for correct or incorrect matches.
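
The house-weight logic is simple accumulation. A simplified version (the names here are illustrative, not the exact ones in my code):

// Each answer carries a weight map like { Gryffindor: 2, Slytherin: 1 }.
let houseWeights = { Gryffindor: 0, Ravenclaw: 0, Hufflepuff: 0, Slytherin: 0 };

function answerClicked(answer) {
  // Add the clicked answer's weights to the running totals.
  for (let house in answer.weights) {
    houseWeights[house] += answer.weights[house];
  }
}

function sortedHouse() {
  // The house with the highest accumulated weight wins the sorting.
  return Object.keys(houseWeights).reduce((a, b) =>
    houseWeights[a] >= houseWeights[b] ? a : b
  );
}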

Each stage was designed to feel self-contained but connected, a simple rhythm of choice, discovery, and action.

Sound Design

Sound is subtle but intentional.
A soft background theme plays during the game, punctuated by short cues:

  • a shimmer when your wand responds,
  • a gentle whoosh for movement,
  • a celebratory chime when you win,
  • and a scary dementor sound when you fail to exit the maze.

All sound events are managed with a simple sound registry that starts, stops, and restarts sounds based on player state. I tried to avoid any overlaps or chaos; I wanted the audio to add atmosphere without overwhelming the visuals.
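
In spirit, the registry is just a lookup table plus a rule that only one loop plays at a time. A simplified version using p5.sound (file names and state labels are placeholders):

// Simplified sound registry; file names and state labels are placeholders.
let sounds = {};
let currentLoop = null;

function preload() {
  sounds.theme = loadSound("assets/theme.mp3");
  sounds.shimmer = loadSound("assets/shimmer.mp3");
  sounds.chime = loadSound("assets/chime.mp3");
}

function playLoopFor(state) {
  // Stop whatever loop is running before starting the next one,
  // so two background tracks never overlap.
  if (currentLoop && currentLoop.isPlaying()) currentLoop.stop();
  currentLoop = sounds[state];
  if (currentLoop) currentLoop.loop();
}

function playCue(name) {
  // One-shot cues simply play on top of the current loop.
  if (sounds[name]) sounds[name].play();
}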

Code Architecture

The game is built around a few modular classes:

  • Question → handles quiz text, answers, and house mapping
  • Player → manages movement, collision, and ability use
  • Enemy → manages the enemies in the maze
  • Wand → merges wand logic and particle effects for magical feedback
  • GameManager (lightweight) → controls flow between quiz, wand test, and maze

Each class does one job well.
The code favors clarity over complexity; the division into classes makes it readable, flexible, and easily expandable.
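
The GameManager itself is little more than a state machine. A sketch of the idea (state names are illustrative, not the exact ones in my code):

// Illustrative state machine for the flow between stages.
class GameManager {
  constructor() {
    this.state = "quiz"; // quiz -> wandTest -> maze -> end
  }

  advance() {
    // Linear progression through the stages.
    const order = ["quiz", "wandTest", "maze", "end"];
    const i = order.indexOf(this.state);
    this.state = order[Math.min(i + 1, order.length - 1)];
  }
}

// In draw(), the sketch branches on manager.state to render the active stage.
const manager = new GameManager();
manager.advance(); // "quiz" -> "wandTest"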

Code Snippet to Highlight

test() {
    if (this.isCorrect) {
        this.glowing = true;
        // Success: a bigger burst of particles that drift upward and fade.
        for (let i = 0; i < 20; i++) {
            this.particles.push({
                x: this.x,
                y: this.y,
                vx: random(-3, 3),   // slight horizontal scatter
                vy: random(-5, -1),  // always rising, like sparks
                life: 60,            // frames until the particle fades out
                success: true        // drawn in the "match" color
            });
        }
        return true;
    } else {
        // Failure: a smaller burst so the feedback is still visible.
        for (let i = 0; i < 10; i++) {
            this.particles.push({
                x: this.x,
                y: this.y,
                vx: random(-3, 3),
                vy: random(-5, -1),
                life: 60,
                success: false
            });
        }
        return false;
    }
}

It’s small, but it brings the world to life, literally adding a sparkle of magic when you choose correctly.

Future Additions

  • Better sprites & art direction: hand-drawn assets for characters, wands, and the maze walls
  • Fullscreen adaptive display: scaling visuals gracefully across devices
  • House competition system: each player’s score stored via browser cookies or localStorage, allowing a shared “House Points” leaderboard
  • Integration with ml5.js: experimenting with emotion or gesture recognition to let your facial expression or hand movement influence sorting outcomes

Each of these is a small step toward a more responsive, immersive experience, a bit closer to real enchantment.
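
For the house competition idea, the localStorage version could be as small as this (the key name and function shape are hypothetical):

// Hypothetical house-points store backed by localStorage.
const POINTS_KEY = "hogwartsHousePoints";

function addPoints(house, points) {
  // Read the existing tally (or start fresh), update it, and write it back.
  let tally = JSON.parse(localStorage.getItem(POINTS_KEY)) || {
    Gryffindor: 0, Ravenclaw: 0, Hufflepuff: 0, Slytherin: 0
  };
  tally[house] += points;
  localStorage.setItem(POINTS_KEY, JSON.stringify(tally));
  return tally;
}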

Week 5 – Midterm Assignment Progress

Concept

For my midterm project, I’m building an interactive Hogwarts experience. The player starts by answering sorting questions that place them into one of the four houses. Then they get to choose a wand and receive visual feedback to see which wand truly belongs to them. After that, the player will enter their house’s common room and either explore various components in the room or play a minigame to earn points for their house.

The main idea is to capture the spirit and philosophy of each Hogwarts house and reflect it in the minigames, so the experience feels meaningful and immersive. Instead of just random games, each minigame will be inspired by the core traits of Gryffindor, Hufflepuff, Ravenclaw, or Slytherin.

Design

I want the project to feel smooth and interactive, with a focus on simple controls mostly through mouse clicks. Each stage (from sorting, to wand choosing, to the common room minigames) will have clear visual cues and feedback so the player always knows what to do next.

For the minigames, I’m aiming for gameplay that’s easy to pick up but still fun, and thematically tied to the house’s values. The design will mostly use basic shapes and animations in p5.js to keep things manageable and visually clean.

Challenging Aspect

The part I’m still figuring out and find the most challenging is designing minigames that really match each house’s philosophy but are also simple enough for me to implement within the project timeline. It’s tricky to balance meaningful gameplay with code complexity, especially because I already have a lot of different systems working together.

Risk Prevention

To manage this risk, I’ve been brainstorming minigames that are easy to build, like simple clicking games for Gryffindor’s bravery or memory games for Ravenclaw, while still feeling connected to the houses’ themes. I’m focusing on minimal input and straightforward visuals so I can finish them reliably without overwhelming the code.

Week 5 – Reading Reflection

What I enjoyed most in this piece is how it drags computer vision down from the pedestal of labs and military contracts into something artists and students can actually play with. The examples, from Krueger’s Videoplace to Levin’s own Messa di Voce, remind me that vision doesn’t have to mean surveillance or soulless AI pipelines. It can also mean goofy games, poetic visuals, or even awkward belt installations that literally stare back at you. I like this take, it makes technology feel less like a monolith and more like clay you can mold.

That said, I found the constant optimism about “anyone can code this with simple techniques” a little misleading. Sure, frame differencing and thresholding sound easy enough, but anyone who’s actually tried live video input knows it’s messy. Lighting ruins everything, lag creeps in, and suddenly the elegant vision algorithm thinks a chair is a person. The text does mention physical optimization tricks (infrared, backlighting, costumes), but it still downplays just how finicky the practice is. In other words, the dream of democratizing vision is exciting, but the reality is still a lot of duct tape and swearing at webcams.

What I take away is the sense that computer vision isn’t really about teaching machines to “see.” It’s about choosing what we want them to notice and what we conveniently ignore. A suicide detection box on the Golden Gate Bridge makes one statement; a silly limbo game makes another. Both rely on the same basic tools, but the meaning comes from what artists decide to track and why. For me, that’s the critical point: computer vision is less about pixels and algorithms and more about the values baked into what we make visible.