Week 10 – A DIY Musical Instrument (The Ultrasonic Air Piano)

Concept:

The idea behind my instrument was to create a hands-free musical device that transforms invisible gestures into sound. My goal was to design something playful yet technical: a device that reacts to both motion and touch. By combining distance and pressure, the instrument invites intuitive exploration: the closer your hand gets, the higher the note, while pressing the sensor triggers the sound. It merges tactile interaction with sound, making music a physical experience.

Method & Materials:

This project was my first time working with two types of sensors on the Arduino Uno:

  • Distance input: Ultrasonic sensor (HC-SR04), read as a timed digital pulse.
  • Pressure input: Force-sensitive resistor (FSR), read as an analog value and used here as a switch.
  • Output: Piezo buzzer to produce sound.

I connected the ultrasonic sensor to pins 10 (trig) and 11 (echo), the FSR to analog pin A0, and the buzzer to pin 12.
Each note from the C major scale (C–D–E–F–G–A–B) was assigned to a specific distance range, with the frequencies stored in an array:

int notes[7] = {261, 294, 329, 349, 392, 440, 494};

The system reads distance in real time:

  • When the FSR is pressed and your hand is within 0–50 cm of the sensor, the buzzer plays a tone corresponding to that range.
  • If no pressure is detected or the hand moves out of range, the sound stops (see the sketch below).
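Here is a minimal sketch of the distance-reading half of that loop. The pin numbers match the wiring above; the trigger timing is the standard HC-SR04 pattern, and the 30 ms timeout is an illustrative safety value rather than something from my original code:

// Simplified HC-SR04 distance reading (pins as wired above)
const int trigPin = 10;
const int echoPin = 11;

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

long readDistanceCm() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);  // 10 µs trigger pulse tells the sensor to ping
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000); // echo time in µs (30 ms timeout)
  return duration * 0.034 / 2;  // sound travels ~0.034 cm/µs; halve for the round trip
}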

Process:

At first, it took time to understand how analog vs. digital inputs work and how to read them simultaneously. I researched how to use pulseIn() for the ultrasonic sensor and experimented with mapping values using the map() function.
To visualize the notes, I placed colored paper strips at different distances, each representing one note of the scale.

Throughout the process, I learned:

  • The importance of wiring correctly (e.g., ensuring the FSR forms a voltage divider).
  • How combining two sensors can create more expressive interaction.

Schematic:

Code:

This code combines input from two sensors, an ultrasonic sensor and a force-sensitive resistor (FSR), to generate musical notes through a piezo buzzer. The ultrasonic sensor continuously measures the distance of my hand, while the FSR detects when pressure is applied. When both conditions are met (hand within 50 cm and FSR pressed), the code maps the distance value to a specific note in the C major scale (C, D, E, F, G, A, B). Each distance range corresponds to a different pitch, allowing me to “play” melodies in the air.

The code I’m most proud of is the single line that transforms the project from a simple sensor experiment into a musical instrument. It defines the C major scale, turning numerical frequency values into recognizable notes. I love that such a short line of code gives the device its expressive character: it bridges logic and creativity, translating distance data into melody. It’s the heart of the project, where sound and interaction truly come together.

// --- Define musical notes (C major scale) ---
int notes[7] = {261, 294, 329, 349, 392, 440, 494}; // C D E F G A B
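Building on the readDistanceCm() helper sketched earlier, here is a hedged version of how that array can drive the buzzer each loop. The FSR threshold of 100 is an illustrative guess, not my exact value:

const int buzzerPin = 12;
const int fsrPin = A0;

void loop() {
  long d = readDistanceCm();                // helper from the sketch above
  bool pressed = analogRead(fsrPin) > 100;  // FSR (in its voltage divider) acts as the "play" switch
  if (pressed && d > 0 && d <= 50) {
    int idx = constrain(map(d, 0, 50, 0, 6), 0, 6); // distance range -> note index
    tone(buzzerPin, notes[idx]);            // play the matching scale note
  } else {
    noTone(buzzerPin);                      // silence when unpressed or out of range
  }
  delay(30);                                // brief pause for steadier readings
}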

Result:

The final prototype acts like an invisible piano: you play by waving your hand in front of the sensor and pressing lightly on the FSR. Each distance triggers a different musical note. The colored papers made it easier to perform intentionally and visually mark pitch changes.

In the demonstration video, the tones respond smoothly to my gestures, transforming simple components into an expressive interface.

Challenges:

One of the main challenges I faced was understanding how each pin on the ultrasonic sensor worked. At first, I didn’t realize that every pin had a specific purpose, like trig for sending signals and echo for receiving them, so it took me a while to fully grasp how data was actually being measured. I also struggled with wiring the circuit, often making small mistakes that stopped the whole setup from working. Drawing out the schematic was another time-consuming part since there were many components to connect and label correctly. Finally, the coding process was challenging because it was my first time using several of these elements, and I had to learn through trial and error how to make the sensors and buzzer communicate smoothly.

Inspiration + Tools that helped me:

https://projecthub.arduino.cc/theriveroars/simple-hand-controlled-instrument-372bfc

https://learn.adafruit.com/force-sensitive-resistor-fsr/using-an-fsr

Reflection:

This project taught me how code, sensors, and sound can merge into interactive art. The challenge was balancing sensitivity: sometimes the ultrasonic readings fluctuated, and it took some fine-tuning. But once it worked, it felt rewarding to hear the instrument. It also made me realize how music can be built from logic and how creative coding allows emotional expression through electronics. I see this as the beginning of exploring computational instruments that combine art and technology.

Week 10 – Reading Response (A Brief Rant on the Future of Interaction Design)

One passage that really stayed with me from Bret Victor’s A Brief Rant on the Future of Interaction Design is his statement that screens are “pictures under glass.” That phrase hit me because it’s so ordinary yet so revealing; every day I touch my phone dozens of times, yet I never actually feel anything back. Victor’s argument that we’ve limited human interaction to tapping on cold glass made me realize how passive our so-called “interactive” technologies have become. I started thinking about how my creativity, whether sketching or coding, always feels richer when my hands are physically involved; pressing, folding, shaping. It made me question: why did we let convenience replace tactility? What would technology look like if it honored the intelligence of our hands instead of reducing them to cursors?

In the Responses section, I was fascinated by how defensive many readers became, as if Victor’s critique was anti-progress. But what I sensed in his tone was care, not nostalgia; a desire to expand our sense of what interaction can mean. This really reminded me of Refik Anadol’s Machine Hallucinations, a piece I’m analyzing for another course, where data transforms into movement, color, and emotion. Anadol’s work feels like the future Victor imagines: one where technology engages the body and senses, not just the eyes.

These readings challenged my old assumption that the “best” design is the smoothest and most frictionless. Victor helped me see friction as meaningful; it’s how we feel our way through the world. I now think of design less as creating perfect efficiency and more as crafting moments of connection between body, mind, and machine. The essay left me wondering whether the future of interaction design depends not on faster touchscreens, but on rediscovering touch itself; real, textured, imperfect, human touch.

Ultimately, I completely agree with Victor’s message. His critique felt refreshing, almost like a wake-up call to slow down and rethink what “innovation” actually means. I liked how he exposed the emptiness behind shiny new interfaces and instead celebrated the physical, human side of design. Even though his tone was mainly critical, I didn’t find it negative; I found it hopeful. It made me appreciate the kind of design that makes people feel connected, not just technologically advanced.

Week 9 – Analog + Digital

Assignment Overview:

The goal of this task was to create a simple interactive circuit using one digital sensor (a pushbutton) and one analog sensor (an LDR) to control two LEDs on the Arduino Uno. The digital LED turns on and off when the button is pressed, while the analog LED smoothly changes brightness depending on the amount of light detected by the sensor. This task helped me understand how digital and analog inputs behave differently and how they can work together in one circuit.

My Circuit:

Planning the Schematic:

I started by sketching a schematic in Procreate on my iPad to visualize how each component would connect. The schematic shows the button on pin D2, the LDR with a 330 kΩ resistor forming a voltage divider to A0, and two LEDs connected to D8 and D9, each with its own 330 Ω resistor to ground. Planning it out first helped me avoid confusion later when placing components on the breadboard and made the whole process smoother. Of course, when I actually built the board, a few elements moved around, but the schematic mainly shows my thought process and served as a “map” while planning this assignment.

Building the Circuit

Next, I built the circuit on the breadboard, carefully following the schematic. I color-coded my jumper wires to stay organized: red for power, black for ground, green for digital signals, and yellow for analog. One small challenge was understanding that the breadboard’s left and right halves are not connected across the middle gap. Once I fixed that mistake, the circuit started to behave exactly as expected.

(It’s kinda hard to see, but when I cover the sensor the blue LED shifts in brightness, and the button turns the yellow LED on.)

Coding Elements:

This Arduino code connects both a digital and an analog input to control two LEDs in different ways. The process starts with defining pin connections: the pushbutton on pin 2, the LDR sensor on analog pin A0, and two LEDs on pins 8 and 9. In the setup function, the button uses INPUT_PULLUP, so it naturally stays HIGH until pressed (no external resistor needed). The digital LED on pin 8 simply turns on when the button is pressed and off when it’s released. The LDR, wired as part of a voltage divider, continuously sends changing light readings to A0. In the loop, these readings (ranging from 0 to 1023) are mapped to 0–255 with the map() function so they can control the brightness of the second LED through analogWrite() on pin 9. The small delay at the end keeps the light-level changes smooth and stable. Overall, the code demonstrates how digital and analog signals can be read simultaneously to control different outputs in real time.

// Pin setup 
const int buttonPin = 2;   // Button connected to D2 
const int ledDigital = 8;  // Digital LED (on/off)
const int ledAnalog = 9;   // Analog LED (brightness control)
const int ldrPin = A0;     // LDR sensor connected to A0

void setup() {
  pinMode(buttonPin, INPUT_PULLUP); 
  pinMode(ledDigital, OUTPUT);
  pinMode(ledAnalog, OUTPUT);
}

void loop() {
  // Digital LED controlled by button 
  bool pressed = (digitalRead(buttonPin) == LOW); // LOW = pressed
  digitalWrite(ledDigital, pressed ? HIGH : LOW);

  // Analog LED controlled by LDR light level 
  int lightValue = analogRead(ldrPin);        // Reads LDR 
  int brightness = map(lightValue, 0, 1023, 0, 255); 
  analogWrite(ledAnalog, brightness);         // Set LED brightness

  delay(10); 
}

Troubleshooting and Adjustments 🙁

The most challenging part was honestly just starting this task. It was very overwhelming since I had never used Arduino before and had no experience coding in C++. It took me a few hours of looking through videos and articles just to get a better understanding of how the board works, and I even gave myself a few warm-up tasks, like a simple blink test, which I will show later in this documentation. In this particular assignment, one of my main challenges was getting the circuit itself right. The first few times I tried to set up the button, the LED wouldn’t turn on, and with so many wires it was hard to pinpoint the problem. To combat this, I took the time to properly think about my schematic as a map, then followed that instead of blindly building my board first and drawing my schematic after. This really helped when I got frustrated, and I also took many breaks in between so I could come back to the task with fresh eyes.

Random Blink Test Experiment:

Before starting the full circuit, I experimented with a simple blink test using just one LED. I uploaded the basic Blink example to the Arduino IDE to see the LED turn on and off every second. This small test helped me understand how the digital pins, resistors, and ground connections actually worked together in a real circuit. Since it was my first time using Arduino, doing this warm-up made me more confident in reading pin numbers, placing components correctly, and recognizing how code controls physical outputs on the board.
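For reference, the warm-up was just the stock Blink example that ships with the Arduino IDE, which toggles the Uno’s onboard LED once per second:

// The classic Blink example: onboard LED on for a second, off for a second
void setup() {
  pinMode(LED_BUILTIN, OUTPUT); // built-in LED (pin 13 on the Uno)
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);
  delay(1000);
  digitalWrite(LED_BUILTIN, LOW);
  delay(1000);
}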

Reflection 🙂

Overall, I really want to emphasize that I’m very proud of myself for this assignment. It showed me how coding and wiring work hand in hand: the schematic guided the hardware setup, and the code brought it to life. This was a rewarding first experience combining both analog and digital sensors in one interactive circuit.

Week 8 – Unusual Switch (Foot-Activated Circuit)

Step-On Switch: An Unusual Arduino Foot-Activated Circuit

Video:

Overview:

For my first Arduino assignment, I created an unusual switch that does not require the use of hands. My switch is activated by pressing it with the foot, completing the circuit to light up two LEDs (red and blue). The idea was to create a physical interface that feels natural, intuitive, and accessible, something that could fit into everyday gestures like stepping, rather than pressing buttons with fingers.

Inspiration:

I was inspired to use the feet after thinking about how often we interact with objects using our hands (keyboards, phones, remotes) and how limiting that can be. I wanted to explore another part of the body to interact with technology in a more grounded and physical way.
The foot felt like an interesting choice because stepping on something gives clear and satisfying feedback; it’s an action that already feels like a switch. Plus, it connected to the idea of accessibility, making a device that could be triggered without hand movement.

Building Process

Since this was my first time ever using Arduino, I learned a lot through trial and error and by asking questions in the IM Lab. I gathered all the materials from the lab, mainly cardboard scraps, metal sheets, and wires.

Here’s how I built it:

  1. Base:
    I used a folded piece of scrap cardboard to create a “step pad.” The bottom layer held the metal conductor (two rectangular aluminum sheets), while the top layer acted as a flexible pressure surface.
  2. Wiring and Circuit:
    I made my own extended wires since the ones in the kit were too short. Professor Mang showed me how to cut and separate the wires safely. I then taped the two wires onto the cardboard so that when the top layer was pressed, the exposed wire ends touched the metal conductor, closing the circuit and turning on the LEDs.
  3. Circuit Connection:
    The switch was connected to an Arduino Uno and a breadboard. I programmed it so that when the circuit closed (when someone stepped on it), both red and blue LEDs would light up (a rough code sketch follows this list).
  4. Interface Design:
    To make it more intuitive and visually clear, I designed a quick cover on Canva that said “STEP ON ME,” giving the switch a fun and inviting personality. It also made the prototype feel more like a finished (polished) interactive product.
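Since the exact pins aren’t listed above, here is a rough sketch of that logic with hypothetical pin numbers, treating the homemade step pad as a simple switch between a digital pin and ground:

// Hypothetical pins: step pad on D2 (to ground), red LED on D8, blue LED on D9
const int padPin = 2;
const int redLed = 8;
const int blueLed = 9;

void setup() {
  pinMode(padPin, INPUT_PULLUP); // pin reads LOW when the pad is stepped on
  pinMode(redLed, OUTPUT);
  pinMode(blueLed, OUTPUT);
}

void loop() {
  bool stepped = (digitalRead(padPin) == LOW); // foil contacts close the circuit
  digitalWrite(redLed, stepped ? HIGH : LOW);
  digitalWrite(blueLed, stepped ? HIGH : LOW);
}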

Overall Reflection:

I really enjoyed how hands-on this project was, literally using my feet instead of my hands. It helped me see how electronics can interact with the body in playful, unexpected ways. I liked that the result felt tactile and responsive, and it gave a sense of satisfaction when the LEDs lit up under my foot.

What I could improve next time:

  • I would make the structure more durable using thicker cardboard or a sturdier base, since repeated stepping eventually weakened it.
  • I wish I had more time to figure out how to add all four different colored LEDs to make it look more visually appealing.
  • The wiring could be cleaner and more hidden, maybe integrated into the design itself for a more polished look.

Schematic:

This schematic illustrates how my foot-activated switch completes the circuit to power the red and blue LEDs. The 5V output from the Arduino flows through the LEDs and a 330 Ω resistor to limit current. The circuit remains open until the metal conductor plates make contact, which happens when I press the cardboard switch with my foot. Once pressed, the metal pieces touch, closing the circuit and allowing current to flow, which lights up both LEDs. This simple setup demonstrates the bare bones of how my Arduino circuit works and was the plan I followed to make building this project much easier.

Conclusion

This project taught me the basics of Arduino, digital input/output, and how creative thinking can shape the way we interact with technology. Building a switch with my foot instead of my hand made me realize that interfaces don’t always need to follow convention; they can be playful, personal, and unexpected.

Week 9 Reading Response

This week’s readings were definitely refreshing to analyze, as I agreed with the author’s notion that physical computing is less about the devices themselves and more about the relationships they create between people and machines. In “Physical Computing’s Greatest Hits (and misses),” the author’s critique of overly complicated projects made me question how often creators mistake complexity for creativity. I found his idea that “the simpler interaction is the more meaningful one” especially relatable. It reminded me of minimalist interactive artworks like Rafael Lozano-Hemmer’s Pulse Room, where a simple heartbeat sensor becomes a profound collective experience. The author’s argument made me reflect on my own tendency to prioritize aesthetic or technical sophistication over intuitive engagement from the audience.

In “Making Interactive Art: Set the Stage, Then Shut Up,” the author’s metaphor of the artist as a stage-setter really reframed how I think about authorship. I used to believe that creators should guide the audience toward a specific emotional reaction, but this reading’s insistence on letting the user finish the work through participation challenged that assumption. It raises the question: where does authorship end in interactive media? Is the true art in the design, or in the unpredictability of human interaction?

Both readings pushed me to see interactivity as a dialogue rather than a display. They align with theories I’ve encountered in my Interactive Media classes, especially discussions around user agency and co-creation. Ultimately, the author’s perspective helped me realize that successful interactive work doesn’t shout, it listens. These readings made me rethink what it actually means to design something interactive. I used to believe that making an interactive project meant using as much technology as possible to impress people. But the idea these readings assert, that the simpler interaction is often the more meaningful one, really clicked with me. It made me realize that interaction isn’t about showing off sensors or screens; it’s about designing moments that feel natural. I thought about projects I’ve made where the tech took over the experience, and how maybe, the more invisible it becomes, the more powerful the interaction actually is.

Week 8 Reading Response

What I immediately noticed in the readings is how both Don Norman and Robert McMillan challenge how we define functionality: Norman through the psychology of aesthetics, and McMillan through the ingenuity of software engineering. Reading “Emotion and Design: Attractive Things Work Better” made me question something simple yet profound: why do I find certain interfaces or objects “trustworthy”? Norman’s claim that “attractive things work better” stayed with me because it connects emotion to cognition, arguing that beauty is not decoration but an active force in usability. His description of positive affect broadening creative thought resonated with me, especially when I considered my own design projects in other Interactive Media courses I have taken. When a prototype looks cohesive and inviting, I find myself more patient while debugging it; frustration fades faster. Norman’s teapot metaphor illustrates this perfectly: the emotional experience of interacting with a design changes how we perceive its flaws.

In contrast, McMillan’s “Her Code Got Humans on the Moon” celebrates the emotional labor and intellectual rigor behind Margaret Hamilton’s software for Apollo 11. I was surprised by how Hamilton’s story complicates the idea that engineering is purely rational. Her insistence on accounting for human error, writing software that could correct an astronaut’s mistake, echoes Norman’s belief that design must accommodate emotion and imperfection. When Hamilton’s code prevented a lunar crash due to a data overload, it wasn’t just logic at work but empathy, the foresight to design for failure.

Together, these texts made me rethink the separation between “soft” and “hard” skills in design. Emotion and logic, art and code, are not opposites but co-creators of reliability. I’m left wondering: in a future dominated by AI systems, can machines be designed to “care” the way Hamilton’s software did, to anticipate human error with grace?

Week 6 – Midterm Project Documentation

Copy Paste To Browser For Full-screen Experience:

https://editor.p5js.org/AsmaAlMurr/full/i15QXvk3g

Overall Concept

My project, Majlis Madness, is an interactive game that introduces players to Emirati cultural traditions through play and memory. The game is set inside a majlis, a traditional gathering space where families and friends share hospitality, food, and conversation. Instead of just learning about this culture through text, the game engages the player with a memory sequence challenge: they must remember the order in which Emirati snacks “glow” and then repeat the sequence. The player has three lives, symbolized by a heart icon at the top left. The concept combines cultural storytelling and information with a fun, simple game design so that players learn something new while having an enjoyable experience.

Originally, my idea was to design the game around plants in a garden, where players would memorize the sequence of flowers. While this concept was visually appealing, it didn’t feel personal enough to me. I realized that using snacks in a majlis would be more meaningful, since it directly connects to my own cultural experiences and memories. Instead of discarding the prototype, I used its basic bones as a shell to create a better version of the game. This shift made the game feel more authentic and gave it a stronger connection to my Emirati traditions.

My Original Game Idea (Prototype):

Inspiration:

The design of Majlis Madness aims to blend visuals, sounds, and interactivity: background images depict the majlis, oud music plays in the background to create atmosphere, and traditional snacks like Vimto, laban, chips, and ice pops become the central objects of the memory challenge. This makes the project both playful and informative, highlighting how cultural spaces like the majlis combine hospitality, tradition, and fun.

On a personal level, I feel a deep connection to the majlis because it has always been at the center of family and community life for me. Growing up, it was the place where I learned the value of gathering, listening, and sharing food. By recreating it in my game, I wanted to honor this space and give players a sense of its warmth, cultural meaning, and social importance. For me, the project is not only about coding a game but also about carrying forward traditions that shaped my own experiences.

For the background of the game, I was inspired by the traditional majlis setting, with its patterned carpets, red cushions, and lanterns that create a warm, communal atmosphere. This space felt like the perfect environment to represent Emirati culture, since the majlis is where people come together to share food, stories, and hospitality. 

For the sound design, I wanted it to feel authentic and true to the spirit of the majlis. After struggling to find the right audio online, I decided to ask a few local friends for inspiration. Their suggestions helped me discover tracks that carried the warmth and cultural depth I was aiming for, which made the game atmosphere feel much more genuine. I decided to stick to the classic tradition of the oud instrument, as that is what both my friends and I associate with the majlis setting.

Screenshot on whatsapp where I asked for help for selecting the sound:

Here is an image of an oud for those who have never seen one:

Soundtrack of Abu Dhabi | National Geographic

How It Works (Game Mechanics):

When the player loads the game, they first see a cover screen with a logo for the game (Majlis Madness) and two options. Pressing ‘Enter’ takes them to a welcome page that introduces the Emirati majlis and explains its cultural importance. There is also an instructions page that gives step-by-step directions on how to play. Once the player starts, they watch a glowing sequence of snacks and then try to click them back in the same order. Each correct click is rewarded with a glow effect and a positive sound, while mistakes trigger an error sound, a red X, and eventually a game-over screen. If the player completes a sequence, they level up and face a longer, more challenging sequence.

Technically, the game uses a state machine to move between phases such as “cover”, “welcome”, “instructions”, “waiting”, “show”, “play”, “win”, and “gameover”.  Images and audio files are preloaded, and the layout is made responsive so the game can adapt to fullscreen sizes.
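As a simplified illustration (each screen is reduced to placeholder text, not my actual drawing code), the skeleton of that state machine looks something like this in p5.js:

let state = "cover"; // current phase of the game

function setup() {
  createCanvas(600, 400);
}

function draw() {
  background(240);
  // Each frame, draw only the screen that matches the current state
  switch (state) {
    case "cover":        text("Majlis Madness - press Enter", 20, 40); break;
    case "welcome":      text("Welcome to the majlis", 20, 40);        break;
    case "instructions": text("How to play", 20, 40);                  break;
    case "show":         /* play back the glowing snack sequence */    break;
    case "play":         /* let the player click snacks in order */    break;
    case "gameover":     text("Game over", 20, 40);                    break;
  }
}

function keyPressed() {
  if (state === "cover" && keyCode === ENTER) state = "welcome"; // advance phases
}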

Planning and Design

(CHAT GPT WAS USED FOR SPECIFIC ARTISTIC ELEMENTS)

Before I wrote a single line of code, I began designing the game on paper. Sketching out screens and flows helped me plan the user experience in a structured way. I connected this process with UI concepts we had learned in class, like keeping instructions simple, providing clear feedback, and creating intuitive navigation between states. Having this roadmap made the actual coding process smoother, since I already had a clear vision of how each part of the game should look and feel.

Initial Planning Sheet:

Whiteboard In Class:

For the visual elements, I combined resources from different places. I gathered some reference images from Google (like snacks and majlis items) and then used ChatGPT to generate cartoon versions of these objects, which I further tweaked in Procreate. This gave the game a playful and consistent art style while still grounding it in recognizable Emirati cultural elements. I liked how this workflow let me balance authenticity with creativity, bringing everyday cultural objects into a polished, game-ready format. For assets that were more specific, like the glow feature in the game, I used Procreate on my iPad to draw a glowing circle, because that was too difficult to find on Google Images, so it was better to create it on my own.

I am especially proud of the way my project mixes cultural storytelling with technical interactivity. Adding atmospheric oud music, using Arabic text (“مرحبا”) alongside English, and visually highlighting Emirati snacks that most locals associate with their childhood makes the game feel culturally rich. From a technical perspective, organizing the code into states and using a class for snacks makes the project more readable and user-friendly, while still handling animation, audio, and user feedback effectively.

Code Snippet:

I am particularly proud of the code that handles snack animations and misclick feedback, because it brings the game to life and makes the experience feel way more polished. The way a snack shakes when a player clicks the wrong one, enlarges when it is chosen, and glows during active play adds personality to each object and makes the interactions more satisfying. I also like how the red X sign briefly appears on a misclick before moving to the gameover state; it gives the player clear feedback without being overwhelming. For me, this section shows how I was able to combine logic and creativity: not just making the game functional, but adding expressive details that make it feel engaging and fun.

// Draw snacks with active animations
  for (let i = 0; i < snacks.length; i++) {
    const shaking = wrongIndex === i && wrongShakeFrames > 0;
    const enlarging = clickedSnack === i;
    const glowOnTop = enlarging && state === "play";
    snacks[i].draw(shaking, enlarging, glowOnTop);
  }
  if (wrongShakeFrames > 0) wrongShakeFrames--;

  // when showing a misclick, display the red X sign briefly then go to gameover
  if (state === "misclick") {
    if (wrongIndex >= 0 && xImg) {
      const s = snacks[wrongIndex];
      image(xImg, s.x + s.w / 2 - 30, s.y - 40, 60, 60);
    }
    misclickHold--;
    if (misclickHold <= 0) state = "gameover";
    return;
  }

Debugging 🙁

Debugging turned out to be one of the most challenging but also most important and rewarding parts of this project. There were times when I stared at the same piece of code for hours and couldn’t see what was wrong, and I realized I needed fresh eyes, either by stepping away and taking breaks or by asking someone else to look at it with me. That process often helped me notice small mistakes I had been overlooking. The class we had last week on debugging strategies ended up being way more useful than I expected for a project of this size. It gave me practical techniques, like breaking problems into smaller parts and testing sections of the code separately, which saved me a lot of time and frustration.

Me when I find the bug after looking for hours, just to realize I had spelt the word “function” wrong:

Debugging 101 : r/ProgrammerHumor

Challenges and Areas for Improvement

One of the biggest challenges I ran into was learning how to use states (this made me want to cry), since I had never worked with them before. At first, it was confusing to manage the different phases of the game and make sure each part made sense (like the cover screen, instructions, and gameplay) and transitioned smoothly. Over time, I began to understand how states could structure the flow and make the game easier to organize. Another challenge was finding traditional Emirati audio that felt authentic and added to the mood of the majlis setting. I wanted the sound to truly capture the atmosphere, so it took extra effort to search for the right oud tracks that matched the visuals and theme of the game.

For improvements, I would like to add more cultural depth to the game, such as different levels themed around other parts of Emirati hospitality or new backgrounds showing other Emirati cultural settings. Technically, the game could also benefit from smoother animations, for example, fading glows, a more creative transition between states, and more positive feedback when a player levels up. While the core mechanics and atmosphere work well, as I learned in this course there is ALWAYS room to expand our games, whether that’s in terms of storytelling or polishing the technical aspects. Overall, I’m very happy with how this turned out.

Week 5 – Midterm Progress (VERY) rough draft

(VERY ROUGH) draft of my game

For my midterm project I am designing an interactive memory game called Garden of Sequence. The idea is inspired by the concept of a magical garden where flowers “light up” in a sequence, and the player must repeat the pattern. Each round, the sequence grows longer and playback gets faster, which challenges the player’s short-term memory and focus. The interaction is simple but engaging: the player begins at a menu and presses Enter to start the game. During the playback phase, the game shows a sequence of flowers highlighted one by one with a circle (which I will later change to a glow or shine). Once playback ends, the player’s turn begins, and they must click the flowers in the same order. If they are correct, the game advances to the next round with a longer sequence. If they are incorrect, the game ends and a restart option appears. At any time, pressing “R” resets the game to the menu so a new session can begin.

Right now, I’m starting off with the bare bones of the game and keeping things simple. I’m not too focused on visuals or polish yet because I want to make sure the core concept, gameplay mechanics, and basic UI are working first. The prototype is built with a very clear structure: the flow of the game is controlled by four states, MENU, PLAYBACK, INPUT, and GAMEOVER. Each state decides what gets drawn on the screen and how the player can interact at that moment. I also created a Flower class to represent each clickable flower, which stores its position, size, color, and index. The class has a draw() method to show the flower and a contains() method to check if the player clicked inside it. The flowers are just circles for now, as placeholders. Other functions like startGame(), restartGame(), and prepareNextRound() handle moving from one round to the next, while makeSequenceForRound() creates a random sequence with the correct length for each round. The updatePlayback() function is what plays the sequence back to the player: it shows which flower is active by drawing a simple white outline circle around it (which I will later replace with a glow or other visual effect). Interaction is kept basic: the Enter key starts the game, the R key restarts it, and clicking on the flowers lets the player try to repeat the sequence.
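To make that structure concrete, here is a minimal sketch of the Flower class described above; the exact property names in my prototype may differ slightly:

// Placeholder flower: a plain circle that knows its spot in the sequence
class Flower {
  constructor(x, y, size, col, index) {
    this.x = x;
    this.y = y;
    this.size = size;
    this.col = col;
    this.index = index; // this flower's position in the garden
  }

  draw(active) {
    noStroke();
    fill(this.col);
    circle(this.x, this.y, this.size);
    if (active) { // white outline marks the active flower during playback
      noFill();
      stroke(255);
      circle(this.x, this.y, this.size + 12);
    }
  }

  contains(px, py) {
    // true when a click lands inside this flower's circle
    return dist(px, py, this.x, this.y) < this.size / 2;
  }
}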

NOT YET IN PROTOTYPE BUT PLANNED FOR THE ACTUAL GAME: When designing the visual elements for Garden of Sequence, I wanted to blend AI-generated assets made with ChatGPT with my own creative touch. I used AI tools to quickly generate base images such as the background, which gave me a solid starting point and saved time on initial drafts. From there, I created a logo and customized it in Procreate, adding hand-drawn details, adjusting colors, and layering text with the flowers so they felt more personal and unique to the game. For the flowers, I used images from Google that I liked, removed their backgrounds to make them PNGs, and tweaked minor details to make sure they looked like what I want for my actual game. This mix of AI efficiency and manual drawing allowed me to create visuals that are polished but still carry my own artistic style. It’s important to note that these elements are not yet in the prototype but will be added to the actual game later on.

Background:

Flowers:

Game Logo for cover page:

The most intimidating part of this project was figuring out how to handle the playback of the sequence and the checking of user input without overlap. The challenge was not only to generate a random sequence but also to display it one flower at a time, with pauses in between, and then smoothly transition to the input phase. If playback and input overlapped, the game would feel broken. To minimize this risk, I stripped the game down to its simplest form. Instead of complex glowing graphics, I used a basic white circle to indicate the active flower. I tested different sequence speeds and lengths until the loop felt reliable. By reducing the visuals and focusing on the sequence logic, I was able to confirm that the core mechanic works before moving on to more complex features such as the sound and design. I’m excited to mess around with the sounds; I feel like they will add a lot of depth to my game, especially since I can play a positive noise when users get the sequence right and an error noise when they get it wrong.

This prototype demonstrates the essential gameplay loop and shows that the memory challenge mechanic actually works in p5.js. With the hardest logic already tested, I now feel confident adding more polished elements such as custom flower drawings, glow animations, sparkles, and ambient sound. The prototype also sets up room for future features like score tracking, or maybe even weather events that could make gameplay more dynamic. Starting small and addressing the most uncertain part first gave me a working structure to build on, along with a clear plan for how to transform this into a polished final project.

Week 5 – Reading Response (Computer Vision for Artists and Designers)

Reading Computer Vision for Artists and Designers made me realize how differently machines interpret the visual world compared to humans. Where my eyes and brain can immediately recognize faces, objects, and contexts, a computer sees only streams of pixel data without inherent meaning. That difference kinda amazes me: what feels intuitive for me (like noticing the mood on a friend’s face) must be translated into measurable rules for the computer, such as brightness thresholds or background subtraction. This gap forces me to think about vision not as a natural act but as a series of constructed processes, something that both reveals the limits of human assumptions and opens new artistic possibilities.

The text also showed me that helping computers “see” isn’t only about coding better algorithms but also about designing the physical environment to be legible to the machine. Techniques like backlighting, infrared illumination, or retroreflective markers are surprisingly simple but effective. I found this point significant because it shifts responsibility back onto the artist or designer: we’re not just programming systems but curating conditions where vision becomes possible.

What I can’t ignore, though, is how these same techniques can easily blur into surveillance. Works like Lozano-Hemmer’s Standards and Double Standards or Jeremijenko’s Suicide Box made me uncomfortable precisely because they expose how tracking technologies, even when playful or artistic, carry power dynamics. If a belt can silently follow me or a camera can count unacknowledged tragedies, then computer vision isn’t neutral, it’s political. This makes me question: when I use vision algorithms in interactive art, am I creating a playful experience, or am I rehearsing systems of control?

For me, the text ultimately sharpened a tension: computer vision is at once liberating, because it expands interaction beyond a keyboard and mouse, and troubling, because it normalizes being watched. As a student studying Interactive Media, I feel I must navigate this duality carefully. A question that stuck with me is how to design works that use computer vision responsibly, acknowledging its history in surveillance, while still exploring its potential for creativity and embodiment.

Week 4 – Text Generative Art

For this piece, I wanted the text to be the artwork. I was drawn to generative typography and to the way Arabic calligraphy sometimes forms circles and wreaths. That felt right for motivation: words orbiting you like support. Using p5.js, I kept the idea simple: move, rotate, repeat and let those small rules create rhythm.

Before I touched the code, I mapped the whole idea on paper. I drew quick thumbnails of a circle in the center, tested where the English and Arabic lines should sit (which one tilts left/right), and marked the button at the bottom so the composition felt grounded. I sketched a few ring sizes to see how the wreath should feel, then noted font options next to each sketch (tall/condensed for English, warm/legible for Arabic). I even made a mini color chart with high-contrast pairs to keep legibility strong. That paper plan made the coding step feel like tracing: I wasn’t guessing; I was just implementing a layout I’d already tested by hand.

The code is organized so the flow is clear. In `setup()` I make the canvas, center the text, and place a button at the bottom. In `draw()` I set the background color, then call one function to draw the ring and another to draw the button. The ring function repeats the Emirati Arabic line around a circle so it looks more like art created through text. Inside the ring, I tilt the English line one way and a slightly larger Arabic line the other way. When I click the button, the quote and colors change, so every press feels like a fresh poster.
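As a stripped-down illustration of that flow (the quotes/qIdx names match the snippet later in this post, while the color pairs and button label here are stand-ins):

let quotes = [{ en: "Keep going", ar: "واصل" }]; // sample entry, same shape as mine
let qIdx = 0;
let palette = [["#112233", "#FFD166"], ["#0B3954", "#FF6B6B"]]; // stand-in color pairs
let pIdx = 0;

function setup() {
  createCanvas(600, 600);
  textAlign(CENTER, CENTER);
  const btn = createButton("New quote");
  btn.position(width / 2 - 40, height - 40);
  btn.mousePressed(() => {
    // each press: advance to the next quote and color pair
    qIdx = (qIdx + 1) % quotes.length;
    pIdx = (pIdx + 1) % palette.length;
  });
}

function draw() {
  background(palette[pIdx][0]);
  fill(palette[pIdx][1]);
  text(quotes[qIdx].en, width / 2, height / 2);
}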

Conceptually, I’m showing the message in two voices. English gives punch and clarity; Emirati Arabic gives warmth and a playful touch with design. The outer Arabic circle feels like community, something steady around me while the inner cross of lines pulls my eye back to the center. Bright, high-contrast colors keep it readable and energetic.

I did hit a bump halfway through when I started messing with Arabic fonts. The letters were not connecting and looked wonky; basically, it was unreadable. This set me back a good two hours: I started a new p5.js file, as Prof Mang suggested, with just the simplest form of the code, before I realized the issue wasn’t my code; it was the font itself. Some fonts don’t include proper Arabic shaping, and p5 will fall back to whatever it can. I then switched to a full Arabic font (like “Noto Kufi Arabic”), and instead of downloading the font I embedded it into the HTML, which fixed my problem. This was a good learning lesson, as it gave me more confidence in my coding abilities. I was so sure that I had probably messed up the code, I didn’t even give myself a chance to consider that the font was the issue.

Here is an image of the issue I was facing:

I’m most proud of the code that draws the Arabic ring. It took me a while to understand not just how to do it, but how I wanted it to feel. I kept imagining the words circling like quiet encouragement, not static but not too free-flowing; it was really about finding that in-between. Getting there meant a lot of trial and error: turning things a bit, nudging them outward, letting each repetition change in a small way so it didn’t look stiff. When it finally worked, the text stopped feeling like code on a screen and started feeling alive. That moment when something frustrating becomes simple and meaningful is why this section is my favorite.

  // outer ring (Arabic repeated around a circle)
  textFont(AR_FONT);
  let r = min(width, height) * 0.38; // radius
  let copies = 24; // how many stamps
  let spin = frameCount * 0.005; // slow rotation

  for (let i = 0; i < copies; i++) {
    let angle = spin + (TWO_PI * i) / copies; // angle around circle
    let wobble = sin(frameCount * 0.02 + i) * 4; // tiny breathing
    let size; // cycle the text size
    if (i % 3 === 0) size = 16;
    else if (i % 3 === 1) size = 18;
    else size = 20;
    push();
    rotate(angle);
    translate(0, -r + wobble);
    textSize(size);
    text(quotes[qIdx].ar, 0, 0);
    pop();
  }

Going forward, I want to push the typography and build patterns from text itself, experimenting with new pairings (e.g., Amiri or Markazi) and exploring grids, spirals, lattices, and simple art made only from words. I’ll play with rotation and translation to see how meaning shifts as the pattern changes. If I had more time to work on this, I would focus more on the interactivity part and less on the generative art part. Although this made a cool visualization, maybe I could explore having users type a feeling, which would then be met with a motivational quote to match or respond to that feeling. Overall, this was a great learning experience and made me more confident in using text for generative art.