Week 4: Reading Response

After reading Norman, I kept thinking about how often I’ve felt genuinely embarrassed, not because I did something wrong, but because something was so badly designed that it made me look like I didn’t know what I was doing. I’ve blamed myself so many times for design failures, but Norman makes it clear that it’s not me, it’s the object. One thing that still annoys me is the sink setup at Dubai Airport. The soap, water, and dryer are all built into one sleek bar, with no sign telling you which part does what. You just keep waving your hands around and hope something responds. Sometimes the dryer blasts air when you’re trying to get soap, or nothing works at all. To make things worse, some mirrors have Dyson hand dryers built in, others have tissues hidden somewhere off to the side, and there’s no way to know without ducking and peeking like a crazy person. Norman’s point about discoverability and signifiers felt especially real here. One simple label could fix all of it.

In my interactive media work, I’m starting to think more about how people approach what I build. Norman’s ideas about system image and mental models stuck with me. If someone doesn’t know what they’re supposed to do when they see my sketch, I’ve already failed as a designer. In my work, I try to make interactive elements obvious and responsive. If something is clickable, it should look like it. If something changes, the feedback should be clear. The goal is to make users feel confident and in control, not confused or hesitant. Good design doesn’t need to explain itself. It should just make sense.

Week 3: Reading Response on What Makes Interaction Strong?

After going through this week’s reading, I realized how often the word “interactive” gets thrown around. Chris Crawford makes a sharp distinction between reaction and interaction. Just because something responds doesn’t mean it’s truly interactive. For him, strong interaction happens only when a system “listens, thinks, and responds thoughtfully,” like a real conversation. He emphasizes that true interaction requires both sides to be active participants. That stuck with me, especially since we often label anything that reacts to input as interactive, even when it’s really just one-sided. I liked how Crawford stripped the term down to something clear. It’s not about bells and whistles, it’s about actual communication.

Looking back at my own p5.js sketches, I was intentional about making them gamified because I wanted them to feel interactive, not just look interactive. I wanted them to look more like a game than a GIF, because only by interacting with an artwork do you really get to sense what went into it and what it stands for. I love the effect of something actually happening because of a user’s input. It gives users a sense of presence, like they’re not just observing but actively shaping what unfolds. That moment of response makes people feel like they’re part of the piece, not just pressing buttons on the outside. It’s rewarding for both the user and the creator, and it’s the kind of experience I want to keep building on. To me, interactivity is back-and-forth communication.

Week 2: Reading Reflection on Casey Reas’ Talk

When I first watched Casey Reas’ talk on chance operations at the beginning of the semester, I wasn’t entirely sure how to feel. Honestly, I was just ready to absorb whatever perspectives were being offered through different narratives. I’ve always been someone who likes control when I’m creating: knowing what’s going to happen, having a plan, getting things “right.” So the idea of building randomness into a project felt a little chaotic, maybe even risky. But Reas broke that tension down. He walked us through how he uses systems, chance, and instruction-based art to remove the artist’s ego from the process and let the artwork evolve in ways he couldn’t fully predict. The way he referenced John Cage and connected those ideas to computational art made it all click: randomness doesn’t mean lack of intent, it just shifts where the intent is. Reas isn’t just throwing things into the void and hoping for the best, he’s setting up a structure where randomness can still move freely. That reframing stuck with me. It’s not about losing control entirely, it’s about creating a space where unexpected things can happen without everything falling apart. That made the idea of randomness feel a lot less intimidating and a lot more useful.

Since I’m writing this reflection a bit later in the semester, I’ve had more time to think about my own relationship to randomness—and honestly, I like randomness I can control. Total chaos just ends up looking like a muddy mess. I prefer when randomness happens within boundaries, where the outcome is still cohesive and intentional, even if it’s unpredictable. That’s the balance I’m drawn to: letting go a little, but not so much that the work loses meaning or direction. It’s about creating space for surprise, but still being able to call the final result your own.
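To make that concrete for myself, here’s a toy sketch of what “randomness within boundaries” means to me. It’s plain Python rather than p5.js, and the palette and ranges are made-up values for illustration, not from any actual piece:

import random

# A fixed palette and fixed bounds are the "structure";
# chance only operates inside them.
PALETTE = ["#1b263b", "#415a77", "#778da9", "#e0e1dd"]

def make_mark():
    # Every value is random, but every value is also constrained,
    # so the result varies without turning into a muddy mess.
    return {
        "x": random.uniform(0, 400),
        "y": random.uniform(0, 400),
        "size": random.uniform(10, 40),   # bounded: never invisible, never overwhelming
        "color": random.choice(PALETTE),  # cohesive: always from one palette
    }

composition = [make_mark() for _ in range(50)]
print(composition[0])

Every run is different, but every run still looks like it belongs to the same piece. That’s the kind of surprise I can call my own.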

Week 12: Final Progress Report

Final Idea

After lots of thought (initially, I had a different idea in mind), I’ve settled on a concept for a game titled “Did You Eat My Chips?”. The premise is a semi-thriller played in a lighthearted tone, where the objective is to eat chips surreptitiously while avoiding detection by your slime sibling. My aim was to strike a balance: an experience that is engaging without being overtly frightening, yet distinct from typical game scenarios. This approach was driven by my desire to incorporate a pulse sensor, using it to gauge a player’s physiological response during time-sensitive, high-pressure moments in the game and observe how those moments affect their state. The idea was largely inspired by growing up with my sister, where we would sneakily eat each other’s food; it always felt like a thriller movie.

Progress

So far, I’ve been busy sketching out ideas for how the game will look. I’ve been doing image drafts in Figma and Procreate to get a feel for the design style I want to aim for when I start building things in p5.js. On the hardware side, I’ve also been playing around with the flex sensor. That’s going to be the main way you control the game, so I’m getting the hang of how it works and how to read its values.

Visuals

What is left?

I still have to test the pulse sensor and figure out how it will interact with the game. I need to borrow one from the IM lab, so I’ll be testing that out today.

Week 12: Final Project Proposal

After a lot of deliberation and brainstorming, I am happy to report that I have solidified my final project concept. I will be producing an interactive piece of art that aims to tell the story of international students in the US who are being targeted in the current political situation.

My work will allow anyone to participate by scanning a QR code that directs them to my p5.js sketch. The sketch shuffles through images, coordinated in real time with on-screen text transcribing details, and connects to a websocket running on my server, which provides control information for the piece.
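As a rough sketch of how the server side could work, here is a minimal broadcast loop assuming Python’s websockets library (version 10 or later); the message format with a “speed” field is a made-up placeholder, not the final control protocol:

import asyncio
import json
import websockets

CONNECTED = set()  # every p5.js client currently viewing the piece

async def handler(websocket):
    # Register each viewer so control changes reach every sketch.
    CONNECTED.add(websocket)
    try:
        async for message in websocket:
            # A message from the control panel, e.g. {"speed": 1.5}
            data = json.loads(message)
            # Re-broadcast the control info to every connected client.
            websockets.broadcast(CONNECTED, json.dumps(data))
    finally:
        CONNECTED.discard(websocket)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())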

Finally, a panel at the front of the installation will hold controls that let viewers adjust the playback of the piece, including its speed and effects. A diagram of the architecture is attached below.


Final Project Finalized Concept | Hand-Drawn Shape Guessing Using AI

Let me walk you through my journey of planning an Arduino integration for my existing Python-based drawing application. TBH, the Arduino implementation isn’t complete yet, but the documentation and planning phase has been fascinating!

The Current State: A Digital Drawing Assistant

Before diving into the Arduino part, let me elaborate on the current state of the application. My app is built with Python and PyQt5, featuring:

      • A digital canvas for freehand drawing
      • Color selection and brush size options
      • An AI assistant that can recognize what you’ve drawn (with a fun, playful personality)
      • The ability to save, load, and organize drawings

The AI assistant is my favorite part—it has this quirky personality that makes drawing more entertaining. It’ll say things like “That’s obviously a circle! Van Gogh would approve!” or “I’m as lost as a pixel in a dark room. Can you add more detail?” Of course, the hardest part was actually coming up with those lines.
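For a sense of what the drawing surface boils down to, here is a minimal, self-contained freehand canvas in PyQt5. It’s a stripped-down stand-in for illustration, not the actual ui/canvas.py:

import sys
from PyQt5.QtWidgets import QApplication, QWidget
from PyQt5.QtGui import QPainter, QPen, QPixmap
from PyQt5.QtCore import Qt

class Canvas(QWidget):
    def __init__(self):
        super().__init__()
        self.setFixedSize(600, 400)
        # Draw onto an off-screen pixmap so strokes persist.
        self.pixmap = QPixmap(self.size())
        self.pixmap.fill(Qt.white)
        self.last_point = None

    def mousePressEvent(self, event):
        self.last_point = event.pos()

    def mouseMoveEvent(self, event):
        if self.last_point is not None:
            painter = QPainter(self.pixmap)
            painter.setPen(QPen(Qt.black, 3, Qt.SolidLine, Qt.RoundCap))
            painter.drawLine(self.last_point, event.pos())
            painter.end()
            self.last_point = event.pos()
            self.update()  # schedule a repaint

    def mouseReleaseEvent(self, event):
        self.last_point = None

    def paintEvent(self, event):
        # Blit the off-screen pixmap to the widget.
        painter = QPainter(self)
        painter.drawPixmap(0, 0, self.pixmap)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    canvas = Canvas()
    canvas.show()
    sys.exit(app.exec_())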


The Arduino Part

The next phase of development involves adding physical controls through an Arduino. Here’s what I’m planning to implement:

      • Joystick control for moving the digital cursor and drawing
      • Physical buttons for clearing the canvas, changing colors, and triggering the AI assistant
      • Visual and audio feedback to enhance the experience

The beauty of this integration is how it bridges the digital-physical gap. I’m aiming for the highest level of interactivity I can think of, and an interactive AI system driven by physical controls is as far as my current knowledge lets me take that.
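Since arduino_integration.py doesn’t exist yet, here is a rough planning sketch of how the Python side could read the physical controls. It assumes the pyserial library and a made-up line protocol where the Arduino prints lines like JOY,512,480 or BTN,CLEAR; the port name is a placeholder too:

import serial

PORT = "/dev/ttyUSB0"  # placeholder; depends on the machine
BAUD = 9600

def read_events(connection):
    """Yield parsed (kind, payload) events from the Arduino."""
    while True:
        line = connection.readline().decode("utf-8", errors="ignore").strip()
        if not line:
            continue
        parts = line.split(",")
        if parts[0] == "JOY" and len(parts) == 3:
            # Joystick X/Y as raw 0-1023 analog readings.
            yield ("joystick", (int(parts[1]), int(parts[2])))
        elif parts[0] == "BTN" and len(parts) == 2:
            # Button events: CLEAR, COLOR, or GUESS.
            yield ("button", parts[1])

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1) as conn:
        for kind, payload in read_events(conn):
            print(kind, payload)

The real version would feed these events into the PyQt5 event loop instead of printing them, but this is the shape of the bridge I have in mind.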

Vision of the structure

app/
├── main.py (needs to be updated)
├── arduino_integration.py (not yet there)
├── test_arduino.py (not yet there)
├── config.py
├── core/
│   ├── __init__.py
│   ├── conversation.py
│   ├── drawing_manager.py
│   ├── recognition.py
│   ├── tts.py
│   └── arduino_controller.py (not yet there)
├── data/
│   ├── __init__.py
│   ├── database.py
│   ├── history.py
│   └── user_settings.py
├── ui/
│   ├── __init__.py
│   ├── main_window.py
│   ├── canvas.py
│   ├── toolbar.py
│   ├── settings_dialog.py
│   └── training_dialog.py
├── Arduino/
│   └── DrawingAI_Controller.ino (not yet there)
└── sounds/ (not set, only placeholders)
    ├── connect.wav
    ├── disconnect.wav
    ├── clear.wav
    ├── color.wav
    └── guess.wav

Challenges

I want to mount the joystick and buttons on a plain wooden plank, held in place by a container, with the buttons on an actual physical platform rather than a breadboard, so that everything looks nicer. To be honest, I am still pretty lost on that part, but we will see how things go.