Reading Response – Making Interactive Art: Set the Stage, Then Shut Up and Listen

This idea of not over-explaining your interactive artworks really struck a chord with me. I’ve definitely been guilty of that in the past – spelling out too many specifics about what different elements “mean” and how people are “supposed” to interact with them. But as the author points out, doing that pretty much defeats the whole purpose. You’re just dictating how the audience should think and experience the piece, instead of leaving room for them to explore and interpret it themselves.

I can vividly remember one interactive installation I saw that fell into this trap. It looked really cool – these conductive surfaces that would trigger light patterns when you touched them. But then the description plaque gave you this long, explicit walkthrough of the precise sequence you “should” follow when engaging with it. It ended up feeling really prescriptive and took away from the sense of curiosity and spontaneous discovery that initially drew me to the work.

The author’s point about interactive art being the “start of a conversation” between the artist and viewer resonated so much. Rather than a static, finished product, it’s meant to be this open-ended exchange where the audience’s live participation and personal perspectives complete the experience. Kind of like a director setting up a premise and suggestions for the actors, but then letting them organically find their own emotional truths within that framework.

Moving forward, I really want to embrace that spirit of intentional ambiguity in my own interactive work. Instead of strictly defining roles and meanings, I should focus on crafting intriguing environments, suggestive arrangements of elements, and potential pathways to explore – but then step back and allow diverse interpretations to emerge organically through self-directed engagement. Creating prompts for personal dialogue rather than dictating conclusions. It’s a shift in mindset, but one I think will lead to much richer, more interactive experiences.

Reading Response – Physical Computing’s Greatest Hits (and misses)

This tour through the greatest hits (and misses) of physical computing projects was such a fun read!

One quote that particularly resonated with me was: “Sometimes when people learning about physical computing hear that a particular idea has been done before, they give up on it, because they think it’s not original. What’s great about the themes that follow here is that they allow a lot of room for originality.” As someone still finding my footing in creative disciplines, I can relate to that instinct to get discouraged if you feel like you’re just retreading old ground. But the author makes a compelling case for why revisiting familiar concepts is worthwhile – there’s an endless well of creative variations to explore.

Rather than dismissing these well-trod paths as clichés, the piece argues that putting your own spin on an established idea can make it feel totally fresh and novel. I was particularly struck by the examples under “Mechanical Pixels.” The artistic possibilities of combining precise kinetic movements with immersive audiovisuals seem endlessly fascinating. Dan Rozin’s mind-bending mechanical mirrors sound like they blur the boundaries between interactive art and raw mechanism in some delightfully bizarre ways.

At the same time, I’ll admit some of the Greatest Hits left me a bit puzzled. I’m still not 100% sure I grasp the emotional motivation behind things like “Remote Hugs” that aim to convey intimacy over a distance. Maybe I’m just a cynic, but I have a hard time imagining any unhuggable object truly capturing that warmth and connection.

The whole catalog is a humbling reminder of just how much creative ground has already been covered in this space – but also how unmapped the frontiers of invention still remain. I can only hope that I can someday create my own trail.

Colour-Changing Lamp

The challenge was to create something that blended analog and digital inputs to control a set of LEDs – one through an analog sensor and the other digitally.

The Concept:
I envisioned a vibrant desktop lamp that could cycle through a kaleidoscope of smoothly blending colours.

The Execution:
For the analog input, a potentiometer proved perfect – the Arduino’s 10-bit ADC reads its knob position as one of 1024 values (0–1023). This enabled fluid colour control.

An RGB LED became the centerpiece light source. Its red, green, and blue elements could blend into any hue based on the potentiometer’s analog output levels. A regular red LED served as the digital indicator, powering on/off with the slide switch.

I wired the potentiometer to an Arduino analog input and the slide switch to a digital input pin. The RGB LED trio connected to three PWM analog outputs for mixable color output, while the red LED patched into a separate digital pin.

The Code:
The Arduino continuously read the potentiometer’s smooth analog value via analogRead(). I then mapped this range across the RGB spectrum, setting the three PWM output levels to blend the corresponding hue. This proved to be slightly beyond my scope, so I leaned on online resources to accomplish it.

For the digital side, it just checked the slide switch state – HIGH powered the separate red LED, while LOW turned it off.

// Define pin connections
int potPin = A0;           // Potentiometer at analog pin A0
int redPin = 9, greenPin = 10, bluePin = 11; // RGB LED pins
int switchPin = 2;         // Digital pin for the toggle switch
int ledPin = 13;           // Pin for the additional standard LED

void setup() {
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  pinMode(ledPin, OUTPUT);     // Set the additional LED pin as output
  pinMode(switchPin, INPUT_PULLUP); // Set the switch pin as input with pull-up
}

void loop() {
  int potValue = analogRead(potPin); // Read the potentiometer value
  int hueValue = map(potValue, 0, 1023, 240, 0); // Map pot value onto the custom hue range (240 down to 0)

  // Convert hue to RGB
  float r, g, b;
  hueToRGB(hueValue, r, g, b);

  // Write RGB values to LED pins
  analogWrite(redPin, r * 255);
  analogWrite(greenPin, g * 255);
  analogWrite(bluePin, b * 255);

  // Check the state of the switch
  if (digitalRead(switchPin) == LOW) {  // Switch is pressed (toggle switch connects to GND)
    digitalWrite(ledPin, HIGH);         // Turn on the additional LED
  } else {
    digitalWrite(ledPin, LOW);          // Turn off the additional LED
  }

  delay(10); // Short delay for stability
}

// Custom HSV-style conversion: the segments are rearranged so the sweep
// runs red -> magenta -> blue -> cyan -> green, skipping pure greens.
void hueToRGB(int hue, float &r, float &g, float &b) {
  float s = 1.0; // Saturation: 1 for full colour
  float v = 1.0; // Value: 1 for max brightness
  float C = s * v;
  float X = C * (1 - fabs(fmod(hue / 60.0, 2) - 1));
  float m = v - C;
  float r1, g1, b1;

  if (hue >= 0 && hue < 60) {
    r1 = C, g1 = 0, b1 = X;  // Red to pinkish-red
  } else if (hue < 120) {
    r1 = X, g1 = 0, b1 = C;  // Pinkish-red to purple
  } else if (hue < 180) {
    r1 = 0, g1 = X, b1 = C;  // Purple to blue
  } else if (hue < 240) {
    r1 = 0, g1 = C, b1 = X;  // Lighter blue
  } else if (hue < 300) {
    r1 = X, g1 = C, b1 = 0;  // Skip greens
  } else {
    r1 = C, g1 = X, b1 = 0;  // Skip greens to yellow
  }
  r = (r1 + m);
  g = (g1 + m);
  b = (b1 + m);
}

Challenges:

My original vision was integrating this into a physical lamp with the RGB as the main light source. However, I struggled to find an easy way to run the component wires and extend the LEDs cleanly off the breadboard – a skill I’ll need to develop.

Future Improvements:
– Adding animation modes like pulsing, gradual color-cycling, and custom fading sequences between hues.
– Using light sensors to automatically adjust brightness based on ambient lighting.
– Exploring alternative RGB mapping patterns beyond the standard spectrum for unnatural, psychedelic hue blends.
– Integrating everything into a stylish 3D printed desktop lamp enclosure.


My First Arduino Project – An Automatic Night Light ⚡️

For my first Arduino project, I decided to build a simple automatic night light that turns on when it gets dark. The basic concept is to use a photocell (light sensor) to detect light levels and then turn an LED on or off accordingly.

The Components:

– Arduino Uno board
– Photocell (light dependent resistor)
– LED
– Resistors
– Breadboard and jumper wires

The Concept:

A photocell is a resistor that changes resistance based on the amount of light hitting its sensor area. In bright light, the resistance is low, allowing more current to flow. In darkness, the resistance is high, restricting current flow.

I used this property to build a basic light sensor circuit. By wiring the photocell into a voltage divider on one of the Arduino’s analog input pins, we can read values that track the ambient light level. With some code to set a light threshold, we can then turn an LED on when it’s dark and off when it’s bright.

const int led = 8;           // LED output pin
const int sensor_pin = A0;   // Photocell voltage divider input
int sensor;                  // Latest light reading
const int threshold = 500;   // Below this reading, it's "dark"

void setup() {
  pinMode(led, OUTPUT);
  Serial.begin(9600);        // Print readings to help tune the threshold
}

void loop() {
  sensor = analogRead(sensor_pin);  // 0-1023; lower means darker (with this wiring)
  Serial.println(sensor);
  if (sensor < threshold) {
    digitalWrite(led, HIGH);        // Dark enough: turn the LED on
  } else {
    digitalWrite(led, LOW);         // Bright: turn it off
  }
}

The end result is a compact night light that automatically lights up whenever the ambient light drops below the threshold level!

Future Development:
While functional, this is a very basic project. Some improvements I’d like to make are:

  • Make it portable by integrating a battery pack for a wireless night light
  • Design it into functional household objects like lamps, book lights, stair lights, etc.
  • Program different LED brightness levels based on duration of darkness
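That last idea could start from a small helper that ramps the PWM brightness the longer darkness lasts. This is just a rough sketch of one possible approach – the function name and timings are my own invention, not part of the project:

```cpp
// Hypothetical helper: map how long it has been dark (in ms) to a PWM
// brightness (0-255), ramping from dim to full over the first 10 seconds.
int brightnessForDarkness(unsigned long darkMillis) {
  const unsigned long rampMs = 10000;  // reach full brightness after 10 s
  if (darkMillis >= rampMs) return 255;
  // Linear ramp starting at a dim 25 so the light is visible right away
  return 25 + static_cast<int>((255 - 25) * darkMillis / rampMs);
}
```

Inside loop(), you’d record when darkness began with millis() and then call something like analogWrite(led, brightnessForDarkness(millis() - darkSince)).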

This first project taught me the basics of working with Arduino, simple circuits, analog inputs, and lighting control. I’m excited to level up my skills on more advanced projects!

Watch it in action here!

IMG_4512

Reading Response – Her Code Got Humans on the Moon—And Invented Software Itself

This piece blew my mind. I had no idea that one of the key pioneers of modern software and coding was a working mom from the 1960s! Margaret Hamilton is such an icon.

Just let that sink in for a moment. At a time when women were expected to stay home and support their husband’s career, Hamilton was leading an MIT team writing the critical onboard flight software that allowed NASA to accomplish the seemingly impossible – landing astronauts on the lunar surface and returning them safely.

What makes it even more incredible is that she was doing this boundary-pushing work while also bringing her young daughter to the lab. Little Lauren was napping under mommy’s desk as Hamilton and her team were inventing core programming concepts like error prioritisation and asynchronous processing from scratch. Techniques that are still fundamental today!

The part about the “Little Old Ladies” literally weaving the software into near-indestructible copper-wire rope memory is so fascinating. It’s a stark contrast to our current world of seamless cloud computing and automatic updates. But it captures the blind ambition and faith in human ingenuity that powered that era’s space race.

My favorite anecdote from the reading though is Hamilton advocating to add extra fault protection to the code because her daughter had exposed a flaw in the simulator – and NASA dismissing it as impossible. Then that exact scenario happening on the critical Apollo 8 mission and Hamilton’s protocol saving the day! What foresight.

Stories like this are such great reminders that the technological marvels we now take for granted were once radical frontiers explored by true visionaries and pioneers like Hamilton. At a time when the concept of “software” was barely understood, she had the brilliance to blaze that trail through the unknown and invent an entirely new discipline.

This was such an inspiring read!!

Reading Response – Three Teapots

Don Norman’s “Three Teapots” piece really got me thinking about how design isn’t just about pure functionality. The part that stuck out to me was when he talked about his three very different teapots – the bizarrely unusable Carelman one, the plain but practical Nanna pot, and the cleverly designed Ronnefeldt tilting teapot. Despite their varying levels of usability, Norman admits to using all three regularly depending on his mood and the situation. 

This challenges the idea that good design has to be 100% focused on usability above all else. Norman makes the point that aesthetics, emotion, and personal preferences also play a huge role in how we perceive and enjoy designed objects. His teapot collection shows that design excellence isn’t a one-size-fits-all thing – it’s about striking the right balance between functionality, beauty, and generating an emotional connection for the user.

I totally relate to this from my own experiences with products and objects. There have been times when something was highly usable but felt soulless and uninspiring. On the flip side, I’ve been drawn to gorgeous pieces of design that maybe weren’t the most practical but just made me feel good owning and using them. Norman reminds us that great design caters to our practical needs as humans, but also our emotional and aesthetic desires.

His points about how emotions influence our thinking and decision-making were also fascinating. The idea that positive emotions can boost our creativity and tolerance for small design flaws, while negative emotions can make us laser-focused but closed-off, is pretty mind-blowing. It makes me think designers need to consider the emotional resonance of their work, not just tick boxes for usability.

Overall, “Three Teapots” challenges the usability-over-everything mentality in a really insightful way. It argues that design should harmonize utility, beauty, and generate an emotional response in users based on their subjective needs and experiences. 


Crafting “Garden Guardian”: The Journey of My Midterm Project

When coming up with the idea for “Garden Guardian,” I wanted to make a charming, aesthetic game that had a challenging twist. A basic garden planting game didn’t seem very exciting on its own. I needed to add something to it.

However, this wasn’t always the case. In the very beginning, this project was pretty basic and boring. My first draft was just a game where you could plant flowers in a field, and that’s all it did. I didn’t focus much on how it looked during this early phase. It was pretty simple and not very attractive. This is what it looked like at first.

With the core functionality in place, I could now turn my attention to the aesthetics. To enhance the visuals, I used a combination of images generated by Dall-E and icons sourced from Google Images. This allowed me to give the project a more polished and appealing look while retaining the foundational code I had already developed.

The game was pretty, but I wasn’t satisfied. That’s when I decided to throw in some pests causing trouble for the player’s garden. These pest invasions make the simple act of growing flowers into more of a defensive mission. Cultivating a colourful garden is still the main goal, but now you have to protect it as well.

Imagine this: you start with a blank canvas, ready to transform it into a breathtaking field of blooms. With a simple click, you can choose between flowers to add splashes of colour. 

But just when you think you’ve mastered the game, the real fun begins! Pesky pests appear out of nowhere, trying their best to kill your floral babies. That’s when your skills as a true Garden Guardian will be put to the test.

With this project, I really wanted to challenge my coding skills. One of the first challenges I faced was designing a game state management system. I wanted smooth transitions between the introduction screen, gameplay, and instructions. Through trial and error, I eventually settled on a streamlined approach, with the draw() function acting as the game’s heartbeat, constantly updating and rendering all the visuals used in the game (icons, buttons and backgrounds) based on the current state.
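Stripped of the rendering details, the pattern looks roughly like this – a simplified sketch with stand-in render functions, not the actual project code:

```javascript
// Simplified sketch of the state-machine pattern behind draw().
// Each state maps to its own render function; draw() dispatches every frame.
let state = 'intro';

function drawIntro()        { return 'intro screen'; }   // stand-ins for the
function drawGame()         { return 'garden + icons'; } // real rendering code
function drawInstructions() { return 'how to play'; }

const renderers = {
  intro: drawIntro,
  playing: drawGame,
  instructions: drawInstructions,
};

function draw() {
  return renderers[state]();  // p5.js calls draw() ~60 times per second
}

function setState(next) {
  if (next in renderers) state = next;  // ignore unknown states
}
```

Because each state owns its render function, adding a new screen is just one more entry in the lookup table.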

The drawGame() function became the centrepiece of my code, responsible for orchestrating the entire garden experience. I spent SO MANY hours refining this function, ensuring that the rendering of the garden background, the placement of icons, and the display of planted flowers all worked seamlessly together. I’m particularly proud of the highlighting technique I implemented, which draws attention to the currently selected icon, enhancing the overall user experience.

// Highlight the selected icon
noFill();
stroke(255, 204, 0);
strokeWeight(2);
let selectedIconIndex = selectedFlowerType - 1;
if (selectedIconIndex >= 0 && selectedIconIndex < iconPositions.length) {
  let pos = iconPositions[selectedIconIndex];
  rect(pos.x, pos.y, 50, 50, 10);
}
One of the most rewarding aspects of this project was creating the Flower class. Building a system to manage the lifecycle of each flower, from planting to potential infestation and treatment, was a true test of my object-oriented programming skills. The introducePest() method, which simulates the arrival of a pest and sets a timer for the flower’s demise if left untreated, was a particularly satisfying challenge to overcome. This took way too much time but as my mother would say, it felt like eating a ladoo (a delicious Indian dessert) when I could finally get it to work!

class Flower {
  constructor(x, y, type) {
    this.x = x;
    this.y = y;
    this.type = type;
    this.size = 50;
    this.hasPest = false;
    this.pestTimer = null;
  }

  display() {
    let img = [flowerImg1, flowerImg2, flowerImg3][this.type - 1];
    image(img, this.x - this.size / 2, this.y - this.size / 2, this.size, this.size);
    if (this.hasPest) {
      image(pestImg, this.x - this.size / 2, this.y - this.size / 2, this.size, this.size);
    }
  }

  introducePest() {
    if (!this.hasPest) {
      this.hasPest = true;
      this.pestTimer = setTimeout(() => this.die(), map(targetFlowers.planted, 0, targetFlowers.total, 4000, 3000));
    }
  }

  treatWithPesticide() {
    if (this.hasPest) {
      clearTimeout(this.pestTimer);
      this.hasPest = false;
      gameTime += 3;
    }
  }

  die() {
    let index = flowers.indexOf(this);
    if (index !== -1) {
      flowers.splice(index, 1);
      targetFlowers.planted = max(0, targetFlowers.planted - 1);
    }
  }
}

The Flower class encapsulates all the properties and behaviours of each flower in the garden. From managing the flower’s position, type, and size to handling pest infestations and treatment, this class is the backbone of the game’s core mechanics.

The introducePest() method is a prime example of the thought process behind crafting engaging gameplay. When a pest is introduced, a timer is set to simulate the potential demise of the flower if left untreated. The duration of this timer shrinks as more flowers are planted, increasing the difficulty as the game progresses. I had to do a lot of research (and get some help from ChatGPT) to get this section working.

Conversely, the treatWithPesticide() method allows players to counter the pest threat by using the pesticide icon. When a flower is treated, the pest timer is cleared, the hasPest flag is reset, and the player is rewarded with a few extra seconds on the game timer, encouraging strategic decision-making.

The die() method handles the removal of a flower from the game when it succumbs to a pest infestation. By splicing the flower from the flowers array and adjusting the targetFlowers.planted count, the game state is seamlessly updated, reflecting the player’s progress towards the target.

Throughout the development process, I encountered numerous roadblocks and debugging nightmares. However, each obstacle was an opportunity to learn and grow. I quickly realised the importance of modular code, which led me to create separate functions and classes for specific tasks, improving the overall readability and maintainability of my code.

Looking back on this journey, I’m filled with a sense of accomplishment and gratitude. “Garden Guardian” not only allowed me to create an entertaining game but also served as a valuable learning experience. I gained a deeper understanding of game mechanics, object-oriented programming, and the intricacies of creative coding with p5.js. Most importantly, I discovered the joy of problem-solving and the satisfaction of seeing my code come to life in the form of an engaging interactive experience.

However, there are still so many areas where I can improve and expand “Garden Guardian”: The scoring system needs work. Right now, you just win by planting enough flowers before time runs out. But I want a better system that scores you based on things like how many pests you treated, the variety of flowers planted, and maybe even keeping your garden completely pest-free. 

The difficulty progression could be better too. I think having the difficulty adapt based on the player’s performance would make it more engaging. If someone is struggling, the game could spawn fewer pests or give more time.

Visually, while the current look is charming, adding more detailed graphics, animations and effects could really enhance the overall aesthetic appeal. And new gameplay elements like power-ups, special abilities or different game modes could add lots of replay value.

During development, I ran into some tough challenges. Managing all the different timers and making sure the countdown was accurate when players gained extra time was really tricky. Weird edge cases, like planting flowers outside the garden area, still need some work.

Working through the issues I faced was a huge learning experience for me. It really emphasised the importance of thorough testing, keeping my code organised, and anticipating potential problems. Moving forward, I’ll apply those lessons to make my games even more polished.

“Garden Guardian” may be a small project, but it represents a big milestone in my coding journey. I had so much fun tackling challenges and adding and improving features and I cannot wait to experiment with game dev more!

Reading Response – Computer Vision

The overview on using computer vision in interactive art really sparked my imagination. Of course algorithms can analyze images – but creating immersive experiences that actually respond to someone’s real-time presence? The possibilities seem endless. Still, I wonder – at what point could systems become too reactive? Krueger’s Videoplace reacted whimsically, but always ethically. The line between delight and dystopia likely needs careful watching.

Even with today’s exponential tech growth, restraint remains critical in design. Amidst the complexity, what separates a seamless experience from one that’s cluttered and confusing is knowing when enough is enough. But making those calls is an art, not a science. The LimboTime game showed how a playful vision system could emerge from simple building blocks. Yet its limitations in changing lighting reveal the fluid and adaptable intuitions still required.

Overall this piece brought great food for thought on computer vision’s creative possibilities. The blend of concrete examples and big picture analysis kept an engaging pace. I appreciated the framing of challenges creatively rather than just technically. This hit a sweet spot between grounding me conceptually and sparking curiosity to apply these ideas further. The writing style created enjoyable momentum.

Reading Response – The Psychopathology of Everyday Things

The psychological principles for design make a lot of sense to me. Of course affordances and mental models would shape how people use stuff – those concepts seem so core. Formalizing them into the design process feels overdue in a way. Still, I wonder – following guidelines can limit you too, can’t it? The times I’ve felt the spark of creativity or problem-solving have actually come a lot from defying conventions or unspoken “rules.” So where’s the line between guidance and overstandardization? Feels like a tension worth watching.

And even with technology’s insane pace of advancement, I think designers still have to be minimalists at heart. What separates a seamless, delightful experience from a bloated, confusing one is restraint and knowing when to stop adding features just because you can. But how designers make those calls is an art, not a science.

One part that really resonated was thinking about mental models across teams. We all carry biases and assumptions and lenses we see the world through. That exists even (maybe especially?) in cross-functional groups. Creating spaces where communication norms themselves feel simplified and clarified could make a world of difference in alignment, collaboration and innovation. There’s surely design opportunities even in how organizations function on a daily basis.

Overall this piece brought up great food for thought, and refreshed my appreciation of design thinking. I like writings that make me feel both grounded conceptually and curious to apply ideas further. This hit that sweet spot through framing the challenges creatively vs. just technically. The blend of examples and analysis kept a nice engaging pace too. Overall, this was a really enjoyable read!

Assignment 4 – Fortune Cookie

I’ve always loved cracking open fortune cookies to see the surprise message inside. I decided to recreate this idea digitally by building an animated, interactive fortune cookie in p5.js that reveals a new quirky fortune each time it’s clicked.

The core concept here was to display random fortunes drawn from a list of silly phrases. I started by coming up with a whole array of potential fortune texts (with the help of ChatGPT) – things like “Run” or “Don’t eat the paper.” I tried to channel the cryptic non-sequiturs you might find in real fortune cookies. I then set up a pickNewFortune() function that selects a random element from this master list of fortunes. It gets called every time the cookie finishes its opening animation.

So on each cookie click, it grabs a new random index from the list and stores this fortune string to display.
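The selection logic itself is tiny – something like this (a sketch with an abbreviated, made-up fortune list standing in for the real one):

```javascript
// Master list of fortunes (abbreviated here) and the picker that
// chooses one at random each time the cookie finishes opening.
const fortunes = [
  'Run',
  "Don't eat the paper.",
  'Help, I am trapped in a cookie factory.',
];
let currentFortune = '';

function pickNewFortune() {
  const index = Math.floor(Math.random() * fortunes.length);
  currentFortune = fortunes[index];
  return currentFortune;
}
```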

The visual animation of the cookie opening was also really fun to build out. I made the separate cookie halves tilt back and apart slowly to reveal the text underneath. I’m really happy with (and proud of) how realistic the motion looks!

The code for the animation looks like this:

function animateCookie() {
  let halfAnimationStep = maxAnimationStep / 2;
  let angleOffset, pullOffset;
  
  if (animationStep <= halfAnimationStep) {
    // Tilt cookie halves apart  
    angleOffset = map(animationStep, 0, halfAnimationStep, 0, -40);
    pullOffset = 0; 
  } else {
    // Finish opening animation
    angleOffset = -40;  
    pullOffset = map(animationStep, halfAnimationStep, maxAnimationStep, 0, 200);
  }

  drawFortuneCookie(angleOffset, pullOffset);

  // Progress animation
  if (animationStep < maxAnimationStep) {
    animationStep++;  
  } else {
    state = 'open'; 
  }
}

In the future I could definitely expand and refine this concept even more – perhaps allowing the fortune text itself to animate in different ways for some visual variety. I would also love to enhance the text generation further – for example, mixing and matching fortune text fragments to assemble new combinations. I could also display the text animating onto the screen for more visual interest when the fortune is revealed.
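That mix-and-match idea could be sketched like this – purely hypothetical, since the fragment lists and assembleFortune() aren’t part of the current code:

```javascript
// Hypothetical fortune generator: glue a random opener to a random twist
// so two short lists yield many possible combinations.
const openers = [
  'A great journey awaits',
  'Beware the vending machine',
  'Your lucky number is hiding',
];
const twists = [
  '...but only on Tuesdays.',
  '...bring snacks.',
  '...trust the cookie.',
];

function assembleFortune() {
  const pick = (arr) => arr[Math.floor(Math.random() * arr.length)];
  return `${pick(openers)} ${pick(twists)}`;
}
```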