Final Project: Virtual Microbiology Lab


Throughout the semester, I’ve created projects that, in a sense, gamified concepts in Biology. Back when I worked on my Assignment 2, I had expressed a desire to allow users to select multiple color options for the bacteria. So, this project grew out of that desire, in addition to giving people a chance to practice very basic Synthetic Biology / Microbiology.

In essence, the concept is simple. There are six prepared agar plates. Additionally, there are six fluorescent proteins: Green Fluorescent Protein [green], mCherry [pinkish-red], mOrange [orange], mKO [yellow], mCerulean [cyan], and Blue Fluorescent Protein [blue]. All of these proteins fluoresce naturally under UV light and are not usually produced by bacteria. Instead, their genes are obtained from bioluminescent animals, so they can be used to verify whether a particular gene editing technique worked in bacteria. Biology-heavy introduction aside, the idea is that users select the fluorescent protein-modified bacteria they want and “pipette” them onto the corresponding plate. Then, they can incubate the bacteria to watch them grow, and toggle the UV light to actually see the fluorescence.


Interaction Design

As mentioned above, the interaction design has two main parts: the laptop-focused part, and the physical prototype part. On the laptop, the user can press on-screen buttons to change the fluorescent protein, incubate the bacteria already plated, toggle the UV lamp, and dispose of the plates. On the physical prototype, the user has to bring the pipette to one of six holes (each of which contains a hidden photoresistor) and press a button to “dispense” bacteria. The idea is that the user controls which color of bacteria they want to grow on which plate.

Arduino Code

The Arduino code was relatively simple, as its main purpose was to read the values from the hidden photoresistors and send them to p5. Its secondary function was to receive color values from p5 and make an RGB LED glow in the corresponding color.

phot0 = analogRead(A0);
phot1 = analogRead(A1);
phot2 = analogRead(A2);
phot3 = analogRead(A3);
phot4 = analogRead(A4);
phot5 = analogRead(A5);
// Report a plate only when a reading rises far enough above its startup
// baseline (photoresistor 0 is more sensitive, hence the lower threshold).
// Branch bodies reconstructed: each reports its plate index to p5, with
// 6 as the "nothing detected" sentinel described later in this post.
if (phot0 - phot0init > 100) {
  Serial.println(0);
} else if (phot1 - phot1init > 300) {
  Serial.println(1);
} else if (phot2 - phot2init > 300) {
  Serial.println(2);
} else if (phot3 - phot3init > 300) {
  Serial.println(3);
} else if (phot4 - phot4init > 300) {
  Serial.println(4);
} else if (phot5 - phot5init > 300) {
  Serial.println(5);
} else {
  Serial.println(6);
}

The above code detects when the light intensity crosses a threshold above the background intensity. Pin 0 has a lower threshold because its photoresistor appeared to be more sensitive to light, reaching near-maximum readings even under ambient light. Three of my photoresistors behaved that way (likely all three were of the same type, one different from the rest), so I decided to compensate in code rather than replace them.

Below is my (highly confusing) assembly diagram of the project, as created using TinkerCAD.

p5 Code

The p5 code was mostly based on my old code from Assignment 2. However, I had to modify it quite a bit, both to work with Serial Communication and to restrict the bacterial growth to specific plates rather than the entire area of the sketch. Like last time, I used randomGaussian() for the growth. I preferred it over Perlin noise because of the higher degree of randomness it gave me; as expected, Perlin noise tended towards aggregation rather than spread.

Also, to keep colonies from appearing beyond the plate borders, I used an if() statement to display only those colonies that were generated within the borders. I had originally wanted to use a while() loop to truly restrict generation to within the plate, but I soon discovered that the while() loop interfered with Serial Communication and crashed the program. Since between 5 and 30 colonies were generated every frame (60 times per second), I felt the few colonies falling outside the borders and not being displayed wouldn’t really be missed.

for (let i = 0; i < numColonies; i++) {
  // Gaussian random to ensure aggregation towards center
  colonyX = randomGaussian(cultures[d].x, spread);
  colonyY = randomGaussian(cultures[d].y, spread);
  colonyR = random(2, 15);
  // draw only colonies whose whole body falls within the plate border
  if (
    dist(colonyX, colonyY, plates[d].x, plates[d].y) <=
    plates[d].diameter / 2 - colonyR - 3
  ) {
    if (uv) {
      fill(colonyUVColors[cultures[d].prot]); // fluoresce under the UV lamp
    } else {
      fill(colonyColors[cultures[d].prot]); // color without UV (name assumed)
    }
    ellipse(colonyX, colonyY, colonyR);
  }
}

Serial Communication

Serial Communication in this project was two-directional.

From the p5 sketch, the RGB values of the current fluorescent protein’s color were sent to the Arduino, which used them as the inputs for red, green, and blue light. I converted the hexadecimal color value to decimal using parseInt(), which I learned to use from this tutorial.

let colRgb = hexToRgb(colonyUVColors[currentProt]);
let sendToArduino = colRgb[0] + "," + colRgb[1] + "," + colRgb[2] + "\n";
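The hexToRgb() helper isn’t shown in the snippet above; here is a minimal sketch of what it might look like (only the function name comes from my code, the body is an assumed implementation built around parseInt() with radix 16):

```javascript
// Hypothetical implementation of hexToRgb(): converts a hex color
// string into an array of decimal [r, g, b] values.
function hexToRgb(hex) {
  // strip a leading '#', e.g. "#ff8000" -> "ff8000"
  hex = hex.replace("#", "");
  // parseInt with radix 16 converts each two-character pair to decimal
  const r = parseInt(hex.substring(0, 2), 16);
  const g = parseInt(hex.substring(2, 4), 16);
  const b = parseInt(hex.substring(4, 6), 16);
  return [r, g, b];
}
```

For example, "#ff8000" becomes [255, 128, 0], ready to be joined with commas and sent over Serial.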

From the Arduino component, the information about which photoresistor had crossed the threshold and was thus the plate on which the user had “dispensed the bacteria” was communicated to the p5 sketch. As explained earlier, this information would only be sent when the light intensity crossed a certain value above background light intensity.

p5 Sketch

Fullscreen Version:

Challenges Faced

I faced numerous challenges; I enumerate some of them below.

1) To detect? Or not to detect?

As anyone who uses photoresistors (or any kind of variable-resistance sensor, really) knows, photoresistors give wildly inconsistent readings. Worse, since their resistance changes with light intensity, any change in background lighting can trigger false positives, or even mask actual detections and cause false negatives. Both are bad.

There are two ways of combating this. The first is to build an enclosure that minimizes background light: a “dark room” of sorts. My original plan included this in some form, as I had planned to build a miniature laminar flow hood as the housing for the project. However, given my lack of any abilities in fabrication, this was out of the question.

So, the next solution lies in code. Instead of trying to detect light intensity above a fixed threshold, a better approach is to use the difference between the background light intensity and the light to be detected. So, I had the Arduino measure the light intensity at startup in the setup() function. But, as you might have guessed, this presents another problem: how would I account for light changing during runtime? I solved this by using the button output (the one the user presses to pipette) as the condition for when the actual detection happens, and otherwise continually re-measuring the background light intensity while the button is not pressed. This actually worked surprisingly reliably, even in weird lighting conditions.
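The button-gated baseline idea can be sketched in plain JavaScript (all names are hypothetical; the real logic lives on the Arduino): while the button is up, the baseline tracks ambient light; while it is down, readings are compared against the last stored baseline.

```javascript
// Sketch of button-gated baseline detection: ambient drift just moves
// the baseline, so only the LED's extra light triggers a detection.
function makeDetector(threshold) {
  let baseline = 0;
  return function update(reading, buttonPressed) {
    if (!buttonPressed) {
      baseline = reading; // button up: keep tracking ambient light
      return false;
    }
    // button down: detect only if well above the stored baseline
    return reading - baseline > threshold;
  };
}
```

With a threshold of 300, a room getting brighter simply raises the baseline, while pressing the pipette button over a lit hole pushes the reading far enough above it to register.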

2) Watch me crank that (solder) boy

Soldering was a pain. It looks easy from the outside, but I was clearly doing something wrong, because making 8 solder connections took me 2.5 hours. One mistake I discovered: to “beautify” a joint, I would re-melt the solder a bit so that it flowed around the wire and looked smoother, but then the whole thing would melt off and drop unceremoniously. I quickly learned not to do that.

3) If it can’t be fixed by tape, you’re not using enough

As mentioned in my Resources Used section, much of my project is held together with a ton of Scotch tape; half an entire roll of it, in fact. I initially wanted to join the cardboard segments making up the pipette with hot glue, but I quickly discovered that hot glue wouldn’t work too well. Not because of the strength of hot glue (it was strong enough), but because the cardboard itself was too weak to handle the shear stress of being pressed on while linked by hot glue at only one or two joints. Tape allows for more surface area of contact and holds the pieces together like a rubber band would, instead of just creating a joint. A bunch of tape was also used to hold the aluminium foil taut against the pizza box, both to prevent crinkles and to keep the foil from shifting around and covering the photoresistor windows.

4) Serial communication: More drama than Hindi TV serials

Serial Communication. It’s a useful tool to create projects linking digital artwork on p5 to physical processes through an Arduino.

But it requires so much bug-fixing to get right.

Right off the bat, as described earlier, a while() loop regenerating random positions until they fell within the disc borders worked perfectly fine in a p5-only sketch, but crashed as soon as Serial Communication was added, most likely because the while() loop blocked the Serial sending and receiving of data. This required me to switch to an if() statement that displays only those colonies generated within the plate borders, using the dist() function to calculate the distance between the centers of colonies and plates.

Also, I noticed that occasionally, Serial communication would stop entirely between the p5 and Arduino components. This, I found, was because my code initially only sent a number associated with a light sensor when that sensor detected the LED. What happens when no sensor detects the LED? You guessed it: the Arduino stops sending data, breaking the Serial Communication. I fixed this by having the Arduino send an arbitrarily picked ‘6’ whenever no sensor detected the LED.
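On the p5 side, the keep-alive sentinel can then be filtered out when parsing the incoming message. A sketch of that handling (the function name and return convention are assumptions; the ‘6’ sentinel is from my fix above):

```javascript
// Sketch of parsing the Arduino's message in a p5 serial handler.
// Indices 0-5 are real plate detections; 6 is the keep-alive sentinel.
function parsePlateIndex(data) {
  const value = parseInt(data, 10);
  if (Number.isInteger(value) && value >= 0 && value <= 5) {
    return value; // a plate was "pipetted" into
  }
  return null; // sentinel (6) or garbage: nothing happened this round
}
```

The stream never goes silent, but only values 0 through 5 actually trigger a dispense in the sketch.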

The final challenge I in fact couldn’t solve. I noticed both during User Testing and at the show itself that if the user switched colors too quickly (as excited users wanting to try out all the different colors are wont to do), Serial lagged on the Arduino side and the LED would display the color associated with a different protein. The recovery time gradually increased with runtime, eventually reaching as long as a minute. I confirmed that p5 was sending the correct information with virtually no delay, but the inability to get Serial printouts from the Arduino while Serial Communication was running meant that I could not identify what was causing the lag on the Arduino side.

User Testing


Overall, I’m rather proud of my project. It allowed me to convey my love for Biology and let people, irrespective of their background, try their hand at a simple experiment, without any prior lab prep work or worries about contamination and the other challenges of growing bacteria. The simulation was also much faster than actual bacterial growth (taking E. coli as an example, at full growth each second of the simulation corresponds to about 6 hours). Seeing the excited faces at the IM showcase when the fluorescence under UV was revealed was (as cheesy as it sounds) worth all the time I spent on this project. That, I would say, is the aspect I am most proud of. Beyond that, I am particularly proud of detecting the LED light using a difference-in-intensity method rather than a raw threshold, as it eliminated the biggest possible source of variation.

However, there is definitely room for improvement. Most of what I include below comes from feedback actually given to me during the IM showcase, or from behaviors that I observed.

  1. The project was essentially not very intuitive. While I did have a poster with written instructions, like all other written forms of instruction, it was ignored, thus leaving most users confused about what to do without my assistance. The instructions being unclear also didn’t help.
  2. The interaction could flow better. I could map changing the fluorescent protein to the keyboard instead of mouse clicks, as (a) a lot of MacBook users kept accidentally right-clicking on my Windows laptop due to differences in how each device’s trackpad senses clicks; and (b) most people stuck with the same color for all six plates because they forgot, or did not bother, to change the color.
  3. As some of my Biology major friends pointed out, it would be cool to have six tubes with six LEDs of different colors representing the fluorescent proteins. Then, instead of selecting the protein on screen, the user could “pipette” from one of the tubes and into each well, just like in a real Biology lab. However, this would require both an RGB sensor and a way to distinguish between pipetting in and out. One button with two different depths (like on real pipettes) wouldn’t work, and two separate buttons would be too cumbersome to hold and press. But it would definitely increase the immersion if possible.
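The keyboard mapping suggested in (2) could be sketched as a small helper plus the standard p5 keyPressed() hook (the helper name and key assignment are hypothetical; currentProt comes from my sketch):

```javascript
// Hypothetical key-to-protein mapping: number keys 1-6 would select
// the six fluorescent proteins (returns an array index, or null).
function proteinForKey(k) {
  const n = parseInt(k, 10);
  return n >= 1 && n <= 6 ? n - 1 : null;
}

// In the p5 sketch this could be wired up as:
// function keyPressed() {
//   const p = proteinForKey(key);
//   if (p !== null) currentProt = p;
// }
```

This sidesteps the right-click problem entirely, since no mouse interaction is needed to change colors.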

Resources Used

Most of the electronics came from the base Sparkfun Arduino Uno Rev3 SMD kit. I borrowed additional photoresistors and an arcade button from the IM lab consumables. Additionally, I borrowed solid-core wires and solder to extend my connections, and electrical tape to insulate them.

For the physical prototype itself, the pipette was made of cardboard from the IM lab, held together with a LOT of tape. A hole was punched into one side of the pipette for the wires, and another at the bottom for the LED. A larger hole was made at the top for the arcade button.

The main working surface was based on a pizza box that was covered in foil and punched with six holes to serve as windows for the photoresistors. All of my circuitry went into the pizza box, which would make it convenient to make any quick fixes. I could just open the box and work on it rather than have to cut anything out of a more permanent casing.

On the software side, the cover image was generated using DALL·E 3, while the images of the fluorescent proteins were obtained from the Protein Data Bank (PDB) maintained by the Research Collaboratory for Structural Bioinformatics (RCSB).

IM Showcase

Week 12: Serial Communication (Amiteash + Aneeka)

Serial Exercise 1

For the first exercise, we used a flex sensor in a voltage divider circuit as the analog input on pin A0 to control the x-axis motion of an ellipse on the screen. The more the flex sensor is bent, the higher its resistance.


p5 Sketch:

Serial Exercise 2

For the second exercise, we represented the movement of an ellipse along the x-axis, as controlled by mouseX, as the brightness of 2 LEDs, on pins 9 and 11. The green LED increases in brightness as the ellipse moves to the right while the red LED increases in brightness as the ellipse moves to the left.


p5 Sketch:

Serial Exercise 3

Here we edited Prof. Aaron Sherwood’s GravityWind example in p5 to connect to Arduino through Serial Communication. This allows the potentiometer on pin A0 to control wind speed and direction depending on turning of the potentiometer (0 to 1023 mapped to -5 to 5) and for the LEDs on pins 9 and 11 to blink alternately when the ball crosses the bottom end of the screen.


p5 sketch:

Code (for all three)

On the Arduino side, we used a common sketch for all three exercises, mainly to avoid errors from repeatedly re-uploading code. The setup() function established serial communication and defined the behavior while a connection was yet to be made. The loop() function was as follows:

void loop() {
  // wait for data from p5 before doing anything
  while (Serial.available()) {
    digitalWrite(LED_BUILTIN, HIGH); // LED on while receiving data
    int left = Serial.parseInt();
    int right = Serial.parseInt();
    if (Serial.read() == '\n') {
      digitalWrite(leftLedPin, left);
      digitalWrite(rightLedPin, right);
      int sensor = analogRead(potPin);
      Serial.println(sensor); // send the reading back to p5
    }
  }
  digitalWrite(LED_BUILTIN, LOW);
}

On the p5 side, our code didn’t differ greatly from the Serial Communication template. It differed mainly in mapping the analog input to the range of the sketch width or the maximum viable wind speed (for exercises 1 and 3 respectively), while variables called ‘left’ and ‘right’ tracked changes to the LED states driven by the sketch.
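The remapping described above is exactly what p5’s map() does. A standalone equivalent, with the ranges from the exercises (Arduino analog reads span 0 to 1023; exercise 1 maps them to the sketch width, exercise 3 to a wind speed of -5 to 5):

```javascript
// A standalone version of p5's map(): linearly rescales value from
// the range [inMin, inMax] to the range [outMin, outMax].
function remap(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) * (outMax - outMin)) / (inMax - inMin);
}
```

So a fully-turned potentiometer reading of 1023 becomes a wind speed of 5, and a centered reading lands near 0.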

One piece of code we were particularly proud of in Exercise 3 was how we alternated the blinking of the LEDs depending on the number of times the ball bounced.

if (position.y > height - mass / 2) {
  velocity.y *= -0.9; // A little dampening when hitting the bottom
  position.y = height - mass / 2;
  left = (left + 1) % 2;
  if (left == 0) {
    right = 1;
  } else {
    right = 0;
  }
}
The code assigning values to the right LED may look needlessly complicated; it may seem that both LEDs could simply use (x + 1) % 2, with one initialized to 0 and the other to 1. However, our method allows both LEDs to stay dark before the first bounce of each ball (which can be seen in the demo if slowed down), which we felt was key to the prompt.

Demo (for all three)

Week 12 Reading Response: Design Meets Disability

This reading by Graham Pullin was quite interesting. I especially loved how Pullin sets up the contrast between the medical model of disability (which focuses on symptoms and the ability or inability to perform certain tasks) and the social model of disability (which views disability as arising from social barriers that make spaces and conversations inaccessible to people with impairments), and how using fashion in assistive devices addresses the social model. Pullin puts a major focus on breaking down the long-held misconception that form does not matter for assistive devices as long as they perform their specified function in a discreet manner. The goal so far when crafting these devices has been to hide them, which only perpetuates the wrong belief that having a disability is shameful.

In that regard, I liked the comparison Pullin drew to eyeglasses. Eyeglasses are technically assistive devices too, and in many cases essential to prevent a disability. Under the medical model, myopia stronger than about -2.5 D would pretty much be a disability, as without the help of glasses, people with high myopia are unable to see the world normally. However, glasses, and the fact that glasses are socially acceptable, mitigate the impairment and prevent it from becoming a disability, pointing to the influence of the social model. Thus, as Pullin points out, there is an urgent need to reconcile other assistive devices with social acceptability. For that, assistive devices need to be optimized not just for function but also for form and aesthetic design.

My final point is that while Pullin makes a revolutionary call to include fashion designers and artists in the design process, there is one group he has forgotten, one that is already underrepresented in assistive device design: people with disabilities themselves. To make a successful assistive device with a desirable design, people with disabilities need to be involved at every step, not just as customers.

Final Project Proposal

Idea 1: Virtual Microbiology Lab

My first idea is to make a physically interactive version of my second assignment. So, the user would move a pen on top of a board. On the pen, there would be a button and a RGB LED mounted on opposite ends. The idea is to make the pen look like a pipette. If possible, I plan to have more than one pipette to represent different strains of bacteria. In that case, there might be one more sensor to identify which “pipette” is being picked up. This would link back to the p5 sketch to control the color of the bacteria.


The board itself would have rows of photoresistors mounted along the length and breadth, meant to detect the light from the LED. The information about the position (obtained from checking which photoresistors get the brightest measurement) will be used to map the bacterial colonies to the p5 sketch, which will then grow on the screen in a way very similar to my assignment.
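The position estimate described above boils down to an argmax over each row or column of readings. As a sketch (all names hypothetical, since this idea was never built):

```javascript
// Hypothetical position detection for the proposed board: the pen's
// x (or y) coordinate maps to whichever photoresistor in that row
// (or column) reports the brightest reading.
function brightestIndex(readings) {
  let best = 0;
  for (let i = 1; i < readings.length; i++) {
    if (readings[i] > readings[best]) best = i;
  }
  return best;
}
```

Running this once over the row readings and once over the column readings would give a grid cell to map onto the p5 canvas.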

There are many challenges I foresee. The primary is getting my hands on that many photoresistors. Also, I am not sure if photoresistors have the sensitivity required to differentiate between the light being directly in line vs. off to a small angle. I will also have to eliminate background light, although I have an idea to do so by creating a hood under which the user will have to work. Also, due to the very limited amount of time I have, as well as the amount of other commitments [mainly in the form of Capstone 🙁 ] , it might be hard to implement it.

Idea 2: Braille Translation

My second idea was to kind of make an English-to-Braille translator. On the p5 side, I plan to have the user type out a word. I have two ways of implementing this on the Arduino side:

(a) Using LEDs: In this case, the idea is to serve as a learning aid for Sighted people to learn Braille.

(b) Using Servos to raise pins: This is my preferred means of output, as it would be suitable for both Sighted and Blind people. It would be similar to an already existing device known as a refreshable Braille display.

This project was partially motivated by this project from a couple of years back, which used LEGO Mindstorms to make a Braille printer.

Shubham Banerjee, 13, works on his Lego robotics Braille printer.

While researching, I also found this handheld refreshable Braille display device made by MIT undergraduates, which serves as a pretty good indication of what the project might look like.


Week 11 Reading Response: A Brief Rant on the Future of Interaction Design

Both readings (basically a two-part reading) were a highly interesting look into future interaction systems, especially using the tactile capabilities of our fingertips to sense and manipulate objects and interaction systems.

Bret Victor’s examples of the range of motion and sensitivity of human hands and fingers reminded me of a key fact from developmental psychology: the touch receptors of the fingers, lips, and tongue are among the first senses to develop. This is why infants touch everything and put everything into their mouths; it is basically their way of seeing things.

The human fingertip in fact can resolve objects at a resolution of 0.4mm. That means, it can basically distinguish between two objects that are about half of a sharpened pencil tip apart.

Thus, with this level of capability of the human hands, one would be inclined to agree with Victor on the declaration that current systems of interaction are highly limited compared to the possibilities.

Other than that, many touchscreen technologies are unfortunately inaccessible to people with disabilities. Blind people, for example, require some kind of haptic or audio feedback from a touchscreen input, which is usually never bundled with the hardware. In many cases, no sufficient option is provided in the default software, and special software needs to be downloaded… by first making one’s way through the device unaided. Older people and people with motor disabilities also often struggle with the finer inputs touchscreens require, again due to the lack of any haptic feedback.

Interaction systems in the future need to be designed for all. But first, we must break away from the status quo and boldly go where no man has gone before.

Week 11: Music Box


For this project, Aneeka and I wished to create our own recreation of a music box (or musical box, for those from the Commonwealth). This was a popular toy/gramophone in the past (depending on how you viewed it) and we even remember having a small old one back at home in India. While they have taken many different forms, we based ours on a simple electronic one that played music when you opened the box.


Figure 1: A music box

  • 1 Arduino Uno R3 SMD
  • 1 Photoresistor
  • Slideswitch
  • Arduino Piezo Buzzer
  • 2 10 kΩ Resistor
  • Jumper Wires
Circuit Schematic and Simulation

The first step was to prepare a circuit schematic on TinkerCAD. Basically, the digital input took the form of a slideswitch feeding into a digital input pin (pin 4) through a 10 kΩ pull-down resistor. Analog Input came from the photoresistor connected to pin A0. The buzzer output was from pin 8.

Figure 2: Component Simulation View

Figure 3: Schematic View

TinkerCAD also has a handy simulation feature that even allows you to upload the Arduino code and test how the circuit behaves under simulated conditions. This definitely helped in fixing bugs before we even tested the actual circuit, and also let us simulate each component individually before assembling everything together.


Figure 4: The circuit

Basically, there are two main control points: opening/closing the box, and sliding the switch between ON/OFF.

When the box is opened, light falling on the photoresistor exceeds the threshold and thus triggers the playSong() function, which iterates over the notes in the pre-assigned song and plays it using the buzzer. When the box is closed, the light intensity falls below the threshold and the for loop breaks / is not triggered again, causing the music to stop playing.

When the switch is ‘ON’, pin 4 detects a ‘HIGH’ voltage and thus the Arduino plays song 1, which for this example we chose to be Toby Fox’s Megalovania from the game Undertale.

When the switch is ‘OFF’, pin 4 detects a ‘LOW’ voltage and thus the Arduino plays song 2, which for this example we chose to be Mikkel Fabricus Smitt’s Subway Surfers Main Theme from the game Subway Surfers.

#define LDR_PIN A0
#define SWITCH_PIN 4
#define BUZZER_PIN 8

#include "pitches.h"

int ldrThreshold = 500; // Adjust this value according to your LDR sensitivity
int song = 1;

void setup() {
  pinMode(LDR_PIN, INPUT);
  pinMode(SWITCH_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  // Read the light level from the LDR
  int lightLevel = analogRead(LDR_PIN);
  // If the light level exceeds the threshold (box open), play a song
  if (lightLevel > ldrThreshold) {
    // The switch selects which of the two songs plays
    if (digitalRead(SWITCH_PIN) == HIGH) {
      song = 1;
    } else {
      song = 2;
    }
    playSong(song);
  } else {
    noTone(BUZZER_PIN); // Box closed: stay silent
  }
}

void playSong(int songNumber) {
  // Define the melody and duration for each song
  int melody1[] = { NOTE_D4, NOTE_D4, NOTE_D5, NOTE_A4, 0, NOTE_GS4, NOTE_G4, NOTE_F4, NOTE_D4, NOTE_F4, NOTE_G4 /* ... rest of the melody omitted ... */ };
  int noteDurations1[] = { 8, 8, 4, 4, 8, 4, 4, 4, 8, 8, 8,
    8, 8, 4, 4, 8, 4, 4, 4, 8, 8, 8,
    8, 8, 4, 4, 8, 4, 4, 4, 8, 8, 8,
    8, 8, 4, 4, 8, 4, 4, 4, 8, 8, 8 };
  int melody2[] = {
    // NOTE_C4, 0, NOTE_G4, 0, NOTE_AS4, NOTE_C5, NOTE_AS4, 0, NOTE_F4, NOTE_DS4, 0,
    // NOTE_C4, 0, NOTE_G4, 0, NOTE_AS4, NOTE_C5, NOTE_AS4, 0, NOTE_F4, NOTE_DS4, 0,
    // NOTE_C4, 0, NOTE_G4, 0, NOTE_AS4, NOTE_C5, NOTE_AS4, 0, NOTE_F4, NOTE_DS4, 0,

    // NOTE_C4, 0, NOTE_E4, 0, NOTE_G4, NOTE_A4, NOTE_AS4,
    NOTE_C5, 0, NOTE_C5, 0, NOTE_AS4, 0, NOTE_A4, 0,
    NOTE_AS4, 0, NOTE_AS4, NOTE_C5, 0, NOTE_AS4, NOTE_A4, 0,
    NOTE_C5, 0, NOTE_AS4, 0, NOTE_A4, 0, NOTE_AS4, 0, NOTE_E5,

    NOTE_C5, 0, NOTE_C5, 0, NOTE_AS4, 0, NOTE_A4, 0,
    NOTE_AS4, 0, NOTE_AS4, NOTE_C5, 0, NOTE_AS4, NOTE_A4, 0,
    NOTE_C5, 0, NOTE_AS4, 0, NOTE_A4, 0, NOTE_AS4, 0, NOTE_E4, 0 };

  int noteDurations2[] = {
    // 4, 8, 4, 8, 4, 8, 8, 16, 8, 8, 16,
    // 4, 8, 4, 8, 4, 8, 8, 16, 8, 8, 16,
    // 4, 8, 4, 8, 4, 8, 8, 16, 8, 8, 16,

    // 4, 8, 4, 8, 4, 4, 4,
    8, 16, 8, 16, 8, 16, 8, 16,
    8, 16, 8, 8, 16, 8, 8, 16,
    8, 16, 8, 16, 8, 16, 8, 4, 8,

    8, 16, 8, 16, 8, 16, 8, 16,
    8, 16, 8, 8, 16, 8, 8, 16,
    8, 16, 8, 16, 8, 16, 8, 4, 8, 4 };

  // Select the melody and note durations based on the song number
  int *melody;
  int *noteDurations;
  int notes;

  if (songNumber == 1) {
    melody = melody1;
    noteDurations = noteDurations1;
    notes = sizeof(melody1) / sizeof(melody1[0]);
  } else {
    melody = melody2;
    noteDurations = noteDurations2;
    notes = sizeof(melody2) / sizeof(melody2[0]);
  }

  // Play the selected melody, rechecking the sensors between notes
  for (int i = 0; i < notes; i++) {
    // Read light level from LDR
    int lightLevel = analogRead(LDR_PIN);
    if ((lightLevel > ldrThreshold) &&
        ((songNumber == 1 && digitalRead(SWITCH_PIN) == HIGH) ||
         (songNumber == 2 && digitalRead(SWITCH_PIN) == LOW))) {
      int duration = 1000 / noteDurations[i];
      tone(BUZZER_PIN, melody[i], duration);
      delay(duration * 1.3); // A slight gap between notes for better sound
    } else {
      break; // Box closed or switch toggled: stop playing immediately
    }
  }
}
The note constants refer to the pitches.h header file from the toneMelody example in the Arduino IDE.

The code is not too different from the examples we did in class, which was a major advantage of this project. To switch between two different songs, we assigned the information about the songs to pointers.

Also, to ensure that the song switched off as soon as possible when the switch or photoresistor was toggled, we used an if statement leading to a break statement, which immediately stops playback.

The notes and durations were obtained from this GitHub repository. Since the repository did not have the notes for Megalovania, those were manually transcribed from sheet music found online.



We both really enjoyed working on this project. For something that was relatively simple, it was still quite impressive.

We were especially happy with the solution we found using the break statement. Earlier, the photoresistor was only read after the for loop had completed, so the song kept playing even when the box was closed, contrary to our expectations. Breaking out of the loop avoids that issue.

Week 10 Reading Response: Tom Igoe

Making Interactive Art: Set the Stage, Then Shut Up and Listen

This was a pretty interesting look at interactive media as a participatory medium. Artists often tend to be afraid of their art being assigned the wrong meaning, and artistic clarifications usually keep popping up well after a work is created (JK Rowling is a particularly glaring example). It stems from a fear that our own beliefs, our own creation, will inevitably be challenged by the audience. This is even more true for something that involves even more two-way communication, like an interactive art showcase. Even in our own documentation, it is tempting to include an exact user guide for how to “use” our art. And while user guides are helpful for products, they might not have the same effect in art.

That is not to say however, that all interactive art has to be fully participatory and unscripted. This again goes back to the debate we had when discussing Chris Crawford’s The Art of Interactive Design. Once again, the question pops up of whether interactive is participatory, or whether it has to be organic. For example, a lot of games are deeply scripted, and while it may be argued that playing the game is an organic experience of its own, it doesn’t change the fact that in a lot of games, you will be seeing most, if not all, the events that are meant to happen. Visual novels are another example. By most definitions of interactivity, they are not interactive, being no better than picture books. Yet, they do feel interactive in their own way.

At the end of the day, it is up to the artist and what compromise they make between interactivity and meaning.

Physical Computing’s Greatest Hits (and misses)

This was a nice look at different classes of physical computing projects and the benefits and challenges of each. Usually, there seems to be a common theme where there is a compromise between implementation and engagement. Relatively simpler projects like the theremin, floor pad, video mirrors, and Scooby Doo paintings may be easier to implement and more common, but they also lose engagement quickly. On the other hand, more complex mechanical pixels, body/hand cursors, multi-touch interfaces and fields of grass may be more engaging but often run into implementation or maintenance issues that take away some of the charm. In a way, my main takeaway from the article was that rather than your physical computing mechanism, what is more important is the story and how the mechanisms tie in to the story being told.

Week 10: Analog/Digital (Parking Sensor)


For this project, I wished to use the ultrasonic sensor in some form, especially as it used to be my favorite sensor back when I experimented with LEGO Mindstorms. I remembered that conceptually, a parking sensor is, in its simplest form, a combination of a digital input (Reverse Gear On/Off) and an analog input (ultrasonic/electromagnetic sensor). So, I decided to emulate that for this project.

Components Used

  • 1 Arduino Uno R3 SMD
  • 1 HC-SR04 4-pin Ultrasonic Distance Sensor
  • 1 Slideswitch
  • 1 Arduino Piezo Buzzer
  • 1 10 kΩ Resistor
  • 1 330 Ω Resistor
  • 1 Red LED
  • 1 Green LED
  • Jumper Wires (3 red, 5 black, 2 blue, 2 yellow, 2 white, 1 green [in lieu of black; see wiring conventions below])
Circuit Schematic and Simulation

The first step was to prepare a circuit schematic on TinkerCAD. Basically, the digital input took the form of a slideswitch feeding into a digital input pin (pin 12) through a 10 kΩ pull-down resistor. The analog input (which I later discovered was actually a digital input) came from the Echo pin of the ultrasonic sensor (pin 8), while the ultrasonic pulses were triggered from pin 9. Pin 7 and PWM pin 5 controlled the digitally-controlled green LED and the analog-controlled red LED respectively, while the buzzer output was on PWM pin 3.

Figure 1: Component Simulation View

Figure 2: Schematic View

TinkerCAD also has a handy simulation functionality that even allows you to upload the Arduino code and test how the circuit would work under simulated conditions. This definitely helped in fixing bugs before testing with the actual circuit, and also made it possible to simulate each component individually before assembling the whole thing together.

Usage and Implementation

So, the “parking sensor” is first switched on with the slideswitch, which simulates the reverse gear being engaged. The green LED turns on, indicating that the sensor is now active.

The analog part works by taking the reflection times reported by the ultrasonic sensor, which are converted to distance using the speed of sound. The distance is then mapped to the brightness of the red LED and to the repetition rate of the beeps from the buzzer.

The circuit was assembled almost identically to the above schematic. As I had run out of black jumper wires, I substituted a green jumper wire for one black connection; no other connection used green, so it can be read unambiguously as ground. Other than the convention of red for live / black for neutral, I followed additional conventions of my own: blue for inputs, white for digital outputs, and yellow for analog outputs.

Figure 3: Front View

Figure 4: Top View

Figure 5: Arduino Pins Connection

int distance = 0;
int duration = 0;
int brightness = 0;
int minDistance = 5;
int maxDistance = 30;

int digitalIn = 12;
int trigger = 9;
int echo = 8;
int green = 7;
int red = 5;
int buzzer = 3;

// Ultrasonic Function
long readUltrasonicDistance(int triggerPin, int echoPin) {
  pinMode(triggerPin, OUTPUT);
  digitalWrite(triggerPin, LOW);  // Clear the trigger
  delayMicroseconds(2);
  // Set the trigger pin to HIGH state for 10 microseconds
  digitalWrite(triggerPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(triggerPin, LOW);
  pinMode(echoPin, INPUT);
  // Read the echo pin and return the sound wave travel distance in centimeters
  return 0.01723 * pulseIn(echoPin, HIGH);
}

void setup() {
  pinMode(digitalIn, INPUT);
  pinMode(green, OUTPUT);
  pinMode(red, OUTPUT);
  pinMode(buzzer, OUTPUT);
}

void loop() {
  if (digitalRead(digitalIn) == HIGH) {  // Reverse gear engaged
    digitalWrite(green, HIGH);
    distance = readUltrasonicDistance(trigger, echo);
    if (distance < maxDistance && distance != 0) {
      duration = map(distance, 0, maxDistance, 5, 60);
      brightness = map(distance, 0, maxDistance, 220, 0);
      analogWrite(red, brightness);
      tone(buzzer, 523, 100);  // Play tone C5 = 523 Hz for 100 ms
      if (distance > minDistance) {
        delay(duration);  // Wait for (duration) millisecond(s)
      }
    } else {
      analogWrite(red, 0);
    }
    delay(10);  // Wait for 10 millisecond(s)
  } else {
    digitalWrite(green, LOW);
    analogWrite(red, 0);
  }
}

The code is not too different from the examples we did in class, other than the function for the Ultrasonic Sensor. For that, I followed this helpful tutorial on the Arduino Project Hub.



I enjoyed working on this project, although figuring out the ultrasonic sensor and debugging it did take a while. I was actually impressed by how sensitive the sensor turned out to be, and how it managed to sense even objects passing perpendicular to its field of view (as demonstrated towards the end of the demo). Thus, besides being a parking sensor, it actually works as a rudimentary safety sensor, being able to detect pedestrians as well.

I originally wished to include a third LED (a yellow one that would fade with distance, leaving the red one to trigger only at a threshold minimum distance), but I ran out of black jumper wires and viable space on the breadboard, so I cut it down to two. Also, not shown in the showcase itself is an issue the circuit has with powering off: possibly due to the built-in delays, it takes a while between the switch being moved to the off position and the LEDs actually turning off.

Also, I realized only later that ultrasonic sensors are technically digital inputs, but since they transduce an analog measurement (distance) into the timing of a digital pulse, I felt that it worked for the scope of this project.

Week 9: Unusual Switch (Sleep-Snitcher 3000)

The Switch itself

Before going into the background, let’s look at the switch itself:


A switch controlled by a body part other than the hand is not completely out of the realm of imagination, but it is quite difficult to come up with one, especially for a circuit based on an Arduino. When I went to collect the copper strip from the IM lab, I got the idea of taping it to my forehead. And that’s how the Sleep-Snitcher 3000 was born.

Conceptually, it is a tongue-in-cheek detector of whether a student is sleeping in class. When a student’s forehead is on the table, the LED lights up. This may also be the right place to add a disclaimer that the maker of this project does not advocate the use of this in schools or other educational institutions. It is meant to only be a fun and slightly satirical project.


Figure 1: Circuit diagram. The circuit runs from the Arduino’s 5V pin, through a 330 Ω resistor, to the components acting as the switch, then to the LED, and finally to the GND pin.

The circuit is pretty simple; after all, that is all the assignment actually requires. The above is a slightly confusing diagram from Tinkercad that shows the circuit design.

The 5V DC power output pin keeps the resistor and one terminal of what is technically a push-button switch live. When my forehead rests on the table, it bridges the two terminals of the “push-button switch” and thus lights up the LED, whose anode is connected to the switch and cathode to GND (ground).


Components Used

  1. Arduino Uno Rev3 SMD x1
  2. Jumper wires x5
  3. 330Ω carbon resistor x1
  4. Breadboard with 400 contacts x1
  5. Basic Red 620nm LED x1
  6. Aluminum Foil x1 A4 sheet (courtesy of Indian by Nature!)
  7. Copper Tape x10 inches
  8. Paper Bag x1


At first, I was planning to connect one terminal to an aluminum foil sheet on the table and the other to my forehead. This came with a major problem: the wires were just too short. Besides, a hypothetical customer of this product wouldn’t want to connect and disconnect themselves from a circuit repeatedly. Then I remembered that in push-button switches, the button itself is usually not a terminal of the switch but acts as a wire bridging the two terminals. So, I came up with the following split-terminal design using aluminum sheets.

Figure 2: Switch terminals, made of two aluminum sheets connected by wires to the breadboard (the green terminal is +ve and the yellow terminal is -ve).

The copper strip simply goes across the forehead and that’s pretty much it.

And finally, the breadboard components were obtained from the Sparkfun kit we were provided, and the Arduino was connected to a laptop for power supply.


This turned out to be more fun and simpler than I thought it would be. This was my first experience with Arduino outside of our class, but it felt nice to directly apply the concepts that were drilled into us by both high school physics and FOS. Also, the switch was surprisingly consistent, even though I had expected the circuit to be interrupted by the insulating layer of aluminum oxide that forms on foil. All in all, it was a fun project with a fun concept.

Use of Generative AI

Generative AI (OpenAI ChatGPT 3.5) was used to name the project (the original name was A Student’s Nightmare) and to generate the disclaimer at the end of the video.

Week 8 Reading Response: Don Norman and Margaret Hamilton

Don Norman’s Emotion and Design: Attractive things work better

The most interesting part of this reading for me (as well as the most important, I believe) was the part about affect theory and how it influences design considerations. I had always been aware of the tenets of affect theory, since in essence it is a core part of behavioral ecology (the study of how animal behavior influences survival and reproduction), although I never knew of “affects” by name prior to this reading. I particularly liked the point that affect, especially negative affect, is a threshold effect, with low levels of negative affect increasing concentration. This is because negative affect mainly draws on instinctual reactions of fear or anger, both of which evolved specifically to sharpen concentration in survival situations. But when the negative affect grows stronger, the fear or anger becomes overwhelming and leads to anxiety and freezing up, which, to be fair, is another evolution-designed response to challenges that cannot be solved by “fight” or “flight”. In general, as Norman points out, negative affect causes “tunnel visioning”, while positive affect causes a “broadening of the thought process”.

So, knowing about affects becomes an important consideration in design, and this part of the reading is also highly interesting. Norman compares scenarios with a degree of external negative affect (as in emergencies or dangerous work), neutral affect (most day-to-day actions), or positive affect (as in creative and safe spaces). In the negative-affect case, Norman asserts that designs should emphasize function and minimize irrelevancies. For example, emergency exit doors should, by design, immediately tell users which way they swing (they should swing outwards anyway to prevent crowd crush, but that is a matter of building codes rather than design), a sentiment Norman expressed earlier in “The Design of Everyday Things”. But in neutral- and positive-affect scenarios, aesthetics also become important, and small sacrifices of functionality for good design become increasingly tolerable as affect grows more positive. For example, we often find ourselves gravitating towards better-looking pencil boxes, better-looking soap dispensers, ornate wall clocks and wristwatches, and sleeker smartphones, among others. In each case, as long as the product scores high enough in our mental calculations of usability and cost-effectiveness, we tend to go for the more attractive option.

And this is most apparent when looking at phone sales. For example, recent iPhones (like the 14 and the 14 Pro) actually sell better in colors like Purple, Gold, and Blue than in Black (an ever-popular color), and significantly better than in Silver/White. iPhone colors (other than Red) generally do not cost the user more, so if design had no influence on sales, the expected result would be a near-even distribution across all color options, or at least a distribution reflecting production and availability (Black is usually overproduced compared to other colors). The fact that the sales distribution is skewed goes to show that more attractive products really do draw buyers, even when the choice has no functional consequence.

Her Code Got Humans on the Moon—And Invented Software Itself

This reading was very inspiring. I had known of the work of female mathematicians like Katherine Johnson and Dorothy Vaughan on the Mercury and Apollo programs, mostly thanks to the film Hidden Figures. Their work on calculating trajectories and backup trajectories for the Apollo missions was instrumental to the program’s success and even saved the lives of the Apollo 13 astronauts. However, I was unaware of Margaret Hamilton’s key contributions to both the moon landings and modern software design, made through her work on the Apollo in-flight guidance computer.

I was especially surprised to read that, despite Hamilton’s insistence on including exception handling in the software (now essentially a software-engineering-101 concept, as far as I’m aware), NASA had nearly rejected it as excessive. Apollo 8, however, demonstrated the importance of such error handling. I had also heard about Apollo 11’s memory overflow error before (apparently a result of the rendezvous radar being left on past when it was supposed to be used), but through this article I learned that Margaret Hamilton was the one who came up with the solution to it.

Reading further about this incident, I found another of Margaret Hamilton’s contributions to the success of the Apollo 11 mission, specifically where interaction was concerned. While the “priority displays” exception-handling mechanism was innovative, the low processing power and slow speeds of the Apollo computers meant there was a risk that the astronauts’ inputs and the computer’s displays could fall out of sync while the priority sub-routines were loading. This is why Hamilton put in place a standing instruction that when the priority displays came online, astronauts should wait 5 seconds for everything to load properly before entering any inputs, which helped prevent knock-on memory overflows and asynchronous input-output logic.

Overall, Margaret Hamilton’s work is highly inspiring and aspects of it can still be seen in software design today.