Week 10 Assignment – IM (Ivan & Muhammad) Techno


It took a while for us to come up with a properly functioning musical instrument with a decent sound. At first, we tried to incorporate an ultrasonic sensor to control the pitch of the sound, a servo motor to create a beat, and a flex sensor to control the duration of the notes. It worked, but we weren’t satisfied with the quality or stability of the sound. After a little experimenting, we finally decided to use transistors to amplify the sound, which gave the music a techno feel.

Technical structure

We used the ultrasonic sensor to control the pitch of the sound. The B10K potentiometer controls the duration of each note, while the B100K potentiometer controls the volume. The transistor, as mentioned earlier, amplifies the sound.

We decided to use a transistor to amplify the signal to the speaker. We also added an external 9V power supply for the speaker, and these additions greatly enhanced the volume and clarity of the sound. The volume potentiometer is connected in series with the signal from the Arduino to the speaker, so adjusting its resistance physically reduces the signal voltage that reaches the speaker, and thereby the volume. In other words, we use this potentiometer as a variable resistor rather than as an input.
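The series potentiometer can be reasoned about as a voltage divider with the speaker: the larger the pot’s resistance, the smaller the share of the signal voltage that reaches the speaker. A minimal sketch of that arithmetic, assuming an 8-ohm speaker (an illustrative value, not a measurement of our hardware):

```javascript
// Fraction of the signal voltage that reaches the speaker when a series
// resistance (the potentiometer) sits between the source and the speaker.
// The 8-ohm speaker impedance is an assumed, illustrative value.
function speakerVoltage(vSignal, rPot, rSpeaker = 8) {
  return vSignal * (rSpeaker / (rPot + rSpeaker));
}

// With the pot at 0 ohms the full signal reaches the speaker;
// as the resistance rises, the speaker's share of the voltage falls.
console.log(speakerVoltage(5, 0));  // full 5 V signal
console.log(speakerVoltage(5, 8));  // half the signal: 2.5 V
console.log(speakerVoltage(5, 72)); // one tenth: 0.5 V
```

This is why turning the pot toward its maximum resistance makes the output quieter.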

On the other hand, the potentiometer used for the note duration provides an analog input to the Arduino, which the code maps to the delay interval. This showcases two different use cases of the same component.

Video demonstration:


int mapDistanceToIndex(float distance) {
  // Map distances into an array index (2 cm steps from 5 cm to 15 cm)
  int dist = int(distance);
  int index = constrain(map(dist, MIN_DIST, MAX_DIST, 0, ARRAY_LENGTH - 1), 0, ARRAY_LENGTH - 1);
  return index;
}

void loop() {
  float dist = getDistance();

  // Map the delay potentiometer reading to a note duration in milliseconds
  noteDuration = map(analogRead(OFFSET_DIMMER), 0, 1023, 100, 500);

  // Check if the note duration has elapsed, then check for a new note
  if (millis() - noteStartTime >= noteDuration) {
    // Play a note continuously while the distance is within range
    if (dist >= MIN_DIST && dist <= MAX_DIST) {
      int note = FUNKY_NOTES[mapDistanceToIndex(dist)];

      Serial.print(dist);
      Serial.print(" cm with duration = ");
      Serial.println(noteDuration);

      // If the note has changed, start a new note
      if (note != lastNote) {
        lastNote = note;
        tone(SPEAKER_PIN, note);
        noteStartTime = millis(); // Record the start time of the current note
      }
    }
  }
}
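The distance-to-index mapping above can be sanity-checked outside the Arduino by re-implementing map() and constrain() in plain JavaScript. MIN_DIST = 5, MAX_DIST = 15, and ARRAY_LENGTH = 6 are assumed example values here, since the real constants are declared elsewhere in the sketch:

```javascript
// Plain-JS re-implementation of Arduino's integer map() and constrain(),
// used only to sanity-check the distance-to-note-index math.
// MIN_DIST, MAX_DIST, and ARRAY_LENGTH are assumed example values.
const MIN_DIST = 5, MAX_DIST = 15, ARRAY_LENGTH = 6;

function arduinoMap(x, inMin, inMax, outMin, outMax) {
  // Arduino's map() uses integer division, which truncates toward zero
  return Math.trunc(((x - inMin) * (outMax - outMin)) / (inMax - inMin)) + outMin;
}

function constrain(x, lo, hi) {
  return Math.min(Math.max(x, lo), hi);
}

function mapDistanceToIndex(distance) {
  const dist = Math.trunc(distance);
  return constrain(arduinoMap(dist, MIN_DIST, MAX_DIST, 0, ARRAY_LENGTH - 1), 0, ARRAY_LENGTH - 1);
}

console.log(mapDistanceToIndex(5));  // closest note: index 0
console.log(mapDistanceToIndex(15)); // farthest note: index 5
console.log(mapDistanceToIndex(3));  // below range clamps to index 0
```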


We succeeded in creating a new instrument rather than a replication of existing ones. We also had the chance to experiment with new sensors and gain experience working with transistors and potentiometers in a single project. We believe there’s more potential in this project, especially if we could play mp3 files. Nevertheless, we had fun working on it and experimenting with Arduino.

Week 9 Reflection

Making Interactive Art: Set the Stage, Then Shut Up and Listen

This reading showed me an interesting point of view, but I’m not sure I agree with it. Igoe says that you should not interpret your own interactive art, but rather let the audience figure it out for themselves. The premise is that you shouldn’t think of interactive art as a finished painting or sculpture. However, I find myself disagreeing. Firstly, even with a finished painting, no two people will interpret it the same way, given that they are free to think anything about it. Although I understand Igoe’s point about not confining the piece to your own direction, I don’t think it follows that you should never interpret your own work. If I want my audience to have a very specific experience, I will give them my interpretation, or the direction I took with the piece, beforehand to guide their thoughts. It depends entirely on what I want the piece to be. Of course, people can ignore these directions, but I believe there is still room for the audience’s personal interpretation of a guided art piece. Like a painter, maybe I want to express myself with the piece. If you want what Igoe wants, then his recommendations are very helpful. However, there is no right or wrong way to do interactive art.

Physical Computing’s Greatest Hits (and misses)

Exploring the themes in physical computing classes highlights a cool mix of established ideas and personal creativity. Even though some projects come up every year, it’s not about repeating but reimagining them with your own touch. The examples, from theremin-like instruments to remote hugs, show there’s a lot you can do.

The mention of different time periods and tech advancements adds context, showing how physical computing keeps evolving. I like the focus on projects that involve people rather than just machines. It’s a reminder that the connection between users and their creations is what makes physical computing special. Overall, this collection of different projects is like a guide to show the creative possibilities in interactive art.

Week 8 Reflection

Emotion & Design: Attractive things work better

Often good design is conflated with designs that incorporate the most mechanically efficient ways to complete certain tasks. For example, a well-designed kettle would be one that pours well, and heats up really fast. It could have some cool features, such as being able to maintain the temperature accurately. Function is the priority. But does that mean that form is unimportant?

Donald Norman addresses this question in Emotion & Design using the examples of three very different teapots: one that is inefficient, with its handle and spout facing the same direction, one that is effective in what it does, and one that is aesthetically pleasing. Yet, not one of them could be labelled as the best designed teapot. 

There are two general reasons for this. The first is that form and function are related; they add to one another. Time flies by when you’re having fun. Similarly, things feel better designed if they’re fun to use.

On the other hand, an object doesn’t even need to be that functional for it to be well designed. For example, take the Impossible teapot. It certainly doesn’t function too well as a teapot. You could brew tea in it, but the ergonomic experience would be terrible. Yet, it serves as a statement piece, or a conversation starter.

However, I think there is some distinction that needs to be made. I don’t think that as long as you enjoy using something, it is well designed. You could find a use for anything and enjoy it, but that would make the design of an object a subjective matter. I believe that there has to be some objectivity and some kind of measure of how well something is designed. In that sense, an object has to be intentionally designed in a certain way, targeted to fit some sort of function, and be good at it. If you allow looking good to be a function, then form and function meld together and we have a consistent system of determining what is well-designed, and what is not.

Her Code Got Humans on the Moon – And Invented Software Itself

Back in 1960, a period when women were discouraged from diving into technical realms, Hamilton began as a programmer at MIT, initially planning to support her husband through law school. But fate had other plans – the Apollo program emerged, and Hamilton found herself leading a groundbreaking engineering venture that would reshape what humanity deemed achievable.

As a mother working in the 1960s and a programmer for spacecraft, Hamilton’s narrative is anything but conventional; it is downright radical. Her story challenges not only the norms of the tech world but also societal expectations. Exploring her experiences made me rethink what I consider success to be. Is it reaching your goals, or the entire journey you take to the end?

Margaret Hamilton’s story challenges norms and suggests that success doesn’t always follow a conventional script. As a college student navigating a landscape of uncertainties, I see in her journey a call to embrace the unconventional, challenge stereotypes, and approach challenges with resilience. Her legacy urges us to view setbacks not as roadblocks but as avenues for innovation. Her unconventional path serves as a reminder that greatness often emerges from uncharted territories. As I navigate my academic and professional journey, I’ll carry Hamilton’s spirit—an emblem of resilience, innovation, and the transformative power of embracing the unexpected.

Week 8: Unusual Switch


Although very impractical in its current form, this switch was conceived to tell the user whether their mixing spoon is making contact with the liquid in the cup while they are stirring. The switch turns on when the spoon touches the liquid, completing the circuit.


The following components were used:

  • A spoon
  • A cup filled with a conductive salt solution (water and table salt)
  • 4x wires
  • 330 Ohm Resistor
  • LED light
  • Metal straw (to mask the wire)



This was a very fun project, particularly because this was a no-coding project. Therefore, I really had to stretch my imagination and come up with an interesting switch. I think I was inspired to make the mixing switch because I was making coffee for friends that day.

Midterm Project: Soundscapes//Landscapes


Initially, I had planned on making a project with gravity manipulation as a core mechanic. However, I did not particularly like the ideas that I came up with. One particular idea was to create a voxel-based game where players could create objects that would fall to the ground. Upon impact, the land would be destroyed based on the momentum of impact. However, this proved to be difficult. I might attempt this for a future project, but the idea I settled on took its roots in this idea of a voxel-based landscape, where a voxel is a 3-dimensional pixel.

My idea was to implement a landscape constructed with voxels that the player could play around with. For ease of computation and aesthetic preference, I decided to make the voxels change height rather than stacking voxels to create the terrain. Additionally, I gave players the ability to switch between 3D and 2D views.

Lastly, to complete the experience, I added sounds that vary based on the nature of the landscape. The user can control different attributes of the landscape, and experience what it sounds like.

Note: Please enter Fullscreen mode to view the controls.

P.S: Headphones are recommended!


I really enjoy pixel art, which is why I wanted my landscape to be pixel-based instead of being a continuous plane. Some of my previous projects have had the same style, so I wanted to stick to something that I knew design-wise.

To move between 3D and 2D, I implemented the landscape as a grid of boxes whose heights vary according to Perlin noise. The noise depends on the spatial coordinates of each box as well as its height in the previous frame. The noise also depends on time, so the landscape evolves right in front of your eyes.

I particularly like the way I transition from the 2D view to the 3D view. The plane rotates as it grows and morphs into a landscape, which is a sleek effect and adds to the experience in my opinion.

The above video is an early sample from the project, where the project was supposed to be a landscape editor with physical interactions.

Code Highlights

Although I’m proud of many parts of the code, a few pieces add functionality that ties the entire project together.

For example, the following code changes the soundscape by calculating the proportion of each terrain type on the current landscape and setting the volume of each corresponding sound based on how much of that terrain is present.

//set the soundstage according to the composition of the landscape
if (timeEvolution) {
  //set each sound's volume based on the number of tiles of each type
  //(the sound variable names here are illustrative stand-ins)
  waterSound.setVolume(map(landscape.waterCount, 0, landscape.totalGridPoints, 0, 0.2));
  sandSound.setVolume(map(landscape.sandCount, 0, landscape.totalGridPoints, 0, 0.2));
  landSound.setVolume(map(landscape.landCount, 0, landscape.totalGridPoints, 0, 0.2));
  mountainDirtSound.setVolume(map(landscape.mountainDirtCount, 0, landscape.totalGridPoints, 0, 0.6));
  mountainRockSound.setVolume(map(landscape.mountainRockCount, 0, landscape.totalGridPoints, 0, 0.1));
  snowSound.setVolume(map(landscape.snowCount, 0, landscape.totalGridPoints, 0, 0.2));
}

Despite being a very simple use of the map() function, the depth this code snippet creates for the overall piece is amazing, giving the whole experience a much-needed atmospheric vibe.

Project Design — The Good

The land is created by instantiating a Landscape object, a container that manages multiple Tile objects. Each Tile is represented as a box with a particular height. The height is determined using Perlin noise, which takes the spatial coordinates of each tile and the current point in time and spits out a number between 0 and 1. This number is then mapped between a minimum and maximum height scale, giving a landscape of tiles that vary in height.
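The noise-to-height step can be sketched outside p5.js. Here map() is re-implemented, the noise source is injected (p5’s noise() in the real sketch), and MIN_HEIGHT and MAX_HEIGHT are assumed example values:

```javascript
// Map a noise value in [0, 1] to a tile height, as described above.
// map() mirrors p5.js's linear mapping; the noise function is injected
// (p5's noise() in the actual sketch). Height bounds are assumed values.
const MIN_HEIGHT = 10, MAX_HEIGHT = 200;

function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

function tileHeight(noiseFn, x, y, t) {
  const n = noiseFn(x, y, t); // expected in [0, 1]
  return map(n, 0, 1, MIN_HEIGHT, MAX_HEIGHT);
}

console.log(tileHeight(() => 0, 0, 0, 0));   // lowest tile: 10
console.log(tileHeight(() => 1, 0, 0, 0));   // tallest tile: 200
console.log(tileHeight(() => 0.5, 0, 0, 0)); // midpoint: 105
```

Because the injected noise function receives time as an argument, re-evaluating the heights each frame is what makes the terrain evolve.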

Perlin noise was the perfect tool for generating the landscape. Landscapes are typically smooth, with areas close to each other having similar elevation. Conveniently, Perlin noise outputs similar values for similar inputs, giving us some areas of higher elevation and others of lower elevation.

The last part that makes this project come together is using the calculated noise value for different things. With some thresholds, you can specify which heights should be water, which should be grass, and so on. With this, one can color the landscape differently based on elevation, which produces a natural-looking landscape.
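The thresholding idea can be sketched as a small classifier from normalized elevation to terrain type. The cutoff values below are illustrative assumptions, not the project’s actual thresholds:

```javascript
// Classify a normalized elevation (0..1) into a terrain type using fixed
// thresholds, as described above. The cutoffs are illustrative assumptions.
function terrainType(elevation) {
  if (elevation < 0.3) return "water";
  if (elevation < 0.4) return "sand";
  if (elevation < 0.6) return "land";
  if (elevation < 0.75) return "mountainDirt";
  if (elevation < 0.9) return "mountainRock";
  return "snow";
}

console.log(terrainType(0.1));  // "water"
console.log(terrainType(0.5));  // "land"
console.log(terrainType(0.95)); // "snow"
```

Counting how many tiles fall into each category per frame is also what drives the soundscape volumes described earlier.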

Also, I feel like having sliders being the only controls was a good idea. Sliders are very intuitive to use, and one can figure out how they integrate with the project in no time. I wanted to have other controls in the beginning — like a brush that allows you to morph the canvas — but realized that those controls were just muddying the experience. Instead, I decided to focus on the aesthetic choices like the sound design, which I believe added way more than the brush tool would have.

Project Design — The Not So Good

Working on this project was frustrating. Funnily enough, creating the landscape and everything related to that took no time. However, what I did not realize at that time was how difficult it would be to integrate 2D portions, like the menu and buttons, back into the project.

Firstly, I decided to use a library called EasyCam. This allowed me to easily create a camera object that could be programmed with animations so that it shifted from one perspective to another with a smooth transition, with the click of a button. However, as I started working on the 2D elements, I realized what I had gotten myself into.

Turns out, you cannot really create 2D portions when using the WEBGL renderer. My first thought was to create multiple canvases and swap them when switching to the menu, but that does not work as the renderer is still WEBGL which is 3D.

Then I experimented with something known as instance mode. Basically, everything that runs in a p5.js sketch is part of a larger p5 object. You can create multiple p5 objects to use namespaces, which are just larger scopes that you specify when declaring the objects. I did try that, and everything ran, but I was still confused about how to create the menu using instance mode. Thinking back, I realize I should have stuck with that route, as it would have been much more elegant than the solution I came up with.
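The namespace idea behind instance mode can be illustrated in plain JavaScript, where a factory function gives each “sketch” its own private state instead of one shared global scope. This is only an analogy, not p5’s actual instance-mode API (which passes a p5 object into a sketch function):

```javascript
// A plain-JS analogy to p5's instance mode: a factory gives each "sketch"
// its own private state instead of sharing one global scope.
function makeSketch(name) {
  let frame = 0; // private to this instance
  return {
    name,
    draw() {
      frame += 1;
      return `${name}: frame ${frame}`;
    },
  };
}

const game = makeSketch("game");
const menu = makeSketch("menu");

game.draw();
game.draw();
console.log(game.draw()); // "game: frame 3"
console.log(menu.draw()); // "menu: frame 1" — the state is independent
```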

In the end, I used an idea I learnt from one of Daniel Shiffman’s videos: turning the menu into a texture and applying it to a plane. Here is where the use of EasyCam proved to be a bad choice. I basically had two states: a game state and a menu state, which would show either the instructions menu or the main game menu. Since the plane lives in a 3D world, if the camera is at a different position when entering the menu, the menu plane is viewed from a different angle. To top this off, the mouse coordinates are skewed compared to the world coordinates, because the world is seen through the lens of the camera. All of this made it hard to implement a menu with the right orientation and buttons that were clickable.

However, I was able to make a crude solution by resetting the camera every time the game state changes. This way, the camera returns to the neutral position when in the menu, and goes back to being movable in the game state.

Overall, I’m proud of the end result, but looking at the guts of the project I see that there are many aspects that can be improved. Still, it was a project that I spent many hours working on, and I’m glad I had fun along the way.


How to set up JavaScript namespace and classes properly? — Stack Overflow

createGraphics() p5.js Tutorial — The Coding Train

createGraphics() as WebGL Texture — The Coding Train

3D Terrain Generation with Perlin Noise in Processing — The Coding Train

How to Code Procedural Terrain with Perlin Noise (JavaScript & p5.js) — RachelfTech

Font Used




Midterm Progress


Initially, I had planned on making a project with gravity manipulation as a core mechanic. However, I did not particularly like the ideas that I came up with. One particular idea was to create a voxel-based game where players could create objects that would fall to the ground. Upon impact, the land would be destroyed based on the momentum of impact. However, this proved to be difficult. I might attempt this for a future project, but the idea I settled on took its roots in this idea of a voxel-based landscape, where a voxel is a 3-dimensional pixel.

My idea was to implement a landscape constructed with voxels, and the player could play around with. Additionally, I wanted to give players the ability to change the view from 3D to 2D and vice versa. What I have so far is the project below:

I really enjoy pixel art, which is why I wanted my landscape to be pixel-based instead of being a continuous plane. Some of my previous projects have had the same style, so I wanted to stick to something that I knew design-wise.

I particularly like the way I transition from the 2-D view to the 3-D view. The 2-D plane rotating as it grows and morphs into a 3-D landscape gives a sleek look to the experience.


Assignment 4: Crafting a Nighttime Scene with Text

Concept: Converting Names into Visual Forms

The main idea behind this project was to create imagery by using the names of objects to represent their form. I wanted to incorporate water, as I felt I could create a cool wave pattern by moving the letters. The final result was this sketch:


I began by sketching out different ideas on combining text and imagery:

Water as text spilling from a point and filling up the screen, revealing “WATER” written in the background

In the end, I ended up liking this idea the most:

Sun drawn with the word “sun”, with rays coming out that also say “sun”

To implement this, I looked up functions that would let me manipulate text as objects. I found the textToPoints function, which converts text into an array of points that make up the shape of the text. With this array of points, I could use the beginShape function to create a closed shape of the text. Then, by altering the positions of the points, I could manipulate the shape of the text.

However, I found that converting the entire string into a shape at once didn’t give me a clean result. So, I decided to split the text into individual characters and convert each of those into a separate shape.
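The per-character approach can be sketched as a layout step that gives each character its own shape and x offset. The fixed character width here is a stand-in, since the real sketch would use the font’s textToPoints() and per-character metrics:

```javascript
// Lay out each character of a string as its own shape, advancing x by the
// character's width. The fixed charWidth is an assumed stand-in for real
// font metrics (p5's textWidth() per character).
function layoutCharacters(text, startX, charWidth = 20) {
  const shapes = [];
  let x = startX;
  for (const ch of text) {
    shapes.push({ ch, x }); // one shape (point set) per character
    x += charWidth;
  }
  return shapes;
}

const water = layoutCharacters("WATER", 100);
console.log(water.map(s => `${s.ch}@${s.x}`).join(" "));
// W@100 A@120 T@140 E@160 R@180
```

Each entry then gets its own textToPoints() call in the real sketch, which is what produces the clean per-letter outlines.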

After adding some rippling effect and moving each letter like a wave, I ended up with the following result:

Since I went with an object-oriented approach, I could just replicate the water across a grid to generate a wave of water. I repeated the same steps to create a moon and added points that twinkled to add stars in the background.

Code Highlights

My favorite part of the code was creating the rippling and wave effect. It was so simple, but I believe it added amazing detail to the sketch.

//make the water ripple
let rippleFactor = map(this.textSize, MINFONTSIZE, MAXFONTSIZE, 0, 2);
y += noise(frameCount * 0.02 + j * 0.02) * rippleFactor;

//make the water wave
let waveHeight = map(this.textSize, MINFONTSIZE, MAXFONTSIZE, 1, 2);
y += waveHeight * sin(frameCount * 0.02 * (i + 1) + this.offset);

I added some noise and used the frameCount variable to make the letters move in a wave pattern. The crucial part was to apply these changes while drawing, since I did not want to modify the points directly; otherwise I would need to reset their positions each frame, or the movement would accumulate and push the letters out of place. So, this was a simple way of getting the effect.
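The idea of applying the offset at draw time, rather than mutating the stored points, can be shown with a tiny standalone sketch (plain JavaScript, with Math.sin in place of p5’s sin):

```javascript
// Apply the wave offset at draw time instead of mutating the stored points:
// the base y values stay fixed, so the motion never accumulates or drifts.
const basePoints = [{ x: 0, y: 50 }, { x: 10, y: 50 }, { x: 20, y: 50 }];

function wavePositions(points, frameCount, waveHeight = 2) {
  return points.map((p, i) => ({
    x: p.x,
    // the offset derives only from frameCount and index; p.y is untouched
    y: p.y + waveHeight * Math.sin(frameCount * 0.02 * (i + 1)),
  }));
}

// "Draw" two frames: the rendered positions differ, but the base data
// is exactly what it was before.
wavePositions(basePoints, 1);
wavePositions(basePoints, 100);
console.log(basePoints[0].y); // still 50 — the base points never moved
```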


For this project, I feel like I really pushed my creativity. Although my previous projects might have been more technical, I really enjoyed the planning and ideation of this project.


Reference | p5.js

Font used: Super Funky Font | dafont.com





Reading Reflection – Week 4

The world we live in is designed. We interact with many things in our day-to-day life, from doors to mugs, to the screen you’re reading this off of right now.

Great design often blends into our lives so well that we don’t realize how well things are designed. Take an ordinary pair of scissors, for example. When you pick one up, you know where your fingers have to go. When you open and close the scissors, you have an intuition about their purpose. How is this information communicated to us? Do we all have an innate understanding of what a pair of scissors is?

The answer lies in great design. In chapter 1 of “The Design of Everyday Things”, Don Norman highlights some concepts that together explain what makes a design great. These include affordances: the possible relations between two entities. Paper affords cutting for a pair of scissors, but a chunk of metal does not afford cutting for the same scissors. Another important concept is that of signifiers: indicators that signal where an action is supposed to take place.

Lastly, feedback is another aspect of design that, if implemented correctly, can help a person understand how to use something. Two machines I use daily illustrate good and bad feedback. One is an electronic toothbrush that beeps violently when I apply too much pressure. I didn’t need a manual to understand what it was telling me: whenever I brushed too hard, it beeped to let me know I was being a bit harsh. On the other hand, I have an espresso machine with three buttons: two for espresso and one for steaming milk. The milk-steaming button had been blinking orange for a while. I didn’t notice it until a friend who owns the same machine saw it and told me the machine needed descaling. How would I have known that? The toothbrush is an example of good design, where the feedback is conveyed without a manual. My espresso machine’s feedback design is much poorer.

After reading this chapter, I instantly thought of the doors installed on campus: automatic doors that should open when a sensor detects a person. They usually do, except when they stop working:

Which happens more often than it should. But that’s fine; machines break. After seeing these signs so often, I’ve made it a habit of using every door manually. You would expect that to work all the time, but something weird happens with the automatic doors on campus. When I push on a “functional” automatic door, it decides to resist. It will not move until I stop pushing, even if it was perfectly keen on opening before. What does this feedback even mean? Is the door saying that I should let it do its job?

Door with handles that make you want to use them manually

Then why are the handles and push bars there? The handles are affordances, since they allow me to pull on the door, and also signifiers, as they tell me where to pull. Yet when I listen to the design, I am misled; I am met with resistance that leaves me confused. Why call to me to be pulled, then push against me as if telling me I never should have pulled? It is deception. Bad design is deception.

Assignment 3 – Insects

Concept: Mimicking Organic Patterns

For this assignment, I wanted to implement insects and add behaviors to them that would give rise to organic patterns that are found in nature. For example, slime mold exhibits fascinating patterns when searching for food.

Slime Mold – Audrey Dussutour, CNRS

I wanted to recreate this effect. However, I was unable to achieve this. Still, I managed to create a class-based visual that is interesting in its own way.


After watching some videos on simulating ants and slime mold, I wanted to work on a similar project because I loved the results. I played around with the visuals I could achieve, but I wanted the visuals to emerge from the movement of the insects themselves.

I decided to create an insect class that would leave trails behind it as it moved. Initially, I wanted to create trails that would allow ants to seek out the best route to food, and back to their home. However, the logic that I implemented did not achieve this. Instead, I decided to make the trails look aesthetically pleasing and added random movement to give the insects some life-like feeling.

Code Highlights

My favorite part of the code was making the trails diffuse. Since I implemented the trails on a grid layer, all I had to do was create weaker trails in the surrounding grid cells. The code that does this is as follows:

diffuse() {
    //if the trail is too weak, don't diffuse
    if (this.life <= 9) return;

    //find this trail's cell on the grid
    let i = floor(this.x / gridSize);
    let j = floor(this.y / gridSize);

    //the diffused trails carry a quarter of this trail's life
    let intensity = this.life / 4;

    //add the weaker trails to the surrounding cells
    for (let x = i - 1; x <= i + 1; x++) {
      for (let y = j - 1; y <= j + 1; y++) {
        if (x >= 0 && x < trails.length && y >= 0 && y < trails[0].length) {
          //the third Trail argument (its starting life) is assumed from context
          let weakerTrail = new Trail(x * gridSize, y * gridSize, intensity);
          //add the trail to the cell
          trails[x][y] = weakerTrail;
        }
      }
    }
    this.life = intensity;
  }

Not only do the trails fade over time, but they also spread out if they are strong enough. This is what allowed the fading-away effect to emerge.
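The diffusion step can be demonstrated on a tiny grid of trail strengths, with plain numbers standing in for Trail objects:

```javascript
// Standalone demo of the diffusion idea above on a 3x3 grid of trail
// strengths (plain numbers stand in for Trail objects). A cell stronger
// than the threshold writes a quarter of its life into the 3x3
// neighborhood around it, itself included.
function diffuse(grid, i, j, threshold = 9) {
  const life = grid[i][j];
  if (life <= threshold) return; // too weak to diffuse
  const intensity = life / 4;
  for (let x = i - 1; x <= i + 1; x++) {
    for (let y = j - 1; y <= j + 1; y++) {
      if (x >= 0 && x < grid.length && y >= 0 && y < grid[0].length) {
        grid[x][y] = intensity;
      }
    }
  }
}

const grid = [
  [0, 0, 0],
  [0, 40, 0],
  [0, 0, 0],
];
diffuse(grid, 1, 1);
console.log(grid); // every cell is now 10: the trail has spread out
```

Repeating this step frame after frame is what makes a bright trail bloom outward and then die down, much like the cellular patterns in The Game of Life.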


While working on this project, I realized that I need to gauge the scale of a project better beforehand. This project was a massive undertaking, so I had to give up some of my goals due to time constraints. Going forward, I need to plan projects around the available time frame so that I can achieve what I set out to do.

However, I love the actual effect that was produced in the end. It reminds me of The Game of Life, and maybe my grid-based implementation for the trail markers is why such a similar effect emerged. Nonetheless, I’m happy with the way this project turned out.


Slime Mold – Audrey Dussutour

Coding Adventure: Ant and Slime Simulations – Sebastian Lague

Reading Reflection – Week 3

What exactly is interactivity?

Chris Crawford’s chapter “What is Interactivity?” was interesting because I had never thought about formally defining interactivity before. Since Crawford was attempting to do so, I approached the reading as a skeptic, trying to think of a counterargument to everything Crawford proposed.

Crawford insists that interaction is the name of the exchange between two actors as they listen, think, and reply to one another. I was about to ask what exactly an “actor” is to Crawford, but he answered soon after: for him, an actor is a “purposeful creature”. This is still a bit unclear to me. He then mentions that a fridge programmed to “react” to someone opening the door by turning on its light is not what he considers an actor; rather, he wants “interactivity that has some blood in its veins.” To me, this means he does not want to include entities whose decision-making is too simplistic in his notion of interactivity. Humans are one type of actor that Crawford would accept, and the replies they can generate lie on a spectrum when compared to a refrigerator that can only turn its light on or off in reply to an input. As such, Crawford’s definition of interactivity requires the actors to have a certain degree of complexity.

Thinking about this for a little while, I did not find myself disagreeing with the notion. He insists that interaction is different from reaction, and even though I’m taking the skeptic’s position, I have to agree with him on that. So far, I’ve accepted the idea that interaction needs two actors who listen, think, and reply to one another. However, I thought of moments when I’ve had small talk and conversations without purpose. By this definition, such conversations would not be interactive. It seemed counterintuitive that a conversation could be non-interactive, but as I thought about this, I read Crawford’s position on the subjectivity of interaction and his proposed solution to it. He insists that interaction varies on a spectrum, and that interactivity is just a measure of how much thinking, listening, and replying occurs during an exchange. As if he had read my mind, a trivial conversation was one of the examples Crawford used to explain the spectrum of interactivity.

Was the book interactive, then? It certainly felt like it. However, by Crawford’s proposed definition, it was definitely lower on the scale than an engaging conversation. Although it felt like I was conversing with someone who had just read my mind, the ideas in the chapter were stagnant, unchanging in response to my thoughts. For a moment they seemed to be a reply to my thoughts, but I know better than that. Still, this gave Crawford’s definition merit in my book.

After going through the entire chapter, I wonder what the epitome of interactivity would look like. One technology that comes to mind is artificial intelligence. I think the reason AI fascinates us is that it sits much higher on the interactivity scale. When interacting with AI technology, it feels as if more complex thinking is involved in the exchange between us humans and the technology. If interactivity is one measure of how good a program is, then is a perfect program one that uses AI algorithms to tailor its function and form to our needs? If such tailoring is done, then what is the need for specialized programs? When I imagine the peak of interactivity, I think of something without a predetermined form: a single program that is an amalgamation of every possible program but does not take form until it is interacted with.