I was building the 3D printer for the lab and noticed how the extruder moves across the X and Y axes. There are two motors, each moving the extruder along one axis. By driving both motors together, the extruder can be moved to any X-Y position. This is where I got my final project idea. I'm going to make a raised tray filled with sand, and place a metal ball bearing on the sand surface, like in this picture:
There’s going to be a magnet underneath the table, which attracts the ball, and by moving the magnet, I’ll be able to move the ball on the sand and draw patterns. The magnet is going to be moved across the X and Y axes using two motors, like in the 3D printer. There are a bunch of belts in the IM lab storage, just like the ones that are used in the 3D printer, and I’m hoping that I get to use those for my purposes. If not, I would probably have to order them myself.
This is where p5 comes in: the Arduino controls the motors, but the Arduino is controlled by user input from p5. Users could, for example, use the keyboard arrows to move a virtual ball on the canvas, and that motion would be replicated by the real ball. I haven't really pinned down what the user interaction will look like: it could also be users painting with a finger on a touch screen, but the issue there is that users might move their fingers faster than the motors can move the ball, so that wouldn't work. One simple way to tackle this would be to reduce the frame rate in p5, so that users can't draw that fast and the motors are able to keep up.
I would also want to program a “default” mode, so that when there are no users, the ball draws nice, symmetric shapes on its own. That way, the project could be a nice, standalone art installation, but also interactive.
The reference to Charles Eames really stuck with me. “Design depends largely on constraints,” and I think so too. What is good design, if not the absence of bad design? Bad design is easy to define: a jumble of elements that do not fit together, forced into a whole that bears a discordant existence. Good design, on the other hand, seems harder to articulate, but what cannot be told can be suggested, which seems to be Pullin's standpoint as he enters the discussion of designing for special needs. Discretion has long been appreciated as a design element when designing for disability; in the case of eyewear, on the other hand, there is an existing positive image, so invisibility is not necessarily a consideration. This is understandable: every product has a different context surrounding it, and its design should be in accordance with that context. But then Pullin also says, “fashion can be understated, and discretion does not require invisibility.” The discussion that follows illustrates how several parameters interact in defining the constraints of design, and how the design of something transcends the thing itself and blends into people's perception of other, related concepts, as in the case of eyewear and fashion.
1. make something that uses only one sensor on Arduino and makes the ellipse in p5 move on the horizontal axis, in the middle of the screen, and nothing on Arduino is controlled by p5.
I took the existing Arduino and p5 code that we saw in class, and simply added one line in the p5 draw() function.
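That line isn't reproduced in this post; here's a minimal sketch of what it does, assuming the class example stores the Arduino's potentiometer reading (0 to 1023) in rVal:

// map the potentiometer reading (0-1023) to the canvas width and draw
// the ellipse at that x position, vertically centered on the screen
ellipse(map(rVal, 0, 1023, 0, width), height / 2, 50, 50);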
This allowed me to control the horizontal position of the ellipse by turning the potentiometer.
2. make something that controls the LED brightness from p5

Again, starting with the same code, I just made some adjustments. Here's the relevant part of the p5 code:
if (!serialActive) {
  text("Press Space Bar to select Serial Port", 20, 30);
} else {
  text("Connected", 20, 30);
  // Print the current values
  text("rVal = " + str(rVal), 20, 50);
  text("alpha = " + str(alpha), 20, 70);
  // map the mouse position to 0-255 brightness values for the two LEDs
  mouseXMapped = map(mouseX, 0, width, 0, 255);
  right = int(mouseXMapped);
  mouseYMapped = map(mouseY, 0, height, 0, 255);
  left = int(mouseYMapped);
}
This maps the mouse's X and Y positions to 0-255 values, which are then sent to the Arduino.
In the Arduino code, I connected both LEDs to PWM-capable pins and simply added two lines that write the received values to those pins.
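The two lines aren't shown in the post, but they presumably amount to something like this (assuming the pin names leftLedPin and rightLedPin, and the left/right values, 0 to 255, sent from p5):

// write the 0-255 brightness values received from p5 to the PWM pins
analogWrite(leftLedPin, left);
analogWrite(rightLedPin, right);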
The brightness of the left LED increases as the cursor moves to the bottom of the p5 window, and the brightness of the right LED increases as the cursor moves to the right edge of the p5 window.
3. take the gravity wind example (https://editor.p5js.org/aaronsherwood/sketches/I7iQrNCul) and make it so every time the ball bounces one LED lights up and then turns off, and you can control the wind from one analog sensor

Video:
I made the potentiometer control the wind speed (mapped from -1 to 1), and the blue LED blinks whenever the ball comes in contact with the ground. Here's the complete p5 code. I left the Arduino code almost untouched, except for a small delay after the LED blinks once, just to make it less jittery.
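The two p5-side additions boil down to roughly this (a sketch using the variable names from the gravity-wind example and the left flag the Arduino reads for the LED; my actual code may differ slightly):

// map the potentiometer reading (0-1023) coming from the Arduino
// to a wind force between -1 and 1
wind.x = map(rVal, 0, 1023, -1, 1);

// when the ball hits the floor, dampen and reverse it, and flag the
// bounce so the Arduino blinks the LED on the next serial exchange
if (position.y > height - mass / 2) {
  velocity.y *= -0.9;
  position.y = height - mass / 2;
  left = 1; // LED on for this bounce
} else {
  left = 0; // LED off otherwise
}

And here's the Arduino side, with the small delay after the blink: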
if (Serial.read() == '\n') {
  digitalWrite(leftLedPin, left);  // blink the LED when p5 reports a bounce
  if (left == HIGH) {
    delay(100);  // brief pause so the blink is visible and less jittery
  }
  int sensor = analogRead(A0);  // potentiometer controlling the wind
  delay(5);
  int sensor2 = analogRead(A1);
  delay(5);
  Serial.print(sensor);
  Serial.print(',');
  Serial.println(sensor2);
}
Our idea for the musical instrument was drawn from the classic guitar mechanism. By integrating Light-Dependent Resistors (LDRs) as the sensory input and buzzers as the output, we’ve created a playful experience. Each LDR is strategically positioned to represent a distinct musical note, and when covered, it triggers a corresponding buzzer, simulating the act of plucking strings on a guitar. There’s also a digital switch, which, when pressed, records the notes being played. When the switch is released, the notes are played back.
Items used:
Arduino Uno
6 Light-Dependent Resistors (LDRs)
6 Buzzers (Speakers)
Resistors (for the LDRs)
Jumper Wires (for connecting components)
2 Breadboards
1 Momentary Switch
Technical Implementation:
In our musical instrument, each Light-Dependent Resistor (LDR) is assigned a specific musical note, creating a sequence reminiscent of a guitar's tuning. The choice of notes – E, A, D, G, B, E – corresponds to the standard tuning of the six strings on a guitar. When an LDR is covered, its resistance changes, and the Arduino interprets this change as a command to play the designated note through the corresponding buzzer.
The Arduino continuously reads the analog values from the LDRs and, upon detecting a significant change, maps the input to trigger the corresponding buzzer connected to a digital output pin. The code is designed to be modular, allowing for easy adjustments to resistor values or the addition of more sensors and buzzers for expanded musical possibilities.
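A minimal sketch of that read-and-trigger loop is below; the pin assignments, the threshold, and the exact note frequencies here are illustrative assumptions, not necessarily the values from our build:

// illustrative pin map and note table (assumed values)
const int ldrPins[6]    = {A0, A1, A2, A3, A4, A5};
const int buzzerPins[6] = {3, 5, 6, 9, 10, 11};
const int noteFreqs[6]  = {82, 110, 147, 196, 247, 330}; // E A D G B E, in Hz
const int COVER_THRESHOLD = 300; // a covered LDR reads lower than this

void setup() {}

void loop() {
  for (int i = 0; i < 6; i++) {
    if (analogRead(ldrPins[i]) < COVER_THRESHOLD) {
      // covered: play this string's note
      // (note that the built-in tone() drives only one pin at a time)
      tone(buzzerPins[i], noteFreqs[i]);
    } else {
      noTone(buzzerPins[i]); // uncovered: silence this buzzer
    }
  }
}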
When the digital switch is pressed, the notes being played are recorded into an integer array. As soon as the switch is released, the notes in the array are played back using the regular tone() function.
// check the recording state from the momentary switch
if (digitalRead(SWITCH_PIN) == HIGH) {
  // switch held down: record the notes being played
  Serial.println("Recording");
  recordMode = true;
} else if (digitalRead(SWITCH_PIN) == LOW) {
  // switch released: if anything was recorded, play it back
  if (noteCount > 0) {
    Serial.println("Playback");
    recordMode = false;
    for (int i = 0; i < noteCount; i++) {
      tone(playback_PIN, melody[i]);
      delay(200);
    }
    noTone(playback_PIN);
    noteCount = 0; // reset for the next recording
  }
}
Future Improvements:
The array can only hold a maximum of 50 notes, and a future improvement could be adding a warning (a flashing LED?) to indicate that the array is full. There's also no error handling at this stage, so recording more than 50 notes could lead to unexpected behavior.
First, I should make a point of saying that I have always believed writing to be conversational. When you write something, you open a conversation that remains unfinished business until you've heard back from your readers. I understand that this is not how a lot of people see the position of a writer, but it's my view, and I've lived by it (that's why I've emailed all kinds of authors, from John Green to Noam Chomsky). So, when I opened the second link under this week's readings, I was pleasantly surprised to see an author continuing the conversation with his readers, in a humorous but not condescending tone, which I appreciated very much.
Either way, maybe the author won me over with his jokes, but I feel inclined to agree more with him than with his critics. I have long harbored a slight distaste for VR/AR technologies, but I could never put a finger on where it was coming from. The author's “rant” offered the words I was looking for: this obsession with the Pictures Under Glass genre of technology feels like a disservice to human glory. These technologies simply do not engage the full potential for interaction that humans possess, and by being such limited creations, they fall short of being tools that “fit the person”. It's like giving a lumberjack a butter knife to chop down a tree. The lumberjack will probably just do his job with the knife because that's all he has, but the creators in his society owe it to him to start thinking in the general direction of a chainsaw instead of how to make the butter knife more aerodynamic, simply because the lumberjack is capable of so much more.
I can't trace how we ended up here, though. There are paintings from the year 1900 predicting what life in the 21st century would look like. Just a few generations ago, people were expecting so much more than thin LCDs or holographic interaction panels. But somehow, someone along the way popularized these so-called Pictures Under Glass technologies, and now we're in this strange limbo.
There's this short film from 1967, called 1999 A.D., which shows the life of a family in the future. It predicted many of the things that we have today: cell phones, email, and online shopping. However, these technologies are depicted in very different forms than how we know them today. For example, to reply to an electronic mail, the father has to handwrite his response on a machine with a glossy surface to write on. When the mother wants to go shopping, she turns some knobs on her “fingertip shopping” machine and is tuned into a real retailer, where a camera automatically scans across items for sale. These predictions are fun to look at now, with the knowledge that specialized, bulky machines for emailing or remote shopping aren't the most convenient or realistic way to go. Still, it goes to show that at some point in history, humans did have the vision to create more dynamic mediums of interaction “that we can see, feel, and manipulate,” but it seems we have since gone astray.
Making Interactive Art: Set the Stage, Then Shut Up and Listen reads like meta-commentary for this class. For all the work we have done in the past weeks, we've produced supporting documentation explaining our inspiration, thought process, methodology, and everything in between. The author speaks of such artists with disdain: “They pre-script what will happen. When you do that, you're telling the participant what to think, and by extension, how to act. Is that what you wanted?” That is actually not what I want, and for my final project, I would like to work on this author's terms. I don't have an idea for my final project yet, but I think I want to create something expansive; something that houses at least the potential for serendipity. The projects I am making right now are rather limited in functionality, so essentially I have already defined the scope for interactivity before the first interaction with my projects can even happen. But my goal for my final project is to design, for each individual user, a unique experience that exists within a larger permutative space.
The other reading offers some good ideas for thinking in this direction. Even though most of the works listed are popular project ideas in the interactive art community, I liked how the author addressed this right at the get-go: “So if you’re new to physical computing and thinking to yourself “I don’t want do to that, it’s already done,” stop thinking that way! There’s a lot you can add to these themes through your variation on them.” Usually, when I'm looking for inspiration on the blog for my weekly projects, I look at other people's work and immediately close off ideas pertaining to the use of similar tools or methods. But looking at projects that use “body-as-cursor” or “hand-as-cursor”, it seems I don't have to be that restrictive in my thinking. Everyone used JavaScript to create all these cool projects in the first half of the semester, but every project came out unique and with the emblem of each person's individuality. So, if I see someone using an LDR for their project, I don't think I should turn away from using LDRs in my project altogether. I can probably make something cool with the same tools, too.
My project this week is fairly simple, and my goal was to just get more comfortable using the components with the Arduino (in preparation for following weeks!). I used an RGB LED controlled by a potentiometer, and a regular LED controlled by a momentary switch to do a little light show.
The RGB LED shines red, green, and blue continuously, but how fast it cycles through the colors is controlled by the voltage at the potentiometer.
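The mapping works roughly like this (a sketch; the pin numbers and the 100 to 1000 ms range are assumptions rather than my exact values):

const int potPin = A0;
const int redPin = 9, greenPin = 10, bluePin = 11; // PWM pins for the RGB LED

void setup() {}

void loop() {
  // higher potentiometer reading -> shorter delay -> faster color cycling
  int waitTime = map(analogRead(potPin), 0, 1023, 1000, 100);

  analogWrite(redPin, 255);   delay(waitTime); analogWrite(redPin, 0);
  analogWrite(greenPin, 255); delay(waitTime); analogWrite(greenPin, 0);
  analogWrite(bluePin, 255);  delay(waitTime); analogWrite(bluePin, 0);
}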
The other LED is always on. When the switch is pressed, it just turns off.
Here’s a little video demo:
One of the challenges was working with the delay() function for the RGB LED while asking the code to do other things, such as reading the potentiometer value. I was going to use the elapsed-time trick so that delay() wouldn't block the rest of the program, or even use a library I found for creating a non-blocking timer. But I figured that wasn't the point of this assignment, so I stuck with delay(). This led to some repetition in my code, and I would definitely not do it this way if I wanted cleaner, more robust code.
I made a simple “switch” which completes the circuit when there's a conductive fluid between the ground and live wires. It's a simple circuit, and there's not much to it really. Pure water is a poor conductor, but the tap water on campus has dissolved minerals, so it works for this purpose. I just used the LEDs and jumper wires in the Arduino kit, and borrowed some solid-core wires from the IM Lab to put in the liquid container.
I had the most fun making the demo film with my friend, though. Here’s the demo:
When my friend pees into the container with the two exposed wires, which were initially not touching, the liquid completes the circuit and the LED lights up.
The piece by Don Norman aptly captures my response to reading The Design of Everyday Things, which I had brought up in class discussions as well, and this story of the three teapots in Emotion & Design is a perfect follow-up to his story about the artistic but inconvenient doors in his previous work. “Pleasurable designs are not necessarily usable. But need these attributes be in conflict? Why not beauty and brains, pleasure and usability?” I would consider myself a proponent of beautiful design, but I am also a regular user of everyday things, and I care about their usability. What Don Norman says, and I agree, is that “design matters, but which design is preferable depends on the occasion, the context, and above all, my mood.” There need not be a race between aesthetics and utility. In fact, I like nice design so much that I occasionally go on the Museum of Modern Art online store website just to look at pretty things. This $315 lamp or this $140 side table are probably not setting utility standards for any other lamps or side tables, but they're beautiful, and I am glad someone designed them.
Norman's discussion of the affective system was also agreeable, and quite therapeutic to be honest. I can appreciate how it can be taken as a lens for looking at life in general. If the affective system works independently of thought, as he claims, then I can attribute unstructured or neurotic thoughts to the neurochemicals that “bathe” the relevant brain centers, affecting the way I “perceive, decide, and react.” Norman also offers a solution: affective reactions can be overcome with time and practice. This, he says, requires patience and taking agency over your thought process. This, too, is not a race, since anything that requires patience by definition cannot be done in haste. To learn to have healthy thoughts is to learn to be patient.
For my midterm project, I created a spooky juggling game called Morbid Juggler. I thought of the concept in week 2, when, for the project involving arrays and classes, I made a simple game that lets users add balls using the keyboard and juggle them using the cursor. Morbid Juggler version 2 is built on top of the same motion logic, but instead of using standard input (mouse and keyboard), players use their webcam and their hands to interact with the game. To add a new ball, the user makes the “🤘” gesture with their left hand. The balls automatically move in a parabolic trajectory, and the goal of the game is to not drop any, i.e., to catch them before they leave the screen. To catch a ball, players can pinch the fingers of either hand and drag the ball across the screen. To throw the ball again, they just release the pinch.
How it works
The following explains the balls’ motion and is from my documentation for my week 2 project:
I save the initial time when an (eye)ball is created, and measure the time that has passed since. This allows me to use values of elapsedTime as a set of x values, which, when plugged into a quadratic equation, give a parabola. Changing the coefficients in the equation allows me to modify the shape of the parabola and thus the trajectory of the (eye)ball, such as how wide the arc created by its motion is. I played around with the coefficients and decided to use (0.4x)(2-x), which works nicely for a canvas of this size.
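In code, that update amounts to roughly the following (a sketch with illustrative names like birthTime and arcScale; the actual Ball class in my sketch differs in detail):

// rough sketch of the per-frame update (illustrative names, not my actual class)
update() {
  let t = (millis() - this.birthTime) / 1000; // seconds since this ball was added
  this.x = this.startX + this.speedX * t;     // steady horizontal drift
  // y follows the parabola (0.4t)(2 - t), scaled and flipped so the
  // ball rises and then falls back down the canvas
  this.y = this.startY - this.arcScale * (0.4 * t) * (2 - t);
}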
A more detailed explanation of how the balls are stored in memory and moved with each frame update can be found here.
For tracking hands using the webcam, I used a JavaScript library called Handsfree.js. I was initially going to use PoseNet, but I realized it wasn't the best fit for this use case. PoseNet's landmark model tracks 17 points on the human body, but for my game I didn't need all that: I just needed to track the user's fingers, and PoseNet only returns one keypoint for the wrist. So I looked up other libraries and found Handsfree.js, which is built on Google's MediaPipe and is much more geared towards hand tracking. It tracks all fingers and knuckles and even has built-in gesture recognition to detect pinches, which is what my users would need to do to drag balls around the screen. Furthermore, it was very easy to train the model to recognize new gestures using the library's website, which lets you collect your own data and create a gesture model to plug into your code. For example, this is the code for recognizing the “🤘” gesture and adding a new ball:
function addBall() {
  const hands = handsfree.data?.hands;
  if (hands?.gesture) {
    if (hands.gesture[0]?.name == "addBall") {
      // landmark 9 is the base of the middle finger; x is mirrored across the canvas
      let x = sketch.width - hands.landmarks[0][9].x * sketch.width;
      let y = hands.landmarks[0][9].y * sketch.height;
      console.log(x, y);
      balls.push(new Ball(x, y));
      canAddBall = false;
      // go to sleep for a second so one gesture doesn't add a burst of balls
      setTimeout(() => {
        canAddBall = true;
      }, 1000);
    }
  }
}
The hardest part of this project was working with Handsfree.js. There is criminally limited documentation available on the website, and I had to teach myself how to use it by looking at the few demo projects the author of the library had created. For hand tracking, there was a piece of code that closely approximated what I wanted to do. It loops through the hands in the object returned by handsfree.data, and for each hand, it loops through all the fingers. Each finger can be identified by its landmark, and its location and other information can be used elsewhere in the program. For example, in handsfree.data.hands.landmarks[handIndex][fingertips[finger]], handIndex = 0 and finger = 8 represent the tip of the left index finger. In Morbid Juggler, when handsfree.data.hands.pinchState for any hand or finger becomes “held” near a ball, the ball sticks to the tip of the pointer finger. When it becomes “released”, the ball restarts its parabolic motion.
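Put together, the drag logic looks roughly like this (a simplified sketch: the ball methods and the 50-pixel grab radius are placeholders, and the real game also smooths the position with lerp()):

function handleDrag() {
  const hands = handsfree.data?.hands;
  if (!hands?.pinchState || !hands?.landmarks) return;

  for (let h = 0; h < hands.pinchState.length; h++) {
    const pinch = hands.pinchState[h][0]; // index-finger pinch for hand h
    if (!hands.landmarks[h]) continue;

    // landmark 8 is the index fingertip; mirror x like in addBall()
    const tipX = sketch.width - hands.landmarks[h][8].x * sketch.width;
    const tipY = hands.landmarks[h][8].y * sketch.height;

    for (let ball of balls) {
      if (pinch === "held" && sketch.dist(tipX, tipY, ball.x, ball.y) < 50) {
        ball.stickTo(tipX, tipY);   // follow the fingertip (placeholder method)
      } else if (pinch === "released" && ball.isHeld) {
        ball.resumeTrajectory();    // restart the parabolic motion (placeholder)
      }
    }
  }
}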
To see the full code for the project, refer to the p5 web editor.
Playing the game
The game can be played here. When the project has loaded, click on ‘start game’. It takes a second for Handsfree to load. You can check if the game is fully loaded by waving your hands in front of your webcam. You should see the skeleton for your hand mapped on the canvas.
Make a “🤘” sign with your left hand (tuck in your thumb, and make your index and pinky straight!). You’ll see a new ball pop from between your fingers. Add a few more balls like this, but be quick! Don’t let them leave the screen, or the counter in the corner of the screen will tell you how badly you’re losing even after death. To keep the balls from falling off, pinch your fingers like you would pick up a real juggling ball, drag the ball you’re holding across the screen, and release your fingers somewhere near the bottom left corner of the screen, so the ball can travel again. You can add as many balls as you like — at your own risk.
Final thoughts
I think there's still room to fine-tune the hand tracking. If I were to upgrade the game, I would probably use a different library than Handsfree.js. The tracking information isn't the most accurate, nor is it consistent. For example, even when a hand is held up still, the keypoints on screen visibly jitter. Since the balls stick to the tip of the pointer finger during dragging, the balls were jittering badly too; I later added some smoothing using the lerp() function to fix that. I also had to do a lot of trial and error when picking a gesture to trigger the function that adds a new ball. The model wasn't very confident about a lot of the other gestures and kept adding balls erroneously. The “🤘” sign was the final choice because it was explicit enough and did not resemble other gestures a user might inadvertently make while playing the game.
One thing worth special mention is the background music used in the game, which was produced by my friend by sampling another friend's vocals. I heard the track, liked its spooky yet playful feel, and asked if I could use it in my project. My friend agreed, and now I have a bespoke, perfectly fitting track to accompany my project.