Reading Reflection Week 11: The Touchy Touchscreen

Bret Victor makes a really good point about the so-called “Pictures Under Glass” concept of technology. As a bit of a fun fact, in my elementary school our books were so in need of an update that the computer science textbooks mentioned touchscreens as the latest technology. It’s quite laughable thinking about it. But I do have to agree: in an era where we’re living a touchscreen-oriented life, the magic does get lost along the way.

But the thing is, I think right now we have at least one example of trying to make technology adapt to fit our human body: the reMarkable paper tablet. Initially, when I was looking to buy some new technology, I had one goal in mind: that it have a touchscreen for me to draw and write on. I was primarily looking at laptops with touchscreens when I was introduced to the reMarkable. At first I thought it was nothing special, given what it lacked compared to a conventional laptop. But after testing it out, it was astounding. It felt so natural to write on, like I was writing in a notebook. I didn’t get that artificial feel that iPads give me, and that is genuinely what sold me on buying the reMarkable in the end.

At some point, we do have to ask ourselves: at what point is our life simulated by a screen, or is our reality literally a headset we can’t take off? (Obviously I’m joking, but it is getting quite dystopian, no?)

Assignment 11: Dial up the Tune

“It’s a pity to shoot the pianist when the piano is out of tune.” – Rene Coty

Concept:

So, when it comes to instruments, I thought, right, let’s make a piano. Now, due to the limitations of the Arduino breadboard, I couldn’t make it so that all of the frequencies are in tune, whether lower or higher. So I needed to think of a way to represent all of the frequencies in the project. That’s where our lovely potentiometer comes into play.

Check the Tinkercad out here!

Sketch:

Digital Circuit:

How it’s made:

Now for the design, I went with having the buttons in a line, with each of them connected to a digital input of the Arduino Uno. After adding a resistor to each, alongside a connection to power, they were ready to go. Then we have a piezo that is connected to the side and acts as our output for sound, or more specifically, frequencies.

But the special thing is our so-called dial, represented by the potentiometer. It is connected as an analog input: as the user rotates it, the frequency increases or decreases. I will be honest, I initially had an issue with the piezo constantly playing sound, but I figured it out after seeing that the resistors weren’t connected properly (always good to check this in case of any errors).

Highlighted bit of Code I’m proud of:

Now after that, it was time to put it all together. The code itself is quite simple, as we have read functions to take in our defined inputs. What was interesting was getting each button to have a different pitch. It was done by adding the potentiometer’s value to a constant offset that increases with each button. This lets the user raise or lower the pitch, and since each button always adds its own constant on top, it gives the sense of a piano.

//potent variable is dependent on where the user slides the potentiometer
if (s1 == 1){
tone(buzzer, potent + 100, 100);
} 
else if (s2 == 1){
tone(buzzer, potent + 200, 100);
}
else if (s3 == 1){
tone(buzzer, potent + 300, 100);
} 
else if (s4 == 1){
tone(buzzer, potent + 400, 100);
} 
else if (s5 == 1){
tone(buzzer, potent + 500, 100);
} 
else if (s6 == 1){
tone(buzzer, potent + 600, 100);
} 
else if (s7 == 1){
tone(buzzer, potent + 700, 100); 
}

Reflection

I’m happy with this design, and even using the piezo for the first time was fun. I will be honest, my ears didn’t like it as much, but we got through it haha. I think an extension would be to add some sort of keyboard input or even make original songs with it. I saw some interesting ideas where, by playing specific frequencies in a loop, you could make a song, which I’d love to experiment with for my final project.

Full Code:

//Declaring every button with its assigned digital pin
int tone1 = 12;
int tone2 = 11;
int tone3 = 10;
int tone4 = 9;
int tone5 = 8;
int tone6 = 7;
int tone7 = 6;

//And here declaring the Piezo at Digital 13
int buzzer = 13;

void setup ()
{
  
//Starting up the Serial Connection on Arduino
  Serial.begin(9600);
  
//Making all of the buttons take in our Input 
  pinMode(tone1,INPUT);
  pinMode(tone2,INPUT);
  pinMode(tone3,INPUT);
  pinMode(tone4,INPUT);
  pinMode(tone5,INPUT);
  pinMode(tone6,INPUT);
  pinMode(tone7,INPUT);
  
//Making the buzzer as our output  
  pinMode(buzzer,OUTPUT);
}

void loop () 
{
  
//Defining a variable which is the value of the potentiometer
  int potent = analogRead(0);
  
//Outputting the value in the console
  Serial.println(potent); 
  
//Reading a value for each pin
  int s1 = digitalRead(tone1);
  int s2 = digitalRead(tone2);
  int s3 = digitalRead(tone3);
  int s4 = digitalRead(tone4);
  int s5 = digitalRead(tone5);
  int s6 = digitalRead(tone6);
  int s7 = digitalRead(tone7);

//If else conditions where, when the button is on, it will output a sound
//potent variable is dependent on where the user slides the potentiometer
  if (s1 == 1){
    tone(buzzer, potent + 100, 100);
  }
  else if (s2 == 1){
    tone(buzzer, potent + 200, 100);
  }
  else if (s3 == 1){
    tone(buzzer, potent + 300, 100);
  }
  else if (s4 == 1){
    tone(buzzer, potent + 400, 100);
  }
  else if (s5 == 1){
    tone(buzzer, potent + 500, 100);
  }
  else if (s6 == 1){
    tone(buzzer, potent + 600, 100);
  }
  else if (s7 == 1){
    tone(buzzer, potent + 700, 100);
  }

//Just to give a short delay between presses
  delay(10);
  
}

Assignment 10: The Switch, The Dial and The Guesser

“Turning on the light is easy if you know where the switch is” – Colin Wilson

Concept:

Finally, we have reached the Arduino era of the class. To start out, we were tasked with making an analog switch and a digital switch to turn on two LEDs in a digital and analog fashion, and to add a bit of spice to it as well. Being honest, it took me some time to get my footing with Arduino, as this is my first time using and tinkering with it. But I’ve managed to make something at least a bit fun. I’ve been fond of escape rooms and locks, and I thought: what if I make it so that depending on where you twist the lock, the LED will shine?

Sketch:

Digital Circuit:

How it’s made:

In terms of design, the blue LED is a simple circuit, featuring a resistor set at 220 ohms and wires connecting it to ground and power. The key addition is a simple switch that the user can interact with to turn the LED on or off.

However, the red LED instead has a potentiometer. I chose it because its use as a dial is key to finding the random value the LED is tied to. Basically, I have a random function that generates a random value between the lowest and highest values of the potentiometer. Then we read the current value of the potentiometer using the function analogRead(). And finally, we use a simple if-else statement to check whether the values match, and if so, the LED will shine. I’ve added a buffer just so it’s not too difficult to guess.

Highlighted bit of Code I’m proud of:

Outside of the regular struggles with making the digital design, I struggled to figure out why the random value wasn’t truly random. It was confusing, as I assumed it was somehow reading the value of the potentiometer and using that variable as a constant. But that wasn’t the case, so I was a bit dumbfounded, as the result variable in the code below isn’t tied to anything.

I did a bit of googling and found out that the random function will indeed give me a random number, but it will repeat every time, practically serving as a constant, which wasn’t helpful every time I restarted the simulation. Apparently, you are supposed to seed the generator, and the seed should be read from an unconnected pin, as that pin picks up different electrical static, which provides the randomness. Really interesting honestly, but a pain to figure that quirk out.

//Seeding from an unconnected pin (A1) so the target differs each run
randomSeed(analogRead(A1));
result = random(0, 1024);

Reflection

Overall, I’m pleased with my tinkering for now. I feel like just making the digital design on Tinkercad was an experience in itself, as was trying to find some sort of creative spin for the LEDs. I think I could potentially add more to it, but I’m fine with this being a simple game, and hopefully as the weeks go on, we can try out different things.

Full Code:

//Pin assignments and the random target value to guess
int inputPin = A0;
int outputPin = 2;
int result = 0;

void setup(){

  Serial.begin(9600);

  //Seeding from an unconnected pin (A1) so the target differs each run
  randomSeed(analogRead(A1));
  result = random(0, 1024);

  pinMode(outputPin, OUTPUT);

}

void loop(){

  //Reading the potentiometer and printing both values for debugging
  int potentialVal = analogRead(inputPin);
  Serial.println(potentialVal);
  Serial.println(result);

  //The +/- 20 buffer makes the target easier to dial in
  if (potentialVal >= result - 20 && potentialVal <= result + 20){
    digitalWrite(outputPin, HIGH);
  }
  else{
    digitalWrite(outputPin, LOW);
  }

}


Reading Reflection Week 10: Reboot it and Mute it

Tom Igoe really presents some interesting messages surrounding our own projects. Firstly, a core message of not being discouraged if an idea has “been done before”. I liked a lot of the example projects he showed off, and how limitations are a necessary part of our projects (and also the feedback that Mang presents for us 😊). I can see this in light of my own work in content creation for YouTube.

Much of the time I struggle with determining an idea for a video, especially in the niche community I’m in, where most ideas have already been “made before”. But sometimes I like taking a different approach and looking at an idea through a different prism. It helps me build on the ideas the original creator has made, of course providing credit, whilst injecting my own spin and creating a different product in the end. Our imagination isn’t a limitation; the critical thinking we don’t do is.

And his next message is about not interpreting our own work. Honestly, after visiting multiple art galleries and even being an usher at the Abu Dhabi Art Exhibition 2025 for an Interactive Media artist, I can definitely agree with this.

For some context, I was a volunteer for the Abu Dhabi Art Exhibition 2025, held at the Manarat Al Saadiyat Gallery, with my task being to usher people to an exhibition hosted by reImagined Art. Now, even though the person there only gave me the responsibility to usher people, I also decided to talk to the guests about the artworks and give them a bit of background.

The artworks mainly revolved around the Ethereum blockchain and represented it in different forms (I’ll upload some pictures below 😉). As guests were coming in, they were astounded by some of the artworks. I didn’t want to intrude on their experience of the artworks and wanted to give them time to let it all sink in.

Later, some would come up to me and ask me to explain what the artwork meant. I gave them a decent rundown, but stayed a bit ambiguous, as most aren’t tech-savvy enough to know about blockchain (I mean, they hear crypto or bitcoin and everyone goes “ahh”). Still, I can say that they definitely felt astounded at how something like a ledger of records can be art.

And I think with this, I gave the guests the agency to interpret the work, rather than being the post-it note describing it in detail. So we should definitely be a bit ambiguous in what we say about our work and let others experience it. That in itself is the crux of imagination: seeing our world from a different prism of thought.

Reading Reflection Week 8: Aesthetics, Auto-Saving and Appeal

Norman makes a really interesting argument, one that I see throughout daily life: a positive mood is really a partnership between the aesthetics and the usability of a product. Whenever I upgrade my own technology, I see a lot of benefits from the upgrade, and with that, I can see how it influences my thinking. As an example, with my previous Xiaomi phone, I found a lot of workarounds for, say, installing apps that aren’t available in my region, and even some hidden features that made my use of the device much more intuitive.

However, I’d like to extend his argument by saying that brand loyalty also plays a factor in our thinking. I upgraded to a Google Pixel phone, and I think I mostly got sold on it because of the abundance of features and the whole Google brand. I’m enjoying the phone, don’t get me wrong, but when I encounter a minor issue, it doesn’t really appall me; I’m patient, particularly because I strongly side with Google and its brand value. So there’s definitely an argument to be made that if you are loyal to and trust a brand, then your thinking is shaped by it.

With regards to Hamilton, I think it’s really interesting that there was a time when people thought that humans didn’t need support or help, because NASA just assumed that astronauts were equipped with a higher degree of knowledge. Nowadays, we constantly make mistakes (I mean, even writing this I almost forgot to save my work 😭), and I couldn’t imagine living with technology without error checks. I think some people do get offended, or at least frustrated, with error check screens, as they’re so constant and everywhere. But man, imagine if we didn’t have them. What kind of society would we live in?

Midterm Project: A Handful of Color

Project:

Concept:

The main concept behind this project was my 3rd production assignment, “Colorful Concoction“. Ever since that assignment, I’ve wanted to build on it and create something generative and interactive for the user in the same sense. So when I found out that we had access to tools such as hand tracking and face tracking, I knew I wanted to go deeper into creating something that would truly give a fun feeling to the user.

I decided to create a simple project where the user could generate balls of different color and with the user’s hand, they are able to decide the size of the given ball. So if the user chooses to make a screen filled with tiny balls, they are able to. Or if they want larger ones they can as well. It gives that sense of variety for the user. And then they are able to interact with the balls using their own hands by flicking them around and pushing them. This was the main highlight of what I wanted to do to make it truly fun for the user.

So how does it work and what am I proud of:

The program starts with an instructions screen I whipped up in Canva, waiting for the user to click to start the actual program. Borrowing from a game I made for a different course, this was done using different states for when the program should be starting and when it should be played (which is why they’re named game states).

function draw() {
  //Different state for Start, to show the instructions screen
  if (gameState === "START") {
    //Displays the instruction screen
    image(instructionsImg, 0, 0, width, height);
  }
  //Starts the actual program
  else if (gameState === "PLAY") {

    background(0);

Now, in order for the user to actually create the ball, they must first allow camera access and then present their hand to the camera so it can track the positions of their thumb and index finger. The tracking was done with ml5.js’ hand tracking library, which was of course the main part of the whole project. In terms of the size of the ball, that was simply done by measuring the distance between the tips of the thumb and index finger, and from there we can generate a simple circle. Also, to note, all of the balls were added to an array so the collection can be dynamically updated when needed.

let finger = hand.index_finger_tip;
let thumb = hand.thumb_tip;

//Finds the midpoint between the thumb and index finger
currentX = (finger.x + thumb.x) / 2;
currentY = (finger.y + thumb.y) / 2;

//Based off the distance, it determines the size of the ball
currentPinch = dist(finger.x, finger.y, thumb.x, thumb.y);

//This makes a new random color for the cursor on screen, or in this case the ball
cursorColor = color(random(255), random(255), random(255), 200);

//Creating the random colored ball
fill(cursorColor);
stroke(255);
strokeWeight(2);
circle(currentX, currentY, currentPinch);

Problems and how I solved them:

The main part of the program was also the main problem that I faced. Getting the hands to be displayed on screen while also giving the balls a notion that they should think of the hands as a wall was a bit tricky to write. Not to mention that doing hand tracking with a camera isn’t as smooth as it could be so it took a good amount of time before I got it working.

Firstly, using the ml5.js reference, all of the joints of the hand are marked with numbers. Using this, I made an array where all of the joint connections can be loaded, and I could then dynamically draw lines between them, plus dots to signify the joints of the hand (see the handPose keypoints diagram in the ml5.js reference).

This was done using a for loop that constantly redraws the lines and joints depending on where the hand is, so that it actively updates where the hand is on screen instead of just having a static image. I used Google Gemini to help me out with this part, as I struggled with how I would connect the hands using lines and create the skeleton.

//This draws the lines for the skeleton of the hand
for (let i = 0; i < connections.length; i++) {
  let pointBase = hand.keypoints[connections[i][0]];
  let pointEnd = hand.keypoints[connections[i][1]];
  line(pointBase.x, pointBase.y, pointEnd.x, pointEnd.y);
}

//This draws the joints as white dots
fill(255);
noStroke();
for (let i = 0; i < hand.keypoints.length; i++) {
  circle(hand.keypoints[i].x, hand.keypoints[i].y, 10);
}

And after that, it was as simple as adding vectors to the balls and having them repel off the canvas’ walls and off the hand as well. Finally, after a good while, I managed to get it to work. Now, your mileage may vary, as some cameras capture it better than others (I myself was stuck with an older, slightly blurry camera).
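To make that bounce idea concrete, here is a minimal, framework-free sketch of the wall-repel logic. The Ball class and all names here are my own illustration, not the project’s actual p5.js code:

```javascript
// Minimal sketch of the bounce logic; the Ball class here is my own
// illustration, not the project's actual p5.js code.
class Ball {
  constructor(x, y, vx, vy, r) {
    this.x = x;   // position
    this.y = y;
    this.vx = vx; // velocity vector components
    this.vy = vy;
    this.r = r;   // radius
  }

  // Move one step, then repel off the canvas walls by flipping
  // whichever velocity component pushed the ball out of bounds.
  update(width, height) {
    this.x += this.vx;
    this.y += this.vy;

    if (this.x - this.r < 0 || this.x + this.r > width) this.vx *= -1;
    if (this.y - this.r < 0 || this.y + this.r > height) this.vy *= -1;
  }
}

const b = new Ball(195, 100, 10, 0, 10);
b.update(200, 200);
console.log(b.vx); // -10: the ball hit the right wall and now heads back left
```

Repelling off the hand works the same way in spirit: instead of a straight wall, you check the distance from the ball to the skeleton’s lines or joints and flip the velocity when they overlap.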

Areas of Improvement:

Most of all, I think the main area of improvement is probably to add more customization. I was debating having some way for the user to change the color, but time constraints sort of hampered that notion. Then again, I was thinking this by itself can be enough, as adding too much can drown out the simple nature of the project.

I think another area is making it seamless for the user to interact with the program fully with their hands. I tried, but I wasn’t getting the results I desired, and ultimately I decided to stick with keyboard inputs for most things, which is unfortunate. Possibly it could be a nice idea to have the other hand clench to spawn a ball, or to somehow find a hand gesture that could change modes. But overall, I’m happy with how it turned out in the end.

Midterm Progress Report

Concept:

Throughout the assignments, I really fell in love with Assignment 3, where I made this mesmerizing colorful display. Even while developing that production, I saw that there was more to be made, and playing around with some of the variables inspired me to make it the core focus of my midterm project. I think, if time allows, I really want to create a magnificent interactive display, one that will resonate with its viewer.

The main concept is customization of the colored canvas. I plan to add options so that the user can interact with key parts of the project, such as sliders for the direction of the balls on screen (in both the X and Y directions). There will also be an option for the user to change the RGB values in order to get the desired color they wish. But the main thing I want to incorporate is the text from Assignment 4, which would be surrounded by the colorful balls. I could also have the mouse interrupt the flow of the balls, similar to how the mouse interrupts the text in Assignment 4.

Design

The design process mainly extends and adds more features to the Colorful Concoction project. Firstly, there’s going to be an intro screen, where the user will be guided through what exactly the project is and given an overview of what’s to come. There will also be instructions for how the user can interact further with the project.

Then, when the user is ready, it will switch to the generative artwork. There are going to be sliders, or possibly text boxes, where the user will enter a value and it will change something in the artwork. This includes the range of colors, the direction and speed of the balls, and a text box so the text can be displayed on screen. Finally, there will be a button so that the user can take a picture of their final artwork.

Challenging Aspects:

I think the biggest challenge is mainly implementing the text and getting it to be a blockade for the balls so that they surround it. In a sense, the balls need to recognise the letters as a wall, so that not only do the balls surround them, but they also bounce off if they change. It’ll be a case of playing around with direction vectors.

Another challenging aspect is making the sliders, as I do not have any experience with making sliders so that they can dynamically change different parts of the artwork.

Mitigating Risk:

In terms of implementing the text, I plan to experiment with it and see how it will be affected by other objects. As a starting place, I could take the code I used to keep the balls from going outside of the walls and try to apply it to the letters. From there, I can manipulate the variables to get the desired effect.
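As a concrete sketch of that starting place, one common approach is to approximate each letter with its bounding box and reuse wall-style bounce logic against the box. Everything below (bounceOffBox, letterBox, the ball object shape) is my own hypothetical illustration, not code from the project:

```javascript
// Hypothetical sketch: treat a letter's bounding box as a wall.
// Find the closest point on the box to the ball's center; if that
// point lies within the ball's radius, they overlap and the
// velocity flips, just like bouncing off a canvas wall.
function bounceOffBox(ball, box) {
  const closestX = Math.max(box.x, Math.min(ball.x, box.x + box.w));
  const closestY = Math.max(box.y, Math.min(ball.y, box.y + box.h));

  const dx = ball.x - closestX;
  const dy = ball.y - closestY;

  if (dx * dx + dy * dy < ball.r * ball.r) {
    // Flip the velocity along the axis of deeper overlap
    if (Math.abs(dx) > Math.abs(dy)) ball.vx *= -1;
    else ball.vy *= -1;
    return true;
  }
  return false;
}

const letterBox = { x: 100, y: 100, w: 40, h: 60 };
const ball = { x: 95, y: 130, vx: 2, vy: 0, r: 8 };
console.log(bounceOffBox(ball, letterBox)); // true: overlapping the left edge, so vx flips to -2
```

In p5.js, font.textBounds() can give a bounding box for a string; splitting that per letter, or going per-pixel for tighter collisions, would be the next refinement.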

For the sliders, I will read up on how they’re implemented. Most likely our friends at the Coding Train have made a video about how to use sliders, so that will be a great starting ground. From there, I can extend them so the sliders can manipulate the variables of, say, the color or direction of the balls.


Reading Reflection Week 5: The visionary difference between a Computer and a Human

I found it quite interesting seeing how computer vision is actually different from human vision. Initially, I assumed that computer vision, being chock-full of the knowledge we provide from the AI side, would be able to, at the very least, analyze what an image is. However, I was surprised to find out that computers only really see grids of pixels and are fully reliant on mathematical algorithms in order to get a cleaner picture of what is on screen. Whereas we humans are able to distinguish an object from a background under different lighting, computers have a hard time even telling that a shadow is passing along a room.

However, with regards to the use of tracking and surveillance, I would say it honestly opens up a world of possibilities to use body tracking as a controller for many games and loads of interactive media artworks. The coolest one I’ve personally seen so far is Just Dance. It utilizes a camera for motion tracking so that it’s able to give an accurate assessment of whether the dance moves match up with the computer’s example. Its main concept isn’t just a gimmick, but the crux of the game’s functionality. And it’s the implementation, where you get an accurate assessment of whether you followed the dance moves and instant feedback through sound effects, that is very useful. With regards to interactive media, this will allow people to interact with our art in a deeper way, so that they can genuinely feel immersed in the art in question.

Assignment 4: The Text that Spins.

“You can make anything by writing” – C. S. Lewis

Concept:

I’ve been watching a few of Patt Vira’s videos on coding, and one of them about generative text jumped out at me. In this video she made the text rotate in an interesting pattern. So I followed along with the video and decided to add more to it to make it more interactive. I wanted to see if it was possible to change the font, to change the message, and to have some sort of mouse contact with it. Whilst the font change and user-inputted message were fine, learning to manipulate position vectors was a bit of a challenge. But this was quite a fun project to extend and make. Below is the final product.

How it’s made:

So the code is done with points, where the points are placed based on the text’s position. Then lines are drawn to give that 3D effect. Now, this code utilizes angles and degrees, which, I will be honest, I struggled to understand, as they were quite finicky to say the least.

But then the text itself is displayed on the screen, in rotational motion and in 3D. Again, learning this was quite a challenge, but it was interesting nonetheless. And of course, making it so the user could input a bit of text was done with the createInput function. There’s a text box below where the user can enter any text they want, and it will be displayed.

And of course, then came making a list of different fonts, and a mousePressed function to cycle through the various fonts.

But the challenging part was still the vectors and manipulating the mouse.

Highlighted bit of Code I’m proud of:

So figuring this out took some time, but I believe I got it down for the most part. The idea is to measure the distance from the mouse to the points on screen. In this way, we can tell how far the mouse is from a given point, so a force can be applied.

Then, as the mouse gets closer, a force, in the form of a vector, is applied to the point. After that, it’s simply a matter of adding or subtracting that vector so the point can return to its original position.

But of course, to make it smoother, I found out there is a function called lerp, which calculates a number between two numbers following some increment. In this case, it can be used to span the distance between the origin point and the current point, in a sense making a line. Then, depending on the increment, the point will travel back at the speed the increment provides.

let mX = mouseX - 100;
let mY = mouseY - 300;

for (let i = 0; i < points.length; i++) {
  let p = points[i];

  // The variable d checks the distance from the mouse to any given point
  let d = dist(mX, mY, p.x, p.y);

  // These if-elses apply a force to push the points when the mouse gets close
  if (d < 50) {
    let pushForce = 5;
    if (mX < p.x) p.x += pushForce;
    else p.x -= pushForce;

    if (mY < p.y) p.y += pushForce;
    else p.y -= pushForce;
  }

  // The lerp function eases the points back to their original position
  p.x = lerp(p.x, p.originX, 0.1);
  p.y = lerp(p.y, p.originY, 0.1);
}

Reflection

I will say, I wish I could do more with this code. Obviously, time constraints and many assignments coming up did hinder the overall product, but I feel like one way I can expand on it is by making different typography designs. Or even finally figuring out how to make the text spin in different directions. However, I feel confident in what I’ve learned so far through this assignment, so hopefully I can use it to expand for the midterm project.

Reading Reflection Week 4: The Experience of the Interactivity

This reading made me question the everyday designs that are prominent in our lives in terms of usability and understanding. I feel as if the more modern and contemporary a design is, the less functionality and fewer instructions it carries. As a real-life example, when I go to Galleria Mall, the bathroom was quite confusing to use. They had some new faucets that output water and soap, and also let you blow-dry your hands. Of course, this is great in terms of efficiency and spacing, having all you could need at your fingertips. But not only was it a little odd to understand, it was also a bit inconsistent: depending on where you put your hands, you could get splashed with water when you wanted to blow-dry them, or soap when you wanted water. I applaud the convenience, but simply put, better labelling and a slightly larger design would be appreciated.

In terms of interactive media, I agree with the three design concepts he presents, and I’d go as far as to say that more emphasis should be on experience design. When I think about interactive media, it is paramount that interactivity is a key part of designing an engaging and unique bit of media. But if I’m honest, most of the time I feel a bit underwhelmed by the experience I get from most interactive designs, say in museums or art galleries. It’s not to say there is a lack of interactivity, but I’d argue that the memorability of a given piece of interactive media matters a lot, especially if you want it to be remembered. One work I’d say masters the experience is Scott Snibbe’s Boundary Functions. You still interact, but you also take in the experience behind the art, and its actual meaning as well. Another place is somewhere like teamLab, where you truly are surrounded by this experience of art, such that I think you can lose yourself in it as easily as you can interact with it. Even theme parks like Super Nintendo World have a lot of interactive games where you can feel challenged but also enjoy the experience.

To close off, I would say it matters a lot how you frame the experience of interactivity. It needs to speak to our human emotion and really provoke in us the feeling which the curator of that bit of media was trying to convey.