Week 8 – Reading Reflection

Attractive things work better:

I think the most important point for me in this essay was the idea of context being a vital part of design. Depending on the time of day, mood, and situation, certain objects might be preferred over others. For example, I sometimes switch out the bag or purse I bring depending on where I'm going; both the aesthetic and the functionality of the bag matter when choosing the right one for the occasion. Taking negative moods and stress into account when designing or using an object also rings true in this example. Let's say I'm going somewhere with a lot of walking around or physical activity. I don't want a clunky bag with unnecessary pockets and components; I want something easy to carry even if it means looking a little more plain. When I'm doing a lot of physical activity, I'm going to feel more stressed out, and people think differently when they're under pressure or anxious, so choosing a simpler bag to avoid unnecessary panic when digging for items is the smarter move.

Her Code Got Humans On The Moon

I had known that women were a crucial part of the start of computer programming from a previous media class that talked about computers and the necessity of a female workforce during WWII. However, even when women were working on the same things as men, they were portrayed in the media as doing clerical or busy work. They weren't acknowledged for how much they contributed. These were women with degrees in mathematics and various STEM fields, looked down upon and dismissed despite their qualifications. Moving into the '60s and space exploration, it's still evident how women's opinions weren't valued as much despite how critical they were. Hamilton's expertise was crucial to Apollo's success and to software engineering as a whole. It's important to remember that these people were our predecessors and the backbone of the knowledge we have available today. Even though this field has been historically male-dominated, we have to remember that women were a very important part of its emergence.

Week 8 – Unusual Switch

Brainstorming:

I had a hard time coming up with a unique switch idea, but my process entailed listing different body parts that could be used to put two pieces of metal together. I thought back to a design class I took in the past where the professor talked about accessible design; she mentioned doors with lever-style handles that don't require you to use your hands. You can use your elbow, for example, to rotate the handle and then pull the door open with your elbow as well. Sometimes, when my hands are dirty from eating, I'll use my elbow on doors like this to get into the bathroom.

I decided to use my elbow to trigger a switch. I taped a wire to my elbow and had a strip of copper tape on the table. This copper strip had the other part of the circuit taped to it, so as soon as the wire on my elbow comes into contact with the copper strip, a bunch of LEDs should turn on.

My first attempt was a bust. I just took the two wires connected to the button pins and put them together, but I realized it took a while for the LEDs to turn off. I looked at the serial monitor and saw that the button was technically still "on" (1) even after the circuit was broken. After a while, I realized it was because I hadn't grounded the circuit. In the original button-and-breadboard setup, a resistor connects the input wire to GND.

Failed attempt:

After realizing this, I taped the resistor and the second wire together so that the current can drain to GND and the LEDs no longer stay lit. Now, the LEDs turn off immediately after my elbow lifts up.

F I N A L    V I D E O:

Picture of the setup:

Code:

Similar to our in-class code, but this time, I have 4 LEDs. The Serial.println line is very helpful for debugging!

Serial.println(buttonState);
// if button is pressed, turn LED on
if (buttonState) {
  digitalWrite(10, HIGH);  // turn the LED on (HIGH is the voltage level)
  digitalWrite(11, HIGH);  // turn the LED on (HIGH is the voltage level)
  digitalWrite(12, HIGH);  // turn the LED on (HIGH is the voltage level)
  digitalWrite(13, HIGH);  // turn the LED on (HIGH is the voltage level)
}
// if button is not pressed, turn it off
else {
  digitalWrite(10, LOW);  // turn the LED off (LOW is the voltage level)
  digitalWrite(11, LOW);  // turn the LED off (LOW is the voltage level)
  digitalWrite(12, LOW);  // turn the LED off (LOW is the voltage level)
  digitalWrite(13, LOW);  // turn the LED off (LOW is the voltage level)
}
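
For context, here's roughly how the rest of the sketch is set up. The LED pins match the snippet above, but the input pin (2 here) is my assumption based on our in-class setup:

int buttonState = 0;

void setup() {
  Serial.begin(9600);
  // the elbow wire feeds pin 2; the resistor pulls pin 2 down to GND,
  // so the reading drops to 0 the moment contact is broken
  pinMode(2, INPUT);
  pinMode(10, OUTPUT);
  pinMode(11, OUTPUT);
  pinMode(12, OUTPUT);
  pinMode(13, OUTPUT);
}

void loop() {
  // reads 1 while the elbow wire touches the copper strip
  buttonState = digitalRead(2);
  // ...the if/else block above goes here...
}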

Looking forward:

Since I'm using a strip of copper, I was thinking maybe I could somehow use the distance between the wires on the copper strip to determine which color LED to turn on. It'd be cool if I could measure how far apart the wires are on the strip using analogRead() instead of digitalRead().
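
Just to jot the idea down, here's a rough sketch of how the loop might change, assuming the copper strip acts like a voltage divider read on A0 (the pin and the thresholds are invented and would need calibration with the serial monitor):

void loop() {
  int reading = analogRead(A0); // 0-1023, hypothetically varies with wire spacing
  Serial.println(reading);
  digitalWrite(10, reading > 700 ? HIGH : LOW);                     // wires far apart
  digitalWrite(11, (reading > 300 && reading <= 700) ? HIGH : LOW); // medium distance
  digitalWrite(12, (reading > 50 && reading <= 300) ? HIGH : LOW);  // close together
}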

Week 6 – Midterm

Link to sketch: https://editor.p5js.org/bobbybobbb/sketches/7XBZCIX_C

My project is a calm flower-shop experience where users can purchase flowers and make bouquets. Users have $100 to spend on a bouquet. Once they've spent the full $100 and checked out, they can restart the experience by leaving the store. They can also leave the store at any time, but the experience won't reset until they've checked out and spent the $100.

Implementation

I drew all the elements (from the storefront to the flowers) by hand to give it a homey, soft aesthetic, and I chose lo-fi cafe music to set a relaxed tone. Feedback for users' actions was very important to me when designing this; I wanted them to know exactly how their actions affect the experience. For example, hovering over a clickable object in the store draws a white mask over it. Another piece of feedback is the click sound that plays when the user clicks on something clickable. I also wanted the outside of the store to be inviting and to encourage users to click on the screen: whenever the user hovers over the screen, the doors to the shop open, inviting them to come in and use their mouse. Otherwise, the doors remain closed.

When users check out with a cart full of flowers, I display a bouquet made of the flowers they have. I had to think about how flowers sit at different angles in a real bouquet, so I randomly translate and rotate each flower to make it seem like they're arranged in one:

push();
// random positions and orientation of flowers
translate(random(170,240),random(140,210));
rotate(random(-PI/6,PI/6));
// display each flower inside the cart
image(flowerOptionsBouquet[cart[i]],0,0,200,200);
pop();

One thing I had to think about was how to make images clickable, because all my elements are displayed using images. Instead of creating hit boxes and defining boundaries, I wanted the images themselves to be tracked as clickable things. That's why I settled on createImg() instead of the loadImage() and image() functions. createImg() does the same job, except the resulting element has .show() and .hide() methods for easily turning it on and off. It also has .mouseClicked(), which lets me call a function once the image is clicked. The downside of this system is that it doesn't rely on draw(); the images stay displayed unless you hide them, and even calling background() in draw() doesn't clear them. That's why the bulk of my work happens in setup(), yet still works and responds to the user's mouse clicks. This method also requires booleans to keep track of which scene is being displayed so I can turn things on and off.

doorOpen = createImg("door_open.png","alt");

versus:

redFlowerBouquet = loadImage("red_flower.png");
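
Put together, the pattern looks roughly like this (the scene name, position, and callback here are illustrative, not my actual code):

let doorOpen;
let insideStore = false; // booleans track which scene is showing

function setup() {
  createCanvas(400, 400);
  doorOpen = createImg("door_open.png", "alt");
  doorOpen.position(150, 100);
  // run a function when the image element itself is clicked
  doorOpen.mouseClicked(enterStore);
}

function enterStore() {
  insideStore = true;
  doorOpen.hide(); // the element stays on screen unless explicitly hidden
}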

It was very important to me to get all the mechanisms working (clicking, switching between scenes) before making anything beautiful, so here are some examples of what the early prototypes looked like.

I didn't even spell flowers right:

Improvements

One thing I noticed was just how long my file was; it feels like a lot of lines for a fairly simple project. It might be because I have to turn images on and off so often. Next time, I'll try implementing hit boxes instead of hiding/showing images and see if that leads to simpler code. I'd also like to add a feature where you can buy multiple bouquets with the money you have and display all the bouquets in your collection. Right now, you can only buy one $100 bouquet, but users might want to split up that $100 and buy two or more bouquets.

Images

Inside the store:
Hovering over objects:
Example bouquets:

Week 5 – Reading Response

Human vision is intuitive, whereas computer vision relies on algorithms, pattern recognition, and controlled environments for those algorithms to work. Computer vision processes images pixel by pixel, whereas human vision takes in an image as a whole. For example, to detect movement, some computer vision algorithms look at the difference in values between neighboring pixels, while human pattern recognition looks at the overall shape of an object. We also handle seeing things in different contexts much better; computers need to be specifically told to watch out for slight variations. With the rise of machine learning and neural networks that help computer vision with pattern recognition, though, it'll keep getting easier for computers to detect objects and movement.

Some techniques for tracking include detecting motion, detecting presence, and detection through brightness thresholding. Detecting motion means comparing pixel brightness frame by frame: if a pixel's brightness changes a lot from one frame to the next, that can indicate movement, and summing those differences across pixels gives a broad picture of total movement. Detecting presence means comparing an incoming image with a preset background image to see what's different (background subtraction). Detection through brightness thresholding compares each pixel against a general brightness level to determine whether it belongs to the foreground or the background.
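
As a concrete example, here's a minimal frame-differencing sketch in p5.js; the webcam setup and the threshold of 30 are my own assumptions, not from the reading:

let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  video.loadPixels();
  let totalMotion = 0;
  if (prevFrame) {
    // compare each pixel's red channel (a rough brightness proxy) across frames
    for (let i = 0; i < video.pixels.length; i += 4) {
      if (abs(video.pixels[i] - prevFrame[i]) > 30) {
        totalMotion++;
      }
    }
  }
  prevFrame = video.pixels.slice();
  // more motion across the frame = brighter screen
  background(min(totalMotion / 100, 255));
}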

Tracking through computer vision has a lot of potential in interactive art. Being able to detect a user's movement means the user can influence the direction of the art. Even in musical performances, for example, an artist can make a specific movement to trigger a specific sound or effect. Raves and concerts, which are both audio- and visual-based, could get a lot out of using dance movements to aid musical production. More advanced tracking also allows for the detection of subtle expressions and glances, which gives artists even more to work with, since they can use a user's emotion to guide an art piece.

Week 5 – Midterm Progress

I decided to make an experience based on a flower shop. Users can go into the shop, select flowers they like, make a bouquet, and buy flowers. I also want to implement a money system where you have to purchase everything within a budget; if you go over, you can't buy the bouquet. So far, I've implemented walking in and out of the store using a door (enter) and rug (exit), selecting the flower stand and picking flowers from a menu into your cart, and going to the checkout. I want to draw all the images, backgrounds, and items in this experience myself, but right now I just have simple stand-in pictures I drew. I'll replace these shabby ones in the future; I just wanted to get the mechanisms of my system working before beautifying it.

Jotting down ideas before starting:

Lots of things ended up changing. I originally wanted to make everything pixel art too, but I think I'm gonna stick with a more freeform style.

Fear:

I had a lot of difficulty displaying and un-displaying different elements in the project. For example, I have a hover effect I'm proud of because it helps indicate what's clickable to users, but that means turning things on and off in terms of display, which is a lot more complicated to implement than it intuitively feels to use. I knew I was gonna spend a lot of time figuring out how these mechanisms would work, so I wanted to start early on them instead of on the visual sophistication of the project. That's why I created really simple sketches and blobs to fill in for the final drawings.

I'm also not sure if I want to implement a budget; the minimum viable product is being able to make a bouquet with the flowers you chose without worrying about money, but the budget would be nice to have. Having these milestones is important because it keeps you from feeling overwhelmed. Each tiny detail can take a long time to implement, which can feel daunting, but I can steadily make progress through these milestones.

Week 4 – Reading Response

Faucets, soap dispensers, and towel dispensers in public restrooms can be really frustrating. One reason is the uptick in automatic dispensers and sinks. Sometimes faucets look like they're automatic but are really operated by handles, and the same goes for soap and towel dispensers. Another frustration is when soap dispensers look like faucets. I once saw an air dryer, soap dispenser, and faucet around one sink, all with the same color, build, and shape; the only distinguishing factor was size. I had to test all of them to figure out which was which.

Don Norman says, “The same technology that simplifies life by providing more functions in each device also complicates life by making the device harder to learn, harder to use. This is the paradox of technology and the challenge for the designer”, which I thought was interesting. The designer has to make complicated information easily digestible. Designers are like the middlemen between new technology and users.

I also find the point that having to spell out a function or affordance with words is poor design compelling, but it's also very difficult to get right. Making designs intuitive is the ultimate goal, but the thing about intuition is that you don't consciously think about it, so designers can't immediately know what is intuitive. Too often, I personally lean on words to let people know what a design does. I need to get more used to designing without words and instead use visual or physical cues to guide a user's attention.

Don Norman's main idea, that people aren't at fault for poor design, is also very important. It's up to designers, engineers, manufacturers, and businesses to create a pleasant user experience; if a user isn't enjoying their product, it's not the user's fault. Too often, experts are so biased by their own highly skilled perspective that they expect others to be on that level. For example, when I'm teaching someone a skill like crochet, I see the stitches and shapes very easily, but they can't. If I were to design a beginner's crochet kit or tutorial, I'd need to take the average beginner's perspective into account.

I really liked how Don Norman distinguishes between affordances and signifiers: affordances are relationships between people and designs, and signifiers are physical attributes that help people discover those affordances. Distinguishing them helps designers understand what they're working towards and how to work towards it.

Week 4 – Data Visualization

Link to sketch: https://editor.p5js.org/bobbybobbb/full/rBBWMFyxx

When brainstorming for this project, I wanted to represent the data in a way unique to the dataset, framing what it was about in a visually engaging way. I was digging through Kaggle and found a simple dataset about streaming platforms, so I thought it'd be cool to represent it inside a TV, with the glitchy effects of a TV screen. Each bar of glitch/color is proportional to the share of shows or movies the streaming platform has compared to the others. Since there were two sets of data, I created buttons on the TV to toggle between them; I really liked how the functionality of the system was tied into the visual cohesion of the piece. The colors of the glitches are also randomized, so each time you click the buttons, the colors change whether or not the dataset did.

One challenge I encountered was not being able to pass arguments to the function called when a button is pressed. To get around it, I wrapped the call in an anonymous function passed to mousePressed():

showsButton.mousePressed( function(){
  displayData(2);
});

In the future, I want to make this piece more engaging, because it feels a bit static right now. Maybe users could click on a specific streaming platform and learn more about it, or I could find more data on these platforms and show even more visualization upon clicking each one. Overall, this project came with a lot of testing and hunting for datasets. I had to repeatedly test the spacing of the rectangular bars representing each streaming platform, the text rotation, and the button sizing. There was also a lot of math involved in turning the values in the dataset into rectangles that fit within a confined space.
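
The core of that math boiled down to mapping each platform's count to a proportional width; something like this, with made-up numbers standing in for the real dataset:

function drawBars() {
  // hypothetical counts per platform; the real values come from the dataset
  let counts = [1200, 800, 650, 400];
  let total = counts.reduce((a, b) => a + b, 0);
  let screenWidth = 300; // usable width inside the TV screen

  for (let i = 0; i < counts.length; i++) {
    // each bar's width is its share of the total
    let barWidth = (counts[i] / total) * screenWidth;
    fill(random(255), random(255), random(255)); // randomized glitch color
    rect(50, 60 + i * 40, barWidth, 30);
  }
}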

Week 3 – Reading Reflection

After reading The Art of Interactive Design, Ch. 1, I'm realizing that we as a society and industry throw around the term "interactivity" very liberally. I used to classify interactivity as response to user input, but that may be too broad or too simple. For example, my p5.js sketches consist of clicks from a user to make something happen, but it's the same process over and over again. For something to be more complexly interactive, these inputs and outputs should engage in a conversation: maybe as time goes on and more inputs are made, different responses are created; depending on the type of input, different outputs appear; or a user's input can change how the system itself operates.

I think giving users more agency in how a "story" plays out is interactivity. A movie, for example, doesn't let the viewer change its outcome. Interactivity, however, gives users agency (albeit in a confined system, but that system should give users enough freedom to feel like they have the power to change things the way they want to). Over time, a system should respond differently to a user's input given the history and new information it has to work with, kind of like in a conversation. Interactive systems aren't stagnant; they progress with time, and this allows users to stay engaged. If a process presents the same steps repeatedly, how are users supposed to care and give their sustained attention?

I think giving users the ability or tools to create something entirely new can also be part of interactivity, since prolonged engagement is the goal of any good conversation. However, the tool can't be a standalone piece, but part of a system, since "tool" implies no complex thought behind receiving inputs. A piano, for example, is a tool for creating something entirely new, but a system that senses dance moves and creates sounds based on the user's movement over time (sounds build up over time, instead of linearly mapping one movement to one sound) is closer to interactive.

In the future, I should take these principles into account and create more complex interactions where multiple processes are happening, not just one.

Week 3 – Object Oriented Programming

Click on the piece with your mouse:

Link to sketch: https://editor.p5js.org/bobbybobbb/full/GKQffo525

When brainstorming ideas for this generative piece, I thought about pixel art and how you can generate complex images out of many simple shapes, and since we're working with arrays this week, it'd be easy to store that data. I had the idea of a 100-pixel grid (10 × 10) where combining lines and diagonals can make shapes like flowers. A flower can be simple and have one color, so you essentially only need to store one thing per pixel: whether or not it's colored. I created an object for the flowers that stores booleans in a 10 × 10 2D array. To display a flower, I just go through the array and fill in the pixels that are true with a color, leaving the rest empty.

The most difficult part of this process was creating a dynamic algorithm for generating each flower shape. I didn't want to hard-code individual shapes from scratch, so I used numbers like each pixel's coordinates on the grid (its x and y values), a randomly generated number passed into the object as an argument, and the pixel's distance from the center of the grid to build equations and true/false conditions that determine whether a pixel is colored. This part came with a lot of experimentation; I would generate 20 flowers on screen to test whether I liked how the numbers were being manipulated and the specific shapes of the flowers. Brainstorming how to manipulate the numbers and making sure there was a diverse range of shapes took the most time.

Here are some of the ways I generated the flower shapes (this.shape is a random number passed into the object as an argument; i and j are the pixel's coordinates on the grid):

// distance of pixel to center of flower
let distFromCenter = dist(j,i,5,5);

// random pattern, weird alg after testing 
if ((distFromCenter - this.shape)==0) {
  this.pixArr[i][j] = true;
}
        
// another random pattern
if ((i * j + i + j + distFromCenter)%this.shape==0) {
  this.pixArr[i][j] = true;
}

// diagonal pattern
if (diag) {
  // diagonal
  if ((i+j==10) || (i==j)) {
    this.pixArr[i][j] = true;
  }
}
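
The display pass is then just a double loop over the grid, roughly like this, where the position and size fields are my guesses at the object's actual names:

display() {
  noStroke();
  fill(this.col);
  for (let i = 0; i < 10; i++) {
    for (let j = 0; j < 10; j++) {
      if (this.pixArr[i][j]) {
        // only cells marked true get drawn
        rect(this.x + j * this.pixelSize,
             this.y + i * this.pixelSize,
             this.pixelSize, this.pixelSize);
      }
    }
  }
}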

Every time the user clicks their mouse, a random flower (different shape, size, and color) is generated. Each flower also has a green stem whose size is determined by the flower's size. I didn't want the flowers to exist in a vacuum, so I also created cloud objects that move across the screen, with a blue background to look like a sky. The clouds have randomly generated speeds and positions within a given range, and the flowers' positions are limited to the lower half of the screen because it wouldn't make sense for them to be in the sky. There's also a "clear" button at the bottom of the screen to clear the flowers.
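
The clouds follow the same object-oriented pattern; here's a simplified sketch, with the exact ranges as assumptions:

class Cloud {
  constructor() {
    this.x = random(-100, 400);
    this.y = random(0, 150);     // clouds stay in the upper (sky) half
    this.speed = random(0.5, 2); // each cloud drifts at its own pace
  }

  move() {
    this.x += this.speed;
    if (this.x > width) {
      this.x = -100; // wrap around to the left edge
    }
  }
}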

Randomness for the flowers’ parameters:

var xCoor = random(-20,400);
var yCoor = random(200,400);
var pixelSize = random(2,10);
var col = color(random(255), random(255), random(255));
var shape = floor(random(5));

For future improvements, I want to develop more algorithms that’ll give me more complex flower shapes, because right now, you can tell these flowers have a limited range of shapes.

Brainstorming:

Week 2 – Reading Reflection

I really like using random elements in my work; both assignments so far have incorporated randomness because it adds so much movement and engagement with only a few lines of code. The computer program almost creates the art for you; you just have to give it a guiding path to follow. Even though I like randomness, there are still specific visions I want it to follow. I use randomness as a way to represent reality and organic ways of forming images; carefully planned, well-thought-out placements of objects feel artificial, so being able to simulate randomness means mimicking real life.

The book of a million random digits in Casey's talk mirrors my point; people used those numbers to simulate and test things in the real world. It did get me wondering, though: if multiple fields and teams are using this one specific set of random numbers, would it eventually become unreliable? Depending on the context, I'm sure it doesn't matter, but if the whole point of these random numbers is to give unbiased data about society, it feels like they shouldn't be reused over and over again. Another thought came to mind while watching the video: what if our universe is just a random simulator, programmed by someone who used randomness to let organic life move and form freely? This reminds me of the theory popularized by The Matrix, that we're very likely living in a simulation, because eventually a civilized society will develop the ability to simulate worlds, which can then also simulate worlds, and so on. That means there are probably millions of simulated worlds and only one base reality, so the likelihood that we're the base reality is one in a million. We might be a simulation testing a specific outcome, and that technology could be rooted in the randomness shown in Casey Reas' talk.