I quite agree with this first reading, as I am a person who is greatly affected by the aesthetics of my surroundings. Norman wrote that there doesn’t have to be a scientific, cognitive explanation in place for us to like something better (as with the example of the color computer displays — there is no information advantage, yet we cannot go back to black & white displays). This brought two pieces of supporting evidence to mind. The first is YouTuber Molly Burke, who is blind and also very into fashion and makeup. I have watched her videos for a long time, and something that she often says is that, just because she can’t see the makeup or the colors of her cute outfit doesn’t mean she can’t enjoy it. This is a twist on what Norman wrote about the teapots: just because he can’t use them every day for brewing tea efficiently, doesn’t mean that they don’t give “satisfaction in their appearance”. While I agree with most of this text, I do think the author is biased, as a designer, when he says that “to be truly beautiful, wondrous, and pleasurable, the product has to fulfill a useful function, work well, and be usable and understandable”. I think true beauty can exist without usability, and might sometimes even exist because of the lack of usability.
Her Code Got Humans on the Moon—And Invented Software Itself
This reading was a fascinating glance into the beginnings of software, and I was both surprised and not surprised at how little the tech industry has changed in terms of gender inequality. In fact, it may have even been less discriminatory back then, because it was the “Wild West” and nobody knew what they were doing. It is notable how Hamilton’s role as a mother and Lauren’s playing with the keyboard led to saving the Apollo 8 mission, because it shows how diverse life experiences in the workplace enhance overall outcomes.
Additionally, I think the inherently exclusionary “inside club” feel of programming persists to this day, because of the learning curve needed to understand and participate in discussions in the community. It’s comparable to jazz music, in the sense that there is a certain level of “work” you need to do to be a part of the culture.
I think this article shows that everything that we take for granted today was conceived and invented by people for the first time, some time ago. For example, I, with no previous knowledge about physical computing, am now able to use breadboards to create circuits that take digital input, but back in Hamilton’s time, a whole team of expert seamstresses was required to wire the 0s and 1s. This raises the question of how current technologies that are inaccessible to laypeople may, in the very near future, be simplified and commercialized enough for anyone to manipulate.
Concept:
“Pride Dragon Generator” is essentially a very simplified version of a Picrew avatar maker, in which users can select an LGBTQ+ identity and receive a dragon with spike colors of the corresponding pride flag. They can then press “s” to save the image to their device.
Image of “Bi” dragon saved to device
This project was inspired by my identity as a queer Chinese-American who has been immersed in queer culture for years, but did not know the terms in Chinese until very recently. These terms are not taught in language classes, and queerness is oftentimes not spoken about in Chinese or Chinese diasporic families. Additionally, queer narratives in popular culture are overwhelmingly Western-centric. Thus, I wanted to combine Chinese imagery (the color red, lantern images, guqin music, dragons) with queer imagery to represent this intersection of identities. I hope this program makes the process of learning queer Chinese terms a little cuter and more memorable than just typing them into Google Translate.
Parts I’m Proud Of: The thing I am most proud of in creating this project is not necessarily any single bit of code, since the code is quite basic (to those well versed in p5.js — aka not me), but rather stringing it all together to form a functional work of art. In each week’s assignment, I gained individual skills such as drawing with shapes, making classes, and using the mouseClicked and keyPressed functions. The challenge was combining all of these into one project, and at first it felt a bit like starting a marathon when you had just learned how to walk. What helped was breaking the project down into chunks and establishing the “structure” of the code before actually drawing the details. Here are the steps I followed, roughly in order:
1. Figuring out how to toggle between two screens with true/false booleans & if statements (a minimal sketch of this structure appears after this list)
2. Programming the “s” key and back arrow to save the image & return to the homepage
3. Creating a Button class and adding functionality (hover and click) to it
4. Drawing the unchanging elements of the homepage and the dragon page
5. Creating a Dragon class
6. Figuring out how to make the scales change color depending on which button was pressed (if I were to do this project again, I would perhaps move step 6 to the beginning and try to figure out the 6 buttons + 6 color schemes while my code was still minimal)
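To illustrate step 1, here is a minimal sketch of the two-screen toggle (a standalone simplification with placeholder text, not my exact code):

let homePageVisible = true;
let scene1Visible = false;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  if (homePageVisible) {
    background(220);
    text("home page (click anywhere to enter)", 80, 200);
  }
  if (scene1Visible) {
    background(255, 220, 220);
    text("dragon page (press the left arrow to go back)", 50, 200);
  }
}

function mouseClicked() {
  // in this simplified version, any click on the homepage switches scenes;
  // in the real project, this check lives inside the start button's click logic
  if (homePageVisible) {
    homePageVisible = false;
    scene1Visible = true;
  }
}

function keyPressed() {
  if (keyCode === LEFT_ARROW && scene1Visible) {
    scene1Visible = false;
    homePageVisible = true;
  }
}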
As for the design of the project, I like its minimalistic, cute feel, which I tried to evoke with rounded rectangle corners, the bold Futura font, and pastel colors. I also believe it was a good decision to make the “back” and “save image” functions keyboard commands, because it decluttered the area under the canvas and foregrounded the LGBTQ+ identity terms that I wanted to highlight.
I also think it was a good idea to have the toggling between scenes controlled by a boolean variable plus a function to draw each scene, as it made it very clear which scene was currently being drawn. I had to restructure my program midway through the working process, when I realized that it made much more sense to have one Dragon class and call different dragons within a single scene than to have a separate scene for each dragon and call the scenes. Also, the buttons were originally all on the homepage with only the Chinese terms, which was a bit confusing for the user.
After revising, there is a start button on the home screen and a “canvas” on the second screen, which echoes the familiar layout of drawing programs and creates a more intuitive experience. I also added English to the buttons: although my original intent was to omit English and have the user learn the terms by associating them with the pride flag colors, I realized this was not explicit enough, and many people may not know or remember which pride flag colors correspond to which identities.
Problems:
I encountered four main bugs in this code; two are resolved and two remain.
The first bug was with the “press ‘s’ to save image” functionality: every key press was triggering the program to save the image, not just the “s” key. The issue was that I had written keyCode === "s" instead of keyCode === 83, as keyCode only works in natural language for special keys such as LEFT_ARROW; for letter keys, you need to use the key’s numerical code. Below is the working code:
function keyPressed() {
  // To find keyCode of letter keys:
  // console.log(keyCode);
  if (keyCode === LEFT_ARROW && homePageVisible === false) {
    homePageVisible = true;
    scene1Visible = false;
  }
  if (keyCode === 83 && homePageVisible === false) {
    savedImg = myCanvas.get(0, 0, width, height - 115);
    savedImg.save("my-pride-dragon", "png");
  }
}
The second bug was that the buttons were triggering different scale colors when the user was merely hovering over them, not clicking them. This was because I was writing code for clicking specific buttons inside the class, which I later learned is not ideal: classes are not supposed to know the specifics of each button; they should be as general as possible. Here is the code before, with this section repeated for each of buttons 1-6:
In office hours, I learned that the click method within the Button class can simply return true or false, and that true or false can then be used in the mouseClicked function to actually detect clicks on different buttons and call each button’s associated actions.
//code for detecting click on buttons to trigger
//different scale colors
if (scene1Visible) {
  if (button1.click(mouseX, mouseY)) {
    dragon1Visible = true;
    dragon2Visible = false;
    dragon3Visible = false;
    dragon4Visible = false;
    dragon5Visible = false;
    dragon6Visible = false;
    mainDragonVisible = false;
  }
  // ...the same pattern repeats for buttons 2-6 (the full version is shown further down)
}
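The Button class itself only needs a general click check that returns true or false. A minimal sketch of what such a method could look like (not my exact code, which also handles hovering and drawing) is below:

class Button {
  constructor(x, y, w, h) {
    this.x = x;
    this.y = y;
    this.w = w;
    this.h = h;
  }

  // returns true if the given point is inside the button's rectangle;
  // the class does not know (or care) which dragon the button triggers
  click(mx, my) {
    return mx > this.x && mx < this.x + this.w && my > this.y && my < this.y + this.h;
  }
}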
The third bug, which I have not resolved yet, is one in which the “bi” dragon (dragon3) sometimes shows up as the “default”: what the user sees when entering sceneOne before clicking on any buttons. By putting print statements in every function where dragon3 is called, I was able to see that something is happening between the moment the user goes back to the homepage and the moment they return to sceneOne to call dragon3. All the code for the bi dragon is the same as for the other dragons, so I have not been able to figure out what is causing it, in particular, to be called.
Finally, the background audio does not play when the sketch is embedded into this blog post or in fullscreen mode, but it plays fine when the sketch is run in the p5.js editor. I am not sure exactly why, but most browsers block audio from autoplaying until the user has interacted with the page, which would explain why the embedded version stays silent. Perhaps it could be solved by adding a command that starts the audio on the user’s first mouse click; however, the desired effect is that the audio plays before the user clicks, just as ambient background music.
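A possible workaround (untested, and the filename here is a placeholder) would be to resume the audio on the first interaction using p5.sound’s userStartAudio():

let bgMusic;

function preload() {
  bgMusic = loadSound("guqin.mp3"); // placeholder filename
}

function mousePressed() {
  // browsers generally only allow audio after a user gesture,
  // so resume the audio context and start the music on the first click
  userStartAudio();
  if (!bgMusic.isPlaying()) {
    bgMusic.loop();
  }
}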
Areas for Improvement:
There are many areas for improvement, both in the efficiency of the code and in the features and aesthetics I wish the program had. In terms of efficiency, it would be sleeker to have a “dragon selector” variable that assigns a number to each dragon, so that instead of writing all of the code below to ensure that only one dragon is true at a time, I could just set the selector to the chosen dragon’s number (a rough sketch of this idea appears after the code below).
if (scene1Visible) {
  if (button1.click(mouseX, mouseY)) {
    dragon1Visible = true;
    dragon2Visible = false;
    dragon3Visible = false;
    dragon4Visible = false;
    dragon5Visible = false;
    dragon6Visible = false;
    mainDragonVisible = false;
  }
  if (button2.click(mouseX, mouseY)) {
    dragon1Visible = false;
    dragon2Visible = true;
    dragon3Visible = false;
    dragon4Visible = false;
    dragon5Visible = false;
    dragon6Visible = false;
    mainDragonVisible = false;
  }
  if (button3.click(mouseX, mouseY)) {
    dragon1Visible = false;
    dragon2Visible = false;
    dragon3Visible = true;
    dragon4Visible = false;
    dragon5Visible = false;
    dragon6Visible = false;
    mainDragonVisible = false;
    //**bug: sometimes dragon does not set back to
    //the default dragon but instead
    //shows dragon 3 as the default
    //when returning to the drawing page
  }
  if (button4.click(mouseX, mouseY)) {
    dragon1Visible = false;
    dragon2Visible = false;
    dragon3Visible = false;
    dragon4Visible = true;
    dragon5Visible = false;
    dragon6Visible = false;
    mainDragonVisible = false;
  }
  if (button5.click(mouseX, mouseY)) {
    dragon1Visible = false;
    dragon2Visible = false;
    dragon3Visible = false;
    dragon4Visible = false;
    dragon5Visible = true;
    dragon6Visible = false;
    mainDragonVisible = false;
  }
  if (button6.click(mouseX, mouseY)) {
    dragon1Visible = false;
    dragon2Visible = false;
    dragon3Visible = false;
    dragon4Visible = false;
    dragon5Visible = false;
    dragon6Visible = true;
    mainDragonVisible = false;
  }
}
}
function keyPressed() {
  // code to find keyCode of letter keys:
  // console.log(keyCode);
  // return to the homepage when the left arrow is pressed and
  // reset the dragon to the default dragon (sometimes does not work because of the bug noted above)
  if (keyCode === LEFT_ARROW && homePageVisible === false) {
    homePageVisible = true;
    scene1Visible = false;
    mainDragonVisible = true;
    dragon1Visible = false;
    dragon2Visible = false;
    dragon3Visible = false;
    dragon4Visible = false;
    dragon5Visible = false;
    dragon6Visible = false;
  }
  // (the "s"-to-save check shown earlier also lives in this function)
}
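For illustration, here is a rough sketch of the selector idea (selectedDragon is a hypothetical variable; the rest of the names come from my existing code):

let selectedDragon = 0; // 0 = default dragon, 1 to 6 = the six pride dragons

function mouseClicked() {
  if (scene1Visible) {
    // each button only has to set one number instead of seven booleans
    if (button1.click(mouseX, mouseY)) selectedDragon = 1;
    if (button2.click(mouseX, mouseY)) selectedDragon = 2;
    if (button3.click(mouseX, mouseY)) selectedDragon = 3;
    if (button4.click(mouseX, mouseY)) selectedDragon = 4;
    if (button5.click(mouseX, mouseY)) selectedDragon = 5;
    if (button6.click(mouseX, mouseY)) selectedDragon = 6;
  }
}

function keyPressed() {
  if (keyCode === LEFT_ARROW && homePageVisible === false) {
    homePageVisible = true;
    scene1Visible = false;
    selectedDragon = 0; // back to the default dragon
  }
}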
Additionally, since each of my buttons performs the same hover and click functions, they could perhaps be stored in an array instead of being created and called individually every time:
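Something along these lines, assuming a Button class whose constructor takes a position and size, and which has hover() and display() methods (my real class may be organized a bit differently):

let buttons = [];

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 6; i++) {
    // positions here are made up; each button would also need its identity label
    buttons[i] = new Button(50, 100 + i * 70, 150, 50);
  }
}

function draw() {
  background(220);
  // one loop handles hovering and drawing for every button
  for (let i = 0; i < buttons.length; i++) {
    buttons[i].hover(mouseX, mouseY);
    buttons[i].display();
  }
}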
However, I am glad I did things “the long way” for this project, because it was much easier for me to know what each part of my program was doing when I could see every button I was creating, instead of everything being “hidden” in a for loop. I think making the code more efficient would have come at the cost of me understanding my program as I worked on it.
As for features, I wish I had time to add a key command to change background color, as well as a little “wardrobe” for the dragon (another scene that users could enter to select items, such as sunglasses or a shirt, that they could put on their dragon).
This article illustrates the historical development of computer vision techniques using notable examples, and then provides practical instruction to artists and designers on computer vision principles and on optimizing their physical environment for the camera. I found it interesting how the development of technology always seems to invite a process of democratization, but also commercialization and surveillance with negative consequences. For example, before the advent of the printing press and similar technology for mass printing, only a select few people could print books. The ability to disseminate texts is a tool, but what people do with it can have both positive and negative consequences, such as distributing hate speech or propaganda. Similarly, the ethics of computer vision, now that it is so democratized, is intriguing to contemplate.
Two uses of computer vision particularly stood out to me: Krueger’s Videoplace and Suicide Box by the Bureau of Inverse Technology. The concept behind Videoplace, that it involves the whole human body, is something that I think could be used to positively offset the fact that modern humans sit at screens all day. Would it be possible to make word processors or computer operating systems controlled by larger upper-body movements, or even foot movements, like floor pianos, so that we can still move while doing our schoolwork and jobs? This line of thinking was inspired by the podcast Body Electric, which explores how the human body has changed in response to technology.
The Suicide Box also raised many questions for me. How did it know to detect only human vertical movement, and not, say, someone throwing something from the bridge? Why was the number of suicides it recorded so much higher than the amount officially recorded by the port authority? For a topic as emotionally charged and taboo as suicide, is it empowering to record data that brings awareness to the topic, or does it cause more harm than good (as 13 Reasons Why’s suicide depiction did)?
Inspired by the simple yet compelling cartoons of @dinosaurcouch, and by the fun and customizability of avatar-designing websites such as Picrew, I set out to make a “Pride Dragon Generator” in which users can select an LGBTQ+ identity and receive a dragon with spike colors of the corresponding pride flag. I wanted to incorporate an educational element as well, and what came to mind was using the generator to teach people LGBTQ+ terms in Chinese. When the user hovers their mouse over each of the buttons, an audio clip will play pronouncing the Chinese term. When they click on a button, they will get a picture of a dragon with, for example, lesbian flag spike colors. They can then save this image to their device.
Dinosaurcouch comic, featuring dinosaurs with lesbian and bi flag color spikes
One of the many customizable avatars in Picrew image maker
Most Frightening Part & Risk Reduction
The most frightening parts of this midterm are 1) the underlying logic of the “hover over button” interactions and of how users will move forward and back to the homepage, 2) the sound-playing and image-saving functionalities, and 3) the complexity of drawing a dragon.
To address risk #1, I first tried to make interactive buttons on my own, and then went to IM Lab hours. With help from Coding Train and Zion, the IM lab assistant, I now have the basic logic of my program, and buttons that enlarge when you hover over them. The next steps are adding the sound and customization options.
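As a standalone illustration (not my project code), the hover-enlarge effect can be reduced to a bounds check that grows the rectangle while the mouse is over it:

let x = 150;
let y = 180;
let w = 100;
let h = 40;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(240);
  // check whether the mouse is inside the button's rectangle
  let hovering = mouseX > x && mouseX < x + w && mouseY > y && mouseY < y + h;
  // draw the button slightly bigger while it is hovered
  let grow = hovering ? 10 : 0;
  rect(x - grow / 2, y - grow / 2, w + grow, h + grow, 10);
}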
To address risk #3, I went to this website to try to better understand bezierVertex, and played around with it a bit to get the hang of which numbers control which aspects of the shape.
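For reference, here is the kind of tiny experiment that helped me see what each number does (my own toy example, not code from the website): the first two pairs of numbers passed to bezierVertex are control points that pull on the curve, and the last pair is the anchor point the curve actually ends on.

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(255);
  noFill();
  beginShape();
  vertex(50, 300); // starting anchor point
  bezierVertex(100, 50, 300, 50, 350, 300); // two control points, then the ending anchor
  endShape();
}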
From this reading, I found the principles of affordances and signifiers most interesting. Norman writes, “Affordances determine what actions are possible. Signifiers communicate where the action should take place” (14). The author’s frustration with doors that lack signifiers appropriately and intuitively demonstrating their affordances reminded me of my frustration with mobile apps, specifically the apps used to order takeout, when I first moved to China to attend NYU Shanghai. The apps are often overcrowded with symbols, words, and pop-ups, akin to the complex instruction manuals for machines that the author mentions. The affordances of these apps were not clear at all if you could not read Chinese fluently. Apps, like doors and refrigerators, could be designed to have simpler, more visible affordances.
However, this made me wonder if all perceptions of affordances are created equal. For example, when I encounter the “Settings” app on any phone (even in Chinese), I can find my way around because the layout is the same (or similar) across operating systems (usually “General” settings is at the top, then there are sections for wallpaper, brightness, sounds, notifications, etc.). In contrast, my mom or my grandparents have a much harder time finding settings controls that are second nature to me.
It is also interesting to consider why there exists a need to specify a design field as “human-centered design”. Does this suggest that, if we do not consciously remind ourselves to design for ease of use by humans, the design of products will inevitably be pushed towards “profit-centered design” or “efficiency-centered design”? That seems rather depressing, because aren’t humans designing for ourselves?
Another thought the reading brought up for me is our tendency to value logic above emotion. The text stated that engineers are trained to think logically, and I think that we as a society have been trained to think that logical thinking is better, even though emotion colors every decision and experience we have.
Concept: I wanted to use data that I generated myself for this data visualization, as I find it fascinating that, as time progresses, we are tracking more and more aspects of our lives — for example, # of steps walked, places visited, songs listened to, etc. One thing that I track is my reading data, on the app StoryGraph. I wanted to visualize my StoryGraph data on a timeline, where the size of the point corresponded to my rating.
Highlight: I am proud of the sorting portion of my code, because data visualization has been the hardest unit for me to understand so far, and it required some time and an office hours visit for me to grasp the logic of sorting. In the end, I split the whole array twice, once on “,” to separate each row into its elements, and again on “/” to separate the dates into individual numbers. Then, I sorted according to the month.
function findMinMaxMonth() {
  // go through each row of bookData to find the earliest and latest months
  let singleRow = [];
  for (let csvRowNumber = 1; csvRowNumber < bookData.length; csvRowNumber++) {
    singleRow = split(bookData[csvRowNumber], ",");
    // split the date 2023/03/21 into 2023, 03, 21 on "/"
    let dateSplitted = split(singleRow[5], "/");
    // print(dateSplitted)
    dateReadMonth = int(dateSplitted[1]);
    // print(dateReadMonth)
    if (dateReadMonth < minMonth) {
      minMonth = dateReadMonth;
    }
    if (dateReadMonth > maxMonth) {
      maxMonth = dateReadMonth;
    }
  }
}
Reflection and ideas for future work or improvements: I initially had many more data points in my CSV spanning multiple years, but due to time constraints and my skill level, I had to simplify the CSV to include only one book per month, read within one year (2023). If I could, I’d like to include the whole dataset, which would require first sorting by year, then by month, then by day. This would provide a much more interesting visualization. It would also be cool to vary the color of the book depending on the genre or the number of times I read it (darker shades = more times read).
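If I attempt that, one approach could be a comparator that compares the year first, then the month, then the day. Here is a rough sketch (it assumes the date read stays in column 5 of each row, in the same YYYY/MM/DD format):

function sortRowsByDate(rows) {
  let sorted = rows.slice(); // copy so the original CSV rows stay untouched
  sorted.sort(function (rowA, rowB) {
    let dateA = split(split(rowA, ",")[5], "/").map(Number); // [year, month, day]
    let dateB = split(split(rowB, ",")[5], "/").map(Number);
    // compare year, then month, then day
    for (let i = 0; i < 3; i++) {
      if (dateA[i] !== dateB[i]) {
        return dateA[i] - dateB[i]; // earlier dates come first
      }
    }
    return 0;
  });
  return sorted;
}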
As an Interactive Media and Business major, I certainly agree with the author that “interactivity” is a word thrown around so much that it might as well be “dead” – as evidenced by how hard it is to come up with a succinct explanation of the IM major.
I think what makes Crawford’s definition of interactivity stand out is that it involves two actors, more specifically two purposeful creatures, and that it exists on a continuum. This reading changed my belief that interactivity is a boolean value, and I will now be more mindful of viewing interactions with objects (or even humans) on a scale of low to high, as he suggests. I do wish that he gave more examples of what would count as high interactivity, especially in the digital space.
Now to raise a question: are acoustic instruments interactive? They do not “think,” but they appear as if they do, because they react (speak), and their reaction causes a change in the (listening) player, thus fulfilling all three requirements of interaction laid out by the author. For example, if I am playing my guitar, I will first “speak” by plucking the strings and applying a certain level of pressure. The guitar strings “listen” and then “speak” in (sometimes unexpected) ways, which causes me to adjust the pressure I apply or change the position of my fingers on the fretboard. Even more interesting is the question of digital instruments and “smart” instruments, which can make autonomous (?) decisions: where do these sit on the scale of interactivity?
Concept: I recently saw a post (below) about love that touched me very much, and I thought that I would be able to represent this idea visually using what I currently know about objects and arrays.
This was my initial idea, where the circles represent the waves of light radiating out from each of us and intersecting with other people’s lights:
And here is the final sketch, where each time you run the code, the circles start at random places, and the increments between the circles also vary. You can click the mouse to make the circles appear:
A bonus of the random starting locations of the circles is that each outcome made me think of different types of relationships. For example:
A mother and her child:
Long distance best friends:
Future lovers who are just about to meet for the first time:
I started out the assignment only meaning to depict the “long distance” visual, but the randomness sparked many more possibilities, which I was not expecting!
Highlight: I am proud of this part of my code, because the for loop + objects + arrays integration was something I struggled a bit to understand, but after watching several Coding Train videos and following his examples, I was able to apply the same principles to my own code.
let circles = [];

function setup() {
  createCanvas(500, 500);
  background(0);
  for (let i = 0; i < 2; i++) {
    let x = random(width);
    let y = random(height);
    let r = random(1, 50);
    circles[i] = new Circle(x, y, r);
  }
}

function draw() {
  if (mouseIsPressed) {
    for (let i = 0; i < circles.length; i++) {
      circles[i].show();
      circles[i].increaseRadius();
    }
  }
}
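The Circle class that the code above relies on is not shown here; a minimal version could look something like this (my actual class may differ slightly):

class Circle {
  constructor(x, y, r) {
    this.x = x;
    this.y = y;
    this.r = r;
  }

  show() {
    noFill();
    stroke(255);
    circle(this.x, this.y, this.r * 2); // circle() takes a diameter
  }

  increaseRadius() {
    this.r = this.r + 1; // each wave of light keeps radiating outward
  }
}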
Reflection and ideas for future work or improvements: I would like to assign colors to the light waves, for example making the smaller circles brighter/more saturated and having the color fade out as the circles get bigger. It would also be cool to apply some kind of blur feature so that the circles look more like actual light.
This talk challenged my belief that code is one of the opposites of nature. I used to think that people could employ code in an attempt to replicate nature: humans (“natural” organisms) ordering a computer (an “unnatural” object) to create art. However, Reas’ talk made me realize that perhaps code and nature operate on the same level. They both operate on certain core principles to create patterns that are then interpreted by humans as art. There is not so strict a distinction between the natural and the artificial. This concept reminded me of Donna Haraway’s Cyborg Manifesto, in which she blurs the line between humans and non-human organisms, and between humans and the technologies and systems we use.
(the top line is my previous belief, and the bottom line is the new concept introduced by the talk)
I think that Reas, as a digital artist, is biased towards portraying his own profession in a positive light, in the sense that he is in a position to make meaning out of abstract art. Some of the examples he showed, like the picture created by an artist dropping pieces of paper on the floor and leaving them there, made me wonder if such pieces are only considered art because of who created them and who is interpreting them. Both the artist and the commentators are likely already established within the rather gate-kept art world. If a child in kindergarten dropped some pieces of paper onto another piece of colored paper and then painted the result, I doubt that people would pay the same respect in their analysis — they’d probably tell the child to clean up the floor. This points to power dynamics in the creative world, in which people with certain identities are more likely to be taken seriously. I am not trying to discount the importance of the artist’s intention and concept, only posing the question: are certain artworks only considered art because the people creating them are already seen as artists?
I also want to comment on Reas’ latest project, which tries to reveal the algorithmic structures behind artworks. I feel that there is an ideological undertone throughout the whole talk that we should aspire to gain an algorithmic understanding of the world and of art, which I am not sure I agree with. Especially with the rise of AI and the way it is impacting artists’ rights and pay, it is worthwhile to think about how computers’ ability to emulate randomness may come with both benefits and consequences.
Concept: After learning in class about how the Perlin noise effect simulates more natural movement, I experimented with it using circles instead of lines. The way the circles moved reminded me of bubbles in a lava lamp, which is quite a nostalgic object from my childhood. I never actually owned one, but I would often be mesmerized by them whenever I saw them in classrooms or stores. To recreate the feel of a lava lamp with different size bubbles, I set 4 offset values and made them control different aspects of the circles. For example, for the first circle circle(n, n, 0.4 * n), the x-coordinate, y-coordinate, and diameter are all changing at the same time, while for the second circle circle(v - 10, 400, 0.4 * v) the y-coordinate is fixed. I did this so that the bubbles would have their own “territory” on the screen and not overlap that much.
// bubble 1: x-coordinate, y-coordinate, and diameter all follow the same noise value
offset = offset + 0.003;
let n = noise(offset) * width;
circle(n, n, 0.4 * n);

// bubble 2: y-coordinate fixed at 400
offset2 = offset2 + 0.003;
let v = noise(offset2) * 200;
circle(v - 10, 400, 0.4 * v);

// bubble 3: x-coordinate fixed at 400
offset3 = offset3 + 0.003;
let k = noise(offset3) * width;
circle(400, k, 0.2 * k);

// bubble 4: x-coordinate fixed at 40
offset4 = offset4 + 0.003;
let u = noise(offset4) * width;
circle(40, u, 0.3 * u);
Highlight: A part of the code I had many difficulties with was the for loop. My aim was to make the background color change slowly by altering the “g” value in my background function using a for loop. However, my p5.js kept crashing whenever I did anything, which was quite frustrating. I eventually found out that it was doing so because I had made an endless loop with g + 1. Since the value of g + 1 is never reassigned to g, g stays at 50 forever. Therefore, g is always less than or equal to 255, so the for loop runs forever, causing p5 to crash.
function draw() {
  // for loop does not work, why?
  for (let g = 50; g <= 255; g + 1) {
    background(255, g, 50);
  }
}
After I fixed this problem, the for loop was still not causing the background color to change. I am still not completely sure why, though it is likely because the entire for loop finishes within a single draw() call, so only the final background color ever gets shown on screen. I asked an NYU Shanghai IMA major for help, and she explained that I do not need a for loop to accomplish my aim; I could simply initialize a global variable g and then tell g to increase by 1 in the draw function.
let g = 50;

function draw() {
  background(255, g, 200);
  g = g + 1;
  drawBubbles();
}
This created the below effect:
However, this project still lacked the use of a forloop, so my friend proposed that I could make a gradient by drawing vertical lines of varying colors from left to right. She showed me the code, but I did not want to simply copy and paste it without knowing how it worked, so I really tried to break down each part I did not understand. Writing it out on paper helped with that:
So, the final for loop code creating the changing background gradient was as follows:
let r = 50; // red
let b = 200; // blue

function draw() {
  // x increases by 1 each time the for loop runs, moving across the canvas
  for (let x = 0; x <= width; x = x + 1) {
    noFill();
    strokeWeight(1);
    let g = 200 - x / 2; // vary the "g" value according to the "x" value
    stroke(r, g, b);
    line(x, 0, x, height); // draw a line from (x, 0) to (x, height), left to right, to create the gradient
  }
  r = r + 1; // gradually increase the r value by 1
  b = b + 1; // gradually increase the b value by 1
}
Embedded sketch:
Reflection and ideas for future work or improvements:
I’d like to figure out how to make the gradient change back the other way, so the background will keep changing forever once you run the program.
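One possible approach (a sketch I have not tested) could be a direction variable that flips whenever r reaches a bound, so the increments at the bottom of draw() would become something like:

let direction = 1; // +1 while the colors are increasing, -1 on the way back

// at the bottom of draw(), instead of always adding 1:
r = r + direction;
b = b + direction;
if (r >= 255 || r <= 50) {
  direction = -direction; // reverse once a bound is reached
}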
As a self-reflection: I have never been very quick at understanding mathematical logic, so I always feel extremely lost when it comes to coding. Some functions of code seem like they should be simple, but I always need to walk through each step slowly in my head (or, even better, on paper) to understand how it works. However, this may not be a bad thing; it just means I need more time for the information to marinate.
Resources: Many thanks to the IM Discord and Pacey Wang for their help!