I was surprised by the ways in which computer vision can work with interactive art. Recently, I have been interested in public surveillance and curious about how technology is used in that field, so the Suicide Box was the project I found most meaningful and useful. To register suicide attempts at the Golden Gate Bridge, its makers devised a system that detects the vertical motion of people and uses it to guess whether someone is about to jump off the bridge.
The debate that arose around the Suicide Box was interesting as well. The questions “is it ethically okay to record suicides?” and “are the recordings real?” make us ask whether such technology is necessary or useful at all. As far as I know, there are still many open questions about public surveillance and the use of technology for face recognition and video recording. While I think these tools are very useful, I also understand the ethical concerns that come with using them.
Computer vision is also quite difficult to use, considering how specific the settings are in which it works best. For example, background subtraction and brightness thresholding can fail if the person in the scene has a color or brightness similar to their surroundings. The fact that we have to design the physical conditions in which computer vision will be used bothers me. Although computer vision works well and extracts useful information in an ideal setting, figuring out how to make it work in any setting would be even more useful and beneficial to society.
In short, the intersection of computer vision and interactive media seems to have great potential in the coming years. Each field has areas where it can be applied on its own, and the developments made so far hint at how much further both could advance everyday life.
When I was a kid, I was obsessed with the game Chicken Invaders. I remember spending 2-3 hours playing it every time I was free from school and homework; it was practically a way of life on my PC back then. So for this midterm project I wanted to emulate the engine of that game and have fun creating it. I call my interactive game “Alien Slayer.”
Here’s my design:
Here’s the overall structure of the code design:
let spritesheet;
// example spritesheet: https://png.toolxox.com/vhv?p=imRwRx_top-down-spaceship-sprite-hd-png-download/
// i'll edit the spritesheet so that the spaceship can only move left, right, up, down
let sprites = [];
let direction = 1;
let step = 0;
let x; //the spaceship's x position
let y; //the spaceship's y position
let speed = 4; //the animation speed, image moving on canvas
function preload() {
  // load the spaceship sprite sheet (or create a pixelated spaceship instead)
  spritesheet = loadImage("spaceship.png");
}

function setup() {
  createCanvas(400, 400);
}
function draw() {
  background(220);

  // create a cosmos background
  // 1. gradient color
  // 2. use loops (for/while loops) to create stars scattered across the space

  // create evil aliens
  // 1. create a class with show() and move() for the evil aliens, using loops (for/while loops). The number of evil aliens is random, from 2 to 10. They will appear at the top of the canvas
  // 2. use loops (for/while loops) to make the evil aliens move down 5 steps every time the frame is refreshed.
  // 3. the evil aliens will randomly drop bombs.

  // use sprite sheet animation for the spaceship
  // 1. use (for loops) to create a nested array for the spaceship sprite sheet
  // 2. use get() to get different tiles of the sprite sheet (a rough sketch of this follows this code plan)
  // 3. use function keyPressed() and if conditions to make the spaceship move each step

  // create a laser beam attack for the spaceship
  // 1. create a class to draw the laser beam and use show() to call out the beam
  // 2. use function mousePressed() {} to shoot the laser beam that can only go up

  // gameplay
  // game screen gives instructions and waits for user input
  // the spaceship is moved around with the keyboard to the location vertically under an evil alien to attack it (sprite sheet animation)
  // if (){} the laser beam hits the evil alien, the alien is obliterated
  // if (){} the bombs dropped by the evil alien hit the spaceship, it loses one life.
  // keep playing until all evil aliens are obliterated.
  // if (){} one evil alien touches the spaceship, the player loses.

  // displaying score
  // the user gets one point every time an evil alien is obliterated.
  // the user receives 5 lives per game. if the number of lives == 0, the user loses and the game ends.
}
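Since a lot of this plan hinges on slicing the sprite sheet with get() and moving the ship with keyPressed(), here is a rough sketch of how those two pieces could fit together. The 64×64 frame size and the single row of four frames are only assumptions for illustration, not the final sprite sheet.

function setup() {
  createCanvas(400, 400);
  // slice the sheet into individual tiles with get(); assumes 4 frames of 64x64 in one row
  for (let i = 0; i < 4; i++) {
    sprites.push(spritesheet.get(i * 64, 0, 64, 64));
  }
  x = width / 2;
  y = height - 80;
}

function keyPressed() {
  // the step size reuses the speed global; tune as needed
  if (keyCode === LEFT_ARROW)  { x -= speed; }
  if (keyCode === RIGHT_ARROW) { x += speed; }
  if (keyCode === UP_ARROW)    { y -= speed; }
  if (keyCode === DOWN_ARROW)  { y += speed; }
}

Inside draw(), image(sprites[step], x, y) would then render whichever frame is current.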
The most frightening, or at least most complex, part of the project is matching the coordinates of the aliens’ bombs with the spaceship, so that the ship loses a life when a bomb touches it. Similarly, the laser beam’s coordinates have to match those of an alien so that the spaceship can kill it. I also still need to figure out how to make the aliens drop bombs erratically for the spaceship to dodge, and how to display the score and the spaceship’s lives on a translucent board at the top of the canvas.
So far I have been trying to write the algorithm for matching coordinates, which is mostly a matter of if statements, and testing the algorithm for making the aliens drop bombs at random. This game is essentially about two objects sharing the same coordinates, plus randomness.
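Roughly, the matching could look something like the sketch below. The names aliens, bombs, shipX/shipY, and lives, as well as the hit distance of 20 pixels, are placeholders rather than my final code.

// each alien has a small random chance to drop a bomb every frame
for (let alien of aliens) {
  if (random(1) < 0.01) {
    bombs.push({ x: alien.x, y: alien.y });
  }
}
// a bomb "hits" when its coordinates come close enough to the spaceship's
for (let i = bombs.length - 1; i >= 0; i--) {
  if (dist(bombs[i].x, bombs[i].y, shipX, shipY) < 20) {
    lives -= 1;          // the spaceship loses one life
    bombs.splice(i, 1);  // remove the bomb that hit
  }
}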
I’m also going to add a sound library to the project (for example, a laser beam sound, the sound of an alien dropping a bomb, the sound of the spaceship moving, etc.).
For my midterm project, I want to create a Fruit Ninja-style game, but instead of cutting fruits, the user cuts flowers, simulating cutting flowers from a garden.
This first week I wanted to work on the basic structure of the game.
I have the backbone of the game coded: the instructions display and the game itself, in which ellipses stand in for the flowers and triangles stand in for insects, which act as the bombs. I started with a simple design so that it would be easier for me to code and understand every step.
It was a bit challenging for me to understand and to add the GameObject class.
Everything that moves or appears on the screen in the game, such as fruits or bombs, is based on the GameObject class. This is why it’s helpful:
In simple terms, GameObject is like the basic DNA for all the items in the game. It makes creating and managing game items easier and more organized.
Saves Time: It avoids writing repetitive code for common game item functions, such as moving or displaying on the screen. All items can use it after you write it once in GameObject.
Maintains Order: It helps keep the game’s code clean, since you always know where the shared behavior lives.
Facilitates Modifications & Simple to Add New Content: Modifying an attribute that should be shared by all items (such as movement) only requires making a change in GameObject; all items will then update accordingly.
Another thing I had to learn was how to increase the difficulty as the user slices flowers. I added this feature to my GameObject class.
The line below allows for dynamic adjustment of the game’s difficulty or pace, making objects fall faster or slower depending on the speedMultiplier value.
Accordingly, if the game is running at 60 frames per second (fps), the object glides smoothly down the screen because its y position is updated 60 times per second.
this.y += this.speed * speedMultiplier;
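To make that concrete, here is a stripped-down sketch of the kind of class I mean; it is not my exact code, and speedMultiplier is assumed to be a global that grows as flowers are sliced.

class GameObject {
  constructor() {
    this.x = random(width);    // start at a random horizontal position
    this.y = 0;                // start at the top of the canvas
    this.speed = random(2, 5); // base falling speed
    this.size = 40;
  }
  update() {
    // speedMultiplier scales the fall, so the game speeds up as the score rises
    this.y += this.speed * speedMultiplier;
  }
  display() {
    ellipse(this.x, this.y, this.size); // a flower is drawn as an ellipse for now
  }
  offScreen() {
    return this.y > height; // true once the object falls past the bottom edge
  }
}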
Furthermore, I had to add a lot of if statements, such as: if an ellipse is not cut, the game is over; if a triangle is cut, the game is also over.
A “handle” function takes care of a particular set of tasks within the code. For example, handleFruits() in a game might add new fruits, update their positions, and check if you missed any, all in one place. It ensures that everything that has to happen with fruits is done correctly and helps keep the code neat.
function handleFruits() {
  if (frameCount % 60 === 0) { // at 60 fps, a new EllipseFruit object is added to the fruits array every second
    fruits.push(new EllipseFruit());
  }
  for (let i = fruits.length - 1; i >= 0; i--) {
    fruits[i].update();  // updates the position or state of the fruit
    fruits[i].display(); // draws the fruit on the screen
    if (fruits[i].offScreen()) {
      gameOver = true; // end the game if any fruit goes off-screen
      break;           // exit the loop to immediately handle game over
    }
  }
}
Next steps:
Next week I’ll work on improving the game and making it more aesthetic. I need to add pictures of the flowers and change the cursor to a pair of scissors. I also have to add sound, so that every time the user cuts a flower there is a sound effect in the background. Lastly, I have to figure out how to display the instructions first when using the fullscreen option.
For anyone who has ever been interested in fusing art and technology but has been afraid to take the step, Golan Levin’s paper, “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers,” is an eye-opener. It basically says: “Hey, you don’t need to be a tech whiz to play with computer vision. Come investigate!” It aims to dismantle the intimidation associated with the technology and demonstrate how it can be an interesting and accessible tool for designers and artists.
In the past, computer vision was a high-tech field used only by the military or for serious scientific applications. But Levin demonstrates that it is no longer exclusive to specialists: artists are now using computer vision to create amazing, interactive works of art thanks to more affordable devices, faster computers, and an online community of shared code.
This paper made me think about broader issues, such as the responsible use of technology in art and privacy considerations. While it’s exciting to see how far technology can take art, it’s also critical to consider the effects of our creations.
To conclude, I would like to end with two questions for reflection:
How can artists ensure that their use of computer vision respects individuals’ privacy?
What influence will designers and artists have on how society perceives and uses this technology going forward?
I had personally never encountered the concept of computer vision before this reading. So, as I was reading, I could not help but be amazed by the ways in which a concept as complex as computer vision can be adapted for different uses. It was very interesting to see the range of computer vision applications, going from military purposes to interactive art. To me, such a range showcases the duality of computers: they go beyond the purely technical and can encompass concepts that are typically seen as “non-technical.” The author also brought up different computer vision techniques, such as frame differencing and brightness thresholding. His introduction of these techniques instilled in me the idea that we cannot simply rely on computers to produce optimal outcomes on their own; a well-prepared environment is almost always needed to help the computer generate them.
Thus, the relevance of user interaction comes into play. Are humans vital for the success of computer vision systems? Do we need to play an active role to ensure that computer vision systems are optimized correctly? In my eyes, the significance of the physical environment in computer vision underscores the role human interaction plays in shaping how these systems function. Humans can manipulate and optimize the environment to enhance a system’s performance; whether it is simply adjusting the lighting or placing sensors in the space, human intervention plays a vital role. Beyond this, it is users’ interaction with computer vision systems that determines how their output is interpreted and used. Therefore, human intervention is essential for harnessing the full potential of computer vision and ensuring it runs successfully. To me, computer vision as presented by the author showcases the interconnected nature of computers and users, each somewhat dependent on the other for success.
For this week’s assignment, I chose to create a generative text project. To find inspiration, I browsed through some datasets on Kaggle, eventually landing on a dataset of tweets in which each tweet is labeled with a certain emotion (https://www.kaggle.com/datasets/nelgiriyewithana/emotions). I then tried to use the data to create a somewhat meaningful conversation between two people.
First, I uploaded an image of two characters speaking and thought about putting text over their heads, like comic book speech bubbles. The speech, in this case, is simply one tweet from the dataset, but since every tweet is written in the first person, this gives the illusion of a fluent conversation. The main idea behind creating the conversation was to randomly choose what the first person says, identify the emotion that speech is tied to, and finally pick another random tweet from the dataset with the same emotion. The only manual addition to the text is the connective words, which are also chosen randomly from an array I wrote: let openers = ["True, " , "Right!, " , "Agreed, " , "Same, ", "Fax, " , "Relatable, " , "Yeah, " , "I see, "]; //additions to the second speech
I made it easier to correlate text with emotions by creating a separate array of texts for each emotion, so that the random function only has to choose among texts of the same emotion. An example of a conversation is shown below; with each run of the program, different texts and emotions are generated.
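The matching logic itself is short; roughly, it works like the sketch below. The sample strings here are placeholders rather than actual lines from the dataset, and the function name is just for illustration.

// one array of tweets per emotion; the sample strings below are placeholders
let tweetsByEmotion = {
  joy: ["i feel so happy today", "i am feeling thankful for the little things"],
  sadness: ["i feel a bit lost lately", "i am feeling quite low this week"],
};
let openers = ["True, ", "Right!, ", "Agreed, ", "Same, ", "Yeah, ", "I see, "];

function makeConversation() {
  // pick a random emotion, then a random tweet of that emotion for the first speaker
  let emotion = random(Object.keys(tweetsByEmotion));
  let firstLine = random(tweetsByEmotion[emotion]);
  // the second speaker replies with a random opener plus another tweet of the same emotion
  let secondLine = random(openers) + random(tweetsByEmotion[emotion]);
  return [firstLine, secondLine];
}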
In the future, I would like to make the conversation make more sense by adding more variables and editing the texts, but sometimes the generated outcome of this code is funny enough to be worth keeping as is.
I really enjoyed reading the chapter from Don Norman’s “The Design of Everyday Things.” I found it engaging, and the author gave various examples and insights on why design matters in everything. Norman says that two of the most important characteristics of good design are “discoverability” and “understanding,” and through the example of doors he clearly explains why these components are essential to designing. I completely agree with his view that many current designs are overly complex and should not merely focus on beauty, but also on the human interaction and accessibility side of design.
I liked how Norman gives importance to the concept of Human-Centered Design (HCD), which was a real eye-opener for me. He emphasizes the need for designers to consider potential errors during the design process itself. This taught me that checking for such potential errors leads to better user interaction with a design, and ultimately to a better end experience for the user. I believe this concept is important in every design process, and I would like to incorporate it into my future designs and projects.
For this assignment, I drew inspiration from a previous student’s work that used 2018 World Cup data to create a graph. As a passionate football enthusiast, and considering that the FIFA World Cup is the biggest sports tournament in the world, I decided to collect data on the total number of goals scored in every World Cup from 1930 to 2022. My goal is to present this data as a bar graph, visually depicting the historical trends and patterns of goals scored across the tournaments.
A highlight of some code that I’m particularly proud of:
// Inserting bars
for (let i = 0; i < data.length; i++) {
  let year = data[i].getNum('YEAR');
  let goals = data[i].getNum('GOALS');
  let x = map(year, 1930, 2022, margin + 20, width - margin); // mapping the year
  let y = map(goals, 0, maxGoals, height - margin, margin);   // mapping the number of goals
  let barWidth = w / data.length;
  fill('#E8B6B6'); // color of the bars
  rect(x - barWidth / 2, y, barWidth, height - margin - y); // draws the bar
  fill('#100D0D'); // color of the text
  text(goals, x + 6, y - 10); // writes the number of goals
}
Embedded Sketch
Reflection and ideas for future work or improvements
For future improvements, I would like to add various patterns to the bars of the graph. Additionally, I plan to implement interactivity, allowing users to click on individual bars to reveal which country achieved the highest goal count in each respective tournament.
CONCEPT:
In this assignment, my primary objective was to visualize data pertaining to different types of energy consumption spanning from 1900 to 2018. Given the diverse nature of energy sources and their consumption patterns, I decided to employ a line graph as the mode of visualization. However, due to the multitude of energy types and their varying magnitudes of consumption, presenting them all on a single graph would result in a cluttered and incomprehensible visualization. To address this issue, I opted to implement a dropdown menu using the createSelect() function. This feature allows users to select a specific energy type, and subsequently, the graph updates dynamically to display the consumption trend exclusively for the chosen energy source.
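For reference, the dropdown itself only takes a few lines with createSelect(); something along these lines, where the option names are just illustrative and the real list would come from the dataset’s columns:

let energySelect;

function setup() {
  createCanvas(800, 500);
  // dropdown for choosing which energy type to plot
  energySelect = createSelect();
  energySelect.position(20, 20);
  energySelect.option('Coal');
  energySelect.option('Oil');
  energySelect.option('Solar');
}

function draw() {
  background(255);
  let selected = energySelect.value(); // read the current choice every frame
  // ...plot only the consumption series for `selected`
}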
One of the prominent challenges encountered during the implementation phase was the disparate scales of energy consumption among different energy types. Some energy sources exhibited significantly higher consumption values compared to others. Consequently, if a uniform y-axis scale were to be applied across all energy types, it would lead to visual distortions, making the graph incomprehensible. To mitigate this challenge, I implemented a dynamic scaling mechanism. This involved calculating the maximum consumption value for the selected energy type and mapping the y-axis scale accordingly. By dynamically adjusting the scale based on the maximum consumption value of each energy type, the visualization maintains clarity and accuracy across all displayed data points.
My favorite part of the code is processData(). It extracts and organizes data for visualization. It initializes empty arrays xValues and yValues, then loops through each row of the dataset. It extracts year and energy consumption values based on the selected energy, storing them in the arrays. This function ensures accurate representation of energy consumption trends.
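A rough sketch of that idea is below, assuming the table has a 'Year' column and one column per energy type; the names energyData and margin are placeholders, and the dynamic y-scale is computed from the maximum of the selected series only.

let xValues = [];
let yValues = [];

function processData(energyData, selectedEnergy) {
  xValues = [];
  yValues = [];
  for (let r = 0; r < energyData.getRowCount(); r++) {
    xValues.push(energyData.getNum(r, 'Year'));         // year for the x-axis
    yValues.push(energyData.getNum(r, selectedEnergy)); // consumption of the chosen type
  }
}

// dynamic scaling: map each point against the maximum of the selected series
function pointFor(i) {
  let maxConsumption = max(yValues);
  let px = map(xValues[i], 1900, 2018, margin, width - margin);
  let py = map(yValues[i], 0, maxConsumption, height - margin, margin);
  return createVector(px, py);
}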
REFLECTIONS:
Completing this assignment was definitely challenging, but I persisted by referring to the p5.js documentation whenever I encountered difficulties. This helped me understand how to use different functions and techniques to achieve my goals. I also attempted to enhance the visualization by incorporating a background picture, but encountered some issues with how it interacted with other visual elements. I’m eager to continue exploring this project further. I see a lot of potential for adding more interactive elements to make the visualization even more engaging and informative.
Initially, I tried to figure out whether data visualization or generative text would be a cooler project to work on. I tried visualizing precipitation for every city, but I couldn’t find large, clean datasets and had difficulty setting a world map to scale as the background of my canvas.
Meanwhile, on the generative text side, I experimented with different fonts and different ways to display and adjust text. I came across the textToPoints() method, which creates an array of points from a piece of text when using a specified font. I looked for cool implementations online and came across this video by Ed Cavett –
I thought this was amazing and decided to use this code and logic to create interactive generative text.
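As a quick illustration of what textToPoints() gives you, a minimal sketch could look like this; the font file name is an assumption.

let myFont;
let points = [];

function preload() {
  myFont = loadFont('Roboto-Regular.ttf'); // any .ttf/.otf file works
}

function setup() {
  createCanvas(600, 200);
  // sample the outlines of the word into an array of {x, y} points
  points = myFont.textToPoints('hello', 50, 130, 96, { sampleFactor: 0.2 });
}

function draw() {
  background(0);
  stroke(255);
  for (let p of points) {
    point(p.x, p.y);
  }
}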
First of all, I wanted dynamic text, so that whatever the user typed into a text box would be displayed on the screen. Although I thought this would be easy, it was challenging to implement, as I had to fully understand the lineMaker() class created by Ed Cavett and change some aspects of it to ensure that the text was generated dynamically.
I also wanted to add the typed words to the background; this involved changing their position depending on the text length and using loops to fill the whole screen with the correct spacing. I also experimented with color and alpha values to see what suited it best.
I also wanted some moving letters based on what the user had typed. For each letter typed, I created a Letter object that moves in a circle of changing radius, giving the effect that the circle is closing in on the center of the canvas. The letters disappear as the text is changed.
After reading about signifiers in this week’s reading, I had to make sure the user knew how to change the text at the center of the canvas. So, I added a message next to the text box explaining that pressing Enter updates the text at the center.
I also wanted a ‘fire’ effect instead of a lightning effect on the letters at the center, so I changed the color values used to highlight the text and the color of the ‘darts’ too.
Sketch
PLEASE ENTER TEXT IN THE TEXTBOX AND PRESS ENTER!! HAVE FUN!!
Challenges while implementing
I ran into a lot of errors while implementing the code. The backspace functionality, removing letters already on the canvas, was tricky to implement: it would duplicate the letters. I fixed this by clearing the array and regenerating it every frame, although this is not the most efficient approach (that would be pushing and popping elements at the end of the array).
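The rebuild itself is only a few lines, roughly like this, where letters is the array of Letter objects and input is the textbox:

// throw away the old Letter objects and rebuild them from the current textbox value,
// so deleted characters simply stop being recreated
letters = [];
for (let ch of input.value().split('')) {
  letters.push(new Letter(ch));
}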
Aligning the typed words in the background of the canvas was another challenge, and it took some time to get right. Initially, I tried to use the array generated by the textToPoints() function to draw the background words as points, but the program would lag or crash due to the large number of points, so I decided to stick with using text.
The code that generates the darts and forms the word in the center, based on Ed Cavett’s code, was fairly complicated for me, and it took a lot of time to understand. Thankfully, I did not have to change much; it was only important to broadly understand what each segment does so that I could change its properties as intended.
Code that I am proud of
This project took a lot of time, and I am really proud that I was able to make a dynamic version of Ed Cavett’s text art. In addition, I implemented the background and the spiral letters, as well as the Letter class. I am most proud of the code for the Letter class, shown below:
class Letter {
  constructor(char) {
    this.char = char;
    this.x = random(width);
    this.y = random(height);
    this.angle = atan2(this.y - height / 2, this.x - width / 2); // angle from center
    this.radius = 300;
  }

  update() {
    let angularSpeed = 0.05;
    // increase or decrease radius based on its current value
    if (this.radius >= 400) {
      this.radiusDecreasing = true; // boolean to determine whether the circle should shrink or grow
    } else if (this.radius <= 200) {
      this.radiusDecreasing = false; // false if radius is below 200
    }
    // increment or decrement radius
    if (this.radiusDecreasing) {
      this.radius -= 1;
    } else {
      this.radius += 1;
    }
    // update angle and recompute the position on the circle
    this.angle += angularSpeed;
    this.x = width / 2 + cos(this.angle) * this.radius;
    this.y = height / 2 + sin(this.angle) * this.radius;
  }

  display() {
    textSize(fontSize);
    fill(255, 150, 0);
    text(this.char, this.x, this.y);
  }
}
I also spent a lot of time implementing the alignment of the words in the background; this is the piece of code that does that:
// Word Decor
let charArray = input.value().split('');
word = input.value(); // implementing word decor
// calculate text width
let text_Width = myFont.textBounds(word, 0, 0, fontSize).w;
// calculate horizontal position for centering
let centerX = (width - text_Width) / 2;
/* earlier experiment (commented out): drawing the word as points
push();
stroke(255);
for (let i = 0; i < points.length; i++) {
  point(points[i].x, points[i].y);
}
pop();
*/
for (let j = 0; j < num_cols; j++) {
  for (let i = 0; i < num_lines; i++) {
    fill(200, 200, 200, 20);
    text(word, centerX, 50 + i * fontSize);
  }
  translate(-text_Width, 0);
}
// reset canvas
translate(text_Width * num_cols, 0);
for (let j = 0; j < num_cols; j++) {
  for (let i = 0; i < num_lines; i++) {
    fill(200, 200, 200, 20);
    text(word, centerX, 50 + i * fontSize);
  }
  translate(text_Width, 0);
}
// resets canvas to OG position
translate(-text_Width * num_cols, 0);
Here, the translate() calls are used to fill the whole canvas, and I make sure to translate back to the original position once the drawing is done. Looking back, I think using push() and pop() in the right way would have made this easier, but this is the way I decided to implement it.
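For comparison, a pop()-based version of the leftward pass could look something like this; the rightward pass would mirror it with a positive offset, and no manual reset would be needed.

// each column gets its own isolated transform via push()/pop()
for (let j = 0; j < num_cols; j++) {
  push();
  translate(-j * text_Width, 0); // shift this column only
  for (let i = 0; i < num_lines; i++) {
    fill(200, 200, 200, 20);
    text(word, centerX, 50 + i * fontSize);
  }
  pop(); // the transform is discarded automatically
}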
Reflection/Scope for improvement
I am very happy with what I achieved in the end. The project took a lot of time, but I learnt a lot, especially while trying to understand the lineMaker() class. However, there are certain additional ideas I would love to look into in the future but couldn’t implement now because of errors that would take time to debug:
Increase the number of letters in the circle – you will notice there is an unused variable called lettermultiplier at the beginning, set to 5, as well as a commented-out for loop. This was for adding extra letters to the circle; however, the for loop doesn’t work as expected and the letters move completely randomly instead of moving smoothly (it probably has something to do with how each Letter object is instantiated, but I couldn’t pinpoint the source of the error).
Use a different font (I have used Roboto, which is fairly common; maybe a fancier font would look better).
Experiment with different uses of the textToPoints() function. It can be used in a variety of interesting ways; although I have used it in only one of them, there seem to be endless possibilities with this function.