Week 5 Reading Response – Dachi Tarughishvili

The article discusses computer vision, a field of computer science concerned with designing algorithms that enable computers to analyze digital content. It focuses mainly on new applications of computer vision in digital media, art, and beyond, which are now accessible to a much wider range of consumers.

There were many points in the article that I found interesting, from the methodologies behind vision algorithms to their social implications. Myron Krueger’s legendary Videoplace reminded me of a virtual body performance project I did in last year’s class. I created an environment in Unity that, similar to how Videoplace used the human body as a canvas, allowed interaction with elements through movement. I also used bodily movements to signify unlocking chakras and connecting to the outer world, with physical movements reflected in VR through body-capture technology. While the methodologies differ, at their core both are interactive performances that require a combination of computer technology, human creativity, and physical involvement.

The second and far more interesting point involves the ethical considerations raised by the Golden Gate Bridge incident. How morally acceptable is it for us to record people’s final moments for all the world to see? That all of this happens without consent is the most problematic aspect. There is a fine line here between artistic expression and socially acceptable behavior. In the future, this line will blur further as the cameras around us capture more and more data and algorithms grow more sophisticated. Soon, an immense amount of data about our daily routines, choices, feelings, and other personal information, including our facial expressions and movements, will be collected and stored in massive databases. Unfortunately, there is potential for this data to be exploited by corrupt governments or large technology conglomerates.

On a final note, I agree that computer vision technology has enormous potential for innovation and creativity, but it must be used ethically and responsibly, with privacy concerns addressed as the technology moves forward.

Midterm Progress 1(Space Navigators) by Sihyun Kim

Concept

The image shown above is my brainstorming for the midterm project. Inspired by my childhood favorite, Flappy Bird (image shown below), I decided to create a game with similar mechanics. However, I made my game distinct from Flappy Bird by giving it a different theme and a different way to play. The theme of my game is “Space”: a rocket sent from Earth to explore space must avoid meteoroids of different sizes to prevent its destruction.


Flappy Bird

The user controls the rocket with his or her voice: the rocket’s movement is driven by the volume level of the microphone input.

Design 

The drawing above is the sketch I made while conceptualizing this game. The game will consist of two key elements: the rocket and meteoroids of different sizes. As of now, I am planning to find existing images in this style; however, if I cannot find any that satisfy me, I might draw the rocket and the meteoroids myself. For the background music, I found this playlist of background music on YouTube (shown below).

As of now, I am planning to use one of these songs in the game as the background music.

 

Challenges:

I intentionally started working on the parts of the game that I thought would be the most complicated and frightening features to code, which were:

Controlling the rocket through the mic input

let mic;
let rocket;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn(); // initializing microphone input
  mic.start(); // starting microphone input
  rocket = new Rocket(50, 200, 200); // creating a new rocket object
}

function draw() {
  background(220);
  // getting microphone input level
  let vol = mic.getLevel();
  rocket.move(vol);
}

// inside the Rocket class:
move(vol) {
  let adjustment = map(vol, 0, 1, 0, -20); // mapping the volume to an adjustment in velocity
  this.vy += adjustment; // applying the adjustment to the vertical velocity
}

Shown above are the code snippets for controlling the rocket through mic input. Implementing this part was easier than I expected because we were introduced to this kind of implementation in p5.js. I used p5.AudioIn() and getLevel() to access the microphone input and its volume level. Then, I used map() to map the volume to an adjustment in vertical velocity.
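To sanity-check the mapping outside of p5.js, here is a plain-JavaScript stand-in for map() (mapRange is my own name for illustration, not a p5 function), showing how a louder mic level turns into a stronger upward (negative) velocity adjustment:

```javascript
// Minimal re-implementation of p5.js map() (linear interpolation)
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
}

console.log(mapRange(0.25, 0, 1, 0, -20)); // quarter volume -> -5
console.log(mapRange(0.5, 0, 1, 0, -20));  // half volume -> -10
console.log(mapRange(1, 0, 1, 0, -20));    // full volume -> -20
```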

 

Moving Obstacles: 

After watching some videos of people playing Flappy Bird, I noticed that the x position of the bird stays the same; it only looks like it is moving because the obstacles are moving! So, I decided to make my obstacles (meteoroids) move as well. Creating the instances and making them move was not difficult; it was the timing that was challenging. I first tried to use random() for all the parameters, then realized that this would result in overlapping obstacles.

So, I contemplated how I could generate one meteoroid at a time. Figuring this out was quite challenging and took me around 30 minutes. Then I came up with the idea of using frameCount and some helper variables.

if (frameCount - lastMeteoroidTime >= meteoroidInterval) {
  // creating a new meteoroid
  let x = width; // starting the meteoroid from the right side of the canvas
  let y = random(50,350); // random y-position between 50 and 350
  let size = random(50, 100); // random size between 50 and 100
  //adding the created new Meteoroid object to the array
  meteoroids.push(new Meteoroids(x, y, size));

  // updating the last meteoroid creation time
  lastMeteoroidTime = frameCount;
}

 

So, what I have done is set the interval to 60 frames; if the difference between frameCount and the frame at which the last meteoroid was generated is greater than or equal to the interval, a new meteoroid is generated.
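Stripped of the p5.js specifics, the same spawn-timing logic can be exercised on its own (maybeSpawn and the plain object below are stand-ins for the real draw loop and Meteoroids class):

```javascript
const meteoroidInterval = 60; // spawn at most one meteoroid per 60 frames
let lastMeteoroidTime = 0;
let meteoroids = [];

// Called once per frame with the current frame count
function maybeSpawn(frameCount) {
  if (frameCount - lastMeteoroidTime >= meteoroidInterval) {
    meteoroids.push({ spawnedAt: frameCount }); // stand-in for new Meteoroids(x, y, size)
    lastMeteoroidTime = frameCount; // remember when we last spawned
  }
}

// Simulate 300 frames: exactly 5 spawns, at frames 60, 120, 180, 240, 300
for (let f = 1; f <= 300; f++) maybeSpawn(f);
console.log(meteoroids.length); // -> 5
```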

Collision detection

Collision detection with the boundaries was easy since I just had to check if the y position of the rocket was greater than 400 or less than 0. 

However, making the collision between any of the meteoroids and the rocket detectable was essentially the most frightening part of this project (so far). Honestly, I did not expect this to be one of the hardest parts, as I had similar coding experience in Introduction to Computer Science. However, detecting collisions when both objects are “circles” turned out to be different from detecting collisions when both are “rectangles”. But, after all, I was able to figure out how to code this as well!

  // checking collision with the rocket using the distance between circle centers
  checkCollision(other) {
    let distX = this.position.x - other.position.x;
    let distY = this.position.y - other.position.y;
    let distance = sqrt(distX * distX + distY * distY);
    if (distance < this.radius + other.radius) {
      noLoop();
    }
  }
I figured out how to do collision detection between two circles by using the distance formula I learned back in middle school. AND IT WORKED! Basically, I compute the distance between the centers of the meteoroid and the rocket, then check whether that distance is less than the sum of their radii. If this condition is true, the two objects are overlapping or colliding, because the sum of their radii is exactly the distance at which their boundaries touch when they are just barely in contact.
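Extracted into a standalone function, the same check looks like this (plain JavaScript, Math.sqrt instead of p5’s sqrt, so it can be tested outside the sketch):

```javascript
// Two circles collide when the distance between their centers
// is strictly less than the sum of their radii
function circlesCollide(x1, y1, r1, x2, y2, r2) {
  const distX = x1 - x2;
  const distY = y1 - y2;
  const distance = Math.sqrt(distX * distX + distY * distY);
  return distance < r1 + r2;
}

console.log(circlesCollide(0, 0, 10, 15, 0, 10)); // centers 15 apart, radii sum 20 -> true
console.log(circlesCollide(0, 0, 10, 25, 0, 10)); // centers 25 apart, radii sum 20 -> false
```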

Conclusion

Fortunately, I was able to overcome the challenges I encountered so far. In fact, I resolved the most frightening problem of this project: collision detection. Now, I am done with the main features of the game. I just have to add the images and sound, create the starting page, implement the scoring system, define the game-ending conditions, and enable restarting of the game.

Progress so far…

!! For now, the sketch just stops if the rocket is on the ground or touching the ceiling, because I have put noLoop() in all the if-statements involving collision checks. 😊

Week 5 – Midterm Progress – Lord of the Maze – Dachi Tarughishvili

Concept:

My midterm will be a game titled Lord of the Maze, inspired by the epic fantasy novel Lord of the Rings, which my previous project, Eye of Sauron, was also based on. This time around I want to make a maze-style game where the main character, Frodo, has to make his way out of the maze and reach Mount Doom (the volcano in Mordor where the Ring was forged and the only place where it can be destroyed). In addition to the maze, orcs roam along different pathways, and you should avoid them. Coming into direct contact with an orc decreases your health; if you lose all three health points, you lose the game. If you reach Mount Doom in time, you win the game and a victory image is displayed. Some features I am considering adding are: a time counter, sound effects, more orcs, more movement patterns, and an additional Eye of Sauron level before reaching Mount Doom and destroying the Ring.

Design:


These are some of the basic design elements behind my midterm project, as it is going to be a top-down, 2D, 8-bit pixel-art-style maze game. I am also going to add iconic soundtracks from LOTR to enhance the atmosphere, and I am thinking of other obstacles or objectives along the way, like an additional monster or the Ring. Additionally, I want a spell-casting ability, the design of which might either be animated or use the simpler visual changes p5.js offers. The maze itself is going to be fixed for ease of implementation and collision detection; however, I could pursue a more generative design in the future.

Classes will most likely include: UI, Ring, Orc, and Mount Doom, plus Game Win and Game Over conditions, and more.

Challenges:

Challenges will most likely come from collision detection. The maze logic should be implemented properly so that Frodo does not go through walls and break player immersion.
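One common approach (just a sketch of what I might do, assuming the maze is stored as a 2D array where 1 marks a wall; the tiny maze and canMoveTo below are hypothetical) is to check the destination tile before committing to a move:

```javascript
// 1 = wall, 0 = walkable path (a tiny hypothetical maze for illustration)
const maze = [
  [1, 1, 1, 1],
  [1, 0, 0, 1],
  [1, 0, 1, 1],
  [1, 1, 1, 1],
];

// Only allow the move if the destination tile is inside the grid and not a wall
function canMoveTo(row, col) {
  if (row < 0 || row >= maze.length) return false;
  if (col < 0 || col >= maze[row].length) return false;
  return maze[row][col] === 0;
}

console.log(canMoveTo(1, 2)); // open corridor -> true
console.log(canMoveTo(2, 2)); // wall -> false
console.log(canMoveTo(5, 0)); // outside the maze -> false
```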

Additionally, it will be quite difficult to make all the objects work together, as they each have to abide by different rules. I don’t think movement itself will be an issue, as most monsters will have predefined paths. Integrating the Eye of Sauron animation from my previous project, both programmatically and conceptually, might be challenging.

Risk Prevention:

To maximize the efficiency of implementation, I am going to approach the task step by step. First, I am going to design the maze and check basic collisions for the character.

After that, I am going to add orcs and make them move on predefined paths. I will then implement the UI elements, since by that point the game’s core logic will work and I can build up from there.

Additional logic concerning the Eye, reaching Mount Doom, the additional obstacle, the Ring, and the potential power will be built one piece at a time.

Eventually, I am going to polish everything together, improve my comments and code foundation, and possibly work on graphics and transitions.

As far as audio is concerned, each new element will be accompanied by a relevant audio file, which will be added along with it.

 

 

Week 4 Assignment: Personal Electronics Market in India

I had initially wanted to work with generative text to create poems in my native language (Bengali), with translations in English. Midway through this project, however, I realized how challenging this was, as the rules of grammar in English and Bengali are quite different, and I couldn’t just make one-to-one sentences, even simple ones.

Then I decided to go through the Kaggle website to search for suitable datasets. The dataset on the device market in India over the last 15 years (https://www.kaggle.com/datasets/michau96/device-market-in-india-over-last-15-years) was trending, so I decided to use it. Since the dataset was monthly, I first used Excel to take averages across each year, converting it to an annual dataset.

When it came to making the plot itself, I first tried to get the basic stacked histogram right. This was done using rectMode(CORNERS), as it allows you to specify two opposite corners of each rectangle. The x position of each bar was derived from the Year column, and the y position and height from the percentage value of each column, normalized to the desired height of the plot.

    rectMode(CORNERS); // to allow stacking of the bars
    // bar for mobile
    fill(this.mobileColor);
    rect(
      (this.dataYear - 2007) * w,
      y,
      (this.dataYear - 2007 + 1) * w,
      y - (h / 100) * this.mobile
    );
    // bar for desktop
    fill(this.desktopColor);
    rect(
      (this.dataYear - 2007) * w,
      y - (h / 100) * this.mobile,
      (this.dataYear - 2007 + 1) * w,
      y - (h / 100) * this.mobile - (h / 100) * this.desktop
    );
    // bar for tablet
    fill(this.tabletColor);
    rect(
      (this.dataYear - 2007) * w,
      y - (h / 100) * this.mobile - (h / 100) * this.desktop,
      (this.dataYear - 2007 + 1) * w,
      y - h
    );

After that, I decided to work on the graph legend. The legend class takes one of the bars and makes the legend based off of that (this works since every bar is identical when it comes to the number of groups and the color associated with each group).

Finally, I wanted to add a level of interactivity in the form of a popup that appears when you hover over a bar, similar to the statistics website Statista (statista.com). I tried using the mouseOver() event listener, but that didn’t work with the bar object (it is meant for p5 DOM elements), so I went the harder route of checking mouseX and mouseY against each bar’s dimensions.

The final result is below:

 

I would have loved to make this more generalized and capable of visualizing any dataset loaded by a user (kind of like ggplot in R). In that case, the program would need to work for any number of columns. But until I figure out how to make function arguments optional, this is kind of impossible.
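As it turns out, JavaScript does support optional arguments through default parameter values, and rest parameters can collect a variable number of columns; a minimal sketch (drawBar and stackTotal are hypothetical stand-ins for illustration, not functions from my sketch):

```javascript
// Default parameter values make trailing arguments optional
function drawBar(year, value, color = 'steelblue', baseYear = 2007) {
  return { x: year - baseYear, value, color };
}

console.log(drawBar(2010, 55));           // uses both defaults
console.log(drawBar(2010, 55, 'tomato')); // overrides only the color

// Rest parameters collect any number of column values into an array,
// so the same function can handle datasets with any number of columns
function stackTotal(year, ...columns) {
  return columns.reduce((sum, v) => sum + v, 0);
}

console.log(stackTotal(2010, 55, 30, 15)); // -> 100
```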

Reading Response 4: The Design of Everyday Things

Upon delving into “The Psychopathology of Everyday Things,” I found Don Norman’s exploration of design principles and challenges to be thought-provoking and relevant. The idea that technological advancements, while promising increased benefits, also introduce more complexities and challenges in usability, is a stark reality in today’s rapidly evolving tech landscape. This made me think of the evolution of smartphones, which our generation was the last to experience. Initially, phones were simple: calls, texts, and perhaps a basic camera (remembering my flip phone). However, we now have smartphones that can do almost everything but are accompanied by a myriad of complexities. Consider the addition of facial recognition, Bluetooth, and a multitude of applications on our smartphones. While these features enhance functionality, they often lead to confusion and frustration, such as the times I’m fumbling through settings and menus on my smartphone, realizing that the very advancements intended to make life easier can, at times, make it more challenging.

 

I believe that technology should enhance, not complicate, our lives. Thus, I agree that as we embrace innovation, designers must prioritize user experience and human-centred design to ensure that technological progress truly benefits users. For example, the shift from traditional wired earphones to wireless counterparts provided a significant leap in user convenience. Wireless earphones, like Apple’s AirPods, offer users unparalleled freedom of movement and freedom from tangled cords. However, they introduced new challenges, such as touch gestures that are not always intuitive. Moreover, since the two buds are not connected, it became very easy to misplace them.

To strike a balance between innovation and user-friendliness, designers should adopt a user-centric approach. Conducting extensive user testing and feedback sessions can provide invaluable insights into which features are genuinely beneficial and how users prefer to interact with them. Prioritizing essential functions and ensuring that they are easily accessible, perhaps through intuitive gestures or a clear menu hierarchy, may help prevent users from feeling inundated. For example, the ‘Find My AirPods’ feature was added to address the challenge of potential loss. Overall, achieving equilibrium involves understanding that not every technological advancement needs to be incorporated if it compromises usability.

Asciilumination – Assignment 4 – Dachi Tarughishvili

Passing by the Interactive Media Lab, I always noticed those TVs that used webcams to track our movement and display interesting visualizations using predefined symbols, and I always wondered how they actually worked. So for this data visualization assignment I wanted to recreate the effect and perhaps add some adjustments of my own. I followed the Coding Train’s coding challenge on ASCII text images, which is where most of my code comes from.
The basic mechanism behind this program is to map each pixel’s brightness, calculated by averaging its RGB values, to a character from a density string. In this case, I am using ‘ÑYUAD876543210?!abc;:+=-,._’, since the initial symbols take up more density and show contrast better, but it could realistically be anything; the brightest parts of the image map to no symbol at all. Additionally, there is no canvas: this is a direct HTML implementation with a little bit of CSS. I also added a slider for the density value, which adjusts the number of spaces appended to the string and acts similarly to a contrast slider in a photo-editing app. If you think about it, that’s actually what’s happening: more spaces mean a wider range of bright pixel values map to blank output. There is also a Toggle Color button in the top left, which assigns each symbol a color based on the original pixel value. This is done on a per-frame basis.
To see the full working project, make sure to follow the p5 link, since this website does not display it correctly, and you need to grant camera permissions anyway since it uses your webcam’s video.
Here is perhaps the most important code of the entire project:

 // loop to iterate over pixels
  for (let j = 0; j < video.height; j++) { // iterates over rows (height)
    for (let i = 0; i < video.width; i++) { // iterates over columns (width)
      const pixelIndex = (i + j * video.width) * 4;
      // calculates the index of the pixel in the video's pixel array from its x and y
      // coordinates (i and j): horizontal plus vertical displacement, where video.width
      // is the number of pixels per row. We multiply by 4 since each pixel takes up
      // 4 slots in the array (r, g, b, a).
      const r = video.pixels[pixelIndex];
      const g = video.pixels[pixelIndex + 1];
      const b = video.pixels[pixelIndex + 2];
      const avg = (r + g + b) / 3;
      const len = adjustedDensity.length; // used to map the brightness value

      const charIndex = floor(map(avg, 0, 255, 0, len));
      // maps avg from the range 0-255 to the range 0-len; floor is used to round down

      const c = adjustedDensity.charAt(charIndex);
      // brighter pixel = higher char index = lower density

      const charColor = `rgb(${r},${g},${b})`;
      // a template literal; JavaScript uses them to embed expressions within strings

      if (c == " ") {
        asciiImage += "&nbsp;";
        // if c maps to a blank, emit a true space entity
        // (HTML does not conventionally display plain empty spaces)
      } else if (check == true) {
        asciiImage += `<span style="color:${charColor};">${c}</span>`;
        // a span is an inline HTML container used to apply styles without a line break
      } else {
        asciiImage += c; // adds c to our image
      }
    }
    asciiImage += '<br/>'; // line break to start a fresh row
  }
  asciiDiv.html(asciiImage); // sets the HTML content of asciiDiv to the finalized ASCII
  // image; by continuously redrawing we update the div content, so the motion is live
}

The code is fully commented, but the general idea is to find the r, g, and b values of each pixel, average them to get a brightness value, and map that brightness to an index in the density string. The resulting character c is computed every frame and appended to the output; if color is on, its color is set from the pixel’s RGB values.
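The core brightness-to-character mapping can be isolated into one small plain-JavaScript function (a sketch: I append a single trailing space to stand in for the slider-controlled padding, and clamp the index to the last character so full brightness doesn’t map past the end of the string):

```javascript
// Dense glyphs first, a space for the brightest pixels
const density = 'ÑYUAD876543210?!abc;:+=-,._ ';

// Map a pixel's RGB values to a character from the density string
function brightnessToChar(r, g, b) {
  const avg = (r + g + b) / 3; // brightness in 0-255
  const index = Math.floor((avg / 255) * (density.length - 1)); // clamp to last char
  return density.charAt(index);
}

console.log(brightnessToChar(0, 0, 0));       // darkest pixel -> densest glyph 'Ñ'
console.log(brightnessToChar(255, 255, 255)); // brightest pixel -> ' '
```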

In the future, I could work on optimization. Refreshing the HTML elements this many times, especially when color is applied, is very taxing on the CPU, and the process becomes laggy; lowering the frame rate does not help much. Additionally, I could add more adjustments and variables, such as video size, font options, and more. The latter is especially interesting, since the font I am using is monospaced; a variable-width font would be harder to implement. Overall, I am very glad with how this project turned out and would like to keep working on it in the future.

Week 4 – Reading Response: Everyday Life Design | Haziel

I found the reading really interesting and relatable as it provides a thought-provoking exploration of design principles and their application to everyday life, particularly focusing on the challenges posed by modern design complexity. Don Norman’s concept of “Norman Doors” highlights a common frustration experienced by many individuals, including me, when interacting with poorly designed products, such as doors with ambiguous instructions.

The reading also made me think of other trivial daily life examples. For instance, the design of smartphone interfaces often reflects the principles discussed in the reading, particularly regarding feedback and conceptual models. When typing a message on the smartphone keyboard, the visual feedback of each keystroke appearing on the screen provides immediate confirmation of input, enhancing the user’s understanding of their actions. This feedback loop is crucial for users to feel confident and engaged with the device.

Assignment 4 – Lyrics Video | Haziel

For this assignment, I got inspiration from one of my favorite Brazilian songs, Wave by Antonio Carlos Jobim. The objective of this project is to display a canvas that looks like a YouTube lyrics video. Using generative text, I created a CSV file with the translation of the lyrics, which is then loaded in the main sketch along with the background image and the instrumental version of the song.

Rather than just displaying the lyrics on the screen, I added a bit of interactivity: the user chooses when to read the next sentence. If we click on the screen, we load one sentence at a time, following the order of the song. Additionally, I animated the sentences so they move in a wave to match the concept of the song. This lets the user immerse themselves in the project and read the (I would say) beautiful and romantic lyrics.

When coding this assignment, I also consulted the p5.js reference for the text() function.

function draw() {
  // Display the background image
  image(bgImage, 0, 0, width, height);

  // Sets the text alignment to center
  textAlign(CENTER);
  
  // Calculate y-coordinate with a sinusoidal function for wave animation
  let y = height - 20 + sin(frameCount * 0.05) * 10; // Adjust the amplitude (10) for desired wave height
                                                     // Adjust the frequency (0.05) for wave speed

  // Draw the current sentence at the bottom center with animated wave effect
  drawWords(width / 2, y);
}

function drawWords(x, y) {
  // Text color
  fill('gold');

  // Display the current sentence at the specified position on the canvas
  text(sentences[currentSentenceIndex], x, y);
}

Reflection: One area I could still improve is making the canvas more animated and dynamic. Currently, the animation is limited to the wave effect applied to the text as it moves across the screen. However, there are several ways to make the canvas more engaging, such as background animations and transitions between sentences.

Assignment 4 – Reading Reflection

The excerpt from Don Norman’s “The Design of Everyday Things” offers a perceptive look at the psychology of commonplace objects and their designs. In his discussion of frequent problems with doors, light switches, and technology, Norman highlights the significance of design features like affordances and signifiers. He contends that effective design should be intuitive, enabling people to utilize a product without the need for manuals, mostly by means of obvious and sensible hints included into the design itself.

When I think about it, Norman’s emphasis on user-friendly design resonates even more. From my own experience, I have come across a lot of non-intuitive products and interfaces, which has caused annoyance and mistakes. I’m more conscious of the importance of user-centered design now that I’ve read this, and it’s inspired me to think about how the systems and items around me may be made clearer and easier to use. Norman’s concept of “affordances,” which refers to the potential interactions between people and their surroundings, has altered my understanding of how products should be made to fit naturally with human use rather than requiring users to adjust to complicated or counterintuitive systems.

Assignment 4 – UI/UX Reading

“Design of Everyday Things” presents a compelling exploration of the interdisciplinary nature of design and the challenges involved in creating successful products. For me, it emphasized the importance of considering various perspectives, goals, and priorities from different disciplines, such as marketing, engineering, manufacturing, and customer support. In the modern age, where industries increasingly rely on cross-functional collaboration to innovate and deliver products that meet diverse consumer needs, the point the text makes truly shines. In today’s fast-paced market, where competition is fierce, understanding and integrating these varied viewpoints is crucial for achieving market success.
Don Norman highlights the significance of user experience and the formation of conceptual models in product design. He underscores the gap that often exists between the designer’s vision and the user’s perception, emphasizing the critical role of the system image in bridging this divide. In our modern digital landscape, where user-centered design principles are paramount, this notion holds particular relevance.
Norman stated many things that I found relevant to myself. One such idea was that designers often take shortcuts when something is too complex. This reminded me of myself: I am lazy, and if there is a way to do something quicker without too much of a quality sacrifice, I will take it. Interestingly, I was also reminded of Japanese web design and how it differs from Western websites: while here in the West sites are usually as streamlined as possible, Eastern sites often feel cluttered to me and present a lot of information all at once.