Afra Binjerais – Midterm Progress

So far I have my game running, but it is still in progress. I still have to embed sound and change a couple of things, but you get the point of the game.

I found it hard to make the sprite and the ball move within the same function, but with the help of Pi (thanks, Pi!) I was able to do that.

Overall, I still have to add a couple of things, as I have a lot of ideas that I will try to put into action. But so far, I’m happy with my progress, even though my game is simple.

This is some of my code that’s related to the game:

var ball_diameter = 30;
var bomb_diameter = 10;
var xpoint;               // ball x position
var ypoint;               // ball y position
var zapperwidth = 6;      // width of the zapper strip on the left edge
var numofbombs = 20;
var bombposX = [];
var bombposY = [];
var bombacceleration = [];
var bombvelocity = [];
var time = 0;             // frame counter for the current bomb wave
var timeperiod = 0;       // frames one bomb needs for a full drop (set in setup)
var score = 0;
var posX;                 // x position at which the ball touches the zapper

function setup() {
  createCanvas(640, 480);

  // Simulate one bomb falling from the top to estimate how many
  // frames a full drop takes; that count becomes the respawn period.
  var temp00 = 0, temp01 = -20;
  while (temp01 < height) {
    temp00 += 0.02;
    temp01 += temp00;
    timeperiod++;
  }

  posX = zapperwidth + 0.5 * ball_diameter - 2;
  xpoint = 0.5 * width;
  ypoint = height - 0.5 * ball_diameter + 1;

  initbombpos();
}

function draw() {
  background(137, 209, 245);

  // Zapper strip along the left edge
  fill(239, 58, 38);
  rect(0, 0, zapperwidth, height);
  scoreUpdate();

  // Draw the bombs
  fill(255);
  noStroke();
  for (var i = 0; i < numofbombs; i++) {
    ellipse(bombposX[i], bombposY[i], bomb_diameter, bomb_diameter);
  }

  updatebombpos();

  // Draw the ball; it constantly drifts left toward the zapper
  fill(31, 160, 224);
  ellipse(xpoint, ypoint, ball_diameter, ball_diameter);
  xpoint -= 3;

  // Holding the mouse pushes the ball back toward the right edge
  if (mouseIsPressed && (xpoint + 0.5 * ball_diameter) < width) {
    xpoint += 6;
  }

  // Game ends if the ball reaches the zapper or hits a bomb
  if (xpoint <= posX || bombCollisionTest()) {
    gameover();
  }

  time += 1;
}

function updatebombpos() {
  for (var i = 0; i < numofbombs; i++) {
    bombvelocity[i] += bombacceleration[i];
    bombposY[i] += bombvelocity[i];
  }

  // Once a full drop has elapsed, respawn the bombs
  if (time > timeperiod) {
    initbombpos();
    time = 0;
  }
}

function initbombpos() {
  for (var i = 0; i < numofbombs; i++) {
    bombacceleration[i] = random(0.02, 0.03);
    bombvelocity[i] = random(0, 5);
    bombposX[i] = random(zapperwidth + 0.5 * ball_diameter, width);
    bombposY[i] = random(height / 4, height / 2); // spawn between one quarter and half the canvas height
  }
}

function bombCollisionTest() {
  // Collision when the centres are closer than the sum of the two radii
  var temp = 0.5 * (ball_diameter + bomb_diameter) - 2;
  var distance;

  for (var i = 0; i < numofbombs; i++) {
    distance = dist(xpoint, ypoint, bombposX[i], bombposY[i]);
    if (distance < temp) {
      return true;
    }
  }
  return false;
}

function gameover(){
  fill(255);
  textSize(32); 
  textAlign(CENTER, CENTER); 
  text("GAME OVER", width/2, height/2 - 20); 
  textSize(15); 
  text("Press space to restart", width/2, height/2 + 20); 
  noLoop();
}

function scoreUpdate(){
  score += 10;
  fill(255);
  text("SCORE: " + int(score/timeperiod), width - 65, 15);
}

function keyPressed() {
  if (keyCode === 32) { 
    restartGame(); 
  }
}

function restartGame() {
  time = 0;
  score = 0;
  posX = zapperwidth + 0.5 * ball_diameter - 2;
  xpoint = 0.5 * width;
  ypoint = height - 0.5 * ball_diameter + 1;
  initbombpos();
  loop();
}

Reading Response Week 6 – Stefania Petre

This week's reading revisits a familiar situation from the Interactive Media Lab, where a shadow projected on the lab's TVs welcomes us as we enter. It represents the combination of technology and creativity, opening the way for human exploration.

Building on our prior conversations, the text emphasises a key point: technology alone is not enough for achievement; it is the individuals who drive innovation who make a true impact. Myron Krueger's Videoplace is a good illustration of how skill and vision can transcend both time and technology, leaving an enduring mark on the industry.

Furthermore, the reading goes into the many ways that artists use interactive media. From experimental installations to immersive experiences, it demonstrates the limitless potential of artistic expression in the digital era. It emphasises the significance of user experience and technological affordability in producing meaningful interactive art pieces.

Overall, this reading discusses the different approaches that artists take to interactive media, and it also covers some of the things you have to consider in order for your art piece to work.

Midterm Progress – Flappy Falcon

A few weeks ago I stumbled upon one big memory:

It has been 10 years since Flappy Bird was taken down. What a time!

So, when we were required to make a game, this is exactly where my mind went. The concept of the game would be the same, but I want to make it NYU themed: instead of the bird, it will be the falcon. FLAPPY FALCON!

In terms of potential challenges, I will have to be careful with the code, but I have faith in myself! Let's get to work!
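Since the mechanic is meant to stay the same as Flappy Bird's, here is a minimal p5.js sketch of the core flap loop as I picture it: gravity pulls the falcon down, and a key press gives it an upward impulse. The variable names and tuning values below are placeholders of my own, not part of the actual game yet.

// Core flap mechanic sketch: gravity vs. an upward impulse on key press.
// All values here are placeholder tuning numbers, not final game values.
let falconY, falconVel;
const gravity = 0.5;
const flapStrength = -8;

function setup() {
  createCanvas(400, 600);
  falconY = height / 2;
  falconVel = 0;
}

function draw() {
  background(87, 6, 140); // NYU-ish violet backdrop

  falconVel += gravity;                    // accelerate downward each frame
  falconY += falconVel;
  falconY = constrain(falconY, 0, height); // keep the falcon on screen

  fill(255);
  ellipse(width / 2, falconY, 30, 30);     // stand-in for the falcon sprite
}

function keyPressed() {
  falconVel = flapStrength; // flap: instant upward velocity
}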

Reading Response Week 6 – Jihad Jammal

Jihad Jammal

Comm Lab

Professor Aaron Sherwood

Reading Reflection Week 6

Feb. 26, 2024

Bridging Worlds

Levin’s approach to computer vision in the arts serves as a potent democratizing force, effectively breaking down barriers that have traditionally separated the realms of advanced technology and creative expression. In a field that might appear daunting due to its technical complexities, Levin’s narrative fosters an inclusive environment. By presenting computer vision as an accessible tool for artistic exploration, he invites individuals from diverse backgrounds to engage with technology in a creative context. This democratization is crucial because it empowers a wider array of voices and perspectives to contribute to the evolving dialogue between technology and art. It challenges the notion that one must have an extensive background in computer science or fine arts to participate in this innovative intersection, thus fostering a more diverse and vibrant community of creators. The implication is clear: the future of art and technology is not reserved for a select few but is an open field for exploration by anyone with curiosity and creativity.

Moreover, Levin delves into the ethical landscape encountered by artists who use this technology to craft pieces that interact with and react to human actions. Issues of privacy, consent, and surveillance emerge as critical considerations. As such, the capability of computer vision to breach personal spaces, or to be deployed in ways that exploit or inaccurately portray individuals, warrants careful scrutiny.

Citations:

www.flong.com. (n.d.). Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers – Golan Levin and Collaborators. [online] Available at: https://www.flong.com/archive/texts/essays/essay_cvad/index.html

Week 5 Reading: The Evolution of Computer Vision – From Myron Krueger to OpenAI's SORA

On Feb 16, 2024, OpenAI released a preview of SORA, a text-to-video diffusion transformer model. With that, almost everyone will be able to (to an extent) generate the videos they imagine. We have come a long, long way since Myron Krueger's 1989 Videoplace (gosh, his implementation makes all my VR experiences look weak). In recent years, a lot of public computer vision models came out and became accessible: YOLO, GANs, Stable Diffusion, DALL-E, Midjourney, etc. The entire world was amazed when DALL-E showed off its inpainting functionality. However, it should be noted that such capabilities (or at least the theories behind them) have been around for a long time (e.g., PatchMatch is a 2009 inpainting algorithm, which later got integrated into Photoshop as the infamous Content-Aware Fill tool).

What a time to be alive.

And back in 2006, Golan Levin, another artistic engineer, wrote Computer Vision for Artists and Designers. He gave a brief overview of the state of computer vision and discussed frame differencing, background subtraction, and brightness thresholding as extremely simple algorithms that artists can utilize, then gave us links to some Processing code as examples at the end. I wish the writing contained a bit more of a how-to guide, with figures on how to set up the Processing interface and so on.
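To get a sense of how simple these techniques are in practice, here is a rough frame-differencing sketch in p5.js rather than Processing (so it is not one of Levin's own examples); it assumes a webcam is available through createCapture, and the motion threshold is an arbitrary value chosen just for illustration.

// Frame differencing: sum the per-pixel change between consecutive
// webcam frames as a crude measure of how much motion is happening.
let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  prevFrame = createImage(width, height);
}

function draw() {
  image(video, 0, 0);

  video.loadPixels();
  prevFrame.loadPixels();

  let movement = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Compare the red channel (a cheap stand-in for brightness) across frames
    movement += abs(video.pixels[i] - prevFrame.pixels[i]);
  }

  // Remember the current frame for the next comparison
  prevFrame.copy(video, 0, 0, width, height, 0, 0, width, height);

  // Arbitrary threshold: report "motion" when the total change is large
  if (movement > 1000000) {
    fill(255, 0, 0);
    textSize(24);
    text("motion!", 10, 30);
  }
}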

Golan wanted to stress that, in his own words, "a number of widely-used and highly effective techniques can be implemented by novice programmers in as little as an afternoon", bringing the power of computer vision to the masses. However, in order to get computer vision to the masses, there are certain challenges… mainly not technology, but digital literacy.

The Digital Literacy Gap in Utilizing Computer Vision

From observation, a stunning number of people (including the generation that grew up with iPads) lack basic digital literacy. There are some "things" you have to figure out yourself once you have used a computer for some time: to select multiple files at once, hold the Ctrl key and click on the files; on Windows, your applications are most likely installed in C:\Program Files (x86); if an app is not responding, fire up Task Manager and kill the process on Windows, force quit on macOS, or use the pkill command on Linux; if you run an application and its GUI is not showing up, it is probably running as a process in the system tray; and so on.

However, many people who have used computers daily for nearly a decade (a.k.a. my dad, and a lot more people, even young ones) still struggle to navigate their machines. For them, Golan Levin's article is not a novice-programmer tutorial but already an intermediate one – you have to have installed Processing on your computer, set up Java before that, and so on. Personally, I feel that a lot of potential artists give up on integrating technology because of the barrier to entry of environment setup (for code-based tools and computer vision). As soon as an enthusiastic artist tries to run some OpenCV code from GitHub and their computer says "Could not find a version that satisfies the requirement opencv", they just give up.

Nevertheless, things are becoming a lot more accessible. Nowadays, if you want to do this kind of computer vision processing but don't want to write the code, there are Blender Geometry Nodes and Unity Shader Graphs, where you can drag nodes around to get things done. For code demonstrations, there is Google Colaboratory, where you can run Python OpenCV code without dealing with any Python dependency errors (and even get GPUs if your computer is not powerful enough).

Golan mentioned, "The fundamental challenge presented by digital video is that it is computationally 'opaque.' Unlike text, digital video data in its basic form — stored solely as a stream of rectangular pixel buffers — contains no intrinsic semantic or symbolic information." This is no longer true in 2024, since you can either use semantic segmentation or plug your image into a transformer model to have each of your pixels labeled. Computers are no longer dumb.

The Double-Edged Sword of User-Friendly Computer Vision Tools

With more computer vision and image generation tools such as DALL-E, you can type text to generate images, of course with limitations. I had an amusing time watching a friend try to generate his company logo in DALL-E with the text in it; it failed to spell it correctly, and he kept typing the prompt again and again, getting frustrated with the wrong spelling.

In such cases, I feel that technology has gone too far. This is the type of computer vision practitioner that these new generations of easy tools are going to produce: ones who will never bother to open up an IDE and try coding a few lines, or to just get Photoshop or GIMP and place the letters themselves. Just because the tools get better does not mean you no longer have to put in effort to get quality work. The ease of use of these tools might discourage people from learning the underlying principles and skills, such as basic programming or graphic editing.

However…

The rate of improvement of these tools is really alarming. 

Initially, I was also going to say that the masses need to step up their game and upgrade their tech skills, but anyway… at this rate of improvement in readily available AI-based computer vision tools, computer vision may really have reached the masses.

Week 4 — Reading Reflection: Don Norman’s Ch1

I personally found Don Norman's view of the design world and its rules valid. The main goal of a designer is to find innovative, creative ways to make designs easier to use and more efficient, so when something as simple as opening a door gets too complicated, I too would write about it.

What makes a good design is how quickly users understand how it works. Norman argues that as technology develops, design must play a role in easing the interaction between technology and people. This idea hits close to home: at every family gathering, I am the designated technology expert whenever my grandparents want to post something on Instagram or update their Facebook status. It would be nice if these platforms measured user experience and user feedback — which was also a principle of design that Norman wrote about. This issue of technology not being inclusive of older generations must be addressed because it is a global issue.

Assignment Week – 4

I wanted to take the route of generative text: something that sways from side to side with some sort of hypnotic effect. Additionally, I wanted the text to be in a pixelated font, mainly for the aesthetic element, so I used the font Silkscreen to achieve that look.

As a beginner in p5.js, I had to use online resources such as the p5.js reference page and Patt Vira's videos on YouTube.

I am especially proud of being able to figure out the following formulas and that block of code; it might seem basic, but it took me a while to wrap my head around them. The following block is what gives the element of swaying: there are two texts overlapping each other, with the top layer mimicking the motion of the layer under it. The angle is updated based on a sinusoidal function, creating a dynamic movement in the pattern, which is what produces the swaying motion. I also added an interaction element of being able to switch the colors of the text randomly – the colors are randomized and the letters are transparent.

function draw() {
  background(0);
  stroke(255);

  // Offset traced by a point moving around a circle of radius r
  let x = r * cos(angle);
  let y = r * sin(angle);
  translate(20, 300);

  // Connect each letter point to its shifted copy, so the top layer
  // mimics the motion of the layer underneath
  for (let i = 0; i < points.length; i++) {
    line(points[i].x, points[i].y, points[i].x + x, points[i].y + y);
  }

  fill(textColor);
  textSize(size);
  textFont(font);
  text(msg, x, y);

  // The angle increment itself oscillates sinusoidally,
  // which is what produces the swaying motion
  let increment = 2 * sin(t);
  t++;
  angle += increment;
}

function mousePressed() {
  // Change text color on mouse click
  textColor = color(random(255), random(255), random(255), 100);

  // Introduce noise to the points
  for (let i = 0; i < points.length; i++) {
    points[i].x = originalPoints[i].x + random(-10, 10);
    points[i].y = originalPoints[i].y + random(-10, 10);
  }
}

I initially wanted to add the Dubai landscape behind the text; however, that was a complete failure. I couldn't figure out what went wrong, but that is the only thing I would change about my code.
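For reference, here is a minimal sketch of how a background image could sit behind the text, assuming the image is loaded in preload; the path "images/dubai.jpg" is a hypothetical placeholder, and the key point is simply to draw the image each frame before anything else.

// Minimal demonstration of layering a background image behind text in p5.js.
// "images/dubai.jpg" is a placeholder path, not a file from the original project.
let bgImage;

function preload() {
  bgImage = loadImage("images/dubai.jpg");
}

function setup() {
  createCanvas(850, 400);
}

function draw() {
  // Draw the backdrop first, stretched to fill the canvas...
  image(bgImage, 0, 0, width, height);

  // ...then draw everything else (here just placeholder text) on top.
  fill(255, 100);
  textSize(100);
  text("dubai", 20, 300);
}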

Here is my entire code:

let font;
let points = [];
let originalPoints = [];
let msg = "dubai";
let size = 250; 
let r = 15; 
let angle = 0; 
let t = 0;
let textColor;

function preload() {
  font = loadFont("fonts/Silkscreen-Regular.ttf");
}

function setup() {
  createCanvas(850, 400);
  points = font.textToPoints(msg, 0, 0, size);
  originalPoints = points.map(point => createVector(point.x, point.y));
  angleMode(DEGREES);
  textColor = color(255, 100);
}

function draw() {
  background(0);
  stroke(255);
  let x = r * cos(angle);
  let y = r * sin(angle);
  translate(20, 300);

  for(let i = 0; i < points.length; i++) {
    line(points[i].x, points[i].y, points[i].x + x, points[i].y + y);
  }

  fill(textColor);
  textSize(size);
  textFont(font); 
  text(msg, x, y);

  let increment = 2 * sin(t);
  t++;
  angle += increment;
}

function mousePressed() {
  // Change text color on mouse click
  textColor = color(random(255), random(255), random(255), 100);

  // Introduce noise to the points
  for (let i = 0; i < points.length; i++) {
    points[i].x = originalPoints[i].x + random(-10, 10);
    points[i].y = originalPoints[i].y + random(-10, 10);
  }
}

Week 4 Reading Response – Saeed Lootah

I personally have not thought much about the appliances and objects we constantly interact with, even though I sometimes get annoyed by them, so while going through the chapter I began to think more about the things I use on a day-to-day basis. Moreover, the beginning of the chapter outlines a philosophy I have started hearing more about today: human-centered design. I believe it came up in one of my classes last semester. I am personally surprised that it didn't exist until recently, because I always imagined that designers (anyone who designed anything, to be specific) always put the person at the center; what's new is that designers now account for the worst possible mistakes and misunderstandings to best "foolproof" their work.

By the end of the chapter I began to think about two things. First, how have designers, or the discipline of design in general, changed since this book was published? I ask because, as I'm reading, it sometimes feels obvious, and maybe that's because these ideas have become more ingrained in the culture, or perhaps they are only obvious once they are spelled out. My second thought was about a recent product. The Apple Vision Pro, which I'm sure almost everyone has heard of by this point, uses the senses in a way that comes naturally to people, whereas with earlier virtual reality headsets people had to be taught or spend more effort learning. It made me consider how some designers approach a problem by asking how they can build it with the tools they already have, while others ask what tools they need to make the product they should. Maybe I didn't explain myself very well, but that's what came to mind.

Reading Reflection #4

Don Norman presents a key idea in “The Design of Everyday Things,” which is feedback in design. According to Norman, feedback is a means of informing the user of the outcomes of an action. He uses commonplace examples to demonstrate this, such as sitting at a traffic light without seeing any indication that the system has spotted your automobile, or pushing buttons on an elevator without knowing if the elevator is on its way. This idea emphasizes how crucial it is to provide users with prompt, understandable feedback so they may be assured that their action has been recorded and is being handled. Ineffective feedback can cause consumers to feel doubtful, angry, or confused, which can make them dislike the system or product.

When I think about it, this idea has really changed the way I think about user interfaces and design. I can think of several instances where I kept pressing buttons on gadgets or user interfaces and got no reaction. I've always found this experience disconcerting because I'm not sure whether my actions were acknowledged. I completely agree with Norman's assessment of the importance of receiving feedback that is both clear and fast. It draws attention to a prevalent problem found in many designs and emphasizes how crucial it is to include transparent feedback methods in order to increase usability and user happiness. This understanding highlights the vital role feedback plays in enabling successful human-machine interactions, and it motivates me to examine designs more closely, both those I work with on a daily basis and those I might develop.

Week 4 Readings – Norman Doors

Ah yes, should I pull or should I push this door? From the image, we can already tell that something is off with the door: the sign says to pull, yet the handle is flat. These doors are commonly referred to as Norman doors, after Donald A. Norman, whose book we read this week.

When designing something, it is always best to keep the target audience in mind. Most objects require human senses as input to be used. While this does sound ableist, we should also ask ourselves: how would this object be used by those who require more assistance?

People will always seek the path of least resistance, so an intuitive design is always good. We can measure how good a design is simply by using our senses. If it feels 'natural' to use, as if it were part of our body, the object is well designed. Otherwise, if we have to stop and think about how to use the object, the design is not doing its job.

I believe the concept of affordances Norman wants us to embed in our minds is this: as designers, we need to put ourselves in the shoes of the audience. Designers can impact the lives of their users, for better or for worse. Being able to spot the holes and problems in the experience of our design is an important thing to always remember.