Midterm Project – “Wanderer” – Redha Al Hammad

For my midterm project I created a simple interactive piece which focuses on the importance of mindfulness, slowing down and taking a break. The piece features a character that the user can walk left and right with the arrow keys. By walking (or ‘wandering’), the user progresses through a series of scenic environments ranging from grass fields to mountain ranges. The user is then prompted to ‘think’ at given points (the broken functionality of this prompt is explained later) before finally returning home. I sequenced the images (sourced from craftpix) to convey the passage of time, with the exception of the last image, which I edited in Lightroom to create a ‘blue hour’ color palette. The link to the sketch is below; for the best experience, open your window in full screen:

https://editor.p5js.org/redhaalhammad/sketches/H_1B-Ts-1

Edits for the final frame

In terms of technical application, I am happy that I was able to incorporate an intuitive transition from background to background through my updateBackground functions. I found it challenging to wrap my head around how exactly to implement this functionality. An issue I had early on was that the background would always change to the next image regardless of whether the user walked off-screen to the left or to the right. I was able to resolve this by adding an else if statement and simply subtracting 1 rather than adding 1. Doing so helped create a more immersive environment for the user, as it more accurately reflects the development of the character’s ‘wandering’ journey. The source code for the background transitions is included below:

function updateBackground() {
  // Cycle forward through the backgrounds (1–14, wrapping back to 1)
  currentBackground = (currentBackground % 14) + 1;
}

function updateBackground2() {
  // Cycle backward through the backgrounds
  currentBackground = currentBackground - 1;
  if (currentBackground < 1) {
    currentBackground = 14; // wrap from the first background back to the last
  }
}
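
For context, these functions are triggered when the character walks off either edge of the screen. A rough sketch of that check is below – characterX and checkSceneChange are placeholder names rather than the variables used in the actual sketch, and checkSceneChange() would be called from draw():

function checkSceneChange() {
  if (characterX > width) {
    // walked off the right edge: advance to the next scene
    updateBackground();
    characterX = 0; // re-enter from the left of the new background
  } else if (characterX < 0) {
    // walked off the left edge: go back to the previous scene
    updateBackground2();
    characterX = width; // re-enter from the right of the previous background
  }
}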

Building upon this, I feel that another strength of this project is its continuity, in terms of both aesthetic and narrative. While I initially wanted a basic silhouette sprite sheet to make the experience more universal and relatable, the pixelated design style of the character matches the similar style of the background images. Additionally, the visual aesthetic of the background images is consistent despite their being sourced from different asset folders on craftpix. In terms of narrative, I was conscious, as mentioned previously, of sequencing the images to reflect both the passage of time and a sense of space. While I do not repeat images (except for the scene of the character’s home), I deliberately included scenes featuring mountains as the character nears home, echoing the mountain scene that appears at the beginning of the journey. The intention was to subtly signal to the user that the journey is nearing its end, since (based on the sequencing) they can infer that the character’s home is located near mountains.

Unfortunately, there are several issues in this project which I repeatedly tried to resolve but ultimately could not figure out. The first, which will be apparent to users unless they engage with the piece in the p5 editor with the window in full screen, is that I could not properly fix my character and text in position relative to the backgrounds. This is likely because I used ‘innerWidth’ and ‘innerHeight’ for the canvas dimensions while the background images do not take up this entire space. I tried to place the y-position of my character relative to the height of the images (using summer1 as a reference), but that did not accomplish the adaptive positioning I wanted.
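
One possible fix, sketched below as an assumption rather than working project code, would be to draw each background scaled to fill the window and then anchor the character to a fraction of the drawn image instead of to fixed pixel values (the file names here are placeholders):

let bg;        // background image
let character; // character sprite

function preload() {
  bg = loadImage("summer1.png");       // placeholder file name
  character = loadImage("walker.png"); // placeholder file name
}

function setup() {
  createCanvas(innerWidth, innerHeight);
  imageMode(CENTER);
}

function draw() {
  // scale the background so it always fills the window, preserving its aspect ratio
  let s = max(width / bg.width, height / bg.height);
  let drawnW = bg.width * s;
  let drawnH = bg.height * s;
  image(bg, width / 2, height / 2, drawnW, drawnH);

  // anchor the character at a fixed fraction of the drawn image's height,
  // so it sits on the same 'ground line' at any window size
  let groundY = (height - drawnH) / 2 + drawnH * 0.85;
  image(character, width / 2, groundY);
}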

Another technical shortcoming was my inability to add a simple audio track to help create a sense of atmosphere. Despite this being a straightforward incorporation which I am familiar with, I could not get an audio track to play just once: when the audio did play, it was called again every frame and eventually caused the sketch to crash. I looked at the examples provided in the class notes, researched the references on the p5 website and asked more experienced colleagues, but I still could not figure out how to do it.
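
For reference, the usual pattern – sketched here with the p5.sound library and a placeholder file name, not the project’s actual code – is to start the track from an input handler and guard it with isPlaying(), so it can never be triggered every frame:

let ambience; // requires the p5.sound library

function preload() {
  ambience = loadSound("ambience.mp3"); // placeholder file name
}

function keyPressed() {
  // only start the track if it is not already playing,
  // so repeated presses never stack extra copies
  if (!ambience.isPlaying()) {
    ambience.loop();
  }
}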

Finally, an issue that I am deeply upset about is the lack of functionality in the ‘press ENTER to think’ prompt. To begin with, I was able to get the ‘think’ prompt to work momentarily. However, while it was working, my sprite sheet was not entirely functional, as the character would move across the screen without being animated. I suspect that the ‘keyPressed’/’keyCode’ functions were interfering with one another, but I could not figure out how to resolve it. I am especially upset that this element did not work, as I feel that it would have elevated my project on many levels. First, it would have added another layer of interactivity beyond the basic movement, making the piece more engaging. Second, it relates very succinctly to the intention behind the work by prompting the user to physically stop progressing and focus on the digital scenery in front of them. Moreover, the text that appeared on-screen when this element was functional (still in the source code) added a much-needed personal touch and sense of character to the work, which I feel is lacking at the moment.
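
In hindsight, one way to stop the two inputs from interfering – sketched roughly below with a hypothetical thinking flag and the movement code left out – is to let keyPressed() handle only the ENTER toggle and to poll the arrow keys with keyIsDown() inside draw(), so the walk cycle keeps animating while a key is held:

let thinking = false; // hypothetical flag for the 'think' prompt

function keyPressed() {
  // ENTER only toggles the prompt; it never touches the walking state
  if (keyCode === ENTER) {
    thinking = !thinking;
  }
}

function draw() {
  if (!thinking) {
    // polling here (instead of reacting in keyPressed) keeps the sprite animating
    if (keyIsDown(LEFT_ARROW)) {
      // move and animate the character to the left
    }
    if (keyIsDown(RIGHT_ARROW)) {
      // move and animate the character to the right
    }
  }
}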

Wanderer – Midterm Progress

For my midterm project I will be creating an interactive artwork entitled Wanderer which will take the form of a visual loop featuring a figure that walks through various landscapes. The interaction itself will be fairly simple as the user will only be able to initiate the figure’s movement and change the landscape in which the figure is walking. I have chosen to simplify the interaction as a means to draw attention towards the work’s visual qualities which will (hopefully) be satisfying to look at. To fulfil the brief, I plan to incorporate an “energy” meter which will cause the session to restart if it reaches zero. As of right now, I have started working on a sprite sheet and will make progress on the backgrounds in the coming days.

let spritesheet;
let sprites = [];
let direction = 1;
let step = 0;
let x;
let y;
let speed = 7;
let stepSpeed = 60;
let animationTimer;
let automaticMovement = false; 

function preload() {
  spritesheet = loadImage("walk-sprite.png");
}

function setup() {
  createCanvas(600, 600);

  // each frame is 1/17 of the sheet's width and 1/2 of its height
  let w = spritesheet.width / 17;
  let h = spritesheet.height / 2;

  // slice the sheet into individual frames (2 rows of 17 columns)
  for (let row = 0; row < 2; row++) {
    for (let col = 0; col < 17; col++) {
      sprites.push(spritesheet.get(col * w, row * h, w, h));
    }
  }

  x = width / 2;
  y = height / 2;

  imageMode(CENTER);
}

function draw() {
  background(255);

  if (automaticMovement) {
    direction = 1;
    x += speed; // walk to the right while movement is active
  }

  // draw the current walk frame at the figure's position
  // (flipped horizontally when direction is -1)
  push();
  translate(x, y);
  scale(direction, 1);
  image(sprites[step], 0, 0);
  pop();
}

function keyPressed() {
  // clear any existing timer so repeated presses don't stack intervals
  clearInterval(animationTimer);

  automaticMovement = true;

  // then set the interval that advances the walk cycle
  animationTimer = setInterval(() => {
    step = (step + 1) % 12;
  }, stepSpeed);
}

function keyReleased() {
  // stop the walk cycle and the movement when the key is released
  clearInterval(animationTimer);
  automaticMovement = false;
}

function mouseClicked() {
  // starts the movement but (for now) not the animation timer
  automaticMovement = true;
}
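
As for the planned “energy” meter, a rough sketch of how it could plug into the code above might look like the following – the drain rate, bar position and reset behaviour are placeholders rather than final decisions:

let energy = 100; // placeholder starting value

// called at the end of draw(): drains while walking and restarts when empty
function drawEnergyMeter() {
  if (automaticMovement) {
    energy -= 0.2; // lose a little energy every frame the figure walks
  }

  // simple bar in the top-left corner showing the remaining energy
  noStroke();
  fill(200, 60, 60);
  rect(20, 20, energy * 2, 15);

  // restart the session once the meter is empty
  if (energy <= 0) {
    energy = 100;
    x = width / 2;
    automaticMovement = false;
  }
}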


Computer Vision Reading Response – Redha

The main point that stood out to me from this week’s reading was the wide range of use cases surrounding computer vision.

To begin with, two artworks stood out to me, each for a different reason. Both, however, expanded the scope of possibilities for me concerning the applications of computer vision within the context of art.

The first of these artworks is Rafael Lozano-Hemmer’s Standards and Double Standards (2004). This work piqued my interest due to its incorporation of space and inanimate objects which are activated with the help of computer vision. Personally, I find the overlap between the digital and the tangible to be an interesting area of focus, so this work immediately caught my attention for its symbolic repurposing of an everyday object, which is then given a sense of agency through programming supported by computer vision. Moreover, this work allowed me to consider the potential of using computer vision without requiring a visual output based on the data that the program is using. For example, in Krueger’s Videoplace, the user can see a visualisation of the input that the computer vision system is receiving (their silhouette), and it becomes central to the work. Conversely, Standards and Double Standards uses the input internally in order to trigger another action. Finally, I definitely appreciated that this work does not feature a screen (!), as I feel that screens have become an overly predictable method of presenting interactive art.

Rafael Lozano-Hemmer, "Standards and Double Standards," 2004 on Vimeo

That being said, the next work that I have identified is Christian Moeller’s Cheese (2003) – an installation which solely presents screen-based work. While I do feel that this installation is an exception to the statement above (due to its bold imagery and simple presentation, and the fact that the work itself is not interactive), what stood out to me was not the effectiveness of the work itself but the technical implications of the computer vision system that made it possible. Considering the exponential development of technology, and the fact that the work was produced over two decades ago, one can’t help but wonder what can be done with facial recognition technology today. The reading mentioned how sophisticated the computer vision system needed to be in order to recognise slight changes in emotion and provide a response (albeit a simple one).

Cheese - Christian Moeller

This has led me to ponder what is possible with facial recognition technology (and computer vision as a whole) within the artistic space today. I was reminded of an installation produced in 2019, which I had looked at for another class, entitled Presence and Erasure by Random International. As part of my presentation on this work I discussed the concept of consent within interactive art and, as an Arab and a Muslim, I immediately recognised that such a work may not be able to exist in certain parts of the world (such as this one) as a result of social and cultural beliefs. Ultimately, going down this rabbit hole has led me to consider the endless possibilities we have with today’s technology, but it has also helped me understand that just because you can pursue an idea does not always mean that you should.

RANDOM INTERNATIONAL

Assignment 4 – Redha Al Hammad

As I was short on time for my fourth assignment I decided to utilise it as an opportunity to experiment with several different ideas.

The first of these ideas was an attempt to replicate the playfulness of the Pixar opening animation. For me (and I’m sure many others in the class) this opening sequence featuring Pixar’s familiar serif typography and iconic lamp character is incredibly nostalgic. I specifically remember how satisfying the movements of the lamp and its reaction to the letter were to me as a child. I attempted to replicate the most notable movement from this reference in my sketch by focusing on the element of bouncing.

Pixar Movies and Shows | Disney+

I did this by enabling the individual letters of the word ‘jump’ to jump and bounce once they reach their starting point. I am happy with the jumping and bouncing effect as I feel that it is smooth and encapsulates the playfulness I was going for. However, as I was unfamiliar with how to go about it, I used ChatGPT as a reference and, while it did give me a good starting point for my desired effect, it also produced a noise-like shaking which I could not figure out how to remove. I am still proud of this piece of code, which enables the jumping and bouncing effect on the individual letters.

  // returns true when the mouse is inside this letter's bounding box
  // (used to decide when a letter should start jumping)
  isMouseOver() {
    let letterWidth = textWidth(this.char);
    let letterHeight = this.size;
    return mouseX > this.x - letterWidth / 2 && mouseX < this.x + letterWidth / 2 &&
      mouseY > this.y - letterHeight / 2 && mouseY < this.y + letterHeight / 2;
  }
}


Using ChatGPT was a result of my limited time and, had I given myself enough time, I would have avoided the other main problem in my code, which is that it became difficult to read after adding my second word – “zoom” – which moves across the screen when pressed. As I was attempting to do many different things (with the help of AI) in a short amount of time, things became tangled and the “jump” function ended up being used for both words.

Despite this, I feel that I have a better understanding of physics, classes and arrays after producing this work and will look to incorporate them more in my upcoming projects.



Reading Response Week 4 – Redha

I felt that this week’s reading effectively conveyed a holistic view of what “good design” is from a human-centred perspective. I felt that a lot of the concepts presented are relevant to our daily experiences with our personal devices (smartphones, tablets, laptops, PCs, smartwatches etc.).

In response to the author’s view that modern devices/machines are too complicated and present too many functions, these devices have (for the most part) incorporated more minimal designs, both in their GUIs and in their external appearance. Moreover, they have managed to present their complex affordances through simple and accessible conceptual models which can be understood easily by new users. However, I have found that the discoverability and comprehension of these devices’ conceptual models are dependent on how conditioned the user is to the universal modern language of conceptual modelling used across different technologies. As a basic example, someone who has been conditioned to understand these conceptual models would easily comprehend that a button with an arrow indicates the ability to progress or move something (e.g. scroll down a page, move a cursor, change a selection). Regardless of the use case, this affordance is understood through these commonly used signifiers. However, to someone from my parents’ generation, these connections are not made immediately (even if there is prior familiarity with similar interactions) and would take more time to understand.

Given this example, I would like to present interactive design as a language in itself – one that changes depending on time and place. An interesting component of this design is the exponential increase in new “sounds” and “words” in the form of constantly developing technologies. A decade ago, the use of facial recognition and fingerprint identification technology in personal devices was a completely foreign “word”. Now, it is used dozens of times daily and is seen as an expectation in new devices. I feel that this view accommodates the author’s informative identification of the different aspects of interactive design while highlighting the importance of contextual information within the design process.

Assignment 3 – Pixel Dragon, Redha Al Hammad

My third assignment started with experimentation rather than ideation. As I initially struggled to grasp the concept of OOP and how to implement it into my own work, I went through the examples provided and began to change different variables just to see how they affected the output.

After some time, I decided that I wanted to continue my theme of recreating natural experiences. I initially wanted to pursue this by expanding on the concept of my last assignment (sitting under trees) to “sitting under clouds”. The idea was to have clouds move smoothly with a sine function and incorporate some interactivity using “if (mouseIsPressed == true)”, such as lightning or rainfall. However, after applying sine movement to my object, I noticed that it felt more “alive” than something as passive as clouds. As I had already conceptually situated my idea in the sky, I decided to branch off from my original concept and create a scene which depicts the body of a dragon.

While I decided to only create the body as a result of my inexperience, I personally like the unique perspective of only viewing a small part of something that is intended to be big. I believe that films often use this technique of showing a small part of a large creature in order to produce a sense of scale without having to animate its whole body.

In terms of interactivity, I still incorporated “if (mouseIsPressed == true)” in order to simulate the (off-screen) effect of the dragon breathing fire. I did this by generating random background colors ranging from a faded red to a greenish grey while the user presses the mouse.
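
Roughly, that effect boils down to something like the sketch below – the exact RGB ranges here are my own guesses rather than the values used in the final piece:

function setup() {
  createCanvas(600, 600);
}

function draw() {
  if (mouseIsPressed == true) {
    // flicker between a faded red and a greenish grey while the mouse is held,
    // suggesting fire being breathed just off-screen
    background(random(120, 180), random(90, 130), random(80, 110));
  } else {
    background(135, 170, 205); // calm sky tone
  }
  // ...the clouds and the dragon's body would be drawn here
}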

A simple yet effective piece of code from this assignment which I am proud of would be the “shading” which I managed to produce on the dragon’s body by increasing the number of rectangles in the for loop and selecting two analogous colors for the fill and stroke. I have included the two examples together below.

// spacing and for loop
let spacing = width / 163;
for (let i = 0; i < width; i += spacing) {
  manyRectangles.push(new SineRect(i, 10 + i * 2));
}

/////////////////

// dragon's body color
fill(140, 70, 70);
stroke(80, 50, 50);
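
The SineRect class itself is not shown in the excerpt above; a minimal version consistent with how it is used there might look something like this (the wave amplitude, speed and rectangle size are assumptions):

class SineRect {
  constructor(x, offset) {
    this.x = x;
    this.offset = offset; // phase offset so the rectangles form a wave
  }

  display() {
    // sway each rectangle vertically on a sine wave to suggest an undulating body
    let y = height / 2 + sin(frameCount * 0.05 + this.offset * 0.01) * 60;
    fill(140, 70, 70);
    stroke(80, 50, 50);
    rect(this.x, y, 10, 80);
  }
}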

Some points of improvement going forward (beyond learning how to make a full dragon) could be to create a more realistic sky, as the flat, cartoonish style of the clouds contradicts the smooth aesthetic of the dragon. I feel that I could also have made the sketch more interactive by allowing the user to interact with the dragon’s body in some way (e.g. it moves when the mouse is pressed, or it follows the mouse).

 

Reading Response – Week #3 Redha Al Hammad

The main point which stood out to me from this reading was the author’s definition of interactivity as clarified by the analogy of a conversation. I found the comparison of “input, process, output” to “listen, think, speak” to be effective in conveying interactivity today (especially since the piece was written two decades ago). On the same note, I found the mention of an interactive process requiring “two actors” to be interesting. As someone who grew up with digital technology, I subconsciously tend to view it either as a tool or as an extension of myself and my actions – not as a separate active party that has the agency to “listen, think and speak” in conversation with me. I feel that the only time I consider technology’s agency is when it does not cooperate with my goal as a user; for example, when a website does not load or is difficult to use, we often place the blame on it and not on the interaction itself.

Another point I found interesting was the misuse of the word interaction in various contexts. More specifically, the distinction between interaction and reaction stood out to me, as the author strongly claims that they have nothing to do with one another. However, I do feel that there is some element of reaction within interaction (on the human side at the very least). In the aforementioned example, a poorly constructed website will surely garner a specific reaction in the user and, in turn, will affect how they interact with it. As such, I feel that in certain cases the two are interlinked and should not be considered entirely separate processes.

Reading Response – Week #2 Redha Al Hammad

The main point from this lecture that stood out to me was the balance between randomness and order which has been central to the development of interactive art over time.

Prior to learning about the technical process behind coding, I never considered that conceptualization and randomness can coexist within the same work. I was vaguely aware of algorithmically generated art, especially with the rise of AI generated images in the past few years, and I had been exposed to contemporary art which puts conceptualization at the forefront. However, the merging of the two is something relatively new to me and definitely an avenue which I would like to explore in the future to expand the scope of my personal work beyond traditional photographic practices.

One of the examples of this presented in the reading was a project by Reas entitled Chronograph, in which he algorithmically generated visuals to be projected onto the New World Center in Miami. What stood out to me from this work was the intentionality behind it, as the visuals were created using images of the neighborhood surrounding the center and later manipulated and collaged through code. This defied my preconceived notion that digital, coded art is nothing more than a display of technique and skill, as it offered a very tangible ‘why’ to the decision-making process behind the work. I have included it below.

Another point which stood out to me from the lecture was the effectiveness of simplicity. This became apparent to me as Reas did a live demo of a very basic Commodore 64 program. Despite only using two basic symbols, Reas’ introduction of randomness allowed the program to produce an interesting visual. As someone who is inexperienced in coding, I would like to combine the aforementioned intentionality with the effective simplicity of this C64 program in order to get the most out of my work.

Assignment #2 – Sitting Under Trees by Redha

For this assignment, I used this piece by A. Michael Noll as my starting point.

Naturally, I wanted to expand on the concept of generating randomly-sized rectangles across a canvas, and I decided to do this by introducing random movement to them. Upon doing so, I felt that recreating the same monotone color palette as Noll’s piece would lack depth, so I experimented by filling the rectangles with a randomly generated (yet still specific) color palette. Initially, I went for brown but, as I was experimenting with the RGB values, I noticed that introducing green reminded me of a forest’s color palette. The movement of the squares (which I had already coded at this point) aided this imagery, as it was reminiscent of branches and leaves swaying in the wind. To complete the imagery, I changed the background color to a shade of blue which attempts to mimic the sky’s color just after the sun has set.

My main challenge and the highlight of my code are one and the same, as I struggled to figure out how to incorporate randomness into my for loops. However, I was definitely happy with the result after incorporating it into my code. I have included it below.

for (let i = 0; i < 200; i++) {
   let x = random(width);
   let y = random(height);
   let rectWidth = random(20, 100);
   let rectHeight = random(20, 100);

   // choose each rectangle's color before drawing it
   let r = random(140, 170);
   let g = random(120, 150);
   let b = random(10, 40);
   fill(r, g, b);

   rect(x, y, rectWidth, rectHeight);
}


Some further points of development could be to incorporate the use of noise to produce a swaying effect rather than the random generation of squares each frame (however I do like the “pixelated” appeal of my outcome). Another point which I considered would be to incorporate a moving gradient into the background to create a more immersive experience. A larger scale development could be to create an entire interactive exhibition with an environment made in the same style as this outcome.
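
To make the sway idea concrete, here is a rough sketch of how noise() could replace the per-frame randomness – the positions, sizes and noise scales are placeholders rather than a worked-out version of the piece:

function setup() {
  createCanvas(600, 600);
  noStroke();
}

function draw() {
  background(30, 45, 90); // dusk-blue sky

  for (let i = 0; i < 200; i++) {
    // each square keeps a fixed 'home' position, so nothing jumps between frames
    let x = (i * 53) % width;
    let y = (i * 97) % height;
    // noise() drifts smoothly over time, giving a gentle sway instead of flicker
    let sway = (noise(i * 0.1, frameCount * 0.01) - 0.5) * 30;
    fill(140 + (i % 30), 120 + (i % 30), 10 + (i % 30)); // forest greens and browns
    rect(x + sway, y, 40, 60);
  }
}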


Assignment 1 – Self Portrait (Redha Al Hammad)

For this assignment I have produced a self-portrait with two defining features: my glasses and my camera.

Initially, the concept I wanted to pursue involved a simplistic rendering of my face with mouse tracking applied to the eyes and the camera. I am most proud of the fact that I managed to achieve this idea with the eyes (albeit with some help from the p5js reference page). The main difficulty for me was trying to understand how the mouseX and mouseY values worked in relation to the constrain() function. However, with lots of experimentation, I managed to figure out how to implement mouse tracking the way I intended for the eyes. I have included the code for it below:

// eyeballs
fill(255);
ellipse(150, 175, 50, 40);
ellipse(250, 175, 50, 40);

// pupils – constrain() keeps each one inside its eyeball while it follows the mouse
fill(80, 40, 10);

let leftPupilX = constrain(mouseX, 140, 160);
let leftPupilY = constrain(mouseY, 170, 180);
ellipse(leftPupilX, leftPupilY, 25, 25);

let rightPupilX = constrain(mouseX, 240, 260);
let rightPupilY = constrain(mouseY, 170, 180);
ellipse(rightPupilX, rightPupilY, 25, 25);


This leads on to the many aspects which I wanted to implement but could not, the main one being mouse tracking on the camera. Ideally, the intended outcome was to have the eyes track the camera as it moved around with the mouse, in order to create a more dynamic and interactive experience. However, I could not figure out how to apply mouse tracking to multiple shapes. Some other details which I wished to include were facial hair (I was unsure how to go about it and ran out of time) as well as the curve in my glasses which connects the two lenses (I wanted to make a curved rectangle but could not find out how). Moreover, I was unsure of how to present my hair, so I just opted for a rounded rectangle which may be perceived as a hat.
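
For what it’s worth, one possible way to tie the eyes to a mouse-driven camera is sketched below – the coordinates and shapes are illustrative stand-ins, not code from the original portrait:

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);

  // the camera follows the mouse
  let camX = constrain(mouseX, 30, width - 30);
  let camY = constrain(mouseY, 250, height - 30);
  fill(60);
  rect(camX - 30, camY - 20, 60, 40); // stand-in camera body

  // eyeballs
  fill(255);
  ellipse(150, 175, 50, 40);
  ellipse(250, 175, 50, 40);

  // each pupil is constrained towards the camera's position, so both the
  // camera and the eyes respond to the same mouse movement
  fill(80, 40, 10);
  ellipse(constrain(camX, 140, 160), constrain(camY, 170, 180), 25, 25);
  ellipse(constrain(camX, 240, 260), constrain(camY, 170, 180), 25, 25);
}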

In terms of the code itself, I feel that I should have made use of variables in order to make placing the different shapes on the canvas in relation to each other easier. One point that I liked which came as a result of not using variables was that certain features were slightly off centre (e.g. mouth, nose), which made the outcome feel more playful and hand drawn.
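
As a small illustration of the variables point, the sketch below (with made-up names and offsets) shows how anchoring every feature to a shared faceX/faceY would make it easy to move or centre the whole face at once:

let faceX = 200;
let faceY = 200;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  // every feature is placed relative to faceX and faceY,
  // so changing those two values shifts the whole portrait together
  ellipse(faceX, faceY, 200, 240);         // head
  ellipse(faceX - 50, faceY - 25, 50, 40); // left eye
  ellipse(faceX + 50, faceY - 25, 50, 40); // right eye
  ellipse(faceX, faceY + 60, 60, 25);      // mouth
}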