Midterm – Welcome to Macbeth

CONCEPT

This midterm project explores the theme of illusion in Shakespeare’s “Macbeth,” focusing on the play’s intricate examination of hallucinations and the blurred line between reality and fiction. Inspired by a recent rereading of the tragedy, I was particularly drawn to Macbeth’s psychological descent and how it manifests through vivid hallucinations. The project aims to translate these literary elements into an interactive digital experience, allowing users to engage directly with Macbeth’s internal struggles.

Dagger Scene:

Central to the project is Macbeth’s first and perhaps most iconic hallucination: the floating dagger. This vision, occurring as Macbeth grapples with the moral implications of regicide, serves as a powerful metaphor for his ethical dilemma. The famous line, “I have thee not, and yet I see thee still,” encapsulates the frustration and confusion Macbeth experiences as he attempts to grasp the illusory weapon. This moment became the cornerstone of my project.

To recreate this sense of futility and frustration, I designed a scene where users must attempt to catch a floating, elusive dagger. Initially, I incorporated a probability variable to make the dagger challenging to catch, mirroring Macbeth’s inability to clutch the hallucination. However, after discussing with my professor, I realized this approach inadvertently removed an essential aspect of user agency and responsibility.

Therefore, instead of relying on probability, I gave the dagger a ‘jittery’ random movement. This approach maintains the essence of Macbeth’s struggle while allowing users to feel fully responsible for their actions and failures. The erratic movement of the dagger not only represents the unstable nature of hallucinations but also challenges users to question their perceptions and reflexes, much as Macbeth questions his sanity.

function moveDagger() {
  daggerX += speedX;
  daggerY += speedY;

  // bounce off the horizontal edges with a dampened, randomised rebound
  if (daggerX <= 0 || daggerX >= windowWidth - 200) {
    speedX *= -0.5;
    speedX += random(-2, 7);
  }

  // bounce off the vertical edges the same way
  if (daggerY <= 0 || daggerY >= windowHeight - 200) {
    speedY *= -0.5;
    speedY += random(-2, 7);
  }

  // jittery movement: nudge the speed a little every frame, then cap it
  speedX += random(-0.5, 0.5);
  speedY += random(-0.5, 0.5);
  speedX = constrain(speedX, -5, 7);
  speedY = constrain(speedY, -5, 7);
}
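As a rough illustration of how the catch itself might be detected – daggerX and daggerY mirror the snippet above, while the 40-pixel threshold and the scene switch are assumptions rather than my final code:

function checkDaggerCatch() {
  // distance from the cursor to the dagger's current position
  let d = dist(mouseX, mouseY, daggerX, daggerY);

  // only a press close enough to the dagger counts as a catch
  if (mouseIsPressed && d < 40) {
    currentScene = 'bloodySpot'; // the deed is done; move on to Lady Macbeth
  }
}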

 

BLOODY SPOT:

Once the user manages to catch the dagger, they have essentially helped Macbeth kill the king. We then move on to the next scene, the ‘bloody spot scene’, which takes place after the murder and shifts focus to Lady Macbeth and her psychological unraveling under the weight of guilt. This level aims to immerse players in her descent into madness, mirroring her frantic attempts to cleanse herself of the metaphorical blood that stains her soul.

The gameplay mechanics are designed to reflect Lady Macbeth’s futile struggle against her guilt. Players interact with blood spots that appear on the screen, attempting to “wash” them away using a sponge cursor. However, just as Lady Macbeth’s efforts are in vain, more spots appear even after being washed, illustrating the persistent nature of guilt and the psychological toll of her actions. I wanted the gameplay to mirror her feelings, so I made it deliberately cyclical.
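A minimal sketch of that cycle – assuming a global bloodSpots array and a BloodSpot class with x, y, and opacity fields (names and numbers are illustrative):

function washSpot(spot) {
  spot.opacity -= 15; // each scrub of the sponge fades the spot a little

  if (spot.opacity <= 0) {
    // the stain never truly disappears: remove this spot, spawn two more
    bloodSpots.splice(bloodSpots.indexOf(spot), 1);
    bloodSpots.push(new BloodSpot(random(width), random(height)));
    bloodSpots.push(new BloodSpot(random(width), random(height)));
  }
}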

To enhance this experience, the level also features a timer that counts down from 30 seconds. This time constraint adds urgency to the gameplay while emphasizing the fleeting nature of sanity as Lady Macbeth spirals deeper into madness. Unlike traditional games that have clear winning conditions, this level does not allow players to achieve a definitive victory; instead, it reflects Lady Macbeth’s tragic fate. When the timer reaches zero, players will be confronted with a poignant message about her inability to escape her guilt, culminating in a “Game Over” screen that reinforces the themes of ambition and remorse present in Shakespeare’s tragedy.
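One way the countdown could be wired up, assuming startTime is recorded with millis() when the scene begins (variable names are illustrative):

function drawTimer() {
  let elapsed = (millis() - startTime) / 1000;  // seconds since the scene began
  let remaining = max(0, 30 - floor(elapsed));  // never display a negative number

  fill('red');
  textSize(24);
  text(remaining, width - 60, 40);              // countdown in the top-right corner

  if (remaining === 0) {
    currentScene = 'closing';                   // no victory, only the Game Over screen
  }
}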

MENU

The menu design for this Macbeth-inspired game aims to create an immersive, gothic atmosphere that sets the tone for the entire experience. Drawing inspiration from the dark themes and supernatural elements of Shakespeare’s play, the design incorporates visual and auditory elements to engage the user from the moment they enter the game.

The background is set to black, immediately establishing a somber and mysterious mood. This dark canvas serves as the perfect backdrop for the text elements, allowing them to stand out and capture the user’s attention. 

The welcome message, “Welcome To Macbeth,” is displayed using a medieval font (UnifrakturMaguntia) in a blood-red color. This font choice evokes the historical setting of the play, while the red color symbolizes the bloodshed and violence central to the story. The welcome message appears word by word with a fade-in effect, creating a sense of anticipation and drawing the user into the world of the play.

Following the welcome message, a series of paragraphs introduce the story and set up the game’s premise. These paragraphs use a more readable serif font (DM Serif Display) in white, contrasting with the black background for clarity. The text appears gradually, line by line, allowing the user to absorb the information at a measured pace. This pacing mimics the gradual unfolding of the play’s plot and builds suspense. The final line, “If you are brave enough, click D,” serves as a call to action, challenging the user to engage with the game.
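A simplified sketch of the word-by-word fade-in, assuming the font is already available to the sketch; the timings and positions are placeholders rather than my exact values:

let welcomeWords = ['Welcome', 'To', 'Macbeth'];

function drawWelcome() {
  textFont('UnifrakturMaguntia');
  textSize(64);

  for (let i = 0; i < welcomeWords.length; i++) {
    // each word begins fading in one second after the previous one
    let alpha = constrain(millis() / 10 - i * 100, 0, 255);
    fill(139, 0, 0, alpha); // blood-red at the computed transparency
    text(welcomeWords[i], 100 + i * 220, height / 2);
  }
}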

SOUND: 

Throughout all levels of my Macbeth-inspired game, sound plays a crucial role in enhancing the user’s interactive experience. I carefully selected and implemented audio elements to create an immersive atmosphere that aligns with the psychological journey of the characters.

Menu Level:

To set an eerie tone from the start, I incorporated audio from an actual performance of Macbeth’s first encounter with the prophecy that catalyzes his downfall. This choice immediately immerses players in the ominous atmosphere of Shakespeare’s tragedy.

Dagger Level:

In this pivotal scene, I layered the gameplay with a recorded performance of Macbeth’s famous dagger monologue. As players attempt to grasp the elusive dagger, they simultaneously hear Macbeth grappling with his moral dilemma. This audio-visual synergy deepens the player’s connection to Macbeth’s psychological state. Upon successfully catching the dagger, symbolizing Macbeth’s decision to commit murder, a sharp ‘slash’ sound effect punctuates the moment, aurally representing the finality and violence of his choice. I also added a fading effect – from white to black – to symbolize Macbeth’s descent.
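The fade itself can be done with a simple colour interpolation; this is a sketch of the idea rather than my exact code, with fadeAmount assumed to start at 0 when the dagger is caught:

function drawFade() {
  // blend from white towards black as fadeAmount grows from 0 to 1
  background(lerpColor(color(255), color(0), fadeAmount));
  fadeAmount = min(fadeAmount + 0.01, 1);
}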

Bloody Spot Level:

For Lady Macbeth’s scene, I opted for a more subtle approach with a general background soundscape. This audio backdrop sonically communicates Lady Macbeth’s internal turmoil and growing madness, enhancing the player’s understanding of her psychological plight without overpowering the gameplay.

Challenges:

I encountered a significant challenge with sound between scene transitions. Initially, audio from previous scenes would continue playing in subsequent levels, creating a chaotic and unintended soundscape. To resolve this, I implemented a sound management system where I call the .stop() method on all active audio elements at the beginning of each new scene. This ensures a clean audio slate for each level, maintaining the intended atmosphere and preventing audio overlap.
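A minimal sketch of what that clean-up helper could look like; apart from prophecy, the sound variable names here are assumptions:

function stopAllSounds() {
  let allSounds = [prophecy, daggerMonologue, slash, ambience];

  for (let s of allSounds) {
    if (s && s.isPlaying()) {
      s.stop(); // guarantee a clean audio slate before the next scene starts
    }
  }
}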

However, the most PROMINENT issue I had (I spent two days trying to figure it out) was with the sound: the prophecy audio in the Menu would play automatically when I ran the sketch in p5, as it should. But when I went into full-screen mode, the audio would not play unless I clicked ‘R’ to restart. I originally assumed there was some conflict in my restartMenu() function. After tearing my code apart trying to figure out why the sound would not play specifically in full screen but would play in the preview, I turned to ChatGPT to see if it could recognise the problem. It immediately told me the issue was the browser (I’m just happy there were no logical fallacies in my code) and how it handles audio autoplay: many browsers restrict automatic playback of sound without user interaction, especially on initial load. It suggested using p5’s built-in userStartAudio() function and tying the sound to a specific user interaction that can trigger it.

Thus, this is what I coded:

function setupMenu() {
  background('black');

  medievalFont = 'UnifrakturMaguntia';
  paraFont = 'DM Serif Display';

  // to play audio: ask the browser to start the audio context
  userStartAudio();

  // reset the flag once the prophecy clip finishes playing
  prophecy.onended(() => {
    prophecyPlay = false;
  });
}






function mousePressed() {
  // the first click is the user interaction the browser needs before audio can start
  if (!prophecy.isPlaying()) {
    prophecy.play();
    prophecyPlay = true;
  }
}

And my code now functions as expected when I go full screen!

This project has been the most insightful one yet: it taught me to be open to feedback and gave me a real understanding of what it takes to make something interactive and functional.

This is the link:

WELCOME_TO_MACBETH

 

Week 5 reading

This reading was instrumental in my understanding of how computer vision techniques can be harnessed in the realm of interactive art and design.

One of the most enlightening aspects of the article was its clear explanation of the fundamental differences between computer and human vision. Understanding these distinctions helped me grasp why certain approaches are necessary when implementing computer vision in artistic contexts. The emphasis on the limitations of computer vision systems, such as their struggle with environmental variability, underscored the importance of thoughtful design in both the physical and digital realms.

The article’s discussion of various techniques to optimize computer vision for artistic applications was particularly valuable. Levin’s explanations of methods like controlled lighting and simple detection algorithms provided me with a toolkit of practical approaches. This knowledge feels empowering, as it opens up new possibilities for creating interactive artworks that can reliably detect and respond to elements in a scene.

The ethical considerations raised in the article regarding tracking and surveillance capabilities of computer vision were thought-provoking. Levin’s examples of artists like David Rokeby and the Bureau of Inverse Technology, who have used these technologies to comment on surveillance culture and social issues, inspired me to think about how I might incorporate similar critical perspectives in my own work.

Furthermore, the range of artistic applications presented in the article, from full-body interactions to facial expression analysis, expanded my understanding of what’s possible with computer vision in art. These examples serve as a springboard for imagining new interactive experiences and installations.

In conclusion, this reading has significantly enhanced my understanding of computer vision in the context of interactive art. It has equipped me with technical knowledge, practical approaches, and critical perspectives that I’m eager to apply in my own creative practice.

Week 5 – Midterm Progress Report

CONCEPT:

Reading, as typical as it sounds, really and truly does fuel my creativity. I have always been keen on Shakespeare, particularly his tragedies. For this project, I want to explore how to toy with the human eye. In Shakespearean tragedies, the protagonists experience a downfall that is often signalled by the onset of hallucinations, typically catalyzed by paranoia and guilt – or a general feeling of losing control over a situation. Macbeth is one of those characters. The Tragedy of Macbeth stands out because it is a lesson on arrogance and ambition – and how quickly the human spirit can be broken by guilt and ambition. Macbeth and his wife, Lady Macbeth, experience a series of hallucinations as they attempt to wrap their minds around the notion of regicide. Therefore, this project will have the user attempt to help them through these hallucinations.

EXPLAIN:

There are three hallucinations in Macbeth: a dagger that Macbeth sees floating in the air but cannot seem to catch; Banquo’s ghost (the friend Macbeth murdered); and blood on Lady Macbeth’s hands. These hallucinations are a manifestation of their guilt. For this project, I have chosen to leave out the hallucination of Banquo, as I would rather take a more balanced approach that focuses on and details the hallucinations of the husband and wife, the Macbeths. The Dagger Scene and the Bloody Spot Scene will therefore be divided into two levels.

SCENES:

The dagger level will have a floating hand (Macbeth) attempt to catch a floating dagger, but it will always seem to evade his grasp, essentially driving him mad. This is inspired by his monologue: “Is this a dagger which I see before me, / The handle toward my hand? Come, let me clutch thee. / I have thee not, and yet I see thee still.” Here, he is basically saying that he sees this dagger, but as he tries to capture it in his hand, he cannot. I would add a speed variable so that at moments the dagger slows down, giving the user the opportunity to catch it. The overall purpose of this game is not to win, but to simply understand Macbeth and the play a little better.

For Lady Macbeth’s hallucination — the Bloody Spot level — the user will engage in a different type of interaction that reflects her overwhelming guilt. In this scene, the user will control Lady Macbeth’s hands as they attempt to wash away the bloodstains that symbolize her remorse. The gameplay will involve blood spots appearing randomly on the screen, and the user will need to “wash” these spots by moving Lady Macbeth’s hands over them. As the user succeeds in removing one spot, new ones will emerge, creating an endless cycle that mirrors her inability to escape her guilt. This mechanic emphasizes the futility of her actions and the psychological torment she experiences.

The purpose of the user interaction is to highlight the internal struggles and frailty of the characters. 

IMPLEMENTATION:

To effectively implement these ideas in code, I plan to create several classes: Dagger, Hand, and BloodSpot. Each class will encapsulate properties such as position, speed, and visibility. The functions will include moveDagger(), which updates the dagger’s position and speed; checkCollision(), which detects proximity between the hand and dagger; generateBloodSpots(), which creates new blood spots on the screen; and washBloodSpot(), which reduces the opacity of blood spots when interacted with. 
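Skeletal versions of those classes, just to show the intended structure – the fields and numbers are placeholders, not the final implementation:

class Dagger {
  constructor() {
    this.x = random(width);
    this.y = random(height);
    this.speedX = random(-3, 3);
    this.speedY = random(-3, 3);
  }
  moveDagger() {
    this.x += this.speedX;
    this.y += this.speedY;
  }
}

class Hand {
  constructor() {
    this.x = mouseX;
    this.y = mouseY;
  }
  checkCollision(dagger) {
    // close enough counts as a catch
    return dist(this.x, this.y, dagger.x, dagger.y) < 40;
  }
}

class BloodSpot {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.opacity = 255;
  }
  washBloodSpot() {
    this.opacity = max(0, this.opacity - 10); // fades as the user scrubs over it
  }
}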

However, I anticipate several challenges throughout this process. The dagger’s movement algorithm must be realistic yet unpredictable enough to convey Macbeth’s frustration. 

Creating a sense of futility in gameplay will also be essential. Balancing difficulty is key; while users should feel challenged, they must also understand that catching the dagger is ultimately impossible. Gradually increasing difficulty or ensuring that new blood spots continuously appear will reinforce this theme. Lastly, integrating atmospheric sounds or character voices using the p5.sound library could enhance immersion and emotional impact.

To minimize risks associated with these challenges, I plan to implement the dagger movement algorithm first and test various approaches to ensure it feels engaging yet frustrating. Creating a prototype for the blood spot washing mechanic will help determine its intuitiveness early on. 

CODE:

In terms of my current code, I also decided to add two menus – one at the start and one at the end. The one at the start introduces the user to Macbeth, and I wanted to make it extremely engaging. Although the final interface and design for the menu is not done yet, I wanted to implement sound almost immediately. The first voices heard – the female ones – are from an actual performance of Macbeth: a prominent dialogue in which the witches converse with Macbeth and tell him of a prophecy, that he will be king. However, there is already a king, so for Macbeth to become king, he would have to kill him. I wanted to create an eerie and uncomfortable experience — the whole time, I want users to be on their toes.

 

However, I am most proud of figuring out how to navigate through different levels. It seemed like a no-brainer, but it took me time to figure it out.

function mouseClicked() {
  if (currentScene === 'menu') {
    currentScene = 'dagger';
  }
  else if (currentScene === 'dagger') {
    currentScene = 'bloodySpot';
  }
  else if (currentScene === 'bloodySpot') {
    currentScene = 'closing';
  }
}

I have a lot left to accomplish, but overall I am looking forward to doing something unique.

 

 

Week 4 – Generative Text; Shakespeare

CONCEPT:

I recently re-read one of my favorite Shakespearean plays, The Merchant of Venice. When it came time to start my assignment, I knew I wanted to create something related to Shakespeare. I began by searching for “Shakespeare datasets” online and clicked on the first link I found on Kaggle. The vast amount of data available was overwhelming, and I realized I could approach the project in many ways. After experimenting with different methods for generating text, I ultimately decided to focus on extracting character lines: the user selects a play, then a character from that play, and a random line from that character is generated.

CODING:

I did find this assignment challenging, as working with APIs is something I am not familiar with. Funnily enough, I spent a while stuck on a silly mistake, wondering why my code was not working – it was because I had forgotten to upload the actual file into p5.js.

I was unsure how to get started, so I turned to the p5.js Table reference for aid. From there, I learnt how to upload my data. However, I faced several issues uploading the file because it was colossally over the file size limit, so I had to spend a lot of time cutting three or four plays out of the dataset to bring it down to a reasonable size.
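Roughly how the dataset is brought in with p5’s Table functions – the file name here is a placeholder for the trimmed Kaggle CSV:

let shakespeareTable;

function preload() {
  // 'csv' and 'header' tell p5 the format and that row one holds the column names
  shakespeareTable = loadTable('shakespeare_lines.csv', 'csv', 'header');
}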

However, during my coding, I came across an issue: the character lists were not showing up in my dropdown. After several attempts at testing and a few YouTube videos, I took to ChatGPT for help. It told me that although I was populating the ‘play’ dropdown in the setup() function, I wasn’t updating the character dropdown when a play is selected. To fix this, I needed to add an event listener to the playSelect dropdown that updates the characterSelect dropdown whenever a new play is chosen. I asked it to explain the logic of the task at hand and I was finally able to understand. I created a new function called updatecharacterselect(), which dynamically rebuilds the character dropdown menu, as sketched below.
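A sketch of that fix – the dropdowns and the characterDialogues object follow the names used elsewhere in my code, but the body of the function is a simplified reconstruction:

playSelect.changed(updatecharacterselect);

function updatecharacterselect() {
  selectedPlayName = playSelect.value();

  // clear the old options before adding the characters of the chosen play
  characterSelect.html('');

  for (let character in characterDialogues[selectedPlayName]) {
    characterSelect.option(character);
  }
}

The random line itself is then picked in generateRandomLine(), shown below.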

 

function generateRandomLine() {
  selectedCharacter = characterSelect.value();
  if (selectedPlayName && selectedCharacter && characterDialogues[selectedPlayName][selectedCharacter]) {
    let lines = characterDialogues[selectedPlayName][selectedCharacter];
    randomLine = random(lines);
  }
}

REFLECTION:

Perhaps, if I were able to expand on this, then I would add further levels of interactivity and make it a bit more aesthetically pleasing.

Week 4 – The Design of Everyday Things : Chapter One

This reading made me realize just how much the ‘human’ aspect of things comes into play in interactive design. As I reflected on the chapter, I found myself connecting the concepts to my own experiences with technology and design.

The author’s emphasis on discoverability and understanding resonated with me deeply. I’ve often purchased products and couldn’t figure out what to actually do, or how to do it, without reading the instruction manual first. It made me appreciate the importance of intuitive design that communicates its functionality without requiring extensive instructions or manuals.

I found the writer’s descriptions of engineers and their thought process during production to be quite intriguing, as I had not thought of things that way before. The chapter delves into the failings of machines caused by the way certain engineers think – purely logically. By failings, I mean that engineers often assume the audience for their product will think like them, or will read the instruction manual. This is where they fall short. I came to understand that they struggle to grasp the idea that we all think so uniquely, and therefore designs need to be adapted into a language that is universal yet enjoyable to use. This made me understand how the failing bleeds into the actual production of gadgets, and why products are not always received the way the engineers had in mind.

As I came to learn of these concepts, I found myself looking at the designed world around me with new eyes. I started noticing how certain designs naturally suggest their use, while others require explicit instructions. This awareness has changed how I interact with and evaluate the design of everyday objects and digital interfaces. For example, the doors in D2 are just abysmal. Although they have a sensor, where if you wave at it then it opens the doors – it doesn’t always work, and I actually have to manually open the door like a jester. Although I am unsure if this fault is due to some technical thing, or a lapse in judgment, it is a problem that has existed since freshman year.

This new perspective isn’t just changing how I look at things; it’s changing how I create them too. Whether I’m putting together a presentation or organizing my workspace, I’m now thinking about how to make things clear and intuitive for others. It’s like I’ve been given a new set of glasses to see the world of design through.

Week 3 – Starman’s Spaceship

CONCEPT:

The idea came to me when I was listening to one of my favourite songs — David Bowie’s Starman (the 2012 remastered version, of course). The song is essentially about an alien, stuck in space, trying to find a community. One lyric in particular inspired the final vision: “There’s a starman waiting in the sky”. I imagined the screen displaying bright, twinkling stars as spaceships explore the universe. Initially, I thought to include just one spaceship, but upon deeper reflection, I thought it would be a nice challenge to understand how to generate different spaceships without any manual commands. The alien (Starman) is all of us – we’re navigating through the domain that is our life. Each spaceship, drawn in different colours, symbolises a different alien striving to fit into the cosmos, reflecting our own journeys through life. The different backgrounds depict the unique journey each alien has – the stars are rearranged automatically when the user clicks the screen, to explore the journey of another alien in a spaceship.

 

VISION:

It was quite a challenge for me to visualise exactly what I wanted to execute. Therefore, I came up with a simple initial sketch to understand what is supposed to be on my canvas, and what is supposed to happen. The sketch showcases two things: the background and the starships. In this drawing, I drew two spaceships because at the time I had thought of displaying multiple, stationary spaceships. However, as I began to implement the vision, I came to understand that it would look plain and would block the focus of the piece: the twinkling stars.

IMPLEMENTATION:

I decided not to store the starships in an array, unlike the stars, because it did not make sense to me. The spaceships appear and disappear one click at a time, so there was no actual reason for them to be stored in any sort of way.

I focused on creating the spaceship first, as I wanted to see if I could make them fly. I began by creating the class and then generating them manually, e.g. spaceship1.show(), spaceship2.show(), for testing purposes. The spaceship was more tricky for me than generating the background, as I struggled to understand how to generate the spaceship at a certain set of coordinates without it going beyond the boundaries of the canvas. However, I then realized I could change the parameters of the this.x and this.y values so that the ship appears within the boundary, as the parameters I had originally were (0, 600).

regenerate() {
  // to make the space ships a bit unique
  this.w = random(70, 100);
  this.h = random(50, 70);

  // instantiates the ship at a random position inside the canvas
  this.x = random(50, 500);
  this.y = random(50, 500);

  // unique colours
  this.color1 = color(random(255), random(255), random(255)); // top colour
  this.color2 = color(random(255), random(255), random(255)); // bottom colour
}

In terms of the stars, I am quite proud of my code, especially as the resulting background is quite aesthetic. I decided to create an array so that the stars are stored in it as they are spawned, making them easier to display in the draw function. I am most happy with my twinkle function, as I thought long and hard about how to make a star seem like it is getting brighter or dimmer. At the time, I was also experimenting with the opacity of the stars’ ellipses to see how bright they should appear on the canvas – which led me to the realisation that I can quite literally increase or decrease the opacity to give the desired visual effect.

 
twinkle() {
  // scale the opacity by a random factor so the star flickers brighter or dimmer
  this.opacityChange = random(5);
  this.opacity *= this.opacityChange;

  // keep the opacity within a visible range
  this.opacity = constrain(this.opacity, 50, 255);
}
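To show how the pieces fit together, here is a simplified version of the overall sketch – Star and Spaceship stand for the classes described above, and the star count and canvas size are placeholders:

let stars = [];
let ship;

function setup() {
  createCanvas(600, 600);
  for (let i = 0; i < 100; i++) {
    stars.push(new Star());
  }
  ship = new Spaceship();
}

function draw() {
  background(0);
  for (let star of stars) {
    star.twinkle();
    star.show();
  }
  ship.show();
}

function mousePressed() {
  // a click rearranges the sky and regenerates the ship for another alien's journey
  stars = [];
  for (let i = 0; i < 100; i++) {
    stars.push(new Star());
  }
  ship.regenerate();
}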

    

REFLECTION: 

If I were given the opportunity to develop this piece further, perhaps I would add another level of interactivity to the spaceships. Maybe I would make them shoot different coloured beams in whatever direction I am pointing, store the beams in an array, implement sound, or even add some sort of rotation for another pop of visual activity.

Week 3 – Reading on Interactivity

Crawford’s paper was remarkable in reshaping my understanding of what makes something interactive, and how to approach the definition of the very term.

At first, I agreed with his description of interactivity as a conversation between two entities. This reminded me of one of my literature classes, where we discussed how reading a paper is like the author trying to have a conversation with us. However, as I write this, and after reading the paper, perhaps I was mistaken to agree with that comparison, as a conversation requires some degree of back-and-forth, which is something that papers and books do not do – unless you consider it from a technical angle: if you are reading a book on a Kindle, there are plenty of buttons and opportunities to interact with the book, e.g. you can highlight any word and it will look up the definition for you. That can be seen as interactive, but reading from physical paper does not allow readers that sort of exchange.

This prompted me to question: what is the distinction between a reaction and an interaction? The author conveniently brings this up at the right time. He describes the situation of a tree branch falling and how he responds to it – his response does not prompt the fallen branch to get up and start flying around; it remains still. This example helped me understand the difference. Once the user reacts to a prompt or situation, the other entity must react in turn and continue according to the user’s actions and words.

What also deepened my understanding of interactivity is Crawford’s Nintendo fridge example. He argues earlier that interactivity should be entertaining, but there are nuances to this statement. He explains that though adults may find a fridge mundane, children would enjoy it, as they can “play” with its light by closing and opening the door. The fridge is still interactive, albeit in its own way. I learnt that interactivity has different grades: high, moderate, low, and none. An item like a fridge has low interactivity, whereas reading on a Kindle has high interactivity. I also found Cicero’s statement on interactivity very appealing. I discovered the notion of “imitation” in the interactive sense – reading is an activity that imitates interactivity, since our emotional capacity is exercised there; it alludes to the idea of interactivity without being truly interactive. Additionally, Cicero believes “Fuller nourishment comes from the living voice.” – another (biological) entity provides the intimacy that deepens your correspondence and actions with someone or something. This made me think about the difference between interactive technical projects and interactive human projects.

When Crawford moves on to performance, it helped me understand how important the role of the audience is. I discovered that the larger the audience, the more challenging it might be to interact with them. In the theatrical world, it takes a large cast to execute this – you would have to break the fourth wall in order to deem a play interactive. This instantly reminded me of interactive still lifes, where an actor or cast pose as part of an environment and the audience manipulates the scenery using the actors’ bodies. For example, if the scenery is a forest, one actor may pose as a tree – and the audience ‘walks’ through this forest (keeping in mind this is a still life, so the actors don’t move on their own) and plays around with the environment. The audience shapes the scenery by moving an actor’s arms to make the tree appear wider, or perhaps laying them on the ground to show that it has fallen. That is an example of interactivity in performance, which Crawford argues barely exists.

He also argues at the end that “good interactive design integrates form with function”, which I somewhat struggled to grasp, as I did not understand exactly what he meant by form since it was rather slapped on at the end; I assume he means that those in charge of interactivity must step out of convention to make a good interactive design.

Casey Reas’ Eyeo Reading Reflection – Week #2

REFLECTION

Casey Reas’ talk on randomness and chance revolutionized my own understanding of how art is composed, and what defines it as art. Before watching this video, I held a rigid view that randomness and art were mutually exclusive, believing that every artistic decision was deliberate and, therefore, that randomness was not possible. However, Reas’ discussion used the example of Dada artwork – which made me reflect on that time period. I am familiar with Dada and its works. As Reas mentioned, the Dadaists’ approach was to challenge the rational, as a reflection of the collapsing societies and turmoil after World War I.

He forges a link between art and the use of technology by discussing the renowned John Cage, who was revolutionary in the field of music technology as one of the first composers to use technology to compose music – utilizing the element of randomness to determine pitch and length. As someone who has heard of John Cage and is familiar with parts of his work, I was surprised to see him featured here, as I had not considered his Book of Music to be ‘art’ – I had seen it as a strict use of splicing, impressing the audience of the time with the tools he was able to fine-tune and take advantage of. This got me thinking: what kind of music did I consider art? And what kind of technology makes art?

I also found Reas’ discussion of his concert compositions quite profound. He reveals how he found inspiration in previous pieces for his composition, yet included an algorithm to generate a degree of randomness, limited by his requirement that the shapes be at 90 degrees. This has led me to question my own methods for bringing this philosophy of randomness into my artworks and projects, and how I can strike a so-called ‘balance’ between randomness and deliberate choices.

Week 2 – Bubbles.

CONCEPT:

While walking across campus, I saw children blowing bubbles with their families. This simple, joyful scene transported me back to my own childhood, when my brother and I would spend hours blowing bubbles in the park. Sitting in front of the screen, I thought about the patterns I had seen recently during the week – and then I was reminded of the bubbles. This wave of nostalgia inspired me to incorporate the playful essence of bubbles into my piece.

 

CODE:

In terms of coding, I was particularly challenged by the randomness factor. I was unsure how to produce bubbles at random places, sizes, and even colours. Therefore, I looked for assistance in the p5.js reference: https://p5js.org/reference/p5/random/ to understand the syntax of random(). After relentless experimenting, I was able to randomise the selection of colours, which I had little idea how to approach, and I was particularly proud of it. In terms of design, I removed the stroke to give the bubbles a softer aesthetic, and I changed the background to a light blue to resemble the sky.

 

for (let i = 0; i < 20; i++) {
  // random position and size for each bubble
  let x = random(width);
  let y = random(height);
  let r = random(10, 50);

  // random colour, no outline
  fill(random(250), random(250), random(250));
  noStroke();

  ellipse(x, y, r);
}

REFLECTION:

This assignment really challenged my understanding of what art is, and also made me realise how often we are surrounded by ‘art’ – no matter how subtle, quotidian, or ordinary it may be. Perhaps, if I were given the opportunity to take it further, I might add some user interactivity, such as popping the bubbles or slowing down their speed.

WORK:

 

Week 1: Self – Portrait

My Concept:

As this is the first assignment, I thought to keep things simple but also add some personality to it. The background is a beach. One thing about me is that I absolutely cannot tolerate the beach – it is one of those outings I just cannot stand or enjoy. So as I started to create this piece, I thought to implement that (which is why I look so unhappy, haha).

In terms of code, I decided to include some user interactivity. If you do click on me, then I end up blushing for as long as you press on the mouse.  I was quite happy with this part of the code because it took me some time to decide on what particular user interactivity I wanted.

// BLUSH
if (mouseIsPressed) {
  fill('pink');
  ellipse(460, 400, 40, 40); // blush on the right cheek
  ellipse(300, 400, 40, 40); // blush on the left cheek
}

Reflection:

This first assignment was quite enjoyable, because I liked getting the opportunity to make something both creative and technical. I suppose the most time-consuming thing was editing the circles for the eyes and pupils, deciding exactly where they go and their sizes. I think this assignment helped me gain a solid understanding of how p5.js operates. I hit a wall when drawing the hair on the character: it just looked awkward, especially as there was none at the forehead, hence I decided to give her a fringe. If I had another opportunity, perhaps I would elevate the work by adding facial expressions and some text.