Complete Midterm

Since the project needed to be interactive, I was inspired by multiple sources I have experience with, like the interactive haunted Queen Mary story and the film “Night at the Museum”. I decided to make an interactive spooky story set in a museum. The player makes choices at multiple points in the story, leading to different endings. I wrote the story myself and organized the branching so that seemingly safe choices can take unexpected turns.

Surprisingly, the most difficult part of my project was not the coding but having AI generate images for me. With code I could see what was wrong and fix it directly, and as long as I wrote it correctly it did its job. But the AI image generator sometimes just didn’t get what I was saying. And since the AI doesn’t actually see the images it produces, it has real difficulty when I ask it to edit or change an image it has already generated.

The way my project works is that I put all the scenes for the “playing” game state into a giant object called storyData, with each scene as a nested object. storyData lives in its own .js file, which keeps the main code organized since it only fetches information. The properties of each scene, including its name, its relations to other scenes, and its audio, visual, duration, text, delay, and choice parameters, all live in the storyData file. An example is below.

storyData.scene1_part3 = {
  text: "In the darkness, you hear laughter in the corridors...",
  duration: 5000,
  textDelay: 1500,
  visual: "booth_dark",
  audio: "girl_laughter",
  choices: [
    { label: "Investigate", next: "scene2_part1" },
    { label: "Stay", next: "scene3_part1" },
  ],
};

This object file is driven by my drawCurrentScene function, which I am sort of proud of.

function drawCurrentScene() {
  background(0);
  let scene = storyData[currentSceneId];
  //image
  if (myImages[scene.visual]) {
    let currentImg = myImages[scene.visual];
    let aspectRatio = min(
      windowWidth / currentImg.width,
      windowHeight / currentImg.height
    );
    let drawWidth = currentImg.width * aspectRatio;
    let drawHeight = currentImg.height * aspectRatio;
    image(currentImg, windowWidth / 2, windowHeight / 2, drawWidth, drawHeight);
  }
  //timer
  let elapsedTime = millis() - sceneStartTime;
  //audio mechanism for delayed audio. Undelayed audio mechanism is in the changeScene function
  if (
    scene.audioDelay &&
    elapsedTime > scene.audioDelay &&
    lateSoundPlayed === false //If the scene has audio delay and time is more than delay and the delayed sound has not been played
  ) {
    mySounds[scene.audio].play(); //play the sound and indicate the delayed sound has been played
    lateSoundPlayed = true;
  } //This state mechanism prevents the sound from playing every time draw runs
  //text (subtitle) display
  let delayTime;
  //if the scene has text delay make the delay time that, if not make it 0
  //The undelayed text cannot go in changeScene like the audio because it needs to be drawn every frame constantly. It must be in draw.
  if (scene.textDelay) {
    delayTime = scene.textDelay;
  } else {
    delayTime = 0;
  }
  //if time has passed delay,
  if (elapsedTime > delayTime) {
    //draw the background box for subtitles
    rectMode(CENTER);
    fill(0, 0, 0, 200);
    rect(
      windowWidth / 2,
      windowHeight * 0.85,
      windowWidth * 0.7,
      windowHeight * 0.1,
      10
    );
    //drawing the text
    fill(255);
    noStroke();
    textAlign(CENTER, CENTER);
    textSize(windowHeight * 0.04);
    // 4th parameter limits the max width of the text, keeping it from going out of the box
    text(scene.text, windowWidth / 2, windowHeight * 0.85, windowWidth * 0.7);
  }

  //scene change logic
  if (elapsedTime > scene.duration) {
    if (scene.autoNext) {
      changeScene(scene.autoNext); //If the scene has an automatic next scene, change to it
    } else {
      rectMode(CORNER);
      fill(0, 0, 0, 100);
      rect(0, 0, windowWidth, windowHeight);
      if (choicesDisplayed == false) {
        //If it does not have automatic next scene but has choices, draw the black veil and display the choices
        displayChoices();
      }
    }
  }
}

The drawCurrentScene function was written so it works for every scene that has an entry in the storyData file. It draws the image to fill the current window, starts the scene timer, uses that timer to run the audio and text delays and the scene duration, and decides how to move to the next scene based on that scene’s parameters. This allows the game to flow smoothly like a video when the player does not need to interact; when the player does need to make a choice, it allows unlimited time on the choice screen. Because every scene goes through the same process, coding it this way lets fifty scenes run through one function instead of fifty separate blocks, making the code much simpler and better organized.

It also makes editing very easy. If you don’t like any part of the story, or want to add or delete anything, you only need to change storyData, thanks to this function. And since storyData only stores information, it follows less strict ordering and organization rules than the main code, so making changes to it feels much closer to writing in human language.
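As a sketch of what such an edit looks like (the new scene name and assets here are hypothetical, not from my actual story), inserting a scene only means adding one object and pointing an existing scene at it:

```javascript
// Stub standing in for the real storyData object from the sketch.
let storyData = {};

storyData.scene1_part3 = {
  text: "In the darkness, you hear laughter in the corridors...",
  duration: 5000,
  visual: "booth_dark",
  audio: "girl_laughter",
  choices: [{ label: "Investigate", next: "scene2_part1" }],
};

// Adding a brand-new scene (hypothetical name and assets):
storyData.scene1_part4 = {
  text: "A cold draft slips past you.",
  duration: 4000,
  visual: "hallway_dark",
  autoNext: "scene2_part1",
};

// Re-routing the story is just changing a "next" string:
storyData.scene1_part3.choices[0].next = "scene1_part4";
```

No change to drawCurrentScene is needed; the new scene flows through the same function as every other scene.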

I am also quite proud of my logic for updating element positions when the canvas is resized. It actually involves multiple functions.

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  //resize and reposition button accordingly
  //start button
  if (gameState === "start" && startBtn) {
    //size
    let startWidth = max(200, windowWidth * 0.15);
    let startHeight = max(50, windowHeight * 0.08);
    let startFont = max(20, windowWidth * 0.02);
    startBtn.size(startWidth, startHeight);
    //position
    startBtn.position(windowWidth / 2 - startWidth / 2, windowHeight / 2 + 50);
    startBtn.style("font-size", startFont + "px");
  }
  //same for return button
  if (gameState === "credits" && returnBtn) {
    let btnWidth = max(200, windowWidth * 0.15);
    let btnHeight = max(50, windowHeight * 0.08);
    let btnFont = max(20, windowWidth * 0.02);
    returnBtn.size(btnWidth, btnHeight);
    returnBtn.position(windowWidth / 2 - btnWidth / 2, windowHeight * 0.85);
    returnBtn.style("font-size", btnFont + "px");
  }
  //game button
  if (choicesDisplayed && choiceButtons.length > 0) {
    for (let i = 0; i < choiceButtons.length; i++) {
      let btn = choiceButtons[i];
      updateButton(btn, i, choiceButtons.length);
    }
  }
}

In this code I not only resized the canvas to match the window size; I also resized and repositioned the button elements accordingly. When I tried my code I found that HTML elements do not move the way drawn text does: they don’t stay in the same position relative to the canvas. So I explicitly coded them to calculate a new size and position whenever the canvas is resized. This function is then called in the buttonFunction function so the buttons are ready to act every time one is created.

function buttonFunction() {
  //take the value from the button and give it to variable nextScene
  let nextScene = this.value();
  for (let i = 0; i < choiceButtons.length; i++) {
    //remove the buttons from the screen
    choiceButtons[i].remove();
  }
  //empty buttons array for new round of choices
  choiceButtons = [];
  //reset choice display state
  choicesDisplayed = false;
  //If next scene is credits
  if (nextScene === "credits") {
    //change game state in the state mechanism
    gameState = "credits";
    //display the restart button
    returnBtn.show();
    //use the windowResized function for reorganizing text and buttons for the credits screen.
    windowResized();
  } else if (nextScene === "restart") {
    //same logic as above
    gameState = "start";
    startBtn.show();
    windowResized();
  } else {
    //If it is just a choice in the story go with the story logic, the button doesn't need extra function.
    changeScene(nextScene);
  }
}

windowResized is called after the button is told to show. This way the buttons will always be in the right place no matter when or how the screen size is changed.

function startGame() {
  fullscreen(true);
  gameState = "playing";
  startBtn.hide();
  //input any scene here and its a quick portal to the scene
  changeScene("intro");
}

I also want to mention this code snippet, which I found can also serve as a “maintenance platform”. It was originally written to change the scene from start to intro and the game state from start to playing. But change “intro” to any scene name in storyData and it becomes a portal to that scene. Without it I would have had to play through the whole story every time I changed something and wanted to see the effect.
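A slightly safer version of this trick (a hypothetical helper, not part of the original sketch) falls back to the real intro if the debug scene name is mistyped, so a typo never strands the game on a missing scene:

```javascript
// Hypothetical debug helper: jump to any scene for testing,
// but fall back to "intro" if the name isn't in storyData.
function pickStartScene(debugScene, storyData) {
  return debugScene in storyData ? debugScene : "intro";
}

// Minimal storyData stub for illustration:
const storyData = { intro: {}, scene2_part1: {} };

// In startGame one would then call:
//   changeScene(pickStartScene(DEBUG_START_SCENE, storyData));
```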

Some areas for improvement include adding fade in/out effects and more animation. Looking through the game, I felt some scenes needed a gradual introduction that a fade-in would suit perfectly. I wasn’t able to add it due to time constraints: I tried to code it, but bugs got in the way and I didn’t have enough time to troubleshoot, so I deleted it. The game would also look better with more animation, but it would be nearly impossible to reproduce the shapes of the spirits and ghosts in my current images with p5 code. The better way would be to make the plot a video and code the choices in between, but that would diverge from the goal of the midterm.
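One way such a fade-in could work (a sketch of my own, not the buggy code I deleted) is to drive the alpha of a black overlay rectangle from the existing scene timer; the math is just a clamped linear map:

```javascript
// Hypothetical fade-in: map elapsed scene time to the alpha of a
// black rectangle drawn over the scene image.
// 255 = fully black, 0 = fully transparent (fade finished).
function fadeAlpha(elapsedTime, fadeDuration) {
  if (elapsedTime >= fadeDuration) return 0; // fade is over
  return 255 * (1 - elapsedTime / fadeDuration);
}

// In drawCurrentScene, right after drawing the image, one might add:
//   fill(0, 0, 0, fadeAlpha(elapsedTime, 1000));
//   rect(0, 0, windowWidth, windowHeight);
```

Because drawCurrentScene already computes elapsedTime every frame, the overlay would fade out automatically with no extra state.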

AI was used to some extent in this work. All the images were generated by Google Gemini according to my requirements. For code, Gemini helped me with button styling: HTML elements were new to me and I had trouble figuring out how to style them, so Gemini introduced the different styling options and I wrote the styling code based on the examples provided. It also gave me the idea of using an array for the buttons on screen so they can easily be added and removed (lines 433-436); I originally only had an array for the choices, so the buttons just got stuck on the screen. It also helped me with writing “not” in an if statement (line 490), because I remembered it as ||, which is actually “or”, and the code failed to work. Gemini also assisted me when I needed sound effects and voiceovers: it suggested Freesound and ElevenLabs and gave me tutorials on how to use them. At the end of my project I also used Gemini for debugging once or twice when the game crashed with no console error messages, because it was difficult to pick out the error among hundreds of lines of code. AI was used for help and assistance and did not write the code. The code was written based on an understanding of what was taught in class and what AI explained to me.

Reading Response Week_5

Computer vision differs from human vision in several ways. Human vision does not need pre-programmed information about visual contexts to understand them directly, but computer vision fails without pre-programmed assumptions about the context. Human vision is also better at image-level understanding than at examining or counting pixels, while computer vision is the exact opposite.

Some techniques that help computers see and track include background subtraction, to isolate moving objects; brightness thresholding, to detect changes in pixel brightness and infer change in the whole image; high contrast, so the subject stands out; and simple backgrounds, to make background subtraction and brightness thresholding easier.
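The core of brightness thresholding can be sketched in a few lines. Here it runs on a plain array of grayscale values (in p5.js the values would come from loadPixels(), but the per-pixel decision is the same); the function name and threshold are my own illustration:

```javascript
// Sketch of brightness thresholding on grayscale pixel values (0-255):
// pixels brighter than the threshold become white (the tracked subject),
// everything else black. This is why high contrast and simple
// backgrounds make tracking so much easier.
function thresholdPixels(pixels, threshold) {
  return pixels.map((v) => (v > threshold ? 255 : 0));
}

const frame = [10, 200, 128, 40, 250];
const mask = thresholdPixels(frame, 127);
```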

Computer vision’s special capacities make it useful for bringing a new perspective to interactive art. A computer vision system will extract different information or emphasis from the same image than a human would, and the comparison or cooperation between the two is a great lead for interaction. Computer vision is also objective and can pick up fine details, giving it the potential to learn about its interactive audience better than a human could.

Midterm Progress

Concept

I had a tough time thinking of how to make something original and interactive with my limited skills. Then I remembered an interactive exercise I did in middle school: an interactive investigation story of the famous haunted Queen Mary cruise ship, where the reader made choices, flipped to the corresponding pages, and eventually reached different endings. I wanted to do something similar with an original story and scenes that I make myself, allowing great interactivity by letting the player make choices. I thought of the film “Night at the Museum” and decided to write a spooky story set in a museum. The player makes choices at multiple points in the story, leading to different endings. I am not attaching the whole storyline because I don’t want to spoil it.

Design

The design will be simple pictures that show the museum scenes, with animation where necessary. Sound effects and text will be key to the design. An example would be like this.

This is in p5 because the image upload doesn’t work; it just shows the image and is not interactive, but when it is all put together in p5.js there will be interactivity.

Frightening Aspects

The difficult part is keeping track of which scene the player is in and what interaction the player makes. Every scene needs to be activated only when both conditions hold: being in the previous scene and clicking the correct interaction. This gets more complex when different choices can lead to the same scene. I am planning to do this with stacked if/else statements.

What I plan to do to prevent this is attach the conditions to the buttons, not to the images that serve as the scene. Because the background and animation are loaded after the scene change, they cannot be the trigger. So I will likely have a scene array where each scene has its own index, and the index will be used to identify which scene to change to when an interaction takes place.
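One way to avoid deeply stacked if/else statements (a sketch under my own assumptions; the scene names here are hypothetical) is a transition table: each scene maps choice labels to the scene they lead to, which also handles different choices leading to the same scene for free:

```javascript
// Hypothetical transition table: scene name -> { choice label -> next scene }.
const transitions = {
  lobby: { "Go left": "gallery", "Go right": "storage" },
  gallery: { Back: "lobby" },
  storage: { Back: "lobby" }, // two different scenes both lead to "lobby"
};

// Look up the next scene; an unknown choice keeps the current scene.
function nextScene(current, choice) {
  const options = transitions[current];
  return options && options[choice] ? options[choice] : current;
}
```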

Reading Reflection_Week4

The author states that a product must have discoverability and allow understanding to be usable; without these it is not human-centered enough and causes frustration. Besides all the examples mentioned, like doors and remote controls, one that often bothers me is hotel shower knobs. Usually we assume twisting a knob to the left gives hot water and to the right cold. I have encountered ones that are the opposite, ones that control temperature and water flow at the same time (it definitely isn’t a water pressure problem; it is a true design flaw), ones that need a button pressed before they can be turned, and other confusing designs. This leads to me standing in front of the shower head for at least half a minute every time, trying to figure out how things work, and often getting blasted by cold water accidentally.

The same applies to a lot of electric cars; my experience back in China, at least, was sometimes frustrating. Electric cars almost all aim for a futuristic design, many following Tesla in using flat door handles. Some need to be pressed to pop out, some require the driver to operate them, some are automatic, and some need to be pressed on one side, grabbed, and then pulled to open the door. The door-release controls inside are just as annoying: they come as knobs, buttons, and handles.

The author’s idea that design should be human-centered can serve as a central rule for interactive design. The experience of the participant is of the utmost importance. For example, signifiers should be present almost everywhere. The interaction is most likely strange to the user, so they must be shown what to do. Clear instructions make an experience a whole lot better than a confusing one.

Feedback is also a key component. Feedback should be fast and accurate to the action so the user truly feels they are engaging in an interactive activity. It needs to be clear about what it is responding to, creating logical connections between the user’s actions and the design’s response and a smooth flow of interaction.

Assignment_Week4

Concept

The inspiration for the homework came easily, as the Spring Festival is around the corner. In China everyone heads back home before the Spring Festival and the whole country is on the move; we call this Chunyun (literally “Spring travel”). Also inspired by the world population map we made in class, I wanted to make a map that showed the great migration of people that happens every New Year.

The most difficult part was actually finding the data. Because this is a very Chinese thing and is not really recorded in databases in other countries, I first tried a government site in China, but it required a lot of form-filling and emails and getting back to me in a few working days, so I gave up. Eventually I asked Gemini to dig through the internet for Chunyun data, and it turned up some data and crawling code on GitHub. I decided to use the data Gemini found, but it was recorded in a way that completely didn’t suit my needs, and I was unable to transform it into an easily understandable format. So I gave up on that and picked the top 50 most traveled intercity routes and put them on the map.

Code that I am proud of

//decide the real limits of the Chinese map so the cities are mapped to the shape of the country
  //West
  let mapLeft = 71.0;
  //East
  let mapRight = 136.0;
  //North
  let mapTop = 57.0;
  //South
  let mapBottom = 18.0;
  //for loop to draw all the dots
  for (let i = 0; i < numRows; i++) {
    //map the coordinates according to the outer rims of the country defined earlier
    let x1 = map(float(originLon[i]), mapLeft, mapRight, 0, width);
    //y coordinates are upside down because origin is in top left, reverse height and 0 to reverse the map
    let y1 = map(float(originLat[i]), mapBottom, mapTop, height, 0);
    let x2 = map(float(destLon[i]), mapLeft, mapRight, 0, width);
    let y2 = map(float(destLat[i]), mapBottom, mapTop, height, 0);

This is how I mapped the locations of the cities to the actual shape of China instead of just mapping them to the canvas. I identified the four outer rims of China and kept the dots of cities within the limits so they would be proportional to their real location.

Sketch

The cities are either at their actual location or close. I was unable to be very exact because the margins and stretch of the background map was manually adjusted.

Reflection and improvements

The resulting map is not very good because only 50 routes were chosen, and because of how China’s population is distributed they are mostly on the east side of the country, which makes the map look unbalanced. Secondly, 50 routes is far from enough to create something like a flight map that outlines the country with arcs. This was difficult because I had to process the data manually, and even 50 routes took a huge amount of time. I will see whether Excel functions or formulas can transform the data into something that can be processed more easily. That would also deal with the problem of the dots not fitting the map properly, because the dots and lines would form the map themselves.
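Instead of Excel, the transformation could also happen in the sketch itself. As an illustration (the raw row format here is hypothetical, since the real GitHub data had a different shape), a small parser could turn each route string into the origin/destination fields the mapping loop expects:

```javascript
// Hypothetical raw row: "origin,originLon,originLat,dest,destLon,destLat"
// Parsing in code avoids transforming every route by hand.
function parseRoute(row) {
  const [origin, oLon, oLat, dest, dLon, dLat] = row.split(",");
  return {
    origin,
    originLon: parseFloat(oLon),
    originLat: parseFloat(oLat),
    dest,
    destLon: parseFloat(dLon),
    destLat: parseFloat(dLat),
  };
}

// Example with made-up coordinates roughly matching the two cities:
const routes = ["Beijing,116.4,39.9,Shanghai,121.5,31.2"].map(parseRoute);
```

With all routes parsed into objects, scaling from 50 to hundreds of routes would just mean a longer input array.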

Reading Reflection_Week3

A strongly interactive system should include a strong need for input and a strong output that intrigues the user even further. The outputs need to create joy, interest, or maybe even frustration, something that makes the user feel deeply about what has happened and triggers a strong intention to try again. I believe games are a pretty good example of this. They require input to do anything at all, and they can be somewhat addictive because the output keeps the gamer strongly attracted. Whether the user dies and needs to start again, needs to finish something, or needs to start something new, the feeling generated by the system urges the user to make another input, and the cycle of the interactive system keeps spinning until one decides to leave.

My own p5 sketches, I think, lack interactive depth. There is too little to do with them, since I haven’t learned enough to build a game yet. There is no interactive cycle; it is usually just a click or a movement of the mouse and everything is over. This leaves the user with no intention to start again or try something different, because they have already done everything they can. Also, the output isn’t interesting enough. If the output were, for example, a certificate that holds some value, or maybe a souvenir of Abu Dhabi, users might want to participate more (for more or different rewards), and it might attract other users.

From this I take the idea that I have to incorporate more interactive functions to keep the user interested as I go on to learn more code and make the big projects.

Assignment_Week3

Concept

My initial inspiration was the algorithmic art of Vera Molnár, but since someone had already done art based on that and I wanted something more lively, I thought of making tracking eyes across the canvas. I originally aimed for a spooky feeling but wasn’t really able to settle on how to make it spooky, so I ended up using random colors and having the eyes pop up randomly across the canvas.

The tracking effect was the part I most wanted to implement. I wanted all the eyes, no matter where they are on the screen, to look at the position of the mouse. Since I was using a class to draw the eyes, there needed to be a universal function for all of them, and I am pretty proud of the results.

Code that I am proud of 

let dx = mouseX - this.eyex;
let dy = mouseY - this.eyey;
//calculates the angle of point (dx,dy) relative to positive x, which is the same as (mouseX,mouseY) to horizontal
let alpha = atan2(dy, dx);
//calculate distance between mouse and eye center
let distance = dist(mouseX, mouseY, this.eyex, this.eyey);
//if mouse is inside the iris, pupil & iris follow the mouse; if outside, they follow the angle at a max distance of 12.5, so they don't run from the eye
let movedist = min(distance, 25 / 2);
let movedist2 = min(distance, 3);
//cos determines horizontal movement and sin determines vertical movement
let pupilx = this.eyex + cos(alpha) * movedist;
let pupily = this.eyey + sin(alpha) * movedist;
let irisx = this.eyex + cos(alpha) * movedist2;
let irisy = this.eyey + sin(alpha) * movedist2;

The code for creating the movement was not too difficult; it was the math that took me a lot of time. atan2 was used to calculate the angle for the tracking, because the angle of the point (dx, dy) relative to the positive x-axis is the same as the angle from the eye center to (mouseX, mouseY).

The line I like most in this chunk is where I united the cases of the mouse being inside and outside the iris in one line. Taking the minimum of the two numbers allows the pupil to follow the mouse smoothly as it crosses the iris boundary.

Embeded sketch (click to see eyes)

Reflection

I actually reflected quite a lot during production already. Because I randomized the size and stroke weight, it was a bit difficult to make the arcs meet at the ends, and some of the thinner lines don’t really look good with big eyeballs. Also, I used no fill for the arcs so they wouldn’t create fan shapes, but that also let the bottom eyes show through the top eyes. I may need to research how to fill odd shapes to fix that.

Some further development I want to make is to attach the eyes to some creature instead of having a lot of eyes floating and stacking on top of each other. I am thinking about putting them on a worm, because for some reason the moving eyes remind me of the game I am playing, Hollow Knight: Silksong.

Reading Reflection – Week2

I hope to incorporate randomness as a touch of freedom in otherwise well-ordered code if I intend to make a work of art, but use it as a functional tool if the work is to be of practical use. I wouldn’t want to use too much randomness, and I don’t want to make randomness the center of focus. The video presented randomness and chance as a novel, individual element in art, which deviates from my understanding of what art is. The speaker seems to define randomness in its most limited sense, true randomness generated without bias, unlike what the human brain would give, and claims that artists used to be responsible for order. I personally would like to believe that randomness is not so strictly defined, and that artists, especially the abstractionists, are probably the most random people in the population apart from psychopaths. They are the human end of randomness, and what they exhibit is a randomness enriched with meaning, emotion, and expression.

This is why I would not agree with the way the speaker uses randomness as an artist. I believe art should be something more than shapes and forms on the surface. The beauty ought to lie in the emotions and expression depicted through the work; there is a soul behind the canvas, whether physical or electronic. The extent to which the speaker pursues randomness is excessive. It is more a study of randomness than art. The lines and shapes created contain no meaning; it is a mere experiment that happens to look good, with no soul behind it. I don’t see it differently from a graph formed from the data of a failed scientific experiment. The scale of randomness in art I would prefer is when the artist tries to express something through the randomness. For example: a work of random lines and connections, resulting from controlled code and post-processing, aimed at expressing the pressure of modern life and the slight moments of relaxation people find within it (I picture this as a graph of crowded random lines where the gaps are bigger in some places). That would be my ideal scale of randomness: controlled, and not covering over the emotions or the “soul”.

Assignment_2: loop art

Concept

I looked through some of the old computer art provided, and some of the organized, intricate graphs reminded me of the golden spiral and the related mathematical art I have seen. I remembered that sunflower seeds follow a certain mathematical pattern and decided to use that as the model for my computer art.

I did use AI in this assignment to understand the pattern and mathematical logic that sunflower seeds follow, and to figure out some basic formulas that would allow me to recreate such shapes, but the code was my own.

Code chunk

for (let pointn = 0; c * sqrt(pointn) <= 150; pointn++) {
    let angle = pointn * 137.5;
    let x = c * sqrt(pointn) * cos(angle);
    let y = c * sqrt(pointn) * sin(angle);
    
    //The code can be played with using many math functions, I tried some and attached them below
    
    //without sqrt
    //let x = (c * (pointn)/10) * cos(angle);
    //let y = (c * (pointn)/10) * sin(angle);

    // logarithm (also looks good when made 1*c)
    // let x =5*c * log(pointn) * cos(angle);
    // let y =5*c * log(pointn) * sin(angle);

    //1/log
    // let x =100*c * 1/log(pointn) * cos(angle);
    // let y =100*c * 1/log(pointn) * sin(angle);

    let r = map(x, -300, 300, 80, 255);
    let g = map(y, -300, 300, 80, 255);
    noStroke()
    fill(r, g, (r + g) / 1.65);
    circle(x, y, 3);

The full code is relatively short, so I picked the for loop section because I think it shows exactly what loops are good for. This chunk greatly utilizes the efficiency of loops: it would be extremely difficult and time-consuming to draw all the dots with accurate spacing using repeated lines of code, while the for loop allows drawing and editing all the dots in one place. I also thought it was interesting that I could create different patterns by changing only the mathematical function and slightly adjusting the numbers, so I included those modified versions as well.

I was able to recreate the sunflower pattern at rest first, but I thought it looked a bit dull. Because it was a round pattern and included curves, I decided to make it rotate around the center for a slightly hypnotic effect. I looked up a video tutorial on YouTube and learned how to move the origin to the center of the canvas and how to rotate it. The effect is pretty good.
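Under the hood, the translate()+rotate() trick from the tutorial is equivalent to rotating every point about the canvas center with the standard 2D rotation formula. A pure-math sketch of that equivalence (the helper name is my own):

```javascript
// Rotate point (x, y) by `angle` radians about center (cx, cy).
// p5.js does the same thing when you translate(cx, cy) then rotate(angle).
function rotatePoint(x, y, cx, cy, angle) {
  const dx = x - cx;
  const dy = y - cy;
  return {
    x: cx + dx * Math.cos(angle) - dy * Math.sin(angle),
    y: cy + dx * Math.sin(angle) + dy * Math.cos(angle),
  };
}

// Rotating (1, 0) by 90 degrees about the origin lands on (0, 1):
const p = rotatePoint(1, 0, 0, 0, Math.PI / 2);
```

Letting the angle grow a little each frame is what produces the spinning effect.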

Reflection

Drawing a mathematical pattern probably does not quite fit the requirement of making computer art, but I believe the Fibonacci numbers and their related curves all look quite artistic, and the massive number of dots needed is an excellent medium for practicing loops. My finished work seems a bit easy and short for an assignment, and I might try something more complicated and in line with class difficulty in future assignments.

Another thing is that AI really is quite helpful for understanding scientific or math-related concepts. I wouldn’t have been able to figure out all the formulas myself. I attach my conversation with Gemini in the reference section.

References

  • https://www.youtube.com/watch?v=i5bs3SPpHdM
  • https://www.youtube.com/watch?v=z9d1mxgZ0ag
  • https://www.youtube.com/watch?v=3U8-9_WeuKE
  • https://www.youtube.com/watch?v=_GkxCIW46to
  • https://gemini.google.com/share/4c9531505d04

Assignment_1: Self portrait

I started the work by directly showing my visible form with basic shapes. I did have to look up a new function, curve(), to draw my bangs, because the shape the arc() function formed was not good enough. I drew the edge with curve() and then filled in with the arc.

When it came to portraying myself, I thought of including my identity as a science student apart from my looks, because I love what I do, it is a big part of me, and I love playing with the stereotypes. At first I tried to show myself in a lab coat, but that was difficult because I honestly didn’t know how to draw one, so I settled for a T-shirt-looking thing. Then I included the “blowing things up” interaction (which might be a bit inappropriate) because of how we used to joke about such things in lab courses.

I got the idea of eye tracking from the “Embarrassed Koala” example and the self-portrait by Pauline Wee that we were shown in class. But because I already had the idea of making the explosion animation, I thought it would be funnier if I made the eyes go in different directions and look stupid, so I flipped through the p5.js tutorials and found the “Variables and Change” page, which I then based my code on. (I tried to look at the code for the Embarrassed Koala example, but something was wrong with my browser and the code wouldn’t show up.) However, what I intended to do was somewhat different from the original code, so I tried a few things and switched things around until the animation worked.

I like the eye-tracking and mouse-click interaction code, so I put it below.

//eye movement
  lefteyeX=170+mouseX%width/26
  lefteyeY=150+mouseY%width/24
  righteyeX=230+(-mouseX)%width/26
  righteyeY=166+(-mouseY)%width/23
  //these codes are for normal (non-stupid) eye tracing
  // righteyeX=213+mouseX%width/26
  // righteyeY=150+mouseY%width/23

These are the codes for the eye tracking. I used -mouseX and -mouseY for the right eye so its direction of movement was opposite from the left (I just instinctively tried this and found that it works, so this likely isn’t the most concise code). It looked fun. I also kept the normal eye-tracking code in case I wanted to change to a normal look.

if(mouseIsPressed===true) {
    fill('orange')
    stroke('rgb(254,254,0)')
    strokeWeight(3)
  } else {
   noFill()
    noStroke()
    }
    triangle(170,300,160,276,180,292);
    triangle(179,291,185,271,197,290)
    triangle(197,290,232,278,226,298)
    triangle(179,300,157,317,187,312)
    triangle(179,307,182,338,198,312)
    triangle(197,290,206,263,211,293)
    triangle(198,312,207,328,214,304)
    triangle(214,311,232,324,225,300)
    triangle(225,292,241,295,224,306)
    noStroke()
    ellipse(200,300,57,32)
if(mouseIsPressed===true) {
    textSize(25)
    fill('rgb(197,0,0)')
    stroke('rgb(0,0,0)')
    strokeWeight(4)
  } else {
   noFill()
    noStroke()
    }
    text('BOOM!',164,308);

This is the code for the explosion animation on click. The explosion cloud is sort of bad because I couldn’t really figure out how to draw fancy anime explosions, so it is just composed of an oval and a bunch of triangles. I learned if/else from the “Conditionals and Interactivity” page of the p5.js tutorial.

The final results look like this. My friends and I had a good laugh at it.

The space for future improvement is that a lot of my code is lengthy and verbose. I think this will improve as I learn more functions that can produce the same effects in a more concise and controllable way. I should also have researched the p5.js reference more to perfect the code I already know, but I was sort of exhausted after figuring out the curves and animations. And I think I lacked some creativity in the thinking process, as it took me quite some time to think of doing something other than a regular portrait. Maybe spending some time brainstorming before I start assignments would help.

Overall I am relatively satisfied with the outcome and really did enjoy playing around with code and animations. I am looking forward to more interesting activities 🙂

Citations:

  • https://p5js.org/tutorials/conditionals-and-interactivity/
  • https://p5js.org/tutorials/variables-and-change/
  • Pauline Wee Assignment 1: Self Portrait https://intro.nyuadim.com/2022/09/01/assignment-1-self-portrait-6/
  • https://p5js.org/reference/