Week 9 Reading Response

I completely understand the view brought up in the reading “Physical Computing’s Greatest Hits (and misses)”: so often I think to myself, “I don’t want to do that, it’s already done,” and give up on an idea because I think it’s not original. However, my horizons broadened when I realised through the reading that even recurring themes can leave a lot of room for originality if I add a notable variation or twist to them. As the reading discusses specific examples of physical interaction ideas, I appreciate that it contains not only a description of each idea and its positive aspects, but also its limitations. For instance, with Theremin-like instruments, moving a hand over a sensor can have little meaning – but the idea can be developed through a twist involving a physical form and context for the sensors that affords a meaningful gesture. I see gloves as a form that affords much more meaning, because the way our fingers bend, and which fingers bend, can produce so many variations and convey meaning – whether someone is stressed or relaxed, how to play an instrument, and so on. Another limitation that stood out to me was with the Scooby-Doo paintings, where designers of this type of project commonly confuse presence with attention (as I personally have). Someone’s presence does not necessarily mean that person is paying attention. Hence, I made a mental note to watch for this in any similar future projects I might undertake, where I could integrate detection of eyes and faces, for example.

The next reading, “Making Interactive Art: Set the Stage, Then Shut Up and Listen,” brought to my attention a problem that I still need to work on. So often, artists make artworks and then describe their work and their interpretations. Personally, if I were the audience of such an artwork, it would feel more difficult to interpret the work differently because I have already been directed in how to think. However, I think the audience will enjoy it more when they receive the opportunity to spend some time taking in the artwork through their senses… to think about what each part means, which parts afford contact or control, and which parts don’t. In letting them interpret the parts and decide how to respond – rather than prescribing their response – they could be more engaged and discover more. My question is: what is the balance between describing the environment and letting the audience discover the artwork?

Reading Reflection – Week#8

The first reading, “Attractive Things Work Better,” has changed one of my most important beliefs about the value of beauty, and it has also emphasized the importance of design for me. I don’t think previous readings fully convinced me that beauty was as important as utility. It was through this reading that I came to see that beauty can improve mood and therefore help people relax and be in positive affect, which may be exactly the affect needed at the time, since it increases “the tolerance for minor difficulties and blockages.” I also felt the importance of design through a very plausible example: a person in a state of anxiety, in flight mode, running away from danger as urgently as possible, faces a door that won’t open when pushed. People may react by kicking and pushing harder, but this doesn’t solve the problem. If they were more relaxed, they might have the slightly different thought to pull the door instead. This example shows how design can help save lives. Thus, a key takeaway for me is that the principles of human-centered design are especially important in stressful situations. The implication is that designs intended for stressful situations must pay close attention to the needs of the users, making actions salient and easy to apply.

What stood out to me from the second reading, “Her Code Got Humans on the Moon,” were: first, the value of code in allowing humans to go to the moon, save lives, and much more; second, the significance of not dismissing a danger as impossible; third, the importance of an error detection and handling process. In particular, it was striking that when the Apollo software realized it didn’t have enough room to do everything it was doing, it went through its error detection process and simply focused on the highest-priority tasks. This is something I want to apply to my own work as well.

Week 8 – Creative Switch

1. Concept

The reading on switches and pushbuttons inspired me to get creative with switches – especially the part about several custom switches that, when combined, turn on a light. I wanted a fun kind of switch, and integrating a game into the project was a good choice for that.

Having constructed a circuit to play Tetris a long time ago, I had the idea of adapting it so that instead of pressing buttons with their hands to rotate a shape or move it left, right, or down, the user would use their feet. This could make for an interesting experience by making Tetris more challenging to play. It could also be a fun way to engage users in the process of switching on a light through a game. For example, it could be used in a scenario where the light has automatically turned off because no vigorous motion was detected in the room while someone was studying for an extended period; to take a break, the person could play the Tetris game with their feet (while resting their hands).

The switch is in the form of a pushbutton, with two shoe holders and four ground boards, each covered with aluminum foil on one side. When a shoe holder comes into contact with a ground board, the switch is closed; otherwise, it remains open.

2. Highlights

I encountered two major obstacles in the attempt to set the switches/pushbuttons to work:

  1. Initially, I planned to use copper wires to extend the connections between the alligator wires connected to the breadboard, the alligator wires connected to the ground boards, and the alligator wires connected to the shoe holders. However, I realized that uncovered copper wires can come into contact with each other and disrupt the circuit flow I had in mind. I decided to extend the connections with just alligator wires.
  2. Initially, I had just two shoe holders and four ground boards, with each shoe holder meant to contact a pair of ground boards. However, this posed a problem: both ground boards associated with the left shoe holder would incorrectly trigger the same action on the shape (move down). Since the functional, original circuit had four independent pushbuttons, I cut the two shoe holders into four. I tested this and it solved the issue!

3. Video

4. Reflection and ideas for future work or improvements

Overall, all my goals were achieved through this project: integrating a game into the switch; having several custom switches that combine to turn on a light; and a functional switch, buzzer, potentiometer, and so on. I definitely gained valuable experience learning how to use the LCD and how to adapt circuits for my own purposes.

I would love to integrate ideas for a switch that involves two people and teamwork. This could be done by directly extending the current project, so that one person has the options to rotate a shape or move it left, while the other person has the options to move it right or down. This could be a great way to foster cooperation and further engagement with the (hopefully) creative switch.

Week 6 – Midterm Project

1. p5js Sketch

If the experience appears static, please click on the ‘sketch’ version and run it. If the sketch does not display at its full width and height, please use the ‘sketch’ version as well.

2. Overall Concept

In my midterm project, the user takes on the role of a beta-tester of “Tales of Avella,” a casual 2D RPG adventure game played on a fictional, futuristic ‘PlayStation 9’ device. The ‘PlayStation 9’ is meant to be a far more state-of-the-art video game console than the currently available PlayStation 5 (see Fig. 1). In “Tales of Avella,” the user has the opportunity to explore the world and help someone in need. Upon beginning the experience, the PlayStation 9 apparently “turns on” and “loads” the game. The user is greeted with a welcome message from the fictional PixelBloom Studios developers, and is given the background story of the game and instructions for controlling the user’s character, “Luke.” The background story goes as follows: Luke has just moved into the charming village of Avella and is ready to discover what makes this town so special. The user controls Luke to explore the world of “Tales of Avella” using the arrow keys (left, right, up, down). Depending on the user’s actions, the user can enter new areas and meet a non-playable character (NPC) farmer with a quest to harvest forty-five carrots. Upon completion of the quest, a congratulatory message is displayed, followed by a ‘memory’ snapshot of a gratitude dinner with the farmer and his daughter Lily. Finally, the user is given the option to restart the game by pressing ‘Y’ for yes or ‘N’ for no.

PlayStation 5 – Photo by Ben Iwara (https://unsplash.com/photos/white-printer-paper-on-brown-wooden-table-tnfbre82_hc)
Playing the PlayStation 5 – Photo by Yan Krukau (https://www.pexels.com/photo/close-up-shot-of-person-holding-a-game-controller-9069305/)

The user can interact with the world in more ways than moving Luke around. The user can use mouse clicks to progress through the conversation between the farmer and Luke. At a certain point in the conversation, if the user decides to accept the farmer’s quest, the farmer’s name is revealed from the initially mysterious title “???”. The user can have Luke grab a tool in his vicinity by pressing ‘i’. The user can also have a shovel-wielding Luke harvest carrots by pressing ‘i’ and replant carrots by pressing ‘r’.
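
A minimal sketch of how these key-based interactions might be wired up in p5.js is below; the variable names and helper functions here are hypothetical, not the exact ones from my project.

// Hypothetical sketch of the 'i' (grab/harvest) and 'r' (replant) controls.
let lukeHasShovel = false;
let carrotsHarvested = 0;

function keyPressed() {
  if (key === 'i' || key === 'I') {
    if (!lukeHasShovel && lukeIsNearShovel()) {
      lukeHasShovel = true;        // grab the tool
    } else if (lukeHasShovel && lukeIsNearCarrot()) {
      carrotsHarvested++;          // harvest a carrot
    }
  } else if ((key === 'r' || key === 'R') && carrotsHarvested > 0) {
    carrotsHarvested--;            // replant a carrot
  }
}

// Placeholder proximity checks; the real project compares Luke's
// coordinates against the positions of the shovel and carrots.
function lukeIsNearShovel() { return true; }
function lukeIsNearCarrot() { return true; }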

3. How Project Works & Good Parts

Good, major game design aspects can be found in the project concept, project structure, dialogue flowchart, assets (video, image, audio), character design, story design, quest progress possibilities, quest progress feedback, and particular details of the game mechanics.

Firstly, the project concept attempts an innovative twist on an ordinary game played in a PC browser – by simulating the experience of playing a game on a futuristic device and creating a story for the user as a beta-tester of the game. Thus, there is a frame narrative entailing a story within a story: the character “Luke” within the game “Tales of Avella” has a story that can be played by a “beta-tester,” who in turn has a story of supporting the developers at the fictional PixelBloom Studios Inc. The juxtaposition of a pixelated game played on a futuristic device is another design choice intended to bring an interesting effect – whether that is raising questions, emphasizing the contrast between technological development and interest in retro games, or something else.

Second, the project has been structured in six stages numbered 0–5, namely: (0) Opening with Background Story and Instructions; (1) Game Landing Scene (Inside Luke’s House); (2) Game Scene 2 – Luke’s Neighbourhood Outside His House; (3) Game Scene 3 – Farm; (4) Game Scene 4 – Inside Farm Shed; (5) Quest Completion Congratulatory Message with Dinner Memory Fragment and Option to Restart. A gallery containing snapshots of the game progress may be seen below.

Third, considering the complex nature of the dialogue between a quest-giving farmer and Luke, I decided to create a flowchart (see the flowchart image below the gallery).

Fourth, I personally designed the videos in Canva, taking into consideration adherence to the concept of the futuristic device interface and visual appeal through the positioning of text, etc. I also browsed through audio, considering its appropriateness to the scene at hand: ambient music when the PlayStation 9 opens up, relaxing acoustic guitar for the dinner memory, and game atmosphere sounds based on my past experience with the PlayStation 4/5. Character spritesheets were taken from Spriter’s Resource, chosen for their closeness to the characters I had in mind: a young adult entering a new town and a farmer.

Fifth, I spent time on character design (see the character traits table below the flowchart). Sixth, I designed the message from the PixelBloom Studios team to the player.

Seventh, quest progress possibilities were considered and quest progress feedback was implemented for an enhanced user experience. Since the quest is to harvest exactly forty-five carrots, if the user has harvested fewer than forty-five, the farmer prompts them to pull more carrots. If the user has harvested more than forty-five, the farmer “frowns” and tells Luke, “Over forty-five. Go back.” At some point in development, I realized that only being able to harvest carrots would prevent the user from completing the quest if they harvested more than forty-five by accident, so I added the option to replant some carrots – one of the changes to my project plan.

Last but not least, I find certain little details of the game mechanics crucial. For example, in Stage 3, the farmer NPC always “faces” Luke, whether Luke is to his right or to his left, mimicking real dialogue.

Flowchart on Dialogue between Luke and Farmer
Character Traits Table

Good technical decisions include setting a game state that determines which actions happen; structuring characters and objects with behavior (namely Luke, Farmer Giuseppe, the message box between Luke and Farmer Giuseppe, the shovel, and the carrots) into classes; structuring code into functions; using arrays to store large numbers of carrots; designing the dialogue mechanics using if and else if statements; and using descriptive variable names. The game state is crucial because, depending on it, the farmer, carrots, and videos may or may not be displayed. Structuring Luke and the farmer using OOP has been very helpful for organization and readability, which in turn aids my problem-solving process. Using arrays to store large numbers of carrots not only saves time but also helps me control the behavior of individual carrots – whether a carrot’s status is “picked”, “unpicked” or “replanted” based on user behavior. The dialogue is a particularly complex piece of code, so I paste the snippets below. The first snippet shows the constructor of the message box, which contains the x position, y position, image profiles (shown on the left part of the message box to indicate who is speaking), the dialogue_states array, and further arrays containing the speech text of the character(s) for the relevant dialogue_state. The second snippet is a way of reducing lines and reusing code, as I need to display the profiles as often as I display a dialogue message. The third snippet shows part of a long function that performs actions based on the current dialogue_state. For the “Character Introduction” state, as with many other states, a translation is applied to the message box image and setTimeout is used to impose a 200 ms cooldown between advances, preventing accidental skips/rapid firing.

class Farmer_Giuseppe_And_Luke_Message_Box {
  constructor(luke_profile, farmer_profile) {
    this.x = 15;
    this.y = 500;

    this.luke_profile = luke_profile;
    this.farmer_profile = farmer_profile;
    this.farmer_quest_progress = "not started";

    this.dialogue_states = [
      "Character Introduction",
      "Quest Introduction",
      "Player Choice Point",
      "Repeated Quest Request",
      "Quest Further Details",
      "Quest Progress",
      "Quest Completed",
      "After Quest",
    ];
    this.dialogue_current_state = this.dialogue_states[0];
    this.intro_messages = [...];
    this.current_message_index = 0;
    this.input_blocked_cooldown = false;
    this.reveal_title_flag = false;

    this.quest_intro_messages = [...];
    this.player_choice_yes = [...];
    this.player_choice_no = [...];
    this.quest_further_details = [...];
    this.quest_progress_messages = [...];
    this.quest_thanks = [...];
  }
  // ... methods (display_profiles, display_message_box_and_text, etc.) follow
}
display_profiles(person_speaking) {
    if (person_speaking === "???" || person_speaking === "Farmer Giuseppe") {
      push();
      translate(110, 550);
      image(this.farmer_profile, 0, 0, 130, 100);
      pop();
    } else if (person_speaking === "Luke") {
      push();
      translate(110, 550);
      image(this.luke_profile, 0, 0, 100, 130);
      pop();
    }
  }
display_message_box_and_text(farmer_giuseppe_x, farmer_giuseppe_y, luke_x, luke_y, farmer_title, carrots) {
    if (
      dist(farmer_giuseppe_x, farmer_giuseppe_y, luke_x, luke_y) <=
      width / 8
    ) {
      textAlign(LEFT);
      fill(0);

      //--------------------------------------------------//
      if (this.dialogue_current_state === "Character Introduction") {
        push();
        // Center horizontally using message_box width
        translate(width / 2, 550);
        image(message_box, 0, 0);
        pop();

        // Show current message
        let person_speaking =
          this.current_message_index % 2 === 0 ? "Luke" : farmer_title;
        this.display_profiles(person_speaking);
        textSize(17);
        text(person_speaking, 200, 500, 85, 120);
        text(
          this.intro_messages[this.current_message_index],
          300,
          500,
          300,
          120
        );

        // Show continuation prompt
        if (this.current_message_index <= this.intro_messages.length - 1) {
          text("[Click to continue]", 300, 590, 600, 20);
        }

        // Advance dialogue on mouse press
        if (
          mouseIsPressed &&
          !this.input_blocked_cooldown &&
          frameCount % 10 === 0
        ) {
          if (this.current_message_index < this.intro_messages.length - 1) {
            this.current_message_index++;
          } else {
            this.dialogue_current_state = "Quest Introduction";
            this.current_message_index = 0;
          }
          this.input_blocked_cooldown = true;

          // 200ms cooldown between advances
          // Prevents accidental skips/rapid firing
          setTimeout(() => (this.input_blocked_cooldown = false), 200);
        }
      }
      // ... remaining dialogue states are handled similarly ...
    } // end of distance check
  } // end of display_message_box_and_text()
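
To illustrate the carrot array mentioned above, here is a minimal sketch of how each carrot’s status might be tracked; the field and function names are hypothetical, not the exact ones from my project.

// Hypothetical sketch: storing many carrots in an array and tracking each
// carrot's status ("unpicked", "picked", or "replanted").
let carrots = [];

function setupCarrots(count) {
  for (let i = 0; i < count; i++) {
    carrots.push({
      x: random(100, 500),   // position in the farm scene
      y: random(300, 400),
      status: "unpicked",
    });
  }
}

function countHarvested() {
  // Picked carrots count toward the quest; replanted ones do not
  return carrots.filter((c) => c.status === "picked").length;
}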

4. Problems Encountered & Areas for Improvement

Throughout the course of the project, I encountered many problems, which allowed me to improve my debugging and coding skills. Often, p5.js reports code errors that point to a line number; I check the code around that line and the code associated with the objects referenced there. This has helped me solve many errors, ranging from simple syntax errors, such as a missing closing bracket (easy to introduce when there are nested if conditions within a function within a class), to slightly more complex errors involving something “undefined” (often because I forgot to update the path to a video or image in the preload function). For me, the most challenging problems are when the game runs without p5.js throwing any error but does not behave as intended – logical errors. For example, I encountered this problem: the message box displayed initially when Luke was in the vicinity of the farmer, but after a certain dialogue state was complete, it also displayed when Luke was farther away. My intention was to have the message box display only when Luke is near the farmer. This problem was resolved by being even more specific with my if conditions. One of the most challenging unresolved problems was that after the second video called in draw() plays, the video in the next chain of states won’t play. I researched online, used AI, and tried different approaches (such as using image() in draw(), which produced only a static frame; using onended() callbacks; and setting an if condition on whether the video’s current time exceeds its duration minus one second), but none of them worked. It was very difficult, and I spent more than five hours trying to resolve it before deciding to give the idea up and simply not playing videos in consecutive game states.

A key area for improvement is expanding the affordances of the experience in multiple ways, such as adding more characters. Another way is to add objects that are not simply part of the background but can be interacted with – objects similar to the tool that can be “grabbed” by the user and used to do something, such as pull carrots. Perhaps I could add a painting that can be clicked and zoomed in on, depicting the farmer and his family, along with text on their family history in Avella (a minimal sketch of this idea follows below). A third way to expand affordances is to let the player not only explore new places but also return to previously explored ones.
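
The clickable painting could work roughly as sketched below; the image path, coordinates, and text are hypothetical placeholders, not part of the current project.

// Hypothetical sketch of a clickable, zoomable painting in p5.js.
let paintingZoomed = false;
let paintingImg;

function preload() {
  paintingImg = loadImage("assets/family_painting.png"); // hypothetical asset path
}

function setup() {
  createCanvas(600, 600);
}

function draw() {
  background(220);
  if (paintingZoomed) {
    // Enlarged view with the family history text
    image(paintingImg, width / 2 - 200, height / 2 - 150, 400, 300);
    fill(0);
    text("The farmer's family has lived in Avella for generations...", 50, height - 40);
  } else {
    // Small painting hanging on the wall
    image(paintingImg, 100, 80, 80, 60);
  }
}

function mousePressed() {
  if (!paintingZoomed && mouseX > 100 && mouseX < 180 && mouseY > 80 && mouseY < 140) {
    paintingZoomed = true;   // zoom in when the small painting is clicked
  } else if (paintingZoomed) {
    paintingZoomed = false;  // any click closes the enlarged view
  }
}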

Week #5 – Midterm Progress

1. Concept

I was inspired by the ‘coffee shop expo’ project, where the user can explore a place and click on different buttons freely; but I wished for more room for exploration and especially more freedom, in the sense that the user can control a sprite using keys to move around. Then, if this sprite lands on a specific place, a new place of discovery opens up.

I spent a considerable amount of time developing the concept: a casual 2D RPG adventure game in a pixelated world with a medieval setting, with story snapshots (as clear, not-so-pixelated images) appearing from time to time as quests are completed. The user takes on the role of a character (represented by a sprite) who has migrated to a new city and wants a job to earn some money. There is a task board in a tavern where villagers come to post various miscellaneous tasks. The user can pick a job from it; some examples are below, though my goal is to incorporate at least two of the following:

    • fetch water from the well,
    • harvest a number of carrots,
    • doctor’s mission – fetch herbs for medicinal purposes,
    • help me find my dog,
    • blacksmithing job,
    • help deliver letter.

I was wondering whether it would be strange to have both pixelated and non-pixelated graphics. After I explained my concept to my friend, my friend thought of an existing game like that: “Omori.” Omori has pixelated sprites but clear CG scene snapshots, as well as iconic music – things to get inspired by – and a fan wiki for sprites which I could try to make good use of.

Interactive elements are to be present, such as a mouse click for opening a door and revealing a character’s speech. Audio is also heard in different story scenes and should be chosen based on the atmosphere – e.g. music with a sense of urgency at the task board, relaxing music in the village, music with a sense of triumph and excitement when a quest is completed, etc. On-screen text could be used to beckon and inform the user through narration, instructions, and dialogue.

After the experience is completed, there must be a way to restart the experience (without restarting the sketch).
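
A minimal sketch of restarting without reloading the sketch might look like the following; the state variables and the restart key are hypothetical assumptions.

// Hypothetical sketch: restart the experience by resetting state, not the page.
let gameState = 0;       // e.g. 0 = opening, 5 = quest complete
let questProgress = 0;

function resetGame() {
  gameState = 0;
  questProgress = 0;
  // ... also reset character positions, dialogue state, harvested items, etc.
}

function keyPressed() {
  // Offer a restart only on the final screen
  if (gameState === 5 && (key === 'y' || key === 'Y')) {
    resetGame();
  }
}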

2. Code Highlights

I think the main challenge would be in the OOP, specifically making the character classes. I found a useful resource for collecting sprites: Spriter’s Resource. In particular, I would like to use village sprites from Professor Layton and the Curious Village. Here are the Character Profiles. I selected the following from the various characters in Professor Layton and the Curious Village:

  • Luke (user’s character)
  • Franco (farmer who needs help with harvesting carrots),
  • Ingrid  (neighbour grandma who needs help with delivering a letter),
  • Dahlia (noblewoman with a lost cat),
  • Claudia (Dahlia’s cat),
  • Lucy (young girl who needs help with getting herbs for her sick mother),
  • Flora (mysterious herb seller).

Since the challenge lies in OOP, I would like to practise making an object, namely the user’s character “Luke.”

In developing the code, I found it challenging to adapt the walk animation code (discussed in class with Professor Mang) in two ways: (1) into an OOP format and (2) to my spritesheet, which does not have different walking frames and does not show the sprite facing different directions. With (1):

  • I decided to have variables from the walk animation placed into the constructor as the class’s attributes.
  • Instead of keyPressed() as in the walk animation, I used move() and display() methods, since keyPressed() cannot live inside a class (it needs to be a global function for p5.js to call it); a sketch of a possible move() body appears after the class snippet below.
class User_Luke {
  constructor() {
    this.sprites = [];
    this.direction = 1;  // 1 = right, -1 = left
    this.step = 0;
    this.x = width/20;
    this.y = height/15;
    this.walkSpeed = 3;
    this.scaleFactor = 0.2; // Scaling factor
    
    // 6 images across, 2 down, in the spritesheet

    let w = int(luke_spritesheet.width / 6);
    let h = int(luke_spritesheet.height / 2);

    for (let y = 0; y < 2; y++) {
      this.sprites[y] = [];
      for (let x = 0; x < 6; x++) {
        this.sprites[y][x] =
          luke_spritesheet.get(x * w, y * h, w, h);
      } // iterate over rows
    } // iterate over columns

    this.x = width / 2;
    this.y = height / 2;

    imageMode(CENTER);

    // Display first sprite
    image(this.sprites[0][0], this.x, this.y);
  }
  move() {
    ...
  }
  display() {
    ...
  }
}
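
Since move() is elided above, here is a minimal sketch of what its body might look like, polling the arrow keys with keyIsDown() every frame; this is an assumption, not the exact project code.

// Inside class User_Luke (hypothetical move() body)
move() {
  if (keyIsDown(RIGHT_ARROW)) {
    this.x += this.walkSpeed;
    this.direction = 1;
    this.step = (this.step + 1) % 6;   // advance the walk frame
  } else if (keyIsDown(LEFT_ARROW)) {
    this.x -= this.walkSpeed;
    this.direction = -1;
    this.step = (this.step + 1) % 6;
  }
  if (keyIsDown(UP_ARROW))   this.y -= this.walkSpeed;
  if (keyIsDown(DOWN_ARROW)) this.y += this.walkSpeed;
}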

With (2), I set conditions on whether direction is 1 (facing right) or -1 (facing left). Since my spritesheet only shows the sprites facing one direction, I used an image transformation:

class User_Luke {
  constructor() {
    ...
  }
  move() {
    ...
  }
  display() {
    let spriteWidth = this.sprites[0][0].width * this.scaleFactor;
    let spriteHeight = this.sprites[0][0].height * this.scaleFactor;
    
    // Finally draw the sprite
    // The transparent areas in the png are not
    // drawn over the background
    if(this.direction === -1) {
      image(this.sprites[0][0],this.x,this.y, spriteWidth, spriteHeight)
    } 
    else if(this.direction === 1) {
      // We will use the scale() transformation to reverse the x-axis.
      // The push and pop functions save and reset the previous transformation.
      push();
      // Scale -1, 1 means reverse the x axis, keep y the same.
      scale(-1, 1);
      // Because the x-axis is reversed, we need to draw at different x position.
      image(this.sprites[0][0], -this.x, this.y, spriteWidth, spriteHeight);
      pop();
    }
  }
}

I also noticed my sprite appeared at a huge size – to deal with this, I applied a scale factor to spriteWidth and spriteHeight (already shown in the code above).

3. Embedded Sketch

4. Reflection and Next Steps

I experienced multiple challenges along the way, but I gained valuable experience with OOP. I feel that making the next sprites won’t be so challenging, since I can use Luke’s code as a reference and adapt it. I think it will be important to plan deadlines for myself, since there are big subtasks for this midterm project, including:

  • Finding background(s)
  • Finding snapshots
  • Coding all the sprites
  • Interactive elements – door open animation, ‘choose a job’ buttons

Reading Reflection – Week#5

  • What are some of the ways that computer vision differs from human vision?

No computer vision algorithm is universally able to perform its intended function (e.g. distinguishing humans from the background) on any kind of input video, unlike the human eye and brain, which generally work together to do so. Instead, an object detection or tracking algorithm crucially relies on particular assumptions about the real-world scene it is meant to study. If the algorithm’s assumptions are not met, it may perform poorly, produce results of little value, or fail completely. Take a first example: frame differencing is a computer vision technique that detects objects by detecting their movement. This is achieved by comparing corresponding pixels of two frames and finding the difference in color and/or brightness between them. Thus, frame differencing performs accurately given “relatively stable environmental lighting” and “a stationary camera (unless it is the motion of the camera which is being measured).” Hence, videos with a lot of active movement, like NBA games, would be much more suitable than videos of focused people in an office. In addition to frame differencing, background subtraction and brightness thresholding are further examples where certain presumptions are important for computer vision tasks. Background subtraction “locates visitor pixels according to their difference from a known background scene,” while brightness thresholding uses “hoped-for differences in luminosity between foreground people and their background environment.” Thus, considerable contrast in color or luminosity between foreground and background is important for accurate recognition of objects; otherwise, for example in nighttime scenes, the algorithm may incorrectly classify objects in the video as background. On the other hand, I personally feel that the human eye remarkably uses a combination of these three approaches, and perhaps more, to detect objects, which allows it to perform extraordinarily well compared to current computer vision.
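
As a rough illustration of the frame differencing idea described above, here is a minimal p5.js sketch that sums the brightness differences between the current and previous webcam frames; it is a simplified assumption of the technique, not code from the reading.

// Frame differencing sketch: total pixel-brightness change between frames.
let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  background(0);
  video.loadPixels();
  prevFrame.loadPixels();

  let movement = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Compare brightness of corresponding pixels in the two frames
    const currB = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    const prevB = (prevFrame.pixels[i] + prevFrame.pixels[i + 1] + prevFrame.pixels[i + 2]) / 3;
    movement += abs(currB - prevB);
  }

  // Store the current frame for the next comparison
  prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);

  fill(255);
  text("total frame difference: " + int(movement), 10, 20);
}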

  • What are some techniques we can use to help the computer see / track what we’re interested in?

It is of great importance to design the physical environment with conditions best suited to the computer vision algorithm and, conversely, to select software techniques best suited to the physical conditions at hand. Several examples stood out to me for enhancing the suitability and quality of the video input provided to the algorithm. Infrared illumination (as used in night vision goggles) can complement conventional black-and-white security cameras and massively boost the signal-to-noise ratio of video taken in low-light conditions. Polarizing filters are useful for handling glare from reflective surfaces, especially in celebrity shows. Of course, there are also many cameras to consider, optimized for “conditions like high-resolution capture, high-frame-rate capture, short exposure times, dim light, ultraviolet light, or thermal imaging.”

  • How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

Computer vision’s capacity for tracking and surveillance opens doors for interactivity between the computer and the human body, gestures, facial expressions, and dialogue. Already, some sophisticated algorithms can correctly identify facial expressions, which could be used to gauge someone’s emotional state and applied in mental health initiatives to help people who are suffering emotionally. This might relate to Cheese, an installation by Christian Möller. Additionally, as in Videoplace, participants could create shapes using gestures, and their silhouettes in different postures can be used to form different compositions. If computer vision were combined with audio and language, such systems could become even more interactive as their affordances increase.

Reading Reflection – Week#4

  • What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?

Personally, something that might frustrate me is an object for which multiple people hold different conceptual models – imagine the complications if these models conflict. Earphones might not necessarily drive me crazy, but they are designed to be worn wrapped around the ear, and countless people simply plug them straight into their ears. I realize the problem is usually a lack of a signifier indicating the wrapping action around the ear, and that users tend to focus on the earbud, which goes into the ear, leading them to plug the earphone in without wrapping the cable around the ear. I pondered how earphones could be designed to clearly communicate how they should be worn; the best design I can think of involves a long, rounded rubber ear hook placed before the earbud, which signals to the user that it should rest along the antihelix – and it can only sit that way if the cable is wrapped around the ear.

Right way to wear earphones. Cited from https://www.aizerd.com/news/the-right-way-to-wear-headphones.html
Parts of the earphone (see the top-left for ear-hook). Cited from https://www.pinterest.com/pin/663084745114270948/.

Taking careful steps in designing an object is necessary, and it is crucial to consider the conceptual model users might construct, for conceptual models are valuable for understanding, for predicting how things will behave, and for figuring out what to do when things do not go as planned.

  • How can you apply some of the author’s principles of design to interactive media?

I learned to consider affordances – the possible interactions between people and the environment – and to incorporate signifiers that signal what actions are possible and how they should be done. Signifiers must be perceivable, else they fail to function. I also learned that in design, signifiers are more important than affordances, for they communicate how to use the design. A signifier can be words, a graphical illustration, or just a device whose perceived affordances are unambiguous.

I have a real-life experience of the need for design improvement: applying the author’s principles of design for interactive media to my last data visualization project. The project involves viewing stellar objects in 3D space, allowing users to zoom in, zoom out, and move across the space using mouse controls. Excitedly, I asked my friend to try it. She was keen to try out my project, but she didn’t seem delighted with her experience. Problems? There was not just one, but several:

  1. I didn’t want users to lose view of the stellar objects during their exploration, so when the mouse was released, I automatically reset the view to the initial view. But that’s not how someone else might want it to work. My friend probably found this sudden reset unnatural and may have thought my animation wasn’t functional. She suggested offering reset as an option via a button instead (see the sketch after this list). I learned that it’s preferable for the view to stay wherever the exploration left it after the mouse is released – the user can then pick up where they left off. This increases the affordances (possible interactions between people and the environment).
  2. Due to lack of time, I was also unable to add a signifier that the view would reset. This caused confusion for the user.
  3. Feedback was quite slow, which may have played a part in deterring my friend from continuing to try the animation and sent her off to other activities. The delay seemed too long. According to the reading, “Feedback must be immediate: even a delay of a tenth of a second can be disconcerting.” By contrast, imagine quick feedback in a space simulation – smooth zooming and tilting through space – how inviting to continue exploring.
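
Here is a minimal sketch of the reset-button idea from point 1, assuming a WEBGL canvas; the camera values and placeholder geometry are hypothetical, not those of the actual project.

// Hypothetical sketch: explicit reset button instead of an automatic reset.
let resetRequested = false;

function setup() {
  createCanvas(600, 600, WEBGL);
  const resetButton = createButton("Reset view");
  resetButton.position(10, 10);
  resetButton.mousePressed(() => (resetRequested = true));
}

function draw() {
  background(10);
  if (resetRequested) {
    // Only reset when the user explicitly asks for it
    camera(0, 0, 800, 0, 0, 0, 0, 1, 0);
    resetRequested = false;
  }
  orbitControl(); // otherwise, leave the view wherever the user left it
  noStroke();
  sphere(50);     // placeholder for the stellar objects
}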

I hope to incorporate these design considerations and principles in my upcoming assignments and future projects. I would love to help users enjoy a wonderful experience!

Assignment #4 – Data Visualization

1. Concept

Faced with the decision between working on data visualization and creating generative text output, I chose data visualization: I already had a good understanding of generative text from class, and working on data visualization could help me gain some valuable new skills.

I was inspired by the data visualization of flights in the U.S. I was amazed that a flight network could convey so much information, such as through its resemblance to U.S. geography and where the large airports are.

I was interested in applying data visualization mapping techniques to astronomical data. I explored Kaggle datasets and decided the Gaia Astronomical Data would be very appropriate for my project, as I noted I could visualize the data in 3D space using the x, y, and z coordinates (in parsecs).

I also explored other space visualization projects. In particular, I was inspired by the European Space Agency’s Star Mapper, a visualisation based on data from the European Space Agency’s Hipparcos star mapper. Not only were the views under the ‘Explore’ tab beautiful and breathtaking (see images below), but I also especially enjoyed the interactivity of the website: being able to control the view of the stars with my mouse while it is pressed. I intended to incorporate this into my p5.js animation.

The project involves manipulating the Gaia Astronomical Data to map stellar objects into 3D space and enabling users to zoom in, zoom out, and move across the space using mouse controls. This was my very first time animating in 3D space, as well as controlling the camera and perspective through code! How exciting.

I used the wind turbine data code template discussed in class and made many revisions to obtain the outcome.

2. Code Highlights

I encountered various major challenges, resolved with the help of AI.

First, my goal included being able to move across the space using mouse controls, which was new to me; watching the Coding Train’s WebGL and p5.js tutorial on camera and perspective helped me understand these unfamiliar concepts in p5.js. I learned important ideas including field of view (fov), perspective(), orbitControl(), and camera([x], [y], [z], [centerX], [centerY], [centerZ], [upX], [upY], [upZ]). It was still very challenging, and with the help of AI, I learned to incorporate astronomical measurements:

  • in mapping mouse X-position to 30°-90° zoom range. Wider FOV (90°) shows more galactic context, narrower (30°) focuses on specific star formations. Uses actual astronomical angular measurements (π/6 = 30°, π/2 = 90°).
  • in perspective, which sets viewing frustum matching typical astronomical scales: 1 parsec near clip (prevents nearby masers from clipping); 10000 parsec far clip (accommodates distant OB associations).

At some point in the project’s development, the stellar objects weren’t visible in the initial view – though if I moved my mouse to around mouseX = 395 and mouseY = 340, they became visible. This posed a problem, because users might not realize at first glance that there are stars to explore. So I used target mouse coordinates to bring the stellar objects into the initial view: const targetMouseX = 397.5 and const targetMouseY = 340, together with camera(initialCamX, initialCamY, initialCamZ, initialLookAtX, initialLookAtY, 0, 0, 1, 0). Since this camera call runs whenever the mouse is not pressed, the user’s view resets to a position from which they can see the objects in case they lose sight of them.

function draw() {
  background(10); // Darker background for better contrast
  
  // Field of View - Only update if mouse is pressed. 
  let fov;
  if (mouseIsPressed) {
    // Maps mouse X-position to 30°-90° zoom range. Wider FOV (90°) shows more galactic context, narrower (30°) focuses on specific star formations. Uses actual astronomical angular measurements (π/6 = 30°, π/2 = 90°).
    fov = map(mouseX, mouseY, width, PI / 6, PI / 2);
  } else {
    fov = map(targetMouseX, targetMouseY, width, PI / 6, PI / 2); // Use targetMouseX
  }
  // Sets viewing frustum matching typical astronomical scales: 1 parsec near clip (prevents nearby masers from clipping); 10000 parsec far clip (accommodates distant OB associations)
  perspective(fov, width / height, 1, 10000);

  // Camera Controls - orbitControl handles movement when pressed.
  if (!mouseIsPressed) {
        camera(initialCamX, initialCamY, initialCamZ, initialLookAtX, initialLookAtY, 0, 0, 1, 0);
  } //orbitControl handles the rest
  
  print(mouseX, mouseY);
  ...
}

Second, since I needed to move across space, it was important to draw all stellar objects every frame, so that the view of all stellar objects adjusts as the user moves the cursor. The original code, where each data point was drawn only once, needed to be adjusted. Processing the relevant coordinates for each stellar object is performed in a function called from setup() instead of draw(). An array of stellar objects holds the data, with each element being an object storing that stellar object’s xpos, ypos, zpos, and size (determined by distance from the viewer).

function findMinMaxXYZAndProcessData() {
  let singleRow = [];

  // loop over each row in the file
  for (let csvRowNumber = 1; csvRowNumber < strings.length; csvRowNumber++) {
    // get a single row and split that row
    // into individual words
    singleRow = split(strings[csvRowNumber], ",");

    // The fields at indices 5, 6, and 7 are the
    // x, y, and z coordinates and so they are
    // numerical:
    let x = float(singleRow[5]);
    let y = float(singleRow[6]);
    let z = float(singleRow[7]);

    // The file may be missing a field, in which case
    // the conversion to a float might have failed
    if (isNaN(x) || isNaN(y) || isNaN(z)) {
      print("conversion to float failed; skipping row " + csvRowNumber);
      continue;
    }
    // Initialize min/max on the first valid row.
    if (minX === undefined || x < minX) minX = x;
    if (maxX === undefined || x > maxX) maxX = x;
    if (minY === undefined || y < minY) minY = y;
    if (maxY === undefined || y > maxY) maxY = y;
    if (minZ === undefined || z < minZ) minZ = z;
    if (maxZ === undefined || z > maxZ) maxZ = z;

    // --- Data Processing and Storage ---

    // Calculate mapped positions *here* (during setup).
    let xpos = map(x, minX, maxX, -width / 2, width / 2);
    let ypos = map(y, minY, maxY, -height / 2, height / 2);
    let zpos = map(z, minZ, maxZ, -width / 2, width / 2);
    let screenZ = map(z, minZ, maxZ, 8, 1); // Size based on distance.

    // Create an object to store the data for this stellar object.
    stellarObjects.push({
      x: xpos,
      y: ypos,
      z: zpos,
      size: screenZ,
      // You could add color, type, or other attributes here from the CSV.
    });

    if (csvRowNumber >= strings.length) {
      print("finished");
      noLoop();
    }
  }

  print("X (min, max) = (" + minX + "," + maxX + ") ");
  print("Y (min, max) = (" + minY + "," + maxY + ")");
  print("Z (min, max) = (" + minZ + "," + maxZ + ")");
} // end of findMinMaxXYZ
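
For context, the stellarObjects array filled above is what later gets rendered every frame. Below is a minimal sketch of how those objects could be drawn in WEBGL; it is an illustration only, assuming a WEBGL canvas and the array loaded as shown, and not my exact draw(), which also handles the camera and the text overlay described next.

// Illustrative only: drawing the stored stellar objects in WEBGL.
// Assumes createCanvas(windowWidth, windowHeight, WEBGL) in setup()
// and stellarObjects filled by findMinMaxXYZAndProcessData().
function draw() {
  background(0);
  noStroke();
  fill(255);
  for (let obj of stellarObjects) {
    push();
    translate(obj.x, obj.y, obj.z);
    sphere(obj.size); // size was mapped from the z value during loading
    pop();
  }
}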

Third, my WebGL text apparently read backwards. My best guess was that this was caused by the viewpoint (I was behind the stellar objects rather than in front of them, or my z-coordinates were flipped the wrong way). I made an attempt to fix this, but it didn’t work. So, with the help of AI, I learned to reset to an orthographic projection and the default camera just for drawing the text, after which the text read left to right.

// --- 2D Overlay (Text) ---
// Disable depth test for 2D overlay
drawingContext.disable(drawingContext.DEPTH_TEST);

// Reset the camera for 2D drawing (orthographic projection)
ortho();
camera(); // Reset to default camera

// Draw the textScreen
image(textScreen, -width / 2 + 20, -height / 2 + 20);

// Re-enable depth test for 3D rendering in the next frame
drawingContext.enable(drawingContext.DEPTH_TEST);

 

3. Embedded Sketch

4. Reflection and Areas for Future Improvement

This project let me improve my 3D data-visualization skills considerably and gave me a valuable first opportunity to use WebGL for 3D animation in a personal project. An area for future improvement could be animating the stellar objects in motion, as if in orbit.

Assignment #3 – Functions, Arrays, and OOP

1. Concept

Since I had already implemented OOP before, I was expected to challenge myself more on the creative side. I brainstormed various ideas:

  1. drawing a large spiral galaxy, planets, and regular shapes (squares, triangles) on the canvas
  2. an initially all-black canvas, except perhaps for the area around the cursor, which would “glow” and reveal dots forming a face portrait, with interesting elements such as planets as earrings
  3. weighted Voronoi image stippling – this looked cool, but I couldn’t see a way to use OOP with it

The challenge in this process was avoiding generic ideas. Based on this week’s reading, interactivity matters in design, so I considered interactive elements, especially for idea 2:

  • change the colour of features (e.g. hair) based on music
  • ask the user to upload a music file
  • on the user’s click, display new art made of dots closely resembling the input image

The problem was that idea 2 would take too much time. I had learned from the first assignment to plan a project that is realistically achievable within the time constraint. I found the ideation phase time-consuming and particularly challenging, and discussing it with friends helped. I decided to save the ideas I had brainstormed for a later assignment and to work with simple objects where I could implement OOP.

The concept for my piece is “PARTY TIME.” It shows an evergreen tree with snow in winter. That calls for a Tree class and a Snow class, and since many snowflakes are needed, an array can store the snow objects. I also wanted to try twinkling lights if time permitted.

2. Code Highlights

I started by creating the Tree class. A simple evergreen tree can be represented with three green triangles for its leafy parts (increasing in size towards the bottom) and one brown rectangle for the trunk. So I defined attributes for the x- and y-coordinates of each triangle’s top point. On Professor Shiloh’s advice, I tried not to hard-code the numbers.

class Tree {
  constructor() { 
    this.topTriangleTopX = width/2;
    this.topTriangleTopY = (2 / 7) * height + 10;
    this.middleTriangleTopX = width/2;
    this.middleTriangleTopY = (3 / 7) * height;
    this.bottomTriangleTopX = width/2;
    this.bottomTriangleTopY = (4 / 7) * height;
    this.woodCenterX = width/2;
    this.woodCenterY = (6/7) * height;
    this.woodWidth = width/10;
    this.woodHeight = height/7;
    this.color = color('hsb(160, 100%, 50%)');
  }
  ...
}
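
The actual drawing code is elided above. Below is a rough sketch of what a draw() method built only from these attributes could look like; it is an illustration to sit inside the class, not the exact method from my sketch, and the triangle spreads and drops (width / 8, height / 7, and so on) are my own guesses.

// Illustrative sketch of a draw() method for the Tree class above
// (to go inside the class); the triangle spreads are my own guesses.
draw() {
  // trunk
  fill('saddlebrown');
  noStroke();
  rectMode(CENTER);
  rect(this.woodCenterX, this.woodCenterY, this.woodWidth, this.woodHeight);

  // three green triangles, wider towards the bottom
  fill(this.color);
  triangle(this.topTriangleTopX, this.topTriangleTopY,
           this.topTriangleTopX - width / 8, this.topTriangleTopY + height / 7,
           this.topTriangleTopX + width / 8, this.topTriangleTopY + height / 7);
  triangle(this.middleTriangleTopX, this.middleTriangleTopY,
           this.middleTriangleTopX - width / 6, this.middleTriangleTopY + height / 6,
           this.middleTriangleTopX + width / 6, this.middleTriangleTopY + height / 6);
  triangle(this.bottomTriangleTopX, this.bottomTriangleTopY,
           this.bottomTriangleTopX - width / 4, this.bottomTriangleTopY + height / 4,
           this.bottomTriangleTopX + width / 4, this.bottomTriangleTopY + height / 4);
}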

At first I created a Snow class. The idea was to have falling circles representing snowballs. The issue was that my draw function drew one row of circles that fell together: because every Snow object starts from the same y-position, there is only ever a single row of snow between the top and the bottom of the canvas, even if I reset the circles to the top once they reach the ground (illustrated in the short sketch after the class below). This was not lifelike at all.

There was one thing to appreciate, though: the snow didn’t fall straight down; it could drift left or right as it fell, which was fairly realistic.

class Snow {
  constructor(xSpeed0, ySpeed0) {
    this.xPos = random(0,600);
    this.yPos = 20;
    this.xSpeed = random(-1,1);
    this.ySpeed = ySpeed0; // downward fall speed (from the constructor argument)
    this.diameter = 10;
  }
  move() {
    // move the snowball
    this.xPos += this.xSpeed;
    this.yPos += this.ySpeed;
  }
  draw() {
    fill(255);
    noStroke();
    circle(this.xPos, this.yPos, this.diameter);
  }
}
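
To show why only one row appeared, here is a minimal usage sketch; the array name, counts, and speed ranges are my own and this is not my exact setup. Because every Snow object is constructed with yPos = 20, they all start from the same height and fall together as a single row.

// Illustrative usage of the Snow class above (names and counts are my own).
let snowballs = [];

function setup() {
  createCanvas(600, 600);
  // every Snow object starts at yPos = 20, so they fall as one row
  for (let i = 0; i < 30; i++) {
    snowballs.push(new Snow(random(-1, 1), random(1, 3)));
  }
}

function draw() {
  background(20, 20, 60);
  for (let s of snowballs) {
    s.move();
    s.draw();
  }
}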

To resolve this issue, I researched online and found that the p5.js examples include a Snowflake class that works in a similar way. I really appreciated its organic quality, which went beyond my attempt: it uses a sine function to create a wavy falling motion, as if the snow were affected by wind, which I have to say is pretty incredible to watch even after seeing it many times.

I played around with the Snowflake class, experimenting and researching to find out what each line does, and keeping what I liked. For instance, I watched a Khan Academy video to understand angular speed and tried setting it to 0 (the snow falls straight down) and to 100 (the rotational movement became far too obvious for my liking). The value 35 looked nice, so I kept it. I also learned that snowflakes of different sizes fall at different y-speeds: smaller snowflakes fall faster and bigger ones fall slower, using an equation whose graph has a shape similar to 1/x.

class Snowflake {
  constructor() {
    this.posX = 0;
    this.posY = random(-height, 0);
    this.initialAngle = random(0, 360);
    this.size = random(2, 5);
    this.radius = sqrt(random(pow(width / 4, 2)));
    this.color = color(random(200, 256), random(200, 256), random(200, 256));
  }

  update(time) {
    // Define angular speed (degrees / second)
    let angularSpeed = 35;

    // Calculate the current angle
    let angle = this.initialAngle + angularSpeed * time;

    // x position follows a sine wave, using the parametric equation for x coordinate (x = r sin(theta))
    this.posX = width / 2 + this.radius * sin(angle);

    // Smaller snowflakes fall faster and bigger ones slower (ySpeed is proportional to 1/size)
    let ySpeed = 8/this.size;
    this.posY += ySpeed;

    // When snowflake reaches the bottom, move it to the top
    if (this.posY > height) {
      this.posY = -50;
    }
  }

  display() {
    fill(this.color);
    noStroke();
    ellipse(this.posX, this.posY, this.size);
  }
}
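
For completeness, here is a minimal sketch of how the pieces could be wired together. The flake count, angleMode(DEGREES), the frameCount-based time value, and the assumption that Tree has a draw() method (sketched earlier) are my own, based on the comments above, and are not necessarily identical to my final sketch, where the background() call ended up in setup() as described below.

// Illustrative wiring of the Tree and Snowflake classes (assumptions noted above).
let tree;
let snowflakes = [];

function setup() {
  createCanvas(600, 600);
  angleMode(DEGREES); // the Snowflake comments treat angles as degrees
  tree = new Tree();
  for (let i = 0; i < 300; i++) {
    snowflakes.push(new Snowflake());
  }
}

function draw() {
  background(10, 10, 40); // in my final version this call moved to setup(), creating trails
  tree.draw();
  let t = frameCount / 60; // elapsed time in seconds, assuming ~60 fps
  for (let flake of snowflakes) {
    flake.update(t);
    flake.display();
  }
}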

For the stars, I tried to use source code for a starry night sketch. But as I tried to integrate it, I faced an issue I couldn’t resolve, even after debugging attempts (following the p5.js instructions: p5.js says: you have used a p5.js reserved function “currentTime” make sure you change the function name to something else.) and using AI. I couldn’t get the stars to appear and dynamically expand and contract as they should; they just looked like dashes.

Once I moved the background colour of the canvas from draw() to setup(), I found that the canvas shows the trailing movement of anything that moves, because previous frames are never cleared. It was technically a bug, and I knew how to resolve it, but it looked so cool that I decided to keep it. I took out the stars and made the trailing, colourful snow the main creative feature.

A slight dynamic element is a trailing cursor 🙂
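As a standalone toy example of the effect (not taken from my sketch): when background() is called only once in setup(), each frame draws on top of the last, so a circle following the mouse leaves a colourful trail.

// Toy example of the trailing effect (standalone, not from my sketch).
function setup() {
  createCanvas(400, 400);
  background(20); // drawn once; later frames are never cleared
}

function draw() {
  noStroke();
  fill(random(150, 255), random(150, 255), random(150, 255));
  circle(mouseX, mouseY, 15); // each frame adds a dot, leaving a trail
}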

3. Embedded sketch

4. Reflection and ideas for future work

I had several important experiences through this process. I was able to create the snowfall animation I had wanted to try since week 1, and I met my goals of implementing OOP, arrays, and functions. Since I couldn’t add the music, I hope to try that in the upcoming week(s).

This work could be further extended by expanding the concept and adding interactivity. One idea is that, on the user’s click, the seasons would change, showing summer, fall, winter, and spring, with the music changing to match, based on Vivaldi’s Four Seasons. I love the idea of music complementing the visuals.
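
As a rough, hypothetical starting point for the season-switching part of that idea (nothing here is implemented yet, and the music would still need p5.sound on top of it):

// Hypothetical starting point for the seasons idea (not implemented).
let seasons = ['summer', 'fall', 'winter', 'spring'];
let current = 2; // start in winter

function setup() {
  createCanvas(600, 600);
}

function mousePressed() {
  // each click advances to the next season; the palette and music would change here too
  current = (current + 1) % seasons.length;
}

function draw() {
  if (seasons[current] === 'winter') background(10, 10, 40);
  else if (seasons[current] === 'fall') background(90, 50, 20);
  else if (seasons[current] === 'spring') background(60, 120, 70);
  else background(70, 130, 180); // summer
  fill(255);
  textAlign(CENTER, CENTER);
  text(seasons[current], width / 2, height / 2);
}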