Reading Reflection – Week 8

Emotion & Design: Attractive things work better

I found Norman’s reading quite interesting, especially his argument that emotions affect how well we use objects. From the previous readings in this class, I had assumed good design was mainly about functionality and usability. I also thought that even if an object is not especially aesthetically pleasing, people would still use it as long as it works well. I did not realize how much emotional appeal plays a role in design. Norman claims that attractive designs can actually improve how people think and solve problems: when people feel positive emotions, they think more flexibly, while negative emotions make them more focused but also more stressed. This made me realize that design is not only about making something work well with clear directions, but also about how it makes people feel while using it.

I liked the example of the three teapots. Each teapot emphasized something different, and even though one of them is harder to use, Norman still keeps it because he enjoys how it looks. This made me understand that objects can carry emotional value. I started thinking about how many objects I own for the same reasons: I sometimes choose things because they look nice or match my mood, even if they are not the most practical option.

This reading also made me realize how much visual design choices, like color and layout, can shape how people interact with a product, beyond just making it usable. The reading definitely changed how I think about design by showing that usability and aesthetics should be combined, especially to create that emotional appeal. But I wonder: can something ever be too beautiful? Can an object’s attractiveness ever get in the way of its function?

 

Her Code Got Humans On The Moon — And Invented Software Itself

The reading about Margaret Hamilton’s work really inspired me. The fact that the concept of software did not even exist when she started working on the Apollo missions is crazy. I never thought about how much space travel depended on programming. It made me realize how much courage and creativity it must have taken to build something entirely new, especially as a working mother in a male-dominated field and era. Not only did she lead her team successfully, but she also fought to have her ideas heard, even when people initially dismissed her concerns.

I was amazed by the story of the Apollo 11 landing, where the computer started giving error messages. But because Hamilton designed it to prioritize the most important tasks, the module still landed safely. She showed how important it is to anticipate problems before they even happen.

This reading made me reflect on how many of the innovations we take for granted today were built by people who fought the system of their time. It also makes me wonder how many people today contribute so much work yet stay unrecognized, or have their credit handed to someone else. Hamilton’s story shows the importance of visibility and persistence, in tech and in life, something we should all take note of.

Week 8 – Unusual Switch (Face Expression)

https://github.com/skyorachorn/Intro-to-IM/blob/cd17b870a87cc39315260f98a0846555e5ce6a13/Week8_UnusualSwitchassignment.ino

See the video:

https://youtu.be/OVTbmxnJp2E?si=spcco5jThT1CARs4

Concept:

This project explores a hands-free switch using facial movement. Instead of using hands, I used eyebrow movement as an input. When I frown, the system detects it and turns on a sad red face. When I relax my face, it switches to a happy green face.

How it works:

The system uses a digital input to detect whether the switch is ON or OFF. This input comes from two pieces of copper tape whose contact is triggered by eyebrow movement.

If the input is HIGH, the green LEDs (happy face) turn on.

If the input is LOW, the red LEDs (sad face) turn on.

Handwritten Sketch Diagram:

Circuit:


 

Each LED is connected to a digital pin with a 330Ω resistor to limit current.
Red LEDs and green LEDs are connected to separate pins so they can be controlled independently.
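
Based on that wiring, the declarations and setup() would look roughly like the sketch below. The exact pin numbers and the pull-down wiring on the switch pin are my assumptions, since they are not shown in the post:

int greenPins[7] = {3, 4, 5, 6, 7, 8, 9};      // happy-face LEDs (assumed pins)
int redPins[7] = {10, 11, 12, 13, A0, A1, A2}; // sad-face LEDs (assumed pins)
const int switchPin = 2;                       // copper-tape switch (assumed pin)

void setup() {
  for (int i = 0; i < 7; i++) {
    pinMode(greenPins[i], OUTPUT);
    pinMode(redPins[i], OUTPUT);
  }
  // Assumes an external pull-down resistor, so the pin reads HIGH
  // only while the copper-tape contacts touch.
  pinMode(switchPin, INPUT);
}

void loop() {
  int state = digitalRead(switchPin);
  // ... the if/else block shown below drives the two faces
}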

Code that I’m proud of:

if (state == HIGH) {
  // Normal face → green LED lip
  for (int i = 0; i < 7; i++) {
    digitalWrite(greenPins[i], HIGH);
    digitalWrite(redPins[i], LOW);
  }
} else {
  // Angry face → red LED lip
  for (int i = 0; i < 7; i++) {
    digitalWrite(greenPins[i], LOW);
    digitalWrite(redPins[i], HIGH);
  }
}

 

Problem encountered:

Initially, I tried connecting multiple LEDs to one pin, but I learned from Professor Aya’s lecture that Arduino pins have current limits. I solved this by giving each LED its own pin and resistor.
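
As a rough sanity check (my own numbers, assuming a 5 V supply and a ~2 V LED forward voltage): each LED draws about (5 V − 2 V) / 330 Ω ≈ 9 mA, comfortably below the ~20 mA an Arduino Uno pin can safely source, whereas seven LEDs sharing one pin would have demanded several times that.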

Reflection:

This project helped me understand how digital input works and how the human body can be used as an interface. It also taught me about LED polarity, current limits, and proper LED connections. I found it interesting that something as simple as a facial movement can be turned into an interactive system.

Reading Reflection – Week 8: Her Code Got Humans On The Moon

Connecting to the other article, I found a fascinating contrast between emotional design and highly technical engineering. Hamilton’s work on the Apollo software emphasised reliability, error prevention, and anticipating human mistakes, especially in stressful situations where the users (astronauts) could not afford confusion. This aligns with the idea from the design reading that in high-pressure contexts, systems should prioritize clarity and function over aesthetics. However, both readings highlight the importance of understanding human behaviour overall, whether it is emotions affecting usability or human error affecting software systems. Personally, this connection made me rethink design as not just visual or technical, but deeply tied to human creativity and thinking. It also made me appreciate how Hamilton’s foresight in error-handling actually reflects a kind of “design thinking,” even in engineering. Together, the readings suggest that good design, whether in interfaces or software, comes from anticipating how humans actually act, think, feel, and make mistakes.

Another aspect of the Hamilton reading that I found particularly meaningful was how her work challenged both the technical and social assumptions about software and who could create it. At the time, software wasn’t even considered a central or prestigious part of engineering, and yet Hamilton insisted on its importance, eventually coining the term “software engineering.” This made me reflect on how new fields or ideas are often undervalued until their impact becomes undeniable. I also found it significant that she anticipated human error, such as in the famous Apollo incident where the system prioritised critical tasks during overload, which directly connects to the idea of designing for real human behaviour rather than ideal users. This raises a broader question for me: how many technologies today are still being designed under unrealistic assumptions about their users? Overall, this reading reinforced my understanding that innovation isn’t just about technical skill, but also about challenging norms and recognising problems that others overlook.

Reading Reflection – Week 8: Attractive Things Work Better

In the reading on design and emotion, the idea that “attractive things work better” caught my attention, not because attractive things are objectively more functional, but because they change how we think and feel while using them. This challenged my assumption that usability is purely logical and efficiency-based. I realized that whenever I feel relaxed or positive, I am actually more creative and tolerant of small problems, which aligns with the author’s argument about positive affect enhancing breadth-first thinking. This made me reflect on my own experiences with technology and even on my own projects, as I sometimes focus too much on making things technically correct and being a perfectionist rather than making them enjoyable. It also raises a question for me: if aesthetics can make users overlook flaws, is there a risk that designers might rely too much on appearance instead of improving actual functionality? I think the reading suggests a balance, but it leaves open how far that balance should go in real-world design.

Another idea from this reading that stood out to me was the contrast between positive and negative affect in shaping how we process information. The author explains that negative emotions can actually improve focus and depth of thinking, especially in stressful situations. This made me reconsider my assumption that feeling “good” is always better for performance. In fact, I can relate this to moments when I am under pressure, such as before exams or deadlines, when I become more focused and detail-oriented. However, this also raises a question: how can designers intentionally design for both emotional states? It seems difficult to create a system that supports both creative exploration and precise decision-making at the same time. This tension made me realise that design is highly dependent on context, and that there is no single “best” design, only designs that are appropriate for specific emotional and situational conditions.

week 8 reading response – her code got humans on the moon

Hamilton wanted to add error-checking code to the Apollo system that would prevent this from messing up the systems. But that seemed excessive to her higher-ups. “Everyone said, ‘That would never happen,’” Hamilton remembers.

But it did.

This really raises the question of whether they would’ve taken her concerns more seriously had she been male. It’s insane to think that, despite the extent to which she had already proven her competency, they still dismissed her – especially when her concerns turned out to be very reasonable. Even today, women have to work harder than men in most places to be perceived as equally competent – and I think plenty of women would understand what I mean here. So I can’t imagine how much more difficult it must have been back in the 60s.

Stereotype threat is a psychological phenomenon defined as the pressure a person feels to disprove negative stereotypes regarding a group they identify with – be it race, gender, class, etc. Studies have shown that stereotype threat (ironically) has a significant negative impact on a person’s performance. Even being reminded of the stereotype before a task can change the outcome. It’s a vicious self-fulfilling prophecy that further perpetuates negative stereotypes and can hurt a person’s self-esteem, which then further affects performance.

Being a woman who grew up in a misogynistic environment, I really struggled with this cycle for almost as long as I can remember. I have so much respect for Hamilton given what she must have had to overcome to thrive as she did.

week 8 reading response – attractive things work better

The author discussed two types of information processing: “affect” and cognition. Of course, these aren’t exact neuroscientific terms (as he mentions himself: “to avoid the technical debate … I use the reasonably neutral term of ‘affect'”), but I really appreciated his interpretation of this concept, as it reflects a very real biological mechanism that significantly impacts our day-to-day lives.

Neuroscientists understand that, to some extent, our reasoning can come after we make a decision. As animals first and foremost, we fundamentally operate on instinct and unconscious processes, especially in faster or emotionally salient situations.

Take a simple example — trying to escape a hazardous situation. Suppose that fleeing people encounter a door that won’t open. The anxiety-produced response is to try again, harder: when the first push doesn’t open the door, they press harder, kick, and even throw their bodies against it.

This is illustrated beautifully by studies on subjects with a severed corpus callosum: when one half of a subject’s brain is asked to explain an action that was triggered and carried out by the other half (thus completely outside its control and awareness), the subject may provide a reasonable rationale and experience the temporal illusion that this reasoning came first.

But all this thinking comes after the fact: the affective system works independently of thought. Your thoughts are occurring after the affective system has released its chemicals.

Affect, as described by the author, is an instinctual gut reaction, while cognition comes afterward. You can see this pattern emerge especially in fields such as philosophy. In fact, I think philosophy – particularly ethics – is a perfect example. There is an almost universal, intuitive sense of right and wrong among our species: a gut feeling that assigns value judgments, just as the author describes (this idea is controversial, but I’m referring to instinctual affect applied to and affected by our species’ advanced social environment.) Ethical philosophy emerges when someone attempts to construct a cognitive framework through which these gut value judgments can be derived. Of course, since these judgments are instinctual, there is no inherent logical framework underlying moral affect, which is why there is no universal agreement on which ethical philosophy is most reliable or “true” (as far as I know).

Each system impacts the other: some emotions — affective states — are driven by cognition, and cognition is impacted by affect.

Week 8 – Unusual Switch

Demo Below:

 Concept:

I have an extra Arduino kit I got years ago, so I decided to scour through the parts to see if I could find something to use, which I did! The switch here is more functional than unusual, but it’s hands-free nonetheless. The two main players here are an RFID reader and a dot matrix module. The RFID reader reads the card in your pocket while you walk through the door. If the card matches the accepted card(s), a green LED flashes and a smiley face is shown on the dot matrix module. However, if a person with the wrong card passes through the door, the red LED flashes and the dot matrix shows a huge X.

Implementation:

void loop() {
  if (!mfrc522.PICC_IsNewCardPresent() || !mfrc522.PICC_ReadCardSerial()) {
    return;
  }
...
}

We begin our loop with this guard. It simply checks whether the RFID module can read a card at all; if it can’t, the rest of the code won’t run.
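
The snippets below branch on a match flag that isn’t shown in the excerpts. A minimal sketch of how that comparison could work, checking the scanned UID byte by byte against a stored one (the 4-byte value here is a placeholder, not the real card):

// Hypothetical accepted UID; replace with the real card's bytes.
byte acceptedUID[4] = {0xDE, 0xAD, 0xBE, 0xEF};

// Compare the scanned card's UID against the accepted one.
bool match = (mfrc522.uid.size == 4);
for (byte i = 0; match && i < 4; i++) {
  if (mfrc522.uid.uidByte[i] != acceptedUID[i]) {
    match = false;
  }
}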

// Long green flash when correct card and show smile.
if (match) {
  Serial.println("ACCESS GRANTED");
  digitalWrite(GREEN_LED_PIN, HIGH);
  showSmile();
  delay(3000); 
  digitalWrite(GREEN_LED_PIN, LOW);
}

Here, if the scanned card matches the card we grant access to, we turn on the green LED and show the smile on the dot matrix module; this lasts for three seconds before turning things off.

else {
  Serial.println("ACCESS DENIED - ALARM");
  showX();

  // Repeated red flashing
  for (int i = 0; i < 5; i++) {
    digitalWrite(RED_LED_PIN, HIGH);
    delay(100);
    digitalWrite(RED_LED_PIN, LOW);
    delay(100);
  }
}

If the card that is read does not match the card we want, we show the X on the dot matrix and flash the red LED five times.

// Reset visuals
lc.clearDisplay(0);
lc.setLed(0, 0, 0, true); 
mfrc522.PICC_HaltA();
mfrc522.PCD_StopCrypto1();

At the end of the loop, we clear the display and turn on a single dot in the top left of the dot matrix to show that the system is on standby; the last two lines also tell the RFID module to stop communicating with the scanned card so a new one can be read.

showX() and showSmile() are functions that simply turn on the correct dots in the matrix to draw the image we want to show.
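
As an illustration, showSmile() might look something like this with the LedControl library, where each byte sets one row of the 8×8 matrix (the bitmap below is my guess at the face, not the author’s actual pattern):

void showSmile() {
  byte smile[8] = {
    B00111100,  // top of the head
    B01000010,
    B10100101,  // eyes
    B10000001,
    B10100101,  // mouth corners
    B10011001,  // bottom of the smile
    B01000010,
    B00111100   // chin
  };
  for (int row = 0; row < 8; row++) {
    lc.setRow(0, row, smile[row]);  // device 0, one row per byte
  }
}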

GitHub Code is here.

Reflection:

The only thing I would really add here is maybe a buzzer with two different sounds, one for granting access and one for rejecting it.
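
A possible sketch of that extension using Arduino’s built-in tone() (the buzzer pin and the note choices are hypothetical):

const int BUZZER_PIN = 6;  // hypothetical pin for a piezo buzzer

void grantedTone() {
  tone(BUZZER_PIN, 880, 150);   // short high beep
  delay(200);
  tone(BUZZER_PIN, 1319, 300);  // rising second note
}

void deniedTone() {
  for (int i = 0; i < 3; i++) {
    tone(BUZZER_PIN, 220, 120); // low repeated buzz
    delay(200);
  }
}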

 

Week 8 Reading Reflection

Emotion & Design: Attractive things work better

This was definitely an interesting reading, which, in summary, discussed the positive relationship between aesthetics and usability. Norman makes the point that although aesthetics do not play a role in a design’s function, they allow the user to forgive or overlook certain shortcomings.

I will admit, initially, I was skeptical about his idea. I assumed that it doesn’t matter whether a design is aesthetically pleasing; if it works, then that’s what matters. But my perspective shifted when one of the lines mentioned a clean versus a dirty-looking car, especially since cars are things I look at much more frequently than I analyze designs. That made me realize that, even without noticing it, we are drawn to more aesthetically pleasing designs and are almost tricked into thinking that something works better when it looks better. Thinking back on my midterm project, I can also see that if I had included more graphics and visual elements, players might have overlooked certain drawbacks in the game.

Overall, I got more and more convinced the more I read, and by the end I fully believed in his idea.

Her Code Got Humans on the Moon

I really appreciated how this reading highlighted the idea of Hamilton being the backbone of software programming and space exploration, especially the part that emphasized that she and the team were practically inventing key concepts of programming.

One of the things I particularly liked was the labelling of her as a positive radical and her ability to foresee certain mistakes before they happen. It really portrayed her passion towards programming and that she took it as more than just a career. It also made me think about her motivations when programming and her meticulous nature, especially since, as she mentioned, she faced ridicule for being a working mother. It made me question whether her detail-oriented nature was a result of her passion or of fear of further ridicule, criticism, and blame.

I thought the text was concise, straight to the point, and delivered an important message in a beautiful way. I loved how it didn’t stop at just the process of Hamilton programming before the Apollo launch, but even mentioned what happened during the Apollo launch and how Hamilton had to quickly find a way to fix the error that occurred despite her constant warnings. It focused on both the process and the outcome.

remy midterm project

 

Embedded sketch:
Overall concept:

My goal with this midterm was to create a demo of a video game – one that I’m planning to expand on in a future opportunity. The concept I had in mind for this demo was a retro-style horror pixel game that takes place in a lab. The player will experience a cutscene and then be placed into an environment where they must interact with the setting in some way.

The story, which isn’t really relevant in the demo version, is supposed to follow a young woman (the player character) working a late-night shift in a laboratory, where she begins to see things in the dark. Below are some of the sprites and assets (used and unused) I created for this project.

player character sprite sheet

unused non-player character sprite sheet

cutscene art

 

laboratory background

How it works and what I’m proud of:

To start with the assets and how I obtained them: all visual elements were drawn by me using the online pixel-art program Pixilart.com. All the sound effects and background noise were downloaded and cut from copyright-free YouTube sounds.

As for the code, rest assured there was absolutely no ChatGPT usage or any other form of AI coding. I did attempt to go to two friends – one senior CS major and one graduated CS major – and they somehow only managed to make things worse. I figured everything out myself through either research or agonizing, tear-inducing, Vyvanse-fueled trial and error.

Below I’ll share and briefly describe snippets of code I’m proud of.

//item toggling; ensuring you need to be within a certain distance, facing the item to interact with it, and the item is still in its initial state
if (keyIsDown(ENTER)) {
  //pc1
  if (
    pc1Opacity == 0 &&
    x > midX - bgWH / 2 + 220 &&
    x < midX - bgWH / 2 + 300 &&
    y == midY - bgWH / 2 + 390 &&
    direction === 1
  ) {
    pc1Opacity = opaque;
    inRange = true;
    //pc2
  } else if (
    pc2Opacity == 0 &&
    x > midX + bgWH / 2 - 280 &&
    y == midY - bgWH / 2 + 390 &&
    direction === 1
  ) {
    inRange = true;
    pc2Opacity = opaque;
    //pc3
  } else if (
    pc3Opacity == 0 &&
    x > midX + bgWH / 2 - 280 &&
    y == midY - bgWH / 2 + 390 &&
    direction === 3
  ) {
    inRange = true;
    pc3Opacity = opaque;
    //trash
  } else if (
    trashCanOpacity == 0 &&
    x > midX + bgWH / 2 - 460 &&
    x < midX + bgWH / 2 - 440 &&
    y == midY - bgWH / 2 + 390 &&
    direction === 1
  ) {
    inRange = true;
    garbageOpacity = 0;
    trashCanOpacity = opaque;
  } else if (
    tableOpacity == 0 &&
    x < midX + bgWH / 2 - 290 &&
    x > midX - bgWH / 2 + 310 &&
    y == midY + bgWH / 2 - 320 &&
    direction === 0
  ) {
    inRange = true;
    tableOpacity = opaque;
  } else {
    inRange = false;
  }
  //playing the toggle sound every time all parameters are met
  if (inRange) {
    toggle.setVolume(0.1);
    toggle.play();
  }
}

Okay, so I won’t say I’m exactly too proud of this one, because it’s really clunky and a bit repetitive, and I’m sure I would’ve found a much more efficient way to write it had I been more experienced. It does, however, do its job perfectly well, and for that I think it deserves a place here. It’s probably one of the parts I struggled with the least, given how straightforward it is.

for (let j = 0; j < 4; j++) {
  sprites[j] = [];
  for (let i = 0; i < 4; i++) {
    sprites[j][i] = spritesheet.get(i * w, j * h, w, h);
  }
}

//cycling through the sprite array, incrementing by the speed value when arrow keys are pressed. %4 resets it back to the first sprite in the row (0)
if (keyIsDown(DOWN_ARROW)) {
  direction = 0;
  y += speed;
  step = (step + 1) % 4;
} else if (keyIsDown(LEFT_ARROW)) {
  direction = 2;
  x -= speed;
  step = (step + 1) % 4;
} else if (keyIsDown(UP_ARROW)) {
  direction = 1;
  y -= speed;
  step = (step + 1) % 4;
} else if (keyIsDown(RIGHT_ARROW)) {
  direction = 3;
  x += speed;
  step = (step + 1) % 4;
  //when no key is being pressed, sprite goes back to the standing position (0,j)
} else {
  step = 0;
}

//keeping the sprite from walking out of bounds
if (y >= midY + bgWH / 2 - 320) {
  y = midY + bgWH / 2 - 320;
}
if (y <= midY - bgWH / 2 + 390) {
  y = midY - bgWH / 2 + 390;
}
if (x >= midX + bgWH / 2 - 180) {
  x = midX + bgWH / 2 - 180;
}
if (x <= midX - bgWH / 2 + 175) {
  x = midX - bgWH / 2 + 175;
}

I probably included this snippet in my progress post, since it’s the code I worked on before anything else. Everything else was kind of built around this. (keep in mind that in the actual sketch, the array is created in the setup function and the rest is in the draw function. I just combined them here for simplicity.)

function cutScene1() {
  background(0, 8, 9);
  jumpscare.setVolume(1);
  spookyNoise.setVolume(0.05);
  spookyNoise.play();
  
  //having the creature jitter randomly 
  let y = randomGaussian(midY + 50, 0.4);
  let wh = bgWH;

  tint(255, doorwayOpacity);
  image(doorway, midX, midY + 55, wh, wh);
  noTint();

  //creature fading in
  if (a >= 0) {
    a += 0.5;
    tint(255, a);
    image(creature, midX, y, wh, wh);
    noTint();
  }

  // triggering jumpscare once opacity reaches a certain value
  if (a >= 50) {
    jumpscare.play();
  }

  //ending the function
  if (a > 54) {
    doorwayOpacity = 0;
    background(0);
    spookyNoise.stop();
    jumpscare.stop();
    START = false;
    WAKE = true;
  }
}

This is one of the last functions I worked on. I actually messed this one up quite a bit because my initial attempts really overcomplicated the animation process, and I didn’t know how to make sure the code executed in a certain order rather than all at the same time. I tried using a for() loop for the creature fading in, and honestly I really hate for() and while() loops because they keep crashing for some goddamn reason and I kept losing so much progress. It didn’t occur to me at first that I could just… not use a for() loop to increment the opacity. It also took a few tries to get the timing right. One thing I’ll improve on here if I can is adding a visual element to the jump scare. I’d probably have to draw another frame for that.

Another thing I’d improve on is adding some dialogue and text-narration to the sequence so that the player has a better idea of what’s going on. I was also planning on implementing some dialogue between the player character and the doctor right after the cutscene, though I unfortunately didn’t have the time for that.

Overall, I’m mostly proud of the visual elements (I’ll be honest, I spent MUCH more time on the visual elements and designing the assets over the rest), because I think I managed to make everything look balanced and consistent – integrating the sprite well with the environment, while having the interactions remain, as far as I’m aware, bug free.

Complete Midterm


Since the project needed to be interactive, I was inspired by multiple sources I have experienced, like the interactive haunted Queen Mary story and the film “Night at the Museum”. I decided to do an interactive storytelling piece about a spooky story set in a museum. I enabled the player to make choices at multiple points in the story, leading to different endings accordingly. I wrote the story myself and organized the branching so that seemingly safe choices can take unexpected turns.

Surprisingly, though, the most difficult part of my project was not actually the coding but having AI generate images for me. With coding I was able to see what was wrong and physically fix it, and as long as I code it correctly it will do its job. But with AI-generated images, the model sometimes just doesn’t get what I am saying. And since the AI doesn’t actually see the images, it really has difficulty when I want it to edit or make changes to an image it generated.

The way my project works is that I put all the scenes of the “playing” game state in a giant object called storyData, made up of nested objects. The storyData is coded in an individual .js file, which keeps the main code organized since it only fetches the information. The properties of each scene, including its name, its relations to other scenes, and its audio, visual, duration, text, delay, and choice parameters, are all in the storyData file. An example is below.

storyData.scene1_part3 = {
  text: "In the darkness, you hear laughter in the corridors...",
  duration: 5000,
  textDelay: 1500,
  visual: "booth_dark",
  audio: "girl_laughter",
  choices: [
    { label: "Investigate", next: "scene2_part1" },
    { label: "Stay", next: "scene3_part1" },
  ],
};

This object file works through my drawCurrentScene function, which I am sort of proud of.

function drawCurrentScene() {
  background(0);
  let scene = storyData[currentSceneId];
  //image
  if (myImages[scene.visual]) {
    let currentImg = myImages[scene.visual];
    let aspectRatio = min(
      windowWidth / currentImg.width,
      windowHeight / currentImg.height
    );
    let drawWidth = currentImg.width * aspectRatio;
    let drawHeight = currentImg.height * aspectRatio;
    image(currentImg, windowWidth / 2, windowHeight / 2, drawWidth, drawHeight);
  }
  //timer
  let elapsedTime = millis() - sceneStartTime;
  //audio mechanism for delayed audio. Undelayed audio is handled in the changeScene function
  if (
    scene.audioDelay &&
    elapsedTime > scene.audioDelay &&
    lateSoundPlayed === false //If the scene has audio delay and time is more than delay and the delayed sound has not been played
  ) {
    mySounds[scene.audio].play(); //play the sound and indicate the delayed sound has been played
    lateSoundPlayed = true;
  } //This state mechanism for sound prevents it from playing every time draw runs
  //text (subtitle) display
  let delayTime;
  //if the scene has text delay make the delay time that, if not make it 0
  //The undelayed text cannot go in changeScene like the audio because it needs to be drawn every frame constantly. It must be in draw.
  if (scene.textDelay) {
    delayTime = scene.textDelay;
  } else {
    delayTime = 0;
  }
  //if time has passed delay,
  if (elapsedTime > delayTime) {
    //draw the background box for subtitles
    rectMode(CENTER);
    fill(0, 0, 0, 200);
    rect(
      windowWidth / 2,
      windowHeight * 0.85,
      windowWidth * 0.7,
      windowHeight * 0.1,
      10
    );
    //drawing the text
    fill(255);
    noStroke();
    textAlign(CENTER, CENTER);
    textSize(windowHeight * 0.04);
    // 4th parameter limits the max width of the text, keeping it from going out of the box
    text(scene.text, windowWidth / 2, windowHeight * 0.85, windowWidth * 0.7);
  }

  //scene change logic
  if (elapsedTime > scene.duration) {
    if (scene.autoNext) {
      changeScene(scene.autoNext); //If the scene has an automatic next scene, change to it
    } else {
      rectMode(CORNER);
      fill(0, 0, 0, 100);
      rect(0, 0, windowWidth, windowHeight);
      if (choicesDisplayed == false) {
        //If it does not have automatic next scene but has choices, draw the black veil and display the choices
        displayChoices();
      }
    }
  }
}

The drawCurrentScene function was written so it works for every scene that has an entry in the storyData file. It draws the image to fill the current window size, creates the scene timer, uses that timer to handle the audio and text delays and the scene duration, and decides how to move to the next scene based on the scene’s parameters. This allows the game to flow smoothly, as if it were a video, when the player does not need to interact. When the player does need to make choices, it allows unlimited time on the choice page. Because every scene needs to go through all these processes, coding it this way lets 50 scenes go through one function instead of 50 separate processes, making the code much simpler and more organized.

It also makes editing super easy. If you don’t like any part of the story, or want to add or delete anything, this function means you only need to change things in storyData. And since storyData only stores information, it follows less strict ordering and organization rules than the main code; making changes to it is much closer to writing human language, and easier.

I am also quite proud of my logic for updating element positions when the canvas is resized. The code actually incorporates multiple functions.

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  //resize and reposition button accordingly
  //start button
  if (gameState === "start" && startBtn) {
    //size
    let startWidth = max(200, windowWidth * 0.15);
    let startHeight = max(50, windowHeight * 0.08);
    let startFont = max(20, windowWidth * 0.02);
    startBtn.size(startWidth, startHeight);
    //position
    startBtn.position(windowWidth / 2 - startWidth / 2, windowHeight / 2 + 50);
    startBtn.style("font-size", startFont + "px");
  }
  //same for return button
  if (gameState === "credits" && returnBtn) {
    let btnWidth = max(200, windowWidth * 0.15);
    let btnHeight = max(50, windowHeight * 0.08);
    let btnFont = max(20, windowWidth * 0.02);
    returnBtn.size(btnWidth, btnHeight);
    returnBtn.position(windowWidth / 2 - btnWidth / 2, windowHeight * 0.85);
    returnBtn.style("font-size", btnFont + "px");
  }
  //game button
  if (choicesDisplayed && choiceButtons.length > 0) {
    for (let i = 0; i < choiceButtons.length; i++) {
      let btn = choiceButtons[i];
      updateButton(btn, i, choiceButtons.length);
    }
  }
}

In this code I not only resized the canvas according to the window size; I also resized and repositioned the button elements accordingly. I found out when I tried my code that HTML elements do not move the way the canvas text does: they don’t stay in the same position relative to the canvas. So I actively coded them to calculate a new size and position when the canvas is resized. This function is then called in the buttonFunction function so the buttons are ready to act every time a button is created.

function buttonFunction() {
  //take the value from the button and give it to variable nextScene
  let nextScene = this.value();
  for (let i = 0; i < choiceButtons.length; i++) {
    //remove the buttons from the screen
    choiceButtons[i].remove();
  }
  //empty buttons array for new round of choices
  choiceButtons = [];
  //reset choice display state
  choicesDisplayed = false;
  //If next scene is credits
  if (nextScene === "credits") {
    //change game state in the state mechanism
    gameState = "credits";
    //display the restart button
    returnBtn.show();
    //use the windowResized function for reorganizing text and buttons for the credits screen.
    windowResized();
  } else if (nextScene === "restart") {
    //same logic as above
    gameState = "start";
    startBtn.show();
    windowResized();
  } else {
    //If it is just a choice in the story, go with the story logic; the button doesn't need an extra function.
    changeScene(nextScene);
  }
}

windowResized is called after the button is told to show. This way the buttons will always be in the right place no matter when or how the screen size is changed.

function startGame() {
  fullscreen(true);
  gameState = "playing";
  startBtn.hide();
  //input any scene here and its a quick portal to the scene
  changeScene("intro");
}

I also wanted to mention this code snippet, which I found could also serve as a “maintenance platform”. It was originally written to change the scene from start to intro and the game state from start to playing. But change “intro” to any scene name in storyData and this serves as a portal to that scene. Without it, I would have needed to play through the whole story every time I changed something and wanted to see the effect.

Some areas for improvement include adding fade in/out effects and more animation. When I looked through the game I felt that some scenes needed a gradual introduction, which a fade-in would perfectly suit. I wasn’t able to add that due to the time limit; I tried to code it, but bugs got in the way and I did not have enough time to troubleshoot the whole thing, so I deleted it. The game would also look better with more animation, but it would be near impossible to reproduce the shapes of the spirits and ghosts in my current images with p5 code. The better way would be to just make the plot a video and code choices in between, but that would make it diverge from the goal of the midterm.

AI was used to some extent in this work. All the images used in this work were generated by Google Gemini according to my requirements. For code, Gemini helped me with button styling: because HTML objects were new to me, I had trouble figuring out how to style them, so Gemini introduced the different options for button styling and I wrote the styling code following the examples provided. It also gave me the idea of using an array for the buttons on the scene so they can easily be added and removed (lines 433-436); I originally only had an array for the choices, so the buttons just got stuck on the screen. It also helped me with writing “not” in the if condition (line 490), because I remembered it as ||, which is actually “or”, and the code failed to work. Gemini also assisted me when I needed to find sound effects and voiceovers; it suggested using Freesound and Elevenlabs for these needs and gave me tutorials on how to use them. At the end of my project I also used Gemini for debugging once or twice, when I encountered no console error messages but the game crashed, because it was difficult for me to pick out the error among hundreds of lines of code. AI was used for help and assistance and did not write the code. The code was written based on an understanding of what was taught in class and what AI explained to me.