Midterm Progress Report

Concept:

For my midterm project, I’m creating a grid-based version of a color-by-number game. I’ve always loved these games because they help me relax, focus, and feel satisfied as the artwork slowly comes to life. My goal is to bring that same experience to users by making a game that’s easy to use, fun, and visually appealing.

The game gives users a color palette in which each color is linked to a letter or number. The image itself is made up of a grid, with each cell labeled to show which color belongs there. All users have to do is click a grid cell and the color fills in automatically, which keeps the interaction simple.

Challenging Aspects and Risk Prevention

1. One of the biggest challenges was ensuring precise click detection on the image grid. Since the image itself doesn’t inherently support click interactions, I had to create an invisible grid and lay it on top of the image. This allowed the game to register mouse clicks accurately while keeping the visual presentation clean. However, this was difficult because I had to align the overlay perfectly so that the user wouldn’t see it, but it would still work effectively for detecting clicks and filling in colors.

2. Another challenge was allowing users to select colors from the palette and ensuring the correct color was applied to the chosen grid cell. I had to implement logic that detected which color was clicked on and then stored that color as the “active” selection. The game then needed to apply this color to any grid cell the user clicked until a new color was chosen. The difficulty came in precisely mapping click positions to the correct palette color while keeping the interface responsive and user-friendly.

class ColoringPage {
  constructor(name, imagePath, rows, cols, palette) {
    this.name = name;
    this.img = loadImage(imagePath);
    this.rows = rows;
    this.cols = cols;
    this.cellSize = 600 / this.cols;
    this.grid = Array.from({ length: this.rows }, () => Array(this.cols).fill(null));
    this.palette = palette;
    this.selectedColor = Object.values(palette)[0].color;
  }

  display() {
    this.drawPalette();
    image(this.img, 100, 90, 600, 600);
    this.drawGrid();
    this.drawColoredGrid();
  }

  drawGrid() {
    stroke(0, 50);
    noFill();
    for (let row = 0; row < this.rows; row++) {
      for (let col = 0; col < this.cols; col++) {
        rect(100 + col * this.cellSize, 90 + row * this.cellSize, this.cellSize, this.cellSize);
      }
    }
  }

  drawColoredGrid() {
    for (let row = 0; row < this.rows; row++) {
      for (let col = 0; col < this.cols; col++) {
        if (this.grid[row][col]) {
          fill(this.grid[row][col]);
          rect(100 + col * this.cellSize, 90 + row * this.cellSize, this.cellSize, this.cellSize);
        }
      }
    }
  }

  drawPalette() {
    let keys = Object.keys(this.palette);
    let x = (width - keys.length * 60) / 2; // 60px per swatch keeps the palette compact and centered
    let y = 20;

    noStroke();

    for (let i = 0; i < keys.length; i++) {
      let colorValue = this.palette[keys[i]].color;

      fill(colorValue);
      ellipse(x + i * 60 + 30, y + 25, 50, 50);

      let c = color(colorValue);
      let brightnessValue = red(c) * 0.299 + green(c) * 0.587 + blue(c) * 0.114; // standard luminance formula

      fill(brightnessValue < 128 ? 255 : 0); // white text on dark colors, black text on light colors

      textSize(14);
      textAlign(CENTER, CENTER);
      text(this.palette[keys[i]].label, x + i * 60 + 30, y + 25); // label centered on its circle
    }
  }
}
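
The class above only handles drawing. A minimal sketch of the click handling described in the two challenges earlier, written as an extra method on the class, might look like the following; the palette-circle positions mirror drawPalette(), while the method name and the global handler are my own placeholders rather than the project’s actual code.

  // Hypothetical method: maps a click either to a palette swatch or to a grid cell.
  handleClick(mx, my) {
    // check the palette circles first (same layout as drawPalette)
    let keys = Object.keys(this.palette);
    let x = (width - keys.length * 60) / 2;
    for (let i = 0; i < keys.length; i++) {
      if (dist(mx, my, x + i * 60 + 30, 45) < 25) {
        this.selectedColor = this.palette[keys[i]].color;
        return;
      }
    }
    // otherwise map the click onto the invisible grid laid over the image
    let col = floor((mx - 100) / this.cellSize);
    let row = floor((my - 90) / this.cellSize);
    if (row >= 0 && row < this.rows && col >= 0 && col < this.cols) {
      this.grid[row][col] = this.selectedColor;
    }
  }

In this sketch, p5’s global mousePressed() would simply call handleClick(mouseX, mouseY) on the active ColoringPage.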

 

Week 5 : Reading Response

Humans recognize faces and objects effortlessly because we rely on memory and perception, without needing to crunch huge amounts of data. Computers, on the other hand, have to go through an entire process—analyzing pixel data, running algorithms, and comparing what they see to stored information just to identify something. Plus, we’re naturally good at adapting to different environments, while computers can struggle when conditions change from what they were programmed for, often leading to glitches or poor performance.

To help computers track objects better, there are a few go-to techniques. Brightness thresholding boosts contrast by filtering colors and intensities, making objects stand out more clearly. Background subtraction helps by removing the static background so the system can focus only on what’s moving. These methods make it easier for computer vision to detect and track what matters, whether in art, security, or interactive installations.
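As a concrete illustration of brightness thresholding (not part of the reading itself), here is a minimal p5.js example that turns webcam pixels above a chosen brightness white and everything else black; the threshold value and variable names are just placeholders.

// Minimal brightness-thresholding sketch, assuming a webcam capture.
let capture;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1); // keep the canvas pixel array the same size as the capture
  capture = createCapture(VIDEO);
  capture.size(320, 240);
  capture.hide();
}

function draw() {
  capture.loadPixels();
  loadPixels();
  for (let i = 0; i < capture.pixels.length; i += 4) {
    // average the RGB channels to estimate brightness
    let bright = (capture.pixels[i] + capture.pixels[i + 1] + capture.pixels[i + 2]) / 3;
    // pixels brighter than the threshold become white, the rest black
    let v = bright > 127 ? 255 : 0;
    pixels[i] = v;
    pixels[i + 1] = v;
    pixels[i + 2] = v;
    pixels[i + 3] = 255;
  }
  updatePixels();
}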

In interactive art, computer vision takes audience engagement to the next level by allowing people to actively shape the artwork with their movements and gestures. Unlike traditional static art, these installations respond in real-time, making the experience more dynamic and immersive. By tracking participants as they move, computer vision can trigger visual, auditory, or even tactile reactions, turning viewers into active contributors rather than just passive observers. This creates a deeper connection with the artwork, giving people a sense of agency in the creative process. Whether it’s motion-triggered visuals, gesture-controlled projections, or body-driven games, computer vision is expanding the possibilities of creativity in interactive media.

Week 5 – Midterm Progress

For my midterm project, I decided to make a game about baking cookies! As a passionate baker, cookies are one of my specialties (ask my friends, trust me).

The concept: a simple game for baking chocolate chip cookies. If time permits and my skill set can handle it, I also want to include a timer. This is a very basic sketch of what I’m planning:

  1. Start page
  2. Ingredients with a bowl; the user will drag the ingredients into the bowl and then click on the egg to crack it. This was my “scary” part, as I wanted to try using sprite sheets, a concept I had never heard of or tried before this.
  3. A mix button will bring a whisk, which is animated to turn, and then an animation will switch to a tray filled with cookie dough.
  4. The tray will be pushed into an “oven” (which I’ll draw using shapes)
  5. A screen shows up saying “Cookies are ready!!”, with a restart button and a recipe button for my actual original recipe.

So, the user interaction would essentially be their role as the baker, dragging the ingredients into the bowl, putting the tray into the oven and so on.

The scary part: using a sprite sheet for the egg cracking. I drew the stages of an egg being cracked and then created a sprite sheet of them using Canva. Coding the egg breaking was a bit confusing at first: the egg would “crack” wherever the user clicked on the screen, but I eventually figured out how to confine the click to the egg image itself.
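
A minimal sketch of that sprite-sheet approach is shown below; the frame count, file name, and positions are placeholders, not the actual project files.

// Hypothetical sprite-sheet sketch: a horizontal sheet of 4 equally sized frames.
let sheet;
let frameIndex = 0;
const totalFrames = 4;
let frameW, frameH;
let eggX = 200, eggY = 150; // where the egg is drawn on the canvas

function preload() {
  sheet = loadImage("egg_sheet.png"); // placeholder file name
}

function setup() {
  createCanvas(400, 400);
  frameW = sheet.width / totalFrames;
  frameH = sheet.height;
}

function draw() {
  background(240);
  // draw only the current frame, cropped out of the sheet
  image(sheet, eggX, eggY, frameW, frameH,
        frameIndex * frameW, 0, frameW, frameH);
}

function mousePressed() {
  // advance the animation only when the click lands on the egg image
  if (mouseX > eggX && mouseX < eggX + frameW &&
      mouseY > eggY && mouseY < eggY + frameH &&
      frameIndex < totalFrames - 1) {
    frameIndex++;
  }
}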

I was very happy with the result, and it made me less afraid of approaching the midterm project, since the scariest part is solved now. Here’s an embedded sketch of my “cracking the egg” test. I might need to update the sprite sheet and code to get the egg to ACTUALLY fall into the bowl, but so far, I’m pretty confident in my project concept and excited to code it into reality!

 

Week 4 Project

For this week’s project I decided to make something somewhat ironic in an attempt to create something funny (I think it is).

I made a quote generator that generates a quote and refreshes it for the user’s motivational needs. Needless to say, I was the one who inspired my own design, as I found myself, at certain points in the day, struggling to gather the motivation to do anything remotely productive.

One part of this project that was challenging for me was figuring out how to load my .csv file into the project and then retrieving the correct string to display via print().

function keyPressed() {
  if (keyCode === 13) { // 13 is the Enter key
    let currentQuote = int(random(1, 20)); // pick a random row index
    print(currentQuote);
    displayText = quotes.getString(currentQuote, 0); // column 0 holds the quote text
  }
}
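
For context, a hedged sketch of how the quotes table might be loaded and displayed — the file name quotes.csv, the column layout, and the variable names are assumptions, not details from the original post:

// Hypothetical loading sketch: one quote per row, quote text in column 0.
let quotes;
let displayText = "";

function preload() {
  // loadTable parses the CSV before setup() runs
  quotes = loadTable("quotes.csv", "csv");
}

function setup() {
  createCanvas(600, 400);
  textAlign(CENTER, CENTER);
  textSize(18);
}

function draw() {
  background(255);
  text(displayText, width / 2, height / 2);
}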

 

Assignment 4: Fortune Cookies

This is my text-based art piece, inspired by a recent trip to a Chinese restaurant. My friends and I enjoyed opening our cookies and reading each other’s fortunes. I wanted to give others this experience too, so I created this piece to do exactly that.

 

📋Assignment Brief

  • Create an interactive piece of artwork by loading data or displaying text

💭Conceptualisation

The idea for this project emerged from a delightful dining experience at a Chinese restaurant. As my friends and I cracked open our fortune cookies and shared our fortunes with laughter and anticipation, I was struck by the simple joy this tradition brought to our meal. I wanted to capture and recreate this experience, allowing others to enjoy the whimsical pleasure of fortune cookies from anywhere. The idea evolved to combine the visual aesthetics of a Chinese restaurant with the interactive element of opening a fortune cookie. I envisioned a scene where users could click to “open” a virtual fortune cookie, revealing one of many possible fortunes. This concept aimed to blend cultural elements, visual art, and interactivity into a single, engaging piece. By digitizing this experience, I hoped to make it accessible to a wider audience, allowing people to enjoy the surprise and wisdom of fortune cookies without the need for a physical restaurant visit.

💻Process

The coding process for this project began with creating the foundational elements of the Chinese restaurant scene using p5.js. I started by establishing the basic structure, including the background, tables, wall decorations, and lanterns, to set the ambiance. The next crucial step was designing the fortune cookie itself. I used bezier curves to craft a realistic cookie shape, complete with a subtle shadow and a visible fold line. To add depth to the experience, I implemented two states for the cookie: closed and open. This required creating separate functions for drawing each state.
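
As an illustration of the bezier-curve idea, a closed-cookie drawing function might look something like the sketch below; the control points, colors, and function name are placeholders rather than the project’s actual values.

// Hypothetical helper: draws a closed fortune cookie centered at (x, y).
function drawClosedFortuneCookie(x, y) {
  fill(222, 184, 135);   // cookie color
  stroke(160, 120, 80);
  beginShape();
  vertex(x - 80, y);                                        // left tip
  bezierVertex(x - 40, y - 70, x + 40, y - 70, x + 80, y);  // top curve
  bezierVertex(x + 40, y + 50, x - 40, y + 50, x - 80, y);  // bottom curve
  endShape(CLOSE);
  line(x, y - 52, x, y + 37); // visible fold line down the middle
}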

The interactive element was introduced by implementing a mousePressed() function, allowing users to toggle between the closed and open states of the cookie. To bring the fortune-telling aspect to life, I created an array of ten different fortune messages. Each time the cookie is opened, a random fortune is selected from this array and displayed on the ‘paper’ inside the cookie.

function mousePressed() {
  isOpened = !isOpened; // toggle between the closed and open cookie states
  if (isOpened) {
    currentFortune = random(fortunes); // pick one of the ten fortunes at random
    drawOpenedFortuneCookie(openedFortuneCookie, currentFortune);
  }
}

One of the more challenging aspects was ensuring the fortune text was properly centered and legible within the opened cookie. This required careful adjustment of text positioning and size. Throughout the process, I continually refined the visual elements, adjusting colors, sizes, and positions to create a harmonious and visually appealing scene.

The final touch involved fine-tuning the user experience, ensuring smooth transitions between states and readable fortune messages. This iterative process of coding, testing, and refining resulted in an interactive digital representation of the classic fortune cookie experience, nestled within a charming Chinese restaurant setting.

🚩Challenges

One of the main challenges I faced while writing this code was achieving a realistic representation of a fortune cookie while keeping the code simple and efficient. Creating the curved shape of the cookie using bezier curves required careful adjustment of control points to achieve a natural-looking form.

Another significant challenge was implementing the random fortune selection mechanism. While the concept seems straightforward, ensuring that the fortunes were truly random and didn’t repeat too frequently required careful consideration. I had to strike a balance between maintaining a diverse selection of fortunes and avoiding predictability in the user experience.
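
One simple way to avoid an immediate repeat is to re-draw when the new fortune matches the previous one; this is a sketch of the general idea, not the project’s actual mechanism.

// Hypothetical helper: picks a fortune different from the last one shown.
function pickFortune(fortunes, previous) {
  let next = random(fortunes);
  while (fortunes.length > 1 && next === previous) {
    next = random(fortunes); // re-roll until it differs from the previous pick
  }
  return next;
}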

📶Potential Improvements

While the current version successfully creates an interactive fortune cookie experience within a Chinese restaurant setting, there are several areas for potential improvement and expansion:

  • Animation Enhancements: Adding subtle animations, such as a smooth opening animation for the fortune cookie or a gentle floating effect for the lanterns, could make the scene feel more engaging

  • Sound Integration: Incorporating audio elements, like a soft cracking sound when opening the cookie or ambient Chinese restaurant background noise, could enhance the immersive experience

  • Multiple Cookie Types: Introducing different types of fortune cookies with varying shapes, colors, or even flavors could add variety and replay value to the interaction

 

Startup Funding Visualization

Concept

This project visualizes startup funding data by representing different startups as interactive elements on the screen. Users can hover over a startup to see its funding amount and valuation, and click on a startup to view more detailed information. The goal is to create an intuitive and engaging way to explore startup funding rounds.

Code Highlight

One of the key sections of the code is how the information is displayed when a startup is hovered over. The following snippet effectively creates a tooltip-like interaction:

text(`${this.emoji} ${this.name}`, this.x, this.y - 15); // label drawn just above the startup

if (hovered) {
  fill(255, 200);                    // semi-transparent white background
  rect(mouseX, mouseY, 160, 50, 10); // rounded tooltip box anchored at the cursor
  fill(0);
  textSize(12);
  text(`💰 $${this.amountRaised}M`, mouseX + 80, mouseY + 20); // funding raised
  text(`📈 $${this.valuation}M`, mouseX + 80, mouseY + 40);    // current valuation
}

 


This block dynamically positions the tooltip near the cursor and provides a quick summary of key financial metrics.
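
The snippet assumes a hovered flag computed elsewhere. A minimal way to derive it, assuming each startup is drawn as a circle with a radius property (the method name and property are my own placeholders), could be:

// Hypothetical helper on the startup class: true when the cursor is over its circle.
isHovered() {
  return dist(mouseX, mouseY, this.x, this.y) < this.radius;
}

display() {
  let hovered = this.isHovered();
  fill(hovered ? color(255, 220, 120) : color(180, 200, 255)); // highlight on hover
  noStroke();
  ellipse(this.x, this.y, this.radius * 2);
  // ...the tooltip snippet above runs here when hovered is true
}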

Embedded Sketch

 

Reflection and Future Work

While the current implementation effectively visualizes funding data, there are several areas for improvement:

  • Scalability: The current approach might become inefficient with a large dataset. Optimizing how data is rendered could enhance performance.
  • More Interactivity: Adding filtering options for different funding rounds (Seed, Series A, B, etc.) could improve user experience.
  • Enhanced Visualization: Implementing different shapes or colors to represent different funding rounds would make distinctions clearer.
  • Data Integration: Connecting to a live API to fetch real-time funding data would make this visualization more dynamic and useful.

Overall, this project provides an engaging way to explore startup funding data, and with future iterations, it can be expanded into a more powerful analytical tool.

Week 4 : Reading response

While reading the text, one thing I felt Norman did not explicitly mention is the lack of instructions for devices and electronics, especially when their functionality is not immediately obvious. Many products assume that users will either figure things out through trial and error or consult a manual, but in reality, unclear controls often lead to confusion and frustration. For example, I own a digital camera, but I still have no idea how to zoom properly or what the different shooting modes actually do. There are no clear signifiers on the buttons, and the camera does not provide immediate feedback or guidance. Similarly, the projectors we have in the Baraha rooms can get difficult to use because the wall-docked interface offers multiple input options with no clear instructions on which mode is appropriate for what. Users are left guessing, which can lead to delays and unnecessary trial and error.

This lack of guidance can also be an issue in interactive media. Many systems expect users to learn how to navigate and use features on their own, but this often results in important functions being left undiscovered, reducing the overall usability of the system. This is where Norman’s principles of discoverability, feedback, and conceptual models become essential. If interfaces and products clearly communicated how to use them, users would not have to struggle with unnecessary complexity. By applying Norman’s ideas, interactive media could be improved by incorporating better signifiers, immediate feedback, and clearer mappings to ensure users understand what actions are possible. Whether in physical devices like cameras and projectors or in digital interfaces, design should guide users intuitively rather than forcing them to figure everything out on their own.

Reading Reflection – Week 4

This has been my favourite reading yet! I am a massive ‘good-design’ junkie, and love to appreciate when something is made with the User in mind. In fact, I’m so enthusiastic about this that I spent last summer teaching design principles to high school students in Japan. It was incredibly rewarding to share my knowledge and watch young minds grasp the importance of user-centered design.

One aspect of design that particularly excites me is analyzing the ergonomics of products. I love examining how objects interact with the human body and how they can be optimized for comfort and efficiency. This interest led me to develop my own acronym for designing things for people: SUPREME FAM. Each letter represents a crucial aspect of the design process:

S: Stakeholder
U: Usability
P: Primary User Needs
R: 6Rs (Rethink, Refuse, Reduce, Reuse, Recycle, Repair)
E: Ergonomics
M: Machining
E: Environment
F: Function
A: Aesthetic
M: Materials

I find that using this acronym helps me approach design challenges in a comprehensive and user-centered way, much like the principles Norman discusses in his work.

Norman’s emphasis on the importance of affordances, signifiers, and feedback in design struck a chord. I recall a recent visit to our Art Gallery, where I hesitated to touch an exhibit because there were no clear indications of how to engage with it. The piece was about a jelly-like lizard, and there was one on display. However, the lack of signifiers left me unsure whether I was allowed to interact or if I might accidentally damage the artwork. This experience made me appreciate how crucial it is for designers to provide clear cues that guide user behavior. I had a similar experience of bewilderment when I stood in front of a high-tech coffee machine at a friend’s house, unable to figure out how to make a simple cup of coffee. The machine’s sleek design offered no clues about its functionality, leaving me feeling embarrassed and caffeine-deprived.

One thing that drives me crazy is the design of many public restroom sinks. I often find myself waving my hands frantically under the faucet, trying to trigger the sensor to start the water flow. Sometimes I’m left wondering if the sink is broken or if I’m simply not performing the correct “hand dance” to activate it. To improve this, I imagine a design where the sensor area is clearly marked with a simple icon or color, and perhaps a small LED light that illuminates when hands are detected. This would provide clear signifiers and immediate feedback, reducing user frustration and water waste.

Applying Norman’s principles to Interactive Media, I believe designers should focus on creating intuitive interfaces that provide clear feedback and match users’ conceptual models. For example, a website’s navigation should use familiar speech patterns and provide visible cues about the user’s current location and available actions. Interactive elements should offer immediate feedback to confirm user actions, much like how a well-designed physical button provides tactile and visual feedback when pressed. By incorporating these principles, designers can create digital experiences that are both engaging and frustration-free.

Week 4: Generative Text

Concept:

This project is a gentle companion for women across different phases of their cycle. Each phase—Follicular, Ovulation, Luteal, and Menstrual—reveals its own short, reflective text. The more difficult phases, Luteal and Menstrual, include extra motivating lines to offer comfort and encouragement. Overall, it acts like a quote page, providing small bursts of support and understanding. The goal is to create a sense of connection and help women feel acknowledged on whatever day of their cycle they find themselves.

Highlight:

I believe the input parsing and validation is a highlight because it makes sure the user’s number is always valid. It was tricky to get the latest number when users changed the value without pressing Enter; moving the number conversion to the button click made sure we always use the newest value. Handling wrong numbers and showing clear messages was tough, but it is key for a smooth experience.

// if valid, choose a random entry based on phase
  let entry = "";
  switch (currentPhase) {
    case "Follicular":
      entry = random(follicularEntries); // pick random phrase
      break;
    case "Ovulation":
      entry = random(ovulationEntries);
      break;
    case "Luteal":
      entry = random(lutealEntries);
      entry += " " + random(motivationalPhrases); // add extra motivation
      break;
    case "Menstrual":
      entry = random(menstrualEntries);
      entry += " " + random(motivationalPhrases); // add extra motivation
      break;
  }

  diaryEntry = entry; // store the chosen entry
  isGenerated = true; // mark as generated
}

//validates if the day is within the correct range for the phase

function validateDayRange(phase, day) {
  if (phase === "Follicular") {
    // allowed days: 1 to 13
    return day >= 1 && day <= 13;
  } else if (phase === "Ovulation") {
    // allowed days: 14 to 16
    return day >= 14 && day <= 16;
  } else if (phase === "Luteal") {
    // allowed days: 17 to 28
    return day >= 17 && day <= 28;
  } else if (phase === "Menstrual") {
    // allowed days: 1 to 5
    return day >= 1 && day <= 5;
  }
  return false; // default false
}
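
To make the highlight above concrete, here is a minimal sketch of how the conversion-on-click idea could be wired up with p5 DOM elements; the element names, canvas size, and phase variable are assumptions rather than the project’s actual code.

// Hypothetical setup: read and convert the day only when the button is clicked.
let dayInput, generateButton;
let currentPhase = "Follicular"; // assumed to be set elsewhere by a phase selector
let errorMessage = "";

function setup() {
  createCanvas(600, 400);
  dayInput = createInput("");               // free-text field for the cycle day
  generateButton = createButton("Generate");
  generateButton.mousePressed(() => {
    // converting here (not on Enter) guarantees the newest typed value is used
    let day = int(dayInput.value());
    if (isNaN(day) || !validateDayRange(currentPhase, day)) {
      errorMessage = "Please enter a valid day for the " + currentPhase + " phase.";
      return;
    }
    errorMessage = "";
    // ...after this point the switch statement shown above picks a random entry
  });
}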

Reflections, ideas for future work & Improvements:

For future work, I plan to add more customization options and richer animations. I want to explore saving user entries so that they can track their mood over time. I also plan to refine the validation process and introduce more advanced error handling. These improvements would make the project even more useful and appealing as a supportive quote page for women.

Week 4 — Reading Response

One thing that drives me crazy is poorly designed shower faucets, especially the ones with a single handle where it’s difficult to tell how to adjust temperature or water pressure. Many of these designs lack clear signifiers, making it frustrating to figure out whether turning the handle left increases temperature or pressure, or if pulling or pushing it has any effect at all. Sometimes, the hot and cold indicators are either missing, faded, or counterintuitive, leading to a trial-and-error process that often results in an unexpected blast of freezing or scalding water.

In interactive media, Norman’s principles can be applied similarly to enhance usability. Just as a well-designed shower faucet should make its function immediately clear, interactive elements in digital experiences should signal their purpose intuitively. For example, buttons and interactive objects should provide visual and/or tactile feedback, such as glowing when hovered over or vibrating when clicked (the type of feedback varying depending on the intended users). Proper mapping in digital interfaces—like ensuring a swipe-down gesture naturally leads to scrolling down—mirrors the idea of making physical controls feel intuitive. Without such authorial intent and thought in the design, interactive media can be confusing and can often lose its original intended meaning as a result. By focusing on basic HCD principles, interactive media can create smoother, more engaging experiences that minimize user frustration.