Week 5 — Midterm Project Progress

1. Project Concept and User Interaction

The Secret Garden will be an immersive resource-management game that uses a point-and-click adventure interface. The core concept revolves around the “journey of growth” — shifting the focus from instant gratification to the physical labor of gardening. To successfully cultivate life, the player must navigate a layered world, traveling between different locations to collect raw materials. The gameplay loop requires the player to visit the River to fetch water, the Pottery Studio to craft vessels (which may or may not make the final version), and the Forest to gather nutrient-rich soil.

The interaction is designed around a “Layered Scene” system. The player navigates by clicking on specific environmental “portals,” such as a path leading to the woods or a bridge crossing the river. Within each scene, the cursor acts as a tool: clicking the river fills a bucket, while clicking a potter’s wheel initiates a crafting sequence. In the central Garden hub, interaction is more tactical, requiring the player to drag and drop collected resources from their inventory onto active plant plots to trigger growth stages.

2. Code Design: Functions, Classes, and Interactivity

The technical architecture of the game will rely on a Scene Manager class to maintain the game state. This class will store boolean flags for every location, ensuring that the program knows exactly which background to render and which set of interactive hitboxes to activate. A global inventory object tracks the count of water, soil, pots, and seeds, acting as the bridge between the gathering scenes and the planting scenes.

The plants themselves will be managed through a Plant Class, which will encapsulate all the logic for individual growth. Each instance of a plant tracks its own unique needs and developmental progress. A central mouseClicked() function serves as the primary input handler, using conditional logic to determine what action to take based on the currently active layer. Furthermore, a persistent HUD runs every frame to provide the player with a clear view of their current resources, regardless of where they are in the world.
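The Scene Manager and click-dispatch ideas above could be sketched roughly like this. This is a minimal, hypothetical version: a single active-scene string stands in for the per-location boolean flags, the drawing calls are omitted, and the scene names and resource counts are placeholders.

```javascript
// Hypothetical Scene Manager: tracks which layer is active so input
// and rendering know what to do. (p5 would call mouseClicked() for us.)
let sceneManager = {
  active: "garden", // "garden", "river", "pottery", "forest"
  goTo(name) { this.active = name; }
};

// Global inventory bridging the gathering scenes and the planting scenes
let inventory = { water: 0, soil: 0, pots: 0, seeds: 3 };

function mouseClicked() {
  // Conditional dispatch based on the currently active layer
  if (sceneManager.active === "river") {
    inventory.water += 1;   // clicking the river fills a bucket
  } else if (sceneManager.active === "forest") {
    inventory.soil += 1;    // clicking the forest gathers soil
  }
  // garden clicks would instead check hitboxes for plant plots
}
```

In the real sketch, each branch would also check hitboxes within the scene rather than treating any click as a gather action.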

3. The Most Uncertain Part
    • Asynchronous Growth Logic: In a multi-scene game, the player spends significant time away from the Garden. If growth is tied to the Garden’s display loop, plants will “freeze” in time whenever the player leaves the room. Ensuring plants age accurately in the background across different layers is a major logic hurdle.
    • Interactive Pottery and Asset Alignment: Designing a pottery mini-game that feels tactile — such as requiring the player to click and hold to “shape” clay — risks being buggy or unintuitive, so I’m still in the process of figuring out how exactly I want to introduce “interactivity” there. Additionally, finding specific sprites for everything and ensuring they align perfectly is difficult.
4. Risk Minimization and Test Implementation
    • By using the millis() function, each plant records the exact moment it was last nurtured. This creates a “real-time” clock for every organism. When the player returns to the Garden, the class calculates the difference between the current time and the recorded timestamp to determine growth, ensuring the world feels persistent and alive even when the player is at the River or Studio.
    • To handle the pottery mini-game, I plan to use a “fill-bar” logic where mousePressed increments a shaping variable. This avoids complex physics and keeps the interaction reliable.
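The millis() timestamp idea can be sketched as follows. This is a hypothetical Plant class, not the project's actual code: the 5-second growth interval and the four-stage cap are placeholder values, and the current time is passed in as an argument (in the p5 sketch it would come from millis()) so the logic stands on its own.

```javascript
// Sketch of timestamp-based growth: each plant records when it was last
// nurtured, so growth depends on real elapsed time, not on how many
// frames the Garden scene happened to be on screen.
class Plant {
  constructor(now) {
    this.stage = 0;          // 0 = seed ... 3 = fully grown (assumed cap)
    this.lastNurtured = now; // timestamp of last watering, in ms
  }
  nurture(now) {
    // Advance one stage per 5000 ms elapsed since the last nurture
    const stages = Math.floor((now - this.lastNurtured) / 5000);
    this.stage = Math.min(3, this.stage + stages);
    this.lastNurtured = now;
  }
}
```

When the player returns from the River or Studio, calling `plant.nurture(millis())` would catch each plant up on all the time that passed while its scene was not being drawn.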

Week 5 — Reading Response

Golan Levin’s essay on computer vision really hammered home the fact that a camera isn’t an eye — it’s just a sensor feeding a math equation. Having done CS IBDP HL, I was already pretty familiar with how control systems work and how cameras process input, so honestly, none of the technical side was surprising to me. I’ve spent enough time looking at how sensors translate the physical world into data arrays to know that a computer doesn’t “understand” what it’s looking at; it’s just running basic tricks like frame differencing to see what moved or background subtraction to see what’s new in the shot. It doesn’t see a “person”; it just sees a bunch of pixels that changed from gray to brown. This makes the computer incredibly easy to trip up with something as simple as a flickering light or a shirt that’s the same color as the wall.
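As a toy illustration of the frame differencing Levin describes (my own minimal sketch, not code from the essay): compare two grayscale frames pixel by pixel and count how many pixels changed by more than some threshold. That count is all the computer "sees" of movement.

```javascript
// Toy frame differencing: prev and curr are flat arrays of grayscale
// pixel values (0-255). Returns how many pixels "moved" between frames.
function frameDifference(prev, curr, threshold = 30) {
  let changed = 0;
  for (let i = 0; i < curr.length; i++) {
    if (Math.abs(curr[i] - prev[i]) > threshold) changed++;
  }
  return changed;
}
```

A flickering light or a shirt matching the wall defeats exactly this: the difference either fires everywhere or nowhere, with no notion of a "person" in between.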

Because computers are so literal and limited, artists have to do a lot of the heavy lifting physically before the code even runs. Levin points out that things like controlled lighting or high-contrast backgrounds aren’t just aesthetic choices — they’re necessary “cheats” to help the computer distinguish a human from the floor. I think about projects like Myron Krueger’s backlit walls, where the environment is specifically engineered to give the computer a perfect silhouette. It makes me realize that successful interactive art isn’t just about writing clever software; it’s about designing a space that “explains” the world to the camera so the algorithm doesn’t get confused by visual noise.

There’s also a weird tension in using these tools because, at their core, they’re surveillance technologies. Levin mentions works like Sorting Daemon or the Suicide Box to show how artists can flip the script on military-grade tracking. It’s a bit uncomfortable to realize that the same tech making a digital shadow look cool in a gallery is what’s used to profile people in public spaces. It makes me wonder if we can ever fully enjoy “interactive” media without that nagging feeling of being watched and categorized. It’s a reminder that while the interaction feels like magic, the data being pulled is never really neutral — it’s always being filtered through whatever narrow definitions the programmer chose.

Midterm Progress Report

Concept:

Throughout the assignments, I really fell in love with Assignment 3, where I made a mesmerizing colorful display. Even while developing that piece, I saw that there was more to be made, and playing around with some of the variables inspired me to make it the core focus of my midterm project. If time allows, I really want to create a magnificent interactive display, one that will feel close to the viewer.

The main concept is customization of the colored canvas. I plan to add options so that the user can interact with key parts of the project, such as sliders for the direction of the balls on screen (in both the X and Y directions). There will also be an option for the user to change the RGB values in order to get the desired color. But the main thing I want to incorporate is the text from Assignment 4, surrounded by the colorful balls. I could also see the mouse interrupting the flow of the balls, similar to how the mouse interrupts the text in Assignment 4.

Design

The design process mainly involves extending and adding more features to the colorful concoction project. Firstly, there’s going to be an intro screen, where the user will be guided through what exactly the project is and given an overview of what’s to come. There will also be instructions for how the user can interact further with the project.

Then, when the user is ready, it will switch to the generative artwork. There are going to be sliders, or possibly text boxes, where the user will enter a value that changes something about the artwork. This includes the range of colors, the direction and speed of the balls, and a text box so that custom text can be displayed on screen. Finally, there will be a button so that the user can take a picture of their final artwork.

Challenging Aspects:

I think the biggest challenge is mainly implementing the text and getting it to act as a blockade for the balls so that they surround it. In a sense, the balls need to recognise the letters as a wall, so that they not only surround it but also bounce off when they collide. It’ll be a case of playing around with direction vectors.

Another challenging aspect is making the sliders, as I do not have any experience with making sliders so that they can dynamically change different parts of the artwork.

Mitigating Risk:

In terms of implementing the text, I plan to experiment with the text and see how it is affected by other objects. As a starting place, I could take the code I used to keep the balls from going outside the walls and try to apply it to the letters. Then, from there, I can manipulate the variables in order to get the desired effect.
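One possible starting point for adapting the wall-bounce check to letters (a sketch of my own; the ball and box shapes are assumptions): treat each letter's bounding box like a wall and flip the ball's velocity when they overlap.

```javascript
// Hypothetical letter-as-wall collision: the same idea as bouncing off
// the canvas edges, but against a letter's bounding box.
// ball: {x, y, r, vx, vy}   box: {x, y, w, h} around a letter
function bounceOffBox(ball, box) {
  const inX = ball.x + ball.r > box.x && ball.x - ball.r < box.x + box.w;
  const inY = ball.y + ball.r > box.y && ball.y - ball.r < box.y + box.h;
  if (inX && inY) {
    // Flip the dominant velocity component, like the wall-bounce code
    if (Math.abs(ball.vx) > Math.abs(ball.vy)) ball.vx *= -1;
    else ball.vy *= -1;
  }
}
```

In p5, `textWidth()` and `textAscent()`/`textDescent()` could supply the bounding box for each letter, so one box per letter is enough to make the text feel solid.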

For the sliders, I will read up on how they’re implemented. Most likely our friends at the Coding Train have made a video about how to use sliders, so that will be a great starting point. From there, I can extend that approach so the sliders can manipulate variables such as the color or direction of the balls.
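For reference, p5's DOM library does provide `createSlider(min, max, value, step)`. A minimal sketch of the idea might look like this; the slider names, ranges, and the `applySliders` helper are illustrative, not from the project.

```javascript
// Sliders wired to ball properties. setup() only runs in the browser;
// applySliders is the pure mapping, kept separate so it can be tested.
let rSlider, xSpeedSlider;

function setup() {
  createCanvas(400, 400);
  rSlider = createSlider(0, 255, 128);      // red channel of the balls
  xSpeedSlider = createSlider(-5, 5, 1, 1); // horizontal ball speed
}

// Apply the current slider readings to one ball object
function applySliders(ball, rValue, xSpeed) {
  ball.red = rValue; // later used as fill(ball.red, g, b)
  ball.vx = xSpeed;
  return ball;
}

function draw() {
  // each frame, the sketch would read the sliders:
  // applySliders(ball, rSlider.value(), xSpeedSlider.value());
}
```

Reading `slider.value()` inside `draw()` is what makes the artwork update live as the user drags.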

 

Midterm Progress

The Concept

For my game, I decided to go for a Star Wars theme, building on the process of Anakin Skywalker becoming Darth Vader. The game is an interactive choice-making game; you have to try to help Anakin avoid turning to the dark side by making certain choices. Depending on the choices you make, Anakin will either remain a Jedi or turn to the Dark Side.

The Design

The game’s color palette is going to consist of red, orange, brown, and black. It’s mostly going to be geometric with minimal curves. The game will start with a start page with Anakin Skywalker and Obi-Wan Kenobi on the screen with a Mustafar background. Once the user clicks the start button, an instruction block should pop up with instructions on the game and explaining the elements of choice and consequence. Users cannot control the motion of the characters, only their choices.

There will be music in the background from the Star Wars franchise. As for the code’s design, the game will rely on object-oriented programming for the characters’ design, relationships, dialogue, and the choice boxes. I will also be relying on several if statements and nested loops, since the story is heavily condition-based.

What I Am Afraid Of:

My biggest concern is connecting certain choices with particular events and consequences going wrong. I fear the code won’t run properly, the choices won’t connect, and I will be left with an inconsistent story and a game that doesn’t make sense. To manage this, I’m going to work on the different parts separately and organize certain elements into classes to reduce the complexity of the code. I will also write explicit comments in the code to keep everything organized, so that there is no confusion between events and consequences.
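One way to keep choices and consequences wired together reliably (a sketch of my own, not necessarily how the final game will be structured) is a scene table rather than scattered if statements: each scene lists its choices, and each choice names the scene it leads to and the consequence it carries.

```javascript
// Hypothetical branching-story structure. Scene names, dialogue, and the
// darkSide scoring are placeholders for illustration.
const scenes = {
  start: { text: "Obi-Wan asks for patience.",
           choices: [{ label: "Obey", next: "jedi",  darkSide: 0 },
                     { label: "Defy", next: "anger", darkSide: 1 }] },
  anger: { text: "Anakin's anger grows.", choices: [] },
  jedi:  { text: "Anakin stays calm.",    choices: [] }
};

let state = { scene: "start", darkSide: 0 };

function choose(index) {
  const c = scenes[state.scene].choices[index];
  state.darkSide += c.darkSide; // consequences accumulate
  state.scene = c.next;         // story moves to the linked scene
}
```

Because every transition lives in one table, a choice can never lead to an event it was not explicitly connected to, which is exactly the inconsistency being worried about.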

Visuals

AI was used to generate visuals

 

Week 5 Reading Analysis

Unlike human vision, computer vision is not as malleable and flexible. It operates on the basis of algorithms; therefore, if any error occurs, it is not quick to adapt and lacks human adaptability. Instead, it would need to be retrained. I believe that in order to translate our physical world for computer vision algorithms, we must map out the particular characteristics that make things stand out, almost like how a child’s painting reveals the most identifying aspects of objects in our physical world. Since computer vision has certain limitations and requires algorithmic training, you would need to train the computer extensively to read what you want it to read, in extreme detail. In addition, you would want to make sure it makes no errors, especially since surveillance and privacy are extremely sensitive topics in the digital world.

Assignment 5: Midterm Progress

CONCEPT

For my midterm project, I am planning to create an active, time-management puzzle game where the player must balance quick math with physical character movement. The goal is to hang exactly 10 pieces of clothing on a clothesline before the 1:30 timer runs out, without exceeding the line’s strict weight limit. The player’s character can hang clothes in the backyard for up to 1:30 minutes, but she must carefully manage the weight on the line because it can carry only up to 25 grams.

The player will not know in advance which piece of clothing they will get from the laundered clothes basket. Each piece has a different weight: hoodies weigh 4 grams, pants 3 grams, shirts and shorts 2 grams, and tank tops 1 gram. If the player is unsure, they can temporarily place a piece of clothing in the “for later” basket. However, this basket can only hold up to 6 pieces of clothing, and once items are placed there, they cannot be accessed again until later.

I also plan to integrate a rain effect in the middle of the game, adding another layer of challenge. At a certain point, the player will receive a 10-second warning to pick up the laundered basket and move to a shaded area somewhere on the screen. If they fail to do so, every piece of clothing will gain an extra +1 gram. If the total weight on the line exceeds 25 grams, the line will snap, and the player will have to start over.
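The weight rules above can be captured in a couple of small helpers (a sketch; the function and constant names are my own):

```javascript
// Fixed weights per clothing type and the clothesline's 25 g limit,
// as described in the concept. Rain adds +1 g per item.
const WEIGHTS = { hoodie: 4, pants: 3, shirt: 2, shorts: 2, tank: 1 };
const LIMIT = 25;

// Total weight of the items currently on the line
function lineWeight(items, rainedOn = false) {
  const penalty = rainedOn ? 1 : 0;
  return items.reduce((sum, type) => sum + WEIGHTS[type] + penalty, 0);
}

// The line snaps if the limit is exceeded
function lineSnapped(items, rainedOn = false) {
  return lineWeight(items, rainedOn) > LIMIT;
}
```

Checking `lineSnapped` before each hang would be enough to decide whether the player has to start over.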

DESIGN 

I first designed my concept in Canva so I could actually see what the game would look like. I started with the intro page, then the instructions page, and finally worked on the game UI itself. I found a background on Pinterest and added some extra elements from Canva to make it look more complete. I struggled a bit at first with how to design the game UI because I wasn’t sure how everything would fit together, but after trying different layouts, I finally got it to work. I also asked Claude to help me generate different game maps, which gave me a bunch of ideas and styles. Looking at those really helped me figure out what I liked and guided me in deciding the final layout for the game.

UNCERTAIN PART

One of the uncertainties in this game is the rain part. Without the rain, players could win more easily, which is why I want to include it to add more challenge. However, the rain event is complicated to implement because it interrupts normal gameplay for about 40 to 50 seconds and requires multiple conditions to work at the same time. During this event, the player needs to drag the basket and hold it while walking, but the basket movement should only be allowed within a 10-second window. At the same time, the system also needs to add +3 g to the clothesline weight. The most confusing part for me is adding +1 g to all remaining basket items if they are not placed in the shade.

I think this part requires careful state management, such as handling dragging, carrying, placing, walking, and temporarily locking or allowing actions, along with proper timing so that nothing breaks, overlaps, or conflicts during the rain event.

REDUCING RISK

To reduce the risk, the controls during the rain event will be simplified. The player will press the spacebar to pick up the laundered basket and use the arrow keys to move the character. Once the rain starts, +3g will be added to the clothesline weight, making it unavailable, so the player will no longer be able to hang clothes. Instead, the player just needs to move to the shaded area with the basket and wait there until the rain stops.

The part I am still unsure about is how to add +1g to each clothing item if they are not placed in the shade. If this becomes too complicated to implement, I will simplify the mechanic. Instead of adding +1g to each clothing item inside the basket, I will just add an extra +2g to the total weight. If that still does not work well, another option is to change the setup so that the laundered clothes are already placed in the shaded area. In that case, the player’s task would simply be to carry the clothes from the shaded area to the clothesline and hang them once the rain stops.
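The rain sequence could be managed with a small state machine along these lines (a sketch; the names are my own, and the timings follow the 10-second warning and roughly 40-50 second rain described above):

```javascript
// Minimal rain-event state machine: normal play -> warning -> raining
// -> back to normal. update() would be called once per frame with the
// elapsed time in seconds.
const RainEvent = {
  state: "normal", // "normal" | "warning" | "raining"
  timer: 0,
  penalty: 0,      // extra grams per item if caught outside the shade

  startWarning() { this.state = "warning"; this.timer = 0; },

  update(dt, playerInShade) {
    this.timer += dt;
    if (this.state === "warning" && this.timer >= 10) {
      this.state = "raining";
      this.timer = 0;
      this.penalty = playerInShade ? 0 : 1; // +1 g per item if caught out
    } else if (this.state === "raining" && this.timer >= 45) {
      this.state = "normal"; // rain lasts roughly 40-50 s
      this.timer = 0;
    }
  }
};
```

Keeping all the rain logic in one object means dragging, hanging, and movement code only ever has to ask `RainEvent.state`, which avoids the overlapping-condition bugs described above.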

week 5 reading response

human vision and computer vision are actually quite similar in a way. humans are especially attuned to detecting even the most subliminal changes in their physical environment, be it sound or light or movement etc.

to illustrate just how important change is for our “vision”, next time you’re in a very dark room (when you go to bed tonight), try and stare at the far corner without blinking or moving your eyes. you’ll begin to notice that, gradually, a darkness creeps up from your peripherals and slowly makes its way towards the centerpoint of your vision. i thought this was so cool when i first discovered it, felt like i was falling into a void. this happens because the rods in your eyes (which are attuned to both light and movement) and the cones (which are attuned to color) are almost completely deprived of stimulation. your brain figures you don’t need your vision if there’s nothing to detect.

this is also the reason why we are constantly moving our eyes. ever notice the little micromovements your eyes are always making when your attention is focused externally? they need the movement to help keep them stimulated enough to see. and also, ever notice how, when there’s a noise that’s been going on for a long time, you only notice it when it suddenly stops? the brain kind of filters out stimuli that are continuous and unchanging. it’s looking for change, just like computer vision does.

it’s important to realize how inseparable technology and art both are from human biology, it’s all modeled off of our understanding of ourselves. the farther we progress in the fields of biology, medicine, neuroscience, and psychology, the greater capacity we have for advancements and inspiration in ai, computers, architecture, and, by extension, interactive media art.

week 3

    • concept:

simple, i wanted to create a cute blinking birdie staring at some trippy stars and contemplating things. what on earth could he possibly be pondering about? i fear we will never know.

  • A highlight of some code that i’m particularly proud of:

i used a while loop and if statements to make the background animation. it’s quite literally just a bunch of thin white concentric circle patterns bouncing off the edges of the canvas, overlapping with each other and a static one in the middle. pretty neat.

//sky pattern
 rectMode(CENTER);
 strokeWeight(0.4);
 stroke(255); // thin white lines
 noFill();

 // draw concentric circles, expanding outward by 10px each step
 while (sky > 1 && sky < 900) {
   circle(200, 200, 1);
   circle(200, 200, sky);
   sky += 10;
 }

 stroke(0);

 //pattern 1 (note: p5's circle() takes x, y, diameter)
 circle(x, y, 10);
 if (y > 375 || y < 25) {
   speedY = speedY * -1; // bounce off top/bottom
 }
 if (x > 375 || x < 25) {
   speedX = speedX * -1; // bounce off left/right
 }
 x = x + speedX;
 y = y + speedY;

 // pattern 2
 circle(a, b, 10);

 if (b > 375 || b < 25) {
   speedB = speedB * -1;
 }
 if (a > 375 || a < 25) {
   speedA = speedA * -1;
 }
 a = a + speedA;
 b = b + speedB;

i also used the randomGaussian() function to have the birdie blink at random intervals.

function lilGuy() {
  push();
  // sample a gaussian each frame; a narrow band of values triggers a blink
  let r = randomGaussian(50, 150);
  stroke(0);
  strokeWeight(1);
  translate(90, 0);
  fill(255);
  // body and head (angles assume angleMode(DEGREES))
  arc(195, 355.5, 80, 160, 270, 0, PIE);
  circle(195, 265, 39);
  arc(194, 280, 55, 25, 180, 270, PIE);
  strokeWeight(0);
  arc(195.5, 360.5, 80, 170, 270, 0);
  circle(195, 265, 38);
  strokeWeight(1.5);
  fill(255);
  // eye: its height h collapses to 1 when r lands in the blink band
  ellipse(192, 267, w, h);
  if (r < 51 && r > 45) {
    h = 1;
  } else {
    h = 17;
  }
  pop();
}
  • Reflection and ideas for future work or improvements:

if i had more time, i’d definitely add an interactive element, maybe some dialogue options so you can chat with the strange bird and get to the bottom of what he’s been musing about all mysteriously.

week 5 – midterm project

project concept:

for my midterm project, i’m planning to create a simple pixel rpg-style demo. the demo will start with an interactive cutscene, then the player will be able to navigate a room / setting.  the details are very vague, but i’ll figure out what works as i make progress.

design:

my visual inspirations for this project are the games Undertale (particularly the way the sprites and backgrounds are designed), and Sally Face, which i’m using as a reference on how to visually incorporate the dialogue interactions as well as the vibe i’m going for.

i drew the sprites via pixelart.com (honestly i spent more time on them than the coding itself…), and the background music is an 8-bit cover i found of a deftones song, which i thought sounded pretty awesome.  going forward, i intend to draw a pixel background with some interactive elements for the player to navigate. i want the overall experience to look eerie and sickly (which is why the sprite i made may seem a little jaundiced).

most frightening part and how i tackled it:

having to animate a sprite was definitely the most intimidating part for me. to start off, i reread the slides and really studied the examples provided. problem was, i wanted my sprite to be able to move while the arrow keys are held down, unlike the example in the slides where you have to spam the keys rapidly. to figure out how to achieve this, i did some googling and scrounged around for (mostly useless) advice on the internet (obviously including ai overview), and ultimately was referred back to the keyIsDown() reference page on p5.js. however, in trying to incorporate what i was learning, the code got extremely messy and buggy. all sorts of horrendous things happened to my little sprite – i cannot bear to speak of it. eventually, i figured things out myself through trial and error (like always), and, while relying heavily on my references to keep me on the right track, all the code is written by me. (i shall add comments later when i continue to work on the project.)

function draw() {
  background(0);
  if (keyIsDown(DOWN_ARROW)) {
    direction = 0;
    y += speed;
    step = (step + 1) % 4;
  } else if (keyIsDown(LEFT_ARROW)) {
    direction = 2;
    x -= speed;
    step = (step + 1) % 4;
  } else if (keyIsDown(UP_ARROW)) {
    direction = 1;
    y -= speed;
    step = (step + 1) % 4;
  } else if (keyIsDown(RIGHT_ARROW)) {
    direction = 3;
    x += speed;
    step = (step + 1) % 4;
  } else {
    step = 0;
  }
  if (y <= 0) {
    y = 0;
  }
  if (x <= 0) {
    x = 0;
  }
  if (y >= windowHeight - 126) {
    y = windowHeight - 126;
  }
  if (x >= windowWidth - 60) {
    x = windowWidth - 60;
  }
  image(sprites[direction][step], x, y, 70, 147);
}

references:

as mentioned earlier, ai was used in the sense that it popped up and tried to provide answers to the questions i googled. i used it as a tool to try and understand how keyIsDown() works when i was experiencing bugs, and analyzed the (very simple) examples it provided, then tried to implement what i learned into my code. i did not ask it to fix my bugs or provide me with code.

https://p5js.org/reference/p5/keyIsDown/

https://drive.google.com/file/d/18ZMq9BB1l5XhMx5OfzNciU2OJQbUKvg3/view?usp=sharing

Week 5 Reading

I found the emphasis on accessibility compelling, as it resonates with my sense that groundbreaking ideas often emerge when constraints like hardware, resources, or expertise are lowered. Still, I question whether the tone of inevitability (“recent improvements make artistic experimentation a reality”) oversimplifies deeper tensions: access to hardware, funding for arts-tech projects, and the proliferation of open-source culture are not universally available. The examples mentioned, like Videoplace and Sorting Daemon, foreground surveillance as a recurring motif in art. They highlight a critical stance toward automated perception. This aligns with my belief that technology should be examined through ethical, social, and political lenses. However, I would like the authors to more explicitly address issues of consent, data provenance, and the potential for misrepresentation when computer vision projects interpret publics.

The reading has made me rethink my own ideas about visibility and control in the realm of interactive art. It shows how tracking and head- or gesture-based interactions can produce intimate, embodied experiences, yet it also raises questions about privacy: who is being watched, and for what purpose? The use of surveillance-tinged artworks like Standards and Double Standards and Sorting Daemon highlights a paradox: the technology that augments artistic expression can simultaneously normalize and critique surveillance. I wonder how far artists should go in personalizing tracking. Are there boundaries that preserve participants’ agency without stifling experimentation? I want us to think: How do we balance artistic intent with privacy concerns in real-time, participatory installations? What concrete techniques feel most useful for a new programmer without compromising ethical considerations? And how can computer vision be used to create more reflexive, consent-aware audience engagement rather than passive observation?