Complete Midterm

Since the project needed to be interactive, I drew inspiration from several sources I have experience with, like the interactive haunted Queen Mary story and the film “Night at the Museum.” I decided to build an interactive spooky story set in a museum. The player makes choices at multiple points in the story, leading to different endings. I wrote the story myself and organized the branching so that seemingly safe choices can take unexpected turns.

Surprisingly, the most difficult part of my project was not the coding but getting AI to generate images for me. With code, I could see what was wrong and fix it directly, and as long as I wrote it correctly, it did its job. But the image-generation AI sometimes just didn’t get what I was saying, and since it doesn’t actually see its own output, it struggled whenever I asked it to edit or change an image it had already generated.

The way my project works is that I put all the scenes for the “playing” game state into a giant object called storyData, with each scene as a nested object. storyData lives in its own .js file, which keeps the main code organized since it only fetches information. All the scene properties are in the storyData file: scene names, their relations, and the audio, visual, duration, text, delay, and choice parameters for each scene. An example is below.

storyData.scene1_part3 = {
  text: "In the darkness, you hear laughter in the corridors...",
  duration: 5000,
  textDelay: 1500,
  visual: "booth_dark",
  audio: "girl_laughter",
  choices: [
    { label: "Investigate", next: "scene2_part1" },
    { label: "Stay", next: "scene3_part1" },
  ],
};

This object file is driven by my drawCurrentScene function, which I am rather proud of.

function drawCurrentScene() {
  background(0);
  let scene = storyData[currentSceneId];
  //image
  if (myImages[scene.visual]) {
    let currentImg = myImages[scene.visual];
    let aspectRatio = min(
      windowWidth / currentImg.width,
      windowHeight / currentImg.height
    );
    let drawWidth = currentImg.width * aspectRatio;
    let drawHeight = currentImg.height * aspectRatio;
    image(currentImg, windowWidth / 2, windowHeight / 2, drawWidth, drawHeight);
  }
  //timer
  let elapsedTime = millis() - sceneStartTime;
  //audio mechanism for delayed audio. Undelayed audio is handled in the changeScene function
  if (
    scene.audioDelay &&
    elapsedTime > scene.audioDelay &&
    lateSoundPlayed === false //If the scene has audio delay and time is more than delay and the delayed sound has not been played
  ) {
    mySounds[scene.audio].play(); //play the sound and indicate the delayed sound has been played
    lateSoundPlayed = true;
  } //This state mechanism for sound prevents it from playing every time draw runs
  //text (subtitle) display
  let delayTime;
  //if the scene has text delay make the delay time that, if not make it 0
  //The undelayed text cannot go in changeScene like the audio because it needs to be drawn every frame constantly. It must be in draw.
  if (scene.textDelay) {
    delayTime = scene.textDelay;
  } else {
    delayTime = 0;
  }
  //if time has passed delay,
  if (elapsedTime > delayTime) {
    //draw the background box for subtitles
    rectMode(CENTER);
    fill(0, 0, 0, 200);
    rect(
      windowWidth / 2,
      windowHeight * 0.85,
      windowWidth * 0.7,
      windowHeight * 0.1,
      10
    );
    //drawing the text
    fill(255);
    noStroke();
    textAlign(CENTER, CENTER);
    textSize(windowHeight * 0.04);
    // 4th parameter limits the max width of the text, keeping it from going out of the box
    text(scene.text, windowWidth / 2, windowHeight * 0.85, windowWidth * 0.7);
  }

  //scene change logic
  if (elapsedTime > scene.duration) {
    if (scene.autoNext) {
      changeScene(scene.autoNext); //If the scene has an automatic next scene, change to it
    } else {
      rectMode(CORNER);
      fill(0, 0, 0, 100);
      rect(0, 0, windowWidth, windowHeight);
      if (choicesDisplayed == false) {
        //If it does not have automatic next scene but has choices, draw the black veil and display the choices
        displayChoices();
      }
    }
  }
}

The drawCurrentScene function was written so it works for every scene that has an entry in the storyData file. It draws the image to fit the current window size, starts the scene timer, uses that timer to run the audio and text delays and the scene duration, and decides how to move to the next scene based on the scene’s parameters. This lets the game flow smoothly, like a video, when the player does not need to interact; when the player does need to make a choice, it allows unlimited time on the choice screen. Because every scene goes through the same steps, one function handles all 50 scenes instead of 50 separate processes, which keeps the code much simpler and more organized.

It also makes editing very easy. If you dislike any part of the story or want to add or delete anything, this function means you only need to change, add, or delete entries in storyData. And since storyData only stores information, it follows looser ordering and organization rules than the main code, so making changes reads much closer to human language.
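For example, adding a whole new branch is just one more entry in storyData (the scene names and values below are made up for illustration, not taken from the actual story):

```javascript
// Hypothetical new branch, added in the storyData file;
// nothing in the main code needs to change.
const storyData = {};

storyData.scene4_part1 = {
  text: "A cold draft slips under the door...",
  duration: 4000,
  textDelay: 1000,
  visual: "hallway_dark",
  audio: "door_creak",
  choices: [
    { label: "Follow it", next: "scene4_part2" },
    { label: "Ignore it", next: "scene1_part3" },
  ],
};
```

drawCurrentScene would then pick the scene up by its id, exactly like every other scene.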

I am also quite proud of my logic for updating element positions when the canvas is resized. The code actually incorporates multiple functions.

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  //resize and reposition button accordingly
  //start button
  if (gameState === "start" && startBtn) {
    //size
    let startWidth = max(200, windowWidth * 0.15);
    let startHeight = max(50, windowHeight * 0.08);
    let startFont = max(20, windowWidth * 0.02);
    startBtn.size(startWidth, startHeight);
    //position
    startBtn.position(windowWidth / 2 - startWidth / 2, windowHeight / 2 + 50);
    startBtn.style("font-size", startFont + "px");
  }
  //same for return button
  if (gameState === "credits" && returnBtn) {
    let btnWidth = max(200, windowWidth * 0.15);
    let btnHeight = max(50, windowHeight * 0.08);
    let btnFont = max(20, windowWidth * 0.02);
    returnBtn.size(btnWidth, btnHeight);
    returnBtn.position(windowWidth / 2 - btnWidth / 2, windowHeight * 0.85);
    returnBtn.style("font-size", btnFont + "px");
  }
  //game button
  if (choicesDisplayed && choiceButtons.length > 0) {
    for (let i = 0; i < choiceButtons.length; i++) {
      let btn = choiceButtons[i];
      updateButton(btn, i, choiceButtons.length);
    }
  }
}

In this code I not only changed the canvas size to match the window size; I also resized and repositioned the button elements. When I tried my code, I found that HTML elements do not move the way drawn text does: they don’t stay in the same position relative to the canvas. So I explicitly coded them to calculate a new size and position whenever the canvas is resized. This function is then called in the buttonFunction function so the buttons are ready to act every time one is created.

function buttonFunction() {
  //take the value from the button and give it to variable nextScene
  let nextScene = this.value();
  for (let i = 0; i < choiceButtons.length; i++) {
    //remove the buttons from the screen
    choiceButtons[i].remove();
  }
  //empty buttons array for new round of choices
  choiceButtons = [];
  //reset choice display state
  choicesDisplayed = false;
  //If next scene is credits
  if (nextScene === "credits") {
    //change game state in the state mechanism
    gameState = "credits";
    //display the restart button
    returnBtn.show();
    //use the windowResized function for reorganizing text and buttons for the credits screen.
    windowResized();
  } else if (nextScene === "restart") {
    //same logic as above
    gameState = "start";
    startBtn.show();
    windowResized();
  } else {
    //If it is just a story choice, use the story logic; the button doesn't need an extra function.
    changeScene(nextScene);
  }
}

windowResized is called after the button is told to show. This way the buttons will always be in the right place, no matter when or how the screen size changes.
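The per-button math inside updateButton isn’t shown above; as a rough sketch of the kind of proportional layout it could compute, following the same pattern as windowResized (the constants here are my assumptions, not the project’s actual values):

```javascript
// Compute size and position for choice button i of n, stacked in a
// centered column. A pure function, so it needs no browser to test.
function choiceButtonLayout(i, n, winW, winH) {
  const w = Math.max(200, winW * 0.15);   // same min-width pattern as windowResized
  const h = Math.max(50, winH * 0.08);
  const gap = h * 0.3;                     // assumed spacing between buttons
  const totalH = n * h + (n - 1) * gap;
  const x = winW / 2 - w / 2;              // horizontally centered
  const y = winH / 2 - totalH / 2 + i * (h + gap);
  return { x, y, w, h };
}
```

Calling btn.size(l.w, l.h) and btn.position(l.x, l.y) with this result would keep the HTML buttons glued to the canvas at any window size.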

function startGame() {
  fullscreen(true);
  gameState = "playing";
  startBtn.hide();
  //input any scene here and its a quick portal to the scene
  changeScene("intro");
}

I also want to mention this code snippet, which I found could double as a “maintenance platform.” It was originally written to change the scene from start to intro and the game state from start to playing. But change “intro” to any scene name in storyData and it becomes a portal to that scene. Without it, I would have had to play through the whole story every time I changed something and wanted to see the effect.

Some areas for improvement include adding fade in/out effects and more animation. Looking through the game, I felt some scenes needed a gradual introduction, which a fade-in would suit perfectly. I wasn’t able to add that due to time limits: I tried to code it, but bugs got in the way, and I didn’t have enough time to troubleshoot the whole thing, so I deleted it. The game would also look better with more animation, but it would be nearly impossible to reproduce the shapes of the spirits and ghosts in my current images with p5 code. The better way would be to make the plot a video and code the choices in between, but that would diverge from the goal of the midterm.
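For what it’s worth, a fade-in stays fairly bug-free if it is reduced to one pure mapping from elapsed time to alpha. A sketch of the idea (not the deleted attempt):

```javascript
// Map elapsed milliseconds to an alpha value, clamped to [0, 255].
function fadeInAlpha(elapsed, fadeDuration) {
  if (fadeDuration <= 0) return 255;
  const t = Math.min(Math.max(elapsed / fadeDuration, 0), 1);
  return Math.round(t * 255);
}
```

In drawCurrentScene this could drive something like tint(255, fadeInAlpha(elapsedTime, 1000)) before drawing the scene image, reusing the scene timer that already exists.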

AI was used to some extent in this work. All the images were generated by Google Gemini according to my requirements. For code, Gemini helped me with button styling: HTML elements were new to me, and I had trouble figuring out how to style them. Gemini introduced the different styling options, and I wrote the styling code based on the examples provided. It also gave me the idea of using an array for the buttons on screen so they can easily be added and removed (lines 433-436); I originally only had an array for the choices, so the buttons just got stuck on the screen. It also helped me write “not” in an if statement (line 490), because I had remembered it as ||, which is actually “or,” and the code failed to work. Gemini also assisted me when I needed sound effects and voiceovers: it suggested Freesound and ElevenLabs and gave me tutorials on how to use them. At the end of my project I also used Gemini for debugging once or twice when the game crashed with no console error messages, because it was difficult to pick out the error among hundreds of lines of code. AI was used for help and assistance and did not write the code. The code was written based on an understanding of what was taught in class and what AI explained to me.

Week 5: Reading Assignment

Computer vision differs from human vision in that it does not fully understand the context of what it is seeing. Humans rely on memory, context, and storytelling to understand what they see. What stood out to me is that computer vision depends less on artificial intelligence and more on borders, corners, contrast, intensity, and brightness. Computer vision sees pixels in small details, while we as humans might overlook small details and focus more on the bigger picture.

A technique I recently learned about in computer vision is body pose detection, which is used in many games such as Xbox 360 Kinect and Just Dance. The reading made me think that this is a useful tool in the field of interactive art. The author also seems optimistic about democratizing these tools for artists. Like many things in life, it is a double-edged sword—it can be used for good or bad. I am not only talking about computer vision, but also surveillance and tracking in general. These technologies can be used to harm individuals, or they can be used to improve safety within society. Context matters, and that is why humans have the privilege of detecting and evaluating ethical issues.

Week 5 — Midterm Project Progress

https://editor.p5js.org/MariamBarakat/sketches/5sXlvsZ0U

1. Project Concept and User Interaction

The Secret Garden will be an immersive resource-management game that utilizes a point-and-click adventure interface. The core concept revolves around the “journey of growth” — shifting the focus from instant gratification to the physical labor of gardening. To successfully cultivate life, the player must navigate a layered world, traveling between different locations to collect raw materials. The gameplay loop requires the player to visit the River to fetch water, the Pottery Studio to craft vessels (which may or may not be included), and the Forest to gather nutrient-rich soil.

The interaction is designed around a “Layered Scene” system. The player navigates by clicking on specific environmental “portals,” such as a path leading to the woods or a bridge crossing the river. Within each scene, the cursor acts as a tool: clicking the river fills a bucket, while clicking a potter’s wheel initiates a crafting sequence. In the central Garden hub, interaction is more tactical, requiring the player to drag and drop collected resources from their inventory onto active plant plots to trigger growth stages.

2. Code Design: Functions, Classes, and Interactivity

The technical architecture of the game will rely on a Scene Manager class to maintain the game state. This class will store boolean flags for every location, ensuring that the program knows exactly which background to render and which set of interactive hitboxes to activate. A global inventory object tracks the count of water, soil, pots, and seeds, acting as the bridge between the gathering scenes and the planting scenes.

The plants themselves will be managed through a Plant Class, which will encapsulate all the logic for individual growth. Each instance of a plant tracks its own unique needs and developmental progress. A central mouseClicked() function serves as the primary input handler, using conditional logic to determine what action to take based on the currently active layer. Furthermore, a persistent HUD runs every frame to provide the player with a clear view of their current resources, regardless of where they are in the world.
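The single-handler idea can be sketched in plain JavaScript as a dispatch on the active layer (the layer and action names here are my placeholders, not the project’s):

```javascript
// One input handler; the currently active layer decides what a click means.
// In the real sketch this decision would live inside mouseClicked().
function clickAction(activeLayer) {
  switch (activeLayer) {
    case "river":   return "fillBucket";
    case "pottery": return "shapeClay";
    case "forest":  return "gatherSoil";
    case "garden":  return "selectPlot";
    default:        return "none";
  }
}
```

Keeping the decision in one place means adding a new location only requires one new case, and the Scene Manager’s flags stay the single source of truth for which layer is active.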

3. The Most Uncertain Part
    • Asynchronous Growth Logic: In a multi-scene game, the player spends significant time away from the Garden. If growth is tied to the Garden’s display loop, plants will “freeze” in time whenever the player leaves the room. Ensuring plants age accurately in the background across different layers is a major logic hurdle.
    • Interactive Pottery and Asset Alignment: Designing a pottery mini-game that feels tactile — such as requiring the player to click and hold to “shape” clay — risks being buggy or unintuitive, so I’m still in the process of figuring out how exactly I want to introduce “interactivity” there. Additionally, finding specific sprites for everything and ensuring they align perfectly is difficult.
4. Risk Minimization and Test Implementation
    • By using the millis() function, each plant records the exact moment it was last nurtured. This creates a “real-time” clock for every organism. When the player returns to the Garden, the class calculates the difference between the current time and the recorded timestamp to determine growth, ensuring the world feels persistent and alive even when the player is at the River or Studio.
    • To handle the pottery mini-game, I plan to use a “fill-bar” logic where mousePressed increments a shaping variable. This avoids complex physics and keeps the interaction reliable.
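The millis() timestamp idea can be prototyped independently of the scene system. A minimal sketch, with a placeholder growth rate:

```javascript
// Each plant records the wall-clock time it was last updated, so growth
// is derived from elapsed time and keeps "running" while off-screen.
class Plant {
  constructor(now) {
    this.lastNurtured = now; // e.g. millis() in p5
    this.growth = 0;         // 0..100
  }
  update(now) {
    const elapsedSec = (now - this.lastNurtured) / 1000;
    // 2 growth points per second is a placeholder rate.
    this.growth = Math.min(100, this.growth + elapsedSec * 2);
    this.lastNurtured = now;
  }
}
```

When the player re-enters the Garden, calling update(millis()) on every plant would instantly catch them up on all the time spent at the River or Studio.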

Week 4 – Data Visualization

Data Visualization

For this week’s assignment, I made an animated visualization of global temperature data from 1880 to today. The sketch fetches real data from NASA’s GISS API and draws it as a bar chart. Blue bars mean cooler-than-average years; red bars mean warmer-than-average years. As the bars animate in, poetic phrases appear based on the temperature, giving the data a human, storytelling feel.

Controls: Click to pause/play · Hover bars for details · Press R to restart · Press S to save · Spacebar to toggle

Code I Am Proud Of

The part I am most proud of is how the bars get their color. Instead of just picking “blue” or “red,” I used lerpColor() to blend between shades based on the actual temperature value. Colder years get a deeper blue, and warmer years get a more intense red. It makes the warming trend really pop visually:

// Color: blue for cold, red for warm
let barColor;
if (d.anomaly < 0) {
  barColor = lerpColor(
    color(70, 130, 200, 200),
    color(40, 80, 160, 200),
    map(d.anomaly, 0, -0.5, 0, 1)
  );
} else {
  barColor = lerpColor(
    color(200, 100, 70, 200),
    color(240, 60, 40, 220),
    map(d.anomaly, 0, 1.5, 0, 1)
  );
}

I also like the generative text system. Each time a new data point appears, the sketch picks a phrase from a pool that matches the temperature: cold, warm, or neutral. It is a simple idea but it adds a lot of personality to the piece:

function triggerPhrase(anomaly) {
  if (millis() - lastPhraseTime < 1200) return;

  let phrases;
  if (anomaly < -0.1) {
    phrases = coldPhrases;
  } else if (anomaly > 0.3) {
    phrases = warmPhrases;
  } else {
    phrases = neutralPhrases;
  }

  activePhrase = random(phrases);
  phraseTargetAlpha = 255;
  lastPhraseTime = millis();
}

What I Learned

Working with a real API changed everything. The full 145-year dataset tells a story on its own. You can see the blue bars dominate the early decades, then the reds creep in and accelerate. I did not need to editorialize; the shape of the data is the narrative. The hardest part was dealing with CORS issues and making sure broken data rows do not crash the sketch.
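Guarding against broken rows mostly means refusing to plot anything that doesn’t parse to a finite number. A sketch of that kind of filter (the field names are assumptions for illustration, not NASA’s actual column headers):

```javascript
// Keep only rows whose year and anomaly both parse to finite numbers,
// so a malformed row degrades to a skipped bar instead of a crash.
function cleanRows(rows) {
  return rows
    .filter(r => r.year !== "" && r.anomaly !== "") // Number("") is 0, so reject empties first
    .map(r => ({ year: Number(r.year), anomaly: Number(r.anomaly) }))
    .filter(r => Number.isFinite(r.year) && Number.isFinite(r.anomaly));
}
```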


Data: NASA GISS Surface Temperature Analysis · Built with p5.js · February 2026

Week 4 – Reading Response – Kamila Dautkhan

1. Something that drives me crazy (not mentioned in the reading) and how it could be improved

One of the problems that drive me crazy is the touch screen controls in cars because many new cars have replaced physical buttons and knobs with a large screen for everything (like changing the AC or the radio). This is frustrating because you have to take your eyes off the road to look at the screen. You can’t feel where the button is like you could with a real knob.

I think the solution for this problem can be bringing back physical knobs for the most important things like volume and temperature. This uses what Norman calls “tactile feedback” so that you can feel the click of the knob and know exactly what you’re doing without looking away from the road.

  2. How can you apply some of the author’s principles of design to interactive media?

On a website, buttons should look like buttons (maybe with a shadow or a bright color). This gives the user a visual cue. When you click “Submit” on a form, the button should change color or show a loading circle, letting you know the website is actually responding and working. I also found the natural mapping principle very useful, because now I know controls should be placed in a logical way. For example, I should put important buttons on the right side of the screen because that’s the direction we read and move forward. Moreover, for conceptual models, I would use icons people already understand: a trash can icon for deleting files or a house icon for the home page helps users grasp how the app works instantly because it is already familiar to them.



Week 4- Reading response

One thing that drives me crazy is self-checkout and checkout lanes in general. Stores build 30 checkout stations but only use two. Now we have self-checkout, and still the process is confusing and slow. This problem is not about the technology itself. It is about poor design. The system does not clearly show how to use it efficiently. It lacks good signifiers. Simple visual cues, clear instructions, or intuitive layout would solve the problem. The action should be easy to understand without guessing.

From the reading, I learned that good design depends on affordances, signifiers, mapping, feedback, and clear conceptual models. In the case of self-checkout, the affordance exists because you can scan items and pay. But the signifiers are missing. The design does not communicate the correct sequence of actions. The mapping between the screen, the scanner, and the bagging area is also unclear. If designers focused more on discoverability and understanding, these everyday frustrations would not happen.

I can apply these principles to interactive media. In digital design, buttons, gestures, and workflows should clearly show what they do. Users should not guess how to swipe, click, or complete a task. Good feedback is also important. When a user takes an action, the system should respond immediately. Finally, the interface should support a simple conceptual model. Users should understand how the system works just by looking at it and trying it. If I design with clear signifiers, natural mapping, and strong feedback, I can create interactive work that feels simple, clear, and easy to use.

Week 3 – creative reading response

I think a strongly interactive system is one where the user and the system affect each other. It is not just clicking and seeing something move. It feels like a two way conversation. The system responds fast and clearly. The user understands what caused the change. The user’s choices matter and can change what happens next.

In my p5 sketches, I mostly use mouse movement and clicks. The shapes or colors react, but the interaction is sometimes very basic. It does not always feel deep. To improve this, I can make the sketch remember what the user did before. The drawing can change over time based on their actions. This will make it feel more alive and less repetitive.

I also want to give better feedback. The system should clearly show that it is responding. Small changes in color, size, or movement can help. I can also use more controls like the keyboard. This will give the user more ways to interact. I want my sketches to feel like small worlds the user can explore, not just simple animations.

Data in Graph

This weekend I was working on a replication study where I had to deal with data from multiple repos. Somewhere in the process I encountered a CSV with two columns: release version and the last pull request associated with it. I needed to figure out, on average, how many pull requests the repo used per release. When I sat down for the IM assignment, I thought: why not display that CSV as a graph using p5? This was the motivation for my work.

Initially I wanted to make a tree in which the branches represent the version numbers and the leaves represent the pull requests. But after some hassle I realized that my data is not hierarchical or evenly distributed enough to create a presentable tree, so I settled on a graph representation instead.

The first and hardest part was figuring out how to represent the data, because there were a lot of values. I decided to group the major and minor versions together and make bins of the numbers of pull requests. Even though this conveys less information, the output is at least readable and understandable. In coding, the grouping was the hardest part, because I had to read the array while keeping track of what I had already read and matching it against the next incoming value. That is also the part I am most proud of figuring out. The AI assistance for it is mentioned at the end.

// Loop through data row by row
let lastGroup = "";   // keep track of the last major.minor group
let y = paddingY;     // starting Y position
let groupCount = 0;   // how many PRs in the current group
let maxBins = 0;      // track maximum number of bins for the X-axis

for (let r = 0; r < data.getRowCount(); r++) {
  let pr = int(data.getString(r, 0));
  let version = data.getString(r, 1);

  // Major.minor for grouping
  let parts = version.replace("v", "").split(".");
  let group = `v${parts[0]}.${parts[1]}`;

  // New group discovered: move down a row and reset the count
  if (group !== lastGroup) {
    y += yStep;
    groupCount = 0;
    lastGroup = group;
  }
  // ... (binning and drawing for each row continue here)
}
Sketch

AI Assistance

I asked for functions like replace() and split() to manipulate the data values. I also asked AI to help me tidy up the code in terms of formatting and variable names. The most significant contribution that is not mine is the following line of code:

// Update maxBins if needed
let bins = groupCount / binSize;
if (bins > maxBins) maxBins = bins;

This didn’t change anything in the output, but it becomes useful if the file size increases significantly. I left it in as a “nice to have,” or good coding practice.
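The grouping logic can also be written as a standalone function over version strings, which makes it easy to check by hand (the row shape is assumed from the CSV described above):

```javascript
// Count pull requests per major.minor version group,
// using the same replace()/split() parsing as the sketch.
function countByMajorMinor(rows) {
  const counts = {};
  for (const { version } of rows) {
    const parts = version.replace("v", "").split(".");
    const group = `v${parts[0]}.${parts[1]}`;
    counts[group] = (counts[group] || 0) + 1;
  }
  return counts;
}
```

Averaging the counts over the number of groups then gives the pull-requests-per-release figure the study needed.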

Week 3 – Flower Garden

My project is an interactive generative artwork featuring a digital garden where flowers bloom and fade over time while butterflies move toward them. The user can interact by clicking anywhere on the canvas to plant new flowers.

My inspiration came from teamLab Phenomena in Abu Dhabi I visited, where there was an immersive environment allowing people to draw butterflies, snakes, and other animals that then came to life in a shared space. That experience brought me back to childhood memories of imagination, and I wanted to capture a similar feeling through code. In future versions, I plan to expand this project by adding sound elements for the atmosphere to be richer and more immersive. I also hope to introduce more types of creatures and possibly explore touch or motion-based interaction.

(Reference images: teamLab Phenomena Abu Dhabi)

The program begins in the setup() function, which creates the canvas and spawns an initial set of flowers and butterflies in random positions across the screen. The draw() loop serves as the heartbeat of the sketch. It first draws a smooth vertical gradient sky using the lerpColor() function, transitioning from soft blue at the top to gentle green near the bottom. Then, it updates all active flowers, allowing each one to grow and eventually fade as time passes. Meanwhile, the butterflies search for nearby flowers, moving toward them. The sketch also displays basic information such as the number of flowers and user instructions on planting new ones.

// Arrays to store objects
let flowers = [];
let butterflies = [];

function setup() {
  createCanvas(700, 600);
  
  // start flowers
  for (let i = 0; i < 8; i++) {
    flowers.push(new Flower(random(width), random(height)));
  }
  
  // start butterflies
  for (let i = 0; i < 6; i++) {
    butterflies.push(new Butterfly(random(width), random(height)));
  }
}


function draw() {
  // Draw gradient sky background
  for (let y = 0; y < height; y++) {
    let c = lerpColor(color(135, 206, 235), color(180, 220, 160), y / height);
    stroke(c);
    line(0, y, width, y);
  }
  
  // Update and draw flowers
  for (let i = flowers.length - 1; i >= 0; i--) {
    flowers[i].grow();
    flowers[i].display();
    
    // Remove old flowers
    if (flowers[i].age > flowers[i].lifespan) {
      flowers.splice(i, 1);
    }
  }
  
  // Update and draw butterflies
  for (let butterfly of butterflies) {
    butterfly.moveTowardFlowers();
    butterfly.display();
  }
  
  // Instructions
  fill(255, 200);
  noStroke();
  fill(60);
  textSize(14);
  text("Click to plant flowers", 20, 30);
  text(`Flowers: ${flowers.length}`, 20, 45);
}

// mouse interaction

function mousePressed() {
  flowers.push(new Flower(mouseX, mouseY));
}

// flower class

class Flower {
  constructor(x, y) {
    // Position
    this.x = x;
    this.y = y;
    
    // Size (starts small, grows)
    this.size = 0;
    this.maxSize = random(30, 60);
    
    // colors (random pastels)
    this.petalColor = color(random(200, 255), random(100, 200), random(200, 255));
    this.centerColor = color(random(200, 255), random(180, 220), random(50, 100));
    
    // Life
    this.age = 0;
    this.lifespan = random(600, 1000);
    
    // Look
    this.petalCount = floor(random(5, 9));
    this.angle = random(TWO_PI);
  }
  
  // make flower grow each frame
  grow() {
    this.age++;
    if (this.size < this.maxSize) {
      this.size += 0.5;
    }
    this.angle += 0.005; // Slow rotation
  }
  
  // draw the flower
  display() {
    push();
    translate(this.x, this.y);
    rotate(this.angle);
    
    // Fade out when old
    let alpha = 255;
    if (this.age > this.lifespan * 0.7) {
      alpha = map(this.age, this.lifespan * 0.7, this.lifespan, 255, 0);
    }
    
    // draw petals
    fill(red(this.petalColor), green(this.petalColor), blue(this.petalColor), alpha);
    noStroke();
    for (let i = 0; i < this.petalCount; i++) {
      let angle = (TWO_PI / this.petalCount) * i;
      let px = cos(angle) * this.size * 0.4;
      let py = sin(angle) * this.size * 0.4;
      push();
      translate(px, py);
      rotate(angle);
      ellipse(0, 0, this.size * 0.6, this.size * 0.3);
      pop();
    }
    
    // draw center
    fill(red(this.centerColor), green(this.centerColor), blue(this.centerColor), alpha);
    ellipse(0, 0, this.size * 0.4);
    pop();
  }
}

// butterfly class

class Butterfly {
  constructor(x, y) {
    // Position
    this.x = x;
    this.y = y;
    
    // movement
    this.vx = random(-1, 1);
    this.vy = random(-1, 1);
    this.speed = 1.5;
    
    // wings
    this.wingAngle = 0;
    this.wingSize = random(15, 25);
    
    // colors (random)
    this.wingColor = color(random(150, 255), random(100, 200), random(150, 255));
  }
  
  // move toward nearest flower
  moveTowardFlowers() {
    // Find closest flower
    let closestFlower = null;
    let closestDist = 999999;
    
    for (let flower of flowers) {
      let d = dist(this.x, this.y, flower.x, flower.y);
      if (d < closestDist && flower.size > 20) {
        closestDist = d;
        closestFlower = flower;
      }
    }
    
    // move toward it if close enough
    if (closestFlower && closestDist < 200) {
      let dx = closestFlower.x - this.x;
      let dy = closestFlower.y - this.y;
      this.vx += dx * 0.0002;
      this.vy += dy * 0.0002;
    }
    
    // randomness
    this.vx += random(-0.1, 0.1);
    this.vy += random(-0.1, 0.1);
    
    // limit speed
    let currentSpeed = sqrt(this.vx * this.vx + this.vy * this.vy);
    if (currentSpeed > this.speed) {
      this.vx = (this.vx / currentSpeed) * this.speed;
      this.vy = (this.vy / currentSpeed) * this.speed;
    }
    
    // update position
    this.x += this.vx;
    this.y += this.vy;
    
    // wrap around edges
    if (this.x < 0) this.x = width;
    if (this.x > width) this.x = 0;
    if (this.y < 0) this.y = height;
    if (this.y > height) this.y = 0;
    
    // flap wings
    this.wingAngle += 0.2;
  }
  
  // draw the butterfly
  display() {
    push();
    translate(this.x, this.y);
    
    // point in direction of movement
    let angle = atan2(this.vy, this.vx);
    rotate(angle);
    
    // wing flapping (0 to 1)
    let flap = sin(this.wingAngle) * 0.5 + 0.5;
    let wingHeight = this.wingSize * (0.5 + flap * 0.5);
    
    // wings
    fill(this.wingColor);
    noStroke();
    ellipse(-5, -wingHeight, 12, wingHeight * 1.5);
    ellipse(-5, wingHeight, 12, wingHeight * 1.5);
    
    // body
    fill(40);
    ellipse(0, 0, 8, 15);
    
    pop();
  }
}

 

Every flower is represented by an instance of the Flower class. Each flower starts small and increases in size frame by frame until it reaches its maximum. Colors are chosen randomly from pastel ranges to keep the palette gentle. Each flower contains several petals arranged uniformly in a circle and slowly rotates over time. The petals fade as the flower ages. The Butterfly class handles behavior for each butterfly’s movement, which combines randomness with directed motion toward flowers.

One of the challenges I encountered was controlling butterfly motion. At first, the butterflies moved too chaotically. By adjusting their acceleration toward flowers and capping their speed, I achieved a more graceful flying style. I also experimented with the fading of flowers to make the transition from bright color to transparency appear gradual and organic.

Visually, the project is as calm and immersive as I wanted it to be. The background gradient, flowers, and butterflies all work together to create an environment that feels alive yet peaceful.

Week 3 Reading Reflection – Megan

After reading this chapter, I realized that a strongly interactive system is really like having a good conversation: both sides have to listen, think, and speak well for it to feel alive. If one side fails, it just becomes boring or frustrating. So for a system to be truly interactive, it has to respond to the user in a way that feels meaningful, not just random or automatic. I liked how Crawford explains that interaction is not the same as reaction: a fridge, or a movie, is something you can only watch; you don’t actually talk to it. That made me think about my p5 sketches, because sometimes I make things move or change when the mouse touches them, but it feels kind of one-sided. I realized that to make them more interactive, I could have the objects respond in different ways depending on how you interact with them, like changing speed, direction, or color, or even starting a little animation unique to each action. I also liked the idea of giving each object its own “voice”: the dice in my project could react differently to the same input so it feels more alive and less predictable. I want to experiment more with letting the user affect not just the movement but the behavior and appearance over time, so it feels like the sketch is listening and thinking a little before it reacts. Overall, reading this made me want to make my sketches feel more like a real conversation between me and the code, rather than just me controlling it.