Midterm Project: Bakkar Ramadan Game

Concept:

Oy! (“Hello” in the Nubian language.) My concept is based on my favorite Ramadan series, Bakkar, which teaches students new lessons every day during the holy month of Ramadan, offering wisdom and the importance of community, as well as raising awareness for both children and adults. The series was a great success in Egypt and the Arab world. It was also the first 100% Egyptian-created cartoon series, first aired in 1998, and it introduces Nubian culture, one of the rich, unique, and strongly continuing cultures in Egypt.

Bakkar (TV Series 1998– ) - IMDb

So how does it work and what am I proud of:

Initializing all images and assets

let startImg, homeImg, bakkarImg, hassonaImg, mapImg, mapSceneImg;
let teacherImg, rashidaImg, friend1Img, shalabiImg, friend2Img, shalabi2Img;
let storeShelfImg, storeAsset1Img, storeAsset2Img, bagImg, shopSignImg;
let button;
let homeMapStartTime = 0;

 

Hassona is Bakkar's friend, guiding him through the game. I added typewriter-style text messages: hassonaMapIdx and hassonaStoreIdx start at 0 and count up to the full length of the map and store scene text, making it appear letter by letter.

let homeTypeText = "حسونه: يا بكار يلا نجمع فلوس الزينة رمضان\nHassona: Yalla Bakar Lets collect the money for the Ramadan decorations";
let homeTypeIdx = 0;

const hassonaMapText = "حسونة: اضغط Space للتكلم مع أصحابنا والجيران!\nHassona: Press Space to Talk to our Friends and Neighbors!";
const hassonaStoreText = "حسونة: اسحب الأغراض للشنطة عشان تشتري!\nHassona: Click & Drag objects to the Bag to Buy!";
let hassonaMapIdx = 0;
let hassonaStoreIdx = 0;

 

 

This is the pinned message at the top giving instructions. I added a white speech box behind the text to make the message easier to read.

fullText.substring(0, idx) slices the full string to show only the characters up to idx, creating the typewriter effect.

function drawHassonaBanner(fullText, idx) {
  noStroke();

  // Dark semi-transparent backing strip
  fill(0, 150);
  rect(0, 0, width, BANNER_H);

  // Hassona avatar
  image(hassonaImg, ICON_PAD, ICON_PAD, ICON_SIZE, ICON_SIZE);

  // White speech box
  fill(255);
  rect(BOX_X, ICON_PAD, BOX_W, BANNER_H - ICON_PAD * 2, 8);

  // Typewriter text inside box
  fill(0);
  textAlign(LEFT);
  textSize(14);
  text(fullText.substring(0, idx), BOX_X + 12, ICON_PAD + 18, BOX_W - 22, BANNER_H - ICON_PAD * 2 - 10);
}

 

This function draws the Hassona intro banner on the home screen; it waits 2 seconds after the scene starts. I used ChatGPT to learn how to run a countdown before showing an image or object, as well as the typewriter style of the text banner. There is also a debugMode to help with seeing where the collision boxes are, which I got from Claude because I struggled to envision where everything goes.

function drawHomeTypewriter() {
  if (millis() - homeMapStartTime < 2000) return;
  image(hassonaImg, 10, 10, 100, 100);
  fill(255); rect(120, 20, 650, 100, 10);
  fill(0); textAlign(LEFT);
  text(homeTypeText.substring(0, homeTypeIdx), 140, 50, 600);
  if (frameCount % 2 === 0 && homeTypeIdx < homeTypeText.length) homeTypeIdx++;
}

 

Every frame, updateRashidaTrail saves Bakkar's current position to a history array, trims it to TRAIL_DELAY frames, then positions Rashida at the oldest saved position, making her follow Bakkar with a slight delay. drawRashida simply draws her sprite at that calculated position.

function updateRashidaTrail() {
  posHistory.push({ x: x, y: y });
  if (posHistory.length > TRAIL_DELAY + 1) posHistory.shift();
  if (posHistory.length > 0) {
    rashidaX = posHistory[0].x - RASHIDA_W * 0.5;
    rashidaY = posHistory[0].y + (charH - RASHIDA_H) * 0.5;
  }
}

function drawRashida() {
  image(rashidaImg, rashidaX, rashidaY, RASHIDA_W, RASHIDA_H);
}

 

 

For collision I used a fixed hitbox offset, so when my character overlaps an object the collision gets detected, and then it either stops the character from passing through the object or transitions to the next scene.

The moveCharacter function moves the character with the arrow keys, subtracting from x or y for the negative directions (left and up) and adding for the positive directions (right and down). If moving horizontally doesn't cause a collision, it applies the new X; if moving vertically doesn't cause a collision, it applies the new Y. This way the player can slide along a wall instead of getting completely stuck when hitting it diagonally.

function checkCollision(cx, cy, ox, oy, ow, oh) {
  return (
    cx + hitboxOffsetX < ox + ow &&
    cx + hitboxOffsetX + hitboxW > ox &&
    cy + hitboxOffsetY < oy + oh &&
    cy + hitboxOffsetY + hitboxH > oy
  );
}

function collidesWithList(nx, ny, list) {
  for (let obs of list) {
    if (checkCollision(nx, ny, obs.x, obs.y, obs.w, obs.h)) return true;
  }
  return false;
}

function moveCharacter(obstacles) {
  let nx = x, ny = y;
  if (keyIsDown(LEFT_ARROW)) nx -= step;
  if (keyIsDown(RIGHT_ARROW)) nx += step;
  if (keyIsDown(UP_ARROW)) ny -= step;
  if (keyIsDown(DOWN_ARROW)) ny += step;
  nx = constrain(nx, 0, width - charW);
  ny = constrain(ny, 0, height - charH);
  if (!collidesWithList(nx, y, obstacles)) x = nx;
  if (!collidesWithList(x, ny, obstacles)) y = ny;
}
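The debugMode overlay mentioned earlier is not shown in the post. A minimal sketch of what it might look like, with placeholder offset values and a hypothetical hitboxRect helper (not the project's actual numbers or names):

```javascript
// Hypothetical debug overlay sketch. The offsets below are placeholders,
// not the real values from the project.
const hitboxOffsetX = 10, hitboxOffsetY = 20;
const hitboxW = 40, hitboxH = 50;

// Pure helper: the hitbox rectangle for a character drawn at (cx, cy).
function hitboxRect(cx, cy) {
  return { x: cx + hitboxOffsetX, y: cy + hitboxOffsetY, w: hitboxW, h: hitboxH };
}

// p5 drawing: outline the player hitbox and every obstacle in red.
// Would be called at the end of draw() when debugMode is true.
function drawDebugHitboxes(px, py, obstacles) {
  noFill();
  stroke(255, 0, 0);
  const hb = hitboxRect(px, py);
  rect(hb.x, hb.y, hb.w, hb.h);
  for (let obs of obstacles) rect(obs.x, obs.y, obs.w, obs.h);
}
```

Drawing the same rectangle that checkCollision tests makes it easy to see why a collision fires (or doesn't) at a given spot.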

 

The store scene shows the amount of money collected in the variable moneyTotal, and there is a bag zone object with specific x, y, w, and h values where items can be dropped to buy them.

function drawStoreScene() {
  image(storeShelfImg, 0, 0, width, height);

  image(bagImg, bagZone.x, bagZone.y, bagZone.w, bagZone.h);
  fill(255); textAlign(CENTER); textSize(14);
  text("Drop to Buy", bagZone.x + 90, bagZone.y + 30);

  // Money HUD sits below the banner
  fill(0, 180); rect(0, BANNER_H, width, 48);
  fill(255, 215, 0); textSize(20); textAlign(CENTER);
  text("Money: " + moneyTotal + " EGP", width / 2, BANNER_H + 33);
 

It then loops through storeItems and draws each one that hasn't been bought or placed yet; if the player is currently dragging an item, it draws that item following the mouse. Finally, it handles the error message, dialogue, and back button, and draws the Hassona banner last so it always appears on top of everything else.

  for (let itm of storeItems) {
    if (!itm.inBag && !itm.placedOnMap) {
      image(itm.img, itm.px, itm.py, 100, 100);
      fill(255); textSize(18); text(itm.name, itm.px + 50, itm.py - 10);
    }
  }
  if (dragging && scene === 'store_scene') {
    image(storeItems[dragging.idx].img, mouseX - 50, mouseY - 50, 100, 100);
  }
  if (millis() < statusTimer) { fill(255, 0, 0); text(statusMessage, width / 2, height / 2); }

  drawDialogueUI();

  fill(255); textSize(18); text("Press 'B' to Return to Map", width / 2, height - 30);

  // Hassona banner drawn last — always on top
  drawHassonaBanner(hassonaStoreText, hassonaStoreIdx);
  if (frameCount % 2 === 0 && hassonaStoreIdx < hassonaStoreText.length) hassonaStoreIdx++;
}
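The mouse handlers that do the actual dragging and buying are not shown in the post. A minimal sketch, reusing the variable names that appear above (storeItems, bagZone, dragging, moneyTotal, statusMessage, statusTimer) plus a hypothetical pointInZone helper:

```javascript
// Pure helper (hypothetical name): is a point inside a rectangle zone?
function pointInZone(px, py, zone) {
  return px >= zone.x && px <= zone.x + zone.w &&
         py >= zone.y && py <= zone.y + zone.h;
}

// Sketch of the p5 mouse handlers, assuming the globals shown earlier exist.
function mousePressed() {
  // Start dragging the first shelf item under the mouse (items drawn 100x100)
  for (let i = 0; i < storeItems.length; i++) {
    const itm = storeItems[i];
    if (!itm.inBag && !itm.placedOnMap &&
        pointInZone(mouseX, mouseY, { x: itm.px, y: itm.py, w: 100, h: 100 })) {
      dragging = { idx: i };
      return;
    }
  }
}

function mouseReleased() {
  if (!dragging) return;
  const itm = storeItems[dragging.idx];
  if (pointInZone(mouseX, mouseY, bagZone)) {
    if (moneyTotal >= itm.cost) {
      moneyTotal -= itm.cost;   // pay for the item
      itm.inBag = true;         // it disappears from the shelf
    } else {
      statusMessage = "Not enough money!";
      statusTimer = millis() + 2000; // show the error for 2 seconds
    }
  }
  dragging = null;
}
```

The key idea is that mouseReleased only completes the purchase when the drop point is inside bagZone and the player can afford the item's cost.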

 

drawBakar draws the player, drawDoor draws a brown rectangle for the home door, drawStoreEntrance draws the shop sign image, and startGame switches to the home scene, hides the start button, records the start time, and resets Rashida's position and trail history.

function drawBakar() { image(bakkarImg, x, y, charW, charH); }
function drawDoor() { fill(101, 67, 33); rect(doorX, doorY, doorW, doorH, 5); }
function drawStoreEntrance() {
  image(shopSignImg, storeDoor.x, storeDoor.y, storeDoor.w, storeDoor.h);
}
function startGame() {
  scene = "home_map";
  button.hide();
  homeMapStartTime = millis();
  rashidaX = x - RASHIDA_W - 10;
  rashidaY = y;
  posHistory = [];
}

 

Draws a small semi-transparent dark badge in the top-right corner just below the Hassona banner, displaying the player’s current cash total in gold text.

function drawMoneyHUD() {
  // In map_scene, cash badge sits just below the Hassona banner
  fill(0, 180); rect(width - 190, BANNER_H + 8, 170, 40, 10);
  fill(255, 215, 0); textSize(16); textAlign(CENTER);
  text("Cash: " + moneyTotal, width - 105, BANNER_H + 30);
}

 

initStoreItems sets up each store asset with its x and y positions, name, cost, and image, all initialized at the beginning of the code.

function initStoreItems() {
  storeItems = [
    { name: "Lantern", img: storeAsset1Img, cost: 10, shelfX: 100, shelfY: 300, px: 100, py: 300, inBag: false, placedOnMap: false, mapX: 0, mapY: 0 },
    { name: "Decor", img: storeAsset2Img, cost: 15, shelfX: 300, shelfY: 300, px: 300, py: 300, inBag: false, placedOnMap: false, mapX: 0, mapY: 0 }
  ];
}

 

I wrote the dialogue for each interaction with each character, along with their x and y positions, names, and images that were initialized at the beginning of the code.

function initNPCs() {
  npcs = [
    { name: "Teacher", img: teacherImg, x: 100, y: 200, completed: false,
      greet: "Bakar! Help with Ramadan decorations?",
      opts: [{ text: "Yes! (Collect 10 EGP)", reward: 10, reply: "Good luck!" }] },

    { name: "Friend", img: friend1Img, x: 200, y: 600, completed: false,
      greet: "Bakar! I saved some money for the decorations!",
      opts: [{ text: "Thanks! (Collect 15 EGP)", reward: 15, reply: "Yalla habibi!" }] },

    { name: "Friend2", img: friend2Img, x: 380, y: 600, completed: false,
      greet: "Hey Bakar! I have some money for decorations!",
      opts: [{ text: "Thanks! (Collect 15 EGP)", reward: 15, reply: "Happy Ramadan!" }] },

    { name: "Shalabi", img: shalabi2Img, x: 560, y: 200, completed: false,
      greet: "Bakar ya basha! Khawd el floos!",
      opts: [{ text: "Shukran! (Collect 25 EGP)", reward: 25, reply: "Ramadan Kareem!" }] }
  ];
}

Sketch:

Areas I would like to improve:

I would like to improve a lot of things: the quality of the pixel art, more Ramadan decoration assets, and more interaction between Bakkar and the NPCs. Honestly, I had an idea in mind to make an immersive interactive game, but due to circumstances it was a bit difficult to create. There are so many more things I could have added to make the project more unique; I feel the most unique part of it is that it is a simple introduction to Egyptian community and culture. I could also add the music and sound effects from the series, and the intro song in the start scene. I also did not give the game an ending, because I want to keep improving it and turn it into a true RPG, similar to Stardew Valley, with different quests and mini-games. I would also like to add more dialogue between the characters so players can learn about their roles and stories in the series.

 

Week 5: Midterm Progress

Concept:

My concept is based on my favorite Ramadan TV shows, “ظاظا و جرجير” (Zaza w Gargyer) and “بكار” (Bakar). These shows teach children Islamic and Egyptian morals and values in an indirect and engaging way. Bakar is especially important because it was one of the first cartoons to represent Egyptian culture and identity.

I want to create an adventure game where players can choose one of these cartoon characters and go on a journey inspired by the spirit of Ramadan. Throughout the game, players will stop to help others, learn from their mistakes, and make choices that reflect kindness, generosity, patience, and responsibility. The goal is to combine fun gameplay with meaningful lessons, just like the original shows.

Design:

I want my game to give a cozy, relaxing feel that you could play anytime, on a portable console or a home console. I want it to be as nostalgic as possible, with an aesthetic similar to Stardew Valley. I am currently working on the characters' sprite sheets.

 

 

Frightening / Challenging Aspects

The most frightening aspect for me is figuring out how to make the game simple, yet fun, while still being a meaningful learning experience. I’m unsure whether I should design it as a pixel-style platformer or a 2D narrative game.

I also want the game to accurately capture and translate the spirit of Ramadan, just like the original cartoons. Can I successfully express that spirit through a cozy, pixelated platform game that can be enjoyed by the entire family?

Risk Prevention

To reduce this risk, I would create a short sample or prototype of the game to test whether it truly fits the idea and essence I have in mind. Most importantly, I would evaluate how the characters and the setting turn out visually.

Then, I would ask people who are familiar with the original cartoons whether they can recognize the characters and understand what they represent in the pixelated version. If a high number of people are able to recognize them and connect them to the original spirit of the shows, I would feel more confident about moving forward with the project.

Week 5: Reading Assignment

Computer vision differs from human vision in that it does not fully understand the context of what it is seeing. Humans rely on memory, context, and storytelling to understand what they see. What stood out to me is that computer vision depends less on artificial intelligence and more on borders, corners, contrast, intensity, and brightness. Computer vision sees pixels in small details, while we as humans might overlook small details and focus more on the bigger picture.

A technique I recently learned about in computer vision is body pose detection, which is used in many games such as Xbox 360 Kinect and Just Dance. The reading made me think that this is a useful tool in the field of interactive art. The author also seems optimistic about democratizing these tools for artists. Like many things in life, it is a double-sided sword—it can be used for good or bad. I am not only talking about computer vision, but also surveillance and tracking in general. These technologies can be used to harm individuals, or they can be used to improve safety within society. Context matters, and that is why humans have the privilege of detecting and evaluating ethical issues.

Week 4 – Reading Response

Reading Norman’s chapter made me realize how often I get frustrated with specific designs, especially ones that lack efficiency in everyday objects. Norman emphasizes that good design should communicate clearly, prevent errors, and provide feedback. I see this principle in some interactive media, where the design makes it easy to use without much explanation—anyone can figure it out quickly. When something is designed well, you don’t even notice it because everything feels natural and intuitive. Unlike the examples the author mentioned, such as the sink that requires pushing down on it or the door that needs a sign to explain that it is a sliding door, good design should not require instructions. If a user has to stop and think about how to use something basic, then the design has already failed.

Something that drives me crazy is the access doors on campus. I walk around carrying two access cards—one specifically for my suite and room, and another for the rest of the campus. It feels unnecessary and inefficient. On top of that, the glass doors are extremely heavy, and the sensors do not work most of the time. Instead of making entry smooth and accessible, the design creates frustration. According to Norman’s ideas, better mapping, clearer feedback, and fewer constraints could significantly improve this experience.

Week 4 – Global Mood (Data Visualization)

Concept:
My concept is based on showing the current global mood and the world's current situation. Whenever I googled "news," most of the results evoked a negative emotion in me. So I decided to visualize the news and categorize it into a few different emotions.

How I created the code:

I used the Guardian and NYT API keys in order to get access to live articles, although there are some restrictions, like page request limits. Therefore, I added some delay in order to access a larger number of pages and news article headlines. I also used world.json for the country borders.

I then created different arrays: one for the emotional bubbles, one for the country borders, one for the CNN breaking news ticker, and one for tracking articles so they are not shown twice. I also added a timer that updates every 60 seconds and adjusted the speed and position of the news ticker.
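The 60-second update timer could be sketched with a small helper like this (the names maybeRefresh, REFRESH_MS, and lastFetch are hypothetical, not the project's actual code):

```javascript
// Sketch of a periodic refresh check, assuming a millisecond clock
// like p5's millis() is passed in.
const REFRESH_MS = 60 * 1000; // refresh every 60 seconds
let lastFetch = 0;

// Returns true when enough time has passed since the last fetch,
// and records the new fetch time.
function maybeRefresh(now) {
  if (now - lastFetch >= REFRESH_MS) {
    lastFetch = now;
    return true; // time to fetch fresh headlines
  }
  return false;
}
```

In draw(), something like `if (maybeRefresh(millis())) fetchGuardianNews();` would trigger the update without blocking the animation loop.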

Then I added geographical points for a list of countries. I created bubbles for different emotions, with each emotion represented by a color. There is also a map key showing which color represents which emotion. The bubbles have visual effects like glowing and shrinking over time to make the map feel dynamic. Emotions are detected using keywords in article titles to classify sadness, anger, hope, or joy.
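The keyword-based emotion detection could be sketched like this (the keyword lists below are illustrative assumptions, not the project's actual lists):

```javascript
// Illustrative keyword lists; the real project's lists may differ.
const EMOTION_KEYWORDS = {
  sadness: ["dies", "death", "mourning", "victims", "tragedy"],
  anger:   ["protest", "clash", "attack", "strike", "outrage"],
  hope:    ["peace", "deal", "breakthrough", "recovery", "agreement"],
  joy:     ["celebrates", "wins", "festival", "joy"]
};

// Classify a headline by the first emotion whose keyword it contains.
function classifyEmotion(title) {
  const lower = title.toLowerCase();
  for (const [emotion, words] of Object.entries(EMOTION_KEYWORDS)) {
    if (words.some((w) => lower.includes(w))) return emotion;
  }
  return "neutral"; // no keyword matched
}
```

Each classified headline would then spawn a bubble of the matching emotion's color at the article's country coordinates.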

It initially gets the last 48 hours of news, then it is updated with live breaking news. I also added fallbacks: if the world map fails to load, a simple grid is shown, and if the API fails, a CORS proxy is used to make sure the news still comes through.

The code:
// Convert guardian format to our format
// Fetch 48 hours of historical news from The Guardian
function fetchHistoricalNews() {
  let twoDaysAgo = new Date();
  twoDaysAgo.setDate(twoDaysAgo.getDate() - 2);
  let fromDate = twoDaysAgo.toISOString().split("T")[0]; // Format: YYYY-MM-DD
  console.log("📅 Fetching Guardian news from " + fromDate + " to today...");

  let totalArticles = [];
  let pagesToFetch = 10; // Get 10 pages of results
  let pagesLoaded = 0;
  let failedPages = 0;

  // Fetch pages sequentially with delay to avoid rate limiting
  for (let pageNumber = 1; pageNumber <= pagesToFetch; pageNumber++) {
    setTimeout(() => {
      let apiURL =
        "https://content.guardianapis.com/search?section=world&show-tags=keyword&from-date=" +
        fromDate +
        "&page-size=30&page=" +
        pageNumber +
        "&show-fields=webPublicationDate&api-key=" +
        GUARDIAN_API_KEY;
      console.log("🔄 Requesting Guardian page " + pageNumber + "...");
      fetch(apiURL)
        .then((response) => {
          console.log("📡 Guardian page " + pageNumber + " response status: " + response.status);
          if (!response.ok) throw new Error("HTTP " + response.status);
          return response.json();
        })
        .then((data) => {
          if (data && data.response && data.response.results) {
            totalArticles = totalArticles.concat(data.response.results);
            pagesLoaded++;
            console.log("✅ Page " + pageNumber + " loaded: " + data.response.results.length + " articles");
            if (pagesLoaded + failedPages === pagesToFetch) {
              if (totalArticles.length > 0) {
                console.log("📊 Total Guardian historical: " + totalArticles.length + " (" + pagesLoaded + "/" + pagesToFetch + " pages successful)");
                isShowingHistorical = true;
                sourceStatus.guardian.active = true;
                sourceStatus.guardian.articleCount = totalArticles.length;
                processArticles(totalArticles, true, "guardian"); // true = historical
              } else {
                console.error("❌ All Guardian pages failed");
                sourceStatus.guardian.active = false;
              }
            }
          } else {
            console.warn("⚠️ Guardian page " + pageNumber + " returned empty results");
            failedPages++;
          }
        })
        .catch((error) => {
          console.error("❌ Guardian page " + pageNumber + " failed:", error.message);
          failedPages++;
          if (pagesLoaded + failedPages === pagesToFetch) {
            if (totalArticles.length > 0) {
              console.log("📊 Total Guardian historical: " + totalArticles.length + " (" + pagesLoaded + "/" + pagesToFetch + " pages successful)");
              isShowingHistorical = true;
              sourceStatus.guardian.active = true;
              sourceStatus.guardian.articleCount = totalArticles.length;
              processArticles(totalArticles, true, "guardian");
            } else {
              console.error("❌ All Guardian pages failed");
              sourceStatus.guardian.active = false;
            }
          }
        });
    }, pageNumber * PAGE_REQUEST_DELAY); // Use delay variable
  }
}

// Fetch the latest breaking news from The Guardian
function fetchGuardianNews() {
  console.log("📰 [" + getCurrentTime() + "] Fetching Guardian news...");
  let apiURL =
    "https://content.guardianapis.com/search?section=world&show-tags=keyword&page-size=25&show-fields=webPublicationDate&api-key=" +
    GUARDIAN_API_KEY;
  fetch(apiURL)
    .then((response) => {
      if (!response.ok) throw new Error("HTTP " + response.status);
      return response.json();
    })
    .then((data) => {
      if (data && data.response && data.response.results) {
        console.log("✅ [" + getCurrentTime() + "] Guardian: " + data.response.results.length + " articles");
        sourceStatus.guardian.active = true;
        sourceStatus.guardian.lastUpdate = new Date();
        sourceStatus.guardian.articleCount = data.response.results.length;
        isShowingHistorical = false; // We're showing breaking news now
        processArticles(data.response.results, false, "guardian"); // false = breaking news
      }
    })
    .catch((error) => {
      console.log("⚠️ Guardian direct failed, trying CORS proxy...");
      tryGuardianWithProxy();
    });
}

// Backup method: Try Guardian API through CORS proxy
function tryGuardianWithProxy() {
  let apiURL =
    "https://content.guardianapis.com/search?section=world&show-tags=keyword&page-size=25&show-fields=webPublicationDate&api-key=" +
    GUARDIAN_API_KEY;
  let proxiedURL = "https://api.allorigins.win/raw?url=" + encodeURIComponent(apiURL);
  fetch(proxiedURL)
    .then((response) => {
      if (!response.ok) throw new Error("HTTP " + response.status);
      return response.json();
    })
    .then((data) => {
      if (data && data.response && data.response.results) {
        console.log("✅ [" + getCurrentTime() + "] Guardian via proxy: " + data.response.results.length + " articles");
        sourceStatus.guardian.active = true;
        sourceStatus.guardian.lastUpdate = new Date();
        sourceStatus.guardian.articleCount = data.response.results.length;
        isShowingHistorical = false;
        processArticles(data.response.results, false, "guardian");
      }
    })
    .catch((error) => {
      console.error("❌ [" + getCurrentTime() + "] Guardian completely failed:", error.message);
      sourceStatus.guardian.active = false;
    });
}

 

Reflection and ideas for future work or improvements:

Reflection:

Global Mood taught me a lot about combining live data, visualization, and emotion analysis. Seeing emotions vary across regions in real time was fascinating, and effects like glowing and shrinking bubbles made the map feel dynamic. It also taught me how to use APIs and JSON files in p5.js.

Future Work and Improvements:

I would love to present it as an installation to show people the current global situation. For future improvements, I would incorporate Natural Language Processing to classify emotions more accurately, rather than relying solely on specific keywords. I also wish I had greater access to open-source news APIs to expand the dataset.

 

Week 3- OOP Assignment

 

Your concept:

I was inspired by my house cats, who always bring joy and happiness when I play with them or when they decide to sit on my lap. I wanted to include them, but wasn’t sure how, so I decided to base it on Nyan Cat, which I used to play when I was younger, when I had to wait for my friends to arrive, didn’t have access to the internet, or just wanted to give my brain a break.

Nyan Cat (Music Video 2011) - IMDb

Code:

I prompted Gemini to add the particles; it used blendMode(ADD), which is what makes the magic particles look like they are made of light.

let segments = 8;
let segLength = 12;
let x = [], y = [];
let fireParticles = [];

const COLORS = {
  fur: [255, 235, 245],
  ears: [255, 180, 200],
  eyes: [100, 200, 255],
  magic: ['#FFD1DC', '#FFECB3', '#B2E2F2', '#D1FFD7', '#E0BBE4']
};

function setup() {
  createCanvas(windowWidth, windowHeight);
  // Initialize segment positions
  for (let i = 0; i < segments; i++) {
    x[i] = mouseX;
    y[i] = mouseY;
  }
  background(0);
  angleMode(RADIANS);
}

function draw() {
  // Semi-transparent black rect creates the generative trail
  blendMode(BLEND);
  fill(0, 0, 0, 30);
  rect(0, 0, width, height);

  let mouseSpeed = dist(mouseX, mouseY, pmouseX, pmouseY);

  // Follow logic
  dragSegment(0, mouseX, mouseY);
  for (let i = 0; i < x.length - 1; i++) {
    dragSegment(i + 1, x[i], y[i]);
  }

  let headAngle = atan2(mouseY - y[1], mouseX - x[1]);
  let emissionRate = map(mouseSpeed, 0, 50, 1, 8);

  if (mouseIsPressed || mouseSpeed > 2) {
    for (let i = 0; i < emissionRate; i++) {
      fireParticles.push(new MagicParticle(x[0], y[0], headAngle, mouseSpeed));
    }
  }

  // Tail
  drawTail(x[segments - 1], y[segments - 1]);

  // Body
  for (let i = x.length - 1; i > 0; i--) {
    drawCatBody(x[i], y[i], i);
  }

  // Glowing particles
  // We use ADD blend mode to make them pop on black
  blendMode(ADD);
  for (let i = fireParticles.length - 1; i >= 0; i--) {
    fireParticles[i].update();
    fireParticles[i].display();
    if (fireParticles[i].isDead()) fireParticles.splice(i, 1);
  }
  blendMode(BLEND);

  // Head always on top
  drawCatHead(x[0], y[0], headAngle);
}

function dragSegment(i, xin, yin) {
  let dx = xin - x[i];
  let dy = yin - y[i];
  let angle = atan2(dy, dx);
  x[i] = xin - cos(angle) * segLength;
  y[i] = yin - sin(angle) * segLength;
}

function drawCatBody(posx, posy, index) {
  push();
  translate(posx, posy);
  fill(COLORS.fur);
  noStroke();
  ellipse(0, 0, 50 - index, 45 - index);
  pop();
}

function drawTail(tx, ty) {
  push();
  translate(tx, ty);
  stroke(COLORS.fur);
  strokeWeight(12);
  noFill();
  let wag = sin(frameCount * 0.2) * 25;
  bezier(0, 0, -15, wag, -30, -wag, -45, 0);
  pop();
}

function drawCatHead(hx, hy, angle) {
  push();
  translate(hx, hy);
  rotate(angle);

  fill(COLORS.fur);
  noStroke();
  ellipse(10, 0, 55, 50); // Face

  // Ears
  fill(COLORS.ears);
  triangle(-5, -20, 5, -45, 20, -20);
  triangle(10, -20, 25, -45, 40, -20);

  // Mouth
  fill(255, 150, 150);
  arc(30, 5, 22, 22, 0, PI);

  // Eyes
  fill(COLORS.eyes);
  ellipse(15, -5, 10, 12);
  ellipse(35, -5, 10, 12);
  fill(255);
  ellipse(17, -7, 4, 4);
  ellipse(37, -7, 4, 4);

  // Whiskers
  stroke(255, 200);
  strokeWeight(1);
  line(40, 2, 60, -5);
  line(40, 5, 60, 5);
  line(40, 8, 60, 15);
  pop();
}

class MagicParticle {
  constructor(x, y, angle, speed) {
    this.pos = createVector(x, y);
    // Spread the magic out
    this.vel = p5.Vector.fromAngle(angle + random(-0.5, 0.5));
    this.vel.mult(speed * 0.2 + random(1, 4));
    this.lifespan = 255;
    this.c = color(random(COLORS.magic));
    this.size = random(2, 7);
  }

  update() {
    this.pos.add(this.vel);
    this.vel.mult(0.96);
    this.lifespan -= 4;
  }

  display() {
    noStroke();
    let alpha = map(this.lifespan, 0, 255, 0, 200);
    fill(red(this.c), green(this.c), blue(this.c), alpha);
    ellipse(this.pos.x, this.pos.y, this.size);

    // Sparkle effect
    if (random(1) > 0.95) {
      fill(255, 255, 255, alpha);
      ellipse(this.pos.x, this.pos.y, this.size * 0.6);
    }
  }

  isDead() { return this.lifespan < 0; }
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  background(0);
}
Embedded sketch:

Reflection and ideas for future work or improvements:

In the future, I might add interactions or obstacles to make it a fun, interactive game, but I want it to be unique, something that hasn't been done before.

Week 3: Reading Response

I do agree with the author that interactivity is a two-way process. It is something I interact with, and something that interacts with me. However, does it have to include all three components the author mentions—speaking, listening, and thinking? Are we really interacting with computers in the same way? They do not think the way humans do. They are trained using mathematical equations and algorithms, yet they can sometimes outperform human thinking.

In my opinion, a strong interactive system allows me to manipulate things in real time—for instance, typing on a computer right now or playing a virtual reality (VR) game. There is not necessarily a listening, speaking, or even thinking component. According to the author, this would not be considered interactive. But for me, interaction means getting a reaction to my actions.

One example the author gives is an interactive rug, which I would not consider interactive from the rug’s side because it does not interact with me in the same way I interact with it—I simply play with cars on it. However, I would consider rugs that teach prayer through step-by-step instructions, or similar designs, to be interactive because I interact with them and they interact with me.

In the future, I would add things that get the user more involved in my p5 sketches, creating a real interaction between the user and the sketch. For example, I could use ml5 and other machine learning libraries to make the sketches respond to gestures, sounds, or movements. I think a strong interactive system should give feedback to the user, so their actions actually change what happens on the screen in real time. This way, the interaction doesn’t need to include thinking or speaking like the author says—it’s still interactive because the sketch reacts to me. I also wonder, though, does a system need intention or intelligence to be truly interactive, or is responsiveness enough?

 

Week 2 – Reading Response

Watching Casey Reas’ talk influenced the way I think about randomness in interactive artwork. It made me reflect on how, in traditional art, we often try to control every aspect of every element. I typically do not plan every aspect of my own art pieces; sometimes I go with the flow, and sometimes I see where my thoughts take me. However, Reas’ discussion of “systems” rather than “images” challenges my creative process.

This raises the question of when computational art is truly random, or whether it is our own thoughts and decisions that contribute to the apparent randomness of the work. I wonder how much control I actually have over “chance” in a digital system and whether the unpredictability is generated by the algorithm or by the choices I make in setting up the system. This makes me reconsider the balance between control and randomness in creative processes.

So, I want to say that randomness is still a type of control because I think we will never achieve full randomness. This is evident in chaos theory: in a small picture, it looks chaotic, but as we zoom out to see the full picture, it becomes more evident that it is a longer form of a pattern, more complex, yet there is still some type of control.



Week 2- Generative Art

 

My concept draws inspiration from the interactive projections often seen in galleries and museums. As I mentioned in my previous post, I wanted to include an interactive art piece that allows the audience to both enjoy and engage with it.

How was it made:

I used the ML5.js body segmentation example as a source, which captures live video from the camera. I then looped through the mask, keeping the shapes inside the person mask and drawing shapes only where a person was detected. If the mask pixel was black (not a person), the loop drew random shapes instead.

I used ML5.js documentation and example for my source body segmentation used in my code.

A highlight of some code that you’re particularly proud of:

let bodySegmentation;
let video;
let segmentation;
let options = {
  maskType: "person",
};

function preload() {
  bodySegmentation = ml5.bodySegmentation("BodyPix", options);
}

function setup() {
  createCanvas(640, 480);
  // Create the video
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  bodySegmentation.detectStart(video, gotResults);
}

function draw() {
  background(255);

  // Draw the video
  image(video, 0, 0);

  if (segmentation) {
    // Load the pixels of the mask to check each pixel
    segmentation.mask.loadPixels();

    // (The original post cuts off here; the loop below is an illustrative
    // completion.) Sample the mask on a coarse grid and draw shapes
    // where a person was detected.
    for (let py = 0; py < height; py += 10) {
      for (let px = 0; px < width; px += 10) {
        let idx = 4 * (py * width + px);
        // An opaque mask pixel is assumed to mean a person was detected here
        if (segmentation.mask.pixels[idx + 3] > 0) {
          noStroke();
          fill(random(255), random(255), random(255));
          circle(px, py, 8);
        }
      }
    }
  }
}

// Store the latest segmentation result for draw() to use
function gotResults(result) {
  segmentation = result;
}

This part of the code taught me a lot about body segmentation and how interactive projections work from behind the scenes.

Embedded sketch:

Note: For it to work, please click the link and allow camera access.

Reflection and ideas for future work or improvements:

I am happy that I learned how to use ML5.js, and I look forward to using it in future projects. For this project, I would like to implement more details, using something like FaceMesh, which could affect the shape or color depending on your visible motions.

Week 1 – Self-portrait

My concept:

I added eye movement inspired by the Mona Lisa, where the eyes follow the viewer’s gaze. I chose pink as the background color since it is my favorite. I also added a touch of glitter and shimmer to the background because I grew up loving Barbie dolls, movies, and TV shows. Finally, I included my house cats, who love my family unconditionally.

“The most personal is the most creative.” — Martin Scorsese

A highlight of some code I am particularly proud of

The code I am most proud of is the eye movement:

let leftEyeX = constrain(map(mouseX, 0, width, -75, -55), -75, -55);
let leftEyeY = constrain(map(mouseY, 0, height, -8, 8), -8, 8);
let rightEyeX = constrain(map(mouseX, 0, width, 55, 75), 55, 75);
let rightEyeY = constrain(map(mouseY, 0, height, -8, 8), -8, 8);

circle(leftEyeX, leftEyeY, 38);
circle(rightEyeX, rightEyeY, 38);

How it was made:

I used basic shapes for the portrait of me, including circles, ellipses, rectangles, and arcs. I made the eye movement and the cats’ tail movements follow the mouse using mouseX and mouseY, with constraints to keep the irises within the eye area. For the background glitter, I used a loop to create a sparkling effect by randomly placing tiny circles with different alpha values. In addition, I used functions to keep everything organized and easy to edit.
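The glitter loop described above could be sketched like this (makeGlitter is a hypothetical helper, and the counts and value ranges are illustrative assumptions, not the original sketch's numbers):

```javascript
// Generate glitter specks with random positions, sizes, and alpha values.
// Kept as a pure function so the drawing part stays separate.
function makeGlitter(count, w, h) {
  const specks = [];
  for (let i = 0; i < count; i++) {
    specks.push({
      x: Math.random() * w,
      y: Math.random() * h,
      size: 1 + Math.random() * 3,
      alpha: 50 + Math.random() * 205 // varied alpha gives the twinkle
    });
  }
  return specks;
}

// In draw(), something like:
// for (const s of makeGlitter(80, width, height)) {
//   noStroke();
//   fill(255, 255, 255, s.alpha);
//   circle(s.x, s.y, s.size);
// }
```

Because the positions and alphas are re-randomized every frame, the background appears to sparkle without storing any state.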

Embedded sketch:

 

Reflection and ideas for future work implementations:

I really enjoyed the process and learned a lot from this project, such as using a for loop to add glitter and shimmer. In the future, I would like to learn how to use JavaScript computer vision and detection libraries, such as ML5.js and face-api.js, to make the portrait more interactive for the audience. I would also like to add more detail and a bit more shadow.