Week 5: Midterm Progress

Concept:

My concept is based on my favorite Ramadan TV shows, “ظاظا و جرجير” (Zaza w Gargyer) and “بكار” (Bakar). These shows teach children Islamic and Egyptian morals and values in an indirect and engaging way. Bakar is especially important because it was one of the first cartoons to represent Egyptian culture and identity.

I want to create an adventure game where players can choose one of these cartoon characters and go on a journey inspired by the spirit of Ramadan. Throughout the game, players will stop to help others, learn from their mistakes, and make choices that reflect kindness, generosity, patience, and responsibility. The goal is to combine fun gameplay with meaningful lessons, just like the original shows.

Design:

I want my game to have a cozy, relaxing feel, something you could play anytime, whether on a portable console or a home console. I want it to be as nostalgic as possible, with an aesthetic similar to Stardew Valley. I am currently working on the characters' sprite sheets.

Frightening / Challenging Aspects

The most frightening aspect for me is figuring out how to make the game simple, yet fun, while still being a meaningful learning experience. I’m unsure whether I should design it as a pixel-style platformer or a 2D narrative game.

I also want the game to accurately capture and translate the spirit of Ramadan, just like the original cartoons. Can I successfully express that spirit through a cozy, pixelated platform game that can be enjoyed by the entire family?

Risk Prevention

To reduce this risk, I would create a short sample or prototype of the game to test whether it truly fits the idea and essence I have in mind. Most importantly, I would evaluate how the characters and the setting turn out visually.

Then, I would ask people who are familiar with the original cartoons whether they can recognize the characters and understand what they represent in the pixelated version. If a high number of people are able to recognize them and connect them to the original spirit of the shows, I would feel more confident about moving forward with the project.

Week 5: Reading Assignment

Computer vision differs from human vision in that it does not fully understand the context of what it is seeing. Humans rely on memory, context, and storytelling to understand what they see. What stood out to me is that computer vision depends less on artificial intelligence and more on borders, corners, contrast, intensity, and brightness. Computer vision sees pixels in small details, while we as humans might overlook small details and focus more on the bigger picture.

A technique I recently learned about in computer vision is body pose detection, which is used in many games, such as those built for the Xbox 360 Kinect and Just Dance. The reading made me think that this is a useful tool in the field of interactive art. The author also seems optimistic about democratizing these tools for artists. Like many things in life, it is a double-edged sword: it can be used for good or bad. I am not only talking about computer vision, but also surveillance and tracking in general. These technologies can be used to harm individuals, or they can be used to improve safety within society. Context matters, and that is why humans have the privilege of detecting and evaluating ethical issues.

Week 4 – Reading Response

Reading Norman’s chapter made me realize how often I get frustrated with specific designs, especially ones that lack efficiency in everyday objects. Norman emphasizes that good design should communicate clearly, prevent errors, and provide feedback. I see this principle in some interactive media, where the design makes it easy to use without much explanation—anyone can figure it out quickly. When something is designed well, you don’t even notice it because everything feels natural and intuitive. Unlike the examples the author mentioned, such as the sink that requires pushing down on it or the door that needs a sign to explain that it is a sliding door, good design should not require instructions. If a user has to stop and think about how to use something basic, then the design has already failed.

Something that drives me crazy is the access doors on campus. I walk around carrying two access cards—one specifically for my suite and room, and another for the rest of the campus. It feels unnecessary and inefficient. On top of that, the glass doors are extremely heavy, and the sensors do not work most of the time. Instead of making entry smooth and accessible, the design creates frustration. According to Norman’s ideas, better mapping, clearer feedback, and fewer constraints could significantly improve this experience.

Week 4 – Global Mood (Data Visualization)

Concept:
My concept is based on showing the current global mood and the world's current situation. Whenever I googled "news," most of what came up evoked a negative emotion in me. So, I decided to visualize the news and categorize articles into a few different emotions or feelings.

How I created the code:

I used Guardian and NYT API keys to get access to live articles, although both APIs impose restrictions, such as rate limits on page requests. I therefore added a delay between requests so I could access a larger number of pages and news article headlines. I also used world.json for the country borders.

I then created different arrays: one for the emotional bubbles, one for the country borders, one for the CNN-style breaking news ticker, and one for tracking articles so they are not shown twice. I also added a timer that refreshes the news every 60 seconds and adjusted the speed and position of the news ticker.
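
A minimal sketch of how the 60-second refresh can be driven from draw() using millis(); the details around it are simplified, not my exact code:

// Minimal sketch of the 60-second refresh
let lastUpdate = 0;
const UPDATE_INTERVAL = 60 * 1000; // 60 seconds in milliseconds

function draw() {
  // ...draw the map, bubbles, and ticker here...

  // Pull fresh headlines once the interval has passed
  if (millis() - lastUpdate > UPDATE_INTERVAL) {
    fetchGuardianNews();
    lastUpdate = millis();
  }
}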

Then I added geographical points for a list of countries. I created bubbles for different emotions, with each emotion represented by a color. There is also a map key showing which color represents which emotion. The bubbles have visual effects like glowing and shrinking over time to make the map feel dynamic. Emotions are detected using keywords in article titles to classify sadness, anger, hope, or joy.
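
To give a sense of the keyword approach, here is a simplified sketch; the keyword lists below are illustrative placeholders, and the real lists are longer:

// Simplified keyword-based emotion detection (placeholder keyword lists)
const EMOTION_KEYWORDS = {
  sadness: ["death", "crisis", "disaster", "loss"],
  anger: ["attack", "protest", "outrage", "conflict"],
  hope: ["peace", "agreement", "recovery", "aid"],
  joy: ["celebration", "victory", "festival", "win"],
};

function detectEmotion(title) {
  let lower = title.toLowerCase();
  for (let emotion in EMOTION_KEYWORDS) {
    if (EMOTION_KEYWORDS[emotion].some((word) => lower.includes(word))) {
      return emotion;
    }
  }
  return null; // no keyword matched
}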

It initially loads the last 48 hours of news and then updates with live breaking news. I also added fallbacks: if the world map fails to load, a simple grid is shown, and if the API request fails, a CORS proxy is used to make sure the news still comes through.
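
The map fallback is conceptually simple. A minimal sketch, assuming a worldData variable that holds the parsed world.json and a drawCountryBorders() helper (both names are placeholders, not my exact code):

// If world.json never loaded, fall back to a plain grid
function drawMap() {
  if (worldData) {
    drawCountryBorders(worldData); // normal path (placeholder helper)
  } else {
    stroke(60);
    for (let gx = 0; gx < width; gx += 40) line(gx, 0, gx, height);
    for (let gy = 0; gy < height; gy += 40) line(0, gy, width, gy);
  }
}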

The code:
// Fetch 48 hours of historical news from The Guardian
function fetchHistoricalNews() {
  let twoDaysAgo = new Date();
  twoDaysAgo.setDate(twoDaysAgo.getDate() - 2);
  let fromDate = twoDaysAgo.toISOString().split("T")[0]; // Format: YYYY-MM-DD
  console.log("📅 Fetching Guardian news from " + fromDate + " to today...");

  let totalArticles = [];
  let pagesToFetch = 10; // Get 10 pages of results
  let pagesLoaded = 0;
  let failedPages = 0;

  // Once every page has either loaded or failed, process whatever we collected
  function finishIfDone() {
    if (pagesLoaded + failedPages !== pagesToFetch) return;
    if (totalArticles.length > 0) {
      console.log("📊 Total Guardian historical: " + totalArticles.length + " (" + pagesLoaded + "/" + pagesToFetch + " pages successful)");
      isShowingHistorical = true;
      sourceStatus.guardian.active = true;
      sourceStatus.guardian.articleCount = totalArticles.length;
      processArticles(totalArticles, true, "guardian"); // true = historical
    } else {
      console.error("❌ All Guardian pages failed");
      sourceStatus.guardian.active = false;
    }
  }

  // Fetch pages sequentially with delay to avoid rate limiting
  for (let pageNumber = 1; pageNumber <= pagesToFetch; pageNumber++) {
    setTimeout(() => {
      let apiURL =
        "https://content.guardianapis.com/search?section=world&show-tags=keyword&from-date=" +
        fromDate +
        "&page-size=30&page=" +
        pageNumber +
        "&show-fields=webPublicationDate&api-key=" +
        GUARDIAN_API_KEY;
      console.log("🔄 Requesting Guardian page " + pageNumber + "...");
      fetch(apiURL)
        .then((response) => {
          console.log("📡 Guardian page " + pageNumber + " response status: " + response.status);
          if (!response.ok) throw new Error("HTTP " + response.status);
          return response.json();
        })
        .then((data) => {
          if (data && data.response && data.response.results) {
            totalArticles = totalArticles.concat(data.response.results);
            pagesLoaded++;
            console.log("✅ Page " + pageNumber + " loaded: " + data.response.results.length + " articles");
          } else {
            console.warn("⚠️ Guardian page " + pageNumber + " returned empty results");
            failedPages++;
          }
          finishIfDone();
        })
        .catch((error) => {
          console.error("❌ Guardian page " + pageNumber + " failed:", error.message);
          failedPages++;
          finishIfDone();
        });
    }, pageNumber * PAGE_REQUEST_DELAY); // Stagger requests using the delay constant
  }
}

// Fetch the latest breaking news from The Guardian
function fetchGuardianNews() {
  console.log("📰 [" + getCurrentTime() + "] Fetching Guardian news...");
  let apiURL =
    "https://content.guardianapis.com/search?section=world&show-tags=keyword&page-size=25&show-fields=webPublicationDate&api-key=" +
    GUARDIAN_API_KEY;
  fetch(apiURL)
    .then((response) => {
      if (!response.ok) throw new Error("HTTP " + response.status);
      return response.json();
    })
    .then((data) => {
      if (data && data.response && data.response.results) {
        console.log("✅ [" + getCurrentTime() + "] Guardian: " + data.response.results.length + " articles");
        sourceStatus.guardian.active = true;
        sourceStatus.guardian.lastUpdate = new Date();
        sourceStatus.guardian.articleCount = data.response.results.length;
        isShowingHistorical = false; // We're showing breaking news now
        processArticles(data.response.results, false, "guardian"); // false = breaking news
      }
    })
    .catch((error) => {
      console.log("⚠️ Guardian direct failed, trying CORS proxy...");
      tryGuardianWithProxy();
    });
}

// Backup method: Try Guardian API through CORS proxy
function tryGuardianWithProxy() {
  let apiURL =
    "https://content.guardianapis.com/search?section=world&show-tags=keyword&page-size=25&show-fields=webPublicationDate&api-key=" +
    GUARDIAN_API_KEY;
  let proxiedURL = "https://api.allorigins.win/raw?url=" + encodeURIComponent(apiURL);
  fetch(proxiedURL)
    .then((response) => {
      if (!response.ok) throw new Error("HTTP " + response.status);
      return response.json();
    })
    .then((data) => {
      if (data && data.response && data.response.results) {
        console.log("✅ [" + getCurrentTime() + "] Guardian via proxy: " + data.response.results.length + " articles");
        sourceStatus.guardian.active = true;
        sourceStatus.guardian.lastUpdate = new Date();
        sourceStatus.guardian.articleCount = data.response.results.length;
        isShowingHistorical = false;
        processArticles(data.response.results, false, "guardian");
      }
    })
    .catch((error) => {
      console.error("❌ [" + getCurrentTime() + "] Guardian completely failed:", error.message);
      sourceStatus.guardian.active = false;
    });
}

Reflection and ideas for future work or improvements:

Reflection:

Global Mood taught me a lot about combining live data, visualization, and emotion analysis. Seeing emotions vary across regions in real time was fascinating, and effects like glowing and shrinking bubbles made the map feel dynamic. It also taught me how to use APIs and JSON files in p5.js.

Future Work and Improvements:

I would love to present it as an installation to show people the current global situation. For future improvements, I would incorporate Natural Language Processing to classify emotions more accurately, rather than relying solely on specific keywords. I also wish I had greater access to open-source news APIs to expand the dataset.

Week 3 – OOP Assignment

My concept:

I was inspired by my house cats, who always bring joy and happiness when I play with them or when they decide to sit on my lap. I wanted to include them, but wasn’t sure how, so I decided to base it on Nyan Cat, which I used to play when I was younger, when I had to wait for my friends to arrive, didn’t have access to the internet, or just wanted to give my brain a break.

Nyan Cat (Music Video 2011) - IMDb

Code:

I prompted Gemini to add the particles; it used blendMode(ADD), which is what makes the magic particles look like they are made of light.

let segments = 8;
let segLength = 12;
let x = [], y = [];
let fireParticles = [];

const COLORS = {
  fur: [255, 235, 245],
  ears: [255, 180, 200],
  eyes: [100, 200, 255],
  magic: ['#FFD1DC', '#FFECB3', '#B2E2F2', '#D1FFD7', '#E0BBE4']
};

function setup() {
  createCanvas(windowWidth, windowHeight);
  // Initialize segment positions
  for (let i = 0; i < segments; i++) {
    x[i] = mouseX;
    y[i] = mouseY;
  }
  background(0);
  angleMode(RADIANS);
}

function draw() {
  // Semi-transparent black rect creates the generative trail
  blendMode(BLEND);
  fill(0, 0, 0, 30);
  rect(0, 0, width, height);

  let mouseSpeed = dist(mouseX, mouseY, pmouseX, pmouseY);

  // Follow logic
  dragSegment(0, mouseX, mouseY);
  for (let i = 0; i < x.length - 1; i++) {
    dragSegment(i + 1, x[i], y[i]);
  }

  let headAngle = atan2(mouseY - y[1], mouseX - x[1]);
  let emissionRate = map(mouseSpeed, 0, 50, 1, 8);

  if (mouseIsPressed || mouseSpeed > 2) {
    for (let i = 0; i < emissionRate; i++) {
      fireParticles.push(new MagicParticle(x[0], y[0], headAngle, mouseSpeed));
    }
  }

  // Tail
  drawTail(x[segments - 1], y[segments - 1]);

  // Body
  for (let i = x.length - 1; i > 0; i--) {
    drawCatBody(x[i], y[i], i);
  }

  // Glowing particles
  // We use ADD blend mode to make them pop on black
  blendMode(ADD);
  for (let i = fireParticles.length - 1; i >= 0; i--) {
    fireParticles[i].update();
    fireParticles[i].display();
    if (fireParticles[i].isDead()) fireParticles.splice(i, 1);
  }
  blendMode(BLEND);

  // Head always on top
  drawCatHead(x[0], y[0], headAngle);
}

function dragSegment(i, xin, yin) {
  let dx = xin - x[i];
  let dy = yin - y[i];
  let angle = atan2(dy, dx);
  x[i] = xin - cos(angle) * segLength;
  y[i] = yin - sin(angle) * segLength;
}

function drawCatBody(posx, posy, index) {
  push();
  translate(posx, posy);
  fill(COLORS.fur);
  noStroke();
  ellipse(0, 0, 50 - index, 45 - index);
  pop();
}

function drawTail(tx, ty) {
  push();
  translate(tx, ty);
  stroke(COLORS.fur);
  strokeWeight(12);
  noFill();
  let wag = sin(frameCount * 0.2) * 25;
  bezier(0, 0, -15, wag, -30, -wag, -45, 0);
  pop();
}

function drawCatHead(hx, hy, angle) {
  push();
  translate(hx, hy);
  rotate(angle);

  fill(COLORS.fur);
  noStroke();
  ellipse(10, 0, 55, 50); // Face

  // Ears
  fill(COLORS.ears);
  triangle(-5, -20, 5, -45, 20, -20);
  triangle(10, -20, 25, -45, 40, -20);

  // Mouth
  fill(255, 150, 150);
  arc(30, 5, 22, 22, 0, PI);

  // Eyes
  fill(COLORS.eyes);
  ellipse(15, -5, 10, 12);
  ellipse(35, -5, 10, 12);
  fill(255);
  ellipse(17, -7, 4, 4);
  ellipse(37, -7, 4, 4);

  // Whiskers
  stroke(255, 200);
  strokeWeight(1);
  line(40, 2, 60, -5);
  line(40, 5, 60, 5);
  line(40, 8, 60, 15);
  pop();
}

class MagicParticle {
  constructor(x, y, angle, speed) {
    this.pos = createVector(x, y);
    // Spread the magic out
    this.vel = p5.Vector.fromAngle(angle + random(-0.5, 0.5));
    this.vel.mult(speed * 0.2 + random(1, 4));
    this.lifespan = 255;
    this.c = color(random(COLORS.magic));
    this.size = random(2, 7);
  }

  update() {
    this.pos.add(this.vel);
    this.vel.mult(0.96);
    this.lifespan -= 4;
  }

  display() {
    noStroke();
    let alpha = map(this.lifespan, 0, 255, 0, 200);
    fill(red(this.c), green(this.c), blue(this.c), alpha);
    ellipse(this.pos.x, this.pos.y, this.size);

    // Sparkle effect
    if (random(1) > 0.95) {
      fill(255, 255, 255, alpha);
      ellipse(this.pos.x, this.pos.y, this.size * 0.6);
    }
  }

  isDead() { return this.lifespan < 0; }
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  background(0);
}
Embedded sketch:

Reflection and ideas for future work or improvements:

In the future, I might add interactions or obstacles to make it feel like a fun, interactive game, but I want it to be unique and not something that has already been done.

Week 3: Reading Response

I do agree with the author that interactivity is a two-way process. It is something I interact with, and something that interacts with me. However, does it have to include all three components the author mentions—speaking, listening, and thinking? Are we really interacting with computers in the same way? They do not think the way humans do. They are trained using mathematical equations and algorithms, yet they can sometimes outperform human thinking.

In my opinion, a strong interactive system allows me to manipulate things in real time—for instance, typing on a computer right now or playing a virtual reality (VR) game. There is not necessarily a listening, speaking, or even thinking component. According to the author, this would not be considered interactive. But for me, interaction means getting a reaction to my actions.

One example the author gives is an interactive rug, which I would not consider interactive from the rug’s side because it does not interact with me in the same way I interact with it—I simply play with cars on it. However, I would consider rugs that teach prayer through step-by-step instructions, or similar designs, to be interactive because I interact with them and they interact with me.

In the future, I would add things that get the user more involved in my p5 sketches, creating a real interaction between the user and the sketch. For example, I could use ml5 and other machine learning libraries to make the sketches respond to gestures, sounds, or movements. I think a strong interactive system should give feedback to the user, so their actions actually change what happens on the screen in real time. This way, the interaction doesn’t need to include thinking or speaking like the author says—it’s still interactive because the sketch reacts to me. I also wonder, though, does a system need intention or intelligence to be truly interactive, or is responsiveness enough?

 

Week 2 – Reading Response

Watching Casey Reas’ talk influenced the way I think about randomness in interactive artwork. It made me reflect on how, in traditional art, we often try to control every aspect of every element. I typically do not plan every aspect of my own art pieces; sometimes I go with the flow, and sometimes I see where my thoughts take me. However, Reas’ discussion of “systems” rather than “images” challenges my creative process.

This raises the question of when computational art is truly random, or whether it is our own thoughts and decisions that contribute to the apparent randomness of the work. I wonder how much control I actually have over “chance” in a digital system and whether the unpredictability is generated by the algorithm or by the choices I make in setting up the system. This makes me reconsider the balance between control and randomness in creative processes.

So, I want to say that randomness is still a type of control, because I think we will never achieve full randomness. This is evident in chaos theory: up close, a small piece of the picture looks chaotic, but as we zoom out to see the full picture, it becomes more evident that it is part of a longer, more complex pattern, and there is still some type of control.



Week 2 – Generative Art

My concept draws inspiration from the interactive projections often seen in galleries and museums. As I mentioned in my previous post, I wanted to include an interactive art piece that allows the audience to both enjoy and engage with it.

How it was made:

I used the ML5.js body segmentation example as a starting point; it captures live video from the camera. I then looped through the mask pixels and drew shapes only where a person was detected; where a mask pixel was black (no person), random shapes were drawn instead.

The body segmentation setup in my code comes from the ML5.js documentation and examples.

A highlight of some code that I'm particularly proud of:

let bodySegmentation;
let video;
let segmentation;
let options = {
  maskType: "person",
};

function preload() {
  bodySegmentation = ml5.bodySegmentation("BodyPix", options);
}

function setup() {
  createCanvas(640, 480);
  // Create the video
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  bodySegmentation.detectStart(video, gotResults);
}

function draw() {
  background(255);

  // Draw the video
  image(video, 0, 0);

  if (segmentation) {
    // Load the pixels of the mask to check each pixel
    segmentation.mask.loadPixels();
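
    // The snippet cuts off above; a minimal sketch of how the loop might
    // continue, assuming the standard BodyPix mask layout (RGBA pixels,
    // black where no person is detected) and the gotResults callback from
    // the ml5.js example. This is a simplified version of the idea, not
    // the exact project code:
    for (let py = 0; py < segmentation.mask.height; py += 10) {
      for (let px = 0; px < segmentation.mask.width; px += 10) {
        let index = (px + py * segmentation.mask.width) * 4;
        let isPerson = segmentation.mask.pixels[index] > 0;
        noStroke();
        if (isPerson) {
          // Shapes inside the person mask
          fill(random(255), random(255), random(255));
          ellipse(px, py, 8, 8);
        } else {
          // Random shapes where no person was detected
          fill(0, 50);
          rect(px, py, random(4, 8), random(4, 8));
        }
      }
    }
  }
}

// Callback from detectStart, as in the ml5.js example
function gotResults(result) {
  segmentation = result;
}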

This part of the code taught me a lot about body segmentation and how interactive projections work from behind the scenes.

Embedded sketch:

Note: For it to work, please click the link and allow camera access.

Reflection and ideas for future work or improvements:

I am happy that I learned how to use ML5.js, and I look forward to using it in future projects. For this project, I would like to implement more details using something like FaceMesh, which could change the shapes or colors depending on visible facial movements.

Week 1 – Self-portrait

My concept:

I added eye movement inspired by the Mona Lisa, where the eyes follow the viewer’s gaze. I chose pink as the background color since it is my favorite. I also added a touch of glitter and shimmer to the background because I grew up loving Barbie dolls, movies, and TV shows. Finally, I included my house cats, who love my family unconditionally.

“The most personal is the most creative.” — Martin Scorsese

A highlight of some code I am particularly proud of:

The code I am most proud of is the eye movement:

let leftEyeX = constrain(map(mouseX, 0, width, -75, -55), -75, -55);
let leftEyeY = constrain(map(mouseY, 0, height, -8, 8), -8, 8);
let rightEyeX = constrain(map(mouseX, 0, width, 55, 75), 55, 75);
let rightEyeY = constrain(map(mouseY, 0, height, -8, 8), -8, 8);

circle(leftEyeX, leftEyeY, 38);
circle(rightEyeX, rightEyeY, 38);

How it was made:

I used basic shapes for my portrait, including circles, ellipses, rectangles, and arcs. I made the eye movement and the cats' tail movements follow the mouse using mouseX and mouseY, with constraints to keep the irises within the eye area. For the background glitter, I used a loop to create a sparkling effect by randomly placing tiny circles with different alpha values. I also used functions to keep everything organized and easy to edit.
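
The glitter idea takes only a few lines; here is a minimal sketch (the particle count and sizes are placeholders, not my exact values):

// Background glitter: tiny circles with random positions and alpha
function drawGlitter() {
  noStroke();
  for (let i = 0; i < 80; i++) {
    fill(255, 255, 255, random(50, 200)); // varying alpha creates the sparkle
    circle(random(width), random(height), random(1, 3));
  }
}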

Embedded sketch:

Reflection and ideas for future work or improvements:

I really enjoyed the process and learned a lot from this project, such as using a for loop to add glitter and shimmer. In the future, I would like to learn how to utilize JavaScript computer vision and detection libraries, such as ML5.js and face-api.js, to make the portrait more interactive for the audience. I would also like to make it more detailed and add a bit more shadows.