Midterm

Inspiration

For this project, I want to create an interactive digital art piece that explores the true scale of reality by gradually zooming from large, natural environments down to microscopic and atomic levels.

Visual Elements

Flower Screen

  • Add a tree, birds, more flowers, a grass field, and the sun for a fuller composition.

  • Include animations such as swaying grass, apples falling from the tree, and birds flying across the screen to make it feel alive.

Leaf Screen

  • Add details like insects, the stem, and a more zoomed-in view of the leaf.

  • Animate insects crawling across the surface to bring a sense of realism.

Cell Screen

  • Show multiple plant cells floating in a jelly-like substance.

  • Design them to resemble real plant cells, with more detail and fluid animation.

Atom Screen

  • Illustrate atoms with orbiting ellipses that cross over each other.

  • Show the nucleus clearly, with protons and neutrons on display.

Interaction: Zoom Functionality

  • Replace the two-finger pinch with a two-hand gesture for zooming, making it more intuitive and reducing accidental zooms.

  • Add smooth zoom animations between levels instead of abrupt page changes, to create a more immersive transition.

Sound Design

  • Integrate sounds that complement each environment:

    • Flower screen: natural ambient sounds (e.g., wind, birds).

    • Leaf screen: subtle insect sounds.

    • Cell screen: soft “jelly-like” sounds.

    • Atom screen: buzzing or electrical sounds.

  • Add a “zoom-in” sound effect to enhance transitions.

    (All sounds are sourced from Pixabay.com.)

Machine Learning

To enhance user interactivity, I incorporated machine learning using the ml5 library, which integrates well with p5.js and is relatively simple to implement. I set two thresholds, “close” and “far”, based on the distance of the user’s hands. These thresholds determine when the zooming action is triggered, making the interaction feel more natural and responsive.
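The trigger logic can be reduced to a small helper. Below is a simplified, p5-free sketch of the idea; the threshold values and the hysteresis behaviour are illustrative assumptions, not my exact midterm code. ml5's handPose supplies keypoints per detected hand, and here the wrist keypoint of each hand is assumed to be already extracted.

```javascript
const CLOSE_THRESHOLD = 150; // px: hands nearer than this count as "close" (assumed value)
const FAR_THRESHOLD = 400;   // px: hands farther than this count as "far" (assumed value)

function handDistance(leftWrist, rightWrist) {
  return Math.hypot(rightWrist.x - leftWrist.x, rightWrist.y - leftWrist.y);
}

// Fires "zoom" only on the close -> far crossing, so holding the hands
// apart does not retrigger the transition on every frame.
function makeZoomDetector() {
  let wasClose = false;
  return function detect(leftWrist, rightWrist) {
    const d = handDistance(leftWrist, rightWrist);
    if (d < CLOSE_THRESHOLD) { wasClose = true; return "idle"; }
    if (d > FAR_THRESHOLD && wasClose) { wasClose = false; return "zoom"; }
    return "idle";
  };
}
```

Using two thresholds instead of one gives a dead zone between "close" and "far", which is what makes the gesture feel deliberate rather than twitchy.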

Extra details and screenshots

I added a home page to show users the hand gestures and extra button functionalities.


Challenges

Coming up with creative ideas for this project was challenging, and implementing the zooming feature was especially difficult since I had never attempted it before. Getting it to work smoothly took a lot of trial and error.

These p5 sketches were helpful – https://editor.p5js.org/mimimimimi/sketches/SOkckqY_r and https://editor.p5js.org/Luxapodular/sketches/rk__bPdcm – but so was simply experimenting with the ease-in and ease-out values to make the zoom feel as natural as possible.

// ===== TRANSITIONS =====
// initiate zoom transition between scenes
function startZoomTransition() {
  isTransitioning = true;        // flag to indicate transition is active
  transitionProgress = 0;        // reset
  
  // Play zoom sound for every transition at 50% volume (if not muted)
  if (zoomSound && !isMuted) {
    zoomSound.setVolume(0.5);
    zoomSound.play();
  }
}

// update for each frame
function updateTransition() {
  if (!isTransitioning) return;  
  
  transitionProgress += 0.03;    // increment by 3% each frame 

  //check if 100% (1)
  if (transitionProgress >= 1) {
    isTransitioning = false;     // stop transition
    transitionProgress = 0;      // reset
    currentPage = currentPage === SCENES.length - 1 ? 1 : currentPage + 1;
    playSceneSound(); // Play sound for the new scene
  }
}

// applies visual zoom effect during transitions
function applyCameraTransform() {
  // create smooth easing curve: slow start, fast middle, slow end
  const easeT = transitionProgress < 0.5
    ? 4 * transitionProgress ** 3      // first half: cubic ease-in
    : 1 - (-2 * transitionProgress + 2) ** 3 / 2;  // Second half: cubic ease-out
  
  // calculate zoom level: smoothly interpolate from 1x to 100x zoom
  const zoom = lerp(1, 100, easeT);
  
  // get the target point to zoom into for current scene
  const [x, y] = SCENES[currentPage].zoomTarget;
  
  // apply camera transformation:
  translate(x, y);     // move to zoom target point
  scale(zoom);         // apply zoom scaling
  translate(-x, -y);   // move back to keep target centered
}
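One way to see why this keeps the target centred: composing the three calls maps a scene point p to target + zoom * (p − target), so the target itself never moves. A tiny p5-free check of that arithmetic (the helper name is mine, for illustration):

```javascript
// translate(x, y); scale(z); translate(-x, -y) composes to:
// screenPoint = target + z * (p - target), so the target is a fixed point
// of the transform and everything else rushes away from it as z grows.
function applyZoomTo(px, py, targetX, targetY, zoom) {
  return [targetX + zoom * (px - targetX), targetY + zoom * (py - targetY)];
}
```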

Final code – https://editor.p5js.org/kk4827/sketches/9CleTb6y1

Week 5 – reading

  • What are some of the ways that computer vision differs from human vision?

As humans, we can look at something and classify it no matter the angle or lighting. A computer, on the other hand, just sees pixels of certain colours; what we see as a reflection of real life is, to it, just data. This is where machine learning gets involved: through hundreds of labelled images, with colours and patterns identified, the computer learns to tell what it is looking at. That is exactly how the ML model in my midterm is able to detect which hand is which and which fingers are which.

As humans, we are told what is what by our environment, and this comes up with colour-blind people in particular. What we see as green, someone else may not see the same way. So in that sense, we and computers are similar.

  • What are some techniques we can use to help the computer see / track what we’re interested in?

Frame differencing – detects motion by comparing each pixel in one video frame with the next. The difference in brightness indicates movement; this requires stable lighting and a stationary camera.

Background subtraction – detects presence by comparing the current frame against a stored image of the empty scene. Areas that differ significantly likely represent objects of interest, but this is sensitive to lighting changes.

Brightness thresholding – distinguishes objects based purely on luminosity, comparing each pixel’s brightness to a threshold value. This works best when you can control illumination, for example through backlighting.
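Frame differencing in particular reduces to very little code. Here is a p5-free sketch over plain grayscale arrays (in p5 these values would come from something like `capture.pixels`; the function name is my own):

```javascript
// Frame differencing over grayscale arrays (values 0-255): count the
// pixels whose brightness changed by more than `threshold` since the
// previous frame. A high count means something in the scene moved.
function motionAmount(prevFrame, currFrame, threshold) {
  let changed = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > threshold) changed++;
  }
  return changed;
}
```

The threshold is what separates real motion from camera noise, which is why the technique needs stable lighting.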

By combining these techniques, we can create more complex art representations such as contact interactions (triggering events when a silhouette touches a graphic object), overlap interactions (measuring shared pixels between silhouette and virtual elements), or reflection interactions (computing angles when objects strike the silhouette). Warren’s research shows that once you’ve identified body pixels, implementing sophisticated interactions requires “little more than counting pixels” – making computer vision accessible for creating responsive installations, games, and performance systems where participants interact with virtual creatures or control visual elements through gesture and movement.
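The “counting pixels” point can be made concrete. A minimal sketch of an overlap interaction (my own assumed representation: the silhouette and the virtual object are both boolean masks flattened over the same canvas):

```javascript
// Overlap interaction as pixel counting: both masks cover the same
// canvas, one boolean per pixel. The count of shared pixels measures
// how much the participant's silhouette covers the virtual element.
function overlapPixels(silhouette, objectMask) {
  let shared = 0;
  for (let i = 0; i < silhouette.length; i++) {
    if (silhouette[i] && objectMask[i]) shared++;
  }
  return shared;
}
// A contact interaction would then trigger whenever overlapPixels(...) > 0.
```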

  • How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

The surveillance aspect is unavoidable: computer vision in art exists in the same technological ecosystem as security systems and facial recognition.

Different artists engage with this differently. Krueger’s Videoplace uses vision technology playfully, where people willingly participate. But Lozano-Hemmer’s Standards and Double Standards explicitly creates “a condition of pure surveillance” using symbols of authority, and that visibility of surveillance is the point. Rokeby’s Sorting Daemon confronts automated profiling by making visible the disturbing implications of computer vision used for racial categorisation, using surveillance tools to critique surveillance itself. Jeremijenko’s Suicide Box is honestly very creepy to me; recording deaths and displaying them really raises questions about who has the right to see those sorts of moments.

This topic raises questions about consent and about where users’ data is stored. If I were to interact with some art, should I assume that it won’t store any input I am feeding it?

Week 5 – midterm progress

So for my midterm, I want to create some form of art using machine learning. I want a visualisation of biology: showing a flower and zooming in repeatedly, down to the atomic level. I want to use the ML model to detect the pinching motion, which would trigger the page change.

index.html -> leaf.html -> cell.html -> atom.html

Firstly, I wanted to focus on the ML model and have the motion be detected. I used the ‘Hand Pose Detection with ml5.js’ video from The Coding Train as a foundation. I changed the parameters to detect just the right hand’s index finger and thumb.

Currently, I have incredibly basic images for the 4 pages and I will work on making them more aesthetic. The last page uses OOP principles for the several atoms and their spinning electrons.

I also want to add some sort of noise to the first 3 images to represent what environment you could find them in. I am also thinking of making the transition between them represent some sort of medium between the 2 images.

 

class Atom {
  constructor(x, y, rotationSpeed = 0.02, innerOrbitRadius = 40, outerOrbitRadius = 60) {
    this.x = x;
    this.y = y;
    this.rotationSpeed = rotationSpeed;
    this.innerOrbitRadius = innerOrbitRadius;
    this.outerOrbitRadius = outerOrbitRadius;
    this.rotation = 0;
    this.nucleusSize = 20;
    this.electronSize = 8;
    this.outerElectronSize = 6;
  }
}

 

Week 5 – reading

  • What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?

I wish Norman’s analysis went deeper into the economic incentives behind poor design. While he touches on cost-cutting measures, like companies using “the least expensive sound device” that can only beep, or single lights with confusing flash patterns instead of clear displays, he doesn’t fully address whether some companies intentionally create poor user experiences.

I suspect many businesses actually profit from user confusion. Think about subscription services with deliberately maze-like cancellation processes, or software that makes basic functions require premium upgrades (dark UI/UX patterns). Norman notes that when people struggle with technology, “the people are blamed for not understanding the machine” – but this blame-shifting can be profitable through tech support fees, extended warranties, or forcing users toward expensive premium versions. Apple is notorious for this.

Norman hints at this when discussing cost reduction forcing designers to use inadequate feedback systems, but I wish he’d been more direct about the perverse economic incentives. Sometimes bad design isn’t just oversight – it’s strategy. Companies know users will adapt and memorise workarounds rather than switch products, especially when switching costs are high.

  • How can you apply some of the author’s principles of design to interactive media?

In my interactive media and web design work, I constantly rely on Norman’s concept of discoverability. Having built apps and websites, I’ve learnt there are established conventions most users already understand – the hamburger menu, colour changes indicating clickable text, or standard navigation patterns.

These conventions work as what Norman calls “signifiers”, they provide clues about what actions are possible and where to perform them. When I use a familiar icon or follow expected layout patterns, I’m leveraging users’ existing mental models rather than forcing them to learn arbitrary new systems in the hopes of a simpler smoother user experience.

Norman’s principle of natural mapping is also significant. Just as he advocates arranging light switches to match the pattern of the lights they control, I arrange interface elements to match users’ spatial expectations. Navigation goes where people expect it, buttons look like buttons with proper visual hierarchy, and interactive elements behave as they should.

The key insight I take from Norman is that good interactive design should feel invisible – users shouldn’t have to think about how to use it. I should be able to show my work to someone who hardly interacts with websites and be confident that they will be able to navigate it without additional instructions. If I have to add explanatory text for something simple, it is a good indicator that I didn’t design my solution well.

Week 4 – Persian generative text

Inspiration

For this week, I was really struggling with what to do, so I prompted Claude AI and decided to go with the generative text option. I knew I wanted to do something with a different language, and I chose my mother tongue, Farsi. Persian poetry is famous, so I wanted to choose a quote with a theme and replicate that theme in p5.js with shapes and colours.

My name in Farsi, ‘خاطره’, means memory, remembrance, longing – something along those lines. It was difficult to find old poetry with that word as it is more modern, but I liked the theme of memories and love, so I went with the following line from Rumi’s Masnavi (مثنوی معنوی).

Rumi was a 13th-century poet known for his work on mystical themes in Islam (Sufism). The majority of his work is in Persian and very famous.

Idea

This was the quote I went with.

let poem = {
    persian: "بشنو از نی چون حکایت می‌کند، از جدایی‌ها شکایت می‌کند",
    transliteration: "Beshno az ney chon hekayat mikonad, Az jodayiha shekayat mikonad",
    english: "Listen to the reed flute, how it tells a tale, complaining of separations",
};
I wanted the words to be floating around, referring to ‘separation’, and I wanted the colours and shapes – pink, hearts – to refer to love.
I also wanted the user to be able to see the quote altogether so that they can see the final image.

Heart

There is no heart shape in p5.js, so I looked at different examples using math functions and came across this sketch, where the artist uses sin and cos functions to draw connected hearts: https://editor.p5js.org/marlontenorio/sketches/M_BGUpfKL

I edited it to suit the image in my mind. I made it a function with the size parameter because I wanted the heart to have some sort of pounding effect.

drawHeart(size) {
        beginShape();
        let s = size * 0.5;
        for (let t = 0; t < TWO_PI; t += 0.1) {
            let x = s * (16 * pow(sin(t), 3)) * 0.03;
            let y = -s * (13 * cos(t) - 5 * cos(2*t) - 2 * cos(3*t) - cos(4*t)) * 0.03;
            vertex(x, y);
        }
        endShape(CLOSE);
    }

Interactivity

Click Mouse

I wanted the user to be able to add words or hearts when the mouse is pressed.

function mousePressed() {
    hearts.push(new Heart());
    texts.push(new Text(mouseX, mouseY));
    
    // Limit elements
    if (hearts.length > 15) hearts.shift();
    if (texts.length > 15) texts.shift();
}

I added a limit so the program won’t get too complex. The words added are drawn from the Persian, the transliteration, or the English version, and appear at the mouse position. The hearts are simply added at random positions on the canvas.

SpaceBar – pressed

I wanted a way for users to add a bunch of words to the screen at once, so I added the keyPressed function for the spacebar.

function keyPressed() {
    if (key === ' ') {
        for (let i = 0; i < 3; i++) texts.push(new Text());
    }
}

 

A / a – pressed

I wanted a way for the user to see the 3 versions of the poem at once, so I used the ‘A’ key as that trigger.

I have a Text class that chooses random words from the 3 versions and floats them around, similar to the hearts. I also wanted these words to have a lifespan and slowly disappear into the background, referencing the theme of separation and memory.

class Text {
    constructor(x = random(100, width-100), y = random(100, height-100)) {
        this.x = x;
        this.y = y;
        this.text = this.getRandomText();
        this.size = random(14, 20);
        this.life = 0;
        this.maxLife = random(400, 800);
        this.dx = random(-0.3, 0.3);
        this.dy = random(-0.2, 0.2);
    }

Here is the display function, which uses alpha to dictate the opacity. Depending on the life value, the opacity of the text changes.

display() {
    let alpha = this.getAlpha();
    if (alpha <= 0) return;
    
    push();
    textAlign(CENTER, CENTER);
    textSize(this.size);
    fill(340, 50, 90, alpha);
    noStroke();
    text(this.text, this.x, this.y);
    pop();
}

// calculate the opacity value for fading 
getAlpha() {
    let ratio = this.life / this.maxLife; // value between 0 and 1
    // fade in at the first 20% of life
    if (ratio < 0.2) return map(ratio, 0, 0.2, 0, 0.8);
    // fade out at the last 20% of life
    if (ratio > 0.8) return map(ratio, 0.8, 1, 0.8, 0);
    // stay mostly visible in the middle
    return 0.8;
}

Complete poem texts

Whenever ‘a’ is pressed, I add the whole lines of poetry to the texts array so that they float around exactly like the single words. I also give all three the same life parameters so that they disappear at the same time.

function showCompletePoem() {
    texts = [];
    let centerX = width / 2;
    let startY = height / 2 - 60;
    
    // add complete poem texts
    let persian = new Text(centerX, startY);
    persian.text = poem.persian;
    persian.size = 24;
    persian.maxLife = 1200;
    texts.push(persian);
    
    let transliteration = new Text(centerX, startY + 50);
    transliteration.text = poem.transliteration;
    transliteration.size = 18;
    transliteration.maxLife = 1200;
    texts.push(transliteration);
    
    let english = new Text(centerX, startY + 100);
    english.text = poem.english;
    english.size = 20;
    english.maxLife = 1200;
    texts.push(english);
}

It is also important to remove ‘old’ texts from the array once their life is ‘complete’. This gets checked in the draw function.

// remove old texts
texts = texts.filter(text => text.life < text.maxLife);

Next time

Next time, I would definitely want to add some flute sounds to really tie the piece together. I should also add some message indicating which key or mouse action triggers each effect.

Outcome

Week 3 – reading assignment

  • What do you consider to be the characteristics of a strongly interactive system?

The author defines interactivity as a cyclic process where two actors listen, think, and speak, with the interaction’s quality depending on these three elements. While I initially agreed with this perspective, I began to question it as I read further.

I believe that while some forms of interactivity are more creative than others, this doesn’t invalidate simpler or “weaker” interactive experiences. For me, a system is interactive if my physical actions cause its output to change. For example, a traditional book isn’t interactive because flipping the pages doesn’t alter the story itself.

However, the perception of interactivity is highly subjective. A beginner in an IM course might find a simple mouse click that changes the canvas in p5.js to be incredibly interactive. In contrast, a more advanced student, familiar with more unique methods, might see it as basic. This relativity reminds me of the broader philosophical question, “What is Art?”

  • What ideas do you have for improving the degree of user interaction in your p5 sketches?

As someone with experience in p5.js, I want to explore more complex forms of interactivity. However, I think it’s important to balance visual complexity with interactive complexity. If a piece is already visually intricate, adding a highly complex interaction might overwhelm the user.

Therefore, I believe advanced interactive methods would be most effective in my more visually simple pieces. For these, using the computer’s camera or microphone could be a unique way to increase engagement. I previously used ML to detect faces in another class, and I am interested in incorporating that type of interactivity into my future work.

I could even try simpler methods such as drag bars.

Assignment 3 – Khatira

Inspiration

So I absolutely love space; I just think the science, visuals, and grandness of it are so crazy, something we will never be able to interpret properly. From music to visuals, I have always wanted to create some art related to space, so I decided to create a simulation of our solar system in the 2D space of p5. However, I didn’t want circular rings; I wanted more elliptical rings to give it that 3D effect.

Design

This is just a quick mockup of what I want the sketch to look like. I will also have moons for some of the planets, like Earth, Saturn, and Jupiter.
I need classes for the planets, the moons, and the stars of the background.
For all 3 of these ‘objects’ I will make a parent class called SpaceBody, which will host some basic attributes such as position (x, y) and size.
I will also hold all these objects in a SolarSystem class for encapsulation.

Background

I wanted a deep-purple background with stars that grow a little then shrink again, to give the illusion of glowing stars. I can do that by drawing the stars randomly on the canvas and changing the alpha value to change their ‘brightness’.

My ‘stars’ will have 4 attributes: x, y, size, and twinkle. These will be JS objects with x and y random and the size random between 1 and 3. Twinkle is the variable that alters the alpha value. (The quality of the image isn’t the best, apologies.)
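One way to drive that twinkle (an assumption about the implementation, not my exact sketch) is to treat each star’s twinkle attribute as a phase, advance it a little every frame, and map sin() onto an alpha range:

```javascript
// Map a star's twinkle phase onto an alpha (brightness) value.
// sin(phase) swings between -1 and 1, so alpha swings smoothly
// between minAlpha and maxAlpha as the phase advances each frame.
function twinkleAlpha(phase, minAlpha = 50, maxAlpha = 255) {
  const t = (Math.sin(phase) + 1) / 2; // normalise to 0..1
  return minAlpha + t * (maxAlpha - minAlpha);
}
```

Giving each star a random starting phase keeps them from all pulsing in sync.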

SpaceBody

This will be my parent class and will host the basic attributes mentioned previously. I will also have a draw function here that will simply display circles. (Only Saturn will be a circle PLUS the ring).

class SpaceBody {
    constructor(x, y, size, color) {
        this.pos = createVector(x, y);
        this.size = size;
        this.color = color;
    }
    
    // display method - simple circle
    display() {
        push();
        translate(this.pos.x, this.pos.y);
        fill(this.color);
        noStroke();
        circle(0, 0, this.size);
        pop();
    }
}

Sun

Here I will have the Sun at the very centre. For the interactive element of this project, the mouse position will set the Sun’s location, which is essentially the centre of the solar system. I wanted the Sun to stand out compared to the other planets, so I will draw a few less-opaque circles around it to give that ‘glow’ effect.

update() {
        // follow mouse - interactivity
        if (mouseX > 0 && mouseX < width && mouseY > 0 && mouseY < height) {
            this.pos.set(mouseX, mouseY);
        }
    }
    
    display() {
        push();
        translate(this.pos.x, this.pos.y);

        // the soft glow around the sun
        for (let r = this.size * 1.5; r > this.size; r -= 5) {
            let alpha = map(r, this.size, this.size * 1.5, 100, 0);
            fill(255, 200, 50, alpha);
            noStroke();
            circle(0, 0, r);
        }

        // main body of the sun
        fill(this.color);
        noStroke();
        circle(0, 0, this.size);

        pop();
    }

I overrode the display function here because I wanted to add the glow of the sun.
I also wanted the planets’ orbit rings to be drawn here.

//  elliptical orbit rings around the sun
    drawOrbitRings() {
        push();
        translate(this.pos.x, this.pos.y);

        let orbitData = [
            {radius: 80, factor: 0.2},   // Mercury
            {radius: 110, factor: 0.25}, // Venus
            {radius: 140, factor: 0.3},  // Earth
            {radius: 180, factor: 0.35}, // Mars
            {radius: 250, factor: 0.4},  // Jupiter
            {radius: 320, factor: 0.45}, // Saturn
            {radius: 380, factor: 0.5},  // Uranus
            {radius: 420, factor: 0.55}  // Neptune
        ];

        // each orbit draw with faint blue
        for (let orbit of orbitData) {
            stroke(100, 100, 150, 80); 
            strokeWeight(1);
            noFill();
            ellipse(0, 0, orbit.radius * 2, orbit.radius * 2 * orbit.factor);
        }
        pop();
    }

Moon

When drafting my original code, I was using the SpaceBody class as the base for both Moon and Planet. I then realised I could make Moon a child class of SpaceBody AND THEN make Planet a child of Moon.

I did this because both Moon and Planet do some sort of orbiting, whether it is the Sun they orbit or a planet. I would say this realisation was the part of the code I was most impressed with.

class Moon extends SpaceBody {
    constructor(x, y, size, color, orbitRadius, orbitSpeed, parentBody) {
        super(x, y, size, color);
        this.orbitRadius = orbitRadius;
        this.orbitSpeed = orbitSpeed;
        this.angle = random(TWO_PI); // random starting position
        this.parentBody = parentBody; // the body this orbits around
        this.ellipticalFactor = 0.3; // 3D effect - stronger ellipses
    }
    
    update() {
        // elliptical orbit around the parent body for 3D effect
        this.angle += this.orbitSpeed;
        
        // elliptical orbit (wider than tall for 3D effect)
        let x = this.parentBody.pos.x + cos(this.angle) * this.orbitRadius;
        let y = this.parentBody.pos.y + sin(this.angle) * this.orbitRadius * this.ellipticalFactor;
        this.pos.set(x, y);
    }
}

Planet

For the Planet class, I added some extra attributes to the Moon class, such as hasRings (a Boolean for Saturn) and moons[] for planets with moons. Because Planet inherits from Moon, there were fewer lines of code, which was nice to have.
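Putting the chain together, here is a simplified, p5-free sketch of the idea: plain {x, y} objects stand in for createVector, the starting angle is fixed instead of random so the orbit is predictable, and the exact Planet signature is my assumption based on the attributes described above.

```javascript
class SpaceBody {
  constructor(x, y, size, color) {
    this.pos = { x, y };   // plain object instead of p5's createVector
    this.size = size;
    this.color = color;
  }
}

class Moon extends SpaceBody {
  constructor(x, y, size, color, orbitRadius, orbitSpeed, parentBody) {
    super(x, y, size, color);
    this.orbitRadius = orbitRadius;
    this.orbitSpeed = orbitSpeed;
    this.angle = 0;                // fixed here for predictability
    this.parentBody = parentBody;  // the body this orbits around
    this.ellipticalFactor = 0.3;   // squash y for the 3D effect
  }
  update() {
    this.angle += this.orbitSpeed;
    this.pos.x = this.parentBody.pos.x + Math.cos(this.angle) * this.orbitRadius;
    this.pos.y = this.parentBody.pos.y + Math.sin(this.angle) * this.orbitRadius * this.ellipticalFactor;
  }
}

// Planet reuses Moon's orbit logic (its "parent body" is the Sun)
// and only adds rings and its own moons.
class Planet extends Moon {
  constructor(x, y, size, color, orbitRadius, orbitSpeed, sun, hasRings, moons) {
    super(x, y, size, color, orbitRadius, orbitSpeed, sun);
    this.hasRings = hasRings;
    this.moons = moons;
  }
  update() {
    super.update();                          // orbit the sun
    for (const m of this.moons) m.update();  // moons orbit this planet
  }
}
```

The payoff of the inheritance chain is visible here: Planet.update is two lines because the orbit math lives once, in Moon.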

SolarSystem

This class was to hold all the objects (Sun, Moons, Planets).

I used AI to help me create the planet objects, adjusting the sizes and moons to match reality. For example, Jupiter is the biggest planet, so it gets the largest size.

let planetData = [
    [8, color(169, 169, 169), 80, 0.04, 0.2, false, []], // Mercury
    [12, color(255, 198, 73), 110, 0.03, 0.25, false, []], // Venus
    [14, color(100, 149, 237), 140, 0.025, 0.3, false, [[4, 25, 0.1, color(220, 220, 220)]]], // Earth
    [10, color(205, 92, 92), 180, 0.02, 0.35, false, []], // Mars
    [28, color(255, 140, 0), 250, 0.015, 0.4, false, [[6, 40, 0.08, color(255, 255, 224)], [4, 55, 0.06, color(139, 69, 19)], [5, 70, 0.04, color(176, 196, 222)]]], // Jupiter
    [24, color(255, 215, 0), 320, 0.012, 0.45, true, [[5, 45, 0.07, color(255, 255, 240)]]], // Saturn
    [18, color(64, 224, 208), 380, 0.008, 0.5, false, []], // Uranus
    [17, color(30, 144, 255), 420, 0.006, 0.55, false, []] // Neptune
];

Challenges

Honestly, as I was coding this, I kept seeing more ways to use OOP, such as having Planet and Moon extend the new class I made. Originally I had them both as children of SpaceBody, with the update function as a global helper, before it became part of my new class. There was a lot of restructuring, so I wish I had spent more time focusing on the structure of the classes up front.

Next Time

I added the mouse position as a part of the interactivity but I want to come up with something more creative next time. Maybe having the whole system be on an angle?

Final Product

Week 2 – reading response

This week we had to watch Casey Reas’s talk on randomness and computational art. One example that resonated with me most was his demonstration of controlled randomness as a new creative medium. His example of an 11×11 grid of dots demonstrated this perfectly: as the dots moved with increasing randomness, they transformed from orderly patterns into seemingly chaotic movement. This progression raised a question: at what point does controlled randomness become indistinguishable from chaos?

I don’t believe there’s a definitive answer to this question, which connects to broader philosophical debates about the nature of art itself. Can art truly be controlled, or does its essence lie in the unpredictable? This becomes even more interesting when considering symmetry in computational art. By introducing simple random elements, we often perceive meaningful shapes and patterns, even when those elements are generated through chance, like the pixel art example. This suggests that our interpretation and meaning-making as viewers is as crucial as the artist’s intention.

Reas’s point about how minor parameter adjustments can produce entirely new artistic outcomes resonated strongly with my own work. In this week’s assignment, I experimented with adjusting particle colours and sizes based on the number of connected particles, and witnessed how small changes created dramatically different visual results. This reinforces how computational art explores vast creative possibilities through systematic variation.

Finally, Reas’s discussion of exploiting machine systems and their unique characteristics highlighted an important aspect of digital art: the same foundational artistic concept can be expressed differently depending on the computational system used. To me, this shows how computational art differs from traditional art; once a piece of ‘traditional’ art has been created, it can’t be changed in its entirety, whereas computational art can change depending on the machine, revealing another layer of the medium.

Week 2 – Khatira

For week 2, we needed to use a loop to create some form of computational art. I decided to do some form of connecting particles, inspired by work from my previous IM class. I know that by altering some simple variables or making some simple visual changes, you can create something very different.

I had dots placed randomly on the screen and gave each particle a velocity randomly assigned between -3 and 3. If the distance between two points was less than 100 pixels, a line would be displayed between them.

I then wanted to add some more dimension, so I added a connections variable to keep track of how many lines were coming out of each dot (incrementing by 1 per connection). The more connections, the bigger the dot.

// size of dot increases with number of connections
const size = this.baseSize + (this.connections * 2);


One very simple tool in p5.js that I love is the background alpha feature. When redrawing the background, you can add some opacity and you can see the cool trail effect it gives in the image below.

function draw() {
    // background(0, 0, 0);
    background(0, 0, 0, 25); // trail effect

Background – documentation

I wanted to add an additional dimension to the dots: the more connections, the more red; the fewer connections, the more blue. I used a map function to put the values on a sort of scale. I needed to use HSB mode for this to work, and then switch back to RGB for the background trail effect.
Blue -> red

display() {
    // colour of dot changes with number of connections; more connections -> more red
    const hue = map(this.connections, 0, MAX_CONNECTIONS, 200, 0);
    // size of dot increases with number of connections
    const size = this.baseSize + (this.connections * 2);
    
    colorMode(HSB);
    fill(hue, 100, 100);
    // for trail effect - switch back to RGB
    colorMode(RGB);      
    noStroke();
    ellipse(this.x, this.y, size);
}

I used for loops to initialise the random dots, but also to go through each dot to check the distance and number of connections, and to ‘update’ them.

for (let i = 0; i < dots.length; i++) {
    for (let j = i + 1; j < dots.length; j++) {
        if (dist(dots[i].x, dots[i].y, dots[j].x, dots[j].y) < CONNECTION_DISTANCE) {
            // keep track of connections
            dots[i].connections++;
            dots[j].connections++;
            line(dots[i].x, dots[i].y, dots[j].x, dots[j].y);
        }
    }
}

For interactivity, the user can drag and hold the mouse across the screen to add more dots.

Self portrait – Khatira

Assignment 1 – Self Portrait

For this week’s assignment, I wanted to create a simple image of myself but have the mood/expression and weather change depending on the user’s interaction. As a hijabi, I like to change the colour of my hijab, so I wanted it to be changeable by the user pressing the mouse. I also enjoy city life, so I wanted to replicate some sort of simple skyline.

Emotions

I decided to go with 4 different emotions inspired by the following emojis.

Cheesing. I wanted this one to be me at my happiest, with a simple rainbow in the background.

Sad. I wanted it to rain in the background and have my facial expression be a simple upside-down arc.

Neutral. I wanted this one to be me just meh, with nothing interesting in the background.

Happy. I wanted this one to be me normally happy, with the sun in the background.

Previous coding experience with p5.js

I took Decoding Nature with Aya, so I have had exposure to replicating natural patterns in p5.js, and I wanted to replicate rain pouring down with the skill set learnt in her class.

For the rain-pouring effect, I wanted simple white lines.

// Draw rain
stroke(255, 255, 255); // 'white' rain
strokeWeight(2);
for (let i = 0; i < 20; i++) {
  let x = random(0, 600);
  let y = random(0, 400);
  line(x, y, x + 1, y + 5); // rain drops
}

Here I am generating lines whose starting (x, y) coordinates are randomly picked, with x anywhere across the screen and y between 0 and 400. I then stretch each line out by one pixel in x to create a more angled look.

For the eyes, I wanted them to also be interactive so I made them follow the cursor of the user.

else {
      // normal eyes - white background and black pupils that follow cursor
      noStroke();
      
      // white eye 
      fill(255);
      ellipse(270, 180, 25, 25); 
      ellipse(330, 180, 25, 25); 
      
      // black pupils that follow mouse
      fill(0);
      
      // calculate pupil position based on mouse location
      let leftEyeX = 270;
      let leftEyeY = 180;
      let rightEyeX = 330;
      let rightEyeY = 180;
      
      // calculate the angle from eye to mouse
      let leftAngle = atan2(mouseY - leftEyeY, mouseX - leftEyeX);
      let rightAngle = atan2(mouseY - rightEyeY, mouseX - rightEyeX);
      
      // LIMIT pupil movement within eye bounds
      let pupilDistance = 4;
      
      let leftPupilX = leftEyeX + cos(leftAngle) * pupilDistance;
      let leftPupilY = leftEyeY + sin(leftAngle) * pupilDistance;
      let rightPupilX = rightEyeX + cos(rightAngle) * pupilDistance;
      let rightPupilY = rightEyeY + sin(rightAngle) * pupilDistance;
      
      // drawing the pupils
      ellipse(leftPupilX, leftPupilY, 8, 8); 
      ellipse(rightPupilX, rightPupilY, 8, 8);

From Decoding Nature, we learnt how to have particles follow the cursor by making use of the mouseX and mouseY variables. I had the pupils’ initial (x, y) coordinates match the white circles and then limited their distance to 4 so that they don’t go out of bounds of the white circles.

atan2() calculates the angle from each eye centre to the mouse position and returns an angle in radians. That angle is then used in the sin and cos formulas with basic trigonometry to create new (x, y) coordinates. Because the values of sin and cos are always between -1 and 1, when multiplied by pupilDistance (the constraint implemented earlier) the pupil will never leave the white part of the eye.
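That bound can be checked directly: since cos²(a) + sin²(a) = 1 for any angle, the offset’s magnitude is always exactly pupilDistance. A small p5-free helper demonstrating this (the function name is mine, for illustration):

```javascript
// Compute the pupil's offset from the eye centre toward the mouse.
// Its magnitude is always exactly pupilDistance, because
// cos(a)^2 + sin(a)^2 = 1 for any angle a.
function pupilOffset(eyeX, eyeY, mouseX, mouseY, pupilDistance) {
  const a = Math.atan2(mouseY - eyeY, mouseX - eyeX);
  return [Math.cos(a) * pupilDistance, Math.sin(a) * pupilDistance];
}
```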

Final solution