Week 4 – Assignment

Concept

My inspiration for this project was the Magic 8-Ball, a classic toy that lets people ask a question, shake the ball, and receive a random answer like “Yes,” “No,” or “Ask again later.” My project is a digital version of this: instead of shaking a physical object, the user types a question and presses Enter. The program then generates a response from a set list of answers.

Code Highlight

The core of this code is pretty straightforward. The setup function creates the canvas, places the input box, and ensures everything is aligned properly. The generateResponse function then picks a random answer from a list whenever the user presses Enter. The draw function continuously updates the screen, displaying the prompt and the response.

let inputBox; 
let response = "";  // variable to store the response text
let fade = 0;  
let answers = [ 
  "Yes.", "No.", "Definitely.", "Absolutely not.", 
  "Likely.", "Without a doubt.", "Maybe?", "Never."
];

function setup() {
  createCanvas(600, 400);  
  textAlign(CENTER, CENTER);  
  textSize(24);  
  
  // inputbox
  inputBox = createInput("");
  inputBox.position(width / 2 - 100, height / 2 - 50);  
  inputBox.size(200);  
  inputBox.style("font-size", "16px");  
  
  
  inputBox.changed(generateResponse);  // calls generateResponse when text inside inputbox changes
}
// generates a random response when the input changes
function generateResponse() {
  response = random(answers);  // selects a random response from the answers array
}

Reflection/Improvements

If I were to improve this, I’d start by making the responses feel more dynamic. Right now, all of the answers appear in the same style, but I could change the color of the text based on the response: green for positive, red for negative, and yellow for uncertain ones. Adding sound effects or animations would also really improve the experience, making it feel more interactive. Finally, I’d like to refine the user input so that the box automatically clears after the question is asked. Overall, I like how this project turned out. It’s a small but effective way to capture the fun of a Magic 8-Ball.
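The color idea could be sketched with a small helper. The category groupings below are my own assumption based on the answers array, and the helper name is hypothetical:

```javascript
// Hypothetical color helper: group the answers and return an [r, g, b]
// fill color for each category (groupings are an assumption).
const POSITIVE = ["Yes.", "Definitely.", "Likely.", "Without a doubt."];
const NEGATIVE = ["No.", "Absolutely not.", "Never."];

function responseColor(answer) {
  if (POSITIVE.includes(answer)) return [0, 200, 0];   // green for positive
  if (NEGATIVE.includes(answer)) return [200, 0, 0];   // red for negative
  return [220, 200, 0];                                // yellow for uncertain
}
```

In draw, something like `fill(...responseColor(response))` before the `text()` call would then tint the answer.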

Week #4 – Production Assignment ( Data Visualization)

Introduction:

I have always found bar graphs quite fascinating. The ability to stack and compare data visually makes it easy to perceive. In my case, I wanted to log my study hours as extrinsic motivation: something that would give me a visual overview of my efforts in the form of time put into studying.

Concept:

In theory, I wanted the data visualization and bar graph to work on a weekly basis: a snapshot of just one week, as opposed to a complete log since the dawn of time. Each day of the week has a bar extending up from the bottom axis, varying in height to signify which day was the most productive (I know this isn’t exactly considered productivity, but we might just pretend). Each bar has its designated study hours listed on top of it, giving a quantitative and qualitative visual of the study pattern.

The user is greeted by a start page, then taken to a page to log their study hours, and then, upon a button click, sees the graph appear.
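That three-screen flow can be modeled with a simple state variable, which is roughly what the code highlight below does. This is a minimal sketch with assumed state names:

```javascript
// Minimal sketch of the screen flow as a state variable.
// State names ("start" / "input" / "graph") are assumptions
// mirroring the state used in the code highlight.
let state = "start";

function startProgram() {          // called by the Start button
  if (state === "start") state = "input";
}

function submitData() {            // called by the Submit button
  if (state === "input") state = "graph";
}
```

draw() can then branch on `state` each frame to decide which screen to render.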

Code Highlight:

function setup() {

}

function startProgram() {

}

function submitData() {
  // check each input and store a validated number of hours
  for (let i = 0; i < studyInputs.length; i++) {
    // cycle through the inputs to verify the nature of the data
    let val = parseFloat(studyInputs[i].value());
    // parseFloat is a JavaScript function which parses
    // a value as a string and converts it to a number
    studyHours.push(isNaN(val) ? 0 : val);
    /*
    isNaN ('is Not a Number') returns true if the value passed is not a
    number. In our case, if the value is a number, 'val' is pushed into
    studyHours; if something other than a number was entered (a letter or
    character, for instance), the value 0 is pushed for that day's hours.
    */
    studyInputs[i].hide(); // hide the input field after submission
  }
  submitButton.hide();
  state = "graph";  // change state so that draw() will call drawGraph()
}

function drawGraph() {
  
}

// at last, the draw function, which keeps cycling and only draws the graph once the state has been changed to "graph"
function draw() {
  // if state is "graph" continuously draw the graph. 
  if (state === "graph") {
    drawGraph(); // custom function
  }
}

The code in submitData is what I am proud of. At first, I encountered the possibility that the user might not enter a numerical value, but rather a string or character. Hence I made use of the ‘parseFloat’ and ‘isNaN’ JavaScript functions, paired with an inline if-else (ternary) statement, to check the nature of the input. If the input happened to be anything other than a number, a zero is pushed into the ‘studyHours’ array.
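The guard can be tried in isolation. This is a sketch with a hypothetical helper name wrapping the same parseFloat/isNaN logic:

```javascript
// Hypothetical helper wrapping the validation used in submitData:
// anything that doesn't parse as a number becomes 0 hours.
function toHours(input) {
  const val = parseFloat(input);
  return isNaN(val) ? 0 : val;
}
```

So `toHours("3.5")` keeps the number, while `toHours("ali")` or an empty field falls back to 0.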

 

Things I struggled with:

I struggled the most with the placement and sizing of the bars. Their coordinates were inconsistent, and the spacing was off. Hence, I decided to use a mathematical formulation to calculate the division of space on the spot, keeping the coordinates and margins consistent.

let margin = 60;
let gap = 20; // gap between days
let availableWidth = width - 2 * margin;
let barWidth = (availableWidth - (days.length - 1) * gap) / days.length;
let maxHours = max(studyHours); // maximum value in the array
if (maxHours === 0) {
  maxHours = 1;  // to avoid division by zero
}
let graphHeight = height - 100;
stroke(0);
// horizontal axis
line(margin, height - 50, width - margin, height - 50);

// For each day, draw the corresponding bar and label its study hours and name
for (let i = 0; i < days.length; i++) {
  let x = margin + i * (barWidth + gap);
  let barHeight = map(studyHours[i], 0, maxHours, 0, graphHeight);

  // Draw a blue bar representing the study hours
  fill(50, 150, 200);
  rect(x, height - 50 - barHeight, barWidth, barHeight);
}
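One way to sanity-check this spacing formula: the bars plus the gaps between them should exactly fill the space between the two margins. Here is the layout math as a pure function (the function name and the sample values in the check are mine):

```javascript
// The spacing formula in isolation: given canvas width, side margin,
// gap between bars, and number of days, compute each bar's width and
// x coordinate.
function barLayout(width, margin, gap, n) {
  const availableWidth = width - 2 * margin;
  const barWidth = (availableWidth - (n - 1) * gap) / n;
  const xs = [];
  for (let i = 0; i < n; i++) {
    xs.push(margin + i * (barWidth + gap)); // left edge of bar i
  }
  return { barWidth, xs };
}
```

For a 600-wide canvas with a 60 px margin, the right edge of the last of 7 bars lands exactly at width − margin, which is what keeps the graph centered.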

 

 

Instruction on How to Use:

Click on ‘start’ to proceed to the data-logging stage.

Then, enter numerical values for the hours studied each day.

 

Once ‘submit’ is pressed, the user will see a visualized pattern of the data entered.

Since ‘ali’ is not a numerical value, it shows zero; for the rest, it visualizes the trend and the numbers entered.

Embedded Sketch:

 

Future Improvements:

In the future, I want to make it even more dynamic. For instance, the 7 is not far from hitting the title. Even though the current program resizes each bar relative to the maximum and minimum of the others and the graph, better margins and a specific coloration for each bar could make it more symbolic. For instance, hot red could mark the most hours studied, and a simple green the least.
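The per-bar coloration idea could look something like this sketch, which linearly interpolates from green (fewest hours) to hot red (most hours). The exact RGB endpoints and the helper name are assumptions:

```javascript
// Hypothetical coloring: interpolate each bar's fill from green
// (least hours) to hot red (most hours).
function barColor(hours, minH, maxH) {
  const t = maxH === minH ? 0 : (hours - minH) / (maxH - minH);
  const r = Math.round(50 + t * (255 - 50)); // ramps toward hot red
  const g = Math.round(200 - t * 200);       // ramps away from green
  return [r, g, 50];
}
```

Inside the drawing loop, `fill(...barColor(studyHours[i], min(studyHours), maxHours))` would replace the fixed blue fill.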

Complete code:

The complete code is extremely lengthy (not really), and can be accessed by clicking on the sketch.

 

Week #4 – Reading Response

Overview of the reading:

I thoroughly enjoyed this reading. I had no idea the famous ‘Norman Door’ was literally named after a person named Norman! The reading touches upon various concepts and examples, such as doors and a teapot for masochists. Using the example of his friend’s struggle with an aesthetically pleasing array of doors, the author makes the argument that ‘discoverability’ and ‘understanding’ are two of the most critical elements of design, and that they are often neglected. Discoverability means how easy it is for users to discover possible actions, and understanding refers to the extent to which a user can grasp the meaning of those actions.

 

What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?

If there is one thing that I have been struggling with recently, it is plastic latches. To be more precise, the battery latches on cameras.

Turns out I am not the only one; many folks on the internet have been struggling with these latches. The latch mechanism works by pulling a small protruding lever. Instead, those unfamiliar with it end up pulling the whole latch backwards, which damages, loosens, or even breaks it. No wonder there are so many spare plastic latches lined up on Amazon:

To improve this, a release mechanism could be designed around a simple push rather than a pull: a button which releases the latch, rather than having to pry it with your nails, which is extremely difficult and scratches the plastic in the process. Most importantly, the memory card has to be removed every time to transfer images, and with the latch being this annoying, it is bound to be damaged and broken in the long run. Therefore, a push-to-open mechanism would be a real improvement over a pull/pry-to-open mechanism.

Note: I am not referring to the movement of the latch itself (that wouldn’t make any difference, as with a door), but rather the release mechanism in latches that sometimes says ‘push to open’ or ‘pull’ but in reality leaves the user confused about which axis to apply force in.

Solution: A press to open and press to release mechanism, similar to kitchen drawers.

 

How can you apply some of the author’s principles of design to interactive media?

A feedback mechanism is one of the main visual or multi-sensory cues (signifiers) that can be implemented in interactive media projects. Upon user input, the system should inform the user, within an adequate amount of time, whether their action has been rejected or accepted. I personally like the idea of sound or haptics. For instance, on a Macintosh trackpad, when the user performs a drag-and-drop action beyond the graphical constraints of a program, the trackpad stiffens and vibrates, simulating difficulty while dragging. Similarly, that annoying sound from Windows 7 when a prompt window is not dealt with is another example.

Secondly, affordances and designs which allow for ‘easy’ discoverability are something that I would like to work on: easy-to-understand designs which prompt the user in the right direction, such as selectively drawing borders in a game to direct the player the right way.

Reading Reflection – Week#4

  • What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?

Personally, something that frustrates me is an object for which multiple people hold varying conceptual models; imagine the complication if those models conflict. Earphones might not necessarily drive me crazy, but some are designed to be worn wrapped around the ear, and countless people simply plug them straight into their ears. I realize the problem is usually a lack of a signifier indicating the wrapping action around the ear, and that users tend to focus on the earbud, which goes in the ear, so they plug the earphone in without wrapping the cable around the ear. I pondered how earphones could be designed to let people know clearly how they should be worn; the best design I can think of involves a long, round rubber ear hook placed before the earbud, indicating that it should sit along the antihelix, which is only possible if the cable is wrapped around the ear.

Right way to wear earphones. Cited from https://www.aizerd.com/news/the-right-way-to-wear-headphones.html
Parts of the earphone (see the top-left for ear-hook). Cited from https://www.pinterest.com/pin/663084745114270948/.

Taking careful steps in designing an object is necessary, and it is crucial to consider the conceptual model that might be constructed, for these provide value for understanding, in predicting how things will behave, and in figuring out what to do when things do not go as planned.

  • How can you apply some of the author’s principles of design to interactive media?

I learned to consider affordances, the possible interactions between people and the environment, as well as incorporate signifiers to signal things, in particular what actions are possible and how they should be done. Signifiers must be perceivable, else they fail to function. I also learned that in design, signifiers are more important than affordances, for they communicate how to use the design. A signifier can be words, a graphical illustration, or just a device whose perceived affordances are unambiguous.

I have a real-life experience of the need for design improvement on my last data visualization project using the author’s principles of design to interactive media. The project involves viewing stellar objects in 3D space, allowing users to zoom in, zoom out and move across the space using mouse controls. Excitedly, I asked my friend to see it. She was keen to try out my project, but she didn’t seem delighted with her experience. Problems? There were not just one, but several:

  1. I didn’t want users to lose view of the stellar objects during their exploration, so when the mouse is released, I automatically reset the view to the initial view of the stellar objects. But that’s not the way someone else might want it to be. My friend probably found this sudden reset quite unnatural, and she may have thought my animation wasn’t functional. She suggested offering reset as an option via a button instead. I learned that after releasing the mouse, it’s preferable that the view stay at the result of that exploration – the user can then pick up from where they left off to continue exploring. This increases the affordance (possible interaction between people and the environment).
  2. Due to lack of time, I was also not able to add a signifier that the view would reset. This caused confusion for the user.
  3. Feedback was quite slow, which might have played a part in deterring my friend from continuing to try the animation; she went off to do other activities. The delay seemed too long. According to the reading, “Feedback must be immediate: even a delay of a tenth of a second can be disconcerting.” On the contrary, imagine quick feedback in a space simulation: smooth zooming, tilting through space – how inviting to continue exploring.

I hope to incorporate these design considerations and principles in my upcoming assignments and future projects. I would love to help users enjoy a wonderful experience!

Week #4 – Reading Response

One thing I find frustrating is the lack of customization in closed-source systems. Take my iPhone, which runs iOS: I like it, I think it’s great, but it lacks a lot of the customizable features of other systems like Android, which lets you customize almost everything on your phone, from the keyboard’s appearance to system functionality.

When it comes to designing interactive media, applying key design principles can make all the difference in usability. First, affordances and signifiers should be clear—buttons and links need to look like what they are, so users don’t waste time guessing. Feedback is just as crucial; when a user taps a button, they should see or hear an immediate response that confirms their action. And then there’s mapping—controls should align naturally with their effects, like familiar gestures and intuitive layouts that make navigation feel seamless.

Week #4 – Reading

What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?
One everyday design failure that drives me crazy is poorly designed TV remotes. Many remotes today have an overwhelming number of buttons with unclear labels, making simple tasks like changing the volume or input source frustrating. This problem could be improved by applying Don Norman’s principles of discoverability and mapping. For example, remotes should group related buttons more logically and use tactile or color-coded signifiers to indicate key functions. A minimalistic design, similar to Apple’s Siri Remote, which relies on fewer buttons and intuitive gestures, would make the experience much more user-friendly.

How can you apply some of the author’s principles of design to interactive media?
Interactive media, such as mobile apps and websites, can greatly benefit from Norman’s principles of affordances and signifiers. For instance, buttons in an app should clearly indicate their function through visual cues like color, shape, or animations when hovered over. Poor feedback, such as a lack of confirmation when submitting a form, can leave users confused about whether their action was successful. Applying clear feedback mechanisms, like progress indicators or subtle vibrations for mobile interactions, would improve usability and user satisfaction, making digital experiences more intuitive and engaging.



Week 4- Audio Visualizer

Music has always had a profound effect on me—it can transform my mood, inspire creativity, and even transport me to another place. But what if we could see music, not just hear it? That was the inspiration behind my music visualization project. I wanted to create something that would take raw audio data and turn it into a living, breathing visual experience.

I set out to design a program that listens to a song and dynamically generates visuals based on its frequencies. The goal was to make the visualization feel fluid, immersive, and organic—like the music itself. With a gradient background, floating particles, and an animated frequency bar display, the result is a captivating blend of movement and sound.


Code Highlight: Crafting the Visual Bars

At the core of this project is the drawVisualizer function. This function takes the Fourier Transform (FFT) data from the song and maps it to visual elements—essentially transforming audio frequencies into colorful, dynamic bars.

The function first analyzes the sound spectrum using fft.analyze(), which returns an array of frequency amplitudes. These values determine the height of the bars, making them respond dynamically to the beat of the song.

function drawVisualizer() { // draw bars
  let spectrum = fft.analyze(); // get spectrum data
  let barWidth = width / 60;
  let maxHeight = height / 4; 
  for (let i = 0; i < 60; i++) { // loop 60 bars
    let barHeight = map(spectrum[i], 0, 255, 10, maxHeight); // calc bar height
    let hue = map(i, 0, 60, 180, 360); // set hue
    let yPos = height - barHeight; // top pos of bar
    let xPos = i * barWidth + barWidth / 2; // center pos of bar
    fill(hue % 360, 80, 80, 80); // set fill color
    rect(xPos - barWidth / 2, yPos, barWidth * 0.8, barHeight, 5); // draw bar
    if (barHeight > maxHeight * 0.7 && frameCount % 5 === 0) { // add sparkle if tall
      fill(255, 255, 255, random(50, 100)); // sparkle color
      ellipse(xPos, yPos, random(3, 8)); // draw sparkle
    }
  }
}

 

One of my favorite touches is the conditional statement that adds “sparkles” to the taller bars. This subtle effect makes the visualization feel more alive, as if the bars are bursting with energy at their peaks.


Self-Reflection and Areas for Improvement

One of the biggest challenges was fine-tuning the balance between randomness and control. The bars needed to feel reactive but not chaotic, structured but not rigid.

Another area I’d like to explore is incorporating more interactive elements. Right now, the only interaction is toggling the song on and off with a mouse click. Adding features like changing color themes based on the song’s mood or allowing users to manipulate the bars with the mouse could make the experience even more engaging.

Sketch

Week #4- Text display

Your concept:

My project is an interactive genie lamp simulation where bubbles of smoke rise from the spout when the user holds the “R” key. As the bubbles grow, they form a speech bubble that displays a random percentage between 1 and 100%. This percentage acts as a playful response from the genie lamp, making the interaction fun and unpredictable. The bubbles disappear after a few seconds, allowing for continuous engagement.

Code highlight:

I thought this part would be easy: just make bubbles appear when “R” is pressed. But at first, they wouldn’t stop generating, and they’d fill the whole screen. The diagonal movement also felt off, no matter how I adjusted the values.

After some trial and error, I fixed it by limiting the bubbles and refining the movement formula. It was frustrating at first, but seeing the end result made it worth the effort:

function draw() {
  background(190, 195, 255);
  drawGenieBottle();

  // starts generating when r key is pressed
  if (keyIsPressed && key === "r" && !generating) {
    generating = true; // starts generation process
  }

  //generate bubbles if within limit
  if (generating && bubbles.length < maxBubbles) {
    let x = width / 1.35 + bubbles.length * 5; // slight diagonal movement
    let y = height - 100 - bubbles.length * 15;
    bubbles.push({ x, y, size: 10 + bubbles.length * 3 });
  }
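The placement math can be pulled out into a pure function to see the diagonal drift per bubble; the canvas dimensions in the check are assumptions, and the helper name is mine:

```javascript
// The bubble placement formula in isolation: bubble i drifts slightly
// right, sits higher than the previous one, and is a little bigger.
function bubbleAt(i, width, height) {
  return {
    x: width / 1.35 + i * 5,   // slight diagonal drift to the right
    y: height - 100 - i * 15,  // each bubble rises above the last
    size: 10 + i * 3,          // and grows as the column forms
  };
}
```

With `maxBubbles` capping `i`, the column stops growing instead of filling the screen, which is what fixed the runaway generation.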

 

Press “R” key:

Reflection and future improvement:

Overall, I am happy with how my project turned out, especially the animation of the bubbles and the random percentage display. However, I think it could be improved by adding glowing effects, sound, and different animations based on the percentage shown. Adding more visuals and variety would make the experience even more engaging.

Assignment #4 – Data Visualization

1. Concept

Faced with the decision between working on data visualization or creating generative text output, I considered that I already had a good understanding of how to create generative text from class, while working on data visualization could help me gain some valuable new skills. So I chose data visualization.

I was inspired by the data visualization of flights in the U.S. I was amazed that a flight network could convey so much information, such as through its resemblance to U.S. geography and where the large airports are.

I was interested in applying data visualization mapping techniques to astronomical data. I explored Kaggle datasets and decided the Gaia Astronomical Data would be very appropriate for my project, as I noted I could visualize it in 3D space using its x, y and z coordinates (all in parsecs).

I also explored other space visualization projects. In particular, I was inspired by the European Space Agency’s Star Mapper, a visualisation based on data from ESA’s Hipparcos star mapper. Not only were the views under the ‘Explore’ tab beautiful and breathtaking (see images below), I especially enjoyed the interactivity of the website: being able to control the view of the stars in space with my mouse while it is pressed. I intended to incorporate this into my p5.js animation.

The project involves manipulating the Gaia Astronomical Data to map stellar objects into 3D space, and enable users to zoom in, zoom out and move across the space using mouse controls. This would be my very first time animating in 3D space, as well as controlling camera and perspective through code! How exciting.

I used the code template for wind turbines data which was discussed in class and made many revisions to obtain the outcome.

2. Code Highlights

I encountered various major challenges, resolved with the help of AI.

First, my goal includes being able to move across the space using mouse controls, which is new to me, and watching Coding Train’s WebGL and p5js tutorial on Camera and Perspective helped me understand these unfamiliar concepts in p5js. I learned important ideas including field of view (fov), perspective, orbitControl, camera([x], [y], [z], [centerX], [centerY], [centerZ], [upX], [upY], [upZ]). It was still very challenging, and with the help of AI, I learned to incorporate astronomical measurements:

  • in mapping the mouse X-position to a 30°–90° zoom range: a wider FOV (90°) shows more galactic context, while a narrower one (30°) focuses on specific star formations, using actual astronomical angular measurements (π/6 = 30°, π/2 = 90°);
  • in perspective(), which sets a viewing frustum matching typical astronomical scales: a 1-parsec near clip (prevents nearby masers from clipping) and a 10,000-parsec far clip (accommodates distant OB associations).
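The FOV mapping can be sanity-checked in plain JavaScript. This is a standalone sketch: mapRange() reimplements p5’s linear map(), and the canvas width is an assumption made for this check (the sketch itself uses p5’s built-in map()):

```javascript
// Sanity check of the FOV mapping in plain JavaScript. mapRange()
// reimplements p5's linear map(); the canvas width is assumed.
function mapRange(value, start1, stop1, start2, stop2) {
  return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
}

const width = 800; // assumed canvas width for this check

// Mouse at the left edge gives the narrowest FOV (30°),
// at the right edge the widest (90°).
const fovLeft = mapRange(0, 0, width, Math.PI / 6, Math.PI / 2);
const fovRight = mapRange(width, 0, width, Math.PI / 6, Math.PI / 2);

console.log(fovLeft);  // ≈ 0.524 rad (30°)
console.log(fovRight); // ≈ 1.571 rad (90°)
```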

At some point during development, the stellar objects weren’t visible in the initial view, though if I moved my mouse to roughly mouseX = 395 and mouseY = 340, they became visible. This posed a problem, because users might not realize at first glance that there are stars they can explore. So I used target mouse coordinates to bring the stellar objects into the initial view: const targetMouseX = 397.5 and const targetMouseY = 340, together with camera(initialCamX, initialCamY, initialCamZ, initialLookAtX, initialLookAtY, 0, 0, 1, 0). Since camera() is called whenever the mouse is not pressed, the user’s view resets to a position where the objects are visible in case they lose sight of them.

function draw() {
  background(10); // Darker background for better contrast
  
  // Field of View - Only update if mouse is pressed. 
  let fov;
  if (mouseIsPressed) {
    // Map the mouse X-position (0..width) to the 30°–90° zoom range.
    fov = map(mouseX, 0, width, PI / 6, PI / 2);
  } else {
    fov = map(targetMouseX, 0, width, PI / 6, PI / 2); // Use targetMouseX
  }
  // Sets viewing frustum matching typical astronomical scales: 1 parsec near clip (prevents nearby masers from clipping); 10000 parsec far clip (accommodates distant OB associations)
  perspective(fov, width / height, 1, 10000);

  // Camera Controls - orbitControl handles movement when pressed.
  if (!mouseIsPressed) {
        camera(initialCamX, initialCamY, initialCamZ, initialLookAtX, initialLookAtY, 0, 0, 1, 0);
  } //orbitControl handles the rest
  
  print(mouseX, mouseY);
  ...
}

Second, since I needed to move across space, it was important to draw all stellar objects every frame, so that the view adjusts as the user moves the cursor. The original code, in which each data point was drawn only once, needed to be adjusted. Processing the relevant coordinates for each stellar object was moved into a function called from setup() instead of draw(). An array of stellar objects holds the data, with each element being an object storing that object’s xpos, ypos, zpos, and size (determined by its distance from the viewer).

function findMinMaxXYZAndProcessData() {
  let singleRow = [];

  // First pass: find the min/max of the x, y, z columns so that
  // the mapping below uses the full range of the data.
  for (let csvRowNumber = 1; csvRowNumber < strings.length; csvRowNumber++) {
    // get a single row and split it into individual fields
    singleRow = split(strings[csvRowNumber], ",");

    // Fields 5, 6 and 7 hold the x, y and z coordinates
    // (in parsecs) and so they should be numerical:
    let x = float(singleRow[5]);
    let y = float(singleRow[6]);
    let z = float(singleRow[7]);

    // The file may be missing a field, in which case
    // the conversion to a float might have failed
    if (isNaN(x) || isNaN(y) || isNaN(z)) {
      print("conversion to float failed; skipping row " + csvRowNumber);
      continue;
    }

    // Initialize min/max on the first valid row.
    if (minX === undefined || x < minX) minX = x;
    if (maxX === undefined || x > maxX) maxX = x;
    if (minY === undefined || y < minY) minY = y;
    if (maxY === undefined || y > maxY) maxY = y;
    if (minZ === undefined || z < minZ) minZ = z;
    if (maxZ === undefined || z > maxZ) maxZ = z;
  }

  // --- Data Processing and Storage ---

  // Second pass: calculate the mapped positions *here* (during setup),
  // now that the full min/max ranges are known.
  for (let csvRowNumber = 1; csvRowNumber < strings.length; csvRowNumber++) {
    singleRow = split(strings[csvRowNumber], ",");
    let x = float(singleRow[5]);
    let y = float(singleRow[6]);
    let z = float(singleRow[7]);
    if (isNaN(x) || isNaN(y) || isNaN(z)) continue;

    let xpos = map(x, minX, maxX, -width / 2, width / 2);
    let ypos = map(y, minY, maxY, -height / 2, height / 2);
    let zpos = map(z, minZ, maxZ, -width / 2, width / 2);
    let screenZ = map(z, minZ, maxZ, 8, 1); // Size based on distance.

    // Create an object to store the data for this stellar object.
    stellarObjects.push({
      x: xpos,
      y: ypos,
      z: zpos,
      size: screenZ,
      // You could add color, type, or other attributes here from the CSV.
    });
  }

  print("finished");
  print("X (min, max) = (" + minX + ", " + maxX + ")");
  print("Y (min, max) = (" + minY + ", " + maxY + ")");
  print("Z (min, max) = (" + minZ + ", " + maxZ + ")");
} // end of findMinMaxXYZAndProcessData
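The two-step idea — find the min/max ranges first, then map every point with the final ranges — can be illustrated in plain JavaScript. This is a standalone sketch with made-up points: mapRange(), the sample coordinates, and the canvas size are all assumptions for this check (the real sketch reads the CSV and uses p5’s map()):

```javascript
// Plain-JavaScript illustration of the two-step idea: find min/max first,
// then map every point using the final ranges. mapRange() stands in for
// p5's map(); the points and canvas size are made up for this check.
function mapRange(value, start1, stop1, start2, stop2) {
  return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
}

const width = 600, height = 400; // assumed canvas size
const points = [
  { x: -10, y: 0, z: 5 },
  { x: 40, y: 20, z: -5 },
  { x: 15, y: -8, z: 0 },
];

// Pass 1: min/max of each axis over all valid points.
const minX = Math.min(...points.map(p => p.x)), maxX = Math.max(...points.map(p => p.x));
const minY = Math.min(...points.map(p => p.y)), maxY = Math.max(...points.map(p => p.y));
const minZ = Math.min(...points.map(p => p.z)), maxZ = Math.max(...points.map(p => p.z));

// Pass 2: map into screen space; size shrinks as z (distance) grows.
const stellarObjects = points.map(p => ({
  x: mapRange(p.x, minX, maxX, -width / 2, width / 2),
  y: mapRange(p.y, minY, maxY, -height / 2, height / 2),
  z: mapRange(p.z, minZ, maxZ, -width / 2, width / 2),
  size: mapRange(p.z, minZ, maxZ, 8, 1),
}));

console.log(stellarObjects[0].x); // -300: the smallest x lands on the left edge
```

Doing both passes up front means the per-frame draw loop only iterates over the prepared stellarObjects array, which keeps the animation responsive.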

Third, my text in WebGL apparently read backwards. I made a logical guess that this was because of the view (I was behind the stellar objects rather than in front of them, or my z-axis pointed the other way). I made an attempt to fix this, but it didn’t work. So with the help of AI, I learned to reset the camera for 2D drawing (an orthographic projection with the default camera) just for the text, and the text then read left to right.

// --- 2D Overlay (Text) ---
// Disable depth test for 2D overlay
drawingContext.disable(drawingContext.DEPTH_TEST);

// Reset the camera for 2D drawing (orthographic projection)
ortho();
camera(); // Reset to default camera

// Draw the textScreen
image(textScreen, -width / 2 + 20, -height / 2 + 20);

// Re-enable depth test for 3D rendering in the next frame
drawingContext.enable(drawingContext.DEPTH_TEST);


3. Embedded Sketch

4. Reflection and Areas for Future Improvement

This project allowed me to vastly improve my 3D data visualization skills and gave me a valuable opportunity to use WebGL for 3D animation in a personal project for the first time. An area for future improvement could be animating the stellar objects in motion, as if in orbit.