Week 2 Reading Response: Casey Reas

In the realm of digital art, Casey Reas’s perspectives on ‘Order’, ‘Chaos’, and ‘Controlled Randomness’ are quite intriguing. Art has traditionally been perceived as a rigorous field demanding meticulous, well-planned execution. Nevertheless, Casey proposes to broaden the definition of “Digital Art” through his views. His use of algorithms and randomization as creative mechanisms allows patterns and themes to emerge that never could have been anticipated. He draws on historical examples from the 20th century to show how randomness can be incorporated into a creative process.

Casey’s “Process 18” is an apt example of this. Using parameters to control the extent of randomness is a subtle masterstroke, as it encodes the artist’s intentions while still leaving room for the creative factor. His work raises the question of whether “randomness” is genuinely random or a more complex form of order that has yet to be fully comprehended. This tension is also observed in other domains, such as Cloudflare’s wall of “Lava Lamps” used to seed secure data encryption, Google’s “Quantum random number generator,” which produces random numbers from quantum mechanics, and Intel’s “Ivy Bridge” chips, which generate random numbers from thermal noise.
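As a small aside on how such sources differ from ordinary pseudorandomness, here is a quick JavaScript comparison (crypto.getRandomValues is a standard Web API; this is just an illustration, not something from the reading):

// Math.random() is a deterministic pseudorandom generator:
// with the same internal seed it would reproduce the same sequence.
let pseudo = Math.random();

// crypto.getRandomValues() draws from the platform's cryptographically
// secure source, which is often seeded by physical noise (the same idea
// behind Cloudflare's lava lamps or Intel's thermal-noise generator).
let buf = new Uint32Array(1);
crypto.getRandomValues(buf);
let secure = buf[0] / 4294967296; // normalize to [0, 1)

console.log(pseudo, secure);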

I intend to integrate Casey’s perspectives into my work by employing parameters to produce unpredictable variations in my projects. These variations would help me explore a variety of aesthetics, patterns, themes, and interactions prompted by user inputs, ensuring that my projects offer a distinctive experience to every user. A minimal sketch of this idea follows below.
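Here is a minimal p5.js sketch of what I mean, assuming a single made-up parameter (chaos) that sets how far points may stray from a strict grid; 0 gives pure order, 1 gives full randomness:

let chaos = 0.4; // 0 = perfect order, 1 = full randomness (illustrative parameter)

function setup() {
  createCanvas(400, 400);
  noLoop(); // draw one composition; press any key to regenerate
}

function draw() {
  background(255);
  fill(0);
  noStroke();
  for (let x = 20; x < width; x += 20) {
    for (let y = 20; y < height; y += 20) {
      // Each dot strays from its grid position by an amount scaled by chaos
      circle(x + random(-10, 10) * chaos, y + random(-10, 10) * chaos, 4);
    }
  }
}

function keyPressed() {
  redraw(); // generate a new variation with the same parameter
}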

Week 3 Reading Response: “Art of Interactive Design”

“The Art of Interactive Design” by Chris Crawford is a fascinating book, as it explores the distinction between a truly interactive system and basic “reactive” gimmicks. Crawford characterizes interaction as a dynamic process with three fundamental elements: listening, thinking, and speaking. He emphasizes how these three elements are crucial for a meaningful interaction between the technology and the user, and elaborates on each separately. This is why, in my opinion, every single interaction between the system and the user needs to be meticulously prepared, ensuring that all three ways in which we interact with the system—listening, speaking, and thinking—are in line with the intended usage.

To implement the reading’s insights in my work, I think I might increase the degree of interaction by paying more attention to how the system “listens” and “speaks” to the user. My p5.js sketches can react to basic inputs like mouse movements or clicks right now, but they don’t necessarily provide a deliberate or meaningful response. For example, in the pixelated text sketch (Week 4: Generative Text), I could have incorporated more nuance into the ways the user can interact with my sketch. Maybe a random song plays when the user presses the “Enter” key (a rough sketch of that idea is below), or an interesting home screen shows brief instructions, and so on. This would mean placing myself in the user’s shoes and making design choices accordingly.
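The random-song idea could look something like this, assuming the p5.sound library and placeholder file names (song1.mp3, etc.):

// Requires the p5.sound library; file names below are placeholders
let songs = [];

function preload() {
  songs.push(loadSound('song1.mp3'));
  songs.push(loadSound('song2.mp3'));
  songs.push(loadSound('song3.mp3'));
}

function keyPressed() {
  if (keyCode === ENTER) {
    // Stop whatever is playing, then pick a random track
    for (let s of songs) {
      if (s.isPlaying()) s.stop();
    }
    random(songs).play(); // p5's random() picks a random array element
  }
}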

Week 4: Reading Reflection of Don Norman’s “The Design of Everyday Things”

Hey everyone! 👋

After reading Don Norman’s “The Design of Everyday Things”, I was reminded to take a look around me and find other examples of bad user experience. One of the first examples that comes to mind is from my short time at NYUAD’s dorms. As you can see, there are two blinds in the same window: one partially translucent, and one fully opaque.

NYUAD’s blinds

While the mechanism to control a particular blind is simple enough, and a common enough pattern for us to use intuitively, what is extremely confusing is which string controls which blind, as they are both identical! Also, it seemed like different windows had a different arrangement/side of strings controlling each blind! My solution would be to have the opaque one keep its metal beads, but to switch the translucent one’s beads to something like white plastic. This way, you can both see and feel the difference between the two strings, allowing you to easily choose which blind to control. Additionally, they can be in any order/side, but it must be consistent across all the windows, allowing you to develop muscle memory.

Another example, though not one affecting everyone, is USB. You might think I’m referring to the USB A style connectors, which somehow always take three tries to get in.

I didn’t know they exhibited quantum properties!

No, that has mostly been solved with USB C (ah, a godsend). Unfortunately, USB C has introduced its own complications, by becoming the universal standard!

Wait a minute, that doesn’t make sense. Isn’t that a good thing?

Well, mostly yes, but this also means that USB C’s capabilities have expanded far outside the original scope of a USB connector, allowing the transfer of not just power and files (each at different levels), but also HDMI, DisplayPort, Thunderbolt, Ethernet, and so much more. The result is an incredibly convenient ecosystem and user experience, where everything fits together seamlessly… as long as it works. The issue isn’t USB C’s reliability, but rather that since it supports so many protocols and extensions, almost all of which are optional, it can be quite hard to know which cable or device (or even which port on the same device!) supports which features—whether that’s a protocol like DisplayPort, or a certain power and speed output. As a result, it often leads to frustration: realising, only after a long time wondering why it isn’t working, that the cable in hand is inadequate for the task.

The solution I would recommend would be to support everything that makes sense, at the highest speed and power level. Although, of course I would say that, I’m the user! It’s true that going this route would considerably increase costs for manufacturers, but as Don Norman said, poor design is sometimes due to decisions aimed at reducing cost, and this is certainly an example of that. So maybe it would be worth it for manufacturers to implement, at least for our sanity.

Note:

Obviously, the capabilities supported should be within reason. For example, it makes no sense for a router, which lacks any direct graphical interface, to support HDMI or DisplayPort on its USB C ports. Also, there does exist a standard that supports almost everything, at the best power and speed levels, which makes it much easier to know what works: Thunderbolt. However, it is usually pretty expensive and isn’t offered/supported on many devices. Oh, also, USB’s naming needs to be fixed. It has been changed so many times, leading to a massive amount of confusion. Search it up.


Coming to the second part of our prompt, I believe the main way we can apply his principles is by integrating discoverability, affordances, and signifiers with deliberate thought. Namely, we should strive to adapt our work around the person, aiming for intuitive design where the user doesn’t need to read an instruction manual (or ideally, any instructions at all, perhaps by using familiar visual elements and design patterns people are already accustomed to) to interact with our work. It should instead nudge or guide the user subtly (if required), and, perhaps most importantly, provide immediate (and satisfying) feedback, as well as the space for people to make mistakes and gracefully get back on track. A minimal sketch of a few of these ideas follows below.
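Here is a small, purely illustrative p5.js sketch combining a signifier, immediate feedback, and graceful recovery (everything in it is made up for demonstration):

let done = false;

function setup() {
  createCanvas(400, 200);
}

function draw() {
  background(240);
  let over = dist(mouseX, mouseY, 200, 90) < 40;

  // Signifier: hand cursor and a highlight hint that the circle is clickable
  cursor(over ? HAND : ARROW);
  stroke(0);
  fill(done ? 'seagreen' : over ? 'lightblue' : 'white');
  circle(200, 90, 80);

  // Immediate feedback, plus an obvious way back from a mistake
  noStroke();
  fill(0);
  textAlign(CENTER);
  text(done ? 'Done! (press U to undo)' : 'Click the circle', 200, 170);
}

function mousePressed() {
  if (dist(mouseX, mouseY, 200, 90) < 40) done = true;
}

function keyPressed() {
  if (key === 'u' || key === 'U') done = false; // graceful recovery
}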

People should enjoy interacting with our work, not be afraid of or overwhelmed by it, so that it doesn’t end up like the unfortunate (but probably technically great) Italian washer-dryer combo 😉

A sad, abandoned boi, forever waiting for someone to enjoy using it 🥺

Week 4 Assignment: Generative Text

Concept

This assignment is mainly based on the text patterns of early games like “Space Invaders” or “Tetris,” where text and images were made of square pixels. The way the blocks move and respond to user input recalls the jumpy, random effects of old games whose graphics were driven by simple hardware.

The key interactions involve entering any word you want to display; pressing “Generate” forms the word in the grid. The grid can be toggled on/off with a separate “Toggle Grid” button. When the mouse hovers over the text, the blocks near the cursor change color and become more varied. Holding down the “Enter” key causes the blocks representing the text to scatter randomly, simulating jumpy movements. As soon as the key is let go, the boxes return to their original positions.

Sketch

Code I am proud of

// Setup Function
function setup() {
  // Create the canvas
  createCanvas(600, 400);
  cols = floor(width / gridSize);
  rows = floor(height / gridSize);
  
  // Taking the input and generating the other buttons
  input = createInput();
  input.position(20, height + 20);
  button = createButton('Generate');
  button.position(input.x + input.width + 10, height + 20);
  button.mousePressed(startAnimation);

  toggleButton = createButton('Toggle Grid');
  toggleButton.position(button.x + button.width + 10, height + 20);
  toggleButton.mousePressed(toggleGrid);

  textSize(fontSize);
  textAlign(CENTER, CENTER);

  pg = createGraphics(width, height);
  pg.pixelDensity(1);
}

// Function to show/hide the grid
function toggleGrid() {
  showGrid = !showGrid;
}

// Draw function
function draw() {
  // Smooth background blending
  bgColor.r = lerp(bgColor.r, targetBgColor.r, 0.01);
  bgColor.g = lerp(bgColor.g, targetBgColor.g, 0.01);
  bgColor.b = lerp(bgColor.b, targetBgColor.b, 0.01);
  background(bgColor.r, bgColor.g, bgColor.b);

  if (showGrid) {
    drawGrid();
  }

  if (animating && word.length > 0) {
    fillLetterPixels();
  }

  for (let i = 0; i < pixelIndex; i++) {
    let px = letterPixels[i].x * gridSize;
    let py = letterPixels[i].y * gridSize;
    
    // Jumpy explosion effect
    if (exploding) {
      // Randomize position for the explosion effect
      px += random(-10, 10);
      py += random(-10, 10);
    } 
    else {
      // Smooth return to original positions
      px = lerp(px, originalPositions[i].x * gridSize, 0.1);
      py = lerp(py, originalPositions[i].y * gridSize, 0.1);
    }

    let d = dist(mouseX, mouseY, px + gridSize / 2, py + gridSize / 2);
    if (d < gridSize * 4) {
      let blockColor = blockColors[i];
      fill(blockColor.r, blockColor.g, blockColor.b, map(d, 0, gridSize * 4, 255, 50));
    } 
    else {
      fill(0);
    }

    noStroke();
    rect(px, py, gridSize, gridSize);

    if (random(1) < sparkleChance) {
      fill(random(255), random(255), random(255));
      rect(px, py, gridSize, gridSize);
    }
  }
}
  • The setup() function sets up the canvas, creates the graphics buffer, and initializes the canvas color to white.
  • The draw() function is the main loop that runs continuously. It controls the background color blending, the sparkle effect, the jumpy explosion effect, tracks which keys are pressed, checks whether the grid is toggled on/off, and so on.
// Startup animation after the generate button is clicked
function startAnimation() {
  word = input.value();
  pixelIndex = 0;
  animating = true;
  getLetterPixels();

  blockColors = letterPixels.map(() => {
    return { r: random(255), g: random(255), b: random(255) };
  });

  originalPositions = letterPixels.map((p) => ({ ...p })); // Store original positions

  targetBgColor = {
    r: random(255),
    g: random(255),
    b: random(255)
  };
}

  • This function is responsible for the short animation that builds up the letters of the word when the “Generate” button is pressed.

function getLetterPixels() {
  letterPixels = [];

  // Use the off-screen buffer (pg) to render the text
  pg.background(255);
  pg.textSize(fontSize);
  pg.textAlign(CENTER, CENTER);
  pg.fill(0);
  pg.text(word, pg.width / 2, pg.height / 2);

  pg.loadPixels();
  for (let y = 0; y < pg.height; y += gridSize) {
    for (let x = 0; x < pg.width; x += gridSize) {
      let index = (x + y * pg.width) * 4;
      let r = pg.pixels[index];
      if (r < 128) {
        let gridX = floor(x / gridSize);
        let gridY = floor(y / gridSize);
        letterPixels.push({ x: gridX, y: gridY });
      }
    }
  }
}
  • This function uses the off-screen buffer (pg) to render the input text and identifies the grid positions where the pixels are black (indicating a letter).
function fillLetterPixels() {
  if (pixelIndex < letterPixels.length) {
    let px = letterPixels[pixelIndex].x * gridSize;
    let py = letterPixels[pixelIndex].y * gridSize;
    fill(0);
    noStroke();
    rect(px, py, gridSize, gridSize);
    pixelIndex++;
  } 
  else {
    animating = false;
  }
}
  • This function simulates the pixels that form the letters. It gradually reveals the blocks one by one.
// Run the function if ENTER key is pressed
function keyPressed() {
  if (keyCode === ENTER) {
    exploding = true; // Start the explosion effect
  }
}

// Run the function if ENTER key is released
function keyReleased() {
  if (keyCode === ENTER) {
    exploding = false; // Stop the explosion and return to original positions
  }
}
  • The keyPressed() and keyReleased() handlers start and stop the jumpy explosion effect. When the Enter key is pressed, the exploding flag is set to true, causing the blocks to scatter randomly; when it is released, the flag is set to false and the blocks return to their original positions.

Challenges/ Future Improvements

Some of the major challenges I had faced were:

  • Creating the random “jumpy” explosion effect and making sure that the boxes making up the letters settle back in place after the key is released.
  • The “sparkle” effect was another challenge as it required creating intermittent flashes of random color across specific blocks.
  • The color change from the hovering mouse was another. However, by measuring the distance (dist()) between the mouse and each block’s center, the program smoothly blends colors based on proximity to the cursor, giving the effect of interacting with the text.

However, the code can be further improved by introducing some collision between blocks, adding different layers to create a 3D pixelated-text feel, and so on. A rough sketch of where collision might start is below.
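This is only a starting point, using an axis-aligned overlap test; the helper name and the blocks array are illustrative, not from my code:

// True if two gridSize x gridSize blocks overlap (axis-aligned test)
function blocksCollide(a, b, gridSize) {
  return abs(a.x - b.x) < gridSize && abs(a.y - b.y) < gridSize;
}

// Possible use during the explosion: nudge overlapping blocks apart
// for (let i = 0; i < blocks.length; i++) {
//   for (let j = i + 1; j < blocks.length; j++) {
//     if (blocksCollide(blocks[i], blocks[j], gridSize)) {
//       blocks[j].x += random(-2, 2);
//       blocks[j].y += random(-2, 2);
//     }
//   }
// }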

Reading Reflection: Week 04

From the readings, when I came across the issues with door handling, I was immediately reminded of the doors at NYUAD, especially in D2. They’re supposed to be semi-automatic with sensors, but I often find myself having to pull them with all my strength just to get them to open. It’s incredibly frustrating, especially when I’m carrying something heavy or have my hands full.

What really drives me crazy is that we live in a world that talks about inclusivity and accessibility, yet my own university isn’t fully disability-friendly. I can’t understand how a person in a wheelchair can access the dining halls when the doors require someone else to pull them open. Drawing from Don Norman’s ideas, I really connect with his emphasis on human-centered design to create better usability. He points out that “Engineers are trained to think logically,” which is something I’ve noticed over the years. However, I believe there’s been a shift in recent times.

As someone aspiring to be an engineer, I resonate with this statement. We often focus more on mathematical modeling rather than on accessibility and the psychological aspects of human behavior. Many engineering projects—like billboards and airport dashboards—do help by displaying information clearly, but they often overlook people with dyslexia. For example, using fonts like Sansa can make reading difficult for them.

Norman also talks about cognitive load, which refers to the mental effort required to use a device. In our fast-paced world, having a high cognitive load can be overwhelming. Take, for instance, a multifunctional printer with tons of buttons and features. If I have to remember multiple steps just to print a document, it can be exhausting. A better design would simplify the process, reducing the steps and making controls more intuitive. This aligns perfectly with Norman’s argument that good design should minimize cognitive load, letting users focus on their tasks instead of trying to figure out how to use the device.

Overall, this reading has been the most enlightening for me in the past four weeks.

Week 04: Data Visualisation

Inspiration:

As someone who believes in the transformative power of data visualization, I see it as an essential tool for making complex information accessible and understandable. Visualizing data allows us to grasp patterns, trends, and insights that would otherwise remain hidden in raw numbers. In the context of healthcare, especially with something as critical as blood pressure, effective visualization can significantly impact how we interpret and respond to health data.

One of my inspirations for this project comes from the recent work of Giorgia Lupi, a renowned information designer known for her innovative approach to data visualization. I loved how she turned data visualisation into an artistic human connector, engaging us with stories, emotions, and information through data.

Concept: The blood pressure dataset, originally from Stata, serves as a crucial resource for understanding the impacts of various factors on blood pressure across different demographic groups. I took the dataset from the GitHub Open Source for Data Science account. The idea came to mind after remembering how blood pressure runs in families and how we sometimes don’t pay attention to it. Though the dataset is pretty small and not too complex, I enjoyed creating a visualisation for the first time.

Codes which I’m proud of:

I had trouble managing datasets, so I used the console.log technique we learned in class to check whether the CSV data loads correctly before any processing occurs. This is crucial because if the data is not available, the subsequent calculations will fail. Since I created two charts (the dataset had several columns), I added error handling for each chart.

function preload() {
    data = loadTable('blood_pressure.csv', 'csv', 'header');
}

// The check that the else branch belongs to (run after loading, e.g. in setup()):
if (data && data.getRowCount() > 0) {
    console.log("Rows loaded:", data.getRowCount()); // Confirm the CSV loaded
} else {
    console.error("Data not loaded properly"); // Error handling if data isn't loaded
}

I couldn’t fit all the data in one chart. To make it easy to understand, I created two charts that are connected through their parameters. Here is the snippet for the age-group chart, with the layout kept clear; it required careful organization of the data and chart settings. The sex chart follows the same pattern.

function createAgeGroupChart() {
    const ageGroups = {}; // Object to hold blood pressure data by age group
    
    // Ensure the data is loaded
    if (data) {
        // Iterate through each row of data
        data.rows.forEach(row => {
            const agegrp = row.getString('agegrp'); // Get age group
            const bp_before = row.getNum('bp_before'); // Get blood pressure before
            const bp_after = row.getNum('bp_after'); // Get blood pressure after

            // Initialize arrays for this age group if it doesn't exist
            if (!ageGroups[agegrp]) {
                ageGroups[agegrp] = { before: [], after: [] };
            }
            // Add blood pressure data to the respective arrays
            ageGroups[agegrp].before.push(bp_before);
            ageGroups[agegrp].after.push(bp_after);
        });

        // Extract labels and average values for the chart
        const labels = Object.keys(ageGroups);
        const avgBefore = labels.map(label => avg(ageGroups[label].before)); // Average BP before
        const avgAfter = labels.map(label => avg(ageGroups[label].after)); // Average BP after

        // Get context for the first chart
        const ctx1 = document.getElementById('chart1').getContext('2d');
        chart1 = new Chart(ctx1, {
            type: 'bar', // Bar chart type
            data: {
                labels: labels, // X-axis labels (age groups)
                datasets: [
                    {
                        label: 'BP Before', // Dataset label
                        data: avgBefore, // Data for BP before
                        backgroundColor: 'rgba(75, 192, 192, 0.6)', // Fill color
                        borderColor: 'rgba(75, 192, 192, 1)', // Border color
                        borderWidth: 1 // Border width
                    },
                    {
                        label: 'BP After', // Dataset label
                        data: avgAfter, // Data for BP after
                        backgroundColor: 'rgba(153, 102, 255, 0.6)', // Fill color
                        borderColor: 'rgba(153, 102, 255, 1)', // Border color
                        borderWidth: 1 // Border width
                    }
                ]
            }
            // options (responsive scaling, axis titles) are shown further below
        });
    }
}

Similarly, I created another chart for the averages grouped by sex. Both charts use this small helper to compute means; a sketch of the sex chart is shown after it.

function avg(array) {
    return array.reduce((a, b) => a + b, 0) / array.length;
}
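
The by-sex chart itself isn’t shown above; here is a minimal sketch of how it mirrors createAgeGroupChart, assuming the CSV’s column is named 'sex', the page has a second canvas with id 'chart2', and chart2 is declared globally like chart1:

function createSexGroupChart() {
    const sexGroups = {}; // Blood pressure data keyed by sex

    if (data) {
        data.rows.forEach(row => {
            const sex = row.getString('sex'); // Assumed column name
            if (!sexGroups[sex]) {
                sexGroups[sex] = { before: [], after: [] };
            }
            sexGroups[sex].before.push(row.getNum('bp_before'));
            sexGroups[sex].after.push(row.getNum('bp_after'));
        });

        const labels = Object.keys(sexGroups);
        const ctx2 = document.getElementById('chart2').getContext('2d');
        chart2 = new Chart(ctx2, {
            type: 'bar',
            data: {
                labels: labels,
                datasets: [
                    { label: 'BP Before', data: labels.map(l => avg(sexGroups[l].before)) },
                    { label: 'BP After', data: labels.map(l => avg(sexGroups[l].after)) }
                ]
            },
            options: {
                responsive: true,
                scales: { y: { beginAtZero: true, title: { display: true, text: 'Blood Pressure' } } }
            }
        });
    }
}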

Lastly, I tried to ensure that the charts adjust to different screen sizes while keeping everything readable and visually appealing. I used this part of the code to implement it, drawing inspiration from the Bar_Chart_in_P5.js video.

options: {
    responsive: true,
    scales: {
        y: { beginAtZero: true, title: { display: true, text: 'Blood Pressure' } },
        x: { title: { display: true, text: 'Age Group' } }
    }
}


p5.js Sketch:

Challenges and further improvements: 

The main challenge was finding a proper dataset. I tried different datasets from Google Dataset Search, Kaggle, Data.gov, etc., but they would not render as charts; sometimes errors came up, or the dataset was too big for this project. I asked ChatGPT for good dataset resources, but it didn’t give me any dataset or link; it suggested checking GitHub, so I took that idea and searched GitHub.


Week 4 Reading Response: Ch 1, “The Design of Everyday Things”

In Chapter 1 of “The Design of Everyday Things,” titled “The Psychopathology of Everyday Things,” Norman emphasizes how everyday objects can confuse and fail in their design because they cannot be understood by their users. Throughout the chapter, he introduces key concepts like:

  • “Discoverability,” which is whether users can instinctively figure out how to work with something just by seeing it, and
  • “Feedback,” which is some kind of stimulus telling the user that the task they intended to perform has been carried out successfully or has failed partway through.

He gives the example of doors (like the ‘Norman Doors’, named after himself) which leave people puzzling over how to use them. His main argument, which I agree with, is that designs should be “Human-Centered,” ensuring that an object is easily usable for everyone, no matter how simple or complex it is.

“Human-Centered Design” is a term I had previously encountered when I took the J-term class “Interactive Media in the World”. I learnt that this principle can easily be applied to various situations, ranging from how things are arranged (like electric tools in a lab) to the flow of traffic in a city (to prevent accidents), smart home technologies, human-computer interactions, and so on.

If I had to think of a product that annoys me a lot, it would be badly designed “Electrical Adapters”. There is a particular one which I always keep with me as a reminder.

Similar products with the same design can be found on Amazon and elsewhere. However, it is quite clear that while mobile devices can be charged without issue, the problem arises with laptop chargers or desktop power cords (especially in the UAE). There is not enough distance between the laptop charging cord and the wall, rendering the adapter unusable for devices with bigger plugs.

In terms of interactive media, I believe Norman’s principles can significantly improve user experiences. When designing websites, apps, or interactive displays, discoverability should be a primary goal, helping users immediately understand how to engage with the system. The user can be led through the interface with obvious signifiers like labels or icons. Feedback is just as important when interacting with digital elements: a button’s color changing when clicked, or a sound signaling the completion of a task, are visual and auditory signals that boost user confidence and enhance the overall experience. A small sketch of that kind of feedback follows below.
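For example, a tiny p5.js sketch of click feedback combining a color flash with a confirmation beep (this needs the p5.sound library; everything here is illustrative):

let osc;
let flash = 0;

function setup() {
  createCanvas(300, 150);
  osc = new p5.Oscillator('sine');
}

function draw() {
  background(250);
  // Visual feedback: the button flashes briefly after a click
  fill(flash > 0 ? 'gold' : 'steelblue');
  rect(100, 50, 100, 50, 8);
  if (flash > 0) flash--;
}

function mousePressed() {
  if (mouseX > 100 && mouseX < 200 && mouseY > 50 && mouseY < 100) {
    flash = 15; // roughly a quarter second at 60 fps
    // Auditory feedback: a short confirmation beep
    osc.freq(660);
    osc.start();
    osc.amp(0.2, 0.01);
    osc.stop(0.15); // stop 0.15 seconds from now
  }
}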

Week 4 – Reading Reflection

Don Norman’s “The Design of Everyday Things” highlights how poor design impacts our daily interactions with common objects. In the first chapter, he introduces the concept of discoverability, which emphasizes that people should be able to figure out how something works simply by looking at it. Norman explains that objects like doors or light switches can become confusing when they lack clear visual cues, often leading users to make mistakes. The idea of feedback is also essential, ensuring users receive confirmation that their actions—such as opening a door or turning off a light—are completed successfully. Norman’s focus is on making designs intuitive and straightforward, so users can easily understand them without needing instructions.

One frustration I experience, which Norman’s work sheds light on, is how stressful the use of modern smart devices like home assistants or thermostats can be. These devices often overwhelm users with too many options and hidden functions, making them difficult to use. Applying Norman’s principles of signifiers and mapping could make these systems more user-friendly. For example, clearer labels, icons, or simple gestures could improve usability. Additionally, feedback through visual or sound cues would help users feel confident that they’ve completed a task correctly. Norman’s focus on human-centered design highlights the importance of keeping the user in mind, ensuring that products are approachable and simple to navigate. This approach could significantly improve our interactions with modern technology, reducing frustration and making these tools more accessible.

Week 4 – Ghost of Words

Intro

This week, I adapted my work from week 2 – the dotted silhouette driven by webcam input – to create a representation of the figure reflected in the webcam, as well as the soul behind it in the background. It seemed to me that the week 2 piece lacked a message or meaning: what is the point of mirroring the webcam with dots? Then, when it came to this week’s text generation, the answer appeared to me in the combination of text and my existing mechanism – the mirroring mechanism symbolizes the entity, the phantom, the ghost, the creator, as well as the user, while the poems floating across the canvas reflect a piece of my soul. By doing so, the ghost of me (or you) becomes the pathway to discovering that piece of soul, adding incentive to the interaction.

Process

I started simply by replacing the dot drawing in my week 2 piece with text drawing – and obviously, the flashing of words could not carry any further meaning; it only blinded the user even more than the dots did, as we intuitively try to read whatever words are presented in front of us.

Therefore, I tried another approach: displaying the poem’s lines in the background and letting the probability pixel matrix act as an alpha-value matrix, this time overlaid on the text, resulting in the ghostly effect.

In the preload function, I ensure that all external resources are loaded before the sketch runs. Using loadTable, I import lines of text from textLines.csv, which will be used to generate the floating texts dynamically.

function preload() {
  // Load the CSV file
  textLines = loadTable('textLines.csv', 'csv', 'header'); // Adjust the path and options as needed
}

This time, I directly use the grayscale value as the alpha value, since they share the same range:

function drawAlphaFilter() {
  noStroke();
  
  // Iterate through each cell in the grid
  for (let y = 0; y < k; y++) {
    for (let x = 0; x < j; x++) {
      let index = x + y * j;
      let grayValue = pixelArray[index];
      
      // Calculate alpha value
      // Ensure alphaValue is within 0-250 for better visibility
      let alphaValue = constrain(grayValue, 0, 250); 
      
      // Set fill color to background color with the calculated alpha for overlay effect
      fill(17, 38, 56, alphaValue);
      
      // Calculate the position and size of each rectangle
      let rectWidth = windowWidth / j;
      let rectHeight = windowHeight / k;
      let rectX = x * rectWidth;
      let rectY = y * rectHeight;
      
      rect(rectX, rectY, rectWidth, rectHeight);
    }
  }
}

The RGB value used in this product is extracted from my personal website: Sloth’s Slumber | Xiaotian Fan’s Collection (sloth-slumber.com).

Then, the floating texts are managed through both class and helper functions, including:

function updateFloatingTexts() {
  // Update and display existing floating texts
  for (let i = floatingTexts.length - 1; i >= 0; i--) {
    let ft = floatingTexts[i];
    ft.update();
    ft.display();
    
    // Remove if off-screen
    if (ft.isOffScreen()) {
      floatingTexts.splice(i, 1);
      
      // Also remove from its slot
      let s = ft.slot;
      slots[s] = null; // Mark the slot as free
    }
  }
  
  // Iterate through each slot to manage floating texts
  for (let s = 0; s < totalSlots; s++) {
    if (slots[s] === null) {
      // If the slot is free, add a new floating text
      let newText = getNextText();
      if (newText) {
        let ft = new FloatingText(newText, s);
        floatingTexts.push(ft);
        slots[s] = ft; // Assign the floating text to the slot
      }
    } else {
      // If the slot is occupied, check if the tail has entered the screen
      let lastText = slots[s];
      
      if (lastText.direction === 'ltr') { // Left-to-Right
        // Check if the tail has entered the screen (x + width >= 0)
        if (lastText.x + lastText.getTextWidth() >= 0) {
          // Safe to add a new floating text
          let newText = getNextText();
          if (newText) {
            let ft = new FloatingText(newText, s);
            floatingTexts.push(ft);
            slots[s] = ft; // Replace the old floating text with the new one
          }
        }
      } else { // Right-to-Left
        // Check if the tail has entered the screen (x - width <= windowWidth)
        if (lastText.x - lastText.getTextWidth() <= windowWidth) {
          // Safe to add a new floating text
          let newText = getNextText();
          if (newText) {
            let ft = new FloatingText(newText, s);
            floatingTexts.push(ft);
            slots[s] = ft; // Replace the old floating text with the new one
          }
        }
      }
    }
  }
}

Another important function concatenates lines so that each floating text spans the full windowWidth:

function getNextText() {
  // Reset index if end is reached
  if (currentLineIndex >= textLines.getRowCount()) {
    currentLineIndex = 0; // Reset to start
  }
  
  let combinedText = '';
  let estimatedWidth = 0;
  let tempIndex = currentLineIndex;
  let concatenationAttempts = 0;
  let maxAttempts = textLines.getRowCount(); // Prevent infinite loops
  
  // Loop to concatenate lines until the combined text is sufficiently long
  while (estimatedWidth < windowWidth * TEXT_MULTIPLIER && concatenationAttempts < maxAttempts) {
    let textLine = textLines.getString(tempIndex, 0);
    if (!textLine) break; // If no more lines available
    
    combinedText += (combinedText.length > 0 ? ' ' : '') + textLine;
    tempIndex++;
    
    // Reset if at the end of the table
    if (tempIndex >= textLines.getRowCount()) {
      tempIndex = 0;
    }
    
    // Estimate text width using p5.js's textWidth
    textSize(24); // Set a default size for estimation
    estimatedWidth = textWidth(combinedText);
    
    concatenationAttempts++;
    
    // Break if the same index is on loop to prevent infinite concatenation
    if (tempIndex === currentLineIndex) break;
  }
  
  // Update the currentLineIndex to tempIndex
  currentLineIndex = tempIndex;
  
  return combinedText;
}

Finally, since this sketch uses a full-window canvas, I added a resize handler to respond to window resizing:

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
  
  // Update Y positions of floating texts based on new window size
  for (let ft of floatingTexts) {
    let padding = 5; // Padding from top and bottom
    ft.y = map(ft.slot, 0, totalSlots - 1, padding, windowHeight - padding);
  }
}

To Do & Reflection

While this piece germinated from my previous project, I believe it has the potential to be further polished, including varying the text aesthetics, making the text responsive to the webcam (or to audio level), etc. A small sketch of the audio idea is below.
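For the audio-level idea, a minimal sketch of how the microphone could drive the text, assuming the p5.sound library (the mapping range is a guess):

// Requires p5.sound; the browser will ask for microphone permission
let mic;

function setup() {
  createCanvas(windowWidth, windowHeight);
  mic = new p5.AudioIn();
  mic.start();
}

function draw() {
  background(17, 38, 56);
  let level = mic.getLevel(); // current amplitude, roughly 0 to 1
  textSize(map(level, 0, 0.3, 16, 96, true)); // louder voice, larger text
  fill(255);
  textAlign(CENTER, CENTER);
  text('a piece of my soul', width / 2, height / 2);
}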

On the other hand, I would say that this piece is indeed an improvement over week 2, as I have started to incorporate my own message and character into the code instead of creating fancy (or not) demos.

Assignment 4 – The Lyrical Video

Concept

For this project, I wanted to explore something simple yet engaging with text. My initial idea involved allowing user input, where text would fall to the ground. While that was a good starting point, I felt it needed more interactivity. Then, inspiration struck while I was listening to music: why not create a lyric video? And that’s how this project took shape – a lyric video with the text fading in and out, synchronized to the music playing in the background.


Code I’m Particularly Proud Of

In this simple project, the code I’m most proud of is the part that handles the fade-out effect of the text. Normally, this would require a loop, but since the draw() function in p5.js acts as a natural loop, I managed it with a simple if statement combined with a counter that gradually changes the opacity of the text until it fully fades out. Here’s the core code snippet:

// Display the current line, fading out as fadeValue decreases
  fill(255, fadeValue);
  text(lyrics[currentLine], width / 2, lineY); // Display the current line of lyrics at the center of the canvas

  // Gradually fade the text out by decreasing its opacity
  fadeValue -= 1;

  // When the text is fully faded, move to the next line
  if (fadeValue <= 0) 
  {
    currentLine = (currentLine + 1) % lyrics.length; // Move to the next line, looping back to the start if at the end
    
    currentColor = (currentColor + 1) % colors.length; // Change to the next background color, looping through the array
    
    fadeValue = 255; // Reset the fade value to fully opaque
  }


Final Product

The final product is available to experience, and you can interact with it by pressing the mouse button to move the lyrics forward (a sketch of that handler is below). Otherwise, you can simply watch it as a lyric video with music in the background. Just a heads-up: the video includes audio, so be mindful before playing it.
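The click handler itself isn’t shown above; a one-line sketch of how it could work with the same variables:

// Advancing on click: force the current fade to finish;
// draw() then moves to the next line and resets fadeValue
function mousePressed() {
  fadeValue = 0;
}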


Final Thoughts and Reflection

Working on this project was both intriguing and challenging. It was a lesson in embracing simplicity, as my initial ideas were quite ambitious. However, I realized that there’s a unique power in crafting something straightforward yet effective. While I’m pleased with the outcome, there are a few areas for improvement: I would like to synchronize the lyrics with the music more precisely, enhance the background visuals, and add more interactive elements to make it more engaging. This project has sparked new ideas, and I look forward to applying these insights to something even bigger and better. Stay tuned!