Week 4 – Loading Data, Displaying Text – Elyazia Abbas

Concept:

The goal of this sketch is to generate daily affirmations in the form of “I am …” statements. Each click on the canvas re-runs the draw function to produce a new affirmation assembled from a CSV file of positive words. The affirmations are placed on top of a sunset-inspired Perlin noise background built with snippets of code from the Decoding Nature class.

Daily Doses of Positivity are the Best Prevention for the Blues!

Code:

// global indices that map each column of a CSV row to its spot in the phrase
let SUBJECT = 0;
let QUALITY = 1;
let VERB = 2;
let ACTION = 3;
let PLACE = 4;




let strings = []; // this array holds all the lines from the CSV file
let zoff = 0; // this is the third dimension we are using for the Perlin noise


let sunsetPalettes = [ // these are the color palettes I use to create the sunset theme
  ["#FF9E80", "#FF6E40", "#FF3D00", "#DD2C00"], 
  ["#FFB74D", "#FF8A65", "#F06292", "#BA68C8"], 
  ["#FFD180", "#FFAB40", "#FF7043", "#8E24AA"], 
  ["#FFE082", "#FFB74D", "#F48FB1", "#9575CD"]  
];

let currentPalette;

function preload(){ // preload the CSV file holding the words before setup runs
  strings = loadStrings("words.csv"); 
}

function setup() { // set the canvas size and the font for the text
  createCanvas(600, 400);
  textFont("Georgia");
  textAlign(CENTER, CENTER);
  noLoop();
  pickPalette();
}

function draw() {
  background(255);
  noStroke();
  // this nested for loop steps across the canvas in 20-pixel increments, horizontally and vertically
  for (let y = 0; y < height; y += 20) {
    for (let x = 0; x < width; x += 20) {
      let n = noise(x * 0.01, y * 0.01, zoff); // sample the noise function at this position and store the result in n
      let c = color(random(currentPalette)); // pick a random sunset color
      c.setAlpha(90); // add transparency using alpha
      fill(c); // fill with the random color
      ellipse(x, y, n * 40, n * 40); // draw an ellipse sized by the noise value
    }
  }
  zoff += 0.02; // advance the noise dimension after each draw

  
  
  // randomly choose a non-empty line to work with
  let line = "";
  do {
    line = strings[int(random(strings.length))]; // pick a random line
  } while (line.trim().length === 0); // repeat until the line is not empty

  let row = split(line, ','); // split the chosen line into an array of 5 tokens

  let subject = row[SUBJECT];
  let quality = row[QUALITY];
  let verb = row[VERB];
  let action = row[ACTION];
  let place = row[PLACE];


  
  
  
  
  fill(30);
  textSize(32);
  text(subject + " " + quality, width / 2, height / 2 - 40); // first line of the affirmation

  textSize(22);
  text(subject.replace("I am","I") + " " + verb + " to " + action + " in the " + place, width / 2, height / 2 + 20); // second line of the affirmation

  textSize(14); // footer prompting the user to click
  fill(60);
  text("Click anywhere on the screen for a new affirmation", width / 2, height - 30);
}

function mouseClicked() {
  pickPalette(); // pick a new color palette
  redraw(); // call the draw function again
}

function pickPalette() {
  currentPalette = random(sunsetPalettes); // pick a random palette from the list
}
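For reference, each line of words.csv is expected to hold five comma-separated tokens in the order SUBJECT, QUALITY, VERB, ACTION, PLACE. A hypothetical row (not taken from my actual file) could look like this:

I am,capable,choose,grow,morning

With the code above, that row would produce “I am capable” on the first line and “I choose to grow in the morning” on the second.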

Embedded Sketch:

Conclusion: 

In the future, I’d like to refine the grammar for more natural phrases, and perhaps extend the array that holds the sentence parts so we can get more complex phrases. I also want to let the Perlin noise flow continuously and possibly vary the colors in future sketches.

 

Creative Reading Response:

  • What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?

One thing that always drives me crazy is traditional TV remotes. There are so many small buttons that all look the same, and most of them I never even touch. When I just want to change the volume or switch channels, I end up pressing the wrong thing and I’m stuck in some random settings menu. It feels like the design makes everything equally important, when really most people only use a few basic functions. If remotes had bigger, clearly marked buttons for the essentials and maybe hid the less-used ones, plus some simple feedback like a backlight, they’d be so much easier to use.

  • How can you apply some of the author’s principles of design to interactive media?

Norman’s design principles fit really naturally into interactive media because the whole field is about making technology feel intuitive and meaningful. Take affordances and signifiers, for example—these are really important when we design an interface. If a button actually looks like it can be clicked, or an arrow or sign shows you that you should swipe, users don’t have trouble guessing what to do next. In projects like games, apps, or interactive installations, these little cues make the experience smooth instead of frustrating. It’s basically about letting the design speak to the user so they can focus on enjoying the content rather than fighting with the controls. When people don’t have to think too hard about how to use something, they can actually connect with the creative side of the project.

 

 

Week 4 Generative Text

Concept:
For this week’s coding assignment, I wanted to experiment with something more on the creative side. I decided to work on generative text, with the idea of making the appearance of words reflect different moods. My goal was to have the text evoke an emotion not just through what it says, but how it looks and behaves on the screen.

The concept I explored was simple: each time a user clicks, the mood changes, and the text morphs to visually represent that emotion. To achieve this, I combined techniques we covered in class, like sine functions and noise. I also experimented with movement mechanics, such as vertical speed (gravity), bouncing off edges, and the dynamic effect of writing a word.
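As a rough sketch of that click interaction, the mood could be cycled in mousePressed() along these lines (the mood list and exact structure here are assumptions for illustration, not my final code):

// hypothetical mood list; the actual sketch may use different names
let moods = ["excited", "calm", "sad", "angry"];
let moodIndex = 0;
let current_mood = moods[0];
let excitedInitialized = false;

function mousePressed() {
  moodIndex = (moodIndex + 1) % moods.length; // cycle to the next mood
  current_mood = moods[moodIndex];
  excitedInitialized = false; // rebuild the particle text for the new mood
}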

Code I’m most proud of:

if (!excitedInitialized) {
      // convert text into points (vector outlines of letters)
      points = font.textToPoints(current_mood, width / 2, height / 2, 60, {
        sampleFactor: 0.16,       // density of points
        simplifyThreshold: 0      // no simplification
      });

      // create particles starting at random positions moving toward text points
      particles = [];
      for (let p of points) {
        particles.push({
          x: random(width),
          y: random(height),
          targetX: p.x,
          targetY: p.y
        });
      }
      excitedInitialized = true; // mark as initialized
    }

    // animate particles moving toward their target text points
    for (let p of particles) {
      p.x = lerp(p.x, p.targetX, 0.05); // smooth movement toward targetX
      p.y = lerp(p.y, p.targetY, 0.05); // smooth movement toward targetY

      ellipse(p.x, p.y, 4, 4); // draw particle as a bubble
    }

This snippet stands out to me because it uses two functions I learned during this assignment: textToPoints and lerp.

textToPoints breaks down a word into a set of points based on the chosen font, giving me the flexibility to manipulate text at the particle level.

lerp (linear interpolation) was the key to achieving the effect I wanted. It allowed particles to smoothly move from random positions on the canvas to their designated target points. As a result, the word takes shape out of multiple “bubbles,” giving the text an energetic, almost playful quality.

This was exactly the kind of interaction I wanted. The text doesn’t just appear, it comes alive.

Future Improvements:
While I’m happy with how the project turned out, there’s still plenty of room to push it further. A key next step would be to make the generative text more interactive, so that it doesn’t just display moods but actively responds to the user. I imagine scenarios where hovering over the text could cause particles to scatter and fall apart, or where words might sparkle, ripple, or shift dynamically in response to movement on the screen.

Week 4: Data Visualization

Concept

For this project, I decided to make a data visualization using a dataset of Disney movies. Instead of showing the numbers in a typical bar chart, I wanted something more fun. I represented each genre as a balloon: the bigger the balloon, the more money that genre grossed overall. I also color-coded the balloons: pink for drama, purple for musical, green for adventure, and yellow for comedy, so each genre is easy to distinguish.

Favorite Code

I’m especially happy with the part where I drew the balloons. The shapes and colors turned out really cute, and combining the ellipse with the string image really tied it all together! This section of the code really brought the visualization to life:

//visuals
 stroke("rgba(255,255,255,0.75)");
 imageMode(CENTER);

 //drama genre balloon
 fill("#F8BCBC");
 image(squiggle, 68, 150, 20, 110);
 ellipse(70, 110, sDrama, sDrama);

 //musical genre balloon
 fill("#9194CF");
 image(squiggle, 130, 150, 20, 110);
 ellipse(130, 110, sMusical, sMusical);

 //adventure genre balloon
 fill("#66B6B6");
 image(squiggle, 225, 180, 20, 120);
 ellipse(225, 110, sAdventure, sAdventure);

 //comedy genre balloon
 fill("#EBF1AB");
 image(squiggle, 320, 150, 20, 110);
 ellipse(320, 110, sComedy, sComedy);

Here’s my sketch:

Reflection and Future Improvements

One challenge I had was calculating the total gross for each genre. I couldn’t figure out how to get the program to add everything automatically from the dataset, so I just did the math myself and typed in the totals. I know there’s probably a way to loop through the data and calculate those sums directly, but I couldn’t figure out how to write it.
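For what it’s worth, here is a minimal sketch of how those sums could be computed, assuming the dataset is loaded with loadTable() and has columns named "genre" and "total_gross" (the file name and column names here are assumptions, not the actual dataset):

let table;
let totals = {}; // genre -> summed gross

function preload() {
  // "disney_movies.csv" is a placeholder name for the dataset file
  table = loadTable("disney_movies.csv", "csv", "header");
}

function setup() {
  createCanvas(400, 300);
  // loop over every row and accumulate the gross per genre
  for (let r = 0; r < table.getRowCount(); r++) {
    let genre = table.getString(r, "genre");
    let gross = table.getNum(r, "total_gross");
    totals[genre] = (totals[genre] || 0) + gross;
  }
  print(totals); // the balloon sizes could then be mapped from these sums
}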

For future improvements, I’d like to fix that so the totals are generated by the code itself. I also think it would be fun to add more genres (with more balloons), or even animate the balloons so they float around the screen like real ones. That would make the visualization more dynamic and interactive.

 

Week 4: Reflection

Something That Drives Me Crazy
One thing that really frustrates me is when digital buttons or interactive elements look clickable but actually aren’t. For example, I’ll see a button that’s styled like it should do something, click it, and… nothing happens. Sometimes I double-check, hover around, or even click other areas, thinking maybe I missed something. It’s confusing, annoying, and honestly breaks the flow of using the site or app. I’ve seen it in everything from websites to apps, and even small projects can suffer if users aren’t given clear cues. It’s such a simple thing to fix, too. Adding hover effects, subtle animations, shadows, or clear visual cues can immediately signal that something is interactive, saving users time and preventing frustration.

How I Can Apply Norman’s Principles to Interactive Media
Norman’s ideas about affordances and signifiers really clicked for me. Affordances tell users what actions are possible, and signifiers indicate where they can take those actions. In my interactive media work, I try to make every clickable element obvious. For instance, in my p5.js sketches like my floating dad joke bubble project, I make sure bubbles are visually distinct, move gently to draw attention, and respond when hovered or clicked. These small cues let users know exactly what to do without guessing. I also pay attention to natural mapping. Just like Norman talks about arranging light switches to match the lights they control, I place interface elements in locations that feel intuitive: buttons go where people expect them, and interactive elements behave like their real-world counterparts would.

Reflection and Ideas for Improvement
Working on this has made me realize how tiny details in design can have a huge impact. Even something as simple as whether a bubble looks “poppable” can completely change how a user experiences the project. In the future, I’d like to experiment with multiple interactive elements at once and make sure each is clearly signaled. I’m also thinking about adding feedback for users, like subtle animations when they hover or click, so the system feels alive and responsive. Another idea is giving users more control over interactions; for example, letting them drag or rearrange elements, while still keeping things intuitive. Ultimately, I want my interactive media to be fun, obvious, and frustration-free, where users can explore naturally and enjoy the experience without ever feeling lost.

Week 4 – Dad Joke Aquarium

Concept
I love dad jokes, so I wanted to give them a playful home. My project is a digital aquarium where each joke floats inside a bubble. Click on a bubble and it pops to reveal a new joke. The goal was to combine humor with a visually appealing theme and make the jokes feel alive in their own little environment.

Highlight of Code I am Proud Of
The hardest part was making sure the joke text always fits neatly inside the bubble. I created a custom function called drawWrappedText that splits the joke into lines and centers them vertically. This required measuring text widths, calculating line spacing, and dynamically adjusting positions so the text always looks clean and balanced. I also added gentle floating and rotation to the bubble, giving the jokes a lively, buoyant feel.

// draw wrapped and vertically centered text inside bubble
function drawWrappedText(txt, x, y, maxWidth, lineSpacing) {
  let words = txt.split(" "); // split text into words
  let lines = [];
  let line = "";

  // build lines that fit within maxWidth
  for (let i = 0; i < words.length; i++) {
    let testLine = line + words[i] + " ";
    if (textWidth(testLine) > maxWidth && line.length > 0) {
      lines.push(line);
      line = words[i] + " ";
    } else {
      line = testLine;
    }
  }
  lines.push(line);

  // calculate vertical centering
  let totalHeight = lines.length * lineSpacing;
  let startY = y - totalHeight / 2 + lineSpacing / 2;

  // draw each line
  for (let i = 0; i < lines.length; i++) {
    text(lines[i], x, startY + i * lineSpacing);
  }
}

Reflection and Future Improvements
I had a lot of fun combining humor with interactive design. In the future, I would like to add multiple bubbles at once, each with a different joke, and animate the fish reacting to the bubbles for extra playfulness. Another idea is letting users submit their own dad jokes to make the aquarium more personalized and community-driven.

Assignment 4 – Generative Text Output

Concept + references

 

This piece was inspired by the background landscapes that are often shown behind song lyrics, and I thought it would be a nice idea to explore that with a generative text output. Although the parameters and functions used are very similar to the ones we learned in class, I tried to make changes to the structure of the text, as well as its position on the canvas, the background, and the color of the text. Overall, the generative text output resulted from a combination of words that I found most “romantic” and appealing, while also integrating a sense of “discomfort” laced within the text.

Highlight code

As previously mentioned, most of the code is drawn from what we learned in class. Despite this, my favorite part of this project was the use of a random text color and a gradient background. This code allowed me to 1) set a random text color that changes along with the random choice of words for each sentence, and 2) create a gradient background to resemble a sunset.

// the two colors used for the gradient background
let c1, c2;

function setup() {
  createCanvas(400, 400);
  c1 = color(240, 0, 110); // pink-red
  c2 = color(240, 120, 0); // orange
  // strings holds the text data loaded elsewhere in the sketch (not shown in this excerpt)

  // Number of lines (2)
  print('The number of lines: ' + strings.length);
  print(strings);
}


function draw() {
  background(255);

  // draw the gradient background line by line
  for (let y = 0; y < height; y++) {
    let n = map(y, 0, height, 0, 1); // scale y to a 0-1 range
    let newColor = lerpColor(c1, c2, n); // interpolate between the two colors
    stroke(newColor);
    line(0, y, width, y); // draw a horizontal line
  }

  // random RGB values for the text color
  let r = random(255);
  let g = random(255);
  let b = random(255);
  fill(r, g, b); // apply the random color to the text
  // (the generative text itself is drawn here in the full sketch)
}

 

Embedded sketch

 

Reflection and ideas for future work or improvements

While this project was able to fulfill the basic requirements for a generative text output, I wish I could have been more creative and explored a different, more interactive way of displaying the text. For future work, I will try to research in advance how to implement interactivity, while also taking into consideration the time needed and whether the techniques I find are accessible at my level of experience. Nevertheless, I do appreciate the visual aspect of the work and believe it matches my initial vision of how I wanted the outcome to be.

Week 4 Reading Response

Prompt:

  • What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?
  • How can you apply some of the author’s principles of design to interactive media?

Response:

One of the inventions that drives me crazy on a weekly basis is the washing machine. Whenever I try to do laundry, I am confused by the many options provided on the panel.

As demonstrated by the picture above, I am confused by the information presented. What is easy care? How does it differ from skin care? When should I choose easy care? When choosing the temperature, how will different temperatures affect the wash? As a result, I need to go through multiple websites to look for my answer. To improve this process, I believe the machine could be more interactive. For example, a display screen could ask users what type of clothes they are washing and how long they are willing to wait. To save the machine from asking repeated questions, the screen could offer a default option once users have found their best washing mode.

I want to focus on human-centered design (HCD) in my future interactive media work. I have always admired how Steve Jobs designed the iPad. He successfully combined touch-screen technology with very intuitive human interaction, noting that a six-year-old could start playing games on an iPad without any instructions (visual, text, etc.) (https://www.forbes.com/sites/michaelnoer/2010/09/08/the-stable-boy-and-the-ipad/). Everything should be intuitive, and the audience should receive very clear feedback after they interact.

Assignment 4 – Generative text output

Concept

For this project, I wanted to challenge myself and experiment with generative text output. I thought it would be interesting to create a program that builds short messages by mixing different pieces of text together. I wrote sentence templates with placeholders and then filled those placeholders with random words from lists of activities, places, events, and more. This approach makes every message different, even though they all follow a similar style. Each click shows a new message, so the text keeps changing and never feels the same. After eight messages, the conversation refreshes and starts again, so it feels like a brand-new chat every time. I also made the layout look like a chat screen, with colored message bubbles and a date at the top, so it feels like you are reading a conversation.
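A minimal sketch of that template-filling idea (the templates and word lists below are placeholders for illustration, not the ones in my actual sketch):

// hypothetical templates with placeholder tokens
let templates = [
  "Want to go to the PLACE for the EVENT?",
  "I was thinking we could try ACTIVITY this weekend."
];

// hypothetical word lists, one per placeholder
let wordLists = {
  PLACE: ["park", "beach", "library"],
  EVENT: ["concert", "festival", "game"],
  ACTIVITY: ["hiking", "baking", "painting"]
};

function generateMessage() {
  let msg = random(templates); // pick a random template
  // replace each placeholder with a random word from its list
  for (let key in wordLists) {
    msg = msg.replace(key, random(wordLists[key]));
  }
  return msg;
}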

Highlight of the code I am proud of

The part I am most proud of is making the messages switch sides so they look like a real chat conversation. As a beginner, figuring out how to alternate the chat bubbles between the left and right side was tricky, but using nextMessageIndex % 2 to alternate sides worked perfectly.

// Interaction: add new message when mouse is pressed

function mousePressed() {
  if (nextMessageIndex === 0) {
    messages = [];  // clear canvas after 8 messages
  }
  let side;
  if (nextMessageIndex % 2 == 0) { // Alternate sides
    side = 'left';
  } else {
    side = 'right';
  }
  messages.push({
    text: generateMessage(),
    side: side
  });

  nextMessageIndex += 1;
  if (nextMessageIndex >= 8) { // Reset counter after 8 messages
    nextMessageIndex = 0; 
  }
}

Another thing I am proud of is the way I handled text wrapping. At first, I didn’t know how to make long messages fit inside the bubbles, but I learned how to split the text into words and build lines that stay within the bubble’s width. It made the conversation look clean and easy to read.

// Text wrapping into lines

let words = m.text.split(" ");  // Split message into words
let lines = [];
let tempLine = "";
for (let w of words) {
  
   // Check if adding word exceeds bubble width
  if (textWidth(tempLine + w + " ") < bubbleMaxWidth - 20) { 
    tempLine += w + " ";  // Add word to current line
  } else {
    lines.push(tempLine); // Save current line
    tempLine = w + " ";  // Start new line with current word
  }
}
lines.push(tempLine); // Add last line

Sketch

Click the mouse to add a new message

Reflection

While making this project, I wanted to experiment with generative text output and see how random combinations of words could create a conversation. I am proud of how the messages alternate between left and right, making it feel like a real chat, and how the text wrapping keeps the messages neat inside the bubbles. For improvements, I would like to add more templates and word lists to make the conversations even more interesting, maybe even including images in the chat bubbles. Another improvement would be adding a typing animation to make it feel more like a real conversation, and making it mobile-friendly so it works on smaller screens.

Week 4 – Reading Response

Don Norman’s ‘The Design of Everyday Things’ touches on a very important aspect of proper design: it should be both convenient and understandable to the user. He talks about engineers making clever decisions without properly communicating how to use what they have developed, and he blames all of it on bad design. I fully agree that good design should have signifiers and a simple conceptual model, so that everything connects easily in the user’s head.

What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?
I suppose most of it is done purposefully, but I hate how some website pop-ups hide the dismiss button. It is either tiny, camouflaged, or doesn’t exist. I usually press the top-right corner and try to guess its location, and sometimes it works, but most of the time I’m redirected to a random website. I believe that such websites prioritize business over user needs, and they are not following the principles of human-centered design. The design intentionally hides the way to escape; it is a false affordance and, in simple words, a trap. The solution is quite simple, though I don’t think companies want to respect users’ rights: 1. Create a clear and easy-to-click ‘X’ or ‘Close’ button. 2. Respect the user’s initial goal, giving full access to the content they came for, and offer additional services in a smaller window.

How can you apply some of the author’s principles of design to interactive media?
My friend lent me William Lidwell’s book ‘The Universal Principles of Design’, and I like how the author talks about very specific concepts; some of the topics there overlap with Don Norman’s ideas of a simple user interface. For my future p5.js sketches, if I’m using the preload() function I will use a loading indicator, such as a spinner, to give the user feedback that the system is working. Also, when hovering over a button, it will change color slightly, another form of feedback, meaning that the button can be clicked. Overall, I want to create a genuine and very simple system that will not confuse the user, with human-centered design at its core.
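As a small sketch of that hover-feedback idea, assuming a simple rectangular button (the coordinates and colors are placeholders):

function setup() {
  createCanvas(400, 200);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(240);
  // check whether the mouse is over the button area
  let over = mouseX > 150 && mouseX < 250 && mouseY > 80 && mouseY < 120;
  fill(over ? color(120, 190, 255) : color(70, 130, 200)); // lighten slightly on hover
  noStroke();
  rect(150, 80, 100, 40, 8);
  fill(255);
  text("Click me", 200, 100);
}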

Week 4 – Reading Reflection

In The Design of Everyday Things, Norman highlights the importance of feedback and conceptual models in making technology understandable. What struck me is his insistence that people should not have to “fight” with design in order to use it. If a product requires trial and error to figure out its basic functions, then the design has already failed. This perspective made me think about how often digital interfaces ignore human expectations and force us to adapt to them, instead of the other way around.

Something that really drives me crazy is poorly designed spaces, and while reading, I thought of examples from our campus. The automatic doors at building entrances are heavy and only open one way, which makes it difficult to get in, especially when carrying books or bags. In large study rooms, the motion-sensitive lights often turn off unexpectedly because the sensors are placed in strange spots, leaving people sitting in the dark until the lights turn back on. These everyday problems perfectly show what the author means: poor design is everywhere, and it affects our daily life in ways we often do not even notice. It makes simple tasks harder and reminds us how much thoughtful design matters.

For me, applying Norman’s principles to interactive media means designing in a way that anticipates misunderstandings before they happen. People should not feel embarrassed or incompetent because of poor interface decisions. I want to create digital experiences where the actions are transparent, the feedback is immediate, and the user feels in control. When design gets this right, it not only avoids frustration but also builds trust between people and technology.