Assignment 4 – Generative text output

Concept

For this project, I wanted to challenge myself and experiment with generative text output. I thought it would be interesting to create a program that builds short messages by mixing different pieces of text together. I wrote sentence templates with placeholders and then filled those placeholders with random words from lists of activities, places, events, and more. This approach makes every message different, even though they all follow a similar style. Each click shows a new message, so the text keeps changing and never feels the same. After eight messages, the conversation refreshes and starts again, so it feels like a brand-new chat every time. I also made the layout look like a chat screen, with colored message bubbles and a date at the top, so it feels like you are reading a conversation.
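The template-filling idea can be sketched roughly like this. The templates, word lists, and helper names below are illustrative stand-ins, not the exact ones from my sketch, and pick() plays the role of p5's random(array):

```javascript
// Sketch of the template approach; all templates, word lists, and
// helper names here are illustrative, not the actual project code.

// Stand-in for p5's random(array): pick one element at random
const pick = (arr) => arr[Math.floor(Math.random() * arr.length)];

const wordBank = {
  activity: ["brunch", "a movie night", "a study session", "karaoke"],
  place: ["the marina", "campus", "the new cafe", "the beach"],
  event: ["the concert", "game night", "the exhibition"]
};

const templates = [
  "Want to do {activity} at {place}?",
  "Are you going to {event} this weekend?",
  "I just got back from {place}, you should have come!"
];

function generateMessage() {
  // Pick a template, then fill each {placeholder} with a random word
  const template = pick(templates);
  return template.replace(/\{(\w+)\}/g, (match, key) => pick(wordBank[key]));
}
```

Because both the template and each word are chosen at random, the same structure produces a different message on every call.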

Highlight of the code I am proud of

The part I am most proud of is making the messages switch sides so the layout reads like a real chat conversation. As a beginner, figuring out how to alternate the chat bubbles between the left and right sides was tricky, but using nextMessageIndex % 2 to alternate sides worked perfectly.

// Interaction: add new message when mouse is pressed

function mousePressed() {
  if (nextMessageIndex === 0) {
    messages = []; // clear the conversation after 8 messages
  }

  // Alternate sides: even indices on the left, odd on the right
  let side;
  if (nextMessageIndex % 2 === 0) {
    side = 'left';
  } else {
    side = 'right';
  }

  messages.push({
    text: generateMessage(),
    side: side
  });

  nextMessageIndex += 1;
  if (nextMessageIndex >= 8) { // reset counter after 8 messages
    nextMessageIndex = 0;
  }
}

Another thing I am proud of is the way I handled text wrapping. At first, I didn’t know how to make long messages fit inside the bubbles, but I learned how to split the text into words and build lines that stay within the bubble’s width. It made the conversation look clean and easy to read.

// Text wrapping into lines

let words = m.text.split(" ");  // Split message into words
let lines = [];
let tempLine = "";
for (let w of words) {
  // Check if adding the word would exceed the bubble width
  if (textWidth(tempLine + w + " ") < bubbleMaxWidth - 20) {
    tempLine += w + " ";  // Add word to current line
  } else {
    lines.push(tempLine); // Save current line
    tempLine = w + " ";   // Start new line with current word
  }
}
lines.push(tempLine); // Add last line

Sketch

Click the mouse to add a new message

Reflection

While making this project, I wanted to experiment with generative text output and see how random combinations of words could create a conversation. I am proud of how the messages alternate between left and right, making it feel like a real chat, and how the text wrapping keeps the messages neat inside the bubbles. As improvements, I would like to add more templates and word lists to make the conversations even more interesting, maybe even including images in the chat bubbles. I would also like to add a typing animation to make it feel more like a real conversation, and make the layout mobile-friendly so it works on smaller screens.

Week 4 – Shahram Chaudhry – Reading Response

As a senior actively applying to jobs, one thing that really drives me crazy is Workday applications. I spend time filling out a long online application, only for the site to suddenly prompt me (after I click “Next”) to log in or create an account. Sometimes, it even realizes I already have an account and redirects me to a different version where I am signed in. The logical solution would be to sign up or log in first, but the problem is that not all applications require it upfront. Some let you start without logging in, others ask midway, and many don’t make it clear at all. This inconsistency forces me to guess the “correct” order of actions every time, which becomes especially frustrating when data gets lost in the process. A better approach would be a more consistent and user-friendly UX design: clearly prompting users at the start to sign in, continue an existing application, or proceed as a guest, with visible consequences for each. Even more importantly, any entered information should be temporarily saved in local storage or a cache, so if a redirect occurs, my progress isn’t wiped out. Small design choices like these can make a huge difference in preserving user effort and trust.

As for the reading, it was a fun and informative read. I used to think interaction design and experience design were basically interchangeable. But now I understand that interaction design is more about usability, discoverability, and understanding, while experience design is about the emotional impact and satisfaction we feel when using something. That distinction really clicked for me. I also loved the discussion of how engineers think logically and assume everyone else does too, or at least should, when systems actually need to be intuitive, not just logical. Just because the engineers wrote a manual, and the logical step for users would be to read it before using the system, doesn’t mean they will. So it’s the designers’ and engineers’ job to understand their audience and design accordingly. Humans shouldn’t have to learn all the different rules that different machines and systems have. I can see that my microwave has at least 10 different buttons, but I always use the same settings. It’s not that I don’t want more options; it’s that the way they’re presented makes them hard to understand. This is where Norman’s principles of discoverability and understanding come in. If something is hard to discover or hard to understand, we just stop using it.

In interactive media, principles like affordances, signifiers, and feedback can greatly improve how users interact with a piece. Affordances aren’t just properties of an object, but also depend on what the user is capable of doing. A heavy chair might afford sitting, and maybe lifting, depending on who is interacting with it. That highlights how important it is to design with the user in mind, not just the object. For example, a scrollable layout that hints at more content affords downward movement, a pulsing microphone icon acts as a signifier that it’s listening for input, and a search bar that updates results in real-time offers immediate feedback. These small design choices guide, inform, and respond to user actions in intuitive ways. I applied these ideas in my own p5.js generative art sketch. I included a placeholder that says “Enter a word” to clearly signal that users need to type something in the input box. Once the user presses Enter, the word appears instantly on screen providing immediate feedback.



Reading Response – Week 4

One thing that drives me crazy is when I’m looking for a parking spot in a busy garage that uses red and green lights to show if a spot is taken or free. Yesterday, I was rushing to make it to an appointment. I saw a spot far away, but then I noticed a green light closer to the mall entrance, which was supposed to mean the space was free. I drove toward it, only to find a car already parked there. By the time I went back to the original spot, it was taken. It was so frustrating and a complete waste of time.

From Norman’s perspective, this is a design problem caused by poor visibility and misleading signals. The green light gave me a false cue, similar to the confusing doors and devices he describes. A better design would use more accurate sensors that reliably detect when a car is present. It could even provide feedback, like showing when the spot was last updated, so drivers can trust what they see. This would prevent wasted time and make the whole parking experience smoother and less stressful.

     

When working on interactive media, especially projects that are heavy on user engagement, I would apply Norman’s principles by making the instructions very clear and giving clear, immediate feedback to every action. Norman emphasizes visibility and feedback, which are key for helping users understand what to do and what is happening as they interact with the program.

I think even in my p5.js projects, I can start practicing this. For example, if a user types an invalid input, instead of the program crashing, I could display a clear message telling them what went wrong and how to fix it. This way, the user isn’t left confused or frustrated. A good approach is to design as if the person using the program were a child: everything should be simple, obvious, and easy to understand without needing extra instructions. This makes the experience smoother, more engaging, and aligned with Norman’s idea of good, user-centered design.

Week 4 – Shahram Chaudhry – Antakshari

This project started with a simple yet poetic idea: visualizing lines from old Urdu poetry on screen, placing the main verse at the center and surrounding it with words from the next line, like floating thoughts. But then I began replacing those with some of my favorite Urdu song lyrics, the kind I really connect with, trying to merge different verses across songs to create something emotionally layered. That’s when I was reminded of Antakshari, a fun game many of us grew up playing. The rules are simple: one person sings a song, and the next person has to sing another song that starts with the last letter of the previous one. I loved the idea of using that structure as the basis for generative text. I initially wanted to build this fully in Urdu, but I realized it would be tricky to detect and match the last letter accurately in the script. So, I switched to English to get the logic working smoothly. Instead of full lyrics, I used individual words, and for each word, the next one is chosen randomly from a list of possible options starting with the last letter. That means each run of the program generates a unique chain of words, but with a meaningful constraint, just like Antakshari. I also added visual flair: a short delay between words, plus small random rotations and scaling for different text sizes, so the result wouldn’t look like a rigid grid and would feel a little unpredictable.

The two functions placeWord() and pickNextWord() were the heart of my sketch. I was particularly proud of how I used my prior knowledge of dictionaries to implement the words dictionary, which acts as a lookup system. It takes the last letter of the previous word and finds a list of possible next words starting with that letter. Then I randomly select one to create branching possibilities, making the output different every time.
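The words dictionary itself isn't shown below, but its assumed shape and the chaining logic can be sketched like this. The entries and the nextWordFor() name are illustrative, and pick() stands in for p5's random(array):

```javascript
// Illustrative excerpt of the lookup dictionary: each key is a letter and
// each value lists candidate next words starting with that letter.
// The actual word lists in my sketch are different.
const words = {
  a: ["armaan", "asar"],
  n: ["naata", "nishaan"],
  r: ["raat", "rehmat"],
  t: ["taqdeer", "tamanna"]
};

// Stand-in for p5's random(array)
const pick = (arr) => arr[Math.floor(Math.random() * arr.length)];

// The same chaining logic as pickNextWord(), written as a pure function:
// take the last letter of the previous word and pick a random candidate
function nextWordFor(lastWord) {
  const lastLetter = lastWord.charAt(lastWord.length - 1).toLowerCase();
  const candidates = words[lastLetter];
  return pick(candidates);
}
```

Each call branches randomly, but the Antakshari constraint (first letter matches the previous last letter) always holds.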

In placeWord(), I figured out how to make each word feel organic and unique by adding randomness to its angle and size using random(-PI/16, PI/16) and a scaling factor. I also calculated the text width dynamically so I could position each new word without overlapping the previous one. These choices made the flow of words appear natural.

function placeWord(word) {
  let fontOriginalSize = 24;
  let wordMargin = 8; 

  let angle = random(-PI / 16, PI / 16);
  let scaleFactor = random(0.9, 1.3);
  //find color from colormap
  let firstLetter = word.charAt(0).toLowerCase();
  let wordColor = colorMap[firstLetter];

  textSize(fontOriginalSize * scaleFactor);
  let wordWidth = textWidth(word);

  placedWords.push({
    word: word,
    x: gridX,
    y: gridY + random(-3, 3),
    size: fontOriginalSize * scaleFactor,
    angle: angle,
    color: wordColor
  });
  
  //move the horizontal position to right for next word
  gridX += wordWidth + wordMargin;
  
  //starts a new line/row
  if (gridX > width - 100) {
    gridX = 50;
    gridY += gridSpacing;
  }
  //canvas not filled checked by seeing if we reach bottom or not
  if (autoRun && gridY < height - 50) {
    pickNextWord();
    lastPlacedWord = nextWord;
    setTimeout(() => placeWord(nextWord), 150);
  }
}

function pickNextWord() {
  let lastLetter = lastPlacedWord.charAt(lastPlacedWord.length - 1).toLowerCase();
  //find the candidate words using lastletter and choose a random word
  let candidates = words[lastLetter];
  nextWord = random(candidates);
}

Here’s the sketch:

Eventually, I’d love to go back to my original vision and bring Urdu fully into the experience. That would mean figuring out last-letter detection in Urdu script and possibly integrating a calligraphic font to preserve the beauty of the language. If I can pull that off, the result would be a truly generative Urdu Antakshari: a perfect blend of nostalgia, music, and generative text.

 



Week 4 – Word Rain

Concept

For this exercise, I wanted to merge two things: playful interactivity and a meaningful (well, almost) outcome. I came up with a Tetris-like game where, instead of geometric blocks, words fall from the sky. Players can move these word blocks left or right with the arrow keys to stack them into bizarre sentences or whatever they like.

The sentences hardly make sense; in fact, the randomness of the generated words is what makes the results funny. It can be perceived as a silly project, but it’s fun. Stack words – make nonsense – laugh.

Inspiration

The inspiration came from two sources:

  1. Tetris game
  2. Text Rain (by Camille Utterback, where letters fall and interact with the human body)

I wanted to combine the two ideas into something playful but also generative, where the player becomes either a poet or a chaotic builder (or both), depending on the words that randomly fall and where they place them.

Code Structure

Here is the list of things I implemented this with:

  • An array of words (I tried to include nouns, verbs, adjectives, adverbs, and connectors; however, this can be modified, meaning a user can swap in any array of words they like)
  • A fallingWord object that moves down the grid
  • A blocks array to store all placed words, plus collision detection so blocks stack on top of each other or rest on the floor
  • Keyboard controls to move words left and right within the canvas
  • A small function to resize text dynamically. This was done keeping in mind that if a user wants longer words in the array, they can use them, since the font size will be adjusted as needed.

Every new word is randomly chosen from the array, dropped from the top, and the loop continues.
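Since the code itself isn't shown in this post, here is a rough sketch of how the grid-based stacking and collision could work. The names, grid dimensions, and column/row approach are simplified stand-ins, not the actual implementation:

```javascript
// Hedged sketch of grid-based stacking: the canvas is treated as columns
// and rows; a falling word lands on the floor or on the topmost block
// already placed in its column. All names here are illustrative.

const COLS = 8;
const ROWS = 12; // row ROWS - 1 is the floor

// blocks: array of placed words, e.g. { word: "banana", col: 3, row: 11 }
function landingRow(col, blocks) {
  // Find the topmost occupied row in this column (smallest row index)
  let top = ROWS; // pretend the floor is just below the last row
  for (let b of blocks) {
    if (b.col === col && b.row < top) {
      top = b.row;
    }
  }
  return top - 1; // land one row above the stack (or on the floor)
}

// Drop a word into a column and record where it stops
function dropWord(word, col, blocks) {
  let row = landingRow(col, blocks);
  if (row < 0) return null; // column is full
  blocks.push({ word: word, col: col, row: row });
  return row;
}
```

With this structure, the draw loop only has to move the falling word down one row at a time and call dropWord() when it reaches its landing row.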

Reflection

I liked how this project was fairly simple yet becomes engaging once the player is allowed to have some control. This is a result of both randomness (of the word array) and control (the stacking mechanism).

Challenges:

  • Making sure words didn’t move out of the canvas. I tried to put this condition in the same if statement as the key-press check, but that didn’t work. It turned out I had to create a nested condition to implement this.
  • Handling long words that wouldn’t fit in the blocks. I considered allocating more than one block for long words, but I realized that adjusting the font size is much more convenient.
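The nested-condition fix can be sketched like this; the function name, object shape, and step size are simplified stand-ins, while LEFT_ARROW and RIGHT_ARROW use p5's actual key codes:

```javascript
// Sketch of the nested boundary check described above (hypothetical names):
// the boundary condition lives inside the key check, not combined with it.
function handleKey(keyCode, fallingWord, canvasWidth) {
  const LEFT_ARROW = 37, RIGHT_ARROW = 39; // p5's key codes
  const step = 20; // how far a word moves per key press (illustrative)

  if (keyCode === LEFT_ARROW) {
    // Nested check: only move if the word stays inside the canvas
    if (fallingWord.x - step >= 0) {
      fallingWord.x -= step;
    }
  } else if (keyCode === RIGHT_ARROW) {
    if (fallingWord.x + fallingWord.w + step <= canvasWidth) {
      fallingWord.x += step;
    }
  }
  return fallingWord;
}
```

Combining the key check and the boundary check into one condition would silently skip the move in both directions, which is why nesting them works better.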

Future Improvements

  • Add a scoring system: if a player completes a sentence, they get a point. This would require encoding rules of grammar and is too difficult for me at present.
  • Add a termination case: for now, the code doesn’t know what to do when the canvas is full. I think an ending is necessary for a game, but mine doesn’t have one yet.

Week 4 – Reading Response

Don Norman’s ‘The Design of Everyday Things’ touches on a very important aspect of proper design: being both convenient and understandable to the user. He talks about engineers making clever decisions but failing to properly communicate how to use what they have developed, and he blames this on bad design. I fully agree that good design should have signifiers and a simple conceptual model, so that everything connects easily in the user’s head.

What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?
I suppose most of it is done purposefully, but I hate how some website pop-ups hide the dismiss button. It is either tiny, camouflaged, or doesn’t exist. I usually press the top-right corner and try to guess its location, and sometimes it works, but most of the time I’m redirected to a random website. I believe that such websites prioritize business over user needs, and they are not following the principles of human-centered design. The design intentionally hides the way to escape; it is a false affordance, and in simple words, a trap. The solution is quite simple, but again, I don’t think companies want to respect user rights: 1. Create a clear and easy-to-click ‘X’ or ‘Close’ button. 2. Respect the user’s initial goal by giving full access to the content they came for, and offer additional services in a smaller window.

How can you apply some of the author’s principles of design to interactive media?
My friend lent me William Lidwell’s book ‘The Universal Principles of Design’, and I like how the author there talks about very specific concepts; some of the topics overlap with Don Norman’s ideas about simple user interfaces. For my future p5.js sketches, if I’m using the preload() function, I will show a spinner as a loading indicator to give the user feedback that the system is working. Also, when hovering over a button, it will change color slightly, which is another form of feedback, signaling that the button can be clicked. Overall, I want to create a genuine and very simple system that will not confuse the user and that has human-centered design at its core.

Week 4 Reading Response

One everyday object that always drives me crazy, though it doesn’t quite relate to interactive media, is water bottles. Every bottle I’ve owned is either too tight to open or so loose that it leaks everywhere. Especially with the current one I’m using, I always end up struggling as if it were a test of grip strength. The most baffling example is the viral Stanley cup: people on social media complain nonstop about leaks, and yet it still became a must-have item. I’ve even seen TikToks where the “fixing the leak” process itself becomes a kind of trend. From Norman’s perspective, that feels like a failure of design: the affordance (a secure, portable cup) clashes with the signifiers (the lid and the cap) that don’t clearly tell you how to tighten it enough. The fact that millions of users are improvising and complaining is proof that the “system image” isn’t communicating what the designer intended.

Norman’s principles also made me think about my own experiments in p5.js. When I build interactive sketches, I realize I need to add clear signifiers, not just rely on “hidden” affordances. For example, I once made a generative text piece where users could drag characters around, but unless I showed some visual cue, like a subtle highlight or a cursor change, no one discovered the interaction. It’s exactly what Norman warns about: affordances exist, but if they’re invisible, they fail.

Another part of the article I found really interesting was Norman’s discussion of mapping. He uses the example of car seat adjustment buttons shaped like the actual seat: you just push the part of the button that matches the part of the seat you want to move. That struck me because it feels so obvious and natural, and yet you realize how many objects miss this. I think about the confusing array of stove knobs in my apartment, where I’m constantly turning on the wrong burner. If those knobs had a layout that mapped directly to the burners, I’d never make that mistake. It made me realize that whether it’s an app, a sketch, or just a stove, people shouldn’t need labels and trial-and-error to figure out something so basic.

Week 4: Generative Text Art

Concept

I started brainstorming this generative text art inspired by the Tibetan font we saw in class. I was amazed at how this niche and complicated language could be incorporated into Unicode and redesigned into various new formats. Then I came up with the idea of generating typographic rearrangements inspired by the Chinese Wubi input method, which decomposes characters into strokes that can then be recombined into full forms. I wanted to apply this same principle of decomposition and recomposition to my favorite language of all time: Nüshu (the direct shortcut key for ü is v, so it is referred to as Nvshu in the following).

Nvshu was historically used in Jiangyong County, Hunan, China, as a secretive women’s writing system. It is the only known script in the world designed and used exclusively by women, and was transmitted through sisterhood groups, generational teaching, and playful exchanges. Today, Nvshu often appears on carvings, embroidered fans, handbags, or even tattoos, celebrated as both a form of sisterhood heritage and an aesthetic design language.

My goal was to design a digital generative “Nvshu board” where users can appreciate the beauty of these glyphs and play with them (dragging, rearranging, and composing their own patterns). The resulting forms resemble ornaments, motifs, or even tattoo-like compositions, highlighting how a script can carry both linguistic and visual meaning.

Sketch (Written Horizontally, same as historical Chinese Characters)

Highlight Code

The code I am most proud of is the section that generates random Nvshu glyphs from the Unicode range and places them into a 10×10 board. To make sure the characters display properly, I had to research Unicode code points and how to convert them into visible strings in p5.js. The line below is the most rewarding part of my code:

// Pick a random value in the Nvshu Unicode range, rounded down
// to an integer so it matches a valid code point
let nscode = floor(random(0x1B170, 0x1B2FB + 1));
// Create the string from the Unicode code point
// (saw this way of locating Unicode in chinese-fonts p5.js works by tytyshi)
let nsglyph = String.fromCodePoint(nscode);
// store it into the array with parameters
Here, random() generates a floating-point number within the Unicode range for Nvshu (0x1B170 to 0x1B2FB inclusive). floor() ensures the result is an integer, since Unicode code points must be whole numbers. Then, String.fromCodePoint() converts that integer into the actual Nvshu glyph. This was a breakthrough moment for me, because it connected abstract Unicode numbers to visible characters and allowed me to fill the canvas with living Nvshu script.

On top of this, I added drag-and-drop interactivity: when the user presses a glyph, the program tracks it and updates its coordinates as the mouse moves. This simple interaction lets users create their own custom compositions from the script.
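A simplified sketch of that drag-and-drop logic is below; the glyph object shape and helper names are illustrative, not the exact code from my sketch:

```javascript
// Hedged sketch of the drag-and-drop interaction; the glyph objects
// and helper names are illustrative stand-ins.

let glyphs = []; // filled elsewhere with { char, x, y, size } objects

// Return the index of the topmost glyph under (mx, my), or -1 if none.
// Later entries draw on top, so we scan the array back to front.
function glyphAt(mx, my, glyphs) {
  for (let i = glyphs.length - 1; i >= 0; i--) {
    let g = glyphs[i];
    let half = g.size / 2;
    if (mx > g.x - half && mx < g.x + half &&
        my > g.y - half && my < g.y + half) {
      return i;
    }
  }
  return -1;
}

// p5 event handlers using the hit test (mouseX/mouseY are p5 globals)
let dragged = -1;
function mousePressed() {
  dragged = glyphAt(mouseX, mouseY, glyphs);
}
function mouseDragged() {
  if (dragged !== -1) {
    glyphs[dragged].x = mouseX;
    glyphs[dragged].y = mouseY;
  }
}
function mouseReleased() {
  dragged = -1;
}
```

Scanning back to front means that when glyphs overlap, the one drawn on top is the one that gets picked up.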

Future Improvements and Problems

While researching fonts, I first discovered a brush-style Nvshu font package on GitHub. However, its size (18.3MB) exceeded the 5MB upload limit of the p5.js online editor. This left me with only one option: Noto Sans Nüshu on Google Fonts. Its size of 0.2MB really surprised me. Seeing how powerful a unifying typeface can be made me think about how technology preserves cultural memory. I also consulted Lisa Huang’s design reflections on the challenges of Noto Sans Nüshu (https://www.lisahuang.work/noto-sans-nueshu-type-design-with-multiple-unknowns) to better understand the typographic logic of the font.

I would also like to add a save feature in future, so that once users have composed a design, they could export it as an image (to use as a print, tattoo design, or ornament).

This project combined cultural research, typographic exploration, and interactive coding. By bringing Nvshu into a playful digital space, I wanted to highlight both its fragility as an endangered script and its resilience through Unicode and modern typography. Users can engage with scripts not only as carriers of language but also as visual and cultural materials.

Week 4: Reading

As an international student at NYUAD who often travels, I’ve found sensor-activated washroom taps frustrating and time-consuming, whether in airports or public restrooms in different countries. There are different types of sensors in different locations, many of which behave unpredictably, and sometimes they simply don’t work at all. Even the taps at NYUAD occasionally fail. When I first arrived, I remember waving my hands around trying to figure out how to start the water, unsure if the faucet was broken or if I was using it wrong. From Don Norman’s perspective, this is a textbook example of weak signifiers (no clear visual cue where to place hands), poor feedback (no light or sound to confirm detection), and a missing conceptual model (no simple mental picture of how the sensor works). A better design could include a soft light to indicate the active zone and an immediate, gentle chime when the sensor registers movement. Thinking beyond faucets, in Interactive Media I want every sketch and installation I build to “explain itself,” so that users can start playing without extra instructions. Don Norman’s principles give me a practical checklist: I can design strong affordances by making buttons and draggable objects look touchable or moveable; create clear signifiers through animated highlights, glowing borders, or subtle sounds; keep mappings natural so a slider that moves right always increases a value; and provide immediate feedback with color changes, vibration, or playful sound effects. Most importantly, I can help visitors form the right conceptual model by giving them a short, visual introduction or letting the interface demonstrate its own rules when the design is complex.

Week 4: Fortune Cookie

I got my idea for this week’s artwork when my friend and I were talking about common childhood toys, and she mentioned the Magic 8 Ball. This got me thinking about other fortune-telling knick-knacks, when I remembered fortune cookies. I thought it would be really fun to make a digital version of a fortune cookie and populate it with interesting fortunes for users to come across. Just having a cookie on the canvas seemed a little aesthetically boring, though, so I found an image of anime-style food on a table on Pinterest, and chose to draw the fortune cookie as part of that setting and match the same style (to the best of my abilities).

Before I started coding, it took me a long time to decide how I would draw the cookie. I looked at a lot of pictures of cartoon fortune cookies online, and the final look I decided upon is really a mix of a bunch of photos and, more importantly, a realistic structure through which I thought I could animate the effect of the cookie breaking open. Overall though, I am happy with how I mirrored the aesthetics of the background in my own cookie and plate. The fortune texts are a collection of messages I found online as fortune cookie messages and thought were funny.

In this sketch, the program is organized into four main functions: preload() loads external assets like the background image and the cookie snap sound before anything else runs; setup() initializes the canvas, text settings, and the starting positions of the cookie and fortune paper; draw() continuously updates the scene by displaying the image, plate, cookie halves, and shading, and by animating the cracking motion and sliding fortune paper; and mousePressed() handles the interaction by toggling the cookie state. I particularly like this part of the code:

  // fortune paper
  if (isOpen) {
    fill("#ebd4b4");
    stroke("#4e1408");
    // keep it where cookie is
    rect(xPaper*0.9, height * 0.61, 320, 40, 5);
    
    // fortune text
    noStroke();
    fill("#4e1408");
    textFont('Gloria Hallelujah');
    textSize(14);
    text(fortune, xPaper*0.9, height * 0.61);
  
    // paper sliding effect
    xPaper = lerp(xPaper, xTarget, 0.1);
  }
  
  // animate cookie parts cracking apart
  angleL = lerp(angleL, leftTarget, 0.05);
  angleR = lerp(angleR, rightTarget, 0.05);  
}

function mousePressed() {
  if (!isOpen) {    
    isOpen = true;
    fortune = random(fortunes); // Pick a random fortune
    // Rotate left half outward slightly
    leftTarget = -QUARTER_PI / 2; 
    // Rotate right half outward slightly
    rightTarget = QUARTER_PI / 2; 
    // Slide fortune paper out to the right
    xTarget = width / 2 + 150;
    
    // cookie opening sound
    if (snapSound.isLoaded()) {
      snapSound.play();
    }
    
  } else {
    // reset cookie to closed state when clicked again
    isOpen = false;
    leftTarget = 0;  // Return left half to original position
    rightTarget = 0; // Return right half to original position
    xTarget = width / 2; // Hide fortune paper
  }
}

I think what’s interesting here is the lerp() function. It is what makes the cookie and paper feel smooth instead of jerky. Without it, the halves and the fortune paper would instantly jump to their final positions. Basically, it tells p5.js to move the current value a little closer to the target value on every frame. This creates a gradual transition, so the cookie halves slowly rotate outward like they’re cracking, and the fortune paper slides out as if being pulled, instead of teleporting into place. I also really like the cookie-breaking sound I included when it opens; I think it increases the interactive feel of this sketch.
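To see the easing in plain numbers, here is what repeated lerping toward a target does, using a plain-JS stand-in for p5's lerp() and the same 0.1 factor as my sketch:

```javascript
// Stand-in for p5's lerp(a, b, t): move a fraction t of the way from a to b
const lerp = (a, b, t) => a + (b - a) * t;

// Repeatedly lerping toward a target closes 10% of the remaining gap
// each frame, so the motion starts fast and eases out near the target
let xPaper = 0;
const xTarget = 100;
const positions = [];
for (let frame = 0; frame < 5; frame++) {
  xPaper = lerp(xPaper, xTarget, 0.1);
  positions.push(xPaper);
}
// positions ≈ [10, 19, 27.1, 34.39, 40.951] — each step is smaller
```

The value never overshoots and the steps keep shrinking, which is exactly the smooth "pulled into place" feel in the sketch.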

The biggest challenge I faced with this artwork was drawing the cookie and its shadow. It involved translations as well as the arc() function, which was completely new to me, and I had to carefully go through its documentation on the p5.js site to understand how its different arguments work. The shadow took some trial and error, but I achieved it by first drawing a darker arc on the upper-right edge of the cookie, then cutting it down with a lighter arc. The second arc overlaps most of the first one and leaves only a slim crescent of shadow visible.

Looking forward, I would like to enhance the aesthetics of the cookie so that it looks less flat, and make its shape match real fortune cookies more closely. On the interaction side, fortunes could be randomized with different fonts, animations, or even themed backgrounds for variety. Also, right now the paper just sort of slides into the canvas from nowhere; I would like to make it clearly pop out from the cookie, or maybe change the design completely so that it’s a little slip that users can pull out from the cookie by dragging their cursor. Maybe I could also add the sound of crinkling paper!