“YOaMoNUEvAYoL”
“The only thing more powerful than hate is love.”
“Together, we are America.”
– Benito Antonio Martínez Ocasio, 2026
Concept
The concept of this project was to create something connected to the recent social and political conversations about Latin American communities, especially in the United States. I recently watched the Bad Bunny halftime show that became extremely popular, and as a Latina it honestly felt like a huge moment. Seeing someone sing in Spanish in front of the entire USA, especially given the controversies around immigration and the fact that many immigrants in the USA are Latino, made me want to base my project on that idea.
Therefore, for the project I chose the song “NUEVAYoL” by Bad Bunny, which was part of the performance and also part of the album that just won two Grammys. The song talks about New York and mixes the American image of the city with a Latin cultural perspective on it. Because of that, I used an image of the Empire State Building that felt vibrant and alive. The image has four dominant colors: orange, light blue, white, and gray. I then found official 2023 data on the Hispanic/Latino population in the United States, organized by state, and grouped the states into three main regions: South, West, and Northeast.
Each main color in the image (blue, orange, and white) represents one of those regional population groups, so how often each color appears is controlled by the population data.
The idea is that the skyline is not just a picture anymore. The city lights are metaphorically “powered” by the people who live there. If a region has a larger Hispanic population, that color appears more often in the picture. The music adds movement: when the song gets louder, the dots grow and become brighter, so it feels like a celebration rather than a static visualization.
To see (and hear) it in action, just click the little ⏯ button in the lower left corner!
Process
Originally I tried to make a migration tree showing people moving from Latin America to the U.S. Technically it was data visualization, but when I started building it I didn’t like how it looked: more like a diagram than something expressive.
After watching The Coding Train videos about sound visualization, I followed the tutorial and experimented with audio-reactive graphics (this tutorial is also why I kept the little toggle button in the corner to play and pause the audio). I then found this Bad Bunny song and immediately thought about using the Empire State Building, because the song is about New York. However, I couldn’t use the song directly because of issues with the UAE library store, so I had to extract the audio from the YouTube video of the song and then cut the mp3 to remove the intro of the video that wasn’t part of the song itself. I learned how to do this and did it from my terminal.
First I created a sound visualizer using p5.js amplitude analysis (p5.Amplitude()), which gives a number between 0 and 1 representing how loud the music is at each moment. I mapped that value to the size of the dots so louder music makes the building pulse.
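For context, here is a minimal sketch of that amplitude setup, assuming the p5.sound library is loaded (“nuevayol.mp3” is a placeholder name for my trimmed file, not the actual asset path):

let song, amp;
let vol = 0;

function preload() {
  song = loadSound("nuevayol.mp3"); // placeholder file name
}

function setup() {
  createCanvas(600, 800);
  amp = new p5.Amplitude(); // analyzes whatever is currently playing
}

function draw() {
  background(0);
  vol = amp.getLevel(); // 0..1: how loud the song is right now
  // louder music = bigger pulse
  circle(width / 2, height / 2, 5 + vol * 200);
}

function mousePressed() {
  // simple play/pause toggle, like the ⏯ button in my sketch
  if (song.isPlaying()) song.pause();
  else song.loop();
}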
Then I converted the image into a pointillism drawing (I based it on these p5.js examples: 1, 2). Instead of drawing every pixel, the code samples every 5 pixels in both directions. This reduces detail but still produces visible dots.
After that I connected the dataset. I loaded a CSV file using loadTable() and extracted the column HispanicTotal_2023, then summed the state populations into regional totals (West, South, Northeast). Instead of placing states spatially, I mapped the data onto a visual variable: color frequency. So higher population = higher probability that a dot of that color appears.
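As a rough sketch of what that loading step looks like (the file name and the Region column here are assumptions to keep the example short; in my actual sketch I grouped the states into regions myself):

let table;
let westPopu = 0, southPopu = 0, eastPopu = 0;

function preload() {
  // "hispanic_2023.csv" is a placeholder name for the Census file
  table = loadTable("hispanic_2023.csv", "csv", "header");
}

function setup() {
  for (let r = 0; r < table.getRowCount(); r++) {
    // assumed columns: State, Region, HispanicTotal_2023
    let region = table.getString(r, "Region");
    let pop = table.getNum(r, "HispanicTotal_2023");
    if (region === "West") westPopu += pop;
    else if (region === "South") southPopu += pop;
    else if (region === "Northeast") eastPopu += pop;
  }
}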
Media and tools used:
- p5.js
- The Coding Train tutorials
- U.S. Census 2023 dataset
- audio extracted and trimmed from YouTube
- ChatGPT, used to understand pixel arrays and for debugging
One of the hardest parts was understanding how images are stored in p5.js. At first the dots appeared randomly because I didn’t understand that each pixel uses 4 values (RGBA). Once I learned how to correctly calculate the pixel index, the image reconstruction finally worked.
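To make that fix concrete, this is the indexing rule in isolation (a minimal sketch assuming img is already loaded; loadPixels() has to be called before reading img.pixels):

img.loadPixels(); // fills img.pixels with 4 values (R, G, B, A) per pixel
let x = 10, y = 20;
let index = (x + y * img.width) * 4; // start of pixel (10, 20) in the flat array
let r = img.pixels[index];     // red
let g = img.pixels[index + 1]; // green
let b = img.pixels[index + 2]; // blue
let a = img.pixels[index + 3]; // alpha: the fourth value I was originally missing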
Code that I’m proud of
The part of the code I am most proud of is the section where the program reads the color of each pixel from the image and connects it to the demographic dataset. Instead of drawing the Empire State Building manually, the sketch actually reconstructs it from the photograph itself. The program loads the image and then scans across it every few pixels rather than reading every single one. For each sampled position it calculates where that pixel exists inside the image array, extracts the RGB color values, and then classifies the pixel as sky, window light, highlight, or building structure. After that, the demographic data determines whether the dot is drawn or not, so population directly controls how frequently certain colors appear. Because of this, the building is a data-driven reconstruction made of sampled points. The line that makes this possible is the calculation that converts the (x, y) position on the image into the correct position in the pixel array, which allows the code to access the exact color information for that location.
Another reason this is the part of the code I am most proud of is because I tried many different ways of displaying the data before arriving at this solution. I experimented with horizontal assignments, bar-style representations, mapping individual states directly, and even changing the size of the dots to represent population, but none of those approaches worked visually or conceptually. They either made the Empire State Building unrecognizable or the data unclear.
// sampling the image, skipping every 5 px to make dots
// (img.loadPixels() is called earlier so img.pixels is filled)
for (let x = 0; x < img.width; x += 5) {
  for (let y = 0; y < img.height; y += 5) {
    // locate the pixel inside the 1D pixel array to extract its color
    let index = (x + y * img.width) * 4;
    // RGB values of that pixel
    let r = img.pixels[index];
    let g = img.pixels[index + 1];
    let b = img.pixels[index + 2];
    // brightness decides how visible the dot should be
    let brightness = (r + g + b) / 3;
    // darker parts = bigger dots so the building silhouette appears
    let dotsize = map(brightness, 0, 255, 2, 0);
    // music reaction: louder song = dots expand
    dotsize = dotsize + vol * 20;
    // classify the pixel color
    let regionType;
    if (r > 200 && g > 200 && b > 200) {
      regionType = "white"; // bright highlights
    } else if (r > 200 && g > 120 && b < 100) {
      regionType = "orange"; // other buildings
    } else if (b > r && b > g) {
      regionType = "blue"; // background
    } else {
      regionType = "structure"; // gray building
    }
    // population controls how often each color appears
    let maxRegion = max(westPopu, southPopu, eastPopu);
    let allow = 0;
    if (regionType === "orange") { // WEST
      allow = map(westPopu, 0, maxRegion, 0.1, 1);
    } else if (regionType === "white") { // SOUTH
      allow = map(southPopu, 0, maxRegion, 0.1, 1);
    } else if (regionType === "blue") { // EAST
      allow = map(eastPopu, 0, maxRegion, 0.1, 1);
    } else {
      allow = 0.15; // so the gray structure always stays faintly visible
    }
    noStroke();
    fill(r, g, b);
    if (random() < allow) {
      circle(x, y, dotsize);
    }
  }
}
Overall Reflection
Overall I’m really happy with the final result. I like that it balances data and aesthetics. It is not a traditional chart, but it still encodes real demographic information: you don’t read numbers directly, yet you perceive the distribution through color presence.
If I continue this project, I would add multiple years of census data and animate the skyline over time, so you could see growth instead of a single snapshot. Another improvement would be adding clearer interaction, for example hovering to reveal which region each color corresponds to (a rough sketch of that idea is below). One limitation is that the visualization depends on the colors of this specific image, so if the image changes, the mapping must also change.
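A minimal sketch of that hover idea could reuse the same color thresholds on the canvas pixel under the mouse; this snippet would run at the end of draw(), and the labels just mirror my region-color mapping:

// hypothetical hover legend: classify the color under the cursor
let c = get(mouseX, mouseY); // [r, g, b, a] of that canvas pixel
let label = "";
if (c[0] > 200 && c[1] > 200 && c[2] > 200) label = "South (white)";
else if (c[0] > 200 && c[1] > 120 && c[2] < 100) label = "West (orange)";
else if (c[2] > c[0] && c[2] > c[1]) label = "Northeast (blue)";
if (label !== "") {
  fill(255);
  textSize(14);
  text(label, mouseX + 10, mouseY - 10);
}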