Week 14 – Final Project! – Megan

Music Producer — Final Project

“Music gives a soul to the universe, wings to the mind, flight to the imagination, and life to everything.” — Plato

Concept

I truly love music. Looking back at my past projects, I realized that I have implemented music in many of them as a core element. I think that comes from my surroundings and from growing up playing so many instruments, and for my final project I wanted others to feel this unique connection to music too. I wanted to give users a space to experiment with sound and visuals so that they would want to create music, which is very different from only listening to it. That is how my project was born.

“The only way to do great work is to love what you do.” — Steve Jobs

In general, I wanted to build something that actually feels like a creative tool and not just a demo. The idea is that you are a music producer. You start by recording your own voice, and then you build a whole song on top of it using different instruments like piano, drums, and bass, all controlled through physical buttons connected to an Arduino. Everything you play gets layered and looped, and at the same time the system generates a live visual artwork based on what you are creating.

I wanted to make something where every single action has a visual response, so that by the end you do not just have a song; you have something that looks like a living piece of art. The kind of thing where even if you have no idea how to make music, you can still produce something that feels like yours.

It is designed for anyone who wants to experiment with sound and visuals in a fun and intuitive way. The controls are simple enough that you do not need to be a musician, but expressive enough that if you are, you can actually do something interesting with it.

How It Works

Interaction Design

The whole experience is split into three modes: Piano, Drums, and Bass. In each mode, the four Arduino buttons trigger different sounds: piano notes, drum hits, or bass notes. The potentiometer controls the playback speed of any loops you have selected. An LED on the Arduino turns green when you are recording and goes back to red when you stop.
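
Roughly, the pot mapping boils down to something like this (a simplified sketch; potValue, activeLoops, and the 0.5x to 2x range are placeholder names and numbers, not the exact ones in my code):

// A simplified sketch of the pot mapping; potValue is the raw 0-1023
// reading from the Arduino, activeLoops the selected p5.SoundFile loops.
function applyPlaybackSpeed(potValue, activeLoops) {
  let rate = map(potValue, 0, 1023, 0.5, 2.0); // raw reading -> playback rate
  for (let loop of activeLoops) {
    loop.rate(rate); // p5.SoundFile.rate() changes speed (and pitch) together
  }
}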

On the screen you have a recordings panel that you can toggle open and closed, recording buttons, and mode selectors. The center of the screen always shows a Siri-like wave visualization that responds to everything playing. And depending on which mode you are in, a different kind of art is generated in the background as you play: subtle drifting lines for piano, glowing expanding circles for drums, and thickening colored waves for bass.

The flow is: record your voice, pick a mode, play sounds on top, record the whole thing as a loop, loop it, and keep building on top of that.

Arduino

The Arduino does only one thing: it reads the physical inputs and sends them to p5.js. It does not make any decisions about what the sounds mean. It sends the state of all four buttons and the potentiometer value every 50 milliseconds, formatted as b1,b2,b3,b4,pot. It also listens for messages back from p5 to control the LEDs: LED:1 turns the green LED on and LED:0 switches back to red.
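
On the p5 side, reading that message looks roughly like this (a sketch using the readSerial()/writeSerial() callback pattern from the class serial template; the variable names are placeholders):

// Sketch of the p5 side of the protocol described above.
// buttons, potValue, and isRecording are placeholders for my real state.
let buttons = [0, 0, 0, 0];
let potValue = 0;

function readSerial(data) {
  if (data != null) {
    // Expecting "b1,b2,b3,b4,pot", e.g. "0,1,0,0,712"
    let parts = trim(data).split(",");
    if (parts.length === 5) {
      buttons = parts.slice(0, 4).map(Number); // four button states, 0 or 1
      potValue = Number(parts[4]);             // raw pot reading, 0-1023
    }
    // Reply with the LED state so the Arduino can show recording status.
    writeSerial((isRecording ? "LED:1" : "LED:0") + "\n");
  }
}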

I kept the Arduino side as clean and minimal as possible on purpose, so that it is clear that p5.js is the brain and the Arduino is just the hands.

You can find the full Arduino source code here.

p5.js

The sketch handles literally everything else: the audio system, the visuals, the UI, the serial communication, and the art generation. It has multiple screens (menu, instructions, main), a full recording and playback system using p5.SoundRecorder and p5.SoundFile, an FFT analyzer for the wave visualization, a particle system for the art, and a styled HTML button system injected directly from JavaScript.

You can also see the full p5.js source code here.

How This Was Made

The starting point for the audio visualization was the Sound Visualization: Frequency Analysis with FFT video by Daniel Shiffman on The Coding Train. I had never used p5.FFT before, and that video really broke it down in a way I could actually follow. From there I understood how fft.waveform() gives you the time-domain signal and fft.getEnergy() gives you the energy of a frequency range, which is exactly what I needed to make the wave lines spread based on how loud the audio is. I spent a lot of time with that one and went back to it probably four or five times throughout the project.
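
The core of what I took from that video looks something like this (a minimal standalone sketch, not my actual drawing code):

// Minimal version of the FFT idea: the wave spreads with loudness.
let mic, fft;

function setup() {
  createCanvas(600, 400);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT(0.8, 1024); // smoothing, number of bins
  fft.setInput(mic);
}

function draw() {
  background(0);
  let wave = fft.waveform();                // time-domain samples, -1 to 1
  let level = fft.getEnergy(20, 20000);     // energy across the range, 0-255
  let baseAmp = map(level, 0, 255, 5, 120); // louder audio -> bigger spread
  stroke(255);
  noFill();
  beginShape();
  for (let i = 0; i < wave.length; i++) {
    let x = map(i, 0, wave.length - 1, 0, width);
    vertex(x, height / 2 + wave[i] * baseAmp);
  }
  endShape();
}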

For understanding how p5.SoundRecorder and p5.SoundFile work together, I went back to The Coding Train's p5.Sound library series starting from episode 11.1. The documentation alone was not enough for me; I needed to see it in action to understand the difference between what the recorder does versus what the SoundFile stores, and when it is actually ready to use.
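
The basic record-then-play pattern from that series is roughly this (simplified to the bare minimum):

// Minimal record-then-play sketch; my real recording system is more involved.
let mic, recorder, soundFile;

function setup() {
  createCanvas(400, 200);
  mic = new p5.AudioIn();
  mic.start();
  recorder = new p5.SoundRecorder();
  recorder.setInput(mic);         // the recorder listens to the mic
  soundFile = new p5.SoundFile(); // an empty SoundFile to record into
}

function keyPressed() {
  if (key === 'r') {
    recorder.record(soundFile);   // start filling the SoundFile
  } else if (key === 's') {
    recorder.stop();              // after this, soundFile is ready to play
  } else if (key === 'p') {
    soundFile.loop();             // play the recording back as a loop
  }
}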

For the HTML buttons inside p5, styling them with CSS classes and using createButton() and addClass(), I found The Coding Train's p5.js DOM tutorials really useful. I did not know you could inject a style tag directly from JavaScript in setup() and have it affect your buttons in real time. That part I figured out by going through the p5.js reference and experimenting a lot, but the DOM series gave me the foundation to even know it was possible.
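
The trick itself is small (a sketch; the class name and style rules here are made up, not the ones from my project):

function setup() {
  createCanvas(800, 600);

  // Inject a <style> tag from JavaScript; every button with the
  // .producer-btn class picks up these rules immediately.
  createElement('style', `
    .producer-btn {
      background: #222;
      color: #fff;
      border: 1px solid #555;
      border-radius: 6px;
      padding: 8px 14px;
      cursor: pointer;
    }
  `);

  let pianoBtn = createButton('Piano');
  pianoBtn.addClass('producer-btn');
  pianoBtn.mousePressed(() => { /* switch to piano mode here */ });
}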

For the particle system behind the piano lines and the drum circles, I referenced The Coding Train's Nature of Code series on particle systems for how to structure a particle with a life, a decay, and properties that change over time. I had seen it before, but applying it here in a way that felt musical and not just random honestly took a lot of trial and error.
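
The basic structure of such a particle looks something like this (a simplified sketch, not the exact class from my code):

// Bare-bones particle with a life and a decay, Nature of Code style.
class ArtParticle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.life = 255; // starts fully visible
    this.decay = 2;  // how fast it fades each frame
  }

  update() {
    this.x += random(-0.5, 0.5); // slow horizontal drift
    this.life -= this.decay;     // fade over time
  }

  isDead() {
    return this.life <= 0;       // ready to be removed from the array
  }

  show() {
    stroke(255, this.life);      // alpha tied to remaining life
    point(this.x, this.y);
  }
}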

For the serial communication between Arduino and p5.js I followed what we learned in class.

I also used ChatGPT, mainly for debugging. A big portion of the time I spent on this project was debugging, and having something I could explain the problem to and get a clear answer from helped a lot. I always made sure I understood what it was suggesting before implementing it, because if I did not understand it, I could not fix the next thing that broke on top of it.

What I’m Proud Of

The Mode System

The thing I am most proud of conceptually is how the same four buttons completely change meaning depending on which mode you are in. In piano mode they are notes, in drums mode they are different drum hits, in bass mode they are bass lines. And all of that switches with one click, no lag, no confusion. It sounded simple in theory, but getting the visual art to switch correctly as well, and making sure the wave visualization responded differently in bass mode, took a lot of coordination between different parts of the code. This is the core of the whole interaction design and I think it works really well.
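
Conceptually, the whole mode system reduces to one lookup table: the button index stays the same and only the mode row changes. Something like this (a sketch; the file names and note choices are placeholders):

let sounds = {};
let currentMode = 'piano';

function preload() {
  // One row per mode, one entry per physical button (placeholder files).
  sounds.piano = [loadSound('pianoC.mp3'), loadSound('pianoE.mp3'),
                  loadSound('pianoG.mp3'), loadSound('pianoB.mp3')];
  sounds.drums = [loadSound('kick.mp3'), loadSound('snare.mp3'),
                  loadSound('hihat.mp3'), loadSound('clap.mp3')];
  sounds.bass  = [loadSound('bassA.mp3'), loadSound('bassC.mp3'),
                  loadSound('bassD.mp3'), loadSound('bassE.mp3')];
}

// Called when Arduino button `index` (0-3) is pressed.
function handleButton(index) {
  sounds[currentMode][index].play(); // same button, different meaning per mode
}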

The Recordings Panel

The recordings panel was one of the more technically satisfying things I built. It is a slide-in panel inside the canvas, not a separate screen, where you can see all your recordings, click to play or stop them, double click to rename them, and delete selected ones. What makes it tricky is that all the interaction has to be calculated manually because it is drawn on the canvas and not a real HTML element. So I had to make sure the coordinates in drawRecordingsPanel() matched exactly with the hit detection in mousePressed(), otherwise clicking a recording would trigger the wrong one or nothing at all.

// y has to match exactly between drawRecordingsPanel() and mousePressed()
let y = 196 + i * 46;
if (mouseX < 252 && mouseY > y - 14 && mouseY < y + 24) {
  // row i was hit: play, stop, or select recording i here
}

Also, managing the selectedRecordings array, tracking which ones are playing, and deleting them in descending order so the indexes do not shift mid-loop took more debugging than I expected, and I was very happy when it finally worked correctly.
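
The delete logic boils down to something like this (a simplified sketch of the descending-order idea):

// Delete selected recordings back-to-front so earlier indexes stay valid.
// selectedRecordings holds indexes into the recordings array.
function deleteSelected() {
  // Sort descending first: removing index 7 cannot shift index 3,
  // but removing index 3 first would shift everything after it.
  selectedRecordings.sort((a, b) => b - a);
  for (let idx of selectedRecordings) {
    recordings[idx].stop();    // stop it if it is still playing
    recordings.splice(idx, 1); // now safe to remove
  }
  selectedRecordings = [];
}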

The Voice Wave

The wave visualization is honestly the thing I am most proud of technically. The idea is that all 9 lines overlap in the center when it is quiet and spread apart as the audio gets louder. I had to figure out a math system where each line gets a t value from -1 to 1, and the amplitude of that line is abs(t) times baseAmp, so the outermost lines spread the most and the center line never moves. Then I flip the direction for the top half versus the bottom half so they mirror each other like a reflection.

let t   = map(k, 0, N - 1, -1, 1);  // line k of N gets a position from -1 to 1
let amp = abs(t) * baseAmp;         // outer lines spread the most, the center line never moves
let dir = t >= 0 ? 1 : -1;          // top half mirrors the bottom half

And in bass mode the lines that belong to the note you are pressing get thicker using lerp(), so the thickness transitions smoothly instead of snapping. Getting that to feel right took a lot of tweaking. At first it looked too dramatic, then too subtle. I think where I landed feels good because it is noticeable without being distracting.
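
The smoothing itself is tiny; per frame it is roughly this (lineWeight and isPressedNote are placeholder names, not my real state):

// Ease the stroke weight toward its target instead of snapping.
let targetWeight = isPressedNote ? 6 : 1.5;       // thick while the note is held
lineWeight = lerp(lineWeight, targetWeight, 0.1); // 0.1 controls how fast it eases
strokeWeight(lineWeight);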

The Art Per Mode

Making every sound trigger a different visual response was something I really cared about from the beginning. For piano it is thin drifting lines that appear across the screen and slowly fade. For drums the circles are sized by drum type: the kick spawns big glowing circles and the hi-hat spawns tiny ones. For bass the wave lines thicken in the color that belongs to the note you are pressing. None of it is random in a meaningless way; every parameter was chosen to match the feeling of that sound. That intentionality is something I am really proud of.
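
The sizing idea is basically a lookup per drum type, something like this (a sketch; the exact sizes are placeholders):

// Spawn a circle whose size depends on which drum was hit.
let circles = [];
const DRUM_SIZES = { kick: 180, snare: 90, hihat: 25, clap: 60 };

function spawnDrumCircle(drumType) {
  circles.push({
    x: random(width),
    y: random(height),
    size: DRUM_SIZES[drumType], // kick = big glow, hihat = tiny dot
    life: 255,                  // fades out over time, like the particles above
  });
}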

Areas for Future Improvement

Honestly one of the biggest things missing right now is the ability to export your final song as an audio file. You can loop everything and listen to it but there is no way to actually save what you made and share it with someone, which feels like a pretty big missing piece for a tool that is supposed to be about music production.

I would also love to add volume control per recording. Right now all selected recordings play at the same level, which means if you stack too many loops it gets muddy and hard to hear what is what. Being able to raise or lower individual loops would make it way more usable as an actual production tool.

The piano art could also be a lot more interesting. Right now the lines just drift horizontally and fade, but I wanted them to feel more like a magic keyboard that reacts specifically to which note you are pressing, not just that a note was pressed. I did not get there in time.

And if I had another two weeks I would probably add some kind of BPM sync so that the loops actually stay in time with each other instead of playing freely. Right now if you start two recordings at slightly different moments they drift out of sync pretty fast, which limits how musical the final result can actually be.

Overall I am really proud of this project. It is the most complex thing I have built so far and it actually works the way I imagined it when I first wrote down the concept. That does not always happen so it feels good when it does.

SCHEMATIC

IMAGES

VIDEO DEMO
