All Posts

Week 9: Analog input & output

Concept:

This project is a reaction timer game built using an Arduino, a push button, a potentiometer, and two LEDs. The main idea is to test how fast a person can react after seeing a visual signal. One LED acts as the “start signal” light — when it turns on, the player must press the button as quickly as possible. The potentiometer controls how difficult the game is by adjusting the random delay time before the signal LED lights up. After the player presses the button, the Arduino measures their reaction time and shows it on the computer screen through the Serial Monitor.

The second LED is used to make the project more interactive by representing the player’s reaction speed through brightness. If the player reacts quickly, the second LED lights up brightly. If the reaction is slower, the second LED is dimmer. This gives instant visual feedback without needing to check the Serial Monitor every time. The whole project is a fun way to learn about digital inputs, analog inputs, and timing functions with Arduino, and it can easily be expanded with sounds, scores, or even multiple players later on.

int pushButton = 2;  // Button pin
int potPin = A0;     // Potentiometer pin
int ledPin = 8;      // LED pin

void setup() {
  Serial.begin(9600);
  randomSeed(analogRead(A1));  // Seed random() from an unconnected analog pin so each run differs
  pinMode(pushButton, INPUT_PULLUP);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int difficulty = analogRead(potPin);  // Read difficulty setting
  int waitTime = map(difficulty, 0, 1023, 1000, 5000); // Map to 1-5 seconds
  
  Serial.println("Get Ready...");
  delay(random(1000, waitTime));  // Random wait between 1 s and the pot-set maximum
  
  digitalWrite(ledPin, HIGH);  // Turn LED ON (reaction signal)

  unsigned long startTime = millis();  // Start counting time
  while (digitalRead(pushButton) == HIGH) {
    // Wait for button press
  }
  unsigned long reactionTime = millis() - startTime;  // Reaction time calculation
  
  digitalWrite(ledPin, LOW);  // Turn LED OFF after button pressed
  
  Serial.print("Reaction Time (ms): ");
  Serial.println(reactionTime);
  
  delay(3000);  // Wait 3 seconds before starting again
}

Challenges:

My Arduino board got stuck in an infinite loop from a test sketch I had uploaded earlier and stopped accepting any further uploads. Because of this, I had to change my project idea and use a friend’s Arduino to run the code.

New Concept:

This is a simple circuit using two LEDs, a potentiometer, and a push button. The potentiometer controls the brightness of one LED, while the other LED is controlled by the button.

Week 9 — Sensors

Ideation

When I first started thinking about what to do for this project, I wanted to create something that was a bit playful (a bit gamified?), while also showcasing how analog and digital inputs can work together in a meaningful way. I landed on the idea of a “Human Lie Detector” — a lighthearted concept that mimics stress or honesty detection based on physical responses. This was inspired by an existing device that I often saw being used in Korean drinking games. That machine sends small electric shocks to the person attached to it, based on the sweat produced by their hand. Since I don’t intend to cause pain to anybody, I decided to use a sensor that could imitate the gist of the lie-detector idea. So, the pressure sensor became the core of the project, as it allowed me to measure subtle changes in finger pressure, which I imagined could simulate a person’s nervousness or calmness when answering a question. By combining this analog input with a digital button (to simulate the moment a person gives an answer), I could create a clear trigger point and decision logic for controlling the LEDs. The red and green LEDs became symbolic indicators: red for “lie” and green for “truth,” depending on how much pressure was applied.

Implementation

To bring the idea to life, I started by wiring up the core components: a pressure sensor, a pushbutton, and two LEDs (one red and one green). I used a 10k resistor to build a voltage divider circuit with the pressure sensor, allowing me to read its analog values through pin A0 on the Arduino. The pushbutton was connected to digital pin 2 and configured with Arduino’s internal pull-up resistor to simplify the circuit. For output, I connected the red LED to pin 8 and the green LED to pin 9. You can see the demo video below (sound ON):

Here’s the schematic I drew:

Code Highlights

One small but important part of my code was adding a mid-range threshold to control the green LED. At first, I noticed that the green LED would turn on even when there was barely any pressure on the sensor, just because the button was pressed. To fix this, I added a lower bound to the pressure range. Now the green LED only turns on if the pressure falls between a low threshold (50) and a high threshold (500). This simple change made the interaction feel a lot more accurate and intentional. It wasn’t a complex fix, but it solved an annoying issue and made the project work the way I wanted.

int threshold = 500;         
int lower = 50;
// ...
if (buttonState == LOW) {
  if (pressureVal > threshold) {
    digitalWrite(redLED, HIGH);
    digitalWrite(greenLED, LOW);
  } else if (pressureVal > lower) {
    digitalWrite(redLED, LOW);
    digitalWrite(greenLED, HIGH);
  } else {
    digitalWrite(redLED, LOW);
    digitalWrite(greenLED, LOW);
  }
} else {
  digitalWrite(redLED, LOW);
  digitalWrite(greenLED, LOW);
}

My initial (incorrect) code:

if (buttonState == LOW && pressureVal > threshold) {
  digitalWrite(redLED, HIGH);
  digitalWrite(greenLED, LOW);
}
else if (buttonState == LOW && pressureVal < lower) {
  digitalWrite(greenLED, HIGH);   // always turned on green LED if button is pressed
  digitalWrite(redLED, LOW);
}
else {
  digitalWrite(redLED, LOW);
  digitalWrite(greenLED, LOW);
}

Reflection

Interestingly, the most challenging part of this assignment wasn’t the coding or wiring; it was actually drawing the schematic. I found it surprisingly difficult to sketch the circuit accurately without having the physical components in front of me. To overcome this, I brought the components to my workspace from the IM lab and started building the circuit first, using the multiple schematics we learned about in class as references. I looked closely at the examples for the LEDs, the button, and the pressure sensor, and then worked backwards from my working setup to draw the schematic. While this might not have been the intended approach for the assignment, it turned out to be incredibly educational for me. I ended up researching circuit diagram symbols and layout conventions on my own, which gave me a much better understanding of how to read and create schematics. In a way, doing it “backwards” helped me build my confidence in circuit drawing. So next time, hopefully, I’ll feel ready to start with the schematic before jumping into the circuit.

Week 9: analog input & output

For this project, we were asked to use both a digital and an analog sensor to control two separate light bulbs. I chose an LDR as the analog sensor and a push button switch as the digital one.

The circuit was designed with both connections set up in parallel, but each individual sensor and its LED were wired in series. That means the LDR is connected to a resistor and then to an LED in one path, while the button switch is connected to a resistor and another LED in a separate path. Both paths share power and ground—so they’re technically in parallel, but operate independently.

My favorite part of the project was seeing how the LDR affected the brightness of the LED in real-time. The more light it receives, the dimmer the LED gets, and vice versa. It was satisfying to see that dynamic shift as I covered and uncovered the sensor.

 

For the digital sensor, the behavior is much simpler: the LED turns on when the button is pressed and turns off when it’s released. There’s no change in brightness—just a clean on/off action.

One challenge I faced was protecting the LEDs from burning out. I ended up frying three of them before realizing that I needed to be more careful with resistor values and connections. In the end, I started disconnecting my board from power before making any changes to the wiring, which I now realize I should have done from the start.

For future improvements, I’d love to swap the LDR for a microphone sensor and make the LED respond to sound intensity instead of light. I think it would be fun to experiment with how volume or rhythm could control brightness, especially for interactive or musical projects.

Hand-drawn schematic diagram:

Switch not pushed. LDR exposed to room light.

Switch not pushed. LDR covered.

Switch pushed. LDR exposed to room light.

Week 8: Creative switch

For this project, I got inspired by bowling, so I wanted to use a rolling ball to control the LED light. The idea was to turn it into a competition between two players, where the goal is to light up a bulb three times before the other player.

To build this, I created a mini bowling alley out of cardboard. At the center of the alley, there’s a handle wrapped in copper tape. The game starts with the handle tilted at an angle, and each player takes turns rolling a ball, trying to push the handle into the exact center position. When the handle aligns just right, it connects with another piece of copper tape, completing the circuit and lighting up an LED. The first player to do this successfully three times wins the game.

My favorite part of this project was designing the handle mechanism and experimenting with the balance between accuracy and force.

One challenge I faced was adjusting the friction of the handle. At first, it slid way too easily, which made the game too simple and boring. But when I added more resistance, it became too difficult, and no one could win. After asking multiple people to try the game and integrating their feedback, I finally reached a state where the game worked as I had pictured it at the start and became enjoyable to play.

Things I’d like to improve in the future include adding a digital scoreboard to track each player’s progress and wins. I’d also like to experiment with different materials, as cardboard was really hard to work with due to differences in friction between different kinds and cuts of cardboard.

Starting/Losing:

Winning: 

Real-time Audio Visualization System: Frequency Band Analysis for Musical Element Representation | Week-9

Please navigate to GitHub to find the source code.

This study presents the design, implementation, and evaluation of a real-time audio visualization system that maps frequency bands to corresponding LED indicators representing distinct musical elements. The system consists of an Arduino-based hardware controller integrated with Python-based audio processing software, utilizing Fast Fourier Transform (FFT) for frequency analysis. By isolating energy from specific frequency ranges related to vocals, chords, percussion, and bass, the system creates an intuitive visual representation of music’s core components. The implementation features multiple operational modes, tempo synchronization capabilities, and adaptive smoothing algorithms to create responsive yet stable visualizations. Testing confirms the system achieves low-latency performance with approximately 30ms end-to-end delay while effectively representing musical structure through synchronized LED patterns.

System Architecture

The audio visualization system integrates hardware and software components to transform audio signals into visual LED patterns. The architecture follows a clear signal path from audio capture through processing to visual output, with multiple modes of operation.

Hardware-Software Integration

The system consists of two primary components: an Arduino microcontroller handling LED control and user inputs, and a Python application performing audio capture and advanced signal processing. These components communicate bidirectionally via serial connection.

The hardware layer includes:

      • 5 LEDs connected to Arduino pins 3, 4, 5, 6, and 7, representing different musical elements
      • A button on analog pin A0 for mode selection
      • A potentiometer on analog pin A1 for volume control in audio control mode
      • Serial connection to the host computer for data transfer

The software layer includes:

      • Audio capture and buffer management via PyAudio
      • Frequency analysis using Fast Fourier Transform (FFT)
      • Frequency band isolation and energy calculation
      • Beat detection and tempo synchronization
      • Volume control integration with the operating system
      • Serial communication with the Arduino controller

Signal Flow and Processing

The system’s signal path follows a clear sequence:

      1. Audio is captured from the computer’s microphone or line input at 44.1kHz with 16-bit resolution
      2. The audio is processed in chunks of 2048 samples to balance frequency resolution and latency
      3. Each chunk undergoes windowing with a Hann function to minimize spectral leakage
      4. FFT converts the time-domain signal to frequency domain representation
      5. Energy in specific frequency bands is calculated using both peak and average values
      6. The energy values are logarithmically scaled and normalized to match human perception
      7. Smoothing algorithms are applied to prevent LED flickering while maintaining responsiveness
      8. The processed values are sent to Arduino via serial communication as LED brightness levels
      9. Arduino updates LED states based on received data and current operational mode

Operational Modes

The system implements three distinct operational modes:

      1. POT_MODE (Audio Control Mode): The potentiometer controls system volume, with LED brightness indicating the volume level. The Python application reads potentiometer values from Arduino and adjusts system volume accordingly.
      2. ANIMATION_MODE: The system runs a predefined sequential animation pattern independent of audio input. LEDs turn on and off in sequence with configurable timing, creating a light show effect.
      3. VISUALIZER_MODE: The core functionality where LEDs respond to musical elements in real-time. The Python application processes audio, extracts frequency information, and sends LED brightness values to Arduino.

Mode switching occurs via the button connected to analog pin A0. The Arduino implements debouncing with a 50ms delay to prevent false triggers during button presses.

Audio Acquisition and Processing

The audio processing pipeline forms the foundation of the visualization system, transforming raw audio signals into meaningful musical element representations through several sophisticated processing stages.

Audio Capture and Preprocessing

Audio acquisition begins with PyAudio capturing data from the selected input device. The system implements a robust device selection mechanism that:

      • Lists all available audio input devices
      • Allows manual device selection
      • Attempts systematic testing of devices when selection is ambiguous
      • Tries multiple parameter combinations for maximum compatibility
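As a rough sketch of the listing-and-opening part of that mechanism (not the project’s actual device-selection code), using PyAudio’s standard API with the capture parameters described in this section:

import pyaudio

pa = pyaudio.PyAudio()

# List all available input devices
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    if info.get("maxInputChannels", 0) > 0:
        print(i, info["name"])

# Open a chosen device with the system's capture parameters
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=44100,
                 input=True, frames_per_buffer=2048)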

Once captured, the audio undergoes preprocessing:

      1. Conversion to NumPy array for efficient processing
      2. Normalization to the range [-1, 1]
      3. Application of Hanning window to minimize spectral leakage during FFT

The system uses a chunk size of 2048 samples at 44.1kHz, striking a balance between frequency resolution (approximately 21.5Hz per FFT bin) and processing latency.
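To make those numbers concrete, here is a minimal sketch (my own variable names, not necessarily those in main.py) of the preprocessing steps and the band-to-bin arithmetic:

import numpy as np

RATE, CHUNK = 44100, 2048

def preprocess(raw_bytes):
    """Raw PyAudio bytes -> normalized, Hann-windowed float array."""
    audio = np.frombuffer(raw_bytes, dtype=np.int16).astype(np.float32)
    audio /= 32768.0                       # normalize to [-1, 1]
    return audio * np.hanning(len(audio))  # window against spectral leakage

bin_width = RATE / CHUNK  # ~21.5 Hz of spectrum per FFT bin

def band_to_bins(f_low, f_high):
    """Convert a frequency band in Hz to (low, high) FFT bin indices."""
    return int(f_low / bin_width), int(f_high / bin_width)

print(band_to_bins(50, 120))  # bass band -> roughly bins (2, 5)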

Frequency Analysis and Band Extraction

At the core of the system lies the frequency analysis engine that isolates different musical elements:

from numpy.fft import fft  # import assumed; scipy.fft.fft behaves the same here

# Perform FFT
fft_data = fft(audio_data)
fft_data = np.abs(fft_data[:CHUNK // 2]) / CHUNK  # take magnitude of first half

 

The system defines specific frequency bands for each musical element:

      • Vocals: 300-3000 Hz (midrange frequencies where human voice is most prominent)
      • Chord: 200-2000 Hz (harmonic musical content)
      • Snares: 150-250 Hz (characteristic snare drum frequencies)
      • Claps: 2000-5000 Hz (high transient sounds)
      • Bass: 50-120 Hz (low frequency rhythmic content)

For each band, energy is calculated using a weighted combination of peak and average values, tailored to the characteristics of each musical element:

      • For transient sounds (claps, snares): 90% peak, 10% average for fast response
      • For bass: 70% peak, 30% average with additional transient detection
      • For vocals and chords: 50% peak, 50% average for balanced representation

The system applies logarithmic scaling to match human perception:

band_level = 20 * np.log10(band_mag + 1e-10)

 

Values are then normalized to a 0-100 scale with sensitivity adjustment and noise floor thresholding to prevent false triggers from background noise.
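Pulling the weighting, log scaling, and normalization together, a hedged sketch of the per-band calculation could look like this; the weights follow the list above, but the names and the noise-floor value are illustrative assumptions rather than the exact code from main.py:

import numpy as np

def band_energy(fft_mag, f_low, f_high, peak_w, avg_w,
                rate=44100, chunk=2048, noise_floor_db=-60.0):
    """Weighted peak/average energy for one band, log-scaled to 0-100."""
    lo, hi = int(f_low * chunk / rate), int(f_high * chunk / rate)
    band = fft_mag[lo:hi]
    if band.size == 0:
        return 0.0
    mag = peak_w * band.max() + avg_w * band.mean()
    level_db = 20 * np.log10(mag + 1e-10)  # logarithmic scaling, as above
    # Map the noise floor to 0 and full scale (0 dB) to 100, clipping outliers
    level = (level_db - noise_floor_db) / -noise_floor_db * 100
    return float(np.clip(level, 0, 100))

# Example: snare band with the 90% peak / 10% average weighting
# snares = band_energy(fft_data, 150, 250, peak_w=0.9, avg_w=0.1)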

Beat Detection and Tempo Synchronization

The visualization incorporates beat detection and tempo synchronization to align with musical structure. The detection algorithm:

      1. Monitors audio energy over time using a sliding window
      2. Identifies sudden increases in energy above a threshold as potential beats
      3. Ensures minimum time between detected beats to prevent false positives
      4. Updates an internal tempo estimate based on timing between beats

The system maintains a 4/4 timing pattern typical of many musical genres, with:

      • Bass emphasis on beats 1 and 3
      • Snare emphasis on beats 2 and 4

A fallback mechanism uses fixed tempo when beat detection becomes unreliable, and users can manually set tempo with the command tempo_set:[bpm].
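The description above maps onto a simple energy-based scheme like the following sketch; the 1.3 onset threshold, window length, and smoothing constants are assumptions for illustration, not the exact values in main.py:

import time
from collections import deque

energy_history = deque(maxlen=21)  # ~1 s of chunks (44100 / 2048 ≈ 21.5 per s)
last_beat_time = 0.0
MIN_BEAT_INTERVAL = 0.25           # seconds; rejects anything above 240 BPM
tempo_bpm = 120.0                  # running tempo estimate

def detect_beat(chunk_energy):
    """Return True when a chunk looks like a beat onset."""
    global last_beat_time, tempo_bpm
    now = time.time()
    warmed_up = len(energy_history) == energy_history.maxlen
    avg = sum(energy_history) / len(energy_history) if energy_history else 0.0
    energy_history.append(chunk_energy)
    if not (warmed_up and chunk_energy > avg * 1.3
            and now - last_beat_time > MIN_BEAT_INTERVAL):
        return False
    if last_beat_time > 0:
        # Update the tempo estimate from the inter-beat interval
        tempo_bpm = 0.8 * tempo_bpm + 0.2 * (60.0 / (now - last_beat_time))
    last_beat_time = now
    return True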

Smoothing and Decay

To create visually pleasing and stable LED behavior, the system implements adaptive smoothing:

for band, level in band_levels.items():
    smooth_factor = smoothing_factors.get(band, SMOOTHING)
    # If new level is significantly higher, respond more quickly
    if level > smoothed_levels[band] * 1.5:
        smooth_factor = min(0.9, smooth_factor * 1.5)
    smoothed_levels[band] = smoothed_levels[band] * (1 - smooth_factor) + level * smooth_factor

Each musical element receives custom smoothing parameters:

      • Vocals: 0.4 (moderate smoothing)
      • Chord: 0.5 (medium smoothing)
      • Snares: 0.9 (minimal smoothing for fast attack)
      • Claps: 0.9 (minimal smoothing for fast attack)
      • Bass: 0.7 (balanced attack and decay)

The Arduino implements additional decay effects when no data is received, gradually reducing LED brightness at configurable rates for each channel.
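In Python terms, that firmware-side decay amounts to something like the sketch below; the rates are placeholders, and the real audio.ino works on 0-255 integer brightness values per LED:

# Per-channel decay rates in brightness units per update tick (illustrative)
decay_rates = {"vocals": 8, "chord": 6, "snares": 20, "claps": 20, "bass": 12}
brightness = {band: 0 for band in decay_rates}

def apply_decay():
    """Fade each channel toward zero when no new serial data has arrived."""
    for band, rate in decay_rates.items():
        brightness[band] = max(0, brightness[band] - rate)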

Hardware Implementation

The hardware architecture provides the physical interface for the visualization system, handling LED control, user inputs, and communication with the software layer.

LED Configuration and Control

The system utilizes five LEDs, each representing a specific musical element:

      • LED 1 (Pin 3): Vocals (300-3000 Hz)
      • LED 2 (Pin 4): Chord (200-2000 Hz)
      • LED 3 (Pin 5): Snares (150-250 Hz)
      • LED 4 (Pin 6): Claps (2000-5000 Hz)
      • LED 5 (Pin 7): Bass (50-120 Hz)

Note: The system design documentation mentions pins 3, 9, 5, 6, and 10; those pins were chosen because they are pulse-width modulation (PWM) pins on the Arduino Uno.

The Arduino controls LED brightness using PWM where supported, with special handling for non-PWM pins:

// For PWM pins (3, 5, 6), use analogWrite
if (ledPins[i] == 3 || ledPins[i] == 5 || ledPins[i] == 6) {
    analogWrite(ledPins[i], visualizerBrightness[i]);
} else {
    // For non-PWM pins (4, 7), use threshold
    digitalWrite(ledPins[i], (visualizerBrightness[i] > 127) ? HIGH : LOW);
}

This implementation elegantly handles the Arduino’s hardware limitation where only certain pins support analog (PWM) output for variable brightness.

User Interface Components

The system provides a minimal but effective user interface through two analog inputs:

        Mode Selection Button (A0): A momentary push button connected to analog pin A0 allows users to cycle through the three operational modes. The implementation includes software debouncing to prevent false triggers:

// Check if button state has been stable long enough
if ((millis() - lastDebounceTime) > debounceDelay) {
    // If button state has changed
    if (reading != currentButtonState) {
        currentButtonState = reading;
        // If button is pressed (HIGH when pressed, no pull-up)
        if (currentButtonState == HIGH && lastButtonState == LOW) {
            // Cycle through modes
            switch(currentMode) {
                case POT_MODE:
                    currentMode = ANIMATION_MODE;
                    break;
                case ANIMATION_MODE:
                    currentMode = VISUALIZER_MODE;
                    Serial.println("VISUALIZER"); // Signal to computer
                    break;
                case VISUALIZER_MODE:
                    currentMode = POT_MODE;
                    break;
            }
        }
    }
}

        Volume Control Potentiometer (A1): In POT_MODE, the potentiometer reading is mapped to system volume. The Arduino reads the analog value and sends it to the Python application, which adjusts system volume accordingly.

Serial Communication Protocol

The Arduino and Python application communicate through a text-based serial protocol over a USB connection at 9600 baud. The protocol includes:

From Arduino to Python:

      • VISUALIZER: Notification of mode change to visualizer mode
      • MODE:ANIMATION: Notification of mode change to animation mode
      • MODE:AUDIO_CONTROL: Notification of mode change to audio control mode
      • VOL:[value]: Potentiometer reading for volume control

From Python to Arduino:

      • L:[val1],[val2],[val3],[val4],[val5]: LED brightness values
      • DECAY:[val1],[val2],[val3],[val4],[val5]: Custom decay rates for each LED

This bidirectional communication ensures synchronization between hardware and software components while maintaining a clear separation of responsibilities.
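For illustration, the Python end of this protocol could be implemented roughly as below with pyserial; the port name is an assumption, and the message formats follow the lists above:

import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)  # port name varies by system

def send_led_levels(levels):
    """Send five 0-255 brightness values as an L: command."""
    ser.write(("L:" + ",".join(str(int(v)) for v in levels) + "\n").encode())

def poll_arduino():
    """Read one line from the Arduino and classify mode/volume messages."""
    line = ser.readline().decode(errors="ignore").strip()
    if line.startswith("VOL:"):
        return ("volume", int(line[4:]))  # raw potentiometer reading
    if line in ("VISUALIZER", "MODE:ANIMATION", "MODE:AUDIO_CONTROL"):
        return ("mode", line)
    return None

send_led_levels([200, 120, 0, 255, 90])  # example frame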

Software Implementation

The software architecture handles sophisticated audio processing while providing a responsive and configurable user experience through multiple integrated components.

Python Application Structure

The Python application (main.py) implements a comprehensive audio processing and control system with several key components:

      1. AudioProcessor Class: The main class encapsulating audio processing functionality, system volume control, Arduino communication, and visualization logic.
      2. Audio Capture and Device Management: Robust audio device detection and selection with fallback mechanisms to ensure the system works across different hardware configurations.
      3. Volume Control Integration: Platform-specific volume control through the pycaw library on Windows with simulation fallback for other platforms:
def set_system_volume(self, level_percent):
    if not WINDOWS or self.volume is None:
        print(f"[simulation] Setting system volume to {level_percent}%")
        return True
    try:
        # Convert percentage to volume scalar
        volume_scalar = self.min_volume + (self.max_volume - self.min_volume) * (level_percent / 100.0)
        # Ensure within valid range
        volume_scalar = max(self.min_volume, min(self.max_volume, volume_scalar))
        # Set volume
        self.volume.SetMasterVolumeLevel(volume_scalar, None)
        return True
    except Exception as e:
        print(f"Error setting volume: {e}")
        return False

 

 

      4. Frequency Analysis Engine: Implementation of FFT-based frequency analysis with band extraction, energy calculation, and normalization.
      5. Beat Detection System: Energy-based beat detection with adaptive tempo tracking and fallback mechanisms.
      6. Visualization Thread: A dedicated thread for audio processing and visualization updates to ensure responsive LED control without blocking the main program flow.
      7. Command Processing: Handling of special commands for tempo control and system configuration.
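As a sketch of that last component, a command such as tempo_set:[bpm] (mentioned earlier) could be parsed along these lines; the function and attribute names here are partly assumed rather than taken verbatim from main.py:

def handle_command(cmd, processor):
    """Parse console commands such as 'tempo_set:120'."""
    if cmd.startswith("tempo_set:"):
        try:
            bpm = float(cmd.split(":", 1)[1])
        except ValueError:
            print(f"Invalid tempo command: {cmd}")
            return
        processor.tempo_bpm = bpm           # attribute name assumed
        processor.tempo_sync_enabled = True
        print(f"Tempo fixed at {bpm} BPM")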

Arduino Firmware Structure

The Arduino firmware (audio.ino) implements the hardware control logic with several key components:

      1. Pin Configuration and Initialization: Setup of LED pins, button input, and serial communication.
      2. Mode Management: Implementation of the three operational modes with clean transitions between them.
      3. Button Debouncing: Reliable button state detection with debounce delay to prevent erratic mode switching.
      4. Serial Data Processing: Parsing of LED brightness commands from the Python application:
void processSerialData() {
    if (Serial.available() > 0) {
        String data = Serial.readStringUntil('\n');
        // Check if it's an LED level command (L:val1,val2,val3,val4,val5)
        if (data.startsWith("L:")) {
            // Remove the "L:" prefix
            data = data.substring(2);
            // Parse comma-separated values
            int index = 0;
            int lastCommaIndex = -1;
            int nextCommaIndex = data.indexOf(',');
            while (index < numLEDs && nextCommaIndex != -1) {
                String valueStr = data.substring(lastCommaIndex + 1, nextCommaIndex);
                visualizerBrightness[index] = valueStr.toInt();
                lastCommaIndex = nextCommaIndex;
                nextCommaIndex = data.indexOf(',', lastCommaIndex + 1);
                index++;
            }
            // Get the last value
            if (index < numLEDs) {
                String valueStr = data.substring(lastCommaIndex + 1);
                visualizerBrightness[index] = valueStr.toInt();
            }
            // Update LEDs with new brightness values
            updateLEDs();
        }
    }
}

 

 

      5. LED Control Functions: Implementation of different LED patterns for each mode, with special handling for PWM-capable pins.
      6. Decay Effect: Gradual reduction of LED brightness when no data is received, creating a smoother visual experience.

Visualization Logic and Mapping

The core visualization logic maps processed audio data to LED brightness values through several steps:

      1. Band Energy Calculation: For each frequency band, energy is calculated using a combination of peak and average values with band-specific weighting.
      2. Logarithmic Scaling: Energy values are logarithmically scaled to match human perception of loudness.
      3. Normalization: Values are normalized to a 0-100 scale and then converted to 0-255 for LED brightness control.
      4. Tempo-Synchronized Mapping: When tempo synchronization is enabled, certain musical elements (bass, snares) are emphasized according to their position in the 4/4 timing pattern:
if self.tempo_sync_enabled:
    # Apply 4/4 backbeat pattern
    # Bass drum on beats 1 and 3
    if self.beat_position == 0 or self.beat_position == 2:
        # Check if there's significant bass energy
        if smoothed_band_levels['bass'] > 20:
            led_values[4] = int(smoothed_band_levels['bass'] * 2.55)  # Pin 10 - bass
    # Snare on beats 2 and 4
    if self.beat_position == 1 or self.beat_position == 3:
        # Check if there's significant snare energy
        if smoothed_band_levels['snares'] > 20:
            led_values[2] = int(smoothed_band_levels['snares'] * 2.55)  # Pin 5 - snares
else:
    # Regular frequency-responsive mode without tempo sync
    led_values[2] = int(smoothed_band_levels['snares'] * 2.55)
    led_values[4] = int(smoothed_band_levels['bass'] * 2.55)

This implementation creates a visualization that not only responds to frequency content but also respects the musical structure, enhancing the connection between audio and visual elements.

Performance Evaluation and Results

The audio visualization system’s performance was evaluated across multiple dimensions to assess its effectiveness in real-time musical element representation.

Latency Analysis

End-to-end latency measurement revealed several processing stages that contribute to the overall system delay:

      1. Audio Capture: 2-5ms for buffer filling at 44.1kHz with 2048 samples
      2. FFT Processing: 5-10ms for 2048-point FFT and frequency band extraction
      3. Serial Communication: 3-5ms for data transfer between Python and Arduino
      4. LED Update: 1-2ms for Arduino to update LED states

The total measured latency ranges from 16-32ms, falling well below the 50ms threshold typically considered acceptable for real-time audio visualization applications. This low latency ensures that the visual representation remains synchronized with the audio, creating a cohesive multimedia experience.

Visualization Accuracy

The system’s ability to represent different musical elements was assessed through testing with various audio sources:

      1. Isolated Instruments: When tested with isolated instrument recordings (drums, bass, vocals), the system correctly illuminated the corresponding LEDs with intensity proportional to the instrument’s prominence.
      2. Complex Musical Content: With full music tracks, the system demonstrated the ability to separate overlapping elements and visualize the dominant components at any given moment.
      3. Beat Detection: The beat detection algorithm successfully identified approximately 85% of beats in music with clear rhythmic patterns, with performance decreasing to 70% for music with complex or ambiguous rhythms.
      4. Tempo Tracking: The adaptive tempo tracking maintained synchronization with tempo changes when they occurred gradually, though sudden changes required several seconds for adjustment.

Resource Utilization

System performance monitoring revealed:

      1. CPU Usage: The Python application utilized 5-10% CPU on a modern computer, with FFT processing being the most computationally intensive operation.
      2. Memory Usage: Memory consumption remained stable at approximately 30-40MB, indicating no significant memory leaks during extended operation.
      3. Arduino Processing: The Arduino maintained a reliable 30 Hz update rate, with sufficient processing headroom for additional features.

User Experience Factors

The system was evaluated for several user experience factors:

      1. Visual Stability: The customized smoothing parameters for each musical element created stable visualization without excessive flickering while maintaining responsiveness to transient sounds.
      2. Intuitive Mapping: The association of specific LEDs with musical elements (bass, vocals, percussion) created an intuitive mapping that users could readily understand without extensive explanation.
      3. Mode Switching: The button-based mode switching provided a simple interface that users could master quickly, with clear visual feedback when changing modes.
      4. Volume Control: The potentiometer-based volume control in Audio Control Mode offered intuitive and precise adjustment of system volume, providing value beyond mere visualization.

Challenges and Limitations

Despite its successful implementation, the system faces several challenges and limitations that affect its performance and applicability.

Hardware Constraints

The Arduino platform imposes several limitations:

      1. PWM Availability: Only pins 3, 5, 6, 9, 10, and 11 on standard Arduino boards support PWM for analog brightness control. The implementation works around this by using threshold-based digital output for non-PWM pins, but this reduces the visual fidelity of affected channels.
      2. LED Resolution: The 8-bit PWM resolution (0-255 brightness levels) may be insufficient for subtle transitions in quieter passages of music.
      3. Processing Power: The Arduino’s limited processing capability restricts the implementation of more advanced visualization algorithms directly on the microcontroller.

Audio Processing Challenges

Several challenges affect the audio processing pipeline:

      • Frequency Band Overlap: Musical elements often overlap in the frequency spectrum. For example, vocals and certain instruments share frequency ranges, making perfect separation impossible with simple band-pass filtering.
      • Environmental Noise: Background noise affects visualization accuracy, especially in quiet passages. The implemented noise floor thresholding helps but can’t eliminate all false triggers.
      • Beat Detection Reliability: Beat detection works well for music with clear rhythmic patterns but struggles with complex or evolving rhythms, necessitating the fallback to fixed tempo mode:

# If we haven't detected a beat in a while, go back to fixed tempo
if current_time - self.last_beat_time > 2.0:
    use_fixed_tempo = True

      • Device Compatibility: Audio device selection and configuration vary across systems, requiring the robust fallback mechanisms implemented in the software.

 

Conclusion

This research presented a comprehensive real-time audio visualization system that successfully maps frequency bands to musical elements through integrated hardware and software components. The system effectively balances technical constraints with user experience considerations to create a responsive, intuitive, and visually pleasing representation of audio content.

Key Contributions

      1. Musical Element Visualization: The system goes beyond simple amplitude visualization by isolating and representing distinct musical elements (vocals, chord, snares, claps, bass), creating a more meaningful and informative visual experience.
      2. Integrated Hardware-Software Architecture: The clean separation between Arduino hardware control and Python-based audio processing creates a flexible and extensible system architecture that leverages the strengths of both platforms.
      3. Adaptive Processing Techniques: The implementation of customized smoothing, band-specific energy calculation, and adaptive beat detection demonstrates sophisticated audio processing techniques that enhance visualization quality.
      4. Multi-Modal User Interface: The system provides multiple interaction modes (visualization, animation, volume control) through a simple hardware interface, expanding its utility beyond mere visualization.

Future Work

      1. Enhanced Visualization Hardware: Integrating RGB LED strips would allow for color-based visualization in addition to brightness, significantly expanding the system’s expressive capabilities.
      2. Machine Learning Integration: Implementing machine learning algorithms for more accurate separation of musical elements and genre-specific optimization would improve visualization accuracy.
      3. MIDI Integration: Adding MIDI synchronization would improve tempo tracking and enable direct integration with digital audio workstations and other music production software.
      4. Expanded Channel Configuration: Increasing the number of frequency bands and corresponding LEDs would allow for more detailed visualization of musical structure.

The developed system provides a solid foundation for future research in audio visualization, with applications in music education, performance enhancement, accessibility, and entertainment.


 

Reading Response 9.b – Making Interactive Art: Set the Stage, Then Shut Up and Listen

This article covers topics like physical interaction and interaction design. The author argues that artists should shift their perspective on the expression and interpretation of their own artwork. He argues that artists pre-describe and set their own interpretation, telling the participants how to think and act.

However, the artist’s primary task is to build the interactive framework. Once this is done, the artist should “shut up and listen”. The static artwork must shift into a dynamic performance, with the audience as the central character. The conversation doesn’t rely only on the artist letting the audience speak, but also on listening to the audience, their comments, and their analysis.

This reading made me realise how important collaboration and listening are in creating artworks. I realised that over-explaining artworks and projects really takes away from the audience’s whole experience. The true beauty of immersion and interaction lies in the audience being allowed to freely communicate with the artwork. This also allows both of us to interact and learn from each other’s interpretations, creating a wider space for creative freedom and expression.



Reading Response – Physical Computing Greatest hits (and misses)

This article explores various themes in physical computing, their mechanisms and applications. Many different projects are shown here, including theremin-like instruments, gloves, video mirrors, dolls and pets and Scooby-Doo paintings. 

The ones that I found most interesting are definitely the mechanical pixels and the multi-touch interfaces. The mechanical pixels were very similar to many projects I saw in fashion, where there are moving parts in structured ball gowns; adding mechanical components to clothing was pretty interesting. These certainly aren’t very practical, but rather serve as a structured display of fashion and aesthetics. I wondered whether the same technique was being applied here. I have interacted with multi-touch interfaces before at art exhibitions and installations at Expo 2020 Dubai, where the movements of my hands would direct shooting stars and flying comets. This sort of physical computing has always fascinated me; it really feels fun and interactive.

Overall, I found many of the projects very inspirational and interesting. Physical computing is a very new discipline for me, and these projects helped me see the trends and themes evolving within the broad field. These projects definitely serve as a foundation for many great projects, some hits and some misses.



Arduino: analog input & output

This week I used both an analog and a digital sensor to control two LEDs in different ways. I used the ultrasonic distance sensor to measure distance. For the digital sensor. My setup controls two LEDs: one blinks and the other changes brightness using PWM.

Here’s how it works:
LED1, connected to a regular digital pin, blinks faster when an object is closer and slower when it’s farther away. The delay between blinks is based on the distance in centimeters. So the closer the object, the faster the LED blinks. If the object is far, the LED blinks slowly.

LED2, connected to a PWM pin, changes brightness based on the same distance. But instead of getting dimmer when the object is far (which is more common), I made it do the opposite—it’s dim when the object is close and bright when it’s far away. I know it’s the reverse of what people usually do, but I wanted to try something different and see how it looked in action.

The code:

// Pin definitions
const int trigPin = 7;     // HC-SR04 trigger pin
const int echoPin = 6;     // HC-SR04 echo pin
const int led1Pin = 2;     // LED1 pin (digital)
const int led2Pin = 3;     // LED2 pin (PWM)

// Variables
const int maxDistance = 255;  // Maximum meaningful distance (cm)
int distance = 0;             // Measured distance in centimeters
int brightness = 0;           // Variable for brightness

void setup() {
  Serial.begin(9600);
  pinMode(led1Pin, OUTPUT);
  pinMode(led2Pin, OUTPUT);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

long getUltrasonicDistance() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  return pulseIn(echoPin, HIGH);
}

void loop() {
  // Measure distance
  distance = 0.01723 * getUltrasonicDistance();  // Echo time (µs) to cm: 343 m/s ÷ 2 ≈ 0.01723 cm/µs
  
  // Cap the distance reading
  if(distance > maxDistance) distance = maxDistance;
  if(distance < 2) distance = 2;  // HC-SR04 minimum range
  
  // Serial output
  Serial.print(distance);
  Serial.println(" cm");
  
  // LED1: Blink with distance-dependent delay (capped)
  digitalWrite(led1Pin, HIGH);
  delay(distance);
  digitalWrite(led1Pin, LOW);
  delay(distance);
  
  // LED2: Dimmer when closer, brighter when farther (intentionally inverted)
  //brightness = map(distance, 2, maxDistance, 255, 0); // Inverted mapping
  //brightness = constrain(brightness, 0, 255); // Ensure valid PWM
  //Serial.println(brightness);
  brightness = distance;
  analogWrite(led2Pin, brightness);
}

Video:

https://drive.google.com/drive/u/0/folders/1Kk2lkQgoAyybXSYWVmY2Dog9uQVX_DMq

Future improvements:

In the future, I’d like to add sound that reacts to distance, like pitch changes as you move closer or farther. I also want to make the project more interactive overall—maybe by adding more sensors or letting users trigger different responses through movement or touch. This would make the experience feel more playful and alive.

Schematic:

Week- 9 reading

This week’s readings made me think about how important it is to be okay with things not always working out. In Physical Computing’s Greatest Hits and Misses, I liked how it showed both the successful projects and the ones that didn’t go as planned. That felt real. I often feel like everything I make has to be perfect, but this reminded me that trying new things and failing is part of the process. It also made me wonder—what really makes something a “failure”? If people still interact with it or feel something from it, maybe it still has value. I want to be more open to things going wrong, because those moments can lead to better ideas.

The second reading, Making Interactive Art: Set the Stage, Then Shut Up and Listen, really stayed with me. I’m used to explaining my work a lot or trying to get people to understand exactly what I meant. But this reading made me realize that sometimes the best thing I can do is let go and let the audience explore for themselves. I don’t fully agree with the idea that artists should just “shut up,” because I think some guidance can help people connect. Still, I like the idea of trusting people more. It made me think about how I can make work that gives people space to feel, move, and react in their own way, without me controlling every part of the experience.

Week 9 – Sensors

Concept: 

When thinking about an idea for this project, I kept coming back to the lights in my room and how they always stayed the same no matter what the light outside was like; they were either too bright or too dim. That simple thought inspired me to create a system where they could adjust automatically: if it’s bright during the day, the lights would stay dim or even stay off, but as the sun sets and it gets darker, they would gradually get brighter and adjust to the environment.

Implementation/Setup:

To implement this, I used an LDR and a button. The button would serve as a simple on/off switch for the LEDs, which would allow me to activate or deactivate the system manually. Then, once the LEDs are turned on, the LDR would take over by detecting the light levels in the room. This means that as it gets darker outside, the LDR would read lower light values, which causes the LEDs to become brighter. On the other hand, if it’s bright out, the LEDs automatically dim or even turn off, since additional lighting isn’t really needed. Below is the setup and demo:

Code/Challenges: 

One of the initial challenges I faced was figuring out how to use both the button and the LDR together in a way that made sense. I wanted the button to first switch the system on, and only after that should the LDR take over and adjust the LED brightness based on the surrounding light levels. To solve this, I used a boolean variable, ledOn, initially set to false, that toggles the LED system on or off when the button is pressed. Once the system is turned on (ledOn is true), the LDR starts reading the light and adjusts the LED brightness accordingly, so darker surroundings make the LEDs brighter, and brighter conditions dim the LEDs. This setup ensured that the button controlled the system’s activation, while the LDR only adjusted the lighting when the system was switched on.

// Detect button press (transition from LOW to HIGH)
if (buttonState == HIGH && lastButtonState == LOW) {
  delay(50);       // Debounce
  ledOn = !ledOn;  // Flip ledOn, toggling the system on/off
}

lastButtonState = buttonState;  // Save current state for next loop

if (ledOn) {
  // Read light level and map it to brightness
  int lightLevel = analogRead(ldrPin);                // 0 (dark) to 1023 (bright)
  int brightness = map(lightLevel, 0, 1023, 255, 0);  // Invert: brighter LED in the dark

  analogWrite(ledPin1, brightness);
  analogWrite(ledPin2, brightness);
} else {
  // Turn LEDs off
  digitalWrite(ledPin1, LOW);
  digitalWrite(ledPin2, LOW);
}
}

Reflection/Improvements: 

Reflecting on this project, one key area for improvement would be expanding the functionality of the button to maybe control more features. So, currently, the button toggles the LED system on or off, but I think it could also be used to switch between different modes or lighting patterns for more dynamic control. For example, I could program the button to cycle through several lighting modes. The first press could turn the system on with brightness adjusted by the LDR as it currently works. A second press could set the LEDs to a fixed brightness level, independent of the light conditions outside, giving the user more control over the lighting. A third press could switch the LEDs to a blinking pattern.