Week 10: The Arduino Piano (Takudzwa & Bismark)

The final product, for your convenience, is here: https://youtu.be/62UTvttGflo

Concept:

The motivation behind our project was to create a unique piano-like instrument using Arduino circuits. By utilizing two breadboards, we had a larger workspace, allowing for a more complex setup. We incorporated a potentiometer as a frequency controller—adjusting it changes the pitch of the sounds produced, making the instrument tunable. To enhance the experience, we added synchronized LED lights, creating a visual element that complements the sound. This combination of light and music adds a fun, interactive touch to the project. Here’s the project cover:

The tools used for this project were: a potentiometer, a piezo speaker, LEDs, 10k and 330 ohm resistors, push buttons, and jumper wires.

Execution:

The following was the schematic for our project, which served as the foundation that allowed us to successfully execute this project:

The following Arduino code snippet brought our project to life, controlling both sound and light to create an interactive musical experience (the pin assignments and note frequencies at the top are representative; adjust them to match your own wiring):

// Pin assignments and note frequencies (representative values; adjust to match your wiring)
const int buttonPins[4] = {2, 4, 6, 8};         // Push button pins (assumed)
const int ledPins[4]    = {3, 5, 7, 10};        // LED pins (assumed)
const int piezoPin      = 11;                   // Piezo speaker pin (assumed)
const int potPin        = A0;                   // Potentiometer pin (assumed)
const int notes[4]      = {262, 294, 330, 349}; // Base note frequencies, C4-F4 (assumed)

void setup() {
  // Set button and LED pins as inputs and outputs
  for (int i = 0; i < 4; i++) {
    pinMode(buttonPins[i], INPUT);       // Button pins as input (pulled down via the 10k resistors)
    pinMode(ledPins[i], OUTPUT);         // LED pins as output
  }
  pinMode(piezoPin, OUTPUT);             // Speaker pin as output
}

void loop() {
  int potValue = analogRead(potPin);                    // Read potentiometer value
  int pitchAdjust = map(potValue, 0, 1023, -100, 100);  // Map pot value to pitch adjustment range

  // Check each button for presses
  for (int i = 0; i < 4; i++) {
    if (digitalRead(buttonPins[i]) == HIGH) {         // If button is pressed
      int adjustedFreq = notes[i] + pitchAdjust;      // Adjust note frequency based on potentiometer
      tone(piezoPin, adjustedFreq);                   // Play the adjusted note
      digitalWrite(ledPins[i], HIGH);                 // Turn on the corresponding LED
      delay(200);                                     // Delay to avoid rapid flashing
      noTone(piezoPin);                               // Stop the sound
      digitalWrite(ledPins[i], LOW);                  // Turn off the LED
    }
  }
}

 

Finally, the finished project can be found here: https://youtu.be/62UTvttGflo

Reflection:

Although our project may seem simple, we encountered several challenges during its development. Initially, we wired the digital pins incorrectly, which prevented the project from functioning as expected. After hours of troubleshooting, we sought help to identify the issue. This experience turned into a valuable teamwork exercise, helping us grow as students and problem-solvers. I view challenges like these as opportunities to build skills I can apply to future projects, including my final one. To enhance this project further, I would improve its visual design and sound quality to make it more appealing to a wider audience. That’s all for now!

Week 10 – Handisyn – An Instrument (Group project by Zavier & Xiaotian)

INTRO

What makes something an instrument? I’d like to borrow a concept from Rehding 2016, Three Music-Theory Lessons: ‘epistemic things,’ things that we use to practice knowledge and that are themselves knowledge, in the sense of how knowledge is generated. Beyond how easily a person can coax acoustically pleasant sound waves out of an installation, what constitutes an instrument is that it is itself an epistemic thing—connected to our common musical knowledge and making it feasible to practice that knowledge, while embodying a distinct system that generates and contributes to musical knowledge and practice.

That being said, we divided our work by mapping the two sides (and the two stages of development) to the two of us—the music and the interface, the backend and the frontend, the soft and the hard, and so on. On my side, I had the Arduino and the speaker to start up a very basic synthesizer consisting of two audio oscillators and one control oscillator.

Process

In a nutshell, what I tried to realize is a simple cosine-wave oscillator, frequency modulated according to four input parameters: root frequency, intensity (the extent to which the root signal is modulated toward the target frequency), vibrato speed (the rate at which the modulation is automatically carried out), and modulation ratio (the multiple that determines the target frequency from the root).
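In equation form (a standard phase-modulation formulation; Mozzi’s phMod() in the code below realizes the phase term, and the symbol names here are mine rather than the library’s):

$$y(t) = \cos\big(2\pi f_c t + I(t)\cos(2\pi f_m t)\big), \qquad f_m = r \cdot f_c$$

where $f_c$ is the root (carrier) frequency, $r$ is the modulation ratio, and $I(t)$ is the intensity, itself swept by the control oscillator at the vibrato speed.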

#include <Mozzi.h>
#include <Oscil.h> // oscillator
#include <tables/cos2048_int8.h> // table for Oscils to play
#include <Smooth.h>
#include <AutoMap.h>

// desired carrier frequency max and min, for AutoMap
const int MIN_CARRIER_FREQ = 22;
const int MAX_CARRIER_FREQ = 440;

// desired intensity max and min, for AutoMap, inverted for reverse dynamics
const int MIN_INTENSITY = 10;
const int MAX_INTENSITY = 1000;

// desired modulation ratio max and min, note they're inverted (min > max) for reverse dynamics
const int MIN_MODRATIO = 5;
const int MAX_MODRATIO = 2;

// desired mod speed max and min, for AutoMap, note they're inverted for reverse dynamics
const int MIN_MOD_SPEED = 10000;
const int MAX_MOD_SPEED = 1;

AutoMap kMapCarrierFreq(400,700,MIN_CARRIER_FREQ,MAX_CARRIER_FREQ);
AutoMap kMapIntensity(400,700,MIN_INTENSITY,MAX_INTENSITY);
AutoMap kMapModRatio(400,700,MIN_MODRATIO,MAX_MODRATIO);
AutoMap kMapModSpeed(400,700,MIN_MOD_SPEED,MAX_MOD_SPEED);

const int FREQ_PIN = 0;
const int MOD_PIN = 1;
const int RATIO_PIN = 3;
const int SPEED_PIN = 2;

Oscil<COS2048_NUM_CELLS, MOZZI_AUDIO_RATE> aCarrier(COS2048_DATA);
Oscil<COS2048_NUM_CELLS, MOZZI_CONTROL_RATE> kIntensityMod(COS2048_DATA);
Oscil<COS2048_NUM_CELLS, MOZZI_AUDIO_RATE> aModulator(COS2048_DATA);

int mod_ratio; // harmonics
long fm_intensity; // carries control info from updateControl() to updateAudio()

// smoothing for intensity to remove clicks on transitions
float smoothness = 0.95f;
Smooth <long> aSmoothIntensity(smoothness);

void setup(){
  Serial.begin(115200); // set up the Serial output for debugging
  startMozzi();
}

void updateControl(){
  // read the freq
  int freq_value = mozziAnalogRead<10>(FREQ_PIN); // value is 0-1023

  // map the input to carrier frequency
  int carrier_freq = kMapCarrierFreq(freq_value);

  // read the ratio
  int ratio_value = mozziAnalogRead<10>(RATIO_PIN);

  // map the input to ratio
  mod_ratio = kMapModRatio(ratio_value);

  //calculate the modulation frequency to stay in ratio
  int mod_freq = carrier_freq * mod_ratio;
 
  // set the FM oscillator frequencies to the calculated values
  aCarrier.setFreq(carrier_freq);
  aModulator.setFreq(mod_freq);

  // calculate the fm_intensity
  int mod_level= mozziAnalogRead<10>(MOD_PIN); // value is 0-1023
  fm_intensity = ((long)mod_level * (kIntensityMod.next()+128))>>8; // LFO output (-128..127) shifted to 0..255, scaled by mod_level, then >>8 to renormalize

  // use a float here for low frequencies
  int speed_value = mozziAnalogRead<10>(SPEED_PIN);
  float mod_speed = (float)kMapModSpeed(speed_value)/1000;
  kIntensityMod.setFreq(mod_speed);
}

AudioOutput updateAudio(){
  long modulation = aSmoothIntensity.next(fm_intensity) * aModulator.next();
  return MonoOutput::from8Bit(aCarrier.phMod(modulation)); // phMod does the FM
}

void loop(){
  audioHook();
}

THE GLOVE

Zavier here, now it’s my turn! So after testing the circuit and code (huge thanks to Xiaotian for the awesome sound control!), I began working on attaching it to the glove, which was… painful. I first just measured out and positioned things, then I started attaching them with my small transparent tape (which isn’t the ideal way (sewing would be better), and besides, my tape wasn’t strong at all). After asking a friend for help, I got the sensors attached. Ok, the easy part done. Now the troubles begin. You see, I obviously had to connect the flex sensors to the Arduino. I thought I could just use the female-male wires, but nope! The pins on the flex sensors were too small, so the connection was far too loose :(. I tried thinking of a few other options, but in the end, I had to do what I was trying to avoid: soldering. To be honest, I didn’t even mind it that much before (past me talking), and thought it would take a few fun minutes, but boy oh boy. I don’t know what it is about these soldering irons (maybe the tips are dirty?), but only a very tiny bit of the tip would actually melt the solder, so a lot of time was spent just moving and rotating the tip. Also, I shouldn’t have attached the flex sensors first. It was a huge pain to get them soldered. Now admittedly, (probably a large) part of that is because I’ve hardly soldered before, but also (in addition to the tip issue), I was making these 3-point connections (such as connecting the ground of one of the flex sensors to the grounds of the adjacent ones), so whenever I tried soldering one, it would just release the other 2!

Anyways, after some work, I finally got the flex sensors wired up, and it was finally working. Great! Ok, we’re done… we’re done, right? Haha, nope. I thought it would be a good idea to add neopixels (addressable LED strips) to the glove too. After testing a strip, I didn’t repeat my mistake: this time I soldered the strips together first, before attaching them. This went a lot smoother (also thanks to having some experience from the flex sensors), but it still took sooo long. Unfortunately, since I soldered first, the connections weren’t exactly the right length 🙂. Luckily, I had expected them not to be precisely correct (and besides, the distance between the strips would change a bit as the hand flexed and relaxed), so I had kept them a bit longer, so that the length could be adjusted as needed. This unintentionally ended up creating a nice pattern 😅.

While it was a lot of work, it definitely made things a LOT cooler, and also provided a way to give some info to the user visually.

 

Final Product

Demo (note: the sound is very distorted in the recording)

Code:

// Configuring Mozzi's options
#include <MozziConfigValues.h>
#define MOZZI_ANALOG_READ_RESOLUTION 10 // Not strictly necessary, as Mozzi will automatically use the default resolution of the hardware (eg. 10 for the Arduino Uno), but they recommend setting it (either here globally, or on each call)

#include <Mozzi.h>
#include <Oscil.h> // oscillator
#include <tables/cos2048_int8.h> // table for Oscils to play
#include <Smooth.h>
#include <AutoMap.h>

#include <FastLED.h>


// Flex sensor stuff

// Define flex sensor pins (these have to be analog)
const int FREQ_SENSOR_PIN = A0;
const int MOD_SENSOR_PIN = A1;
const int SPEED_SENSOR_PIN = A2;
const int RATIO_SENSOR_PIN = A3;

// Smoothening for each pin (was previously using rolling averages)
Smooth<unsigned int> smoothFreq(0.8f);
Smooth<unsigned int> smoothMod(0.5f);
Smooth<unsigned int> smoothSpeed(0.75f);
Smooth<unsigned int> smoothRatio(0.5f);

// Input ranges for flex sensors (will be calibrated)
unsigned int freqInputMin = 1000; // Just FYI, the flex sensors in our setup roughly output in the range of ~ 200 - 650
unsigned int freqInputMax = 0;
unsigned int modInputMin = 1000;
unsigned int modInputMax = 0;
unsigned int speedInputMin = 1000;
unsigned int speedInputMax = 0;
unsigned int ratioInputMin = 1000;
unsigned int ratioInputMax = 0;


// Neopixel (addressable LED strip) stuff

// Define neopixel pins
const int FREQ_NEOPIXEL_PIN = 2;
const int MOD_NEOPIXEL_PIN = 3;
const int SPEED_NEOPIXEL_PIN = 4;
const int RATIO_NEOPIXEL_PIN = 5;

// Number of LEDs in each strip
const int NEOPIXEL_NUM_LEDS = 11;

// Define the array of leds
CRGB freqLEDs[NEOPIXEL_NUM_LEDS];
CRGB modLEDs[NEOPIXEL_NUM_LEDS];
CRGB speedLEDs[NEOPIXEL_NUM_LEDS];
CRGB ratioLEDs[NEOPIXEL_NUM_LEDS];


// Sound stuff

// desired carrier frequency max and min, for AutoMap
const int MIN_CARRIER_FREQ = 22;
const int MAX_CARRIER_FREQ = 440;

// desired intensity max and min, for AutoMap, inverted for reverse dynamics
const int MIN_INTENSITY = 10;
const int MAX_INTENSITY = 1000;

// desired modulation ratio max and min, note they're inverted (min > max) for reverse dynamics
const int MIN_MOD_RATIO = 5;
const int MAX_MOD_RATIO = 2;

// desired mod speed max and min, for AutoMap, note they're inverted for reverse dynamics
const int MIN_MOD_SPEED = 10000;
const int MAX_MOD_SPEED = 1;

Oscil<COS2048_NUM_CELLS, MOZZI_AUDIO_RATE> aCarrier(COS2048_DATA);
Oscil<COS2048_NUM_CELLS, MOZZI_CONTROL_RATE> kIntensityMod(COS2048_DATA);
Oscil<COS2048_NUM_CELLS, MOZZI_AUDIO_RATE> aModulator(COS2048_DATA);

int mod_ratio; // harmonics
long fm_intensity; // carries control info from updateControl() to updateAudio()

// smoothing for intensity to remove clicks on transitions
float smoothness = 0.95f;
Smooth<long> aSmoothIntensity(smoothness);


void setup(){
  Serial.begin(9600); // set up the Serial output for debugging

  // Set the flex sensor pins
  pinMode( FREQ_SENSOR_PIN, INPUT_PULLUP);
  pinMode(  MOD_SENSOR_PIN, INPUT_PULLUP);
  pinMode(SPEED_SENSOR_PIN, INPUT_PULLUP);
  pinMode(RATIO_SENSOR_PIN, INPUT_PULLUP);

  // Setup the neopixels
	FastLED.addLeds<NEOPIXEL, FREQ_NEOPIXEL_PIN>(freqLEDs, NEOPIXEL_NUM_LEDS);
	FastLED.addLeds<NEOPIXEL, MOD_NEOPIXEL_PIN>(modLEDs, NEOPIXEL_NUM_LEDS);
	FastLED.addLeds<NEOPIXEL, SPEED_NEOPIXEL_PIN>(speedLEDs, NEOPIXEL_NUM_LEDS);
	FastLED.addLeds<NEOPIXEL, RATIO_NEOPIXEL_PIN>(ratioLEDs, NEOPIXEL_NUM_LEDS);
	FastLED.setBrightness(32); // 0 - 255

  // Feed/prime/initialise the smoothing function to get a stable output from the first read (to ensure the calibration isn't messed up). A value of 1630 was chosen by trial and error (divide and conquer), and seems to work best (at least for our setup)
  smoothFreq.next(1630);
  smoothMod.next(1630);
  smoothSpeed.next(1630);
  smoothRatio.next(1630);

  startMozzi();
}


// Basically our actual traditional loop in Mozzi (but it still needs to be kept reasonably lean and fast)
void updateControl(){

  // Read the smoothened freq
  int freqValue = smoothFreq.next(mozziAnalogRead(FREQ_SENSOR_PIN - 14)); // value is 0-1023, -14 since mozzi just takes a number (eg. 0 instead of A0), and the analog ones are 14 onwards

  // Calibrate the mapping if needed
  if (freqValue < freqInputMin) freqInputMin = freqValue;
  if (freqValue > freqInputMax) freqInputMax = freqValue;

  // Map the input to the carrier frequency
  int carrier_freq = map(freqValue, freqInputMin, freqInputMax, MIN_CARRIER_FREQ, MAX_CARRIER_FREQ);


  // Read the smoothened ratio
  int ratioValue = smoothRatio.next(mozziAnalogRead(RATIO_SENSOR_PIN - 14));

  // Calibrate the mapping if needed
  if (ratioValue < ratioInputMin) ratioInputMin = ratioValue;
  if (ratioValue > ratioInputMax) ratioInputMax = ratioValue;

  // Map the input to the ratio
  mod_ratio = map(ratioValue, ratioInputMin, ratioInputMax, MIN_MOD_RATIO, MAX_MOD_RATIO);


  // calculate the modulation frequency to stay in ratio
  int mod_freq = carrier_freq * mod_ratio;
 
  // set the FM oscillator frequencies to the calculated values
  aCarrier.setFreq(carrier_freq);
  aModulator.setFreq(mod_freq);


  // Read the smoothened mod
  int modValue = smoothMod.next(mozziAnalogRead(MOD_SENSOR_PIN - 14));

  // Calibrate the mapping if needed
  if (modValue < modInputMin) modInputMin = modValue;
  if (modValue > modInputMax) modInputMax = modValue;

  // Calculate the fm_intensity
  fm_intensity = ((long)modValue * (kIntensityMod.next()+128))>>8;


  // Read the smoothened speed
  int speedValue = smoothSpeed.next(mozziAnalogRead(SPEED_SENSOR_PIN - 14));

  // Calibrate the mapping if needed
  if (speedValue < speedInputMin) speedInputMin = speedValue;
  if (speedValue > speedInputMax) speedInputMax = speedValue;

  // use a float here for low frequencies
  float mod_speed = (float)map(speedValue, speedInputMin, speedInputMax, MIN_MOD_SPEED, MAX_MOD_SPEED) / 1000;
  kIntensityMod.setFreq(mod_speed);


  // Set the leds

  FastLED.clear(); // Resets them

  // The frequency controls how many of the LEDs are lit up (in a rainbow colour)
  int freqLEDAmount = map(freqValue, freqInputMin, freqInputMax, 0, NEOPIXEL_NUM_LEDS);
  fill_rainbow(&freqLEDs[NEOPIXEL_NUM_LEDS - freqLEDAmount], freqLEDAmount, CRGB::White, 25); // &...LEDs[i] to start lighting from there, allowing us to light them in reverse

  // For the mod, show a meter (blue - deep pink) showing the mix level of the 2 sounds
  int modLEDAmount = map(modValue, modInputMin, modInputMax, 0, NEOPIXEL_NUM_LEDS);
  fill_solid(modLEDs, NEOPIXEL_NUM_LEDS, CRGB::Blue);
  fill_solid(&modLEDs[NEOPIXEL_NUM_LEDS - modLEDAmount], modLEDAmount, CRGB::DeepPink);

  // The speed controls the blinking rate of its LEDs (between 1/2 and 3 seconds per blink cycle)
  int speedLEDBlinkRate = map(speedValue, speedInputMin, speedInputMax, 500, 3000);
  if (millis() % speedLEDBlinkRate < speedLEDBlinkRate/2)
	fill_rainbow(speedLEDs, NEOPIXEL_NUM_LEDS, CRGB::White, 25);

  // The ratio controls the hue of its LEDs (note CHSV hue spans 0-255, not 0-360)
  int ratioLEDHue = map(ratioValue, ratioInputMin, ratioInputMax, 0, 255);
  fill_solid(ratioLEDs, NEOPIXEL_NUM_LEDS, CHSV(ratioLEDHue, 100, 50));
  // We could also blend between 2 colours based on the ratio, pick the one you prefer
  // fract8 ratioLEDFraction = map(ratioValue, ratioInputMin, ratioInputMax, 0, 255);
  // fill_solid(ratioLEDs, NEOPIXEL_NUM_LEDS, blend(CRGB::Blue, CRGB::DeepPink, ratioLEDFraction));

  FastLED.show(); // Shows them
}


// Mozzi's function for getting the sound. Must be as light and quick as possible to ensure the sound buffer is adequately filled
AudioOutput updateAudio() {
  long modulation = aSmoothIntensity.next(fm_intensity) * aModulator.next();
  return MonoOutput::from8Bit(aCarrier.phMod(modulation)); // phMod does the FM
}


// Since we're using Mozzi, we just call its hook
void loop() {
  audioHook();
}

 


Week 10 – Reading Response

For this reading, I agree with the author on the limitations of the types of interaction we are using today. He criticizes today’s digital devices for being limited to “pictures under glass”. Of course, this type of interaction only allows the hands to feel and receive limited signals. “Pictures under glass” leaves out the potential of texture, weight, and multiple other factors; for example, playing piano on a screen does not have the same satisfying feeling as playing on a real piano with its tactile feedback.

However, I disagree with the author on two main points. The reading focuses too much on the capability of humans without considering how we can use that capability to fit our needs. Of course we could have a more dynamic medium, but would it be better than the functionality of the iPhone? iPhones are designed to be compact and multifunctional, which I think is best achieved through the “pictures under glass” style of interaction. The second point is that even though our hands are magnificent in many ways, other body parts can create interactions that are no less interesting. For example, the eyes can judge the distance between objects, recognize colors, read emotions, and so on. “Pictures under glass” fits this capability of the eyes well. Hence, I think it’s not a bad medium; it’s just a medium that makes use of a different human capability.

Week 10: Reading Response

Bret Victor – A Brief Rant on the Future of Interactive Design

My initial instinct upon reading Bret Victor’s article was to push back and think, “But you’re not really improving anything.” This reaction softened after reading his direct responses to these criticisms. Victor’s defensiveness, in some ways, protects his vision—arguably so. But beyond that, the true point of his article lies in his challenge to our current conception of interactivity. He questions why we’ve limited ourselves to “single-finger” interaction, arguing that we’re barely scratching the surface of what interactive technology could become. I found myself agreeing, especially when he mentioned that if, 20 years down the line, all we had were glorified iPads, it would be a sign of stagnation. Now, over a decade since the article was written, we’ve indeed developed more advanced interfaces—like VR, AR, and even some early-stage holographic tech—but these technologies haven’t become mainstream, and they haven’t revolutionized interaction to the degree Victor imagined, proving his point to a degree.

Reflecting on his perspective today, it’s clear he raises valid points. For the field of interactive design to truly evolve, we need critical voices like his, highlighting what’s lacking and pushing the boundaries of what’s possible. Yet, while I appreciate his vision, I also think it’s worth noting that the demand for fully immersive tech isn’t universal. Not everyone wants full-body interaction, and realistically, innovations often emerge only if there’s sufficient market interest. While technologies like VR and AR are groundbreaking, they remain largely inaccessible to many—especially those in marginalized or economically disadvantaged communities. In contrast, iPads and similar devices, while more limited, have found a place even in lower-income communities. Victor’s perspective is compelling and reminds us of the potential for interactive design, but it also underscores the need for accessibility and practical applications.

Thoughts on the Video:

The video accompanying Victor’s article showcases futuristic and visually stunning technologies, like transparent phones and computers. These concepts seem efficient, fast, and intuitive, presenting a vision of an accessible tech-forward society. But this vision quickly becomes complicated when you consider the societal implications. The choice to illustrate this world in a city like Johannesburg, for instance, inadvertently sidelines the broader realities of poverty and inequality. The technology depicted is only accessible to specific demographics, creating an illusion of widespread accessibility that falls short in practice. Can such tech really deliver on its promise of happiness if it only deepens capitalist divides?

Moreover, there’s an unsettling irony in the interactions depicted in the video. People engrossed in advanced technology appear disconnected and isolated, as though their devices detract from meaningful social interactions. This tension isn’t new; many feared that the rise of technology would eventually isolate us, even as it ostensibly connects us. The video seems to highlight this concern, making me reflect on whether such advancements genuinely enhance human happiness or merely serve to reinforce certain fears about a tech-saturated future.

A Brief Rant on the Future of Interaction Design

In A Brief Rant on the Future of Interaction Design, Bret Victor argues that most interaction design today isn’t meaningful enough. He believes designers focus too much on making things look nice on screens rather than creating tools that help people think or solve real problems. This stood out to me because I agree that design should do more than just look good—it should make our lives easier or allow us to do more.

As someone studying computer science and interested in interactive media, I think Victor’s ideas are important. He makes me want to focus on designing tools that actually help users accomplish things rather than just looking nice. His views remind me that good design should be about creating real benefits for people, not just entertainment or convenience.

The responses to A Brief Rant on the Future of Interaction Design show different views. Some people agree with Victor and think design should be more useful, while others say his ideas are too difficult to make real. One response I read pointed out that many companies prefer simple screen designs because they make money more easily. This made me think about the challenges of aiming high in design while facing real-life limits, like budgets or technology.

These responses remind me that good design is a balance between what’s possible and what’s ideal. While Victor’s ideas are inspiring, they also show the need for practical solutions. Moving forward, I want to think more about how to push for meaningful design within real-world limits.

Assignment Week 10: Interactive Musical Instrument

Inspiration

The WaveSynth project is inspired by the theremin, one of the first electronic instruments, invented in 1920 by Russian physicist Léon Theremin. Known for its eerie, vocal-like sound and its unique, touchless control, the theremin uses two antennas to detect the position of the player’s hands: one antenna controls pitch, and the other controls volume. By moving their hands through electromagnetic fields, players can create smooth, flowing sounds without touching the instrument. This expressive control has influenced generations of musicians and has become iconic in sci-fi, horror, and experimental music.

Concept

The WaveSynth is a gesture-controlled musical instrument that turns hand movements and environmental factors into dynamic sound. Designed to be both intuitive and expressive, the WaveSynth combines multiple inputs—an ultrasonic sensor, a temperature sensor, and a potentiometer—into a cohesive interface.

The ultrasonic sensor detects hand distance, adjusting either pitch or volume based on the player’s proximity. The potentiometer serves as a mode selector, allowing the user to switch between pitch and volume control, as well as access different sound effects like vibrato, pulse, and temperature modulation. The temperature sensor adds an additional layer of subtlety, with ambient temperature shifts introducing slight pitch modulations, making the instrument responsive to its surroundings.

List of the hardware components used in the WaveSynth project:

  • Arduino Uno
  • HC-SR04 Ultrasonic Sonar Sensor (for gesture-based distance measurement)
  • TMP36GZ Temperature Sensor (for ambient temperature-based modulation)
  • 10k Ohm Potentiometer (for mode and effect selection)
  • Piezo Speaker (for sound output)
  • Connecting Wires (for connections between components and the Arduino)
  • Breadboard (for prototyping and circuit connections)
  • 310 Ohm Resistor (for LED circuit)

Schematic Diagram:

 

Code:

// Pin definitions
const int potPin = A0;            // Analog pin for potentiometer
const int tempPin = A1;           // Analog pin for TMP36GZ temperature sensor
const int trigPin = 3;            // Digital pin for sonar trigger
const int echoPin = 4;            // Digital pin for sonar echo
const int speakerPin = 9;         // Digital pin for speaker
const int ledPin = 5;             // Digital pin for LED (PWM-enabled)

// Variables
int effectType = 0;               // Tracks which effect is active (0: none, 1: vibrato, 2: pulse, 3: temperature modulation)

void setup() {
  pinMode(speakerPin, OUTPUT);      // Speaker as output
  pinMode(trigPin, OUTPUT);         // Sonar trigger as output
  pinMode(echoPin, INPUT);          // Sonar echo as input
  pinMode(ledPin, OUTPUT);          // LED as output
  Serial.begin(9600);               // For debugging output
}

// Function to read distance from the sonar sensor
long readDistance() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Measure the pulse duration on the echo pin
  long duration = pulseIn(echoPin, HIGH);
  
  // Calculate distance in centimeters
  long distance = duration * 0.034 / 2;
  return distance;
}

// Function to read temperature from the TMP36GZ
float readTemperature() {
  int tempReading = analogRead(tempPin);             // Read analog value from TMP36
  float voltage = tempReading * (5.0 / 1023.0);      // Convert reading to voltage (0-5V)
  float temperatureC = (voltage - 0.5) * 100.0;      // Convert voltage to temperature in Celsius
  return temperatureC;
}

void loop() {
  // Potentiometer to control mode and effect
  int potValue = analogRead(potPin);                  // Read potentiometer (0-1023)
  bool pitchMode = potValue < 512;                    // Below midpoint is pitch mode, above is volume mode
  
  // Determine the effect based on the potentiometer value ranges
  if (potValue < 256) {
    effectType = 0;                                   // No effect
  } else if (potValue < 512) {
    effectType = 1;                                   // Vibrato
  } else if (potValue < 768) {
    effectType = 2;                                   // Pulse
  } else {
    effectType = 3;                                   // Temperature modulation
  }

  // Read sonar distance and map to a lower pitch range for soothing tones
  long distance = readDistance();                     // Distance in cm
  int baseToneValue = pitchMode ? map(distance, 5, 50, 150, 600) : 440;  // Map distance to pitch if in Pitch Mode
  
  // Control LED brightness based on distance
  int ledBrightness = map(distance, 5, 50, 255, 0);   // Closer is brighter (5 cm = max brightness)
  ledBrightness = constrain(ledBrightness, 0, 255);   // Constrain within 0-255
  analogWrite(ledPin, ledBrightness);                 // Set LED brightness
  
  // Read temperature and map it to a gentle pitch effect
  float temperature = readTemperature();
  int tempEffect = map(temperature, 20, 35, 20, 80);  // Map temperature to subtle pitch modulation
  
  // Debug output to Serial Monitor
  Serial.print("Distance: ");
  Serial.print(distance);
  Serial.print(" cm, LED Brightness: ");
  Serial.print(ledBrightness);
  Serial.print(", Pot Value: ");
  Serial.print(potValue);
  Serial.print(", Effect Type: ");
  Serial.print(effectType);
  Serial.print(", Temperature: ");
  Serial.print(temperature);
  Serial.println(" C");

  // Play sound based on the selected effect type
  switch (effectType) {
    case 0: // No effect
      tone(speakerPin, baseToneValue); // Basic tone based on distance
      break;
    
    case 1: // Smooth Vibrato
      for (int i = 0; i < 20; i++) {
        int vibratoTone = baseToneValue + (sin(i * 0.3) * 10); // Soft vibrato effect with lower amplitude
        tone(speakerPin, vibratoTone, 50); // Short tone bursts for vibrato
        delay(20); // Slightly slower delay for soothing vibrato effect
      }
      break;
      
    case 2: // Gentle Pulse
      tone(speakerPin, baseToneValue);      // Play base tone continuously
      analogWrite(speakerPin, 128);         // Soft fade for pulse effect
      delay(100);                           // Adjust pulse duration for gentler effect
      noTone(speakerPin);                   // Turn off sound briefly to create pulse
      delay(100);                           // Wait before next pulse
      break;
      
    case 3: // Temperature Modulation
      int tempModulatedTone = baseToneValue + tempEffect;  // Adjust pitch slightly based on temperature
      tone(speakerPin, tempModulatedTone); // Continuous tone with slight modulation
      delay(200); // Keep tone smooth
      break;
  }
  
  delay(50); // Small delay for stability
}

Media:

Working Process:

 

  1. Initial Setup and Calibration:
    1. When powered on, the Arduino initializes all sensors and components, including the ultrasonic sensor, temperature sensor, potentiometer, and speaker.
    2. The potentiometer’s position is read to determine the initial mode (Pitch or Volume) and effect (Vibrato, Pulse, Temperature Modulation, or None). The instrument is ready to interpret the player’s gestures and environmental inputs to start producing sound.
  2. Gesture Detection and Distance Measurement:
    1. The player positions their hand near the ultrasonic sensor and moves it to change sound properties.
    2. The ultrasonic sensor measures the distance between the player’s hand and the sensor by sending out an ultrasonic pulse and timing how long it takes for the pulse to bounce back (the conversion to centimeters is worked out just after this list).
    3. The distance value is calculated and then mapped to control either pitch or volume based on the selected mode:
      1. Pitch Mode: The distance between the sensor and the player’s hand changes the pitch of the sound. Closer hand positions produce higher pitches, while farther positions result in lower pitches.
      2. Volume Mode: In this mode, the distance controls the volume of the sound. Closer distances yield louder sounds, and farther distances make the sound quieter.
  3. Sound Modification through Effects:
    1. The potentiometer serves as a selector for various sound effects that add dynamic layers to the base tone. Depending on the potentiometer’s position, the following effects are applied:
      1. No Effect (Basic Tone): The sound responds directly to the pitch or volume based on the hand distance with no additional modulation.
      2. Vibrato Effect: The instrument adds a wave-like oscillation to the pitch, producing a gentle, undulating sound. This effect is applied continuously, allowing the sound to vary smoothly.
      3. Pulse Effect: The sound output is pulsed, creating a rhythmic on-and-off pattern. This effect provides a percussive quality, ideal for rhythmic play.
      4. Temperature Modulation: Ambient temperature subtly adjusts the pitch, creating an atmospheric and evolving sound that changes with the surrounding environment. This effect responds more slowly, allowing the sound to naturally vary over time.
  4. Environmental Adaptation with Temperature Modulation:
    1. When Temperature Modulation is selected, the temperature sensor reads the ambient temperature. The Arduino then uses this temperature reading to modulate the pitch subtly.
    2. For example, warmer temperatures gradually increase the pitch, while cooler temperatures lower it. This effect is gradual and blends naturally with the other sound properties, adding a unique, ambient quality to the instrument’s sound.
  5. Real-Time Sound Output:
    1. The piezo speaker produces sound based on the interpreted data, transforming distance measurements, temperature readings, and selected effects into real-time audio.
    2. The speaker continuously updates its output to reflect the current settings and environmental conditions, providing an immediate response to hand movements and mode changes.
    3. As the player moves their hand closer or farther from the ultrasonic sensor, the sound changes instantly in pitch or volume. Additionally, adjustments to the potentiometer instantly modify the effect applied to the sound.
  6. Interactive Feedback Loop:
    1. The player continuously interacts with the WaveSynth by adjusting their hand position, changing the potentiometer setting, and experiencing the evolving sound.
    2. This interactive feedback loop allows the player to dynamically control and modify the instrument’s output, creating an immersive musical experience that feels responsive and alive.
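For reference, the distance conversion used by readDistance() follows from the speed of sound (roughly 343 m/s, i.e. about 0.034 cm/µs) and the fact that the measured echo time covers the round trip to the hand and back:

$$d = \frac{t_{\text{echo}} \times 0.034\ \text{cm}/\mu\text{s}}{2}$$

So an echo of 1000 µs corresponds to roughly 17 cm.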

 

Future Improvement and Challenges

One of the primary challenges encountered was calibrating the sensors to respond smoothly and accurately to the user’s hand movements. Fine-tuning the pitch range and ensuring that the effects—such as vibrato and pulse—blended naturally with the sound output took several iterations to achieve a pleasing result.

The temperature sensor was tough to work with on the Arduino board.

Looking ahead, integrating digital sound synthesis or MIDI compatibility would enable users to connect the WaveSynth with other musical devices or software, greatly expanding its versatility as a tool for music creation.

Another possible enhancement could be the inclusion of LEDs or other visual feedback elements to indicate mode selection and provide dynamic light effects that correspond to the sound output. This would enhance the visual aspect of the instrument, making it even more engaging for live performances.

Reading Responses: Brief Rant

As a reader and technology enthusiast, I find Bret Victor’s “A Brief Rant on the Future of Interaction Design” to be a thought-provoking critique of current trends in human-computer interaction. Victor’s argument against “Pictures Under Glass” technology and his call for more tactile, three-dimensional interfaces resonate with my own experiences and frustrations with touchscreen devices. Victor’s vivid descriptions of how we use our hands to manipulate objects in the real world highlight the limitations of current touchscreen interfaces. I’ve often felt that something was missing when using my smartphone or tablet, and Victor’s examples of reading a book or drinking from a glass perfectly capture that sense of disconnection. The richness of tactile feedback we get from physical objects is indeed absent from our flat, glassy screens.

However, I believe Victor’s critique, while insightful, doesn’t fully acknowledge the benefits of touchscreen simplicity and accessibility. In my experience, touchscreens have made technology more approachable for a wider range of users, including children and the elderly. The ease and intuitiveness of swiping and tapping have democratized access to digital tools in ways that more complex interfaces might not. That said, I agree with Victor’s call for more ambitious visions in interaction design. His example of Alan Kay envisioning the iPad decades before its creation is inspiring and reminds us of the power of long-term, visionary thinking.

As someone who uses technology daily, I’m excited by the possibility of interfaces that better utilize our hands’ capabilities and even our entire bodies. Victor’s argument extends beyond just the hands, noting that we have “300 joints” and “600 muscles.”

This resonates with my own experience of how we naturally use our whole bodies when interacting with the physical world. I’ve often felt constrained by the limited range of motion required to use current devices, and the idea of more holistic, full-body interfaces is intriguing. While I appreciate Victor’s vision, I also recognize the practical challenges of implementing more tactile and three-dimensional interfaces. Issues of cost, durability, and scalability would need to be addressed. Additionally, I believe the future of interaction design will likely involve a combination of approaches, including enhanced haptic feedback, hybrid interfaces that combine touchscreens with physical controls, and multimodal interaction incorporating touch, voice, and gesture.

Week 10: “Banana!!!!!”

Yup, I was watching “Minions: Rise of Gru” over the weekend in order to find some inspiration for this week’s project, and I sure found the basis of my project when I watched this:

Concept

I decided on using a Wheatstone bridge setup with two variable resistors (one photoresistor, and ‘the’ banana), finding the absolute difference between the two measured voltages and emitting different sounds accordingly. I also decided to use a potentiometer to determine the pitch of the notes.

Apart from that, the user also has the option of not using the photoresistor at all by switching the fixed 330 Ohm resistor into its place.
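As a quick sketch of the underlying math (the standard voltage-divider relations, with R1–R4 as generic bridge arms rather than my exact component values):

$$V_1 = V_{cc}\frac{R_2}{R_1+R_2}, \qquad V_2 = V_{cc}\frac{R_4}{R_3+R_4}, \qquad \Delta V = |V_1 - V_2|$$

With the photoresistor (or the fixed 330 Ohm resistor) in one arm and the banana in another, ΔV grows as the two resistances diverge, and that difference is what the code maps to a frequency.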

Material Required
  • Arduino Board
  • Jumper Wires
  • Breadboard
  • Potentiometer
  • 10k Ohm Resistors
  • 330 Ohm Resistors
  • SEN-09088 Photocell
  • Banana
Schematic

 

Demonstration Video
Code

This Arduino code reads the voltage difference between V1 and V2, maps it to a frequency for the speaker, and uses a potentiometer to shift the octave.

// Pin configuration
const int v1Pin = A1;       // V1 of Wheatstone bridge
const int v2Pin = A2;       // V2 of Wheatstone bridge
const int potPin = A3;      // Potentiometer for octave control
const int speakerPin = 9;   // Speaker pin

// Variables for calibration and tuning
float baseFrequency = 220.0;  // Base frequency in Hz for a low A note
float maxVoltageDifference = 5.0;  // Maximum expected voltage difference (adjust as necessary)
int maxOctaveShift = 7;       // Maximum number of octaves to shift

void setup() {
  Serial.begin(9600);
  pinMode(v1Pin, INPUT);
  pinMode(v2Pin, INPUT);
  pinMode(potPin, INPUT);
  pinMode(speakerPin, OUTPUT);
}

void loop() {
  // Read voltage difference across V1 and V2
  float v1 = analogRead(v1Pin) * (5.0 / 1023.0);
  float v2 = analogRead(v2Pin) * (5.0 / 1023.0);
  float voltageDifference = abs(v1 - v2);

  // Map voltage difference to a frequency (values scaled by 100 because map() works on integers)
  float noteFrequency = map(voltageDifference * 100, 0, maxVoltageDifference * 100, baseFrequency, baseFrequency * 2);

  // Read potentiometer and map to octave shift
  int potValue = analogRead(potPin);
  int octaveShift = map(potValue, 0, 1023, 0, maxOctaveShift);

  // Apply octave shift to the frequency
  noteFrequency *= pow(2, octaveShift);

  // Output the frequency to the speaker
  tone(speakerPin, noteFrequency);
  
  // Debugging output
  Serial.print("Voltage Difference: ");
  Serial.print(voltageDifference);
  Serial.print(" V, Frequency: ");
  Serial.print(noteFrequency);
  Serial.println(" Hz");

  delay(50);  // Short delay to smooth out noisy/bouncing readings
}
Conclusion and Reflection

Constructing this banana-powered musical instrument was a fun and instructive process, combining artistic design with technological principles. Using a Wheatstone bridge gave me a lot of freedom to play around with how I wanted to shape my instrument.

Challenges Faced

1. Stabilizing the Output:
One major challenge was stabilizing the audio output, as random values were being read when the pencil jockey wasn’t in contact with the banana. This caused unwanted noises and a “wobbling” effect. To overcome this, I added a pulldown resistor (here I used 2 Mega Ohms) to avoid reading noise on analog pin A2.

2. Mapping Voltage to Frequency Accurately:
Mapping the voltage difference to frequencies that sound musical took some experimentation. Adjusting the base frequency and range, as well as the octave shift, required fine-tuning to achieve the desired range of notes without distorting the sound.
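Part of the difficulty is that Arduino’s built-in map() works on long integers, which is why the code above multiplies the voltage difference by 100 before mapping. A float version (a small helper of my own, not part of the Arduino core) would avoid the scaling trick:

// Hypothetical helper: the same linear interpolation as map(), but in floating point
float mapFloat(float x, float inMin, float inMax, float outMin, float outMax) {
  return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Usage, replacing the scaled map() call in loop():
// float noteFrequency = mapFloat(voltageDifference, 0.0, maxVoltageDifference, baseFrequency, baseFrequency * 2);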

Jingle Bells – Speed Variation

Concept 

In this assignment, I collaborated with @Nelson, and we both love Christmas. The famous song Jingle Bells brings back memories of those times. So we explored various possibilities and decided to create a speed variation of the Jingle Bells melody based on distance.

Here is the demonstration Video:

Schematic 

Here is the schematic for our Arduino connections:

Code:

To implement our idea, we searched for combinations of notes and durations to match the Jingle Bells melody and stored them in arrays. We then implemented the code mapping distance to note durations. The variation in duration for each note makes the melody seem to play faster or slower. Here is the code:

#include "pitches.h"
#define ARRAY_LENGTH(array) (sizeof(array) / sizeof(array[0]))

// Notes and Durations to match the Jingle Bells 
int JingleBells[] = 
{
  NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_G4,
  NOTE_C4, NOTE_D4, NOTE_E4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4,
  NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_D4, NOTE_D4, NOTE_E4,
  NOTE_D4, NOTE_G4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_G4,
  NOTE_C4, NOTE_D4, NOTE_E4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4,
  NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_D4, NOTE_D4, NOTE_E4,
  NOTE_D4, NOTE_G4,
};

int JingleBellsDurations[] = {
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4
};

const int echoPin = 7;
const int trigPin = 8;
const int Speaker1 = 2;
const int Speaker2 = 3;
int volume;

void setup() 
{
// Initialize serial communication:
  Serial.begin(9600);
  pinMode(echoPin, INPUT);
  pinMode(trigPin, OUTPUT);
  pinMode(Speaker1,OUTPUT);
}

void loop() 
{
  long duration,Distance;
  
// Distance Sensor reading
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  Distance = microsecondsToCentimeters(duration);

// Map Distance to a 0-255 scale (stored as 'volume', used below to stretch note durations)
  volume = map(Distance, 0, 100, 0, 255);  
  volume = constrain(volume, 0, 255); 

// Play melody with its speed scaled by the distance reading
  playMelody(Speaker1, JingleBells, JingleBellsDurations, ARRAY_LENGTH(JingleBells), volume);
  
// Debug output to Serial Monitor
  Serial.print("Distance: ");
  Serial.print(Distance);
  Serial.print("    Volume: ");
  Serial.print(volume);
  Serial.println();
}
// Get Centimeters from microseconds of Sensor
long microsecondsToCentimeters(long microseconds) 
{
  return microseconds / 29 / 2;
}
// PlayMelody function to accept volume and adjust note duration
void playMelody(int pin, int notes[], int durations[], int length, int volume) 
{
  for (int i = 0; i < length; i++) 
  {
// Adjust the note Duration based on the volume
    int noteDuration = (1000 / durations[i]) * (volume / 255.0);  

// Play the note with adjusted Durations
    tone(pin, notes[i], noteDuration);
// Delay to separate the notes
    delay(noteDuration * 1.3);  
    noTone(pin); 
  }
}
Reflection

This week’s assignment was especially interesting because we had a chance to collaborate on the project and combine our imagination and skills to create something original. I really enjoyed working with @Nelson. We worked great as a team by first coming up with an idea and then adjusting the concept to choose the ideal balance between randomness and something ordinary to achieve the final result.

I believe that this project has a lot of potential for future improvements, and perhaps I will use some parts of this week’s assignment for my future ideas and projects. Looking forward to the next weeks of the course!