Final Project – ThumbWave Studio – Final Report – Dachi


My final project mostly stays true to its initial goal, except that I combined both of my ideas. It is an interactive art project that lets you both visualize and create your own visual art. At the core of the project are sine wave structures, which in simple terms serve as the fundamental graphical elements (rendered in 3D using p5.js's WEBGL mode) for creating dynamic, flowing patterns. The user is free to control them with various gestures and interactions, all of which feel natural to perform with the hands: for example, pinching your thumb and index finger together and moving your hand up and down. There is also a layer of Arduino integration that expands the functionality: an ultrasonic sensor dictates the zoom level of the canvas, and three buttons serve different functions. The red one toggles Trail mode, which lets you create a dynamic painting; the yellow one stops the drawing so that you can then save your work with the green button as a screenshot. This simple concept can be expanded to bigger scales. I could imagine it implemented in science or other classes, where different mathematical equations or models are demonstrated and students can interact with them through simple gestures. They could adjust individual parameters and really see what's going on under the hood, and the gestures make doing so much more intuitive and fun. Moreover, they are free to experiment with trails, creating abstract drawings of their own and coming up with creative ways to combine and fuse different shapes and patterns into beautiful renders.


Example sketches:







Building Arduino interface:


User Testing:

My friend Adi, who has already taken this class and has experience in Interactive Media, figured it out quite quickly. Before we even started, he told me he had worked with ML5 before and knew its strengths and weaknesses and what to watch out for. Despite this, he still made some mistakes initially, since he did not fully know how to perform the gestures. I showed him the instructions page, and after he confirmed some of his doubts about hand orientations with me, we started testing a second time, where I tried to explain only after he had done things on his own. He said it was pretty smooth and there were no major delays. One way to make this clearer for people who, unlike Adi, have never worked with ML5 before would be to record short GIFs or videos as visual demonstrations instead of pictures and text, so they can see what's happening in three dimensions, since the project itself is very much three-dimensional. It will be interesting to see how people react to it during the IM showcase, which I will update in the respective section. Here is Adi's second run:

More on:



Interaction Design:

The interaction design revolves around using hand gestures to control various parameters of the visual art. Each gesture is mapped to a specific parameter, allowing the user to intuitively manipulate the shapes and patterns. For example, pinching the thumb and index finger together and moving the hand up and down controls the X-axis rotation. Similarly, thumb and middle finger controls the number of shapes, thumb and ring finger controls shape detail, and thumb and pinky changes the perceived rotation speed (zAngle). Additionally, using two hands changes zScale, which visually produces larger movement across Z, sort of like pulsing; moving the two hands like an accordion adds to the fun and visual candy. Users can also use the physical box, with its three buttons and distance sensor, to change zoom, toggle trail mode, stop/continue drawing, and save a screenshot. The use of gestures makes the interaction more engaging and natural, enabling users to explore and experiment with the artwork in a hands-on manner.
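The gesture-to-parameter mapping above can be sketched as a small lookup table: each gesture is identified by which fingertip is pinched against the thumb, and the matching parameter is the one to update. This is a simplified, hypothetical sketch of the dispatch idea, not the project's actual code (the names `GESTURE_MAP` and `activeParam` are mine):

```javascript
// Hypothetical sketch of the gesture-to-parameter dispatch idea.
// Fingertip indices follow the ml5 handPose keypoint layout:
// 4 = thumb tip, 8 = index, 12 = middle, 16 = ring, 20 = pinky.
const PINCH_THRESHOLD = 20; // pixels between thumb tip and fingertip

const GESTURE_MAP = [
  { fingertip: 8,  param: "rotateXAngle" },  // thumb + index: tilt
  { fingertip: 12, param: "numShapes" },     // thumb + middle: shape count
  { fingertip: 16, param: "shapeDetail" },   // thumb + ring: detail
  { fingertip: 20, param: "rotateZAngle" },  // thumb + pinky: spin speed
];

function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Given one hand's keypoints, return the name of the active gesture's
// parameter, or null if no fingertip is pinched against the thumb.
function activeParam(keypoints) {
  const thumb = keypoints[4];
  for (const { fingertip, param } of GESTURE_MAP) {
    if (distance(thumb, keypoints[fingertip]) < PINCH_THRESHOLD) {
      return param;
    }
  }
  return null;
}
```

Keeping the mapping in one table like this also makes it easy to add a new gesture later: one new entry rather than another `else if` branch.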

Arduino Code:

const int trigPin = 9;
const int echoPin = 10;
const int buttonPin = 2;
const int ledPin = 13;
const int trailButtonPin = 4;
const int trailLedPin = 7;
const int screenshotButtonPin = 12;
const int screenshotLedPin = 8;
const int adminButtonPin = 6;

long duration;
int distance;
int buttonState = 0;
int trailButtonState = 0;
int screenshotButtonState = 0;
int adminButtonState = 0;

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(buttonPin, INPUT);
  pinMode(ledPin, OUTPUT);
  pinMode(trailButtonPin, INPUT);
  pinMode(trailLedPin, OUTPUT);
  pinMode(screenshotButtonPin, INPUT);
  pinMode(screenshotLedPin, OUTPUT);
  pinMode(adminButtonPin, INPUT);
  Serial.begin(9600); // open the serial connection to p5.js
}

void loop() {
  // Read distance from ultrasonic sensor
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10); // 10 microsecond trigger pulse
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  distance = duration * 0.034 / 2; // convert echo time to distance in cm
  distance = constrain(distance, 1, 20); // clamp to the usable range

  // Read button states
  buttonState = digitalRead(buttonPin);
  trailButtonState = digitalRead(trailButtonPin);
  screenshotButtonState = digitalRead(screenshotButtonPin);
  adminButtonState = digitalRead(adminButtonPin);

  // Control LEDs based on button states
  digitalWrite(ledPin, buttonState);
  digitalWrite(trailLedPin, trailButtonState);
  digitalWrite(screenshotLedPin, screenshotButtonState);

  // Send data to p5.js as one comma-separated line
  Serial.print(distance);
  Serial.print(",");
  Serial.print(buttonState);
  Serial.print(",");
  Serial.print(trailButtonState);
  Serial.print(",");
  Serial.print(screenshotButtonState);
  Serial.print(",");
  Serial.println(adminButtonState);
  delay(50); // brief pause so we do not flood the serial port
}

The Arduino code is responsible for reading data from the ultrasonic sensor and button states, and sending this information to the p5.js sketch. It continuously measures the distance using the ultrasonic sensor, which is then mapped to control the zoom level of the canvas. The code also reads the states of three buttons: a red button for toggling the trail mode, a yellow button for stopping the drawing, and a green button for taking a screenshot. The button states and distance data are sent to p5.js via serial communication.

Description of p5.js code:

The p5.js code handles the visual rendering and interaction of the artwork. It uses the ML5 library for hand pose detection, allowing the sketch to recognize and track hand gestures. The code maps each gesture to a specific parameter of the visual elements, such as X-axis rotation, number of shapes, shape detail, and Z-axis rotation. It also incorporates the data received from the Arduino, using the ultrasonic sensor readings to control the zoom level and the button states to toggle trail mode, stop drawing, and take screenshots. The code creates a dynamic and interactive experience by combining the hand gesture controls with the Arduino inputs.

I have three main sketch files. Sketch.js is the main one and is responsible for drawing the artwork as well as declaring and initializing variables and creating the buttons, sliders, and other helpers and toggles. It binds the whole program together and keeps it running.


let handPose; // ml5.js hand pose object
let video; 
let hands = []; // stores hand data from the pose detector
let rotateXAngle = 60;
let rotateZAngle = 60;
let numShapes = 50; // number of geometric shapes displayed
let shapeDetail = 360 / 60;  // detail level of each shape
let radialScale = 3;
let zScale = 50; 
let gestureActivated = false;
let zoom = 1; 
let trailsEnabled = false;
let drawingEnabled = true;
let osc;
let adminMode = false; //for debugging + additional controls
let instructionsVisible = false;

//based on instruction on vs off
function updateButtonVisibility() {
  let buttons = selectAll('button:not(#backButton)');
  for (let i = 0; i < buttons.length; i++) {
    buttons[i].style('display', instructionsVisible ? 'none' : 'inline-block');
  }
}

function createInstructionsButton() {
  let buttonContainer = createDiv("");"position", "absolute");"right", "229px");"top", "20px");

  let instructionsButton = createButton("Instructions");"instructionsButton");
  instructionsButton.parent(buttonContainer);
  instructionsButton.mousePressed(toggleInstructions);
  styleButton(instructionsButton);
}

function toggleInstructions() {
  instructionsVisible = !instructionsVisible;
  updateButtonVisibility();
}

function hideInstructions() {
  instructionsVisible = false;
  backButtonPressed = true;
  updateButtonVisibility();
}

function createBackButton() {
  let buttonContainer = createDiv("");"position", "absolute");"left", "20px");"top", "20px");

  let backButton = createButton("↑");"backButton"); // id referenced by the show/hide logic in draw()
  backButton.parent(buttonContainer);
  backButton.mousePressed(hideInstructions);
  styleButton(backButton);
}

function toggleAdminMode() {
  adminMode = !adminMode;
  updateAdminElements();
}

function updateAdminElements() {
  let elements = selectAll('button:not(#adminButton):not(#fullscreenButton):not(#instructionsButton):not(#backButton), input[type="range"]');
  for (let i = 0; i < elements.length; i++) {
    elements[i].style('display', adminMode && !instructionsVisible ? 'inline-block' : 'none');
  }
}

function createAdminButton() {
  let buttonContainer = createDiv("");"position", "absolute");"right", "20px");"top", "20px");

  let adminButton = createButton("Admin");"adminButton"); // id excluded from the admin show/hide selector
  adminButton.parent(buttonContainer);
  adminButton.mousePressed(toggleAdminMode);
  styleButton(adminButton);
}

function createFullscreenButton() {
  let buttonContainer = createDiv("");"position", "absolute");"right", "110px");"top", "20px");

  let fullscreenButton = createButton("Fullscreen");"fullscreenButton");
  fullscreenButton.parent(buttonContainer);
  fullscreenButton.mousePressed(toggleFullscreen);
  styleButton(fullscreenButton);
}

function styleButton(button) {"background-color", "#4CAF50");"border", "none");"color", "white");"padding", "10px 20px");"text-align", "center");"text-decoration", "none");"display", "inline-block");"font-size", "16px");"border-radius", "4px");"cursor", "pointer");
}

function createSliders() { // visible in admin mode
  let sliderContainer = createDiv("");"sliderContainer");"position", "absolute");"left", "20px");"top", "20px");"display", "flex");"flex-direction", "column");

  let rotateZSlider = createSlider(10, 180, rotateZAngle);
  rotateZSlider.parent(sliderContainer);"width", "200px");
  rotateZSlider.input(() => updateRotateZAngle(rotateZSlider.value()));

  let numShapesSlider = createSlider(10, 100, numShapes, 1);
  numShapesSlider.parent(sliderContainer);"width", "200px");
  numShapesSlider.input(() => updateNumShapes(numShapesSlider.value()));

  let shapeDetailSlider = createSlider(3, 60, 6, 1);
  shapeDetailSlider.parent(sliderContainer);"width", "200px");
  shapeDetailSlider.input(() => updateShapeDetail(shapeDetailSlider.value()));

  let radialScaleSlider = createSlider(1, 10, radialScale, 0.1);
  radialScaleSlider.parent(sliderContainer);"width", "200px");
  radialScaleSlider.input(() => updateRadialScale(radialScaleSlider.value()));

  let zScaleSlider = createSlider(10, 100, zScale, 1);
  zScaleSlider.parent(sliderContainer);"width", "200px");
  zScaleSlider.input(() => updateZScale(zScaleSlider.value()));

  let zoomSlider = createSlider(0.1, 2, zoom, 0.1);
  zoomSlider.parent(sliderContainer);"width", "200px");
  zoomSlider.input(() => updateZoom(zoomSlider.value()));
}

function toggleFullscreen() {
  if (!document.fullscreenElement) {
    if (document.documentElement.requestFullscreen) {
      document.documentElement.requestFullscreen();
    } else if (document.documentElement.webkitRequestFullscreen) { // Safari
      document.documentElement.webkitRequestFullscreen();
    } else if (document.documentElement.msRequestFullscreen) { // IE/Edge
      document.documentElement.msRequestFullscreen();
    }
  } else {
    if (document.exitFullscreen) {
      document.exitFullscreen();
    } else if (document.webkitExitFullscreen) { // Safari
      document.webkitExitFullscreen();
    } else if (document.msExitFullscreen) { // IE/Edge
      document.msExitFullscreen();
    }
  }
}
function windowResized() {
  if (document.fullscreenElement) {
    resizeCanvas(windowWidth, windowHeight);
    instructionsGraphics.resizeCanvas(windowWidth, windowHeight * 2); // for fullscreen
  } else {
    resizeCanvas(windowWidth, windowHeight);
    instructionsGraphics.resizeCanvas(windowWidth, windowHeight * 2);
  }
}

function updatePitch(value) { // sound for the two-hand zScale gesture
  let freq = map(value, 1, 200, 50, 400);
  oscZScale.freq(freq);
}

function updateFrequency(value) { // different shape sound, reversed mapping
  let freq = map(value, 10, 180, 400, 50);
  oscRotateZAngle.freq(freq);
}

function preload() {
  handPose = ml5.handPose();
  img1 = loadImage('gestures.png');
  img2 = loadImage('box.png');
}

function createScreenshotButton() {
  let buttonContainer = createDiv("");"position", "absolute");"left", "20px");"top", "260px");

  let screenshotButton = createButton("Take Screenshot");
  screenshotButton.parent(buttonContainer);
  screenshotButton.mousePressed(takeScreenshot);
  styleButton(screenshotButton);
}

function takeScreenshot() {
  saveCanvas('screenshot', 'png');
}

function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  video = createCapture(VIDEO);
  video.size(640, 480); // original resolution
  //'transform', 'scale(-1, 1)'); // mirror the feed if you need to test the video
  video.hide(); // hide the raw feed; comment out if you need to test the video
  handPose.detectStart(video, gotHands);
  oscZScale = new p5.Oscillator('triangle'); // triangle waveform
  oscRotateZAngle = new p5.Oscillator('sawtooth'); // sawtooth waveform
  envZScale = new p5.Envelope(); // used to control amplitude
  envZScale.setADSR(0.1, 0.2, 0.5, 0.5); // attack, decay, sustain, release
  envZScale.setRange(0.2, 0); // silence after the sustain phase

  createInstructionsButton();
  createBackButton();
  createAdminButton();
  createFullscreenButton();
  createTrailButton();
  createStopDrawingButton();
  createScreenshotButton();
  createSliders();

  // updateFrequency(rotateZAngle); // initial frequency
  updateAdminElements(); // make admin controls initially not visible
  instructionsGraphics = createGraphics(width, height * 2); // graphics buffer for instructions
}
function keyPressed() { // spacebar to select serial port
  if (key === ' ') {
    setUpSerial();
  }
}

let backButtonPressed = false;

function draw() {
  if (!trailsEnabled || backButtonPressed) {
    background(30); // clear background
    backButtonPressed = false;
  }

  if (instructionsVisible) {
    // Clear the instructions graphics buffer
    instructionsGraphics.fill(0); //rectangle background black
    instructionsGraphics.rect(0, 0, width, 2*height);
    instructionsGraphics.textAlign(LEFT, TOP); //align text
    // Adjust text size based on fullscreen mode
    let textSizeValue = document.fullscreenElement ? 24 : 16;
    instructionsGraphics.textSize(textSizeValue);
    instructionsGraphics.fill(255); // light text for readability over the black panel

    // instruction paragraphs
    let paragraph1 = `Welcome to ThumbWave studio where you can explore interactive mathematical graphics with your hand gestures. Different hand gestures are assigned to different parameter. Holding your thumb and index finger together alters the tilt of the visuals, mimicking the effect of changing your perspective. Bringing your thumb and middle finger together adjusts the number of shapes on the display, allowing you to fill the screen with complexity or clear it for simplicity. Connecting your thumb to your ring finger modifies the intricacy of each shape, adding a layer of detail with further movements.`;
    let paragraph2 = `Keep in mind, while you hold these gestures you can change the value of individual parameter by moving your hand up and down (for X-axis rotation) or side to side (for the rest). Finally, a touch between your thumb and pinkie will spin the shapes around the Z-axis, injecting motion into the scene. For a more dramatic effect, use both hands as if handling an accordion: moving your hands together and apart changes the scale and depth of the shapes on the screen and alters the pitch of the background sounds to match your movements, enhancing the sensory experience.`;

    // each paragraph with appropriate spacing
    let padding = 20;
    let topPadding = 80; // top padding before the first paragraph
    let maxTextWidth = width * 0.9;
    let lineSpacing = document.fullscreenElement ? 60 : 50;

    drawParagraph(instructionsGraphics, paragraph1, padding, topPadding + padding, maxTextWidth, lineSpacing);
    let paragraph1Height = calculateParagraphHeight(instructionsGraphics, paragraph1, maxTextWidth, lineSpacing);

    // first image after the first paragraph
    let img1Width = width * 0.8;
    let img1Height = img1.height * (img1Width / img1.width);
    let img1X = (width - img1Width) / 2;
    let img1Y = topPadding + padding + paragraph1Height + lineSpacing;
    instructionsGraphics.image(img1, img1X, img1Y, img1Width, img1Height);

    drawParagraph(instructionsGraphics, paragraph2, padding, img1Y + img1Height + lineSpacing, maxTextWidth, lineSpacing);
    let paragraph2Height = calculateParagraphHeight(instructionsGraphics, paragraph2, maxTextWidth, lineSpacing);

    // second image after the second paragraph
    let img2Width = width * 0.8;
    let img2Height = img2.height * (img2Width / img2.width);
    let img2X = (width - img2Width) / 2;
    let img2Y = img1Y + img1Height + lineSpacing + paragraph2Height + lineSpacing;
    instructionsGraphics.image(img2, img2X, img2Y, img2Width, img2Height);
    // scroll the instructions buffer with the mouse position
    let scrollPosition;
    if (document.fullscreenElement) {
      scrollPosition = map(mouseY, 0, windowHeight, 0, instructionsGraphics.height - windowHeight);
    } else {
      scrollPosition = map(mouseY, 0, height, 0, instructionsGraphics.height - height);
    }
    image(instructionsGraphics, -width / 2, -height / 2, width, height, 0, scrollPosition, width, height);

    // Show the back button
    select("#backButton").style("display", "inline-block");
  } else { // dynamic rendering of 3D geometric shapes
    if (drawingEnabled) {
      scale(zoom); // zoom controlled by the distance sensor
      rotateX(rotateXAngle); // tilt controlled by the pinch gesture
      for (let i = 0; i < numShapes; i++) { // dynamic color assignment
        let r = map(sin(frameCount / 2), -1, 1, 100, 200);
        let g = map(i, 0, numShapes, 100, 200);
        let b = map(cos(frameCount), -1, 1, 200, 100);
        stroke(r, g, b);
        noFill();
        rotate(frameCount / rotateZAngle); // rotate shape around z axis
        beginShape();
        for (let j = 0; j < 360; j += shapeDetail) { // 3D coordinates for each vertex of the shape
          let rad = j * radialScale;
          let x = rad * cos(j);
          let y = rad * sin(j);
          let z = sin(frameCount * 2 + i * 5) * zScale;
          vertex(x, y, z);
        }
        endShape(CLOSE);
      }
    }

    // Hide the back button
    select("#backButton").style("display", "none");
  }
}

// Helper function to draw a paragraph of text
function drawParagraph(graphics, text, x, y, maxWidth, lineSpacing) {
  let words = text.split(' ');
  let currentLine = '';
  let yPos = y;
  //split text in individual words
  for (let i = 0; i < words.length; i++) {
    let word = words[i];
    let testLine = currentLine + ' ' + word; //add to current line
    let testWidth = graphics.textWidth(testLine);

    if (testWidth > maxWidth && currentLine !== '') { // line would exceed max width
      graphics.text(currentLine, x, yPos); // draw the completed line
      currentLine = word;
      yPos += lineSpacing;
    } else {
      currentLine = testLine; // word fits on the current line
    }
  }
  graphics.text(currentLine, x, yPos); // draw the last line
}

function calculateParagraphHeight(graphics, text, maxWidth, lineSpacing) {
  let words = text.split(' ');
  let currentLine = '';
  let height = 0;

  for (let i = 0; i < words.length; i++) {
    let word = words[i];
    let testLine = currentLine + ' ' + word;
    let testWidth = graphics.textWidth(testLine);

    if (testWidth > maxWidth && currentLine !== '') {
      currentLine = word;
      height += lineSpacing; // increment the height counter for the emitted line
    } else {
      currentLine = testLine;
    }
  }
  height += lineSpacing; // add the last line's spacing
  return height;
}

function createStopDrawingButton() {
  let buttonContainer = createDiv("");"position", "absolute");"left", "20px");"top", "230px");

  let stopDrawingButton = createButton("Stop Drawing");
  stopDrawingButton.parent(buttonContainer);
  stopDrawingButton.mousePressed(toggleDrawing);
  styleButton(stopDrawingButton);
}

function toggleDrawing() {
  drawingEnabled = !drawingEnabled;
}

function createTrailButton() {
  let buttonContainer = createDiv("");"position", "absolute");"left", "20px");"top", "200px");

  let trailButton = createButton("Toggle Trails");
  trailButton.parent(buttonContainer);
  trailButton.mousePressed(toggleTrails);
  styleButton(trailButton);
}

function toggleTrails() {
  trailsEnabled = !trailsEnabled;
}

let rotateXAngleHistory = [];
let rotateXAngleHistorySize = 10;
let rotateXAngleSmoothingFactor = 0.2; //used for smoothing X with moving average

function updateRotateZAngle(value) {
  rotateZAngle = value;
}

function updateNumShapes(value) {
  numShapes = value;
}

function updateShapeDetail(value) {
  shapeDetail = 360 / value;
}

function updateRadialScale(value) {
  radialScale = value;
}

function updateZScale(value) {
  zScale = value;
}

function updateZoom(value) {
  zoom = value;
}

The other file is gestures.js, which is the heart of the project. It handles the detection of gestures as described above. The general functions are well commented, and you are free to inspect them for the particular mechanisms and logic.


function gotHands(results) {
  hands = results; //store data
  if (hands.length === 2) { //two hand detection
    let leftWrist = hands[0].keypoints[0];
    let rightWrist = hands[1].keypoints[0];
    // get wrist positions and then calculate wrist distance
    let wristDistance = dist(leftWrist.x, leftWrist.y, rightWrist.x, rightWrist.y);

    let minDistance = 100;
    let maxDistance = 400;
    //normalize wrist constraints
    let mappedDistance = constrain(wristDistance, minDistance, maxDistance);
    let zScaleNew = map(mappedDistance, minDistance, maxDistance, 1, 200); //we map it to z scale
    zScale = zScaleNew; //update global value
    if (adminMode) {
      console.log("Two hands gesture - zScale:", zScale);
    }
  } else if (hands.length > 0) { // if at least one hand is detected
    if (adminMode) {
      console.log("Pinch gesture - rotateXAngle:", rotateXAngle);
      console.log("Middle-thumb gesture - numShapes:", numShapes);
      console.log("Ring-thumb gesture - shapeDetail:", shapeDetail);
      console.log("Pinkie-thumb gesture - rotateZAngle:", rotateZAngle);
    }
    // fingertips for various gestures
    let indexFingerTip = hands[0].keypoints[8];
    let thumbTip = hands[0].keypoints[4];
    let pinchDistance = dist(indexFingerTip.x, indexFingerTip.y, thumbTip.x, thumbTip.y);

    let middleFingerTip = hands[0].keypoints[12];
    let middleThumbDistance = dist(middleFingerTip.x, middleFingerTip.y, thumbTip.x, thumbTip.y);

    let ringFingerTip = hands[0].keypoints[16];
    let ringThumbDistance = dist(ringFingerTip.x, ringFingerTip.y, thumbTip.x, thumbTip.y);

    let pinkieFingerTip = hands[0].keypoints[20];
    let pinkieThumbDistance = dist(pinkieFingerTip.x, pinkieFingerTip.y, thumbTip.x, thumbTip.y);

    //thumb to index gesture
    if (pinchDistance < 20) {
      gestureActivated = true;
      let wristY = hands[0].keypoints[0].y;
      let centerY = video.height * 0.6;
      let range = video.height / 10;
      let mappedY = constrain(wristY, centerY - range, centerY + range);
      let rotateXAngleNew = map(mappedY, centerY - range, centerY + range, 200, 0);

      // we maintain a history of rotateX angles to smooth the transition
      rotateXAngleHistory.push(rotateXAngleNew);
      if (rotateXAngleHistory.length > rotateXAngleHistorySize) {
        rotateXAngleHistory.shift(); // drop the oldest sample
      }

      let rotateXAngleAverage = rotateXAngleHistory.reduce((sum, value) => sum + value, 0) / rotateXAngleHistory.length;

      rotateXAngle = lerp(rotateXAngle, rotateXAngleAverage, rotateXAngleSmoothingFactor);

    } else if (middleThumbDistance < 20) { // thumb and middle finger gesture
      let wristX = hands[0].keypoints[0].x;
      let centerX = video.width * 0.5;
      let range = video.width / 8;
      let mappedX = constrain(wristX, centerX - range, centerX + range);
      let numShapesNew = round(map(mappedX, centerX - range, centerX + range, 100, 1));
      numShapes = numShapesNew;

    } else if (ringThumbDistance < 20) { // ring and thumb gesture
      let wristX = hands[0].keypoints[0].x;
      let centerX = video.width * 0.5;
      let range = video.width / 8;
      let mappedX = constrain(wristX, centerX - range, centerX + range);
      let shapeDetailNew = round(map(mappedX, centerX - range, centerX + range, 3, 60));
      shapeDetail = 360 / shapeDetailNew;
    } else if (pinkieThumbDistance < 20) { //pinkie thumb gesture
      let wristX = hands[0].keypoints[0].x;
      let centerX = video.width * 0.5;
      let range = video.width / 8;
      let mappedX = constrain(wristX, centerX - range, centerX + range);
      let rotateZAngleNew = round(map(mappedX, centerX - range, centerX + range, 10, 180));
      rotateZAngle = rotateZAngleNew;

    } else {
      gestureActivated = false;
    }
  }
}

Last but not least is the Arduino file. This code sets up and runs the serial communication between the Arduino and p5.js. It reads data from the Arduino, which includes the ultrasonic sensor distance and button states, and updates the corresponding variables in the p5.js sketch. The setUpSerial function initializes the serial communication, while the runSerial function continuously reads data from the serial port. The readSerial function parses the received data and updates the p5.js sketch accordingly. The distance value from the ultrasonic sensor controls the zoom level of the canvas using a smoothing technique. The button states toggle various functionalities, such as enabling/disabling drawing, trails, and taking screenshots. If adminMode is enabled, the code logs relevant information to the console for debugging purposes.


async function setUpSerial() {
  ({ reader, writer } = await getPort());
  serialActive = true;
  runSerial(); // start the read loop
}

async function runSerial() {
  try {
    while (true) {
      if (serialActive) {
        // assumes the reader yields complete lines, as in the class serial template
        const { value, done } = await;
        if (done) {
          break; // the reader has been canceled
        } else {
          readSerial(value); // hand the received line to the sketch
        }
      }
    }
  } catch (e) {
    console.error(e);
  }
}

let zoomPrev = zoom;
let zoomSmoothingFactor = 0.1;
let zoomHistory = [];
let zoomHistorySize = 10;

let previousButtonState = 0;
let previousTrailButtonState = 0;
let previousScreenshotButtonState = 0;

let previousAdminButtonState = 0;

function readSerial(data) {
  let values = data.trim().split(",");
  if (values.length === 5) {
    let distance = parseInt(values[0]);
    let buttonState = parseInt(values[1]);
    let trailButtonState = parseInt(values[2]);
    let screenshotButtonState = parseInt(values[3]);
    let adminButtonState = parseInt(values[4]);

    if (!isNaN(distance) && !isNaN(buttonState) && !isNaN(trailButtonState) && !isNaN(screenshotButtonState) && !isNaN(adminButtonState)) {
      let zoomNew = map(distance, 1, 20, 4, 0.2);
      zoomHistory.push(zoomNew);
      if (zoomHistory.length > zoomHistorySize) {
        zoomHistory.shift(); // drop the oldest sample
      }
      let zoomAverage = zoomHistory.reduce((sum, value) => sum + value, 0) / zoomHistory.length;
      zoom = lerp(zoomPrev, zoomAverage, zoomSmoothingFactor);
      zoomPrev = zoom;
      if (adminMode) {
        console.log("Distance sensor - zoom:", zoom);
      }

      if (buttonState === 1 && previousButtonState === 0) {
        drawingEnabled = !drawingEnabled;
        if (adminMode) {
          console.log("Drawing state toggled:", drawingEnabled);
        }
      }
      previousButtonState = buttonState;

      if (trailButtonState === 1 && previousTrailButtonState === 0) {
        trailsEnabled = !trailsEnabled;
        if (adminMode) {
          console.log("Trails state toggled:", trailsEnabled);
        }
      }
      previousTrailButtonState = trailButtonState;

      if (screenshotButtonState === 1 && previousScreenshotButtonState === 0) {
        saveCanvas('screenshot', 'png');
        if (adminMode) {
          console.log("Screenshot taken");
        }
      }
      previousScreenshotButtonState = screenshotButtonState;

      if (adminButtonState === 1 && previousAdminButtonState === 0) {
        toggleAdminMode(); // physical admin button mirrors the on-screen one
      }
      previousAdminButtonState = adminButtonState;
    }
  }
}

The communication between Arduino and p5.js:

As described above, the Arduino code sends the ultrasonic sensor distance and button states as a comma-separated string to p5.js. The p5.js code listens for the serial data and parses the received string to extract the distance and button states. This parsed data is then used to update the corresponding variables in the p5.js sketch, allowing the Arduino inputs to influence the visual output in real time. This seamless communication between Arduino and p5.js enables the integration of physical interactions with the digital artwork.

Sketch Embed

Link for testing:

What I am proud of:

I am proud of making the ML5 Handpose integration as smooth as it is. For the most part, the experience is seamless, with only minor delay. I am glad the visualization works, and due to the flexible nature of this project, it could be expanded to other mathematical demonstrations.

I am also proud of the fact that I did not fix myself to one idea. I experimented with both and found a way to combine them. Allowing users to not just test but also create something of their own is true interaction, and I feel this project accomplishes that by integrating many of the hardware, software, and design principles we learned in class.


Resources I used:

For sine wave form visualizations I followed Colorful Coding videos:

For general knowledge about ML5, I used ML5 project website:

For Handpose detection model, I used following Github repository with next gen ml5:

For general principle, knowledge, troubleshooting – I used open web, provided slides and other available resources.

Challenges I faced and how I overcame them:

Throughout the development of this project, I encountered several challenges that tested my problem-solving skills and pushed me to think creatively. One of the main ones was ensuring a smooth and responsive interaction between the hand gestures and the visual elements. Initially, the gestures felt sluggish and unreliable, leading to a frustrating user experience. To overcome this, I spent a considerable amount of time fine-tuning the gesture recognition and mapping, experimenting with different thresholds and smoothing techniques, and ended up with a moving-average method. Through trial and error, I managed to strike a balance between responsiveness and stability, resulting in a more intuitive and enjoyable interaction.
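The moving-average smoothing can be illustrated in isolation. This is a standalone sketch of the technique rather than the project's exact code: keep a short history of raw readings, average it, then `lerp` toward that average so a single noisy sample barely moves the output.

```javascript
// Standalone sketch of the moving-average + lerp smoothing technique.
const HISTORY_SIZE = 10;       // how many recent raw readings to remember
const SMOOTHING_FACTOR = 0.2;  // how quickly we chase the running average

const history = [];
let smoothed = 0;

function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Feed one raw reading; returns the new smoothed value.
function smooth(raw) {
  history.push(raw);
  if (history.length > HISTORY_SIZE) {
    history.shift(); // drop the oldest sample
  }
  const average = history.reduce((sum, v) => sum + v, 0) / history.length;
  smoothed = lerp(smoothed, average, SMOOTHING_FACTOR);
  return smoothed;
}
```

The two knobs trade responsiveness for stability: a longer history damps noise harder, while a larger smoothing factor makes the output follow the hand more eagerly.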

Another challenge I faced was managing the different graphics buffers. Sometimes drawings would get messed up and not display at all, display on top of each other, or display only partially. Again, it took lots of trial and error, but I eventually found what worked. There were several other minor bugs that might not have been immediately noticeable to users, but I tried to polish them out so that the whole experience remained coherent. The last aspect was improving the CSS and styling to make the presentation visually pleasing. The audio was also a bit tricky. I tried keeping constant audio, but that got annoying after a while; the dynamic audio, which is only audible during logical movements (e.g., stretching across Z), is much more satisfying.

I also took a few suggestions from friends and professors to improve the interface. For example, I added visuals to the instructions page to make it more user friendly. Additionally, I added a piece of plywood inside the box as a counterweight to keep the box from moving when users press the buttons. Perhaps cramming everything together in one box was the most nerve-wracking part, as I was using double-sided tape and only had one shot at making it work. I planned it in my head many, many times before I actually committed to cutting out the cardboard shapes and putting my components in without disconnecting anything. The most annoying issue I faced was perhaps something I could not control at all: my USB hub, which never gives me issues otherwise, does not work properly with Arduino. Sometimes it just refused to work, and I had to switch to other hubs until it would miraculously start working again.

Areas for Future Improvement:

There are several directions this project could be extended in. Firstly, I would love to integrate more advanced machine learning models, or perhaps one day learn to write one myself, to make the whole experience even smoother. More accurate detection would allow even more nuanced gestures, which could further improve the user experience and make the whole process run faster.

Additionally, I would like to add more mathematical models and expand the project's educational context. Perhaps with a modular design, I could let users pick their desired mathematical concept, or other science-based concepts from biology or chemistry, and have their own visualization running in as little time as possible. They could assign their own parameters and have more flexibility with all the movements. A more advanced model would also help with multi-person demonstrations, where several people engage with a single visualization, either by observing it or by producing their own art. Of course, polishing the current code, improving the Arduino casing, and adding more sensors are all viable avenues as well. It would be cool to add vibration motors so users could also feel the movement in real time, making the experience much more tactile and intuitive.

In the end, I am very glad with how my project turned out, and despite facing numerous challenges, I had a lot of fun overall and would love to come up with more creative projects like this in the future. I hope you liked it too!


Final Concept Proposal Confirmation

Final Concept:

I am still going to use ML5 and find a way to combine machine learning with Arduino to develop an interactive art project. I did some testing with canvas painting, and while it worked, I found interactive art manipulation to be much more fun and unique. For now I am experimenting with sine forms and their manipulation via hand pose. I hope to add new gestures and features to enhance the interactivity of the art. For the Arduino side of things I have not fully decided yet, but I was thinking of using a distance sensor as a tracker for engagement, and the rest of the buttons to change the user interface or colors, or perhaps to navigate between different mathematical models.



Week 12 Exercises – Tengis & Dachi

Exercise 1

The Arduino continuously reads the electrical signal from a potentiometer. This signal represents the potentiometer’s position based on its rotation. The code then scales this raw sensor reading (ranging from 0 to 1023) to a new range of 0 to 400 using a function called map. This scaled value becomes the control signal for the circle’s movement on the web page.

The scaled sensor value (now within the 0-400 range) is sent from the Arduino to the p5js code running in the web browser. When data arrives in p5js, the code assigns it to a variable named circleX. This variable essentially controls the circle’s position on the screen. Finally, the code uses this value to dynamically adjust the horizontal position (X coordinate) of the circle drawn on the p5js canvas.
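As a quick sanity check of the scaling step, the same arithmetic that Arduino's map performs can be reproduced in plain C++ (mapRange is a hypothetical stand-in name, not from the original sketch):

```cpp
#include <cassert>

// Re-implementation of Arduino's integer map(): linearly rescale x from
// [inMin, inMax] to [outMin, outMax] using integer arithmetic.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}
```

With the 0-1023 to 0-400 mapping from the exercise, a reading of 0 gives 0, 1023 gives 400, and a mid-scale reading of 512 lands at 200 (integer division truncates the fraction).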


  • Arduino Board
  • Potentiometer
  • Computer
  • Wires
  • Breadboard




Exercise 2

The p5js code continuously tracks the mouse cursor’s position on the canvas (represented by the mouseX and mouseY variables). The p5js code directly assigns the mouseX value to the brightness1 variable and the mouseY value to the brightness2 variable. These variables essentially store the desired brightness levels for the two LEDs. The readSerial function in p5js combines the brightness1 and brightness2 values with a comma (“,”) separator and adds a newline character (“\n”) to create a formatted message. This formatted message is then sent to the Arduino.

Once the data (representing the formatted message) are available, the code reads them using Serial.parseInt. This separates the combined brightness values stored in brightness1 and brightness2. The map function then scales both brightness values from the p5js range (0-400) to the appropriate range (0-255) for controlling the LEDs. The code includes error handling by checking for a newline character (\n) after reading the brightness values. This ensures complete data reception before setting the LED brightness. If no data is received, the Arduino prints an error message “No signal received” to the serial monitor. Finally, the Arduino sets the brightness of each LED (LED 1 on pin 10 and LED 2 on pin 11) based on the corresponding received values (brightness1 and brightness2).
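The parsing step on the Arduino side can be sketched in plain C++ for testing outside the board (parseMessage and the Brightness struct are illustrative names; sscanf stands in for Serial.parseInt):

```cpp
#include <cassert>
#include <cstdio>

// Result of parsing one "brightness1,brightness2\n" message from p5js.
struct Brightness { long led1; long led2; bool ok; };

// Rescale from the p5js canvas range (0..400) to the PWM range (0..255),
// mirroring Arduino's map().
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Parse the comma-separated message and map both values for the LEDs.
Brightness parseMessage(const char* msg) {
    long b1 = 0, b2 = 0;
    if (std::sscanf(msg, "%ld,%ld", &b1, &b2) != 2) {
        return {0, 0, false};  // incomplete message: report "no signal"
    }
    return {mapRange(b1, 0, 400, 0, 255), mapRange(b2, 0, 400, 0, 255), true};
}
```

For example, the message "400,200\n" comes out as full brightness (255) on the first LED and 127 on the second.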


  • Computer
  • Arduino Board
  • Two LEDs
  • Connecting Wires




Exercise 3

The p5js code establishes the core mechanics for simulating a bouncing ball on the screen. It defines various physics concepts like gravity, drag, acceleration, and wind. The code continuously updates the ball’s position and velocity based on the applied forces and drag. When the ball hits the bottom of the canvas, its vertical velocity is reversed, simulating a bounce. At this point, a variable named ledOn is set to 1, indicating the LED should be turned on. If the serial connection is active, the code sends the ledOn value (0 or 1) as a string followed by a newline character (“\n”) to the Arduino using writeSerial.

The readSerial function gets called whenever new data arrives from the Arduino. Here, it parses the received data and assigns it to the windVale variable. This value updates the wind vector, influencing the ball’s horizontal movement in the simulation. The Arduino code continuously reads the analog value from a potentiometer connected to pin A0. It then maps this value (ranging from 0 to 1023) to a new range of -10 to 10 using the map function. This mapped value represents the wind force affecting the ball in the p5js simulation. The Arduino transmits this wind force value to the p5js code. The Arduino constantly checks for incoming data on the serial port. If data is available, it reads the first character and checks its value. If the character is ‘1’, the LED connected to pin 13 is turned on. If the character is ‘0’, the LED is turned off.
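The two small pieces of logic described above – the signed wind mapping and the bounce flag – can be sketched in plain C++ (function and parameter names here are illustrative, not from the original sketches):

```cpp
#include <cassert>

// Mirror of Arduino's map(): rescale a potentiometer reading (0..1023)
// into a signed wind force (-10..10).
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

long windFromPot(long raw) {
    return mapRange(raw, 0, 1023, -10, 10);
}

// The p5js side sets ledOn to 1 for the frame in which the ball touches
// the bottom of the canvas (positions measured downward from the top).
int ledOnForFrame(float ballBottom, float canvasHeight) {
    return ballBottom >= canvasHeight ? 1 : 0;
}
```

A centered potentiometer (raw reading around 512) maps to zero wind, and the two extremes give -10 and +10.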


  • Computer
  • Arduino Board
  • Potentiometer
  • LED
  • Connecting Wires
  • Breadboard




Challenges and Reflection

Overall, we faced quite a few challenges regarding the Arduino malfunctioning due to serial communication errors. Sometimes it was the fault of the code, the adapter, or p5js. Either way, we found step-by-step troubleshooting to be the solution. In the end, though, we are happy with our progress, and this has helped us prepare for our final project.



Dachi Reading Response – Week 12

The author, Graham Pullin, makes an interesting case that the design of assistive technologies and products for people with disabilities has often been overly utilitarian and has neglected aesthetics, identity, and broader quality-of-life considerations. He argues that assistive devices should be designed not just for narrow functionality but as fashionable, desirable consumer products that can enhance the user’s self-image and social interactions.

Pullin’s perspective aligns with the modern social model of disability, which holds that people are often more disabled by environmental and social barriers than by their physical or mental impairments. Well-designed assistive products can help break down those barriers. And by making such products stylish and attractive to use, they may also help combat the stigma frequently associated with disability and assistive tech.

I agree with Pullin’s core ideas. Too often, assistive devices have looked medical, institutional, and perhaps alienating, marking the user as different in a negative way. But that’s not always true. For example, eyeglasses have evolved from purely functional visual aids to fashion accessories worn even by those who don’t medically require them. One could argue that other kinds of aids could likewise become part of someone’s fashion.

The author argues for diversity and choice in design, but this is not always desirable. Sometimes having too many choices just makes the user more confused, as we discussed in earlier articles, and a more simplistic approach is best.

Redesigning already established tools won’t necessarily solve all the issues associated with the negative stigma surrounding disability. For example, you might improve wheelchair design and make it more techy and futuristic, but one will still always see it as a wheelchair, and I doubt it will become a fashion theme among the general public. Additionally, even if you improve wheelchair design, it is the surrounding environment that makes it accessible, which should often be the key focus. With more complex design choices, affordability becomes a problem. More elaborate choices in eyewear have made glasses pretty expensive, with people chasing brand value and collaborations rather than actual utility.

Despite all this, I firmly believe that creativity always emerges on top, and people will find ways to make all “disability”-associated designs more mainstream. Nevertheless, we should not forget that first of all, it’s about core utility and accessibility and less about mainstream fashion trends. If the extra cost became marginal, then perhaps one could view them in the same terms. We are far from that future, but not as far as one might have imagined when the article was originally published.

Chroma Cassette – Musical Instrument – Dachi and Tengis

The Chroma Cassette is a fun and interactive project that plays pre-loaded songs and allows users to control the playback speed using a distance sensor. The name “Chroma” refers to the project’s ability to switch the color of LEDs based on the song being played, while “Cassette” reflects the inspiration behind the variable playback speed control, similar to the fast-forwarding feature of cassette tapes.

  • Hardware Components:
    • Arduino Uno microcontroller
    • Piezo Speaker for playback
    • Distance sensor (ultrasonic sensor) to detect distance
    • Button for manual song switching
    • LEDs (Red, Green, Blue) for colorful song indication
    • Jumper wires for connecting components
    • Breadboard
  • Software (Code):
    • An array named songNames stores the titles of the pre-loaded songs (Game of Thrones, Imperial March, Pirates of the Caribbean, Silent Night).
    • Each song melody is defined as an array representing musical notes and their corresponding durations.
    • A function named playSong iterates through each note in the current song and plays it based on its duration. The function also calculates a speedFactor based on the distance measured by the sensor. This speedFactor is used to adjust the note duration, essentially changing the playback speed. Higher distances from the sensor result in slower playback, mimicking the fast-forwarding effect.
    • The setRGBColor function assigns specific colors to the LEDs based on the current song being played, adding a visual element to the project.
    • An interrupt service routine is triggered when the button is pressed, and a flag named switchSong is set to true, indicating the need to switch to the next song in the playlist.
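The speed control at the heart of playSong can be sketched as a small C++ function (names are illustrative; the base factors mirror the code snippet further down, and double precision is used here to keep the example arithmetic exact):

```cpp
#include <cassert>

// durationDivider is the melody's duration entry (melody[noteIndex + 1]);
// distanceCm comes from the ultrasonic sensor. Larger distances stretch
// the note duration, i.e. slow the playback like a dragging cassette.
int noteDurationMs(int durationDivider, int songIndex, double distanceCm) {
    double speedFactor;
    if (songIndex == 1) {
        speedFactor = 1.1 + distanceCm / 30.0;  // slower scaling for Imperial March
    } else if (songIndex == 2) {
        speedFactor = 0.6 + distanceCm / 30.0;
    } else if (songIndex == 3) {
        speedFactor = 0.4 + distanceCm / 30.0;
    } else {
        speedFactor = 1.2 + distanceCm / 30.0;
    }
    // Integer division happens first, matching the original expression
    return (int)(1000 / durationDivider * speedFactor);
}
```

With a quarter-note divider of 4 on the Imperial March, the duration grows from 275 ms with the hand right at the sensor to 525 ms at the 30 cm end of the calibrated range.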

The Chroma Cassette project, initially, embarked on a path paved with frustration. Our initial goal was to directly control the volume of the pre-loaded songs. Countless hours were spent crafting code, only to be met with a difficulty: directly manipulating volume on the Arduino platform proved to be an insurmountable hurdle. This limitation stemmed from the inherent architecture of the Arduino kit, lacking dedicated hardware components for fine-grained volume control.

We brainstormed alternative approaches to achieve a dynamic audio experience, eventually agreeing on varying the playback speed of the music. This approach, however, presented its own set of challenges and took lots of trial and error, adjusting code and testing countless iterations. This phase, though time-consuming, ultimately yielded a solution that met our exacting standards.

The foundation of the Chroma Cassette lies in its pre-loaded song library. Each song, be it the epic theme from Game of Thrones or the whimsical melody of Pirates of the Caribbean, was meticulously chosen to complement the project’s functionality. Once the song selection was finalized, we embarked on a critical step: adjusting the speedFactor. This variable acts as the heart of the speed control mechanism. Meticulous adjustments were made to the speedFactor for each song, ensuring that even at faster playback speeds, the music retained its integrity and remained pleasant to the ears.

The distance sensor served as the conductor in this symphony of sound and speed. It was calibrated to operate within a specific range, from 1 centimeter to 30 centimeters. This precise calibration ensured a smooth and responsive adjustment curve. As the distance between the sensor and an object increased, the playback speed would gradually slow down, mimicking the fast-forwarding effect of a cassette tape.

To enhance user interaction, we incorporated additional functionalities. Pressing the designated button would seamlessly switch between songs in the playlist. To provide a visual cue for song changes, an LED was integrated into the system. Whenever the user switched songs, the LED would illuminate briefly, acknowledging the user’s input.

The RGB LED added a captivating layer of visual flair to the project. This versatile LED, capable of displaying a spectrum of colors, was linked to the songIndex variable. As the user cycled through the song playlist, the RGB LED would change color, reflecting the currently playing song. This color association wasn’t random – it drew inspiration from the source material of each song. For instance, the vibrant hues of green, purple, yellow, and orange adorned the LED when playing the Harry Potter theme, a subtle nod to the four Hogwarts houses.

Faced with an initial hurdle, we pivoted our approach and ultimately delivered a unique and engaging audio experience. The project seamlessly blends pre-loaded songs, dynamic speed control based on sensor input, intuitive user interaction, and a captivating visual element through the RGB LED. 

The Chroma Cassette might be an interesting blend of sound and light, but there’s always space to make it even better. For instance, enhancing the audio quality, especially at faster playback speeds, could be a priority. Techniques like utilizing digital signal processing libraries on the Arduino might help reduce pitch shifting and distortion. Imagine users having the ability to upload their own personal soundtracks! This could be achieved by incorporating an SD card or a Bluetooth module, significantly expanding the song library and personalizing the experience for each user. The distance sensor integration could be taken a step further. By using a more advanced sensor with a wider range, users would have finer control over the playback speed across a larger distance. This refinement could create a more intuitive user experience. Another exciting possibility is an interactive light show. The RGB LED could be programmed to react to the music’s rhythm and melody, creating a dynamic visual spectacle that complements the audio. This would undoubtedly add a whole new captivating dimension to the user experience.





For the serial output:


Code Snippet: (Too long for whole code)

// Function to play a song
void playSong(int *melody, int melodyLength) {
  // Check if the speaker is turned off
  if (digitalRead(SPEAKER_SWITCH_PIN) == LOW) {
    Serial.println("Speaker is turned off.");
    return; // Exit the function if the speaker is turned off
  }

  // Iterate through each note in the melody
  for (int noteIndex = 0; noteIndex < melodyLength; noteIndex += 2) {
    float distance = getDistance();  // Update distance with each note

    // Adjust speedFactor based on the song and distance
    float speedFactor;
    if (currentSong == 1) {
      // Slower scaling for Imperial March
      speedFactor = 1.1 + (distance / 30.0); // slows down the max speed
    } else if (currentSong == 2) {
      speedFactor = 0.6 + (distance / 30.0);
    } else if (currentSong == 3) {
      speedFactor = 0.4 + (distance / 30.0);
    } else {
      speedFactor = 1.2 + (distance / 30.0);
    }

    // Calculate the note duration based on the speed factor
    int noteDuration = (int)(1000 / melody[noteIndex + 1] * speedFactor);

    // Stop early if the song should be switched or the speaker is turned off
    if (switchSong || digitalRead(SPEAKER_SWITCH_PIN) == LOW) {
      noTone(SPEAKER_PIN); // Stop tone when switching off
      return;
    }

    // Play the note
    tone(SPEAKER_PIN, melody[noteIndex], noteDuration);
    setRGBColor(currentSong, melody[noteIndex]);
    delay(noteDuration * 1.30);

    Serial.print("Playing note: ");
    Serial.print(melody[noteIndex]);
    Serial.print(" at duration: ");
    Serial.print(noteDuration);
    Serial.print(", distance: ");
    Serial.print(distance);
    Serial.println(" cm");
  }
}



Final Project Proposal Dachi


For my Final Project, I don’t have a concrete idea yet that I have committed to, but I know the general area I want to work in, as well as some potential projects.

I want to explore ML5.js, which is a machine learning library designed for the web. In short, it utilizes the graphical power of the browser to do machine-learning calculations. It works well with P5.js, which is why I want to utilize it. It’s an open-source, beginner-friendly library providing a high-level interface to TensorFlow. As someone with no machine learning experience, it will be very interesting to explore it in depth. It has some pre-trained models available that work for image classification and recognition. For example, the HandPose model would work for detecting hands and triggering some action with them.

The initial concept is to create a canvas where you can draw with your hands in P5js with the help of HandPose. I could utilize Arduino to add additional functionality, for example by putting sensors in a cardboard enclosure to act as brush settings, with various sensors changing various brush properties. A cutout for the ultrasonic sensor could potentially change the size of the brush as you move your finger. Ideally, I would like to make the interface very seamless.

I could expand this idea to let users create generative art by varying different mathematical properties using their hands; it does not have to be limited to just drawing.


I would face many challenges going with this approach as I have no prior experience. Moreover, I would have to combine the machine learning aspect with Arduino to create a seamless experience. My plan to overcome these challenges is to start learning as early as possible. This includes going through the library, learning the basics and finalizing my plan which I can stick with and dedicate much more time to. It should be challenging, yet achievable in the timeframe we are given. I am optimistic that this project will not only be very beneficial by learning lots of new things but will also be quite fun!

Dachi Reading Response – Week 11

The first article, “A Brief Rant on the Future of Interaction Design,” argues that our current touchscreen interfaces are missing out on the rich tactile feedback and 3D manipulation that our hands are capable of. The author thinks future interfaces should engage our bodies more fully, like how we interact with the physical world.

The second article is a follow-up where the author addresses some common responses to his original rant. He clarifies that he’s not against touchscreens entirely, but rather sees them as a stepping stone to even more immersive and embodied forms of interaction. He also touches on why he’s skeptical of voice commands and gesture controls as complete replacements for physical interfaces.

Putting the two articles together, it seems like the core idea is that our technology should evolve to better adapt to our human capabilities, especially when it comes to how we use our hands to explore and manipulate objects. The author is calling for a future where digital interactions feel just as tangible and expressive as their real-world counterparts.

I actually agree with this vision, even if the exact path to get there is still unclear. I think there’s a lot of room for interfaces to become more tactile and responsive to our natural ways of interacting. At the same time, I wouldn’t want to completely abandon the benefits of touchscreens, like customization. In the real world, there are many examples where people prefer tactility. For example, lots of companies have been trying to introduce laptops with glass keyboards. This would undoubtedly make laptops thinner and add the option of a second screen, but the majority of users, myself included, would hate the typing experience, because there is something much more satisfying about physical keystrokes. (This is the main reason people get into expensive mechanical keyboards; I might be one of those people.)

On the other hand, despite BlackBerry’s popularity in the past, physical keyboards on phones have become a thing of the past, as the benefits are simply not worth the major compromises: bulkiness, less space for other components, visuals, and so on. Plus, people treat phones mainly as texting machines; when it comes to serious typing, it’s on big screens where the real work happens.

The articles also raised some good points about the limitations of relying too heavily on voice commands or free-air gestures. While those modalities can be useful in certain contexts, they lack the physical feedback and precision of direct manipulation. So it seems like the ideal interface would offer a mix of input methods that complement each other and cater to different situations.

As someone who spends a lot of time interacting with screens, it’s an exciting prospect to imagine a future where the digital world is combined with a world of haptics to deliver an immersive experience, something akin to Ready Player One.

Week 10 Response Dachi

The articles “Physical Computing’s Greatest Hits (and Misses)” and “Making Interactive Art: Set the Stage, Then Shut Up and Listen” provide many insights and advice for artists and designers who want to create interactive physical computing projects and installations.
In “Physical Computing’s Greatest Hits (and Misses)”, the author reviews some of the most common themes and project ideas that take place in physical computing classes. What stood out to me is how he encourages students not to be discouraged if their ideas have been done before. As he points out, these themes are popular precisely because they offer room for originality and surprising variations. Even if the core interaction (e.g. theremin-like instruments, gloves, floor pads) has been explored, there are always opportunities to put new spins on it through the specific form, aesthetics, narrative, and context.
His breakdown of different project types made me reflect on my own tendencies and biases when brainstorming IM concepts. I tend to gravitate toward projects whose creative and interesting aspects are most visible. I don’t want to just satisfy assignment requirements, but to build the project with a certain goal in mind, so it has a particular effect on the user.
The “Making Interactive Art: Set the Stage, Then Shut Up and Listen” article offers advice that somewhat conflicts with my instincts as a designer. The author advises interactive artists to avoid pre-scripting the participant’s experience and interpretations: let the audience discover meaning for themselves through open-ended interaction with the work.
This is quite different from other design contexts like UX, where the goal is often to carefully guide the user through a curated experience towards a specific goal. But with interactive art, you get more authentic audience engagement and emotional resonance by leaving room for ambiguity, surprise, and personal interpretation. It requires the artist to give up some control, which can feel uncomfortable, but the end result is a richer dialog between the artist and the audience. For example, when I was designing my midterm project, I left players with a choice (e.g. pick up the ring or not) so players could act accordingly. For Communications Lab projects, we also try to design concepts that actively consider user input (e.g. an alternating storyline and so on). I still feel like some level of prediction needs to happen – nothing is a truly open-world experience, as we don’t have enough resources or computing power for that – but we can try to get as close as possible.
The author’s advice to “shut up and listen” – to pay close attention to how people interact with and react to the work – also struck me as highly relevant to interaction design in general, not just artistic contexts. We can learn so much by observing where our creations fail, so we can improve them in the future. Therefore, staying open to that feedback is essential for refining the work. This is something I did for my midterm as well, by asking my friends to play the game and provide constructive criticism.
In conclusion, these two articles provide many thought-provoking ideas to anyone trying to overcome the challenges of creating meaningful interactive experiences. While some of the advice may be particular to media art contexts, much of it feels widely applicable to interaction design as a whole. These are things I already take into account in my assignment, and now I feel even more aware moving further into complex projects.

Multidetector – Week 10 – Dachi Tarughishvili


For this assignment I really wanted to use the ultrasonic sensor and experiment with its detection capabilities, hence the name of the project – multi-detector. I wanted to have three separate distance bands – near, medium, far – with different activations (for example, different lights or sounds). To satisfy the project’s requirements (analog input/output + digital input/output), I decided to implement other elements, since the ultrasonic sensor can be considered both. This let me expand the project and make it more interactive. I decided to add an RGB LED as an analog output, which is itself driven by the ultrasonic sensor readings. The digital output LED in this case is the status LED, which is activated by a digital-input slider switch. The status is also indicated by a short sound from the second piezo speaker. The analog input is the potentiometer, which controls the pitch of the buzzer. As you can see, we have more than one digital/analog input and output, and making sure they all work well together in a sensible manner was the main goal of this project.


The design is quite intuitive. The user can turn on the sensor, which is indicated by the blue LED briefly lighting up and a brief sound played by the second piezo speaker. Then the user can point the ultrasonic sensor at different objects. The RGB LED will light up in different colors based on the distance to that object. For far distances I am using green light; for medium, yellow; and for close, red. I decided to use red for close, along with a buzzing sound, to demonstrate a state of emergency, something that is often used in real-life applications. Providing both a striking visual stimulus (the red color) and an audio signal (a high-pitched sound) alerts the user of proximity in an urgent manner. This might be useful for larger-scale projects where detection of a foreign object has to be brought to the user’s attention at all costs.

I used Tinkercad for simulating my project online and as seen on the video below, it works perfectly fine: (I tried to rearrange objects for best visibility possible).


I am also including a diagram from the Tinkercad conversion, which did not look fully proper, with incomplete connections and questionable alignments; hence I used the Circuit Diagram Web Editor to draw the design myself from scratch (the ultrasonic sensor is in the center since it has both inputs and outputs). For future assignments I will only use whichever is most appropriate based on your feedback.


// pin connections
const int trigPin = 11;               // Pin connected to the trigger pin on the ultrasonic sensor
const int echoPin = 12;               // Pin connected to the echo pin on the ultrasonic sensor
const int redPin = 3;                 // Pin to control the red LED inside the RGB LED
const int greenPin = 5;               // Pin to control the green LED inside the RGB LED
const int bluePin = 6;                // Pin to control the blue LED inside the RGB LED
const int piezoPin = 9;               // Pin for the first piezo speaker (used for siren)
const int potPin = A0;                // Analog pin connected to the potentiometer
const int switchPin = 2;              // Digital input pin for the switch
const int secondPiezoPin = 7;         // Pin for the second piezo speaker (used for feedback on switch toggle)

float distance = 0;                   // to store the measured distance
bool lastSwitchState = HIGH;          // to track the last state of the switch

void setup() {
  // Set pin modes
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
  pinMode(piezoPin, OUTPUT);
  pinMode(secondPiezoPin, OUTPUT);
  pinMode(potPin, INPUT);
  pinMode(switchPin, INPUT_PULLUP);   // internal pull-up resistor for the switch

  Serial.begin(9600);                 // start serial so the distance can be printed
}

void loop() {
  bool currentSwitchState = digitalRead(switchPin); // read the current state of the switch

  // check if the switch state has changed
  if (currentSwitchState != lastSwitchState) {
    if (currentSwitchState == LOW) {
      // play a brief sound on the second piezo speaker when the switch is toggled
      tone(secondPiezoPin, 1000, 200); //  a 1000 Hz tone for 200 milliseconds
    }
    lastSwitchState = currentSwitchState; // Update the last known state of the switch
  }

  // actions when the switch is active
  if (currentSwitchState == LOW) {
    distance = getDistance(); //  distance - ultrasonic sensor

    Serial.print(distance);
    Serial.println(" in");

    //  RGB LED control
    if (distance <= 10) {
      analogWrite(redPin, 255);       // Close distance - turn RGB LED red
      analogWrite(greenPin, 0);
      analogWrite(bluePin, 0);
      playSiren();                    // siren on 
    } else if (distance > 10 && distance < 20) {
      analogWrite(redPin, 255);       // Medium distance - turn RGB LED yellow
      analogWrite(greenPin, 50);
      analogWrite(bluePin, 0);
      noTone(piezoPin);               // Stop siren
    } else {
      analogWrite(redPin, 0);         // Far distance - turn RGB LED green
      analogWrite(greenPin, 255);
      analogWrite(bluePin, 0);
      noTone(piezoPin);               // Stop siren
    }
  } else {
    // Turn off all outputs when the switch is not active
    analogWrite(redPin, 0);
    analogWrite(greenPin, 0);
    analogWrite(bluePin, 0);
    noTone(piezoPin);
  }

  delay(50);  // Short delay to stabilize readings
}

// Function to measure distance using ultrasonic sensor
float getDistance() {
  // Send a clean 10-microsecond trigger pulse
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  float echoTime = pulseIn(echoPin, HIGH);
  return echoTime / 148.0;  // Convert round-trip time (microseconds) to inches
}

// control siren based on distance
void playSiren() {
    int potValue = analogRead(potPin);  // read the potentiometer value
    int volume = map(potValue, 0, 1023, 0, 255);  // Map it to PWM range
    analogWrite(piezoPin, volume);  // control volume
}


In the setup() function, each component is initialized to its respective pin mode, ranging from outputs for the LEDs and piezo speakers to inputs for the ultrasonic sensor and potentiometer. The main loop() continuously checks the state of a digital switch to control the overall system operation, toggling functionality and triggering a brief alert sound from a secondary piezo speaker upon state changes. Distance is measured using the ultrasonic sensor, with the results dictating the color output of the RGB LED and the activation of a siren on the primary piezo speaker, whose volume is adjusted by a potentiometer. Everything else I already discussed in the introduction and commented in the code. As you can see, it is quite extensive, so hopefully it answers all your questions.



While integrating the various components in this project, I faced some interesting challenges. Managing the piezo speaker’s volume with analogWrite() proved to be tricky because the function doesn’t directly control voltage output but instead modulates the PWM signal, which isn’t ideal for driving piezo speakers. I followed a tutorial for setting up the ultrasonic sensor from SparkFun itself, so that was fairly straightforward. I tried my best to manage the colors of the cables properly, but eventually I ran out of black wires, so I used yellow as a substitute. Redrawing the schematic was not as challenging as it was time-consuming. In the future, I would like to find a faster way to do this while satisfying project requirements.


All in all, I am quite happy with how my project turned out. I think what I learned through this project might be very useful for my future assignments and final project since I learned more about digital/analog inputs outputs as presented in Arduino, as well as reading/creating schematics and testing out project both in simulation and in real world. This project could easily be expanded by incorporating more components and making it part of something bigger since its sole responsibility for now is just detection.

Week 10: Switch – Dachi Tarughishvili


At first I wanted to create a switch that would detect when someone opened a journal or a personal diary, but upon finding out that we are not allowed to use hands, I changed my entire idea. I thought it would be most interesting to use a material that we do not typically associate with a conductive plate. After all, we are all taught how dangerous it is to play around electronics and water due to its conductive properties (poor, but better than air’s). Surely we could use it for something good. One such application could be a very small sensor near a cup, specifically designed for blind individuals, to detect when their cup is getting full. This project is just a physically bigger version of such a product, involving the following materials:


  • Piezoelectric speaker
  • Jumper wires
  • Alligator clips
  • Arduino Board
  • Power Bank
  • Cup
  • Water


I decided to avoid using a resistor at all, since the audio produced by the speaker wasn’t audible enough and would hardly activate when water touched the clips, due to the reduced voltage. I also decided to mix salt into the water to improve conductivity and maximize the sound output. Realistically, if one were to turn this into a real product, we would use cup-integrated batteries, soldered electronics, a sensor built into the cup, and any liquid substance. Even if conductivity is not terribly high, we could use a dedicated amplifier or a simple transistor.

At first I connected the board to the power bank. I connected jumper wires to the board and alligator clips to the jumper wires. The piezoelectric speaker is part of the circuit and is mounted directly on the breadboard. The jumper wires leaving the piezoelectric speaker are connected to alligator clips, which are then attached to the brim of the cup. When water is poured into the cup, at some point it touches the alligator clips and the speaker makes a highly audible buzz. This happens just when the water is about to spill over, which provides the practical aspect of this project.



The idea of not using hands at all is a challenging one. I even thought about the aspect of indirect hand involvement. However, since it’s the rising water level that acts as the conductive plate and not a hand movement, I think conceptually it works just as well. The execution is pretty straightforward, with a very simple design, yet it tries to solve a realistic problem with minimum resources. In the future, this project could be expanded into an actual product, with more features, while trying to seamlessly integrate the design into the cup.