Final Project – Signify

CONCEPT

My final project is a Sign Language glove that translates American Sign Language (ASL) to English and vice versa. The aim is to facilitate communication and improve accessibility for individuals who are deaf or hard of hearing. This is an idea I have had for years, but I finally have the technical skills to implement it. My motivation comes from my aim to break down the barriers that people with disabilities face in society. Unfortunately, sign language is not a common skill among hearing people, and while some people with hearing impairment can lipread, for most of them sign language is their first language.

This interactive system enables individuals who use sign language to communicate effectively, in both directions, with people who do not sign. The user wearing the glove can fingerspell words using the American Sign Language alphabet, and the program vocalizes the word to assist sign language users with speech. Conversely, a hearing person can type a word into the program, which displays the sign for each letter so the sign language user can interpret it.

IMPLEMENTATION

p5 sketch full screen mode: https://editor.p5js.org/aneekap/full/ZHrr0suY-

The glove incorporates a flex sensor on each finger, which detects how much that finger is bent. The Arduino reads this data and sends the finger configuration to the p5.js sketch.

//fingers
int flexPin1 = A1; 
int flexPin2 = A2; 
int flexPin3 = A3;
int flexPin4 = A4; 
int flexPin5 = A5;

void setup() {
  // Start serial communication so we can send data
  // over the USB connection to our p5js sketch
  Serial.begin(9600);
}

void loop() {
  // Read flex sensor values
  int pinky = analogRead(flexPin1);
  int ring = analogRead(flexPin2);
  int middle = analogRead(flexPin3);
  int index = analogRead(flexPin4);
  int thumb = analogRead(flexPin5);

  // Send flex sensor values to p5.js
  Serial.print(pinky);
  Serial.print(",");
  Serial.print(ring);
  Serial.print(",");
  Serial.print(middle);
  Serial.print(",");
  Serial.print(index);
  Serial.print(",");
  Serial.print(thumb);
  Serial.println(); 

  delay(100); 
}
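
On the p5.js side, this comma-separated line has to be split back into the five finger readings before they can be checked against the letter ranges. Below is a minimal sketch of that parsing step, assuming the readSerial(data) callback convention from the class serial template (the variable names match the ones used in signRecognition() further down, though the actual sketch may differ slightly):

// Latest reading for each finger, updated every time a line arrives
let pinky = 0, ring = 0, middle = 0, index = 0, thumb = 0;

// Called for every complete line received from the Arduino
function readSerial(data) {
  if (data != null) {
    // The Arduino sends "pinky,ring,middle,index,thumb"
    let values = split(trim(data), ",");
    if (values.length === 5) {
      pinky  = int(values[0]);
      ring   = int(values[1]);
      middle = int(values[2]);
      index  = int(values[3]);
      thumb  = int(values[4]);
    }
  }
}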

The p5.js sketch interprets the gestures to recognize the corresponding letters of the alphabet. This is done by the signRecognition() function below, which checks whether each flex sensor value falls within the range recorded for a given letter.

function signRecognition() {
  //letter recognition
  if ((120<pinky && pinky<200) && (90<ring && ring<400) && (160<middle && middle<400) && (100<index && index<300) && (240<thumb && thumb<280)) {
      text('a', 102, 255); 
      letter = 'a';
    } 
  else if ((230<pinky && pinky<255) && (0<=ring && ring<50) && (0<=middle && middle<50) && (0<=index && index<50) && (175<thumb && thumb<250)) {
      text('b', 102, 255); 
      letter = 'b';
    } 
  
  else if ((220<pinky && pinky<250) && (0<=ring && ring<100) && (0<=middle && middle<100) && (30<index && index<190) && (220<thumb && thumb<270)) {
      text('f', 102, 255);
      letter = 'f';
    }
  else if ((130<pinky && pinky<250) && (100<ring && ring<270) && (135<middle && middle<280) && (index==0) && (250<thumb && thumb<283)) {
      text('g', 102, 255); 
      letter = 'g';
    }
  else if ((205<pinky && pinky<245) && (70<ring && ring<280) && (80<middle && middle<220) && (70<index && index<240) && (210<thumb && thumb<265)) {
      text('i', 102, 255); 
      letter = 'i';
    }
  else if ((120<pinky && pinky<210) && (60<ring && ring<330) && (50<middle && middle<300) && (30<index && index<300) && (190<thumb && thumb<240)) {
      text('m', 102, 255); 
      letter = 'm';
    }
  else if ((150<pinky && pinky<220) && (0<=ring && ring<100) && (0<=middle && middle<110) && (0<=index && index<50) && (220<thumb && thumb<250)) {
      text('o', 102, 255); 
      letter = 'o';
    }
  else if ((135<pinky && pinky<220) && (80<ring && ring<220) && (0<=middle && middle<20) && (0<=index && index<50) && (230<thumb && thumb<290)) {
      text('p', 102, 255); 
      letter = 'p';
    }
  else if ((170<pinky && pinky<200) && (20<ring && ring<220) && (0<=middle && middle<190) && (0<=index && index<100) && (195<thumb && thumb<260)) {
      text('u', 102, 255); 
      letter = 'u';
    }
  else {
      text('-', 102, 255); // Display '-' if no specific configuration is matched
      letter = ' ';
    }
  
}

It is limited to only 9 letters for now. I did implement a few more letters but later removed them to avoid clashes between the letter ranges. The reason is that many ASL signs have very similar finger configurations, and I would need additional or more accurate sensors to implement all 26 letters.

There are two options the user can select from: translating ASL to English and translating English to ASL. For the first program, the user spells out a word by making the sign for each letter and pressing the right arrow to confirm it and move to the next position. Backspace removes the last letter if you make a mistake, and confirming with no recognized sign adds a space. This is handled by the keyPressed() function.

function keyPressed() {
  if (key == " ") {
    setUpSerial();
  }
  
  if (keyCode === ENTER) {
    if (page === 1) {
      page = 2;
    } else if (page === 2) {
      page = 3;
    } else if (page === 4) {
      finalizeWord();
      // page = 3; // Go back to options page
    }
  } else if (keyCode === BACKSPACE && page === 4) {
      Word = Word.substring(0, Word.length - 1);
  } else if (keyCode === RIGHT_ARROW && page === 4) {
      Word += letter;
  } else if (keyCode === LEFT_ARROW && (page === 4 || page === 5)) {
      page = 3; // Go back to options page
      Word = '';
  }
  
  if (keyCode >= 65 && keyCode <= 90) { // Check if the pressed key is a letter
    enteredWord += key.toLowerCase(); // Add the lowercase letter to the entered word
  } else if (keyCode === BACKSPACE) { // Handle backspace key
    enteredWord = enteredWord.slice(0, -1); // Remove the last character from the entered word
  }
}

The p5.js sketch then reads the word aloud using text-to-speech, via the SpeechSynthesis interface, which is part of the Web Speech API.
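
To give a concrete sense of this, the finalizeWord() call referenced in keyPressed() could speak the word along the lines of the simplified sketch below (the real function may also reset the word or update the page; this is only an illustration of the SpeechSynthesis call):

// Simplified sketch: speak the spelled word using the Web Speech API
function finalizeWord() {
  let utterance = new SpeechSynthesisUtterance(Word); // Word is built up letter by letter in keyPressed()
  utterance.lang = 'en-US'; // speak in English
  utterance.rate = 0.9;     // slightly slower rate for clarity
  window.speechSynthesis.speak(utterance);
}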

For the second program, the user types a word on the keyboard, and the sketch displays the corresponding ASL sign for each letter on the screen below the word.

function translateEnglishPage() {  
  image(eng, 0, 0, width, height);
  text(enteredWord, width/2 - 120, height/2+5); 

  // Check each letter of the entered word and display the corresponding sign
  let startX = width/2 - 130; 
  let startY = height/2 - 70; 
  let letterSpacing = 35; // Spacing between images
  for (let imgIndex = 0; imgIndex < enteredWord.length; imgIndex++) {
    let currentLetter = enteredWord.charAt(imgIndex).toLowerCase(); 
    //calculate position of image based on letter
    let imageX = startX + imgIndex * letterSpacing; 
    let imageY = startY+120;
    
    // Display the image corresponding to the current letter
    if (currentLetter === 'a') {
      image(sign_a, imageX, imageY, 35, 50);
    }
    // ... and so on for each letter
  }
}

USER TESTING

User testing was helpful but also a bit worrying. The gesture configurations were calibrated to my hand and fingers, and I later noticed that the glove did not work quite the same on other people’s hands. I therefore had to make the ranges less strict to accommodate other hand shapes. However, loosening the ranges caused new issues, such as introducing clashes between the letters.

CHALLENGES AND IMPROVEMENTS

The main challenge was calculating the gesture configurations one by one. The flex sensors are quite sensitive and tend to give inconsistent values. I am also using two types of sensors, three thin-film pressure sensors and two short flex sensors, so I had to calibrate them differently as well. On top of that, one of my flex sensors stopped working midway and my project came to a halt. Thankfully, the Professor came to the rescue and promptly bought a new flex sensor for me. Soldering and arranging the wires was also a hassle, but I finally got them to look neat.

I am proud of coming up with the idea in the first place. I wanted to create something that was unique and something I am passionate about. I am also proud of sticking to it despite the challenges and making it as accurate as possible.

There is a lot to improve, and I started this as a prototype for a long-term project. One major issue is that some finger configurations are so similar that the program mixes up the letters, and I could not implement the entire alphabet. I could add an accelerometer to detect hand movement as well, or alternatively try ml5.js for more accurate gesture recognition. I also hope to get it to work for entire words. I aim to one day create a fully functional, portable sign language glove.

IM SHOWCASE

I made a few changes before I presented my project at the showcase: I recalibrated the ranges for the letters so it works more smoothly, removed a few letters on the Professor’s advice to reduce clashes between letters, and improved the UI.

During the IM show, when a few people tried on my glove, the tape and wires started coming off, and I had to run back to the IM lab to fix it. Moreover, most of the letters were not working for them, since the calibration was still only optimal for my hand: the bending of the flex sensors varies a lot between different hand shapes and sizes. After that point I unfortunately had to resort to giving a demonstration myself, and instead challenged visitors to give me a word to spell using the working letters.

Nevertheless, I had a fun time at the showcase presenting my project and engaging with other people’s projects. I also thoroughly enjoyed taking this course overall and using my creativity and technical skills to come up with projects every week.

 

Final Project Proposal (week 12)

For my final project, I am creating a Sign Language Glove, aimed at facilitating communication and improving accessibility for individuals who are deaf or hard of hearing. It is limited to fingerspelling words using the American Sign Language alphabet for now. The glove incorporates a flex sensor on each finger, which detects how much that finger is bent. The Arduino will process this data and send the finger configuration to a p5.js sketch, which will interpret the gestures and recognize the corresponding letters of the alphabet. The p5.js screen will display the recognized letters visually and read them aloud using text-to-speech.

There will be two options the user can select from: translating ASL to English and translating English to ASL. For the first program, the user spells out a word using the sign for each letter and p5 will read it aloud. For the second program, users will have the option to input a word via keyboard to display the corresponding ASL sign for each letter on the screen. This interactive system enables individuals who use sign language to have effective two-way communication with non-sign language users.

Week 12: Serial Communication (Aneeka + Amiteash)

Serial Exercise 1

For the first exercise, we used a flex sensor in a voltage divider circuit as the analog input on pin A0 to control the x-axis motion of an ellipse on the screen. The more the flex sensor is bent, the higher its resistance.

Schematic:

p5 Sketch:

Serial Exercise 2

For the second exercise, we represented the position of an ellipse along the x-axis, controlled by mouseX, as the brightness of two LEDs on pins 9 and 11. The green LED increases in brightness as the ellipse moves to the right, while the red LED increases in brightness as the ellipse moves to the left.

Schematic:

p5 Sketch:

Serial Exercise 3

Here we edited Prof. Aaron Sherwood’s GravityWind example in p5 to connect to the Arduino through serial communication. The potentiometer on pin A0 controls wind speed and direction depending on how far it is turned (0 to 1023 mapped to -5 to 5), and the LEDs on pins 9 and 11 blink alternately whenever the ball crosses the bottom of the screen.

Schematic:

p5 sketch:

Code (for all three)

On the Arduino side, we used common code for all three exercises, mainly to prevent errors from continuously re-uploading. The setup() function set up serial communication and defined the behavior while a connection had yet to be established. The loop() function was as follows:

void loop() {
  // wait for data from p5 before doing something
  while (Serial.available()) {
    digitalWrite(LED_BUILTIN, HIGH); // led on while receiving data
    int left = Serial.parseInt();
    int right = Serial.parseInt();
    if (Serial.read() == '\n') {
      digitalWrite(leftLedPin, left);
      digitalWrite(rightLedPin, right);
      int sensor = analogRead(potPin);
      delay(5);
      Serial.println(sensor);
    }
  }
  digitalWrite(LED_BUILTIN, LOW);
}

On the p5 side, our code didn’t differ greatly from the serial communication template. The main differences were in mapping the analog input to the range needed by the sketch, the canvas width for exercise 1 and the maximum viable wind speed for exercise 3, and in assigning variables called ‘left’ and ‘right’ to track the LED states that change with the sketch.
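
As an illustration, the core of that mapping could look something like the excerpt below (a rough sketch based on the readSerial()/writeSerial() helpers from the class template; ellipseX is a placeholder name, while wind, left, and right are the variables mentioned above):

// Rough sketch of the p5 side: map the incoming reading and send the LED states back
function readSerial(data) {
  if (data != null) {
    let sensor = int(trim(data)); // 0-1023 from the Arduino

    // Exercise 1: sensor value -> x position of the ellipse
    ellipseX = map(sensor, 0, 1023, 0, width);

    // Exercise 3: sensor value -> wind force in the GravityWind sketch
    wind.x = map(sensor, 0, 1023, -5, 5);

    // Reply with the LED states, matching the two Serial.parseInt() calls on the Arduino
    writeSerial(left + "," + right + "\n");
  }
}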

One piece of code we were particularly proud of in Exercise 3 was how we alternated the blinking of the LEDs depending on the number of times the ball had bounced.

if (position.y > height - mass / 2) {
  velocity.y *= -0.9; // A little dampening when hitting the bottom
  position.y = height - mass / 2;
  left = (left + 1) % 2;
  if (left == 0) {
    right = 1;
  } else {
    right = 0;
  }
}

The code for assigning the value of the right LED may look needlessly complicated, since it seems both LEDs could simply use (x + 1) % 2, with one initialized to 0 and the other to 1 at the start. However, our method allows both LEDs to stay dark before the first bounce of each ball (which can be seen in the demo if slowed down), which we felt was key to the prompt.

Demo (for all three)

Reading Response 9 (Week 11)

“Design meets Disability” was an eye-opening exploration of the complex interplay between design, disability, and dignity. The notion that assistive devices can be both functional and stylish is a powerful one, breaking away from the traditional paradigm that prioritized hiding disabilities rather than celebrating them. As someone passionate about disability activism, I have made it my personal mission to make society more accessible for people with disabilities. This reading thus prompted me to reconsider the traditional approaches to designing assistive technologies and encouraged me to explore more inclusive and culturally relevant design solutions. It also helped me ideate my final IM project, a sign language glove; in particular, its emphasis on simplicity and cognitive accessibility reminded me of the importance of intuitive design in facilitating meaningful interactions and experiences for users with disabilities.

 

The discussion on the evolution of eyewear from medical necessity to fashion accessory was also intriguing, illustrating how incorporating fashion culture into design can lead to more positive perceptions of disability. I have used spectacles since the age of 5. I currently have high myopia (-11! yeah I know…). But I have never been insecure about wearing glasses, nor have I ever considered laser eye surgery. Looking back, one of the reasons is that I enjoyed selecting new spectacles to wear every few years. I knew myopia was not something I could hide or change, so I coped with it by making it my style. Similarly, embracing fashion culture and aesthetics can transform the perception of other assistive devices, empowering individuals with disabilities to embrace their uniqueness rather than hide it. Overall, this reading reinforced my commitment to creating inclusive and empowering solutions that celebrate diversity and promote social inclusion.

Final Project Proposal (Week 11)

I have three ideas for my final project, each emerging from my interdisciplinary interests.

Related link: “Encouraging inclusivity with an interactive tool for learning the ASL alphabet” by Sarah Pawlett

The first idea I would like to propose is creating a Sign Language Glove, aimed at facilitating communication and improving accessibility for individuals who are deaf or hard of hearing. I shall limit it to fingerspelling using the alphabet for now. The glove will incorporate flex sensors on each finger to detect bending movements. Arduino will process this data and send the finger configurations to a p5.js sketch, which will interpret the gestures and recognize the corresponding letters of the alphabet.

The p5.js screen will display the recognized letters visually and read them aloud using text-to-speech. Additionally, users will have the option to input letters via a keyboard to display the corresponding sign for each letter on the screen. This interactive system enables individuals who use sign language to have effective two-way communication with non-sign language users.

ASL alphabet chart:

I initially thought of using American Sign Language (ASL), but the issue is that a lot of the signs have the same finger positions, and it would be difficult to differentiate between them.

Indian Sign Language alphabet chart:

An alternative is Indian Sign Language, which uses two hands and can overcome the above issue. However, this adds the complexity of checking ten finger configurations.

 

My second idea is conducting a psychology experiment that uses p5.js for the visual presentation of stimuli and Arduino for participant response collection. I aim to design either perception experiments, such as visual search tasks in which participants search for a target stimulus among distractors, or cognition experiments, such as memory tasks in which participants memorize and recall sequences of stimuli, or face recognition tasks in which participants identify familiar faces. In these experiments, the p5.js sketch will display the visual stimuli, while Arduino buttons will serve as response inputs.

Example of a visual search task:

For example, in the visual search task, the p5.js screen will display each trial and participants will use buttons connected to the Arduino board to indicate when they have found the target stimulus. The Arduino will record response times and accuracy.

At the end of the experiment session, participants will be able to view their performance metrics and compare them to group averages or previous trials. This setup allows for the seamless integration of psychological experimentation with interactive technology, facilitating data collection and analysis in a user-friendly manner.

 

For my third project idea, I propose creating an interactive system that generates music from art! The user will be able to draw on the p5.js canvas, creating their own unique artwork. The system will then analyze this artwork pixel by pixel, extracting the RGB values of each pixel. These RGB values will be averaged to create a single value for each pixel, which will then be mapped to a musical note. Additionally, the system will detect sharp changes in color intensity between adjacent pixels, indicating transitions in the artwork. These transitions will determine the length of each note, with sharper changes resulting in shorter notes. The coordinates of each drawn point can also influence the tempo or volume of the music. Once the music composition is generated in p5.js, it will be sent to the Arduino, where a piezo buzzer will play it in real time. This interactive system lets users create their own art and music.
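
Since this is only a proposal, the sketch below is just a rough illustration of how the pixel-to-note mapping could work in p5.js (canvasToNotes() and the note range are placeholders of my own, not an actual implementation):

// Rough sketch: sample the canvas, average each sampled pixel's RGB values,
// and map the average to a MIDI-style note number
function canvasToNotes(step) {
  let notes = [];
  for (let x = 0; x < width; x += step) {
    for (let y = 0; y < height; y += step) {
      let c = get(x, y);                           // color of the sampled pixel
      let avg = (red(c) + green(c) + blue(c)) / 3; // average the RGB channels
      notes.push(round(map(avg, 0, 255, 48, 84))); // map to notes C3-C6, for example
    }
  }
  return notes;
}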

Week 11: Music Box

Introduction

For this assignment, Amiteash and I wanted to create our own recreation of a music box. Music boxes were a popular toy (or gramophone, depending on how you viewed them) in the past, and we even remember having a small old one back home in India. While they have taken many different forms, we based ours on a simple electronic one that plays music when you open the box.

Components
  • 1 Arduino Uno R3 SMD
  • 1 Photoresistor
  • Slideswitch
  • Arduino Piezo Buzzer
  • 10 kΩ Resistor
  • Jumper Wires
Circuit Schematic and Simulation

The first step was to prepare a circuit schematic on TinkerCAD. The digital input takes the form of a slide switch feeding into a digital input pin (pin 4) through a 10 kΩ pull-down resistor. The analog input comes from the photoresistor connected to pin A0, and the buzzer output is on pin 8.

Figure 1: Component Simulation View

Figure 2: Schematic View

TinkerCAD also has a handy simulation feature that lets you upload the Arduino code and test how the circuit would work under simulated conditions. This definitely helped in fixing bugs before testing the actual circuit, and also allowed us to simulate each component individually before assembling everything together.

Usage

Figure 3: The circuit

Basically, there are two main control points: opening/closing the box, and sliding the switch between ON/OFF.

When the box is opened, light falling on the photoresistor exceeds the threshold and thus triggers the playSong() function, which iterates over the notes in the pre-assigned song and plays it using the buzzer. When the box is closed, the light intensity falls below the threshold and the for loop breaks / is not triggered again, causing the music to stop playing.

When the switch is ‘ON’, pin 4 detects a ‘HIGH’ voltage and thus the Arduino plays song 1, which for this example we chose to be Toby Fox’s Megalovania from the game Undertale.

When the switch is ‘OFF’, pin 4 detects a ‘LOW’ voltage and thus the Arduino plays song 2, which for this example we chose to be Mikkel Fabricus Smitt’s Subway Surfers Main Theme from the game Subway Surfers.

#define LDR_PIN A0
#define SWITCH_PIN 4
#define BUZZER_PIN 8

#include "pitches.h"

int ldrThreshold = 500; 

void setup() {
  Serial.begin(9600); // needed for the Serial.println() debug output in loop()
  pinMode(LDR_PIN, INPUT);
  pinMode(SWITCH_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
}

void loop() {
  // Read the light level from the LDR
  int lightLevel = analogRead(LDR_PIN);
  Serial.println(lightLevel);

  // If the light level exceeds the threshold (box open), play a song
  if (lightLevel > ldrThreshold) {
    // The slide switch selects which of the two songs to play
    if (digitalRead(SWITCH_PIN) == LOW) {
      playSong(2);
      delay(10); // small delay to prevent rapid switching
    } else {
      playSong(1);
      delay(10);
    }
  } else {
    // Box closed: make sure the buzzer is silent
    noTone(BUZZER_PIN);
  }
}

void playSong(int songNumber) {
  // Define the melody and duration for each song
  int melody1[] = { NOTE_D4, NOTE_D4, NOTE_D5, NOTE_A4, 0, NOTE_GS4, NOTE_G4, NOTE_F4, NOTE_D4, NOTE_F4, NOTE_G4,
  NOTE_C4, NOTE_C4, NOTE_D5, NOTE_A4, 0, NOTE_GS4, NOTE_G4, NOTE_F4, NOTE_D4, NOTE_F4, NOTE_G4,
  NOTE_B3, NOTE_B3, NOTE_D5, NOTE_A4, 0, NOTE_GS4, NOTE_G4, NOTE_F4, NOTE_D4, NOTE_F4, NOTE_G4,
  NOTE_AS3, NOTE_AS3, NOTE_D5, NOTE_A4, 0, NOTE_GS4, NOTE_G4, NOTE_F4, NOTE_D4, NOTE_F4, NOTE_G4
   };
  int noteDurations1[] = { 8, 8, 4, 4, 8, 4, 4, 4, 8, 8, 8,
   8, 8, 4, 4, 8, 4, 4, 4, 8, 8, 8,
   8, 8, 4, 4, 8, 4, 4, 4, 8, 8, 8,
    8, 8, 4, 4, 8, 4, 4, 4, 8, 8, 8 
  };
  
  int melody2[] = {
  NOTE_C5, 0, NOTE_C5, 0, NOTE_AS4, 0, NOTE_A4, 0,
  NOTE_AS4, 0, NOTE_AS4, NOTE_C5, 0, NOTE_AS4, NOTE_A4, 0,
  0,
  NOTE_C5, 0, NOTE_AS4, 0, NOTE_A4, 0, NOTE_AS4, 0, NOTE_E5,
  0,

  NOTE_C5, 0, NOTE_C5, 0, NOTE_AS4, 0, NOTE_A4, 0,
  NOTE_AS4, 0, NOTE_AS4, NOTE_C5, 0, NOTE_AS4, NOTE_A4, 0,
  0,
  NOTE_C5, 0, NOTE_AS4, 0, NOTE_A4, 0, NOTE_AS4, 0, NOTE_E4,0
};

int noteDurations2[] = {
  8, 16, 8, 16, 8, 16, 8, 16,
  8, 16, 8, 8, 16, 8, 8, 16,
  4,
  8, 16, 8, 16, 8, 16, 8, 4, 8,
  4,

  8, 16, 8, 16, 8, 16, 8, 16,
  8, 16, 8, 8, 16, 8, 8, 16,
  4,
  8, 16, 8, 16, 8, 16,8,4,8,4
};

  // Select the melody and note durations based on the song number
  int *melody;
  int *noteDurations;
  int notes;

  if (songNumber == 1) {
    melody = melody1;
    noteDurations = noteDurations1;
    notes = sizeof(melody1) / sizeof(melody1[0]);
  } else {
    melody = melody2;
    noteDurations = noteDurations2;
    notes = sizeof(melody2) / sizeof(melody2[0]);
  }

  // Play the selected melody
  for (int i = 0; i < notes; i++) {
    //Read light level from LDR
    int lightLevel = analogRead(LDR_PIN);
    if(lightLevel > ldrThreshold){
      int duration = 1000 / noteDurations[i]; //duration in ms for each note 
      tone(BUZZER_PIN, melody[i], duration);
      delay(duration * 1.3); // Add a slight delay between notes for better sound quality
      noTone(BUZZER_PIN);
    } else{
      break; //to stop playing song immediately if box is closed
    }
  }
}

The note constants refer to the pitches.h header file from the toneMelody example in the Arduino IDE.

To switch between the two songs, we assigned the selected song’s data to pointers. Also, to ensure that the song stopped as soon as possible when the box was closed, we used an if statement inside the playback loop that breaks out of it, immediately stopping the music.

The notes and durations were obtained from this GitHub repository: https://github.com/hibit-dev/buzzer/tree/master. Since the repository did not have the notes for Megalovania, we transcribed that one manually from sheet music found online.

Showcase

Reflections

We both really enjoyed working on this project. For something relatively simple, it was still quite impressive. We were especially happy with the solution we found in using the break statement: earlier, the photoresistor was only read after the for loop had completed, so the song kept playing even when the box was closed, contrary to our expectations. Breaking the loop avoids that issue. Overall, it was fun listening to our favorite game theme tunes on loop!

Reading response 8 (week 11)

The article “A Brief Rant on the Future of Interaction Design” prompted me to reconsider the fundamental principles of interaction design. The “Pictures Under Glass” paradigm – our daily tapping and swiping on the screen – prioritizes visual feedback over tactile or physical interactions. Humans can do so much with their specialized appendages, yet this is all we do now. This reading thus prompted me to think about the potential limitations of current interaction models. It makes me wonder if we are settling for convenience over innovation and whether there are untapped (pun-intended) opportunities to create more immersive and intuitive interfaces.

In fact, current VR and AR technologies, coupled with advancements in computer vision and spatial computing, have the potential to realize many of the principles advocated by the author back in 2011. However, widespread adoption has been somewhat limited, and the reasons for this are multifaceted. One reason is the current limitations of hardware and software technology, such as the bulkiness of VR headsets or the limited field of view of AR glasses. Additionally, there may be practical challenges in integrating these technologies into everyday workflows or activities, as well as concerns about privacy, accessibility, and social acceptance. Furthermore, while VR and AR can offer compelling experiences for certain applications, they may not always be the most practical or necessary solution for every interaction scenario. In many cases, traditional interfaces like touchscreens or keyboards may be more efficient and effective for completing tasks, particularly in contexts where precision and speed are essential.

While the author’s arguments are compelling and align with my beliefs to some extent, I also recognize the need for balance between innovation and usability. Transitioning from fingertip-centric interfaces to more complex tactile and gestural inputs poses usability challenges and requires careful consideration of accessibility and inclusivity. Moreover, while tactile feedback and physical interaction are crucial, there are contexts where visual interfaces can complement and enhance user experiences.  However, overall the reading has challenged me to reconsider the dominance of touchscreen devices and to explore alternative interaction paradigms that better leverage the full range of human capabilities.

Reading Response 7

The article “Physical Computing’s Greatest Hits (and misses)” offers an exploration of various classes of physical computing projects and the dynamics between implementation and engagement. The concept of “Meditation Helpers” caught my attention as someone interested in cognitive psychology and mindfulness. While these devices aim to guide users into a meditative state by measuring physiological indicators like heart rate and breath rate, or stress through sweat gland activity, I wonder about their effectiveness. Can technology truly facilitate mindfulness, or does it risk reducing meditation to a quantifiable metric? The reading prompted me to consider the role of technology in spiritual practices and whether it enhances or detracts from the essence of meditation. Another example that I found to be very innovative was Younghyun Chung’s “Digital Wheel Art”. I have always raised the question of accessibility in interactive installations. However, his project demonstrates that by utilizing technology to track and interpret bodily movements, individuals who may have limited mobility or dexterity can also engage in interactive experiences with greater ease and independence. Thus, physical computing can be leveraged to create inclusive and accessible solutions for individuals with disabilities. Another example of this is sign language gloves, which attempt to convert the motions of sign language into written or spoken words, helping people who are deaf or hard of hearing communicate more easily with hearing people.

Furthermore, the article prompted me to reconsider the essence of interactive art and its intrinsic connection to human emotions and experiences. The idea that interactive art should evoke feelings and thoughts rather than merely showcase technological prowess struck a chord with me. It reminded me that the true essence of art lies in its ability to stir emotions and provoke introspection, fostering a collaborative relationship between the artist and the audience. Overall, the article sparked a renewed curiosity in exploring the delicate balance between technical innovation and human connection in interactive art.

 

The article “Making Interactive Art” resonated with me deeply, particularly its emphasis on allowing the audience to interpret and engage with the work independently. It reminded me of my own experiences with interactive art installations, where the most impactful moments occurred when I was given the freedom to explore and interact without being told what to think or do. Like I mentioned earlier, art is truly interactive when there is open conversation between the artist and the viewer. In simple terms: show, don’t tell.

I appreciated the comparison the author drew between designing interactive art and directing actors in a performance. It highlighted the importance of providing the audience with tools and cues to guide their interaction while allowing room for individual interpretation and discovery. However, I found myself questioning whether there might be instances where some level of guidance or interpretation could enhance the audience’s experience without detracting from their autonomy. For example, some modern artworks look like random nonsense to me before I’m told the hidden meaning. 

Week 10: Pulse Beat

For this assignment, I wanted to recreate the human pulse, or heartbeat. The blinking of the LEDs represents the ‘lub-dub’ sound of our heartbeat, and the sound of the piezo buzzer represents our heart rate or pulse. When the button is pressed, the LEDs blink in a pattern synchronized with the buzzer. The potentiometer lets users adjust the speed of the blinking pattern and the pitch of the piezo buzzer, so turning up the potentiometer symbolizes increasing the heart rate.

Video:

Schematic Diagram:

Arduino Code:

const int buttonPin = A2;          
const int potentiometerPin = A0;   
const int buzzerPin = 12;          
const int ledPin1 = 7;   //red         
const int ledPin2 = 8;   //green         

int buttonState = LOW; //digital input
int potValue = 0;  //analog input
int blinkSpeed = 500; //LED blink speed

void setup() {
  pinMode(buttonPin, INPUT);
  pinMode(buzzerPin, OUTPUT);
  pinMode(ledPin1, OUTPUT);
  pinMode(ledPin2, OUTPUT);
}

void loop() {
  // Read the value of the potentiometer and map it to blink speed range (100-1000 ms)
  potValue = analogRead(potentiometerPin);
  blinkSpeed = map(potValue, 0, 1023, 100, 1000);

  buttonState = digitalRead(buttonPin);
  if (buttonState == HIGH) {  //button pressed
    blinkPattern();
    // Play a tone with frequency based on blink speed
    int buzzerFrequency = map(blinkSpeed, 100, 1000, 500, 1500);
    tone(buzzerPin, buzzerFrequency);
    delay(50);  // Buzz for 50 milliseconds
    noTone(buzzerPin);  // Turn off the buzzer
  } else { //button released
    digitalWrite(ledPin1, LOW);
    digitalWrite(ledPin2, LOW);
    noTone(buzzerPin);
  }
}

void blinkPattern() {
  digitalWrite(ledPin1, HIGH);
  delay(blinkSpeed);
  digitalWrite(ledPin1, LOW);
  digitalWrite(ledPin2, HIGH);
  delay(blinkSpeed);
  digitalWrite(ledPin2, LOW);
}

Overall, I am happy with the outcome. I was relieved to see it finally work, because I had been trying everything (debugging, restarting the circuit, etc.) for four hours before I realized my 5V wasn’t connected properly.

I further want to create more audio-visual displays using LEDs and the piezo buzzer.

Week 9: Chin Up (Unusual switch)

For this assignment, I wanted to create an unusual switch that didn’t use my arms or legs. I thought about different body parts, and the next best one for me was my chin. I then thought about different conductors to complete the circuit, and while I was eating at Indian By Nature, I realized I could use aluminum foil.

The rest of the circuit was straightforward to create. The switch is made of two pieces of paper that spring apart from each other; I stuck aluminum foil to the facing side of each and tucked a jumper wire under each piece of foil.

Video:

 

Overall, I am happy with the outcome. It’s nice to see the LED light up after being uncertain whether it would work or not. Whenever I was struggling to get things to stay in place and work, all I had to tell myself was: chin up! :)

I would like to further improve my project by adding different “pedals” for different body parts and connecting them to separate LEDs.