Final Project Idea

I am developing a small, battery-powered Arduino device that displays the room’s status and simple pixel-style emojis on a 16×2 LCD. The device is controlled wirelessly from my computer using either an nRF24L01 radio module or a Bluetooth serial module, depending on what hardware becomes available. It will mount on a wall and serve as a minimal, visually friendly indicator of what’s happening inside the room.

Development Stages

Stage 1 — Crude Functional Prototype

My first goal is to build the simplest version of the system:

  • Connect an Arduino board, a 16×2 LCD, and whichever wireless module I choose.

  • Load basic firmware that listens for simple incoming messages and updates the LCD with status text and a small emoji.

  • Test commands from a computer program.

  • Focus on verifying communication and display logic, without worrying about wiring neatness, battery life, or enclosure design.

The objective of this stage is to prove that the device concept works end-to-end.
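
To make this concrete, below is a minimal sketch of the Stage 1 firmware, assuming a Bluetooth serial module wired to the hardware UART (so commands arrive on Serial) and a parallel-wired 16×2 LCD; the pin numbers are placeholders, not final wiring.

#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);  // placeholder pin wiring

void setup() {
  Serial.begin(9600);   // common default rate for HC-05/HC-06 modules
  lcd.begin(16, 2);
  lcd.print("Waiting...");
}

void loop() {
  if (Serial.available()) {
    String msg = Serial.readStringUntil('\n');
    msg.trim();                        // drop stray CR/LF
    lcd.clear();
    lcd.print(msg.substring(0, 16));   // one LCD row holds 16 characters
  }
}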

Stage 2 — Improved Prototype and Physical Enclosure

Once the first prototype is working, I move to making it usable in a real space:

  • Tidy the wiring and make the device compact.

  • Design a simple case in a 3D modeling tool such as Fusion 360 or Tinkercad.

  • 3D-print the enclosure so the LCD is visible from the front, the electronics fit securely inside, and the device can mount flat against a wall.

  • Refine battery placement so the device can be opened or recharged easily.

Stage 3 — Final Visual and Interaction Refinement

After the device is physically assembled:

  • Adjust the display layout so the text and emoji look balanced and readable.

  • Refine how the device reacts to incoming commands (such as smoothing updates, adding small transitions, or improving clarity).

  • Add small visual improvements such as backlight changes for attention or custom character tweaks for better emoji expression.

This stage is about making the device feel polished and pleasant.
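
For the backlight idea in particular, one possible approach is to pulse the backlight when a new status arrives. This is a sketch under an assumed wiring (the backlight driven through a transistor on a PWM pin, which is not yet part of my design):

const int backlightPin = 6;  // hypothetical PWM pin driving the backlight transistor

// Dim and restore the backlight once to draw attention to a status change.
void pulseBacklight() {
  for (int level = 255; level >= 60; level -= 5) {
    analogWrite(backlightPin, level);  // fade down
    delay(10);
  }
  for (int level = 60; level <= 255; level += 5) {
    analogWrite(backlightPin, level);  // fade back up
    delay(10);
  }
}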

Project Architecture

Device Side:

The device contains three main elements:

  1. Microcontroller — the Arduino runs the core program that listens for wireless messages and updates the display.

  2. Display System — the 16×2 LCD shows both text and custom emoji characters.

  3. Wireless Module — either an nRF24L01 or a Bluetooth serial module receives commands from my computer.

Internally, the Arduino software is structured around:

  • A small message handler that receives text commands wirelessly.

  • A display manager that decides what to show based on the message.

  • A custom character bank for emoji graphics.

This architecture keeps the device simple, efficient, and easy to maintain.
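
As a rough sketch of how these pieces could fit together (building on the Stage 1 firmware above; the function names, command set, and CGRAM slot numbers are placeholders of mine, not final code):

// Custom character bank: CGRAM slots loaded at startup with lcd.createChar().
const byte EMOJI_SMILE = 0;
const byte EMOJI_BUSY  = 1;

// Display manager: decides what to show for a given status.
void showStatus(const char* text, byte emojiSlot) {
  lcd.clear();
  lcd.print(text);
  lcd.setCursor(15, 0);
  lcd.write(emojiSlot);   // emoji in the top-right corner
}

// Message handler: routes incoming text commands to the display manager.
void handleMessage(const String& msg) {
  if      (msg == "BUSY")      showStatus("Busy", EMOJI_BUSY);
  else if (msg == "AVAILABLE") showStatus("Available", EMOJI_SMILE);
}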

Computer Side:

On my computer, I run a lightweight program that:

  • Opens a wireless communication link (either through a paired Bluetooth COM port or through a USB radio dongle for the nRF24L01).

  • Sends simple text commands such as “BUSY”, “AVAILABLE”, “MEETING”, or an emoji instruction.

  • Lets me manually choose the room status using a small interface or a command-line tool.

The computer-side software remains minimal because all visual work happens on the Arduino.
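
Before building any interface, the Bluetooth COM-port variant can be smoke-tested with a tiny command-line sender. Below is a minimal POSIX C++ sketch, assuming a Linux/macOS machine and a paired Bluetooth serial port (the device path is a hypothetical placeholder):

// build: g++ send_status.cpp -o send_status
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main(int argc, char** argv) {
  const char* cmd  = (argc > 1) ? argv[1] : "AVAILABLE";
  const char* path = (argc > 2) ? argv[2] : "/dev/tty.HC-05";  // hypothetical port

  int fd = open(path, O_RDWR | O_NOCTTY);
  if (fd < 0) { perror("open"); return 1; }

  termios tty{};
  tcgetattr(fd, &tty);
  cfmakeraw(&tty);            // raw mode: no echo or line editing
  cfsetispeed(&tty, B9600);   // match the Arduino's baud rate
  cfsetospeed(&tty, B9600);
  tcsetattr(fd, TCSANOW, &tty);

  write(fd, cmd, strlen(cmd));   // send the status command...
  write(fd, "\n", 1);            // ...terminated by a newline
  close(fd);
  return 0;
}

Running ./send_status BUSY would then update the display, assuming the firmware sketched earlier.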

Emoji Design Approach

Since the 16×2 LCD uses a 5×8 pixel character grid, I design emojis as tiny pixel icons:

  • Create simple patterns — smiling, neutral face, busy face, resting face, or symbols like checkmarks or caution icons.

  • Define each pattern using the LCD’s built-in custom character feature.
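
A minimal sketch of one such pattern using the LiquidCrystal library’s createChar() (the smiley bitmap is just an illustration, and the pin wiring is a placeholder):

#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);  // placeholder wiring

// Each byte is one row of the 5x8 grid; a 1 bit turns that pixel on.
byte smiley[8] = {
  0b00000,
  0b01010,  // eyes
  0b01010,
  0b00000,
  0b10001,  // mouth corners
  0b01110,  // smile
  0b00000,
  0b00000
};

void setup() {
  lcd.begin(16, 2);
  lcd.createChar(0, smiley);  // store the bitmap in CGRAM slot 0
  lcd.setCursor(0, 0);        // return to normal (DDRAM) addressing
  lcd.print("AVAILABLE ");
  lcd.write(byte(0));         // draw the custom emoji after the text
}

void loop() {}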

Week 11 Production (Ling and Abdelrahman)

Conceptualization:

The central idea was to build a simple connection between the physical and digital worlds.

Step 1: Single-Sensor and p5.js Movement
Using only one analog sensor (a potentiometer), the Arduino continuously reads values and streams them to p5.js over serial. p5.js interprets those readings and moves an ellipse along the horizontal axis, keeping it vertically centered.

Step 2: LED Brightness Controlled by p5.js
Next, I reversed the flow. Instead of only reading from Arduino, I sent numerical values from p5.js back to the board so it could adjust LED brightness using PWM.
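
A minimal sketch of the Arduino side of this step (an assumed reconstruction, not the exact code we ran): p5.js sends a brightness number from 0 to 255 followed by a newline, and the board applies it with PWM.

const int ledPin = 9;  // any PWM-capable pin

void setup() {
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    int brightness = Serial.parseInt();                   // parse the number sent by p5.js
    analogWrite(ledPin, constrain(brightness, 0, 255));   // set the PWM duty cycle
    while (Serial.available()) Serial.read();             // discard the trailing newline
  }
}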

Step 3: Gravity + Wind Integration
Finally, I modified the p5.js gravity wind sketch. Each time the ball hits the “ground,” p5 sends a signal to Arduino, turning an LED on briefly before switching back off. Meanwhile, an analog sensor feeds continuous data to p5.js to influence the wind force acting on the falling ball.

Video Demonstration:

https://drive.google.com/file/d/1Morf2y7cxIAgYLHKVnitsadjr813cX4Z/view?usp=sharing

Schematic:

Code Highlight:

void setup() {
  Serial.begin(9600);
  pinMode(LED_BUILTIN, OUTPUT);   // the built-in LED flashes when data arrives
  // wait for p5 to connect
  while (Serial.available() <= 0) {
    Serial.println("0,0");
    delay(300);
  }
}

void loop() {
  // wait for data from p5
  while (Serial.available()) {
    digitalWrite(LED_BUILTIN, HIGH);
    Serial.read();                     // read (and discard) the incoming handshake byte

    int sensorValue = analogRead(A0);  // read the sensor
    Serial.println(sensorValue);       // send the sensor value back
  }
  digitalWrite(LED_BUILTIN, LOW);
}

And the corresponding p5.js sketch:

// serial variables
let port;
let connectBtn;
let sensorValue = 0;

function setup() {
  createCanvas(640, 360);
  
  port = createSerial();   // create serial connection
  
  // create connect button
  connectBtn = createButton("Connect to Arduino");
  connectBtn.position(10, 10);
  connectBtn.mousePressed(connectToArduino);
}

function draw() {
  background(220);
  
  // read from Arduino
  let str = port.readUntil("\n");
  if (str.length > 0) {
    sensorValue = int(str);
  }
  
  port.write("\n");   // send handshake to Arduino

  let xPos = map(sensorValue, 0, 1023, 0, width);   // map sensor to horizontal position
  
  // draw ellipse in middle vertically
  fill(0);
  ellipse(xPos, height/2, 50, 50);
  
  // show sensor value
  fill(0);
  noStroke();
  text("Sensor: " + sensorValue, 10, 50);
  text("Turn potentiometer to move circle", 10, 70);
}

// connect to Arduino
function connectToArduino() {
  if (!port.opened()) {
    port.open(9600);
  }
}

Reflection:

This project helped me practice bidirectional serial communication between Arduino and p5.js.

Week 11 Reading

This week’s reading gave me new perspectives on everyday objects. Pullin questions the space between function and feeling and notices how assistive devices are tied to identity, pride, and possibility. For example, a hearing aid could become jewelry, and a prosthetic has the potential to become sculpture.

He inspired me to consider how disability can become a kind of creative catalyst. Instead of treating physical difference as something to hide or correct, he treats it as an invitation to rethink what objects can be. His approach reminds me how narrow my own assumptions have been, since I was heavily influenced by a world that often prioritizes discretion over expression.

Week 10 Production (Ling and Mariam)

Conceptualization

For this week’s assignment, the requirement is to “make a musical instrument” using at least one digital sensor (a switch) and at least one analog sensor (a photoresistor, potentiometer, or distance-measuring sensor). I had worked with a photoresistor last week, and a potentiometer isn’t aesthetic enough for a musical instrument in my view, so I looked into the toolbox and proposed to use an ultrasonic sensor, in other words, a distance-measuring sensor. It works perfectly as a tool to measure the distance between the sensor and any solid object in real time, and we planned to map this distance to pitches so that whenever people move their hands in front of the sensor, the sound the buzzer makes changes accordingly.

After implementing this simple step, I was thinking about two problems: first, how could I integrate a switch into this system, and second, is there any way to alter the tempo of the sound the buzzer makes, tempo being another important factor in making music. To resolve the first problem, I decided to add a system on/off switch, since it is simply logical to have a switch that controls the entire system. To resolve the second problem, I proposed adding a switch to control the tempo of the sound, because after the first implementation the buzzer would just make a continuous sound, and it would be hard to add any tempo element to the instrument.

We decided there could be two ways to control the tempo:

  1. The switch records the manual interaction and plays the sound exactly as it was manually input.
  2. The switch records how many times people push the button within a 5-second window (starting from the first push) and calculates the beat interval as 1000 ms divided by the number of pushes.

We finally decided to go with the second option, as it gives a steadier tempo; raw manual input would otherwise be too uneven.

The final output (the video demonstration and the schematic) is shown below.

Video Demonstration

Schematic

Code Highlight:

In my opinion, there are two interesting parts to this program: first, how the pitch is controlled (essentially a mapping routine), and second, how the tempo is controlled (the logic of the tempo control is described above).

duration = pulseIn(echoPin, HIGH);
distance = (duration * 0.0343) / 2.0;  // in cm

// Check if distance is valid and in a usable range
if (distance > 0 && distance < 200) {
  float d = distance;

  // Limit distances for stable mapping
  if (d < 5)  d = 5;
  if (d > 50) d = 50;

  // Map distance to frequency (closer = higher pitch)
  int frequency = map((int)d, 5, 50, 2000, 200);
  // ... the frequency is then played on the buzzer with tone() (not shown)
}

This code highlights how the pitch is derived from the distance in the program.

duration = pulseIn(echoPin, HIGH);

This measures how long the echo signal stayed HIGH. This is the time for the sound to travel to the obstacle and back.

distance = (duration * 0.0343) / 2;

The duration (in microseconds) is multiplied by the speed of sound in air (0.0343 cm/µs) and divided by 2, because the sound traveled to the object and back but we only want the one-way distance. For example, an echo time of about 1166 µs gives (1166 × 0.0343) / 2 ≈ 20 cm.

// Detect a press (falling edge: HIGH -> LOW)
  if (tempoButtonState == LOW && lastTempoButtonState == HIGH) {
    if (!tempoSettingActive) {
      // First press: start 5-second capture window
      tempoSettingActive = true;
      tempoStartTime = millis();
      tempoPressCount = 1;  // Count this first press
      Serial.println("Tempo setting started: press multiple times within 5 seconds.");
    } else {
      // Additional presses inside active window
      tempoPressCount++;
    }

    delay(40); // debounce for tempo button
  }

This is the tempo-controlling part of the program. I think it is self-explanatory with the comments in the code.
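
One piece the excerpt leaves out is what happens when the 5-second window closes. Below is a minimal sketch of that step (my reconstruction, assuming a global unsigned long beatInterval; not the exact code we ran):

// After the capture window ends, turn the press count into a beat interval.
if (tempoSettingActive && millis() - tempoStartTime >= 5000) {
  tempoSettingActive = false;
  beatInterval = 1000UL / tempoPressCount;  // 1000 ms divided by the press count
  Serial.print("New beat interval (ms): ");
  Serial.println(beatInterval);
}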

The entire code was completed with assistance from GPT.

Reflection:

For future improvements, I want to work with a buzzer that produces better sound than this one, but the experience of working with this buzzer will carry over to future projects.


Week 10 Reading Response

A Brief Rant on the Future of Interaction Design

This article is extremely poorly written. It spends time criticizing Microsoft’s future-vision video but ignores several important factors that are decisive in the world of business.

First, the author ignores how cost control plays a huge role in designing products. Though he spends tons of words describing how future technology sacrifices the tactile richness of working with our hands, he does not weigh the cost or the technological maturity of tactile materials. There are materials that provide this kind of tactile richness, but what would it cost to produce them? What are the technical difficulties of achieving the intended effect when integrating this technology into our day-to-day phones?

Second, the author definitely has some illusions about the idea that “People choose which visions to pursue, people choose which research gets funded, people choose how they will spend their careers.” This video was created by Microsoft, a multibillion-dollar technology company, and as far as I know, nobody votes on what vision the video will present. I don’t know of any public voting process that decides which research gets funded, and with current AI development, people rarely get to choose their careers while balancing their satisfaction with the job. I don’t know how the author came up with these words, but I think he probably lives in a delusional, fictional world.

A follow-up article

The first line of the follow-up article dismantles the concerns I wrote about the first article: “Yes, that’s why I called it a rant, not an essay.” The author is treating the article as a kind of science fiction, and in that sense all the words he produced make sense. He specifically frames his article as a rant that tries to draw people’s attention to how little design thinking goes into how we control our devices. However, I disagree with his opinion on brain interfaces. I believe a brain interface, if possible, would be the most important invention in human history. Many horrible decisions and actions in human history are due to deficiencies in the brain’s processing power; if there were a way to connect our brains to computers and hugely improve memory and computation speed, I believe it would give us a chance to build a better society.

Week 9 Two Switches

Idea:

Since the assignment is to create two types of control, and I had already created a switch using a photoresistor to control the circuit, I simply added a second switch with basic on/off control.

Schematic:


It is a simple schematic: I use A0 to read the LDR values, and the program makes the built-in LED (D13) change accordingly.

Then I use D2 to read the switch state and toggle the LED on D9.

Code:

if (value <= lightThreshold) {
    digitalWrite(ledPin, HIGH);   // Dark → LED ON
  } else {
    digitalWrite(ledPin, LOW);    // Bright → LED OFF
  }

This is the part of the code where I control the light based on the A0 values.


if (reading != lastButtonState) {
    lastDebounceTime = millis(); // reset timer if state changed
  }

  if ((millis() - lastDebounceTime) > debounceDelay) {
    // If the button is pressed (LOW because of INPUT_PULLUP)
    if (reading == LOW && lastButtonState == HIGH) {
      // Toggle LED state
      ledState = !ledState;
    }
  }
  // (lastButtonState is updated to `reading` at the end of loop(), not shown here)

This is the part of the code that I use to toggle the LED state.

The lightThreshold value was determined by experimenting: printing out the A0 value while covering and uncovering the LDR.

Serial.print("Light level: ");
Serial.println(value);


Reflection:

I still need more practice connecting the board, as I am not yet familiar with how to lay out the wiring so it is cleaner and neater. I could also come up with more creative ideas for how to control the LED.

Week 8 Unusual Switch

Idea:

Since the prompt is to create a switch that uses the human body (but not the hands), I instantly came up with the idea of using a photoresistor to control the circuit. Because a photoresistor reacts to light intensity, you can use essentially any body part to control the system, as long as it covers or uncovers the photoresistor.

Schematic:

It is a simple schematic: I use A0 to read the LDR values, and the program makes the built-in LED (D13) change accordingly.

Code:

if (value <= lightThreshold) {
    digitalWrite(ledPin, HIGH);   // Dark → LED ON
  } else {
    digitalWrite(ledPin, LOW);    // Bright → LED OFF
  }

This is the part of the code where I control the light based on the A0 values.

The lightThreshold value was determined by experimenting: printing out the A0 value while covering and uncovering the LDR.

Serial.print("Light level: ");
Serial.println(value);


Reflection:

It took me longer than expected to complete this, as I am not familiar enough with working with the board. Nonetheless, it was a fun experience experimenting with both the hardware and the software at the same time. Future improvements would center on controlling components other than the built-in LED and improving the circuit design.

Week 9 Reading Response

Physical Computing’s Greatest Hits (and Misses)

This blog lists a number of interesting ways of interacting with an art piece. It reminds me of readings from previous weeks that try to redefine what interactivity means for a designed system. From that perspective, these designs give us ideas for interaction beyond the usual input (typed text on a computer). Other parts of the body, such as hands, eyes, or even our entire bodies, are powerful enough tools to provide information that alters the system’s output. The system’s response can also vary: in this blog I saw responses in the form of sound, image, and combinations of both.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

This blog suggests that interactive artists let the audience dictate the interactive experience. I partially agree. I agree that there should be hinted guidance in the art that leads the audience to interact with the device. However, I do not agree that artists should be completely silent in the process. If audience members are not careful enough and miss some of the hints, there should be a message from either the device or the artist that guides them back onto the track of the intended interactive experience. A careful, well-designed system should indeed be very informative in itself, but for a sufficiently complex system, visual hints alone are not informative enough to guide the correct experience.

Week 8 Reading Response

Norman, “Emotion & Design: Attractive Things Work Better”

The reading provides a new perspective on aesthetic design: rather than solely serving a “beauty” function, it improves people’s attention span and positive emotions when interacting with a design, which can end up improving the product’s problem-solving value. I highly agree with this perspective. For example, when computers were first invented, they were dominated by command-line interfaces, which kept the majority of people from using these advanced systems. Later, designers at Apple and Microsoft recognized this problem and separately designed two systems built heavily around graphical interfaces. Today, all systems inherit that idea, and people interact heavily with these better-designed systems.

Her Code Got Humans on the Moon

Beyond being impressed by Margaret Hamilton’s great work and the strong spirit with which she overcame adversity, I am particularly interested in the anecdote where Hamilton proposed that there might be an error in the program if P01 were selected during flight, and her leaders refused to add error-checking to the software. Though the astronauts were the best trained, they still triggered the error during a real mission. This story reminds me of the importance of error-preventing programs. Even if a safeguard seems stupid when it is first proposed, people might really end up in that “stupid” situation. Luckily, during this mission, the problem was resolved. However, there are numerous examples in history of a minor computer error leading to billions in losses.

Midterm


This is the final output:

Conceptualization:

For this project, I decided to create a design that includes interactivity with sound. When I looked at how musicians create their music, I saw that they always work with “soundtracks” that can be dragged around to edit the songs they produce. For this midterm project, I decided to build on that concept.

However, it would be too technical, and not very artistic, if I just recreated a soundtrack control system. To add a certain aesthetic to the design and to simplify the system, I decided to create a visual system similar to this picture.

With each color representing a different sound, the user can drag the gridlines to increase or decrease the length of a given sound. By designating colored blocks for specific sounds, I reduce the difficulty of choosing which sounds are available in the soundtrack. I also wanted a system in which the user can click on a cell inside the grid and change its color (and therefore its sound). There should also be a play button that plays the sound blocks once the user finishes designing them.

How the program works:

There are two pages in total. The first page lets users read instructions so that they grasp how the program works; the second page lets users interact freely with their creativity.


When users adjust the colors of the grid, the “music” created from the sound blocks changes. Additionally, users are allowed to control the length of each block. On top of that, I added a tempo-adjustment feature to control the total length of the blocks.

The code that I am proud of is the rebuild function. It is the core of this program: as the user interacts with the grid, the sound is altered based on how the entire grid is rebuilt.

function rebuildBlocks() {
  // Precompute row boundaries (equal height rows)
  const ys = [];
  for (let r = 0; r <= N_ROWS; r++) {
    ys.push(gridY + (r * gridH / N_ROWS));
  }

  blocks = [];
  for (let r = 0; r < N_ROWS; r++) {
    const row = [];
    const xs = [gridX, ...handles[r], gridX + gridW];

    for (let c = 0; c < N_COLS; c++) {
      const x0 = xs[c], x1 = xs[c + 1];
      const y0 = ys[r], y1 = ys[r + 1];

      // Aesthetic default color pattern:
      // Use a repeating gradient based on row/column indices.
      let baseState;
      const pattern = (r + c) % 5;
      switch (pattern) {
        case 0: baseState = 0; break; // blue
        case 1: baseState = 1; break; // red
        case 2: baseState = 2; break; // yellow
        case 3: baseState = 3; break; // green
        case 4: baseState = 4; break; // grey (silent)
      }

      // Slight random variation for natural look
      if (random() < 0.15) {
        baseState = floor(random(0, 5));
      }

      row.push(new Block(x0, y0, x1 - x0, y1 - y0, baseState, r, c));
    }
    blocks.push(row);
  }
}

In the future, I believe it would be good practice to improve the aesthetics of the user interface; additionally, I could add several sound-effect blocks so that users could alter the entire style of the music.