Week 10 – Post Response

Reading Bret Victor’s “A Brief Rant on the Future of Interaction Design” feels like a wake-up call. His critique of modern interfaces—what he calls “Pictures Under Glass”—is undeniable. We have taken the incredible dexterity of the human hand, capable of thousands of distinct movements, and reduced it to a single, numb motion: the swipe.

Victor argues that the future should be about “Dynamic Media”—screens that can physically morph, allowing us to feel buttons and textures. While I agree with his diagnosis of the problem (we are disconnected from the physical world), I disagree with his solution. I don’t think the future is a morphing screen.

In my opinion, Smart Glasses are the next big thing.

The End of the Swipe

Victor’s main gripe is that touchscreens blind our hands. We have to look at the screen to know where the buttons are because we can’t feel them.

Smart glasses solve this, but not by adding texture. They solve it by freeing our hands entirely. With the advanced hand-tracking and depth sensors we are seeing in emerging tech, the “swipe” becomes obsolete. Instead, we can return to the real-life gestures Victor mourns the loss of.

If I want to turn a dial, I can mime turning a dial in the air. If I want to grab an object, I just grab it. The interface isn’t trapped under a sheet of glass anymore; it is overlaid onto the real world. We can use our full range of motor skills to manipulate digital objects as if they were physical ones.

24/7 Access, Zero Friction

The other massive advantage of the Smart Glasses form factor is integration. Victor worries that interfaces are separating us from our environment. But if the interface is a pair of glasses that look like normal eyewear, the digital world becomes a seamless layer on top of the physical one.

We could have access to the digital world 24/7, completely unnoticeable to the people around us. No more looking down at a phone, hunching over, or disengaging from a conversation to check a notification. The technology recedes into the background, becoming a true extension of the senses rather than a “tool” you have to pick up and hold.

The Futility of Prediction

However, reading through Victor’s “Responses” page reminds me that we should be humble with our predictions.

Victor wrote his rant in 2011, convinced that the iPad was just a transitional phase like black-and-white photography. Yet, over a decade later, we are still swiping on glass.

When we look back at how people in the 1900s predicted the year 2000, they got almost everything wrong. They imagined flying firefighters and personal blimps, but they completely missed the internet, microchips, and AI. We tend to predict the future by exaggerating the present—Victor predicted better physical buttons because he liked physical tools. I am predicting smart glasses because I like visual overlays.

Ultimately, nobody knows what the “next big thing” actually is until it arrives. We can analyze and debate, but in the end, we just have to wait and see.

Week 9 – Simon Says

Description

For this assignment, I created a “Simon Says” style memory game. The goal of the game is to memorize a sequence of colors displayed on the main RGB LED and repeat the pattern back using four push buttons.

To fulfill the assignment requirements, I incorporated both digital and analog controls to affect the game’s behavior:

  1. Digital Inputs (Switches): Four push buttons act as the game controller. These are used to start the game and input the color sequence.

  2. Analog Input (Sensor): A Potentiometer is used as a difficulty selector. Before the game starts, reading the analog value of the potentiometer determines the speed of the flashes and the length of the sequence.

  3. Outputs:

    • RGB LED: Displays the randomized game sequence.

    • Feedback LEDs: Two separate LEDs (Green and Red) indicate if the player won or lost the round.

Schematic

Here is the hand-drawn wiring diagram for the circuit. It details the connections for the RGB LED (Pins 11-13), the Feedback LEDs (Pins 9-10), the Buttons (Pins 2-5), and the Potentiometer (Pin A1).

Logic & Interaction

The system waits for the Yellow button to be pressed to start. Once triggered, the Arduino reads the Potentiometer.

  • If the potentiometer is turned one way, the game is “Easy” (slower flashes, shorter sequence).

  • If turned the other way, the game becomes “Hard” (rapid flashes, longer sequence).

The Arduino then generates a random pattern and displays it on the RGB LED. The player must press the buttons in the correct order. If successful, the Green feedback LED flashes; if incorrect, the Red feedback LED flashes.

Gameplay Video

Code

Below is the Arduino code used for this project. 

// RGB LED Pins
const int RGB_RED_PIN = 13;
const int RGB_GREEN_PIN = 12;
const int RGB_BLUE_PIN = 11;

// Feedback LED Pins
const int FEEDBACK_RED_PIN = 9;
const int FEEDBACK_GREEN_PIN = 10;

// Button Pins
const int BUTTON_YELLOW_PIN = 2;
const int BUTTON_BLUE_PIN = 3;
const int BUTTON_GREEN_PIN = 4;
const int BUTTON_RED_PIN = 5;

// Potentiometer Pin
const int POT_PIN = A1;

// Game Settings
const int BASE_SEQUENCE_LENGTH = 3;
const int MAX_SEQUENCE_LENGTH = 12;
const int BASE_LIGHT_DISPLAY_TIME = 1000;
const int FAST_LIGHT_DISPLAY_TIME = 100;
const int PAUSE_BETWEEN_LIGHTS = 50;
const int FEEDBACK_BLINK_TIME = 200;
const int FEEDBACK_BLINK_COUNT = 3;

// Game State
int gameSequence[MAX_SEQUENCE_LENGTH];
int currentSequenceLength = BASE_SEQUENCE_LENGTH;
int currentDisplayTime = BASE_LIGHT_DISPLAY_TIME;

void setup() {
  // Initialize serial communication
  Serial.begin(9600);
  Serial.println("Simon Game Started!");

  // Configure LED pins
  pinMode(RGB_RED_PIN, OUTPUT);
  pinMode(RGB_GREEN_PIN, OUTPUT);
  pinMode(RGB_BLUE_PIN, OUTPUT);
  pinMode(FEEDBACK_RED_PIN, OUTPUT);
  pinMode(FEEDBACK_GREEN_PIN, OUTPUT);

  // Configure button pins
  pinMode(BUTTON_YELLOW_PIN, INPUT_PULLUP);
  pinMode(BUTTON_BLUE_PIN, INPUT_PULLUP);
  pinMode(BUTTON_GREEN_PIN, INPUT_PULLUP);
  pinMode(BUTTON_RED_PIN, INPUT_PULLUP);

  // Reset LEDs
  turnAllRGBOff();
  digitalWrite(FEEDBACK_RED_PIN, LOW);
  digitalWrite(FEEDBACK_GREEN_PIN, LOW);

  // Seed random generator
  randomSeed(analogRead(A0));
  
  Serial.println("Waiting for start...");
}

void loop() {
  // Wait for start button 
  while (digitalRead(BUTTON_YELLOW_PIN) == HIGH) {
    digitalWrite(FEEDBACK_RED_PIN, HIGH);
    digitalWrite(FEEDBACK_GREEN_PIN, LOW);
    delay(150);
    digitalWrite(FEEDBACK_RED_PIN, LOW);
    digitalWrite(FEEDBACK_GREEN_PIN, HIGH);
    delay(150);
    digitalWrite(FEEDBACK_GREEN_PIN, LOW);
    delay(100);
  }
  
  digitalWrite(FEEDBACK_RED_PIN, LOW);
  digitalWrite(FEEDBACK_GREEN_PIN, LOW);

  Serial.println("Game Starting...");
  delay(200);
  
  // Update difficulty based on potentiometer
  updateGamePace();
  
  // Generate and display sequence
  generateSequence();
  printSequence();
  displaySequence();

  // Process player input
  bool correct = getUserInput();

  // Provide result feedback
  if (correct) {
    Serial.println("Correct!");
    feedbackBlink(FEEDBACK_GREEN_PIN, FEEDBACK_BLINK_COUNT, FEEDBACK_BLINK_TIME);
  } else {
    Serial.println("Incorrect!");
    feedbackBlink(FEEDBACK_RED_PIN, FEEDBACK_BLINK_COUNT, FEEDBACK_BLINK_TIME);
  }
}

// Adjust sequence length and speed based on potentiometer value
void updateGamePace() {
  int potValue = analogRead(POT_PIN);

  currentSequenceLength = map(potValue, 0, 1023, BASE_SEQUENCE_LENGTH, MAX_SEQUENCE_LENGTH);
  currentSequenceLength = constrain(currentSequenceLength, BASE_SEQUENCE_LENGTH, MAX_SEQUENCE_LENGTH);

  currentDisplayTime = map(potValue, 0, 1023, BASE_LIGHT_DISPLAY_TIME, FAST_LIGHT_DISPLAY_TIME);
  currentDisplayTime = constrain(currentDisplayTime, FAST_LIGHT_DISPLAY_TIME, BASE_LIGHT_DISPLAY_TIME);
}

// Fill sequence array with random colors
void generateSequence() {
  for (int i = 0; i < currentSequenceLength; i++) {
    gameSequence[i] = random(4);
  }
}

// Output current sequence to serial monitor for debugging
void printSequence() {
  Serial.print("Sequence: [");
  for (int i = 0; i < currentSequenceLength; i++) {
    Serial.print(gameSequence[i]);
    if (i < currentSequenceLength - 1) Serial.print(", ");
  }
  Serial.println("]");
}

// Play back the sequence on the RGB LED
void displaySequence() {
  for (int i = 0; i < currentSequenceLength; i++) {
    switch (gameSequence[i]) {
      case 0: turnOnRGBRed(); break;
      case 1: turnOnRGBGreen(); break;
      case 2: turnOnRGBBlue(); break;
      case 3: turnOnRGBYellow(); break;
    }
    delay(currentDisplayTime);
    turnAllRGBOff();
    delay(PAUSE_BETWEEN_LIGHTS);
  }
}

// Capture player input and verify against sequence
bool getUserInput() {
  int inputCount = 0;

  while (inputCount < currentSequenceLength) {
    int pressedButton = readButtons();
    
    if (pressedButton != -1) {
      // Visual feedback for button press
      switch (pressedButton) {
        case 0: turnOnRGBRed(); break;
        case 1: turnOnRGBGreen(); break;
        case 2: turnOnRGBBlue(); break;
        case 3: turnOnRGBYellow(); break;
      }
      delay(100);
      turnAllRGBOff();
      delay(50);

      // Check against expected sequence
      if (pressedButton != gameSequence[inputCount]) {
        return false;
      }
      
      inputCount++;
      
      // Wait for button release
      while(digitalRead(BUTTON_RED_PIN) == LOW || digitalRead(BUTTON_GREEN_PIN) == LOW ||
            digitalRead(BUTTON_BLUE_PIN) == LOW || digitalRead(BUTTON_YELLOW_PIN) == LOW);
    }
  }
  return true;
}

// Return index of pressed button or -1 if none
int readButtons() {
  if (digitalRead(BUTTON_RED_PIN) == LOW) return 0;
  if (digitalRead(BUTTON_GREEN_PIN) == LOW) return 1;
  if (digitalRead(BUTTON_BLUE_PIN) == LOW) return 2;
  if (digitalRead(BUTTON_YELLOW_PIN) == LOW) return 3;
  return -1;
}

// Blink specified LED for feedback
void feedbackBlink(int pin, int count, int blinkTime) {
  for (int i = 0; i < count; i++) {
    digitalWrite(pin, HIGH);
    delay(blinkTime);
    digitalWrite(pin, LOW);
    delay(blinkTime);
  }
}

// --- RGB Control Helpers ---

void turnAllRGBOff() {
  digitalWrite(RGB_RED_PIN, LOW);
  digitalWrite(RGB_GREEN_PIN, LOW);
  digitalWrite(RGB_BLUE_PIN, LOW);
}

void turnOnRGBRed() {
  turnAllRGBOff();
  digitalWrite(RGB_RED_PIN, HIGH);
}

void turnOnRGBGreen() {
  turnAllRGBOff();
  digitalWrite(RGB_GREEN_PIN, HIGH);
}

void turnOnRGBBlue() {
  turnAllRGBOff();
  digitalWrite(RGB_BLUE_PIN, HIGH);
}

void turnOnRGBYellow() {
  turnAllRGBOff();
  digitalWrite(RGB_RED_PIN, HIGH);
  digitalWrite(RGB_GREEN_PIN, HIGH);
}


Week 9 – Post Response

Good Design Needs No Explanation

This week we read “Physical Computing’s Greatest Hits and Misses” and “Making Interactive Art: Set the Stage, Then Shut Up and Listen.” These readings made me think about how we interact with objects.

In my opinion, the main factor in a brilliant design is intuition. The design should be unnoticeable. It should not require an instruction manual. I am a self-proclaimed kinesthetic learner, which means I learn by doing. I despise manuals. When I get something new, I like to jump in headfirst and discover how it works.

The reading on “Greatest Hits and Misses” talks about reliability and clarity. However, modern tech companies often ignore this. They prioritize simple, sleek designs to look futuristic, which often sacrifices intuitiveness.

For example, you might have to double tap the back of a phone to take a screenshot. Or you have to triple tap to turn on a flash. These are hidden secrets, not clear controls.

I have a personal example of this. I used to go into the settings menu on my phone every time I wanted to turn on noise cancellation on my AirPods. Eventually, someone pointed out that I can just hold the stem of the AirPod to change the mode. I felt silly, but I shouldn’t have. The design gave me no clue that the feature existed.

Of course, there are exceptions. An airplane cockpit is very complex. It has hundreds of buttons. This affords complexity because knowledge is a prerequisite. You have to go to flight school to fly a plane.

But for everyday items or art, we should not need flight school.

This connects to the second reading, “Set the Stage, Then Shut Up and Listen.” This text argues that an artist should not stand next to their work and explain it.

If you let a user interact with your art without context, and they do it wrong, that is important data. It does not mean the user is stupid. It means the design is flawed or unintuitive.

As a designer, you have to set the stage. You have to give clues through the shape and feel of the object. If you have to write a manual for your art installation, you have failed to make a connection. Good interaction is about letting the user figure it out with their own hands.

Week 8 – Post Response

Design is More Than Function: Why Feeling Right Matters

When we think about technology, it’s easy to focus only on what it does. But good design isn’t just about function; it’s about how it feels. The readings “Emotion & Design: Attractive Things Work Better” and “Her Code Got Humans on the Moon” reminded me that emotion, perception, and clever design tricks shape how we interact with technology every day.

Take something as simple as a volume bar on your phone. On paper, 50% should be half the sound. But our ears don’t perceive sound linearly; they perceive it logarithmically. This means a volume slider isn’t just a slider; it’s an emotional experience. Apple and Google realized this and adjusted the curves so that the volume feels smooth and natural to our ears. YouTube, by contrast, leaves most of the adjustment in the last 10%, making it feel unpredictable and frustrating.
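Because the ear responds logarithmically, a slider has to be remapped before its position becomes a gain. A minimal sketch of that idea, assuming a simple power curve (the exponent 3.0 is an illustrative choice; real platforms tune their own curves):

```cpp
#include <cmath>

// Map a linear slider position (0.0–1.0) to an amplitude gain that
// feels roughly even to the ear. A power curve is a cheap stand-in for
// the logarithmic response described above; the exponent is illustrative.
double sliderToGain(double slider) {
    if (slider <= 0.0) return 0.0;  // full mute at the bottom
    if (slider >= 1.0) return 1.0;  // full volume at the top
    return std::pow(slider, 3.0);   // half slider gives about 12.5% amplitude
}
```

With a curve like this, most of the raw amplitude range sits in the top of the slider’s travel, which is exactly what makes the motion feel perceptually smooth.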

Some designs don’t actually change the outcome but give the illusion of control. Elevators are a perfect example. About 80% of “close door” buttons don’t do anything (the door closes automatically), but the button makes users feel empowered. That illusion of control creates a better experience, even if it’s technically unnecessary. This is exactly the kind of trick designers use to shape our perception and emotions.

Another example from the transcript is the Nintendo Switch Joy-Con mirror. The solution was low-tech and simple: a small mirror to reflect the LED lights. It’s deceptively simple but brilliantly effective. It shows that design isn’t always about complexity; it’s about solving the human problem in a way that feels right.

Even everyday software hides these design decisions. Progress bars, icons, and animations are not just decorative; they are carefully crafted to keep users calm, informed, and engaged. Google famously tested 41 shades of blue to see which one users clicked most, and Apple uses subtle animation to make screens feel smoother. These small touches are invisible if done well, but they make a huge difference in the user experience.

Week 5 – Post Response

Computer vision works in a very different way from how people see. We read situations without thinking. We notice feelings, intent, and small details that are not written anywhere. A computer has none of this. It only receives a grid of values and waits for someone to tell it what to look for. Nothing has meaning until a developer builds rules or trains a model to reach a goal.

There are simple methods that can guide the system. Finding outlines helps the computer separate one shape from another. Tracking color ranges can point it toward a moving object. Contours make it easier to follow form. With the right setup and some training, the computer can get close to noticing what matters in a scene, even if it never truly understands it.

This matters a lot in interactive art. Vision gives the work a way to notice the audience and respond. A camera can capture movement, distance, and presence at the same time, which makes it one of the strongest sensors an artist can use. At the same time, people take in art through many senses. We see, listen, and feel. Computer vision tries to narrow that gap by giving machines a way to observe instead of just wait. This opens new creative space, but it also raises questions about how much the artwork should watch and how that changes the experience for the viewer.

Week 3 – Post Response

Rethinking Interactivity

After reading Chapter 1 of The Art of Interactive Design by Chris Crawford, I started to see how we often talk about interactivity as if it is all or nothing. You either have it or you do not. In my opinion, it is not that simple. There are levels to it.

Crawford draws a clear line between reaction and interaction. He says that a reactive system only answers a user action with a single output. An interactive system listens, responds, and listens again. It becomes a loop between the user and the system. I agree with this idea, but I still think even simple reactions sit on the same scale, only at the very bottom.

A fridge light turning on when the door opens is a good example. It changes based on what the user does, so it is not completely passive. Still, it does not take in new information after that first moment. There is no ongoing exchange. It is interactive only at the lowest level.

What separates the two is the direction of the flow. A reactive system goes from input to output and then stops. An interactive system keeps the conversation going. It listens to the user, gives feedback, and then listens again. The more complex the loop becomes, the higher the interactivity feels.

Thinking of interactivity as a spectrum makes design more interesting. It helps us see why some systems feel alive while others feel flat. It also shows that even simple actions can be part of a larger idea when we look at them in context.

Week 2 – Post Response

Chance Operations and Ordered Randomness

Watching Casey Reas talk about chance operations changes how we see art. It is not full control. It is not total randomness. It is something in between.

Determinism is boring because everything is already decided. There is no surprise. Chaos is also boring because nothing has meaning when anything can happen.

Ordered randomness is different. The artist sets simple rules. The system creates the result. The final work is not chosen by hand. It grows on its own. It feels natural and alive.

I think the best balance is when the user knows how their input affects the system, but not the exact result. If the output is predictable with total certainty, the art becomes flat. On the other hand, if the output is completely random, it feels like TV noise. There is nothing to hold on to.

We see this balance in nature. Small rules create big patterns. Clouds form. Birds move in groups. No one controls every part, yet it still makes sense.

Chance operations show that we do not need to choose between control and chaos. We can guide the process without forcing it. When we create space for the unexpected, the outcome becomes more interesting.

Week 12 – Documentation on Final Project

Finalized Concept: “The Snail’s Journey to School”

Project Summary
The Snail’s Journey to School is an interactive physical–digital storytelling installation.
The player controls a small snail robot navigating a handmade obstacle course, helping it reach “school.” As the player moves the snail using physical buttons, a connected p5.js sketch narrates the adventure, updates animations, and reacts to the snail’s progress in real time through serial communication with Arduino.

1. Finalized Detailed Concept

Physical World (Arduino + Snail Robot)

The snail robot sits on a small motorized base (the base will be the robot that can be built from the Arduino Uno kit). The user controls the snail with three buttons:

  • FORWARD
  • LEFT
  • RIGHT

A “finish line” sensor (photoresistor or IR distance sensor) detects when the snail reaches the school.

The obstacle course includes:

  • A puddle (painted blue)
  • Small pebbles
  • A small cardboard ramp
  • A school gate with the sensor hidden inside

Digital World (p5.js Storytelling Screen)

The p5 screen visually tells the story through five scenes:

  1. Intro Scene
    “The snail is late for school! Guide it through the obstacle course.”
  2. Instructions Scene
    Shows button directions + images.
  3. Live Story Mode
    Reacts to every button press and displays animations such as:
  • “Snail moves forward!”
  • “Turning left…”
  • Little movement animations or sound effects.
  4. Ending Scene
    When the sensor triggers the finish, p5 displays:
    “You made it! The snail is in class now!” with a cute animation.
  5. Failure Scene
    If the player doesn’t make it in time (2.5 minutes) or fails to cross more than one obstacle, the screen shows “Oh no, you’re late to school!”

Project Interaction Loop (Final Version)

  1. Player presses physical button
  2. Arduino moves snail robot + sends message to p5
  3. p5 receives message and updates story animation
  4. Snail reaches sensor
  5. Arduino sends “finish” to p5, and p5 plays the ending
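The loop above implies a tiny serial protocol between the Arduino and p5. A sketch of a message encoder for it, in plain C++ (the tokens "F", "L", "R", and "FINISH" are placeholders, not final message names; on the Arduino each would be sent with Serial.println):

```cpp
#include <string>

// Hypothetical encoder for the Arduino -> p5 messages in the
// interaction loop: one short, newline-terminated token per event.
enum class Event { Forward, Left, Right, Finish };

std::string encodeEvent(Event e) {
    switch (e) {
        case Event::Forward: return "F\n";
        case Event::Left:    return "L\n";
        case Event::Right:   return "R\n";
        case Event::Finish:  return "FINISH\n";
    }
    return "";  // unreachable, keeps compilers happy
}
```

On the p5 side, readUntil("\n") would recover one token at a time, so the story screen can react to each button press or to the finish event.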

Arduino Program Design

Final Project Idea

I am developing a small, battery-powered Arduino device that displays the room’s status and simple pixel-style emojis on a 16×2 LCD. The device is controlled wirelessly from my computer using either an nRF24L01 radio module or a Bluetooth serial module, depending on what hardware becomes available. The device will mount on a wall and serve as a minimal, visually friendly indicator of what’s happening inside the room.

Development Stages

Stage 1 — Crude Functional Prototype

My first goal is to build the simplest version of the system:

  • Connect an Arduino board, a 16×2 LCD, and whichever wireless module I choose.

  • Load basic firmware that listens for simple incoming messages and updates the LCD with status text and a small emoji.

  • Test commands from a computer program.

  • Focus on verifying communication and display logic, without worrying about wiring neatness, battery life, or enclosure design.

The objective of this stage is to prove that the device concept works end-to-end.

Stage 2 — Improved Prototype and Physical Enclosure

Once the first prototype is working, I move to making it usable in a real space:

  • Tidy the wiring and make the device compact.

  • Design a simple case in a 3D modeling tool such as Fusion 360 or Tinkercad.

  • 3D-print the enclosure so the LCD is visible from the front, the electronics fit securely inside, and the device can mount flat against a wall.

  • Refine battery placement so the device can be opened or recharged easily.

Stage 3 — Final Visual and Interaction Refinement

After the device is physically assembled:

  • Adjust the display layout so the text and emoji look balanced and readable.

  • Refine how the device reacts to incoming commands (such as smoothing updates, adding small transitions, or improving clarity).

  • Add small visual improvements such as backlight changes for attention or custom character tweaks for better emoji expression.

This stage is about making the device feel polished and pleasant.

Project Architecture

Device Side:

The device contains three main elements:

  1. Microcontroller — the Arduino runs the core program that listens for wireless messages and updates the display.

  2. Display System — the 16×2 LCD shows both text and custom emoji characters.

  3. Wireless Module — either an nRF24L01 or a Bluetooth serial module receives commands from my computer.

Internally, the Arduino software is structured around:

  • A small message handler that receives text commands wirelessly.

  • A display manager that decides what to show based on the message.

  • A custom character bank for emoji graphics.

This architecture keeps the device simple, efficient, and easy to maintain.

Computer Side

On my computer, I run a lightweight program that:

  • Opens a wireless communication link (either through a paired Bluetooth COM port or through a USB radio dongle for the nRF24L01).

  • Sends simple text commands such as “BUSY”, “AVAILABLE”, “MEETING”, or an emoji instruction.

  • Lets me manually choose the room status using a small interface or a command-line tool.

The computer-side software remains minimal because all visual work happens on the Arduino.
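A minimal sketch of the command-building part of that tool, in C++. The status names come from the list above; the one-command-per-line framing is my assumption about the wireless protocol, not a fixed design:

```cpp
#include <array>
#include <string>

// Check a status against the set the device understands. "BUSY",
// "AVAILABLE", and "MEETING" are the statuses named in the text above.
bool isKnownStatus(const std::string& status) {
    const std::array<std::string, 3> known = {"BUSY", "AVAILABLE", "MEETING"};
    for (const auto& k : known) {
        if (k == status) return true;
    }
    return false;
}

// Frame one command for the serial link.
std::string buildCommand(const std::string& status) {
    return status + "\n";  // newline marks the end of one command
}
```

The resulting line would then be written to the paired Bluetooth COM port or handed to the radio dongle, whichever link ends up being used.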

Emoji Design Approach

Since the 16×2 LCD uses a 5×8 pixel character grid, I design emojis as tiny pixel icons:

  • Create simple patterns — smiling, neutral face, busy face, resting face, or symbols like checkmarks or caution icons.

  • Define each pattern using the LCD’s built-in custom character feature.
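As a concrete illustration, a 5×8 smiley could be defined like this. The pattern itself is just an example; on the Arduino the array would be registered with LiquidCrystal’s createChar(slot, smiley) and printed by writing that slot number:

```cpp
#include <cstdint>

// One byte per row of the 5x8 LCD character grid; only the low 5 bits
// of each byte are used, matching the grid described above.
const uint8_t smiley[8] = {
    0b00000,  // blank top row
    0b01010,  // eyes
    0b01010,  // eyes, second row
    0b00000,
    0b10001,  // mouth corners
    0b01110,  // smile
    0b00000,
    0b00000   // blank bottom row
};
```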

Week 11 – Production (Ling and Abdelrahman)

Conceptualization:

The central idea was to build a simple connection between physical and digital worlds.

Step 1: Single-Sensor and p5.js Movement
Using only one analog sensor (a potentiometer), the Arduino continuously reads values and streams them to p5.js over serial. p5.js interprets those readings and moves an ellipse along the horizontal axis, keeping it vertically centered.

Step 2: LED Brightness Controlled by p5.js
Next, I reversed the flow. Instead of only reading from Arduino, I sent numerical values from p5.js back to the board so it could adjust LED brightness using PWM.
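The Arduino side of this step boils down to parsing a number and clamping it to the 8-bit PWM range before calling analogWrite(). A desktop-C++ sketch of that value handling, where std::stoi stands in for Arduino’s Serial.parseInt():

```cpp
#include <string>

// Turn one line of incoming serial text into a valid PWM duty cycle.
int toPwmLevel(const std::string& incoming) {
    int value = 0;
    try {
        value = std::stoi(incoming);  // parse the leading integer
    } catch (...) {
        return 0;                     // unparsable input: LED off
    }
    if (value < 0)   return 0;        // duty cycle cannot be negative
    if (value > 255) return 255;      // 8-bit PWM ceiling
    return value;
}
```

On the board, the clamped result would simply be passed to analogWrite() on the LED’s pin.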

Step 3: Gravity + Wind Integration
Finally, I modified the p5.js gravity wind sketch. Each time the ball hits the “ground,” p5 sends a signal to Arduino, turning an LED on briefly before switching back off. Meanwhile, an analog sensor feeds continuous data to p5.js to influence the wind force acting on the falling ball.

Video Demonstration:

https://drive.google.com/file/d/1Morf2y7cxIAgYLHKVnitsadjr813cX4Z/view?usp=sharing

Schematic:

Code Highlight:

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);  // used as a handshake indicator
  Serial.begin(9600);
  // wait for p5 to connect
  while (Serial.available() <= 0) {
    Serial.println("0,0");
    delay(300);
  }
}

void loop() {
  // wait for data from p5
  while (Serial.available()) {
    digitalWrite(LED_BUILTIN, HIGH);
    Serial.read();                     // consume the handshake byte

    int sensorValue = analogRead(A0);  // read the potentiometer
    Serial.println(sensorValue);       // send the reading to p5
  }
  digitalWrite(LED_BUILTIN, LOW);
}

And the matching p5.js sketch on the computer side:
// serial variables
let port;
let connectBtn;
let sensorValue = 0;

function setup() {
  createCanvas(640, 360);
  
  port = createSerial();   // create serial connection
  
  // create connect button
  connectBtn = createButton("Connect to Arduino");
  connectBtn.position(10, 10);
  connectBtn.mousePressed(connectToArduino);
}

function draw() {
  background(220);
  
  // read from Arduino
  let str = port.readUntil("\n");
  if (str.length > 0) {
    sensorValue = int(str);
  }
  
  port.write("\n");   // send handshake to Arduino
  

  let xPos = map(sensorValue, 0, 1023, 0, width);   // map sensor to horizontal position
  
  // draw ellipse in middle vertically
  fill(0);
  ellipse(xPos, height/2, 50, 50);
  
  // show sensor value
  fill(0);
  noStroke();
  text("Sensor: " + sensorValue, 10, 50);
  text("Turn potentiometer to move circle", 10, 70);
}

// connect to Arduino
function connectToArduino() {
  if (!port.opened()) {
    port.open(9600);
  }
}

Reflection:

This project helped me practice bidirectional serial communication between Arduino and p5.js.