Voice-Controlled Robotic Car

Concept

This project is a small car that connects to my laptop with a USB cable. Instead of using buttons, I control it with my voice. I say simple words like “forward,” “left,” “right,” “back,” or “stop,” and the car moves instantly. There is also an auto mode that lets the car drive itself while the screen shows what it’s doing.

My goal was to make something fun, hands-free, and beginner-friendly. Some videos of the project interaction can be found below.

Interaction 1

Interaction 2

Interaction 3

Here are some videos from the IM showcase:

Showcase

How It Works

My laptop runs a p5.js sketch that uses the Web Speech API. When I speak, the browser turns my speech into text, and the code matches the text to a command. It then sends a single letter over the Web Serial API to an Arduino.
The Arduino reads that letter and moves the motors to go forward, back, turn, or stop.

A small “health check” in the JavaScript restarts the microphone if it stops listening. This makes the whole loop smooth and fast:
I talk → the browser listens → Arduino moves the car.
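Here is a minimal sketch of that loop (assuming the p5.webserial library for the serial side and a browser that supports the Web Speech API; the simple includes() checks and the sendCommand helper are illustrative stand-ins for my actual fuzzy matcher):

let port;
let recognition;

function setup() {
  noCanvas();
  port = createSerial(); // p5.webserial serial port object

  // Web Speech API (Chrome exposes it as webkitSpeechRecognition)
  const SR = window.SpeechRecognition || window.webkitSpeechRecognition;
  recognition = new SR();
  recognition.continuous = true; // keep listening between phrases

  recognition.onresult = (event) => {
    const last = event.results[event.results.length - 1];
    const phrase = last[0].transcript.trim().toLowerCase();
    if (phrase.includes("forward")) sendCommand("f");
    else if (phrase.includes("back")) sendCommand("b");
    else if (phrase.includes("left")) sendCommand("l");
    else if (phrase.includes("right")) sendCommand("r");
    else if (phrase.includes("stop")) sendCommand("s");
  };

  // the "health check": if recognition ever ends, start it again
  recognition.onend = () => recognition.start();
  recognition.start();
}

function sendCommand(letter) {
  if (port.opened()) port.write(letter); // one letter per command
}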

Interaction Design

The webpage shows everything clearly: whether the car is connected, if the mic is listening, the last command heard, and if you are in auto or manual mode.

A pulsing microphone icon tells you when to speak. Quick onboarding text teaches the basic words. The screen also shows what the system heard, so you know why the car reacted in a certain way.

Arduino Code Description

The Arduino code controls all the movement and safety features of the car. It uses the Adafruit Motor Shield to run four DC motors and two ultrasonic sensors to check the front and back distances. The laptop sends one-letter commands over serial—f, b, l, r, and s—and the Arduino reacts right away.

When a command arrives, the Arduino checks the sensors first. If the car is too close to something in the front and you say forward, the car protects itself by going backward instead. The same logic works for the back sensor. This gives the car a simple but smart safety layer.

Two LEDs show safety states:

  • Green → safe

  • Red → danger or warning

The code also includes turning functions that rotate the car in place for about 1 second to make a clean left or right turn. Everything is written in a clear, modular way so you can change speeds, turning time, and safety distances easily.

Below is the full Arduino sketch, which includes motor control, safety checks, distance reading, and serial command handling.

Arduino code : Github

Circuit Schematic

 

Schematics:

p5.js Code Description 

The p5.js code is the control center of the whole project. It listens to your voice, figures out what you meant, and sends a one-letter command to the Arduino. It uses the Web Speech API to hear your words and a fuzzy-matching system so it can understand similar phrases like “go,” “ahead,” or “move forward.” Even small mistakes or accents are handled using a simple distance-matching algorithm.
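To show the idea, here is a small sketch of how that distance matching could work (an illustration of the approach, not my exact project code; the threshold of 2 is just a reasonable guess):

const COMMANDS = { forward: "f", backward: "b", left: "l", right: "r", stop: "s" };

// classic Levenshtein distance: the number of edits needed to turn a into b
function levenshtein(a, b) {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      d[i][j] = Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost);
    }
  }
  return d[a.length][b.length];
}

// pick the command whose word is closest to what was heard
function matchCommand(heard) {
  let best = null;
  let bestDist = Infinity;
  for (const word of Object.keys(COMMANDS)) {
    const dist = levenshtein(heard.toLowerCase(), word);
    if (dist < bestDist) { bestDist = dist; best = word; }
  }
  return bestDist <= 2 ? COMMANDS[best] : null; // ignore anything too far off
}

// matchCommand("forwad") -> "f", matchCommand("hello") -> null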

The sketch draws the full interface: the microphone icon, the last command, auto-mode status, and a short list of voice commands. It also shows what the system heard in real time. If needed, you can use the arrow keys as backup control. The canvas resizes automatically, and there’s a fullscreen button for demos.
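As a rough illustration of the arrow-key backup (again assuming the p5.webserial library; the exact key mapping here is my own guess, not necessarily what the final sketch uses):

let port;

function setup() {
  createCanvas(windowWidth, windowHeight);
  port = createSerial(); // same serial object used for voice commands
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight); // canvas resizes automatically
}

function keyPressed() {
  if (!port.opened()) return; // ignore keys until the car is connected
  if (keyCode === UP_ARROW) port.write("f");
  else if (keyCode === DOWN_ARROW) port.write("b");
  else if (keyCode === LEFT_ARROW) port.write("l");
  else if (keyCode === RIGHT_ARROW) port.write("r");
  else if (key === " ") port.write("s"); // space bar as stop (my assumption)
}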

code : p5 – code

p5 project view : p5

Communication Between Arduino and p5.js

The connection between p5.js and the Arduino happens through the Web Serial API in the browser. When you press the Start button, the browser asks you to choose the Arduino’s USB port. After you allow it, p5.js opens the port at 9600 baud.

Whenever the voice system recognizes a command, p5.js sends a single letter to the Arduino:

  • f → forward/go

  • b → backward

  • l → left

  • r → right

  • s → stop

  • a → auto mode

The Arduino listens on its serial port and reacts immediately to whichever letter arrives. p5.js also sends an s (stop) right after connecting to make sure the car doesn’t move unexpectedly.
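A rough sketch of that connection step (assuming the p5.webserial library, where open() pops up the browser’s port picker; sendLetter is just an illustrative helper name):

let port;
let sentInitialStop = false;

function setup() {
  createCanvas(400, 200);
  port = createSerial();
  const startBtn = createButton("Start");
  // 9600 baud matches Serial.begin(9600) on the Arduino
  startBtn.mousePressed(() => port.open(9600));
}

function draw() {
  background(220);
  // once the port is open, send a single "s" so the car starts out stopped
  if (port.opened() && !sentInitialStop) {
    port.write("s");
    sentInitialStop = true;
  }
}

function sendLetter(letter) {
  if (port.opened()) port.write(letter); // "f", "b", "l", "r", "s", or "a"
}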

This simple one-letter system makes communication fast, reliable, and easy to debug.

What I’m Proud Of

I like how the voice system turned out. At first the voice control lagged and often picked up wrong inputs, so I looked at a fuzzy string matching technique (Levenshtein distance) I found in a Python example, which simplified everything for me. The auto-restart loop keeps the mic alive even during long testing sessions, and the UI makes the whole system feel clear. These little details make the whole experience smoother.

Future Improvements

Here are some things I want to add:

  • A wake word, so the mic listens only after a trigger phrase

  • Smoother motor speed and softer turns

  • Sensors for obstacle avoidance in auto mode

  • A stronger chassis with cleaner wiring and a power switch

  • Logging data to help tune performance over time

REFERENCES: This project used AI tools mainly to support coding and debugging, and to help clarify technical concepts. AI assisted in explaining algorithms such as fuzzy matching and Levenshtein distance, and in organizing the project. The creative decisions, programming logic, user testing, and system design were all done by me, with AI acting only as a supportive tool for communication and explanation.

Sources: Fuzzy-source

 

Final Project Documentation

Concept 

This project is a voice-controlled Arduino car that allows a user to control a moving robot using simple spoken commands. Instead of buttons, remotes, or apps, the user speaks commands such as “forward,” “turn left,” “reverse,” or “stop.” The car listens and responds in real time.

The system uses the Web Speech API (built into modern browsers) to recognize speech. A fuzzy matching algorithm helps the system understand different ways of saying the same thing. For example, if the user says “go forward,” “move forward,” or simply “forward,” the system will run the same command.

The computer communicates with the Arduino through serial communication (USB cable). Once the browser identifies the command, it sends a single character (f, b, l, r, s) to the car. The Arduino then interprets the command and drives the motors.

This creates a smooth and natural interaction where your voice becomes the controller.

 Project Images

Image 1

Image 2

Schematics

User Testing

First User: User 1

Second User : User 2

IMPLEMENTATION

The implementation is designed to feel natural, simple, and automatic. When the user opens the web page, the system asks for microphone access and immediately begins listening for voice commands once permission is given. It listens continuously, and a microphone icon provides live feedback by turning into a green pulsing circle when actively listening and a gray circle when idle. When the user speaks, the microphone captures the audio, the Web Speech API converts it into text, and a fuzzy matching algorithm chooses the closest valid command, even if the person says it in a different way. The interface then shows what the system heard, which command was selected, whether the Arduino is connected, and the current auto-mode status, helping the user understand what is happening at all times. Once a command is recognized, the browser sends it through USB to the Arduino, which drives the motors using the Motor Shield so the car can move, turn, stop, or reverse immediately.

Parts I’m Proud of

I am especially proud of how smart and flexible the command recognition system is, because the fuzzy matching allows the car to understand many different ways of saying the same command, making the interaction feel natural. I also like that the user experience is very smooth—the system starts listening automatically, works without any extra buttons, and gives clear visual feedback showing what it heard and what command it chose. The error handling is strong as well, so even if the microphone fails, the Arduino disconnects, or a command is unclear, the system continues running and informs the user about what went wrong. The real-time indicators for listening status, recognized text, executed commands, and connection state make the experience easy to follow. I am also proud of the continuous listening design, which removes the need to press a talk button and makes the car react instantly to the user’s voice. Finally, the code is clean and uses modern web APIs and smart algorithms, giving the project a professional and reliable feel.

AI Assistance: This project used AI tools mainly to support coding, while the hardware wiring, testing, and final implementation were completed by me. AI was used to help clarify technical concepts: it assisted in explaining algorithms such as fuzzy matching and Levenshtein distance, and in organizing the project. Any AI-generated text was reviewed, edited, and adapted to fit the actual project work. The creative decisions, programming logic, user testing, debugging, and system design were all done by me, with AI acting only as a supportive tool for communication and explanation.

Week 12: Final Project Proposal and Progress

Concept

For my final project, I am building a Voice-Controlled Car.
The main idea is to create a small robotic vehicle that moves based on simple spoken commands such as “forward,” “left,” “right,” “backward,” and “stop.”

Instead of using a remote controller, buttons, or joysticks, the user interacts with the car entirely through speech. The goal is to make the interaction feel natural and intuitive, almost like the car is “listening” and responding in real time.

The project uses bi-directional communication between Arduino and p5.js:

  • p5.js listens to the user’s voice, recognizes the command, and sends it to the Arduino.

  • Arduino moves the motors and sends back status messages to p5.js.

This creates a complete loop where the user, p5.js, and the car are constantly communicating.

Arduino and P5 Communication

These are the main components I’m using in the project:

  • Arduino Uno

  • Adafruit Motor/Stepper/Servo Shield

  • 4 DC motors + wheels

  • Wooden board chassis

  • Battery pack (to power the motors)

  • USB cable or Bluetooth module (Adafruit Bluefruit EZ-Link)

  • Optional: LEDs or buzzer for feedback

Here is how the system is organized:

  • The laptop microphone is read by p5.js to detect speech.

  • p5.js sends simple movement commands to the Arduino.

  • The Arduino controls the motors through the motor shield.

  • Arduino also sends responses back to p5.js to display status.

This makes the communication two-way, not just one-directional.

Arduino → P5 Communication

Every time the Arduino receives a command and performs the movement, it sends back a message so p5.js can update the interface.

For example: car_forward, car_left, car_right, car_backward, car_stopped

These messages allow the p5.js program to show real-time feedback about what the car is doing.
This also helps with testing and makes the experience feel more responsive.
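A rough sketch of how the p5.js side could read and display these status messages (assuming the p5.webserial library; the Arduino is assumed to Serial.println() one message per line):

let port;
let lastStatus = "waiting...";

function setup() {
  createCanvas(400, 200);
  port = createSerial();
  const used = usedSerialPorts();
  if (used.length > 0) port.open(used[0], 9600);
}

function draw() {
  background(220);
  const line = port.readUntil("\n"); // one status message per line, e.g. "car_forward"
  if (line.length > 0) lastStatus = line.trim();
  text("Car status: " + lastStatus, 20, height / 2);
}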

Later on, this system can be expanded to include additional feedback, such as speed.

P5 → Arduino Communication

The p5.js sketch uses the p5.SpeechRec() library to listen for specific keywords.
When a valid direction is heard, p5.js sends a short command to the Arduino through serial.

Spoken word → code sent to the Arduino:

  • forward → F

  • left → L

  • right → R

  • backward → B

  • stop → S

Using one-letter codes keeps communication fast and reduces errors.

Arduino then reads the code and moves the motors accordingly using the Adafruit Motor Shield.
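Here is a minimal sketch of that p5.js side (assuming the p5.speech library for p5.SpeechRec and p5.webserial for the serial port; the exact options may differ from my final code):

let speechRec;
let port;

function setup() {
  noCanvas();
  port = createSerial();

  // p5.SpeechRec comes from the p5.speech library
  speechRec = new p5.SpeechRec("en-US", gotSpeech);
  speechRec.continuous = true; // keep listening
  speechRec.start();
}

function gotSpeech() {
  const heard = speechRec.resultString.toLowerCase();
  const codes = { forward: "F", left: "L", right: "R", backward: "B", stop: "S" };
  for (const word in codes) {
    if (heard.includes(word) && port.opened()) {
      port.write(codes[word]); // one-letter code to the Arduino
      break;
    }
  }
}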

Project Progress So Far

I began by building the physical structure of the car before working on the code.

What I have completed:

  • Attached four DC motors to the corners of my wooden chassis

  • Mounted the Adafruit Motor Shield onto the Arduino Uno

  • Fitted the wheels onto the motors

  • Tested spacing and placement for the battery pack

  • Ensured that the chassis is stable and balanced

  • Confirmed that all electronic components fit properly

Preliminary Concept : Voice-Controlled Car (Speech to Movement)

Concept  : Voice-Controlled Car

The main idea is to build a car that responds to spoken words like “forward,” “left,” “right,” and “stop.”
Instead of a remote control or buttons, the user interacts only through speech, making the experience feel natural and intuitive.

Interaction (How it Works)

On the laptop, a p5.js sketch uses the SpeechRec library to listen through the microphone.
When the user says a direction:

  • “forward” → the car should drive straight

  • “left” → the car turns left

  • “right” → the car turns right

  • “stop” → the car stops moving

p5.js identifies the spoken word and sends the corresponding command through serial communication to the Arduino.

The Arduino receives these simple messages and activates the correct motors to move the car.
The entire system becomes a loop of:

You speak → p5 listens → p5 processes → Arduino moves the car

The goal is to make it feel almost like the car is “listening” and responding to the user in real time.

Arduino Components

  • 1 Arduino Uno

  • 2 DC motors (for movement and turning)

  • 1 motor driver (L298N or similar)

  • Wheels + chassis

  • External battery pack for the motors

  • USB cable for serial communication

I might later add optional components like LED indicators or a small buzzer that reacts to commands, depending on how the project develops.

Why I Like This Idea

I like this concept because it feels fun, intuitive, and a bit magical.
Anyone can walk up and immediately understand how it works since people naturally speak commands like “go” or “stop.”

Reading Reflection

When I read this, I was struck by how much it changed the way I see the things we design for people with disabilities. I always thought the main goal was to make these products as invisible and discreet as possible, hiding the need for them. It made sense to me that if something was medical, it should be quiet and blend in.

But then the reading talked about glasses. I wear glasses, and I never think of them as a medical device. I think about which frames look good on me. I feel more confident in some pairs than others. The reading pointed out that glasses became successful not by being hidden, but by becoming a fashion item, something people are proud to wear. This was a big shift in my thinking. Why should a hearing aid or a prosthetic leg be any different? Why does it have to be hidden or made from “skin-colored” plastic?

I loved the story about Aimee Mullins and her collection of legs. She has running legs, elegant legs, and even beautifully carved wooden legs that she matches with her outfits. To her, they are not just tools; they are part of her identity and style. That idea felt  human to me. It’s not about ignoring the disability, but about embracing it as part of who you are and even celebrating it through design.

It also made me realize that good design isn’t just about solving a problem in the most efficient way. It’s about emotion, identity, and culture. When designers from fashion or art get involved, they bring new perspectives. They ask not just “How does it work?” but “How does it make you feel?”

I now  see disability not as something to be corrected or hidden, but as a source of inspiration for more creative, more personal, and more beautiful design. It made me hope that in the future, everyone will have the option to use assistive products that they don’t just need, but truly love.

Week 11

Exercise 1: Arduino to p5.js  – Potentiometer

Concept

This project shows how a physical sensor can control a digital object. A potentiometer on the Arduino sends its values to p5.js. In the sketch, these values move an ellipse left or right on the screen. Turning the knob changes the ellipse’s position instantly.

Arduino Code

The Arduino reads the potentiometer value and sends it over serial:

int interval = 100; // milliseconds between readings

void setup() {
  Serial.begin(9600); // start serial communication
}

void loop() {
  int potValue = analogRead(A1); // read potentiometer
  int mappedValue = map(potValue, 0, 1023, 0, 255); // scale value
  Serial.println(mappedValue); // send value to p5.js
  delay(interval); // avoid flooding serial
}

p5.js Code

The p5.js sketch reads the serial data and moves the ellipse:

let positionX = 0;
let port;
let sensorValue = 0;

function setup() {
  createCanvas(400, 400);
  positionX = width / 2;

  port = createSerial();
  let used = usedSerialPorts();
  if (used.length > 0) port.open(used[0], 9600);
}

function draw() {
  background(220);

  let line = port.readUntil("\n");
  if (line.length > 0) {
    sensorValue = int(line.trim());
  }

  positionX = map(sensorValue, 0, 255, 0, width);

  ellipse(positionX, height / 2, 50, 50);
}

Video of the final result : Video

Exercise 2: p5.js-to-Arduino LED Control

Concept

This project demonstrates how a digital signal from p5.js can control hardware on Arduino. Using a p5.js sketch, we can send commands over serial to turn an LED on or off.

Arduino Code

The Arduino listens for serial commands from p5.js and controls the LED accordingly:

int ledPin = 13; // onboard LED

void setup() {
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600); // start serial communication
}

void loop() {
  if (Serial.available() > 0) {
    String command = Serial.readStringUntil('\n'); // read command
    command.trim(); // remove whitespace

    if (command == "ON") {
      digitalWrite(ledPin, HIGH); // turn LED on
    } 
    else if (command == "OFF") {
      digitalWrite(ledPin, LOW); // turn LED off
    }
  }
}

p5.js Code

The p5.js sketch sends commands to the Arduino based on events:

let port;
let connectBtn;

function setup() {
  createCanvas(400, 400);
  background(220);

  // initialize serial connection
  port = createSerial();
  let used = usedSerialPorts();
  if (used.length > 0) port.open(used[0], 9600);

  connectBtn = createButton("Connect");
  connectBtn.position(10, 10);
  connectBtn.mousePressed(toggleConnection);
}

// toggleConnection was referenced above but not defined; this minimal version
// opens the port (the browser asks which one) or closes it if already open
function toggleConnection() {
  if (!port.opened()) {
    port.open(9600);
  } else {
    port.close();
  }
}

function draw() {
  background(220);
}

function keyPressed() {
  if (port.opened()) {
    if (key === 'L') {         // press L to turn LED on
      port.write("ON\n");
    } else if (key === 'K') {  // press K to turn LED off
      port.write("OFF\n");
    }
  }
}

Video of the final result : Video

Exercise 3: Bidirectional Bouncing Ball with LED and Potentiometer

Concept

This project demonstrates bidirectional communication between Arduino and p5.js, combining physics simulation with hardware interaction. A ball bounces on the p5.js canvas, following simple gravity physics. The horizontal wind affecting the ball is controlled by a potentiometer connected to the Arduino, so turning the knob changes the wind force in real time. Every time the ball hits the floor, p5.js sends a “BOUNCE” signal to the Arduino, lighting up an LED. When the ball stops bouncing, p5.js sends a “STOP” signal, turning the LED off.

Arduino Code

Arduino reads the potentiometer and controls the LED based on serial commands:

 

int ledPin = 13;   // LED pin
int potPin = A1;   // potentiometer pin
int potValue = 0;

void setup() {
  pinMode(ledPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  // Read potentiometer value
  potValue = analogRead(potPin);
  Serial.println(potValue);  // send to p5.js

  // Check for incoming serial commands
  if (Serial.available() > 0) {
    String command = Serial.readStringUntil('\n');
    command.trim();

    if (command == "BOUNCE") {
      digitalWrite(ledPin, HIGH);   // light LED on bounce
    } else if (command == "STOP") {
      digitalWrite(ledPin, LOW);    // turn LED off when ball stops
    }
  }

  delay(50); // small delay for stability
}

p5.js Code

The p5.js sketch simulates a bouncing ball and sends commands to Arduino:

 

let velocity, gravity, position, acceleration, wind;
let mass = 50;
let drag = 0.99;

// SERIAL
let port;
let sensorValue = 0;

function setup() {
  createCanvas(640, 360);
  noFill();

  // physics
  position = createVector(width/2, 0);
  velocity = createVector(0, 0);
  acceleration = createVector(0, 0);
  gravity = createVector(0, 0.5 * mass);
  wind = createVector(0, 0);

  // serial
  port = createSerial();
  let used = usedSerialPorts();
  if (used.length > 0) port.open(used[0], 9600);
}

function draw() {
  background(255);

  // read potentiometer from Arduino
  let line = port.readUntil("\n");
  if (line.length > 0) sensorValue = int(line.trim());
  wind.x = map(sensorValue, 0, 1023, -1, 1);

  // apply forces
  applyForce(wind);
  applyForce(gravity);

  velocity.add(acceleration);
  velocity.mult(drag);
  position.add(velocity);
  acceleration.mult(0);

  ellipse(position.x, position.y, mass, mass);

  // bounce detection
  if (position.y > height - mass/2) {
    position.y = height - mass/2;

    if (abs(velocity.y) > 1) {
      velocity.y *= -0.9;
      if (port.opened()) port.write("BOUNCE\n"); // LED on
    } else {
      velocity.y = 0;
      if (port.opened()) port.write("STOP\n");   // LED off
    }
  }
}

function applyForce(force) {
  let f = p5.Vector.div(force, mass);
  acceleration.add(f);
}

Video of the final result : Video


Challenges and Improvements

In all three projects, the hardest part was making sure the Arduino and p5.js talked to each other correctly. Sometimes the data from the potentiometer didn’t come through clearly, which made the ellipse or bouncing ball move strangely or the LED turn on and off at the wrong time. It was also tricky to scale the sensor values so they controlled the visuals in a smooth way. In the bouncing ball project, making the LED light only when the ball bounced was challenging because the ball slowed down naturally. To improve, I could smooth the sensor readings to make movement less jumpy, make the ball bounce more realistically, or add more sensors or LEDs for extra interaction. These changes would make the projects work better and feel more interactive.

Ref: In the last exercise, ChatGPT helped us by explaining and guiding the use of p5.Vector, physics calculations, and serial communication, which made implementation easier.

Reflection

When I read “A Brief Rant on the Future of Interaction Design” by Bret Victor and its follow-up article, I began to think differently about how technology shapes the way we interact with the world. Victor argues that designers often focus too much on screens and touch gestures, forgetting that humans are physical beings with hands meant to explore, build, and create. He believes that the future of design should go beyond flat screens and should instead give people tools that let them use their bodies, senses, and creativity more naturally.

This idea really connected with me because, as someone interested in computer science and interactive media, I often think about how to make technology feel more human. I realized that many modern interfaces such as phones, tablets, and laptops limit our movements and creativity, even though they feel advanced. Victor’s point made me reflect on my own projects and how I can design in ways that allow people to move, touch, and engage with technology more freely.

The second  article deepened this idea by exploring how designers and engineers are beginning to create more physical and immersive experiences. It reminded me that innovation isn’t just about new technology but about how it connects with human experience. Reading both pieces made me want to think more carefully about how design can make people feel present and creative, not just efficient.



Week 10 Assignment

Concept

This week, Jayden and I  made a small musical instrument inspired by the piano. The piano is easy to understand and fun to play, so it was perfect for our Arduino project. We used three switches to play notes.  We also added a potentiometer that can change the pitch of the notes while playing. This means the player can make higher or lower sounds with the same switches. Using the switches and potentiometer together makes the instrument more interactive and fun, giving the player control over both the notes and their frequency.

Video : Link

Schematic

Code:

// Mini Piano: 3 switches, Piezo buzzer, optional pitch adjustment

const int buzzerPin = 8;      // Piezo buzzer
const int switch1 = 2;        // Key 1 (C)
const int switch2 = 3;        // Key 2 (E)
const int switch3 = 4;        // Key 3 (G)
const int potPin = A0;        // Optional: pitch adjust

// Base frequencies for the notes (Hz)
int note1 = 262;  // C4
int note2 = 330;  // E4
int note3 = 392;  // G4

void setup() {
  pinMode(buzzerPin, OUTPUT);
  pinMode(switch1, INPUT);
  pinMode(switch2, INPUT);
  pinMode(switch3, INPUT);
  Serial.begin(9600); // optional for debugging
}

void loop() {
  // Read potentiometer to adjust pitch
  int potValue = analogRead(potPin);        // 0-1023
  float multiplier = map(potValue, 0, 1023, 80, 120) / 100.0;  // 0.8x to 1.2x

  bool anyKeyPressed = false;

  // Check switches and play corresponding notes
  if (digitalRead(switch1) == HIGH) {
    tone(buzzerPin, note1 * multiplier);
    anyKeyPressed = true;
  }

  if (digitalRead(switch2) == HIGH) {
    tone(buzzerPin, note2 * multiplier);
    anyKeyPressed = true;
  }

  if (digitalRead(switch3) == HIGH) {
    tone(buzzerPin, note3 * multiplier);
    anyKeyPressed = true;
  }

  // Stop sound if no switch is pressed
  if (!anyKeyPressed) {
    noTone(buzzerPin);
  }

  delay(10); // short delay for stability
}

Github link : Piano

Reflection

This week’s assignment felt highly interactive, building upon previous projects while introducing multiple input elements and user-controlled parameters. We learned how to combine both digital and analog inputs to create a responsive musical instrument. For future improvements, we would like to implement a more realistic note duration system, where each note fades out naturally after being played, similar to a real piano. Additionally, adding more switches and possibly multiple buzzers could allow for more complex melodies and chords, enhancing the expressive possibilities.

Reflection- Week 9

When I read Physical Computing’s Greatest Hits and Misses and Making Interactive Art: Set the Stage, Then Shut Up and Listen, I started to think more deeply about what it really means to make something interactive. The first reading talked about how many beginner projects in physical computing repeat the same ideas, like using a sensor to make lights blink or to trigger sound. At first, I felt a little unsure because my own project also used simple tools like a light sensor and a button. But as I continued reading, I understood the real message: it’s okay to build something that has been done before, as long as I make it my own and give it a purpose. That made me feel more confident about my project. It reminded me that creativity doesn’t always mean doing something completely new, but doing something familiar in a meaningful or personal way.

The second reading focused on how interactive art should let people explore freely. It said that once we build something, we should “set the stage” and then step back, allowing others to interact with it in their own way. I really liked this idea because it made me think differently about my project. When I pressed the button or covered the light sensor, I realized that I was not just testing the circuit, I was actually engaging with it and discovering what it could do.

Both readings made me see that physical computing is not just about coding or wiring but it’s about creating an experience. It’s about giving people something they can explore and learn from on their own.

Analog Sensor

Concept

For this project, I used one analog sensor and one digital sensor (switch) to control two LED lights.

The analog sensor I used was a photoresistor (light sensor). It changes how much electricity passes through it depending on how bright the light in the room is. The Arduino reads this change and adjusts the brightness of one LED  when it’s dark, the LED gets brighter, and when it’s bright, the LED becomes dimmer.

For the digital sensor, I used a pushbutton connected to a digital pin. When I press the button, it turns the second LED on or off.

To make it different from what we did in class, I added a “night light” feature. When the photoresistor detects that the room is very dark, the button-controlled LED automatically turns on, like a small night light. When the light comes back, the button goes back to working normally.

This made my project more interactive and closer to how real sensors are used in everyday devices.

 Schematic of my circuit
It shows the Arduino connected to:

  • A photoresistor and 10 kΩ resistor forming a voltage divider to read light levels.

  • A pushbutton connected to a digital pin.

  • Two LEDs , one controlled by the light sensor and the other controlled by the button

Final Results

When I tested the circuit:

  • The first LED smoothly changed its brightness depending on how much light the photoresistor sensed.

  • The second LED turned on and off with the button as expected.

  • When the room got dark, the second LED automatically turned on, working like a night light.

It was a simple but satisfying project, and the extra feature made it stand out from the class example.

Video: video-url

Arduino Code

Part of Code I am proud of

void loop() {
  // --- Read photoresistor ---
  int lightValue = analogRead(lightPin); // 0–1023
  int brightness = map(lightValue, 0, 1023, 255, 0);
  analogWrite(ledAnalog, brightness);

  // --- Button toggle ---
  if (digitalRead(buttonPin) == LOW) {
    ledState = !ledState;
    delay(200);
  }

  // --- Night light feature ---
  if (lightValue < 300) { // If it's dark, auto turn on LED
    digitalWrite(ledDigital, HIGH);
  } else {
    digitalWrite(ledDigital, ledState ? HIGH : LOW);
  }

  // --- Print readings ---
  Serial.print("Light: ");
  Serial.print(lightValue);
  Serial.print(" | Brightness: ");
  Serial.print(brightness);
  Serial.print(" | LED State: ");
  Serial.println(ledState ? "ON" : "OFF");

  delay(200);
}

Github url: Github

Challenges and Further Improvements

While I was able to make both the analog and digital sensors work, I struggled a bit with arranging all the wires and resistors neatly on the breadboard. It took a few tries to get everything connected correctly.

I also had to test different threshold numbers for the night light feature to decide when the LED should automatically turn on. Once I found the right value, it worked well.

For my next project, I want to try using other kinds of sensors, like sound or temperature sensors, and make the circuit respond in new ways. I’ll also practice reading the code line by line to understand how each part works better before adding new features.