Concept
For my final project, I am creating an interactive face-controlled visual system using p5.js, ml5.js, and Arduino.
Instead of using buttons, keyboards, or joysticks, the user interacts purely through facial expressions. Different expressions will trigger different visual reactions:
- Smile → blooming flowers
- Frown → dark clouds
- Open mouth → fireworks/spark bursts
- Wink → a visual character pops up
The main idea is to combine machine-learning-based facial expression detection (using ml5.js) with real-time Arduino interaction, creating a system where the computer responds visually while the Arduino can react physically using LEDs or other actuators.
System Structure
p5.js + ml5.js
- Accesses the webcam
- Detects expression intensities (smile amount, mouth-open distance, eye openness for winks; see the sketch after this list)
- Uses these values to control the visuals
- Sends the intensity values to the Arduino through serial
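To make the expression intensities concrete, here is a rough sketch of how the three values could be pulled out of an ml5 faceApi detection inside the p5.js sketch. It assumes ml5 v0.x, where a detection exposes per-expression probabilities under expressions and named landmark groups under parts; the specific landmark indices and the wink threshold are placeholder assumptions I still need to tune against real data.

```javascript
// Sketch only: assumes ml5 v0.x faceApi results with .expressions and .parts,
// landmark indices from the standard 68-point layout, placeholder thresholds.
// Uses p5's dist(), constrain(), abs(), so it lives inside the p5 sketch.
function getIntensities(detection) {
  // Smile: face-api.js already reports a 0-1 "happy" probability.
  const smile = detection.expressions.happy;

  // Mouth open: gap between inner upper and lower lip, normalized by mouth width.
  const mouth = detection.parts.mouth; // 20 points (68-point indices 48-67)
  const gap = dist(mouth[14]._x, mouth[14]._y, mouth[18]._x, mouth[18]._y); // assumed inner-lip centres
  const mouthWidth = dist(mouth[0]._x, mouth[0]._y, mouth[6]._x, mouth[6]._y); // mouth corners
  const mouthOpen = constrain(gap / mouthWidth, 0, 1);

  // Wink: one eye noticeably more closed than the other.
  const openness = (eye) =>
    dist(eye[1]._x, eye[1]._y, eye[5]._x, eye[5]._y) /
    dist(eye[0]._x, eye[0]._y, eye[3]._x, eye[3]._y);
  const wink =
    abs(openness(detection.parts.leftEye) - openness(detection.parts.rightEye)) > 0.15 ? 1 : 0;

  return { smile, mouthOpen, wink };
}
```

These three values would be computed once per detection callback and would drive both the on-screen visuals and the serial messages.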
Arduino
- Receives values like:
  - S:0.8 (smile)
  - M:0.4 (mouth open)
  - W:1 (wink)
- Controls physical feedback such as:
  - LED brightness
  - servo movement
- Sends sensor data back to p5.js to influence the animation
This creates bi-directional communication between p5.js and Arduino.
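To pin that message format down, here is a hedged sketch of how the p5.js side could frame the outgoing values and parse whatever comes back. The comma separators, the newline terminator, and the P: key for an example sensor reading are all assumptions I still need to settle together with the Arduino sketch.

```javascript
// Assumed framing: one newline-terminated line per frame, fields separated by commas.
// "P:" (e.g. a potentiometer reading coming back from the Arduino) is a hypothetical example.
function buildOutgoingLine(smile, mouthOpen, wink) {
  // e.g. "S:0.80,M:0.40,W:1\n"
  return `S:${smile.toFixed(2)},M:${mouthOpen.toFixed(2)},W:${wink}\n`;
}

function parseIncomingLine(line) {
  // e.g. "P:512" -> { P: 512 }
  const values = {};
  for (const field of line.trim().split(',')) {
    const [key, value] = field.split(':');
    if (key && value !== undefined) values[key] = Number(value);
  }
  return values;
}
```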
Progress So Far
Even though I’m still early in the build process, I have started working on the software pipeline, which is the foundation of the entire project.
Here’s what I have accomplished this week:
1. Set up p5.js + ml5.js face tracking
I created a basic p5.js sketch that uses ml5.faceApi() to detect the user’s face in real time.
Right now, I am printing the expression values (e.g., smile probability) to the console.
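The current sketch looks roughly like this. It assumes ml5 v0.x, where ml5.faceApi() wraps face-api.js and returns per-expression probabilities when withExpressions is enabled; the canvas size and options are just what I am testing with.

```javascript
let video;
let faceapi;
const options = { withLandmarks: true, withExpressions: true, withDescriptors: false };

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  faceapi = ml5.faceApi(video, options, modelReady);
}

function modelReady() {
  faceapi.detect(gotFace); // start the detection loop
}

function gotFace(err, results) {
  if (err) { console.error(err); return; }
  if (results && results.length > 0) {
    // e.g. { neutral: 0.01, happy: 0.95, sad: 0.0, surprised: 0.02, ... }
    console.log(results[0].expressions);
  }
  faceapi.detect(gotFace); // keep detecting, frame after frame
}

function draw() {
  image(video, 0, 0, width, height); // show what the model sees
}
```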
2. Defined the visual reactions
I have mapped facial expressions to visual events:
- smile → flowers
- frown → clouds
- open mouth → fireworks
- wink → pop-up character
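As a first pass, this mapping will probably be threshold-based, roughly like the sketch below. All thresholds are placeholders, and since face-api.js has no dedicated "frown" output, I am assuming the sad/angry probabilities as a stand-in.

```javascript
// Placeholder thresholds; expressions come from the faceApi detection,
// mouthOpen and wink from the getIntensities() helper sketched earlier.
function pickVisualEvent(expressions, mouthOpen, wink) {
  if (wink === 1) return 'character';               // pop-up character
  if (mouthOpen > 0.5) return 'fireworks';          // spark bursts
  if (expressions.happy > 0.7) return 'flowers';    // blooming flowers
  if (expressions.sad > 0.5 || expressions.angry > 0.5) return 'clouds'; // dark clouds
  return 'idle';                                    // no reaction
}
```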
3. Next Steps
My next goals are:
- Connect p5.js to Arduino using p5.serialport (rough sketch after this list)
- Map expression intensity to physical output (e.g., a brighter LED when smiling more)
- Make Arduino send sensor data back to p5.js
- Build the final visual layout and animations
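For the first of these steps, the p5.js side will probably look something like the sketch below. It assumes the p5.serialport library with the p5.serialcontrol bridge app running, and the port name is a placeholder I will replace with whatever serial.list() reports.

```javascript
let serial;
const portName = '/dev/tty.usbmodem14101'; // placeholder; real name comes from serial.list()

function setup() {
  createCanvas(640, 480);
  serial = new p5.SerialPort();        // talks to the p5.serialcontrol bridge app
  serial.open(portName);
  serial.on('data', gotSerialData);    // fires whenever bytes arrive from the Arduino
}

function gotSerialData() {
  const line = serial.readLine();      // one newline-terminated message, or "" if incomplete
  if (line.length > 0) {
    const sensors = parseIncomingLine(line); // helper sketched in the System Structure section
    console.log(sensors);              // later: feed these values into the animation
  }
}

// Called once per detection frame with the values from getIntensities().
function sendIntensities(smile, mouthOpen, wink) {
  serial.write(buildOutgoingLine(smile, mouthOpen, wink));
}
```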