Final Project

Concept

During my visit to the US this November, I had the chance to see an original Claude Monet painting at Princeton University, and I believe that experience became the spark for this entire project. Standing in front of the canvas, I was struck not just by the colors but by how alive the surface felt, how the brush strokes seemed to dissolve depending on where I focused my eyes. There was a sense of motion and atmosphere that I had never fully appreciated in prints or online images.

After the initial direction of my project didn’t go as intended, I knew immediately that I wanted to recreate that feeling by capturing the energy of Monet’s mark-making with code, video, and interactive hardware. In a way, this project became my attempt to translate the emotional impact of that painting into a dynamic, generative medium of my own.

So, yes, my idea is simply a filter that turns live camera footage into a similar painterly style.

Photos / Video of Project Interaction

[Video: IMG_2924 (project interaction demo)]

How the implementation works

The implementation consists of three interconnected layers:

1. Hardware (Arduino Uno)

The Arduino reads five physical inputs:

    1. Potentiometer – controls global scaling of brush strokes
    2. Button 1 – switches to “Venice Sunset” palette
    3. Button 2 – switches to “Water Lilies” palette
    4. Button 3 – switches to “Sunrise + Viridian” palette
    5. Button 4 – triggers a photo snapshot

The Arduino packages these values into a comma-separated serial string and sends them to the browser.

2. p5.js

The browser receives the serial data via the Web Serial API.
The p5.js sketch then:

    1. Analyzes the webcam image pixel-by-pixel
    2. Computes local edge gradients using a Sobel operator
    3. Generates a flow field for stroke direction
    4. Combines brightness, edge magnitude, radial position, and noise to determine stroke color, size, and jitter
    5. Paints small stroke particles each frame, adjusting color mode and scale and triggering photo capture based on the hardware inputs (simplified sketch below)
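
To make that loop concrete, here is a heavily simplified sketch of the per-frame pipeline. It only does the brightness-and-noise part; the edge detection, flow field, and palettes are shown in the sections further down, and the structure here is my own simplification rather than the actual engine code.

let video;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  background(245);
}

function draw() {
  video.loadPixels();
  if (video.pixels.length === 0) return; // camera not ready yet
  for (let i = 0; i < 300; i++) {        // a few hundred marks per frame
    const x = floor(random(width));
    const y = floor(random(height));
    const idx = (y * video.width + x) * 4;
    const r = video.pixels[idx];
    const g = video.pixels[idx + 1];
    const b = video.pixels[idx + 2];
    const bright = (r + g + b) / 3;
    // noise-based jitter so the marks feel hand-made, not sampled
    const jx = (noise(x * 0.01, y * 0.01) - 0.5) * 8;
    stroke(r, g, b, 60);                 // translucent, so paint accumulates
    strokeWeight(map(bright, 0, 255, 1, 4));
    point(x + jx, y);
  }
}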


3. Interaction Design

Physical Inputs → Arduino → Serial → p5.js → Painter Engine → Screen Output

Arduino Code
const int POT_PIN = A0;
const int BTN1_PIN = 2;
const int BTN2_PIN = 3;
const int BTN3_PIN = 4;
const int BTN4_PIN = 5;  

void setup() {
  Serial.begin(9600);
  pinMode(BTN1_PIN, INPUT_PULLUP);
  pinMode(BTN2_PIN, INPUT_PULLUP);
  pinMode(BTN3_PIN, INPUT_PULLUP);
  pinMode(BTN4_PIN, INPUT_PULLUP);  
}

void loop() {
  int potValue = analogRead(POT_PIN);   // 0-1023 from the potentiometer
  // INPUT_PULLUP pins read LOW when pressed, so invert to make 1 = pressed
  int btn1 = !digitalRead(BTN1_PIN);
  int btn2 = !digitalRead(BTN2_PIN);
  int btn3 = !digitalRead(BTN3_PIN);
  int btn4 = !digitalRead(BTN4_PIN);

  // send one "pot,b1,b2,b3,b4" line per loop
  Serial.print(potValue);
  Serial.print(",");
  Serial.print(btn1);
  Serial.print(",");
  Serial.print(btn2);
  Serial.print(",");
  Serial.print(btn3);
  Serial.print(",");
  Serial.println(btn4);  
  
  delay(50);
}

Description of p5.js code and embed p5.js sketch in post

The p5.js sketch is responsible for everything visual. It uses several coordinated systems:

A. Webcam + Pixel Processing

Each frame is analyzed for brightness, edges (using a Sobel filter), and radial distance from the center. This information determines where strokes should appear.
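
A minimal version of that per-pixel analysis might look like the following. It assumes the `video` capture from the sketch above with loadPixels() already called each frame; the function names are mine, not the ones in my actual code.

function brightnessAt(x, y) {
  x = constrain(x, 0, video.width - 1);  // clamp so edge pixels are safe
  y = constrain(y, 0, video.height - 1);
  const i = (y * video.width + x) * 4;
  return (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
}

function analyze(x, y) {
  // 3x3 brightness neighborhood for the Sobel kernels
  const tl = brightnessAt(x - 1, y - 1), tm = brightnessAt(x, y - 1), tr = brightnessAt(x + 1, y - 1);
  const cl = brightnessAt(x - 1, y),     cr = brightnessAt(x + 1, y);
  const bl = brightnessAt(x - 1, y + 1), bm = brightnessAt(x, y + 1), br = brightnessAt(x + 1, y + 1);
  const gx = (tr + 2 * cr + br) - (tl + 2 * cl + bl);  // horizontal gradient
  const gy = (bl + 2 * bm + br) - (tl + 2 * tm + tr);  // vertical gradient
  return {
    bright: brightnessAt(x, y),
    edge: sqrt(gx * gx + gy * gy),              // Sobel edge magnitude
    radial: dist(x, y, width / 2, height / 2),  // distance from center
    gx, gy,
  };
}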

B. Flow Field

Edge gradients produce an angle for each region of the image. Brush strokes follow this angle, creating the illusion of form and direction.
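
In code this is a single atan2 plus a quarter-turn: the Sobel gradient points across an edge, so rotating it by 90° makes strokes run along the edge instead. A sketch, reusing analyze() from above:

function strokeAngle(x, y) {
  const { gx, gy } = analyze(x, y);
  return atan2(gy, gx) + HALF_PI; // rotate 90° so strokes follow the edge
}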

C. Monet-Inspired Color Logic

I handcrafted three color transformation modes that reinterpret facial features differently depending on shadows, mid-tones, and highlights.
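
As a rough illustration of one mode: the shadow branch below uses the actual numbers quoted later in this post, while the mid-tone and highlight numbers are stand-ins, since the real palettes are hand-tuned.

function waterLiliesPalette(r, g, b) {
  const bright = (r + g + b) / 3;
  if (bright < 120) {                 // shadows: cool, blue-violet
    return [r * 0.8 - 10, g * 0.85 - 5, b * 1.35 + 40];
  } else if (bright < 190) {          // mid-tones: nudge toward green-blue
    return [r * 0.9, g * 1.05 + 10, b * 1.15 + 15];
  }
  return [r * 1.05 + 15, g + 10, b * 0.9]; // highlights: warm cream
}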

D. Stroke Drawing

Hundreds of tiny “tracers” travel across the video frame, leaving curved paths like brush strokes. Noise adds natural variation.
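
A bare-bones tracer, building on strokeAngle() above, steps along the flow field and leaves a short segment behind it each frame; the real engine layers the palette logic, stroke scale, and lifespan on top of this skeleton.

class Tracer {
  constructor() {
    this.respawn();
  }
  respawn() {
    this.x = random(width);
    this.y = random(height);
  }
  update() {
    const a = strokeAngle(floor(this.x), floor(this.y)); // flow-field direction
    const px = this.x, py = this.y;
    this.x += cos(a) * 2;
    this.y += sin(a) * 2;
    stroke(0, 30); // the real sketch colors this with the Monet palettes
    line(px, py, this.x, this.y);
    if (this.x < 0 || this.x > width || this.y < 0 || this.y > height) {
      this.respawn(); // wandered off-canvas, start a new stroke
    }
  }
}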

E. Arduino Controls

    1. Potentiometer adjusts scale
    2. Buttons switch color modes
    3. Button 4 triggers takePhoto() (mapping sketched below)
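
Roughly, the parsed serial values drive the engine like this. The map() range and variable names here are my own guesses; only takePhoto() is the actual function name from the sketch.

let globalScale = 1;
let paletteMode = 0;
let lastBtn4 = 0;

function applyControls(pot, btn1, btn2, btn3, btn4) {
  globalScale = map(pot, 0, 1023, 0.5, 3); // potentiometer -> stroke scale
  if (btn1) paletteMode = 0;               // "Venice Sunset"
  if (btn2) paletteMode = 1;               // "Water Lilies"
  if (btn3) paletteMode = 2;               // "Sunrise + Viridian"
  if (btn4 && !lastBtn4) takePhoto();      // rising edge -> one snapshot
  lastBtn4 = btn4;
}

function takePhoto() {
  saveCanvas('monet-portrait', 'png');     // p5.js canvas snapshot
}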

Communication Between Arduino and p5.js

Communication relies on the Web Serial API, which allows a webpage to read from an Arduino.
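
A minimal reader for the Arduino’s "pot,b1,b2,b3,b4" lines could look like this. requestPort() has to be triggered by a user gesture such as a mouse click, the baud rate must match Serial.begin(9600) on the Arduino, and applyControls() is the sketch from the previous section.

let port, reader, buffer = '';

async function connectSerial() {
  port = await navigator.serial.requestPort(); // user picks the Arduino
  await port.open({ baudRate: 9600 });
  reader = port.readable
    .pipeThrough(new TextDecoderStream())      // raw bytes -> text chunks
    .getReader();
  readLoop();
}

async function readLoop() {
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += value;
    let nl;
    while ((nl = buffer.indexOf('\n')) >= 0) { // only act on complete lines
      const parts = buffer.slice(0, nl).trim().split(',').map(Number);
      buffer = buffer.slice(nl + 1);
      if (parts.length === 5) applyControls(...parts);
    }
  }
}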

What I’m proud of the most

What I’m most proud of is the generative painting engine. I wasn’t sure at first if I could combine live video input with a brush-stroke system that feels expressive instead of random, but the structure I ended up building feels genuinely satisfying. The way I compute brightness and gradient values from the webcam frame using a Sobel operator was a challenge, but finally writing:

// tl, tm, tr / cl, cr / bl, bm, br are the brightness values of the
// 3x3 neighborhood around the current pixel
let gx = (tr + 2 * cr + br) - (tl + 2 * cl + bl);
let gy = (bl + 2 * bm + br) - (tl + 2 * tm + tr);

and seeing the flow field come to life was one of the most rewarding moments of the whole project. That logic is what allows the brush strokes to “hug” the edges of a face, giving the portrait dimension and structure rather than just noise.

I’m also proud of the way the color transformation system turned out. I built three Monet-inspired palettes, but what makes them feel special is the conditional logic that adjusts them based on brightness, edge detail, and whether the region seems to belong to the “face” or the background. In the code, that meant writing a lot of nuanced rules, like:

// shadow regions: pull reds and greens down, push blues up,
// so shadows lean toward Monet's cool blue-violets
if (bright < 120) {
  r = r * 0.8 - 10;
  g = g * 0.85 - 5;
  b = b * 1.35 + 40;
}

How this was made

Generative AI (ChatGPT) played a supportive but important role throughout the making of this project. I used it to debug unexpected behavior in my Arduino and JavaScript code and to understand algorithms like Sobel filters and flow fields in more depth. AI also helped me brainstorm interaction ideas and refine parts of the painting logic when I was unsure how to structure them. For the written documentation, AI assisted me in organizing my thoughts, expanding sections, and polishing the final narrative so that it reads clearly and cohesively.

