Cover Photo


The ANXIETI-E robot car is a wireless robot designed to perform both autonomous movement, using object detection and edge detection, and manual movement, using radio control. The robot is controlled from, and its movement reverse-mapped onto, an interface in p5. The project aimed to create a robot capable of performing these functions while also being a fun and engaging build for me. The name, ANXIETI-E, was inspired by the robot WALL-E and by the fact that the robot makes everyone anxious when it operates on a table, rolling toward the edge just before its edge detection kicks in.

GitHub Code | P5JS Sketch | Schematic Diagram


The idea of creating the ANXIETI-E robot car was born out of my desire to build a robot that could perform both autonomous and manual movements. I was inspired by the character WALL-E, and I wanted to create a robot that could capture the same spirit of adventure and fun while also being functional. Before submitting the proposal for my project, I spent some time watching YouTube videos on the power of Arduino and sensors. I was amazed by the wide range of possibilities that these components provided, and I knew that I could incorporate them into my robot car.

Concept and Interaction Design

The ANXIETI-E robot car is a wireless robot capable of both autonomous and manual control. The project consists of a p5 interface that allows the user to choose between these two modes. The p5 sketch connects to an Arduino using the serial port, and this Arduino communicates with another Arduino attached wirelessly to the robot car through radios. This communication route is also traced back to p5 to allow bidirectional communication.

When the user selects the autonomous mode, all the sensors work together to ensure the smooth and safe movement of the robot car. The robot car is equipped with several sensors and motors all working together to ensure that the robot moves smoothly and safely, avoiding obstacles and edges on its way. When the manual mode is activated, the user can control the robot directly using the arrow keys on their computer.

In both modes, as soon as a mode is selected, the robot enters a playground screen in p5 where p5 receives data back from the robot and then maps its movement on the screen. This allows the user to see the robot’s movement and trajectory in real time, giving them full control and understanding of the robot’s actions.

The project combines creativity, innovation, and technology to create a fun and functional robot car. The integration of p5, Arduino, and sensors allows seamless communication and control between the user and the robot, making it an exciting and engaging project for anyone interested in robotics and technology.

Implementation (Challenges Included)

The implementation of ANXIETI-E was a journey of constant iteration and improvement. Starting with a rough prototype, I gradually added sensors, integrated bidirectional wireless control, and implemented p5.js for the user interface. In this section, we will go through each component of the project in detail, outlining the challenges I faced and the solutions I found for each, providing valuable insights for those looking to embark on similar projects in the future.

Rapid Prototyping

I found that rapid prototyping is an essential part of any product development process. It involves creating quick, rough versions of a product or system in order to test and refine its design. This approach allowed me to quickly iterate on my ideas and refine my designs based on the results of testing and feedback.

Building Frame

The exterior frame was built with wood. While going through the materials in the IM lab, I found a piece of wood that felt like a pretty good size for the robot's exterior. I scouted for another piece and found one a little larger than the first. I took out the handsaw and cut both pieces to the same size.

Sawing wood
Setting Lower Level

After I had both pieces for the frame, I started setting up the lower level by taping the DC motors in place and trying out different placements before committing to the final positioning. I even connected a small breadboard and motor driver and unit tested the DC motors by running a small piece of code to check that they worked in their placements. The code let me test the movement of the car in different directions using input from the Serial monitor. I attached 4 DC motors to a single motor driver by connecting the motors on each side together in parallel. This ensured that the robot could move in all 4 directions while still supplying adequate current to all four motors. After I made sure all the motors were working properly, I secured them in place using hot glue. I also added a bigger breadboard running along the middle of the lower level to facilitate easier connections.
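The direction test can be sketched as a pure function that maps a command character typed into the Serial monitor to the four input levels of an L298N-style motor driver. The command letters and the wiring (left pair on IN1/IN2, right pair on IN3/IN4) are assumptions for illustration, not the exact code I ran; on the Arduino, the four booleans would be written out with digitalWrite().

```cpp
// Hypothetical mapping from a Serial-monitor command to driver pin levels.
struct MotorPins {
  bool in1, in2, in3, in4;
};

MotorPins commandToPins(char cmd) {
  switch (cmd) {
    case 'F': return {true,  false, true,  false};  // both pairs forward
    case 'B': return {false, true,  false, true };  // both pairs backward
    case 'L': return {false, true,  true,  false};  // left back, right forward
    case 'R': return {true,  false, false, true };  // left forward, right back
    default:  return {false, false, false, false};  // stop
  }
}
```

Because the two motors on each side are wired in parallel, a single pair of driver inputs moves both wheels on that side at once.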

Taped motors
Unit Testing Motors
Securing DC motors
Setting upper level

After setting up the lower level, I started attaching the Arduino and another, smaller breadboard. I drilled some holes in the upper layer to let wires pass easily between both levels. I then used some velcro on top of the motors to attach both layers. This was essential, as it allowed me to open and access the lower layer anytime to add or change connections.

Drilling Upper Level
3D printing head

I was really set on giving the robot a face and thought about 3D printing a head for the distance sensor. I found this model on Thingiverse, sliced it using Cura, and set it to print. I was really happy with the final result, and people loved it!

3D printing robot head
Integrating Servo

The next step involved attaching a servo to the upper layer, which I hot-glued in place. I then had to mount the 3D-printed head onto it, since the head also functioned as the case for the distance sensor. This was a really tough job, as the top of the servo could barely hold the case in place, and everything I tried would fall off after a while. Finally, after a lot of failed attempts, I used a combination of hot glue and velcro to securely strap the robot's head onto the servo. I then unit tested the servo, making it look right and left with a small piece of code.

Robot Head

Sensors and Wireless Communication

The next phase of the project was to integrate all the sensors, power, and wireless radios, and to unit test each to make sure it worked properly on its own. Once each component worked individually, they were combined and tested together.

Integrating Distance Sensor – Collision Detection

To implement the collision detection mechanism, it was essential to use the data from the ultrasonic distance sensor. One challenge with this sensor is that its readings are quite noisy. To filter the noise out, I implemented an exponential moving average filter, which takes a weighted average of the new reading while keeping some weight on the previously smoothed value. This made the sensor detect collisions much more reliably.
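The filter reduces to a single line of arithmetic. This is a minimal sketch; the weight of 0.3 on the new sample is an illustrative value, not the one I tuned on the robot.

```cpp
// Exponential moving average over noisy ultrasonic readings.
// `alpha` weights the new sample; the remainder stays on the previous
// smoothed value, so spikes are damped while trends still come through.
float emaFilter(float previous, float reading, float alpha = 0.3f) {
  return alpha * reading + (1.0f - alpha) * previous;
}
```

On each loop iteration the smoothed value is fed back in as `previous`, so a single outlier ping only nudges the estimate instead of triggering a false collision.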

The collision detection and avoidance mechanism is a multi-step process. The distance sensor pings the distance continuously. Whenever a collision is detected (i.e., the reading from the distance sensor falls below the set threshold), the robot immediately stops and rolls back a bit. Meanwhile, the servo looks left and right, the distance sensor pings the distance on both sides, and the code evaluates which direction has the farther object. Based on this computation, the robot decides to turn either left or right and moves in that direction. If the distance in both directions is still below the threshold, it continues to move back and re-compute the distances until one exceeds the threshold.
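The decision at the end of the scan can be sketched as a small pure function, assuming the two scanned distances and the threshold as inputs (names here are my own, not from the project code):

```cpp
// Decision rule after a stop: pick the clearer side, or keep backing up
// if both scanned directions are still below the collision threshold.
enum class Turn { Left, Right, KeepBacking };

Turn chooseTurn(float leftDist, float rightDist, float threshold) {
  if (leftDist < threshold && rightDist < threshold) return Turn::KeepBacking;
  return (leftDist > rightDist) ? Turn::Left : Turn::Right;
}
```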

This video shows me unit testing the ultrasonic distance sensor and checking for collisions

Integrating Power

All this while, the robot had been powered from the laptop, which provided it with a constant power source. The next step was to make it wireless. I faced a lot of challenges here and tried a plethora of things before finally finding a proper solution with the guidance of Professor Shiloh. Initially, I plugged in a 9V battery, thinking it would be enough to supply the whole unit, but the robot started behaving erratically even though it seemed to be receiving more power than usual. This led me to move the servo to a separate 6V power channel, but the problem still wasn't solved. Upon checking the power connections with a multimeter, we realized that, due to the large internal resistance of the 9V battery, the board was only receiving about 5V or less, and the voltage kept dropping further. This caused the code to reset again and again, making the robot behave erratically. On Professor Shiloh's advice, I replaced the 9V battery with 6 AA batteries on top of the 4 AA batteries already powering the servo, bringing the total count to 10. I had not initially accounted for these and hadn't left enough space for so many battery holders. But with some careful placement (which involved cutting a 4-cell battery holder into 2) and a lot of soldering, I was finally able to fit everything in the lower layer while making sure nothing interfered with the DC motors or the wires.

Servo Batteries
Arduino Batteries
Integrating Radio: One Way (Arduino – Arduino)

The next step was to make the robot wireless. I implemented this using two NRF24 radios and the RF24 library. The radio was unit tested by modifying code provided by Professor Shiloh. After the unit testing was successful, one of the radios was integrated into the Arduino on top of the robot car. This was a particularly difficult process, as the entire wiring had to be changed over to the radio shield, and all the wires using pins 9-13 had to be shifted to the analog pins, since the radio occupied pins 9-13. The shield also provided a power channel for the servo, which made the process easier.

Integrating Radio
Integrating IR Distance Sensor – Edge Detection

Edge detection was one of the easiest things to implement. I knew it could be done with an IR distance sensor, because its beam is much more concentrated (unlike the ultrasonic distance sensor, which sends out a wide cone of signal). When I checked the IR sensor, it just worked, with me barely writing any code, and integrating it into the edge detection was just as smooth. The one problem I faced is that the sensor is tuned to its surrounding conditions: whenever I changed venues, it needed to be physically recalibrated for edge detection using a screwdriver. This happened to me in the exhibition setting, forcing me to recalibrate the edge detection manually.

Integrating all sensors together

The final part was to integrate all the sensors and motors together to create the autonomous part of the robot (unidirectional for now). One important thing here was to make sure there was no halt in the physical execution of the code due to Arduino's blocking delay function. I therefore designed a custom delay function that checks the difference in milliseconds and loops for that time period, allowing several sensors to keep detecting asynchronously so that delays do not disrupt the overall execution.
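A host-testable sketch of that custom delay looks like the following. The callbacks are my own abstraction for testing off the board: `nowMs` stands in for Arduino's millis(), and `pollSensors` stands in for the ultrasonic and IR reads that must keep running during the wait.

```cpp
#include <functional>

// Non-blocking "delay": spin until `ms` milliseconds have elapsed while
// repeatedly invoking a poll function, so sensor checks never stall.
void customDelay(unsigned long ms,
                 const std::function<unsigned long()>& nowMs,
                 const std::function<void()>& pollSensors) {
  unsigned long start = nowMs();
  while (nowMs() - start < ms) {
    pollSensors();  // e.g. ping the ultrasonic sensor, read the IR sensors
  }
}
```

On the Arduino the same structure is a while loop around `millis() - start < ms` with the sensor reads inlined in the loop body.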

This is how the code progresses: the radio receives a signal to enter autonomous mode, and the DC motors start running to move the robot forward. Both the ultrasonic distance sensor and the IR distance sensors keep recording data. If at any instant the distance pinged by the ultrasonic sensor drops below the threshold, the collision detection process is activated (as described in detail above). The same thing happens if any of the IR sensors returns a value of 1, indicating that an edge has been detected. In the aftermath, the robot stops and moves back a few steps while the ultrasonic sensor checks for safety, and then moves in the right direction.

The only unsolved issue that remains is that, since there are no sensors at the back of the car, it can sometimes be unsafe for the car to roll backwards at an edge and fall off, causing damage.

p5.js Component + Integration

Creating the p5 interface and integrating it with the Arduino code had its own process and set of challenges, but I was finally successful.

Basic Interface

To test the code, I first created a simple interface with 2 rectangles (signifying the manual and autonomous modes) and another screen for the robot playground. This was a pretty simple interface (a wireframe, really), built for testing, and didn't have any graphics. This is the link to the sketch.

p5-Arduino-Arduino Integration (One way)

I took the template code by Professor Aaron Sherwood and Professor Michael Ang for one-way serial communication between p5 and Arduino and combined it with the unidirectional radio communication code I had unit tested previously. This gave me a one-way communication route from the p5 canvas to the Arduino connected over the serial port; that Arduino was then able to transmit the message to the other Arduino on the robot car.

Using this mechanism, I was able to send messages from p5.js and start controlling the basics of the robot, like movement, directly from the p5 browser.

Creating Robot Playground

For the robot playground, I had an MSLogo Turtle sort of idea in mind. I started by creating the code for the movement of the turtle. It was actually much more difficult than it initially seemed, especially turning the triangle to specific angles. With some difficulty, I was finally able to control the turtle using the cursor (the direct arrow control was just for unit testing; the final version has the proper and better implementation). Then I went on to create a trail of spheres behind the moving turtle. Whenever the turtle went off course, it started again from the center of the screen.

This is the code for the sketch.

Final Integration

The final phase of development included the implementation of a few final but very important features.

Advanced Interface – Autonomous and Manual

The first step in this phase was to create a properly functional and aesthetic interface. The flow of the website remained similar to the one I unit tested, but this time it came with more instructions and better graphics. The second screen now had one extra button besides the back button: a save button that let the user save the canvas. I felt this feature would be pretty beneficial later, once the reverse mapping worked.

This is the link to the initial sketch.

Manual Mode

Coding the manual mode was also important, as it gave the user the ability to control the robot just like any other RC car. The process was quite straightforward: I used one number in the message string for the "autonomous" or "manual" command and the other number to specify the direction based on the key pressed in p5. This way I was able to control the car remotely using the arrow keys of my computer.

Bidirectional Communication

Making the communication between the Arduino and p5 bidirectional was perhaps one of the most difficult tasks of the entire project. By the time I reached this part, I had already created a unidirectional communication route between p5 and both Arduinos, as well as a bidirectional route between p5 and the serially connected Arduino. The major remaining problem was making the radio communication bidirectional. The chip I used was capable of this, but the implementation was tricky, as the sender and receiver roles had to be continuously switched. This led to the big problem of packet losses and made the task quite difficult.

I tried several ways to mitigate this, including switching every loop, switching after success acknowledgments, and switching after a fixed timeout interval or a fixed number of retransmissions. But all of this just increased the complexity of the program and barely helped with the objective.

Finally, I fiddled around with code put up by Professor Shiloh, tried out several delay values, and was at last able to make the bidirectional communication work.

Final integration – Data Control and Reverse Mapping

After figuring out the bidirectional flow of information, the project was almost complete. I just had to settle on a specific format for information exchange. I chose a string consisting of 2 numbers separated by a comma. The data control schematic is as follows:

From p5 to arduino:

– 1st number selects the mode: autonomous or manual.

– 2nd number selects the direction to turn, if the mode is manual.

There was a similar reverse-mapping data flow, based on whose information the trail of the robot was drawn on the p5 canvas.

From arduino to p5:

– 1st number provides the mode information and whether there was a collision or edge detection.

– 2nd number provides the direction that the robot car takes after such a collision or edge detection.

Based on this information, both the robot and the reverse mapping on p5 were able to run smoothly.
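The "number,number" messages described above can be parsed with a few lines on either end. This is a sketch of the receiving side; the struct and function names are my own, not from the project code.

```cpp
#include <string>

// Parse a "mode,direction" message exchanged between p5 and the Arduinos.
struct Command {
  int mode;       // 1st number: mode (and, from the robot, collision/edge info)
  int direction;  // 2nd number: direction
};

bool parseCommand(const std::string& msg, Command& out) {
  size_t comma = msg.find(',');
  if (comma == std::string::npos) return false;  // malformed message
  out.mode = std::stoi(msg.substr(0, comma));
  out.direction = std::stoi(msg.substr(comma + 1));
  return true;
}
```

On the Arduino the equivalent is Serial.parseInt() twice (or a split on the comma), and in p5 a `split(data, ",")` on the incoming string.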

Potential Future Developments

  • Adding additional sensors for detection at the back of the robot
  • Making the p5 interface even more polished, with more options such as speed control.
  • Finding a more permanent fix for the power issue.
  • Giving the robot an even more polished look.

Reflections (Proud of!)

One of my proudest moments in the production phase was when I finally set up the bidirectional communication between p5 and the two Arduinos. It took me days, and I had tried almost everything to the point of giving up. I had a fallback version of the code ready in which I would ditch the idea of reverse-mapping data to p5. But I just didn't want to compromise on this feature and kept trying my best. And finally, it worked out!

Some of the other things I feel very proud of, even though they might seem small, were the effort I put into organizing the wires and connections in both layers and figuring out the best and most efficient use of space. I created two floors of connections and kept the wires visible outside to a minimum. I fit 10 AA batteries, 4 DC motors, and a long breadboard, along with a large number of wires, all in the lower level. I faced many challenges while making the robot, but the one thing that never caused a problem was wrong connections or failing individual sensors. This was because I made sure to unit-test everything multiple times before moving on to the next feature. I also made the connections quite secure by soldering them and covering the ends afterward.

When I initially wrote the proposal, the idea felt very ambitious for a person working individually to finish before the deadline, especially considering the complexity of the project. I was fully prepared to leave some parts out if required. But somehow, by starting early and putting a lot of time into the project every week, I achieved everything I had promised in my initial proposal and more. It felt surreal to present it at the IM showcase, and I feel super proud of myself for the things I have done and learned through this journey.

User Testing


The project was presented at the NYU Abu Dhabi Interactive Media End-of-the-Semester Showcase. These are some of the pictures and videos from the exhibition.


Some canvas sketches created by people

Final Project Proposal


The aim of this project is to create a robot that can avoid collisions and sudden edges while moving around. The robot will be controlled using an Arduino board and p5.js, a JavaScript library for creative coding. The robot will be equipped with an ultrasonic distance sensor and an infrared sensor to detect collisions and edges respectively. The robot will have two different modes of operation which can be controlled using serial communication between the Arduino and p5.js. One mode will allow the robot to roam freely, while the other mode will allow the user to control the robot using keys on the computer. The movement of the robot will be mapped onto the p5.js canvas.

Design: Arduino to p5 and p5 to Arduino

There are two Arduinos: one is connected to the robot car and the other to the laptop. They communicate with each other wirelessly. The laptop Arduino takes in data from p5 through serial communication and relays it to the Arduino on the robot car. This data drives the robot car and switches between the different modes. The robot car watches for collisions and edges and moves while avoiding them. The direction the car moves in is transmitted back wirelessly from one Arduino to the other (connected to the computer) and finally relayed back to p5 using serial communication.


In Class Exercises (Arduino + p5)

Team: Swostik, Naz, Aibar

Exercise 1

“Make something that uses only one sensor on arduino and makes the ellipse in p5 move on the horizontal axis, in the middle of the screen, and nothing on arduino is controlled by p5”

The code provided in the lecture notes for both Arduino and P5.js was used as-is, except for the draw function in P5.js. We modified the code to make the ellipse’s horizontal location change when the potentiometer was turned, and the vertical location change when the LDR input was changed. To prevent multiple visual responses from P5.js when the potentiometer and LDR were manipulated, we removed the background and fill functions.

Arduino Code

p5 Sketch

Demo Video


Exercise 2

“Make something that controls the LED brightness from p5”

We made some changes to both the Arduino and P5.js code for this task. To adjust the brightness of the LED, we utilized the keyboard’s Up and Down arrows as the input for the P5.js code. This changing value was then sent to the LED connected to PWM pin 5. We used analog write to this pin, which increased or decreased the LED value. If the value exceeded 255, it reset to 0 and vice versa.
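The wrap-around rule for the brightness value can be sketched as a tiny function (a sketch of the logic described above, not the exact exercise code):

```cpp
// Brightness wrap-around: past 255 the value resets to 0, and below 0
// it jumps back to 255, matching the Up/Down arrow behavior described.
int wrapBrightness(int value) {
  if (value > 255) return 0;
  if (value < 0) return 255;
  return value;
}
```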

Arduino Code

p5 Sketch

Demo Video

Exercise 3

“Take the gravity wind example and make it so every time the ball bounces one led lights up and then turns off, and you can control the wind from one analog sensor”

To complete the exercise, we made changes only to the P5.js code by incorporating the gravity wind example. The original Arduino code from the lecture notes remained unchanged. We divided the range of the potentiometer into three intervals, which set the wind vector to -1, 0, and 1 depending on its direction. These values are acceptable for the wind. We also implemented a collision detection flag. When a collision occurs, the LED on pin 2 is turned on. Whenever there is no collision, the LED is turned off.
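The three-interval mapping from the potentiometer to the wind vector can be sketched as follows. The cut points (thirds of the 0-1023 range) are an assumption; the exercise code may have used different boundaries.

```cpp
// Split the 10-bit potentiometer range into three intervals that set
// the wind vector to -1 (left), 0 (calm), or 1 (right).
int potToWind(int potValue) {
  if (potValue < 341) return -1;
  if (potValue < 682) return 0;
  return 1;
}
```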

Arduino Code

p5 Sketch

Demo Video

Final Project Idea – Collision and Edge Detection Robot with Arduino and p5.js


The aim of this project is to create a robot that can avoid collisions and sudden edges while moving around. The robot will be controlled using an Arduino board and p5.js, a JavaScript library for creative coding. The robot will be equipped with an ultrasonic distance sensor and an infrared sensor to detect collisions and edges respectively. The robot will have two different modes of operation which can be controlled using serial communication between the Arduino and p5.js. One mode will allow the robot to roam freely, while the other mode will allow the user to control the robot using keys on the computer. The movement of the robot will be mapped onto the p5.js canvas.


One of the Arduino boards will be connected to the computer; it receives serial information from p5 and transmits it to the other Arduino board, forming a two-way communication link between the robot and the computer. The ultrasonic distance sensor will be mounted on the servo motor, which rotates the sensor to scan the environment. The infrared sensor will be attached to the front of the robot chassis to detect edges. The robot moves around using regular motors and wheels. Whenever an edge is detected or the distance sensor senses a possible collision, the robot starts moving backward to avoid the imminent danger. It also transmits back its movement data, which is later displayed as a path on the p5 canvas.

HW 7: Musical Instrument – Irem Naz Celen & Swostik Pati

If you have ever wondered what it would be like to be a beginner DJ, this design is for you!

Link to Video


The inspiration for this project comes from a DJ controller. We wanted to design a physical interface that would allow a range of motions by the DJ. Our focus was on an easy and intuitive design that would mimic a basic DJ controller. The 2 requirements that had to be fulfilled were a beat-making mechanism as well as a musical mechanism in which the frequency can be controlled.


To make our inspiration a reality, we decided to produce musical notes at the frequencies of 7 tones through the buzzer, while making a beat maker using the servo. A push button connected to the servo slams the handle onto the wooden surface, imitating a beat sound. On the other side, another button switches between the 7 different notes played on the buzzer. A potentiometer feeding into the buzzer raises the frequency of the tone when twisted right and lowers it when twisted left.


In the implementation of the project, an Arduino Uno, a servo, 2 push buttons, a buzzer, and a potentiometer are used. The servo's rotating horn is taped to a plastic-handled screwdriver, while the servo itself is fixed on a wooden plate.

The first push button registers when it is pressed. A recorded push prompts the screwdriver attached to the servo's rotating horn to swing from a raised position and slam onto the wooden fixation plate. The sound produced is similar to a beat achievable from an electronic device.

On the other side of the board, the second push button and the potentiometer both feed information that changes the state of the buzzer, which produces sound constantly. The range of sound is modeled on a piano's octaves: there are 7 keys, each available in 7 different octaves, that the buzzer can produce according to the code. The frequencies of these notes are written into the tones double array. The push button for the buzzer (stationed towards the right side of the breadboard) switches between the 7 notes in the order B, C, D, E, F, G, and A. The potentiometer, in turn, shifts the same note to a higher pitch/frequency (visually, a higher octave of the same note on the piano) when turned towards the right. There are again 7 octaves: the leftmost position of the potentiometer gives the lowest frequency the note exists in, while the rightmost position gives the highest.

This is the code for the said implementation.

//Include Servo Library
#include <Servo.h>

int servoPin = 9;
int pos1 = 40;
int pos2 = 110;
int dt = 100;
Servo myServo;

//pushButton for servo
int pbPin = 13;
bool pbFlag = true;

//pushButton for note change
int notePin = 2;
bool nbFlag = true;
int currNoteI = 0;
int mappedValue;

int potPin = A0;

//notes array: 7 notes (B, C, D, E, F, G, A), each in 7 octaves
int tones[7][7] = {{31, 62, 123, 494, 988, 1976, 3951}, {33, 65, 131, 262, 523, 1047, 2093}, {37, 73, 147, 294, 587, 1175, 2349}, {41, 82, 165, 330, 659, 1319, 2637},
  {44, 87, 175, 349, 698, 1397, 2794}, {49, 98, 196, 392, 784, 1568, 3136}, {55, 110, 220, 440, 880, 1760, 3520}};

int tonePin = 4;

//slams the servo arm down onto the plate and lifts it back up
void moveServo() {
  myServo.write(pos2);
  delay(dt);
  myServo.write(pos1);
  delay(dt);
}

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);

  //  setting up servo
  myServo.attach(servoPin);
  myServo.write(pos1);

  //  setting up pushbuttons
  pinMode(pbPin, INPUT);
  pinMode(notePin, INPUT);

  //  setting up buzzer
  pinMode(tonePin, OUTPUT);
}

void loop() {
  // put your main code here, to run repeatedly:
  //pushbutton and servo implementation (flag filters out held presses)
  int pbVal = digitalRead(pbPin);
  if (pbVal == 1 && pbFlag) {
    moveServo();
    pbFlag = false;
  } else if (pbVal == 0) {
    pbFlag = true;
  }

  //potentiometer implementation: map 0-1023 to an octave index
  int potValue = analogRead(potPin);
  mappedValue = constrain(map(potValue, 0, 1023, 6, -1), 0, 6);
  //  Serial.println(mappedValue);
  //  Serial.println(potValue);

  //  pushbutton and note change implementation
  int nbVal = digitalRead(notePin);
  if (nbVal == 1 && nbFlag) {
    if (currNoteI < 6) {
      currNoteI++;
    } else {
      currNoteI = 0;
    }
    Serial.print("The current note is: ");
    Serial.println(currNoteI);
    nbFlag = false;
  } else if (nbVal == 0) {
    nbFlag = true;
  }

  tone(tonePin, tones[currNoteI][mappedValue]);
}


Schematic Diagram


The main challenge that we have come across was making the location of the push buttons and the potentiometer user-friendly while taking wires into consideration. We had to play around with different versions of the breadboard construction multiple times to make sure the musical instrument was approachable and easy to use.

Moreover, we also considered multiple circuit designs for the buzzer in order to control the volume by adding resistors. Upon multiple trials, we have decided not to add a resistor for maximum volume and clarity.

For the servo attachment, we looked around for a considerable time to find the most suitable extension for the optimum beat sound. Before deciding to attach the screwdriver to the rotational component of the servo, we considered plastic and wooden extensions. To get the right beat sound, we realized the attached piece needed more weight, hence the final screwdriver extension was chosen. After some trial and error, putting a wooden plate below enhanced the sound further.

Another consideration was given to the response time of the servo, by adjusting the delay time in the moveServo() function. The final adjusted value of the delay was given according to what sounded natural according to the frequency at which the push button of the servo was pushed.


Making this project together as a group was very enjoyable, which made the design process more exciting and fun. A part to improve on in the next projects would be in terms of having more power in choosing which tone to play on the buzzer rather than having to follow the set order that is programmed in the current version of the code. On a more general note, we would love to improve our ability to follow up what is happening in the circuit as it gets more complex, and making the schematic together definitely allowed us to work on that.

HW 6: Automating the Chrome Dinosaur Game

Have you ever played the Google Chrome Dinosaur Game while waiting for your internet connection to be restored? It’s a simple game that can become addicting as you try to beat your high score. But what if you could take it to the next level by automating the game with an Arduino circuit?

Link to Video


I was inspired to automate the Chrome Dinosaur Game after seeing similar projects on YouTube and other online platforms. I was fascinated by the idea of using an Arduino circuit to control a physical object in response to the game. It was a great opportunity to practice my programming and electronics skills while having fun.


The concept of my project was to automate the game using a light sensor and a servo motor. The light sensor detects changes in pixel brightness on the screen when obstacles like cacti show up. When an obstacle is detected, the servo motor presses the spacebar key on the laptop to make the dinosaur jump over the obstacle. A push button was added to the circuit to start and stop the entire automation process. Additionally, I added a red LED to indicate the circuit is on and a blue LED to track the progress of the game.


To implement my project, I used an Arduino Uno board, a light-dependent resistor (LDR) as my light sensor, a push button, a red LED, a blue LED, and a servo motor.

The push button provides a digital input that turns the circuit on or off. As soon as the circuit is turned on, the red LED starts glowing (driven with digitalWrite), and the light sensor, stuck to the laptop screen at a position where obstacles in the Chrome dinosaur’s path pass over it, starts reading light intensity values. When an obstacle passes the sensor, the reading rises above the set threshold (it rises because my browser is in dark mode, so obstacles are brighter than the background), and the Arduino signals the servo to rotate through a fixed angle and press the spacebar. This makes the running dino jump over the obstacle. The blue LED glows a little brighter with each jump, marking progress through the game.

This is the code for the said implementation.


//Dinosaur Chrome Automation - Analog and Digital IO

//Include Servo Library
#include <Servo.h>

// initializing global variables
int servoPin = 9;
int pos1 = 40;        // resting angle of the servo arm
int pos2 = 110;       // angle at which the servo arm presses the spacebar
int dt = 100;         // delay between servo movements (ms)
int count = 0;        // counts button clicks to toggle the circuit on/off
int pushbuttonPin = 13;
int pushbuttonState = 0;
int redLED = 12;
int blueLED = 6;
int blueBr = 0;       // current brightness of the blue LED
int lSPin = A2;       // light sensor (LDR) pin
int val;              // current light sensor reading
int thres = 110;      // threshold above which an obstacle is detected

bool switchONOFF = false;
bool flag = true;       // avoids multiple jumps for a single obstacle
bool switchFlag = true; // filters out the button being held down

Servo myServo;

void setup() {
  //  setting up the servo
  myServo.attach(servoPin);
  myServo.write(pos1);

  //  setting pinModes
  pinMode(lSPin, INPUT);
  pinMode(pushbuttonPin, INPUT);
  pinMode(redLED, OUTPUT);
  pinMode(blueLED, OUTPUT);
}

//controls the movement of the servo to press and release the spacebar
void spaceBarServo() {
  myServo.write(pos2);
  delay(dt);
  myServo.write(pos1);
  delay(dt);
}

void loop() {
  // checks for when the button is pressed
  pushbuttonState = digitalRead(pushbuttonPin);
  //  filters out only button clicks and removes the button being continuously pressed
  if (pushbuttonState == 1 && switchFlag) {
    //    toggles the switch on or off on alternate clicks
    if (count % 2 == 0) {
      switchONOFF = true;
    } else {
      switchONOFF = false;
    }
    count++;
    switchFlag = false;
  } else if (pushbuttonState == 0) {
    switchFlag = true;
  }

  if (switchONOFF == true) {
    //    denotes that the circuit is ON and listening for obstacles
    digitalWrite(redLED, HIGH);

    //reads the light sensor value
    val = analogRead(lSPin);
    //    checks if the light sensor value crossed the threshold - denoting an obstacle detected on screen
    if (val > thres && flag) {
      // incrementing the brightness of the blue LED
      if (blueBr + 2 < 255) {
        blueBr += 2;
      }
      // calling the servo function to hit the spacebar
      spaceBarServo();
      // turning on the blue LED with the specific intensity
      analogWrite(blueLED, blueBr);
      //      flag is used to avoid multiple increments on one jump above the threshold
      flag = false;
    } else if (val < thres) {
      flag = true;
    }
  } else {
    //    resetting everything if the circuit is turned off
    blueBr = 0;
    analogWrite(blueLED, blueBr);
    digitalWrite(redLED, LOW);
  }
}



Schematic Diagram


One of the biggest challenges I faced during this project was getting the light sensor to accurately detect the obstacles. I had to experiment with different threshold values to make sure the sensor was triggered at the right time. It took a lot of trial and error to adjust the sensor and threshold values to my ambient conditions. Switching off a light in the room caused some noticeable changes in its behavior.

Additionally, I had to place the servo motor in exactly the right position so that it firmly pressed the spacebar key at the right time without damaging it.

A very strange challenge that I had never expected was the inconvenience of working on the computer once the servo and the light sensor were stuck to my laptop. It took a lot of time to work around this, but I was finally able to get the work done.


This project was a great learning experience for me. I learned a lot about programming and electronics, and it was fun to see my circuit in action. I was also surprised at how well my circuit worked in the end. One thing I would like to improve on is building cleaner circuits to prevent bugs in my future work.

HW 5: Creative Switch using a trash bin


Making a creative hands-free switch that could turn on a light inside a trash can served as the motivation for this project. The intention was to design a switch that could be activated with the foot or another body part, eliminating the need to touch the trash can with your hands. This could be useful in situations where your hands are dirty or full, and you want to avoid the spread of germs or in the case where the room is dark.


The concept for the switch was to use two pieces of aluminum foil attached to any two surfaces of the trash bin that come in contact as soon as the pedal of the trash bin is pressed. When the bin is opened, the foils come into contact, completing the circuit and allowing current to flow through an LED. The switch is powered by an Arduino board, and the LED provides visual feedback to let the user know when the switch has been activated.


To implement the switch, I first attached two conductive aluminum foil plates to the lining of the trash bin, making sure they were securely attached and wouldn’t come loose over time. I then ran a wire from one foil to the positive terminal of the LED and a wire from the other foil to the 3.3V pin of the Arduino board. I connected the negative terminal of the LED to the ground pin of the board, completing the circuit.

When the trash bin is opened, the two foils come into contact, allowing current to flow through the LED and light it up. The LED provides visual feedback to the user, letting them know that the switch has been activated.


Circuit Image
Aluminum foil plates


One of the main challenges I faced was making sure that the conductive foils and their wires were securely attached to the lining of the trash bin. I experimented with a few different adhesives before finding one that worked well. I also tried placing the foil in different parts of the trash can before finding a spot that stayed completely hidden from the user and reliably made contact every time the bin was opened. Another challenge was positioning the wires so that they wouldn’t come into contact accidentally and complete the circuit when the switch wasn’t being used.


I am pleased with how the switch turned out. It is simple yet effective, and it meets the goal of creating a hands-free switch for a trash bin light. I learned a lot about working with conductive materials and designing circuits, and I feel that this project could be a useful starting point for other similar projects in the future.

Midterm Project – Pong (Blindspot Version)


Sketch Link:


As a lover of classic arcade games, I have always been fascinated by the simple yet addictive gameplay of Pong. Originally developed in the 1970s by Atari, Pong started as a simple slot machine game before becoming the first commercially successful video game. Its simplicity, accessibility, and fun factor quickly made it a favorite among gamers of all ages.

My inspiration for creating a Pong game on p5.js stemmed from my desire to learn more about game development and to challenge myself to build a game that is simple, visually appealing, and fun to play. At the same time, I wanted to add my own creative touch while still retaining its classic feel.


The concept of the project is simple – to implement the Pong game. But I wanted to add some unique elements to make it a different experience from existing Pong video games. I decided to rev up the difficulty, and with it the adrenaline rush, by adding a “blindspot” in the middle of the screen where the ball disappears and continues along the same trajectory until it reappears from the other side of the blindspot. As the players progress through each rally, the blindspot region keeps growing and the ball keeps speeding up, making the game more and more difficult for both players.
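Conceptually, the blindspot boils down to a small visibility check plus a capped growth rule: the ball's position keeps updating every frame, but it is only drawn when it lies outside a band centered on the middle of the screen. The snippet below is an illustrative reconstruction of that logic, not the exact code from my sketch; the function and variable names are my own.

```javascript
// True when the ball should be drawn, i.e. when it is outside the
// blindspot band centered on the middle of the screen.
function isBallVisible(ballX, screenWidth, blindSpotWidth) {
  return Math.abs(ballX - screenWidth / 2) > blindSpotWidth / 2;
}

// After each rally the blindspot grows, capped so that it never
// swallows the whole screen.
function growBlindSpot(blindSpotWidth, increment, maxWidth) {
  return Math.min(blindSpotWidth + increment, maxWidth);
}
```

In the draw loop, the ball is simply skipped whenever the visibility check fails, while its position updates continue as usual, so it reappears on the far side of the band still on the same trajectory.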


The game implementation has two parts to it – the creative process and the technical implementation.

Creative Design

The creative design part of the implementation focuses on the visual aspects of the game, including the graphics, sound effects, animation, and user interface (color choices, font choices, etc).

The game’s colors are kept minimal with almost all the elements switching between absolute black and white.

There are three screens in total – the Start Screen, the Game Screen, and the End (Game Over) Screen.

The Start screen displays the rules of the game with as little text as possible, since players generally skip instructions. The controls are shown on a separate half of the page so that players at least take in the most essential information (the rest can be figured out intuitively). This screen also features a ball bouncing around, giving a feel of the game before the user starts playing.

The game screen has a background board with two rectangular paddles, one on each side of the screen, and a ball in the center. For the animation, I added smooth movement to the paddles and the ball to give the game a more polished feel. The ball moves in a straight line until it hits a paddle or wall (the top and bottom walls only), at which point it bounces off at an angle based on where it made contact. When the ball hits either the right or left wall, a point is scored, and the ball resets to the center of the screen. I have also made sure that at the start of every rally, the blindspot is temporarily disabled so that the players are able to view the ball at the center of the screen. As soon as the ball makes contact with a paddle, the blindspot is reactivated and the usual gameplay continues. I added a score tracker on the top of the screen for each player. The score is displayed in a large, bold font that is easy to read.
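The "bounces off at an angle based on where it made contact" behavior can be expressed as a pure function: the ball's vertical offset from the paddle's center is mapped to a deflection angle. This is a hedged sketch of the idea rather than the exact formula used in the game; the function name and the `maxAngle` parameter are illustrative.

```javascript
// Maps the contact point on a paddle to an outgoing velocity.
// The offset is -1 at the paddle's top edge, 0 at its center, and +1
// at its bottom edge; the deflection angle scales with the offset.
function bounceOffPaddle(ballY, paddleY, paddleHeight, speed, maxAngle) {
  const offset = (ballY - paddleY) / (paddleHeight / 2);
  const angle = offset * maxAngle; // in radians
  return {
    vx: speed * Math.cos(angle), // horizontal component, away from the paddle
    vy: speed * Math.sin(angle), // vertical component follows the offset
  };
}
```

A hit at the exact center (offset 0) sends the ball straight back horizontally, while hits near the edges deflect it by up to `maxAngle`, which is what makes paddle placement a strategic choice rather than a pure reflex.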

The final screen is the Game Over screen. This screen displays the player who won with a trophy. It also contains a button that gives the user the option to restart the game. The user doesn’t need to refresh the page for this.

I included several sound effects throughout the game to enrich the user experience: different sounds for the ball tapping a wall or a paddle, for a point being scored, and for game over. The fonts used in different places were carefully chosen. The background images imported into the sketch and used on the different screens were designed by me on Canva.

The game was tested several times, with several paddle speeds, ball speeds, ball speed increments, blindspots and blindspot increments to make sure there were no glitches or bugs. All in all, every design decision throughout the game was carefully thought about to enhance the overall user experience of the game.

Technical Design

The technical implementation of the game included designing the switching between game screens and implementing the mechanics of the Pong game itself with the proposed enhancements: paddle and ball movement, collision detection and response, sound effects triggered at the right moments, and so on.

The entire code was written in an object-oriented fashion, with classes for the ball and the paddles, and several functions were used to increase code reusability and modularity.

The code snippets below are used to implement important functionalities in the game.

Paddle Class

// Paddle class
class Paddle {
  constructor(x, y) {
    //     stores the position of the paddle
    this.x = x;
    this.y = y;

    //     flags to define whether the paddle is currently moving or not
    this.up = false;
    this.down = false;
  }

  //   displays the paddle (as a rect) based on its x and y positions
  draw() {
    rect(this.x, this.y, paddleWidth, paddleHeight);
  }

  //   updates the position of the paddle based on its movement flags
  update() {
    //     the paddle moves up or down by paddleSpeed units depending on which key is held
    if (this.up) {
      this.y -= paddleSpeed;
    }
    if (this.down) {
      this.y += paddleSpeed;
    }
    //     keeps the paddle within the screen
    this.y = constrain(this.y, paddleHeight / 2, height - paddleHeight / 2);
  }
}

Snippets from Ball Class

update() {
    //game over condition
    if (rightScore >= maxScore || leftScore >= maxScore) {
      //       storing the winner
      if (rightScore >= maxScore) {
        winner = 1; //Player A won
      } else {
        winner = 0; //Player B won
      }
      //       transitioning to the gameover screen
      screenView = 2;
    }

    //     updating the current position of the ball
    this.x += this.speedX;
    this.y += this.speedY;

    // check for collisions with the left paddle
    if (
      this.x - ballSize / 2 <= paddleWidth &&
      this.y >= leftPaddle.y - paddleHeight / 2 &&
      this.y <= leftPaddle.y + paddleHeight / 2
    ) {
      //       plays the collision sound here (call omitted in this snippet)
      //       increments the ball speed within a rally
      this.speedX = abs(this.speedX) + ballSpeedInr;
      //       increments the blindspot region - an upper limit prevents it from covering too much of the screen
      blindSpot = Math.min(blindSpotInr + blindSpot, maxBlindSpot);
    }

    // check for scoring - one side
    if (this.x - ballSize / 2 <= 0) {
      //       plays the scoring sound effect here (call omitted in this snippet)

      //       ball is brought back to the center of the screen
      this.x = width / 2;
      this.y = height / 2;

      //       speedX and speedY are reset to the original ball speed
      this.speedX = ballSpeed;
      this.speedY = ballSpeed;

      //       score of the corresponding player (the right player, when the ball exits the left edge) is incremented
      rightScore++;

      //       switches off the blindspot just till the next rally starts
      ballFlag = false;

      //       resetting the blindspot region
      blindSpot = 50;
    }
  }

Screen Switching

function draw() {
  // Switches between game screens based on the variable screenView
  if (0 === screenView) {
    // draws the Start Screen (drawing calls omitted in this snippet)
  }
  if (1 === screenView) {
    // draws the Game Screen
  }
  if (2 === screenView) {
    // draws the Game Over Screen
  }
}

Event Listeners

// listens for key presses to start motion of paddles
function keyPressed() {
  //   listens for up and down movements by player A using the up and down arrow keys
  if (keyCode === UP_ARROW) {
    rightPaddle.up = true;
  } else if (keyCode === DOWN_ARROW) {
    rightPaddle.down = true;
  }
  //   listens for up and down movements by player B using the "W" and "S" keys
  else if (keyCode === 87) {
    leftPaddle.up = true;
  } else if (keyCode === 83) {
    leftPaddle.down = true;
  }
}

// listens for when the keys are released to stop motion of paddles
function keyReleased() {
  if (keyCode === UP_ARROW) {
    rightPaddle.up = false;
  } else if (keyCode === DOWN_ARROW) {
    rightPaddle.down = false;
  } else if (keyCode === 87) {
    leftPaddle.up = false;
  } else if (keyCode === 83) {
    leftPaddle.down = false;
  }
}

// listens for mouse clicks
function mouseClicked() {
  //   transitions from the Start Screen to the Game Screen and starts the game
  if (0 === screenView) {
    leftScore = 0;
    rightScore = 0;
    ball.x = width / 2;
    ball.y = height / 2;
    screenView = 1;
  }

  //   transitions from the End Screen to the Start Screen so the user can restart a game without reloading. This happens on clicking the button on the end screen
  if (
    2 === screenView &&
    mouseX > width / 2 - buttonWidth / 2 &&
    mouseX < width / 2 + buttonWidth / 2 &&
    mouseY > height - 100 - buttonHeight / 2 &&
    mouseY < height - 100 + buttonHeight / 2
  ) {
    screenView = 0;
  }
}

Key Challenges and Solutions

  • Implementing different screen views and navigating around them – It took some time for me to figure out the navigation across different screen views – especially returning back to the “Start Screen” from the “Game over” screen without refreshing the sketch. In the end, I implemented a simple flag-based mechanism to do so.
  • Identifying the paddle and ball speeds, increments, and blindspot regions – If the paddles or the ball moved too slowly or too quickly, it would take away a lot from the overall user experience. Even the increments in speed and the blindspot region had to be chosen carefully: the game had to be the right amount of challenging and stimulating to hook the users. Too easy and they get bored; too difficult and they get frustrated. This might sound like simply picking a few numbers, but it has a huge impact on the overall gameplay and user experience, and I spent quite some time playing around and extensively testing several combinations before arriving at the values used in the final code.
  • Figuring out the Game Mechanics and the collision detection algorithm (the most frightening part) – I had talked about this in my previous post where I shared my progress in the midterm project. This was the most frightening part initially as the game was nothing without these two elements figured out and working perfectly.
  • Integrating more features while keeping the minimalistic feel intact – It was a challenge to keep the simple, minimalistic feel of the game intact while integrating more features into it. Balance was key, as I wanted to preserve the game as it has been for ages and still make the experience a completely new one.

Potential Developments

  • Giving the option to the players to add their names in the Start screen rather than going by “Player A” and “Player B”.
  • Integrating a database to store high scores and user accounts. This would allow me to also have a leaderboard in the end that could be fetched based on the data from the database.
  • Adding more elements to the game including game animations during scoring and powerups like freezing the opponent’s paddle for a while or combo points for a rally, etc.
  • Making the blindspot region appear at a random position on the screen, and making it move, would add a new layer of challenge to the game.


Throughout the development of this project, I have learned a lot about the process of game design and the intricacies that go into creating a simple yet engaging game. I had to experiment with different approaches and techniques to achieve the desired functionality. I was also able to create an algorithm that worked effectively and provided a satisfying gameplay experience. I made sure to focus both on the technical implementation and the creative design process of developing the game. The experience has been both challenging and rewarding, and I am proud of the final product that I have created.


Start Screen
Game Screen
Game Over Screen

Midterm Project Progress – Pong Game


Pong is a classic two-player game that simulates table tennis. The players use paddles to hit a ball back and forth across the screen. The objective of the game is to score points by getting the ball past the opponent’s paddle.

Project Concept and Design

The goal of this project is to create a basic Pong game that can be played by two players. The game will have two paddles, one for each player, and a ball that bounces around the screen. The players will use their paddles to hit the ball back and forth, with the objective of getting the ball past the opponent’s paddle. The game will have a scoring system that keeps track of each player’s score, and the first player to reach a certain number of points (e.g., 10) will be declared the winner. The game will also have sound effects to enhance the gameplay experience.

The Most Frightening Part (Implemented)

Initially, the most frightening part of this project was not knowing how the game would pan out with the collisions and movement of the ball and paddles. It was essential to get that part of the game right to ensure that the game could be implemented. To reduce this risk, I decided not to leave the implementation of the game to the end but started implementing the project slowly with one feature at a time. This approach allowed me to test each feature as I implemented it and make adjustments as needed to ensure that the game was working as expected. There sure are more improvements that can be added to this system, but a decent amount of work has already been done.

The paddles now move with the UP and DOWN arrows, and the “W” and “S” keys.

Future Enhancements

  • Adding a picture background in the play area.
  • Adding an instruction screen (with text) in the beginning.
  • Adding sound effects.
  • Adding a game termination check and a “Game Over” screen.
  • Adding a restart button in the “Game Over” screen that restarts the game.


HW4: Generative Text Video


The inspiration for this generative text art was to use text to render a live video. The goal was to use the brightness of each pixel in a video capture to determine the color of a letter displayed at the center of that pixel, and to keep rendering the video live. The code was inspired by a Coding Train coding challenge.


The concept was to create a generative piece of art that was unique to the video input. The text displayed would change based on the brightness of each pixel, resulting in a dynamic and ever-changing piece of art. The text used was limited to the string “ME” and the letters were displayed in the center of each pixel. The string can also be changed to say something else.


The implementation involved using the p5.js library to capture video input and display text on a canvas. The code loops through each pixel in the video capture, calculates the brightness of the pixel in grayscale, and uses that value to set the color of the displayed text. The text is displayed in the center of each pixel and changes based on the brightness of the pixel.
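The per-pixel walk can be sketched with two small helpers. Grayscale brightness here is taken as the average of the RGB channels, and pixel (x, y) of a p5.js pixels array starts at index (x + y * width) * 4; the p5-specific parts are shown as comments since they only run inside a sketch, and the helper names are my own, not from the original code.

```javascript
// Average-of-channels grayscale brightness of one pixel (0-255).
function pixelBrightness(r, g, b) {
  return (r + g + b) / 3;
}

// Index of the first (red) byte of pixel (x, y) in a p5.js pixels array.
function pixelIndex(x, y, width) {
  return (x + y * width) * 4;
}

// Inside draw(), the captured frame would be walked roughly like this:
// video.loadPixels();
// for (let y = 0; y < video.height; y++) {
//   for (let x = 0; x < video.width; x++) {
//     const i = pixelIndex(x, y, video.width);
//     const bright = pixelBrightness(video.pixels[i], video.pixels[i + 1], video.pixels[i + 2]);
//     fill(bright); // brighter pixels give brighter letters
//     text("ME".charAt(x % 2), x * cellSize + cellSize / 2, y * cellSize + cellSize / 2);
//   }
// }
```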


One of the main challenges that I faced was deciding on appropriate dimensions for the rendered video so that p5.js would not break while looping through the pixels.


Overall, this project was a successful exercise in creating generative art using code. The dynamic and ever-changing nature of the art is an interesting concept that can be expanded upon in future projects. The ability to use video input as the source for the art adds an extra level of interactivity and uniqueness to the piece.

Sketch Link: