Final Project – Design and Description

Concept:

The main concept of this project is a stationary interactive robot that the user can talk to, and that can think and reply. A conversation can also influence the robot’s mood, and the robot’s outward behaviour changes depending on that mood, which is chosen from a set of 4-5 discrete moods. The outward behaviours include a head that turns towards the user depending on where they are speaking from, ears that move when patted, eyes that blink as the character talks, changing facial expressions, and lights that indicate the robot’s current mood. Finally, I incorporate minimal movement using 2 DC motors.
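To make the idea of discrete moods concrete, here is a minimal p5.js sketch of how the mood set could be represented; the specific mood names, colours, and behaviour fields are placeholders chosen for illustration, not the final design.

```javascript
// Hypothetical mood table: each discrete mood maps to the outward behaviours
// described above (facial expression, mood-light colour, and how lively the
// head/ear movements should be).
const MOODS = {
  happy:   { expression: "smile",    lightColor: [0, 255, 0],     moveSpeed: 1.0 },
  curious: { expression: "wide",     lightColor: [0, 150, 255],   moveSpeed: 0.8 },
  sleepy:  { expression: "halfEyes", lightColor: [255, 200, 0],   moveSpeed: 0.3 },
  annoyed: { expression: "frown",    lightColor: [255, 0, 0],     moveSpeed: 0.6 },
  neutral: { expression: "flat",     lightColor: [255, 255, 255], moveSpeed: 0.5 },
};

let currentMood = "neutral";

// A conversation event (interpreted from the ChatGPT reply) can shift the mood,
// and every outward behaviour then reads from this shared table.
function setMood(name) {
  if (MOODS[name]) currentMood = name;
}
```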

P5 Design:

The code uses the p5.Speech library for speech recognition, the ElevenLabs API for realistic text-to-speech, and the OpenAI ChatGPT API for text generation and reasoning. I use regular expressions to interpret the prompt-engineered ChatGPT responses, and I maintain a list of moods together with a class that defines the actions for each mood type. Finally, the remaining code handles the handshake mechanism between the Arduino and p5: the program receives light-sensor and distance-sensor data from the Arduino, along with button-related state data such as a signal to start a new conversation.
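As an illustration of the regular-expression step, below is a minimal sketch of how a prompt-engineered ChatGPT reply could be parsed into a mood and the text to speak; the `[mood: ...]` tag format and the function name are assumptions for this example, not the exact prompt format used.

```javascript
// Assumed prompt format: the model is instructed to prepend a tag such as
// "[mood: happy]" before its spoken reply. A regular expression splits the
// tag from the text that is then sent to the ElevenLabs text-to-speech call.
function parseReply(raw) {
  const match = raw.match(/\[mood:\s*(\w+)\]\s*(.*)/s);
  if (match) {
    return { mood: match[1].toLowerCase(), text: match[2].trim() };
  }
  // If the model forgets the tag, keep the current mood and speak everything.
  return { mood: null, text: raw.trim() };
}

// Example usage:
const reply = parseReply("[mood: happy] Hi there! It's lovely to see you again.");
// reply.mood === "happy"
// reply.text === "Hi there! It's lovely to see you again."
```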

Arduino Design:

The Arduino design is relatively simple, with servo motors for head movement, eye blinking, and facial expression changes. I currently use plain LEDs, but may switch to different lights for a more aesthetic result. I use various sensors: a light sensor on the head to detect when the user pats the robot, and distance/light sensors on the body to sense how far away a person’s hand or other objects are. The Arduino passes all the relevant sensor data to the p5 program and receives all the relevant data back from it. Some minimal interactions, such as basic ear movements or any movements triggered purely by proximity, are handled directly on the Arduino itself.
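As a sketch of this data exchange from the p5 side, below is one way the incoming sensor message could be unpacked and the outgoing command assembled. The comma-separated message format, the field order, and the `readSerial`/`writeSerial` names follow the common p5-to-Arduino handshake pattern and are assumptions rather than the exact protocol used.

```javascript
// Assumed incoming line from the Arduino: "headLight,bodyDistance,buttonState\n"
let headLight = 0;       // light sensor on the head (patting detection)
let bodyDistance = 0;    // distance sensor on the body
let newConversation = false;

let moodIndex = 0;       // index into the mood list maintained elsewhere
let isSpeaking = false;  // whether the ElevenLabs audio is currently playing

function readSerial(data) {
  if (data == null) return;
  const parts = data.trim().split(",");
  if (parts.length === 3) {
    headLight = int(parts[0]);
    bodyDistance = int(parts[1]);
    newConversation = parts[2] === "1";
  }
  // Handshake: reply with the current mood and whether the robot is speaking,
  // so the Arduino can drive the servos and mood lights accordingly.
  writeSerial(moodIndex + "," + (isSpeaking ? 1 : 0) + "\n");
}
```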
