Week 13 – User Testing

Thankfully, I had finished most of the project before user testing – although it turned out that as soon as users introduced uncertainty into the system, bugs followed.

Since my final project builds upon my midterm, I invited people from outside the class to conduct the user testing, to avoid any preconceived knowledge about the system. The two participants also came from different backgrounds and varied in their familiarity with video games, which contributed to the comprehensiveness of the testing.

On balance, I would say the tests were successful in conveying the core mechanics of the project – from the PVE experience to the presence of the robotic hand. Both participants had (almost) no issues carrying out the experience (although people still stumble over how to configure the game with the keyboard, and a bug related to flow control popped up).

Other suggestions I collected include (but are not limited to):

  1. The purpose of the robot hands is a bit vague at the beginning. In the final presentation, a larger screen and a closer installation of the robot hands should make this more obvious.
  2. The meaning of ‘X’ and the ‘tactic engine’ was unclear. A more prominent notification has been added.
  3. The static AI difficulty may not be exciting enough (feedback from a video game player). The AI difficulty now increases gradually.
  4. The pace of the interaction is a bit quick – reusing the same input to control the game stages can cause accidental/undesired input. This is solved by adding buffer time between stage changes (see the sketch after this list).
  5. The game objective could be a bit unclear, given that some players skip or skim the instructions. Another global in-game notification has been added.
  6. The enemy may be too small on the screen. It has been moved closer to the player along the z-axis.
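The buffer-time fix in item 4 boils down to a cooldown timer. Here is a minimal p5.js sketch of the idea – the buffer length and stage variable are hypothetical, not the project's actual code:

```javascript
// Hypothetical cooldown: ignore repeated stage-change inputs until
// a buffer has elapsed, so one held key can't skip several stages.
const STAGE_BUFFER_MS = 800; // assumed buffer between stage changes
let lastStageChange = 0;
let stage = 0;

function keyPressed() {
  // Only advance if enough time has passed since the last change.
  if (millis() - lastStageChange < STAGE_BUFFER_MS) return;
  lastStageChange = millis();
  stage++; // stand-in for the game's real stage-transition logic
}
```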

Week 13 – Final Project User Testing

For user testing, I connected my p5.js sketch to the Arduino and tested the gameplay with a friend who had a general idea of the game. The user interacted with the Arduino buttons while the falling fruits synced to the beats of the song.

The feedback highlighted that the timing between the falling fruits and button presses felt responsive and engaging. The user appreciated the dynamic note generation for each fruit, which added a musical layer to the experience. Minor improvements were suggested for button responsiveness and visual feedback when a fruit was missed.

The next steps include building the box for the basket and integrating the arcade buttons, which I have already soldered, into it. This test successfully validated the game logic, the Arduino integration, and the user interaction, paving the way for completing the physical setup.
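For anyone curious how the catch timing could work, here is a hedged p5.js sketch – the hit window, basket position, and onButtonPress hook are assumptions, not the project's actual code:

```javascript
// Hypothetical hit-window check: when a button press arrives over
// serial, count a catch only if a fruit is near the basket line.
const HIT_WINDOW = 30; // pixels of tolerance around the basket
const basketY = 380;   // assumed y-position of the basket
let fruits = [{ y: 0 }]; // each fruit tracks its falling y-position
let score = 0;

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(255);
  for (const fruit of fruits) {
    fruit.y += 2;             // fruit falls each frame
    circle(200, fruit.y, 20); // placeholder fruit graphic
    if (fruit.y > height) fruit.y = -50; // recycle missed fruit
  }
}

function onButtonPress() { // assumed to fire on serial button data
  for (const fruit of fruits) {
    if (abs(fruit.y - basketY) < HIT_WINDOW) {
      score += 1;    // caught in time
      fruit.y = -50; // recycle the caught fruit off-screen
      return;
    }
  }
  // No fruit in the window: flashing a "missed" cue here would
  // address the feedback about visual feedback on misses.
}
```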

Video: finalProject_UserTesting

Pingu Pounce — User Testing

I have made significant progress with my work and am close to finishing. For my project, a game called Pingu Pounce, I’ve decided to add physical controls to make the overall experience much more enjoyable and interactive.

I have used four buttons – restart, left, right, and jump.

Here I have attached a video of a classmate playing my game, along with the overall feedback I received. This is not the final version of my project, as I have many improvements left to make – namely, implementing my buttons in a much more accessible way.

Over the remaining days, I will work on making my game more intuitive without having to explain the instructions too much – perhaps by adding labels to my buttons.
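As an illustration of the four-button control scheme, here is a hedged p5.js sketch – it assumes the Arduino sends one character per button over serial, and the codes, positions, and movement values are all hypothetical:

```javascript
// Hypothetical mapping from serial button codes to game actions.
let playerX = 200;
let gameOver = false;

function handleButton(code) { // assumed to receive one char per press
  if (code === 'R') {        // restart button
    gameOver = false;
    playerX = 200;
  } else if (code === 'L') { // move left
    playerX -= 10;
  } else if (code === 'D') { // move right
    playerX += 10;
  } else if (code === 'J') { // jump
    jump();
  }
}

function jump() {
  // stand-in for the game's real jump physics
}
```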

Video: IMG_8879

Sense of Music – User Testing

Today I conducted the alpha-version user testing of my final project, and I was more than happy to hear the reactions and feedback. The person did not know anything about my project except that it was connected to music. The reaction and a small part of the feedback can be found in the video below.

First of all, I want to note that I have not finalized my project yet, as some laser cutting and connection fixing still need to be done. Nevertheless, the core functions that represent my overall idea work as intended. My friend emphasized that the controls are quite intuitive, and he enjoyed exploring what each control stands for. Moreover, he managed to combine the effects in a way that even I had never tried, which is certainly a great thing: it shows that the project rewards the user’s curiosity and interest.

The thing I will work on in the next couple of days is the signage on the panel that hints at the purpose of the controls, because my friend, although he explored most of the functions, did not notice one of them. I will try to find a balance so that the explanations are not too heavy and still leave room for exploration. Thanks to a recommendation from my friend, I have also decided to borrow a set of headphones from the equipment center and connect them to my laptop during the IM showcase, creating a more ‘ambient’ experience around my project by fully immersing users in the music and the visualizations on the screen.

Overall, I loved hearing that my project is especially well suited for people who like music, because that is exactly what I kept in mind when coming up with this idea. Music enjoyers are my target audience, so I will do my best to finish the project as I envision it and, hopefully, give many people a chance to test it during the IM showcase.

Week 13: Final Project User Testing

User Testing Experience
I conducted a user test with my friend, creating a scenario where he had to interact with the clock from scratch. I completely reset the device and watched how he would handle the initial setup and configuration process.

Clock Setup

Clock interaction

What I Observed
The first hurdle came with the connection process. Since the clock needs to connect to WiFi and sync with the time server, this part proved to be a bit challenging for a first-time user. My friend struggled initially to understand that he needed to connect to the clock’s temporary WiFi network before accessing the configuration interface. This made me realize that this step needs better explanation or perhaps a simpler connection method.

However, once past the connection stage, things went much more smoothly. The configuration interface seemed intuitive enough – he quickly figured out how to adjust the display settings and customize the clock face. The color selection and brightness controls were particularly straightforward, and he seemed to enjoy experimenting with different combinations.

Key Insights
The most interesting part was watching how he interacted with the hexagonal display. The unusual arrangement of the ping pong balls actually made him more curious about how the numbers would appear. He mentioned that comparing different fonts was satisfying, especially with the lighting background effects.

Areas for Improvement
The initial WiFi setup process definitely needs work. I’m thinking about adding a simple QR code on the device that leads directly to the configuration page, or perhaps creating a more streamlined connection process. Also, some basic instructions printed on the device itself might help first-time users understand the setup process better.

The good news is that once configured, the clock worked exactly as intended, and my friend found the interface for making adjustments quite user-friendly. This test helped me identify where I need to focus my efforts to make the project more accessible to new users while keeping the features that already work well.

Moving Forward
This testing session was incredibly valuable. It showed me that while the core functionality of my project works well, the initial user experience needs some refinement. Going forward, I will focus particularly on making the setup process more intuitive for first-time users.

Final Project User Testing

User Testing

For the user testing, I think the fact that the same plushy from real life appeared on the p5 screen hinted at what users were supposed to do, especially since the sensors were not so subtly taped onto the back of the plushy. The physical side of the project is still in the making, which is why the sensors were on the back: I’m still figuring out the best way to embed the sensors without limiting the flex sensors’ capabilities. Ideally, I wanted to stuff the flex sensors, the Arduino board, and the breadboard into the plushy, but I realized how heavy it would be, and the jumper wires could easily disconnect. If I had stuck with the plan where the only visible thing was the Arduino cable connecting to my laptop, the project would be less self-explanatory and a bit harder to figure out, since the sensors wouldn’t be in plain sight.

I think there was some confusion about how exactly the plushy should be squeezed. I don’t think the user knew the intended interaction was hugging; he probably hugged the plushy simply because it’s a cozy plushy. If it were a different object, I don’t think his first instinct would be to hug it. He also ended up squishing the plushy between his palms, which was a really interesting way of playing with it.

I think the overall concept is working well – the pressure values and the p5 connection work properly – but there could be major improvements in where the sensors and Arduino components sit within the plushy, as well as a better p5 display and design, with instructions and maybe some context on the project. I would have to explain its purpose, because knowing that your tight hugs make the plushy’s day would definitely add more positivity to the interactive experience. I would also have to explain how exactly it works, which is a bit complicated: saying ‘just hug it’ is vague since I only have two flex sensors, so if users hug an area without a sensor, the p5 screen won’t reflect anything, which would be a disappointing outcome. Having a few brief words on the p5 screen before, or maybe during, the start would help make sense of the idea.
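As a hedged illustration of the p5 side – not the project's actual code – here is a minimal sketch that maps an incoming flex-sensor reading to an on-screen ‘happiness’ readout (sensorValue is assumed to be updated by the serial connection, with a 0–1023 analog range):

```javascript
// Hypothetical sketch: map a flex-sensor reading (assumed 0-1023
// from the Arduino's analog input) to the plushy's on-screen mood.
let sensorValue = 0; // assumed to be updated by a serial callback

function setup() {
  createCanvas(400, 400);
  textSize(24);
}

function draw() {
  background(255);
  // Stronger hugs bend the sensor more, raising the happiness level.
  const happiness = map(sensorValue, 0, 1023, 0, 100, true);
  text(`Plushy happiness: ${round(happiness)}%`, 40, 200);
}
```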

Final Project Progress: User Testing

Progress

Over the break, I made quite a few changes to the hardware component. I had originally planned to attach four wires to individual guitar strings and use a wired conductive guitar pick to create a controller in which strumming each string triggers a different in-game action. This failed because all the strings connect to a metal saddle, which makes them one conductive body and rules out individual actions. I ultimately decided to turn the guitar’s metal frets into the controller’s buttons. I covered three separate frets with copper tape for enhanced visibility and connected them to wires, doing the same for the metal saddle; when the player presses any string against a taped fret, a circuit is completed and a signal is sent from the Arduino to p5.js to trigger one of three actions – start, jump, and attack – depending on which fret is pressed.
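On the p5.js side, those signals could be dispatched with a small handler – the single-character codes and action functions below are assumptions, not the project's actual protocol:

```javascript
// Hypothetical dispatcher: one character per fret, sent by the
// Arduino when a string completes a circuit against a taped fret.
let gameStarted = false;

function handleFretSignal(code) {
  if (code === 'S') gameStarted = true; // fret 1: start
  else if (code === 'J') doJump();      // fret 2: jump
  else if (code === 'A') doAttack();    // fret 3: attack
}

function doJump() {
  // stand-in for the game's jump logic
}

function doAttack() {
  // stand-in for the attack-charge logic
}
```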

The overall structure of the p5 game is now complete. The only work left is to finish the visuals, as some of the power-ups and enemies are still displayed as basic black placeholders. A system also has to be set up to keep track of high scores across multiple games; a sketch of one approach follows.
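For the high-score system, p5.js's built-in storeItem()/getItem() functions, which persist values to the browser's local storage, would be one option – a minimal sketch under that assumption (the key name and endGame hook are hypothetical):

```javascript
// Hypothetical high-score persistence using p5.js web storage.
let highScore = 0;

function setup() {
  createCanvas(400, 200);
  // getItem() returns null on the first run, so default to 0.
  highScore = getItem('highScore') || 0;
}

function endGame(finalScore) { // assumed to run when a game ends
  if (finalScore > highScore) {
    highScore = finalScore;
    storeItem('highScore', highScore); // survives page reloads
  }
}
```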

User Testing

At first, it was confusing for users to figure out how to operate the guitar controller, as I hadn’t created any signage for the frets to indicate their functions. It took them a few seconds of fiddling before getting to the actual gameplay. The gameplay also wasn’t as straightforward as I thought; users were confused about how to increase their score (which is done by collecting power-ups to build up an attack charge and attacking enemies). I will have to work on the visual components of the guitar controller (adding small labels, etc.) and add an instruction menu before the game starts.

Final Project User Testing

Without clear instructions, the user was confused about how the buttons and notes corresponded to each other, so I had to explain the game before the user tested it. I think everything will be fine once I combine the game and the tutorial, and I may add an instructions page in case the user is still confused after the tutorial.

Everything in my experience is working well: the buttons correspond to the notes and sounds, a wrong button press deducts 5 points, and pressing the correct button for a note earns 10 points. I might also want to save the highest score so far, so the game becomes competitive; a sketch of the scoring rule follows.
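A hedged sketch of that scoring rule in p5-style JavaScript – currentNote and the handler are hypothetical stand-ins for the project's own logic:

```javascript
// Hypothetical scoring: +10 for the button matching the current
// note, -5 for any other button.
let score = 0;
let currentNote = 'C'; // assumed: the note currently on screen

function handleButton(note) { // assumed to receive the pressed note
  if (note === currentNote) {
    score += 10; // correct button for the displayed note
  } else {
    score -= 5;  // wrong button
  }
}
```

For saving the highest score, p5's built-in storeItem() and getItem() could do the job without an extra library.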

What I want to improve is the overall look: I want to laser-cut the box that will hold the Arduino, and I want to paint some of the buttons to match the music therapy note colors.

Final Project User Testing

Video: IMG_4223

User testing analysis: 

The basics of my project were easily understood by users: without needing many instructions, they all understood that the buttons correspond to the graphics shown in p5. Nothing particularly confused them regarding the mapping of the controls or the experience. However, the drum-selection dropdown was mostly ignored by everyone I showed the project to. I had to tell users that the feature was built in, so I think I can improve on making it clearer that the dropdown menu is there for choosing different drum kits; a sketch of one approach follows.
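One way to make the dropdown harder to miss, sketched with p5.js's createSelect() DOM helper – the kit names, placement, and styling here are hypothetical:

```javascript
// Hypothetical sketch: a labeled, prominently styled drum-kit picker.
let kitSelect;

function setup() {
  createCanvas(600, 400);
  textSize(16);
  kitSelect = createSelect();
  kitSelect.option('Acoustic Kit');
  kitSelect.option('Electronic Kit');
  kitSelect.option('Percussion Kit');
  kitSelect.position(20, 50);
  kitSelect.style('font-size', '18px'); // a larger target is harder to miss
  kitSelect.changed(() => console.log('kit:', kitSelect.value()));
}

function draw() {
  background(240);
  // On-canvas label pointing at the dropdown so users notice it.
  text('Choose your drum kit ↓', 20, 40);
}
```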


Week 10 Reading Response

The Future of Interaction

Reading this article really made me rethink how I interact with technology. I’ve always been fascinated by touchscreens and their simplicity, but I never stopped to consider how limiting they actually are. The author’s critique of “Pictures Under Glass” really hit me, especially when they described how flat and numb these interfaces feel. It’s true—I use my phone every day, but when I think about it, the experience of swiping and tapping feels so disconnected compared to how I interact with physical objects.

One part that really stood out to me was the comparison to tying shoelaces. It reminded me of when I was little and struggling to learn how to tie mine. My hands learned by feeling, adjusting, and figuring things out without needing to rely on my eyes all the time. That’s such a natural way for us to interact with the world, and it’s crazy to think how little that’s reflected in our technology today.

The section about making a sandwich was also a moment of realization for me. It’s such a simple, everyday task, but it involves so much coordination and subtle feedback from your hands—how the bread feels, the weight of the knife, the texture of the ingredients. None of that exists when I swipe through apps or scroll on a website. It made me wonder: why do we settle for technology that ignores so much of what our hands can do?

This article really inspired me to think differently about the future of technology. I agree with the author that we need to aim higher—to create interfaces that match the richness of our human abilities. Our hands are capable of so much more than sliding on glass, and it’s exciting to imagine what might be possible if we started designing for that.

Responses: A Brief Rant on the Future of Interaction Design

I found this follow-up just as thought-provoking as the original rant. The author’s unapologetic tone and refusal to offer a neatly packaged solution make the piece feel refreshingly honest. It’s clear that their main goal is to provoke thought and inspire research, not to dictate a specific path forward. I really appreciated the comparison to early Kodak cameras—it’s a great reminder that revolutionary tools can still be stepping stones, not destinations.

The critique of voice and gesture-based interfaces resonated with me too. I hadn’t really considered how dependent voice commands are on language, or how indirect and disconnected waving hands in the air can feel. The section on brain interfaces was particularly interesting. I’ve always thought of brain-computer connections as a futuristic dream, but the author flipped that idea on its head. Instead of bypassing our bodies, why not design technology that embraces them? The image of a future where we’re immobile, relying entirely on computers, was unsettling but eye-opening.

I love how the author frames this whole discussion as a choice. It feels empowering, like we’re all part of shaping what’s next. It’s made me more curious about haptics and dynamic materials—fields I didn’t even know existed before reading this. I’m left thinking about how we can create tools that actually respect the complexity and richness of human interaction.