Assignment #11 – Midterm Idea

For the final project, I am thinking of making a camera on wheels. I was inspired by a panther dolly, which consists of a track and a wheeled mount that the camera sits on. Essentially, someone pushes the dolly whenever a scene requires the camera to follow the character steadily along a line.

While there is usually a camera operator working on the dolly, I thought to myself, why not make a moving camera that follows the actor without needing someone to push it on the track?

That way, it can even make cellphone cinema easier!

The Arduino part would consist of a button to start and stop recording, an LED that indicates whether it is recording or not, and a piezo buzzer that makes a sound when the button is first pressed to start recording. I would also need to incorporate the wheels and a way to control them, perhaps with a remote?
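
A minimal Arduino sketch for the button, LED, and buzzer part might look something like this (the pin numbers and the toggle-on-press logic are assumptions for illustration, not a final design):

const int buttonPin = 2;   // pushbutton to start/stop recording (assumed pin)
const int ledPin = 13;     // LED shows whether we are recording
const int buzzerPin = 8;   // piezo buzzer beeps when recording starts

bool recording = false;
bool lastButton = HIGH;    // with INPUT_PULLUP, unpressed reads HIGH

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  pinMode(ledPin, OUTPUT);
  pinMode(buzzerPin, OUTPUT);
  Serial.begin(9600);      // lets the p5 sketch know when to record
}

void loop() {
  bool button = digitalRead(buttonPin);
  if (lastButton == HIGH && button == LOW) {   // new press detected
    recording = !recording;
    digitalWrite(ledPin, recording ? HIGH : LOW);
    if (recording) {
      tone(buzzerPin, 880, 200);               // short beep on start
    }
    Serial.println(recording ? "REC" : "STOP");
    delay(50);                                 // crude debounce
  }
  lastButton = button;
}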

The p5 part would essentially consist of a live video feed, and maybe a way to save the videos somewhere.

Now, I feel like this may be a bit too ambitious, especially since I have struggled more with Arduino than with p5. If I feel like I won’t be able to do it, I might tweak some elements. But I would be really interested in creating something like this!

Final Project Proposal – Dachi

Concept

For my Final Project, I haven’t yet committed to a concrete idea, but I know the general area I want to work in, as well as some potential projects.

I want to explore ML5.js, a machine learning library designed for the web. In short, it uses the graphical power of the browser to run machine-learning calculations. It works well with P5.js, which is why I want to use it. It is an open-source, beginner-friendly library that provides a high-level interface to TensorFlow. As someone with no machine learning experience, I will find it very interesting to explore in depth. It comes with pre-trained models for tasks such as image classification and recognition. For example, the HandPose model can detect hands and let you trigger actions with them.

The initial concept would be a canvas where you can draw with your hands in P5js with the help of HandPose. I could use an Arduino to add extra functionality: for example, embedding sensors in a cardboard panel that acts as a set of brush controls, with each sensor changing a different brush property. A cutout for an ultrasonic sensor could change the size of the brush as you move your finger closer or farther. Ideally, I would like to make the interface very seamless.
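
On the Arduino side, the brush-size control could be as simple as reading the ultrasonic sensor and streaming a value over serial for the P5js sketch to pick up. A rough sketch, assuming a standard trig/echo sensor and a 2–30 cm control range:

const int trigPin = 9;     // assumed wiring for the ultrasonic sensor
const int echoPin = 10;

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  // Fire a 10-microsecond trigger pulse and time the echo
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000);  // time out after 30 ms
  int distance = duration * 0.034 / 2;            // convert to centimeters

  // Map a 2-30 cm finger distance to a 1-100 px brush size (assumed range)
  if (distance >= 2 && distance <= 30) {
    int brushSize = map(distance, 2, 30, 1, 100);
    Serial.println(brushSize);                    // the P5js sketch reads this
  }
  delay(50);
}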

I could expand this idea so that the user creates generative art by varying different mathematical parameters with their hands; it does not have to be limited to just drawing.

Challenges

I would face many challenges going with this approach, as I have no prior experience. Moreover, I would have to combine the machine learning aspect with Arduino to create a seamless experience. My plan to overcome these challenges is to start learning as early as possible. This includes going through the library, learning the basics, and finalizing a plan I can stick with and dedicate much more time to. It should be challenging, yet achievable in the timeframe we are given. I am optimistic that this project will not only be very beneficial, since I will learn lots of new things, but will also be quite fun!

Week 11 – Response

The author raises valid concerns regarding the current trajectory of interface design, particularly the over-reliance on “Pictures Under Glass” – flat, touch-based interfaces that neglect the incredible capabilities of human hands. I wholeheartedly agree that this approach is limiting and fails to tap into the full potential of human-computer interaction.

The article beautifully highlights the richness of tactile experience and the intricate ways our hands manipulate and interact with the world. Reducing interaction to mere sliding gestures on a flat surface ignores this wealth of human capability and expression.

Instead of simply extrapolating current trends, the author urges us to envision a future where interfaces are dynamic, engaging our senses of touch and manipulation in ways that are intuitive and expressive. This vision resonates deeply and calls for a shift in focus towards technologies that leverage the full potential of human hands.

The article rightly emphasizes the limitations of purely visual interfaces. Haptic feedback technology, which recreates the sense of touch, holds immense potential in enriching user experience. Imagine feeling the texture of fabrics while online shopping, or experiencing the resistance of virtual objects in a design program.

The article challenges the dominance of flat screens and encourages exploration of 3D interfaces. Technologies like volumetric displays and mid-air haptics could enable us to interact with digital content in a more natural and intuitive manner, mimicking real-world manipulation of objects.

When I play videogames, I prefer playing with a controller versus a mouse and keyboard. This is for many reasons, but I specifically enjoy the haptic feedback I get when I play. It adds an extra dimension and an extra sense for me when I play a game, which is lost on a mouse and keyboard. I also appreciate the quality of the haptics on the Nintendo Switch, and how integral they are to many games, which just makes them more fun.

While the current state of research in these areas might be nascent, the author’s call for ambitious, long-term vision is crucial. We need researchers, designers, and engineers to be inspired by the possibilities beyond “Pictures Under Glass” and work towards interfaces that truly empower and enhance human capabilities.

Assignment #11 – Reading Response – Manipulate, Move, Feel No More

Our bodies are made to manipulate, to move, to feel. I mean, the author states that too. When these «technologies» of the future are introduced, they not only hinder our bodies’ abilities, but also replace them with much more harmful ways of being.
First, to manipulate. In a way, we still manipulate these technologies and those to come in the future. We turn on, we scroll, we tap… Perhaps, but how much agency do we actually have over what these technologies present to us? Particularly in the age of media, data privacy (or lack thereof), and consumption, these devices may not only become biased, but also use our own information against us. A hotel key card such as the one in the video, combined with all of one’s other passes and documents, can easily lay the groundwork for infringements of privacy. But it’s not as if this is not already present in some way. Apple Wallet, for example, can keep all your cards and passes in one place. Although this digital wallet may be efficient, how safe do we know it is? How do we know that we are not giving it control over us, instead of it being the other way around?
Simultaneously, this digitization of everything limits our movement. We become lazy. When I was traveling back to Abu Dhabi from Paris this January, I was surprised to find out at the airport that check-in now happened through a machine. Clerks were only available if an issue arose. And well, many of the people checking in were facing issues, and there were only two people assisting. So it seems that technology, now and in the future, under the pretense of efficiency, is just a way to lift work off of people who have a job to do – without even being efficient! Even the other day, I went to Mamsha and found out that you don’t get a parking ticket anymore. The camera at the entrance reads your plate number, which you then give to the restaurant so they can validate your «ticket». It’s all so lazy, isn’t it? And even though these two examples may sound very banal, the same applies to bigger things.
I think, at the end of the day, the issue is that quickness is prioritized over efficiency. Things are being transformed without actually taking into account how that will impact user capability AND behavior. They say, don’t fix what’s not broken. But not only do they «fix» what’s not broken, they also render the experience much harder than before.

Dachi Reading Response – Week 11

The first article, “A Brief Rant on the Future of Interaction Design,” argues that our current touchscreen interfaces are missing out on the rich tactile feedback and 3D manipulation that our hands are capable of. The author thinks future interfaces should engage our bodies more fully, like how we interact with the physical world.

The second article is a follow-up where the author addresses some common responses to his original rant. He clarifies that he’s not against touchscreens entirely, but rather sees them as a stepping stone to even more immersive and embodied forms of interaction. He also touches on why he’s skeptical of voice commands and gesture controls as complete replacements for physical interfaces.

Putting the two articles together, it seems like the core idea is that our technology should evolve to better adapt to our human capabilities, especially when it comes to how we use our hands to explore and manipulate objects. The author is calling for a future where digital interactions feel just as tangible and expressive as their real-world counterparts.

I actually agree with this vision, even if the exact path to get there is still unclear. I think there’s a lot of room for interfaces to become more tactile and responsive to our natural ways of interacting. At the same time, I wouldn’t want to completely abandon the benefits of touchscreens, like customization. In the real world, there are many examples where people prefer tactility. For example, lots of companies have been trying to introduce laptops with flat glass keyboards. This would undoubtedly make laptops thinner and add the option of a second screen, but the majority of users, myself included, would hate the typing experience, because there is something much more satisfying about physical keystrokes. (This is the main reason people get into expensive mechanical keyboards; I might be one of those people.)

On the other hand, despite BlackBerry’s popularity in the past, keyboards on phones have become a thing of the past, as the benefits are simply not worth major compromises like bulkiness, less space for other components, visuals, and so on. Plus, people treat phones mainly as texting machines; when it comes to serious typing, the real work happens on big screens.

The articles also raised some good points about the limitations of relying too heavily on voice commands or free-air gestures. While those modalities can be useful in certain contexts, they lack the physical feedback and precision of direct manipulation. So it seems like the ideal interface would offer a mix of input methods that complement each other and cater to different situations.

As someone who spends a lot of time interacting with screens, it’s an exciting prospect to imagine a future where the digital world is combined with a world of haptics to deliver an immersive experience, something akin to Ready Player One.

Week 11 – Final Project Proposal

Concept:

This project involves building a radar system using an Arduino, an ultrasonic sensor, and a joystick or buttons for control. The sensor data will be sent to p5.js, which will visualize the readings on a radar-like display, allowing you to detect objects in the surrounding environment.

Components:

  • Arduino: Controls the ultrasonic sensor and reads input from the joystick/buttons.
  • Ultrasonic Sensor: Emits ultrasonic pulses and measures the time it takes for the echo to return, determining the distance to objects.
  • Joystick/Buttons: Provides input for controlling the servo motor that rotates the ultrasonic sensor.
  • Servo Motor: Rotates the ultrasonic sensor to scan the environment.
  • Computer with p5.js: Receives data from the Arduino and generates the radar visualization.

Implementation:

  1. Hardware Setup:
    • Connect the ultrasonic sensor to the Arduino.
    • Connect the servo motor to the Arduino.
    • Connect the joystick/buttons to the Arduino.
  2. Arduino Code (a rough sketch follows this list):
    • Initialize the sensor, servo, and joystick/buttons.
    • In the loop function:
      • Read the joystick/button values to determine the desired rotation angle for the servo.
      • Rotate the servo to the specified angle.
      • Trigger the ultrasonic sensor and measure the distance to the nearest object.
      • Send the distance and angle data to the computer via serial communication.
      • Ensure wires connecting the sensor cannot get tangled.
  3. p5.js Sketch:
    • Establish serial communication with the Arduino.
    • Receive distance and angle data from the Arduino.
    • Create a radar-like display:
      • Draw a circular background representing the scanning area.
      • Convert the distance and angle data to Cartesian coordinates (x, y) on the display.
      • Draw points or shapes at the calculated coordinates to represent the detected objects.
      • Implement features like:
        • Different colors or sizes for objects based on distance.
        • Trail effect to visualize the movement of objects.
        • Numerical distance display.
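
As a rough illustration of the Arduino side described in step 2, a sketch along these lines could drive the scan and stream readings to p5.js (the pin numbers, the joystick-to-angle mapping, and the "angle,distance" serial format are assumptions at this stage):

#include <Servo.h>

const int trigPin = 9;      // assumed ultrasonic sensor wiring
const int echoPin = 10;
const int joyXPin = A0;     // assumed joystick X axis
const int servoPin = 6;     // assumed servo signal pin

Servo scanServo;

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  scanServo.attach(servoPin);
  Serial.begin(9600);
}

void loop() {
  // Joystick position sets the scan angle (0-180 degrees)
  int angle = map(analogRead(joyXPin), 0, 1023, 0, 180);
  scanServo.write(angle);
  delay(30);                               // give the servo time to move

  // Measure the distance with the ultrasonic sensor
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000);
  int distance = duration * 0.034 / 2;     // centimeters

  // Send "angle,distance" so the p5.js sketch can plot a point
  Serial.print(angle);
  Serial.print(",");
  Serial.println(distance);
}

On the p5.js side, each reading can then be converted from polar to Cartesian coordinates (x = distance * cos(angle), y = distance * sin(angle)) before the point is drawn on the circular display.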

Possible Additional Features:

  • Multiple Sensors: Use multiple ultrasonic sensors for wider coverage.
  • Sound Effects: Play beeps or tones that vary in pitch or frequency based on the distance to objects.
  • Object Tracking: Implement an algorithm to track the movement of objects over time.

Challenges and Considerations:

  • Sensor Accuracy and Noise: Ultrasonic sensors can be affected by environmental factors and may require calibration.
  • Visualization Design: Create a clear and intuitive radar display that effectively represents the sensor data.

Final Project Concept – Khalifa Alshamsi

Concept: To create a wearable gyroscope controller connected to a spaceship game, in which there would be power-ups and better graphics, for the final project.

Objectives:
Design a Gyroscope Controller: Create a hardware prototype of a gyroscope-based controller that can detect and interpret the user’s hand motions as input commands.
Develop Interface Software: Write software that interprets the gyroscope data and translates it into game commands that are compatible with the game.
Integrate with a Space Simulation Game: Modify my existing Midterm game or develop a simple new game designed to work specifically with this controller.
User Testing and Feedback: Conduct testing sessions to gather feedback and improve the interface and game interaction based on user responses.

Methodology
Hardware Development: Use a 3-axis gyroscope sensor to capture tilt and rotation. Design the controller’s physical form factor to be strapped around the hands.
Integrate Bluetooth for wireless connectivity with the P5js game, or fall back to a USB serial connection.

Software Development: Develop software to read data from the gyroscope sensor. Convert the gyroscope data into P5js-compatible input commands.
Ensure the software supports real-time interaction with minimal latency.
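
As a sketch of how the sensor data could reach the game, the Arduino might read a 3-axis IMU and stream tilt values over serial for the P5js game to map onto ship controls. The example below assumes an MPU6050 read over I2C and estimates pitch and roll from its accelerometer; the chip, registers, and serial format are illustrative choices, not a final decision:

#include <Wire.h>

const int MPU_ADDR = 0x68;   // default I2C address of an MPU6050 (assumed sensor)

void setup() {
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);          // power management register
  Wire.write(0);             // wake the sensor up
  Wire.endTransmission(true);
  Serial.begin(9600);
}

void loop() {
  // Read the six accelerometer bytes starting at register 0x3B
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, 6, true);
  int16_t ax = Wire.read() << 8 | Wire.read();
  int16_t ay = Wire.read() << 8 | Wire.read();
  int16_t az = Wire.read() << 8 | Wire.read();

  // Rough pitch/roll estimate in degrees from the accelerometer
  float pitch = atan2(ax, sqrt((float)ay * ay + (float)az * az)) * 180.0 / PI;
  float roll  = atan2(ay, sqrt((float)ax * ax + (float)az * az)) * 180.0 / PI;

  // Stream "pitch,roll" so the P5js game can steer the ship with it
  Serial.print(pitch);
  Serial.print(",");
  Serial.println(roll);
  delay(20);                 // roughly 50 updates per second to keep latency low
}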

Game Integration: Adapt a spaceship simulation game to respond to the new controller inputs. Implement basic gameplay features such as navigation, obstacle avoidance, and speed control using gyroscope inputs.

Testing and Iteration: Test the controller with users of varying gaming experience. Collect qualitative and quantitative feedback on usability, responsiveness, and enjoyment. Refine the hardware and software iteratively based on this feedback.

Week 11 – Theremin-like instrument

This project provided valuable insights into the potential of technology in musical expression and exploration. Despite its seemingly simple design, utilizing two push buttons for sound generation and an ultrasound sensor for frequency modulation, the project unveiled a range of creative possibilities and highlighted areas for further development.

The incorporation of the ultrasound sensor was particularly intriguing. By translating physical distance into audible frequencies, the sensor effectively transformed space into a controllable musical parameter. This interaction proved to be a captivating demonstration of how technology can facilitate new forms of musical interaction and expression. The concept invites further exploration, prompting questions about the potential for incorporating additional sensors to create a multi-dimensional instrument responsive to a wider range of environmental stimuli.

// Check if distance is within range for playing sound
if (distance < 200 && distance > 2) {
  // Map distance to frequency
  int frequency = map(distance, 2, 200, 200, 2000);
  tone(buzzerPin, frequency); // Play sound at calculated frequency
} else {
  noTone(buzzerPin); // Stop sound if distance is out of range
}

While the project successfully demonstrated the feasibility of generating sound through Arduino, the limitations of pre-programmed sounds became evident. The lack of nuance and complexity inherent in such sounds presents a challenge for creating truly expressive and dynamic musical experiences. This observation underscores the need for further exploration into sound generation techniques, potentially involving machine learning or other advanced algorithms, to expand the sonic palette and introduce more organic and evolving soundscapes.

Week 11 Reading | A Century’s Solution on the Future of Interaction Design

Victor ‘ranted’ in his post to remind us how our current definition of interactivity is limited to glassy touch screens that do not use our sense of touch to its full potential. He argues that texture, the ‘feel’ of things, is missing from these slidy screens.

Although he does not include solutions and was writing the article purely to raise awareness, I believe he asked important questions. One idea that I am fond of is to think of these designs as tools that react to our hands.

Tony Stark in his lab. Disney Marvel.

Hiro presenting nanobots. Disney.

Imagine this: a cross between the nanobots from Disney’s Big Hero 6 and the holograms used by Tony Stark in his research lab. The hologram could sketch schematics in thin air and then magically materialize out of the nanobots, becoming an interactive object that can be touched, felt, scrubbed, and molded, limited only by the user’s imagination.

Any sufficiently advanced technology is indistinguishable from magic. – Arthur C. Clarke

Although this kind of technology may still be centuries away, with a great deal of research, prototyping, and testing needed before it can be commercially mass-produced, perhaps we can take a point or two from Victor’s argument to steer current developments toward an alternative shape and set of features.

Final Project Idea: SoundSculpt – Shaping Music with Your Hands

Concept:
SoundSculpt will be an interactive music controller that allows users to sculpt and manipulate sounds in real-time using physical objects and sensors. Inspired by the concept of sculpting, users can mold and shape musical compositions by interacting with a variety of tactile inputs connected to an Arduino board. The project aims to provide an intuitive and immersive platform for music creation, where users can explore their creativity through hands-on experimentation.

Inspiration:
The inspiration for this project comes from the desire to bridge the gap between traditional musical instruments and digital music production tools. Drawing inspiration from sculptors who mold physical materials into expressive forms, SoundSculpt will empower users to sculpt sonic landscapes using tangible inputs. The project also takes inspiration from the growing maker movement and DIY culture, where individuals seek to build and customize their own interactive devices.

How it Works:
Hardware Setup:
– SoundSculpt consists of a variety of physical inputs, such as rotary knobs, sliders, and sensors, all connected to an Arduino board.
– Each physical input is mapped to a specific parameter or function within the accompanying P5 sketch, allowing users to control various aspects of the music, such as pitch, volume, tempo, and effects.
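
As a very rough sketch of the Arduino side, each physical input could be read and sent as a labeled value over serial, which the P5 sketch then maps onto a sound parameter. The pin choices and the "name:value" serial format below are assumptions for illustration:

const int pitchKnobPin = A0;     // assumed rotary knob controlling pitch
const int volumeSliderPin = A1;  // assumed slider controlling volume

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Read each input (0-1023) and scale it to a musically useful range
  int pitch = map(analogRead(pitchKnobPin), 0, 1023, 100, 1000);   // Hz
  int volume = map(analogRead(volumeSliderPin), 0, 1023, 0, 100);  // percent

  // Send labeled values so the P5 sketch knows which parameter changed
  Serial.print("pitch:");
  Serial.println(pitch);
  Serial.print("volume:");
  Serial.println(volume);

  delay(50);   // about 20 updates per second
}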

P5 Audio Generation:
– The P5 sketch generates and manipulates audio in response to the user’s interactions with the physical inputs.
– Users can select from a range of synthesized sounds, samples, and effects presets, which can be dynamically modulated and layered to create complex musical textures.
– P5’s audio libraries enable real-time processing and playback, ensuring that users experience immediate feedback as they interact with the controller.

Real-time Interaction:
– As users manipulate the physical inputs, the Arduino continuously reads sensor data and sends corresponding control signals to the P5 sketch.
– Users can experiment with different combinations of inputs to sculpt evolving musical compositions, adjusting parameters on-the-fly to explore new sonic possibilities.
– SoundSculpt encourages exploration and improvisation, allowing users to discover emergent patterns and melodies through playful interaction.

Visual Feedback:
– To enhance the user experience, SoundSculpt will provide visual feedback in the form of dynamic visualizations and graphical representations of sound parameters.
– Visual elements, such as waveform displays, frequency spectra, and animated effects, respond in real-time to changes in the music, offering users additional insight into the sonic transformations they’re creating.