For this reading, I agree with the author about the limitations of the types of interaction we use today. He criticizes today's digital devices as being limited to "Pictures Under Glass." This type of interaction allows the hands to feel and receive only limited signals, leaving out texture, weight, and many other factors. For example, playing piano on a screen does not give the same satisfying feeling as playing a real piano, with the tactile feedback of the keys under your hands.
However, I disagree with the author on two main points. First, the reading focuses too much on human capability without considering how we can use that capability to fit our needs. Of course we could have a more dynamic medium, but would it be better than the functionality of the iPhone? iPhones are designed to be compact and multifunctional, which I think is best achieved through the "Pictures Under Glass" style of interaction. Second, even though our hands are magnificent in many ways, other body parts can create interactions that are no less interesting. The eyes, for example, can judge the distance between objects and recognize colors, emotions, and more. "Pictures Under Glass" fits this capability of the eyes well. Hence, I think it is not a bad medium; it is simply a medium that makes use of a different human capability.
Bret Victor – A Brief Rant on the Future of Interaction Design
My initial instinct upon reading Bret Victor's article was to push back and think, "But you're not really improving anything." This reaction softened after reading his direct responses to these criticisms. Victor's defensiveness, in some ways, protects his vision, and arguably rightly so. But beyond that, the true point of his article lies in his challenge to our current conception of interactivity. He questions why we've limited ourselves to "single-finger" interaction, arguing that we're barely scratching the surface of what interactive technology could become. I found myself agreeing, especially when he said that if, 20 years down the line, all we had were glorified iPads, it would be a sign of stagnation. Now, over a decade since the article was written, we have indeed developed more advanced interfaces, like VR, AR, and even some early-stage holographic tech, but these technologies haven't become mainstream, and they haven't revolutionized interaction to the degree Victor imagined, which proves his point to a degree.
Reflecting on his perspective today, it’s clear he raises valid points. For the field of interactive design to truly evolve, we need critical voices like his, highlighting what’s lacking and pushing the boundaries of what’s possible. Yet, while I appreciate his vision, I also think it’s worth noting that the demand for fully immersive tech isn’t universal. Not everyone wants full-body interaction, and realistically, innovations often emerge only if there’s sufficient market interest. While technologies like VR and AR are groundbreaking, they remain largely inaccessible to many—especially those in marginalized or economically disadvantaged communities. In contrast, iPads and similar devices, while more limited, have found a place even in lower-income communities. Victor’s perspective is compelling and reminds us of the potential for interactive design, but it also underscores the need for accessibility and practical applications.
Thoughts on the Video:
The video accompanying Victor’s article showcases futuristic and visually stunning technologies, like transparent phones and computers. These concepts seem efficient, fast, and intuitive, presenting a vision of an accessible tech-forward society. But this vision quickly becomes complicated when you consider the societal implications. The choice to illustrate this world in a city like Johannesburg, for instance, inadvertently sidelines the broader realities of poverty and inequality. The technology depicted is only accessible to specific demographics, creating an illusion of widespread accessibility that falls short in practice. Can such tech really deliver on its promise of happiness if it only deepens capitalist divides?
Moreover, there’s an unsettling irony in the interactions depicted in the video. People engrossed in advanced technology appear disconnected and isolated, as though their devices detract from meaningful social interactions. This tension isn’t new; many feared that the rise of technology would eventually isolate us, even as it ostensibly connects us. The video seems to highlight this concern, making me reflect on whether such advancements genuinely enhance human happiness or merely serve to reinforce certain fears about a tech-saturated future.
In A Brief Rant on the Future of Interaction Design, Bret Victor argues that most interaction design today isn’t meaningful enough. He believes designers focus too much on making things look nice on screens rather than creating tools that help people think or solve real problems. This stood out to me because I agree that design should do more than just look good—it should make our lives easier or allow us to do more.
As someone studying computer science and interested in interactive media, I think Victor’s ideas are important. He makes me want to focus on designing tools that actually help users accomplish things rather than just looking nice. His views remind me that good design should be about creating real benefits for people, not just entertainment or convenience.
The responses to A Brief Rant on the Future of Interaction Design show different views. Some people agree with Victor and think design should be more useful, while others say his ideas are too difficult to make real. One response I read pointed out that many companies prefer simple screen designs because they make money more easily. This made me think about the challenges of aiming high in design while facing real-life limits, like budgets or technology.
These responses remind me that good design is a balance between what’s possible and what’s ideal. While Victor’s ideas are inspiring, they also show the need for practical solutions. Moving forward, I want to think more about how to push for meaningful design within real-world limits.
The WaveSynth project is inspired by the theremin, one of the first electronic instruments, invented in 1920 by Russian physicist Léon Theremin. Known for its eerie, vocal-like sound and its unique, touchless control, the theremin uses two antennas to detect the position of the player’s hands: one antenna controls pitch, and the other controls volume. By moving their hands through electromagnetic fields, players can create smooth, flowing sounds without touching the instrument. This expressive control has influenced generations of musicians and has become iconic in sci-fi, horror, and experimental music.
Concept
The WaveSynth is a gesture-controlled musical instrument that turns hand movements and environmental factors into dynamic sound. Designed to be both intuitive and expressive, the WaveSynth combines multiple sensors—ultrasonic, temperature, and a potentiometer—to create a cohesive interface.
The ultrasonic sensor detects hand distance, adjusting either pitch or volume based on the player’s proximity. The potentiometer serves as a mode selector, allowing the user to switch between pitch and volume control, as well as access different sound effects like vibrato, pulse, and temperature modulation. The temperature sensor adds an additional layer of subtlety, with ambient temperature shifts introducing slight pitch modulations, making the instrument responsive to its surroundings.
List of the hardware components used in the WaveSynth project:
Arduino Board (runs the sketch and reads all sensors)
Ultrasonic Distance Sensor (for hand-distance detection)
TMP36GZ Temperature Sensor (for ambient temperature-based modulation)
10k Ohm Potentiometer (for mode and effect selection)
Piezo Speaker (for sound output)
LED (for distance-based brightness feedback)
310 Ohm Resistor (for LED circuit)
Connecting Wires (for connections between components and the Arduino)
Breadboard (for prototyping and circuit connections)
Schematic Diagram:
Code:
// Pin definitions
const int potPin = A0;     // Analog pin for potentiometer
const int tempPin = A1;    // Analog pin for TMP36GZ temperature sensor
const int trigPin = 3;     // Digital pin for sonar trigger
const int echoPin = 4;     // Digital pin for sonar echo
const int speakerPin = 9;  // Digital pin for speaker
const int ledPin = 5;      // Digital pin for LED (PWM-enabled)

// Variables
int effectType = 0;  // Tracks which effect is active (0: none, 1: vibrato, 2: pulse, 3: temperature modulation)

void setup() {
  pinMode(speakerPin, OUTPUT);  // Speaker as output
  pinMode(trigPin, OUTPUT);     // Sonar trigger as output
  pinMode(echoPin, INPUT);      // Sonar echo as input
  pinMode(ledPin, OUTPUT);      // LED as output
  Serial.begin(9600);           // For debugging output
}

// Function to read distance from the sonar sensor
long readDistance() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Measure the pulse duration on the echo pin
  long duration = pulseIn(echoPin, HIGH);

  // Calculate distance in centimeters (sound travels ~0.034 cm/us; halve for the round trip)
  long distance = duration * 0.034 / 2;
  return distance;
}

// Function to read temperature from the TMP36GZ
float readTemperature() {
  int tempReading = analogRead(tempPin);         // Read analog value from TMP36
  float voltage = tempReading * (5.0 / 1023.0);  // Convert reading to voltage (0-5V)
  float temperatureC = (voltage - 0.5) * 100.0;  // TMP36: 500 mV offset, 10 mV per degree C
  return temperatureC;
}

void loop() {
  // Potentiometer to control mode and effect
  int potValue = analogRead(potPin);  // Read potentiometer (0-1023)
  bool pitchMode = potValue < 512;    // Below midpoint is pitch mode, above is volume mode

  // Determine the effect based on the potentiometer value ranges
  if (potValue < 256) {
    effectType = 0;  // No effect
  } else if (potValue < 512) {
    effectType = 1;  // Vibrato
  } else if (potValue < 768) {
    effectType = 2;  // Pulse
  } else {
    effectType = 3;  // Temperature modulation
  }

  // Read sonar distance and map to a lower pitch range for soothing tones
  long distance = readDistance();  // Distance in cm
  int baseToneValue = pitchMode ? map(distance, 5, 50, 150, 600) : 440;  // Distance sets pitch in Pitch Mode; fixed 440 Hz otherwise

  // Control LED brightness based on distance
  int ledBrightness = map(distance, 5, 50, 255, 0);  // Closer is brighter (5 cm = max brightness)
  ledBrightness = constrain(ledBrightness, 0, 255);  // Constrain within 0-255
  analogWrite(ledPin, ledBrightness);                // Set LED brightness

  // Read temperature and map it to a gentle pitch effect
  float temperature = readTemperature();
  int tempEffect = map(temperature, 20, 35, 20, 80);  // Map temperature to a subtle pitch offset

  // Debug output to Serial Monitor
  Serial.print("Distance: ");
  Serial.print(distance);
  Serial.print(" cm, LED Brightness: ");
  Serial.print(ledBrightness);
  Serial.print(", Pot Value: ");
  Serial.print(potValue);
  Serial.print(", Effect Type: ");
  Serial.print(effectType);
  Serial.print(", Temperature: ");
  Serial.print(temperature);
  Serial.println(" C");

  // Play sound based on the selected effect type
  switch (effectType) {
    case 0:  // No effect
      tone(speakerPin, baseToneValue);  // Basic tone based on distance
      break;
    case 1:  // Smooth Vibrato
      for (int i = 0; i < 20; i++) {
        int vibratoTone = baseToneValue + (sin(i * 0.3) * 10);  // Soft vibrato with low amplitude
        tone(speakerPin, vibratoTone, 50);                      // Short tone bursts for vibrato
        delay(20);                                              // Slightly slower delay for a soothing vibrato
      }
      break;
    case 2:  // Gentle Pulse
      tone(speakerPin, baseToneValue);  // Play base tone
      delay(100);                       // Pulse on-time
      noTone(speakerPin);               // Turn off sound briefly to create the pulse
      delay(100);                       // Wait before the next pulse
      break;
    case 3: {  // Temperature Modulation (braces keep the declaration local to this case)
      int tempModulatedTone = baseToneValue + tempEffect;  // Adjust pitch slightly based on temperature
      tone(speakerPin, tempModulatedTone);                 // Continuous tone with slight modulation
      delay(200);                                          // Keep tone smooth
      break;
    }
  }

  delay(50);  // Small delay for stability
}
Media:
Working Process:
Initial Setup and Calibration:
When powered on, the Arduino initializes all sensors and components, including the ultrasonic sensor, temperature sensor, potentiometer, and speaker.
The potentiometer’s position is read to determine the initial mode (Pitch or Volume) and effect (Vibrato, Pulse, Temperature Modulation, or None). The instrument is ready to interpret the player’s gestures and environmental inputs to start producing sound.
Gesture Detection and Distance Measurement:
The player positions their hand near the ultrasonic sensor and moves it to change sound properties.
The ultrasonic sensor measures the distance between the player’s hand and the sensor by sending out an ultrasonic pulse and timing how long it takes for the pulse to bounce back.
The distance value is calculated and then mapped to control either pitch or volume based on the selected mode:
Pitch Mode: The distance between the sensor and the player's hand sets the pitch of the sound. In the code above, closer hand positions map to lower pitches (around 150 Hz near 5 cm) and farther positions to higher pitches (up to roughly 600 Hz near 50 cm).
Volume Mode: In this mode, the distance is meant to control the loudness of the sound, with closer hands producing louder output. Because tone() drives the piezo at a fixed amplitude, the current code simply holds a 440 Hz base tone in this mode; one way to approximate genuine loudness control is sketched below.
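Since tone() on an AVR Arduino produces a fixed-amplitude square wave, real loudness control needs a workaround. Here is a minimal sketch of one common approximation (my own assumption, not the project code) that maps distance to a PWM duty cycle instead:

// Sketch only: approximate Volume Mode with PWM duty cycle, since tone()
// cannot vary amplitude. Assumes readDistance() and pins from the code above.
void playByDistance(bool pitchMode) {
  long d = constrain(readDistance(), 5, 50);    // clamp to the useful 5-50 cm range
  if (pitchMode) {
    tone(speakerPin, map(d, 5, 50, 150, 600));  // nearer hand = lower pitch
  } else {
    noTone(speakerPin);                         // release the pin from tone()
    int duty = map(d, 5, 50, 128, 5);           // nearer hand = larger duty = louder
    analogWrite(speakerPin, duty);              // ~490 Hz carrier; duty sets perceived loudness
  }
}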
Sound Modification through Effects:
The potentiometer serves as a selector for various sound effects that add dynamic layers to the base tone. Depending on the potentiometer’s position, the following effects are applied:
No Effect (Basic Tone): The sound responds directly to the pitch or volume based on the hand distance with no additional modulation.
Vibrato Effect: The instrument adds a wave-like oscillation to the pitch, producing a gentle, undulating sound. This effect is applied continuously, allowing the sound to vary smoothly.
Pulse Effect: The sound output is pulsed, creating a rhythmic on-and-off pattern. This effect provides a percussive quality, ideal for rhythmic play.
Temperature Modulation: Ambient temperature subtly adjusts the pitch, creating an atmospheric and evolving sound that changes with the surrounding environment. This effect responds more slowly, allowing the sound to naturally vary over time.
Environmental Adaptation with Temperature Modulation:
When Temperature Modulation is selected, the temperature sensor reads the ambient temperature. The Arduino then uses this temperature reading to modulate the pitch subtly.
For example, warmer temperatures gradually increase the pitch, while cooler temperatures lower it. This effect is gradual and blends naturally with the other sound properties, adding a unique, ambient quality to the instrument’s sound.
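As a quick sanity check of the numbers involved (my own arithmetic, using the formulas in the code): at about 25 °C the TMP36 outputs roughly 0.75 V, which analogRead() reports as about 153; readTemperature() then computes (153 × 5.0 / 1023 − 0.5) × 100 ≈ 24.8 °C, and map(24, 20, 35, 20, 80) turns that into a pitch offset of about 36 Hz on top of the base tone.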
Real-Time Sound Output:
The piezo speaker produces sound based on the interpreted data, transforming distance measurements, temperature readings, and selected effects into real-time audio.
The speaker continuously updates its output to reflect the current settings and environmental conditions, providing an immediate response to hand movements and mode changes.
As the player moves their hand closer or farther from the ultrasonic sensor, the sound changes instantly in pitch or volume. Additionally, adjustments to the potentiometer instantly modify the effect applied to the sound.
Interactive Feedback Loop:
The player continuously interacts with the WaveSynth by adjusting their hand position, changing the potentiometer setting, and experiencing the evolving sound.
This interactive feedback loop allows the player to dynamically control and modify the instrument’s output, creating an immersive musical experience that feels responsive and alive.
Future Improvement and Challenges
One of the primary challenges encountered was calibrating the sensors to respond smoothly and accurately to the user’s hand movements. Fine-tuning the pitch range and ensuring that the effects—such as vibrato and pulse—blended naturally with the sound output took several iterations to achieve a pleasing result.
The TMP36 temperature sensor was also tough to work with on the Arduino board.
Additionally, integrating digital sound synthesis or MIDI compatibility would enable users to connect the WaveSynth with other musical devices or software, greatly expanding its versatility as a tool for music creation.
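As a rough illustration of the MIDI idea (a sketch under the assumption of a standard serial MIDI connection, not something the current WaveSynth implements):

// Hypothetical sketch: send a MIDI note-on message over serial.
// Standard serial MIDI runs at 31250 baud, so setup() would call
// Serial.begin(31250) instead of 9600.
void midiNoteOn(byte note, byte velocity) {
  Serial.write(0x90);      // note-on, channel 1
  Serial.write(note);      // MIDI note number, e.g. 60 = middle C
  Serial.write(velocity);  // 1-127 sets loudness; 0 releases the note
}
// A mapped distance could then pick 'note' instead of a raw tone() frequency.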
Another possible enhancement could be the inclusion of LEDs or other visual feedback elements to indicate mode selection and provide dynamic light effects that correspond to the sound output. This would enhance the visual aspect of the instrument, making it even more engaging for live performances.
As a reader and technology enthusiast, I find Bret Victor's "A Brief Rant on the Future of Interaction Design" to be a thought-provoking critique of current trends in human-computer interaction. Victor's argument against "Pictures Under Glass" technology and his call for more tactile, three-dimensional interfaces resonate with my own experiences and frustrations with touchscreen devices. Victor's vivid descriptions of how we use our hands to manipulate objects in the real world highlight the limitations of current touchscreen interfaces. I've often felt that something was missing when using my smartphone or tablet, and Victor's examples of reading a book or drinking from a glass perfectly capture that sense of disconnection. The richness of tactile feedback we get from physical objects is indeed absent from our flat, glassy screens.
However, I believe Victor's critique, while insightful, doesn't fully acknowledge the benefits of touchscreen simplicity and accessibility. In my experience, touchscreens have made technology more approachable for a wider range of users, including children and the elderly. The ease and intuitiveness of swiping and tapping have democratized access to digital tools in ways that more complex interfaces might not. That said, I agree with Victor's call for more ambitious visions in interaction design. His example of Alan Kay envisioning the iPad decades before its creation is inspiring and reminds us of the power of long-term, visionary thinking.
As someone who uses technology daily, I'm excited by the possibility of interfaces that better utilize our hands' capabilities and even our entire bodies. Victor's argument extends beyond just hands to encompass our whole bodies, noting that we have "300 joints" and "600 muscles."
This resonates with my own experience of how naturally we use our whole bodies when interacting with the physical world. I've often felt constrained by the limited range of motion required to use current devices, and the idea of more holistic, full-body interfaces is intriguing. While I appreciate Victor's vision, I also recognize the practical challenges of implementing more tactile and three-dimensional interfaces: issues of cost, durability, and scalability would need to be addressed. Additionally, I believe the future of interaction design will likely involve a combination of approaches, including enhanced haptic feedback, hybrid interfaces that combine touchscreens with physical controls, and multimodal interaction incorporating touch, voice, and gesture.
Yup, I was watching "Minions: The Rise of Gru" over the weekend to find some inspiration for this week's project, and I sure found the basis of my project when I watched this:
Concept
I decided to use a Wheatstone bridge setup with two variable resistors (one photoresistor and "the" banana), take the absolute difference between the two measured voltages, and emit different sounds based on it. I also decided to use a potentiometer to determine the pitch of the notes.
Apart from that, the user also has the option not to use the photoresistor at all, by swapping the fixed 330 Ohm resistor into its place.
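For reference, the imbalance the Arduino reads follows the standard Wheatstone relation. Labeling the two fixed resistors R1 and R3 and the two variable elements R2 and Rbanana is my own assumption about the schematic: V1 − V2 = 5 V × (R2 / (R1 + R2) − Rbanana / (R3 + Rbanana)). When both legs divide the supply in the same ratio the difference is zero, and any change in the banana's resistance unbalances the bridge and shifts the pitch.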
Material Required
Arduino Board
Jumper Wires
Breadboard
Potentiometer
10k Ohm Resistors
330 Ohm Resistors
SEN-09088 Photocell
Banana
Schematic
Demonstration Video
Code
This Arduino code reads the voltage difference between V1 and V2, maps it to a frequency for the speaker, and uses a potentiometer to shift the octave.
// Pin configuration
const int v1Pin = A1;      // V1 of Wheatstone bridge
const int v2Pin = A2;      // V2 of Wheatstone bridge
const int potPin = A3;     // Potentiometer for octave control
const int speakerPin = 9;  // Speaker pin

// Variables for calibration and tuning
float baseFrequency = 220.0;       // Base frequency in Hz for a low A note
float maxVoltageDifference = 5.0;  // Maximum expected voltage difference (adjust as necessary)
int maxOctaveShift = 7;            // Maximum number of octaves to shift

void setup() {
  Serial.begin(9600);
  pinMode(v1Pin, INPUT);
  pinMode(v2Pin, INPUT);
  pinMode(potPin, INPUT);
  pinMode(speakerPin, OUTPUT);
}

void loop() {
  // Read voltage difference across V1 and V2
  float v1 = analogRead(v1Pin) * (5.0 / 1023.0);
  float v2 = analogRead(v2Pin) * (5.0 / 1023.0);
  float voltageDifference = abs(v1 - v2);

  // Map voltage difference to a frequency (map() works on integers, so scale volts by 100)
  float noteFrequency = map(voltageDifference * 100, 0, maxVoltageDifference * 100, baseFrequency, baseFrequency * 2);

  // Read potentiometer and map to octave shift
  int potValue = analogRead(potPin);
  int octaveShift = map(potValue, 0, 1023, 0, maxOctaveShift);

  // Apply octave shift to the frequency (each octave doubles the frequency)
  noteFrequency *= pow(2, octaveShift);

  // Output the frequency to the speaker
  tone(speakerPin, noteFrequency);

  // Debugging output
  Serial.print("Voltage Difference: ");
  Serial.print(voltageDifference);
  Serial.print(" V, Frequency: ");
  Serial.print(noteFrequency);
  Serial.println(" Hz");

  delay(50);  // Short delay to smooth out jitter between readings
}
Conclusion and Reflection
Constructing this banana-powered musical instrument was a fun and instructive process, combining artistic design with technological principles. Using a Wheatstone bridge left me quite free to play around with how I wanted to shape my musical instrument.
Challenges Faced
1. Stabilizing the Output:
One major challenge was stabilizing the audio output: random values were being read whenever the pencil jockey wasn't in contact with the banana, causing unwanted noises and a "wobbling" effect. To overcome this, I added a pulldown resistor (2 Mega Ohms here) so analog pin A2 would not float and read noise.
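A complementary software guard is to gate out readings below a small noise floor before mapping them to a frequency. Here is a minimal sketch (the 50 mV threshold is an assumed value to tune empirically, not taken from the project):

// Sketch: at the top of loop(), after computing voltageDifference,
// stay silent when the bridge difference is just noise.
if (voltageDifference < 0.05) {  // ~50 mV assumed noise floor
  noTone(speakerPin);            // no contact, no sound
  delay(50);
  return;                        // skip mapping and tone() for this pass
}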
2. Mapping Voltage to Frequency Accurately:
Mapping the voltage difference to frequencies that sound musical took some experimentation. Adjusting the base frequency and range, as well as the octave shift, required fine-tuning to achieve the desired range of notes without distorting the sound.
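Since Arduino's map() works only on integers, a small floating-point helper can make this kind of tuning easier. This is a sketch of one option, not part of the code above:

// Floating-point linear interpolation, avoiding map()'s integer truncation.
float mapFloat(float x, float inMin, float inMax, float outMin, float outMax) {
  return outMin + (x - inMin) * (outMax - outMin) / (inMax - inMin);
}
// e.g. noteFrequency = mapFloat(voltageDifference, 0.0, 5.0, 220.0, 440.0);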
In this assignment, I collaborated with @Nelson, and we both love Christmas. The famous song Jingle Bells brings back memories of those times. So we explored various possibilities and decided to vary the speed of the Jingle Bells melody with distance.
Here is the demonstration video:
Schematic
Here is the Schematic for our Arduino connections:
Code:
In implementing our idea, we searched for combinations of notes and durations that match the Jingle Bells melody and stored them in arrays. We then wrote code mapping distance to note durations; varying the duration of each note makes the melody seem to play faster or slower. Here is the code:
#include "pitches.h"

#define ARRAY_LENGTH(array) (sizeof(array) / sizeof(array[0]))

// Notes and durations to match Jingle Bells
int JingleBells[] = {
  NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_G4,
  NOTE_C4, NOTE_D4, NOTE_E4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4,
  NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_D4, NOTE_D4, NOTE_E4,
  NOTE_D4, NOTE_G4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_G4,
  NOTE_C4, NOTE_D4, NOTE_E4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4, NOTE_F4,
  NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_E4, NOTE_D4, NOTE_D4, NOTE_E4,
  NOTE_D4, NOTE_G4,
};

int JingleBellsDurations[] = {
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4, 4, 4, 4, 4, 4, 4,
  4, 4
};

const int echoPin = 7;
const int trigPin = 8;
const int Speaker1 = 2;
const int Speaker2 = 3;  // (wired but unused in this version)
int volume;

void setup() {
  // Initialize serial communication:
  Serial.begin(9600);
  pinMode(echoPin, INPUT);
  pinMode(trigPin, OUTPUT);
  pinMode(Speaker1, OUTPUT);
}

void loop() {
  long duration, Distance;

  // Distance sensor reading
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  Distance = microsecondsToCentimeters(duration);

  // Map Distance to a 0-255 scale (used below to stretch note durations, i.e. tempo)
  volume = map(Distance, 0, 100, 0, 255);
  volume = constrain(volume, 0, 255);

  // Play melody with distance-adjusted speed
  playMelody(Speaker1, JingleBells, JingleBellsDurations, ARRAY_LENGTH(JingleBells), volume);

  // Debug output to Serial Monitor
  Serial.print("Distance: ");
  Serial.print(Distance);
  Serial.print(" Volume: ");
  Serial.print(volume);
  Serial.println();
}

// Convert the sensor's echo time in microseconds to centimeters
long microsecondsToCentimeters(long microseconds) {
  // Sound travels ~29 us per centimeter; divide by 2 for the round trip
  return microseconds / 29 / 2;
}

// Play the melody, scaling each note's duration by the distance-derived value
void playMelody(int pin, int notes[], int durations[], int length, int volume) {
  for (int i = 0; i < length; i++) {
    // Scale the note duration: larger distance = longer notes = slower melody
    int noteDuration = (1000 / durations[i]) * (volume / 255.0);
    noteDuration = max(noteDuration, 30);  // keep notes audible at very close range

    // Play the note for the scaled duration
    tone(pin, notes[i], noteDuration);

    // Delay to separate the notes
    delay(noteDuration * 1.3);
    noTone(pin);
  }
}
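To make the tempo scaling concrete (my own arithmetic using the code above): at 50 cm, map(50, 0, 100, 0, 255) gives 127, so a quarter note lasts (1000 / 4) × (127 / 255) ≈ 124 ms with a gap of about 161 ms; at 100 cm the same note stretches to 250 ms. Moving closer therefore shortens every note and speeds the melody up.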
Reflection
This week's assignment was especially interesting because we had the chance to collaborate and combine our imagination and skills to create something original. I really enjoyed working with @Nelson. We worked well as a team, first coming up with an idea and then adjusting the concept to strike the right balance between randomness and the ordinary in the final result.
I believe that this project has a lot of potential for future improvements, and perhaps I will use some parts of this week’s assignment for my future ideas and projects. Looking forward to the next weeks of the course!
Victor expands on this idea by arguing that the future of interaction design should move beyond just screens and graphical user interfaces. He contends that our current reliance on touchscreens and GUIs is limiting, and that we need to explore more natural and intuitive ways for humans to interact with technology.
Victor argues that interaction design should leverage our innate abilities to manipulate physical objects and navigate three-dimensional space. He suggests that future interfaces should allow users to interact with information and digital content as if they were tangible objects, rather than abstract representations on a flat screen.
The article emphasizes the importance of incorporating more natural hand and body movements into interface design. Victor contends that touchscreens and mice are poor substitutes for the rich expressiveness and dexterity of human hands. He envisions interfaces that can interpret subtle gestures, manipulations, and movements to control digital systems more intuitively. I agree with Victor’s core argument that interaction design needs to evolve beyond just screens and GUIs to create more natural and intuitive interfaces.
However, I would add that while moving beyond touchscreens and traditional GUIs is important for pushing interaction design forward, we shouldn't completely discard these technologies that call for the use of a single finger. The simple design he is ranting about is especially handy for users with certain disabilities or limitations.
For example, touchscreen interfaces with large, easy-to-tap buttons can be very beneficial for users with motor control issues or limited dexterity. The simplicity of tapping a screen with one finger opens up digital experiences to many who might struggle with more complex gestural interfaces.
I agree with many points the author makes about the current vision of future technology. The author believes that these “pictures under glass” — screens we interact with using simple gestures like sliding or tapping — limit the true potential of our hands and bodies. This reading made me think more about how I use my hands in ways I usually don’t notice.
The author highlights an important problem, which he also addresses in his response to critics. It's essential to consider how people actually use technology when creating designs, so that the designs fit naturally into human behavior, not the other way around. Since this is already an issue today, it's crucial to avoid imagining a future with the same limitations. The author is doing his part by writing about this problem to raise awareness and inspire more research and funding in this area.
This reading has encouraged me to think beyond the devices we have now. Instead of just improving screens, we could push for innovations that respect and enhance our human abilities. It makes me hopeful that the future of technology can be something much richer and more connected to our senses and actions. Looking forward, I hope to create interactive designs that put human capabilities first, rather than adding interaction elements just for the sake of it.
Tom Igoe's observations on interactive art and physical computing provide a new angle on creating meaningful interactions. One crucial takeaway is his recommendation to let people participate in a project in their own way rather than directing every detail. It can be tempting to give directions or explanations in interactive art, but Igoe contends that doing so can restrict the audience's creativity. This approach emphasizes how crucial it is to give participants room to explore and interpret on their own terms, which makes the experience more memorable and intimate.
I became aware of the importance of basic, intuitive actions in design after seeing Igoe’s examples of ordinary gestures—such as tapping or moving through a space—used as interactive features. People can interact with technology naturally when these well-known motions are turned into interesting experiences. A project that combines commonplace activities with artistic involvement, such as one in which a person’s movement or touch activates music or graphics, seems both familiar and unexpected. It helps me consider how I may use such movements in my projects to produce interactions that seem natural and grab viewers’ interest.
My comprehension of user-centered design is further enhanced by his analogy between creating interactive art and directing a play. A skilled director creates the scene yet lets the actor interpret and react freely, not controlling every step. Similarly, creating a project that allows for user exploration changes the emphasis from the designer’s intention to the user’s experience, making every interaction special. In the future, I hope to develop designs that lead users through subliminal clues, empowering them to come to their own conclusions and derive personal meaning, transforming the encounter into a cooperative dialogue.