Week 5 Reading Response: Computer Vision for Artists and Designers

Computer vision has consistently been a prominent subject among programmers. It is a principal catalyst in the AI sector today, emerging as a viable option in healthcare and enhancing security. This reading, however, does more than provide historical context for computer vision as the cold, technical niche of the military and mainstream industry; it shows how it has become an accessible tool for artists, paving the way for deeper integration between computers and art. This seems especially applicable in the modern day, when the lines separating engineering and art are becoming increasingly hazy and anyone can create interactive, visually stunning experiences.

The recurring theme in the reading is how artists have used computer vision to build interactive exhibits that engage spectators in real time. Myron Krueger’s “VideoPlace” is a fitting example of this, turning a cold, impersonal technology into something deeply human-centered: using full-body interactions as dynamic input to create immersive environments. In computer engineering, this is closely tied to evolving user interfaces, which today power technologies like motion sensors and gesture-based controls in gaming (like VR).

Regarding the difference between computer and human vision: computers use algorithms to interpret images, while human vision is intuitive and contextual. A computer can “see” what we want it to by using techniques like frame differencing, background subtraction, and brightness thresholding, but these approaches are frequently inflexible and task-specific, in contrast to the flexible nature of human vision. Another key difference is the number of input channels. Humans combine multiple inputs like color, contrast, and visual acuity into a cohesive perception, while computers take only the limited input required by the task they are intended to perform.
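
To make this concrete, here is a minimal frame-differencing sketch in p5.js (my own illustration, not code from the reading): it counts how many pixels change noticeably between consecutive webcam frames, a rough but serviceable motion detector.

// Frame differencing (p5.js): compare each webcam frame to the previous one.
// Assumes a webcam is available; the threshold of 50 is an arbitrary choice.
let video;
let prevFrame;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  prevFrame = createImage(320, 240);
}

function draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  let movement = 0;
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Use the red channel as a cheap brightness proxy
    let diff = abs(video.pixels[i] - prevFrame.pixels[i]);
    if (diff > 50) movement++;
  }
  // Remember this frame for the next comparison
  prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
  image(video, 0, 0);
  fill(255, 0, 0);
  noStroke();
  text('moving pixels: ' + movement, 10, 20);
}

The inflexibility mentioned above shows up immediately: the threshold has to be tuned by hand for every room and lighting condition.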

In interactive art, this rigidity in computer vision can actually be useful: it simplifies the interaction to specific movements or gestures, allowing a clearer, more focused experience for viewers. However, as seen in the case of David Rokeby’s “Sorting Daemon,” such automated systems can profile people, leading to intrusive monitoring and raising other ethical concerns. As computer vision technology develops further, it is imperative to guarantee accountability, equity, and transparency in its application.

Week 5: Computer Vision for Artists and Designers

The topic of computer vision has become increasingly popular, and I believe the next couple of years will continue to bring big leaps in its development and applications. To think that what was, less than a decade ago, solely for the military and those in higher education has now turned into a technology readily available to artists and the general public is amazing. As such, I really enjoyed how the author presented the paper for beginners in computer vision and provided advice on how to approach projects with it. The projects on tracking and surveillance were actually some of the projects that stuck with me the most throughout the reading. The Suicide Box project felt, on one hand, like it told the story of individuals whom society overlooked, especially when their deaths weren’t accounted for, but on the other hand deeply demoralizing, because the project relies on the deaths of those individuals. As such, I feel the capacity of computer vision for tracking and surveillance is a sensitive and difficult issue to uncover fully. There is of course huge capacity and room for growth in the field of computer vision; however, its ethics need to be checked and balanced against the morality and freedoms of individuals.

Throughout the reading, the author mentions how computer vision techniques need to be chosen mindfully in order to optimally tackle the problem at hand. With complicated techniques such as background subtraction and frame differencing, I believe the complexity of differentiating between objects, backgrounds, and lighting is the biggest difference between computer vision and human vision. As humans, we process millions of tiny pieces of information at once without noticing it; as programmers, the little things like recognizing an object are magnified as we attempt to describe in code what that object is. Working with interactive media and computer vision for the rest of the semester, I believe the techniques of background subtraction and brightness tracking will play the biggest role in our projects. I feel many of our projects will rely on clearly differentiating human movements and interactions with the piece. Without these techniques, I fear our projects may fall apart or simply not work, as the screen would act as a camera rather than a program that can be interacted with.
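
As a rough sketch of brightness tracking (my own illustration, assuming a webcam, not code from the reading), the classic trick is to scan each frame for its brightest pixel, so a flashlight or phone screen becomes a simple pointer:

// Brightness tracking (p5.js): locate the brightest webcam pixel each frame.
let video;

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

function draw() {
  video.loadPixels();
  let brightest = 0;
  let bx = 0;
  let by = 0;
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      let i = (x + y * video.width) * 4;
      // Sum of R, G, B as a quick brightness measure
      let b = video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2];
      if (b > brightest) {
        brightest = b;
        bx = x;
        by = y;
      }
    }
  }
  image(video, 0, 0);
  noFill();
  stroke(255, 0, 0);
  ellipse(bx, by, 20, 20); // mark the tracked point
}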

Week 2 Reading Response: Casey Reas

In the realm of digital art, Casey Reas’s perspectives on ‘Order’, ‘Chaos’, and ‘Controlled Randomness’ are quite intriguing. Art was long perceived as a conservatively rigorous field that necessitated meticulous, well-planned strategies. Nevertheless, Reas proposes to broaden the definition of “digital art” through his views. His use of algorithms and randomization as creative mechanisms enables the emergence of patterns and themes that never would have been anticipated. He draws inspiration from historical examples from the 20th century and presents ways randomness can be incorporated into a creative process.

Reas’s “Process 18” is an appropriate example of this. Using parameters to control the extent of randomness is a subtle masterstroke, as it encompasses the artist’s intentions while still leaving room for the creative factor. His work raises the question of whether “randomness” is genuinely random, or a more complex form of order that has yet to be fully comprehended. This phenomenon is also observed in other domains, such as Cloudflare’s “lava lamps” for secure data encryption, Google’s quantum random number generator, which produces random numbers based on quantum mechanics, and Intel’s “Ivy Bridge” chips, which generate random numbers from thermal noise.

I intend to integrate Reas’s perspectives into my work by employing parameters to produce unpredictable variations in my projects. These variations would help me explore a variety of aesthetics, patterns, themes, and interactions prompted by user inputs, ensuring that my projects offer a distinctive experience to every user.
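
As a sketch of what this could look like (an illustration in the spirit of controlled randomness, not Reas’s actual Process 18), a single parameter can act as the artist’s dial between order and chaos:

// Controlled randomness: 'chaos' (0..1) sets how far each stroke may stray
// from a regular grid. 0 is pure order; 1 is close to pure noise.
let chaos = 0.3;

function setup() {
  createCanvas(400, 400);
  noLoop();
}

function draw() {
  background(255);
  stroke(0);
  let step = 20;
  for (let x = step; x < width; x += step) {
    for (let y = step; y < height; y += step) {
      // The random offset is scaled by the chaos parameter
      let dx = random(-step, step) * chaos;
      let dy = random(-step, step) * chaos;
      line(x, y, x + step / 2 + dx, y + step / 2 + dy);
    }
  }
}

function mousePressed() {
  chaos = constrain(mouseX / width, 0, 1); // clicking sets the chaos level
  redraw();
}

Clicking near the left edge gives a disciplined grid; clicking near the right dissolves it, while the underlying structure (the artist’s intention) stays fixed.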

Week 3 Reading Response: “Art of Interactive Design”

“The Art of Interactive Design” by Chris Crawford is a fascinating book, as it explores the distinction between a truly interactive system and basic “reactive” gimmicks. Crawford characterizes interaction as a dynamic process with three fundamental elements: listening, thinking, and speaking. He emphasizes how these three elements are crucial for a meaningful interaction between technology and the user, and expands on each of them separately. This is why, in my opinion, every single interaction of the system with the user needs to be meticulously planned, ensuring that all three ways in which the system engages with the user (listening, thinking, and speaking) are in line with the intended usage.

Regarding implementing the reading’s insights into my work, I think I might increase the degree of interaction by paying more attention to how the system “listens” and “speaks” to the user. My p5.js sketches react to basic inputs like mouse movements or clicks right now, but they don’t necessarily provide deliberate or meaningful responses. For example, in the pixelated text sketch (Week 4: Generative Text), I could have incorporated more nuance into the ways a user could interact with it. Maybe a random song plays when the user presses the “Enter” key, or an interesting home screen with brief instructions appears, and so on. This would mean placing myself in the user’s shoes and making design choices accordingly.
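
A rough sketch of the song idea (assuming the p5.sound library; the filenames are placeholders, not real assets):

// Hypothetical addition: pressing Enter plays a randomly chosen song.
// Requires p5.sound; 'song1.mp3' and 'song2.mp3' are placeholder files.
let songs = [];

function preload() {
  songs.push(loadSound('song1.mp3'));
  songs.push(loadSound('song2.mp3'));
}

function setup() {
  createCanvas(400, 400);
}

function keyPressed() {
  if (keyCode === ENTER) {
    let pick = random(songs); // random(array) returns one element
    if (pick && !pick.isPlaying()) {
      pick.play();
    }
  }
}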

Week 4: Reading Reflection of Don Norman’s “The Design of Everyday Things”

Hey everyone! 👋

After reading Don Norman’s “The Design of Everyday Things”, I was reminded to take a look around me and find other examples of bad user experience. One of the first examples that comes to mind is from my short time at NYUAD’s dorms. As you can see, there are 2 blinds in the same window: one partially translucent, and one fully opaque.

NYUAD’s blinds

While the mechanism to control a particular blind is simple enough, and a common enough pattern for us to use intuitively, what is extremely confusing is which string controls which blind, as they are both identical! Also, it seemed like different windows had a different arrangement/side of strings controlling each blind! My solution to fix this would be to have the opaque one keep its metal beads, but to switch the translucent one’s beads to something like white plastic. This way, you can both see and feel the difference between the 2 strings, allowing you to easily choose which blind to control. Additionally, they can be in any order/side, but it must be consistent across all the windows, allowing you to develop muscle memory.

Another example, though it doesn’t affect everyone, is USB. You might think I’m referring to the USB A style connectors, which somehow always take 3 tries to get in.

USB A, 3 tries and states
I didn’t know they exhibited quantum properties!

No, that has mostly been solved with USB C (ah, a godsend). Unfortunately, USB C has introduced its own complications, by becoming the universal standard!

Wait a minute, that doesn’t make sense. Isn’t that a good thing?

Well, mostly yes, but this also means that USB C’s capabilities have expanded far beyond the original scope of a USB connector, allowing the transfer of not just power and files (each at different levels), but also HDMI, DisplayPort, Thunderbolt, Ethernet, and so much more. The result is an incredibly convenient ecosystem and user experience, where everything fits together seamlessly… as long as it works. The issue isn’t USB C’s reliability, but rather that since it supports so many protocols and extensions, almost all of which are optional, it can be quite hard to know which cable or device (or even which port on the same device!) supports which features (this could be support for a protocol, like DisplayPort, or a certain power and speed output). As a result, it often leads to frustration: realising only after a long time spent wondering why it isn’t working that the cable in hand is inadequate for the task.

The solution I would recommend would be to support everything that makes sense, at the highest speed and power level. Although, of course I would say that; I’m the user! It’s true that going this route would considerably increase costs for manufacturers, though as Don Norman said, poor design is sometimes due to decisions aimed at reducing cost, and this is certainly an example of that. So maybe it would be worth it for manufacturers to implement, at least for our sanity.

Note:

Obviously, the capabilities supported should be within reason. For example, it makes no sense for a router, which lacks any direct graphical interface, to support HDMI or DisplayPort in its USB C ports. Also, there does exist a standard that supports almost everything, at the best power and speed levels, which makes it much easier to know what works: Thunderbolt. However, it is usually pretty expensive and isn’t offered/supported on many devices. Oh, also, USB’s naming needs to be fixed. They have changed it so many times, leading to a massive amount of confusion. Search it up.


Coming to the second part of our prompt, I believe the main way we can apply his principles is by integrating discoverability, affordances, and signifiers with deliberate thought. Namely, we should strive to adapt our work around the person, with intuitive design where the user doesn’t need to read an instruction manual (or ideally, any instructions at all, perhaps by using familiar visual elements and design practices people have already grown accustomed to) to interact with our work. It should instead nudge or guide the user subtly (if required), and perhaps most importantly, provide immediate (and satisfying) feedback, along with space for people to make mistakes yet gracefully get back on track.

They should enjoy interacting with our work, not be afraid of it or overwhelmed by it, so that it doesn’t end up like the unfortunate (but probably technically great) Italian washer-dryer combo 😉

Sad washer-dryer
A sad, abandoned boi, forever waiting for someone to enjoy using it🥺

Week 4 Assignment: Generative Text

Concept

This assignment is mainly based on the text patterns of early games like “Space Invaders” or “Tetris,” where text and images were made out of square pixels. The way the blocks move and respond to user input echoes the jumpy, random effects of old games whose graphics were powered by simple hardware.

The key interactions involve entering any word you want to display; pressing “Generate” forms the word in the grid. The grid can be toggled on/off with a separate “Toggle Grid” button. When the mouse hovers over the text, the blocks near the cursor change color and grow more varied. Holding down the “Enter” key causes the blocks representing the text to scatter randomly, simulating jumpy movements. As soon as the key is released, the boxes return to their original positions.

Sketch

Code I am proud of
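
For context, the snippets below rely on globals that are declared elsewhere in the sketch. The following is my reconstruction of roughly what they look like; the exact initial values may differ in the full code.

// Reconstructed globals assumed by the snippets below (values approximate)
let gridSize = 10;
let fontSize = 150;
let cols, rows, input, button, toggleButton, pg;
let word = '';
let letterPixels = [];
let originalPositions = [];
let blockColors = [];
let pixelIndex = 0;
let animating = false;
let exploding = false;
let showGrid = true;
let sparkleChance = 0.02;
let bgColor = { r: 255, g: 255, b: 255 };
let targetBgColor = { r: 255, g: 255, b: 255 };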

// Setup Function
function setup() {
  // Create the canvas
  createCanvas(600, 400);
  cols = floor(width / gridSize);
  rows = floor(height / gridSize);
  
  // Taking the input and generating the other buttons
  input = createInput();
  input.position(20, height + 20);
  button = createButton('Generate');
  button.position(input.x + input.width + 10, height + 20);
  button.mousePressed(startAnimation);

  toggleButton = createButton('Toggle Grid');
  toggleButton.position(button.x + button.width + 10, height + 20);
  toggleButton.mousePressed(toggleGrid);

  textSize(fontSize);
  textAlign(CENTER, CENTER);

  pg = createGraphics(width, height);
  pg.pixelDensity(1);
}

// Function to show/hide the grid
function toggleGrid() {
  showGrid = !showGrid;
}

// Draw function
function draw() {
  // Smooth background blending
  bgColor.r = lerp(bgColor.r, targetBgColor.r, 0.01);
  bgColor.g = lerp(bgColor.g, targetBgColor.g, 0.01);
  bgColor.b = lerp(bgColor.b, targetBgColor.b, 0.01);
  background(bgColor.r, bgColor.g, bgColor.b);

  if (showGrid) {
    drawGrid();
  }

  if (animating && word.length > 0) {
    fillLetterPixels();
  }

  for (let i = 0; i < pixelIndex; i++) {
    let px = letterPixels[i].x * gridSize;
    let py = letterPixels[i].y * gridSize;
    
    // Jumpy explosion effect
    if (exploding) {
      // Randomize position for the explosion effect
      px += random(-10, 10);
      py += random(-10, 10);
    } 
    else {
      // Smooth return to original positions
      px = lerp(px, originalPositions[i].x * gridSize, 0.1);
      py = lerp(py, originalPositions[i].y * gridSize, 0.1);
    }

    let d = dist(mouseX, mouseY, px + gridSize / 2, py + gridSize / 2);
    if (d < gridSize * 4) {
      let blockColor = blockColors[i];
      fill(blockColor.r, blockColor.g, blockColor.b, map(d, 0, gridSize * 4, 255, 50));
    } 
    else {
      fill(0);
    }

    noStroke();
    rect(px, py, gridSize, gridSize);

    if (random(1) < sparkleChance) {
      fill(random(255), random(255), random(255));
      rect(px, py, gridSize, gridSize);
    }
  }
}
  • The setup() function sets up the canvas, makes the graphics buffer, creates the input field and buttons, and initializes the canvas color to white.
  • The draw() function is the main loop that runs continuously. It controls the background color blending, the sparkle effect, and the jumpy explosion effect, tracks whether the Enter key is held, checks whether the grid is toggled on/off, and so on.
// Startup animation after the generate button is clicked
function startAnimation() {
  word = input.value();
  pixelIndex = 0;
  animating = true;
  getLetterPixels();

  blockColors = letterPixels.map(() => {
    return { r: random(255), g: random(255), b: random(255) };
  });

  originalPositions = letterPixels.map((p) => ({ ...p })); // Store original positions

  targetBgColor = {
    r: random(255),
    g: random(255),
    b: random(255)
  };
}

  • This function is responsible for the short animation that builds up the letters of the word when the “Generate” button is pressed.

function getLetterPixels() {
  letterPixels = [];

  // Use the off-screen buffer (pg) to render the text
  pg.background(255);
  pg.textSize(fontSize);
  pg.textAlign(CENTER, CENTER);
  pg.fill(0);
  pg.text(word, pg.width / 2, pg.height / 2);

  pg.loadPixels();
  for (let y = 0; y < pg.height; y += gridSize) {
    for (let x = 0; x < pg.width; x += gridSize) {
      let index = (x + y * pg.width) * 4;
      let r = pg.pixels[index];
      if (r < 128) {
        let gridX = floor(x / gridSize);
        let gridY = floor(y / gridSize);
        letterPixels.push({ x: gridX, y: gridY });
      }
    }
  }
}
  • This function uses the off-screen buffer (pg) to render the input text and identifies the grid positions where the pixels are black (indicating a letter).
function fillLetterPixels() {
  if (pixelIndex < letterPixels.length) {
    let px = letterPixels[pixelIndex].x * gridSize;
    let py = letterPixels[pixelIndex].y * gridSize;
    fill(0);
    noStroke();
    rect(px, py, gridSize, gridSize);
    pixelIndex++;
  } 
  else {
    animating = false;
  }
}
  • This function simulates the pixels that form the letters. It gradually reveals the blocks one by one.
// Run the function if ENTER key is pressed
function keyPressed() {
  if (keyCode === ENTER) {
    exploding = true; // Start the explosion effect
  }
}

// Run the function if ENTER key is released
function keyReleased() {
  if (keyCode === ENTER) {
    exploding = false; // Stop the explosion and return to original positions
  }
}
  • When the Enter key is pressed, the exploding flag is set to true, causing the blocks to scatter randomly; when it is released, the flag is set to false and the blocks smoothly return to their original positions.

Challenges/ Future Improvements

Some of the major challenges I faced were:

  • Creating the random “jumpy” explosion effect and making sure that the boxes making up the letters return to their places after the key is released.
  • The “sparkle” effect was another challenge as it required creating intermittent flashes of random color across specific blocks.
  • The color change from the hovering mouse was another. By measuring the distance (dist()) between the mouse and each block’s center, the program smoothly blends colors based on proximity to the cursor, giving the effect of interacting with the text.

However, the code could be further improved by introducing collisions, adding different layers to create a 3D pixelated-text feel, and so on.

Reading Reflection: Week 04

From the readings, when I came across the issues with door handling, it immediately reminded me of the doors at NYUAD, especially in D2. They’re supposed to be semi-automatic with sensors, but I often find myself having to pull them with all my strength just to get them to open. It’s incredibly frustrating, especially when I’m carrying something heavy or have my hands full. Struggling with these doors can be really annoying.

What really drives me crazy is that we live in a world that talks about inclusivity and accessibility, yet my own university isn’t fully disability-friendly. I can’t understand how a person in a wheelchair can access the dining halls when the doors require someone else to pull them open. Drawing from Don Norman’s ideas, I really connect with his emphasis on human-centered design to create better usability. He points out that “Engineers are trained to think logically,” which is something I’ve noticed over the years. However, I believe there’s been a shift in recent times.

As someone aspiring to be an engineer, I resonate with this statement. We often focus more on mathematical modeling rather than on accessibility and the psychological aspects of human behavior. Many engineering projects—like billboards and airport dashboards—do help by displaying information clearly, but they often overlook people with dyslexia. For example, using fonts like Sansa can make reading difficult for them.

Norman also talks about cognitive load, which refers to the mental effort required to use a device. In our fast-paced world, having a high cognitive load can be overwhelming. Take, for instance, a multifunctional printer with tons of buttons and features. If I have to remember multiple steps just to print a document, it can be exhausting. A better design would simplify the process, reducing the steps and making controls more intuitive. This aligns perfectly with Norman’s argument that good design should minimize cognitive load, letting users focus on their tasks instead of trying to figure out how to use the device.

Overall, this reading has been the most enlightening for me in the past four weeks.

Week 04: Data Visualisation

Inspiration:

As someone who believes in the transformative power of data visualization, I see it as an essential tool for making complex information accessible and understandable. Visualizing data allows us to grasp patterns, trends, and insights that would otherwise remain hidden in raw numbers. In the context of healthcare, especially with something as critical as blood pressure, effective visualization can significantly impact how we interpret and respond to health data.

One of my inspirations for this project comes from the recent work of Giorgia Lupi, a renowned information designer known for her innovative approach to data visualization. I loved how she turned data visualisation into an artistic human connector, engaging us with stories, emotions, and information through data.

Concept: The blood pressure dataset, originally from Stata, serves as a crucial resource for understanding the impact of various factors on blood pressure across demographic groups. I took the dataset from the GitHub Open Source for Data Science account. The idea came to mind after remembering how blood pressure runs in families and how sometimes we don’t pay attention to it. Though the dataset is pretty small and not too complex, I enjoyed creating a visualisation for the first time.

Code which I’m proud of:

I had trouble managing datasets, so I used the console.log technique we learned in class to check whether the CSV data loads correctly before any processing occurs. This is crucial because if the data is not available, the subsequent calculations will fail. Since the dataset had several columns, I created two charts and added error handling for each.

function preload() {
    // Load the CSV before anything else runs
    data = loadTable('blood_pressure.csv', 'csv', 'header');
}

// In setup(), I verify that the table actually loaded before building the charts:
if (data && data.getRowCount() > 0) {
    console.log("Rows loaded:", data.getRowCount()); // Quick sanity check in the console
}
else {
    console.error("Data not loaded properly"); // Error handling if data isn't loaded
}

I couldn’t fit all the data in one chart. To make it easy to understand, I created two charts connected by their parameters. Here is the snippet for dynamically creating the age-group chart while keeping the layout clear; it required careful organization of the data and chart settings.

function createAgeGroupChart() {
    const ageGroups = {}; // Object to hold blood pressure data by age group
    
    // Ensure the data is loaded
    if (data) {
        // Iterate through each row of data
        data.rows.forEach(row => {
            const agegrp = row.getString('agegrp'); // Get age group
            const bp_before = row.getNum('bp_before'); // Get blood pressure before
            const bp_after = row.getNum('bp_after'); // Get blood pressure after

            // Initialize arrays for this age group if it doesn't exist
            if (!ageGroups[agegrp]) {
                ageGroups[agegrp] = { before: [], after: [] };
            }
            // Add blood pressure data to the respective arrays
            ageGroups[agegrp].before.push(bp_before);
            ageGroups[agegrp].after.push(bp_after);
        });

        // Extract labels and average values for the chart
        const labels = Object.keys(ageGroups);
        const avgBefore = labels.map(label => avg(ageGroups[label].before)); // Average BP before
        const avgAfter = labels.map(label => avg(ageGroups[label].after)); // Average BP after

        // Get context for the first chart
        const ctx1 = document.getElementById('chart1').getContext('2d');
        chart1 = new Chart(ctx1, {
            type: 'bar', // Bar chart type
            data: {
                labels: labels, // X-axis labels (age groups)
                datasets: [
                    {
                        label: 'BP Before', // Dataset label
                        data: avgBefore, // Data for BP before
                        backgroundColor: 'rgba(75, 192, 192, 0.6)', // Fill color
                        borderColor: 'rgba(75, 192, 192, 1)', // Border color
                        borderWidth: 1 // Border width
                    },
                    {
                        label: 'BP After', // Dataset label
                        data: avgAfter, // Data for BP after
                        backgroundColor: 'rgba(153, 102, 255, 0.6)', // Fill color
                        borderColor: 'rgba(153, 102, 255, 1)', // Border color
                        borderWidth: 1 // Border width
                    }
                ]
            }
            // ... the responsive options shown below complete this config ...
        });
    }
}

Similarly, I made another chart for the averages grouped by sex, using this small helper to compute the averages:

function avg(array) {
    return array.reduce((a, b) => a + b, 0) / array.length;
}

Lastly, I tried to ensure that the charts adjust to different screen sizes while keeping everything readable and visually appealing. I used this part of the code to implement it, drawing inspiration from the Bar_Chart_in_P5.js video.

options: {
    responsive: true,
    scales: {
        y: { beginAtZero: true, title: { display: true, text: 'Blood Pressure' } },
        x: { title: { display: true, text: 'Age Group' } }
    }
}


p5.js Sketch:

Challenges and further improvements: 

The main challenge was finding a proper dataset. I tried different datasets from Google Dataset Search, Kaggle, Data.gov, etc., but they wouldn’t render as charts; sometimes errors appeared, or the dataset was too big for this project. I asked ChatGPT for resources for suitable datasets, but it didn’t give me any dataset or link and suggested checking GitHub, so I took the idea and searched GitHub.


Week 4 Reading Response: Ch 1, “The Design of Everyday Things”

In Chapter 1 of “The Design of Everyday Things,” titled “The Psychopathology of Everyday Things,” Norman emphasizes how everyday objects can be confusing and fail in their design when they can’t be understood by their users. Throughout the chapter, he introduces key concepts like:

  • “Discoverability,” which is whether users can instinctively figure out how to work with something just by seeing it, and,
  • “Feedback,” which is some kind of signal to the user that the task they intended to perform has been carried out successfully or has failed along the way.

He gives the example of doors (like the ‘Norman Doors’, named after himself) that leave people puzzling over how to use them. His main argument, which I agree with, is that designs should be “human-centered,” ensuring they are easily usable for everyone, no matter how simple or complex the object is.

“Human-Centered Design” is a term I have also previously encountered when I took the J-term class “Interactive Media in the World”. I learnt that this principle can easily be applied to various situations, ranging from how things are arranged (like electric tools in a lab) to the flow of traffic in a city (to prevent accidents), smart home technologies, human-computer interaction, and so on.

If I had to think of a product which annoys me a lot, it would be “electrical adapters” with a bad design. There is a particular one which I always keep with me as a reminder.

Similar products with the same design can be found on Amazon and elsewhere. While mobile devices can be charged without any issues, the problem arises with laptop chargers or desktop power cords (especially in the UAE). There is not enough distance between the adapter’s socket and the wall, rendering it unusable for devices with bigger plugs.

In terms of interactive media, I believe Norman’s principles can significantly improve user experiences. When designing websites, apps, or interactive displays, discoverability should be a primary goal, helping users immediately understand how to engage with the system. Obvious signifiers like labels or icons can lead the user through the interface. Feedback is just as important when interacting with digital elements: a button’s color changing when clicked, or a sound signaling the completion of a task, are visual and auditory signals that boost user confidence and enhance the overall experience.
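
As a tiny illustration of that feedback principle (my own sketch, not an example from Norman), a button-like element can acknowledge hovering and clicking immediately:

// Minimal feedback demo (p5.js): a button that visibly reacts to hover
// and press, so the user knows the action registered.
let pressed = false;

function setup() {
  createCanvas(200, 200);
}

function draw() {
  background(240);
  let hover = mouseX > 60 && mouseX < 140 && mouseY > 80 && mouseY < 120;
  if (pressed) fill(30);        // pressed: darkest
  else if (hover) fill(120);    // hovered: darker
  else fill(180);               // idle: light gray
  noStroke();
  rect(60, 80, 80, 40, 5);
  fill(pressed ? 255 : 0);
  textAlign(CENTER, CENTER);
  text(pressed ? 'Done!' : 'Click me', 100, 100);
}

function mousePressed() {
  if (mouseX > 60 && mouseX < 140 && mouseY > 80 && mouseY < 120) {
    pressed = true; // immediate visual confirmation
  }
}

function mouseReleased() {
  pressed = false;
}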

Week 4 – Reading Reflection

Don Norman’s “The Design of Everyday Things” highlights how poor design impacts our daily interactions with common objects. In the first chapter, he introduces the concept of discoverability, which emphasizes that people should be able to figure out how something works simply by looking at it. Norman explains that objects like doors or light switches can become confusing when they lack clear visual cues, often leading users to make mistakes. The idea of feedback is also essential, ensuring users receive confirmation that their actions—such as opening a door or turning off a light—are completed successfully. Norman’s focus is on making designs intuitive and straightforward, so users can easily understand them without needing instructions.

One frustration I experience, which Norman’s work sheds light on, is how stressful the use of modern smart devices like home assistants or thermostats can be. These devices often overwhelm users with too many options and hidden functions, making them difficult to use. Applying Norman’s principles of signifiers and mapping could make these systems more user-friendly. For example, clearer labels, icons, or simple gestures could improve usability. Additionally, feedback through visual or sound cues would help users feel confident that they’ve completed a task correctly. Norman’s focus on human-centered design highlights the importance of keeping the user in mind, ensuring that products are approachable and simple to navigate. This approach could significantly improve our interactions with modern technology, reducing frustration and making these tools more accessible.