Week 8 Reading Response

Norman’s chapter on emotions in good design was enlightening and reminds us that humans are more than just rational, logical machines. People appreciate art and beauty, and I suppose that even in the field of design, where one of the goals is to make something as logically simple for a user as possible, there is still room for aesthetics. He wrote that ‘attractive things work better’, and I was reminded of how that applies to human interactions too: pretty privilege is a real thing. “Pretty people are perceived as smarter, funnier, more sociable, healthier, and successful” (first link I found on Google). Between two teapots that achieve the same goal of brewing tea, the more attractive design is usually the favored one, and sometimes you might even be willing to give up some functionality for the sake of the design. Norman also talks about how people interact with a design differently depending on their mood or situation. In a stressful situation, a panicking user might not know how to pull the fire doors, and might just push harder. This principle should apply to all manner of design, and we should consider users who do not have the luxury of time to figure out a badly designed space. For example, a hospital should be well designed and clearly marked, because a panicking person bringing in their mother will not have the mental capacity to stop and read signs to find the Emergency Room.

I found the anecdote in Margaret Hamilton’s article about her repeatedly warning the higher-ups about a potential bug really funny, and representative of the experiences I have had so far. A common design principle is to assume the user is stupid: any error that can be made will be made, even if users are trained, given the manual, or faced with a written “Do Not Touch” sign in front of the object they are not supposed to touch. From a design perspective, her team had done everything right: they verbally warned the astronauts not to touch the program and added a reminder in front of the screen. Yet the astronauts ran the program anyway, even though it was never supposed to be run in that scenario. Designs should always have tolerance for fault, no matter how unlikely the fault is. Of course, not every fault can be covered; you can’t design a door that can survive being torn apart by someone, for example, so there should be some risk-reward management: how unlikely is it for this fault to be encountered, and how bad will it be if it is triggered? In the case of Hamilton’s code, the fault was unlikely to be triggered, but it caused a very scary crash with data loss when it was triggered, so it would have made sense to guard against that case, though I am saying this with the gift of hindsight.
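To make that idea concrete, here is a minimal sketch of the kind of guard I have in mind. This is not Hamilton’s actual code (the real Apollo software was nothing like Python), and the mode names and functions are hypothetical; the point is just that a “can’t happen” input can be handled with a few cheap lines, which is a good trade when the failure would destroy data.

```python
from enum import Enum, auto


class Mode(Enum):
    PRELAUNCH = auto()
    IN_FLIGHT = auto()


class GuardError(Exception):
    """Raised when a routine is invoked in a mode it was never meant for."""


def run_prelaunch_alignment(current_mode: Mode) -> str:
    # The "can't happen" case: the crew was told never to select this
    # program mid-flight, but the guard costs two lines and prevents
    # wiping navigation data if someone runs it anyway.
    if current_mode is not Mode.PRELAUNCH:
        raise GuardError("prelaunch alignment refused: vehicle is in flight")
    return "alignment complete"


if __name__ == "__main__":
    print(run_prelaunch_alignment(Mode.PRELAUNCH))   # the normal, expected use
    try:
        run_prelaunch_alignment(Mode.IN_FLIGHT)      # the "impossible" fault
    except GuardError as err:
        print(f"guarded: {err}")
```

The risk-reward math here is exactly the one above: the guard is nearly free, and the fault it catches, however unlikely, is catastrophic, so it is worth writing even for a case that “should never happen.”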
