After reading “Her Code Got Humans on the Moon—And Invented Software Itself” by Robert McMillan, what made the biggest impression on me was how much Margaret Hamilton thought about human error. I couldn’t get over how NASA brushed off her warning that an astronaut might make a simple mistake and run the wrong program; they told her it couldn’t happen because astronauts were “trained to be perfect.” But as humans, we’re bound to make mistakes, especially under pressure, and it’s always better to be safe than sorry. Sure enough, during the Apollo 8 mission, exactly what she had predicted happened. Instead of blaming anyone, Hamilton treated it as proof that software should be built to handle human error. I found that amazing: she didn’t just write code; she designed systems that expected people to make mistakes.
Her story made me think about how I use and create technology. I’ve gotten frustrated when an app crashes or rejects input that seems fine, but I rarely stop to consider all the thought and safeguards behind that behavior. Hamilton’s approach of designing for human error lives on in how we handle mistakes in programming today, from try/except blocks in Python to input validation. It made me realize that good software isn’t just about making things work; it’s about making sure things still work when people slip up.
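To make that concrete, here is a minimal Python sketch of the defensive style I mean: it validates input up front and catches errors instead of letting a typo crash the program. The function name `read_burn_duration` and the 600-second sanity limit are made up for illustration; they aren’t from the article or from Hamilton’s actual code.

```python
def read_burn_duration(raw: str) -> float:
    """Parse a burn duration typed by a human operator.

    Rejects malformed or implausible input instead of letting a
    typo quietly propagate through the rest of the program.
    """
    try:
        seconds = float(raw)  # a stray letter raises ValueError here
    except ValueError:
        raise ValueError(f"not a number: {raw!r}")
    if not 0 < seconds <= 600:  # hypothetical sanity limit for this example
        raise ValueError(f"outside the plausible range: {seconds}")
    return seconds


# The program assumes the user WILL eventually mistype something,
# so it offers a retry instead of crashing.
while True:
    try:
        duration = read_burn_duration(input("Burn duration in seconds: "))
        break
    except ValueError as err:
        print(f"Invalid input ({err}). Please try again.")

print(f"Accepted burn duration: {duration:.1f} s")
```

Nothing in that sketch is clever, and that is the point: it encodes the attitude of assuming the mistake will happen and deciding in advance how the program should respond.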
Her persistence and foresight were inspiring. She didn’t just follow the rules; she questioned assumptions, planned for the unexpected, and thought about the people who would actually use her code. That’s the kind of thinking I want to carry into my own work with technology: designing not just for perfection, but for real humans.