Week 8: Unusual Switch

Concept

My switch is based on the sleepless-student stereotype: students are often so sleep-deprived that they fall asleep in their classes. This phenomenon, while it may at times be attributed to time management (I can attest), is not entirely so. Scientific research has demonstrated that students do better with later school start times, as teenage brains typically track a circadian rhythm that makes it difficult to fall asleep before midnight.

Thus, to counteract this sleepy-student syndrome, I have designed a switch using a pressure-sensitive resistor and a buzzer, wired as a basic voltage divider circuit. The pressure-sensitive resistor is placed underneath a pillow on a desk. If a sleep-deprived student accidentally places their head on the pillow, the pressure changes the resistance and triggers the switch, sounding the wake-up alarm to bring them back to consciousness.

Week 8: Reading Response

Her Code Got Humans on the Moon

I found a lot in this article particularly interesting and inspiring. One aspect of Margaret Hamilton's story that stood out was all of the challenges she had to overcome to get where she ended up. Beyond being an incredible scientist, she had to be comfortable being the only woman in many of the spaces she occupied. In this regard, she was a definite trailblazer.

One line that stood out to me, however, was “I was one of the guys.” From what I and my colleagues have experienced, STEM is a highly male-dominated field, and often caters to a very specific “guy culture.” It can be ostracizing for those who do not fit into that dynamic.

All of that to say: I am remarkably inspired by Margaret Hamilton’s story, but I fear how many scientists we must have lost because they didn’t have a personality that fit into “guy culture.” I love Margaret Hamilton’s story, but wonder whether she would have been held back if her personality had been less compatible with the cultural expectation. To be explicit, this is in no way a critique of Margaret Hamilton–she is surely entitled to her own personality and cultural preferences. I am instead calling out the “one of the guys” culture that many men likely imposed then and still impose now. Why not be a bit more accepting of whatever culture bright scientists bring to the table, regardless of how it may or may not conform to existing expectations?

Attractive Design

Designing systems that feel intuitive is surely a skill, and one that does not necessarily come naturally to many of us. However, I can definitely agree with the article that tools that feel nice to use genuinely just make me feel better. The value of a pleasant-looking UI and sensible inputs cannot be overstated. This, in my opinion, is one of the essential reasons Apple has such a large market share: they understand how to engineer products that users will enjoy using.

Reading this article reminded me of a talk given by Prof. Scott Shenker, where he mentioned how engineers typically find joy in mastering complexity. However, systems that require mastering complexity are often diametrically opposed to intuitive user design. In other words, while engineers may find joy in learning to understand complex systems so that they can interact with them effectively, the average user does not want or need to understand this complexity in order to interact with something in a simple way.

We may also introduce additional nuance, however, considering how there’s often a balance between tools for experts and tools for non-experts. Microsoft Excel may be a nice example: non-experts can generally make a basic table with almost no guidance or prior knowledge of the software, but put an accountant in the same seat and watch them set up the most complex formulas and use every keyboard shortcut you didn’t know existed. This exemplifies a system that is generally usable by anyone, but increasingly useful to those who truly know it.

The program that made me consider this paradigm was actually the Vim text editor, which is infamous for being unusable for new users. However, once you learn the ins and outs of it, I can confirm that it’s almost impossible to start using any other text editor. The keyboard shortcuts are just too useful, and too enjoyable, to go back to anything else–despite the incredibly unintuitive design.

Another example of this is the Rust programming language, which also has an incredibly steep learning curve. To be honest, I had to take Computer Systems Organization before Rust really clicked in my brain. Yet, despite the high barrier to entry, Rust is consistently ranked as a developer favorite–above Python and JavaScript. Rust genuinely feels fun to program in–even when you’re struggling through the basics. To me, these two examples come down to ergonomics. Despite their high complexity and profound lack of intuitive design, the thoughtful ergonomics more than compensate–providing an overall unique experience.

Week 6: Midterm

Midterm Report

Link to Project: https://editor.p5js.org/corbanvilla/full/nXKMcC8T7

Overall Concept

My project is modeled after an article I wrote for The Gazelle newspaper a couple of years ago, commenting on the often toxic hustle culture exhibited by NYUAD students. For instance, students often carry around their various responsibilities, commitments, and academic struggles as “badges” that provide social status. In other words, the busier you appear to be, and the less sleep you are getting, the more “successful” you are seen to be in our hustle culture. While this problem obviously cannot be generalized to all students, it is certainly applicable to a sizable subset, in my experience. Looking back through my time at NYUAD, there are definitely times where I see myself falling into this trap of quantity over quality.

My project, “Drowning @ NYUAD,” brings this phenomenon to life, exaggerating the absurdity of students drowning in commitments. The scoring system of the game is measured in negative hours of sleep–the longer you play, the more sleep you miss, and the better your score.

Project Architecture

The architecture of the project was important to me: I wanted all parts of the code to be modular and easily reusable. To that end, all screens of the game are split into different JavaScript files under the “/screens” subdirectory. They are then imported into the global context in “index.html” and registered with the screen controller in “sketch.js.” The screen controller (the ScreenController class) provides a screen-rendering management interface, equipped with a transition animation feature and a “nextScreen” callback function.

To utilize this modularity, screens are “registered” in the order that they should appear in “sketch.js.” Each screen receives a callback function, exposed by the screen controller, that allows any screen to trigger a transition to the next screen at any time. This allowed me to focus on each screen in isolation, without worrying about how screens may overlap or interfere with each other. Furthermore, each screen class exposes its own “draw” method, which is only called when the screen is active, so resources are not wasted rendering background screens. An example of this is shown below:

screenController = new ScreenController();

let nextScreenCallback = screenController.nextScreen.bind(screenController);

screenController.registerScreen(new SplashScreen(nextScreenCallback, gameConfig));
screenController.registerScreen(new CharacterSelectScreen(nextScreenCallback, gameConfig));
screenController.registerScreen(new WeaponSelectScreen(nextScreenCallback, gameConfig));
screenController.registerScreen(new GameScreen(nextScreenCallback, gameConfig));
screenController.registerScreen(new GameOverScreen(nextScreenCallback, gameConfig));
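For context, a minimal sketch of what the ScreenController interface could look like is shown below. This is an illustration of the structure described above rather than the exact implementation–the transition animation is omitted, and the internal field names are assumptions.

// Simplified sketch of the ScreenController idea: screens are stored in
// registration order, and only the active screen's draw() is called.
class ScreenController {
  constructor() {
    this.screens = [];
    this.activeIndex = 0;
  }

  // Screens are registered in the order they should appear.
  registerScreen(screen) {
    this.screens.push(screen);
  }

  // Passed to each screen (via .bind) so any screen can trigger a transition.
  nextScreen() {
    if (this.activeIndex < this.screens.length - 1) {
      this.activeIndex += 1;
      // The transition animation would start here in the full version.
    }
  }

  // Called from p5's global draw(); only the active screen renders.
  draw() {
    this.screens[this.activeIndex].draw();
  }
}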

A new JavaScript feature I stumbled upon whilst building this project is the “.bind” method. It appeared as an autocomplete suggestion from GitHub Copilot while I was writing the code. I did not know what “.bind” did, so I asked ChatGPT how it works. It explained that “.bind” creates a new version of the function whose internal references to “this” are bound to the object it belongs to. Thus, calling “.bind” allows me to pass the screen transition function to the various screens; when they trigger the transition, it still correctly refers to the ScreenController class instance.
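As a small illustration of the idea (using a hypothetical controller object, not the actual project code): passing a method around without .bind loses its “this” reference, while a bound copy keeps it.

// Hypothetical example of why .bind is needed when passing a method around.
const controller = {
  name: "screenController",
  nextScreen() {
    console.log(this.name + " -> next screen");
  },
};

const unbound = controller.nextScreen;
// unbound();               // "this" no longer points at controller here

const bound = controller.nextScreen.bind(controller);
bound();                     // logs "screenController -> next screen"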

Areas for Improvement

One area for improvement that I have identified is the artwork in the game. For instance, while one character is animated, I want the rest of the characters to be animated as well. Furthermore, I want to animate some of the weapons, such as the triangle ruler, making it spin as it flies through the air.

I would also like to work on the difficulty of the game. While monsters do begin to spawn faster as time goes on, I want the player’s speed and ammunition speed to also increase, allowing the player to more successfully reach later stages of the game. Furthermore, I think that power-up abilities, such as those in Mario Kart, could make the game a bit more balanced.
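One possible way to implement that scaling–a rough sketch with placeholder names and numbers, not code from the current game–would be to derive both the spawn interval and the player’s speed from elapsed time:

// Rough sketch: scale spawn rate and player speed with elapsed time.
// All names (spawnInterval, playerSpeed, elapsedSeconds) are placeholders.
function difficultyAt(elapsedSeconds) {
  // Spawn faster over time, but never faster than every 300 ms.
  const spawnInterval = max(300, 2000 - elapsedSeconds * 20);
  // Let the player (and their projectiles) speed up slightly as well.
  const playerSpeed = 4 + elapsedSeconds * 0.05;
  return { spawnInterval, playerSpeed };
}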

Screenshots

Here are a few screenshots from the available game screens:

Week 5: Midterm Progress

Project Concept

My midterm project is based on an article I wrote for The Gazelle a few years ago: Drowning At NYUAD: Finding Space In A Ceaseless Work Culture. The article discusses the at-times unhealthy work and hustle culture of NYUAD students, where a lack of sleep, overloaded classes, and extracurricular commitments can be used as a metric of how “successful” you are.

The midterm game aims to exaggerate the absurdity of it–underscoring the need to find a personal work-life balance that enables you to enjoy (and not just survive) your undergraduate career.

Design

The game is designed to parallel mobile games where a character is dragged along the bottom of the screen, launching projectiles towards oncoming enemies. Except, instead of generic enemies, the character will be defeating NYUAD-specific monsters, like SIG Training, homework all-nighters, readings, etc.
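A bare-bones sketch of that core mechanic–the character pinned to the bottom of the canvas following the mouse and firing projectiles upward–might look like the following. The values and names here are placeholders, not final game code.

// Minimal sketch of the drag-and-shoot mechanic; all values are placeholders.
let projectiles = [];

function setup() {
  createCanvas(400, 600);
}

function draw() {
  background(0);

  // The character tracks the mouse along the bottom of the screen.
  const characterX = constrain(mouseX, 20, width - 20);
  const characterY = height - 30;
  circle(characterX, characterY, 40);

  // Fire a projectile every 20 frames.
  if (frameCount % 20 === 0) {
    projectiles.push({ x: characterX, y: characterY });
  }

  // Move and draw projectiles, dropping the ones that leave the screen.
  for (const p of projectiles) {
    p.y -= 8;
    circle(p.x, p.y, 8);
  }
  projectiles = projectiles.filter((p) => p.y > 0);
}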

Prior to playing the game, the user will be presented with several screens that allow for customization. Specifically, the user will need to select a character and an academic weapon from a list of three options each. Below are screenshots of the game storyboarding, designed in Figma.

Key Risks

One of the aspects that I am most concerned about is the number of screens involved in the game. To alleviate the potential issues and minimize complexity, I am going to encapsulate each screen as a class object with a .hide() and .show() method. This way, the code should stay maintainable and navigable. I may also opt to include a configuration object, passed to each scene as a reference, in order to avoid defining many global configuration variables.
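A minimal sketch of what one of these screen classes might look like is shown below; this is a planning sketch, and the fields of the configuration object are placeholders rather than decided names.

// Sketch of the planned screen encapsulation; gameConfig fields are placeholders.
class CharacterSelectScreen {
  constructor(gameConfig) {
    this.config = gameConfig;   // shared configuration, passed by reference
    this.visible = false;
  }

  show() {
    this.visible = true;
  }

  hide() {
    this.visible = false;
  }

  draw() {
    if (!this.visible) return;  // skip rendering when hidden
    background(this.config.backgroundColor);
    text("Select your character", width / 2, 40);
  }
}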

Week 5: Reading Response

Computer Vision

The article’s juxtaposition of computer vision with human vision was very interesting. While some aspects are similar, such as the use of lenses in both cameras and eyes, the image processing appears to be where the differences begin to proliferate.

An interesting point the article made was that videos are inherently recorded to store pixel information, but not necessarily scene information. For instance, a night sky is recorded as lots of black pixels, rather than as some encoded “night sky” parameter. This fundamental fact means that complex algorithms must be constructed to reconstruct and interpolate scene information from pixel values. Furthermore, there are still many video encoding formats (e.g., H.264, H.265), so standardization is lacking in this regard as well–introducing additional complexity to the process.

One of the techniques I found intriguing is background subtraction, where an initial reference image of the scene is first captured. The reference is then used to systematically distinguish which objects belong to the scene, and which do not.
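As a rough illustration of the idea in p5.js (my own sketch, not code from the article): store a reference frame, then compare each new frame against it pixel by pixel and keep only the pixels that changed enough.

// Rough p5.js illustration of background subtraction.
let video;
let backgroundFrame = null;

function setup() {
  createCanvas(320, 240);
  pixelDensity(1); // keep canvas pixels aligned with the video pixels
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
}

// Press any key to capture the reference (background) frame.
function keyPressed() {
  backgroundFrame = video.get();
}

function draw() {
  video.loadPixels();
  if (backgroundFrame === null) {
    image(video, 0, 0);
    return;
  }
  backgroundFrame.loadPixels();

  loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    // Difference between the current frame and the reference, per channel.
    const diff =
      abs(video.pixels[i] - backgroundFrame.pixels[i]) +
      abs(video.pixels[i + 1] - backgroundFrame.pixels[i + 1]) +
      abs(video.pixels[i + 2] - backgroundFrame.pixels[i + 2]);
    // Pixels that changed enough are treated as foreground (white).
    const foreground = diff > 90 ? 255 : 0;
    pixels[i] = pixels[i + 1] = pixels[i + 2] = foreground;
    pixels[i + 3] = 255;
  }
  updatePixels();
}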

The surveillance art monitoring the Golden Gate Bridge, which was only made possible through computer vision, sparked considerable reflection. I found the author’s observation particularly pointed when it was revealed that the piece had captured a considerably higher number of suicides than what was noticed through traditional systems. However, I can also see how recording these events is uniquely invasive to the subjects, who are likely unaware that they have become part of an art piece–and did not sign up to be.

Week 4: Reading Response

One of the things that drives me crazy is how frustrating printers are to use and deal with. For instance, the interfaces on our printers on campus are unnavigable. Simple things, such as wanting to print double-sided, have to be queued from the computer and cannot be changed at all in the printer interface. When the printers run out of paper, they simply error out and do not attempt to reprint once you add more paper. In my opinion, printers should not delete jobs from the print queue until they have been confirmed to have printed successfully.

I think responsive designs, such as the feedback mechanisms mentioned in the reading, are important for giving users an intuitive sense of how a system will react to their input. For instance, when programming a site, on-hover CSS effects are incredibly helpful for users to understand when an element is clickable. Other techniques, such as user testing, are also crucial, where you watch people use the systems you build. If a user is not able to intuitively figure out how to use something I built, I usually take it as a sign that it’s not as simple as I had originally thought, and that it likely needs to be redesigned.
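The same hover-feedback idea can be mimicked inside a p5.js sketch as well; a small illustrative example (not tied to any particular assignment) is below, where the “button” changes color and the cursor changes shape when the mouse is over it.

// Small illustration of hover feedback in p5.js.
function setup() {
  createCanvas(300, 150);
}

function draw() {
  background(240);
  const overButton =
    mouseX > 100 && mouseX < 200 && mouseY > 55 && mouseY < 95;

  // The button highlights when hovered, signaling that it is clickable.
  fill(overButton ? "#4caf50" : "#cccccc");
  rect(100, 55, 100, 40, 8);

  fill(0);
  textAlign(CENTER, CENTER);
  text("Click me", 150, 75);

  // The cursor also changes to reinforce the affordance.
  cursor(overButton ? HAND : ARROW);
}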

Week 4: Graduate School Applications

Overview

My concept this week is inspired by the past two semesters (and years) I have spent on graduate school applications. For my data, I sourced a list of self-reported applicant statistics for Computer Science programs. To preprocess the data, I used Python’s pandas library to drop unnecessary columns and rows that didn’t contain a GPA score. I also truncated the data to 10,000 rows, since I figured that 963,845 data points might be a bit unnecessary for my purposes. The preprocessing code is shown below:

df = (
    pd
    .read_csv('gradcafe.csv')
    .drop(columns=['comment', 'gre_quant', 'gre_verbal', 'major', 'season', 'date_of_result', 'gre_awa', 'date_added', 'applicant_status'])
    .dropna(subset=['undergrad_gpa'])
    .query('decision == "Accepted" or decision == "Rejected"')
    .query('undergrad_gpa <= 4.0')
    .reset_index(drop=True)
)
df[:10000].to_csv('gradcafe_cleaned_10k.csv', index=False)

Discussion

The piece is centered around the at-times reductionist nature of graduate school applications, and of applications of other kinds (e.g., visa applications). These applications often hinge major aspects of your life on how well a sheet of paper communicates your story to the fatigued person reading it on the other side, tasked with reviewing hundreds of applications in a single sitting. The stakes can feel enormous, where years of your life depend on these single decisions. Furthermore, applicants pour their life stories into these applications–adding another level of vulnerability.

I chose to use GPA as the reductionist metric in this case, which appears slowly as the person’s profile disappears. It emphasizes how these applicants, though not in all cases, often get reduced to numbers and metrics to quantitatively compare and evaluate. This process often does not acknowledge the nuance, story, and personhood of applicants.
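A stripped-down sketch of that fade technique is shown below; the variable names, text, and the GPA value are placeholders for illustration, not the actual project code.

// Stripped-down sketch of the fade technique; names and values are placeholders.
let fade = 0; // 0 = full profile visible, 255 = only the GPA visible

function setup() {
  createCanvas(400, 300);
  textAlign(CENTER, CENTER);
}

function draw() {
  background(255);

  // The applicant's "profile" slowly disappears...
  fill(0, 255 - fade);
  text("Research, essays, recommendations, years of work", width / 2, 120);

  // ...while the single reductionist number takes its place.
  fill(0, fade);
  textSize(48);
  text("3.87", width / 2, 180);
  textSize(12);

  fade = min(fade + 0.5, 255);
}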

Code

Week 3: Reading Response

1. What do you consider to be the characteristics of a strongly interactive system?

Reading the essay, and considering the technological systems we interact with every day, made me reflect considerably on interactivity, and especially on how it relates to intuitive design, user experience, and user interfaces.

I would consider strongly interactive systems to be systems in which users have significant control over what the system does, with significant room for user creativity. In this regard, a command line interface (CLI) terminal is an incredibly interactive system. However, a CLI is simultaneously extremely unintuitive and can require years of experience to become comfortable and fully acquainted with.

Researchers at Xerox PARC developed graphical user interfaces (GUIs) to provide a significantly more intuitive experience for users. However, in my opinion, a GUI can be significantly more limiting for interactivity in certain cases, as compared to a CLI (e.g., most internet servers require operators to interact through CLIs). Other tools, however, such as Adobe Photoshop, could likely not exist without a GUI. In these cases, it is the GUI that enables the creativity and all interaction. Furthermore, developers must design these GUIs to be open-ended, providing a set of tools that artists can use to accomplish a wide array of goals.

As tools such as ChatGPT become commonplace, I have begun to wonder how interactivity will adapt as AI agents begin to interact with computers for us, and we interact with these models through natural language or even speech. I suspect we are again at the precipice of a revolution in human-computer interaction (HCI), where our interactions with computers change profoundly.

2. What ideas do you have for improving the degree of user interaction in your p5 sketches?

In regard to interactivity with p5.js, the design comparison from before again comes to mind: some GUIs are designed by developers to enable users to accomplish a very specific set of goals (e.g., Microsoft Word, or Grammarly). In these applications, users are relatively limited to a specific way of doing things, and features that are not implemented in the GUI are likely not possible. Other creative applications (e.g., Photoshop) provide users a set of tools that enable open-ended problem solving, where the developers likely do not foresee many of the ways their tools will be applied. I would like to experiment more with the latter–enabling users to inject their own creativity into the system, and sparking a higher degree of interactivity.

Week 3: OOP Light Cycles

Overview

My assignment this week builds upon my concept from last week, introducing objects that interact with each other–inspired by the light cycles seen in the 1982 film Tron. In my p5.js work, five light cycles are drawn on the board, each with a light trail that follows behind it. Upon running into a light trail–whether its own or another cycle’s–the cycle will disappear and respawn elsewhere.

Code a Light Cycle arcade minigame | Wireframe #47 - Raspberry Pi

Discussion

One of the interesting coding patterns I used in this work is a single global array that tracks all trails, rather than a trail array inside each cycle object. This is helpful in two key ways: 1) it makes it simple to check whether a cycle has crashed into a trail, and 2) it optimizes for spatial locality in cache memory, providing better performance.

I named the array of trails garbageCollector as a reference to the garbage collection mechanisms in programming languages, which analyze memory usage and automatically clean up unused memory by tracking references. Similarly, we decrement each point’s counter on every call to draw() until it is negative, at which point we remove it from the list.

A problem I ran into was figuring out how to efficiently check for collisions. While I could have used linear algebra to detect whether two trail segments actually intersect, I instead opted to align all light cycles to multiples of 10, and simply test whether the latest point is identical to any point currently in the garbage collector.
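In practice, that check is just a coordinate comparison against the stored trail points; a simplified version of the idea (with the point fields assumed to match the trail objects below) looks like this:

// Simplified version of the grid-aligned collision check; field names assumed.
function hasCrashed(cycleX, cycleY, garbageCollector) {
  // Because every cycle moves on a grid of 10-pixel steps, a crash is simply
  // the new head position matching the endpoint of an existing trail segment.
  for (const point of garbageCollector) {
    if (cycleX === point.x2 && cycleY === point.y2) {
      return true;
    }
  }
  return false;
}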

The trail code in reference is shown below:

let garbageCollector = [];

// ....
function draw() {
  // ...

  // Tail management
  let nextDel = 0;
  for (let i = 0; i < garbageCollector.length; i++) {
    // decrement the point value
    let point = garbageCollector[i];
    point.counter -= LINE_DECREMENT;

    // First draw black to erase
    stroke(0);
    strokeWeight(2);
    line(point.x1, point.y1, point.x2, point.y2);
    strokeWeight(1);

    // Draw the colored line with opacity
    let c = color(point.color);
    c.setAlpha(point.counter);
    stroke(c);
    line(point.x1, point.y1, point.x2, point.y2);

    // garbage collect
    if (point.counter <= 0) {
      nextDel = i;
    }
  }

  // trim negative counters
  if (nextDel > 0) {
    garbageCollector.splice(0, nextDel);
  }
}

 

Code

 

Week 2: Reading Reflection

One of the interesting ideas mentioned in the video was how order eventually arises from randomness. This was a concept that I played with in my own work: the longer my program runs, the more perfect a grid is formed. In other words, the more randomness or entropy that is introduced, the more order arises (see images below).

My work incorporates randomness by using a random value to determine where the line will draw next: move straight, turn left, or turn right. It makes this random choice every frame. As the user interacts with the work, the frame rate speeds up, increasing the amount of randomness experienced per unit of time. As the work is left alone, the randomness slows back down to its base speed.
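The core of that per-frame decision is small; a simplified version (with placeholder names and probabilities) looks like the following, where the heading either stays the same or turns 90 degrees left or right each frame:

// Simplified version of the per-frame random turn; names are placeholders.
let heading = 0; // angle in degrees; 0 = right, 90 = down, etc.

function chooseNextHeading() {
  const r = random(); // uniform value in [0, 1)
  if (r < 0.2) {
    heading -= 90;      // turn left
  } else if (r < 0.4) {
    heading += 90;      // turn right
  }
  // otherwise keep moving straight
  return heading;
}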

The balance that I attempted to achieve between total randomness and complete control is one of harmony and complementarity. By that, I mean that I control the randomness, but the randomness controls the piece. This idea also calls back to a similar one shown in the video at 33 minutes and 33 seconds, where the probability selected completely changes the piece of art that is formed.