Week 10 Reading Response

A Brief Rant on the Future of Interaction Design

This article is poorly argued. It spends its time criticizing Microsoft's future vision video but ignores several factors that are decisive in the world of business.

First, the author ignores how large a role cost control plays in designing products. Although the author spends many words describing how future technology sacrifices the tactile richness of working with our hands, he/she never accounts for the cost or the maturity of tactile materials. Materials that provide this kind of tactile richness do exist, but what does it cost to produce them? What are the technical difficulties of achieving the intended effect when we integrate this technology into our day-to-day phones?

Second, the author harbors some illusions in claiming that "People choose which visions to pursue, people choose which research gets funded, people choose how they will spend their careers." This video was created by Microsoft, a multibillion-dollar technology company, and as far as I know, nobody voted on which vision the video would present. I am not aware of any public voting process that decides which research gets funded, and with current AI development, people rarely get to choose careers while balancing their satisfaction with the job. I do not know how the author arrived at these words, but he/she seems to live in a rather idealized, fictional world.

A follow-up article

The first line of the follow-up article dismantles the concerns I raised about the first article: "Yes, that's why I called it a rant, not an essay." The author treats the piece as something closer to science fiction, and in that sense everything he wrote makes sense. He explicitly frames the article as a rant meant to draw attention to how little design thinking goes into how we control our devices. However, I disagree with his opinion on brain interfaces. I believe a brain interface, if possible, would be the most important invention in human history. Many of the horrible decisions and actions in human history stem from the limits of our brains' processing power; if there were a way to connect our brains to computers and hugely improve memory and computation speed, I believe it would give us a chance to build a better society.

Week 9 Two Switches

Idea:

Since the assignment asks for two types of control, and I had already built a switch that uses a photoresistor to control the circuit, I added a second switch with a basic on/off pushbutton.

Schematic:

 

It is a simple schematic: A0 reads the LDR values, and the code changes the built-in LED (D13) accordingly.

Then I use D2 to read the switch state and change how the LED on D9 behaves.

Code:

if (value <= lightThreshold) {
  digitalWrite(ledPin, HIGH);   // Dark → LED ON
} else {
  digitalWrite(ledPin, LOW);    // Bright → LED OFF
}

This is the part of the code where I control the light based on the A0 values.

 

if (reading != lastButtonState) {
    lastDebounceTime = millis(); // reset timer if the raw reading changed
  }

  if ((millis() - lastDebounceTime) > debounceDelay) {
    // The reading has been stable long enough to trust it
    if (reading != buttonState) {
      buttonState = reading;
      // Toggle on a debounced press (LOW because of INPUT_PULLUP)
      if (buttonState == LOW) {
        ledState = !ledState;
      }
    }
  }

  lastButtonState = reading; // remember the raw reading for the next loop

This is the part of the code that toggles the LED state.
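The same debounce-and-toggle pattern can be sketched in plain JavaScript so it can be tested without a board. This is my own reconstruction for illustration (the variable names are mine): `now` stands in for Arduino's `millis()`, and a pressed `INPUT_PULLUP` button reads LOW, i.e. `false` here.

```javascript
// Hedged sketch of millis()-based debouncing: a reading only counts once it
// has stayed unchanged for longer than debounceDelay milliseconds.
function createDebouncer(debounceDelay = 50) {
  let lastReading = true;     // raw reading from the previous update (HIGH = not pressed)
  let buttonState = true;     // debounced, trusted button state
  let lastDebounceTime = 0;
  let ledState = false;

  return function update(reading, now) {
    if (reading !== lastReading) {
      lastDebounceTime = now;          // reading changed: restart the quiet-period timer
    }
    if (now - lastDebounceTime > debounceDelay) {
      // reading has been stable long enough to trust it
      if (reading !== buttonState) {
        buttonState = reading;
        if (!buttonState) ledState = !ledState; // toggle on a debounced press (LOW)
      }
    }
    lastReading = reading;
    return ledState;
  };
}
```

Feeding it a noisy sequence of readings with timestamps shows why the timer matters: brief bounces keep resetting `lastDebounceTime`, so only a press held stable past the delay flips the LED.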

The lightThreshold value was determined experimentally by printing out the A0 value while covering and uncovering the LDR.

Serial.print("Light level: ");
Serial.println(value);

 

Reflection:

I still need more practice wiring the board, as I am not yet familiar with how to lay out components to keep the wiring clean and tidy. I could also come up with more creative ideas for controlling the LED.

Week 8 Unusual Switch

Idea:

Since the prompt asks for a switch operated with the human body (but not the hands), I instantly came up with the idea of using a photoresistor to control the circuit. Because a photoresistor reacts to light intensity, essentially any body part can control the system as long as it covers or uncovers the photoresistor.

Schematic:

It is a simple schematic: A0 reads the LDR values, and the code changes the built-in LED (D13) accordingly.

Code:

if (value <= lightThreshold) {
  digitalWrite(ledPin, HIGH);   // Dark → LED ON
} else {
  digitalWrite(ledPin, LOW);    // Bright → LED OFF
}

This is the part of the code where I control the light based on the A0 values.

The lightThreshold value was determined experimentally by printing out the A0 value while covering and uncovering the LDR.

Serial.print("Light level: ");
Serial.println(value);
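The threshold switch can also be sketched in plain JavaScript. As a possible refinement (my assumption, not something in my Arduino sketch), using two thresholds (hysteresis) keeps the LED from flickering when the reading hovers right at a single cutoff; the 450/550 values below are placeholders standing in for readings calibrated over serial.

```javascript
// Hedged sketch: an LDR switch with hysteresis. The LED turns ON only when
// the reading is clearly dark and OFF only when it is clearly bright;
// readings in between keep the previous state, so no flicker at the edge.
function createLightSwitch(onBelow = 450, offAbove = 550) {
  let ledOn = false;
  return function update(value) {
    if (value <= onBelow) ledOn = true;        // clearly dark → LED ON
    else if (value >= offAbove) ledOn = false; // clearly bright → LED OFF
    return ledOn;                              // in-between: hold last state
  };
}
```

With a single threshold, a reading oscillating between 499 and 501 would blink the LED; with the band above, it simply holds its last state.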

 

Reflection:

It took me longer than expected to complete this, as I am not yet familiar enough with working on the board. Nonetheless, it was a fun experience experimenting with the hardware and the software at the same time. Future improvements would center on controlling components other than the built-in LED and improving the circuit design.

Week 9 Reading Response

Physical Computing’s Greatest hits and misses

This blog lists a number of interesting ways of interacting with an art piece. It reminds me of a reading from previous weeks that tried to redefine what interactivity means for a designed system. From that perspective, these designs offer ideas for input beyond the usual keyboard text: hands, eyes, or even our entire bodies are powerful enough tools to provide information that alters the system's output. The system's response can vary just as much; in this blog I saw responses in the form of sound, image, and combinations of both.

Making Interactive Art: Set the Stage, Then Shut Up and Listen

This blog suggests that interactive artists let the audience dictate the interactive experience. I partially agree. I agree that the art should contain hinted guidance that leads the audience to interact with it. However, I do not agree that artists should stay completely silent throughout the process. If audiences are not attentive and miss some of the hints, there should be a message, from either the device or the artist, that guides them back on track toward the intended experience. A well-designed system should indeed be largely self-explanatory, but for a sufficiently complex system, visual hints alone are not informative enough to guide the intended experience.

Week 8 Reading Response

Norman,“Emotion & Design: Attractive things work better”

The reading provides a new perspective on aesthetic design: rather than serving only a "beauty" function, it increases people's attention and positive emotions when interacting with a design, which in turn can improve their problem solving with the product. I strongly agree with this perspective. For example, when computers were first invented, they were dominated by command-line interfaces, which kept the majority of people from using such advanced systems. Later, designers at Apple and Microsoft recognized this problem and each built systems centered on graphical interfaces. Today, virtually all systems inherit that idea, and people interact heavily with these better-designed systems.

Her Code Got Humans on the Moon

Beyond being impressed by Margaret Hamilton's great work and the strong spirit with which she overcame adversity, I am particularly interested in the anecdote where Hamilton warned that the prelaunch program P01 could be triggered by mistake during flight, and her leadership refused to add error-checking to the software. Even though the astronauts were extremely well trained, they still triggered exactly that error during a real mission. This story reminds me of the importance of defensive, error-preventing code. Even if a failure mode seems too stupid to ever happen when a program is first developed, people can really end up in that "stupid" situation. Luckily, in this mission the problem was resolved, but history offers numerous examples of a minor computer error leading to billions in losses.

Midterm

  • Embed or link to your sketch (e.g. link to the sketch in fullscreen mode)
  • Include code snippets and one or more images
  • Describe the overall concept of your project (1-2 paragraphs)
  • Describe how your project works and what parts you’re proud of (e.g. good technical decisions, good game design) (2-3 paragraphs)
  • Describe some areas for improvement and problems that you ran into (resolved or otherwise) (1-2 paragraphs)

This is the final output:

Conceptualization:

For this project, I decided to create a design that involves interactivity with sound. When I looked at how musicians create their music, I noticed they always work with "soundtracks" that can be dragged around to edit the songs they produce. So, for this midterm project, I decided to build on that concept.

However, it would be too technical, and not very artistic, if I simply recreated a soundtrack control system. To add aesthetics to the design and to simplify the system, I decided to create a visual system similar to this picture.

With different colors representing different sounds, the user can drag the gridlines to increase or decrease the length of a given sound. By assigning colored blocks to specific sounds, I reduce the difficulty of choosing which sounds are available to the soundtrack. I also want the user to be able to click a cell inside the grid and change its color (and thus its sound). Finally, there should be a play button that plays the sound blocks once the user finishes designing them.

How the program works:

There are two pages in total: the first page lets users read instructions so they grasp how the program works, and the second page lets them interact freely and creatively.

 

When users adjust the colors of the grid, the "music" created from the sound blocks changes. Users can also control the length of each block. On top of that, I added an adjustable tempo feature to control the total length of the piece.
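The mapping from gridline positions to sound lengths can be sketched as follows. This is my own reconstruction for illustration, not the project's actual code: the names `blockDurations` and `totalSeconds` are made up, and I assume each row's draggable handle x-positions are stored in an array, bracketed by the grid's left and right edges.

```javascript
// Hedged sketch: convert the x-positions of one row's draggable handles
// into per-block durations, so a wider block sounds for longer and the
// whole row always lasts totalSeconds (which the tempo control scales).
function blockDurations(gridX, gridW, handleXs, totalSeconds) {
  const xs = [gridX, ...handleXs, gridX + gridW]; // block boundaries, left to right
  const durations = [];
  for (let c = 0; c < xs.length - 1; c++) {
    // duration is proportional to the block's share of the row width
    durations.push(((xs[c + 1] - xs[c]) / gridW) * totalSeconds);
  }
  return durations;
}
```

For example, in a 100-pixel row with handles at x = 25 and x = 50 and a 4-second tempo, the three blocks get 1, 1, and 2 seconds: the durations always sum to the row's total length.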

The code that I am proud of is the rebuild function. It is the core of the program: as the user interacts with the grid, the sound is altered based on how the entire grid is rebuilt.

function rebuildBlocks() {
  // Precompute row boundaries (equal height rows)
  const ys = [];
  for (let r = 0; r <= N_ROWS; r++) {
    ys.push(gridY + (r * gridH / N_ROWS));
  }

  blocks = [];
  for (let r = 0; r < N_ROWS; r++) {
    const row = [];
    const xs = [gridX, ...handles[r], gridX + gridW];

    for (let c = 0; c < N_COLS; c++) {
      const x0 = xs[c], x1 = xs[c + 1];
      const y0 = ys[r], y1 = ys[r + 1];

      // Aesthetic default color pattern:
      // Use a repeating gradient based on row/column indices.
      let baseState;
      const pattern = (r + c) % 5;
      switch (pattern) {
        case 0: baseState = 0; break; // blue
        case 1: baseState = 1; break; // red
        case 2: baseState = 2; break; // yellow
        case 3: baseState = 3; break; // green
        case 4: baseState = 4; break; // grey (silent)
      }

      // Slight random variation for natural look
      if (random() < 0.15) {
        baseState = floor(random(0, 5));
      }

      row.push(new Block(x0, y0, x1 - x0, y1 - y0, baseState, r, c));
    }
    blocks.push(row);
  }
}

In the future, I believe it would be good practice to improve the aesthetics of the user interface; additionally, I could add several sound-effect blocks so users could alter the overall style of the music.

Week 5 Reading Response

Prompt:

What are some of the ways that computer vision differs from human vision? What are some techniques we can use to help the computer see / track what we’re interested in? How do you think computer vision’s capacity for tracking and surveillance affects its use in interactive art?

Response:

From my understanding, combined with the reading's interpretation, computer vision differs from human vision in that human vision is essentially built into our brains and takes far less energy and time to train to recognize objects. Computer vision systems, by contrast, not only need numerous examples to learn a concept but also consume huge amounts of energy. Moreover, while learning to recognize, computer algorithms often "cut" pictures into small pieces and train on those, whereas humans learn to recognize things as a whole.

Techniques that help a computer see and track include frame differencing, background subtraction, brightness thresholding, and simple object tracking. I believe a computer's capacity for tracking and surveillance can make interactive art more interactive. For example, if I want to design background music or scenery for a dancer, and I want the background to change according to the dancer's movements, I need computer vision to compute parameters that can feed algorithms which in turn produce shapes, sounds, and so on.
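Frame differencing, the first technique listed, is simple enough to sketch in a few lines. This is my own illustration, not code from the reading: it compares two grayscale frames, stored as flat arrays of 0-255 brightness values, and reports what fraction of the pixels changed noticeably between them, which is a crude proxy for how much motion (say, a dancer) the camera saw.

```javascript
// Hedged sketch of frame differencing: count pixels whose brightness
// changed by more than `threshold` between consecutive frames, then
// return the changed fraction (0 = still scene, 1 = everything moved).
function motionAmount(prevFrame, currFrame, threshold = 30) {
  let changed = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > threshold) changed++;
  }
  return changed / currFrame.length;
}
```

In a real sketch this value could drive anything from background color to playback speed; the threshold exists so that camera noise does not register as motion.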

 

Week 5 Midterm Progress

Concept:

For this project, I decided to create a design that involves interactivity with sound. When I looked at how musicians create their music, I noticed they always work with "soundtracks" that can be dragged around to edit the songs they produce. So, for this midterm project, I decided to build on that concept.

However, it would be too technical, and not very artistic, if I simply recreated a soundtrack control system. To add aesthetics to the design and to simplify the system, I decided to create a visual system similar to this picture.

With different colors representing different sounds, the user can drag the gridlines to increase or decrease the length of a given sound. By assigning colored blocks to specific sounds, I reduce the difficulty of choosing which sounds are available to the soundtrack. I also want the user to be able to click a cell inside the grid and change its color (and thus its sound). Finally, there should be a play button that plays the sound blocks once the user finishes designing them.

Complex parts:

I always insist on making my designs graphically dominant; ideally, no words appear in the system at all. At the same time, I want users to discover on their own that they can drag the gridlines to alter the duration of a given sound. The operation of changing a cell's color should likewise be very intuitive.

Design:

There should be a class for blocks, since the user can alter both the color and the width of each cell in the grid.

First-stage work (press f to enter full screen):

As displayed above, this version is extremely crude.

There are several improvements I could make in the coming week:

  1. The entire interface should be more user-friendly; right now it's just ugly.
  2. It should be a 5×5 grid instead of a 1×5 grid.
  3. The gridline-dragging interaction should be designed more beautifully.
  4. There should be no written instructions.

However, this version works right now, and I can already play with the sounds to make some simple music.

 

Week 4 Coding Assignment

Concept:

When I see the words "generative text," I immediately think of the code-rain effect from the movie The Matrix.

(screenshot of the code rain visual effect in the movie matrix)

I intend to recreate this effect using p5.js.

Code that I am proud of:

To recreate this effect, there are at least two types of objects we want to define: first, the "word streams" that fall down the screen; second, the individual characters within them. Once these two classes are written, the rest of the setup is easy.

draw() {
    // Slight green variance between streams
    const g = map(sin(this.colorPhase), -1, 1, 180, 255);
    if (this.isHead) {
      fill(180, g, 180); // brighter/whiter head
    } else {
      fill(120, g, 120);
    }
    text(String(this.value), this.x * this.glyphSize, this.y * this.glyphSize);
  }

When writing the matrix symbol class, I was proud of these lines of code. Using very simple mathematics, I achieved the effect of a green variance across the different streams.
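To see the idea outside the sketch, here is a hedged standalone version: I re-implement p5's `map()` in plain JavaScript so the snippet runs on its own. Each stream's phase goes through `sin()` and is remapped from [-1, 1] to a green channel value in [180, 255], which is where the per-stream shading comes from.

```javascript
// Plain-JS re-implementation of p5's map(): linearly rescale `value`
// from the range [inMin, inMax] into [outMin, outMax].
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Green channel for one stream: the phase drifts over time, so sin()
// sweeps [-1, 1] and the green value sweeps [180, 255].
function streamGreen(colorPhase) {
  return map(Math.sin(colorPhase), -1, 1, 180, 255);
}
```

Because `sin()` is periodic, each stream's green value oscillates smoothly between the two bounds instead of jumping, which is what makes the variance look natural.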

Output:

Reflection:

After reading the design post, I feel that if I were to add more design to this animation, I would try an effect where swiping the mouse across the screen changes the length of the streams, or highlights the stream nearest to the mouse.

 

Week 4 Reading Response

Prompt:

  • What’s something (not mentioned in the reading) that drives you crazy and how could it be improved?
  • How can you apply some of the author’s principles of design to interactive media?

Response:

One of the inventions that drives me crazy on a weekly basis is the washing machine. Whenever I try to do laundry, I am confused by the multiple options provided on the panel.

As demonstrated by the picture above, I am confused by the information presented. What is "easy care"? How does it differ from "skin care"? When should I choose easy care? When choosing the temperature, how will different temperatures affect the wash? As a result, I have to search multiple websites for answers. To improve this process, I believe the machine could be more interactive: for example, a display screen could ask users what type of clothes they are washing, how long they are willing to wait, and so on. To keep the machine from asking repeated questions, the screen could offer a default option once a user has found their best washing mode.

I want to focus on human-centered design (HCD) in my future interactive media work. I have always admired how Steve Jobs designed the iPad: he successfully paired touch-screen technology with very intuitive human interaction, famously noting that a six-year-old could start playing games on an iPad without any instructions (visual, textual, etc.) (https://www.forbes.com/sites/michaelnoer/2010/09/08/the-stable-boy-and-the-ipad/). Everything should be intuitive, and the audience should receive very clear feedback after they interact.