Assignment 5 – Late Night

Concept

The unusual switch I decided to create turned out to be less unusual than I anticipated. I built it around that awkward experience when your roommate turns on the light in the middle of the night. I used an Arduino to power an LED connected to a photoresistor. The photoresistor's resistance drops as it is exposed to more intense light, and that sudden drop is what trips the switch.

 

Ideas for Future Improvement

I believe it would have been more fun with some form of text display or a buzzing sound, but a buzzer going off in the middle of the night isn’t a good idea.

Midterm Project – ArtfulMotion: A Digital Canvas of Creativity

Link to edit

Images from Sketch

 

Concept

ArtfulMotion is an innovative fusion of interactive art creation and musical immersion, designed as a captivating digital canvas for users to unleash their creativity. Grounded in the p5.js framework, the program offers a dynamic platform where art and music coalesce seamlessly. A richly textured backdrop, comprising the fifth print from Casey Reas’s Tissue prints, sets the visual stage, creating an aesthetic foundation for users to craft their masterpieces. The harmonious integration of various audio tracks and intuitive music controls allows users to select a musical score that perfectly complements their artistic journey, cultivating a multi-sensory and immersive experience.

The project pushes the boundaries of interaction with the implementation of machine learning through ml5.js. Leveraging the handpose model, users can shape their art through real-time hand gestures, offering a tangible link between the digital canvas and the physical world. A diverse array of artistic modes empowers users to explore varied styles and tools, encouraging experimentation and artistic expression. The user-friendly interface facilitates smooth navigation through different sections, while responsive design ensures the project adapts flawlessly to different screen sizes and resolutions. This program aspires to inspire creativity, fostering an environment where art and music converge to create a captivating and enjoyable artistic experience.

 

Implementation

In developing this project, I adopted a systematic approach that involved several key steps. First, I conducted an initial assessment to distinguish between frames containing a hand and those without. Subsequently, I delved into verifying the accuracy of the model’s key points. To ensure that these points corresponded correctly to the hand’s landmarks, I employed green ellipses as visual indicators. During this process, I identified a mirroring issue, which arose from my attempt to mirror the webcam feed. To address this, I placed the drawing function for the points within the push() and pop() functions, effectively correcting the mirroring problem.
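Conceptually, the fix works because scale(-1, 1) after translate(width, 0) flips every x-coordinate across the canvas: anything drawn at x appears at width - x. That relationship can be checked outside p5.js with a small stand-in (mirrorX is a hypothetical helper for illustration, not part of the project code):

```javascript
// Hypothetical helper illustrating the mirror transform: after
// translate(width, 0) and scale(-1, 1), a shape drawn at x shows up
// at width - x on screen.
function mirrorX(x, width) {
  return width - x;
}

// A keypoint at x = 100 on a 640-px-wide canvas lands at 540 once
// mirrored, which is why points drawn outside the transform missed
// the hand.
console.log(mirrorX(100, 640)); // 540
```

Drawing the keypoints between the same push()/pop() pair applies this flip to them as well, so they line up with the mirrored feed.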

A pivotal element of my approach was the creation of a dedicated Point class, designed to render the model’s points onto the screen with precision. I seamlessly integrated this class into the existing sketch responsible for drawing the hand’s landmarks. I also fine-tuned the drawKeyPoint() function to generate points that were stored in an array, allowing for streamlined rendering on the canvas.

For the project’s second mode, I took a different artistic approach by implementing curveVertex() in the draw function of the Point class. This choice lent a unique aesthetic to this mode, differentiating it from the first. Furthermore, I embarked on the design of the program’s landing page and other sub-interfaces. These elements were connected through functions I defined in separate JavaScript files. The collective result of these steps was a visually engaging project, which not only translated logic into code but also integrated modularity, object-oriented programming, event-driven programming, and image handling.

[Sketch of planning phase]

Highlight of Code

 The draw() function plays a pivotal role in the code, as it manages the application’s state and navigation. It renders different pages, such as the menu, instructions, color selection, mode selection, and the art creation canvas, by utilizing a switch statement to determine the current state. It also incorporates a back button for easy navigation between pages, excluding the menu and art creation canvas. Additionally, the function is responsible for rendering music control buttons, allowing users to adjust the soundtrack as they create art. The code’s modularity and clear separation of rendering logic for distinct states contribute to improved code organization and maintenance.

function draw() {
  
  // determines which page to display
  switch (state) {
    case "menu":
      drawMenu();
      break;

    case "instructions":
      drawInstructions();
      break;
      
    case "select_colors":
      drawSelectColors();
      break;

    case "select_mode":
      drawSelectMode();
      break;
      
    case "create_art":
      drawArt();
      break;      
  }
  
  // adds back button if on any other page except menu and main sketch
  if (state != "menu" && state != "create_art") {
    drawBackButton();
  }
  
  // draws music control buttons
  drawMusicControls();
}

One of the primary technical challenges encountered in the project was related to resizing the video feed within the drawArt() function while preserving its original aspect ratio and ensuring it was correctly positioned. The challenge involved intricate mathematical calculations to determine the new dimensions and position for the scaled video feed. Additionally, it required addressing the mirroring of the feed to ensure that user interactions remained intuitive and natural. The process demanded rigorous testing and iterative adjustments to strike the right balance and achieve the desired outcome, ensuring that the video feed displayed in the intended location on the canvas while preserving its correct proportions.

function drawArt() {

  // scaling video to avoid distortion
  let aspectRatio = video_feed.height / video_feed.width;

  // mirroring feed from the webcam
  // -- beginning of transformation
  push();
  translate(width, 0);
  scale(-1, 1);
  if (drawing_flag) {
    loadKeyPoints();
    drawKeyPoints();
  }
  image(video_feed, width - (width / videoScale), 0,
        width / videoScale, (width / videoScale) * aspectRatio);
  pop();
  // -- ending of transformation

  // print button
  updatePrintButton();

  // end button
  createEndButton();
}
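The aspect-ratio arithmetic in drawArt() can be isolated into a plain function for testing. Here, scaledSize and its parameters are illustrative stand-ins for the sketch's globals, not project code:

```javascript
// Hypothetical sketch of the scaling arithmetic: fit the video feed
// into the canvas while preserving its original aspect ratio.
function scaledSize(feedWidth, feedHeight, canvasWidth, videoScale) {
  const aspectRatio = feedHeight / feedWidth;
  const w = canvasWidth / videoScale;      // scaled width
  const h = w * aspectRatio;               // height follows the ratio
  const x = canvasWidth - w;               // right-aligned placement,
                                           // matching the sketch
  return { x, w, h };
}

// e.g. a 640x480 feed on an 800-px canvas with videoScale = 4:
console.log(scaledSize(640, 480, 800, 4)); // { x: 600, w: 200, h: 150 }
```

Deriving the height from the scaled width, rather than scaling both dimensions independently, is what prevents the distortion described above.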

 

Areas for Improvement

In future iterations of ArtfulMotion, key considerations include implementing real-time collaboration, providing an intuitive and secure environment for multiple users to collectively create art; incorporating interactive art elements, enabling users to seamlessly add animations, dynamic effects, or physics simulations, with a focus on user-friendliness and real-time previewing; and achieving music and art synchronization, allowing users to link music parameters to visual elements for a multimedia experience. Furthermore, the project could offer customizable brushes and visual effects, giving users the tools to add unique artistic touches to their creations through an accessible interface that balances customization and usability. These enhancements have the potential to expand the project’s creative horizons and provide users with a more versatile and engaging platform for artistic expression.

 

Midterm Progress Report 2 – ArtfulMotion: A Digital Canvas of Creativity

Initially, my plan was to create an engaging game by incorporating various elements, including sprites, background parallax scrolling, object-oriented programming (OOP), and TensorFlow.js integration for character control through either speech recognition or body motion detection.

However, for several reasons, I have changed my midterm project idea. The game I was initially going to create would have likely been a remake of an existing game, and it didn’t sound very authentic. My goal for taking this class is to challenge myself creatively, and I gained valuable insights during Thursday’s class, which greatly influenced the ideas I’m going to implement in my project. The part I was missing was probably deciding which machine learning model to use. After observing Professor Aya’s demonstration of the poseNet model in class, my project’s direction became clearer. I have transitioned from creating a game to crafting a digital art piece.

As I write this report on October 7 at 4:29 PM, I have been experimenting with the handpose model from the ml5 library. Handpose is a machine-learning model that enables palm detection and hand-skeleton finger tracking in the browser. It can detect one hand at a time, providing 21 3D hand key points that describe important palm and finger locations.

I took a systematic approach, first checking the results when a hand is present in the frame and when it isn’t.

 

My next step was to verify the accuracy of the points obtained from the model. I drew green ellipses using the model’s points to ensure they corresponded to the correct locations on the hand. I noticed that the points were mirrored, which was a result of my attempt to mirror the webcam feed.

I resolved this issue by placing the drawing function for the points between the push() and pop() functions I used to mirror the feed.

I also discovered that the object returned from the prediction included a bounding box for the hand. I drew out the box to observe how it was affected by the hand’s movements. I plan to use the values returned in topLeft and bottomRight to control the volume of the soundtrack I intend to use in the application.
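A possible shape for that volume control, sketched under the assumption that the prediction's bounding box exposes topLeft and bottomRight as [x, y] arrays (boxToVolume is a hypothetical helper, not the project's actual code):

```javascript
// Hypothetical sketch: derive a 0-1 volume from the width of the
// hand's bounding box, assuming the handpose boundingBox shape
// { topLeft: [x, y], bottomRight: [x, y] }.
function boxToVolume(boundingBox, canvasWidth) {
  const [x1] = boundingBox.topLeft;
  const [x2] = boundingBox.bottomRight;
  const boxWidth = Math.abs(x2 - x1);
  // a wider box (hand closer to the camera) means louder playback,
  // clamped to the 0-1 range that p5.sound's setVolume() expects
  return Math.min(Math.max(boxWidth / canvasWidth, 0), 1);
}

console.log(boxToVolume({ topLeft: [100, 50], bottomRight: [420, 300] }, 640)); // 0.5
```

In the sketch, a value like this could be passed to the soundtrack's setVolume() on each frame.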

I have also spent time brainstorming how to utilize the information from the model to create the piece. The relevant information I receive from the model includes landmarks, bounding box, and handInViewConfidence. I am contemplating whether to compute a single point from the model’s points or to utilize all the points to create the piece. To make a decision, I have decided to test both approaches to determine which produces the best result.

In light of this, I created a new sketch to plan how to utilize the information from the model. In my first attempt, I created a Point class that takes x, y, and z coordinates, along with the handInViewConfidence. The x, y, and z coordinates are mapped to values between 0 and 255, while the handInViewConfidence is mapped to a value between 90 and 200 (these values are arbitrary). All this information is used to create two colors, which are linearly interpolated to generate a final color.
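A minimal sketch of that mapping, using plain-JS equivalents of p5's map() and lerp(). The 0–255 and 90–200 ranges come from the description above; the specific input values and channel assignment are illustrative:

```javascript
// Plain-JS equivalents of p5.js's map() and lerp().
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// A keypoint's x on a 640-px canvas mapped into a 0-255 color channel:
const channel = map(320, 0, 640, 0, 255); // 127.5

// handInViewConfidence (0-1) mapped into the arbitrary 90-200 range:
const confidence = map(0.9, 0, 1, 90, 200);

// the two values can then be blended, as lerpColor() does per channel:
console.log(lerp(channel, confidence, 0.5));
```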

After creating the sketch for the Point class, I incorporated it into the existing sketch for drawing the landmarks on the hand. I adjusted the drawKeyPoint() function to create points that were added to an array of points. The point objects were then drawn on the canvas from the array.

// A function to create points for the detected keypoints
function loadKeyPoints() {
  for (let i = 0; i < hand.length; i += 1) {
    const prediction = hand[i];
    for (let j = 0; j < prediction.landmarks.length; j += 1) {
      const keypoint = prediction.landmarks[j];
      points.push(new Point(keypoint[0], keypoint[1],
                            keypoint[2], prediction.handInViewConfidence));
    }
  }
}

 


I also worked on creating different versions of the sketch. For the second version I created, I used curveVertex() instead of point() in the draw function of the Point class to see how the piece would turn out. I liked the outcome, so I decided to include it as a different mode in the program.


In my efforts to make the sketch more interactive, I have also been attempting to use the SoundClassification model from the ml5 library. I tried the pre-trained “SpeechCommands18w” model as well as my own custom speech commands, but neither has been accurate: I have had to repeat the same command numerous times before the model recognizes it. I am exploring alternative solutions and ways to potentially improve the model’s accuracy.

Although I am still working on the core of my project, I have begun designing the landing page and other sub-interfaces of my program. Below are sketches for some of the pages.

 

Summary

The progress I’ve made so far involves shifting my initial plan from creating a game to crafting a digital art piece. This change came after attending a class that provided valuable insights, particularly in selecting a machine learning model. I’ve been working with the handpose model, addressing issues like mirroring points and exploring the use of bounding box data for sound control.

I’m also brainstorming ways to utilize landmarks and handInViewConfidence from the model to create the art piece, testing various approaches to mapping data to colors. Additionally, I’ve been experimenting with the SoundClassification model from the ml5 library, though I’ve encountered accuracy challenges.

While the core of my project is still in progress, I’ve started designing the program’s landing page and sub-interfaces. Overall, I’ve made progress in refining my project idea and addressing technical aspects while exploring creative possibilities.

Below is a screenshot of the rough work I’ve been doing.

Midterm Progress

Concept

For my midterm project, I want to create an engaging game by incorporating various elements, including sprites, background parallax scrolling, object-oriented programming (OOP), and TensorFlow.js integration for character control through either speech recognition or body motion detection (not sure if I can pull it off).

Most of the work I’ve done so far focuses on researching 2D side-scrolling game development, sprite creation, and background parallax scrolling techniques. I have also begun to explore TensorFlow.js, gaining a foundational understanding of its capabilities. The design phase is ongoing, with a focus on brainstorming methods to seamlessly integrate TensorFlow.js into the core gameplay experience. Although I have not yet begun coding, I plan to create preliminary prototypes to evaluate and refine my TensorFlow.js-based character control concept.

Potential challenges

The integration of TensorFlow.js introduces a learning curve and the potential for technical challenges. I am committed to investing the necessary time and effort to overcome these hurdles.

Reading reflection – Week 5

The accessibility and applications of computer vision in the arts are explored in the document “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers.” It discusses how advancements in software development tools, open-source communities, processing speeds, and affordable hardware have made computer vision techniques available to novice programmers and interactive-media artists. The document covers various aspects of computer vision, including its integration into interactive art installations, elementary vision algorithms, the importance of physical environments, artist-friendly toolkits, and a student project example. However, alongside the opportunities presented by computer vision, it is crucial to address the ethical concerns associated with its implementation.

Ethical concerns arise from the use of computer vision technology. One major concern is bias and discrimination, as algorithms trained on biased or unrepresentative datasets can perpetuate societal biases and lead to discriminatory outcomes. Invasion of privacy is another pressing issue, with facial recognition capabilities enabling surveillance systems to track and monitor individuals without their consent, raising questions about privacy rights and consent. Ensuring informed consent is important, as individuals may not always be aware of the extent to which their personal data is being collected and used in computer vision applications. Security and surveillance risks exist, as misuse of computer vision technology can result in unauthorized access, breaches of cybersecurity, and potential mass surveillance. Lastly, the widespread adoption of computer vision can have social impact and inequality implications, exacerbating existing inequalities and raising concerns about job displacement and socioeconomic disparities.

Addressing these ethical concerns is crucial to ensure the responsible and equitable use of computer vision technology. It requires measures such as curating diverse and representative datasets, implementing bias mitigation techniques, safeguarding privacy rights, providing transparent communication and consent mechanisms, prioritizing security and cybersecurity best practices, and considering the broader social implications to promote equitable outcomes. By integrating ethical considerations into the development and deployment of computer vision systems, we can harness the potential of this technology while mitigating its potential harms.

Reading Reflection – Week 4

The first chapter of “Design of Everyday Things” emphasizes the importance of good design, discoverability, and understanding in creating user-friendly experiences. These key points have significant implications for computer programmers and designers, shaping their approach towards creating technology that aligns with human needs and behaviors.

One of the primary implications is the adoption of a user-centered design approach. This means placing the needs, capabilities, and behaviors of end-users at the forefront of the design process. Understanding user perspectives and involving them in the design feedback loop becomes crucial to ensure that the final product meets their expectations and requirements.

Modern technology often involves complex systems and devices. Computer programmers and designers should strive to simplify this complexity to make products and interfaces more intuitive and user-friendly. By eliminating unnecessary features and controls and focusing on providing clear signifiers (indicators of how to use an object) and instructions, they can create interfaces that match users’ mental models and reduce the learning curve associated with technology.

Discoverability, the ability for users to quickly understand what actions are possible and how to perform them, is a key aspect of effective design. Computer programmers and designers should prioritize creating intuitive interfaces with clear feedback mechanisms. Visual cues, affordances (perceived functionalities of an object), and informative feedback can guide users in their interactions and minimize confusion or frustration.

Acknowledging the role of emotions in user experiences is another important implication for computer programmers and designers. Beyond functionality, they should consider the emotional impact of their designs. By focusing on aesthetics, providing meaningful feedback, personalizing interactions, and addressing user desires and aspirations, they can create positive and enjoyable experiences that go beyond basic usability.

The book emphasizes the need to shift the focus from making humans adapt to machines to designing systems where machines serve the needs of humans. Computer programmers and designers should understand how humans naturally interact with technology and create interfaces and interactions that align with their cognitive abilities and behaviors. By considering the human element in human-machine interaction, they can create more intuitive and efficient systems.

Computer programmers and designers should also apply fundamental interaction design principles. Affordances, signifiers, constraints (limitations that guide usage), mappings (relationship between controls and their effects), and conceptual models (users’ understanding of how a system works) provide guidelines for creating effective and usable interfaces. By incorporating these principles, they can design interfaces and interactions that are intuitive, reducing the cognitive load on users and enhancing their overall experience.

Assignment 4 – Rhythm of Numbers

Concept

For this assignment, my inspiration came from Khaleeqa Garrett’s captivating work on her third assignment, specifically FlowFields. I invested time in studying her code to comprehend her approach and remix it to create something unique. Upon reviewing the prompt, I was initially uncertain about which data to visualize and how to do it. Eventually, I settled on using statistics from the music industry, focusing on three of my favorite rappers: Kendrick Lamar, J. Cole, and Drake. I gathered data from Google Trends and utilized it as parameters for Khaleeqa Garrett’s code.

During my initial attempt, the output looked nothing like my envisioned result. I only observed small dots clustered in the upper left corner of the canvas. It dawned on me that the range of values for the number of searches was limited, prompting me to employ the map() function to expand the range and cover the entire canvas.

Below, you can see some of the pieces I created by adjusting the colors representing the different artists.

Highlight of Code

The central modification I made to the existing code involved parsing data from the CSV file and linking it to the Point class. I utilized the hour component of the time for the x-coordinate of the particles and the minute component for the y-coordinate. The numerical value at the time was employed to manipulate the noiseScale within the Point class.

// goes over all entries and uses hours as x coordinate, minutes as y coordinate,
// and the number of searches as an argument for the noiseScale
for (let i = 0; i < artistes.getRowCount(); i++) {

  // time is split into hour (mapped from 0 - 24 to 0 - width) and minute
  // (mapped from 0 - 59 to 0 - height)
  particles[0].push(new Point(map(split(time[i], ':')[0], 0, 24, 0, width),
                    map(split(time[i], ':')[1], 0, 59, 0, height), kendrick_lamar[i]));
  particles[1].push(new Point(map(split(time[i], ':')[0], 0, 24, 0, width),
                    map(split(time[i], ':')[1], 0, 59, 0, height), j_cole[i]));
  particles[2].push(new Point(map(split(time[i], ':')[0], 0, 24, 0, width),
                    map(split(time[i], ':')[1], 0, 59, 0, height), drake[i]));
}

Additionally, I introduced a slider that influences how the particles move by adjusting the noiseScale within the Point class. This adds an extra layer of interactivity beyond mouse clicks.

// updating noiseScale based on value of slider
this.noiseScale = slider.value() / map(sin(this.noiseScaleArg), -1, 1, 1, 2);
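Since map(sin(θ), -1, 1, 1, 2) is equivalent to 1 + (sin(θ) + 1) / 2, the divisor oscillates between 1 and 2, so noiseScale sweeps between the slider's full value and half of it. A quick plain-JS check, with sliderValue standing in for slider.value():

```javascript
// The divisor map(sin(a), -1, 1, 1, 2) rewritten in plain JS:
const divisor = (a) => 1 + (Math.sin(a) + 1) / 2;

const sliderValue = 10; // stand-in for slider.value()
console.log(sliderValue / divisor(Math.PI / 2));  // 5  (divisor at its max, 2)
console.log(sliderValue / divisor(-Math.PI / 2)); // 10 (divisor at its min, 1)
```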

Reflection and Ideas for Future Work

I consider this piece one of my best creations in this course thus far. For future projects, I aim to enhance the interpretability of my work. Despite using data for this piece, I find it challenging to establish a direct relationship between the numbers and the data inputted into the program.

Reading Reflection – Week 3

The concept of interactivity, as defined by Chris Crawford, resonates with me. In my view, interactivity involves an ongoing dialogue between the user and the medium, where both parties actively communicate, process information, and respond to each other. It’s a collaborative effort that draws inspiration from each other’s actions and words to create a compelling experience.

While caring for a plant can be considered an interactive experience in terms of attentively responding to its needs and enjoying the rewards of a thriving home, there is a counter-example to consider. Some argue that a plant only reacts to whatever substances are provided to it and does not necessarily interact with the caregiver beyond those basic responses. In this perspective, the plant is seen as a passive recipient rather than an active participant in the interaction.

This point highlights the challenges designers face when incorporating interactivity into their work. If the only meaningful interactions can be achieved with other intelligent life forms, it raises questions about whether designers should focus on interactivity or on creating deep connections and eliciting emotional reactions to their works. From this perspective, if a designer’s work is able to emotionally resonate with the audience and foster a deep connection, it may be more satisfactory than merely aiming for a certain level of interactivity that might ultimately detract from the intended purpose.

In summary, interactivity involves active dialogue and reciprocal engagement between the user and the medium. While caring for a plant may have limited interactivity, experiences such as having a pet exemplify the dynamic, engaging nature of true interactivity. However, when it comes to computer art, it raises intriguing questions about what it would truly mean to make such art interactive.

Should interactivity in computer art be limited to basic responses, or should it aim for a deeper connection that elicits emotional reactions from the audience? Can computer art truly engage users in a meaningful dialogue, or does it fall short in replicating the richness of interactions found in human-to-human or human-to-animal relationships? These questions challenge us to reconsider the essence of interactivity in the context of computer art and explore new possibilities for creating genuine interactive experiences through artistic mediums.

Ultimately, the definition and achievement of interactivity in computer art require thoughtful exploration and experimentation. It prompts us to question our assumptions about interactivity and consider how computer art can transcend its limitations to establish meaningful connections and engage users in profound dialogues that go beyond mere surface-level interactions.

Assignment 3 – Artwork Using Object-Oriented Programming

Concept

Assignment 3 aims to utilize object-oriented programming to generate artwork. Following my work in Assignment 2, which already employed object-oriented programming, I decided to address the bug present in the previous code, which caused tiles to glitch, and introduce interactivity by allowing for tile rotation.

I determined that tile glitching resulted from the repetitive displacement and resetting of tiles. My next step was to establish a condition in which no action would occur, leaving the tile at rest. After tinkering with the code, I found the best way to resolve the glitch was to check if one of the cursor’s coordinates matched those of a tile, and if so, refrain from performing any translation.
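The guard can be sketched as a tiny predicate. Here, shouldDisplace and the tile shape are illustrative stand-ins, not the project's actual code:

```javascript
// Hypothetical sketch of the glitch fix: when the cursor shares a
// coordinate with the tile, skip the displacement and leave it at rest.
function shouldDisplace(tile, mouseX, mouseY) {
  return mouseX !== tile.x && mouseY !== tile.y;
}

console.log(shouldDisplace({ x: 50, y: 80 }, 50, 120)); // false
console.log(shouldDisplace({ x: 50, y: 80 }, 10, 20));  // true
```

Returning early instead of translating and resetting on every frame is what stops the repetitive displacement that caused the glitch.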

Subsequently, I worked on enabling tile rotation when they were within a certain range of the cursor. The idea was to use the translate() and rotate() functions to make the mouse cursor coordinates the reference for rendering and rotation. My initial attempts were unsuccessful due to a lack of consideration for the order of operations in the code.

 

Highlight of Code

The highlight of my code lies in the draw function. I utilize a flag called this.effect to assess whether the tile is within the specified distance of the cursor and employ a decision structure to determine how to display the tile. If it falls within the range, mouseX and mouseY serve as the reference points instead of the origin (0, 0). The tile is then rotated by an angle assigned when creating an object of the Tile class.

// render function for the tiles
  draw() {
    this.update();
    noStroke();
    fill(this.r, this.g, this.b, ALPHA);
    
    // decision structure to determine how tiles are rendered
    if (this.effect) {
      push();
      
      // makes coordinates of the cursor reference for drawing to canvas
      translate(mouseX, mouseY);
      rotate(this.rotate_angle);
      rect(this.x, this.y, this.breadth, this.length);
      
      pop();
      
      // setting coordinates of Tile to initial values with an offset
      this.x = this.xBeforeDisplacement + random(-60, 60);
      this.y = this.yBeforeDisplacement + random(-60, 60);
    }
    else {
      rect(this.x, this.y, this.breadth, this.length);
    }
  }

Reflection and Ideas for Future Work:

While I accomplished some of the tasks I set out to do, the end result did not meet my creative expectations, which is more a reflection of my creative challenges than issues with implementation. In future work, I intend to dedicate more time to creating pieces that aesthetically please me rather than focusing primarily on technical implementation.

 

Reading Reflection – Week 2

Casey Reas’s presentation provided a captivating exploration of the intricate relationship between randomness, computing, and art. His presentation invited us to question our fundamental assumptions about the inner workings of computing, the role of software, and how these concepts intertwine with diverse hardware platforms.

The heart of Reas’s talk in my perspective lay in his ongoing work over the past two and a half years, a collaborative project with esteemed authors. Through this project, Reas explored the 10-print algorithm outside the constraints of the Commodore 64’s hardware. This journey led him into the realm of Processing, where randomness took center stage.

One of the key takeaways was the revelation that in Processing, the entire image updates simultaneously, a stark contrast to the Commodore 64’s incremental character-based updates. This shift enabled Reas to manipulate randomness, adjust probabilities, control line thickness, and experiment with frame rates. These subtle alterations profoundly affected our perception of the generated art.

As the project progressed, Reas and his collaborators delved deeper into the creative potential of randomness. They introduced diverse color palettes, harnessed randomness to dictate the number of rows and line thickness, and unveiled a delicate balance between structure and serendipity driven by random values.

Reas’s talk culminated with a glimpse into his ambitious “Century” project. This ongoing endeavor aims to encapsulate a century of visual art by pixelating specific artists’ works and exposing the underlying algorithms. Users can interact with the piece, freezing frames and modifying parameters, providing a unique window into a century of artistic innovation.

What struck me most about Reas’s work was the juxtaposition of simplicity and complexity. His graphics, while minimalistic and seemingly primitive, encourage closer examination, revealing the intricate interplay of algorithms. “Century” serves as a testament to how random processes can yield profound insights and unique perspectives, all within the digital realm.

Casey Reas’s speech illuminated the potential of randomness in computing to generate art that defies conventions and provides new perspectives on creativity, technology, and the interplay between the two. His work stands as a testament to the limitless possibilities that arise when we explore the convergence of computing and randomness in the world of art.