Blog Feed

Ideation and Prototyping: Photo Essay

The photo essay assignment required that I pick one of the provided prompts in addition to the object I used in my 50 renderings assignment (a camera). The protagonist that stood out to me was the colorful butterfly. I chose it because I’ve been wanting to shoot some black and white photos recently, and I thought pairing an animated, colorful butterfly with black and white photography could make for an interesting collage of color.

Coming up with a story for a butterfly protagonist with a camera was definitely a challenge. So I started with what I had: a New York City backdrop and black and white photography.

The story

It was just like any other day for National Geographic photographer Bobby the butterfly. He was on assignment in New York City, tasked with capturing the city skylines. He had been interested in landscape photography his whole life. The scenery captivated him, and working in black and white was his preferred method of shooting. He swears by Ansel Adams’s book “The Negative.”

Setting up his first shot, he couldn’t help but be bored with the cloudless sky. There were no gradients in the mountains, only greenery and building reflections. He was uninspired, to say the least. Maybe for the first time in his career, he was doubting the format he grew up using.

While setting up a shot using the railing on a balcony, he noticed a fellow butterfly, one with colors he had never seen before. For the first time in his life, Bobby was seeing color. “I can’t believe the most beautiful butterfly I’ve ever seen is in New York!” he thought to himself. Bobby needed to see if he could ask for a photo; it was not his usual style, but for the first time he wanted to shoot in color.

Bobby switched lenses and chased after her, almost flying into foliage on the way! He was worried he had scared her off; after all, even butterflies do not like the paparazzi.

Bobby lost sight of her and diligently went back to work. He spent all day shooting landscapes, but he couldn’t get the image of the butterfly that got away out of his head. He even stopped to take a picture of a dilapidated plant. This was how Bobby felt inside: out of his comfort zone and decaying into his own sadness.

After a long day of work, Bobby decided to rest on a chair outside and drink a nectar-beer. He had finished his work for the day but still couldn’t get the other butterfly out of his mind. To his surprise, she approached him out of nowhere, stating she was camera shy and didn’t appreciate him chasing after her with his camera! Bobby learned a valuable lesson that day: sometimes the best moments are experienced with his camera in his bag.

Ideation and Prototyping: New Metaphors, New Models

For this week’s assignment I decided to use the new models exercise to imagine new ways to think about capture devices. While my thesis will be largely centered around analog photography, I also want to explore how current technologies could take capturing forward.

What exists now?

In the landscape of virtual production, we have many new tools for creation. These tools, however, are often driven by large companies following the leading productions and studios; blockbuster films like Star Wars and Avatar can invent new strategies for filmmaking and capture. Live production with LED walls has become a huge market recently as well, employing technologies like camera tracking, camera location data, and live-updating backgrounds. Apple has also introduced TrueDepth tracking and depth information for apps to use. Traditionally, photography cameras only have technology related to exposure control, automatic features, film advancing, and now Wi-Fi-enabled sharing. Would depth data bring any new features to these photos? What possibilities could be found by integrating different kinds of capture technology and data?
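To make that question concrete, here is a rough Python sketch of one feature depth data could unlock: keeping whatever is close to the camera in color while pushing the background to black and white. It assumes you already have a photo and an aligned depth map exported from a TrueDepth-style capture; the file names, the brighter-is-closer convention, and the threshold are all placeholders.

```python
# Minimal sketch: "color pop" using a depth map alongside a normal photo.
# Assumes photo.jpg and depth.png are the same resolution and already aligned;
# the depth map is treated as 8-bit grayscale where brighter = closer.
import numpy as np
from PIL import Image

photo = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)
depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32) / 255.0

# Keep anything "close" in color, push everything else to black and white.
near_mask = (depth > 0.5)[..., None]           # hypothetical threshold
gray = photo.mean(axis=2, keepdims=True)        # simple luminance stand-in
result = np.where(near_mask, photo, np.repeat(gray, 3, axis=2))

Image.fromarray(result.astype(np.uint8)).save("color_pop.jpg")
```

Even a crude mask like this hints at what depth alongside a still image could open up: subject isolation, synthetic focus, or mixing color and black and white in one frame.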

To explore this I used a card deck from Dan Lockton’s New Metaphors.

The cards that stuck with me:

  • Adaptability
  • The presence of AI & The opportune moment
  • Privacy of your data
  • The backstory of a product or service

How can these concepts and metaphors be applied to a camera or capture device? I did some sketching to think through them.

For adaptability, I was really interested in the idea of a modular-format camera. This is currently impossible, but it was a fun thought experiment. The biggest barrier is that the flange focal distance can change dramatically between formats. I think the use of bellows and interchangeable film backs could be one solution. I was inspired by the Hasselblad modular design that allows film backs to be swapped for different types of film (though not different formats or sizes).

There is something DIY and magical about the use of bellows in photography. They could help enable newer low-budget medium format cameras, as was the case in the past, and they collapse for convenience.

Performative Avatars: 3D Model Head Scan

I was able to use my phone and have my roommate scan me for this week. I tried all of the different capture apps for iPhone and found that one made for dentists, called Bellus3D, provided the best, highest-resolution mesh of my face. It was a little more expensive to export the scan, but its algorithm for wrapping the head and mesh was far superior to the others. There was no cleanup needed, whereas I found the other apps would produce messy meshes with sparser point clouds. This system actually captures a series of poses, as opposed to scanning around the head in real time while trying to keep the mesh clean. It then runs its processing on those photos and outputs a very clean model.

Ideation | Prototyping: Use, Affordances and Practice

This week we have to translate digital affordances into physical ones, and I chose the Twitch online platform. I wanted to explore the physicality of these interactions, and also how the platform could add more interfaces.

I want to look at interactions on both sides of this platform, creators and audience. Moving forward, I think many types of live entertainment will become more interactive, and much of that is built upon Twitch.

Comments also appear on the bottom left after they’ve been filtered

The main areas of interaction are on the right-hand side of the frame. There are a few ways to interact:

  • Commenting
  • Sending Emojis
  • General Donations

Infinite Scroll

A very literal scroll. My favorite part about Twitch is the way broadcasters interact with the infinite scroll of comments. This is especially satisfying because it is displayed for everyone, including the broadcaster. You know the broadcaster can see it, and I think it would be great to have it exist in physical space to solidify that. I printed out some comments to demonstrate the scroll after users type into the stream.
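To prototype the physical scroll, the comments themselves are easy to get at, since Twitch exposes chat over IRC. Below is a rough Python sketch that reads live chat and hands each message to a stand-in for a physical output (a receipt printer would be the dream); the account name, OAuth token, and channel are placeholders.

```python
# Minimal sketch: read live Twitch chat over its IRC interface and hand each
# comment to a "physical" output (print() here, standing in for e.g. a
# receipt printer driving the paper scroll).
import socket

SERVER = "irc.chat.twitch.tv"
PORT = 6667
TOKEN = "oauth:xxxxxxxx"       # placeholder OAuth chat token
NICK = "my_bot_account"        # placeholder account name
CHANNEL = "#some_channel"      # placeholder channel

sock = socket.socket()
sock.connect((SERVER, PORT))
sock.send(f"PASS {TOKEN}\r\n".encode())
sock.send(f"NICK {NICK}\r\n".encode())
sock.send(f"JOIN {CHANNEL}\r\n".encode())

buffer = ""
while True:
    buffer += sock.recv(2048).decode("utf-8", errors="ignore")
    lines = buffer.split("\r\n")
    buffer = lines.pop()           # keep any partial line for the next read
    for line in lines:
        if line.startswith("PING"):
            sock.send("PONG :tmi.twitch.tv\r\n".encode())
        elif "PRIVMSG" in line:
            user = line.split("!", 1)[0].lstrip(":")
            message = line.split(f"PRIVMSG {CHANNEL} :", 1)[-1]
            print(f"{user}: {message}")   # swap for printer/servo output
```

Swapping the print() call for a line sent to a thermal printer is all it would take to make the scroll literally pile up on the floor.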

Emojis

Emojis are a simple yet effective form of more passive viewership. Without a physical presence, this is a great way to give feedback.

The happy-or-not stands that many people see at airport terminals and DMVs would be a perfect physical way of responding with emojis. Each Twitch channel can actually create custom emojis as well, to express more.

How emojis work on Twitch

Donations

Navigating cameras and the screen

The way Twitch works now is that broadcasters will change their cameras based upon where they want the viewer to look. I think this is an interesting opportunity to add more physical interaction. The viewer should be able to toggle and change the position of the camera (digital movement of the overall view) using their own physical switches and joystick.
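As a rough sketch of what that could feel like to build, here is a minimal Python/pygame loop that polls a physical joystick and maps its buttons to hypothetical camera views; the switch_camera function and the button-to-view mapping are placeholders for whatever would actually drive the stream-side view.

```python
# Minimal sketch: poll a physical joystick with pygame and map its buttons to
# hypothetical camera views. switch_camera() is a placeholder for whatever
# would actually cut or move the broadcaster's view.
import pygame

def switch_camera(view_name):
    # Placeholder: a real prototype might message the broadcast software here.
    print(f"Switching to {view_name}")

pygame.init()
pygame.joystick.init()
stick = pygame.joystick.Joystick(0)   # assumes one joystick is plugged in

# Hypothetical button-to-view mapping.
VIEWS = {0: "wide shot", 1: "face cam", 2: "desk cam", 3: "game capture"}

clock = pygame.time.Clock()
prev = {button: False for button in VIEWS}

while True:
    pygame.event.pump()               # let pygame refresh joystick state
    for button, view in VIEWS.items():
        pressed = bool(stick.get_button(button))
        if pressed and not prev[button]:
            switch_camera(view)       # fire once on each new press
        prev[button] = pressed

    # A hard push on the X axis could nudge/pan the current view.
    x = stick.get_axis(0)
    if abs(x) > 0.8:
        print("Pan right" if x > 0 else "Pan left")

    clock.tick(30)                    # poll at roughly 30 Hz
```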

Overall, I think it would be a really interesting experience to physicalize many of the affordances of this online platform, and with more development there could be a whole host of new interactions.

Performative Avatars: Unreal Engine Short Film

For the short film assignment I wanted to play around with version 4.26 of Unreal Engine. I was very excited to find a brand new environment available for free on the marketplace that I could use as my setting for experimenting. I also wanted to bring back the old man with the cigar character.

When I tried to load in the pre-built worlds, I ran into space issues; even though everything was on my separate drive, some things were constrained to my local disk. This, along with the large textures, kept crashing the editor. Instead, I began working with an HDR background texture, which looked realistic and was fun to play with, and I started working in the sequencer to create some shots. I was definitely a little in over my head trying to animate the cine cameras, and I’ve realized 24 fps looks quite bad coming out of Unreal. I need to keep working on creating animations and spawning events.
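For the next attempt I may try scripting some of the sequencer setup instead of doing it all by hand. Below is a rough sketch using the editor’s Python scripting (assuming the Python Editor Script Plugin is enabled in 4.26); the asset name and content path are placeholders, and it just creates a Level Sequence at a chosen display rate and adds a spawnable cine camera.

```python
# Minimal sketch (run inside the Unreal editor with the Python plugin enabled):
# create a Level Sequence at a chosen display rate and add a spawnable cine camera.
import unreal

asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
sequence = asset_tools.create_asset(
    asset_name="MyShotSequence",        # placeholder asset name
    package_path="/Game/Sequences",     # placeholder content path
    asset_class=unreal.LevelSequence,
    factory=unreal.LevelSequenceFactoryNew(),
)

# Try a higher display rate than 24 fps to see if the motion reads better.
sequence.set_display_rate(unreal.FrameRate(numerator=30, denominator=1))

# Add a cine camera as a spawnable so it lives entirely inside the sequence.
camera_binding = sequence.add_spawnable_from_class(unreal.CineCameraActor)
print(f"Created {sequence.get_name()}")
```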

Ideation and Prototyping: 50 Renderings

For the 50 renderings project I decided to explore cameras: what it means to capture an image, and the explorations throughout that process.

I chose to work only with physical sketching and use of objects as I have found myself relying too much on the digital fabrication tools that I use.

To begin exploring I broke down the renderings into five camera categories.

Classic SLR style

Point and shoot

Medium format

Polaroid

Video camcorder

SLR pt. 2 (the first is mirrorless but in style)

Exploring never-used shapes

Flash placement

Optical mechanism

Shutter

Leaf shutter example

Modular system

Trigger placement

Bolex style

Solenoid shutter/mechanisms

Building blocks using parts

Gearing mechanisms

Grip placement

Performative Avatars: Skeletal Meshes

Exploring skeletal meshes was a lot of fun. I’m still getting the hang of the Unreal workflow, but the ability to start playing around with animations and simulated gravity immediately in a third-person environment is amazing.

I added two different animations that I thought could work together. I imported the old man smoking a cigar from Mixamo and the “swing to land” animation. It almost looks like Spider-Man swinging down to hit a villain. I’d love to create a trigger for the animations through the main character colliding with an object or something like that, so it’s not just a repeating cycle.

Things I want to keep exploring are changing the environment and customizing the world and controls more.

Ideation and Prototyping: Endangered Animal

The endangered animal I chose to work with is the Hawksbill Turtle. The Hawksbill Turtle is most threatened by the illegal wildlife trade: their beautiful tortoiseshell is used in many products such as jewelry and ornaments. They are also affected by the loss of nesting and feeding habitats. They are an integral part of maintaining the health of coral reefs, and they are also affected by pollution, such as the plastic fork I used in the design. I definitely need to collect more recycled materials for future prototypes, because I didn’t have much to work with.

Materials shown: six plastic germination domes, a plastic fork, and a Starbucks box
Starbucks box cut out

Sketching

I started with a Starbucks coffee box that looked almost like it already had fins. I had the plastic fork from takeout, as well as six small plastic germination domes used for an automatic planter. I liked that the dome was shaped almost like a turtle shell, although it turned out to be too small and rigid to manipulate. I started to sketch and plan how I was going to design the turtle.

I was able to bend one of the germination domes in half to create the narrow beak that the Hawksbill Turtle is also known for.

I used larger pieces of the fork to create the turtle’s eyes.

I cut out the fins in the most natural way I could with the cardboard and scissors.

Final Product

While turtles cannot fly, I enjoyed seeing my creation in the air as if it was suspended in water. I hope these beautiful creatures stick around for a long time to come.

Performative Avatars: Week 1

The two online avatar systems I chose were the South Park Studios online creator and the Pop Yourself avatar creator on Funko.com.

They were both very fun, with the South Park one having the more offensive and weirder options, while the Funko product was cute and funny. It really is a difficult task to set up a successful avatar creator. How can we create a tool with enough agency to truly represent anyone who comes across it? It is quite an impossible task, and it makes me re-examine what an avatar truly is. As the PBS video “Controlling vs. “Being” your Avatar” brings up, are we creating a character or a true representation of ourselves? It makes me think of video games like NBA 2K and others that allow you to take a photo of yourself so they can wrap those pixels around your avatar. That would be a better example of a truer representation of self. But what are the desired outcomes for your avatar? In what world will it live? The South Park avatar is very successful because all of the options are within the language of South Park characters. It was like reliving an episode of South Park, clicking through all the different options available. There was enough abstraction that I did not feel it resembled me at all. Instead I chose things like the “New Jersey” skin tone, which was a jab at bad New Jersey spray tans; as an Italian from New Jersey, I thought it was funny. This kind of character building I find more interesting.

The Funko avatar creator also had quite a bit of abstraction, as there weren’t that many options to choose from. I chose the 3D goggles as a student learning 3D environments, and being able to add the cat was a nice touch. I believe avatar creators with fewer options must get the user to buy into the environment in order for them to care about the characters they are creating. South Park was definitely the more successful builder for this reason.

Discussion Questions:

Are we more empathic towards avatars that look like us? Is this a tactic used for engagement by game designers? In what context is that ethical or unethical?