I used a royalty-free video of fish swimming underwater to test the next-frame-prediction (NFP) model on Colab. This output is from epoch 120, and it looks pretty realistic even in some of the later frames. I believe that’s because I didn’t add any digital zoom to the input (I added it to the final video instead), so the stationary footage stayed natural looking. This is good to know going forward, and I definitely want to train an NFP model on footage with more movement in the future.
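Most of the prep for this model was just splitting the source video into sequential frames. Here’s a minimal sketch of that step with OpenCV; the filenames and the 256px frame size are assumptions for illustration, not my exact settings.

```python
# Split a source video into numbered frames for next-frame-prediction training.
# Filenames and resolution here are placeholders, not my exact settings.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("fish_underwater.mp4")  # hypothetical input file

count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    frame = cv2.resize(frame, (256, 256))  # NFP models expect a fixed frame size
    cv2.imwrite(f"frames/frame_{count:05d}.png", frame)
    count += 1

cap.release()
print(f"wrote {count} frames")
```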
latent walk of a StyleGAN2 model trained on my cyanotype dataset
I created a cyanotype dataset using the instagram-scraper tool, then narrowed the images down to just the prints themselves, some with borders and some without. I did this intentionally so that brushstrokes and the edges of the cyanotypes would appear in the frames. I learned a lot from training this model and feel it’s a great starting point for a larger project. I will either build a dataset entirely from my own cyanotypes (if I can produce a large enough number of small-scale prints) or narrow this dataset even further. A few colors besides blue and white made their way into the model; they likely would have been trained out had I kept going, but I wanted to stay within the free session I had on RunwayML at the time. Because only two colors are mainly involved, and because true cyanotypes are printed on watercolor paper, the model comes out colorful and textured. I especially like the sun prints of leaves and similar objects, so I will probably make a stricter dataset of just sun prints next time.
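If it helps anyone building a similar dataset, the prep boiled down to scraping by hashtag and then squaring the images off for StyleGAN2, which wants fixed-size square inputs. A rough sketch of that step is below; the tag, folder names, and 512px resolution are assumptions for illustration, not my exact settings.

```python
# Square-crop and resize scraped images for StyleGAN2 training.
# Images were scraped first with something like:
#   instagram-scraper --tag cyanotype --media-types image   (hypothetical tag)
from pathlib import Path
from PIL import Image

SRC, DST, SIZE = Path("scraped"), Path("dataset"), 512  # placeholder paths/size
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.jpg"):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    s = min(w, h)
    # center-crop to a square so borders/brushstrokes survive, then downscale
    img = img.crop(((w - s) // 2, (h - s) // 2, (w + s) // 2, (h + s) // 2))
    img.resize((SIZE, SIZE), Image.LANCZOS).save(DST / path.name)
```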
This week I wanted to make something focused on scale and camera movement. It’s taking me longer than expected to feel comfortable building environments and working with textures. Here I placed a large ocean alongside my keyed-out video of Santa with a laptop. I thought it’d be fun to frame the video as a picture, enlarge the scale of the frame, and then have a character run along that plane. I added a racer character who dances while the camera swivels around him, until he jumps off of the plane. Working with the animations was fun too, but it was hard to blend them into smooth motion using keyframes. I look forward to getting more comfortable in Unreal.
This week we learned how to use the Sequencer in Unreal and how to create planes with media textures for displaying video in the world. I brought in my favorite character from Mixamo to do some dancing for a virtual audience, and I wanted to build cathedral-like walls using Quixel assets as well. I had some technical hiccups this week, but I look forward to building more elaborate worlds as I get more comfortable with the software.
It took a minute to get up and running with Colab, but I’m enjoying seeing the code and understanding the process a bit better. I started with style transfers again in order to get familiar, and it dawned on me that I should start experimenting with some cyanotype prints I’ve made.
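I won’t reproduce the whole notebook here, but the core of a style transfer run looks roughly like the sketch below, using TensorFlow Hub’s Magenta arbitrary-stylization model as a stand-in for whatever a given notebook loads; the filenames are placeholders.

```python
# A minimal sketch of fast arbitrary style transfer in Colab, assuming the
# TensorFlow Hub "Magenta" stylization model rather than any specific notebook.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load_image(path, max_dim=512):
    """Load an image, cap its long side at max_dim, and add a batch axis."""
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_dim, max_dim))
    return np.array(img, dtype=np.float32)[np.newaxis, ...] / 255.0

model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)

content = load_image("cyanotype_print.jpg")  # hypothetical filenames
style = load_image("floral_pattern.jpg")

# The model takes (content, style) batches in [0, 1] and returns the
# stylized image as the first element of its output.
stylized = model(tf.constant(content), tf.constant(style))[0]
Image.fromarray(np.uint8(stylized[0].numpy() * 255)).save("output.jpg")
```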
input
style
output
input
style
output
I think the first style transfer came out better than the second, but I’m excited to keep experimenting and to make specific style inputs for my own custom cyanotype transfer. The way the foliage in the style image translates is interesting, and I’m going to start using Photoshop to create floral patterns (I find they make an interesting pattern once converted into a cyanotype print).
input
style
output
input
style
output
input
style
output
I’m very excited to keep testing models in different Colab notebooks, as well as to start developing ideas for custom content based on the results I’m already getting.
My name is Will Politan, and this is where I will be documenting the work I do for Mixed Reality Filmmaking. I am a second-year ITP student; I got my BFA in filmmaking from NYU, with a background as a cinematographer. I’m very interested in understanding these tools for the work I already do, but also in experimenting with creating 3D assets and new worlds. I took Performative Avatars last semester, so I have a basic understanding of avatar creation and Unreal Engine, but I’m excited to learn more about the production workflow.
After getting set up in RunwayML, I ran my first image, a black-and-white scene of ducks, through the Picasso model.
input – Picasso model
ducks in Batsto Lake by author
output – Picasso model
ducks in Batsto Lake run through the Picasso model in RunwayML
I was really excited to run this photograph of the Jersey Devil at Lucille’s in New Jersey through different models, especially MUNIT. I thought a menacing figure made of flowers would be a fun experiment. First I tried two other models (the photo sketch model and the Picasso model) before changing the MUNIT “style” value to generate tons of variation.
input – for all of the following outputs from different models
jersey devil at Lucille’s by author
output – photo sketch model
jersey devil run through the photo sketch model in RunwayML
output – Picasso model
jersey devil run through the Picasso model in RunwayML
output – Picasso model
jersey devil run through the Picasso model in RunwayML
output – MUNIT model, for all subsequent images
jersey devil run through the MUNIT model in RunwayML
jersey devil run through the MUNIT model in RunwayML
jersey devil run through the MUNIT model in RunwayML
jersey devil run through the MUNIT model in RunwayML
jersey devil run through the MUNIT model in RunwayML
jersey devil run through the MUNIT model in RunwayML
jersey devil run through the MUNIT model in RunwayML
jersey devil run through the MUNIT model in RunwayML
jersey devil run through the MUNIT model in RunwayML
After working with the MUNIT model I wanted to try a color photograph, so I used this beach sunset image as the input for the following outputs.
input – for the following output images
beach sunset by author
output – Kandinsky style transfer
beach sunset run through the Kandinsky style transfer model in RunwayML
output – cubist style transfer
beach sunset run through the cubist style transfer model in RunwayML
output – Hokusai style transfer
beach sunset run through the Hokusai style transfer model in RunwayML
output – Wu Guanzhong style transfer
beach sunset run through the Wu Guanzhong style transfer model in RunwayML
I really loved the look of the Hokusai style transfer and wanted to apply it to this photograph of a lake at Cedar Bridge Tavern in New Jersey. The darkest areas of the image turned into blurred light, so in the future I would lift the brightness of the input first to try to avoid that effect with this model (see the sketch below the output).
input – Hokusai style transfer
lake at Cedar Bridge Tavern by author
output – Hokusai style transfer
lake at Cedar Bridge run through the Hokusai style transfer model in RunwayML
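A minimal sketch of the brightness adjustment I have in mind, using Pillow; the filename and enhancement factor are placeholders rather than tested values.

```python
# Lift the shadows before feeding an image to the Hokusai style transfer,
# so the darkest areas don't collapse into blurred light.
from PIL import Image, ImageEnhance

img = Image.open("cedar_bridge_lake.jpg")               # hypothetical filename
brightened = ImageEnhance.Brightness(img).enhance(1.4)  # factor > 1 brightens
brightened.save("cedar_bridge_lake_brightened.jpg")
```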
Jude updated the questionnaire part of our project with a mock run-through.
cards
Irwin finished the card design, and I printed and cut the cards to begin working out how the system feels in physical space.
Card design by Irwin.
In the image below you can see that I chose virtual reality. I picked out the cards I thought I would need for a virtual reality production: investment, script writing, storyboarding, wireframing, animation, rigging, motion capture, testing, editing, marketing, and screening. I really enjoyed how the extra descriptions provide even more context for exactly what each step requires.
Actually printing out the deck was extremely helpful for getting a feel for how the system will work. While we haven’t had time to test with users yet, even in a short period I was already seeing ways we can improve. First, the use of real card suits and numbers was very nice to see (and gives the deck a dual purpose), but it might slightly take away from the productivity of the game. If we want to keep it more fun, we could keep the traditional card numbers; if we want to make it more serious, it might be better to brand the cards only by their categories.
To make diving into the brainstorming easier, we could color-code the cards by the four categories: channels, pre-production, production, and post-production. I found myself spreading out all of the cards to organize them to my liking, and it was great to see how we could start connecting the dots to visualize the process for a production. The tools available for scheduling these tasks rarely visualize them; it’s most often a to-do list or an Excel spreadsheet with a timeline. It was really nice to piece a production together using the cards.
I am proud of the work we’ve done on this project so far. I think more in-person user testing with the physical cards will provide extremely valuable feedback, and getting feedback on the categories would be very helpful; we may need to adjust how many cards are in each category to make them more useful. I also think running users through the website and the cards back to back is necessary to see how seamless that process can be.
This week we continued our research into best practices for choosing and preparing a visual story within the landscape of emerging media. We also identified the aspects of production we will include in our card deck. To do this, we broke the corresponding details down into four categories: pre-production, production, post-production, and channels.
Pre-Production
Script writing
Interview
Storyboard
Recruitment / Casting
Drafting
Character design
Financing / Investment
Location Scout
Order production insurance
Create equipment list
Production planning
Table-read
Production / Prototyping
Creation of assets
Building infrastructure
Filming / Cinematography / Photography
User research / User experience
3D modeling
Rigging
Animation
Drawing / illustration
Motion Capture
Iteration
Wireframing
Curation
Post-Production
Editing
VFX
Sound design
Color grading
Distribution
Advertising
Marketing
Review
Screening
Rating
Testing
Publishing
Channels
Traditional print media
Small print materials
Outdoor advertising
Broadcasting
Website
Videos
Movies
Apps
Social media
VR
AR
MR
Games
Physical Space
AI
These will be turned into cards with vector graphics.
Mockup design created by Irwin.
This project is partially inspired by the production book I had to create and use in my undergraduate film program. It was extremely intensive to fill out and was required to obtain student production insurance. While such documentation is not required to purchase production insurance outside of school, it is still extremely helpful to think through all of the same details. The process we are creating can be used by anyone to achieve two results: a better understanding of the project they are embarking on, and a sense of whether their current medium is best suited for it.
This culminated in our idea taking two forms: an online questionnaire and a card deck. The card deck will include all of the principles listed above, to help the producer or stakeholder plan out the various aspects of production. The questionnaire will serve as the guide for choosing which medium is likely the best fit.
Jude created a mockup of what the questionnaire will look like.
In the video you can see that this portion of the project is focused on paring the shoot down to the most important details that point toward an ideal medium.
persona
Example use case:
Jake: an independent filmmaker in his 30s who creates commercials for clients. Jake wants to take his filmmaking to the next level with social engagement that goes beyond traditional methods. He sees augmented reality as a budding industry and a great opportunity for more viewership and interaction. The problem is that the pipeline for augmented reality is quite different from that of traditional filmmaking. Also, is augmented reality going to be the best medium, or would virtual reality, mixed reality, or something else be closer to his goals?
Jake starts with our questionnaire website:
He has to answer a number of questions, ranging from how he wants the product to be experienced to what the budget is and who the target audience is.
Based on these answers, we make a “best case” suggestion, which will either solidify or potentially change his ideas up to this point. Once a decision is made on the target audience and the medium, it’s time to plan out the shoot.
Jake then breaks his shoot down into our four categories: pre-production, production, post-production, and channels. Here he can use the card deck to flesh out all of the details for the medium he chose; he picks the corresponding cards that fit the medium and plans the shoot armed with more confidence and preparedness.
We still need to do more user testing and figure out how to make the process more informative and intuitive for our stakeholders.
I wanted to use this new tool to visualize some search terms related to the topics my group is working on. We are making a tool that helps producers and storytellers work through their experience design and choose the right new medium for it, whether virtual reality, augmented reality, or mixed reality.
I started trying to connect some dots with InfraNodus, a network-visualization tool that scours the internet and draws connections between nodes and their corresponding concepts and metadata. I tried inputting and searching different terms, and it was interesting to see the connections among VR, AR, MR, and storytelling.
Some of these connections show how intertwined these new mediums are, for example in blending physical reality with the virtual, and in how computers relate to their environment. Is this the perfect environment for every new story? Definitely not, but our tool will help creators pick the right medium for the story they are telling.
The preliminary votes/responses from my group’s survey
Rani started this survey, and we’ve gotten some preliminary responses already.
The responses about social media and AR are very interesting; that combination seems to be the way to reach the largest audience for your work.
A key response here concerns AR’s ease of use. That is a big advantage, since it is more accessible for the end user; the downsides are often the quality and accuracy of the experience itself.
We will also be reaching out to experts in the field, like the AR/VR Association, to gain insights.
Stakeholders
These are the main stakeholders we are targeting for our research project.
For the last part of my time capsule, I started by sketching some potential scenes. I was having a hard time coming up with interesting revelations for whoever finds the time capsule; I was almost more interested in trying to predict what the world of 2145 would look like.
A rough sketch of potential scenes.
I’m not super confident in my drawing skills, so I started looking online for comic-creation tools to help me make something enjoyable to read. I tried this site first but found the character editor wasn’t very good, and you couldn’t edit the characters’ poses.
Then I found the website Storyboard That, which turned out to be an awesome editor with just enough customization. I began building the scenes, working around the objects and backgrounds they had available. It was honestly really fun to play around with, and they give students a 14-day free trial, so I could export everything.
I exported the final comic in multiple formats, including a GIF of the entire thing playing, which I’ve attached at the bottom.