Making Visual Art with GANs: Final Update

Next Frame Prediction

I used a royalty-free video of fish underwater to test the next-frame-prediction model on Colab. This output is from epoch 120, and it looks fairly realistic even in the later frames. I believe that's because I didn't add any digital zoom to the input (I added it to the final video instead), so the stationary footage stayed natural looking. That's good to know going forward, and I definitely want to train an NFP model on footage with more movement in the future.
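For reference, the kind of digital zoom I added in post can be thought of as a center crop that shrinks a little each frame, resized back to full resolution. Here's a minimal sketch of that idea with Pillow; it's just an illustration of the effect, not the exact tool I used.

```python
from PIL import Image

def zoom_frame(frame, t, total, max_zoom=1.2):
    """Center-crop a frame so it appears zoomed in by a factor that
    grows linearly from 1.0 to max_zoom over the clip, then resize
    the crop back to the original dimensions."""
    w, h = frame.size
    zoom = 1.0 + (max_zoom - 1.0) * (t / max(total - 1, 1))
    cw, ch = int(w / zoom), int(h / zoom)
    left, top = (w - cw) // 2, (h - ch) // 2
    return frame.crop((left, top, left + cw, top + ch)).resize((w, h), Image.LANCZOS)

# Example: apply a slow push-in across a list of frames
# (solid-color placeholders stand in for real video frames).
frames = [Image.new("RGB", (640, 360), "navy") for _ in range(10)]
zoomed = [zoom_frame(f, i, len(frames)) for i, f in enumerate(frames)]
```

Every output frame keeps the original dimensions, so the sequence can be re-encoded as video directly.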

latent walk of StyleGAN2 model trained on cyanotype dataset

I created a dataset of cyanotypes using the instagram-scraper tool. I narrowed the images down to just the prints, some with borders and some without; I did this intentionally so that brushstrokes and the edges of the cyanotypes would appear in the frames. I learned a lot from training this model and feel it's a great starting point for a larger project. I will either build a dataset entirely of my own cyanotypes (if I can make a large number of small-scale prints) or narrow my dataset even further. A few colors besides blue and white made their way into the model; they likely would have been trained out had I kept training, but I wanted to stay within my free session on RunwayML. The fact that only two colors are mainly involved, and that all true cyanotypes are prints on watercolor paper, helps make for a colorful and textured model. I especially like the sun prints of leaves and similar subjects, so I will probably make a stricter dataset of sun prints next time.
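The latent walk above works by interpolating between random latent vectors and rendering a frame at each intermediate point. Here's a minimal NumPy sketch of the interpolation step, assuming a 512-dimensional StyleGAN2 latent; spherical interpolation is a common choice because Gaussian latents concentrate on a hypersphere, so it tends to avoid washed-out mid-walk frames.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.
    Keeps intermediate points near the hypersphere where Gaussian
    samples concentrate, rather than cutting through the interior."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-6:  # vectors nearly parallel: fall back to linear blend
        return (1 - t) * z0 + t * z1
    return np.sin((1 - t) * omega) / so * z0 + np.sin(t * omega) / so * z1

rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)
# 60 latents for 60 frames; each would be fed to the generator
# to render one frame of the walk.
walk = [slerp(z_a, z_b, t) for t in np.linspace(0, 1, 60)]
```

Chaining several such segments end to end (A to B, B to C, ...) produces the longer looping walks you see in these videos.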

example of images in dataset

random generated images from model

used Gigapixel AI to up-res some outputs

Side by side comparison

Up-res’d output

More output images

Making Visual Art with GANs: Google Colab

It took a minute to get up and running with Colab, but I'm enjoying seeing the code and understanding the process a bit better. I started with style transfers again to get familiar, and it dawned on me that I should start experimenting with some cyanotype prints that I've made.
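For context on what these notebooks are doing under the hood: neural style transfer typically matches "style" by comparing Gram matrices of convolutional feature maps between the style image and the output. The Gram matrix itself is a small computation; here's a sketch of just that piece in NumPy, using random stand-in data rather than real VGG activations.

```python
import numpy as np

def gram_matrix(features):
    """Compute the (channels x channels) Gram matrix of a feature map
    shaped (channels, height, width). Entry (i, j) measures how
    strongly channels i and j co-activate, which is what style losses
    compare between the style image and the generated image."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)  # normalize by tensor size

# Stand-in for one layer of convolutional activations.
feats = np.random.default_rng(1).standard_normal((8, 32, 32))
g = gram_matrix(feats)
```

Because the Gram matrix throws away spatial layout and keeps only texture statistics, the brushstrokes and paper texture of a cyanotype can transfer without copying its composition.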

input

style

output

input

style

output

I think the first style transfer came out better than the second, but I'm excited to keep experimenting with this and to make specific style inputs for my own custom cyanotype transfer. The way the foliage in the style image translates is interesting, and I'm going to start using Photoshop to create floral patterns (I find they make an interesting pattern once converted into a cyanotype print).

input

style

output

input

style

output

input

style

output

I'm very excited to keep testing models in different Colab notebooks, as well as to start developing ideas for custom content based on the results I'm already getting.

Making Visual Art with GANs: Week 1 – Testing RunwayML

After getting set up in RunwayML, I ran my first image, a black-and-white scene of ducks, through the Picasso model.

input – picasso model

ducks in batsto lake by author

output – picasso model

ducks in batsto lake trained with the Picasso model in RunwayML

I was really excited to run this photograph of the Jersey Devil at Lucille's in New Jersey through different models, especially the MUNIT model; I thought a menacing figure composed of flowers would be a fun experiment. First I tried two other models (the photo sketch model and Picasso) before changing the "style" number for tons of variation in the MUNIT model.
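As I understand it, MUNIT separates an image into content and a small random "style" code, and the style number picks which code gets sampled, which is why each number gives a noticeably different look. A rough NumPy illustration of that idea (the seeding detail here is my assumption, not documented Runway behavior):

```python
import numpy as np

def style_code(style_number, dim=8):
    """Sample a MUNIT-style latent style vector. MUNIT uses a small
    Gaussian style code (often 8-dimensional); here the style number
    seeds the sampler, so each number reproduces one distinct look."""
    return np.random.default_rng(style_number).standard_normal(dim)

# Four different style numbers -> four different style vectors,
# each of which would restyle the same content image differently.
codes = [style_code(n) for n in range(4)]
```

Sweeping the style number is effectively resampling this vector, which is what produced the run of variations on the same Jersey Devil photo.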

input – for all following output images trained on different models

jersey devil at Lucille’s by author

output- photo sketch model

jersey devil trained on photo sketch model in RunwayML

output – picasso model

jersey devil trained on the picasso model in RunwayML

output – picasso model

jersey devil trained on the picasso model in RunwayML

output – munit model for subsequent images

jersey devil trained on the MUNIT model in RunwayML
jersey devil trained on the MUNIT model in RunwayML
jersey devil trained on the MUNIT model in RunwayML
jersey devil trained on the MUNIT model in RunwayML
jersey devil trained on the MUNIT model in RunwayML
jersey devil trained on the MUNIT model in RunwayML
jersey devil trained on the MUNIT model in RunwayML
jersey devil trained on the MUNIT model in RunwayML
jersey devil trained on the MUNIT model in RunwayML

After working with the MUNIT model I wanted to try a color photograph, so I used this beach sunset image for the following outputs.

input – for following output images

beach sunset by author

output – Kandinsky style transfer

beach sunset trained with Kandinsky style transfer model in RunwayML

output – cubist style transfer

beach sunset trained with the cubist style transfer model in RunwayML

output – hokusai style transfer

beach sunset trained with the Hokusai style transfer model in RunwayML

output – wu guanzhong style transfer

beach sunset trained on the Wu Guanzhong style transfer model in RunwayML

I really loved how the Hokusai style transfer looked and wanted to run it on this photograph of a lake at Cedar Bridge Tavern in New Jersey. It looks like the darkest areas of the image turned into blurred light, so in the future I would adjust the brightness to try to avoid that effect with this model.

input – hokusai style transfer

lake at cedar bridge tavern by author

output – hokusai style transfer

lake at cedar bridge trained with the Hokusai style transfer model in RunwayML