Seminar: Between Images (2020)
This project explores the use of artificial intelligence to generate video material and the control of images through sound. The result is a music video depicting a journey from the countryside into a city. It aims to convey the emotions of someone travelling through land- or cityscapes, staring out of the window, perhaps listening to music and daydreaming while the world passes by.
The images are generated in RunwayML with four different StyleGAN models I trained on datasets of selected found footage of landscapes, highways, cities and city streets. Since each generated image is defined by an array of numbers (a latent vector), I could control the output with code in the p5.js web editor. The code maps the tempo of the music (BPM) to the images so that new target images appear in sync with the beat, for example at intervals of 2 seconds, one 4/4 bar at 120 BPM. How quickly the images morph between these target images is controlled by an “animation factor” in the code, which let me give the movements a kind of rhythm.
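The beat-synchronized interpolation described above can be sketched roughly as follows. This is a minimal illustration in plain JavaScript, not the actual project code: all names (`randomLatent`, `latentAt`, `ANIMATION_FACTOR`, the latent size of 512) are assumptions for the example, and the real p5.js sketch would additionally send each computed vector to a hosted RunwayML model to render an image.

```javascript
// Hypothetical sketch of beat-synced latent interpolation (assumed names/values).
const BPM = 120;
const BEATS_PER_TARGET = 4;                                   // new target image every 4/4 bar
const SECONDS_PER_TARGET = (60 / BPM) * BEATS_PER_TARGET;     // 2 s at 120 BPM
const LATENT_DIM = 512;                                       // typical StyleGAN latent size
const ANIMATION_FACTOR = 0.8;                                 // bends the motion curve, shaping the rhythm

// A random latent vector: one number per dimension, in [-1, 1).
function randomLatent(dim) {
  return Array.from({ length: dim }, () => Math.random() * 2 - 1);
}

// Linear interpolation between two latent vectors.
function lerpLatent(a, b, t) {
  return a.map((v, i) => v + (b[i] - v) * t);
}

// For a given playback time, compute which latent vector to render:
// pick the current pair of target vectors and ease between them.
function latentAt(timeSeconds, keyframes) {
  const slot = Math.floor(timeSeconds / SECONDS_PER_TARGET);
  const phase = (timeSeconds % SECONDS_PER_TARGET) / SECONDS_PER_TARGET;
  // The animation factor warps the phase, so movement speeds up or
  // slows down within each interval instead of being strictly linear.
  const eased = Math.pow(phase, ANIMATION_FACTOR);
  const a = keyframes[slot % keyframes.length];
  const b = keyframes[(slot + 1) % keyframes.length];
  return lerpLatent(a, b, eased);
}

// Example: three random targets, one rendered frame per timestamp.
const keyframes = [randomLatent(LATENT_DIM), randomLatent(LATENT_DIM), randomLatent(LATENT_DIM)];
const frame = latentAt(1.0, keyframes); // halfway through the first 2-second interval
```

An animation factor below 1 makes the image change quickly after each beat and then settle, while a factor above 1 delays the movement toward the end of the interval; varying it is one simple way to give the morphing a rhythmic feel.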
With this code I computed and downloaded thousands of images from the different models and with different “animation factors”, then assembled the images in After Effects into several sequences that I could export as videos. The last step was to cut these video snippets together and arrange them to the sound.
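To give a sense of why this workflow produces thousands of images, here is a rough frame-budget calculation. The frame rate, sequence length, and number of variants are assumed example values, not figures from the project:

```javascript
// Hypothetical frame budget for one rendered sequence (assumed values).
const FPS = 25;                                               // assumed export frame rate
const BPM = 120;
const BEATS_PER_INTERVAL = 4;                                 // one target image per bar
const SECONDS_PER_INTERVAL = (60 / BPM) * BEATS_PER_INTERVAL; // 2 s
const FRAMES_PER_INTERVAL = FPS * SECONDS_PER_INTERVAL;       // 50 frames per morph
const INTERVALS = 30;                                         // a one-minute sequence
const FRAMES_PER_SEQUENCE = FRAMES_PER_INTERVAL * INTERVALS;  // 1500 still images

// Rendering the same sequence from several models and with several
// animation factors multiplies this into the thousands.
console.log(FRAMES_PER_SEQUENCE);
```

Each of those stills is then imported into After Effects as an image sequence, which is what makes the per-model, per-factor batches exportable as separate video snippets.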
Special thanks to:
Joshua Stofer for his great piece of music,
Ludwig Zeller for writing the code for me.