LiveGAN
2021, Karlsruhe, Germany · Generative Art · Audio-Reactive · Generative Adversarial Networks
LiveGAN is an audio-reactive application that converts incoming audio into faces in real time. The work builds on FLICK_KA, a dataset of over 50,000 photo-booth portraits taken at ZKM, which was used to train a DCGAN, an architecture chosen for its real-time performance. Experiments showed that 256x256 at 60 Hz and 512x512 at 30 Hz are achievable with mid-range GPU acceleration; however, I was not able to maintain image quality when upscaling to higher resolutions. The work was shown several times as an example of using ofxTensorFlow2 to combine creative coding and machine learning.
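The core idea of an audio-reactive GAN can be sketched as a mapping from an audio feature to the generator's latent input. The snippet below is a minimal illustration, not the actual LiveGAN implementation: it derives the RMS energy of an audio buffer and interpolates between two anchor latent vectors, which a DCGAN generator (omitted here) would then decode each frame. The latent size, anchor vectors, and buffer length are all assumptions for illustration.

```python
import numpy as np

LATENT_DIM = 100  # common DCGAN latent size; the real model's size is an assumption


def rms_energy(frame: np.ndarray) -> float:
    """Root-mean-square energy of one audio buffer (in [0, 1] for normalized input)."""
    return float(np.sqrt(np.mean(frame ** 2)))


def audio_to_latent(frame: np.ndarray, z_a: np.ndarray, z_b: np.ndarray) -> np.ndarray:
    """Map audio energy to a latent vector by interpolating between two anchors.

    Quiet input keeps the latent near z_a; louder input pushes it toward z_b,
    so the generated face changes with the sound. A generator network (not
    shown) would decode this vector into an image every frame.
    """
    t = np.clip(rms_energy(frame), 0.0, 1.0)
    return (1.0 - t) * z_a + t * z_b


# Hypothetical anchor latents, sampled as in standard DCGAN training.
rng = np.random.default_rng(0)
z_quiet = rng.standard_normal(LATENT_DIM)
z_loud = rng.standard_normal(LATENT_DIM)

silence = np.zeros(512)  # one silent audio buffer
z = audio_to_latent(silence, z_quiet, z_loud)  # equals z_quiet for silence
```

In the real pipeline the per-frame latent would be fed through the trained generator on the GPU, which is what makes the 256x256 @ 60 Hz budget the limiting factor.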