Recent progress in deep generative models such as Generative Adversarial Networks (GANs) has enabled the synthesis of photo-realistic images, such as faces and scenes. However, it remains much less explored what these deep generative representations have learned and why they can synthesize such diverse, realistic images. In this talk, I will present our recent line of work from GenForce (
https://genforce.github.io/) on interpreting and utilizing the latent space of GANs. Identifying the semantics encoded in the latent space not only allows us to better understand the inner workings of deep generative models but also facilitates versatile image editing. I will also briefly talk about the inverse problem (how to invert a given image back into the latent code) and the fairness of generative models.
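
For those unfamiliar with latent-space editing, the sketch below illustrates the basic idea behind this line of work: once a semantic direction (e.g., for age or pose) has been identified in the latent space, an attribute can be edited by moving a latent code along that direction before decoding. This is a minimal illustration under stated assumptions, not the talk's actual code; the generator `G`, the vector `direction`, and the step sizes are hypothetical placeholders.

```python
import numpy as np

# A minimal sketch of semantic editing in a GAN's latent space.
# Assumptions (not from the talk): `G` is a pretrained generator mapping a
# 512-dim latent code to an image, and `direction` is a unit vector found to
# correspond to a semantic attribute (e.g., age) in that latent space.

def edit_latent(z: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Move a latent code along a semantic direction by step size alpha."""
    return z + alpha * direction

# Sample a latent code and sweep the attribute in both directions.
rng = np.random.default_rng(0)
z = rng.standard_normal(512)
direction = rng.standard_normal(512)
direction /= np.linalg.norm(direction)  # placeholder for a learned semantic direction

for alpha in (-3.0, 0.0, 3.0):
    z_edited = edit_latent(z, direction, alpha)
    # image = G(z_edited)  # decode with the (hypothetical) pretrained generator
```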