High-fidelity image generation with fewer labels

Authors

Mario Lucic, Michael Tschannen, Marvin Ritter, Xiaohua Zhai, Olivier Bachem, and Sylvain Gelly

Reference

Preprint, 2019, submitted.


Abstract

Deep generative models are becoming a cornerstone of modern machine learning. Recent work on conditional generative adversarial networks has shown that learning complex, high-dimensional distributions over natural images is within reach. While the latest models can generate high-fidelity, diverse natural images at high resolution, they rely on a vast quantity of labeled data. In this work we demonstrate how one can benefit from recent advances in self- and semi-supervised learning to outperform the state of the art (SOTA) on both unsupervised and conditional ImageNet synthesis. In particular, the proposed approach matches the sample quality (as measured by FID) of the current state-of-the-art conditional model, BigGAN, on ImageNet using only 10% of the labels, and outperforms it using 20% of the labels.
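Sample quality in the abstract is measured by the Fréchet Inception Distance (FID), which compares the mean and covariance of Inception activations for real and generated images. As a minimal sketch (the function names here are illustrative, and a real evaluation would first extract Inception-v3 pool features from images), FID can be computed as:

```python
import numpy as np
from scipy import linalg


def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Squared Frechet distance between two Gaussians N(mu1, sigma1), N(mu2, sigma2)."""
    diff = mu1 - mu2
    # Matrix square root of the product of the covariances.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from numerics
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))


def fid_from_activations(act_real, act_fake):
    """FID from two (n_samples, n_features) arrays of network activations."""
    mu_r, sigma_r = act_real.mean(axis=0), np.cov(act_real, rowvar=False)
    mu_f, sigma_f = act_fake.mean(axis=0), np.cov(act_fake, rowvar=False)
    return frechet_distance(mu_r, sigma_r, mu_f, sigma_f)
```

Lower FID indicates that the generated distribution is closer to the real one; identical activation sets yield an FID near zero.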

Keywords

Generative adversarial networks, semi-supervised learning, self-supervised learning

Comments

Mario Lucic, Michael Tschannen, and Marvin Ritter contributed equally to this work.



Copyright Notice: © 2019 M. Lucic, M. Tschannen, M. Ritter, X. Zhai, O. Bachem, and S. Gelly.

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.