Published in TPAMI (co-first author)
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from predefined prior distributions (e.g., Gaussian noise). However, such prior distributions are often independent of the real data and may therefore lose semantic information about the data (e.g., geometric structure or image content). In practice, this semantic information might be captured by a latent distribution learned from data, but sampling from such a latent distribution is difficult for GAN methods. In this paper, rather than sampling from a predefined prior distribution, we propose a local coordinate coding GAN, termed LCCGAN-v1, to improve the performance of image generation. First, we propose a local coordinate coding (LCC)-based sampling method in LCCGAN-v1 to sample meaningful points from the latent manifold. With the LCC sampling method, we can explicitly exploit the local information on the latent manifold and thus produce new data of promising quality. Second, we propose an improved version, namely LCCGAN-v2, by introducing a higher-order term into the generator approximation. This term achieves a better approximation and thus further improves performance. More critically, we derive generalization bounds for both LCCGAN-v1 and LCCGAN-v2 and prove that a low-dimensional input is sufficient to achieve good generalization performance. Extensive experiments on four benchmark datasets demonstrate the superiority of the proposed method over existing GAN methods.
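To make the LCC-based sampling idea concrete, the following is a minimal sketch (not the authors' implementation): each new latent point is formed as a local convex combination of nearby anchor points on the latent manifold. The anchor set, neighborhood size `k`, and Dirichlet-distributed weights here are illustrative assumptions.

```python
import numpy as np

def lcc_sample(anchors, n_samples=4, k=3, seed=0):
    """Sample latent points as local convex combinations of anchors.

    A simplified sketch of LCC-style sampling: combining only a few
    nearby anchors keeps each sample close to the latent manifold.
    """
    rng = np.random.default_rng(seed)
    n, d = anchors.shape
    samples = np.empty((n_samples, d))
    for i in range(n_samples):
        # Pick a random anchor, then its k nearest anchors: the
        # "local" bases of the coding.
        center = anchors[rng.integers(n)]
        idx = np.argsort(np.linalg.norm(anchors - center, axis=1))[:k]
        # Convex weights (summing to 1) keep the new point inside
        # the local patch spanned by those bases.
        w = rng.dirichlet(np.ones(k))
        samples[i] = w @ anchors[idx]
    return samples

# Hypothetical anchors, e.g. latent codes learned from real data.
anchors = np.random.default_rng(1).normal(size=(32, 8))
z = lcc_sample(anchors)  # low-dimensional inputs for the generator
print(z.shape)  # (4, 8)
```

The generator is then fed these locally coded points `z` instead of draws from a fixed prior, which is the key difference the abstract highlights.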