Yu's MemoCapsule

ClassicPaper

Generating Images Like Texts

Can we generate images in the same way as an autoregressive language model? Although this sounds simpler than diffusion models, we still have to deal with many computational-cost problems. But don't worry too much: there are several brilliant methods that try to make this idea more competitive.

Taming Transformers -> Patrick Esser, et al. CVPR 2021
The key challenge of autoregressive generation is handling the quadratically increasing cost of image sequences, which are much longer than text sequences.
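
To make that cost concrete, here is a tiny back-of-the-envelope Python calculation comparing pixel-level autoregression with autoregression over a downsampled grid of discrete codebook tokens; the 16x16 latent grid size is illustrative, not a number quoted from the paper.

```python
# Sequence length and (quadratic) self-attention cost for a 256x256 image,
# comparing pixel-level autoregression with VQ-token-level autoregression.
image_side = 256
pixel_tokens = image_side * image_side          # 65,536 positions if we predict raw pixels
vq_grid_side = 16                               # illustrative downsampled latent grid
vq_tokens = vq_grid_side * vq_grid_side         # 256 positions if we predict codebook indices

for name, n in [("pixels", pixel_tokens), ("VQ tokens", vq_tokens)]:
    # Self-attention compares every position with every other position: ~n^2 pairs
    print(f"{name:>9}: sequence length {n:>6}, attention pairs ~ {n * n:,}")
```

Shrinking the sequence from raw pixels to a small grid of discrete tokens is exactly what makes transformer-style autoregression over images affordable.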

Learning the Multi-modal Feature Space

In multi-modal tasks, one of the key challenges is the alignment between the feature spaces of different modalities. CLIP is representative of this line of work. Although its motivation is to learn a transferable visual model (like BERT) for downstream vision tasks, CLIP has brought a lot of inspiration to multi-modal tasks. Therefore, I prefer to describe CLIP and its variants in terms of how to learn a better multi-modal feature space.

CLIP -> Alec Radford, et al.
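
As a rough sketch of how such an aligned space is learned, the snippet below implements a symmetric contrastive (InfoNCE) loss over a batch of paired image/text embeddings, in the spirit of CLIP. The encoders themselves are assumed to exist elsewhere; only the loss over their outputs is shown.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_features, text_features, temperature=0.07):
    """Symmetric InfoNCE loss over paired embeddings (CLIP-style sketch).

    image_features, text_features: (N, D) tensors from two hypothetical encoders;
    row i of each tensor is assumed to come from the same image-caption pair.
    """
    img = F.normalize(image_features, dim=-1)
    txt = F.normalize(text_features, dim=-1)
    logits = img @ txt.t() / temperature              # (N, N) cosine-similarity logits
    labels = torch.arange(img.size(0), device=img.device)
    # Match each image to its caption, and each caption to its image;
    # every other pair in the batch acts as a negative.
    loss_i = F.cross_entropy(logits, labels)
    loss_t = F.cross_entropy(logits.t(), labels)
    return (loss_i + loss_t) / 2
```

Pulling matched pairs together while pushing all in-batch mismatches apart is what forces the two modalities into a shared feature space.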

DDPM and Early Variants

Although the Diffusion Model is a new generative framework, it still carries many shades of earlier methods. Bayes' rule is all you need.

Generation & Diffusion
Just as GANs realize implicit generation by mapping a random Gaussian vector to a natural image, the Diffusion Model does the same thing, only through multiple mappings. This generation can be defined as the following Markov chain with learnable Gaussian transitions:
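
In the standard DDPM notation, this reverse-time chain is written as

$$
p_\theta(\mathbf{x}_{0:T}) = p(\mathbf{x}_T) \prod_{t=1}^{T} p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t),
\qquad
p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) = \mathcal{N}\big(\mathbf{x}_{t-1};\, \boldsymbol{\mu}_\theta(\mathbf{x}_t, t),\, \boldsymbol{\Sigma}_\theta(\mathbf{x}_t, t)\big),
$$

where $p(\mathbf{x}_T) = \mathcal{N}(\mathbf{x}_T; \mathbf{0}, \mathbf{I})$ and each learnable Gaussian transition maps a noisier sample to a slightly cleaner one.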

Image Generation based on Score Model

Both likelihood-based methods and GAN methods have some intrinsic limitations. Learning and estimating the Stein score (the gradient of the log-density function $\nabla_{x} \log p_{\text{data}}(x)$) may be a better choice than learning the data density directly.

Score Estimation (for training)
We want to train a network $s_{\theta}(x)$ to estimate $\nabla_{x} \log p_{\text{data}}(x)$, but how can we get the ground truth (the real score)?
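
One standard answer is denoising score matching: perturb the data with Gaussian noise, so that the score of the perturbation kernel is known in closed form and can serve as a regression target. A minimal PyTorch sketch, where `score_net` is a hypothetical network and the single noise level `sigma` is an illustrative choice:

```python
import torch

def dsm_loss(score_net, x, sigma=0.1):
    """Denoising score matching with a Gaussian perturbation kernel.

    x: (N, D) batch of data samples.
    The perturbed sample is x_tilde = x + sigma * eps, eps ~ N(0, I), and the
    score of q(x_tilde | x) is -(x_tilde - x) / sigma**2, which is the target.
    """
    eps = torch.randn_like(x)
    x_tilde = x + sigma * eps
    target = -(x_tilde - x) / sigma ** 2            # equals -eps / sigma
    pred = score_net(x_tilde)
    # sigma**2 weighting keeps the loss on a comparable scale across noise levels
    return 0.5 * sigma ** 2 * ((pred - target) ** 2).sum(dim=-1).mean()
```

Minimizing this objective trains $s_{\theta}$ to approximate the score of the noise-perturbed data distribution, sidestepping the need for the true data score.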