Models & Architecture Deep Dive

GAN (Generative Adversarial Network)

Definition
A model architecture in which two neural networks compete: a generator produces synthetic data, and a discriminator tries to distinguish it from real data, pushing the generator toward increasingly realistic output. GANs dominated image generation before diffusion models took over.
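The adversarial setup can be sketched with a deliberately tiny example. Below, both "networks" are single-parameter functions (a logistic discriminator and a generator that just shifts input noise), which is an illustrative simplification, not how real GANs are built. The discriminator ascends the standard GAN objective E[log D(x)] + E[log(1 - D(G(z)))], while the generator ascends the non-saturating variant E[log D(G(z))]; the learning rate, batch size, and target distribution are arbitrary choices for the sketch.

```python
import math
import random

random.seed(0)

def sigmoid(v):
    # Clamp to avoid math.exp overflow on extreme inputs
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, v))))

def discriminator(x, w, b):
    # One-parameter logistic "discriminator": estimated P(x is real)
    return sigmoid(w * x + b)

def generator(z, theta):
    # One-parameter "generator": shifts input noise by theta
    return z + theta

w, b, theta = 1.0, 0.0, 0.0   # discriminator and generator parameters
lr, batch = 0.05, 32

for step in range(1500):
    reals = [random.gauss(4.0, 1.0) for _ in range(batch)]  # real data: N(4, 1)
    noise = [random.gauss(0.0, 1.0) for _ in range(batch)]  # generator input: N(0, 1)
    fakes = [generator(z, theta) for z in noise]

    # Discriminator step: ascend E[log D(x)] + E[log(1 - D(G(z)))]
    gw = gb = 0.0
    for x in reals:
        d = discriminator(x, w, b)
        gw += (1 - d) * x          # d/dw of log D(x)
        gb += (1 - d)              # d/db of log D(x)
    for f in fakes:
        d = discriminator(f, w, b)
        gw -= d * f                # d/dw of log(1 - D(fake))
        gb -= d                    # d/db of log(1 - D(fake))
    w += lr * gw / batch
    b += lr * gb / batch

    # Generator step: ascend E[log D(G(z))] (non-saturating generator loss)
    gt = 0.0
    for z in noise:
        d = discriminator(generator(z, theta), w, b)
        gt += (1 - d) * w          # d/dtheta of log D(G(z)); dG/dtheta = 1
    theta += lr * gt / batch

# After training, theta should have moved from 0 toward the real-data mean (~4)
print(f"learned shift: {theta:.2f}")
```

The two-step loop is the core of the idea: the discriminator's gradient signal is the only thing telling the generator which direction makes its output look more real.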
Why it matters
GANs were the breakthrough that proved AI could generate photorealistic content, launching the entire generative AI wave. While diffusion models have largely replaced GANs for image generation, the adversarial training concept remains influential. GANs are still used in specialized applications: super-resolution, data augmentation, and real-time video synthesis where their speed advantage over diffusion matters. Understanding GANs matters for historical context and because the adversarial framework appears in many other AI techniques, including RLHF, where a reward model evaluates a policy model in an adversarial-like setup.
In practice
Ian Goodfellow introduced GANs in 2014, and they dominated image generation through 2021. StyleGAN from NVIDIA produced the famous 'This Person Does Not Exist' website. NVIDIA's GauGAN turned sketches into photorealistic landscapes. However, GANs suffered from training instability, mode collapse, and difficulty generating diverse outputs. Diffusion models solved these problems, and by 2023, GANs were largely replaced for new image generation applications. GANs remain relevant in real-time applications where diffusion's iterative process is too slow, and the adversarial training framework continues to influence alignment research.
