AGI (Artificial General Intelligence)
- Definition
- A hypothetical AI system that matches or exceeds human-level reasoning across every cognitive domain. No AGI exists today, but the race to build one is driving hundreds of billions of dollars in investment.
- Why it matters
- AGI is the North Star that justifies the massive capital expenditures at OpenAI, Google DeepMind, and Anthropic. Whether you believe it is five years away or fifty, the pursuit itself is reshaping markets: GPU demand, energy infrastructure, talent wars, and regulatory frameworks all orbit around AGI timelines. For business leaders, the practical question is not whether AGI arrives but how the intermediate capabilities developed along the way disrupt your industry. Companies that dismiss AGI as sci-fi risk being blindsided by the very real, very narrow breakthroughs that emerge from AGI-focused research.
- In practice
- OpenAI published an internal framework in 2024 categorizing AI progress into five levels, from L1 (chatbots) to L5 (organizations). CEO Sam Altman stated the company was approaching L2 (reasoners) with o1 and targeting L3 (agents) by 2025. Meanwhile, Anthropic's Dario Amodei wrote a 15,000-word essay arguing transformative AI could arrive by 2026-2027, and Meta's Yann LeCun maintains that current architectures cannot reach AGI at all. The disagreement itself matters: it drives divergent billion-dollar bets on architecture, safety, and deployment strategy.
Related terms
ASI (Artificial Superintelligence)
A theoretical AI that dramatically surpasses the best human minds in every field. ASI remains speculative, but its possibility shapes long-term safety research and existential-risk debates.
Narrow AI
AI systems designed for a specific task or domain, such as image classification or fraud detection. All commercially deployed AI today is narrow, despite the apparent generality of modern LLMs.
Alignment
The challenge of making an AI system's goals and behaviors match human intentions and values. Misalignment risk grows as models become more capable, making this a top priority for safety teams.
Scaling laws
Empirical relationships showing that model performance improves predictably as you increase data, compute, and parameters. Scaling laws are why labs are pouring billions into ever-larger training runs.
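To make the "predictable improvement" concrete: scaling laws are usually written as power laws relating loss to model size and training data. Below is a minimal sketch of one widely cited form (after Hoffmann et al., 2022); the symbols are illustrative, not exact fitted values.

```latex
% A minimal sketch of a compute-optimal scaling law
% (after Hoffmann et al., 2022); symbols are illustrative.
% L = pretraining loss, N = parameter count, D = training tokens;
% E, A, B, \alpha, \beta are constants fitted to empirical training runs.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Because loss falls off only as a power of N and D, each further increment of quality demands a multiplicative increase in compute, which is why training-run budgets grow so quickly.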