
ASI (Artificial Superintelligence)

Definition
A theoretical AI that dramatically surpasses the best human minds in every field. ASI remains speculative, but its possibility shapes long-term safety research and existential-risk debates.
Why it matters
ASI may be speculative, but the policies and investments shaped by its possibility are very real. Governments are drafting legislation, labs are building safety infrastructure, and billions in capital are being allocated based on ASI timelines. Even if ASI never arrives, the pursuit of it has already produced the most capable narrow AI systems in history. Business leaders do not need to believe in ASI to take it seriously; they need to understand that the people controlling billions in AI investment do believe in it, and that belief drives the research agenda, talent markets, and regulatory landscape that shape every AI product on the market.
In practice
Ilya Sutskever left OpenAI in 2024 to co-found Safe Superintelligence Inc. (SSI), a company focused exclusively on building safe superintelligence. SSI raised $1B at a $5B valuation before shipping any product, signaling investor appetite for the superintelligence thesis. Leopold Aschenbrenner's 'Situational Awareness' essay, published after his departure from OpenAI, argued that superintelligence could arrive by 2027 and triggered widespread debate about compute scaling trajectories. Whether or not that timeline holds, it is actively shaping how governments and corporations plan.
