Models & Architecture: Core

Weights

Definition
The numerical values inside a neural network that encode everything the model has learned during training. Model weights are the core intellectual property of AI companies and the subject of intense open-vs.-closed debates.
Why it matters
Weights are the product of billions of dollars in compute, data, and engineering. They represent accumulated knowledge, and controlling access to them is the central business decision for every AI company. Open weights (Meta's Llama, Mistral) democratize access and build an ecosystem, but sacrifice direct monetization. Closed weights (OpenAI, Anthropic) enable API-based business models but create vendor dependency. For enterprises, the weights question determines whether you can self-host (data sovereignty), fine-tune (customization), and audit (transparency) your AI systems. The open-weight movement has been the biggest driver of AI democratization, enabling startups and researchers worldwide to build on frontier-quality models.
In practice
A 70B-parameter model's weights occupy approximately 140GB in FP16 precision, requiring specialized hardware to load and run. Meta's decision to release Llama weights publicly was the most consequential open-source decision in AI history, spawning thousands of fine-tuned variants and enabling the entire open-source AI ecosystem. Weight theft is a major security concern: a single exfiltrated weight file could give a competitor years of training investment for free. Google and OpenAI invest heavily in weight security, including physical security at data centers, encrypted storage, and access logging. The legal status of weights (are they copyrightable? patentable?) remains unsettled and varies by jurisdiction.
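The 140GB figure above is simple arithmetic: each parameter stored in FP16 takes 2 bytes, so 70 billion parameters take roughly 140 billion bytes. A minimal sketch of that back-of-envelope math, assuming decimal gigabytes (1 GB = 10^9 bytes) and counting raw weights only, not optimizer state, activations, or KV cache:

```python
# Approximate storage for raw model weights at common precisions.
# Assumptions: decimal GB (1e9 bytes); weights only, no runtime overhead.

BYTES_PER_PARAM = {
    "fp32": 4.0,
    "fp16": 2.0,   # the precision cited in the 140GB example
    "int8": 1.0,
    "int4": 0.5,   # typical quantized-deployment precision
}

def weight_size_gb(num_params: float, precision: str) -> float:
    """Approximate size of the raw weights in GB at a given precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

if __name__ == "__main__":
    for prec in BYTES_PER_PARAM:
        print(f"70B @ {prec}: {weight_size_gb(70e9, prec):.0f} GB")
```

The same arithmetic explains why quantization matters for self-hosting: the same 70B model drops to roughly 35GB at 4-bit precision, which fits on far more modest hardware.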

We cover models & architecture every week.

Get the 5 AI stories that matter — free, every Friday.
