Models & Architecture · Core

Natural language processing (NLP)

Definition
The branch of AI focused on enabling machines to understand, interpret, and generate human language. NLP underpins chatbots, translation, search, and document analysis across every industry.
Why it matters
NLP is the foundation that LLMs are built on, and understanding its evolution helps contextualize the current moment. Before transformers, NLP relied on rule-based systems, statistical models, and recurrent neural networks, with each generation improving on the last but remaining brittle. The transformer architecture and the pre-training paradigm changed everything: instead of building task-specific NLP models, you train one massive language model and adapt it to any task. NLP as a distinct discipline is being absorbed into general-purpose LLM capability, but the problems it defined (machine translation, sentiment analysis, named entity recognition, information extraction) remain commercially important.
In practice
Traditional NLP tasks that once required specialized models (sentiment analysis, named entity recognition, text classification) are now handled by general-purpose LLMs with simple prompting. Google Translate's shift from phrase-based to neural machine translation, and then to LLM-based translation, illustrates this evolution. Enterprise NLP applications include: contract analysis (extracting clauses and obligations from legal documents), customer feedback analysis (classifying and routing support tickets), and regulatory monitoring (scanning documents for compliance language). The NLP market, now largely synonymous with the LLM applications market, is projected to exceed $50B by 2027.
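The "simple prompting" approach above can be sketched concretely. This is a minimal illustration, not a definitive implementation: `call_llm` is a hypothetical stand-in for whatever chat-completion API you use, and the task names and prompt templates are assumptions for the example.

```python
# Sketch: classic NLP tasks (sentiment, NER, ticket routing) reduced
# to zero-shot prompts against a single general-purpose LLM.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: in practice, send `prompt` to an
    LLM endpoint and return its text completion."""
    raise NotImplementedError("wire this to a real LLM API")

# One prompt template per task that once needed a specialized model.
TEMPLATES = {
    "sentiment": (
        "Classify the sentiment of the following text as "
        "positive, negative, or neutral:\n{text}"
    ),
    "ner": (
        "List every person, organization, and location "
        "mentioned in the following text:\n{text}"
    ),
    "route_ticket": (
        "Assign this support ticket to one category: "
        "billing, technical, or account:\n{text}"
    ),
}

def build_prompt(task: str, text: str) -> str:
    """Fill the template for `task` with the input text."""
    return TEMPLATES[task].format(text=text)

# Usage: the same function covers all three tasks; only the
# template changes, which is the core shift the section describes.
prompt = build_prompt(
    "sentiment",
    "The update broke my workflow and support never replied.",
)
```

The point is the architecture, not the templates: one model plus per-task prompts replaces a fleet of task-specific classifiers, so adding a new "task" is a template edit rather than a training run.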
