Model Wars · April 16, 2026 · via SiliconANGLE

Anthropic launches Claude Opus 4.7 with coding, visual reasoning improvements

Why it matters

Anthropic is systematically closing the coding capability gap with each Claude release. For founders building with LLMs, this shifts the calculus on model selection for software engineering workflows.

Key signals

  • Claude Opus 4.7 launched with improved coding and visual reasoning
  • SWE-Bench Pro score: 64.3% (vs. Opus 4.6's ~54.3%)
  • ~10 percentage point improvement over predecessor on programming benchmarks
  • Published April 16, 2026

The hook

64.3%. That's Claude Opus 4.7's new SWE-Bench Pro score—nearly 10 points ahead of its predecessor.

Anthropic PBC today opened access to Claude Opus 4.7, the latest addition to its popular line of large language models. The company says the LLM is significantly better than its predecessor at coding tasks: Opus 4.7 scored 64.3% on the SWE-Bench Pro programming benchmark, nearly 10 percentage points higher than Opus 4.6. The new model also […]
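The headline gap can be read two ways: as an absolute gain in percentage points or as a relative improvement over the old score. A quick sketch of the distinction, using the two benchmark figures from the article:

```python
# SWE-Bench Pro scores reported in the article (percent of tasks resolved)
opus_4_7 = 64.3
opus_4_6 = 54.3

# Absolute improvement: the difference in percentage points
point_gain = opus_4_7 - opus_4_6

# Relative improvement: the gain as a fraction of the predecessor's score
relative_gain = (opus_4_7 - opus_4_6) / opus_4_6 * 100

print(f"{point_gain:.1f} percentage points, {relative_gain:.1f}% relative")
```

So "nearly 10 points ahead" corresponds to roughly an 18% relative improvement over Opus 4.6, which is why benchmark write-ups are careful to distinguish points from percent.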
Relevance score: 82/100
