Model Wars · April 17, 2026 · via The Decoder

Alibaba's open model Qwen3.6 leads Google's Gemma 4 across agentic coding benchmarks

Why it matters

Alibaba's Qwen3.6 demonstrates that sparse activation (mixture-of-experts parameter sparsity) lets a model outperform dense models that use far more parameters per forward pass on agentic coding tasks, challenging Google's dense-model approach and validating open-source models as competitive with closed alternatives.

Key signals

  • Qwen3.6-35B-A3B activates only 3B of its 35B parameters per token
  • Beats Google Gemma 4-31B on coding and reasoning benchmarks
  • Open-source model release
  • Sparse activation/MoE architecture advantage demonstrated
  • Agentic coding task performance as evaluation metric

The hook

Alibaba's open model just beat Google on coding benchmarks. With 35B parameters. Using only 3 billion at a time.

Alibaba's new open-source Qwen3.6-35B-A3B activates just three billion of its 35 billion parameters per token, yet beats Google's Gemma 4-31B, a dense model that runs roughly ten times as many parameters per forward pass, on coding and reasoning benchmarks.
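The mechanism behind "35B parameters, 3B active" is mixture-of-experts routing: a small gating network picks a few expert sub-networks per token, and only those run. As a minimal illustration of the general technique (not Qwen's actual architecture; the dimensions, expert count, and top-k value here are arbitrary), top-k expert routing can be sketched as:

```python
import numpy as np

def topk_moe_forward(x, gate_w, experts, k=2):
    """Route input x to its top-k experts; only those experts execute."""
    logits = x @ gate_w                       # one gating score per expert
    topk = np.argsort(logits)[-k:]            # indices of the k highest-scoring experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()                  # softmax over just the selected experts
    # Only k expert networks run; the remaining parameters stay inactive this step.
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, num_experts = 8, 16
gate_w = rng.standard_normal((d, num_experts))
# Each "expert" is a tiny linear map standing in for a full feed-forward block.
expert_mats = [rng.standard_normal((d, d)) for _ in range(num_experts)]
experts = [lambda x, m=m: x @ m for m in expert_mats]

y = topk_moe_forward(rng.standard_normal(d), gate_w, experts, k=2)
print(y.shape)  # (8,)
```

With 2 of 16 toy experts active, only a fraction of the total weights participate in each forward pass, which is how a 35B-parameter model can run with roughly 3B parameters' worth of per-token compute.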
Relevance score: 78/100

