The Drop · April 4, 2026 · via Hacker News
Show HN: sllm – Split a GPU node with other developers, unlimited tokens
Why it matters
A new service democratizes access to expensive large language models by letting developers share infrastructure costs, potentially changing how smaller teams and individual developers consume AI infrastructure.
Key signals
- DeepSeek V3 requires 8×H100 GPUs at $14k/month
- Pricing starts at $5/month for shared access
- Most developers only need 15-25 tokens per second
- OpenAI-compatible API
- Private traffic with no logging
The hook
$14k/month. That's what running DeepSeek V3 costs solo. This startup splits the bill among developers for $5/mo.
Running DeepSeek V3 (685B) requires 8×H100 GPUs, which costs about $14k/month. Most developers only need 15-25 tok/s, so sllm lets you join a cohort of developers sharing a dedicated node. You reserve a spot with your card, and nobody is charged until the cohort fills. Prices start at $5/mo for smaller models.
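The cost-sharing arithmetic can be sketched from the post's numbers plus one assumption: the node's aggregate throughput under batched serving (the 2,000 tok/s figure below is hypothetical, not a number sllm has published). The $5/mo tier applies to smaller models; a big-model split depends entirely on that assumed throughput and on oversubscription.

```python
# Back-of-the-envelope split of a shared GPU node's monthly cost.
# NODE_COST and PER_DEV_TOKS come from the post; AGGREGATE_TOKS is an
# assumed figure for batched vLLM serving, used only for illustration.
NODE_COST_PER_MONTH = 14_000   # 8×H100 node (from the post)
AGGREGATE_TOKS = 2_000         # hypothetical total tok/s the node sustains
PER_DEV_TOKS = 20              # mid-range of the 15-25 tok/s a dev needs

cohort_size = AGGREGATE_TOKS // PER_DEV_TOKS      # developers per node
cost_per_dev = NODE_COST_PER_MONTH / cohort_size  # dollars/month each
print(cohort_size, cost_per_dev)
```

Since most developers are idle most of the time, a provider can oversubscribe well beyond this naive cohort size, which is how the per-seat price drops further.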
The LLMs are completely private (we don't log any traffic).
The API is OpenAI-compatible (we run vLLM), so you just swap the base URL. A few models are currently offered.
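"Swap the base URL" works because vLLM exposes the same `/v1/chat/completions` route as OpenAI. A minimal sketch of building such a request, with a hypothetical host and model id (not sllm's real endpoint or catalog):

```python
# Sketch: the request payload for an OpenAI-compatible API is identical
# across providers; only the base URL (and key) change.
import json

def build_chat_request(base_url, api_key, model, prompt):
    """Assemble URL, headers, and JSON body for a chat-completion call."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# Hypothetical endpoint and model name, for illustration only.
req = build_chat_request(
    "https://example-host.invalid/v1",
    "YOUR_API_KEY",
    "deepseek-v3",
    "Hello",
)
print(req["url"])
```

With the official `openai` client library the same swap is the `base_url` argument to the client constructor; the rest of the calling code is unchanged.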
Comments URL: https://news.ycombinator.com/item?id=47639779
Points: 33
# Comments: 21
Relevance score: 75/100