Cumulus Labs
LLMs

Qwen 3.5 35B A3B

+28.6%

~$QW35M

Hybrid Mamba-SSM and Transformer architecture with 256 experts, 3B active. 128K context with near-zero KV cache memory. FP8 quantized for maximum throughput.

42.0k stars · 2.1M HF downloads
#13 overall · 2 deploys
Cold Start: 1.5s (avg on Ion)
Avg Inference: 215ms (per request)
Active Replicas: 2 (right now)
Total Deployments: 2 (all time)

7-Day Trend

+28.6% this week

Cold Start vs Competitors

~$QW35M: 1.5s (this model)
~$QW3-8: 1.2s
~$QW30: 1.7s
~$QW32: 1.8s

Cold start comparison vs similar models. Lower is better.
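The ranking above can be reproduced with a small sort over the cold-start figures listed on this page (the variable names below are illustrative, not part of the SDK):

```javascript
// Cold-start times from the comparison above, in seconds.
const coldStarts = [
  { ticker: "~$QW35M", seconds: 1.5 },
  { ticker: "~$QW3-8", seconds: 1.2 },
  { ticker: "~$QW30", seconds: 1.7 },
  { ticker: "~$QW32", seconds: 1.8 },
];

// Lower is better, so sort ascending; fastest model comes first.
const ranked = [...coldStarts].sort((a, b) => a.seconds - b.seconds);
console.log(ranked.map((m) => m.ticker).join(" < "));
// fastest first: ~$QW3-8, then ~$QW35M, ~$QW30, ~$QW32
```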

Cost Estimate

Cost per 1K Inferences: $0.90
Est. Daily (1K req/day): $0.90
Est. Monthly (30K req): $27.00

No subscriptions. Buy credits, pay per inference. Scale to zero when idle.
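The pricing is linear in request count, so the table above follows from a one-line calculation. A minimal sketch (the helper name is hypothetical, not an SDK function):

```javascript
// Pay-per-inference pricing from the Cost Estimate table: $0.90 per 1K requests.
const COST_PER_1K_USD = 0.9;

// Linear cost; with scale-to-zero there is no idle charge to add.
function estimateCostUsd(requests) {
  return (requests / 1000) * COST_PER_1K_USD;
}

console.log(estimateCostUsd(1000)); // daily at 1K req/day -> $0.90
console.log(estimateCostUsd(30000)); // monthly at 30K req -> $27.00
```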

Quick Deploy

cumulus-sdk
import cumulus from "cumulus-sdk"

// Deploy Qwen 3.5 35B A3B on Ion
const client = await cumulus.deploy("qwen3-5-35b-a3b")

// Run inference
const result = await client.run({
  prompt: "Your prompt here",
  // model-specific params...
})

More in LLMs

Qwen 3 8B (~$QW3-8) · 1.2s cold start · +22.4%
Qwen 3 30B A3B (~$QW30) · 1.7s cold start · +16.4%
Qwen 2.5 32B (~$QW32) · 1.8s cold start · +11.3%
Qwen 2.5 14B (~$QW14) · 1.4s cold start · +9.8%
View all LLM models →

© 2026 Cumulus Compute Labs Corporation