~$QW3527
Dense Qwen 3.5 model with vision support and FP8 quantization. 128K context window with hybrid SSM attention. Exceptional long-document reasoning.
[Chart: cold-start comparison vs. similar models; lower is better.]
No subscriptions. Buy credits, pay per inference. Scale to zero when idle.
```javascript
import cumulus from "cumulus-sdk"

// Deploy Qwen 3.5 27B on Ion
const client = await cumulus.deploy("qwen3-5-27b")

// Run inference
const result = await client.run({
  prompt: "Your prompt here",
  // model-specific params...
})
```
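As a rough illustration of the pay-per-inference model above ("buy credits, pay per inference"), here is a minimal credit-ledger sketch. The class, rates, and token counts are hypothetical and not part of the Cumulus SDK:

```javascript
// Hypothetical credit ledger: buy credits up front, deduct per inference.
// Nothing here is a real Cumulus API; the rate and amounts are invented.
class CreditLedger {
  constructor(credits) {
    this.credits = credits // prepaid balance
  }

  // Deduct the cost of one inference; returns the amount charged.
  charge(tokens, creditsPerToken) {
    const cost = tokens * creditsPerToken
    if (cost > this.credits) throw new Error("insufficient credits")
    this.credits -= cost
    return cost
  }
}

// Buy 1,000 credits, then pay only when you run inference.
const ledger = new CreditLedger(1000)
ledger.charge(500, 0.5) // a 500-token request at 0.5 credits/token
console.log(ledger.credits) // remaining balance: 750
```

When the balance hits zero you simply stop being charged; nothing runs, which mirrors the scale-to-zero behavior described above.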