~$FW14
Distilled Wan 2.2 text-to-video at 14B scale. 3-step generation completes in under 60 seconds — 4× faster than the full model. Maximum video throughput on Ion.
Cold start comparison vs similar models. Lower is better.
No subscriptions. Buy credits, pay per inference. Scale to zero when idle.
import cumulus from "cumulus-sdk"

// Deploy FastWan T2V 14B on Ion
const client = await cumulus.deploy("fastwan-t2v-14b")

// Run inference
const result = await client.run({
  prompt: "Your prompt here",
  // model-specific params...
})