~$WAN14
Wan 2.2 text-to-video at full 14B scale. Generates a cinematic 5-second 720p video in ~4 minutes. The highest-quality video model on the platform.
Cold start comparison vs similar models. Lower is better.
No subscriptions. Buy credits, pay per inference. Scale to zero when idle.
import cumulus from "cumulus-sdk"

// Deploy Wan 2.2 T2V A14B on Ion
const client = await cumulus.deploy("wan2-2-t2v-a14b")

// Run inference
const result = await client.run({
  prompt: "Your prompt here",
  // model-specific params...
})