FW1-3
Lightweight 1.3B-parameter text-to-video model with sub-20-second generation. Ideal for high-volume video applications where speed and cost are the priorities.
Cold-start comparison vs. similar models (lower is better).
No subscriptions. Buy credits, pay per inference. Scale to zero when idle.
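As a rough sketch of what pay-per-inference pricing means in practice, the math is just credits burned per generation. The rates below are hypothetical placeholders for illustration, not actual Cumulus pricing:

```javascript
// Hypothetical per-inference cost estimate.
// creditsPerSecond is a placeholder rate, not a real Cumulus price.
const creditsPerSecond = 0.5;
// Sub-20-second generation per the model's headline claim.
const avgGenerationSeconds = 18;

// With no subscription, total spend scales linearly with usage
// and drops to zero when the deployment is idle.
const creditsPerVideo = creditsPerSecond * avgGenerationSeconds;
console.log(creditsPerVideo); // 9
```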
```javascript
import cumulus from "cumulus-sdk"

// Deploy FastWan T2V 1.3B on Ion
const client = await cumulus.deploy("fastwan-t2v-1-3b")

// Run inference
const result = await client.run({
  prompt: "Your prompt here",
  // model-specific params...
})
```