Wan 2.2 TI2V 5B
Wan 2.2 text+image-to-video model at the 5B parameter scale. Balanced quality and speed for interactive use. Accepts both a text prompt and a reference image for controlled generation.
[Chart: cold-start comparison vs. similar models; lower is better.]
No subscriptions. Buy credits, pay per inference. Scale to zero when idle.
import cumulus from "cumulus-sdk"

// Deploy Wan 2.2 TI2V 5B on Ion
const client = await cumulus.deploy("wan2-2-ti2v-5b")

// Run inference
const result = await client.run({
  prompt: "Your prompt here",
  // model-specific params...
})
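Since TI2V generation takes both a text prompt and a reference image, the request payload will typically carry both fields. The sketch below shows one plausible shape for such a payload; the field names (`image`, `num_frames`) and types are assumptions for illustration, not confirmed parameters of the `cumulus-sdk` API, so check the model's parameter reference before use.

```typescript
// Hypothetical request shape for a text+image-to-video call.
// All field names below are assumed, not taken from SDK docs.
interface TI2VRequest {
  prompt: string      // text description of the desired video
  image?: string      // reference image as a URL or base64 string (assumed)
  num_frames?: number // optional length control (assumed)
}

// Build a payload combining a prompt with a reference image,
// which is the controlled-generation mode described above.
const request: TI2VRequest = {
  prompt: "A sailboat drifting past the reference scene at sunset",
  image: "https://example.com/reference.png",
  num_frames: 48,
}

console.log(JSON.stringify(request))
```

A payload like this would then be passed to the inference call (e.g. `client.run(request)`), with any unsupported fields rejected or ignored by the model endpoint.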