State-of-the-art open vision-language model at 8B scale. Top scores on MMBench, DocVQA, and OCRBench. Runs on all 5 universal-b nodes.
Cold-start time comparison vs. similar models (lower is better).
No subscriptions: buy credits, pay per inference, and scale to zero when idle.
import cumulus from "cumulus-sdk"

// Deploy InternVL 3.5 8B on Ion
const client = await cumulus.deploy("internvl3-5-8b")

// Run inference
const result = await client.run({
  prompt: "Your prompt here",
  // model-specific params...
})