Orpheus 3B
A 3B-parameter text-to-speech model with natural prosody and emotional expressiveness. Low-latency TTS for voice applications. Supports custom voice profiles and style control.
[Chart: cold-start time compared with similar models; lower is better.]
No subscriptions. Buy credits, pay per inference. Scale to zero when idle.
import cumulus from "cumulus-sdk"

// Deploy Orpheus 3B on Ion
const client = await cumulus.deploy("orpheus-3b")

// Run inference
const result = await client.run({
  prompt: "Your prompt here",
  // model-specific params...
})
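Voice profiles and style control would typically be passed as extra fields on the run payload. A minimal sketch of how such a payload could be assembled; the parameter names (voice, speed, emotion) are assumptions for illustration, not documented cumulus-sdk fields:

```typescript
// Hypothetical TTS request payload. Field names beyond `prompt`
// are illustrative assumptions, not confirmed SDK parameters.
interface TTSRequest {
  prompt: string;
  voice?: string;   // custom voice profile ID (assumed)
  speed?: number;   // playback speed multiplier (assumed)
  emotion?: string; // style control tag, e.g. "cheerful" (assumed)
}

// Build a request object, validating the required prompt field.
function buildTTSRequest(prompt: string, opts: Omit<TTSRequest, "prompt"> = {}): TTSRequest {
  if (!prompt) throw new Error("prompt is required");
  return { prompt, ...opts };
}

const req = buildTTSRequest("Welcome back!", { voice: "my-profile", emotion: "cheerful" });
console.log(JSON.stringify(req));
```

A payload like this would then be handed to `client.run(...)`; keeping optional style fields separate from the required prompt makes it easy to reuse one voice profile across many calls.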