AIsa — pay-per-call compute for agents
One OpenAI-compatible endpoint (/v1/chat/completions) fronts 60+ frontier models across text, coding, vision, image, audio, and video. The same ENS-gated, Vyper-policied identity that authorizes a tip or a battle payout also meters AIsa compute — so an agent never accidentally burns the treasury on a runaway prompt.
Agents need brains. But brains cost money, and giving every sub-agent an unmetered OpenAI key is how a creator-economy demo turns into a 4am incident. AIsa solves this by treating LLM calls the same way Krump treats USDC payouts: identity, then policy, then settlement.
Every call carries the agent's ENS-derived identity. The same Vyper-mirrored policy that caps a tip at max_ticket_minor caps the agent's daily compute spend. When the budget's gone, AIsa returns a clean 402 Payment Required envelope — the agent can negotiate, settle, and replay. No surprises.
And because it's OpenAI-compatible, you swap models with a single string — Codex for tool-use, Gemini 3 Pro for vision, Seedream for art — without rewriting a line of orchestration.
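The identity-then-policy gate above can be sketched in a few lines. This is a hypothetical illustration, not the real schema: field names like `dailyCapMinor` and `spentTodayMinor` are placeholders for whatever the Vyper-mirrored policy actually stores.

```typescript
// Illustrative sketch of the compute-budget gate. Field names are
// placeholders; only the behavior (daily cap, 402 on overrun) comes
// from the text above.
interface ComputePolicy {
  dailyCapMinor: bigint;   // daily compute budget, in minor USDC units
  spentTodayMinor: bigint; // running total already settled today
}

// True when the quoted call price still fits under the daily cap.
// When this returns false, the gateway answers 402 instead of forwarding.
function withinBudget(policy: ComputePolicy, priceMinor: bigint): boolean {
  return policy.spentTodayMinor + priceMinor <= policy.dailyCapMinor;
}
```

The point of keeping the check this small is that the same predicate can be mirrored on-chain and off-chain without drift.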
- 60+ models — single auth, single endpoint.
- x402-native — payment-required is a feature, not an error.
- ENS-gated — same identity as the rest of the stack.
- Vyper-policied — daily caps + per-intent allowlists.
- Streaming SSE — token-by-token chat with backend secrets.
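The single-string model swap can be sketched as below. The host and model ids are placeholders; only the `/v1/chat/completions` path and the OpenAI-compatible body shape are taken from the text above.

```typescript
// Minimal sketch of the one-endpoint model swap. The base URL and model
// ids are illustrative placeholders, not AIsa's real identifiers.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function chatRequest(model: string, messages: ChatMessage[], stream = true) {
  return {
    url: "https://aisa.example/v1/chat/completions", // placeholder host
    body: { model, messages, stream },               // OpenAI-compatible body
  };
}

// Swapping capability is one string change; orchestration is untouched.
const codeCall = chatRequest("codex", [{ role: "user", content: "write a test" }]);
const artCall = chatRequest("seedream", [{ role: "user", content: "paint a battle" }]);
```

Because every capability rides the same URL and body shape, routing logic never branches on provider.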
Capability matrix
Each capability picks from a curated list of frontier models, all served through the same endpoint.
- Text — Reasoning, summarization, and dialogue across the GPT-5, Claude 4, Gemini 3, Qwen 3, DeepSeek, and Kimi families.
- Coding — Codex, Claude Sonnet thinking, and Qwen3-Coder for tool-use and structured generation.
- Vision — Image+text input across GPT-5, Claude Opus 4, Gemini 3 Pro, and Qwen3-VL.
- Image — Generation and editing via Gemini 3 Pro Image, Seedream, and Wan 2.7.
- Audio — Speech-capable GPT-4o and Gemini tiers via the same endpoint.
- Video — Long-context video understanding on Gemini 3 Pro Preview and Seed 2.
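A routing layer over the matrix above can be a plain lookup. The model ids here are illustrative placeholders distilled from the matrix, not AIsa's real identifiers.

```typescript
// Illustrative capability -> default model table. The ids are
// placeholders standing in for whatever the curated list resolves to.
const defaultModel: Record<string, string> = {
  text: "gpt-5",
  coding: "qwen3-coder",
  vision: "gemini-3-pro",
  image: "seedream",
  audio: "gpt-4o",
  video: "gemini-3-pro-preview",
};

// Resolve a capability to its default model, failing loudly on typos.
function pickModel(capability: string): string {
  const model = defaultModel[capability];
  if (!model) throw new Error(`unknown capability: ${capability}`);
  return model;
}
```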
Three modes, one endpoint
- api_key_proxy — Real OpenAI-compatible chat. The server holds the AIsa key (falling back to Lovable AI) and streams tokens via SSE.
- x402 challenge — Forces a 402 Payment Required envelope so agents can negotiate price before paying; the heart of pay-per-call.
- x402 settle — Replays the request with settlement headers, proving the end-to-end pay-per-call flow on a metered endpoint.
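The client side of the challenge-and-replay loop can be sketched as a small state decision. The `X-Payment` header name comes from the flow described here; the envelope fields (`price`, `payTo`, `nonce`) are assumptions about the wire schema.

```typescript
// Sketch of the x402 client loop: classify a response and, on 402,
// produce the replay carrying the settlement proof. Envelope fields
// are illustrative; the X-Payment header follows the flow in the text.
interface PaymentChallenge { price: string; payTo: string; nonce: string }

type Next =
  | { kind: "stream" }                                   // 200: consume the SSE stream
  | { kind: "replay"; headers: Record<string, string> }; // 402: settle, then retry

function handleResponse(status: number, challenge?: PaymentChallenge, proof?: string): Next {
  if (status === 200) return { kind: "stream" };
  if (status === 402 && challenge && proof) {
    // After settling externally, replay the identical request with proof attached.
    return { kind: "replay", headers: { "X-Payment": proof } };
  }
  throw new Error(`unhandled status ${status}`);
}
```

Treating 402 as a normal branch rather than an exception is what makes "payment-required is a feature" concrete.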
```
agent intent
     |
     v
ENS identity --> allowedIntents check
     | passes
     v
Vyper UCP policy --> daily compute cap, per-agent budget
     | approved
     v
AIsa /v1/chat/completions
     |
     +-- 200 OK --> stream answer (SSE)
     `-- 402 Payment Required --> challenge envelope
                                       |
                                       v
                      external settle --> replay with X-Payment headers
```
Playground
Pick a capability, model, and mode. api_key_proxy streams real tokens. x402 modes inspect the payment-required protocol.
Wire AIsa into a real session
Agent sessions can call AIsa for judging, commentary, captioning — all gated by the same ENS identity that authorizes the payout.
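A judging call wired into a session might be composed as below. This is a hypothetical sketch: the `X-Agent-ENS` header name, the model id, and the prompt are all placeholders; only the idea that one ENS identity gates both the payout and the compute call comes from the text.

```typescript
// Hypothetical wiring of a battle-judging call. Header name, model id,
// and prompt are illustrative; the shared ENS identity is the point.
function judgeRequest(ensName: string, battleTranscript: string) {
  return {
    path: "/v1/chat/completions",
    headers: { "X-Agent-ENS": ensName }, // placeholder identity header
    body: {
      model: "claude-sonnet",            // placeholder judging model
      messages: [
        { role: "system", content: "Score this krump battle 0-10 and justify." },
        { role: "user", content: battleTranscript },
      ],
    },
  };
}
```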