Vendor Agnostic
A normalized invocation interface across LLM providers, not output equivalence.
Problem
Each provider exposes an incompatible API: OpenAI and Anthropic both use a messages array, but with different schemas (Anthropic takes the system prompt as a top-level field and requires max_tokens); local models expose OpenAI-compatible endpoints but break on response_format. Switching from OpenAI to Claude means rewriting request logic, error handling, and retry semantics. Estimated migration cost: 2–4 engineering weeks.
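The schema gap is concrete. A sketch of the same prompt in the two wire formats (field names follow the providers' public HTTP APIs; model names and values are illustrative):

```typescript
// Same prompt, two wire formats. OpenAI puts the system prompt inside
// the messages array; Anthropic takes it as a top-level field and
// requires max_tokens on every request.
const openaiRequest = {
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are terse." },
    { role: "user", content: "Explain ACID transactions" },
  ],
};

const anthropicRequest = {
  model: "claude-3-opus-20240229",
  system: "You are terse.",        // top-level, not a message
  max_tokens: 1024,                // required by Anthropic, optional for OpenAI
  messages: [{ role: "user", content: "Explain ACID transactions" }],
};
```

Every such difference is code a team must write, test, and maintain per provider.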
Neusnap Behavior
Adapters normalize request/response shapes across providers. Developer writes against a single interface. Neusnap translates to provider-specific API calls. Retry semantics are preserved across providers. Provider-specific rate limits are handled internally (exponential backoff, fallback on 429).
Guaranteed
- →No workflow code changes when swapping providers (config-only change)
- →Fallback across vendors works inside the same transaction
- →Retry semantics consistent across all providers (exponential backoff + jitter)
- →Authentication handled per adapter (API keys never exposed to client)
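The retry guarantee above (exponential backoff plus jitter) has a standard shape; a minimal sketch, assuming "full jitter" and illustrative constants rather than Neusnap's exact policy:

```typescript
// Illustrative retry schedule: exponential backoff with full jitter.
// The ceiling doubles per attempt up to a cap; the actual delay is
// drawn uniformly from [0, ceiling) to de-correlate retry storms.
function backoffMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling;
}

// Generic retry wrapper around any provider call (e.g. on a 429).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err;
      await new Promise((r) => setTimeout(r, backoffMs(attempt)));
    }
  }
}
```

Keeping this wrapper in one place, above the adapters, is what makes the semantics identical regardless of which provider ultimately serves the request.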
NOT Guaranteed
- × Provider pricing parity (OpenAI ≠ Anthropic cost per token)
- × Identical latency across providers (regional differences persist)
- × Identical reasoning depth or output quality
- × Feature parity (e.g., vision models, function calling support varies)
Example: Provider Swap
// Application code (unchanged)
await neusnap.llm({
model: "openai/gpt-4o",
prompt: "Explain ACID transactions"
});
// Config change only (no code modification)
{
"routingPolicy": {
"primary": "anthropic/claude-3-opus", // swapped from OpenAI
"fallback": ["openai/gpt-4o"]
}
}
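The translation step behind that config swap can be sketched as follows; the function name and the max_tokens default are hypothetical, while the endpoint paths are the providers' public ones:

```typescript
// Hypothetical translation: one normalized call fans out to the
// provider-specific endpoint and request-body shape.
type Normalized = { model: string; prompt: string };

function toProviderRequest(req: Normalized): { url: string; body: object } {
  const [provider, model] = req.model.split("/");
  switch (provider) {
    case "openai":
      return {
        url: "https://api.openai.com/v1/chat/completions",
        body: { model, messages: [{ role: "user", content: req.prompt }] },
      };
    case "anthropic":
      return {
        url: "https://api.anthropic.com/v1/messages",
        body: {
          model,
          max_tokens: 1024, // required by Anthropic's API
          messages: [{ role: "user", content: req.prompt }],
        },
      };
    default:
      throw new Error(`unsupported provider: ${provider}`);
  }
}
```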
// Adapter handles translation:
// - OpenAI: POST /v1/chat/completions
// - Anthropic: POST /v1/messages
// - Developer sees identical interface
Supported Providers
OpenAI
GPT-4, GPT-4o, o1, o3-mini
Anthropic
Claude 3 Opus, Claude 3.5 Sonnet, Claude 3.5 Haiku
Local (OpenAI-compatible)
Llama, Mistral, any /v1/chat/completions endpoint
Planned
Gemini, Cohere