Managed Platform Overview
What is the any-llm Managed Platform?
The any-llm managed platform is a cloud-hosted service that provides secure API key vaulting and usage tracking for all your LLM providers. Instead of managing multiple provider API keys across your codebase, you get a single virtual key that works with any supported provider while keeping your credentials encrypted and your usage tracked.
The managed platform is available at any-llm.ai.
Why use the Managed Platform?
Managing LLM API keys and tracking costs across multiple providers is challenging:
- Security risks: API keys scattered across .env files, CI/CD pipelines, and developer machines
- No visibility: Difficult to track spending across OpenAI, Anthropic, Google, and other providers
- Key rotation pain: Updating keys means touching multiple systems and codebases
- No performance insights: No easy way to measure latency, throughput, or reliability
The managed platform solves these problems:
- Secure Key Vault: Your provider API keys are encrypted client-side before storage—we never see your raw keys
- Single Virtual Key: One ANY_LLM_KEY works across all providers
- Trace Analytics: Track tokens, costs, and performance metrics without logging prompts or responses
- Zero Infrastructure: No servers to deploy, no databases to manage
How it works
The managed platform acts as a secure credential manager and trace-based usage tracker. Here’s the flow:
- You add provider keys to the platform dashboard (keys are encrypted in your browser before upload)
- You get a virtual key (ANY_LLM_KEY) that represents your project
- Your application uses the PlatformProvider with your virtual key
- The SDK authenticates with the platform, then retrieves and decrypts your provider key client-side
- Your request goes directly to the LLM provider (OpenAI, Anthropic, etc.)
- OpenTelemetry spans produced during each platform-provider call are reported back for analytics, with prompt/response content attributes redacted before export
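The flow above can be sketched in a few lines of Python. Every name, key format, and return value below is invented for illustration; this is a mock of the steps, not the real any-llm SDK or PlatformProvider internals.

```python
# Mock of the five-step flow. All identifiers here are hypothetical.

PLATFORM_VAULT = {"proj-1": {"openai": b"<ciphertext>"}}  # held server-side

def authenticate(any_llm_key: str) -> str:
    # Step 1: the virtual key identifies your project on the platform.
    assert any_llm_key.startswith("allm_"), "hypothetical key format"
    return "proj-1"

def decrypt_locally(ciphertext: bytes) -> str:
    # Step 3: decryption happens on the client, so the platform never
    # sees the plaintext provider key (the real cipher is
    # XChaCha20-Poly1305; this stub just returns a placeholder).
    return "sk-provider-key"

def call_provider(model: str, provider_key: str, messages: list) -> dict:
    # Step 4: the request goes straight to the LLM provider; the
    # platform is never on the request path.
    return {"model": model, "content": "(provider response)"}

def report_span(span: dict) -> dict:
    # Step 5: only redacted telemetry is sent back to the platform.
    span.pop("prompt", None)
    span.pop("response", None)
    return span

project = authenticate("allm_demo_key")
provider_key = decrypt_locally(PLATFORM_VAULT[project]["openai"])
result = call_provider("gpt-4", provider_key,
                       [{"role": "user", "content": "hi"}])
span = report_span({"model": "gpt-4", "input_tokens": 3, "prompt": "hi"})
```

The point of the sketch is the trust boundary: the platform only ever holds ciphertext and redacted spans, while prompts and responses travel directly between your application and the provider.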
┌─────────────────────────────────────────────────────────────────────────┐
│  Your Application                                                       │
│                                                                         │
│  from any_llm import completion                                         │
│  completion(provider="platform", model="openai:gpt-4", ...)             │
└────────────────────────────────────┬────────────────────────────────────┘
                                     │
                                     ▼
┌─────────────────────────────────────────────────────────────────────────┐
│  any-llm SDK (PlatformProvider)                                         │
│                                                                         │
│  1. Authenticate with platform using ANY_LLM_KEY                        │
│  2. Receive encrypted provider key                                      │
│  3. Decrypt provider key locally (client-side)                          │
│  4. Make request directly to provider                                   │
│  5. Report in-scope OTel spans (with content redaction) to platform     │
└──────────────┬─────────────────────────────────────┬────────────────────┘
               │                                     │
               ▼                                     ▼
┌─────────────────────────────┐   ┌────────────────────────────────────┐
│  any-llm Managed Platform   │   │  LLM Provider                      │
│                             │   │  (OpenAI, Anthropic, etc.)         │
│  • Encrypted key storage    │   │                                    │
│  • Trace tracking           │   │  Your prompts/responses go         │
│  • Cost analytics           │   │  directly here, never through      │
│  • Performance metrics      │   │  our platform                      │
└─────────────────────────────┘   └────────────────────────────────────┘

Key Features
Client-Side Encryption
Your provider API keys are encrypted in your browser using XChaCha20-Poly1305 before being sent to our servers. The encryption key is derived from your account credentials and never leaves your device. This means:
- We cannot read your provider API keys
- Even if our database were compromised, your keys remain encrypted
- You maintain full control over your credentials
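The key-derivation step can be illustrated with a short sketch. The platform’s actual scheme is not documented here, so everything below (the use of scrypt, the salt, the parameter choices) is an assumption, used only to show that the same credentials deterministically reproduce the same key entirely on the client.

```python
import hashlib

# Illustrative only: the platform derives the vault key from your
# account credentials in the browser. scrypt, this salt, and these
# parameters are assumptions for the sketch, not the real scheme.

def derive_vault_key(password: str, salt: bytes) -> bytes:
    # Derivation runs locally; only ciphertext ever reaches the server.
    return hashlib.scrypt(
        password.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32
    )

k1 = derive_vault_key("account-credential", b"per-account-salt")
k2 = derive_vault_key("account-credential", b"per-account-salt")
# Same credentials, same salt: the same 32-byte key on any of your
# devices, without the key itself ever being transmitted.
```

Because the server stores only ciphertext, losing your account credentials is the one way to lose access to the vault; the platform cannot decrypt on your behalf.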
Privacy-First Trace Tracking
The platform tracks OpenTelemetry span data generated during each platform-provider request to provide cost and performance insights:
What we track for you:
- Token counts (input and output)
- Model name and provider
- Request timestamps
- Performance metrics (latency, throughput)
- Additional OpenTelemetry span attributes/events emitted in the same request scope
What we never track:
- Your prompts
- Model responses
- Any content from your conversations
Prompt/response payload attributes are removed from traces before export.
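As a sketch, the redaction step amounts to dropping content-bearing attributes from each span before export. The gen_ai.* attribute names below follow OpenTelemetry’s GenAI semantic conventions; the exact prefixes the platform strips are an assumption for this example.

```python
# Hypothetical redaction pass over a span's attributes.

CONTENT_PREFIXES = ("gen_ai.prompt", "gen_ai.completion")

def redact_attributes(attributes: dict) -> dict:
    # Keep usage/metadata attributes; drop anything content-bearing.
    return {
        key: value
        for key, value in attributes.items()
        if not key.startswith(CONTENT_PREFIXES)
    }

span_attributes = {
    "gen_ai.request.model": "gpt-4",
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 7,
    "gen_ai.prompt.0.content": "(your prompt)",
    "gen_ai.completion.0.content": "(model response)",
}
exported = redact_attributes(span_attributes)
# exported keeps the model name and token counts, but no prompt or
# response text survives to leave your process.
```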
Project Organization
Organize your usage by project, team, or environment:
- Create separate projects for development, staging, and production
- Track costs per project
- Set up different provider keys per project
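One common layout is a virtual key per environment, so each deployment reports usage into its own project. The key values and the allm_ prefix below are placeholders, not the platform’s real key format:

```shell
# Hypothetical per-environment keys (values are placeholders):
# each environment points at its own platform project.
export ANY_LLM_KEY="allm_dev_..."        # development project
# export ANY_LLM_KEY="allm_staging_..."  # staging project
# export ANY_LLM_KEY="allm_prod_..."     # production project
```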
Platform vs. Gateway
any-llm offers two solutions for managing LLM access. Choose the one that fits your needs:
| Feature | Managed Platform | Self-Hosted Gateway |
|---|---|---|
| Deployment | Cloud-hosted (no infrastructure) | Self-hosted (Docker + Postgres) |
| Key Storage | Client-side encrypted vault | Your own configuration |
| Budget Enforcement | Coming soon | Built-in |
| User Management | Per-project | Full user/key management |
| Request Routing | Direct to provider, no proxy | Through your gateway |
| Best For | Teams wanting zero-ops key management and usage tracking | Organizations needing full control |
You can also use both together—store your provider keys in the managed platform and use them in a self-hosted gateway deployment.
Current Status
The any-llm managed platform is in open beta. During the beta:
- Free access to all features
- Core encryption and key management are production-ready
- Dashboard UX and advanced features are being refined
- Feedback is welcome at any-llm.ai
Getting Started
Ready to try the managed platform?
- Create an account at any-llm.ai
- Add your provider API keys
- Get your virtual key
- Make your first request