Managed Platform Overview

The any-llm managed platform is a cloud-hosted service that provides secure API key vaulting and usage tracking for all your LLM providers. Instead of managing multiple provider API keys across your codebase, you get a single virtual key that works with any supported provider while keeping your credentials encrypted and your usage tracked.

The managed platform is available at any-llm.ai.

Managing LLM API keys and tracking costs across multiple providers is challenging:

  • Security risks: API keys scattered across .env files, CI/CD pipelines, and developer machines
  • No visibility: Difficult to track spending across OpenAI, Anthropic, Google, and other providers
  • Key rotation pain: Updating keys means touching multiple systems and codebases
  • No performance insights: No easy way to measure latency, throughput, or reliability

The managed platform solves these problems:

  • Secure Key Vault: Your provider API keys are encrypted client-side before storage—we never see your raw keys
  • Single Virtual Key: One ANY_LLM_KEY works across all providers
  • Trace Analytics: Track tokens, costs, and performance metrics without logging prompts or responses
  • Zero Infrastructure: No servers to deploy, no databases to manage

The managed platform acts as a secure credential manager and trace-based usage tracker. Here’s the flow:

  1. You add provider keys to the platform dashboard (keys are encrypted in your browser before upload)
  2. You get a virtual key (ANY_LLM_KEY) that represents your project
  3. Your application uses the PlatformProvider with your virtual key
  4. The SDK authenticates with the platform, retrieves and decrypts your provider key client-side
  5. Your request goes directly to the LLM provider (OpenAI, Anthropic, etc.)
  6. OpenTelemetry spans produced during each platform-provider call are reported back for analytics, with prompt/response content attributes redacted before export
┌─────────────────────────────────────────────────────────────────────────┐
│  Your Application                                                       │
│                                                                         │
│  from any_llm import completion                                         │
│  completion(provider="platform", model="openai:gpt-4", ...)             │
└──────────────────────────────┬──────────────────────────────────────────┘
                               │
┌──────────────────────────────┴──────────────────────────────────────────┐
│  any-llm SDK (PlatformProvider)                                         │
│                                                                         │
│  1. Authenticate with platform using ANY_LLM_KEY                        │
│  2. Receive encrypted provider key                                      │
│  3. Decrypt provider key locally (client-side)                          │
│  4. Make request directly to provider                                   │
│  5. Report in-scope OTel spans (with content redaction) to platform     │
└────────────────┬─────────────────────────────────────┬──────────────────┘
                 │                                     │
                 ▼                                     ▼
┌─────────────────────────────┐     ┌────────────────────────────────────┐
│  any-llm Managed Platform   │     │  LLM Provider                      │
│                             │     │  (OpenAI, Anthropic, etc.)         │
│  • Encrypted key storage    │     │                                    │
│  • Trace tracking           │     │  Your prompts/responses go         │
│  • Cost analytics           │     │  directly here—never through       │
│  • Performance metrics     │     │  our platform                      │
└─────────────────────────────┘     └────────────────────────────────────┘
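From the application's point of view, the whole flow above collapses into one call. The sketch below is illustrative only: the build_request helper is hypothetical, and it assumes the completion(provider="platform", ...) signature shown in the diagram, with the virtual key supplied via the ANY_LLM_KEY environment variable.

```python
def build_request(model: str, prompt: str) -> dict:
    """Assemble keyword arguments for any_llm.completion (hypothetical helper).

    The model string uses the "provider:model" routing syntax from the diagram,
    which tells the platform which vaulted provider key to fetch and decrypt.
    """
    return {
        "provider": "platform",  # route through the PlatformProvider
        "model": model,          # e.g. "openai:gpt-4"
        "messages": [{"role": "user", "content": prompt}],
    }

kwargs = build_request("openai:gpt-4", "Hello!")

# In application code (assumes the any-llm SDK is installed and ANY_LLM_KEY
# is exported in the environment):
#     from any_llm import completion
#     response = completion(**kwargs)
```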

Your provider API keys are encrypted in your browser using XChaCha20-Poly1305 before being sent to our servers. The encryption key is derived from your account credentials and never leaves your device. This means:

  • We cannot read your provider API keys
  • Even if our database were compromised, your keys remain encrypted
  • You maintain full control over your credentials
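The client-side model can be sketched in a few lines. This is an illustrative outline, not the platform's actual implementation: the choice of scrypt, its work factors, and the exact way the result feeds XChaCha20-Poly1305 are all assumptions, with Python's standard library standing in for the browser-side derivation step.

```python
import hashlib
import os

def derive_vault_key(password: str, salt: bytes) -> bytes:
    """Illustrative KDF: turn account credentials into a 256-bit key.

    The real platform derives the key in the browser; the scrypt parameters
    below are placeholder work factors, not the platform's actual settings.
    """
    return hashlib.scrypt(
        password.encode(), salt=salt,
        n=2**14, r=8, p=1,   # memory-hard work factors (assumed values)
        dklen=32,            # 32 bytes = the XChaCha20-Poly1305 key size
    )

salt = os.urandom(16)                        # stored alongside the ciphertext
key = derive_vault_key("account-passphrase", salt)

# The 32-byte key would then drive an XChaCha20-Poly1305 AEAD to encrypt the
# provider API key before upload. The key itself never leaves the device, so
# the server only ever stores ciphertext.
```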

The platform tracks OpenTelemetry span data generated during each platform-provider request to provide cost and performance insights:

What we track for you:

  • Token counts (input and output)
  • Model name and provider
  • Request timestamps
  • Performance metrics (latency, throughput)
  • Additional OpenTelemetry span attributes/events emitted in the same request scope

What we never track:

  • Your prompts
  • Model responses
  • Any content from your conversations

Prompt/response payload attributes are removed from traces before export.
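A sketch of what that redaction step might look like, assuming OpenTelemetry GenAI-style attribute names; the prefix list below is illustrative, not the platform's documented filter set.

```python
# Attribute prefixes that carry message content in OTel GenAI-style spans
# (illustrative list; the platform's actual filter set is not documented here).
CONTENT_PREFIXES = ("gen_ai.prompt", "gen_ai.completion")

def redact_span_attributes(attributes: dict) -> dict:
    """Drop prompt/response payload attributes; keep usage and timing metadata."""
    return {
        key: value
        for key, value in attributes.items()
        if not key.startswith(CONTENT_PREFIXES)
    }

span_attrs = {
    "gen_ai.request.model": "gpt-4",
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 128,
    "gen_ai.prompt.0.content": "secret question",    # removed before export
    "gen_ai.completion.0.content": "secret answer",  # removed before export
}
exported = redact_span_attributes(span_attrs)
```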

Organize your usage by project, team, or environment:

  • Create separate projects for development, staging, and production
  • Track costs per project
  • Set up different provider keys per project
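One common pattern is a separate virtual key per environment, selected at startup. The environment-variable naming below is a hypothetical convention for illustration, not something the platform prescribes.

```python
import os

def virtual_key_for(environment: str) -> str:
    """Pick the virtual key for the current environment.

    Assumes one platform project (and one virtual key) per environment,
    stored under hypothetical names like ANY_LLM_KEY_PRODUCTION.
    """
    var = f"ANY_LLM_KEY_{environment.upper()}"
    try:
        return os.environ[var]
    except KeyError:
        raise RuntimeError(f"No virtual key configured: set {var}") from None

os.environ["ANY_LLM_KEY_STAGING"] = "vk-staging-placeholder"  # normally set in the shell
key = virtual_key_for("staging")
```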

any-llm offers two solutions for managing LLM access. Choose the one that fits your needs:

Feature            | Managed Platform                                         | Self-Hosted Gateway
-------------------|----------------------------------------------------------|-----------------------------------
Deployment         | Cloud-hosted (no infrastructure)                         | Self-hosted (Docker + Postgres)
Key Storage        | Client-side encrypted vault                              | Your own configuration
Budget Enforcement | Coming soon                                              | Built-in
User Management    | Per-project                                              | Full user/key management
Request Routing    | Direct to provider, no proxy                             | Through your gateway
Best For           | Teams wanting zero-ops key management and usage tracking | Organizations needing full control

You can also use both together—store your provider keys in the managed platform and use them in a self-hosted gateway deployment.

The any-llm managed platform is in open beta. During the beta:

  • Free access to all features
  • Core encryption and key management are production-ready
  • Dashboard UX and advanced features are being refined
  • Feedback is welcome at any-llm.ai

Ready to try the managed platform?

  1. Create an account at any-llm.ai
  2. Add your provider API keys
  3. Get your virtual key
  4. Make your first request