Language Model Provider Management: Bring Your Own LLM
Manage multiple language model providers directly in the UI. Assign specific models to individual agents, rotate API keys without downtime, and stay in control of your AI stack.
Not every team wants to use the same model for every agent. Some tasks call for a fast, cost-effective model. Others benefit from a more capable one. And some organizations have existing agreements with specific providers that they need to honor.
Agentwise now gives you full control over which language models power your agents — and makes it easy to manage as your needs evolve.
What’s New
Multiple providers in one place — Add and manage connections to OpenAI, Azure OpenAI, and other supported providers directly in the Agentwise UI. No need to edit config files or restart services.
Per-agent model assignment — Each agent can be assigned its own model. Your customer-facing agent can run on a different model than your internal HR assistant. You decide.
Model grouping by kind — Models are organized by their capabilities (chat, embedding, transcription, etc.), making it easy to pick the right one for the right job.
Zero-downtime key rotation — Update or rotate API keys for any provider without interrupting running agents. The transition happens cleanly in the background.
Webhook endpoints and transcription clients — Provider-level configuration also supports webhook endpoints and transcription clients, enabling voice and real-time use cases alongside standard chat.
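To make these ideas concrete, here is a minimal sketch of how a provider registry with per-agent assignment, model grouping by kind, and in-place key rotation could be modeled. This is illustrative only: the class and method names (ProviderRegistry, assign, rotate_key, and so on) are assumptions for the example, not Agentwise's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Provider:
    """A configured LLM provider connection (names/fields are illustrative)."""
    name: str
    api_key: str
    models: dict = field(default_factory=dict)  # model name -> kind ("chat", "embedding", ...)

class ProviderRegistry:
    """Hypothetical registry sketching the behaviors described above."""

    def __init__(self):
        self._providers = {}    # provider name -> Provider
        self._assignments = {}  # agent name -> (provider name, model name)

    def add_provider(self, provider):
        self._providers[provider.name] = provider

    def models_by_kind(self, kind):
        """Group models across all providers by capability kind."""
        return [
            (p.name, model)
            for p in self._providers.values()
            for model, k in p.models.items()
            if k == kind
        ]

    def assign(self, agent, provider_name, model):
        """Give an agent its own model, validated against the provider's catalog."""
        provider = self._providers[provider_name]
        if model not in provider.models:
            raise ValueError(f"{provider_name} has no model {model!r}")
        self._assignments[agent] = (provider_name, model)

    def rotate_key(self, provider_name, new_key):
        """Swap the key in place; existing agent assignments are untouched."""
        self._providers[provider_name].api_key = new_key

    def resolve(self, agent):
        """Look up (model, current key) at call time, so rotations take effect cleanly."""
        provider_name, model = self._assignments[agent]
        return model, self._providers[provider_name].api_key
```

The key design point the sketch illustrates: agents reference a provider indirectly and resolve the key at call time, which is what lets a rotation happen without touching or restarting any agent.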
Why This Matters
The AI model landscape moves fast. New models ship frequently, pricing changes, and what’s best today may not be best in six months. Having your provider configuration inside Agentwise — rather than baked into infrastructure — means you can adapt without an engineering project.
It also means organizations with data residency requirements can ensure the right models run in the right regions, under the right contracts.