Providers
Ironspire connects to multiple AI model providers so you can pick the right model for each agent. Rather than locking every agent to a single backend, the provider system lets you mix and match: run one agent on Claude Opus for deep reasoning while another uses a local Ollama model for quick iteration, all within the same project.
The Providers tab in Settings replaced the older "Backend" tab as of v3.11. If you are upgrading from an earlier version, your existing API key and model selection carry over automatically.
Provider Groups
Providers are organised into three groups based on how they connect and authenticate.
Primary
These are the most common providers and appear at the top of the Providers tab.
| Provider | ID | Authentication |
|---|---|---|
| Claude SDK | claude-sdk | Claude CLI subscription (Max or Pro) |
| Claude API | claude-api | Anthropic API key |
| OpenAI | openai | OpenAI API key |
| Ollama | ollama | None (local server) |
Claude SDK uses your existing Claude subscription through the CLI binary, so there is no additional per-token cost. Claude API connects directly to the Anthropic API with pay-per-token billing. OpenAI connects to the OpenAI platform. Ollama connects to a locally running Ollama instance, which is free but limited to models your hardware can run.
Cloud
Cloud providers connect to managed AI platforms from major cloud vendors.
| Provider | ID | Authentication |
|---|---|---|
| Google Gemini | google | Google AI API key |
| AWS Bedrock | bedrock | AWS access key + secret key + region |
| Azure OpenAI | azure-openai | Azure API key + base URL + deployment name |
These providers are useful if your organisation already has cloud accounts with negotiated pricing, or if you need models that are only available through a specific platform. Bedrock, for example, gives access to Amazon Nova models alongside Llama and Mistral variants.
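Azure OpenAI is the most involved of the three to configure because it needs three values rather than one key. A small sketch of how those values fit together, following Azure's documented REST URL convention — the function name and the `api-version` default here are illustrative, not Ironspire's actual implementation:

```typescript
// Sketch: how Azure OpenAI's base URL and deployment name combine into
// the endpoint Ironspire would call. The URL shape follows Azure's public
// REST convention; the function name and api-version are illustrative.
function azureChatEndpoint(
  baseUrl: string,
  deployment: string,
  apiVersion = "2024-06-01",
): string {
  // Azure routes requests to a named deployment rather than a model id
  return `${baseUrl.replace(/\/$/, "")}/openai/deployments/${deployment}` +
         `/chat/completions?api-version=${apiVersion}`;
}

console.log(azureChatEndpoint("https://myresource.openai.azure.com", "gpt-4o"));
```

The API key itself travels in a request header, separate from the URL, which is why the provider card asks for all three fields independently.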
OpenAI-Compatible
These providers all implement the OpenAI chat completions API format, which means Ironspire connects to them using the same underlying protocol with provider-specific adjustments.
| Provider | ID | Authentication |
|---|---|---|
| Mistral | mistral | Mistral API key |
| DeepSeek | deepseek | DeepSeek API key |
| xAI Grok | xai | xAI API key |
| Groq | groq | Groq API key |
| Together AI | together | Together API key |
| Fireworks AI | fireworks | Fireworks API key |
You can also add completely custom OpenAI-compatible endpoints for self-hosted models, proxies, or providers not yet built into Ironspire. See Custom endpoints for details.
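What makes this group interchangeable is the shared request shape. A minimal sketch of the chat-completions body these providers all accept — only the base URL, API key, and model id change per provider; the type and function names below are illustrative:

```typescript
// Sketch: the shared OpenAI chat-completions request shape. Swapping
// Mistral for DeepSeek or Groq changes the endpoint and model id, not
// the body format. Names here are illustrative, not Ironspire's API.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    model,        // e.g. a Mistral or DeepSeek model id
    messages,     // same message array regardless of provider
    stream: true, // most providers in this group support streaming
  };
}

const body = buildChatRequest("mistral-large-latest", [
  { role: "user", content: "Summarise this diff." },
]);
console.log(JSON.stringify(body));
```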
Per-Agent Provider Routing
Provider selection in Ironspire is not a global toggle. Each agent can use a different provider and model, configured independently. This is a core design principle: in a multi-agent environment, different tasks call for different models.
When you create or edit an agent, you pick both a provider and a model from that provider's catalogue. This means you can run a mixed fleet: a Claude Opus agent for architecture decisions, a GPT-4.1 agent for code review, and a local Ollama agent for rapid prototyping, all in the same project session.
The default provider and model are set in Settings > Providers, but every agent can override them. If an agent's chosen provider becomes unavailable (invalid key, server down), only that agent is affected; the rest of the fleet continues operating normally.
Provider routing is also useful for cost management. You can reserve expensive, high-capability models for agents handling complex tasks while assigning cheaper or local models to agents doing routine work. The Analytics panel shows per-agent cost breakdowns so you can see exactly where your budget is going.
How Routing Works
- You set a global default provider and model in Settings > Providers
- When you create a new agent, it inherits the global default
- You can override the provider and model for any individual agent in the agent configuration
- Each agent's messages are routed to its assigned provider at request time
- Switching an agent's provider mid-conversation is seamless; history is preserved
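The inherit-then-override rule above can be sketched in a few lines — the types and names here are illustrative, not Ironspire's internal API:

```typescript
// Sketch of the routing rule: an agent uses its own provider/model
// override if set, otherwise the global default. Illustrative types only.
interface ModelChoice {
  provider: string;
  model: string;
}

interface Agent {
  name: string;
  override?: ModelChoice;
}

function resolveRoute(agent: Agent, globalDefault: ModelChoice): ModelChoice {
  return agent.override ?? globalDefault;
}

const def: ModelChoice = { provider: "claude-api", model: "claude-opus" };

// No override: inherits the global default
console.log(resolveRoute({ name: "reviewer" }, def));
// Override: routes to a local Ollama model instead
console.log(resolveRoute(
  { name: "prototyper", override: { provider: "ollama", model: "llama3" } },
  def,
));
```

Because resolution happens at request time, changing either the global default or an agent's override takes effect on the very next message.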
Credential Storage
All API keys and secrets are encrypted at rest using your operating system's native credential storage.
| Platform | Mechanism |
|---|---|
| Windows | DPAPI via Electron safeStorage |
| macOS | Keychain via Electron safeStorage |
| Linux | libsecret via Electron safeStorage |
Credentials never leave your machine except when sent to the relevant provider's API endpoint. They are not included in project exports, state backups, or analytics data.
Ollama does not require any credentials. It connects to a local server URL (default http://localhost:11434) and discovers available models at runtime.
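Ollama's local server exposes `GET /api/tags`, which returns the models you have pulled. The sketch below parses that response shape offline; a live check would `fetch` the same path on the base URL. The field names match Ollama's documented response, trimmed to what is used here:

```typescript
// Sketch: discovering locally pulled Ollama models from the /api/tags
// response. The sample payload stands in for a live HTTP call.
const OLLAMA_BASE = "http://localhost:11434";

interface OllamaTags {
  models: { name: string }[];
}

function listLocalModels(payload: OllamaTags): string[] {
  return payload.models.map((m) => m.name);
}

// Example payload in the shape /api/tags returns (fields trimmed)
const sample: OllamaTags = {
  models: [{ name: "llama3:8b" }, { name: "qwen2.5-coder:7b" }],
};
console.log(listLocalModels(sample));
```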
Health Checks
Ironspire continuously monitors the status of each configured provider. The health state is displayed as a coloured dot next to the provider name in Settings.
| State | Colour | Meaning |
|---|---|---|
| Reachable | Green | Connection verified, credentials valid |
| Unreachable | Red | Cannot connect to the provider endpoint |
| Rate-limited | Yellow | Provider is throttling requests |
| Invalid key | Red | Credentials were rejected by the provider |
Health checks run automatically when you open the Providers tab and periodically during active sessions. You can also trigger a manual check by clicking the refresh icon on any provider card.
If a provider enters an unhealthy state, agents using that provider show a warning in the sidebar. You can reassign them to a healthy provider without losing conversation history.
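One plausible way the four health states in the table could be derived from an HTTP probe — the status-code mapping below is an assumption for illustration, not Ironspire's exact logic:

```typescript
// Sketch: mapping a probe result to the health states above.
// null means the request never got a response (network failure).
// The specific status-code mapping is an illustrative assumption.
type Health = "reachable" | "unreachable" | "rate-limited" | "invalid-key";

function classify(status: number | null): Health {
  if (status === null) return "unreachable";            // no response at all
  if (status === 401 || status === 403) return "invalid-key"; // credentials rejected
  if (status === 429) return "rate-limited";            // provider throttling
  if (status >= 200 && status < 300) return "reachable";
  return "unreachable";                                 // server errors, etc.
}

console.log(classify(200), classify(429), classify(null));
```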
Troubleshooting Provider Issues
When a health check fails, start with these steps:
- Check the status dot colour: red means the connection failed entirely; yellow means rate limiting
- Verify your credentials: open the provider card in Settings and confirm the API key is correct
- Test connectivity: for cloud providers, check the provider's status page for outages
- For Ollama: ensure the Ollama server is running (`ollama serve`) and the URL is correct
- For Azure: confirm the base URL, API key, and deployment name all match your Azure portal configuration
If the issue persists, try disabling and re-enabling the provider. This forces a fresh connection and clears any cached state.
Model Selection at a Glance
Ironspire ships with 46 built-in model definitions. Here is a quick summary of what each provider group offers:
| Group | Models | Highlights |
|---|---|---|
| Claude | 6 models (across SDK and API) | Up to 1M context, vision, tools on all models |
| OpenAI | 8 models | GPT-4.1 with 1M context; o-series reasoning models |
| Google Gemini | 6 models | All 1M context, all with tools and vision |
| Mistral | 5 models | Code-specialised variants (Codestral, Devstral) |
| DeepSeek | 2 models | V3 (tools), R1 (reasoning only, no tools) |
| xAI Grok | 2 models | 2M context windows, the largest available |
| AWS Bedrock | 11 models | Nova family plus Llama and Mistral variants |
| Ollama | Dynamic | Whatever you have pulled locally |
For the complete model-by-model breakdown with context windows, output limits, and capability flags, see the Model registry.