
Providers

Ironspire connects to multiple AI model providers so you can pick the right model for each agent. Rather than locking every agent to a single backend, the provider system lets you mix and match: run one agent on Claude Opus for deep reasoning while another uses a local Ollama model for quick iteration, all within the same project.

The Providers tab in Settings replaced the older "Backend" tab as of v3.11. If you are upgrading from an earlier version, your existing API key and model selection carry over automatically.

Provider Groups

Providers are organised into three groups based on how they connect and authenticate.

Primary

These are the most common providers and appear at the top of the Providers tab.

Provider     ID           Authentication
Claude SDK   claude-sdk   Claude CLI subscription (Max or Pro)
Claude API   claude-api   Anthropic API key
OpenAI       openai       OpenAI API key
Ollama       ollama       None (local server)

Claude SDK uses your existing Claude subscription through the CLI binary, so there is no additional per-token cost. Claude API connects directly to the Anthropic API with pay-per-token billing. OpenAI connects to the OpenAI platform. Ollama connects to a locally running Ollama instance, which is free but limited to models your hardware can run.

Cloud

Cloud providers connect to managed AI platforms from major cloud vendors.

Provider        ID             Authentication
Google Gemini   google         Google AI API key
AWS Bedrock     bedrock        AWS access key + secret key + region
Azure OpenAI    azure-openai   Azure API key + base URL + deployment name

These providers are useful if your organisation already has cloud accounts with negotiated pricing, or if you need models that are only available through a specific platform. Bedrock, for example, gives access to Amazon Nova models alongside Llama and Mistral variants.

OpenAI-Compatible

These providers all implement the OpenAI chat completions API format, which means Ironspire connects to them using the same underlying protocol with provider-specific adjustments.

Provider       ID          Authentication
Mistral        mistral     Mistral API key
DeepSeek       deepseek    DeepSeek API key
xAI Grok       xai         xAI API key
Groq           groq        Groq API key
Together AI    together    Together API key
Fireworks AI   fireworks   Fireworks API key

You can also add completely custom OpenAI-compatible endpoints for self-hosted models, proxies, or providers not yet built into Ironspire. See Custom endpoints for details.
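Because every provider in this group speaks the chat completions format, a client request looks the same regardless of which endpoint it targets. The sketch below shows how such a request might be assembled for a custom endpoint; the base URL, key, and model name are placeholders for illustration, not values shipped with Ironspire.

```python
# Hypothetical sketch: assembling a chat-completions request for a custom
# OpenAI-compatible endpoint. All concrete values here are placeholders.

def build_chat_request(base_url: str, api_key: str,
                       model: str, messages: list[dict]) -> dict:
    """Assemble the URL, headers, and JSON body for a chat completions call."""
    return {
        "url": f"{base_url.rstrip('/')}/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {"model": model, "messages": messages},
    }

request = build_chat_request(
    "http://localhost:8000",   # e.g. a self-hosted server or proxy
    "sk-example",              # placeholder key
    "my-local-model",
    [{"role": "user", "content": "Hello"}],
)
print(request["url"])  # http://localhost:8000/v1/chat/completions
```

Only the base URL and credentials change between providers in this group, which is why a single protocol implementation can cover all of them.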

Per-Agent Provider Routing

Provider selection in Ironspire is not a global toggle. Each agent can use a different provider and model, configured independently. This is a core design principle: in a multi-agent environment, different tasks call for different models.

When you create or edit an agent, you pick both a provider and a model from that provider's catalogue. This means you can run a mixed fleet: a Claude Opus agent for architecture decisions, a GPT-4.1 agent for code review, and a local Ollama agent for rapid prototyping, all in the same project session.

The default provider and model are set in Settings > Providers, but every agent can override them. If an agent's chosen provider becomes unavailable (invalid key, server down), only that agent is affected; the rest of the fleet continues operating normally.

Provider routing is also useful for cost management. You can reserve expensive, high-capability models for agents handling complex tasks while assigning cheaper or local models to agents doing routine work. The Analytics panel shows per-agent cost breakdowns so you can see exactly where your budget is going.

How Routing Works

  1. You set a global default provider and model in Settings > Providers
  2. When you create a new agent, it inherits the global default
  3. You can override the provider and model for any individual agent in the agent configuration
  4. Each agent's messages are routed to its assigned provider at request time
  5. Switching an agent's provider mid-conversation is seamless; history is preserved
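The inheritance rule in steps 2 and 3 can be sketched as a simple fallback: an agent's own provider/model wins when set, otherwise the global default applies. The dataclass fields below are assumptions for illustration, not Ironspire's actual internal schema.

```python
# Illustrative sketch of per-agent provider routing with a global fallback.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProviderChoice:
    provider: str  # provider ID, e.g. "claude-api" or "ollama"
    model: str     # model name within that provider's catalogue

def resolve(agent_override: Optional[ProviderChoice],
            global_default: ProviderChoice) -> ProviderChoice:
    """Return the provider/model an agent's requests are routed to."""
    return agent_override if agent_override is not None else global_default

default = ProviderChoice("claude-api", "example-model")
assert resolve(None, default) is default                    # step 2: inherit
assert resolve(ProviderChoice("ollama", "local-model"),
               default).provider == "ollama"                # step 3: override
```

Resolving at request time (step 4) rather than at agent creation is what makes mid-conversation switches cheap: the next request simply resolves to the new choice.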

Credential Storage

All API keys and secrets are encrypted at rest using your operating system's native credential storage.

Platform   Mechanism
Windows    DPAPI via Electron safeStorage
macOS      Keychain via Electron safeStorage
Linux      libsecret via Electron safeStorage

Credentials never leave your machine except when sent to the relevant provider's API endpoint. They are not included in project exports, state backups, or analytics data.

Ollama does not require any credentials. It connects to a local server URL (default http://localhost:11434) and discovers available models at runtime.
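Runtime discovery means the client asks the local server what it has installed rather than relying on a fixed catalogue. A client would GET the server's model-listing endpoint and read the names out of the response; the sketch below parses an abbreviated sample payload, and the exact response shape is an assumption based on Ollama's public API rather than anything specific to Ironspire.

```python
import json

# Sketch of runtime model discovery against a local Ollama server.
# A client would GET {base_url}/api/tags; the sample payload below is
# abbreviated for illustration.

def list_model_names(tags_response: dict) -> list[str]:
    """Extract model names from a model-listing response body."""
    return [m["name"] for m in tags_response.get("models", [])]

sample = json.loads('{"models": [{"name": "llama3:8b"}, {"name": "qwen2.5-coder:7b"}]}')
print(list_model_names(sample))  # ['llama3:8b', 'qwen2.5-coder:7b']
```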

Health Checks

Ironspire continuously monitors the status of each configured provider. The health state is displayed as a coloured dot next to the provider name in Settings.

State          Colour   Meaning
Reachable      Green    Connection verified, credentials valid
Unreachable    Red      Cannot connect to the provider endpoint
Rate-limited   Yellow   Provider is throttling requests
Invalid key    Red      Credentials were rejected by the provider

Health checks run automatically when you open the Providers tab and periodically during active sessions. You can also trigger a manual check by clicking the refresh icon on any provider card.

If a provider enters an unhealthy state, agents using that provider show a warning in the sidebar. You can reassign them to a healthy provider without losing conversation history.
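The four health states above map naturally onto the outcome of a connection probe. The sketch below shows one way such a classification could work; the `ProbeResult` fields and the specific HTTP status codes are assumptions for illustration, not Ironspire's actual health-check logic.

```python
# Illustrative mapping from a connection probe to the health states
# described in the table above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProbeResult:
    connected: bool
    status_code: Optional[int] = None  # HTTP status, if a response arrived

def classify(result: ProbeResult) -> tuple:
    """Return a (state, colour) pair for display in the Providers tab."""
    if not result.connected:
        return ("Unreachable", "Red")
    if result.status_code == 429:          # typical rate-limit status
        return ("Rate-limited", "Yellow")
    if result.status_code in (401, 403):   # typical auth-failure statuses
        return ("Invalid key", "Red")
    return ("Reachable", "Green")

assert classify(ProbeResult(True, 200)) == ("Reachable", "Green")
assert classify(ProbeResult(False)) == ("Unreachable", "Red")
```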

Troubleshooting Provider Issues

When a health check fails, start with these steps:

  1. Check the status dot colour: red means the connection failed entirely; yellow means rate limiting
  2. Verify your credentials: open the provider card in Settings and confirm the API key is correct
  3. Test connectivity: for cloud providers, check the provider's status page for outages
  4. For Ollama: ensure the Ollama server is running (ollama serve) and the URL is correct
  5. For Azure: confirm the base URL, API key, and deployment name all match your Azure portal configuration

If the issue persists, try disabling and re-enabling the provider. This forces a fresh connection and clears any cached state.

Model Selection at a Glance

Ironspire ships with 46 built-in model definitions. Here is a quick summary of what each provider group offers:

Group           Models                          Highlights
Claude          6 models (across SDK and API)   Up to 1M context, vision, tools on all models
OpenAI          8 models                        GPT-4.1 with 1M context; o-series reasoning models
Google Gemini   6 models                        All 1M context, all with tools and vision
Mistral         5 models                        Code-specialised variants (Codestral, Devstral)
DeepSeek        2 models                        V3 (tools), R1 (reasoning only, no tools)
xAI Grok        2 models                        2M context windows, the largest available
AWS Bedrock     11 models                       Nova family plus Llama and Mistral variants
Ollama          Dynamic                         Whatever you have pulled locally

For the complete model-by-model breakdown with context windows, output limits, and capability flags, see the Model registry.

Next steps