Configuring providers
Set up one or more providers so your agents can connect to AI models. Each provider has its own authentication requirements and model catalogue. You can enable as many providers as you need and assign them independently to each agent.
Opening the Providers Tab
- Press Ctrl+, (or Cmd+, on macOS) to open Settings
- Click the Providers tab
- You will see provider cards grouped into Primary, Cloud, and OpenAI-Compatible sections
Each card shows the provider name, a brief description, and a health indicator dot. Unconfigured providers appear greyed out until you enter valid credentials.
Provider Card Layout
Every provider card follows the same structure:
- Header row: provider icon, name, and health status dot (green, yellow, or red)
- Credential fields: API key, base URL, or other authentication inputs specific to the provider
- Model selector: a dropdown listing the models available through that provider
- Toggle: an enable/disable switch so you can configure a provider without activating it immediately
Click any provider card to expand it and reveal the configuration fields. Changes are saved automatically as you type, and a health check runs when you finish entering credentials.
Setting Up Each Provider
Claude SDK
The Claude SDK provider uses your Claude CLI subscription (Max or Pro) to power agents.
- Expand the Claude SDK card
- Ironspire detects your Claude CLI installation automatically
- If the CLI is installed and authenticated, the health dot turns green
- Select a default model from the dropdown (Opus 4.6, Sonnet 4.5, Haiku 4.5, etc.)
No API key is needed. The SDK communicates through the CLI binary, so billing goes through your existing Claude subscription.
The Claude CLI must be installed and authenticated on your machine. Visit claude.ai to install the CLI and sign in if you have not done so already.
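The automatic detection described above can be approximated with a quick check of your own. A minimal sketch, assuming the CLI binary is named `claude` (which is how it typically installs):

```python
import shutil
import subprocess

def claude_cli_status():
    """Return (installed, version) for the local Claude CLI, if any."""
    path = shutil.which("claude")  # binary name is an assumption
    if path is None:
        return (False, None)
    try:
        out = subprocess.run([path, "--version"], capture_output=True,
                             text=True, timeout=10)
        return (True, out.stdout.strip() or None)
    except (OSError, subprocess.SubprocessError):
        # Binary exists but could not be queried; still counts as installed
        return (True, None)
```

If `claude_cli_status()` reports the CLI as missing, install and sign in first; the health dot will stay grey or red until detection succeeds.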
Claude API
Connect directly to the Anthropic API with pay-per-token billing.
- Go to console.anthropic.com and create an API key
- Expand the Claude API card in Settings
- Paste your API key into the API Key field
- The health dot turns green once the key is validated
- Select a default model from the dropdown
The Claude API provider offers the same models as the SDK provider (Opus, Sonnet, Haiku) but charges per token rather than through a subscription.
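Under the hood, a direct API provider issues HTTP requests against the Anthropic Messages endpoint. A minimal sketch of building such a request with only the standard library (the model name in the test of the key is illustrative):

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_messages_request(api_key, model, prompt):
    """Build an HTTP request for the Anthropic Messages API."""
    body = json.dumps({
        "model": model,
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )
```

Sending a request like this with an invalid key returns an authentication error, which is essentially what the health check detects.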
OpenAI
- Go to platform.openai.com and generate an API key
- Expand the OpenAI card
- Paste your key into the API Key field
- Wait for the health check to confirm the connection
- Select a default model (GPT-4o, GPT-4.1, o3, etc.)
Ollama (Local)
Ollama runs models locally on your machine, so no API key or cloud account is required.
- Install Ollama from ollama.com
- Pull at least one model (e.g. `ollama pull llama3.1`)
- Ensure the Ollama server is running (`ollama serve`)
- Expand the Ollama card in Settings
- Confirm or edit the Server URL (default: `http://localhost:11434`)
- Ironspire discovers available models automatically and populates the dropdown
Ollama is excellent for offline work, experimentation with open-source models, and tasks where you want zero data leaving your machine. Performance depends entirely on your local hardware.
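If the model dropdown stays empty, you can query the Ollama server yourself: its `/api/tags` endpoint returns the models you have pulled. A minimal sketch:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

def model_names(tags_payload):
    """Extract model names from an Ollama /api/tags response."""
    return [m["name"] for m in tags_payload.get("models", [])]

def list_local_models(base_url=OLLAMA_URL):
    """Ask a running Ollama server which models are pulled locally."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return model_names(json.load(resp))
```

An empty list means the server is running but no models have been pulled; a connection error means `ollama serve` is not running at that URL.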
Google Gemini
- Get an API key from aistudio.google.com
- Expand the Google Gemini card
- Paste your key into the API Key field
- Select a model (Gemini 2.5 Pro, Gemini 3 Flash, etc.)
AWS Bedrock
Bedrock requires AWS credentials rather than a single API key.
- Expand the AWS Bedrock card
- Enter your Access Key ID
- Enter your Secret Access Key
- Select your Region from the dropdown (e.g. `us-east-1`)
- Ironspire lists the models available in your region
Ensure your AWS IAM user or role has the `bedrock:InvokeModel` permission. Model availability varies by region; not all models are enabled by default in every AWS account.
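A minimal IAM policy granting the required permission might look like the following (the streaming action is included on the assumption that streaming responses are used; in production, scope `Resource` down to specific model ARNs rather than `*`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```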
Azure OpenAI
Azure OpenAI requires a deployment-specific configuration.
- Expand the Azure OpenAI card
- Enter your API Key from the Azure portal
- Enter your Base URL (the endpoint URL for your Azure OpenAI resource, e.g. `https://your-resource.openai.azure.com`)
- Enter your Deployment Name (the name you gave when deploying a model in Azure)
- Wait for the health check to confirm
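The three fields combine into a deployment-specific endpoint URL. A sketch of how such a URL is composed (the `api-version` default here is an assumption; check which version your Azure resource supports):

```python
def azure_chat_url(base_url, deployment, api_version="2024-02-01"):
    """Compose the chat-completions endpoint for an Azure OpenAI deployment."""
    return (f"{base_url.rstrip('/')}/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")
```

This is why the Deployment Name matters: Azure routes requests by deployment, not by raw model name.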
OpenAI-Compatible Providers
Mistral, DeepSeek, xAI Grok, Groq, Together AI, and Fireworks AI all follow the same pattern:
- Expand the provider card
- Paste your API key from the provider's console
- Wait for the health check
- Select a model from the dropdown
Each provider's console for obtaining an API key:
| Provider | Console URL |
|---|---|
| Mistral | console.mistral.ai |
| DeepSeek | platform.deepseek.com |
| xAI Grok | console.x.ai |
| Groq | console.groq.com |
| Together AI | api.together.xyz |
| Fireworks AI | fireworks.ai |
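All of these providers expose the same OpenAI-style `/chat/completions` endpoint; only the base URL and API key differ. A minimal sketch of the shared request shape (the base URL and model name in the usage note are illustrative, not exact provider values):

```python
import json
import urllib.request

def chat_request(base_url, api_key, model, prompt):
    """Build an OpenAI-compatible /chat/completions request.

    Works for any provider that implements the OpenAI-style API,
    given that provider's base URL and key.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Usage might look like `chat_request("https://api.groq.com/openai/v1", key, "llama-3.1-8b-instant", "hi")`; this single code path is what makes adding OpenAI-compatible providers cheap.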
Understanding the Health Indicator
The coloured dot on each provider card updates automatically and reflects the current connection state.
| Dot colour | State | What to do |
|---|---|---|
| Green | Reachable | Everything is working |
| Yellow | Rate-limited | Wait a moment or reduce request volume |
| Red | Unreachable or invalid key | Check your credentials, network connection, or provider status page |
| Grey | Not configured | Enter credentials to activate |
Click the refresh icon on a provider card to force a health check at any time. If a provider transitions from green to red during a session, affected agents display a warning in the sidebar.
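The mapping from a health probe to a dot colour can be sketched as follows (the exact status codes Ironspire checks are an assumption; this mirrors common HTTP conventions):

```python
def dot_colour(status, configured=True):
    """Map a health-check result to the card's indicator colour.

    `status` is the HTTP status of the last probe, or None if the
    request failed at the network level.
    """
    if not configured:
        return "grey"
    if status is None or status in (401, 403):
        return "red"      # unreachable or invalid key
    if status == 429:
        return "yellow"   # rate-limited
    if 200 <= status < 300:
        return "green"
    return "red"
```

Note that yellow is transient: once the provider stops returning rate-limit responses, the next probe turns the dot green again.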
Per-Agent Model Selection
Providers and models are assigned per agent, not globally. The global default set in Settings is only used when creating new agents.
To change a running agent's provider or model:
- Right-click the agent in the sidebar (or click the agent menu icon)
- Select Configure
- Under Model, pick a provider and model from the dropdown
- Click Save
The change takes effect on the agent's next message. Conversation history is preserved across provider switches; only the underlying model changes.
You can also set the provider and model when creating a new agent through the Add Agent modal. The modal defaults to the global provider and model but lets you override both.