Providers
InnoCode supports a wide range of LLM providers. It is configured to use InnoGPT by default, but you can connect and use any of the providers listed in the directory below.
To get started you need to:

- Add your InnoGPT API key using the `/connect` command.
- Use InnoGPT models in your InnoCode config.
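For example, a minimal config that picks a default model might look like the sketch below. The top-level `model` key and the `innogpt/qwen3-coder-480b` model ID are assumptions for illustration; check `/models` in the TUI for the exact IDs available to you.

```json
{
  "$schema": "https://innocode.io/config.json",
  // Assumed "provider-id/model-id" format — verify with /models in the TUI
  "model": "innogpt/qwen3-coder-480b"
}
```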
Credentials
When you add your InnoGPT API key with the `/connect` command, it is stored in `~/.local/share/innocode/auth.json`.
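The exact shape of this file is an implementation detail, but conceptually it maps a provider ID to a stored credential. A hypothetical sketch:

```json
// ~/.local/share/innocode/auth.json — hypothetical structure, for illustration only
{
  "innogpt": {
    "type": "api",
    "key": "sk-..."
  }
}
```

Since it holds your raw API keys, treat the file as a secret.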
Config
You can customize InnoGPT through the `provider` section in your InnoCode config.
Base URL
You can customize the base URL for InnoGPT by setting the `baseURL` option. This is useful for proxy services or custom endpoints.
{ "$schema": "https://innocode.io/config.json", "provider": { "innogpt": { "options": { "baseURL": "https://app.innogpt.de/api/ext/v1" } } }}InnoGPT
InnoGPT is a list of models provided by the InnoCode team that have been tested and verified to work well with InnoCode. Learn more.
- Run the `/connect` command in the TUI, select InnoGPT, and head to app.innogpt.de.
- Sign in, add your billing details, and copy your API key.
- Paste your API key.
- Run `/models` in the TUI to see the list of models we recommend.
It works like any other provider in InnoCode and is completely optional to use.
Directory
Let’s look at some of the providers in detail. If you’d like to add a provider to the list, feel free to open a PR.
302.AI
- Head over to the 302.AI console, create an account, and generate an API key.
- Run the `/connect` command and search for 302.AI.
- Enter your 302.AI API key.
- Run the `/models` command to select a model.
Amazon Bedrock
To use Amazon Bedrock with InnoCode:
- Head over to the Model catalog in the Amazon Bedrock console and request access to the models you want.

- Configure authentication using one of the following methods:

Environment Variables (Quick Start)

Set one of these environment variables while running innocode:

```bash
# Option 1: Using AWS access keys
AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY innocode

# Option 2: Using a named AWS profile
AWS_PROFILE=my-profile innocode

# Option 3: Using a Bedrock bearer token
AWS_BEARER_TOKEN_BEDROCK=XXX innocode
```

Or add them to your bash profile:

```bash
# ~/.bash_profile
export AWS_PROFILE=my-dev-profile
export AWS_REGION=us-east-1
```

Configuration File (Recommended)

For project-specific or persistent configuration, use `innocode.json`:

```json
{
  "$schema": "https://innocode.io/config.json",
  "provider": {
    "amazon-bedrock": {
      "options": {
        "region": "us-east-1",
        "profile": "my-aws-profile"
      }
    }
  }
}
```

Available options:

- `region` - AWS region (e.g., `us-east-1`, `eu-west-1`)
- `profile` - AWS named profile from `~/.aws/credentials`
- `endpoint` - Custom endpoint URL for VPC endpoints (alias for the generic `baseURL` option)
Advanced: VPC Endpoints
If you’re using VPC endpoints for Bedrock:
```json
{
  "$schema": "https://innocode.io/config.json",
  "provider": {
    "amazon-bedrock": {
      "options": {
        "region": "us-east-1",
        "profile": "production",
        "endpoint": "https://bedrock-runtime.us-east-1.vpce-xxxxx.amazonaws.com"
      }
    }
  }
}
```

Authentication Methods
- `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`: Create an IAM user and generate access keys in the AWS Console
- `AWS_PROFILE`: Use named profiles from `~/.aws/credentials`. First configure with `aws configure --profile my-profile` or `aws sso login`
- `AWS_BEARER_TOKEN_BEDROCK`: Generate long-term API keys from the Amazon Bedrock console
- `AWS_WEB_IDENTITY_TOKEN_FILE` / `AWS_ROLE_ARN`: For EKS IRSA (IAM Roles for Service Accounts) or other Kubernetes environments with OIDC federation. These environment variables are automatically injected by Kubernetes when using service account annotations.
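For reference, a named profile like the one used above lives in the standard AWS CLI files. A minimal sketch, where the profile name and key values are placeholders:

```ini
# ~/.aws/credentials — created by `aws configure --profile my-profile`
[my-profile]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# ~/.aws/config — optional per-profile defaults such as the region
[profile my-profile]
region = us-east-1
```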
Authentication Precedence
Amazon Bedrock uses the following authentication priority:
1. Bearer Token - `AWS_BEARER_TOKEN_BEDROCK` environment variable or token from the `/connect` command
2. AWS Credential Chain - Profile, access keys, shared credentials, IAM roles, Web Identity Tokens (EKS IRSA), instance metadata
- Run the `/models` command to select the model you want.
Anthropic
- Once you've signed up, run the `/connect` command and select Anthropic.

- Here you can select the Claude Pro/Max option and it'll open your browser and ask you to authenticate.

  ```
  ┌ Select auth method
  │
  │ Claude Pro/Max
  │ Create an API Key
  │ Manually enter API Key
  └
  ```

- Now all the Anthropic models should be available when you use the `/models` command.
Using your Claude Pro/Max subscription in InnoCode is not officially supported by Anthropic.
Using API keys
You can also select Create an API Key if you don't have a Pro/Max subscription. It'll also open your browser, ask you to log in to Anthropic, and give you a code you can paste in your terminal.
Or if you already have an API key, you can select Manually enter API Key and paste it in your terminal.
Azure OpenAI
- Head over to the Azure portal and create an Azure OpenAI resource. You'll need:

  - Resource name: This becomes part of your API endpoint (`https://RESOURCE_NAME.openai.azure.com/`)
  - API key: Either `KEY 1` or `KEY 2` from your resource

- Go to Azure AI Foundry and deploy a model.

- Run the `/connect` command and search for Azure.

- Enter your API key.

- Set your resource name as an environment variable:

  ```bash
  AZURE_RESOURCE_NAME=XXX innocode
  ```

  Or add it to your bash profile:

  ```bash
  # ~/.bash_profile
  export AZURE_RESOURCE_NAME=XXX
  ```

- Run the `/models` command to select your deployed model.
Azure Cognitive Services
- Head over to the Azure portal and create an Azure OpenAI resource. You'll need:

  - Resource name: This becomes part of your API endpoint (`https://AZURE_COGNITIVE_SERVICES_RESOURCE_NAME.cognitiveservices.azure.com/`)
  - API key: Either `KEY 1` or `KEY 2` from your resource

- Go to Azure AI Foundry and deploy a model.

- Run the `/connect` command and search for Azure Cognitive Services.

- Enter your API key.

- Set your resource name as an environment variable:

  ```bash
  AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX innocode
  ```

  Or add it to your bash profile:

  ```bash
  # ~/.bash_profile
  export AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX
  ```

- Run the `/models` command to select your deployed model.
Baseten
- Head over to Baseten, create an account, and generate an API key.
- Run the `/connect` command and search for Baseten.
- Enter your Baseten API key.
- Run the `/models` command to select a model.
Cerebras
- Head over to the Cerebras console, create an account, and generate an API key.
- Run the `/connect` command and search for Cerebras.
- Enter your Cerebras API key.
- Run the `/models` command to select a model like Qwen 3 Coder 480B.
Cloudflare AI Gateway
Cloudflare AI Gateway lets you access models from OpenAI, Anthropic, Workers AI, and more through a unified endpoint. With Unified Billing you don’t need separate API keys for each provider.
- Head over to the Cloudflare dashboard, navigate to AI > AI Gateway, and create a new gateway.

- Set your Account ID and Gateway ID as environment variables.

  ```bash
  # ~/.bash_profile
  export CLOUDFLARE_ACCOUNT_ID=your-32-character-account-id
  export CLOUDFLARE_GATEWAY_ID=your-gateway-id
  ```

- Run the `/connect` command and search for Cloudflare AI Gateway.

- Enter your Cloudflare API token.

  Or set it as an environment variable.

  ```bash
  # ~/.bash_profile
  export CLOUDFLARE_API_TOKEN=your-api-token
  ```

- Run the `/models` command to select a model.

You can also add models through your innocode config.
```json
{
  "$schema": "https://innocode.io/config.json",
  "provider": {
    "cloudflare-ai-gateway": {
      "models": {
        "openai/gpt-4o": {},
        "anthropic/claude-sonnet-4": {}
      }
    }
  }
}
```
Cortecs
- Head over to the Cortecs console, create an account, and generate an API key.
- Run the `/connect` command and search for Cortecs.
- Enter your Cortecs API key.
- Run the `/models` command to select a model like Kimi K2 Instruct.
DeepSeek
- Head over to the DeepSeek console, create an account, and click Create new API key.
- Run the `/connect` command and search for DeepSeek.
- Enter your DeepSeek API key.
- Run the `/models` command to select a DeepSeek model like DeepSeek Reasoner.
Deep Infra
- Head over to the Deep Infra dashboard, create an account, and generate an API key.
- Run the `/connect` command and search for Deep Infra.
- Enter your Deep Infra API key.
- Run the `/models` command to select a model.
Firmware
- Head over to the Firmware dashboard, create an account, and generate an API key.
- Run the `/connect` command and search for Firmware.
- Enter your Firmware API key.
- Run the `/models` command to select a model.
Fireworks AI
- Head over to the Fireworks AI console, create an account, and click Create API Key.
- Run the `/connect` command and search for Fireworks AI.
- Enter your Fireworks AI API key.
- Run the `/models` command to select a model like Kimi K2 Instruct.
GitLab Duo
GitLab Duo provides AI-powered agentic chat with native tool calling capabilities through GitLab’s Anthropic proxy.
- Run the `/connect` command and select GitLab.

- Choose your authentication method:

  ```
  ┌ Select auth method
  │
  │ OAuth (Recommended)
  │ Personal Access Token
  └
  ```

  Using OAuth (Recommended)

  Select OAuth and your browser will open for authorization.

  Using Personal Access Token

  - Go to GitLab User Settings > Access Tokens
  - Click Add new token
  - Name: `InnoCode`, Scopes: `api`
  - Copy the token (starts with `glpat-`)
  - Enter it in the terminal

- Run the `/models` command to see available models. Three Claude-based models are available:

  - duo-chat-haiku-4-5 (Default) - Fast responses for quick tasks
  - duo-chat-sonnet-4-5 - Balanced performance for most workflows
  - duo-chat-opus-4-5 - Most capable for complex analysis
Self-Hosted GitLab
For self-hosted GitLab instances:
```bash
export GITLAB_INSTANCE_URL=https://gitlab.company.com
export GITLAB_TOKEN=glpat-...
```

If your instance runs a custom AI Gateway:

```bash
GITLAB_AI_GATEWAY_URL=https://ai-gateway.company.com
```

Or add them to your bash profile:

```bash
# ~/.bash_profile
export GITLAB_INSTANCE_URL=https://gitlab.company.com
export GITLAB_AI_GATEWAY_URL=https://ai-gateway.company.com
export GITLAB_TOKEN=glpat-...
```

OAuth for Self-Hosted Instances
To make OAuth work for your self-hosted instance, you need to create a new application (Settings → Applications) with the callback URL `http://127.0.0.1:8080/callback` and the following scopes:
- api (Access the API on your behalf)
- read_user (Read your personal information)
- read_repository (Allows read-only access to the repository)
Then expose the application ID as an environment variable:

```bash
export GITLAB_OAUTH_CLIENT_ID=your_application_id_here
```

More documentation is available on the innocode-gitlab-auth homepage.
Configuration
Customize through innocode.json:
{ "$schema": "https://innocode.io/config.json", "provider": { "gitlab": { "options": { "instanceUrl": "https://gitlab.com", "featureFlags": { "duo_agent_platform_agentic_chat": true, "duo_agent_platform": true } } } }}GitLab API Tools (Optional, but highly recommended)
To access GitLab tools (merge requests, issues, pipelines, CI/CD, etc.):
{ "$schema": "https://innocode.io/config.json", "plugin": ["@gitlab/innocode-gitlab-plugin"]}This plugin provides comprehensive GitLab repository management capabilities including MR reviews, issue tracking, pipeline monitoring, and more.
GitHub Copilot
To use your GitHub Copilot subscription with innocode:
- Run the `/connect` command and search for GitHub Copilot.

- Navigate to github.com/login/device and enter the code.

  ```
  ┌ Login with GitHub Copilot
  │
  │ https://github.com/login/device
  │
  │ Enter code: 8F43-6FCF
  │
  └ Waiting for authorization...
  ```

- Now run the `/models` command to select the model you want.
Google Vertex AI
To use Google Vertex AI with InnoCode:
- Head over to the Model Garden in the Google Cloud Console and check the models available in your region.

- Set the required environment variables:

  - `GOOGLE_CLOUD_PROJECT`: Your Google Cloud project ID
  - `VERTEX_LOCATION` (optional): The region for Vertex AI (defaults to `global`)
  - Authentication (choose one):
    - `GOOGLE_APPLICATION_CREDENTIALS`: Path to your service account JSON key file (see the note on creating one after these steps)
    - Authenticate using the gcloud CLI: `gcloud auth application-default login`

  Set them while running innocode.

  ```bash
  GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json GOOGLE_CLOUD_PROJECT=your-project-id innocode
  ```

  Or add them to your bash profile.

  ```bash
  # ~/.bash_profile
  export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
  export GOOGLE_CLOUD_PROJECT=your-project-id
  export VERTEX_LOCATION=global
  ```
- Run the `/models` command to select the model you want.
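If you want to use `GOOGLE_APPLICATION_CREDENTIALS` but don't have a key file yet, one way to generate one is with the gcloud CLI. A minimal sketch, where the service account and project names are placeholders:

```bash
# Create a JSON key for an existing service account (names are placeholders)
gcloud iam service-accounts keys create ./service-account.json \
  --iam-account=my-innocode-sa@your-project-id.iam.gserviceaccount.com
```

The service account will also need a role that permits Vertex AI calls, such as Vertex AI User (`roles/aiplatform.user`).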
Groq
- Head over to the Groq console, click Create API Key, and copy the key.
- Run the `/connect` command and search for Groq.
- Enter the API key for the provider.
- Run the `/models` command to select the one you want.
Hugging Face
Hugging Face Inference Providers provides access to open models supported by 17+ providers.
- Head over to Hugging Face settings to create a token with permission to make calls to Inference Providers.
- Run the `/connect` command and search for Hugging Face.
- Enter your Hugging Face token.
- Run the `/models` command to select a model like Kimi-K2-Instruct or GLM-4.6.
Helicone
Helicone is an LLM observability platform that provides logging, monitoring, and analytics for your AI applications. The Helicone AI Gateway routes your requests to the appropriate provider automatically based on the model.
- Head over to Helicone, create an account, and generate an API key from your dashboard.
- Run the `/connect` command and search for Helicone.
- Enter your Helicone API key.
- Run the `/models` command to select a model.
For more providers and advanced features like caching and rate limiting, check the Helicone documentation.
Optional Configs
In the event you see a feature or model from Helicone that isn’t configured automatically through innocode, you can always configure it yourself.
Here's Helicone's Model Directory; you'll need it to grab the IDs of the models you want to add.
{ "$schema": "https://innocode.io/config.json", "provider": { "helicone": { "npm": "@ai-sdk/openai-compatible", "name": "Helicone", "options": { "baseURL": "https://ai-gateway.helicone.ai", }, "models": { "gpt-4o": { // Model ID (from Helicone's model directory page) "name": "GPT-4o", // Your own custom name for the model }, "claude-sonnet-4-20250514": { "name": "Claude Sonnet 4", }, }, }, },}Custom Headers
Helicone supports custom headers for features like caching, user tracking, and session management. Add them to your provider config using `options.headers`:
{ "$schema": "https://innocode.io/config.json", "provider": { "helicone": { "npm": "@ai-sdk/openai-compatible", "name": "Helicone", "options": { "baseURL": "https://ai-gateway.helicone.ai", "headers": { "Helicone-Cache-Enabled": "true", "Helicone-User-Id": "innocode", }, }, }, },}Session tracking
Helicone’s Sessions feature lets you group related LLM requests together. Use the innocode-helicone-session plugin to automatically log each InnoCode conversation as a session in Helicone.
```bash
npm install -g innocode-helicone-session
```

Add it to your config.
{ "plugin": ["innocode-helicone-session"]}The plugin injects Helicone-Session-Id and Helicone-Session-Name headers into your requests. In Helicone’s Sessions page, you’ll see each InnoCode conversation listed as a separate session.
Common Helicone headers
| Header | Description |
|---|---|
| `Helicone-Cache-Enabled` | Enable response caching (`true`/`false`) |
| `Helicone-User-Id` | Track metrics by user |
| `Helicone-Property-[Name]` | Add custom properties (e.g., `Helicone-Property-Environment`) |
| `Helicone-Prompt-Id` | Associate requests with prompt versions |
See the Helicone Header Directory for all available headers.
llama.cpp
You can configure innocode to use local models through llama.cpp's `llama-server` utility.
{ "$schema": "https://innocode.io/config.json", "provider": { "llama.cpp": { "npm": "@ai-sdk/openai-compatible", "name": "llama-server (local)", "options": { "baseURL": "http://127.0.0.1:8080/v1" }, "models": { "qwen3-coder:a3b": { "name": "Qwen3-Coder: a3b-30b (local)", "limit": { "context": 128000, "output": 65536 } } } } }}In this example:
- `llama.cpp` is the custom provider ID. This can be any string you want.
- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- `name` is the display name for the provider in the UI.
- `options.baseURL` is the endpoint for the local server.
- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
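For this config to work, `llama-server` must be listening on the `baseURL` above. A minimal sketch of starting it; the GGUF filename is a placeholder, and the context size mirrors the `limit.context` in the config:

```bash
# Serve a local GGUF model on the port the config points at (model path is a placeholder)
llama-server -m ./qwen3-coder-a3b.gguf --port 8080 -c 128000
```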
IO.NET
IO.NET offers 17 models optimized for various use cases:
- Head over to the IO.NET console, create an account, and generate an API key.
- Run the `/connect` command and search for IO.NET.
- Enter your IO.NET API key.
- Run the `/models` command to select a model.
LM Studio
You can configure innocode to use local models through LM Studio.
{ "$schema": "https://innocode.io/config.json", "provider": { "lmstudio": { "npm": "@ai-sdk/openai-compatible", "name": "LM Studio (local)", "options": { "baseURL": "http://127.0.0.1:1234/v1" }, "models": { "google/gemma-3n-e4b": { "name": "Gemma 3n-e4b (local)" } } } }}In this example:
- `lmstudio` is the custom provider ID. This can be any string you want.
- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- `name` is the display name for the provider in the UI.
- `options.baseURL` is the endpoint for the local server.
- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
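As with llama.cpp, InnoCode only talks to an already-running server. In LM Studio you can start the local server from the app's Developer tab, or, assuming you have LM Studio's `lms` CLI installed, from the terminal:

```bash
# Start LM Studio's local OpenAI-compatible server (default port 1234)
lms server start
```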
Moonshot AI
To use Kimi K2 from Moonshot AI:
- Head over to the Moonshot AI console, create an account, and click Create API key.
- Run the `/connect` command and search for Moonshot AI.
- Enter your Moonshot API key.
- Run the `/models` command to select Kimi K2.
MiniMax
- Head over to the MiniMax API Console, create an account, and generate an API key.
- Run the `/connect` command and search for MiniMax.
- Enter your MiniMax API key.
- Run the `/models` command to select a model like M2.1.
Nebius Token Factory
- Head over to the Nebius Token Factory console, create an account, and click Add Key.
- Run the `/connect` command and search for Nebius Token Factory.
- Enter your Nebius Token Factory API key.
- Run the `/models` command to select a model like Kimi K2 Instruct.
Ollama
You can configure innocode to use local models through Ollama.
{ "$schema": "https://innocode.io/config.json", "provider": { "ollama": { "npm": "@ai-sdk/openai-compatible", "name": "Ollama (local)", "options": { "baseURL": "http://localhost:11434/v1" }, "models": { "llama2": { "name": "Llama 2" } } } }}In this example:
- `ollama` is the custom provider ID. This can be any string you want.
- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- `name` is the display name for the provider in the UI.
- `options.baseURL` is the endpoint for the local server.
- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
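The model ID in the config must match a model you've pulled locally. For the example above that would be:

```bash
# Download the model referenced in the config
ollama pull llama2

# Ollama usually runs as a background service; start it manually if it isn't
ollama serve
```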
Ollama Cloud
To use Ollama Cloud with InnoCode:
- Head over to https://ollama.com/ and sign in or create an account.

- Navigate to Settings > Keys and click Add API Key to generate a new API key.

- Copy the API key for use in InnoCode.

- Run the `/connect` command and search for Ollama Cloud.

- Enter your Ollama Cloud API key.

- Important: Before using cloud models in InnoCode, you must pull the model information locally:

  ```bash
  ollama pull gpt-oss:20b-cloud
  ```

- Run the `/models` command to select your Ollama Cloud model.
OpenAI
We recommend signing up for ChatGPT Plus or Pro.
- Once you've signed up, run the `/connect` command and select OpenAI.

- Here you can select the ChatGPT Plus/Pro option and it'll open your browser and ask you to authenticate.

  ```
  ┌ Select auth method
  │
  │ ChatGPT Plus/Pro
  │ Manually enter API Key
  └
  ```

- Now all the OpenAI models should be available when you use the `/models` command.
Using API keys
If you already have an API key, you can select Manually enter API Key and paste it in your terminal.
InnoGPT
InnoGPT is a list of tested and verified models provided by the InnoCode team. Learn more.
- Sign in to InnoGPT and click Create API Key.
- Run the `/connect` command and search for InnoGPT.
- Enter your InnoGPT API key.
- Run the `/models` command to select a model like Qwen 3 Coder 480B.
OpenRouter
- Head over to the OpenRouter dashboard, click Create API Key, and copy the key.
- Run the `/connect` command and search for OpenRouter.
- Enter the API key for the provider.
- Many OpenRouter models are preloaded by default; run the `/models` command to select the one you want.

You can also add additional models through your innocode config.
```json
{
  "$schema": "https://innocode.io/config.json",
  "provider": {
    "openrouter": {
      "models": {
        "somecoolnewmodel": {}
      }
    }
  }
}
```
You can also customize them through your innocode config. Here's an example of specifying a provider:

```json
{
  "$schema": "https://innocode.io/config.json",
  "provider": {
    "openrouter": {
      "models": {
        "moonshotai/kimi-k2": {
          "options": {
            "provider": {
              "order": ["baseten"],
              "allow_fallbacks": false
            }
          }
        }
      }
    }
  }
}
```
SAP AI Core
SAP AI Core provides access to 40+ models from OpenAI, Anthropic, Google, Amazon, Meta, Mistral, and AI21 through a unified platform.
- Go to your SAP BTP Cockpit, navigate to your SAP AI Core service instance, and create a service key.

- Run the `/connect` command and search for SAP AI Core.

- Enter your service key JSON.

  Or set the `AICORE_SERVICE_KEY` environment variable:

  ```bash
  AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}' innocode
  ```

  Or add it to your bash profile:

  ```bash
  # ~/.bash_profile
  export AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}'
  ```

- Optionally set deployment ID and resource group:

  ```bash
  AICORE_DEPLOYMENT_ID=your-deployment-id AICORE_RESOURCE_GROUP=your-resource-group innocode
  ```

- Run the `/models` command to select from 40+ available models.
OVHcloud AI Endpoints
- Head over to the OVHcloud panel. Navigate to the Public Cloud section, then AI & Machine Learning > AI Endpoints, and in the API Keys tab, click Create a new API key.
- Run the `/connect` command and search for OVHcloud AI Endpoints.
- Enter your OVHcloud AI Endpoints API key.
- Run the `/models` command to select a model like gpt-oss-120b.
Scaleway
To use Scaleway Generative APIs with InnoCode:
- Head over to the Scaleway Console IAM settings to generate a new API key.
- Run the `/connect` command and search for Scaleway.
- Enter your Scaleway API key.
- Run the `/models` command to select a model like devstral-2-123b-instruct-2512 or gpt-oss-120b.
Together AI
- Head over to the Together AI console, create an account, and click Add Key.
- Run the `/connect` command and search for Together AI.
- Enter your Together AI API key.
- Run the `/models` command to select a model like Kimi K2 Instruct.
Venice AI
- Head over to the Venice AI console, create an account, and generate an API key.
- Run the `/connect` command and search for Venice AI.
- Enter your Venice AI API key.
- Run the `/models` command to select a model like Llama 3.3 70B.
Vercel AI Gateway
Vercel AI Gateway lets you access models from OpenAI, Anthropic, Google, xAI, and more through a unified endpoint. Models are offered at list price with no markup.
- Head over to the Vercel dashboard, navigate to the AI Gateway tab, and click API keys to create a new API key.
- Run the `/connect` command and search for Vercel AI Gateway.
- Enter your Vercel AI Gateway API key.
- Run the `/models` command to select a model.
You can also customize models through your innocode config. Here’s an example of specifying provider routing order.
{ "$schema": "https://innocode.io/config.json", "provider": { "vercel": { "models": { "anthropic/claude-sonnet-4": { "options": { "order": ["anthropic", "vertex"] } } } } }}Some useful routing options:
| Option | Description |
|---|---|
| `order` | Provider sequence to try |
| `only` | Restrict to specific providers |
| `zeroDataRetention` | Only use providers with zero data retention policies |
xAI
- Head over to the xAI console, create an account, and generate an API key.
- Run the `/connect` command and search for xAI.
- Enter your xAI API key.
- Run the `/models` command to select a model like Grok Beta.
Z.AI
- Head over to the Z.AI API console, create an account, and click Create a new API key.

- Run the `/connect` command and search for Z.AI.

  If you are subscribed to the GLM Coding Plan, select Z.AI Coding Plan.

- Enter your Z.AI API key.

- Run the `/models` command to select a model like GLM-4.7.
ZenMux
- Head over to the ZenMux dashboard, click Create API Key, and copy the key.
- Run the `/connect` command and search for ZenMux.
- Enter the API key for the provider.
- Many ZenMux models are preloaded by default; run the `/models` command to select the one you want.

You can also add additional models through your innocode config.

```json
{
  "$schema": "https://innocode.io/config.json",
  "provider": {
    "zenmux": {
      "models": {
        "somecoolnewmodel": {}
      }
    }
  }
}
```
Custom provider
To add any OpenAI-compatible provider that's not listed in the `/connect` command:
- Run the `/connect` command and scroll down to Other.

  ```
  $ /connect

  ┌ Add credential
  │
  ◆ Select provider
  │ ...
  │ ● Other
  └
  ```

- Enter a unique ID for the provider.

  ```
  $ /connect

  ┌ Add credential
  │
  ◇ Enter provider id
  │ myprovider
  └
  ```

- Enter your API key for the provider.

  ```
  $ /connect

  ┌ Add credential
  │
  ▲ This only stores a credential for myprovider - you will need to
  │ configure it in innocode.json, check the docs for examples.
  │
  ◇ Enter your API key
  │ sk-...
  └
  ```

- Create or update your `innocode.json` file in your project directory:

  ```json
  {
    "$schema": "https://innocode.io/config.json",
    "provider": {
      "myprovider": {
        "npm": "@ai-sdk/openai-compatible",
        "name": "My AI Provider Display Name",
        "options": {
          "baseURL": "https://api.myprovider.com/v1"
        },
        "models": {
          "my-model-name": {
            "name": "My Model Display Name"
          }
        }
      }
    }
  }
  ```

  Here are the configuration options:

  - npm: AI SDK package to use, `@ai-sdk/openai-compatible` for OpenAI-compatible providers
  - name: Display name in UI.
  - models: Available models.
  - options.baseURL: API endpoint URL.
  - options.apiKey: Optionally set the API key, if not using auth.
  - options.headers: Optionally set custom headers.

  More on the advanced options in the example below.

- Run the `/models` command and your custom provider and models will appear in the selection list.
Example
Here's an example setting the `apiKey`, `headers`, and model `limit` options.
{ "$schema": "https://innocode.io/config.json", "provider": { "myprovider": { "npm": "@ai-sdk/openai-compatible", "name": "My AI ProviderDisplay Name", "options": { "baseURL": "https://api.myprovider.com/v1", "apiKey": "{env:ANTHROPIC_API_KEY}", "headers": { "Authorization": "Bearer custom-token" } }, "models": { "my-model-name": { "name": "My Model Display Name", "limit": { "context": 200000, "output": 65536 } } } } }}Configuration details:
- `apiKey`: Set using the `env` variable syntax, learn more.
- `headers`: Custom headers sent with each request.
- `limit.context`: Maximum input tokens the model accepts.
- `limit.output`: Maximum tokens the model can generate.
The limit fields allow InnoCode to understand how much context you have left. Standard providers pull these from models.dev automatically.
Troubleshooting
If you are having trouble with configuring a provider, check the following:
- Check the auth setup: Run `innocode auth list` to see if the credentials for the provider are added to your config.

  This doesn't apply to providers like Amazon Bedrock that rely on environment variables for their auth.

- For custom providers, check the innocode config and make sure that:

  - The provider ID used in the `/connect` command matches the ID in your innocode config.
  - The right npm package is used for the provider. For example, use `@ai-sdk/cerebras` for Cerebras, and for all other OpenAI-compatible providers, use `@ai-sdk/openai-compatible`.
  - The correct API endpoint is used in the `options.baseURL` field.