Models

Configuring an LLM provider and model.

InnoCode is configured to use InnoGPT only, with EU-hosted models and a single API key.


Providers

InnoGPT is preloaded by default. After you add your InnoGPT API key using /connect, the models will be available when you start InnoCode.
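For example, run the command below from inside InnoCode and follow the prompt to enter your InnoGPT API key (the exact prompt may vary by version):

/connect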

Learn more about InnoGPT.


Select a model

Once you’ve configured your provider, you can select the model you want by typing:

/models

There are a lot of models out there, with new ones coming out every week.

However, only a few of them are good at both generating code and tool calling.

Here are several InnoGPT models that work well with InnoCode, in no particular order:

  • GPT 4o
  • GPT 4o Mini
  • Claude Sonnet 4.5
  • Claude Haiku 4.5
  • Gemini 3 Pro

Set a default

To set one of these as the default model, you can set the model key in your InnoCode config.

innocode.json
{
  "$schema": "https://innocode.io/config.json",
  "model": "innogpt/gpt-4o"
}

Here the full ID is provider_id/model_id. For InnoGPT, that looks like innogpt/gpt-4o.


Configure models

You can globally configure a model’s options through the config.

innocode.jsonc
{
  "$schema": "https://innocode.io/config.json",
  "provider": {
    "innogpt": {
      "models": {
        "gpt-5": {
          "options": {
            "reasoningEffort": "high",
            "textVerbosity": "low",
            "reasoningSummary": "auto",
            "include": ["reasoning.encrypted_content"],
          },
        },
      },
    },
    "anthropic": {
      "models": {
        "claude-sonnet-4-5-20250929": {
          "options": {
            "thinking": {
              "type": "enabled",
              "budgetTokens": 16000,
            },
          },
        },
      },
    },
  },
}

Here we’re configuring global settings for two built-in models: gpt-5 when accessed via the innogpt provider, and claude-sonnet-4-5-20250929 when accessed via the anthropic provider. The built-in provider and model names can be found on Models.dev.

You can also configure these options for any agents that you are using. The agent config overrides any global options here. Learn more.
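A per-agent override could look roughly like the sketch below. This is a minimal sketch: the agent section name, the review agent, and the placement of its options are assumptions here, so check the agents docs for the exact schema.

innocode.jsonc
{
  "$schema": "https://innocode.io/config.json",
  // Hypothetical per-agent override: the "agent" key, the "review" agent
  // name, and the option placement are assumptions, not the confirmed schema.
  "agent": {
    "review": {
      "model": "innogpt/gpt-5",
      "options": {
        "reasoningEffort": "low",
      },
    },
  },
}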

You can also define custom variants that extend built-in ones. Variants let you configure different settings for the same model without creating duplicate entries:

innocode.jsonc
{
  "$schema": "https://innocode.io/config.json",
  "provider": {
    "innocode": {
      "models": {
        "gpt-5": {
          "variants": {
            "high": {
              "reasoningEffort": "high",
              "textVerbosity": "low",
              "reasoningSummary": "auto",
            },
            "low": {
              "reasoningEffort": "low",
              "textVerbosity": "low",
              "reasoningSummary": "auto",
            },
          },
        },
      },
    },
  },
}

Variants

Many models support multiple variants with different configurations. InnoCode ships with built-in default variants for popular providers.

Built-in variants

These are the defaults for each provider:

Anthropic:

  • high - High thinking budget (default)
  • max - Maximum thinking budget

OpenAI:

Varies by model but roughly:

  • none - No reasoning
  • minimal - Minimal reasoning effort
  • low - Low reasoning effort
  • medium - Medium reasoning effort
  • high - High reasoning effort
  • xhigh - Extra high reasoning effort

Google:

  • low - Lower effort/token budget
  • high - Higher effort/token budget

Custom variants

You can override existing variants or add your own:

innocode.jsonc
{
  "$schema": "https://innocode.io/config.json",
  "provider": {
    "openai": {
      "models": {
        "gpt-5": {
          "variants": {
            "thinking": {
              "reasoningEffort": "high",
              "textVerbosity": "low",
            },
            "fast": {
              "disabled": true,
            },
          },
        },
      },
    },
  },
}

Cycle variants

Use the keybind variant_cycle to quickly switch between variants. Learn more.
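If you want to remap it, a config along these lines should work. This is a sketch: the keybinds section and the key shown are assumptions, so check the keybinds docs for the actual names and defaults.

innocode.jsonc
{
  "$schema": "https://innocode.io/config.json",
  // Hypothetical remap of the variant cycle action; the "keybinds" section
  // and the chosen key are assumptions, not confirmed defaults.
  "keybinds": {
    "variant_cycle": "ctrl+t",
  },
}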


Loading models

When InnoCode starts up, it checks for models in the following priority order:

  1. The --model or -m command line flag. The format is the same as in the config file: provider_id/model_id (see the example after this list).

  2. The model key in the InnoCode config.

    innocode.json
    {
      "$schema": "https://innocode.io/config.json",
      "model": "anthropic/claude-sonnet-4-20250514"
    }

    The format here is provider/model.

  3. The last used model.

  4. The first model using an internal priority.
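For example, to start a session with a specific model using the flag from step 1 (the innocode command name is assumed here; use whatever binary name your install provides):

innocode --model innogpt/gpt-4o
innocode -m innogpt/gpt-4o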