Overview

CodeBuddy and WorkBuddy are AI-powered coding assistants developed by Tencent Cloud that integrate directly into your IDE. By connecting them to FoxAPI, you can leverage a wide selection of AI models through FoxAPI’s unified API.
CodeBuddy and WorkBuddy use a models.json configuration file to define custom model providers. FoxAPI’s OpenAI-compatible endpoint works seamlessly with this configuration method.

Prerequisites

FoxAPI API Key

An active FoxAPI API key with access to the models you want to use.

CodeBuddy / WorkBuddy

The CodeBuddy or WorkBuddy extension installed in your IDE (VS Code, JetBrains, etc.).

Configuration

CodeBuddy and WorkBuddy read model definitions from a models.json file. There are two levels of configuration:
  • User-level (applies to all projects): stored in your home directory
  • Project-level (higher priority, applies to a single project): stored in the project root

Step 1: Locate the Configuration File

Choose the appropriate path based on your tool and operating system:
Level     macOS / Linux                            Windows
User      ~/.codebuddy/models.json                 C:\Users\<username>\.codebuddy\models.json
Project   <project-root>/.codebuddy/models.json    <project-root>\.codebuddy\models.json
Project-level configuration takes priority over user-level. Use project-level config when you need different models or API keys for specific projects.
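
On macOS or Linux, the user-level file can be bootstrapped from a terminal. This is a convenience sketch using the path from the table above; it leaves an existing file untouched:

```shell
# Create the user-level config directory and a starter models.json (macOS/Linux).
CONFIG_DIR="$HOME/.codebuddy"
mkdir -p "$CONFIG_DIR"
# Only seed an empty config if none exists yet.
if [ ! -f "$CONFIG_DIR/models.json" ]; then
  printf '{\n  "models": []\n}\n' > "$CONFIG_DIR/models.json"
fi
```

The next step fills in the model entries.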

Step 2: Create the models.json File

Create the models.json file at the path from the previous step. Here is a basic example with one model:
{
  "models": [
    {
      "id": "gpt-5.4",
      "name": "FoxAPI Auto (Smart Routing)",
      "vendor": "FoxAPI",
      "apiKey": "sk-your-foxapi-api-key",
      "url": "https://api.foxapi.cc/v1/chat/completions",
      "supportsToolCall": true,
      "supportsImages": true
    }
  ]
}
Replace sk-your-foxapi-api-key with your actual FoxAPI API key.
The URL must be the full chat completions endpoint: https://api.foxapi.cc/v1/chat/completions. Do not use just the base URL.
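
Before wiring the config into the IDE, you can verify that the endpoint accepts your key with a direct request. This is a sketch reusing the gpt-5.4 model ID from the example above; replace the placeholder key before running:

```shell
# Send a minimal chat completion request to FoxAPI's endpoint.
# Replace sk-your-foxapi-api-key with your real key before running.
curl -sS https://api.foxapi.cc/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-foxapi-api-key" \
  -d '{"model": "gpt-5.4", "messages": [{"role": "user", "content": "Say hello"}]}'
```

A JSON response containing a `choices` array means the key and endpoint are working; an error body (e.g., an authentication error) points at the key or URL.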

Step 3: Add More Models (Optional)

You can define multiple models in the models array. Each entry appears as a selectable model in the extension:
{
  "models": [
    {
      "id": "gpt-5.4",
      "name": "FoxAPI Auto (Smart Routing)",
      "vendor": "FoxAPI",
      "apiKey": "sk-your-foxapi-api-key",
      "url": "https://api.foxapi.cc/v1/chat/completions",
      "supportsToolCall": true,
      "supportsImages": true
    },
    {
      "id": "claude-sonnet-4-20250514",
      "name": "Claude Sonnet 4",
      "vendor": "FoxAPI",
      "apiKey": "sk-your-foxapi-api-key",
      "url": "https://api.foxapi.cc/v1/chat/completions",
      "supportsToolCall": true,
      "supportsImages": true
    },
    {
      "id": "gpt-4o",
      "name": "GPT-4o",
      "vendor": "FoxAPI",
      "apiKey": "sk-your-foxapi-api-key",
      "url": "https://api.foxapi.cc/v1/chat/completions",
      "supportsToolCall": true,
      "supportsImages": true
    },
    {
      "id": "deepseek-chat",
      "name": "DeepSeek V3",
      "vendor": "FoxAPI",
      "apiKey": "sk-your-foxapi-api-key",
      "url": "https://api.foxapi.cc/v1/chat/completions",
      "supportsToolCall": true,
      "supportsImages": false
    }
  ]
}
Refer to FoxAPI’s model list for all available model IDs.

Step 4: Select the Model in Your IDE

After saving the models.json file, the extension detects changes automatically; no restart is required. Open CodeBuddy or WorkBuddy in your IDE and select one of the FoxAPI models you configured from the model dropdown.

Configuration Reference

Each model entry in models.json supports the following fields:
Field             Type     Required  Description
id                string   Yes       The model identifier sent to the API (e.g., gpt-4o, claude-sonnet-4-20250514).
name              string   Yes       Display name shown in the model selector.
vendor            string   Yes       A label for the provider (e.g., FoxAPI).
apiKey            string   Yes       Your FoxAPI API key.
url               string   Yes       The full chat completions endpoint URL.
supportsToolCall  boolean  No        Set to true if the model supports function/tool calling.
supportsImages    boolean  No        Set to true if the model supports image inputs.
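
The required fields above can be checked mechanically. The following sketch writes a sample config to a temp file and validates it; to check a live setup, point CONFIG at your real models.json path instead:

```shell
# Sketch: verify that each models.json entry carries the required fields.
# Uses a sample file here; set CONFIG to ~/.codebuddy/models.json to check
# a real configuration.
CONFIG="$(mktemp)"
cat > "$CONFIG" <<'JSON'
{
  "models": [
    {
      "id": "gpt-4o",
      "name": "GPT-4o",
      "vendor": "FoxAPI",
      "apiKey": "sk-your-foxapi-api-key",
      "url": "https://api.foxapi.cc/v1/chat/completions"
    }
  ]
}
JSON
python3 - "$CONFIG" <<'EOF'
import json, sys

REQUIRED = {"id", "name", "vendor", "apiKey", "url"}
with open(sys.argv[1]) as f:
    data = json.load(f)
for model in data.get("models", []):
    missing = REQUIRED - model.keys()
    status = "ok" if not missing else "missing " + ", ".join(sorted(missing))
    print(f"{model.get('id', '?')}: {status}")
EOF
```

For the sample above this prints "gpt-4o: ok"; an entry with a dropped field is reported with the names of the missing keys.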

Verify the Connection

After configuring models.json:
  1. Open any code file in your IDE.
  2. Select one of your FoxAPI models from the model dropdown.
  3. Trigger the AI assistant (e.g., select code and ask for an explanation, or start an inline chat).
  4. If the response is generated successfully, the connection is working.

Troubleshooting

  • Verify your API key is correct and has sufficient balance.
  • Check that the url field is exactly https://api.foxapi.cc/v1/chat/completions.
  • Ensure the id field matches a model available on FoxAPI.
  • Validate the JSON syntax of your models.json file (no trailing commas, proper quoting). For example:
    python3 -m json.tool ~/.codebuddy/models.json
  • Confirm the models.json file is in the correct directory (see Locate the Configuration File).
  • The extension auto-detects file changes, but restart the extension or IDE if models still do not appear.
  • Try switching to a faster model (e.g., claude-sonnet-4-20250514 instead of claude-opus-4-20250514).
  • Check your network connection.
  • Ensure supportsToolCall and/or supportsImages are set to true in your model entry.
  • Not all models support these features. Check the model’s capabilities on FoxAPI’s model list.

Security Best Practices

  • Never commit models.json to version control if it contains your API key. Add .codebuddy/ and .workbuddy/ to your .gitignore file.
  • Use project-level config with caution. If working in a shared repository, prefer user-level configuration to avoid accidentally exposing your API key.
  • Monitor usage on your FoxAPI dashboard to detect any unexpected consumption.
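
The .gitignore guidance above can be applied idempotently from the repo root; this sketch appends each entry only if it is not already present:

```shell
# Add CodeBuddy/WorkBuddy config directories to .gitignore (run from the repo root).
# grep -qxF matches the exact line, so re-running never duplicates entries.
for entry in '.codebuddy/' '.workbuddy/'; do
  grep -qxF "$entry" .gitignore 2>/dev/null || echo "$entry" >> .gitignore
done
```

If a models.json with a key was already committed, removing the file in a new commit is not enough: rotate the key on your FoxAPI dashboard, since it remains in the repository history.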