Scenarios

Claude with a Model from Vertex AI

This scenario demonstrates how to configure Claude Code to use a model hosted on Google Cloud Vertex AI instead of the default Anthropic API. This is useful when you need to use Claude through your Google Cloud organization's billing or compliance setup.

Prerequisites:

  • A Google Cloud project with the Vertex AI API enabled and Claude models available
  • Google Cloud credentials configured on your host machine (via gcloud auth application-default login)
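Before going further, it can help to confirm the credentials file is actually in place. A minimal pre-flight sketch (the default path below is gcloud's standard ADC location; GOOGLE_APPLICATION_CREDENTIALS overrides it when set):

```shell
# Pre-flight: confirm Application Default Credentials are in place.
# GOOGLE_APPLICATION_CREDENTIALS, when set, overrides the default ADC path.
ADC="${GOOGLE_APPLICATION_CREDENTIALS:-$HOME/.config/gcloud/application_default_credentials.json}"
if [ -f "$ADC" ]; then
  echo "ADC found: $ADC"
else
  echo "No ADC file at $ADC; run: gcloud auth application-default login" >&2
fi
```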

Step 1: Configure Claude agent settings

The easiest way is to let kdn autoconf detect your environment and configure things automatically. Set the required Vertex AI environment variables in your shell and run:

export CLAUDE_CODE_USE_VERTEX=1
export ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project-id
export CLOUD_ML_REGION=my-region

kdn autoconf

autoconf detects all three variables plus your application default credentials file, then asks where to record the configuration:

  • Claude agent config (all Claude workspaces) — writes to ~/.kdn/config/agents.json; applies to every Claude workspace you create from now on
  • Local (.kaiden/workspace.json) — applies only to the current project

Pick the agent config option to configure all future Claude workspaces at once. To skip the interactive prompt and apply immediately to the agent config, pass --yes:

CLAUDE_CODE_USE_VERTEX=1 ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project-id CLOUD_ML_REGION=my-region \
  kdn autoconf --yes

Alternative: configure manually

If the environment variables are not present in your shell, create or edit ~/.kdn/config/agents.json directly:

{
  "claude": {
    "environment": [
      {
        "name": "CLAUDE_CODE_USE_VERTEX",
        "value": "1"
      },
      {
        "name": "ANTHROPIC_VERTEX_PROJECT_ID",
        "value": "my-gcp-project-id"
      },
      {
        "name": "CLOUD_ML_REGION",
        "value": "my-region"
      }
    ],
    "mounts": [
      {
        "host": "$HOME/.config/gcloud/application_default_credentials.json",
        "target": "$HOME/.config/gcloud/application_default_credentials.json",
        "ro": true
      }
    ]
  }
}

Fields:

  • CLAUDE_CODE_USE_VERTEX — set to 1 to instruct Claude Code to use Vertex AI instead of the Anthropic API
  • ANTHROPIC_VERTEX_PROJECT_ID — your Google Cloud project ID where Vertex AI is configured
  • CLOUD_ML_REGION — the region where Claude is available on Vertex AI
  • The ADC file mounted read-only — provides the workspace access to your application default credentials
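After editing by hand, you can sanity-check the result. A sketch assuming jq is installed (the query matches the example shape above):

```shell
# Check that the manual edit left valid JSON with the Vertex AI flag set.
cfg="$HOME/.kdn/config/agents.json"
if [ -f "$cfg" ]; then
  jq -e '.claude.environment[] | select(.name == "CLAUDE_CODE_USE_VERTEX") | .value == "1"' "$cfg" \
    && echo "Vertex AI flag is set"
else
  echo "no agent config at $cfg yet"
fi
```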

Step 2: Register and start the workspace

# Register a workspace with the Podman runtime and Claude agent
kdn init /path/to/project --runtime podman --agent claude

# Start the workspace (using name or ID)
kdn start my-project

# Connect to the workspace — Claude Code will use Vertex AI automatically
kdn terminal my-project

When Claude Code starts, it detects ANTHROPIC_VERTEX_PROJECT_ID and CLOUD_ML_REGION and routes all requests to Vertex AI using the mounted application default credentials.

Sharing local Claude settings (optional)

To reuse your host Claude Code settings (preferences, custom instructions, etc.) inside the workspace, add ~/.claude and ~/.claude.json to the mounts in ~/.kdn/config/agents.json:

{
  "claude": {
    "mounts": [
      {
        "host": "$HOME/.config/gcloud/application_default_credentials.json",
        "target": "$HOME/.config/gcloud/application_default_credentials.json",
        "ro": true
      },
      {"host": "$HOME/.claude", "target": "$HOME/.claude"},
      {"host": "$HOME/.claude.json", "target": "$HOME/.claude.json"}
    ]
  }
}

~/.claude contains your Claude Code configuration directory (skills, settings) and ~/.claude.json stores your account and preferences. These are mounted read-write so that changes made inside the workspace (e.g., updated preferences) are persisted back to your host.

Notes:

  • Run gcloud auth application-default login on your host machine before starting the workspace to ensure valid credentials are available
  • If GOOGLE_APPLICATION_CREDENTIALS is set in your shell, kdn autoconf uses the file it points to instead of the default ADC path — no extra steps needed
  • kdn automatically intercepts the credentials file mount: a placeholder file is written into the container and OneCLI injects the real credentials transparently at request time — the actual credential file never touches the container filesystem
  • No ANTHROPIC_API_KEY is needed when using Vertex AI — credentials are provided via the mounted credentials file
  • When network.mode is "deny", the Google OAuth and Vertex AI endpoints (oauth2.googleapis.com, aiplatform.googleapis.com) are automatically added to the allow-list — no explicit hosts entry is needed
  • To pin a specific Claude model, use the --model flag during init (e.g., --model claude-sonnet-4-20250514), which takes precedence over any model in the default settings, or add an ANTHROPIC_MODEL environment variable (e.g., "claude-opus-4-5")
  • If you run kdn autoconf again after Vertex AI is already configured, it reports the existing configuration location and exits without making changes

Starting Claude with Default Settings

This scenario demonstrates how to pre-configure Claude Code's settings so that when it starts inside a workspace, it skips the interactive onboarding flow and uses your preferred defaults. kdn automatically handles the onboarding flags, and you can optionally customize other settings like theme preferences.

Automatic Onboarding Skip

When you register a workspace with the Claude agent, kdn automatically:

  • Sets hasCompletedOnboarding: true to skip the first-run wizard
  • Sets hasTrustDialogAccepted: true for the workspace sources directory (the exact path is determined by the runtime)

This happens automatically for every Claude workspace — no manual configuration required.

Optional: Customize Theme and Other Settings

If you want to customize Claude's theme or other preferences, create default settings:

Step 1: Create the agent settings directory

mkdir -p ~/.kdn/config/claude

Step 2: Write the default Claude settings file

cat > ~/.kdn/config/claude/.claude.json << 'EOF'
{
  "theme": "dark-daltonized"
}
EOF

Fields:

  • theme - The UI theme for Claude Code (e.g., "dark", "light", "dark-daltonized")

You don't need to set hasCompletedOnboarding or hasTrustDialogAccepted — kdn adds these automatically when creating the workspace.
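If the settings file already exists, overwriting it with a heredoc discards any other keys. A non-destructive alternative, assuming jq is installed:

```shell
# Merge the theme key into the default settings file without discarding
# other keys (creates the file first if it does not exist yet).
f="$HOME/.kdn/config/claude/.claude.json"
mkdir -p "$(dirname "$f")"
[ -f "$f" ] || echo '{}' > "$f"
tmp=$(mktemp)
jq '. + {"theme": "dark-daltonized"}' "$f" > "$tmp" && mv "$tmp" "$f"
```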

Step 3: Register and start the workspace

# Register a workspace — the settings file is embedded in the container image
kdn init /path/to/project --runtime podman --agent claude

# Start the workspace (using name or ID)
kdn start my-project

# Connect — Claude Code starts directly without onboarding
kdn terminal my-project

When init runs, kdn:

  1. Reads all files from ~/.kdn/config/claude/ (e.g., your theme preferences)
  2. Automatically adds hasCompletedOnboarding: true and marks the workspace sources directory as trusted (the path is determined by the runtime)
  3. Copies the final merged settings into the container image at /home/agent/.claude.json

Claude Code finds this file on startup and skips onboarding.

Notes:

  • Onboarding is skipped automatically — even if you don't create any settings files, kdn ensures Claude starts without prompts
  • The settings are baked into the container image at init time, not mounted at runtime — changes to the files on the host require re-registering the workspace to take effect
  • Any file placed under ~/.kdn/config/claude/ is copied into the container home directory, preserving the directory structure (e.g., ~/.kdn/config/claude/.some-tool/config becomes /home/agent/.some-tool/config inside the container)
  • This approach keeps your workspace self-contained — other developers using the same project are not affected, and your local ~/.claude directory is not exposed inside the container
  • To apply changes to the settings, remove and re-register the workspace: kdn remove <workspace-id> then kdn init again

Using Goose Agent with a Model from Vertex AI

This scenario demonstrates how to configure the Goose agent in a kdn workspace using Vertex AI as the backend, covering credential injection, sharing your local gcloud configuration, and pre-configuring the default model.

Authenticating with Vertex AI

Goose can use Google Cloud Vertex AI as its backend. Authentication relies on Application Default Credentials (ADC) provided by the gcloud CLI. Mount your local ~/.config/gcloud directory to make your host credentials available inside the workspace, and set the GCP_PROJECT_ID, GCP_LOCATION, and GOOSE_PROVIDER environment variables to tell Goose which project and region to use.

Create or edit ~/.kdn/config/agents.json:

{
  "goose": {
    "environment": [
      {
        "name": "GOOSE_PROVIDER",
        "value": "gcp_vertex_ai"
      },
      {
        "name": "GCP_PROJECT_ID",
        "value": "my-gcp-project"
      },
      {
        "name": "GCP_LOCATION",
        "value": "my-region"
      }
    ],
    "mounts": [
      {"host": "$HOME/.config/gcloud", "target": "$HOME/.config/gcloud", "ro": true}
    ]
  }
}

The ~/.config/gcloud directory contains your Application Default Credentials and active account configuration. kdn automatically intercepts this mount: a placeholder credential file is written into the container and OneCLI injects the real Application Default Credentials transparently at request time — the actual credential file never touches the container filesystem.

Then register and start the workspace:

# Register a workspace with the Podman runtime and Goose agent
kdn init /path/to/project --runtime podman --agent goose

# Start the workspace
kdn start my-project

# Connect — Goose starts with Vertex AI configured
kdn terminal my-project

Sharing Local Goose Settings

To reuse your host Goose settings (model preferences, provider configuration, etc.) inside the workspace, mount the ~/.config/goose directory.

Edit ~/.kdn/config/agents.json to add the mount alongside the Vertex AI configuration:

{
  "goose": {
    "environment": [
      {
        "name": "GOOSE_PROVIDER",
        "value": "gcp_vertex_ai"
      },
      {
        "name": "GCP_PROJECT_ID",
        "value": "my-gcp-project"
      },
      {
        "name": "GCP_LOCATION",
        "value": "my-region"
      }
    ],
    "mounts": [
      {"host": "$HOME/.config/gcloud", "target": "$HOME/.config/gcloud", "ro": true},
      {"host": "$HOME/.config/goose", "target": "$HOME/.config/goose"}
    ]
  }
}

The ~/.config/goose directory contains your Goose configuration (settings, model preferences, etc.). It is mounted read-write so that changes made inside the workspace are persisted back to your host.

Using Default Settings

If you want to pre-configure Goose with default settings without exposing your local ~/.config/goose directory inside the container, create default settings files that are baked into the container image at workspace registration time. This is an alternative to mounting your local Goose settings — use one approach or the other, not both.

Automatic Onboarding Skip

When you register a workspace with the Goose agent, kdn automatically sets GOOSE_TELEMETRY_ENABLED to false in the Goose config file if it is not already defined, so Goose skips its telemetry prompt on first launch.

Step 1: Create the agent settings directory

mkdir -p ~/.kdn/config/goose/.config/goose

Step 2: Write the default Goose settings file

As an example, you can configure the model and enable telemetry:

cat > ~/.kdn/config/goose/.config/goose/config.yaml << 'EOF'
GOOSE_MODEL: "claude-sonnet-4-6"
GOOSE_TELEMETRY_ENABLED: true
EOF

Fields:

  • GOOSE_MODEL - The model identifier Goose uses for its AI interactions. Alternatively, use the --model flag during init to set this (the flag takes precedence over this setting)
  • GOOSE_PROVIDER - The LLM provider Goose uses (e.g., anthropic, openai, google). When using --model with the provider::model format, kdn sets this automatically: gemini is mapped to google, and all other provider names are passed through unchanged. If no provider is specified, it defaults to openai.
  • GOOSE_TELEMETRY_ENABLED - Whether Goose sends usage telemetry; set to true to opt in, or omit to have kdn default it to false
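The provider mapping described above can be sketched as a small shell function (an illustration of the stated rules, not kdn's actual implementation):

```shell
# Map a --model spec to a GOOSE_PROVIDER value per the rules above:
# no "provider::" prefix -> openai; gemini -> google; anything else unchanged.
provider_for() {
  case "$1" in
    *::*) p=${1%%::*} ;;
    *)    p=openai ;;      # bare model id, no provider given
  esac
  if [ "$p" = gemini ]; then p=google; fi
  echo "$p"
}

provider_for "gemini::gemini-2.0-flash"   # google
provider_for "anthropic::claude-sonnet"   # anthropic
provider_for "some-model"                 # openai
```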

Step 3: Register and start the workspace

# Register a workspace — the settings file is embedded in the container image
kdn init /path/to/project --runtime podman --agent goose

# Start the workspace
kdn start my-project

# Connect — Goose starts with the configured provider and model
kdn terminal my-project

When init runs, kdn:

  1. Reads all files from ~/.kdn/config/goose/ (e.g., your provider and model settings)
  2. Automatically sets GOOSE_TELEMETRY_ENABLED: false in .config/goose/config.yaml if the key is not already defined
  3. Copies the final settings into the container image at /home/agent/.config/goose/config.yaml

Goose finds this file on startup and uses the pre-configured settings without prompting.

Notes:

  • Telemetry is disabled automatically — even if you don't create any settings files, kdn ensures Goose starts without the telemetry prompt
  • If you prefer to enable telemetry, set GOOSE_TELEMETRY_ENABLED: true in ~/.kdn/config/goose/.config/goose/config.yaml
  • The settings are baked into the container image at init time, not mounted at runtime — changes to the files on the host require re-registering the workspace to take effect
  • Any file placed under ~/.kdn/config/goose/ is copied into the container home directory, preserving the directory structure (e.g., ~/.kdn/config/goose/.config/goose/config.yaml becomes /home/agent/.config/goose/config.yaml inside the container)
  • This approach keeps your workspace self-contained — other developers using the same project are not affected, and your local ~/.config/goose directory is not exposed inside the container
  • To apply changes to the settings, remove and re-register the workspace: kdn remove <workspace-id> then kdn init again

Using Cursor CLI Agent

This scenario demonstrates how to configure the Cursor agent in a kdn workspace, covering API key injection, sharing your local Cursor settings, and pre-configuring the default model.

Defining the Cursor API Key via a Secret

Cursor requires a CURSOR_API_KEY environment variable to authenticate with the Cursor service. Rather than embedding the key as plain text, use the secret mechanism to keep credentials out of your configuration files.

Step 1: Create the secret

For the Podman runtime, create the secret once on your host machine using podman secret create:

echo "$CURSOR_API_KEY" | podman secret create cursor-api-key -

Step 2: Reference the secret in agent configuration

Create or edit ~/.kdn/config/agents.json to inject the secret as an environment variable for the cursor agent:

{
  "cursor": {
    "environment": [
      {
        "name": "CURSOR_API_KEY",
        "secret": "cursor-api-key"
      }
    ]
  }
}

Step 3: Register and start the workspace

# Register a workspace with the Podman runtime and Cursor agent
kdn init /path/to/project --runtime podman --agent cursor

# Start the workspace
kdn start my-project

# Connect — Cursor starts with the API key available
kdn terminal my-project

The secret name (cursor-api-key) must match the secret field value in your configuration. At workspace creation time, kdn passes the secret to Podman, which injects it as the CURSOR_API_KEY environment variable inside the container.

Sharing Local Cursor Settings

To reuse your host Cursor settings (preferences, keybindings, extensions configuration, etc.) inside the workspace, mount the ~/.cursor directory.

Edit ~/.kdn/config/agents.json to add the mount:

{
  "cursor": {
    "environment": [
      {
        "name": "CURSOR_API_KEY",
        "secret": "cursor-api-key"
      }
    ],
    "mounts": [
      {"host": "$HOME/.cursor", "target": "$HOME/.cursor"}
    ]
  }
}

The ~/.cursor directory contains your Cursor configuration (settings, model preferences, etc.). It is mounted read-write so that changes made inside the workspace are persisted back to your host.

Using Default Settings

If you want to pre-configure Cursor with default settings without exposing your local ~/.cursor directory inside the container, create default settings files that are baked into the container image at workspace registration time. This is an alternative to mounting your local Cursor settings — use one approach or the other, not both.

Automatic Onboarding Skip

When you register a workspace with the Cursor agent, kdn automatically creates a .workspace-trusted file in the Cursor projects directory for the workspace sources path, so Cursor skips its workspace trust dialog on first launch.

Step 1: Configure the agent environment

Create or edit ~/.kdn/config/agents.json to inject the API key. No mount is needed since settings are baked in:

{
  "cursor": {
    "environment": [
      {
        "name": "CURSOR_API_KEY",
        "secret": "cursor-api-key"
      }
    ]
  }
}

Step 2: Create the agent settings directory

mkdir -p ~/.kdn/config/cursor/.cursor

Step 3: Write the default Cursor settings file (optional)

You can optionally pre-configure Cursor with additional settings by creating a cli-config.json file:

cat > ~/.kdn/config/cursor/.cursor/cli-config.json << 'EOF'
{
  "model": {
    "modelId": "my-preferred-model",
    "displayModelId": "my-preferred-model",
    "displayName": "My Preferred Model",
    "displayNameShort": "My Model",
    "maxMode": false
  },
  "hasChangedDefaultModel": true
}
EOF

Fields:

  • model.modelId - The model identifier used internally by Cursor
  • model.displayName / model.displayNameShort - Human-readable model names shown in the UI
  • model.maxMode - Whether to enable max mode for this model
  • hasChangedDefaultModel - Tells Cursor that the model selection is intentional and should not prompt the user to choose a model

Note: Using the --model flag during init is the preferred way to configure the model, as it automatically sets all model fields correctly.

Step 4: Register and start the workspace

# Register a workspace with a specific model using the --model flag (recommended)
kdn init /path/to/project --runtime podman --agent cursor --model my-model-id

# Or register without --model to use settings from cli-config.json
kdn init /path/to/project --runtime podman --agent cursor

# Start the workspace
kdn start my-project

# Connect — Cursor starts with the configured model
kdn terminal my-project

When init runs, kdn:

  1. Reads all files from ~/.kdn/config/cursor/ (e.g., your settings)
  2. If --model is specified, updates cli-config.json with the model configuration (takes precedence over any existing model in settings files)
  3. Automatically creates the workspace trust file so Cursor skips its trust dialog
  4. Copies the final settings into the container image at /home/agent/.cursor/cli-config.json

Cursor finds this file on startup and uses the pre-configured model without prompting.

Notes:

  • Model configuration: Use the --model flag during init to set the model (e.g., --model my-model-id). This takes precedence over any model defined in settings files
  • The settings are baked into the container image at init time, not mounted at runtime — changes to the files on the host require re-registering the workspace to take effect
  • Any file placed under ~/.kdn/config/cursor/ is copied into the container home directory, preserving the directory structure (e.g., ~/.kdn/config/cursor/.cursor/cli-config.json becomes /home/agent/.cursor/cli-config.json inside the container)
  • To apply changes to the settings, remove and re-register the workspace: kdn remove <workspace-id> then kdn init again
  • This approach keeps your workspace self-contained — other developers using the same project are not affected, and your local ~/.cursor directory is not exposed inside the container
  • Do not combine this approach with the ~/.cursor mount from the previous section — the mounted directory would override the baked-in defaults at runtime

Using OpenCode with a Local Model

OpenCode supports using locally running models via providers like Ollama or RamaLama. This scenario demonstrates how to configure a kdn workspace to use a local model running on your host machine.

Prerequisites:

  • A local model server running on your host (e.g., Ollama or RamaLama)
  • The model you want to use downloaded to your local server

Step 1: Start a local model server on your host

For example, with Ollama:

# Pull a model
ollama pull gemma3:12b

# Ollama runs as a service on port 11434 by default

Or with RamaLama:

# Serve a model (runs on port 8080 by default)
ramalama serve granite3.3:8b

Step 2: Register the workspace with a local model

Use the --model flag with the provider::model format. kdn knows the default endpoints for ollama and ramalama and automatically configures them to be reachable from inside the container:

# Use Ollama with a specific model (default endpoint: host.containers.internal:11434/v1 for Podman)
kdn init /path/to/project --runtime podman --agent opencode --model ollama::gemma3:12b

# Use RamaLama with a specific model (default endpoint: host.containers.internal:8080/v1 for Podman)
kdn init /path/to/project --runtime podman --agent opencode --model ramalama::granite3.3:8b

Using a custom endpoint

If your model server runs on a non-default port or a remote host, specify the full endpoint as the third component:

# Custom port on localhost (localhost is auto-converted to host.containers.internal for Podman)
kdn init /path/to/project --runtime podman --agent opencode --model ollama::gemma3:12b::http://localhost:8080/v1

# Remote host
kdn init /path/to/project --runtime podman --agent opencode --model ollama::gemma3:12b::http://192.168.1.50:11434/v1

When using the Podman runtime, localhost aliases (localhost, 127.0.0.1, 0.0.0.0, ::1) in the base URL are automatically converted to host.containers.internal so the model server is reachable from inside the container.
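That conversion can be sketched as follows (illustrative only, not kdn's implementation):

```shell
# Rewrite localhost aliases in a base URL to host.containers.internal (Podman).
to_container_url() {
  printf '%s\n' "$1" |
    sed -E 's#^(https?://)(localhost|127\.0\.0\.1|0\.0\.0\.0|\[::1\])([:/])#\1host.containers.internal\3#'
}

to_container_url "http://localhost:8080/v1"
# http://host.containers.internal:8080/v1
```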

Using a custom OpenAI-compatible provider

For any OpenAI-compatible model server not in the known provider list, use the three-part format with an explicit base URL:

kdn init /path/to/project --runtime podman --agent opencode --model myprovider::mymodel::http://localhost:9090/v1
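The three-part spec splits on the :: separator, roughly as follows (a sketch; it assumes the base URL itself contains no ::, which holds for the examples above):

```shell
# Split provider::model[::baseURL]; model ids may contain single colons.
spec="myprovider::mymodel::http://localhost:9090/v1"
provider=${spec%%::*}              # myprovider
rest=${spec#*::}
model=${rest%%::*}                 # mymodel
case "$rest" in
  *::*) base_url=${rest#*::} ;;    # explicit third part
  *)    base_url="" ;;             # two-part form: provider default endpoint
esac
```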

What kdn configures

When you specify a local model provider, kdn writes an opencode.json configuration file baked into the container image. For ollama::gemma3:12b with the Podman runtime, it produces:

{
  "model": "ollama/gemma3:12b",
  "provider": {
    "ollama": {
      "name": "ollama",
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://host.containers.internal:11434/v1"
      },
      "models": {
        "gemma3:12b": {
          "_launch": true,
          "name": "gemma3:12b"
        }
      }
    }
  }
}

Step 3: Start and connect to the workspace

# Start the workspace
kdn start my-project

# Connect — OpenCode starts using the local model automatically
kdn terminal my-project

Notes:

  • The model server must be running on your host before connecting to the workspace
  • The provider::model format stores the model as provider/model in the configuration (e.g., ollama/gemma3:12b)
  • Known providers (ollama, ramalama) have preconfigured default base URLs; for other OpenAI-compatible providers, use the full provider::model::baseURL format
  • When using the Podman runtime, the default base URLs for known providers point to host.containers.internal, which is the standard way to reach the host from a Podman container
  • The settings are baked into the container image at init time — changes require re-registering the workspace: kdn remove <workspace-id> then kdn init again

Auto-configuring Secrets from the Environment

kdn autoconf scans your shell environment for known API keys and tokens, creates the corresponding secrets in the local store, and records them in the configuration target you choose (global, project-specific, or local .kaiden/workspace.json).

# Detect what is in the environment and apply interactively
kdn autoconf

# Apply immediately without prompts (saves to global config)
kdn autoconf --yes

# Pass secrets inline and apply immediately
GH_TOKEN="$(gh auth token)" kdn autoconf --yes

With --yes, every detected secret is created without prompts and recorded in the global config ("" key in ~/.kdn/config/projects.json), making it available across all projects.

When run interactively, autoconf asks one question per detected secret:

  1. Confirm creation — create the secret in the local store?
  2. Choose target — where to record the reference:
     • Global — available across all projects (~/.kdn/config/projects.json global key)
     • Project — scoped to the current directory's git project
     • Local — written to .kaiden/workspace.json in the current directory

Secrets that are already stored and referenced in any config source are reported as already configured and skipped.

Auto-mounting Home Config Files

kdn autoconf also scans your home directory for known config files and, when found, offers to mount them read-only into workspace containers. This gives agents access to your local tool settings (git identity, editor preferences, etc.) without any manual configuration.

# Detect config files and apply interactively
kdn autoconf

# Apply immediately without prompts (mounts to global config)
kdn autoconf --yes

When a matching file is found, autoconf follows the same flow as for secrets:

  1. Confirm mounting — add the read-only bind mount?
  2. Choose target — where to record it:
     • Global — available across all projects (~/.kdn/config/projects.json global key)
     • Project — scoped to the current directory's git project
     • Local — written to .kaiden/workspace.json in the current directory

Files already mounted in any config source are reported as already configured and skipped.

Example: $HOME/.gitconfig

If ~/.gitconfig exists on your machine, kdn autoconf detects it and offers to mount it read-only at $HOME/.gitconfig inside workspace containers. This makes your git identity (name, email, aliases) available to the agent without embedding it in any config file.

The resulting entry in ~/.kdn/config/projects.json looks like:

{
  "": {
    "mounts": [
      {"host": "$HOME/.gitconfig", "target": "$HOME/.gitconfig", "ro": true}
    ]
  }
}

The $HOME variable is resolved at workspace-start time, keeping the config portable across machines.

Auto-detecting Languages and Ports

kdn autoconf also uses alizer to analyse the current directory and detect programming languages and exposed TCP ports. For each detected language it offers to add the corresponding devcontainer feature to .kaiden/workspace.json; for detected ports it offers to add port-forwarding entries.

# Detect languages and ports and apply interactively
kdn autoconf

# Apply immediately without prompts
kdn autoconf --yes

Supported languages and their devcontainer features:

  Language     Feature
  Go           ghcr.io/devcontainers/features/go:1
  Python       ghcr.io/devcontainers/features/python:1
  JavaScript   ghcr.io/devcontainers/features/node:2
  TypeScript   ghcr.io/devcontainers/features/node:2
  Java         ghcr.io/devcontainers/features/java:1

JavaScript and TypeScript both map to the same Node.js feature and are presented as a single prompt.

When run interactively, autoconf asks one question per detected feature and one question for all detected ports together:

  1. Confirm feature — add the devcontainer feature for the detected language?
  2. Confirm ports — add all detected port numbers to the local workspace config?

With --yes, all detected features and ports are added without prompts.

Features and ports that are already present in .kaiden/workspace.json are reported as already configured and skipped. Language features and port-forwarding entries are written to the local workspace config (.kaiden/workspace.json).

Example: Go project

Running kdn autoconf in a Go repository adds the Go devcontainer feature to .kaiden/workspace.json:

{
  "features": {
    "ghcr.io/devcontainers/features/go:1": {}
  }
}

When the workspace is next registered with kdn init, kdn downloads and installs the feature into the container image, making the Go toolchain available to the agent.

Notes:

  • Port detection is based on source-code analysis (e.g., listening calls in server code), not on running processes
  • The features and ports fields are merged with any values already in .kaiden/workspace.json; no existing configuration is removed
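If you want to pre-seed a feature by hand while preserving that merge behavior, a sketch assuming jq is installed:

```shell
# Add the Go feature to .kaiden/workspace.json only if not already present,
# leaving all other keys untouched (mirrors the merge rule described above).
cfg=".kaiden/workspace.json"
mkdir -p .kaiden
[ -f "$cfg" ] || echo '{}' > "$cfg"
tmp=$(mktemp)
jq '.features["ghcr.io/devcontainers/features/go:1"] //= {}' "$cfg" > "$tmp" && mv "$tmp" "$cfg"
```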

Sharing a GitHub Token

This scenario demonstrates how to make a GitHub token available inside workspaces using the multi-level configuration system — either globally for all projects or scoped to a specific project.

kdn has a built-in github secret service. The token is stored once with kdn secret create and referenced by name in any configuration level. At workspace creation time, kdn provisions the token into OneCLI, which injects it as a Bearer Authorization header for requests to api.github.com. It also sets GH_TOKEN and GITHUB_TOKEN as placeholder environment variables so that gh CLI and other GitHub-aware tools detect that credentials are configured.

Step 1: Create the secret

kdn secret create my-github-token --type github --value ghp_mytoken

The token is stored securely in the system keychain. The config files only hold the name.

For all projects

Edit ~/.kdn/config/projects.json and add the secret name and your git configuration under the global "" key:

{
  "": {
    "secrets": ["my-github-token"],
    "mounts": [
      {"host": "$HOME/.gitconfig", "target": "$HOME/.gitconfig", "ro": true}
    ]
  }
}

The $HOME/.gitconfig mount makes your git identity (name, email, aliases, etc.) available to git commands run by the agent.

For a specific project

Use the project identifier as the key instead. The identifier is the git remote URL (without .git) as detected by kdn during init:

{
  "https://github.com/my-org/my-repo/": {
    "secrets": ["my-github-token"]
  }
}

This injects the token only when working on workspaces that belong to https://github.com/my-org/my-repo/, leaving other projects unaffected.

Both at once

If you need different tokens for different projects, create a secret for each and reference them per entry:

kdn secret create my-github-token-default --type github --value ghp_default
kdn secret create my-github-token-private --type github --value ghp_private

Then reference each under its own key in ~/.kdn/config/projects.json:

{
  "": {
    "secrets": ["my-github-token-default"]
  },
  "https://github.com/my-org/my-private-repo/": {
    "secrets": ["my-github-token-private"]
  }
}

Notes:

  • The token value never appears in configuration files — only the secret name does
  • gh CLI and git will see GH_TOKEN/GITHUB_TOKEN set to a placeholder value, signalling that credentials are available; OneCLI injects the real token as a Bearer header on actual requests to api.github.com
  • The project identifier used as the key must match what kdn detected during init — run kdn list -o json to see the project field for each registered workspace
  • Configuration changes in projects.json take effect the next time you run kdn init for that workspace; already-registered workspaces need to be removed and re-registered

Connecting to an OpenShift Cluster

This scenario demonstrates how to connect to an OpenShift cluster from inside a workspace. kdn automatically intercepts the kubeconfig mount: the real token is replaced with a placeholder inside the container and OneCLI injects it transparently as a Bearer Authorization header on requests to the cluster API server.

Prerequisites:

  • The oc CLI installed on your host machine
  • An OpenShift cluster reachable from your host, with a valid login session (oc login …)
  • Token-based authentication in the current kubeconfig context (not client-certificate auth)

Step 1: Declare the kubeconfig mount

Add the following to your .kaiden/workspace.json:

{
  "mounts": [
    {
      "host": "$HOME/.kube/config",
      "target": "$HOME/.kube/config"
    }
  ]
}

You can also mount the entire .kube directory if you prefer:

{
  "mounts": [
    {
      "host": "$HOME/.kube",
      "target": "$HOME/.kube"
    }
  ]
}

Step 2: Register and start the workspace

# Register a workspace
kdn init /path/to/project --runtime podman --agent claude

# Start the workspace
kdn start my-project

# Connect — oc and kubectl commands reach the cluster via OneCLI
kdn terminal my-project

How it works:

At workspace creation time, kdn reads the current context from your ~/.kube/config, checks that it uses token-based auth, and:

  1. Writes a pruned kubeconfig inside the container — containing only the current context, its cluster (server URL and CA certificate), and the current user — with the real token replaced by a placeholder
  2. Registers a OneCLI secret that injects Authorization: Bearer <real-token> on every outbound HTTPS request to the cluster API server

The real token never appears in the container filesystem. oc and kubectl work transparently because OneCLI intercepts their requests and injects the real header at the network level.
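For illustration, the pruned kubeconfig written into the container might look like the following. The field layout is the standard kubeconfig schema; the server URL, names, and placeholder string are made-up examples, not kdn's actual output:

```yaml
apiVersion: v1
kind: Config
current-context: my-cluster          # only the current context survives pruning
contexts:
- name: my-cluster
  context:
    cluster: my-cluster
    user: my-user
clusters:
- name: my-cluster
  cluster:
    server: https://api.my-cluster.example.com:6443
    certificate-authority-data: <base64 CA certificate>
users:
- name: my-user
  user:
    token: placeholder-token         # real token injected by OneCLI at the network level
```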

Notes:

  • Automatic interception only applies when the mount target is $HOME/.kube/config or $HOME/.kube, and the current context uses token-based auth. Client-certificate contexts are not intercepted.
  • When network.mode is "deny", the cluster API server hostname is automatically added to the allow-list — no explicit hosts entry is needed.
  • If your token expires, re-run oc login on the host and recreate the workspace (kdn remove + kdn init) so kdn picks up the new token.
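Whether a context qualifies for interception can be checked up front. A rough sketch in plain shell, run here against a sample file; on a real host you would inspect ~/.kube/config (for example via kubectl config view) instead:

```shell
# Sketch: decide whether a kubeconfig uses token-based auth.
# A sample user entry is written to a temp file; substitute your
# real ~/.kube/config on an actual host.
kubeconfig=$(mktemp)
cat > "$kubeconfig" <<'EOF'
users:
- name: my-user
  user:
    token: sha256~REDACTED
EOF

if grep -Eq '^[[:space:]]*token:' "$kubeconfig"; then
  auth=token                # interception applies
else
  auth=client-certificate   # not intercepted
fi
echo "$auth"
```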

Working with Git Worktrees

This scenario demonstrates how to run multiple agents in parallel, each working on a different branch of the same repository. Git worktrees allow each branch to live in its own directory, so each agent gets its own isolated workspace.

Step 1: Clone the repository

git clone https://github.com/my-org/my-repo.git /path/to/my-project/main

Step 2: Create a worktree for each feature branch

cd /path/to/my-project/main

git worktree add ../feature-a feature-a
git worktree add ../feature-b feature-b

This results in the following layout:

/path/to/my-project/
├── main/       ← main branch (original clone)
├── feature-a/  ← feature-a branch (worktree)
└── feature-b/  ← feature-b branch (worktree)

Step 3: Configure the main branch mount in your local project config

If you want the agents to have access to the main branch (e.g., to compare changes), add the mount in ~/.kdn/config/projects.json under the project identifier. This keeps the configuration on your machine only — not all developers of the project may use worktrees, so it does not belong in the repository's .kaiden/workspace.json.

{
  "https://github.com/my-org/my-repo/": {
    "mounts": [
      {"host": "$SOURCES/../main", "target": "$SOURCES/../main"}
    ]
  }
}

$SOURCES expands to the workspace sources directory (e.g., /path/to/my-project/feature-a), so $SOURCES/../main resolves to /path/to/my-project/main on both the host and inside the container.
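The expansion can be reproduced in plain shell (kdn performs it internally; the paths below are the example layout from Step 2):

```shell
# Sketch: how $SOURCES/../main resolves for the feature-a workspace.
SOURCES=/path/to/my-project/feature-a
# Collapsing the ".." component yields the sibling main checkout:
resolved="${SOURCES%/*}/main"
echo "$resolved"   # /path/to/my-project/main
```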

Step 4: Register a workspace for each worktree

kdn init /path/to/my-project/feature-a --runtime podman --agent claude
kdn init /path/to/my-project/feature-b --runtime podman --agent claude

Step 5: Start and connect to each workspace independently

# Start both workspaces (using names or IDs)
kdn start feature-a
kdn start feature-b

# Connect to each agent in separate terminals
kdn terminal feature-a
kdn terminal feature-b

Each agent runs independently in its own container, operating on its own branch without interfering with the other.

Notes:

  • All worktrees share the same underlying .git directory, so each agent sees the full repository history while its git commands stay scoped to that worktree's branch
  • Workspaces for different worktrees of the same repository share the same project identifier (derived from the git remote URL), so the mount defined in projects.json automatically applies to all of them
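The layout from Steps 1-2 can be scripted for any number of branches. A sketch using a throwaway repository (git only; in a real setup the kdn init and kdn start calls from Steps 4-5 would follow inside the loop):

```shell
# Sketch: build the main + worktree layout in a temporary directory.
root=$(mktemp -d)
git init -q "$root/main"
cd "$root/main"
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m init

for branch in feature-a feature-b; do
  git branch "$branch"
  git worktree add --quiet "../$branch" "$branch"
  # kdn init "$root/$branch" --runtime podman --agent claude   (real setup)
done

ls "$root"
```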

Managing Workspaces from a UI or Programmatically

This scenario demonstrates how to manage workspaces programmatically using JSON output, which is ideal for UIs, scripts, or automation tools. All commands support the --output json (or -o json) flag for machine-readable output.

Step 1: Check existing workspaces

$ kdn workspace list -o json
{
  "items": []
}

Exit code: 0 (success, but no workspaces registered)

Step 2: Register a new workspace

$ kdn init /path/to/project --runtime podman --agent claude -o json
{
  "id": "2c5f16046476be368fcada501ac6cdc6bbd34ea80eb9ceb635530c0af64681ea"
}

Exit code: 0 (success)

Step 3: Register with verbose output to get full details

$ kdn init /path/to/another-project --runtime podman --agent claude --model claude-sonnet-4-20250514 -o json -v
{
  "id": "f6e5d4c3b2a1098765432109876543210987654321098765432109876543210a",
  "name": "another-project",
  "agent": "claude",
  "model": "claude-sonnet-4-20250514",
  "project": "/absolute/path/to/another-project",
  "state": "stopped",
  "paths": {
    "source": "/absolute/path/to/another-project",
    "configuration": "/absolute/path/to/another-project/.kaiden"
  },
  "timestamps": {
    "created": 1752912000000
  }
}

Exit code: 0 (success)

Step 3a: Register and start immediately with auto-start flag

$ kdn init /path/to/third-project --runtime podman --agent claude -o json --start
{
  "id": "3c4d5e6f7a8b9098765432109876543210987654321098765432109876543210b"
}

Exit code: 0 (success, workspace is running)

Step 4: List all workspaces

$ kdn workspace list -o json
{
  "items": [
    {
      "id": "2c5f16046476be368fcada501ac6cdc6bbd34ea80eb9ceb635530c0af64681ea",
      "name": "project",
      "agent": "claude",
      "project": "/absolute/path/to/project",
      "state": "running",
      "paths": {
        "source": "/absolute/path/to/project",
        "configuration": "/absolute/path/to/project/.kaiden"
      },
      "timestamps": {
        "created": 1752912000000,
        "started": 1752912300000
      }
    },
    {
      "id": "f6e5d4c3b2a1098765432109876543210987654321098765432109876543210a",
      "name": "another-project",
      "agent": "claude",
      "model": "claude-sonnet-4-20250514",
      "project": "/absolute/path/to/another-project",
      "state": "stopped",
      "paths": {
        "source": "/absolute/path/to/another-project",
        "configuration": "/absolute/path/to/another-project/.kaiden"
      },
      "timestamps": {
        "created": 1752912000000
      }
    }
  ]
}

Exit code: 0 (success)
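Output like the above is straightforward to post-process. A sketch assuming jq; the payload is the Step 4 response reduced to the fields needed here:

```shell
# Sketch: select the names of running workspaces from a list response.
list='{"items":[{"name":"project","state":"running"},{"name":"another-project","state":"stopped"}]}'
echo "$list" | jq -r '.items[] | select(.state == "running") | .name'
```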

Step 5: Start a workspace

$ kdn workspace start 2c5f16046476be368fcada501ac6cdc6bbd34ea80eb9ceb635530c0af64681ea -o json
{
  "id": "2c5f16046476be368fcada501ac6cdc6bbd34ea80eb9ceb635530c0af64681ea"
}

Exit code: 0 (success)

Step 6: Stop a workspace

$ kdn workspace stop 2c5f16046476be368fcada501ac6cdc6bbd34ea80eb9ceb635530c0af64681ea -o json
{
  "id": "2c5f16046476be368fcada501ac6cdc6bbd34ea80eb9ceb635530c0af64681ea"
}

Exit code: 0 (success)

Step 7: Remove a workspace

$ kdn workspace remove 2c5f16046476be368fcada501ac6cdc6bbd34ea80eb9ceb635530c0af64681ea -o json
{
  "id": "2c5f16046476be368fcada501ac6cdc6bbd34ea80eb9ceb635530c0af64681ea"
}

Exit code: 0 (success)

Step 8: Verify removal

$ kdn workspace list -o json
{
  "items": [
    {
      "id": "f6e5d4c3b2a1098765432109876543210987654321098765432109876543210a",
      "name": "another-project",
      "agent": "claude",
      "model": "claude-sonnet-4-20250514",
      "project": "/absolute/path/to/another-project",
      "state": "stopped",
      "paths": {
        "source": "/absolute/path/to/another-project",
        "configuration": "/absolute/path/to/another-project/.kaiden"
      },
      "timestamps": {
        "created": 1752912000000
      }
    }
  ]
}

Exit code: 0 (success)

Error Handling

All errors are returned in JSON format when using --output json, with the error written to stdout (not stderr) and a non-zero exit code.

Error: Non-existent directory

$ kdn init /tmp/no-exist --runtime podman --agent claude -o json
{
  "error": "sources directory does not exist: /tmp/no-exist"
}

Exit code: 1 (error)

Error: Workspace not found

$ kdn workspace remove unknown-id -o json
{
  "error": "workspace not found: unknown-id"
}

Exit code: 1 (error)

Best Practices for Programmatic Usage

  1. Always check the exit code to determine success (0) or failure (non-zero)
  2. Parse stdout for JSON output in both success and error cases
  3. Use verbose mode with init (-v) when you need full workspace details immediately after creation
  4. Handle both success and error JSON structures in your code:
     • Success responses have specific fields (e.g., id, items, name, paths)
     • Error responses always have an error field

Example script pattern:

#!/bin/bash

# Register a workspace
output=$(kdn init /path/to/project --runtime podman --agent claude -o json)
exit_code=$?

if [ $exit_code -eq 0 ]; then
    workspace_id=$(echo "$output" | jq -r '.id')
    echo "Workspace created: $workspace_id"
else
    error_msg=$(echo "$output" | jq -r '.error')
    echo "Error: $error_msg"
    exit 1
fi