Glossary

Workspace

A registered directory containing your project source code and its configuration. Each workspace is tracked by kdn with a unique ID and a human-readable name. Workspaces can be accessed using either their ID or name in all commands (start, stop, remove, terminal).

Project

A stable identifier used to scope configuration to a specific repository or directory. kdn auto-detects the project identifier from the git remote URL (or the repository path when no remote is configured). Project-specific settings are stored in ~/.kdn/config/projects.json and take precedence over global settings but are overridden by agent-specific settings.
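The precedence order described above — global settings overridden by project settings, which are in turn overridden by agent-specific settings — can be sketched as a layered dictionary merge. This is an illustrative sketch only: the setting names used here are hypothetical, not kdn's actual configuration schema.

```python
# Illustrative sketch of kdn's settings precedence: later layers win.
# The setting names ("runtime", "network", "model") are hypothetical
# examples, not kdn's actual configuration keys.

def resolve_settings(global_cfg: dict, project_cfg: dict, agent_cfg: dict) -> dict:
    """Merge configuration layers; agent settings take the highest precedence."""
    merged = {}
    for layer in (global_cfg, project_cfg, agent_cfg):
        merged.update(layer)
    return merged

global_cfg = {"runtime": "podman", "network": "allow-all"}
project_cfg = {"network": "restricted"}   # overrides the global value
agent_cfg = {"model": "claude"}           # adds an agent-specific value

settings = resolve_settings(global_cfg, project_cfg, agent_cfg)
print(settings)
# {'runtime': 'podman', 'network': 'restricted', 'model': 'claude'}
```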

Runtime

The environment where workspaces run. kdn's runtime system is extensible — new runtimes can be added to support other execution environments. Supported runtimes:

- Podman — container-based workspaces using a custom Fedora image
- OpenShell — sandbox-based workspaces using the OpenShell Gateway with Podman or VM drivers

Sandbox

The isolated execution environment created by the runtime for a workspace. The sandbox contains the mounted project source code, the configured agent, and any injected environment variables, secrets, and mounts. Network access is controlled per workspace: outbound traffic can be fully allowed, restricted to an explicit list of hosts, or denied entirely — preventing the agent from reaching unintended external services. Depending on the runtime, the sandbox is implemented as a container (Podman) or a VM-based environment (OpenShell).
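The three outbound-network modes described above can be illustrated with a small policy check. The mode names and function below are hypothetical, chosen for illustration; they are not kdn's actual configuration values.

```python
# Illustrative sketch of per-workspace outbound network policy.
# The mode names ("allow-all", "allowlist", "deny-all") are
# hypothetical, not kdn's actual configuration values.

def outbound_allowed(mode: str, host: str, allowed_hosts: frozenset = frozenset()) -> bool:
    """Decide whether the sandbox may reach an external host."""
    if mode == "allow-all":
        return True                    # unrestricted outbound traffic
    if mode == "allowlist":
        return host in allowed_hosts   # only explicitly listed hosts
    if mode == "deny-all":
        return False                   # no outbound traffic at all
    raise ValueError(f"unknown network mode: {mode}")

# A workspace restricted to an explicit list of hosts:
allowed = frozenset({"api.anthropic.com", "pypi.org"})
print(outbound_allowed("allowlist", "pypi.org", allowed))      # True
print(outbound_allowed("allowlist", "example.com", allowed))   # False
```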

Agent

An AI assistant that can perform tasks autonomously. In kdn, agents are the different AI tools (Claude Code, Cursor, Goose, OpenCode, OpenClaw) that can be launched and configured.

LLM (Large Language Model)

The underlying AI model that powers the agents. Examples include Claude (by Anthropic), Gemini (by Google), GPT (by OpenAI), and open-source models such as Llama (by Meta), Gemma (by Google), and Granite (by IBM).

LLM Provider

The service or runtime that hosts and serves an LLM. kdn supports configuring agents with remote and local providers:

- Remote — Anthropic API, Google Cloud Vertex AI, OpenRouter, and any OpenAI-compatible API
- Local — Ollama and RamaLama, for running open-source models on your own machine

MCP (Model Context Protocol)

A standardized protocol for connecting AI agents to external data sources and tools. MCP servers provide agents with additional capabilities like database access, API integrations, or file system operations.
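MCP messages are JSON-RPC 2.0; for example, an agent discovers a server's tools with a `tools/list` request. The sketch below shows only the wire format of that one message — a real client would also perform the initialization handshake and transport framing, which are omitted here.

```python
import json

# Minimal sketch of an MCP "tools/list" request. MCP messages are
# JSON-RPC 2.0; the initialize handshake and transport framing that
# a real client performs are omitted for brevity.

def make_tools_list_request(request_id: int) -> str:
    """Build the JSON-RPC request an agent sends to enumerate a server's tools."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

print(make_tools_list_request(1))
# {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```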

Skills

Pre-configured capabilities or specialized functions that can be enabled for an agent. Skills extend what an agent can do, such as code review, testing, or specific domain knowledge.