Mycel uses a split configuration system with three-tier merge: system defaults → user config (~/.leon/) → project config (.leon/ in workspace root). CLI arguments override everything.

Config file overview

JSON config files (three-tier merge):

| File | Purpose |
| --- | --- |
| `runtime.json` | Tools, memory, MCP, skills, security |
| `models.json` | Providers, API keys, model mapping |
| `observation.json` | Langfuse / LangSmith tracing |

Other config files:

| File | Purpose |
| --- | --- |
| `config.env` | Quick API key setup |
| `sandboxes/<name>.json` | Per-provider sandbox config |
| `.mcp.json` (member) | Per-member MCP servers |
Priority (highest to lowest): CLI flags → `.leon/<file>` in workspace → `~/.leon/<file>` → built-in defaults. Merge strategy per domain:

| Domain | Strategy |
| --- | --- |
| `runtime`, `memory`, `tools` | Deep merge — higher-priority tiers override individual fields |
| `mcp`, `skills` | Lookup — first tier that defines a key wins; no merging |
| `system_prompt` | Lookup — project → user → system |
| `providers`, `mapping` (models.json) | Deep merge per key |
| `pool` (models.json) | Last wins — no list merging |
| `catalog`, `virtual_models` (models.json) | System-only — never overridden |
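The deep-merge behaviour for the `runtime`/`memory`/`tools` domains can be sketched as follows. This is an illustrative helper, not Mycel's actual implementation; later tiers win on conflicting scalar fields while untouched fields survive from earlier tiers:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# system defaults -> user config (~/.leon/) -> project config (.leon/)
system = {"tools": {"web": {"enabled": True}, "command": {"enabled": True}}}
user = {"tools": {"web": {"enabled": False}}}
project = {"tools": {"command": {"tools": {"run_command": {"default_timeout": 300}}}}}

effective = deep_merge(deep_merge(system, user), project)
```

Here the project tier overrides only `run_command.default_timeout`; the user tier's `web.enabled: false` and the system default `command.enabled: true` both survive in the merged result.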

First run

Create `~/.leon/config.env`:

```bash
OPENAI_API_KEY=sk-xxx
OPENAI_BASE_URL=https://api.openai.com/v1
MODEL_NAME=claude-sonnet-4-5-20250929
```

runtime.json

| Field | Default | Description |
| --- | --- | --- |
| `temperature` | `null` | Sampling temperature (0–2). `null` = model default |
| `max_tokens` | `null` | Max output tokens. `null` = model default |
| `context_limit` | `0` | Context window in tokens. `0` = auto-detect |
| `enable_audit_log` | `true` | Audit logging for file operations |
| `allowed_extensions` | `null` | Restrict file access by extension. `null` = all |
| `block_dangerous_commands` | `true` | Block `rm -rf`, `sudo`, etc. |
| `block_network_commands` | `false` | Block network commands |
Pruning trims large tool results:

| Field | Default | Description |
| --- | --- | --- |
| `soft_trim_chars` | 3,000 | Trim results longer than this |
| `hard_clear_threshold` | 10,000 | Clear results longer than this |
| `protect_recent` | 3 | Keep last N tool messages untrimmed |
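A minimal sketch of how these three knobs interact (hypothetical logic, not Mycel's internals): the most recent `protect_recent` results are kept verbatim, and older results are trimmed or cleared purely by length:

```python
SOFT_TRIM_CHARS = 3_000        # defaults from the table above
HARD_CLEAR_THRESHOLD = 10_000
PROTECT_RECENT = 3

def prune_tool_results(results: list[str]) -> list[str]:
    """Trim or clear old tool results; the last PROTECT_RECENT stay untouched."""
    cutoff = len(results) - PROTECT_RECENT
    pruned = []
    for i, text in enumerate(results):
        if i >= cutoff:
            pruned.append(text)                          # recent: keep verbatim
        elif len(text) > HARD_CLEAR_THRESHOLD:
            pruned.append("[cleared]")                   # very large: drop entirely
        elif len(text) > SOFT_TRIM_CHARS:
            pruned.append(text[:SOFT_TRIM_CHARS] + "…")  # large: trim to the soft limit
        else:
            pruned.append(text)
    return pruned
```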
Compaction summarizes old history via LLM when context fills:

| Field | Default | Description |
| --- | --- | --- |
| `reserve_tokens` | 16,384 | Reserve for new messages |
| `keep_recent_tokens` | 20,000 | Keep recent tokens verbatim |
| `min_messages` | 20 | Minimum messages before trigger |
See Memory for full details.
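A plausible trigger condition implied by those defaults (an assumption for illustration, not the exact logic — see Memory for the real behaviour): compaction fires once the conversation no longer leaves `reserve_tokens` of headroom and at least `min_messages` messages have accumulated:

```python
RESERVE_TOKENS = 16_384
MIN_MESSAGES = 20

def should_compact(context_limit: int, used_tokens: int, message_count: int) -> bool:
    """Compact when headroom drops below RESERVE_TOKENS and enough
    messages exist to be worth summarizing."""
    return (message_count >= MIN_MESSAGES
            and used_tokens > context_limit - RESERVE_TOKENS)
```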
Each tool group has an `enabled` flag. Both the group and the individual tool must be enabled for the tool to be available.
```json
{
  "tools": {
    "filesystem": { "enabled": true },
    "search": { "enabled": true },
    "web": {
      "enabled": true,
      "tools": {
        "web_search": { "enabled": true, "tavily_api_key": null },
        "fetch": { "enabled": true }
      }
    },
    "command": {
      "enabled": true,
      "tools": {
        "run_command": { "default_timeout": 120 }
      }
    }
  }
}
```
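The two-level enablement rule can be expressed as a small check (a sketch; the assumption that unlisted tools default to enabled is mine, not documented above):

```python
def tool_enabled(config: dict, group: str, tool: str) -> bool:
    """A tool is active only if its group is enabled AND the tool itself
    is not explicitly disabled. Unlisted entries default to enabled (assumption)."""
    group_cfg = config.get("tools", {}).get(group, {})
    if not group_cfg.get("enabled", True):
        return False
    return group_cfg.get("tools", {}).get(tool, {}).get("enabled", True)

# Disabling the group wins even when the individual tool is enabled:
cfg = {"tools": {"web": {"enabled": False,
                         "tools": {"web_search": {"enabled": True}}}}}
```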
Complete tool catalog:

| Tool | Group | Mode |
| --- | --- | --- |
| Read, Write, Edit, list_dir | filesystem | inline |
| Grep, Glob | search | inline |
| Bash | command | inline |
| WebSearch, WebFetch | web | inline |
| Agent, SendMessage, TaskOutput, TaskStop | agent | inline |
| TaskCreate, TaskGet, TaskList, TaskUpdate | todo | deferred |
| load_skill | skills | inline |
| tool_search | system | inline |
Deferred tools are not injected into every request; the agent discovers them via `tool_search` when needed, saving tokens in conversations that don't require task management.
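The inline/deferred split amounts to a simple partition of the catalog at request time (an illustrative sketch, not Mycel's actual request builder):

```python
# Subset of the catalog above, keyed by tool name -> mode
CATALOG = {
    "Read": "inline", "Bash": "inline", "tool_search": "inline",
    "TaskCreate": "deferred", "TaskList": "deferred",
}

def tools_for_request(catalog: dict[str, str]) -> list[str]:
    """Only inline tools ship with every request; deferred ones stay
    out of the prompt until tool_search surfaces them."""
    return [name for name, mode in catalog.items() if mode == "inline"]
```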
Project-level overrides go in `.leon/runtime.json` in your workspace root:
```json
{
  "allowed_extensions": ["py", "js", "ts", "json", "yaml", "md"],
  "block_dangerous_commands": true,
  "tools": {
    "web": { "enabled": false },
    "command": {
      "tools": {
        "run_command": { "default_timeout": 300 }
      }
    }
  },
  "system_prompt": "You are a Python expert working on a FastAPI project."
}
```

models.json

Mycel provides four `leon:*` aliases:

| Alias | Model | Use case |
| --- | --- | --- |
| `leon:mini` | claude-haiku-4-5-20250929 | Fast, simple tasks |
| `leon:medium` | claude-sonnet-4-5-20250929 | Balanced, daily work |
| `leon:large` | claude-opus-4-6 | Complex reasoning |
| `leon:max` | claude-opus-4-6 + temp=0 | Maximum precision |
Set via Settings → Models in the Web UI, or in `~/.leon/models.json`:

```json
{ "active": { "model": "leon:large" } }
```
Override a mapping in `~/.leon/models.json`:

```json
{
  "mapping": {
    "leon:medium": { "model": "gpt-4o", "provider": "openai" }
  }
}
```
A fuller `models.json` with explicit providers:

```json
{
  "active": {
    "model": "claude-sonnet-4-5-20250929",
    "provider": null
  },
  "providers": {
    "anthropic": {
      "api_key": "${ANTHROPIC_API_KEY}",
      "base_url": "https://api.anthropic.com"
    },
    "openai": {
      "api_key": "${OPENAI_API_KEY}",
      "base_url": "https://api.openai.com/v1"
    }
  }
}
```
Provider auto-detection (when no explicit provider is set):

- `ANTHROPIC_API_KEY` set → provider = `anthropic`
- `OPENAI_API_KEY` set → provider = `openai`
- `OPENROUTER_API_KEY` set → provider = `openai` (OpenRouter-compatible)
Custom models are registered under `pool`:

```json
{
  "pool": {
    "custom": ["deepseek-chat"],
    "custom_config": {
      "deepseek-chat": {
        "based_on": "gpt-4o",
        "context_limit": 65536
      }
    }
  }
}
```

`based_on` tells Mycel which tokenizer to use; `context_limit` overrides auto-detection.

MCP servers

Connect external services via the Model Context Protocol. MCP tools appear as `mcp__{server_name}__{tool_name}`.
```json
{
  "mcp": {
    "enabled": true,
    "servers": {
      "github": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "env": {
          "GITHUB_TOKEN": "${GITHUB_TOKEN}"
        }
      }
    }
  }
}
```

Skills

```json
{
  "skills": {
    "enabled": true,
    "paths": ["~/.leon/skills"],
    "skills": {
      "code-review": true,
      "debugging": false
    }
  }
}
```
Skill paths must exist on disk — Mycel does not create them automatically. Run `mkdir -p ~/.leon/skills` if needed.

Observation (tracing)

```json
{
  "active": "langfuse",
  "langfuse": {
    "secret_key": "${LANGFUSE_SECRET_KEY}",
    "public_key": "${LANGFUSE_PUBLIC_KEY}",
    "host": "https://cloud.langfuse.com"
  }
}
```

Environment variables

All string values in JSON config files support ${VAR} expansion and ~ for home directory.
| Variable | Purpose |
| --- | --- |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `OPENAI_API_KEY` | OpenAI-compatible API key |
| `OPENAI_BASE_URL` | API base URL |
| `OPENROUTER_API_KEY` | OpenRouter API key |
| `MODEL_NAME` | Override active model |
| `LEON_SANDBOX` | Default sandbox name |
| `TAVILY_API_KEY` | Tavily web search |
| `EXA_API_KEY` | Exa search |
| `JINA_API_KEY` | Jina AI fetch |
| `E2B_API_KEY` | E2B sandbox |
| `DAYTONA_API_KEY` | Daytona sandbox |
| `AGENTBAY_API_KEY` | AgentBay sandbox |