Tool Design and Contracts
Human in the Loop
Human-in-the-loop (HITL) patterns insert human checkpoints into agent workflows at critical decision points, requiring explicit approval before the agent takes high-stakes or irreversible actions. HITL is the primary safety mechanism for production agent systems: even capable models make mistakes, and a human checkpoint catches those mistakes before they reach databases, customer-facing systems, or financial transactions. The most effective patterns use risk-based escalation, where routine actions proceed automatically while destructive, expensive, or irreversible actions require human approval.
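A risk-based escalation gate can be sketched in a few lines. The sketch below is illustrative only: the names `RiskLevel`, `TOOL_RISK`, and `approval_gate` are hypothetical and not from any particular framework. Routine actions pass through automatically; destructive ones are routed to a human, and unknown tools default to the cautious path.

```python
from enum import Enum

class RiskLevel(Enum):
    ROUTINE = "routine"          # read-only or easily reversible
    DESTRUCTIVE = "destructive"  # deletes data, spends money, or is irreversible

# Classify each tool before execution (hypothetical tool names).
TOOL_RISK = {
    "search_docs": RiskLevel.ROUTINE,
    "read_file": RiskLevel.ROUTINE,
    "delete_records": RiskLevel.DESTRUCTIVE,
    "send_payment": RiskLevel.DESTRUCTIVE,
}

def approval_gate(tool: str, args: dict, ask_human=input) -> bool:
    """Return True if the agent may execute the tool call.

    Routine actions proceed automatically; destructive actions require
    explicit human confirmation. Unlisted tools are treated as destructive.
    """
    risk = TOOL_RISK.get(tool, RiskLevel.DESTRUCTIVE)
    if risk is RiskLevel.ROUTINE:
        return True
    answer = ask_human(f"Approve {tool}({args})? [y/N] ")
    return answer.strip().lower() == "y"
```

Defaulting unknown tools to DESTRUCTIVE mirrors the deny-by-default stance that permission systems like Claude Code's take: the safe failure mode is to ask a human, not to proceed.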
resources
- Building Effective Agents (anthropic.com): Anthropic's guidance on human-in-the-loop patterns for agent systems
- Claude Code: Permissions (docs.anthropic.com): How Claude Code implements permission-based human approval for tool use
- LangGraph: Human-in-the-Loop (langchain-ai.github.io): Implementing HITL checkpoints in LangGraph agent workflows
- Cursor: AI Review (docs.cursor.com): How Cursor implements human review for AI-generated code changes