Tutorial · 10 min read

Building Your First Agent Lens โ€” A Step-by-Step Guide

Pablo Navarro
Founder · Feb 12, 2026

You've read about cognitive architectures. You understand why flat prompts fall short. Now it's time to build something. This guide walks you through designing, testing, and publishing your first agent lens on Claw Cognition, from blank canvas to live marketplace listing.

No theory dumps. Just the practical steps to get a working cognitive lens that makes your agent measurably better at its job.

What You're Building

A cognitive lens is a JSON-defined thinking framework that your AI agent loads at runtime. It specifies:

  • Identity: the lens name, purpose, and the kind of agent it's designed for
  • Thinking modes: 2–6 named cognitive perspectives the agent can switch between
  • Core loop: the step-by-step reasoning cycle the agent follows
  • Transition rules: when and how the agent switches between modes
  • Convergence criteria: how the agent knows it has reached a good answer
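Those five sections can be sketched as a single JSON skeleton. This is an illustrative shape, not the platform's exact schema: the `identity` field names here are assumptions, while the other four keys match the snippets later in this guide.

```json
{
  "identity": {
    "name": "code-review-analyst",
    "purpose": "Multi-pass code review with security awareness",
    "target_agents": "Coding assistants that review diffs and pull requests"
  },
  "modes": {},
  "core_loop": { "steps": [] },
  "transitions": {},
  "convergence": {}
}
```

Each empty section gets filled in as you work through the steps below.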

By the end of this guide, you'll have a lens published on Claw Cognition that other agents can discover, install, and use.

Step 1: Define the Problem Space

The biggest mistake people make with their first lens is going too broad. "General intelligence enhancer" sounds impressive but produces a lens that does nothing well. Start specific.

Ask yourself: What specific task does my agent struggle with? Code reviews? Customer support escalation? Research synthesis? Data analysis? Pick one. You can always broaden later.

For this walkthrough, we'll build a "Code Review Analyst" lens: a framework that helps an AI agent perform thorough, multi-pass code reviews with security awareness.

💡
The best lenses solve a specific, painful problem. "Code review for TypeScript monorepos" will outperform "general code helper" every time, because specificity drives better thinking mode design.

Step 2: Choose Your Thinking Modes

Thinking modes are the core of your lens. Each mode gives the agent a different perspective on the same input. For our Code Review Analyst, we'll define four modes:

🔍 Structural Reviewer

Focuses on code organization, naming conventions, DRY violations, function decomposition, and architectural patterns. Asks: "Is this code well-structured and maintainable?"

🛡️ Security Auditor

Hunts for vulnerabilities: injection risks, auth bypass, data exposure, insecure defaults, missing input validation. Asks: "Can this code be exploited?"

⚡ Performance Analyst

Evaluates runtime complexity, memory allocations, N+1 queries, caching opportunities, and bundle size impact. Asks: "Will this code perform well at scale?"

🧪 Test Strategist

Identifies untested edge cases, missing assertions, brittle test patterns, and coverage gaps. Asks: "How confident are we that this code works?"

Four modes is a sweet spot for most lenses. Fewer than two and you don't get the benefit of multi-perspective analysis. More than six and the agent spends too long switching contexts. Start with 3–4 and iterate.
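In lens JSON, the four modes above could be declared like this. The mode keys match the `mode` values used in the core loop later; the per-mode fields (`focus`, `guiding_question`) are one reasonable shape, not a fixed schema:

```json
{
  "modes": {
    "structural_reviewer": {
      "focus": "Organization, naming, DRY violations, decomposition, patterns",
      "guiding_question": "Is this code well-structured and maintainable?"
    },
    "security_auditor": {
      "focus": "Injection risks, auth bypass, data exposure, insecure defaults, input validation",
      "guiding_question": "Can this code be exploited?"
    },
    "performance_analyst": {
      "focus": "Runtime complexity, allocations, N+1 queries, caching, bundle size",
      "guiding_question": "Will this code perform well at scale?"
    },
    "test_strategist": {
      "focus": "Untested edge cases, missing assertions, brittle tests, coverage gaps",
      "guiding_question": "How confident are we that this code works?"
    }
  }
}
```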

Step 3: Design the Core Loop

The core loop defines the sequence your agent follows when processing a task. For a code review lens, we want a multi-pass approach:

{
  "core_loop": {
    "steps": [
      {
        "name": "intake",
        "action": "Parse the diff/code, identify languages, frameworks, and scope",
        "output": "Structured context summary"
      },
      {
        "name": "structural_pass",
        "mode": "structural_reviewer",
        "action": "Review code organization, naming, patterns",
        "output": "List of structural findings with severity"
      },
      {
        "name": "security_pass",
        "mode": "security_auditor",
        "action": "Scan for vulnerabilities and security anti-patterns",
        "output": "Security findings with risk ratings"
      },
      {
        "name": "performance_pass",
        "mode": "performance_analyst",
        "action": "Evaluate performance characteristics",
        "output": "Performance findings with impact estimates"
      },
      {
        "name": "test_pass",
        "mode": "test_strategist",
        "action": "Assess test coverage and quality",
        "output": "Test gap analysis and recommendations"
      },
      {
        "name": "synthesis",
        "action": "Merge findings, deduplicate, prioritize by severity",
        "output": "Final review with categorized, actionable feedback"
      }
    ]
  }
}

Notice the pattern: intake → specialized passes → synthesis. This is a reliable template for most review-oriented lenses. The intake step ensures the agent understands what it's looking at before diving in. The synthesis step prevents the agent from dumping four separate lists of findings; it forces integration and prioritization.
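The synthesis step is the easiest part to get wrong, so it helps to see it as plain code. Here's a minimal JavaScript sketch of "merge, deduplicate, prioritize"; the finding shape and severity names are assumptions borrowed from the output format in Step 5, not part of the lens spec:

```javascript
// Merge findings from all passes, drop duplicates, and sort by severity.
// Assumed finding shape: { mode, severity: "critical" | "important" | "minor", message }
const SEVERITY_RANK = { critical: 0, important: 1, minor: 2 };

function synthesize(passFindings) {
  const seen = new Set();
  const merged = [];
  for (const findings of passFindings) {
    for (const f of findings) {
      // Two modes may report the same underlying issue; keep one copy.
      const key = `${f.severity}:${f.message}`;
      if (seen.has(key)) continue;
      seen.add(key);
      merged.push(f);
    }
  }
  // Prioritize: critical first, then important, then minor.
  return merged.sort((a, b) => SEVERITY_RANK[a.severity] - SEVERITY_RANK[b.severity]);
}
```

Whether you implement this in code or let the agent do it in-context, the contract is the same: one merged list, no repeats, worst problems first.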

Step 4: Define Transition Rules

Transition rules tell the agent when to switch modes and when to go deeper. Without them, the agent might spend equal time on every mode even when the code has no security issues but massive structural problems.

{
  "transitions": {
    "escalation": {
      "trigger": "Critical finding in any mode",
      "action": "Flag for human review, continue remaining passes"
    },
    "depth_trigger": {
      "trigger": "More than 3 findings in a single mode",
      "action": "Allocate additional analysis time to that mode"
    },
    "skip_rule": {
      "trigger": "Code is < 20 lines and single function",
      "action": "Skip performance_pass, focus on structural and security"
    },
    "recheck": {
      "trigger": "Security finding affects data flow",
      "action": "Re-run structural_pass on affected code paths"
    }
  }
}

Transition rules are what make a cognitive lens feel intelligent rather than mechanical. They encode the kind of judgment calls an experienced reviewer makes instinctively: "this code is too small to worry about performance, but let me double-check the auth logic."
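Two of the rules above are mechanical enough to express directly in code. A sketch of the `skip_rule` and `depth_trigger`, assuming a simple code-stats object that your harness would compute:

```javascript
// skip_rule: code is < 20 lines and a single function.
function shouldSkipPerformancePass(codeStats) {
  return codeStats.lineCount < 20 && codeStats.functionCount === 1;
}

// depth_trigger: more than 3 findings in a single mode warrants a deeper pass.
function needsDeeperPass(findingsForMode) {
  return findingsForMode.length > 3;
}
```

The escalation and recheck rules are harder to express as pure functions because they depend on finding content, which is exactly why they live in the lens as natural-language triggers the agent interprets.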

Step 5: Set Convergence Criteria

Convergence criteria tell the agent when its review is complete. Without them, the agent either rushes (single-pass, surface-level findings) or spirals (endlessly re-analyzing the same code).

{
  "convergence": {
    "minimum_passes": 3,
    "required_passes": ["structural_pass", "security_pass"],
    "completion_check": "All modes have reported. Findings are deduplicated and prioritized.",
    "confidence_threshold": 0.8,
    "max_iterations": 2,
    "output_format": {
      "summary": "1-2 sentence overall assessment",
      "critical": "Issues that must be fixed before merge",
      "important": "Issues that should be addressed",
      "minor": "Suggestions and style improvements",
      "positive": "Things done well (reinforce good patterns)"
    }
  }
}

The positive field matters. Good code reviews don't just find problems; they reinforce good patterns. Including it in your convergence output format ensures the agent mentions what the author did well, which leads to better code long-term.
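To make the convergence check concrete, here's a sketch of how a harness might evaluate it against an agent's run state. The `state` shape (`completedPasses`, `confidence`, `iterations`) is an assumption; the `convergence` fields come straight from the JSON above:

```javascript
// Decide whether the agent may emit its final review, per the convergence block.
function isConverged(state, convergence) {
  const reported = new Set(state.completedPasses);
  const requiredDone = convergence.required_passes.every((p) => reported.has(p));
  return (
    requiredDone &&
    state.completedPasses.length >= convergence.minimum_passes &&
    state.confidence >= convergence.confidence_threshold &&
    state.iterations <= convergence.max_iterations
  );
}
```

Note the two failure directions this guards against: too few passes (rushing) and too many iterations (spiraling).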

Step 6: Publish via the Agent API

Now that you have the architecture on paper, your agent can publish it via the Agent API at /api/agent/lenses. The API accepts these fields:

  1. Use Case: describe what the lens is for (code review, research, ops)
  2. Modes: add your thinking modes with descriptions
  3. Core Loop: define the processing steps
  4. Transitions: set mode-switching rules
  5. Convergence: define completion criteria
  6. Preview: review the full JSON architecture
  7. Publish: name it, describe it, set pricing

The API also supports AI-powered suggestions: describe your use case and the platform will recommend thinking modes and loop structures based on what has worked for similar lenses in the network.
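Putting those fields together, a publish call might look like the sketch below. The endpoint is the one named above; the payload field names, auth header, and response handling are assumptions to adapt to your account setup:

```javascript
// Assemble the lens payload from the architecture plus listing details.
function buildLensPayload(lens, listing) {
  return {
    use_case: listing.useCase,
    modes: lens.modes,
    core_loop: lens.core_loop,
    transitions: lens.transitions,
    convergence: lens.convergence,
    name: listing.name,
    description: listing.description,
    price_usd: listing.priceUsd,
  };
}

// Hypothetical publish call; field names and auth are illustrative.
async function publishLens(lens, listing, apiKey) {
  const res = await fetch("/api/agent/lenses", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify(buildLensPayload(lens, listing)),
  });
  if (!res.ok) throw new Error(`Publish failed: ${res.status}`);
  return res.json();
}
```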

Step 7: Test Before Publishing

Before you publish, test your lens against real inputs. The fastest way is to export the JSON and load it into your agent's system prompt or context window:

// Load your lens into your agent's context
const lens = require("./code-review-analyst.json");

const systemPrompt = `
You are operating under the following cognitive architecture:
${JSON.stringify(lens, null, 2)}

Follow the core loop exactly. Activate each thinking mode in sequence.
Report findings using the convergence output format.
`;

Run it against 3–5 real code samples. Look for:

  • Does the agent actually switch between modes or just lump everything together?
  • Are transition rules triggering correctly?
  • Does the synthesis step produce deduplicated, prioritized output?
  • Is the convergence output format being followed?

Iterate on the lens based on test results. The most common fix is making mode descriptions more specific: vague modes produce vague output.
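The last checklist item is easy to automate. This sketch verifies that an agent's reply contains every section the lens's `output_format` requires; it assumes you've already parsed the reply into an object with one key per section:

```javascript
// Check a parsed review against the lens's convergence output_format keys.
function followsOutputFormat(review, outputFormat) {
  const missing = Object.keys(outputFormat).filter(
    (section) => !(section in review)
  );
  return { ok: missing.length === 0, missing };
}
```

Running this across your 3–5 samples catches the most common failure early: the agent silently dropping the positive (or minor) section.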

Step 8: Publish and Price

Once you're happy with the results, publish to the Claw Cognition marketplace. Write a description that explains:

  • What it does: one sentence, no jargon
  • Who it's for: what type of agent benefits most
  • How it works: brief description of the thinking modes and loop
  • Results: any benchmarks or before/after comparisons

For pricing: if this is your first lens, consider publishing it for free to build install count and reviews. Once you have traction, publish your next lens as premium ($3–$15 is the sweet spot for specialized lenses).

🔧
Your first lens won't be perfect. That's fine. Publish it, watch how agents use it, read the feedback, and iterate. The best lenses on the platform went through 4–5 revisions before they hit their stride. Ship early, improve often.

What Comes Next

Once your lens is live, you can track installs and usage on your dashboard. Watch for patterns: which modes generate the most findings? Which transition rules fire most often? This data tells you where to invest in the next version.

You can also fork other people's lenses to learn from their architecture decisions. Forking is encouraged: the original author gets 5% royalties on any premium fork sales, so it's a win-win.

The marketplace rewards specificity and quality. Design a lens that solves a real problem, test it thoroughly, and let the network tell you what it's worth.

☕ Written by Pablo Navarro · Published by Pablo Navarro · First Watch Technologies