---
title: "OpenBrain Judge Extender"
type: "guide"
label: "Guide"
project: "The Judge Layer Is The Product"
---

# OpenBrain Judge Extender

![nbj-ob1-agent-memory-hero-16x9](https://promptkit.natebjones.com/api/assets/20260508_246_guide_main/files/7c1b3a56-8421-453d-acba-9d5e9fc93b7e)

## Executive Summary

OpenBrain Judge Extender is a runtime-independent extension that gives production agent judges durable memory, provenance, recall, write-back, review, and inspection.

The extender should not make OpenBrain the runtime. It should not make OpenBrain the orchestrator. It should not make OpenBrain a generic vector store. Its job is narrower and more valuable: OpenBrain should provide the continuity layer that judgment systems need before and after they decide whether an agent action should be allowed, blocked, revised, or escalated.

The core loop is simple.

Before a judge decision, a runtime asks OpenBrain for scoped, policy-aware recall: prior decisions, relevant policies, user-confirmed preferences, source references, provenance labels, freshness, confidence, and use restrictions.

During judgment, the actor agent submits a structured action proposal. A judge evaluates that proposal using current runtime context plus OpenBrain recall.

After judgment, the runtime writes the decision back to OpenBrain. OpenBrain stores the event with provenance, scope, confidence, review status, and future use policy. Inferred or generated memory cannot become instruction-grade without human confirmation.
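The three phases above can be sketched in a few lines. This is a minimal illustration with in-memory stand-ins, not the real OpenBrain client API; the function names and payload fields are assumptions.

```python
# Minimal sketch of the recall -> judge -> write-back loop.
# Function names and payload shapes are illustrative stand-ins.

def recall(action_summary: str) -> list[dict]:
    """Stand-in for judge recall: return scoped, provenance-labeled memories."""
    return [{"memory_id": "m1",
             "summary": "User approved weekly status emails",
             "provenance": "user_confirmed",
             "use_policy": "can_use_as_instruction"}]

def judge(proposal: dict, memories: list[dict]) -> dict:
    """Stand-in judge: allow only when confirmed instruction-grade memory supports it."""
    supported = any(m["use_policy"] == "can_use_as_instruction" for m in memories)
    return {"decision": "allow" if supported else "escalate",
            "memory_used": [m["memory_id"] for m in memories]}

def write_decision(decision: dict) -> dict:
    """Stand-in for decision write-back: non-allow outcomes get flagged for review."""
    decision["requires_review"] = decision["decision"] != "allow"
    return decision

proposal = {"action_id": "a1", "description": "send weekly status email"}
memories = recall(proposal["description"])
stored = write_decision(judge(proposal, memories))
print(stored["decision"])  # prints "allow"
```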

The first adapter should be OpenClaw. The contracts should remain portable enough to support OpenAI Agents SDK tool guardrails, Gas Town, Gas City, Codex, Claude Code, Thrum, and internal agent runtimes later.

## Why This Exists

Production agents increasingly take side-effectual actions: sending emails, editing files, updating tickets, creating pull requests, changing records, booking meetings, calling APIs, and handing work to other agents.

The moment an agent can act, the system needs a judge layer. The judge layer decides whether the agent's proposed action is authorized, supported by evidence, consistent with policy, safe enough to execute, and appropriate for automation.

But a judge without trustworthy context is weak.

It may not know that the user already approved a certain class of action. It may not know that a policy changed. It may not know that a similar action was blocked yesterday. It may not know that a memory was inferred by an agent rather than confirmed by a user. It may not know whether a source is stale, disputed, or superseded.

That is the OpenBrain opportunity.

OpenBrain should make agent judgment durable, inspectable, and portable across runtimes. It should give judges governed recall before decisions and governed write-back after decisions.

The product thesis is:

OpenBrain should be the continuity layer for production agent judgment.

![continuity-layer](https://promptkit.natebjones.com/api/assets/20260508_246_guide_main/files/a553e8bb-c8d2-413f-9f08-88352a73a20a)

## What This Is Not

This is not an orchestration system.

Gas Town, Gas City, OpenClaw, OpenAI Agents SDK, Thrum, Codex, Claude Code, and internal runtimes can all own execution, sessions, handoffs, tools, queues, and worker assignment.

This is not a generic guardrails product.

The judge itself may be an LLM validator, a rules engine, a hybrid policy checker, a human review process, or an OpenAI Agents SDK tool guardrail. OpenBrain does not need to own that decision engine in V1.

This is not a vector database wrapper.

Semantic retrieval may be part of implementation, but the product contract is not "return similar chunks." The contract is "return scoped, provenance-labeled, policy-aware memory that a judge is allowed to use in specific ways."

This is not hidden agent memory.

Every memory returned to a judge should have provenance, scope, freshness, confidence, and use policy. Every memory written by a judge should be inspectable and reviewable.

This is not raw transcript storage.

Do not store raw transcripts by default. Do not store model reasoning traces. Do not store full tool arguments unless a runtime explicitly opts in with an approved retention policy.

## Architecture Overview

The extender has six core components.

Runtime Adapter

A runtime adapter converts runtime-specific events into OpenBrain judge contracts. V1 should target OpenClaw because OpenClaw maps cleanly to runtime, task, flow, channel, tool, and work-log concepts. Later adapters should support OpenAI Agents SDK tool guardrails, Gas Town/Gas City events and beads, Codex task boundaries, Claude Code tool calls, Thrum handoffs, and internal runtimes.

Judge Recall API

The recall API lets a runtime ask OpenBrain for relevant memories and policies before a judge decision. The response must be scoped, provenance-labeled, confidence-scored, freshness-aware, and use-policy constrained.

Action Proposal Envelope

The actor agent must submit a structured action proposal before side-effectual execution. The proposal tells the judge what action is requested, why, what evidence supports it, what authorization exists, what consequence is expected, and whether rollback is possible.

Judge Decision API

The decision API records the judge outcome: allow, block, revise, or escalate. It captures the action proposal reference, judge kind, checks performed, memory used, decision summary, required revision, escalation target, and memory candidates to write.

Review Queue

The review queue prevents inferred or generated memories from silently becoming future instructions. Human review is required before instruction-grade memory can be injected automatically into future judge calls.

Memory Inspector

The inspector is the trust surface. It must let a developer, admin, or user answer why a memory exists, where it came from, which decision created it, how it has been used, whether it was confirmed, and what future actions it can influence.

## Core Flow

Before judgment, the runtime classifies a proposed action and sends a judge.recall request to OpenBrain. The request includes workspace, project, task, flow, action type, tool name, entities, scope, sensitivity signals, and limits.

![recall-lifecycle](https://promptkit.natebjones.com/api/assets/20260508_246_guide_main/files/7d03638a-ebc3-40f8-9305-1c837e5d4fff)

OpenBrain returns relevant memories, prior decisions, policies, source references, provenance labels, freshness, confidence, and use policies.

During judgment, the actor agent submits an action proposal. The judge evaluates that proposal against current runtime context, OpenBrain recall, and runtime policy. The judge returns one of four decisions: allow, block, revise, or escalate.

After judgment, the runtime writes the decision back to OpenBrain. OpenBrain stores the compact judgment event. If the event contains reusable lessons or future constraints, those memories enter the review queue unless they are purely observed operational records with evidence-only use.

![writeback-lifecycle](https://promptkit.natebjones.com/api/assets/20260508_246_guide_main/files/f30fb664-cba0-4124-a26e-6494f2706688)

## Risk Classes

V1 should classify actions into four risk classes.

Read-only

Retrieve, summarize, classify, inspect, search, draft, compare, or explain.

Default behavior: no heavy judge unless sensitive data, high-stakes output, or policy-matching risk is present.

Reversible write

Create a draft, add a label, update an internal note, create a non-public task, write to a branch, or modify local workspace state.

Default behavior: lightweight judge or post-action audit.

External side effect

Send email, message another person, book a meeting, update CRM, trigger workflow, create external ticket, open PR, comment publicly, or notify a customer.

Default behavior: judge required before execution.

High-risk action

Spend money, delete data, change permissions, merge to main, execute production command, submit legal or financial work, expose sensitive data, or notify customers at scale.

Default behavior: judge plus human approval unless explicit workspace policy allows automation.
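The four default behaviors above can be restated as a small lookup. The helper and behavior names are assumptions for illustration; only the risk classes and defaults come from the text.

```python
# Default judge behavior per V1 risk class, restating the text above.
# The string labels for behaviors are illustrative, not a defined API.
DEFAULTS = {
    "read_only": "no_judge",            # unless sensitive data or high-stakes output
    "reversible_write": "light_judge",  # lightweight judge or post-action audit
    "external_side_effect": "judge_required",
    "high_risk": "judge_plus_human",    # unless workspace policy allows automation
}

def default_judge_behavior(risk_class: str) -> str:
    if risk_class not in DEFAULTS:
        raise ValueError(f"unknown risk class: {risk_class}")
    return DEFAULTS[risk_class]
```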

## Action Proposal Schema

```yaml
schema_version: "openbrain.judge.action_proposal.v1"
workspace_id: string
project_id: string | null
task_id: string | null
flow_id: string | null
action_id: string
idempotency_key: string
runtime:
  name: string
  version: string | null
  adapter: string | null
actor:
  agent_id: string
  role: string | null
  provider: string | null
  model: string | null
tool:
  name: string
  kind: enum # function_tool | hosted_tool | shell | browser | api | message | file | workflow | handoff
  target_system: string | null
action:
  risk_class: enum # read_only | reversible_write | external_side_effect | high_risk
  description: string
  target: string | null
  arguments_digest: string
  full_arguments_ref: string | null
authorization:
  claimed_user_authorization: string | null
  user_authorization_refs:
    - kind: enum # user_message | task | ticket | memory | policy | manual_approval
      uri: string | null
      quote_or_summary: string
      timestamp: string | null
evidence:
  source_refs:
    - kind: enum # file | message | doc | ticket | memory | log | web | api | policy
      uri: string | null
      title: string | null
      timestamp: string | null
      summary: string
expected_consequence:
  summary: string
  external_recipients: string[]
  data_exposed: string[]
  systems_changed: string[]
  persistence: enum # none | temporary | durable | external
rollback:
  is_reversible: boolean
  rollback_plan: string | null
  rollback_owner: string | null
sensitivity:
  contains_secret_like_data: boolean
  contains_customer_data: boolean
  contains_private_personal_data: boolean
  contains_financial_or_legal_data: boolean
  contains_production_system_access: boolean
```
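A minimal proposal under this schema might be built as follows. The field values and the SHA-256 digest choice are assumptions for illustration; the schema itself defines only the shape.

```python
import hashlib
import json

# Hypothetical construction of a minimal action proposal in the
# openbrain.judge.action_proposal.v1 shape. Arguments are digested
# rather than stored in full, per the write-back rules.
tool_args = {"to": "customer@example.com", "subject": "Renewal reminder"}

proposal = {
    "schema_version": "openbrain.judge.action_proposal.v1",
    "workspace_id": "ws_1",
    "action_id": "act_42",
    "idempotency_key": "act_42:send_email:1",
    "tool": {"name": "send_email", "kind": "message", "target_system": "gmail"},
    "action": {
        "risk_class": "external_side_effect",
        "description": "Send renewal reminder email",
        # digest of canonicalized arguments; hash algorithm is an assumption
        "arguments_digest": hashlib.sha256(
            json.dumps(tool_args, sort_keys=True).encode()
        ).hexdigest(),
        "full_arguments_ref": None,  # only set under an approved retention policy
    },
}
```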

## Judge Recall Request Schema

```yaml
schema_version: "openbrain.judge.recall.v1"
request_id: string
workspace_id: string
project_id: string | null
task_id: string | null
flow_id: string | null
action_id: string
query:
  summary: string
  action_type: enum # read_only | reversible_write | external_side_effect | high_risk
  tool_name: string | null
  target_system: string | null
entities:
  people: string[]
  orgs: string[]
  repos: string[]
  files: string[]
  customers: string[]
  systems: string[]
  topics: string[]
scope:
  visibility: enum # personal | project | workspace | org
  include_unconfirmed: boolean
  include_disputed: boolean
  include_stale: boolean
limits:
  max_items: number
  max_tokens: number
  recency_days: number | null
policy:
  allowed_use_policies:
    - enum # can_use_as_instruction | can_use_as_evidence | requires_confirmation | do_not_inject_automatically
  require_source_refs: boolean
```

## Judge Recall Response Schema

```yaml
schema_version: "openbrain.judge.recall_response.v1"
request_id: string
memories:
  - memory_id: string
    summary: string
    content: string
    source:
      kind: enum # user_message | doc | ticket | file | system_event | import | judge_event | manual_entry
      uri: string | null
      title: string | null
      timestamp: string | null
    provenance:
      status: enum # observed | inferred | user_confirmed | imported | generated | superseded | disputed
      confidence: number
      created_by: enum # user | agent | system | import
      model: string | null
      runtime: string | null
    use_policy:
      policy: enum # can_use_as_instruction | can_use_as_evidence | requires_confirmation | do_not_inject_automatically
      reason: string | null
    freshness:
      created_at: string
      last_confirmed_at: string | null
      stale_after: string | null
    scope:
      workspace_id: string
      project_id: string | null
      visibility: enum # personal | project | workspace | org
policy_hits:
  - policy_id: string
    summary: string
    required_behavior: enum # allow | block | revise | escalate | human_review
    source_ref: string | null
warnings:
  - code: string
    message: string
```
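On the consuming side, a runtime might filter a recall response before injecting memories into a judge prompt. This sketch restates the injection rules from the acceptance criteria (disputed and superseded memories are never injected automatically); the function name and warning codes are assumptions.

```python
from datetime import datetime

# Illustrative recall-side filtering over memories shaped like the
# recall_response.v1 schema above. Drops memories whose use policy or
# provenance forbids automatic injection, and warns on stale items.
def filter_for_injection(memories: list[dict],
                         now: datetime) -> tuple[list[dict], list[dict]]:
    injectable, warnings = [], []
    for m in memories:
        if m["use_policy"]["policy"] == "do_not_inject_automatically":
            continue
        if m["provenance"]["status"] in ("disputed", "superseded"):
            continue  # never injected automatically, per the acceptance criteria
        stale_after = m["freshness"].get("stale_after")
        if stale_after and datetime.fromisoformat(stale_after) < now:
            warnings.append({"code": "stale_memory", "message": m["memory_id"]})
            continue
        injectable.append(m)
    return injectable, warnings
```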

## Judge Decision Schema

```yaml
schema_version: "openbrain.judge.decision.v1"
workspace_id: string
project_id: string | null
task_id: string | null
flow_id: string | null
action_id: string
decision_id: string
idempotency_key: string
decision: enum # allow | block | revise | escalate
reasoning_summary: string
confidence: enum # high | medium | low
judge:
  kind: enum # llm | rule | hybrid | human
  provider: string | null
  model: string | null
  policy_version: string | null
checks:
  authorization_check: enum # pass | fail | uncertain | not_applicable
  evidence_check: enum # pass | fail | uncertain | not_applicable
  policy_check: enum # pass | fail | uncertain | not_applicable
  sensitivity_check: enum # pass | fail | uncertain | not_applicable
  reversibility_check: enum # pass | fail | uncertain | not_applicable
  quality_check: enum # pass | fail | uncertain | not_applicable
required_revision:
  summary: string | null
  revised_action_constraints: string[]
escalation:
  required: boolean
  reason: string | null
  owner: string | null
  due_at: string | null
memory_used:
  - memory_id: string
    used_as: enum # instruction | evidence | background
memory_to_write:
  decisions: string[]
  lessons: string[]
  failures: string[]
  constraints: string[]
  open_questions: string[]
provenance:
  default_status: enum # observed | inferred | generated | user_confirmed
  requires_review: boolean
```
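A decision event implies some internal consistency: a revise outcome should carry a revision summary, and an escalate outcome should mark escalation as required. These checks are inferred from the schema rather than stated as hard validation in the spec; the function is an illustrative sketch.

```python
# Hypothetical consistency checks over a judge.decision.v1 event.
# Returns a list of human-readable validation errors (empty means valid).
def validate_decision(event: dict) -> list[str]:
    errors = []
    if event["decision"] not in ("allow", "block", "revise", "escalate"):
        errors.append("unknown decision")
    if event["decision"] == "revise" and not event.get(
            "required_revision", {}).get("summary"):
        errors.append("revise requires required_revision.summary")
    if event["decision"] == "escalate" and not event.get(
            "escalation", {}).get("required"):
        errors.append("escalate requires escalation.required = true")
    return errors
```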

## Write-Back Rules

OpenBrain should store compact judgment events by default, not raw transcripts.

A judgment event may include the action attempted, the risk class, the decision, the reason summary, the policy or memory used, the human correction if any, the reusable lesson, source references, future constraints, and review status.

Do not store model reasoning traces.

Do not store full tool arguments by default. Store arguments_digest and a controlled full_arguments_ref only when the runtime has an approved retention policy.

Do not let inferred memory become instruction-grade automatically.

A blocked action should be retrievable as evidence, but it should not become a permanent rule unless confirmed.

A human correction should enter the review queue with high priority because it is often the strongest signal for future judge behavior.
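Put together, the write-back rules reduce to building a compact event from the decision and proposal, never copying transcripts or full arguments, and flagging inferred or generated memory for review. The field names below follow the schemas above; the constructor itself is an illustrative sketch.

```python
# Sketch of compact judgment-event construction under the write-back rules:
# summaries and digests only, and inferred/generated memory candidates are
# flagged for review rather than written as instructions.
def build_judgment_event(decision: dict, proposal: dict) -> dict:
    event = {
        "action_id": proposal["action_id"],
        "risk_class": proposal["action"]["risk_class"],
        "decision": decision["decision"],
        "reason_summary": decision["reasoning_summary"],
        "arguments_digest": proposal["action"]["arguments_digest"],
        # deliberately absent: raw transcript, reasoning trace, full tool arguments
    }
    status = decision["provenance"]["default_status"]
    event["requires_review"] = status in ("inferred", "generated")
    return event
```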

## Provenance Labels

Use these labels exactly in V1:

```yaml
provenance_status:
  - observed
  - inferred
  - user_confirmed
  - imported
  - generated
  - superseded
  - disputed
```

Observed means the memory records a concrete event.

Inferred means an agent derived the memory from context.

User-confirmed means a human explicitly approved the memory.

Imported means it came from an external system.

Generated means it was produced by an agent or judge as a proposed lesson.

Superseded means a newer memory replaces it.

Disputed means another source or user correction conflicts with it.

## Memory Use Policies

![trust-ladder](https://promptkit.natebjones.com/api/assets/20260508_246_guide_main/files/e25f967b-7330-4139-ba72-76fede28b5ff)

Use these policies exactly in V1:

```yaml
memory_use_policy:
  - can_use_as_instruction
  - can_use_as_evidence
  - requires_confirmation
  - do_not_inject_automatically
```

can\_use\_as\_instruction should require human confirmation.

can\_use\_as\_evidence can include observed events, imported policies, and reviewed decision history.

requires\_confirmation should be the default for inferred or generated future-facing memories.

do\_not\_inject\_automatically should apply to sensitive, disputed, stale, or low-confidence memories.
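The trust ladder above can be restated as a default-assignment rule. This is an illustrative sketch: the 0.5 confidence cutoff is an assumed threshold, not a value from the spec.

```python
# Illustrative default use-policy assignment, restating the trust ladder:
# only user-confirmed memories reach instruction grade; inferred or generated
# memories default to requiring confirmation. The 0.5 threshold is an assumption.
def default_use_policy(provenance_status: str,
                       confidence: float,
                       disputed: bool) -> str:
    if disputed or confidence < 0.5:
        return "do_not_inject_automatically"
    if provenance_status == "user_confirmed":
        return "can_use_as_instruction"
    if provenance_status in ("inferred", "generated"):
        return "requires_confirmation"
    return "can_use_as_evidence"  # observed events, imports, reviewed history
```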

## Review Queue Behavior

Memories enter review when they are instruction-grade candidates; when they are inferred with high future impact; when confidence is low; when they touch permissions, legal, financial, customer communication, production systems, security, or personnel; or when they are disputed by another memory.

Review actions should include confirm, edit, mark as evidence only, restrict scope, mark stale, merge, reject, and escalate to admin.

The review queue must show the source event, the proposed memory, the provenance label, confidence, suggested use policy, affected scope, and examples of future actions it might influence.

![review-queue-flow](https://promptkit.natebjones.com/api/assets/20260508_246_guide_main/files/ecd8eedc-3bbc-42f3-ad8c-3e28af73c131)

The default review stance should be conservative. If the system is not sure whether a generated lesson should guide future behavior, it should remain evidence, not instruction.
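A subset of the review actions can be sketched as state transitions on a memory record. The action names mirror the list above; the field mutations are assumptions, and only "confirm" is a path to instruction grade, matching the conservative default.

```python
# Illustrative subset of review-queue actions. "confirm" is the only path
# to instruction grade; other actions keep or demote the memory.
def apply_review(memory: dict, action: str) -> dict:
    if action == "confirm":
        memory["provenance_status"] = "user_confirmed"
        memory["use_policy"] = "can_use_as_instruction"
    elif action == "mark_evidence_only":
        memory["use_policy"] = "can_use_as_evidence"
    elif action == "mark_stale":
        memory["stale"] = True
        memory["use_policy"] = "do_not_inject_automatically"
    elif action == "reject":
        memory["rejected"] = True
        memory["use_policy"] = "do_not_inject_automatically"
    else:
        raise ValueError(f"unhandled review action: {action}")
    return memory
```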

## Memory Inspector Behavior

The inspector should answer these questions:

Why does this memory exist?

Which action or judge decision created it?

What source did it come from?

Was it observed, inferred, generated, imported, confirmed, disputed, or superseded?

Which workflows retrieved it?

Was it used as instruction, evidence, or background?

Who confirmed, edited, rejected, or restricted it?

What future actions can it influence?

Which memories conflict with it?

When does it become stale?

This is required for trust. Without inspection, OpenBrain becomes hidden agent memory.

## Runtime Integration Plan

V1 should target OpenClaw.

OpenClaw should call judge.recall before judge decisions and judge.write_decision after outcomes. The first two examples should be Code Review Memory and TaskFlow Work Log.

For Code Review Memory, OpenBrain should recall repo rules, prior review corrections, known failure patterns, and maintainer preferences. The judge should decide whether a proposed code action or review comment should move forward, be revised, or escalate.

For TaskFlow Work Log, OpenBrain should recall task context, prior decisions, blockers, user-confirmed constraints, and unresolved questions. The judge should decide whether a handoff or tool action is supported by the current work record.

V1.5 should add OpenAI Agents SDK examples around tool guardrails. The SDK already has a natural boundary around function-tool invocation. OpenBrain can supply recall before the tool guardrail decision and write back the decision afterward.

V2 should support Gas Town and Gas City through beads, events, tool boundaries, handoffs, and convoy completion. Do not mirror their role taxonomy. Treat them as orchestration systems and attach to the points where work crosses boundaries.

V2 should support Thrum as a coordination source. Use handoffs, messages, queue state, and completion events as signals. Do not make Thrum the judge.

## V1 Scope

V1 should include:

POST /v1/judge/recall

POST /v1/judge/decisions

GET /v1/judge/decisions/{decision_id}

GET /v1/memories/{memory_id}/inspector

GET /v1/review-queue

POST /v1/review-queue/{item_id}/actions
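A recall call against the first endpoint might look like the following. The endpoint path comes from the list above; the host, payload fields, and lack of auth headers are assumptions, so treat this as a shape sketch rather than a working client.

```python
import json
import urllib.request

# Hypothetical request shape for POST /v1/judge/recall.
# Host and payload values are illustrative assumptions.
payload = {
    "schema_version": "openbrain.judge.recall.v1",
    "request_id": "req_1",
    "workspace_id": "ws_1",
    "action_id": "act_42",
    "query": {"summary": "send renewal email",
              "action_type": "external_side_effect"},
    "limits": {"max_items": 20, "max_tokens": 2000},
}

req = urllib.request.Request(
    "https://openbrain.example.com/v1/judge/recall",  # hypothetical host
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here.
```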

V1 should also include action proposal schema validation, decision schema validation, idempotency keys, basic sensitive-data filtering, provenance labels, use-policy labels, review queue for instruction-grade memories, inspector view, one OpenClaw Code Review Memory example, and one OpenClaw TaskFlow Work Log example.

Do not build full generalized policy authoring in V1. Use simple policy hits and imported policy references.

Do not build full multi-runtime support in V1. Keep the contract portable, but validate it through OpenClaw first.

Do not build automatic self-improving memory in V1. Generated memories can be proposed, but they must enter review before becoming instruction-grade.

## Acceptance Criteria

V1 is ready when a developer can wire an OpenClaw task to OpenBrain judge recall.

The judge receives scoped memories with provenance and use policy.

The runtime can write back allow, block, revise, and escalate decisions.

Human review is required before inferred or generated memories become future instructions.

The inspector shows what was recalled, what was used, what was decided, and what was written.

Re-running a similar workflow retrieves a prior confirmed lesson.

Secret-like data and raw transcript dumps are blocked by default.

The contract is not OpenClaw-specific and can be reused by another runtime later.

Decision write-back is idempotent.

Disputed and superseded memories are not injected automatically.

The review queue can downgrade a memory from instruction candidate to evidence only.

## Evaluation Plan

Track judge quality and memory quality separately.

Judge metrics:

false allows, false blocks, escalation rate, revision rate, human override rate, latency added, cost per judged action, incidents caught before execution, and incidents missed.

Recall metrics:

recall precision, recall usefulness, stale memory retrieval rate, disputed memory retrieval rate, policy-hit accuracy, duplicate memory rate, and percentage of judge decisions with sufficient evidence.

Memory governance metrics:

memories confirmed, memories rejected, memories downgraded to evidence, inferred memories prevented from becoming instructions, average review time, stale memories marked, and human correction rate.

The key product metric is not number of memories stored.

The key metric is whether future judges make better decisions because prior judgment became trustworthy memory.

## Risks And Mitigations

Risk: OpenBrain becomes an orchestrator.

Mitigation: keep runtime ownership outside OpenBrain. OpenBrain stores recall and decision events. Runtimes execute.

Risk: OpenBrain becomes a generic vector store.

Mitigation: require provenance, scope, use policy, freshness, and review status on returned memories.

Risk: inferred memories become hidden instructions.

Mitigation: default inferred and generated memories to requires\_confirmation or can\_use\_as\_evidence.

Risk: judge write-back stores sensitive data.

Mitigation: default to compact summaries, argument digests, source references, and sensitive-data filters. No raw transcripts by default.

Risk: the judge over-escalates and hurts product experience.

Mitigation: track escalation rate, revision rate, human override rate, and latency. Tune by action class.

Risk: runtime adapters become too specific.

Mitigation: keep contracts runtime-independent and isolate runtime-specific mapping in adapter packages.

Risk: stale memory affects future decisions.

Mitigation: include freshness fields, stale-after dates, supersession links, and inspector warnings.

## Recommended Implementation Phases

Phase 1: Contract and local harness.

Define schemas, validation, idempotency behavior, fixture events, and a local judge recall/write-back harness. Build golden tests for allow, block, revise, and escalate.
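The golden tests for the four outcomes can be organized as fixture-to-expectation pairs. The fixture names and the `judge_fn` interface here are assumptions for illustration; only the four outcomes come from the spec.

```python
# Sketch of a golden-test harness for the four decision outcomes, assuming a
# hypothetical judge_fn(fixture) -> decision-event interface.
GOLDEN = [
    ("send_email_with_confirmed_approval", "allow"),
    ("delete_prod_data_without_approval", "block"),
    ("email_missing_required_disclaimer", "revise"),
    ("spend_over_budget_threshold", "escalate"),
]

def run_golden(judge_fn, fixtures: dict) -> list[tuple]:
    """Return (name, expected, got) tuples for every mismatched fixture."""
    failures = []
    for name, expected in GOLDEN:
        got = judge_fn(fixtures[name])["decision"]
        if got != expected:
            failures.append((name, expected, got))
    return failures
```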

Phase 2: OpenBrain storage and inspector.

Implement judgment event storage, provenance labels, use policies, source references, review status, and inspector read paths.

Phase 3: OpenClaw adapter.

Wire OpenClaw Code Review Memory and TaskFlow Work Log to judge.recall and judge.write_decision. Keep adapter code thin and contract-driven.

Phase 4: Review queue.

Add review actions, instruction-grade gating, scope restriction, stale marking, dispute marking, and human confirmation flow.

Phase 5: Evaluation.

Add eval fixtures for false allows, false blocks, revision quality, escalation appropriateness, recall precision, and memory governance.

Phase 6: Second runtime example.

Add an OpenAI Agents SDK tool-guardrail example to prove portability beyond OpenClaw.

## Bottom Line

OpenBrain Judge Extender should make judgment durable.

The runtime still owns execution. The orchestrator still owns work routing. The judge still owns the decision. Human review still owns approval and correction.

OpenBrain owns continuity: what the judge should know before deciding, what happened after the decision, and which memories are trustworthy enough to shape future behavior.

That is the extension worth building.
