Enforcement (Tool Boundary)

How to enforce allow/block/modify decisions at safe boundaries.

Enforcement is the core of Njira's safety guarantees. This guide explains where and how to apply policy decisions.

Enforcement Points

The recommended approach is tool-boundary enforcement, where you check inputs before a tool runs and outputs after it completes.

┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│  enforcePre │ ───▶ │  Tool Call  │ ───▶ │ enforcePost │
│  (input)    │      │  (execute)  │      │  (output)   │
└─────────────┘      └─────────────┘      └─────────────┘

Verdicts

Every enforcement call returns one of three verdicts:

Verdict   Meaning                     Action Required
allow     Safe to proceed             None
block     Unsafe, do not proceed      Return error or fallback
modify    Safe after transformation   Use modifiedInput or modifiedOutput
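
The verdict handling in the table above can be captured in a small pure helper. This is a sketch, not part of the SDK: the Verdict type and result shape mirror the fields used in the examples below (verdict, modifiedInput, reasons), and resolveInput is a hypothetical name.

```typescript
// Sketch: resolving an enforcement result into an effective input.
// Shapes are assumed from the documented fields, not from the SDK itself.
type Verdict = "allow" | "block" | "modify";

interface EnforceResult<T> {
  verdict: Verdict;
  modifiedInput?: T;
  reasons?: { message: string }[];
}

function resolveInput<T>(result: EnforceResult<T>, original: T): T {
  if (result.verdict === "block") {
    // Block: do not proceed; surface the first reason if present.
    throw new Error(`Blocked: ${result.reasons?.[0]?.message ?? "policy violation"}`);
  }
  // Modify: prefer the transformed input; otherwise pass the original through.
  return result.verdict === "modify" && result.modifiedInput !== undefined
    ? result.modifiedInput
    : original;
}
```

The same pattern applies symmetrically to outputs with modifiedOutput.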

TypeScript Example

import { NjiraAI } from "@njiraai/sdk";

const njira = new NjiraAI({ apiKey: "...", projectId: "...", mode: "active" });

async function safeTool(toolName: string, toolInput: any) {
  // Step 1: Pre-enforcement
  const pre = await njira.enforcePre({ 
    input: toolInput, 
    metadata: { tool: toolName } 
  });

  if (pre.verdict === "block") {
    throw new Error(`Blocked: ${pre.reasons[0]?.message}`);
  }

  // Step 2: Use modified input if provided
  const effectiveInput = pre.verdict === "modify" 
    ? pre.modifiedInput 
    : toolInput;

  // Step 3: Execute the tool
  const output = await executeTool(toolName, effectiveInput);

  // Step 4: Post-enforcement
  const post = await njira.enforcePost({ 
    output, 
    metadata: { tool: toolName } 
  });

  if (post.verdict === "block") {
    throw new Error(`Output blocked: ${post.reasons[0]?.message}`);
  }

  // Step 5: Return modified output if provided
  return post.verdict === "modify" ? post.modifiedOutput : output;
}

Python Example

from njiraai import NjiraAI

njira = NjiraAI(api_key="...", project_id="...", mode="active")

async def safe_tool(tool_name: str, tool_input: dict):
    # Step 1: Pre-enforcement
    pre = await njira.enforce_pre(
        input_data=tool_input, 
        metadata={"tool": tool_name}
    )

    if pre["verdict"] == "block":
        raise RuntimeError(f"Blocked: {pre['reasons'][0]['message']}")

    # Step 2: Use modified input if provided
    effective_input = pre.get("modifiedInput", tool_input)

    # Step 3: Execute the tool
    output = await execute_tool(tool_name, effective_input)

    # Step 4: Post-enforcement
    post = await njira.enforce_post(
        output=output, 
        metadata={"tool": tool_name}
    )

    if post["verdict"] == "block":
        raise RuntimeError(f"Output blocked: {post['reasons'][0]['message']}")

    # Step 5: Return modified output if provided
    return post.get("modifiedOutput", output)

Metadata Best Practices

Always include structured metadata for better audit trails:

{
  tool: "web_search",        // Tool identifier
  agent: "research_agent",   // Agent name
  task: "find_competitors",  // High-level task
  orgId: "org_123",          // Tenant
  userId: "user_456",        // End user
  requestId: "req_789"       // Request correlation
}
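
One way to keep these fields consistent is to stamp shared request context onto each call's metadata. The helper below is a hypothetical sketch, not an SDK feature; the field names come from the example above.

```typescript
// Sketch: merge per-request context into per-call metadata so every
// enforcePre/enforcePost call carries the same correlation fields.
interface RequestContext {
  orgId: string;
  userId: string;
  requestId: string;
}

interface CallInfo {
  tool: string;
  agent?: string;
  task?: string;
}

function withContext(ctx: RequestContext, call: CallInfo): Record<string, string | undefined> {
  // Per-call fields take precedence if a key ever overlaps.
  return { ...ctx, ...call };
}
```

Build the context once per request, then reuse it for every enforcement call in that request.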

Failure Strategy

Configure what happens when Njira is unavailable:

Strategy      Behavior
fail_open     Allow the call; record NJIRA_UNAVAILABLE
fail_closed   Block the call; return error

See Modes and Failure Behavior for details.
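
If you wrap enforcement calls yourself, the two strategies can be sketched like this. This is illustrative only: enforceWithStrategy is a hypothetical helper, and the returned degraded field is an assumed way to record NJIRA_UNAVAILABLE, not a documented SDK field.

```typescript
// Sketch: applying a failure strategy when the enforcement service is down.
type FailureStrategy = "fail_open" | "fail_closed";

async function enforceWithStrategy(
  enforce: () => Promise<{ verdict: string }>,
  strategy: FailureStrategy
): Promise<{ verdict: string; degraded?: string }> {
  try {
    return await enforce();
  } catch {
    if (strategy === "fail_open") {
      // Allow the call, but record that enforcement was skipped.
      return { verdict: "allow", degraded: "NJIRA_UNAVAILABLE" };
    }
    // fail_closed: treat the outage as a block.
    return { verdict: "block", degraded: "NJIRA_UNAVAILABLE" };
  }
}
```
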

Shadow Mode

In shadow mode, enforcePre and enforcePost still compute verdicts, but:

  • block verdicts are logged but not enforced
  • modify transformations are logged but not applied

Use this for safe production rollout before switching to active mode.
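
The difference between shadow and active mode can be sketched in wrapper code. This hypothetical applyVerdict helper is not part of the SDK; it only illustrates that in shadow mode a verdict is logged rather than enforced.

```typescript
// Sketch: enforcing a post-verdict in active mode vs. logging it in shadow mode.
function applyVerdict<T>(
  mode: "active" | "shadow",
  result: { verdict: string; modifiedOutput?: T },
  output: T
): T {
  if (mode === "shadow") {
    if (result.verdict !== "allow") {
      // Shadow: log the would-be action; neither block nor modify is applied.
      console.warn(`[shadow] would ${result.verdict} this output`);
    }
    return output;
  }
  // Active: block and modify verdicts are enforced.
  if (result.verdict === "block") {
    throw new Error("Output blocked");
  }
  return result.verdict === "modify" && result.modifiedOutput !== undefined
    ? result.modifiedOutput
    : output;
}
```

This makes shadow-mode rollouts easy to verify: the logs show exactly which calls would have been blocked or modified once you switch to active.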