Core concepts

The shared model across the TypeScript and Python SDKs.

Architecture models

NjiraAI supports two primary integration models:

  1. Transparent Gateway (Proxy)

    • Best for: Remote agents, standard LLM clients (OpenAI/Anthropic).
    • How it works: You change your baseURL to the Njira Gateway. Njira acts as middleware, enforcing policies on the fly (see the sketch after this list).
    • Pros: Zero code changes to agent logic; language-agnostic.
  2. SDK Client (Deep Integration)

    • Best for: Complex workflows, specific enforcement points (e.g., tool inputs/outputs), tracing internal spans.
    • How it works: Import the NjiraAI client and manually call enforce() or trace().
    • Pros: Granular control, custom span attributes, programmatic handling of verdicts.
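
For the gateway model, a minimal sketch using the official OpenAI Node SDK. The gateway URL and the X-Njira-Key header below are placeholders, not documented endpoints; use the values from your Njira project settings.

```ts
import OpenAI from "openai";

// Point the standard OpenAI client at the Njira Gateway instead of api.openai.com.
// baseURL and the auth header are illustrative placeholders.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://gateway.example-njira.dev/v1",          // hypothetical gateway endpoint
  defaultHeaders: {
    "X-Njira-Key": process.env.NJIRA_API_KEY ?? "",         // hypothetical auth header
  },
});

// Agent logic is unchanged; policies are enforced by the gateway in-flight.
const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarize today's tickets." }],
});
```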

NjiraAI Client (SDK)

If you choose the SDK route, both SDKs expose a single client:

  • TypeScript: new NjiraAI({ ... }) from @njiraai/sdk
  • Python: NjiraAI(...) from njiraai

The client provides:

  • Enforcement: enforcePre, enforcePost, and enforce(stage=pre|post)
  • Tracing: spans, events, and flush()
  • Context: request/trace correlation across async boundaries
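
A TypeScript sketch of that surface. The method names come from this page; the constructor options (apiKey, project) and the argument shapes are assumptions.

```ts
import { NjiraAI } from "@njiraai/sdk";

// Constructor options shown here are illustrative; use your own configuration.
const njira = new NjiraAI({
  apiKey: process.env.NJIRA_API_KEY,
  project: "support-agent",                    // hypothetical project identifier
});

// Stage-specific helpers, or the generic form with an explicit stage.
const decision = await njira.enforcePre({ tool: "send_email", input: { to: "user@example.com" } });
// Equivalent generic call (argument shape is an assumption):
// const decision = await njira.enforce({ stage: "pre", tool: "send_email", input: { ... } });

// Tracing and context helpers hang off the same client.
njira.trace.event("agent.started", { decisionId: decision.decisionId });
await njira.trace.flush();
```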

Enforcement decisions

Every enforcement call returns a normalized decision shape:

  • verdict: allow | block | modify
  • reasons: array of { code, message, data? }
  • traceId, decisionId
  • modifiedInput? or modifiedOutput? when verdict === "modify"
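
Expressed as a TypeScript type for reference. The field names are taken from the list above; the type and interface names themselves are illustrative.

```ts
type Verdict = "allow" | "block" | "modify";

interface EnforcementDecision {
  verdict: Verdict;
  reasons: Array<{ code: string; message: string; data?: unknown }>;
  traceId: string;
  decisionId: string;
  // Present only when verdict === "modify":
  modifiedInput?: unknown;
  modifiedOutput?: unknown;
}
```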

Safe boundaries (v0)

In v0, the recommended enforcement boundary is tool calls:

  1. enforcePre() before executing the tool
  2. enforcePost() after the tool returns

This minimizes breaking changes and keeps enforcement deterministic.
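
A sketch of that boundary around a single tool call, assuming a njira client like the one above; executeTool and the payload fields are illustrative.

```ts
// Wraps your existing tool logic with the v0 enforcement boundary.
async function runToolWithEnforcement(name: string, input: unknown) {
  const pre = await njira.enforcePre({ tool: name, input });
  if (pre.verdict === "block") {
    throw new Error(`Tool ${name} blocked: ${pre.reasons.map(r => r.code).join(", ")}`);
  }
  const effectiveInput = pre.verdict === "modify" ? pre.modifiedInput : input;

  const output = await executeTool(name, effectiveInput);   // your existing tool logic

  const post = await njira.enforcePost({ tool: name, input: effectiveInput, output });
  if (post.verdict === "block") {
    throw new Error(`Tool ${name} output blocked: ${post.reasons.map(r => r.code).join(", ")}`);
  }
  return post.verdict === "modify" ? post.modifiedOutput : output;
}
```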

Traces and spans

Tracing is span-based:

  • trace.startSpan({ name, type, parentId?, input?, tags? })
  • trace.endSpan(spanId, { output?, metrics? })
  • trace.error(spanId, err)
  • trace.event(name, payload)
  • trace.flush() (important in serverless environments, where buffered spans can otherwise be lost when the function returns)

Span type values used in integrations:

  • llm, tool, chain, retriever (plus custom)
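
Putting the tracing methods and type values together, a sketch of a span lifecycle around an LLM call. The span object's id field, generateReply, and the metric keys are assumptions.

```ts
const span = njira.trace.startSpan({
  name: "draft-reply",
  type: "llm",                         // one of: llm, tool, chain, retriever, or a custom value
  input: { prompt: "Draft a reply to the customer." },
  tags: { channel: "email" },
});

try {
  const reply = await generateReply(); // hypothetical LLM call
  njira.trace.endSpan(span.id, {
    output: { reply },
    metrics: { tokens: 512 },          // metric keys are illustrative
  });
} catch (err) {
  njira.trace.error(span.id, err);
  throw err;
} finally {
  // In serverless environments, flush before the handler returns so spans are not lost.
  await njira.trace.flush();
}
```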