Core concepts
The shared model across the TypeScript and Python SDKs.
Architecture Models
NjiraAI supports two primary integration models:
- Transparent Gateway (Proxy)
  - Best for: remote agents, standard LLM clients (OpenAI/Anthropic).
  - How it works: you change your `baseURL` to the Njira Gateway. Njira acts as middleware, enforcing policies on the fly.
  - Pros: zero code changes to agent logic; language agnostic.
- SDK Client (Deep Integration)
  - Best for: complex workflows, specific enforcement points (e.g., tool inputs/outputs), tracing internal spans.
  - How it works: import the `NjiraAI` client and manually call `enforce()` or `trace()`.
  - Pros: granular control, custom span attributes, programmatic handling of verdicts.
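In the gateway model, the only change is the client's base URL. A minimal sketch, assuming a hypothetical gateway endpoint (substitute your actual deployment's URL):

```typescript
import OpenAI from "openai";

// Hypothetical gateway URL -- replace with your actual Njira Gateway endpoint.
// Everything else about the client and its calls stays the same.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://gateway.njira.example/v1", // was: https://api.openai.com/v1
});
```

Because enforcement happens in transit, this works from any language with an OpenAI-compatible client, not just TypeScript.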
NjiraAI Client (SDK)
If you choose the SDK route, both SDKs expose a single client:
- TypeScript: `new NjiraAI({ ... })` from `@njiraai/sdk`
- Python: `NjiraAI(...)` from `njiraai`
The client provides:
- Enforcement: `enforcePre`, `enforcePost`, and `enforce(stage=pre|post)`
- Tracing: spans, events, and `flush()`
- Context: request/trace correlation across async boundaries
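That surface can be sketched as a TypeScript interface. This is an illustration of the concepts above, not the SDK's actual type definitions; consult the SDK's own typings for exact signatures:

```typescript
type Stage = "pre" | "post";

interface Decision {
  verdict: "allow" | "block" | "modify";
}

// Illustrative surface only -- names follow the capabilities listed above.
interface NjiraClient {
  enforcePre(payload: unknown): Promise<Decision>;
  enforcePost(payload: unknown): Promise<Decision>;
  enforce(stage: Stage, payload: unknown): Promise<Decision>;
  trace: {
    startSpan(opts: { name: string }): string;
    endSpan(spanId: string, opts?: { output?: unknown }): void;
    event(name: string, payload?: unknown): void;
    flush(): Promise<void>;
  };
}

// A no-op implementation conforming to the sketch, useful for tests.
const noop: NjiraClient = {
  async enforcePre() { return { verdict: "allow" }; },
  async enforcePost() { return { verdict: "allow" }; },
  async enforce() { return { verdict: "allow" }; },
  trace: {
    startSpan: () => "span-1",
    endSpan: () => {},
    event: () => {},
    flush: async () => {},
  },
};
```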
Enforcement decisions
Every enforcement call returns a normalized decision shape:
- `verdict`: `allow | block | modify`
- `reasons`: array of `{ code, message, data? }`
- `traceId`, `decisionId`
- `modifiedInput?` or `modifiedOutput?` when `verdict === "modify"`
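The decision shape can be modeled as a type, with a small handler showing how each verdict is typically consumed. The `effectiveInput` helper and the sample reason code are illustrative, not part of the SDK:

```typescript
type Verdict = "allow" | "block" | "modify";

interface Reason { code: string; message: string; data?: unknown; }

interface Decision {
  verdict: Verdict;
  reasons: Reason[];
  traceId: string;
  decisionId: string;
  modifiedInput?: unknown;
  modifiedOutput?: unknown;
}

// Illustrative helper: resolve the effective input after a pre-enforcement decision.
function effectiveInput(decision: Decision, original: unknown): unknown {
  switch (decision.verdict) {
    case "allow":
      return original;
    case "modify":
      return decision.modifiedInput; // present when verdict === "modify"
    case "block":
      throw new Error(decision.reasons.map((r) => r.code).join(", ") || "blocked");
  }
}

const allow: Decision = { verdict: "allow", reasons: [], traceId: "t1", decisionId: "d1" };
const modify: Decision = {
  verdict: "modify",
  reasons: [{ code: "PII_REDACTED", message: "input contained PII" }],
  traceId: "t1",
  decisionId: "d2",
  modifiedInput: "[REDACTED]",
};
```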
Safe boundaries (v0)
In v0, the recommended enforcement boundary is tool calls:
- `enforcePre()` before executing the tool
- `enforcePost()` after the tool returns
This minimizes breaking changes and keeps enforcement deterministic.
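Wrapping a tool call at this boundary looks roughly like the sketch below. `enforcePre`/`enforcePost` are stubbed here because the real implementations come from the client; their exact signatures are an assumption:

```typescript
type Verdict = "allow" | "block" | "modify";
interface Decision { verdict: Verdict; modifiedInput?: unknown; modifiedOutput?: unknown; }

// Stubs standing in for the SDK's enforcement calls (assumed signatures:
// the tool name plus the payload being checked).
async function enforcePre(_tool: string, _input: unknown): Promise<Decision> {
  return { verdict: "allow" };
}
async function enforcePost(_tool: string, _output: unknown): Promise<Decision> {
  return { verdict: "allow" };
}

// Illustrative wrapper: enforce before and after a tool call,
// honoring block and modify verdicts at each boundary.
async function guardedToolCall(
  tool: string,
  input: unknown,
  run: (input: unknown) => Promise<unknown>,
): Promise<unknown> {
  const pre = await enforcePre(tool, input);
  if (pre.verdict === "block") throw new Error(`blocked tool input: ${tool}`);
  const effectiveIn = pre.verdict === "modify" ? pre.modifiedInput : input;

  const output = await run(effectiveIn);

  const post = await enforcePost(tool, output);
  if (post.verdict === "block") throw new Error(`blocked tool output: ${tool}`);
  return post.verdict === "modify" ? post.modifiedOutput : output;
}
```

Keeping enforcement at this single seam means the tool itself never needs to know about policies.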
Traces and spans
Tracing is span-based:
- `trace.startSpan({ name, type, parentId?, input?, tags? })`
- `trace.endSpan(spanId, { output?, metrics? })`
- `trace.error(spanId, err)`
- `trace.event(name, payload)`
- `trace.flush()` (important for serverless)
Span `type` values in integrations: `llm`, `tool`, `chain`, `retriever` (plus `custom`).
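As an illustration of this lifecycle only (not the SDK's implementation), a minimal in-memory tracer might look like:

```typescript
type SpanType = "llm" | "tool" | "chain" | "retriever" | "custom";

interface Span {
  id: string;
  name: string;
  type: SpanType;
  parentId?: string;
  input?: unknown;
  output?: unknown;
  tags?: string[];
  error?: string;
  endedAt?: number;
}

class InMemoryTracer {
  private spans = new Map<string, Span>();
  private nextId = 0;

  startSpan(opts: {
    name: string; type: SpanType; parentId?: string; input?: unknown; tags?: string[];
  }): string {
    const id = `span-${++this.nextId}`;
    this.spans.set(id, { id, ...opts });
    return id;
  }

  endSpan(spanId: string, opts: { output?: unknown } = {}): void {
    const span = this.spans.get(spanId);
    if (span) Object.assign(span, opts, { endedAt: Date.now() });
  }

  error(spanId: string, err: Error): void {
    const span = this.spans.get(spanId);
    if (span) span.error = err.message;
  }

  // In the real SDK, flush() ships buffered spans before the process exits --
  // critical on serverless, where the runtime may freeze right after the response.
  async flush(): Promise<Span[]> {
    return [...this.spans.values()];
  }
}
```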