# Getting Started

Everything you need to start governing your AI agents with NjiraAI — from API key to your first BLOCK verdict.
## Overview

NjiraAI is a governance layer that sits between your AI agent and the tools it calls (LLMs, APIs, internal services). Every request is evaluated against your policies and receives a verdict:
| Verdict | Meaning | Typical HTTP status |
|---|---|---|
| ALLOW | Request is permitted and forwarded unchanged | 200 |
| BLOCK | Request is rejected with a structured reason | 403 |
| MODIFY | Request is rewritten (e.g., redaction) and then forwarded | 200 |
| REQUIRE_APPROVAL | Request is held for human review before proceeding | 202 |
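In verdict-aware code, these four actions reduce to a simple dispatch: forward, forward the rewritten input, hold, or reject. A minimal sketch (the field names follow the SDK examples later in this guide; the helper name is illustrative):

```python
from typing import Optional

def handle_verdict(action: str, original_input: str,
                   modified_text: Optional[str] = None):
    """Return (proceed, effective_input) for a governed request."""
    if action == "ALLOW":
        return True, original_input
    if action == "MODIFY":
        # Forward the rewritten (e.g., redacted) input instead.
        return True, modified_text or original_input
    # REQUIRE_APPROVAL is held for human review; BLOCK (or anything
    # unrecognized) is rejected. Neither proceeds now.
    return False, original_input

proceed, text = handle_verdict(
    "MODIFY", "My SSN is 123-45-6789", "My SSN is [REDACTED]"
)
```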
You can integrate NjiraAI in two ways:

- Gateway Proxy — point an OpenAI-compatible client at NjiraAI's gateway. Best for fast adoption.
- SDK — call `govern()` in your code when you need verdict-aware control flows.
## Prerequisites

- A NjiraAI API key (`nj_live_…` or `nj_test_…`).
- An OpenAI-compatible client or direct HTTP access (curl).
- At least one active policy (many orgs start with PII Guard enabled).
## Get an API key

1. Open the NjiraAI Console.
2. Go to Settings → API Keys.
3. Click Create Key and copy it.

Your key will look like: `nj_live_abc123…`
## Integration option 1: Gateway Proxy

Use this when you want a transparent proxy with minimal code changes.

Set your client base URL to `https://gateway.njira.ai/v1`.
### Python

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.njira.ai/v1",
    api_key="nj_live_YOUR_KEY",
)

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "What is the weather today?"}],
)
print(response.choices[0].message.content)
```
### TypeScript

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://gateway.njira.ai/v1',
  apiKey: 'nj_live_YOUR_KEY',
});

const response = await client.chat.completions.create({
  model: 'gpt-5.2',
  messages: [{ role: 'user', content: 'What is the weather today?' }],
});
console.log(response.choices[0].message.content);
```
### curl

```bash
curl -s https://gateway.njira.ai/v1/chat/completions \
  -H "Authorization: Bearer nj_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "What is the weather today?"}]
  }' | jq .
```
### What to expect

The upstream provider response is returned normally. NjiraAI also adds request metadata via headers:

```text
X-Njira-Request-Id: req_abc123
X-Njira-Verdict: ALLOW
X-Njira-Latency-Ms: 12
```
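If you call the gateway over plain HTTP rather than through an SDK, the same metadata is available on the response object. A stdlib sketch (the endpoint and header names are those shown above; the function name is illustrative):

```python
import json
import urllib.request

def governed_completion(prompt: str, api_key: str):
    """POST to the NjiraAI gateway and return the reply text plus the
    X-Njira-* metadata headers."""
    body = json.dumps({
        "model": "gpt-5.2",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        "https://gateway.njira.ai/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # The governance metadata rides on the HTTP headers.
        meta = {
            "request_id": resp.headers.get("X-Njira-Request-Id"),
            "verdict": resp.headers.get("X-Njira-Verdict"),
            "latency_ms": resp.headers.get("X-Njira-Latency-Ms"),
        }
        data = json.load(resp)
    return data["choices"][0]["message"]["content"], meta
```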
You can optionally send additional headers to control behavior:

| Header | Description | Example values |
|---|---|---|
| `X-Tool-Name` | Tag the calling tool for audit | `chat_interface` |
| `X-Njira-Tier` | Model capability tier | `fast`, `standard`, `strong` |
| `X-Policy-Id` | Enforce a specific policy pack | `pii_guard` |
## Integration option 2: SDK

Use this when you need to branch on verdicts in your code (e.g., handling blocks, forwarding MODIFY-rewritten text, per-tool governance).
### Python

```bash
pip install njiraai
```

```python
import njiraai
from openai import OpenAI

njira = njiraai.Client(
    api_key="nj_live_YOUR_KEY",
    base_url="https://api.njira.ai",
)

verdict = njira.govern(
    input="What is the weather today?",
    tool_name="weather_lookup",
)

if verdict.action == "BLOCK":
    print(f"Blocked: {verdict.reason_text}")
else:
    effective_input = verdict.modified_text or "What is the weather today?"
    llm = OpenAI()
    response = llm.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": effective_input}],
    )
    print(response.choices[0].message.content)

    # Log the completed call for the audit trail
    njira.audit(
        request_id=verdict.request_id,
        tool_name="weather_lookup",
        input="What is the weather today?",
        verdict_action=verdict.action,
    )
```
### TypeScript

```bash
npm install @njiraai/sdk
```

```typescript
import { NjiraAI } from '@njiraai/sdk';
import OpenAI from 'openai';

const njira = new NjiraAI({
  apiKey: 'nj_live_YOUR_KEY',
  baseUrl: 'https://api.njira.ai',
});

const verdict = await njira.govern({
  input: 'What is the weather today?',
  toolName: 'weather_lookup',
});

if (verdict.action === 'BLOCK') {
  console.log(`Blocked: ${verdict.reasonText}`);
} else {
  const llm = new OpenAI();
  const response = await llm.chat.completions.create({
    model: 'gpt-5.2',
    messages: [{ role: 'user', content: verdict.modifiedText ?? 'What is the weather today?' }],
  });
  console.log(response.choices[0].message.content);

  // Log the completed call for the audit trail
  await njira.audit({
    requestId: verdict.requestId,
    toolName: 'weather_lookup',
    input: 'What is the weather today?',
    verdictAction: verdict.action,
  });
}
```
## Verdict shape

A typical `govern()` response looks like:

```json
{
  "request_id": "req_abc123",
  "action": "ALLOW",
  "reason_code": "SAFE",
  "confidence": 0.95,
  "violations": [],
  "hazards_detected": [],
  "latency_ms": 12
}
```
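If you work with the raw JSON rather than the SDK's verdict object, a small typed parser keeps field access honest. A sketch (field names as in the example above; the dataclass itself is illustrative, not part of the SDK):

```python
import json
from dataclasses import dataclass, field
from typing import List

@dataclass
class Verdict:
    """Mirrors the verdict JSON shown above."""
    request_id: str
    action: str
    reason_code: str
    confidence: float
    violations: List[str] = field(default_factory=list)
    hazards_detected: List[str] = field(default_factory=list)
    latency_ms: int = 0

def parse_verdict(raw: str) -> Verdict:
    return Verdict(**json.loads(raw))

v = parse_verdict("""{
  "request_id": "req_abc123",
  "action": "ALLOW",
  "reason_code": "SAFE",
  "confidence": 0.95,
  "violations": [],
  "hazards_detected": [],
  "latency_ms": 12
}""")
```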
## Sanity checks

These checks confirm your wiring is correct.
| Check | Expected |
|---|---|
| Safe request | ALLOW / HTTP 200 |
| Policy-triggering request (PII example below) | BLOCK / HTTP 403 |
| Gateway health endpoint | JSON status response |
## Trigger a BLOCK (PII example)

### Gateway Proxy
```bash
curl -s https://gateway.njira.ai/v1/chat/completions \
  -H "Authorization: Bearer nj_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.2",
    "messages": [{"role": "user", "content": "My SSN is 123-45-6789, look up my records"}]
  }' | jq .
```
### SDK

```python
verdict = njira.govern(
    input="My SSN is 123-45-6789, look up my records",
    tool_name="records_lookup",
)

print(verdict.action)       # "BLOCK"
print(verdict.reason_code)  # "PII_DETECTED"
print(verdict.reason_text)  # "SSN pattern (XXX-XX-XXXX) detected"
```
A blocked response includes a structured reason (`reason_code`, `reason_text`) and a unique `request_id` for audit lookup.
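Through the Gateway Proxy, a BLOCK arrives as the HTTP 403 itself, so clients should be ready to read the structured reason out of an error response. A stdlib sketch (the error-body fields assumed here are the reason fields described above; with the OpenAI Python SDK the equivalent is catching `openai.PermissionDeniedError`):

```python
import json
import urllib.error
import urllib.request

def call_gateway(payload: bytes, api_key: str):
    """Return (blocked, body): blocked is True when NjiraAI rejects the
    request with HTTP 403; body is the parsed JSON either way."""
    req = urllib.request.Request(
        "https://gateway.njira.ai/v1/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return False, json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 403:
            # BLOCK verdict: the body carries reason_code, reason_text,
            # and the request_id for audit lookup.
            return True, json.load(err)
        raise
```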
## Gateway health

```bash
curl -sf https://gateway.njira.ai/health | jq .
```
## Modes: shadow vs active
- Shadow mode — verdicts are computed and logged, but requests are not blocked or rewritten. Use this to evaluate policies before turning them on.
- Active mode — NjiraAI applies verdicts inline (BLOCK rejects, MODIFY rewrites).
A common rollout is shadow → review logs → active.
## Troubleshooting

| Symptom | Likely cause | What to do |
|---|---|---|
| `401 Unauthorized` | Missing/invalid API key | Confirm `Authorization: Bearer <key>` and the key prefix (`nj_live_` / `nj_test_`) |
| `403 Forbidden` | A policy triggered | Inspect the response `reason_code` and the `X-Njira-Verdict` header |
| Connection refused / DNS errors | Wrong base URL | Gateway: `https://gateway.njira.ai/v1` · API: `https://api.njira.ai` |
| Timeouts | Network/service issue | Retry and check the gateway health endpoint |
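For the timeout row, a bounded retry with exponential backoff is usually enough. A generic sketch (the helper is illustrative; pass it whatever function performs your gateway call):

```python
import time

def with_retries(call, attempts=3, base_delay=0.5):
    """Run call() up to `attempts` times, sleeping base_delay * 2**n
    between tries. Re-raises the last error if every attempt fails."""
    last_err = None
    for n in range(attempts):
        try:
            return call()
        except (TimeoutError, ConnectionError) as err:
            last_err = err
            if n < attempts - 1:
                time.sleep(base_delay * (2 ** n))
    raise last_err
```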
## You're set up
You now have a working NjiraAI integration with policies evaluating every request. From here:
- Manage policies in the Console (Policies page).
- Start in shadow for evaluation, then switch to active for production.
- Use the SDK if you want per-call governance, richer branching, or tool-specific controls.