LangChain Python Basic

Trace LangChain Python with NjiraAI callbacks and enforce at tool boundaries.

This example shows how to trace LangChain Python runs automatically with NjiraCallbackHandler and wrap tools with wrap_tool so calls are enforced at the tool boundary.

Full code

import asyncio
import os

from langchain_openai import ChatOpenAI

from njiraai import NjiraAI
from njiraai_langchain import NjiraCallbackHandler, wrap_tool


async def main():
    njira = NjiraAI(
        api_key=os.environ["NJIRA_API_KEY"],
        project_id=os.environ["NJIRA_PROJECT_ID"],
        mode="active",
    )

    # Create callback handler for automatic tracing
    handler = NjiraCallbackHandler(njira)

    # Initialize LLM
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

    # Example tool
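    # NOTE: eval() below is for demonstration only; never evaluate
    # untrusted input this way in production code.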
    def calculator(expression: str) -> str:
        try:
            return str(eval(expression))
        except Exception:
            return "Error: invalid expression"

    # Wrap tool for enforcement
    safe_calculator = wrap_tool(
        calculator,
        njira,
        tool_name="calculator",
        description="Evaluates math expressions",
    )

    # Traced LLM call with callback
    response = await llm.ainvoke("What is 2 + 2?", config={"callbacks": [handler]})
    print(f"LLM response: {response.content}")

    # Enforced tool call
    result = await safe_calculator("2 + 2")
    print(f"Calculator result: {result}")

    # Flush traces
    await njira.flush()


if __name__ == "__main__":
    asyncio.run(main())

Run it

cd sdks/python/examples/langchain-basic
uv sync
NJIRA_API_KEY=your-key NJIRA_PROJECT_ID=your-project OPENAI_API_KEY=your-openai-key uv run python main.py

What's happening

  1. NjiraCallbackHandler automatically captures spans for LLM calls, chains, and tool invocations (see the chain sketch below)
  2. wrap_tool adds pre- and post-execution enforcement around the tool call (see the blocked-call sketch below)
  3. Traces are flushed at the end to ensure delivery (see the try/finally sketch below)
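
The same handler traces composed chains, not just single LLM calls. A minimal sketch using standard LangChain LCEL; the prompt and chain here are illustrative, and only llm and handler come from the full code above:

from langchain_core.prompts import ChatPromptTemplate

# Inside main() from the full code, after llm and handler are created:
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | llm

summary = await chain.ainvoke(
    {"text": "LangChain composes prompts, models, and tools."},
    config={"callbacks": [handler]},  # every step of the chain is traced
)
print(summary.content)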
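
In active mode, enforcement may block a tool call outright. How a blocked call surfaces is SDK-specific; the sketch below assumes, purely for illustration, a hypothetical NjiraPolicyError raised by the wrapped tool:

from njiraai import NjiraPolicyError  # hypothetical error type, for illustration

try:
    result = await safe_calculator("__import__('os').remove('data.db')")
except NjiraPolicyError as exc:
    # The call never reached the underlying tool; degrade gracefully.
    print(f"Tool call blocked by policy: {exc}")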
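
Because flush() is what guarantees delivery, it is worth calling even when the run fails partway. Plain Python try/finally covers this; nothing here is assumed beyond flush() itself:

async def main():
    njira = NjiraAI(
        api_key=os.environ["NJIRA_API_KEY"],
        project_id=os.environ["NJIRA_PROJECT_ID"],
        mode="active",
    )
    try:
        ...  # build the handler, LLM, and wrapped tools as in the full code
    finally:
        # Runs on success and on error, so buffered spans are not lost.
        await njira.flush()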