LangGraph: LLM Tracekit

OpenTelemetry instrumentation for LangGraph, focused on span structure and node attributes for graph runs. Use it together with LangChain, OpenAI, or other LLM instrumentors for full observability.

Span structure

Each graph invocation produces three levels of spans:

  1. Graph span — One per graph invocation. Starts when execution leaves START and ends when it reaches END. Span name: LangGraph.
  2. Node spans — One per graph node execution, as children of the graph span. Span name: LangGraph Node <node_name>. Each node span carries two attributes: the node name (gen_ai.langgraph.node) and the step number (gen_ai.langgraph.step, when provided by LangGraph). The node span is the active span while the node runs, so any LLM calls made inside the node are traced by other instrumentors as children of that node span. Tool-only nodes (nodes that run tools without calling an LLM) get a node span with no LLM child spans.
  3. LLM spans — Created by other instrumentors (LangChain, OpenAI, Gemini, etc.) when a node calls an LLM. They appear as children of the corresponding node span.

Resulting trace: LangGraph → LangGraph Node <name> → chat or completion (from LangChain, OpenAI, etc.) for nodes that run an LLM. Tool-only nodes appear as LangGraph Node <name> with no child spans.
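The nesting above follows from the active-span rule: while a node runs, its span is current, so any span started inside it becomes a child. A minimal sketch of that mechanism in plain Python (a toy tracer for illustration only, not the real OpenTelemetry API):

```python
from contextlib import contextmanager

class ToyTracer:
    """Toy stand-in for a tracer: records (span_name, parent_name) pairs."""

    def __init__(self):
        self.spans = []
        self._stack = []  # stack of currently active span names

    @contextmanager
    def span(self, name):
        # The parent is whatever span is active when this one starts.
        parent = self._stack[-1] if self._stack else None
        self.spans.append((name, parent))
        self._stack.append(name)
        try:
            yield
        finally:
            self._stack.pop()

tracer = ToyTracer()
with tracer.span("LangGraph"):                 # graph span
    with tracer.span("LangGraph Node a"):      # node span is active while the node runs
        with tracer.span("chat gpt-4o"):       # LLM span from another instrumentor
            pass
    with tracer.span("LangGraph Node b"):      # tool-only node: no LLM child
        pass
```

Here `tracer.spans` ends up as `[("LangGraph", None), ("LangGraph Node a", "LangGraph"), ("chat gpt-4o", "LangGraph Node a"), ("LangGraph Node b", "LangGraph")]` — the same three-level shape described above.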

Installation

pip install "llm-tracekit-langgraph"

Usage

Set up tracing

Use setup_export_to_coralogix to set up tracing and export traces to Coralogix:

from llm_tracekit.langgraph import setup_export_to_coralogix

setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
)

Alternatively, set up tracing manually with your preferred TracerProvider and exporter.

Activate instrumentation

To instrument all LangGraph runs that use LangChain's callback manager:

from llm_tracekit.langgraph import LangGraphInstrumentor

LangGraphInstrumentor().instrument()

Capture LLM call spans

This instrumentor creates only the graph-level and node-level spans described above. It does not create spans for LLM calls. To get LLM spans (model, token usage, tool calls, etc.) as children of the node span that runs the LLM:

  • Use LangChain: install llm-tracekit-langchain and call LangChainInstrumentor().instrument() alongside LangGraphInstrumentor().instrument(). Both can run together; LangChain creates child spans under the active node span.
  • Use provider-specific instrumentors (OpenAI, Bedrock, etc.) instead of or alongside LangChain, depending on your stack.

Install and activate the extra instrumentor(s) you need. The same tracer provider can be passed to all of them. LLM spans appear under the correct node span because the node span is set as the active span while the node runs.

Uninstrument

LangGraphInstrumentor().uninstrument()

Full example

Minimal graph (no LLM):

from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

from llm_tracekit.langgraph import LangGraphInstrumentor, setup_export_to_coralogix

setup_export_to_coralogix(service_name="ai-service")

LangGraphInstrumentor().instrument()

# StateGraph expects an annotated schema (e.g. a TypedDict), not a bare dict.
class State(TypedDict):
    messages: list

def node_a(state: State) -> State:
    return {"messages": state.get("messages", []) + ["A"]}

def node_b(state: State) -> State:
    return {"messages": state.get("messages", []) + ["B"]}

graph = StateGraph(State)
graph.add_node("a", node_a)
graph.add_node("b", node_b)
graph.add_edge(START, "a")
graph.add_edge("a", "b")
graph.add_edge("b", END)

app = graph.compile(checkpointer=MemorySaver())
result = app.invoke({"messages": []}, config={"configurable": {"thread_id": "1"}})

Manual handler

To add the handler explicitly when invoking a graph — for example, during testing or when not using the instrumentor:

from opentelemetry import trace

from llm_tracekit.langgraph.callback import LangGraphCallbackHandler

# Uses the globally configured tracer provider; call get_tracer on your own
# provider instead if you manage one explicitly.
tracer = trace.get_tracer(__name__)
handler = LangGraphCallbackHandler(tracer=tracer)
result = app.invoke(
    initial_state,
    config={"callbacks": [handler], "configurable": {"thread_id": "1"}},
)