LangGraph

OpenTelemetry instrumentation for LangGraph, focused on span structure and node attributes for graph runs. Use it together with LangChain, OpenAI, or other LLM instrumentors for full observability.

Span structure (3 levels)

  1. Global span — One per graph invocation. Starts when execution leaves START and ends when it reaches END. Span name: "LangGraph".
  2. Node spans — One per graph node execution, as children of the global span. Span name: "LangGraph Node <node_name>". Each node span has two attributes: node name (gen_ai.langgraph.node) and step number (gen_ai.langgraph.step, when provided by LangGraph). The node span is the current span while the node runs, so any LLM calls inside the node are traced by other instrumentors as children of that node span. Tool nodes (nodes that only run tools and do not call an LLM) get a node span too; they have no LLM child spans.
  3. LLM spans — Created by other instrumentors (LangChain, OpenAI, Gemini, etc.) when a node calls an LLM. They appear as children of the corresponding node span.

Resulting trace: LangGraph → LangGraph Node <name> → chat/completion span (from LangChain/OpenAI/etc.) where the node runs an LLM; tool-only nodes appear as LangGraph Node <name> with no children.
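For example, a run whose first node calls an LLM and whose second node only runs tools might produce a trace shaped like this (node and LLM span names are illustrative):

```
LangGraph                          ← global span, one per invocation
├── LangGraph Node agent           ← node span (gen_ai.langgraph.node = "agent")
│   └── chat …                     ← LLM span created by another instrumentor
└── LangGraph Node tools           ← tool-only node span, no LLM children
```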

Requirements

Installation

Run the following command.

pip install "llm-tracekit-langgraph"
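If you also want LLM call spans (see "Capture LLM call spans" below), install a companion instrumentor as well, for example:

```shell
pip install "llm-tracekit-langchain"
```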

Authentication

Authentication data is passed when defining the OTel span exporter:

  1. Choose the ingress.:443 endpoint that corresponds to your Coralogix domain using the domain selector at the top of the page.
  2. Use your customized API key in the authorization request header.
  3. Provide the application and subsystem names.
from llm_tracekit.langgraph import setup_export_to_coralogix

setup_export_to_coralogix(
    coralogix_token=<your_coralogix_token>,
    coralogix_endpoint="ingress.:443",
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
)

Note

All of the authentication parameters can also be provided through environment variables (CX_TOKEN, CX_ENDPOINT, etc.).
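For example, using the two variable names given in the note above (values are placeholders):

```shell
export CX_TOKEN="<your_coralogix_token>"
export CX_ENDPOINT="ingress.:443"
```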

Usage

Set up tracing

Automatic

Use the setup_export_to_coralogix function to set up tracing and export traces to Coralogix. See the code snippet in the Authentication section.

Manual

Alternatively, set up tracing manually with your preferred TracerProvider and exporter.

Instrument

To instrument all LangGraph runs that use LangChain's callback manager:

from llm_tracekit.langgraph import LangGraphInstrumentor

LangGraphInstrumentor().instrument()

Capture LLM call spans

This instrumentor only creates the graph-level and node-level spans above. It does not create spans for LLM calls. To get LLM spans (model, token usage, tool calls, etc.) as children of the node span that runs the LLM:

  • Use LangChain: llm-tracekit-langchain and LangChainInstrumentor().instrument(...) in addition to LangGraphInstrumentor().instrument(...). Both can run together; LangChain will create child spans under the current (node) span.
  • Or use provider-specific instrumentors (OpenAI, Bedrock, etc.) instead of or alongside LangChain.

Install and activate the extra instrumentor(s) you need. The same tracer provider can be passed to all of them. LLM spans will appear under the correct node span because the node span is set as the current span while the node runs.

Uninstrument

LangGraphInstrumentor().uninstrument()

Full example

Minimal graph (no LLM):

from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

from llm_tracekit.langgraph import LangGraphInstrumentor, setup_export_to_coralogix

setup_export_to_coralogix(service_name="ai-service")

LangGraphInstrumentor().instrument()

# StateGraph expects an annotated state schema, such as a TypedDict
class State(TypedDict):
    messages: list

def node_a(state: State) -> dict:
    return {"messages": state.get("messages", []) + ["A"]}

def node_b(state: State) -> dict:
    return {"messages": state.get("messages", []) + ["B"]}

graph = StateGraph(State)
graph.add_node("a", node_a)
graph.add_node("b", node_b)
graph.add_edge(START, "a")
graph.add_edge("a", "b")
graph.add_edge("b", END)

app = graph.compile(checkpointer=MemorySaver())
result = app.invoke({"messages": []}, config={"configurable": {"thread_id": "1"}})

Manual handler

You can also add the handler explicitly when invoking a graph (for example, for testing or when not using the instrumentor):

from llm_tracekit.langgraph.callback import LangGraphCallbackHandler

# tracer_provider is the TracerProvider you configured during setup
tracer = tracer_provider.get_tracer(__name__)
handler = LangGraphCallbackHandler(tracer=tracer)
result = app.invoke(
    initial_state,
    config={"callbacks": [handler], "configurable": {"thread_id": "1"}},
)

Pass user identity

Pass the user identifier in either the metadata dict or the configurable dict of the LangGraph config:

# Option 1: via metadata (preferred)
result = app.invoke(
    {"messages": [HumanMessage(content="Hello")]},
    config={
        "configurable": {"thread_id": "1"},
        "metadata": {"user": "[email protected]"},
    },
)

# Option 2: via configurable (also supported)
result = app.invoke(
    {"messages": [HumanMessage(content="Hello")]},
    config={
        "configurable": {"thread_id": "1", "user": "[email protected]"},
    },
)
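Both options amount to merging the user identifier into the LangGraph config dict. A small helper for the preferred metadata route (with_user is hypothetical, not part of llm-tracekit):

```python
def with_user(config: dict, user: str) -> dict:
    """Return a copy of a LangGraph config with the user id in the metadata dict."""
    merged = dict(config)
    # Merge rather than overwrite, in case metadata already has other keys
    merged["metadata"] = {**merged.get("metadata", {}), "user": user}
    return merged

# Usage:
# result = app.invoke(state, config=with_user({"configurable": {"thread_id": "1"}}, "[email protected]"))
```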

Semantic conventions

Node span attributes

| Attribute | Type | Description | Example |
|---|---|---|---|
| gen_ai.langgraph.node | string | The name of the LangGraph node being executed | agent, tools |
| gen_ai.langgraph.step | int | Step counter for this node execution within the graph run | 1, 2 |
| gen_ai.request.user | string | A unique identifier representing the end user (from config={"metadata": {"user": "..."}} or config={"configurable": {"user": "..."}}) | [email protected] |

Next steps

Once your integration is set up, explore the AI Center Overview to monitor performance, costs, quality issues, and security across all your AI applications — and to set up Guardrails for real-time policy enforcement.