
LangChain

Coralogix's AI Observability integrations make it easy to monitor any LangChain-powered application. The dedicated LangChain integration consolidates spans emitted by OpenAI, AWS Bedrock, and additional LangChain chat providers as they are supported, so teams can understand performance, drift, and tool usage without stitching together logs across services.

How the LangChain integration works

This library ships production-ready OpenTelemetry instrumentation for LangChain. It automatically hooks into LangChain's callback manager, traces chat and tool-call workflows, and records usage metrics. The integration accelerates debugging, highlights token consumption, and standardizes observability across multi-provider LangChain deployments.

Instrumentation currently supports the OpenAI and AWS Bedrock providers (including tool binding) across synchronous, streaming, and multi-turn LangChain runs.

What you need

  • Python 3.10 or later.
  • A Coralogix API key.
  • LangChain 1.0.0 or newer, along with the provider-specific SDKs (for example, langchain-openai or langchain-aws).

Ensure each underlying provider SDK (OpenAI, AWS Bedrock, etc.) is configured with valid credentials before enabling instrumentation.
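
For example, as a quick sanity check before instrumenting, you can confirm the relevant credentials are visible to the process. A minimal sketch; the environment variables below are the common defaults, though AWS also supports profiles and IAM roles:

import os

# Quick sanity check: confirm the credentials each provider SDK expects
# are present before enabling instrumentation.
# OPENAI_API_KEY is read by langchain-openai; the AWS_* variables are only
# one of several ways boto3 (used by langchain-aws) can authenticate.
expected = {
    "OpenAI": ("OPENAI_API_KEY",),
    "AWS Bedrock": ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"),
}

for provider, variables in expected.items():
    missing = [name for name in variables if not os.environ.get(name)]
    if missing:
        print(f"{provider}: missing {', '.join(missing)}")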

Installation

Run the following command to install the package with LangChain support.

pip install "llm-tracekit[langchain]"

Authentication

Exporting spans to Coralogix requires configuring OTLP credentials before enabling instrumentation. Use setup_export_to_coralogix or the corresponding environment variables to supply authentication details.

Using setup_export_to_coralogix

from llm_tracekit import setup_export_to_coralogix

setup_export_to_coralogix(
    coralogix_token="<your_coralogix_token>",
    coralogix_endpoint="ingress.<your_coralogix_domain>:443",
    application_name="<ai-application>",
    subsystem_name="<ai-subsystem>",
    capture_content=True
)

Using environment variables

If arguments are not passed to setup_export_to_coralogix, the helper reads the following environment variables (see the sketch after this list):

  • CX_TOKEN: Your Coralogix API key.
  • CX_ENDPOINT: The ingress endpoint that corresponds to your Coralogix domain, in the form ingress.<your_coralogix_domain>:443.
  • CX_APPLICATION_NAME: Your application's name.
  • CX_SUBSYSTEM_NAME: Your subsystem's name.
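
A minimal sketch of the environment-variable path; in practice you would export these in your shell or deployment configuration rather than setting them in code:

import os

from llm_tracekit import setup_export_to_coralogix

# Placeholder values shown for illustration; normally these are set in
# the shell or deployment environment.
os.environ["CX_TOKEN"] = "<your_coralogix_token>"
os.environ["CX_ENDPOINT"] = "ingress.<your_coralogix_domain>:443"
os.environ["CX_APPLICATION_NAME"] = "<ai-application>"
os.environ["CX_SUBSYSTEM_NAME"] = "<ai-subsystem>"

# With the environment populated, the helper needs no arguments.
setup_export_to_coralogix()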

Usage

Instrument your LangChain application as follows.

Set up tracing

Instrument

LangChain instrumentation is installed globally by wrapping BaseCallbackManager. Create an instance of LangChainInstrumentor and call its instrument method before instantiating your chains.

from llm_tracekit import LangChainInstrumentor

instrumentor = LangChainInstrumentor()
instrumentor.instrument()

Uninstrument

To remove instrumentation, call the uninstrument method.

instrumentor.uninstrument()

Full example

from langchain_openai import ChatOpenAI

from llm_tracekit import setup_export_to_coralogix
from llm_tracekit import LangChainInstrumentor

setup_export_to_coralogix(
    coralogix_token="<your_coralogix_token>",
    coralogix_endpoint="ingress.<your_coralogix_domain>:443",
    application_name="<ai-application>",
    subsystem_name="<ai-subsystem>"
)

instrumentor = LangChainInstrumentor()
instrumentor.instrument()

llm = ChatOpenAI(model="gpt-4o-mini")

response = llm.invoke("Draft a release note for LangChain instrumentation")

print(response)
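
Tool binding is traced as well. A minimal sketch using langchain-openai, assuming export is already configured as above; the get_current_weather tool and its canned response are illustrative:

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

from llm_tracekit import LangChainInstrumentor

LangChainInstrumentor().instrument()

@tool
def get_current_weather(location: str) -> str:
    """Return the current weather for a location."""
    return f"It is sunny in {location}."  # canned response for illustration

llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools([get_current_weather])

# The tool-call request and its arguments are recorded on the span as
# gen_ai.completion.<n>.tool_calls.* attributes (see the table below).
response = llm_with_tools.invoke("What is the weather in Paris?")
print(response.tool_calls)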

Enable message content capture

By default, message content (such as prompts, completions, tool call arguments, and tool responses) is not captured.

To capture message content as span attributes, set the environment variable OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT to true.

Many Coralogix AI evaluations rely on message content; enabling capture is a best practice.
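
A minimal sketch of setting the variable in code; the capture_content=True argument shown in the authentication example above should have the same effect:

import os

# Set before calling instrument so the instrumentor picks it up.
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"

from llm_tracekit import LangChainInstrumentor

LangChainInstrumentor().instrument()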

Key differences from OpenTelemetry

Prompts, tool invocations, and model responses are stored as span attributes instead of log events, preserving a single correlated timeline for each LangChain run.
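
To see this locally, you can route spans to the OpenTelemetry SDK's in-memory exporter and inspect the gen_ai.* attributes directly. A sketch, assuming the instrumentor picks up the globally registered tracer provider:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

from llm_tracekit import LangChainInstrumentor

# Route spans to an in-memory exporter for local inspection instead of
# exporting to Coralogix.
exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(exporter))
trace.set_tracer_provider(provider)

LangChainInstrumentor().instrument()

# ... run a chain or chat model call here ...

for span in exporter.get_finished_spans():
    # Prompts, completions, and tool calls appear as plain span attributes.
    print({k: v for k, v in span.attributes.items() if k.startswith("gen_ai.")})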

Semantic conventions

| Attribute | Type | Description | Example |
| --- | --- | --- | --- |
| gen_ai.operation.name | string | The specific name of the LangChain operation being performed | chat |
| gen_ai.system | string | The provider or framework responsible for the operation | openai / aws.bedrock |
| gen_ai.request.model | string | The name of the model requested by the user or application | gpt-4o-mini |
| gen_ai.request.temperature | float | The temperature parameter passed in the request. Higher values increase randomness, while lower values make output more deterministic. | 0.2 |
| gen_ai.request.top_p | float | The top_p parameter used for nucleus sampling, limiting token selection to the top probability mass. | 0.95 |
| gen_ai.prompt.<message_number>.role | string | Role of the author for each input message | user |
| gen_ai.prompt.<message_number>.content | string | Contents of the input message (captured when content capture is enabled) | Draft a release note for LangChain instrumentation. |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.id | string | ID of a tool call issued from the prompt | call_yPIxaozNPCSp1tJ34Hsbdtzg |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.type | string | Type of tool call issued from the prompt | tool_call |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.name | string | Function name used in the prompt's tool call | get_current_weather |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments passed to the prompt's tool call | {"location": "Paris"} |
| gen_ai.request.tools.<tool_number>.type | string | Declared type for each tool definition exposed to the model (provider-agnostic) | function |
| gen_ai.request.tools.<tool_number>.function.name | string | Tool name surfaced in the request-level invocation params (OpenAI function.name, Bedrock name, etc.) | get_destination_tip |
| gen_ai.request.tools.<tool_number>.function.description | string | Tool description shown to the LLM, regardless of provider | Return a mock travel tip for the provided city. |
| gen_ai.request.tools.<tool_number>.function.parameters | string | JSON-serialized parameter schema (OpenAI function.parameters, Bedrock input_schema, other provider equivalents) | {"type":"object","properties":{"city":{"type":"string"}}} |
| gen_ai.completion.<choice_number>.role | string | Role of the author for each returned choice | assistant |
| gen_ai.completion.<choice_number>.finish_reason | string | Finish reason reported for the choice | stop |
| gen_ai.completion.<choice_number>.content | string | Text returned by the model for the choice (captured when content capture is enabled) | Here is the requested release note... |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.id | string | ID of a tool call triggered by the model | call_O8NOz8VlxosSASEsOY7LDUcP |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.type | string | Type of tool call triggered by the model | tool_call |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.name | string | Function name executed by the model's tool call | get_current_weather |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments supplied to the model's tool call | {"location": "Paris"} |
| gen_ai.response.model | string | Exact model identifier that produced the response | gpt-4o-mini |
| gen_ai.response.id | string | Unique identifier assigned to the specific LangChain response | run-20241010-abcdef1234567890 |
| gen_ai.response.finish_reasons | string[] | Ordered list of finish reasons observed across all choices | ["stop"] |
| gen_ai.usage.input_tokens | int | Number of tokens consumed by the prompt | 744 |
| gen_ai.usage.output_tokens | int | Number of tokens generated in the response | 256 |