LangChain
Coralogix's AI Observability integrations make it easy to monitor any LangChain-powered application. With a dedicated LangChain integration, Coralogix consolidates spans emitted by OpenAI, Anthropic, AWS Bedrock, and other LangChain chat providers so teams can understand performance, drift, and tool usage without stitching logs across services.
How it works
This library ships production-ready OpenTelemetry instrumentation for LangChain. It automatically hooks into LangChain's callback manager, traces chat and tool-call workflows, and records usage metrics. The integration accelerates debugging, highlights token consumption, and standardizes observability across multi-provider LangChain deployments.
Supported providers
The following providers are supported with full prompt/completion attributes:
| Provider | Chat model class | System value |
|---|---|---|
| OpenAI | ChatOpenAI | openai |
| Anthropic | ChatAnthropic | anthropic |
| AWS Bedrock | ChatBedrock, ChatBedrockConverse, BedrockChat | aws.bedrock |
Other chat model classes are still instrumented, with the system value langchain. A span is always created; when the model name is not available in the standard keys, the integration falls back to metadata or the provider class name.
What you need
- Python 3.10 or later.
- A Coralogix API key.
- LangChain 1.0.0 or newer, along with the provider-specific SDKs (for example, `langchain-openai` or `langchain-aws`).
Ensure each underlying provider SDK (OpenAI, Anthropic, AWS Bedrock, etc.) is configured with valid credentials before enabling instrumentation.
Installation
Run the following command.
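The distribution name is not stated on this page; assuming the library is published on PyPI under a name matching its `llm_tracekit` import, installation would look like:

```shell
# Assumption: the PyPI distribution name mirrors the llm_tracekit import.
# Install the tracing library plus the provider SDK you use (langchain-openai here).
pip install llm-tracekit langchain-openai
```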
Authentication
Exporting spans to Coralogix requires configuring OTLP credentials before enabling instrumentation. Use setup_export_to_coralogix or the corresponding environment variables to supply authentication details.
Using setup_export_to_coralogix
```python
from llm_tracekit.langchain import setup_export_to_coralogix

setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)
```
Using environment variables
If arguments are not passed to setup_export_to_coralogix, the helper reads the following environment variables:
- `CX_TOKEN`: Your Coralogix API key.
- `CX_ENDPOINT`: The ingress endpoint (`ingress.<domain>:443`) that corresponds to your Coralogix domain.
- `CX_APPLICATION_NAME`: Your application's name.
- `CX_SUBSYSTEM_NAME`: Your subsystem's name.
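For example, the same configuration can be supplied entirely through the environment; the token and domain values below are placeholders, not real credentials:

```shell
# Placeholder values; substitute your own Coralogix API key and domain
export CX_TOKEN="<your-coralogix-api-key>"
export CX_ENDPOINT="ingress.<your-coralogix-domain>:443"
export CX_APPLICATION_NAME="ai-application"
export CX_SUBSYSTEM_NAME="ai-subsystem"
```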
Set up tracing
Instrument
LangChain instrumentation is installed globally by wrapping BaseCallbackManager. Create an instance of LangChainInstrumentor and call instrument before instantiating your chains.
Uninstrument
To remove instrumentation, call the uninstrument method.
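A minimal sketch of the lifecycle, assuming the instrumentor instance exposes the paired instrument/uninstrument methods described above:

```python
from llm_tracekit.langchain import LangChainInstrumentor

instrumentor = LangChainInstrumentor()
instrumentor.instrument()    # patches BaseCallbackManager globally
# ... run LangChain workloads ...
instrumentor.uninstrument()  # removes the global instrumentation
```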
Full example
```python
from llm_tracekit.langchain import LangChainInstrumentor, setup_export_to_coralogix
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Optional: configure sending spans to Coralogix
# Reads connection details from environment variables: CX_TOKEN, CX_ENDPOINT
setup_export_to_coralogix(
    service_name="ai-service",
    application_name="ai-application",
    subsystem_name="ai-subsystem",
    capture_content=True,
)

# Activate instrumentation
LangChainInstrumentor().instrument()

# Example LangChain usage
llm = ChatOpenAI(model="gpt-4o-mini")
messages = [HumanMessage(content="Write a short poem on open telemetry.")]
response = llm.invoke(messages)

# Pass user via config metadata
response = llm.invoke(
    messages,
    config={"metadata": {"user": "[email protected]"}},
)
```
Enable message content capture
By default, message content — such as prompts, completions, tool call arguments, and tool responses — is not captured.
To capture message content as span attributes, do one of the following:
- Pass `capture_content=True` when calling `setup_export_to_coralogix`.
- Set the environment variable `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` to `true`.
Many Coralogix AI evaluations rely on message content; enabling capture is a best practice.
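For instance, the environment-variable route can also be taken from Python, assuming the variable is set before instrumentation is activated:

```python
import os

# Must be set before the instrumentor is activated
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"
print(os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"])  # -> true
```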
Key differences from OpenTelemetry
Prompts, tool invocations, and model responses are stored as span attributes instead of log events, preserving a single correlated timeline for each LangChain run.
Semantic conventions
| Attribute | Type | Description | Example |
|---|---|---|---|
| gen_ai.operation.name | string | The specific name of the LangChain operation being performed | chat |
| gen_ai.system | string | The provider or framework responsible for the operation | openai / aws.bedrock / anthropic |
| gen_ai.request.model | string | The name of the model requested by the user or application | gpt-4o-mini |
| gen_ai.request.temperature | float | The temperature parameter passed in the request | 0.2 |
| gen_ai.request.top_p | float | The top_p parameter used for nucleus sampling | 0.95 |
| gen_ai.request.user | string | A unique identifier representing the end-user (from config={"metadata": {"user": "..."}}) | [email protected] |
| gen_ai.prompt.<message_number>.role | string | Role of the author for each input message | user |
| gen_ai.prompt.<message_number>.content | string | Contents of the input message (captured when content capture is enabled) | Draft a release note for LangChain instrumentation. |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.id | string | ID of a tool call issued from the prompt | call_yPIxaozNPCSp1tJ34Hsbdtzg |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.type | string | Type of tool call issued from the prompt | function |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.name | string | Function name used in the prompt's tool call | get_current_weather |
| gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments passed to the prompt's tool call | {"location": "Paris"} |
| gen_ai.request.tools.<tool_number>.type | string | Declared type for each available tool definition exposed to the model | function |
| gen_ai.request.tools.<tool_number>.function.name | string | Tool name surfaced in the request-level invocation params | get_destination_tip |
| gen_ai.request.tools.<tool_number>.function.description | string | Tool description shown to the LLM | Return a mock travel tip for the provided city. |
| gen_ai.request.tools.<tool_number>.function.parameters | string | JSON-serialized parameter schema | {"type":"object","properties":{"city":{"type":"string"}}} |
| gen_ai.completion.<choice_number>.role | string | Role of the author for each returned choice | assistant |
| gen_ai.completion.<choice_number>.finish_reason | string | Finish reason reported for the choice | stop |
| gen_ai.completion.<choice_number>.content | string | Text returned by the model (captured when content capture is enabled) | Here is the requested release note... |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.id | string | ID of a tool call triggered by the model | call_O8NOz8VlxosSASEsOY7LDUcP |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.type | string | Type of tool call triggered by the model | function |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.name | string | Function name executed by the model's tool call | get_current_weather |
| gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.arguments | string | Arguments supplied to the model's tool call | {"location": "Paris"} |
| gen_ai.response.model | string | Exact model identifier that produced the response | gpt-4o-mini |
| gen_ai.usage.input_tokens | int | Number of tokens consumed by the prompt | 744 |
| gen_ai.usage.output_tokens | int | Number of tokens generated in the response | 256 |
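To make the indexing scheme concrete, here is a hand-written sketch (not produced by the library) of the flat attribute map a single chat span might carry for one user message and one assistant reply; the `<message_number>` and `<choice_number>` placeholders are assumed to become zero-based indices:

```python
# Hypothetical span attributes for one prompt message and one completion choice
span_attributes = {
    "gen_ai.operation.name": "chat",
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o-mini",
    "gen_ai.prompt.0.role": "user",
    "gen_ai.prompt.0.content": "Write a short poem on open telemetry.",
    "gen_ai.completion.0.role": "assistant",
    "gen_ai.completion.0.finish_reason": "stop",
    "gen_ai.usage.input_tokens": 744,
    "gen_ai.usage.output_tokens": 256,
}
```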
LangChain-specific attributes
| Attribute | Type | Description | Example |
|---|---|---|---|
| gen_ai.provider.name | string | The provider name from LangChain metadata | openai, anthropic |