LangChain
Coralogix's AI Observability integrations make it easy to monitor any LangChain-powered application. With a dedicated LangChain integration, Coralogix consolidates spans emitted by OpenAI, AWS Bedrock, and future LangChain chat providers so teams can understand performance, drift, and tool usage without stitching logs across services.
How it works
This library ships production-ready OpenTelemetry instrumentation for LangChain. It automatically hooks into LangChain's callback manager, traces chat and tool-call workflows, and records usage metrics. The integration accelerates debugging, highlights token consumption, and standardizes observability across multi-provider LangChain deployments.
Instrumentation currently supports OpenAI and AWS Bedrock providers (including tool binding) across synchronous, streaming, and multi-turn LangChain runs.
What you need
- Python 3.10 or later.
- A Coralogix API key.
- LangChain 1.0.0 or newer, along with the provider-specific SDKs (for example, `langchain-openai` or `langchain-aws`).
Ensure each underlying provider SDK (OpenAI, AWS Bedrock, etc.) is configured with valid credentials before enabling instrumentation.
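For example, a minimal sketch of supplying credentials through the providers' standard environment variables (placeholder values; use your own secrets management in production):

```python
import os

# OpenAI credentials, read automatically by langchain-openai
os.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"

# AWS credentials, read automatically by langchain-aws for Bedrock
os.environ["AWS_ACCESS_KEY_ID"] = "<your_aws_access_key_id>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<your_aws_secret_access_key>"
os.environ["AWS_DEFAULT_REGION"] = "us-east-1"  # example region
```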
Installation
Run the following command.
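Assuming the package is published on PyPI as `llm-tracekit`, matching the `llm_tracekit` import used throughout this guide:

```bash
pip install llm-tracekit
```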
Authentication
Exporting spans to Coralogix requires configuring OTLP credentials before enabling instrumentation. Use setup_export_to_coralogix or the corresponding environment variables to supply authentication details.
Using setup_export_to_coralogix
```python
from llm_tracekit import setup_export_to_coralogix

setup_export_to_coralogix(
    coralogix_token="<your_coralogix_token>",
    coralogix_endpoint="ingress.<your_coralogix_domain>:443",
    application_name="<ai-application>",
    subsystem_name="<ai-subsystem>",
    capture_content=True,
)
```
Using environment variables
If arguments are not passed to `setup_export_to_coralogix`, the helper reads the following environment variables:
- `CX_TOKEN`: Your Coralogix API key.
- `CX_ENDPOINT`: The `ingress.<your_coralogix_domain>:443` endpoint that corresponds to your Coralogix domain.
- `CX_APPLICATION_NAME`: Your application's name.
- `CX_SUBSYSTEM_NAME`: Your subsystem's name.
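A minimal sketch of configuring the exporter through these variables (placeholder values):

```python
import os

from llm_tracekit import setup_export_to_coralogix

# Placeholder values; CX_ENDPOINT must match your Coralogix domain
os.environ["CX_TOKEN"] = "<your_coralogix_token>"
os.environ["CX_ENDPOINT"] = "ingress.<your_coralogix_domain>:443"
os.environ["CX_APPLICATION_NAME"] = "<ai-application>"
os.environ["CX_SUBSYSTEM_NAME"] = "<ai-subsystem>"

# With no arguments, the helper falls back to the CX_* variables above
setup_export_to_coralogix()
```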
Usage
Use the following steps to set up tracing and instrument your LangChain application.
Set up tracing
Instrument
LangChain instrumentation is installed globally by wrapping `BaseCallbackManager`. Create an instance of `LangChainInstrumentor` and call `instrument()` before instantiating your chains.
```python
from llm_tracekit import LangChainInstrumentor

instrumentor = LangChainInstrumentor()
instrumentor.instrument()
```
Uninstrument
To remove instrumentation, call the `uninstrument` method:
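```python
instrumentor.uninstrument()
```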
Full example
```python
from langchain_openai import ChatOpenAI

from llm_tracekit import LangChainInstrumentor, setup_export_to_coralogix

# Configure the OTLP exporter before enabling instrumentation
setup_export_to_coralogix(
    coralogix_token="<your_coralogix_token>",
    coralogix_endpoint="ingress.<your_coralogix_domain>:443",
    application_name="<ai-application>",
    subsystem_name="<ai-subsystem>",
)

# Install the LangChain instrumentation globally
instrumentor = LangChainInstrumentor()
instrumentor.instrument()

# Any chat model created after this point is traced automatically
llm = ChatOpenAI(model="gpt-4o-mini")
response = llm.invoke("Draft a release note for LangChain instrumentation")
print(response)
```
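Tool-call workflows are traced the same way. Below is a minimal sketch of a traced tool-binding run, assuming the instrumentation above is already active; the hypothetical `get_current_weather` tool mirrors the examples in the semantic conventions table below:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def get_current_weather(location: str) -> str:
    """Return a mock weather report for the given location."""
    return f"It is sunny in {location}."

# bind_tools exposes the tool definition to the model; the instrumentation
# records definitions under gen_ai.request.tools.<n>.* and any resulting
# invocations under gen_ai.completion.<n>.tool_calls.*
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([get_current_weather])
response = llm.invoke("What's the weather in Paris?")
print(response.tool_calls)
```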
Enable message content capture
By default, message content (prompts, completions, tool call arguments, and tool responses) is not captured.
To capture message content as span attributes, set the environment variable `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` to `true`.
Many Coralogix AI evaluations rely on message content; enabling capture is a best practice.
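A minimal sketch, assuming the variable is read when instrumentation is enabled (passing `capture_content=True` to `setup_export_to_coralogix`, as shown earlier, appears to serve the same purpose):

```python
import os

# Set before calling instrument() so the setting takes effect
os.environ["OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT"] = "true"
```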
Key differences from OpenTelemetry
Prompts, tool invocations, and model responses are stored as span attributes instead of log events, preserving a single correlated timeline for each LangChain run.
Semantic conventions
| Attribute | Type | Description | Example |
|---|---|---|---|
| `gen_ai.operation.name` | string | The specific name of the LangChain operation being performed | `chat` |
| `gen_ai.system` | string | The provider or framework responsible for the operation | `openai` / `aws.bedrock` |
| `gen_ai.request.model` | string | The name of the model requested by the user or application | `gpt-4o-mini` |
| `gen_ai.request.temperature` | float | The temperature parameter passed in the request. Higher values increase randomness, while lower values make output more deterministic. | 0.2 |
| `gen_ai.request.top_p` | float | The top_p parameter used for nucleus sampling, limiting token selection to the top probability mass. | 0.95 |
| `gen_ai.prompt.<message_number>.role` | string | Role of the author for each input message | `user` |
| `gen_ai.prompt.<message_number>.content` | string | Contents of the input message (captured when content capture is enabled) | Draft a release note for LangChain instrumentation. |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.id` | string | ID of a tool call issued from the prompt | `call_yPIxaozNPCSp1tJ34Hsbdtzg` |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.type` | string | Type of tool call issued from the prompt | `tool_call` |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.name` | string | Function name used in the prompt's tool call | `get_current_weather` |
| `gen_ai.prompt.<message_number>.tool_calls.<tool_call_number>.function.arguments` | string | Arguments passed to the prompt's tool call | `{"location": "Paris"}` |
| `gen_ai.request.tools.<tool_number>.type` | string | Declared type for each available tool definition exposed to the model (provider-agnostic). | `function` |
| `gen_ai.request.tools.<tool_number>.function.name` | string | Tool name surfaced in the request-level invocation params (OpenAI `function.name`, Bedrock `name`, etc.). | `get_destination_tip` |
| `gen_ai.request.tools.<tool_number>.function.description` | string | Tool description shown to the LLM regardless of provider. | Return a mock travel tip for the provided city. |
| `gen_ai.request.tools.<tool_number>.function.parameters` | string | JSON-serialized parameter schema (OpenAI `function.parameters`, Bedrock `input_schema`, other provider equivalents). | `{"type":"object","properties":{"city":{"type":"string"}}}` |
| `gen_ai.completion.<choice_number>.role` | string | Role of the author for each returned choice | `assistant` |
| `gen_ai.completion.<choice_number>.finish_reason` | string | Finish reason reported for the choice | `stop` |
| `gen_ai.completion.<choice_number>.content` | string | Text returned by the model for the choice (captured when content capture is enabled) | Here is the requested release note... |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.id` | string | ID of a tool call triggered by the model | `call_O8NOz8VlxosSASEsOY7LDUcP` |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.type` | string | Type of tool call triggered by the model | `tool_call` |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.name` | string | Function name executed by the model's tool call | `get_current_weather` |
| `gen_ai.completion.<choice_number>.tool_calls.<tool_call_number>.function.arguments` | string | Arguments supplied to the model's tool call | `{"location": "Paris"}` |
| `gen_ai.response.model` | string | Exact model identifier that produced the response | `gpt-4o-mini` |
| `gen_ai.response.id` | string | Unique identifier assigned to the specific LangChain response | `run-20241010-abcdef1234567890` |
| `gen_ai.response.finish_reasons` | string[] | Ordered list of finish reasons observed across all choices | `["stop"]` |
| `gen_ai.usage.input_tokens` | int | Number of tokens consumed by the prompt | 744 |
| `gen_ai.usage.output_tokens` | int | Number of tokens generated in the response | 256 |