
How Does OpenTelemetry Support Logging? 

OpenTelemetry supports logging by providing a unified framework that collects, processes, and exports log data alongside metrics and traces. This allows developers and operators to correlate logs with other observability data, offering a more comprehensive understanding of system performance and behavior. 

OpenTelemetry provides an extensible collection of libraries, agents, and services that facilitate the capturing of log data from applications and infrastructure. It supports various logging libraries and formats, enabling seamless integration into existing systems without the need for significant modifications.

OpenTelemetry also provides mechanisms for enriching log data with contextual information, such as trace IDs and span IDs, which are crucial for debugging and monitoring complex interactions within microservices architectures.

In this article, you will learn:
- What is OpenTelemetry’s log data model
- Types of logs gathered by OpenTelemetry
- Methods of log data collection with OpenTelemetry
- OpenTelemetry logging examples
- OpenTelemetry logging best practices

What Is OpenTelemetry’s Log Data Model? 

The OpenTelemetry log data model defines the structure and semantics of log records, ensuring consistency and interoperability across tools and platforms. It specifies key elements like timestamps, resource identifiers, and attributes, which enrich logs with contextual information.

This model supports various log formats and sources, adapting to complex logging environments. It aims for comprehensive observability, linking logs with traces and metrics for a holistic view. The standardized format simplifies log processing, storage, and analysis, streamlining operations and reducing overhead.
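For illustration, here is a sketch of how a single log record might map onto the model’s main fields. The field names follow the OpenTelemetry log data model; the values are invented for this example:

{
  "Timestamp": "2024-05-01T12:34:56.789Z",
  "SeverityText": "ERROR",
  "SeverityNumber": 17,
  "Body": "payment authorization failed",
  "Attributes": { "http.response.status_code": 502, "retry": false },
  "TraceId": "4bf92f3577b34da6a3ce929d0e0e4736",
  "SpanId": "00f067aa0ba902b7",
  "Resource": { "service.name": "checkout-service", "host.name": "node-7" }
}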

Types of Logs Gathered by OpenTelemetry 

Here are some of the different types of logs that can be collected by OpenTelemetry.

System and Infrastructure Logs

System and infrastructure logs represent the operational events of hardware, networks, and foundational software components. OpenTelemetry captures these to monitor system health, performance issues, and security alerts. This data is useful for site reliability engineers and IT professionals in ensuring system stability and identifying potential infrastructure failures.

First-Party Application Logs

First-party application logs record the events within an organization’s proprietary software applications. They provide visibility into application behavior, user interactions, and operational anomalies. This is critical for developers and operations teams to identify bugs, optimize performance, and enhance user experiences.

Third-Party Application Logs

Third-party application logs cover the events from external software services and APIs integrated into an organization’s system architecture. OpenTelemetry’s ability to collect these logs offers a comprehensive view of the external dependencies affecting system performance and stability.

Methods of Log Data Collection with OpenTelemetry 

There are two primary ways to collect log data in OpenTelemetry.

Via File or Stdout Logs

OpenTelemetry can capture logs written to files or standard output (stdout) streams, common logging approaches in many applications. This method enables easy integration without modifying existing logging implementations. Logs are directed to OpenTelemetry collectors for processing and analysis.

Centralizing logs via file or stdout collection simplifies monitoring and troubleshooting across distributed systems, but can result in higher latency and requires intermediate log storage.
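In Collector terms, file-based collection is typically handled by the filelog receiver. A minimal sketch (the path below is a placeholder for your application’s log files) looks like this:

receivers:
  filelog:
    # placeholder path; point this at the files your application writes to
    include: [/var/log/myservice/*.log]

The examples later in this article show fuller filelog configurations.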

Direct to Collector

The direct-to-collector method involves sending logs directly from applications or services to OpenTelemetry collectors, bypassing intermediary storage. This approach reduces latency, enhances log data fidelity, and simplifies the architecture by minimizing components between log generation and collection.

Direct log shipping supports real-time observability and advanced analytics, and is particularly useful in highly dynamic environments. However, it can be more challenging to implement and might require code changes.
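For example, an application instrumented with an OpenTelemetry SDK can export its logs over OTLP straight to a Collector. A minimal receiver configuration might look like the following sketch; the endpoints shown are the conventional OTLP defaults, so adjust them for your environment:

receivers:
  otlp:
    protocols:
      grpc:
        # default OTLP/gRPC port
        endpoint: 0.0.0.0:4317
      http:
        # default OTLP/HTTP port
        endpoint: 0.0.0.0:4318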

OpenTelemetry Logging Examples 

Here are some examples showing how to collect OpenTelemetry logs.

Note: Before running the following examples, please install the otelcol-contrib distribution of the OpenTelemetry Collector (you can download it here). The examples use components such as the filelog receiver, which are not available in the regular otelcol package.

Tailing a Simple JSON File

To collect JSON logs with OpenTelemetry, modify the receivers section of the Collector configuration file as follows. For brevity, the processors, exporters, and service sections of the Collector configuration are omitted.

receivers:
  filelog:
    include: [/var/log/service1/*.json]
    operators:
      - type: json_parser
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'

This setup collects logs from JSON files matching /var/log/service1/*.json. The filelog receiver’s json_parser operator parses each line and extracts the timestamp from the time attribute using the specified layout, ensuring that records are timestamped correctly for further processing and analysis.
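As an illustration, a log line such as the following invented example would be picked up by this receiver, with its time attribute parsed into the record’s timestamp:

{"time": "2024-05-01 12:34:56", "level": "INFO", "message": "user login succeeded"}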

Syslog

To collect syslog logs, start by setting up the syslog receiver in the OpenTelemetry Collector configuration file. The receiver below listens on TCP port 54527 for messages that conform to the RFC 3164 format, sets the timezone to UTC, and defines a move operator that places the parsed message into the log body.

receivers:
  syslog:
    tcp:
      listen_address: '0.0.0.0:54527'
    protocol: rfc3164
    location: UTC
    operators:
      - type: move
        from: attributes.message
        to: body

Note: To forward logs from rsyslog to the OpenTelemetry Collector, add a forwarding rule to the /etc/rsyslog.conf file, and then restart the rsyslog service. This ensures that all logs captured by rsyslog are sent to the OpenTelemetry Collector for further processing and analysis.
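For reference, a simple forwarding rule in /etc/rsyslog.conf might look like the line below; this assumes the Collector runs on the same host and listens on the TCP port configured above, so adjust the address and port for your setup:

*.* @@127.0.0.1:54527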

Kubernetes Logs

Collecting logs from Kubernetes environments requires a specialized configuration to handle the diversity of log formats. Here is an example configuration:

receivers:
  filelog:
    include: [/var/log/pods/*/*/*.log]
    operators:
      - id: get-format
        type: router
        routes:
          - expr: body matches "^\\{"
            output: parser-docker
          - expr: body matches "^[^ Z]+ "
            output: parser-crio
          - expr: body matches "^[^ Z]+Z"
            output: parser-containerd
      - id: parser-docker
        type: json_parser
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'

This is a simplified configuration that receives logs from all Kubernetes pods, with special operators to parse and route logs based on their format—whether they originate from Docker, CRI-O, or containerd. 
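The parser-crio and parser-containerd operators referenced by the router are omitted above for brevity. As a rough sketch based on commonly used filelog configurations (the regexes and timestamp layouts may need adjustment for your cluster), they could be defined along these lines:

      - id: parser-crio
        type: regex_parser
        # assumed CRI-O line format: "<time> <stream> <logtag> <log>"
        regex: '^(?P<time>[^ Z]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$'
        timestamp:
          parse_from: attributes.time
          layout_type: gotime
          layout: '2006-01-02T15:04:05.999999999Z07:00'
      - id: parser-containerd
        type: regex_parser
        # assumed containerd line format with an RFC 3339 UTC timestamp
        regex: '^(?P<time>[^ ^Z]+Z) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'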

OpenTelemetry Logging Best Practices 

Here are some best practices to ensure effective logging in OpenTelemetry.

Use Structured Logs

Structured logging involves using a consistent format for log entries, typically JSON, which makes the data easier to search, analyze, and understand. OpenTelemetry encourages the use of structured logs as they can be automatically parsed and enriched with additional metadata without requiring custom parsing rules. 

This approach ensures that every log entry contains clear, delineated fields, such as timestamps, log levels, and messages. This enables more effective data querying and indexing.
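For example, a structured log entry (shown here as an invented JSON record) keeps each piece of information in its own field rather than burying it in a free-text message:

{"timestamp": "2024-05-01T12:34:56Z", "level": "WARN", "service": "checkout-service", "message": "payment retry scheduled", "order_id": "A-1042", "attempt": 2}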

Use Consistent Semantic Conventions

Semantic conventions in OpenTelemetry provide a standardized way to describe the attributes in log data, such as service names, error codes, and geographic locations. Adhering to these conventions ensures compatibility across different tools and systems, enhancing the interoperability of log data. 

Consistency in log format and attribute naming significantly simplifies the process of log aggregation and analysis, particularly in complex, distributed systems where logs from multiple sources need to be correlated.
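As an illustration, the Collector’s resource processor can attach standardized resource attributes to the telemetry it handles. The attribute keys below follow OpenTelemetry semantic conventions, while the values are placeholders for your own environment:

processors:
  resource:
    attributes:
      - key: service.name
        value: checkout-service   # placeholder service name
        action: upsert
      - key: deployment.environment
        value: production         # placeholder environment name
        action: upsert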

Include Contextual Information

Logs are more useful when they include contextual information that ties them to particular operations or transactions. OpenTelemetry supports the inclusion of trace IDs and span IDs within logs, which links the log entries to specific activities in an application. 

This practice enables developers and operators to trace the sequence of events leading up to an issue or during a transaction, providing a deeper insight into application behavior and aiding quicker root cause analysis.
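For example, a log record that carries the active trace context (an invented record; the IDs are the hex trace and span identifiers an SDK would inject) can be joined to the corresponding trace in the backend:

{"timestamp": "2024-05-01T12:35:01Z", "level": "ERROR", "message": "payment authorization failed", "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736", "span_id": "00f067aa0ba902b7"}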

Use Appropriate Log Levels

Setting appropriate log levels (e.g., DEBUG, INFO, WARN, ERROR) helps in managing the verbosity of log output and focusing attention on important events. OpenTelemetry allows for configuring log levels dynamically, which helps in tuning the granularity of logging based on the operational context, such as during a debugging session or in a production environment. 

Proper use of log levels helps in reducing the noise in log data and conserving storage, while ensuring that critical information is always logged.
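One way to control verbosity at the Collector level is the filter processor from the contrib distribution. The sketch below, assuming the OTTL-style conditions that processor accepts, drops anything less severe than INFO:

processors:
  filter/drop-debug:
    logs:
      log_record:
        # drop records whose severity is below INFO (e.g., TRACE and DEBUG)
        - 'severity_number < SEVERITY_NUMBER_INFO'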

Integrate with Traces and Metrics

OpenTelemetry promotes the integration of logs with traces and metrics to provide a holistic view of observability. This integration enables the correlation of logs with specific trace segments and metric readings, offering a complete overview of system performance and health. 

By linking logs to traces and metrics, OpenTelemetry helps in creating a unified observability framework that enhances the detection of anomalies, speeds up troubleshooting, and supports proactive monitoring strategies.
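In Collector terms, this integration often comes down to routing logs, metrics, and traces through parallel pipelines to the same backend. The receiver and exporter names below are placeholders for whatever your deployment uses:

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      exporters: [otlphttp]
    logs:
      receivers: [otlp, filelog]
      exporters: [otlphttp]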

Get Full Observability with OpenTelemetry and Coralogix

Data plus context are key to supercharging observability with OpenTelemetry. Coralogix is open-source friendly and supports OpenTelemetry, so you can capture your application’s telemetry data (traces, logs, and metrics) as requests travel through its services and infrastructure. You can use OpenTelemetry’s APIs, SDKs, and tools to collect and export observability data from your environment directly to Coralogix.

Learn more about OpenTelemetry support in Coralogix
