Quick Guide to OpenTelemetry: Concepts, Tutorial, and Best Practices

What Is OpenTelemetry?

OpenTelemetry is an open-source observability framework designed to provide a unified way to collect and export telemetry data (metrics, logs, and traces) from your applications and services. It aims to simplify the process of instrumentation, making it easier for developers to track their system’s performance and behavior in real time. 

OpenTelemetry supports multiple programming languages, offering APIs and SDKs that enable developers to capture data from their applications without tying them to a specific vendor’s observability tool. This flexibility allows for seamless integration with various data analysis and monitoring backends, facilitating a more comprehensive view of system health and performance.

What Type of Telemetry Data Does OpenTelemetry Handle? 

OpenTelemetry handles three primary types of telemetry data: 

  • Traces allow developers to track the journey of a request through various services, helping to identify bottlenecks and understand the flow of requests within a system. 
  • Metrics provide quantitative information about the operation of applications and infrastructure, such as response times, memory usage, and request counts, enabling performance monitoring and trend analysis. 
  • Logs offer qualitative insights through event records, detailing what happened in the system at a specific point in time.

Together, these data types offer a holistic view of system performance and behavior, aiding in debugging, monitoring, and optimizing applications.
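
As a rough illustration, here is a minimal sketch of how the three signal types are typically produced in application code. It assumes a recent @opentelemetry/api package and an already-configured SDK; the service, metric, and route names are placeholders.

const { trace, metrics } = require('@opentelemetry/api');

// These calls are safe no-ops until an SDK is registered (as in the tutorial below).
const tracer = trace.getTracer('checkout-service');              // placeholder service name
const meter = metrics.getMeter('checkout-service');
const requestCounter = meter.createCounter('checkout.requests'); // placeholder metric name

function handleCheckout() {
  const span = tracer.startSpan('handleCheckout'); // trace: one step in a request's journey
  requestCounter.add(1, { route: '/checkout' });   // metric: a quantitative measurement
  console.log('checkout handled');                 // log: a qualitative event record
  span.end();
}

handleCheckout();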

Learn more in our detailed guide to:

  • OpenTelemetry trace (coming soon)
  • OpenTelemetry metrics (coming soon)

OpenTelemetry Architecture and Components 

The architecture of OpenTelemetry consists of several core components, each playing a crucial role in the collection, processing, and exporting of telemetry data.

APIs

The OpenTelemetry APIs are language-specific interfaces that allow you to instrument your code to collect telemetry data. They are designed to be lightweight and efficient, with minimal impact on your application’s performance.

These APIs provide a consistent way to collect trace and metric data, regardless of your programming language. This consistency simplifies the process of instrumenting your code and ensures that the data collected is comparable across different parts of your system.
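
For example, a library or service can be instrumented purely against the API. The calls below are safe no-ops until an application registers an SDK, which is what keeps instrumentation decoupled from any particular backend (a minimal sketch; the instrumentation name and attribute are placeholders).

const { trace } = require('@opentelemetry/api');

const tracer = trace.getTracer('payment-library'); // placeholder instrumentation name
const span = tracer.startSpan('chargeCard');
span.setAttribute('payment.method', 'card');       // attach context to the span
span.end();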

SDK

The OpenTelemetry SDK, or Software Development Kit, is a set of tools that implements the OpenTelemetry APIs. It provides the functionality to collect, process, and export telemetry data from your services.

The SDK includes features like batching, retry, and throttling to ensure efficient data collection. It also includes a configuration system that allows you to control how data is collected and exported.

Collector

The OpenTelemetry Collector is a service that can receive, process, and export telemetry data. It provides a unified way to ingest and export data, making it easier to integrate OpenTelemetry with your existing infrastructure.

The Collector can be deployed as a standalone service or as a sidecar, depending on your needs. It is designed to be scalable and reliable, ensuring that your telemetry data is safely and efficiently handled.

Receiver

A Receiver is the component that gets telemetry data into the Collector. Receivers can handle different types of data, including traces, metrics, and logs; they accept the data, transform it into the Collector's internal format, and pass it on to processors. They are the entry point for data into the OpenTelemetry pipeline.

Processor

Processors in OpenTelemetry are components that take data from receivers and process it before sending it to exporters. They can perform a variety of tasks, such as batching, filtering, and enriching the data.

Processors are a critical part of the data pipeline, as they allow you to manipulate the data to fit your needs. They also improve the efficiency of the system by reducing the amount of data that needs to be exported.

Exporter

Exporters take the processed data from processors and export it to a backend of your choice. This could be a database, a monitoring service, or any other system where you want to store and analyze your telemetry data. Exporters are designed to be flexible and extensible, allowing you to integrate OpenTelemetry with various backends and analysis tools.
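
To make the pipeline concrete, here is a minimal sketch of a Collector configuration that wires a receiver, a processor, and an exporter into a traces pipeline. Treat it as an illustration rather than a drop-in file: the receiver, processor, and exporter names available to you depend on the Collector distribution and backend you use.

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  logging:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]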

Learn more in our detailed guide to OpenTelemetry exporter (coming soon)

Benefits of OpenTelemetry 

Let’s look at some of the reasons why you might choose OpenTelemetry for observability:

  • Vendor neutrality: With OpenTelemetry, you’re not constrained to a single vendor’s solution. Instead, you can send your telemetry data to any backend of your choice. This means you have the freedom to switch between vendors as your needs change, without needing to alter your instrumentation code.
  • Data flexibility: OpenTelemetry does not restrict you to a predefined set of metrics. You can capture and analyze any type of data that you deem useful for your operations. This gives you complete flexibility to monitor your systems.
  • Easy setup: OpenTelemetry provides auto-instrumentation libraries for many popular programming languages. These libraries automatically capture telemetry data from your applications, without requiring any code changes. This simplifies the setup process and reduces the time and effort needed to start monitoring your systems.
  • Custom metrics and traces: For more advanced users, OpenTelemetry provides APIs for manual instrumentation. These allow you to capture custom metrics and traces that are specific to your applications, as shown in the sketch after this list.
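
For instance, a custom metric can track how long a specific operation takes. A minimal sketch, assuming a recent @opentelemetry/api and a configured metrics SDK; the meter, metric, and attribute names are placeholders.

const { metrics } = require('@opentelemetry/api');

const meter = metrics.getMeter('report-service');                       // placeholder meter name
const renderDuration = meter.createHistogram('report.render.duration'); // custom metric (milliseconds)

function renderReport() {
  const start = Date.now();
  // ... generate the report ...
  renderDuration.record(Date.now() - start, { 'report.type': 'monthly' });
}

renderReport();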

Learn more in our detailed guide to OpenTelemetry instrumentation (coming soon)

OpenTelemetry vs. Prometheus 

Prometheus is sometimes considered an alternative to OpenTelemetry, even though it is a different type of tool. Prometheus is primarily a monitoring solution that collects and stores metrics data. It uses a pull-based model to gather metrics from configured endpoints at specified intervals. This approach allows Prometheus to collect time-series data efficiently, which can be queried and visualized using its query language (PromQL).

In contrast, OpenTelemetry provides a broader framework for observability, including metrics, logs, and traces. It is designed to be agnostic to the backend monitoring or observability platforms, offering flexibility in how and where data is exported. OpenTelemetry’s auto-instrumentation capabilities reduce the need for manual instrumentation, making it easier to collect telemetry data across various languages and frameworks. 

While Prometheus can be used as a backend for metrics data collected via OpenTelemetry, OpenTelemetry’s scope extends beyond just metrics, aiming to provide a comprehensive observability framework that includes distributed tracing and logging.

Learn more in our detailed guide to OpenTelemetry vs Prometheus (coming soon)

Tutorial: Getting Started with OpenTelemetry 

Let’s look at how to use OpenTelemetry at a basic level.

Step 1: Setup Node.js

For the purpose of this tutorial, we’ll assume you’re working with a Node.js environment. You’ll need to have Node.js and npm installed on your machine. If you haven’t installed these yet, the official Node.js website provides detailed instructions.

Once you have Node.js and npm installed, create a new directory for this project using the mkdir command, and navigate into it using the cd command.
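
For example, assuming a project directory named otel-demo (the name is just a placeholder):

mkdir otel-demo
cd otel-demo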

Step 2: Create and Launch an HTTP Server

In your project directory, create a new file named server.js. This file will contain the code for our HTTP server. For now, we’ll create a simple server that responds with a “Hello, World!” message to every request.

You can create and launch the server using Node.js’ built-in http module. This module allows Node.js to transfer data over the HTTP protocol without needing any external package. Here’s how you can do it:

const http = require('http');

// Create a simple HTTP server that answers every request with "Hello, World!"
const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, World!\n');
});

// Listen on localhost port 3000
server.listen(3000, '127.0.0.1', () => {
  console.log('Server running at http://127.0.0.1:3000/');
});

Step 3: Add Dependencies

With the server in place, the next step is to add the necessary dependencies. For OpenTelemetry, you’ll need to install the following packages:

  • @opentelemetry/node—automatically instruments Node.js applications
  • @opentelemetry/core—provides core functionalities and types for OpenTelemetry
  • @opentelemetry/tracing—provides tracing functionalities
  • @opentelemetry/api—provides the vendor-neutral API used for manual instrumentation in the later steps

You can install these using npm:

npm install @opentelemetry/node @opentelemetry/core @opentelemetry/tracing @opentelemetry/api

Step 4: Initialize the OpenTelemetry SDK

Having installed the required dependencies, you can now initialize the OpenTelemetry SDK. Create a new file named tracer.js in your project directory to hold the initialization code.

Initializing the SDK involves creating a tracer provider, configuring a span processor and exporter, and registering the provider with the OpenTelemetry API. Here's how you can do it:

const { NodeTracerProvider } = require('@opentelemetry/node');
const { SimpleSpanProcessor, ConsoleSpanExporter } = require('@opentelemetry/tracing');

// The tracer provider is the SDK's entry point for creating tracers
const provider = new NodeTracerProvider();

// SimpleSpanProcessor forwards each span to the exporter as soon as it ends;
// ConsoleSpanExporter prints finished spans to the console
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));

// Register the provider so that @opentelemetry/api calls use this SDK
provider.register();

Step 5: Instrument the HTTP Server

After initializing the OpenTelemetry SDK, you can proceed to instrument your HTTP server. This involves modifying the server.js file to load tracer.js (so the SDK is set up before any requests arrive), import the OpenTelemetry API, and use it to create spans for incoming HTTP requests.

// Load the SDK initialization from Step 4 before anything else
require('./tracer');

const http = require('http');
const api = require('@opentelemetry/api');

const server = http.createServer((req, res) => {
  // Start a span covering the handling of this request
  const span = api.trace.getTracer('example-http-server').startSpan('handleRequest');

  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, World!\n');

  span.end();
});


server.listen(3000, '127.0.0.1', () => {
  console.log('Server running at http://127.0.0.1:3000/');
});

Step 6: Add Custom Instrumentation

In addition to the automatic instrumentation provided by OpenTelemetry, you can also add custom instrumentation to your application. This involves creating custom spans for certain parts of your code that you want to monitor.

For example, let’s say you want to monitor how long it takes to generate the “Hello, World!” response. You can create a custom span for this:

// Load the SDK initialization from Step 4 before anything else
require('./tracer');

const http = require('http');
const api = require('@opentelemetry/api');

const server = http.createServer((req, res) => {
  const span = api.trace.getTracer('example-http-server').startSpan('handleRequest');

  // A second custom span that measures just the response generation
  const responseSpan = api.trace.getTracer('example-http-server').startSpan('generateResponse');

  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, World!\n');

  responseSpan.end();
  span.end();
});


server.listen(3000, '127.0.0.1', () => {
  console.log('Server running at http://127.0.0.1:3000/');
});

Step 7: Run the Application

Finally, you can run your application. This involves starting the HTTP server and making some requests to it. You can start the server using the node command followed by the name of the server file:

node server.js

Once the server is running, you can send requests to it using a browser or a tool like curl. Each request generates a trace that includes the spans created in your server code, and the traces are printed to the console by the ConsoleSpanExporter we configured earlier.
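
For example, using curl with the address from server.js:

curl http://127.0.0.1:3000/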

OpenTelemetry Best Practices 

You can take several measures to make the most of OpenTelemetry and ensure you use the data effectively.

Keep Initialization Separate from Instrumentation

In the initialization phase, you set up the OpenTelemetry SDK, define the configuration parameters, and specify the backend to which the telemetry data will be sent. This process sets up the ground rules for how OpenTelemetry will operate within your system. It’s important to execute this phase carefully, as a misconfiguration could lead to the loss of valuable telemetry data or even impact the performance of your system.

The instrumentation phase, on the other hand, involves integrating OpenTelemetry into your application code. It’s during this phase that you decide which parts of your system should be observed and what data should be collected. This process requires a deep understanding of your application’s behavior and observability needs.

Keeping these two phases separate helps prevent confusion and ensures that each can be executed effectively. It also allows for greater flexibility, as you can change your configuration parameters or switch your observability backend without having to touch the instrumentation code.
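
In the tutorial above, this is exactly why the SDK setup lives in tracer.js while server.js only talks to the API. A minimal sketch of the pattern, reusing the tutorial's packages; the exporter is the piece you would swap for your own backend, and 'my-service' is a placeholder name.

// tracer.js (initialization): the only place that knows about the SDK and the exporter
const { NodeTracerProvider } = require('@opentelemetry/node');
const { SimpleSpanProcessor, ConsoleSpanExporter } = require('@opentelemetry/tracing');

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter())); // swap the exporter here; app code is untouched
provider.register();

// application code (instrumentation): depends only on the vendor-neutral API
const { trace } = require('@opentelemetry/api');
const span = trace.getTracer('my-service').startSpan('doWork');
span.end();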

Use Attributes Consistently

Attributes are key-value pairs that you can add to your telemetry data to provide additional context. They can be used to identify the source of a request, the version of the software that handled it, or any other relevant information.

Consistent use of attributes makes your telemetry data more usable and valuable. It enables you to filter and group your data based on these attributes, helping you to identify patterns and uncover insights. For example, if you consistently add a ‘version’ attribute to your telemetry data, you can easily compare the performance of different versions of your software.

However, using attributes consistently requires a well-thought-out naming convention and a clear understanding of what information is important to your observability needs. It’s also important to avoid using too many attributes, as this can overwhelm your observability backend and make your data harder to analyze.
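
A minimal sketch of what consistent attribute use might look like; the keys and values here are illustrative, not a prescribed schema, and they assume the tutorial's setup.

const { trace } = require('@opentelemetry/api');

const span = trace.getTracer('order-service').startSpan('processOrder');

// Use the same keys, spelled the same way, in every service that emits telemetry
span.setAttributes({
  'service.version': '1.4.2',             // enables version-to-version comparisons
  'deployment.environment': 'production', // enables filtering by environment
});
span.end();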

Carefully Consider Cardinality

Cardinality, in the context of OpenTelemetry, refers to the number of unique combinations of a set of attributes. When defining your attributes, it’s important to carefully consider their cardinality. Try to strike a balance between the level of detail you need and the complexity it adds to your data. 

For example, adding an attribute that captures the exact time of a request gives almost every span or data point a unique attribute combination, sharply increasing cardinality. Timing information like this is already carried by the telemetry itself (span timestamps and metric data points), so it rarely needs to be an attribute.

In addition, keep in mind that some observability backends have limitations regarding the cardinality they can handle. Exceeding these limits can result in lost data or decreased performance. Therefore, before adding a new attribute, consider its potential impact on cardinality and make sure your backend can handle it.
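
For example, compare an attribute with a handful of possible values against one that is effectively unique per request (a sketch; the names are illustrative):

const { trace } = require('@opentelemetry/api');

const span = trace.getTracer('order-service').startSpan('handleRequest');

// Low cardinality: only as many values as there are routes
span.setAttribute('http.route', '/orders/:id');

// High cardinality: unique per request; think twice before adding attributes like this
// span.setAttribute('request.id', 'f3b1c9e0-unique-per-request');

span.end();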

Integrate with Logging and Tracing

Logs provide detailed information about individual events, while traces show how requests flow through your system. Jaeger and Zipkin, for example, are two popular open-source distributed tracing systems, and integrating OpenTelemetry with tools like these allows you to correlate telemetry data with log entries and trace data, providing deeper insight into your system's behavior. 

However, you need to ensure that the data from all these sources can be correlated, which often requires consistent use of identifiers across all data sources. It’s also important to consider the impact on your system’s performance and the capacity of your observability backend.
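
One common way to make logs and traces correlatable is to include the active trace ID in every log line, so a log entry can be looked up against the matching trace in Jaeger or Zipkin. A minimal sketch, assuming a recent @opentelemetry/api (which exposes trace.getActiveSpan) and a registered SDK:

const { trace } = require('@opentelemetry/api');

function logWithTrace(message) {
  const activeSpan = trace.getActiveSpan(); // undefined if no span is currently active
  const traceId = activeSpan ? activeSpan.spanContext().traceId : undefined;
  console.log(JSON.stringify({ message, traceId }));
}

logWithTrace('order processed');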

Learn more in our detailed guide to OpenTelemetry logging (coming soon)

Get Full Observability with OpenTelemetry and Coralogix

Data plus context are key to supercharging observability with OpenTelemetry. Coralogix is open-source friendly and supports OpenTelemetry for collecting your app's telemetry data (traces, logs, and metrics) as requests travel through its services and infrastructure. You can use OpenTelemetry's APIs, SDKs, and tools to collect and export observability data from your environment directly to Coralogix.

Learn more about OpenTelemetry support in Coralogix
