OpenTelemetry Overview 

Monitoring distributed systems means collecting data from various sources, including servers, containers, and applications. In large organizations, this distribution of data makes it harder to get a single view of the performance of the entire system.

OpenTelemetry helps you streamline your full-stack observability efforts by giving you a single, universal format for collecting and sending telemetry data, making it easier for teams to improve performance and troubleshoot issues.

In this article, we will look at what OpenTelemetry is, its data pipeline, its components, and its benefits.

What is OpenTelemetry?

OpenTelemetry (a.k.a. OTel) is an open-source observability framework under the Cloud Native Computing Foundation (CNCF). OTel helps developers, operations, DevOps, and IT teams instrument, generate, collect, and export telemetry data. With OTel, you can monitor application health, troubleshoot issues, and gain insights into the system’s overall performance.

In the context of OpenTelemetry, observability refers to the ability to collect, measure, and analyze data about an application and its infrastructure’s behavior and performance. OpenTelemetry provides tools, APIs, libraries, SDKs, and agents to add observability to your system.

With OpenTelemetry, you can instrument your application in a vendor-agnostic way and then analyze the telemetry data in your backend tool of choice, whether Prometheus, Jaeger, Zipkin, or others.

How does OpenTelemetry work?

OpenTelemetry gathers and processes telemetry data through a series of stages, from collection to analysis and storage. Together, these stages make up the OpenTelemetry (OTel) pipeline, often called the telemetry data processing pipeline.

Here’s a high-level overview of how OpenTelemetry works:

  • Instrumentation: Add code to your applications and services to capture telemetry data, including traces, metrics, logs, and context information. Use OpenTelemetry libraries and SDKs to instrument your code.
  • Data Collection: The instrumented code collects telemetry data such as traces representing the flow of requests and interactions, metrics measuring performance and resource usage, and logs recording events and errors.
  • Data Exporters: After data is collected, it is sent to one or more data exporters, which are responsible for transmitting telemetry data to external systems or observability backends for further processing and storage.
  • Agent (optional): Agents help with data aggregation, batching, and load balancing. Use them as an intermediary between your code and exporters.
  • Data Transformation and Enrichment: Before export, data may go through transformation and enrichment stages, such as adding metadata, filtering, or performing aggregations.
  • Export to Backends: The data is then exported to observability backends or data storage solutions, which are responsible for storing, indexing, and making the data available for other processes.
  • Data Query and Analysis: Query and analyze the data using tools such as Grafana, Kibana, Prometheus, Jaeger, and custom-built dashboards. These tools provide insights into application behavior, performance, and issues.
  • Alerting and Monitoring (optional): Set up alerting rules and thresholds based on the data to proactively detect and respond to issues. Alerts can be triggered when metrics or trace data indicate abnormal behavior.
  • Visualization and Reporting: Visualizations and reports generated from the telemetry data help teams understand system behavior, track key performance indicators, and make informed decisions.

The entire OpenTelemetry process can be implemented using the various components as follows:

Components of OpenTelemetry

OpenTelemetry has several vendor-neutral and open-source components, including:

  • APIs and SDKs per programming language for generating and emitting telemetry data
  • Collector component to receive, process, and export telemetry data
  • OTLP protocol for transmitting telemetry data

These components work together to specify metrics to be measured, gather the relevant data, clean and organize the information, and export it in the appropriate format to a monitoring backend. 

OpenTelemetry’s components are loosely coupled, so you can easily choose which OTel parts you want to integrate. Also, these components can be implemented with a wide range of programming languages, including Go, Java, and Python. 

Let’s understand a bit about all these components:

APIs and SDKs

Application programming interfaces (APIs) help instrument your code and coordinate data collection across your system. OpenTelemetry defines a set of language-agnostic APIs that specify the structure and behavior of telemetry data and operations. For each supported programming language, there is a language-specific OpenTelemetry implementation, so you can use these APIs in the language of your choice.

Software development kits (SDKs) implement and support APIs via libraries that help gather, process, and export data. Unlike APIs, however, SDKs are language-specific.

OpenTelemetry Collector

The collector receives, processes, and exports telemetry data to your favorite observability tool, such as Coralogix. While not technically required, it is an extremely useful component of the OpenTelemetry architecture because it allows flexibility for receiving and sending the application telemetry to the backend(s).

The OpenTelemetry Collector consists of three components:

  • Receiver – Defines how data is gathered: pushed to the collector or pulled when required
  • Processor – Performs intermediary operations that prepare data for export: batching, adding metadata, etc.
  • Exporter – Sends telemetry data to an open-source or commercial backend; can push or pull data

Since the collector is just a specification for collecting and sending telemetry, it still requires a backend to receive, store, and process the data.
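
As an illustration, a minimal Collector configuration wiring the three components into a traces pipeline might look like the following sketch; the backend endpoint is a placeholder, not a real address.

```yaml
receivers:
  otlp:                    # receive OTLP data over gRPC and HTTP
    protocols:
      grpc:
      http:

processors:
  batch:                   # batch telemetry before exporting

exporters:
  otlphttp:                # forward to the backend of your choice
    endpoint: https://example-backend:4318   # placeholder endpoint

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```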

OTLP: OpenTelemetry Protocol

OpenTelemetry defines a vendor- and tool-agnostic protocol specification called OTLP (OpenTelemetry Protocol) for all kinds of telemetry data. OTLP can be used for transmitting telemetry data from the SDK to the Collector and from the Collector to the backend tool of your choice. The OTLP specification defines the encoding, transport, and delivery mechanism for the data, making it the future-proof choice.

Now that you understand what OpenTelemetry is and how it works, let’s look at some of the benefits of using it in your applications.

Benefits of OpenTelemetry

OTel provides a future-proof standard for working with telemetry data in your cloud-native applications. You spend less time debugging and more time delivering business-centric features.

Some of the other benefits that you can see after implementing OpenTelemetry in your applications include:

Improved Observability

OpenTelemetry provides a standardized, comprehensive approach to observability. You can trace requests, collect metrics, and analyze data to monitor system health effectively and gain deep insights into your application’s performance and behavior.

By adopting OpenTelemetry, you ensure that your full-stack observability practices remain up-to-date and aligned with industry standards. As the project evolves, you can benefit from new features and improvements without major rework.

Easy Setup for Distributed Tracing

OpenTelemetry enables distributed tracing, allowing you to trace requests as they traverse through different services and components of your application. This setup helps you visualize request flows, identify bottlenecks, and diagnose performance issues in complex, distributed systems.

By using OpenTelemetry, your organization doesn’t need to spend time developing an in-house solution or researching individual tools for your stack. You also conserve engineering effort if you decide to switch to a different vendor or add tools to your system: your team won’t need to develop new telemetry mechanisms after adding new tools.

Vendor Neutrality

OpenTelemetry provides a vendor-neutral, open-source standard for observability instrumentation. You can use the same instrumentation libraries and practices across different programming languages and frameworks.

With OpenTelemetry, you can collect telemetry data from different sources and send it to multiple platforms without significant configuration changes. OTel enables you to send Telemetry data to Coralogix or any backend of your choice, thus preventing vendor lock-in.

Flexibility for Data Metrics and Integration

OpenTelemetry allows you to control the telemetry data you send to your platforms, so you only capture the information you need, reducing unnecessary noise and excess costs. You can also add custom tags to metrics for streamlined organization and searching.
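
For instance, this kind of tagging and filtering can be done in the Collector’s processing stage. The sketch below is hypothetical (the tag value and span name are invented) and assumes the Collector’s attributes and filter processors are available in your distribution:

```yaml
processors:
  attributes/add-team-tag:       # add a custom tag to every span
    actions:
      - key: team
        value: payments          # placeholder value
        action: insert
  filter/drop-health-checks:     # drop noisy health-check spans
    traces:
      span:
        - 'name == "GET /healthz"'
```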

OpenTelemetry also allows exporters to integrate with various observability backends and platforms, including popular solutions like Prometheus, Jaeger, Zipkin, Elasticsearch, and more. So, you can choose any set of tools that fulfill your organizational needs.

Coralogix and OpenTelemetry

Data plus context are key to supercharging observability using OpenTelemetry. Coralogix supports OpenTelemetry to get telemetry data (traces, logs, and metrics) from your app as requests travel through its many services and other infrastructure. You can easily use OpenTelemetry’s APIs, SDKs, and tools to collect and export observability data from your environment directly to Coralogix.

Coralogix currently supports OpenTelemetry metrics v0.19. Combine your telemetry data and Coralogix to supercharge your system’s observability!

Benefits of Learning Python for Game Development

The world of computer games is vast, ranging from single-player agility games and logic puzzles with simple 2D animations to the stunning graphics in 3D-rendered massively multiplayer online role-playing games like Lost Ark.

Wanting to design and build your own games is a common motivator for learning to code, and building a portfolio of work is an essential step for breaking into the gaming industry. For experienced developers, creating your own game from scratch can be anything from a satisfying side project to an opportunity to experiment with elements of computing that don’t feature in your day job – like graphics and audio – or a taste of what would be involved in moving to a new role within computing.

Once you’ve devised an idea for a new computer game, one of your decisions needs to be which programming language to use to turn your ideas into reality. Python is one of the most popular programming languages in the world and an excellent choice for those new to coding. However, as you may have found if you’ve already started researching this topic, Python isn’t necessarily an obvious choice for game development. In this article, we’ll look at the pros and cons of Python for building computer games, some of the considerations for writing games in Python, and essential libraries and frameworks to help you get started.

Advantages of Python for game development

As we said above, Python is one of the world’s most popular programming languages, and with good reason. Its concise, human-readable syntax and built-in interpreter make Python an optimal choice for anyone new to programming. Its platform independence, extensive ecosystem of libraries, and high-level design ensure versatility and increase developer productivity.

How does that translate to game development? If you’re new to coding, you’ll find plenty of resources to help you start writing code in Python. As a high-level language, Python abstracts away details about how your code is run, leaving you focused on the logic and aesthetics of your game design. However, for games where performance is a key concern, that same abstraction is Python’s biggest disadvantage.

Developer overheads when working in Python are quite low, so you can get something up and running quickly. This is great for beginners and experienced developers alike. As a novice, seeing your progress and building up your game incrementally makes for a more rewarding experience and makes it easier to find mistakes as you go. For professionals, this makes Python an ideal tool for getting something working quickly in the case of prototyping while also providing a dev-friendly language for longer-term development.

Furthermore, the Python ecosystem is vast, with a friendly online community of developers you can turn to for pointers if you get stuck. Because it’s both open-source and flexible, Python is used for a wide range of applications, so you can find libraries for machine learning, artificial intelligence, manipulating audio, and processing graphics, as well as several libraries and frameworks aimed explicitly at game development (discussed in more detail below). All of this makes it easier to start developing games with Python.

Disadvantages of Python for game development

One of Python’s key upsides as a general programming language is also its main drawback when it comes to game development, at least where video games are concerned. The high rendering speeds, realistic graphics, and responsiveness that players expect from video games require developers to optimize their code for every microsecond of computing performance. Although high-level languages like Python are not designed to be slow, they don’t give developers the flexibility to control how memory is allocated and released or to interact with hardware-level components. Furthermore, Python is interpreted at runtime rather than compiled in advance – that doesn’t necessarily make a perceptible difference on modern hardware for most applications. Still, when speed is of the essence, it matters.

Instead, video game developers have tended towards lower-level languages that give them more precise control over resources, with C++ being the primary choice. While computing power and memory have increased significantly in the last decade, so have user expectations. When you’re building a game involving 3D graphics that emulate real-world physics from multiple perspectives while responding to hundreds of simultaneous inputs, you simply can’t afford to waste processor cycles. Combine that with decades of industry experience and knowledge poured into building the tools to support game development, and it is unsurprising that many of the best-known game engines, including Unreal, Unity, CryEngine, and Godot, are written wholly or partly in C++. Other popular languages within the gaming world include C# (also used by Unity) and Java.

When to use Python for game development

Despite these limitations, Python has plenty to offer game developers.

Prototyping games with Python

Because it’s easy to work with, Python is a great choice for prototyping all kinds of programs, including games. Even if you’re planning to build the final version in a different language for performance reasons, Python provides a quick turnaround time for trying out game logic, testing concepts on your target audience, or pitching ideas to colleagues and stakeholders.
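
As a toy illustration, game logic like this can be prototyped in plain Python before touching any engine or rendering code. All names below are invented for the example:

```python
# A minimal grid-movement prototype: game logic only, no graphics.
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def move(position, direction, width=10, height=10):
    """Return the new (x, y) after a move, clamped to the board edges."""
    dx, dy = MOVES[direction]
    x = min(max(position[0] + dx, 0), width - 1)
    y = min(max(position[1] + dy, 0), height - 1)
    return (x, y)

# Try out a short sequence of inputs before writing any rendering code.
pos = (0, 0)
for step in ["right", "right", "down", "up", "left"]:
    pos = move(pos, step)
print(pos)  # → (1, 0)
```

Once the rules feel right, the same functions can be dropped behind a Pygame loop, or re-implemented in an engine language if performance demands it.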

Learning to code via game development

If you’re using game development to learn how to code, then Python is an excellent way to become familiar with the basics and learn about object orientation. You’ll be able to progress relatively quickly and test what you’re building as you go. You’ll also find plenty of gaming libraries and tutorials for different experience levels.

Scripting gaming engines with Python

If your sights are set on a career in gaming, and you’re concerned that learning Python will be a waste of effort, you might want to think again. As a widely used, open-source scripting language, Python is a common choice for some supporting code in developing larger games. Unreal Engine, for example, supports Python for scripting tasks that you can perform manually from the editor, like importing assets or randomizing actor placement. Similarly, Unity supports Python for automating scene and sequence assembly, among other tasks.

Developing games with Python

Don’t let performance considerations turn you off Python for gaming completely. If you’re looking to develop a game that doesn’t need to be tuned for maximum performance and you’re not using one of the heavyweight gaming engines, then Python is a valid choice. For a sense of what’s possible, look at existing games built with Python, including Disney’s Toontown Online, Frets on Fire, The Sims 4, and Eve Online.

Getting started in game development with Python

The Python ecosystem offers gaming libraries for everyone from complete novices to experienced Pythonistas, including:

  • Pygame is a popular choice for building relatively simple 2D games.
  • Pygame Zero provides a tutorial for migrating games built in Scratch, making it ideal for complete beginners, including children.
  • Pyglet is a powerful cross-platform windowing and multimedia library for building games and other graphically rich applications.
  • Panda3D was originally developed by Disney to build Toontown Online and is now an open-source framework for building games and 3D-rendered graphics with Python. Under the hood, Panda3D is written in C++, and you can also create games with it directly in C++.
  • Ursina Engine is built on Panda3D and simplifies certain aspects of that library.
  • Kivy is a framework for developing Python apps for multiple platforms, including Android, iOS, and Raspberry Pi. You’ll find several tutorials showing you how to start building games for mobile with Kivy.

Final thoughts

Part of being a developer is choosing the correct programming language for the job. Python has a place within game development: as an entry point for those new to coding, as a prototyping tool for creating proofs of concept quickly to test your ideas and gather feedback, as a powerful scripting language to support other aspects of game development, and as a simple yet powerful language for building games for any platform.

While Python would not be the ideal choice where game performance is critical, it’s an excellent tool for developing something quickly or if you don’t want to learn a more complex language and gaming engine. 

Akka License Change: The Impact of Akka’s Move Away From “Open Source”

Akka’s license change has surprised many of us, but it didn’t come out of nowhere. Lightbend recently announced that Akka will be transitioning from an “Open Source” license to a “Source available” license called BSL 1.1. Let’s unpack this to understand what it all means.

What is the difference between Source Available and Open Source?

Source available is a relatively new term in software licensing. A source-available license allows users to view and access the code however they like, but it does not grant the same freedoms to use, modify, or repackage that code that an open source license does.

So why has Lightbend changed the Akka license?

Lightbend has listed a number of reasons why it felt it was time to shift to a source-available model. The key points are:

  • Lightbend has become the major contributor to the Akka codebase; it is less of a community effort than it was.
  • Organizations have been using Akka without contributing back, making it impossible to continue development at the desired pace.
  • Lightbend is trying to build a more sustainable model so that Akka development can continue for years to come.

And the Git commits back them up

When you look through the top names on the GitHub commits for the Akka repository, you see a mixture of people who have all spent significant time working on Akka. Viktor Klang, the greatest contributor by volume, spent ten years at the company but recently left, adding weight to Lightbend’s argument that they need to go ahead with the Akka BSL license change.

So who is impacted by the Akka license change?

Lightbend has clearly thought about this and has exempted some groups from the license change to avoid the collateral damage we’ve seen from similar license changes in the past.

Version 2.6.20 is the first “source available” version; version 2.6.19 is the last open source version of Akka. There will be no further bug fixes or improvements to these versions, and any subsequent updates will be applied to the source-available versions of Akka. This means that if a new CVE is detected in a version before 2.6.20, you may be forced to migrate. (Edit: As of 09/09/2022, Lightbend has changed its position on security patches and will backport critical security updates until September 2023.)

A license is needed for larger companies only

The license is only enforced if your company earns over $25 million in annual revenue. Revenue is something of a risky metric since many companies operate on razor-thin profit margins, but this indicates that Lightbend has no interest in going after the smaller companies that are leveraging Akka. 

The Play framework is exempt from licensing for now

Lightbend has ensured that users who are using the Play framework, and only the Play framework (i.e., they’re not using Akka code aside from the Play framework), do not need to get a license and should not be worried about the Akka license change. However, if you use Akka directly as part of a Play service, you fall within the scope of this change and will need to purchase a license from Lightbend.

And there is a change date in place, to revert to open source

A version of Akka will remain under the BSL license for 3 years; after those 3 years, it will revert to the Apache 2.0 license. This is a nice gesture, but no company is going to leverage 3-year-old dependencies in any meaningful way, so it’s difficult to see how this will have a material impact on the user experience, beyond perhaps making the code legally available for reuse in other projects. In short, this 3-year timeout doesn’t do much to limit the impact of the Akka license change.

How much will this new license cost?

Lightbend is operating on a “per core” model, with the base license starting at $1,995 per core (defined as a thread or vCPU). This could become expensive for heavy users of Akka.
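
As a rough back-of-the-envelope illustration of the quoted base price (the cluster size is invented, and actual pricing terms should be confirmed with Lightbend):

```python
# Hypothetical Akka license cost at the quoted base price.
PRICE_PER_CORE = 1995  # USD per core (thread or vCPU), as quoted

def license_cost(cores: int) -> int:
    """Total base license cost for a given number of cores."""
    return cores * PRICE_PER_CORE

# A modest cluster: 4 nodes with 8 vCPUs each.
print(license_cost(4 * 8))  # → 63840
```

Even a small deployment adds up quickly, which is what makes the per-core model potentially expensive for heavy Akka users.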

Lightbend has written a thorough FAQ to cover other questions you may have. Our advice, as is often the case in these situations, is to talk to a corporate lawyer and get an expert opinion on the Akka license change. 

How has the community responded?

Lightbend’s careful and considered approach to this change has paid off in many ways. There is far less outrage than we have seen around similar license changes, but that hasn’t stopped open source alternatives to Akka from gaining immediate traction. It will be interesting to see whether companies are happy to migrate to another tool, or indeed work on a fork of the existing Akka codebase, in much the same way users moved to Open Distro after the Elastic license change.

What does this mean for Open Source in general?

In early 2021, we saw Elastic change its licensing in a similar way to Akka now: from open source to source available. These are two giants of the software engineering world that became trusted and ubiquitous tools on the back of their open source communities.

There is an ethical gray area here. The nature of open source software development isn’t easily compatible with traditional ideas of ownership. There are stewards and core contributors, but ownership is less clear-cut. It is precisely this community spirit, and absence of commercial interest, that has driven so many engineers to contribute their time and skills to the growth of Akka. While Akka has been maintained heavily by Lightbend, 100 engineers have contributed to the core repository.

One could argue that Lightbend is simply responding to the decisions of many large companies to use Akka without giving back to the community, and that if companies had contributed time, code, or money proportional to the benefit they got from Akka, none of this would have been necessary.

Time will tell, but one thing is certain. Open Source software is changing. Each time a ubiquitous tool abandons the dream of community ownership and usage without limits, that change is driven forward. Will these shifts, and the subsequent potential for legal exposure, be enough to push companies away from open source once and for all?

So what can we do about this shift in Open Source?

As open source software looks increasingly precarious, many customers are looking to SaaS providers to ensure that they won’t be faced with a serious refactor in the future. Where once open source solutions were the best way to ensure complete control of your system, with each passing year we see that this is no longer the case.

If you’re interested in making the move to SaaS, Coralogix is a powerful observability platform with first-of-its-kind Streama technology, which allows it to process huge volumes of customer data at a fraction of the cost while delivering some of the best production insights available on the market.

With features like Flow Alerts, DataMap, and our TCO Optimizer, we have everything an organization needs to scale with its data and generate actionable insights that drive concrete reliability and data-driven, strategic thinking.