Python Logging Best Practices: The Ultimate Guide

Python is a highly versatile language with a large developer community, and it is essential in data science, machine learning, embedded applications, and back-end web and cloud applications.

And logging is critical to understanding software behavior in Python. Once logs are in place, log monitoring can be utilized to make sense of what is happening in the software. Python includes several logging libraries that create and direct logs to their assigned targets.

This article will go over Python logging best practices to help you get the best log monitoring setup for your organization.  

What is Python logging?

Logging in Python, like other programming languages, is implemented to indicate events that have occurred in software. Logs should include descriptive messages and variable data to communicate the state of the software at the time of logging. 

They also communicate the severity of the event using unique log levels. Logs can be generated using the Python standard library.

Python logging module

The Python standard library provides a logging module to log events from applications and libraries. Once configured, the logging module becomes part of the Python interpreter process that is running the code.

In other words, Python logging is global. You can also configure the Python logging subsystem using an external configuration file. The specifications for the logging configuration format are found in the Python standard library documentation.

The logging library is modular and offers four categories of components:

  • Loggers expose the interface used by the application code.
  • Handlers send log records (created by loggers) to the appropriate destination.
  • Filters can determine which log records are output.
  • Formatters specify the layout of the final log record output.

Multiple logger objects are organized into a tree representing various parts of your system and the different third-party libraries you have installed. When you send a message to one of the loggers, the message gets output on that logger’s handlers using a formatter attached to each handler.

The message then propagates up the logger tree until it hits the root logger or a logger in the tree configured with .propagate=False. This hierarchy allows logs to be captured up the subtree of loggers, so a single handler can catch all logging messages.

Python loggers

The logging.Logger objects offer the primary interface to the logging library. These objects provide the logging methods to issue log requests along with the methods to query and modify their state. From here on out, we will refer to Logger objects as loggers.

Creating a new logger

The factory function logging.getLogger(name) is typically used to create loggers. By using the factory function, clients can rely on the library to manage loggers and access loggers via their names instead of storing and passing references to loggers.

The name argument in the factory function is typically a dot-separated hierarchical name, i.e. a.b.c. This naming convention enables the library to maintain a hierarchy of loggers. Specifically, when the factory function creates a logger, the library ensures a logger exists for each level of the hierarchy specified by the name, and every logger in the hierarchy is linked to its parent and child loggers.
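
For example, here is a small sketch of the naming behavior (the names are illustrative):

import logging

parent = logging.getLogger('app')
child = logging.getLogger('app.io')

# Dotted names link loggers into a hierarchy...
assert child.parent is parent
# ...and repeated lookups by the same name return the same instance.
assert logging.getLogger('app.io') is child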

Threshold logging level

Each logger has a threshold logging level that determines whether a log request should be processed. A logger processes a log request if the numeric value of the requested logging level is greater than or equal to the numeric value of the logger’s threshold logging level.

Clients can retrieve and change the threshold logging level of a logger via the Logger.getEffectiveLevel() and Logger.setLevel(level) methods, respectively. When the factory function creates a logger, it leaves the logger’s own level unset, so the logger’s effective threshold level is inherited from its nearest ancestor, as determined by its name.
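
A brief sketch of this inheritance behavior (the logger names are illustrative):

import logging

parent = logging.getLogger('app')
child = logging.getLogger('app.io')

parent.setLevel(logging.WARNING)
# The child's own level is unset, so it inherits its parent's level.
assert child.getEffectiveLevel() == logging.WARNING

child.setLevel(logging.DEBUG)
# An explicitly set level overrides the inherited one.
assert child.getEffectiveLevel() == logging.DEBUG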

Log levels

Log levels allow you to define event severity for each log so logs are easily analyzed. The predefined log levels are CRITICAL, ERROR, WARNING, INFO, and DEBUG, from highest to lowest severity, and logging.getLevelName(level) maps between a level’s numeric value and its name. Developers can also define custom levels by registering them with logging.addLevelName(level, levelName).

LogWithLevelName = logging.getLogger('myLoggerSample')
level = logging.getLevelName('INFO')
LogWithLevelName.setLevel(level)
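
As a hedged illustration of a custom level, the following registers a TRACE level; the name and the numeric value 15 are arbitrary choices for this example, not part of the standard library:

import logging

TRACE = 15  # sits between DEBUG (10) and INFO (20)
logging.addLevelName(TRACE, 'TRACE')

logger = logging.getLogger('myLoggerSample')
logger.setLevel(TRACE)
logger.log(TRACE, 'A custom-level message')  # use Logger.log for custom levels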

Printing vs logging

Python comes with two different ways to report events from software: print() and logging. Both communicate event data but pass this information to different destinations using different methods.

The print function sends data exclusively to standard output. This can be convenient for quick testing as a function is developed, but it is not practical in production software. There are two critical reasons not to use print() in software:

  • If your code is used by other tools or scripts, the user will not know the context of the print messages.
  • When running Python software in containers or as a background service, plain print output may never be seen, since it carries no severity or context and cannot be redirected to files or log collectors without changing the code.

The logging library also provides many features contributing to Python logging best practices. These include identifying the line of the file, function, and time of log events, distinguishing log events by their importance, and providing formatting to keep log messages consistent. 

Python logging examples

Here are a few code snippets to illustrate how to use the Python logging library.

Snippet 1: Creating a logger with a handler and a formatter

# main.py
import logging, sys

def _init_logger():
    # Create a logger named 'app'
    logger = logging.getLogger('app')
    # Set the threshold logging level of the logger to INFO
    logger.setLevel(logging.INFO)
    # Create a stream-based handler that writes log entries
    # into the standard output stream
    handler = logging.StreamHandler(sys.stdout)
    # Create a formatter for the logs
    formatter = logging.Formatter(
        '%(created)f:%(levelname)s:%(name)s:%(module)s:%(message)s')
    # Set the created formatter as the formatter of the handler
    handler.setFormatter(formatter)
    # Add the created handler to this logger
    logger.addHandler(handler)

_init_logger()
_logger = logging.getLogger('app')

In snippet 1, a logger is created with a log level of INFO. Any logs that have a severity lower than INFO will not be printed (e.g., DEBUG logs). A new handler is created and assigned to the logger. New handlers can be added to send logging output to streams like sys.stdout or to any file-like object.

A formatter is created and added to the handler to render log records into their final textual form. With this formatter, the time of the log request (as an epoch timestamp), the logging level, the logger’s name, the module name, and the log message will all be printed.

Snippet 2: Issuing log requests

# main.py
import os

_logger.info('App started in %s', os.getcwd())

In snippet 2, an info log states the app has started. When the app is started in the folder /home/kali with the logger created in snippet 1, the following log entry will be generated in the stdout stream:

1586147623.484407:INFO:app:main:App started in /home/kali/

Snippet 3: Issuing log requests with positional arguments

# app/io.py
import logging

def _init_logger():
    logger = logging.getLogger('app.io')
    logger.setLevel(logging.INFO) 

_init_logger()
_logger = logging.getLogger('app.io')

def write_data(file_name, data):
    try:
        # write data
        _logger.info('Successfully wrote %d bytes into %s', len(data), file_name)
    except FileNotFoundError:
        _logger.exception('Failed to write data into %s', file_name)

This snippet logs an informational message every time data is written successfully via write_data. If a write fails, the snippet logs an error message that includes the stack trace in which the exception occurred. The logs here use positional arguments to enhance the value of the logs and provide more contextual information.

With the logger created using snippet 1, successful execution of write_data would create a log similar to:

1586149091.005398:INFO:app.io:io:Successfully wrote 134 bytes into /tmp/tmp_data.txt

If the execution fails, then the created log will appear like:

1586149219.893821:ERROR:app.io:io:Failed to write data into /tmp1/tmp_data.txt

Traceback (most recent call last):
  File "/home/kali/program/app/io.py", line 12, in write_data
    print(open(file_name), data)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp1/tmp_data.txt'

As an alternative to positional arguments, the same output can be achieved using named placeholders and a dictionary of values, as in:

_logger.info('Successfully wrote %(data_size)s bytes into %(file_name)s',
    {'data_size': len(data), 'file_name': file_name})

Types of Python logging methods

Every logger offers a shorthand method to log requests by level. Each pre-defined log level is available in shorthand; for example, Logger.error(msg, *args, **kwargs). 

In addition to these shorthand methods, loggers also offer a general method to specify the log level in the arguments. This method is useful when using custom logging levels.

Logger.log(level, msg, *args, **kwargs)

Another useful method is used for logs inside exception handlers. It issues log requests with the logging level ERROR and captures the current exception as part of the log entry. 

Logger.exception(msg, *args, **kwargs)

In each of the methods above, the msg and args arguments are combined to create log messages captured by log entries. They each support the keyword argument exc_info to add exception information to log entries and stack_info and stacklevel to add call stack information to log entries. Also, they support the keyword argument extra, which is a dictionary, to pass values relevant to filters, handlers, and formatters.
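
A short sketch of these keyword arguments in use (the user_id field is an arbitrary example, not a library built-in):

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('app')

# Fields passed via 'extra' become attributes of the log record and are
# then visible to filters, formatters, and handlers.
logger.warning('Quota exceeded', extra={'user_id': 42})

try:
    1 / 0
except ZeroDivisionError:
    # exc_info=True appends the current exception and traceback to the entry.
    logger.error('Computation failed', exc_info=True)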

How to get started with Python logging

To get the most out of your Python logging, logs need to be set up consistently and be ready to analyze. When setting up your Python logging, use the best practices below.

  1. Create loggers using .getLogger()

The logging.getLogger() factory function helps the library manage the mapping from logger names to logger instances and maintain a hierarchy of loggers. In turn, this mapping and hierarchy offer the following benefits:

  • Clients can use the factory function to access the same logger in different application parts by merely retrieving the logger by its name.
  • Only a finite number of loggers are created at runtime (under normal circumstances).
  • Log requests can be propagated up the logger hierarchy.
  • When unspecified, the threshold logging level of a logger can be inferred from its ascendants.
  • The configuration of the logging library can be updated at runtime by merely relying on the logger names.
  2. Use pre-defined logging levels

Use the shorthand logging.<logging level>() method to log at pre-defined logging levels. Besides making the code a bit shorter, the use of these functions helps partition the logging statements into two sets:

  • Those that issue log requests with pre-defined logging levels.
  • Those that issue log requests with custom logging levels.

The pre-defined logging levels capture almost all logging scenarios that occur. Developers are familiar with these logging levels across different programming languages, making them easy to understand. The use of these values also reduces deployment, configuration, and maintenance burdens.

  3. Create module-level loggers

While creating loggers, we can create a logger for each class or create a logger for each module. While the first option enables fine-grained configuration, it leads to more loggers in a program, i.e., one per class. In contrast, the second option can help reduce the number of loggers in a program. So, unless such fine-grained configuration is necessary, create module-level loggers.
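
A common idiom for a module-level logger uses __name__, which yields a dotted name like app.io and so slots the logger into the hierarchy automatically:

# app/io.py
import logging

_logger = logging.getLogger(__name__)  # one logger per module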

  4. Use .LoggerAdapter to inject local contextual information

Use logging.LoggerAdapter() to inject contextual information into log records. The class can also modify the log message and data provided as part of the request. Since the logging library does not manage these adapters, they cannot be retrieved by common names. Use them to inject contextual information local to a module or class.
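
A minimal sketch of an adapter (the request_id field and its value are illustrative):

import logging

logging.basicConfig(format='%(levelname)s:%(request_id)s:%(message)s')
logger = logging.getLogger('app.web')

# Every record issued via the adapter carries the injected request_id.
adapter = logging.LoggerAdapter(logger, {'request_id': 'abc-123'})
adapter.warning('Slow response')  # -> WARNING:abc-123:Slow response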

  5. Use filters or .setLogRecordFactory() to inject global contextual information

Two options exist to seamlessly inject global contextual information (common across an app) into log records. The first option is to use the filter support to modify the log records passed to filters. For example, the following filter injects version information into incoming log records.

def version_injecting_filter(logRecord):
    logRecord.version = '3'
    return True

There are two downsides to this option. First, if filters depend on the data in log records, then filters that inject data into log records should be executed before filters that use the injected data. Thus, the order of filters added to loggers and handlers becomes crucial. Second, the option “abuses” the support to filter log records to extend log records.

The second option is to initialize the logging library with a log-record-creating factory function via logging.setLogRecordFactory(). Since the injected contextual information is global, it can be injected into log records when they are created in the factory function. This ensures the data will be available to every filter, formatter, logger, and handler in the program.

The downside of this option is that we have to ensure factory functions contributed by different components in a program play nicely with each other. While log record factory functions could be chained, such chaining increases the complexity of programs.
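
A minimal sketch of the factory approach, chaining onto the existing factory (the version field mirrors the filter example above):

import logging

_old_factory = logging.getLogRecordFactory()

def record_factory(*args, **kwargs):
    # Delegate record creation, then attach the global context.
    record = _old_factory(*args, **kwargs)
    record.version = '3'
    return record

logging.setLogRecordFactory(record_factory)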

  6. Use .disable() to inhibit processing of low-level requests

A logger will process a log request based on the effective logging level. The effective logging level is the higher of two logging levels: the logger’s threshold level and the library-wide level. Set the library-wide logging level using the logging.disable(level) function. This is set to 0 by default so that every log request will be processed. 

Using this function, you can throttle the logging output of an app by raising the logging level across the whole app. This can be important for keeping log volumes in check in production software.
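
For example:

import logging

# Discard all requests at INFO and below, across every logger in the process.
logging.disable(logging.INFO)
logging.getLogger('app').info('Suppressed')    # dropped
logging.getLogger('app').warning('Processed')  # still handled

# Passing logging.NOTSET (0) re-enables everything.
logging.disable(logging.NOTSET)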

Advantages and disadvantages of Python logging

Python’s logging library is more complicated than simple print() statements. The library has many great features that provide a complete solution for obtaining log data needed to achieve full-stack observability in your software.

Here we show the high-level advantages and disadvantages of the library.

  1. Configurable logging

The Python logging library is highly configurable. Logs can be formatted before printing, can have placeholder data filled in automatically, and can be turned on and off as needed. Logs can also be sent to a number of different locations for easier reading and debugging. All of these settings are codified, so they are well-defined for each logger.

  2. Save tracebacks

When failures occur, it is useful to log debugging information showing where and when the failure happened. These tracebacks can be generated automatically by the Python logging library to help speed up troubleshooting and fixes.

  3. Difficulty using consistent logging levels

Log levels used in different scenarios can be subjective across a development team. For proper analysis, it is important to keep log levels consistent. Create a well-defined strategy for your team about when to use each logging level available and when a custom level is appropriate. 

  4. Design of multiple loggers

Since the logging module is so flexible, logging configurations can quickly get complicated. Create a strategy for your team for how loggers will be defined and configured to keep logs consistent across developers.

Python logging platforms

Let’s look at an example of a basic logger in Python:

import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename='/tmp/myapp.log',
                    filemode='w')

logging.debug("Debug message")
logging.info("Informative message")
logging.error("Error message")

Line 1: imports the logging module.

Line 2: calls the basicConfig function, passing arguments that create the log file. In this case, we indicate the severity level, the message format, and the filename and file mode so the function overwrites the log file.

Lines 3 to 5: issue a message at each logging level.

The default format for log records is SEVERITY:LOGGER:MESSAGE, but the code above overrides it. Hence, if you run the code as is, you’ll get this output:

2021-07-02 13:00:08,743 DEBUG Debug message
2021-07-02 13:00:08,743 INFO Informative message
2021-07-02 13:00:08,743 ERROR Error message

Regarding the output, you can set the destination of the log messages. As a first step, you can print messages to the screen using this sample code:

import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
logging.debug('This is a log message.')

If your goals are aimed at the Cloud, you can take advantage of Python’s set of logging handlers to redirect content. For example, you can write logs to Stackdriver Logging from Python applications by using Google’s Python logging handler (currently in beta release) included with the Stackdriver Logging client library, or by using the client library to access the API directly. When developing your logger, take into account that the root logger shouldn’t use your cloud log handler: since the Python client library for Stackdriver Logging also does logging, you may get a recursive loop if the root logger uses that handler.

Basic Python logging concepts

When we use a logging library, we perform/trigger the following common tasks while using the associated concepts.

  1. A client issues a log request by executing a logging statement. Often, such logging statements invoke a function/method in the logging (library) API by providing the log data and the logging level as arguments. The logging level specifies the importance of the log request. Log data is often a log message, which is a string, along with some extra data to be logged. Often, the logging API is exposed via logger objects.
  2. To enable the processing of a request as it threads through the logging library, the logging library creates a log record that represents the log request and captures the corresponding log data.
  3. Based on how the logging library is configured (via a logging configuration), the logging library filters the log requests/records. This filtering involves comparing the requested logging level to the threshold logging level and passing the log records through user-provided filters.
  4. Handlers process the filtered log records to either store the log data (e.g., write the log data into a file) or perform other actions involving the log data (e.g., send an email with the log data). In some logging libraries, before processing log records, a handler may again filter the log records based on the handler’s logging level and user-provided handler-specific filters. Also, when needed, handlers often rely on user-provided formatters to format log records into strings, i.e., log entries.

Independent of the logging library, the above tasks are performed in an order similar to that shown in Figure 1.

Figure 1: The flow of tasks when logging via a logging library

Python logging methods

Every logger offers the following logging methods to issue log requests:

  • Logger.debug(msg, *args, **kwargs)
  • Logger.info(msg, *args, **kwargs)
  • Logger.warning(msg, *args, **kwargs)
  • Logger.error(msg, *args, **kwargs)
  • Logger.critical(msg, *args, **kwargs)

Each of these methods is a shorthand to issue log requests with the corresponding pre-defined logging level as the requested logging level.

In addition to the above methods, loggers also offer the following two methods:

  • Logger.log(level, msg, *args, **kwargs) issues log requests with explicitly specified logging levels. This method is useful when using custom logging levels.
  • Logger.exception(msg, *args, **kwargs) issues log requests with the logging level ERROR and captures the current exception as part of the log entry. Consequently, clients should invoke this method only from an exception handler.

The msg and args arguments in the above methods are combined to create log messages captured by log entries. All of the above methods support the keyword argument exc_info to add exception information to log entries and stack_info and stacklevel to add call stack information to log entries. Also, they support the keyword argument extra, which is a dictionary, to pass values relevant to filters, handlers, and formatters.

When executed, the above methods perform/trigger all of the tasks shown in Figure 1 and the following two tasks:

  1. After deciding to process a log request based on its logging level and the threshold logging level, the logger creates a LogRecord object to represent the log request in the downstream processing of the request. LogRecord objects capture the msg and args arguments of logging methods and the exception and call stack information along with source code information. They also capture the keys and values in the extra argument of the logging method as fields.
  2. After every handler of a logger has processed a log request, the handlers of its ancestor loggers process the request (in the order they are encountered walking up the logger hierarchy). The Logger.propagate field, which is True by default, controls this behavior.

Beyond logging levels, filters provide a finer means to filter log requests based on the information in a log record, e.g., ignore log requests issued in a specific class. Clients can add and remove filters to/from loggers using Logger.addFilter(filter) and Logger.removeFilter(filter) methods, respectively.
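
A short sketch of attaching and detaching a filter; here the filter is a plain callable, which the library accepts in place of a logging.Filter instance (the health-check rule is illustrative):

import logging

def drop_health_checks(record):
    # Reject records whose message mentions health checks; keep the rest.
    return 'healthcheck' not in record.getMessage()

logger = logging.getLogger('app.web')
logger.addFilter(drop_health_checks)
logger.removeFilter(drop_health_checks)  # filters can be detached later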

Python logging configuration

The logging classes introduced in the previous section provide methods to configure their instances and, consequently, customize the use of the logging library. Snippet 1 demonstrates how to use configuration methods. These methods are best used in simple single-file programs.

When more involved programs (e.g., apps, libraries) use the logging library, a better option is to externalize the configuration of the logging library. Such externalization allows users to customize certain facets of logging in a program (e.g., specify the location of log files, use custom loggers/handlers/formatters/filters) and, hence, ease the deployment and use of the program. We refer to this approach to configuration as the data-based approach.

Configuring the library

Clients can configure the logging library by invoking the logging.config.dictConfig(config) function. The config argument is a dictionary, and the following optional keys can be used to specify a configuration.

filters key maps to a dictionary of strings and dictionaries. The strings serve as filter ids used to refer to filters in the configuration (e.g., adding a filter to a logger) while the mapped dictionaries serve as filter configurations. The string value of the name key in filter configurations is used to construct logging.Filter instances.

"filters": {
"io_filter": {
"name": "app.io"
}
}

This configuration snippet results in the creation of a filter that admits all records created by the logger named 'app.io' or its descendants.

formatters key maps to a dictionary of strings and dictionaries. The strings serve as formatter ids used to refer to formatters in the configuration (e.g., adding a formatter to a handler) while the mapped dictionaries serve as formatter configurations. The string values of the datefmt and format keys in formatter configurations are used as the date and log entry formatting strings, respectively, to construct logging.Formatter instances. The boolean value of the (optional) validate key controls the validation of the format strings during the construction of a formatter.

"formatters": {
"simple": {
"format": "%(asctime)s - %(message)s",
"datefmt": "%y%j-%H%M%S"

},
"detailed": {
"format": "%(asctime)s - %(pathname):%(lineno) - %(message)s"
}
}

This configuration snippet results in the creation of two formatters: a simple formatter with the specified log entry and date formatting strings, and a detailed formatter with the specified log entry formatting string and the default date formatting string.

handlers key maps to a dictionary of strings and dictionaries. The strings serve as handler ids used to refer to handlers in the configuration (e.g., adding a handler to a logger) while the mapped dictionaries serve as handler configurations. The string value of the class key in a handler configuration names the class to instantiate to construct a handler. The string value of the (optional) level key specifies the logging level of the instantiated handler. The string value of the (optional) formatter key specifies the id of the formatter of the handler. Likewise, the list of values of the (optional) filters key specifies the ids of the filters of the handler. The remaining keys are passed as keyword arguments to the handler’s constructor.

"handlers": {
"stderr": {
"class": "logging.StreamHandler",
"level": "INFO",
"filters": ["io_filter"],
"formatter": "simple",
"stream": "ext://sys.stderr"
},
"alert": {
"class": "logging.handlers.SMTPHandler",
"level": "ERROR",
"formatter": "detailed",
"mailhost": "smtp.skynet.com",
"fromaddr": "logging@skynet.com",
"toaddrs": [ "admin1@skynet.com", "admin2@skynet.com" ],
"subject": "System Alert"
}
}

This configuration snippet results in the creation of two handlers:

  • A stderr handler that formats log requests of INFO and higher logging levels via the simple formatter and emits the resulting log entry into the standard error stream. The stream key is passed as a keyword argument to the logging.StreamHandler constructor.
    The value of the stream key illustrates how to access objects external to the configuration. The ext:// prefixed string refers to the object that is accessible when the string without the ext:// prefix (i.e., sys.stderr) is processed via the normal importing mechanism. Refer to Access to external objects for more details. Refer to Access to internal objects for details about a similar mechanism based on the cfg:// prefix that refers to objects internal to a configuration.
  • An alert handler that formats ERROR and CRITICAL log requests via the detailed formatter and emails the resulting log entry to the given email addresses. The keys mailhost, fromaddr, toaddrs, and subject are passed as keyword arguments to the logging.handlers.SMTPHandler constructor.

loggers key maps to a dictionary of strings that serve as logger names and dictionaries that serve as logger configurations. The string value of the (optional) level key specifies the logging level of the logger. The boolean value of the (optional) propagate key specifies the propagation setting of the logger. The list of values of the (optional) filters key specifies the ids of the filters of the logger. Likewise, the list of values of the (optional) handlers key specifies the ids of the handlers of the logger.

"loggers": {
"app": {
"handlers": ["stderr", "alert"],
"level": "WARNING"
},
"app.io": {
"level": "INFO"
}
}

This configuration snippet results in the creation of two loggers. The first logger is named app, its threshold logging level is set to WARNING, and it is configured to forward log requests to the stderr and alert handlers. The second logger is named app.io, and its threshold logging level is set to INFO. Since a log request is propagated to the handlers associated with every ancestor logger, every log request with INFO or a higher logging level made via the app.io logger will be propagated to and handled by both the stderr and alert handlers.

root key maps to a dictionary of configuration for the root logger. The format of the mapped dictionary is the same as the mapped dictionary for a logger.

incremental key maps to either True or False (default). If True, then only the logging levels and propagate options of loggers, handlers, and root loggers are processed, and all other bits of the configuration are ignored. This key is useful for altering an existing logging configuration. Refer to Incremental Configuration for more details.

disable_existing_loggers key maps to either True (default) or False. If True, then all existing non-root loggers are disabled as a result of processing this configuration.

Also, the config argument should map the version key to 1.

Here’s the complete configuration composed of the above snippets.

{
    "version": 1,
    "filters": {
        "io_filter": {
            "name": "app.io"
        }
    },
    "formatters": {
        "simple": {
            "format": "%(asctime)s - %(message)s",
            "datefmt": "%y%j-%H%M%S"
        },
        "detailed": {
            "format": "%(asctime)s - %(pathname)s:%(lineno)d - %(message)s"
        }
    },
    "handlers": {
        "stderr": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "filters": ["io_filter"],
            "formatter": "simple",
            "stream": "ext://sys.stderr"
        },
        "alert": {
            "class": "logging.handlers.SMTPHandler",
            "level": "ERROR",
            "formatter": "detailed",
            "mailhost": "smtp.skynet.com",
            "fromaddr": "logging@skynet.com",
            "toaddrs": ["admin1@skynet.com", "admin2@skynet.com"],
            "subject": "System Alert"
        }
    },
    "loggers": {
        "app": {
            "handlers": ["stderr", "alert"],
            "level": "WARNING"
        },
        "app.io": {
            "level": "INFO"
        }
    }
}

Customizing via factory functions

The configuration schema for filters supports a pattern to specify a factory function to create a filter. In this pattern, a filter configuration maps the () key to the fully qualified name of a filter-creating factory function along with a set of keys and values to be passed as keyword arguments to the factory function. In addition, attributes and values can be added to custom filters by mapping the . key to a dictionary of attribute names and values.

For example, the below configuration will cause the invocation of app.logging.create_custom_factory(startTime='6PM', endTime='6AM') to create a custom filter and the addition of the local attribute with the value True to this filter.

  "filters": {
"time_filter": {
"()": "app.logging.create_custom_factory",
"startTime": "6PM",
"endTime": "6PM",
".": {
"local": true
}
}
}

Configuration schemas for formatters, handlers, and loggers also support the above pattern. In the case of handlers/loggers, if this pattern and the class key occur in the configuration dictionary, then this pattern is used to create handlers/loggers. Refer to User-defined Objects for more details.

Configuring using configparser-format files

The logging library also supports loading configuration from a configparser-format file via the logging.config.fileConfig() function. Since this is an older API that does not provide all of the functionality offered by the dictionary-based configuration scheme, the use of the dictConfig() function is recommended; hence, we’re not discussing the fileConfig() function and the configparser file format in this tutorial.

Configuring over the wire

While the above APIs can be used to update the logging configuration when the client is running (e.g., web services), programming such update mechanisms from scratch can be cumbersome. The logging.config.listen() function alleviates this issue. This function starts a socket server that accepts new configurations over the wire and loads them via dictConfig() or fileConfig() functions. Refer to logging.config.listen() for more details.

Loading and storing configuration

Since the configuration provided to dictConfig() is nothing but a collection of nested dictionaries, a logging configuration can be easily represented in JSON and YAML format. Consequently, programs can use the json module in Python’s standard library or external YAML processing libraries to read and write logging configurations from files.

For example, the following snippet suffices to load the logging configuration stored in JSON format.

import json, logging.config

with open('logging-config.json', 'rt') as f:
    config = json.load(f)
logging.config.dictConfig(config)
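
A similar sketch works for YAML, assuming the third-party PyYAML package is installed and the configuration lives in a file named logging-config.yaml:

import logging.config
import yaml  # third-party: pip install pyyaml

with open('logging-config.yaml', 'rt') as f:
    config = yaml.safe_load(f)
logging.config.dictConfig(config)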

Limitations

In the supported configuration scheme, we cannot configure filters to filter beyond simple name-based filtering. For example, we cannot create a filter that admits only log requests created between 6 PM and 6 AM. We need to program such filters in Python and add them to loggers and handlers via factory functions or the addFilter() method.
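
As a hedged sketch, such a time-based filter might look like the following; the 6 PM to 6 AM window matches the earlier factory example, and the logic is illustrative:

import logging
from datetime import datetime

def overnight_filter(record):
    # Admit only records created between 6 PM and 6 AM.
    hour = datetime.fromtimestamp(record.created).hour
    return hour >= 18 or hour < 6

logging.getLogger('app').addFilter(overnight_filter)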

Python logging performance

While logging statements help capture information at locations in a program, they contribute to the cost of the program in terms of execution time (e.g., logging statements in loops) and storage (e.g., logging lots of data). Although cost-free yet useful logging is impossible, we can reduce the cost of logging by making choices informed by performance considerations.

Configuration-based considerations

After adding logging statements to a program, we can use the support to configure logging (described earlier) to control the execution of logging statements and the associated execution time. In particular, consider the following configuration capabilities when making decisions about logging-related performance.

  1. Change logging levels of loggers: This change helps suppress log messages below a certain log level. This helps reduce the execution cost associated with unnecessary creation of log records.
  2. Change handlers: This change helps replace slower handlers with faster handlers (e.g., during testing, use a transient handler instead of a persistent handler) and even remove context-irrelevant handlers. This reduces the execution cost associated with unnecessary handling of log records.
  3. Change format: This change helps exclude unnecessary parts of a log record from the log (e.g., exclude IP addresses when executing in a single node setting). This reduces the execution cost associated with unnecessary handling of parts of log records.

The above changes range from coarser to finer aspects of logging support in Python.

Code-based considerations

While the support to configure logging is powerful, it cannot help control the performance impact of implementation choices baked into the source code. Here are a few such logging-related implementation choices and the reasons why you should consider them when making decisions about logging-related performance.

Do not execute inactive logging statements

When the logging module was added to Python’s standard library, there were concerns about the execution cost associated with inactive logging statements — logging statements that issue log requests with a logging level lower than the threshold logging level of the target logger. For example, how much extra time will a logging statement that invokes logger.debug(...) add to a program’s execution time when the threshold logging level of logger is logging.WARN? This concern led to client-side coding patterns (as shown below) that used the threshold logging level of the target logger to control the execution of the logging statement.

# client code
...
if logger.isEnabledFor(logging.DEBUG):
    logger.debug(msg)
...

Today, this concern is not valid because the logging methods in the logging.Logger class perform similar checks and process the log requests only if the checks pass. For example, as shown below, the above check is performed in the logging.Logger.debug method.

# client code
...
logger.debug(msg)
...

# logging library code

class Logger:
    ...
    def debug(self, msg, *args, **kwargs):
        if self.isEnabledFor(DEBUG):
            self._log(DEBUG, msg, args, **kwargs)

Consequently, inactive logging statements effectively turn into no-op statements and do not contribute to the execution cost of the program.

Even so, one should consider the following two aspects when adding logging statements.

  1. Each invocation of a logging method incurs a small overhead associated with the invocation of the logging method and the check to determine if the logging request should proceed, e.g., a million invocations of logger.debug(...) when the threshold logging level of logger was logging.WARN took half a second on a typical laptop. So, while the cost of an inactive logging statement is trivial, the total execution cost of numerous inactive logging statements can quickly add up to be non-trivial.
  2. While disabling a logging statement inhibits the processing of log requests, it does not inhibit the calculation/creation of arguments to the logging statement. So, if such calculations/creations are expensive, they can contribute non-trivially to the execution cost of the program even when the corresponding logging statement is inactive, as the sketch below shows.
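
Here is a minimal sketch of the guard in that situation; expensive_summary stands in for any costly computation:

import logging

logger = logging.getLogger('app')

def expensive_summary(data):
    # Stand-in for a costly computation, e.g., serializing a large structure.
    return ','.join(map(str, sorted(data)))

data = [3, 1, 2]
# The guard skips the expensive call entirely when DEBUG is inactive.
if logger.isEnabledFor(logging.DEBUG):
    logger.debug('State: %s', expensive_summary(data))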

Do not construct log messages eagerly

Clients can construct log messages in two ways: eagerly and lazily.

  1. The client constructs the log message and passes it on to the logging method, e.g., logger.debug(f'Entering method Foo: {x=}, {y=}').
    This approach offers formatting flexibility via f-strings and the format() method, but it involves the eager construction of log messages, i.e., before the logging statements are deemed as active.
  2. The client provides a printf-style message format string (as the msg argument) and the values (as the args arguments) to the logging method, e.g., logger.debug('Entering method %s: x=%d, y=%f', 'Foo', x, y). After the logging statement is deemed active, the logger constructs the log message using the string formatting operator %.
    This approach relies on an older and quirky string formatting feature of Python, but it constructs log messages lazily.

While both approaches result in the same outcome, they exhibit different performance characteristics due to the eagerness and laziness of message construction.

For example, on a typical laptop, a million inactive invocations of logger.debug('Test message {0}'.format(t)) take 2197ms while a million inactive invocations of logger.debug('Test message %s', t) take 1111ms when t is a list of four integers. In the case of a million active invocations, the first approach takes 11061ms and the second takes 10149ms. That is a savings of 9–50% of the time taken for logging!

So, the second (lazy) approach is more performant than the first (eager) approach in cases of both inactive and active logging statements. Further, the gains would be larger when the message construction is non-trivial, e.g., use of many arguments, conversion of complex arguments to strings.

Do not gather unnecessary under-the-hood information

By default, when a log record is created, the following data is captured in the log record:

  1. Identifier of the current process.
  2. Identifier and name of the current thread.
  3. Name of the current process in the multiprocessing framework.
  4. Filename, line number, function name, and call stack info of the logging statement.

Unless these bits of data are logged, gathering them unnecessarily increases the execution cost. So, if these bits of data will not be logged, then configure the logging framework to not gather them by setting the following flags.

  1. logging.logProcesses = False
  2. logging.logThreads = False
  3. logging.logMultiprocessing = False
  4. logging._srcFile = None

Do not block the main thread of execution

There are situations where we may want to log data in the main thread of execution without spending almost any time logging the data. Such situations are common in web services, e.g., a request processing thread needs to log incoming web requests without significantly increasing its response time. We can tackle these situations by separating concerns across threads: a client/main thread creates a log record while a logging thread logs the record. Since the task of logging is often slower as it involves slower resources (e.g., secondary storage) or other services (e.g., logging services such as Coralogix, pub-sub systems such as Kafka), this separation of concerns helps minimize the impact of logging on the execution time of the main/client thread.

The Python logging library helps handle such situations via the QueueHandler and QueueListener classes as follows.

  1. A pair of QueueHandler and QueueListener instances are initialized with a queue.
  2. When the QueueHandler instance receives a log record from the client, it merely places the log request in its queue while executing in the client’s thread. Given the simplicity of the task performed by the QueueHandler, the client thread hardly pauses.
  3. When a log record is available in the QueueListener queue, the listener retrieves the log record and executes the handlers registered with the listener to handle the log record. In terms of execution, the listener and the registered handlers execute in a dedicated thread that is different from the client thread.

Note: While QueueListener comes with a default threading strategy, developers are not required to use this strategy to use QueueHandler. Instead, developers can use alternative threading strategies that meet their needs.
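
A compact sketch of the pattern described above (the logger name and console handler are illustrative):

import logging
import logging.handlers
import queue

log_queue = queue.SimpleQueue()
logger = logging.getLogger('app')
logger.setLevel(logging.INFO)
# The client-side handler only enqueues records, so it returns quickly.
logger.addHandler(logging.handlers.QueueHandler(log_queue))

# The listener drains the queue on its own thread and runs the real handlers.
listener = logging.handlers.QueueListener(log_queue, logging.StreamHandler())
listener.start()

logger.info('Handled off the main thread')
listener.stop()  # flushes any remaining records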

That about wraps it up for this Python logging guide. If you’re looking for a log management solution to centralize your Python logs, check out our easy-to-configure Python integration.

Python Data Analysis for Finance: Analyzing Big Financial Data

Python has staked its claim as the most popular programming language among developers worldwide. Available on Windows, Linux, and Mac, it’s intuitive and easy to read, and its mathematical strengths lend themselves perfectly to finance and data analysis. 

A popular and intuitive programming language means good availability of programmers, so it’s little surprise that recent years have seen rapid growth in Python at big banks, financial services providers, and emerging financial markets like cryptocurrency. 

Lean Python programming allows for efficient automated financial services, faster transaction processing, and quicker decision-making, bolstering processing speeds in critical pipelines.

Why conduct financial analysis using Python? 

There are many reasons to choose Python for finance applications, including platform versatility, intuitive programming, high levels of efficiency, and so on. 

Qualitative vs. Quantitative 

Python can be implemented effectively for quantitative and qualitative data analysis, which is ideal for the vast volumes of data generated by the financial services and fintech sectors. Finance and banking data can be processed alongside discrete demographic data without difficulty. 

Making Predictions 

Python’s analysis of financial data and its ability to support machine learning are ideal for analysts and traders. While no forecasts are fool-proof, Python’s predictions can help to guide informed decision-making in changing market conditions. 

High-Level Language 

Python is classed as a high-level programming language, which means the programmer doesn’t need to write code for basic functions like logic and arithmetic. The availability of Python libraries can extend this further, as they contain functions ideal for financial data analysis, further streamlining development. 

Accessible Syntax 

The syntax of Python is unusually accessible, based on the concept that simple is better than complex. New programmers can gain competency faster because learning Python is not like trying to master the grammar of a foreign language. Even intermediate programmers can gain a relatively high level of fluency quickly. 

Budget-Friendly 

Scalability makes Python a suitable programming language for businesses — and budgets — of all sizes and stages. Fintech startups embrace Python as an affordable way to implement initial code, but its massive scalability and extensibility mean the largest financial firms can use it equally well. 

How Python can inform stock market trading 

One of the most powerful applications of Python in finance data analysis is the creation of stock market trading strategies. The continual generation of unfathomable volumes of data during a typical trading day means that no human alone could plot emerging trends fast enough to capitalize on them. 

Using Python, stock market financial data analysis is automated and streamlined, providing fast insights to respond before market opportunities vanish. This analysis is then returned to the human trader in bite-sized, understandable instructions that can be acted upon without delay. 

The use of Python in cryptocurrency 

A more recent application of Python in finance data analysis is cryptocurrency, and the Python-based finance data science tool Anaconda is aimed at this turbulent financial market. 

Anaconda allows cryptocurrency developers to collect, analyze and report on real-time pricing data, which allows for a more rapid response to changing market conditions in the fast-moving cryptocurrency sector. 

Newly developed Python libraries for cryptocurrency applications make the language easier for crypto-focused fintech startups, allowing new entrants to compete in the maturing market. 

Bridging the data gap 

While the fintech and financial services sectors are necessarily data-driven, the same is not always true of the people working there. Python can process structured and unstructured data, making it more understandable to personnel with less expertise in computers and programming. 

The predictions, forecasts, and insights made by Python can be applied across a wide range of decisions, ranging from stock purchases and investments to credit ratings, to qualitative decisions, without any programming knowledge from the end user. 

Libraries make it even easier to set this up, with the ability to pull in data from multiple sources, merge it, and create all manner of outputs, often with just a few lines of code. 

Plotting financial data with Python 

The Matplotlib package for Python is a valuable tool for plotting financial data and creating understandable visualizations to help decision-making among personnel who deal better with graphs and charts than with raw numerical data tables. 

Matplotlib is well documented but is only one of a host of Python data visualization packages that can be used in isolation or together to create customized reports with compelling visuals. And because many other Python fintech analysis packages depend on Matplotlib, it is considered all but essential by many programmers in the financial services sector. 

Again, simple syntax means that even complex finance data visualizations can be created quickly by anyone familiar with Python. Financial models can be set up and used on an ongoing basis, giving finance firms a stable reporting platform that can grow as needed in the future. 
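
As a hedged illustration, a handful of lines suffice to chart a price series with Matplotlib; the closing prices below are made-up sample data:

import matplotlib.pyplot as plt

# Made-up daily closing prices for illustration only.
closes = [101.2, 102.8, 101.9, 104.5, 103.7, 105.1]

plt.plot(closes, marker='o')
plt.title('Daily closing price')
plt.xlabel('Trading day')
plt.ylabel('Price (USD)')
plt.show()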

What’s next for Python finance data analysis?

The financial services sector — especially the fintech segment — continues to evolve rapidly, with the emergence of cryptocurrencies, instant cash transfer apps, and the nearly universal use of contactless technology in gift cards, loyalty cards, travel cards, and more. 

Python’s massive versatility, and the availability of code libraries with pre-written functions for specific niches, make it the perfect fit for a fintech-dominated financial sector. At the same time, it can work behind the scenes to make technical data understandable to those working in non-tech financial disciplines. 

It’s no surprise that Python is used extensively by some of the biggest brands in the business, including e-wallet platforms like PayPal and Venmo. With this in mind, the next big name in the fintech segment will likely provide financial services powered by Python financial data analysis, visualization, and reporting.

Benefits of Learning Python for Game Development

The world of computer games is vast, ranging from single-player agility games and logic puzzles with simple 2D animations to the stunning graphics in 3D-rendered massive multiplayer online role-playing games like Lost Ark.

Wanting to design and build your own games is a common motivator for learning to code, while building a portfolio of work is an essential step for breaking into the gaming industry. For experienced developers, creating your own game from scratch can be anything from a satisfying side project to an opportunity to experiment with elements of computing that don’t feature in your day job – like graphics and audio – or a taste of what would be involved in moving to a new role within computing.

Once you’ve devised an idea for a new computer game, one of your decisions needs to be which programming language to use to turn your ideas into reality. Python is one of the most popular programming languages in the world and an excellent choice for those new to coding. However, as you may have found if you’ve already started researching this topic, Python isn’t necessarily an obvious choice for game development. In this article, we’ll look at the pros and cons of Python for building computer games, some of the considerations for writing games in Python, and essential libraries and frameworks to help you get started.

Advantages of Python for game development

As we said above, Python is one of the world’s most popular programming languages, and with good reason. Its concise, human-readable syntax and built-in interpreter make Python an optimal choice for anyone new to programming. Its platform independence, extensive ecosystem of libraries, and high-level design ensure versatility and increase developer productivity.

How does that translate to game development? If you’re new to coding, you’ll find plenty of resources to help you start writing code in Python. As a high-level language, Python abstracts away details about how your code is run, leaving you focused on the logic and aesthetics of your game design. However, for games where performance is a key concern, that same abstraction is the biggest disadvantage.

Developer overheads when working in Python are quite low, so you can get something up and running quickly. This is great for beginners and experienced developers alike. As a novice, seeing your progress and building up your game incrementally makes for a more rewarding experience and makes it easier to find mistakes as you go. For professionals, this makes Python an ideal tool for getting something working quickly in the case of prototyping while also providing a dev-friendly language for longer-term development.

Furthermore, the Python ecosystem is vast, with a friendly online community of developers you can turn to for pointers if you get stuck. Because it’s both open-source and flexible, Python is used for a wide range of applications, so you can find libraries for machine learning, artificial intelligence, manipulating audio, and processing graphics, as well as several libraries and frameworks aimed explicitly at game development (discussed in more detail below). All of this makes it easier to start developing games with Python.

Disadvantages of Python for game development

One of Python’s key upsides as a general programming language is also its main drawback when it comes to game development, at least where video games are concerned. The high rendering speeds, realistic graphics, and responsiveness that players expect from video games require developers to optimize their code for every microsecond of computing performance. Although high-level languages like Python are not designed to be slow, they don’t give developers the flexibility to control how memory is allocated and released or to interact with hardware-level components. Furthermore, Python is interpreted at runtime rather than compiled in advance – that doesn’t necessarily make a perceptible difference on modern hardware for most applications. Still, when speed is of the essence, it matters.

Instead, video game developers have tended towards lower-level languages that give them more precise control over resources, with C++ being the primary choice. While computing power and memory have increased significantly in the last decade, so have user expectations. When you’re building a game involving 3D graphics that emulate real-world physics from multiple perspectives while responding to hundreds of simultaneous inputs, you simply can’t afford to waste processor cycles. Combine that with decades of industry experience and knowledge poured into building the tools to support game development. Unsurprisingly, many of the best-known gaming engines, including Unreal, Unity, CryEngine, and Godot, are written wholly or partly in C++. Other popular languages within the gaming world include C# (also used by Unity) and Java.

When to use Python for game development

Despite these limitations, Python has plenty to offer game developers.

Prototyping games with Python

Because it’s easy to work with, Python is a great choice for prototyping all kinds of programs, including games. Even if you’re planning to build the final version in a different language for performance reasons, Python provides a quick turnaround time for trying out game logic, testing concepts on your target audience, or pitching ideas to colleagues and stakeholders.

Learning to code via game development

If you’re using game development to learn how to code, then Python is an excellent way to become familiar with the basics and learn about object orientation. You’ll be able to progress relatively quickly and test what you’re building as you go. You’ll also find plenty of gaming libraries and tutorials for different experience levels.

Scripting gaming engines with Python

If your sights are set on a career in gaming, and you’re concerned that learning Python will be a waste of effort, you might want to think again. As a widely used, open-source scripting language, Python is a common choice for some supporting code in developing larger games. Unreal Engine, for example, supports Python for scripting tasks that you can perform manually from the editor, like importing assets or randomizing actor placement. In contrast, Unity supports Python for automating scene and sequence assembly, among other tasks.

Developing games with Python

Don’t let performance considerations turn you off Python for gaming completely. If you’re looking to develop a game that doesn’t need to be tuned for maximum performance and you’re not using one of the heavyweight gaming engines, then Python is a valid choice. For a sense of what’s possible, look at existing games built with Python, including Disney’s Toontown Online, Frets on Fire, The Sims 4, and Eve Online.

Getting started in game development with Python

The Python ecosystem offers gaming libraries for everyone from complete novices to experienced Pythonistas, including:

  • Pygame is a popular choice for building relatively simple 2D games.
  • Pygame Zero provides a tutorial for migrating games built in Scratch, making it ideal for complete beginners, including children.
  • Pyglet is a powerful cross-platform windowing and multimedia library for building games and other graphically rich applications.
  • Panda3D was originally developed by Disney to build Toontown Online and is now an open-source framework for building games and 3D-rendered graphics with Python. Under the hood, Panda3D uses C++, and you can also write games for it directly in C++.
  • Ursina Engine is built on Panda3D and simplifies certain aspects of that library.
  • Kivy is a framework for developing Python apps for multiple platforms, including Android, iOS, and Raspberry Pi. You’ll find several tutorials showing you how to start building games for mobile with Kivy.
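
To give a flavor of the entry point, here is a minimal sketch of a Pygame event loop; it assumes pygame is installed, and the window size and caption are arbitrary:

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
pygame.display.set_caption('Minimal Pygame loop')
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:  # window close button
            running = False
    screen.fill((30, 30, 30))  # clear the frame
    pygame.display.flip()      # present it
    clock.tick(60)             # cap at 60 frames per second

pygame.quit()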

Final thoughts

Part of being a developer is choosing the correct programming language for the job. Python has a place within game development: as an entry point for those new to coding, as a prototyping tool for creating proofs of concept quickly to test your ideas and gather feedback, as a powerful scripting language to support other aspects of game development, and as a simple yet powerful language for building games for any platform.

While Python would not be the ideal choice where game performance is critical, it’s an excellent tool for developing something quickly or if you don’t want to learn a more complex language and gaming engine. 

What is scripting?

Like all programming, scripting is a way of providing instructions to a computer so you can tell it what to do and when to do it. Programs can be designed to be interacted with manually by a user (by clicking buttons in the GUI or entering commands via the command prompt) or programmatically using other programs (or a mixture of both). 

Consider this web page displayed on your browser – the browser is a program that you can interact with manually, and that program also reads other code (the HTML and CSS that describes the page) to determine what to display.

What is the difference between scripting and coding?

So what exactly is the difference between writing a script and writing code? The answer is simple: scripting is just a particular type of coding. You can think of coding – or programming – as the collective term for providing instructions to a computer. 

These instructions can achieve a whole range of things, from building web pages and writing apps to automating IoT devices, designing databases, or developing a new operating system.

One of the considerations when you start writing these instructions is deciding which programming language to use. There are hundreds of computer languages in existence, with more being developed all the time. 

Different languages are better suited to different use cases, whether that’s manipulating data, generating graphics, or creating tools to help developers write code. Scripting languages are a subset of these programming languages, and different scripting languages are helpful for various tasks.

One of the defining features of scripts is that they provide instructions that are read and executed by another program while it is running. In technical terms, instructions from a script are interpreted at runtime (i.e., when the code is used).

By contrast, in other programming languages the code is compiled in advance of being run. You can think of this as packaging up the instructions ready for use. This means you get the same behavior each time you run them, but if you want to make a change, you need to recompile the program into a new package and redeliver it.

Sometimes that’s acceptable, but in other cases, it’s helpful to modify the instructions without recompiling the code first. Scripts allow you to modify the instructions each time the program runs.

(Note that the distinction between interpreted and compiled languages isn’t completely clear cut, but that’s a topic for another day.)

What are the advantages of scripting?

Because scripts provide instructions to other computer programs while running, they are ideal for creating dynamic experiences. One of the significant use cases for scripting languages is web development, where dynamic and responsive experiences are highly valued.

Imagine a basic webpage, such as an online retailer’s page for a particular product, consisting of text and images. You could create a static version of that webpage with HTML and CSS.

However, most online retailers have hundreds, if not thousands, of products and therefore pages. Those pages need to be kept up to date with the latest availability and price information. The retailer usually wants to display other information on each page, such as reminders of other products you’ve viewed and the number of items in your basket.

Creating every possible permutation of each webpage in advance as static HTML and CSS would be very inefficient. Instead, we can combine the static HTML and CSS with scripts that call up the dynamic content each time a user loads the page. Scripts populate the web page with relevant content based on your browsing history and product database.

Scripting language examples

The following scripting languages are widely used in web development to create dynamic user experiences (such as social media feeds, recommendations, and search results) and generate pages from templates (such as news and e-commerce sites). 

As we’ll see, web development frameworks have been created for each. Frameworks make building and maintaining experiences with a particular language easier by providing tools and libraries that simplify everyday tasks (such as authenticating users or making database calls).

JavaScript (JS)

JavaScript is a well-known programming language that’s primarily associated with client-side scripting. Client-side scripts run from your browser when you view a web page rather than on the server hosting the website.

JavaScript is used on most modern websites, including social media platforms, news pages, and any e-commerce site. Common use cases include updating the page based on information the user has entered (e.g., dynamically updating a form based on previous answers), changing the display when the user clicks a button, and providing animation. There are dozens of client-side JavaScript frameworks to choose from, including Vue, React, and Angular. More recently, it’s become possible to use JavaScript for server-side scripting, thanks to runtimes such as Node.js.

PHP

PHP (a recursive acronym for Hypertext Preprocessor) is very widely used for server-side scripting, powering Facebook, Wikipedia, and WordPress, to name just a few.

Server-side scripts are the instructions run on the servers hosting your website when a user visits a particular page. For example, you could use PHP scripts to call a database to retrieve availability and pricing information about a specific product or request the five most recent blog posts. The script runs when a page is requested, allowing you to populate it with the latest information without generating the page in advance.

There are a number of PHP web frameworks to choose from, including Laravel and CodeIgniter. You can also use PHP for other use cases, such as command-line scripting.

Python

Python is one of the most popular programming languages in the world. It is a scripting language with many use cases, from application and game development to data science and DevOps automation.

Python is a popular scripting choice for back-end web development, where it is used to retrieve data from databases and APIs and manipulate it for inclusion in responses to the client. Python server-side web frameworks include Django, a full-featured framework widely used for content-oriented sites such as Instagram and Dropbox, and Flask, a lightweight framework that’s popular in microservice architectures.
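
As an illustration of server-side scripting in Python, here’s a minimal Flask sketch; the route and the get_product helper are hypothetical stand-ins for whatever database or API lookup your application performs:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical stand-in for a real database or API lookup.
    def get_product(product_id):
        return {"id": product_id, "name": "Example product", "in_stock": True}

    @app.route("/products/<int:product_id>")
    def product_page(product_id):
        # The lookup runs on every request, so the response always
        # reflects the latest availability and pricing data.
        return jsonify(get_product(product_id))

    if __name__ == "__main__":
        app.run(debug=True)

With debug=True, Flask’s reloader picks up script changes automatically, which captures the scripting workflow described above: modify the instructions and rerun, with no separate compile-and-package step.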

Perl

Perl is best known as a language for writing text-manipulation scripts, whether writing regular expressions (regex), parsing HTML, manipulating JSON files, or extracting data from log files. As a result, Perl is a popular choice for sysadmin work, such as managing file systems, databases, and users.

Thanks to both its versatility in integrating backend services and the fact it was a widely used language when the Internet started to take off, Perl was at one point a popular choice for server-side web development. 

Over the years, there have been several Perl web frameworks, with Dancer and Mojolicious being the most popular still in active development. While Perl might not be your first choice when starting a new web project, you’re likely to encounter it on older web projects, and it remains a widely used scripting language in other contexts.

Ruby

Like Python, Ruby is a general-purpose scripting language with many applications, from websites and web apps to desktop programs, data processing tools, and automated DevOps tasks.

Ruby has become increasingly popular as a server-side scripting language thanks to the Ruby on Rails web framework, which powers the likes of Airbnb, Kickstarter, and GitHub.

Wrapping up

You can think of scripting as a subset of coding, and scripting languages as a particular family of programming languages. As we’ve seen, scripting is widely used in web development to create dynamic, responsive experiences and enable pages to be generated from templates. 

But scripting is not limited to websites; you’ll find scripting languages used to create mobile and desktop apps, manipulate large data sets, automate deployments, and orchestrate machine learning workflows. In all cases, a script requires another program to run it.

Finally, as with any programming language, choosing the correct scripting language for a project will depend on several factors, including your specific use case, the existing ecosystem, and you and your team’s previous experience.

Python JSON Log Limits: What Are They and How Can You Avoid Them?

Python JSON logging has become the standard for generating readable, structured data from logs. While monitoring structured JSON logs is much better than sifting through unstructured plain-text output, it comes with its own set of challenges.
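
For context, here’s a minimal sketch of one common setup, assuming the third-party python-json-logger package (pip install python-json-logger), which plugs a JSON formatter into the standard logging module:

    import logging
    from pythonjsonlogger import jsonlogger

    logger = logging.getLogger("my_app")
    handler = logging.StreamHandler()

    # Emit each record as a JSON object containing the listed fields.
    formatter = jsonlogger.JsonFormatter("%(asctime)s %(levelname)s %(name)s %(message)s")
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    # Fields passed via extra become additional JSON keys.
    logger.info("order processed", extra={"order_id": 1138, "duration_ms": 52})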

As your server or application grows, the volume of logs grows rapidly with it. It’s difficult to go through JSON log files, even though they’re structured, due to the sheer size of the logs generated. These Python JSON log limits will become a real engineering problem for you.

Let’s dive into how log management solutions help with these issues and how they can help streamline and centralize your log management, so you can surpass your Python JSON log limits and tackle the real problems you’re looking to solve.

Python Log File Sizes

Depending on the server or service you’re using, you’ll encounter specific log file restrictions imposed by its storage constraints.

For instance, AWS CloudWatch skips a log event if it is larger than 256 KB. In such cases, especially with the longer log entries that JSON generates, retaining specific logs on the server is complex.

The good news is, this is one of the easier Python JSON log limits to overcome. In some cases, you can avoid it by increasing the log size limit configuration at the server level. However, the ideal log size limit varies depending on the amount of data your application generates.
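
You can also keep file sizes bounded from the application side using the standard library’s RotatingFileHandler. Here’s a minimal sketch (the file name and limits are arbitrary examples):

    import logging
    from logging.handlers import RotatingFileHandler

    logger = logging.getLogger("my_app")

    # Rotate once app.log reaches ~1 MB, keeping 5 old files
    # (app.log.1 through app.log.5) before discarding the oldest.
    handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=5)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)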

So how do you avoid this Python JSON log limit on your files?

The solution here is to implement logging analytics via Coralogix. Through this platform, you can integrate and transform your logging data with any webhook and record vital data without needing to manage it actively. Since it is directly integrated with Python, your JSON logs can be easily parsed and converted.

Servers like Elasticsearch also roll logs after 256 MB, based on timestamps. However, when you have multiple deployments, filtering logs based just on timestamps or file size limits becomes difficult. More log files can also lead to confusion and disk space issues.

To help tackle this issue, Coralogix cuts down on your overall development time by providing version benchmarks on logs and an intuitive visual dashboard.

Python JSON Log Formatting

Currently, programs use Python’s native JSON library or external libraries to implement JSON logging. Filtering these outputs requires additional development: natively, the logging module filters records by logger name and threshold level, so if you want to filter logs on other attributes, such as time windows or custom fields, you’ll have to program those filters in.
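
To illustrate the kind of filter you’d have to write yourself, here’s a minimal sketch of a custom logging.Filter; the duration_ms attribute is a hypothetical custom field attached through the extra argument:

    import logging

    class SlowRequestFilter(logging.Filter):
        # Pass only records whose hypothetical duration_ms
        # attribute exceeds a threshold.
        def __init__(self, threshold_ms=100):
            super().__init__()
            self.threshold_ms = threshold_ms

        def filter(self, record):
            # Records lacking the attribute are dropped as well.
            return getattr(record, "duration_ms", 0) > self.threshold_ms

    logger = logging.getLogger("my_app.requests")
    handler = logging.StreamHandler()
    handler.addFilter(SlowRequestFilter(threshold_ms=100))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("fast request", extra={"duration_ms": 20})   # Filtered out.
    logger.info("slow request", extra={"duration_ms": 350})  # Logged.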

By using log management platforms, you can easily track custom attributes in the JSON log and implement specialized filters without having to do additional coding. You can also have alert mechanisms for failures or prioritized attributes. This significantly cuts down the time to troubleshoot via logs in case of critical failures. Correlating these attributes to application performance also helps you understand the bigger picture through the health and compliance metrics of your application.

Wrapping Up

Python JSON logging combined with a log management solution is the best way to streamline your logs and visualize them centrally. You should also review Python logging best practices to ensure that you format and collect the most relevant data. Left unaddressed, your Python JSON logger limits will distract you from adding value, so it’s important to get ahead of them.

If you want to make the most out of your Python JSON logs, our Python integration should help!