Python logging best practices and tips

In the vast computing world, many programming languages include facilities for logging. From our previous posts, you can learn best practices about Node logging, Java logging, and Ruby logging. As part of the ongoing logging series, this post describes what you need to know about Python logging. Since “log” can mean both a single log record and a log file, this post uses “log” to refer to a log file.

Advantages of Python logging

Why learn about logging in Python? One of Python’s striking features is its capacity to handle high-traffic sites, with an emphasis on code readability. Other advantages of logging in Python include its dedicated standard library for this purpose, the variety of outputs that log records can be directed to (console, file, rotating file, Syslog, remote server, email, etc.), and the large number of extensions and plugins it supports. In this post, you’ll find examples of different outputs.
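To illustrate that flexibility, here is a minimal sketch (the file path, rotation sizes, and the Syslog socket address are illustrative assumptions) that sends the same logger’s records to both a rotating file and the local Syslog daemon:

import logging
import logging.handlers

logger = logging.getLogger('myapp')
logger.setLevel(logging.INFO)

# Rotate the log file at roughly 1 MB, keeping three backups (example values)
file_handler = logging.handlers.RotatingFileHandler(
    '/tmp/myapp.log', maxBytes=1_000_000, backupCount=3)

# Forward the same records to the local Syslog daemon ('/dev/log' assumes Linux)
syslog_handler = logging.handlers.SysLogHandler(address='/dev/log')

logger.addHandler(file_handler)
logger.addHandler(syslog_handler)
logger.info('This record goes to both destinations')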

Python logging description

The Python standard library provides a logging module as a solution to log events from applications and libraries. Once the logger is configured, it becomes part of the Python interpreter process that is running the code. In other words, it is global. You can also configure the Python logging subsystem using an external configuration file. The specifications for the logging configuration format are found in the Python standard library documentation.
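As a minimal sketch of that file-based approach (the file name logging.conf is an assumption), you can load an INI-style configuration with logging.config.fileConfig:

import logging
import logging.config

# Read loggers, handlers and formatters from an external INI-style file;
# 'logging.conf' is just an example name
logging.config.fileConfig('logging.conf', disable_existing_loggers=False)

logger = logging.getLogger(__name__)
logger.info('Configured from an external file')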

The logging library is based on a modular approach and includes categories of components: loggers, handlers, filters, and formatters. Basically:

  • Loggers expose the interface that application code directly uses.
  • Handlers send the log records (created by loggers) to the appropriate destination.
  • Filters provide a finer grained facility for determining which log records to output.
  • Formatters specify the layout of log records in the final output.

These logger objects are organized into a tree that represents the various parts of your system and the third-party libraries you have installed. When you send a message to one of the loggers, it is output by all of that logger’s handlers, using the formatter attached to each handler. The message then propagates up the logger tree until it hits the root logger, or a logger whose propagate attribute is set to False.
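To make the four components and the propagation rule concrete, here is a minimal sketch (the logger name myapp, the filter logic, and the format string are illustrative assumptions):

import logging

class SkipDebugFilter(logging.Filter):
    """Filter: drop records below INFO before they reach the handler."""
    def filter(self, record):
        return record.levelno >= logging.INFO

logger = logging.getLogger('myapp')          # logger: the interface your code calls
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()            # handler: sends records to stderr
handler.setFormatter(logging.Formatter(      # formatter: layout of each record
    '%(asctime)s %(name)s %(levelname)s %(message)s'))
handler.addFilter(SkipDebugFilter())         # filter: fine-grained record selection
logger.addHandler(handler)

logger.debug('Dropped by the filter')
logger.info('Formatted and written to stderr')

# Setting propagate to False stops records from also reaching the root logger
logger.propagate = False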

Python logging platforms

This is an example of a basic logger in Python:

import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(message)s',
                    filename='/tmp/myapp.log',
                    filemode='w')

logging.debug("Debug message")
logging.info("Informative message")
logging.error("Error message")

Line 1: import the logging module.

Line 2: call the basicConfig function and pass some arguments to configure the log file. In this case, we indicate the severity level, the message format, the filename, and the file mode, so that the function overwrites the log file on each run.

Lines 3 to 5: one message for each of three logging levels.

The default format for log records is SEVERITY:LOGGER:MESSAGE, but since the code above overrides it with a custom format, running it as is writes output like this to /tmp/myapp.log:

2004-07-02 13:00:08,743 DEBUG Debug message
2004-07-02 13:00:08,743 INFO Informative message
2004-07-02 13:00:08,743 ERROR Error message

Regarding the output, you can set the destination of the log messages. As a first step, you can print messages to the screen using this sample code:

import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
logging.debug('This is a log message.')

If your goals are aimed at the Cloud, you can take advantage of Python’s set of logging handlers to redirect content. For example, you can write logs to Stackdriver Logging from Python applications by using Google’s Python logging handler included with the Stackdriver Logging client library (currently in beta), or by using the client library to access the API directly. When developing your logger, make sure that the root logger does not use your Cloud log handler: since the Python client for Stackdriver Logging also does its own logging, you may get a recursive loop if the root logger uses your handler.
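As a minimal sketch (assuming the google-cloud-logging client library is installed and credentials are already configured; the logger name cloudLogger is arbitrary), you can attach the Cloud handler to a named logger instead of the root logger to avoid that loop:

import logging

import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler

client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client)

# Attach the handler to a named logger, not the root logger,
# to avoid the recursive loop described above
cloud_logger = logging.getLogger('cloudLogger')
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(handler)
cloud_logger.error('Something went wrong in the cloud app')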

Python logging best practices

The possibilities with Python logging are endless and you can customize them to your needs. The following are some tips for best practices, so you can get the most out of Python logging:

Setting level names: this lets you refer to severity levels by their textual names, which helps you maintain your own dictionary of log levels and reduces the possibility of typos.

LogWithLevelName = logging.getLogger('myLoggerSample')
level = logging.getLevelName('INFO')
LogWithLevelName.setLevel(level)

logging.getLevelName(logging_level) returns the textual representation of a numeric severity level and, as in the example above, the numeric value when given the level name. The predefined values include, from highest to lowest severity:

  1. CRITICAL
  2. ERROR
  3. WARNING
  4. INFO
  5. DEBUG

Logging from multiple modules: if you have several modules and would otherwise have to perform the logging initialization in every module before logging messages, you can use cascaded logger naming instead:

logging.getLogger("coralogix")
logging.getLogger("coralogix.database")
logging.getLogger("coralogix.client")

Making coralogix.client and coralogix.database descendants of the coralogix logger, which they propagate their messages to, enables easy multi-module logging. This is one of the positive side-effects of naming loggers after __name__, when the module structure reflects the software architecture.
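A minimal sketch of this pattern (the package name coralogix and the module layout are hypothetical): each module asks for a logger named after itself, and the application configures only the top-level coralogix logger once:

# coralogix/database.py (hypothetical module)
import logging

logger = logging.getLogger(__name__)   # the name resolves to "coralogix.database"

def connect():
    logger.info("Connecting to the database")

# application entry point: configure the parent logger once;
# records from coralogix.database and coralogix.client propagate up to it
import logging

parent = logging.getLogger("coralogix")
parent.setLevel(logging.INFO)
parent.addHandler(logging.StreamHandler())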

Logging with Django and uWSGI: to deploy web applications, you can use a StreamHandler, which sends all logs to stderr. For Django you have:

'handlers': {
    'stderr': {
        'level': 'INFO',
        'class': 'logging.StreamHandler',
        'formatter': 'your_formatter',
    },
},
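For context, a minimal sketch of a complete LOGGING setting around that handler (your_formatter and its format string are assumptions) might look like this in settings.py:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        # hypothetical formatter referenced by the 'stderr' handler above
        'your_formatter': {
            'format': '%(asctime)s %(levelname)s %(name)s %(message)s',
        },
    },
    'handlers': {
        'stderr': {
            'level': 'INFO',
            'class': 'logging.StreamHandler',
            'formatter': 'your_formatter',
        },
    },
    'root': {
        'handlers': ['stderr'],
        'level': 'INFO',
    },
}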

Next, uWSGI forwards all of the app output, including prints and possible tracebacks, to syslog with the app name attached:

    $ uwsgi --log-syslog=yourapp 

Logging with Nginx: if you need additional features not supported by uWSGI, for example improved handling of static resources (via any combination of Expires or E-Tag headers, gzip compression, pre-compressed gzip, etc.), access logs and their format can be customized in the Nginx configuration. You can use the combined format, as in this example for a Linux system:

access_log /var/log/nginx/access.log;

This line is similar to explicitly specifying the combined format, which you can also extend, for example with the server port, and register under your own name:

# note that the log_format directive below is a single line
log_format mycombinedplus '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $server_port';
access_log /var/log/nginx/access.log mycombinedplus;

Log analysis and filtering: after writing proper logs, you might want to analyze them and obtain useful insights. First, open files using with blocks, so you won’t have to worry about closing them. Moreover, avoid reading everything into memory at once; instead, read one line at a time and use it to update cumulative statistics. Using the combined log format can be practical if you plan to use log analysis tools (e.g. ELK), because they have pre-built filters for consuming these logs.

If you need to parse your log output for analysis you might want to use the code below:

    import csv

    # inside a log-analysis class: parse each space-delimited line and
    # feed the parsed fields into the cumulative statistics
    with open(logfile, "r", newline="") as f:
        for line in csv.reader(f, delimiter=" "):
            self._update(**self._parse(line))

Python’s csv module contains code to read CSV files and other files with a similar format. In this way, you can combine Python’s logging library to write the logs and the csv module to parse them.

And of course, there is the Coralogix way for Python logging: the Coralogix Python appender lets you send all Python logs directly to Coralogix for search, live tail, alerting, and machine learning powered insights such as new error detection and flow anomaly detection.

Give our best practices for logging in Python a try and let us know about your experience. Do you have more tips that you’d like to share? We’d love to hear from you!
