A Content Delivery Network (CDN) is a distributed set of servers designed to get your web-based content into the hands of your users as fast as possible. CDNs produce logs that can be analyzed, and this information is invaluable. Why? CDNs host servers all over the world, helping you scale your traffic without maxing out your load balancers, and they give you added protection against many of the most common cyber attacks. All of this activity needs to be closely monitored.
A CDN, such as Akamai or Fastly, does all of this brilliant work, but we often ignore the need to monitor it. CDN log analysis is the missing piece in your observability strategy, and ignoring it is a mistake. A CDN is a fundamental part of your infrastructure and, as such, needs to be a first-class citizen in your alerting and monitoring conversations.
Accessing the logs for your CDN will differ depending on which provider you go with. For example, some providers stream logs to an endpoint you control in near real time, while others periodically deposit log files into object storage. Whatever the mechanism, you'll need to create some method of extracting the logs directly from your provider. Once you have the logs, you need to understand what you're looking at.
The following is a very common format of a web access log. Be mindful that on modern CDN solutions you can change the format of your logs to something more suited to analysis, such as JSON, but this example shows the type of information that is typically available in your CDN logs:
127.0.0.1 username [10/Oct/2021:13:55:36 +0000] "GET /my_image.gif HTTP/2.0" 200 150 1289
Let’s break this line down into its constituent parts.
IP Address (127.0.0.1)
This is the source IP address from which the user has requested their data. This is useful because you'll be able to spot a high number of requests coming from the same IP address, which may indicate that someone is misusing your site.
Username (username)
Some providers will decode the Authorization header in the incoming request and attempt to extract the username. For example, a Basic authentication request contains the username and password encoded in base64. If you detect any malicious activity, you may be able to trace it back to an account that you can close down.
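As an illustration, the decoding step is straightforward. The sketch below uses a made-up Authorization header value to show how Basic credentials unpack into a username; remember that Basic credentials are merely base64-encoded, not encrypted:

```python
import base64

# Hypothetical Authorization header; the value is base64("username:password")
header = "Basic dXNlcm5hbWU6cGFzc3dvcmQ="

scheme, _, encoded = header.partition(" ")
if scheme == "Basic":
    # Split the decoded credentials on the first colon to recover the username
    username, _, password = base64.b64decode(encoded).decode().partition(":")
    print(username)  # -> username
```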
Timestamp (10/Oct/2021:13:55:36 +0000)
As the name suggests, this portion of the log indicates when the request was sent. This is usually one of the key values when you're looking to render the data out on a graph, for example, to detect sudden spikes in traffic.
Request Line (“GET /my_image.gif HTTP/2.0”)
The request line indicates the type of request and what was requested. For example, we can see that an HTTP GET request was issued. This means that the user was most likely requesting something from the server. Another example might be POST where the user is sending something to the server. You can also see which resource was requested and which version of the HTTP protocol was used.
HTTP Status (200)
The HTTP status lets you know whether your server was able to fulfill the request. As a general rule of thumb, if your HTTP status code begins with 2, the request was most likely successful. Anything else indicates a different state: 4XX status codes, for instance, indicate that the request could not be fulfilled, whether through lack of authentication or a missing resource, as in the common 404 error.
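A quick way to see this picture in aggregate, sketched below with made-up status codes, is to bucket each code by its leading digit and compute an error rate:

```python
from collections import Counter

# Hypothetical status codes pulled from a batch of log lines
statuses = [200, 200, 304, 404, 500, 200]

# Group by class: 2xx, 3xx, 4xx, 5xx
classes = Counter(f"{code // 100}xx" for code in statuses)

# Treat anything from 400 upwards as an error
error_rate = sum(code >= 400 for code in statuses) / len(statuses)

print(classes)                         # counts per status class
print(f"error rate: {error_rate:.0%}")
```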
Latency (150)
Latency is a killer metric to track. It is the time taken between the request arriving at your CDN and the response being sent back to the user. Spikes in latency mean slowdowns for your users and can be the first indication that something is going wrong.
Response size (1289)
The response body size is an often-ignored value, but it is incredibly important. If an endpoint that delivers a large response body is being used excessively, this can translate into much more work for the server. Understanding the response size gives you an idea of the true load that your application is under.
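To make the breakdown concrete, here is a minimal parser for the example line above. The regex assumes the exact space-separated layout shown; real providers vary, so treat this as a sketch to adapt:

```python
import re

# Matches: IP, username, [timestamp], "request line", status, latency, response size
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d{3}) (?P<latency>\d+) (?P<size>\d+)'
)

line = '127.0.0.1 username [10/Oct/2021:13:55:36 +0000] "GET /my_image.gif HTTP/2.0" 200 150 1289'
fields = LOG_PATTERN.match(line).groupdict()

# The request line itself splits into method, resource, and protocol
method, path, protocol = fields["request"].split()
print(fields["ip"], fields["status"], fields["latency"], path)
```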
So now that you know what to expect from your CDN logs, what should you be looking for?
If you monitor how much traffic you’re getting, you can immediately detect when something has gone wrong. If you include the latency property in this, you can quickly track when a slow-down is occurring.
Averages are useful, but they hide important information, such as variance. For example, if 9 of your requests respond in 100ms but 1 takes 10 seconds, your average latency will be about 1 second. Because averages can bury outliers like this, you need something different: percentiles.
It is best to take the median, 95th, and 99th percentiles of your data. Using the same example, the median of our data set would be 100ms (which reflects our most common value), the 95th would be 5545ms, and the 99th would be 9109ms. This shows that while most of our traffic sits around the 100ms mark, we have outliers to investigate.
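As a sketch, the calculation behind those numbers uses linear interpolation between closest ranks (the default method in tools such as NumPy); applied to the example data it reproduces the figures above:

```python
def percentile(values, pct):
    """Percentile via linear interpolation between closest ranks."""
    ordered = sorted(values)
    rank = (len(ordered) - 1) * pct / 100
    lower = int(rank)
    upper = min(lower + 1, len(ordered) - 1)
    return ordered[lower] + (rank - lower) * (ordered[upper] - ordered[lower])

# Nine fast requests and one slow outlier, in milliseconds
latencies = [100] * 9 + [10000]

print(sum(latencies) / len(latencies))  # average: 1090.0 -- misleading
print(percentile(latencies, 50))        # median: 100.0
print(percentile(latencies, 95))        # ~5545
print(percentile(latencies, 99))        # ~9109
```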
If you’re hosting a live event on your site, or perhaps hosting a webinar or live talk and you’re directing people to your site, that sudden influx of users is going to put a strain on your system and the CDN you’re using. You can check how much traffic you’re getting (by grouping requests into 1-second buckets and counting), or you can monitor latency to check for slowdowns. You could also look for errors in your logs to see if the users have uncovered a bug.
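The 1-second bucketing mentioned above can be sketched directly from parsed timestamps; the sample values below are hypothetical:

```python
from collections import Counter
from datetime import datetime

# Hypothetical values lifted from the timestamp field of each log line
timestamps = [
    "10/Oct/2021:13:55:36 +0000",
    "10/Oct/2021:13:55:36 +0000",
    "10/Oct/2021:13:55:37 +0000",
]

# The access-log timestamp format already has 1-second resolution,
# so parsing each value and counting duplicates yields 1-second buckets
buckets = Counter(
    datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z") for ts in timestamps
)
for second, count in sorted(buckets.items()):
    print(second.isoformat(), count)
```

A sudden jump in any bucket's count is the spike you want an alert on.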
It’s tempting to view your CDN logs as an operational measurement and nothing else; however, CDN logs are much more valuable.
By monitoring the specific resources that users are requesting, you can identify your high-traffic pages. These high-traffic pages will make great locations for advertisements or product promotions. In addition, you can find where users drop off from your site and work to fix those pages.
CDN logs help you to detect suspicious traffic. For example, web scraping software will work through your web pages. If you notice someone rapidly moving through every page on your site from the same IP address, it is almost certainly a scraper, and you may wish to block that IP.
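A first pass at spotting this can be as simple as counting requests per IP and flagging anything above a threshold; the addresses and threshold below are illustrative:

```python
from collections import Counter

# Hypothetical (ip, path) pairs extracted from your CDN logs
requests = [
    ("203.0.113.9", "/page1"),
    ("203.0.113.9", "/page2"),
    ("203.0.113.9", "/page3"),
    ("198.51.100.4", "/page1"),
]

THRESHOLD = 3  # tune to what "normal" looks like for your traffic

hits = Counter(ip for ip, _ in requests)
suspects = [ip for ip, count in hits.items() if count >= THRESHOLD]
print(suspects)  # -> ['203.0.113.9']
```

In practice you would also window the counts by time, since even legitimate users accumulate requests over a long session.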
Coralogix offers a centralized, mature, scalable platform that can store and analyze your logs, using some of the most advanced observability techniques in the world. Correlating your logs across a centralized platform like Coralogix will enable you to combine your CDN insights with your application logs, your security scans, and much more, giving you complete observability and total insight into the state of your system. With integrations to Akamai, Fastly, Cloudflare, and AWS, you’re probably already in a position to get the best possible value out of Coralogix.
Whenever you use a CDN, these logs are a goldmine of useful information that will enable you to better understand the behavior of your users, the performance of your service, and the frequency of malicious requests that arrive at your website. These insights are fundamental for learning and growing your service, so you can safely scale and achieve your goals. While you grow, consider a full-stack observability platform like Coralogix, so you can skip the engineering headaches and get straight to the value.