Distributed Tracing Observability in Microservices

Have you ever tried to find a bug in a multi-layered architecture? Although this might sound like a simple enough task, it can quickly become a nightmare if the system doesn’t have proper monitoring. And the more distributed your system is, the more complex it becomes to analyze the root cause of a problem.

That’s precisely why observability is key in distributed systems. Observability can be thought of as the advanced version of application monitoring. It helps engineers gain visibility into a system by analyzing the three pillars of observability — logs, metrics and traces.

While there is some overlap between the three data points, tracing helps provide a holistic view of application performance. Traces follow a request throughout its lifecycle of interacting with different components and record data about how each service has performed.

This article will explain distributed tracing and how it works in microservices.

What is distributed tracing for microservices?

Microservices have become pretty popular, and with good reason, too — they allow businesses to scale horizontally and rapidly. However, they also increase complexity.

Think of it this way: when you want to scale a business unit, you create smaller teams that can each work efficiently on a single task, such as a frontend team that handles UI/UX tweaks on your website or a backend team that manages order fulfillment.

But now, you also have to think about how your frontend team will communicate with your backend team. If either side is slow to respond or works out of step with the other, the customer’s order can get stuck.

Applications behave similarly. In distributed systems, dozens or even hundreds of services depend on the responses of application calls. Analyzing where a call originated from and where it is breaking is difficult, especially when a single team cannot fully understand how individual services interact with each other.

Even if you enable traditional log monitoring on an application, you only get the overall response time of a microservice. So, how do you understand which piece of code or service is responsible? More importantly, how do you resolve it before it affects customers?

The answer lies in using observability to your advantage. Observability goes beyond traditional monitoring by combining logs, metrics, security, and traces to troubleshoot application performance issues quickly. However, most companies haven’t yet integrated the most important aspect of observability into their stack: distributed tracing.

Distributed tracing in microservices provides a bird’s eye view of a request traversing every single component of a vast network of services. Kind of like a manager knowing exactly how their team works, right down to individual response times and how team members interact with each other. Neat, right?

In applications, traces break down the time each component takes so that you can quickly pinpoint the source of a performance bottleneck. This data can then be overlaid with logs and metrics to focus your optimization efforts.

Here’s how it works in practice.

How does distributed tracing work?

In a distributed system, a request generally spans different microservices and returns a response to the function that triggered it. A trace follows this request over multiple spans, gathering data on how each service has performed. This data can be linked together through a unique correlation ID, or trace ID. 

Thus, distributed tracing can help you visualize the request as it flows through each system component. But, of course, tracing needs to be implemented in each microservice to achieve this.
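To make this concrete, here is a minimal sketch of how a correlation ID can be generated at the edge of the system and forwarded with every downstream call in an Express service. The header name x-correlation-id and the downstream inventory URL are illustrative assumptions, and the snippet assumes Node 18+ for the built-in fetch and crypto.randomUUID; in practice a tracing library would do this propagation for you.

const express = require("express");
const crypto = require("crypto");

const app = express();

//Reuse an incoming correlation ID if one exists, otherwise start a new trace
app.use((req, res, next) => {
  req.correlationId = req.headers["x-correlation-id"] || crypto.randomUUID();
  res.setHeader("x-correlation-id", req.correlationId);
  next();
});

app.get("/checkout", async (req, res) => {
  //Forward the same ID so the downstream service can stitch its logs and spans into this trace
  const inventory = await fetch("http://localhost:8081/inventory", {
    headers: { "x-correlation-id": req.correlationId },
  });
  res.send({ correlationId: req.correlationId, inventory: await inventory.json() });
});

app.listen(8080);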

Before you think you need a system overhaul, let’s talk about tracing tools. Tracing tools help you track, find, and diagnose errors in multi-tiered applications with a fraction of the effort required if you go the manual route.

While it’s true that you need to make some system modifications when you first implement tracing, it’s a one-time effort if you choose the right tool. Make sure the tool you choose follows the open standards for tracing outlined by OpenTelemetry. This way, you can easily switch between vendors if your tracing requirements change.

Popular tools that adhere to these standards include open-source options such as Jaeger and Zipkin. Coralogix’s distributed tracing screen also adheres to OpenTelemetry.
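For reference, a typical OpenTelemetry setup in a Node.js service looks roughly like the sketch below. The package names, the serviceName option, and the collector endpoint are assumptions that vary across SDK versions, so treat it as an illustration rather than copy-paste configuration.

// tracing.js - load this before the rest of the application starts
const { NodeSDK } = require("@opentelemetry/sdk-node");
const { getNodeAutoInstrumentations } = require("@opentelemetry/auto-instrumentations-node");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-http");

const sdk = new NodeSDK({
  serviceName: "pets-service",
  //Any OTLP-compatible backend (Jaeger, Zipkin behind a collector, or a vendor) can receive these spans
  traceExporter: new OTLPTraceExporter({ url: "http://localhost:4318/v1/traces" }),
  //Auto-instrumentation creates spans for Express routes, outgoing HTTP calls, and more without code changes
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

Starting the app with node -r ./tracing.js pets.js (or importing the file first) registers the instrumentation before Express loads, so every incoming request carries a trace ID that the backend can use to stitch spans together.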

Now that you understand what distributed tracing is and how it works, let’s look at some best practices and challenges.

Distributed tracing: challenges and best practices

Challenge 1:

Although tracing is a powerful tool in your monitoring arsenal, it’s not always the best choice. Tracing is a resource-intensive process. Since you’re adding extra load to your application and logging each response, the amount of data produced can impact the application’s performance. And that’s the opposite of what we want to do.

The Solution:

The best way forward here is to sample the traces and only switch them on if your logs show an anomaly. Think of tracing as an emergency alarm that can help you defuse challenging situations, but you don’t want it to keep ringing all the time.
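As a rough illustration of this advice, OpenTelemetry’s JavaScript SDK lets you configure a head-based sampler so that only a fraction of traces is recorded, while still honoring the sampling decision of an upstream caller. The class and package names below are from the JS SDK and may differ across versions and languages.

const { NodeTracerProvider } = require("@opentelemetry/sdk-trace-node");
const { ParentBasedSampler, TraceIdRatioBasedSampler } = require("@opentelemetry/sdk-trace-base");

//Record roughly 10% of new traces, but always follow the decision of an upstream
//service so that a sampled request stays sampled end to end
const provider = new NodeTracerProvider({
  sampler: new ParentBasedSampler({
    root: new TraceIdRatioBasedSampler(0.1),
  }),
});

provider.register();

If you prefer, the standard OTEL_TRACES_SAMPLER and OTEL_TRACES_SAMPLER_ARG environment variables can select the sampler instead of hard-coding it, which makes it easier to raise the ratio temporarily while investigating an anomaly and dial it back down afterwards.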

Challenge 2:

Tracing your entire system can be complicated, expensive, and extensive. You need to first understand if it makes sense for your business.

For example, if your applications primarily run on legacy systems or are transitioning from legacy to modern systems, your components likely use different programming languages and frameworks. Retrofitting tracing in a system like this is complex and requires you to devote engineering time to this process. You might even introduce bugs while trying to make tracing work.

The Solution:

While implementing tracing, an iterative approach works best. You’ll need to prioritize common breaking points and critical services within your applications for tracing. 

With this approach, you have to accept that there might be blind spots in your system. However, narrowing down those blind spots will be easier by the process of elimination. This would result in faster resolution times and effective troubleshooting. And once your application uptime improves, getting stakeholders to approve “tracing” sprints will be more straightforward.

Wrapping up

Distributed tracing is critical to observability in a distributed system. With traces, you gain access to data that can help streamline many crucial processes, such as running tests in production, disaster recovery, or fault injection testing. 

However, tracing cannot always be left running, as it can affect application performance. Thus, it’s crucial to combine it with the other pillars of observability, logs and metrics, plus security data, to build a complete picture of what’s happening with your system. That’s why Coralogix’s full-stack observability platform combines all four to help monitor your system at all times. Try us out today!

More Changes Mean More Challenges for Troubleshooting

The widespread adoption of Agile methodologies in recent years has allowed organizations to significantly increase their ability to push out more high-quality software. Fast-paced CI/CD pipelines and lightweight microservices architectures enable us to introduce new updates at a rate and scale that would have seemed unimaginable just a few years ago.

Previous development practices revolved heavily around centralized applications and infrequent updates that were shipped maybe once a quarter or even once a year. 

But with the benefits of new technologies comes a new set of challenges that need to be addressed as well. 

While the tools for creating and deploying software have improved significantly, we are still in the process of establishing a more efficient approach to troubleshooting. 

This post takes a look at the tools that are currently available for helping teams keep their software functioning as intended, the challenges that they face with advances in the field, and a new way forward that will speed up the troubleshooting process for everyone involved.

Challenges Facing Troubleshooters 

On the face of it, the ability to make more software is a positive mark in the ledger. But with new abilities come new issues to solve. 

Let’s take a look at three of the most common challenges facing organizations right now.

The Move from the Monolith to Microservices 

Our first challenge is that there are now more moving parts where issues may arise that can complicate efforts to fix them quickly and efficiently. 

Since making the move to microservices and Kubernetes, we are now dealing with a much more distributed set of environments. We have replaced the “monolith” of the single core app with many smaller, more agile but dispersed apps. 

Their high level of distribution makes it more difficult to quickly track down where a change occurred and understand what’s impacting what. As the elements of our apps become more siloed, we lose overall visibility over the system structure.

Teams are Becoming More Siloed

The next challenge is that with more widely distributed teams taking part in the software development process, there are far fewer people who are significantly familiar with the environment when it comes to addressing problems impacting the products.

Each team is proficient in its own domain and segment of the product, but is essentially siloed off from the other groups. This leads to a lack of knowledge sharing across teams that can be highly detrimental when someone gets called in to fix an issue stemming from another team’s part of the product.

More Changes, More Often

The final challenge in our current review is the fact that changes are happening far more often than before.

Patches no longer have to wait until it’s Tuesday to come out. New features on the app or adjustments to infrastructure like load balancing can impact the product and require a fix. With all this going on, it is easy for others to not be aware of these changes.

The lack of communication caused by silos, combined with the increase in the number of changes, can create confusion when issues arise. Sort of like finding a specific needle in a pile of needles. It’s enough to make you miss the haystack metaphor.

In Search of Context

In examining these challenges, we can quickly understand that they all center around the reality that those tasked with troubleshooting issues lack the needed context for fixing them. 

Developers, who are increasingly being called upon to troubleshoot, lack the knowledge of what has changed, by whom, what it impacts, or even where to start looking in a potentially unfamiliar environment.

Having made the move to the cloud, developers have at their disposal a wide range of monitoring and observability tools covering needs related to tracing, logs, databases, alerting, security, and topology, providing them with greater visibility into their software throughout the Software Development Lifecycle.

When they get the call that something needs their attention, they often have to begin investigating essentially from scratch. This means jumping into their many tools and dashboards, poring over logs to try and figure out the source of the problem. While these monitoring and observability tools can provide valuable insights, it can still be difficult to identify which changes impacted which other components.

Some organizations attempt to use communication tools like Slack to track changes. While we love the scroll as much as anyone else, it is far from a comprehensive solution and still lacks the connection between source and impact. Chances are that they will need to call in for additional help in tracking down the root of the problem from someone else on their team or a different team altogether.

In both cases, the person on-call still needs to spend significant and valuable time on connecting the dots between changes and issues. Time that might be better spent on actually fixing the problem and getting the product back online.  

Filling in the Missing Part of the DevOps Toolchain

The tooling available to help identify issues is getting much better at providing visibility, including giving on-call responders the context that will help them get to the source of the problem faster.

Moving forward, we want to see developers gain a better, holistic understanding of their Kubernetes environments. So even if their microservices are highly distributed, their visibility over them should be more unified. 

Fixing problems faster means getting the relevant information into the hands of whoever is called up to address the issue. It should not matter if the incident is in a product that they themselves built or if they are opening it up for the first time. 

Reducing the MTTR relies on providing them with the necessary context from the tools that they are already using to monitor their environment. Hopefully by better utilizing our existing resources, we can cut down on the time and number of people required to fix issues when they pop up — allowing us to get back to actually building better products.

How to Address the Most Common Microservice Observability Issues

Breaking down larger, monolithic software, services, and applications into microservices has become a standard practice for developers. While this solves many issues, it also creates new ones. Architectures composed of microservices create their own unique challenges.

In this article, we are going to break down some of the most common. More specifically, we are going to assess how observability-based solutions can overcome many of these obstacles.

Observability vs Monitoring

We don’t need to tell you that monitoring is crucial when working with microservices. This is obvious. Monitoring in any area of IT is the cornerstone of maintaining a healthy, usable system, software, or service.

A common misconception is that observability and monitoring are interchangeable terms. The difference is that while monitoring gives you a great picture of the health of your system, observability takes these findings and provides data with practical applications.

Observability is where monitoring inevitably leads. A good monitoring practice will provide answers to your questions. Observability enables you to know what to ask next.

No App Is An Island

In a microservices architecture, developers can tweak and tinker with individual apps without worrying that this will force a full redeploy. However, the larger the microservice architecture gets, the more issues this creates. When you have dozens of apps, worked on by as many developers, you end up running a service that relies on a multitude of different tools and coding languages.

A microservice architecture cannot function if the individual apps lack the ability to communicate effectively. For an app in the architecture to do its job, it will need to request data from other apps. It relies on smooth service-to-service interaction. This interaction can become a real hurdle when each app in the architecture was built with differing tools and code.

In a microservice-based architecture you can have thousands of components communicating with each other. Observability tools give developers, engineers, and architects the power to observe the way these services interact. This can be during specific phases of development or usage, or across the whole project lifecycle.

Of course, it is entirely possible to program communication logic into each app individually. With large architectures, though, this can be a nightmare. It is when microservice architectures reach significant size and complexity that our first observability solution comes into play: a service mesh.

Service Mesh

A service mesh works inter-service communication into the infrastructure of your microservice architecture. It does this using a concept familiar to anybody with knowledge of networks: proxies.

What does a service mesh look like in your cluster?

Your service mesh takes form as an array of proxies within the architecture, commonly referred to as sidecars. Why? Because they run alongside each service instead of within them. Simple!

Rather than communicate directly, apps in your architecture relay information and data to their sidecar. The sidecar will then pass this to other sidecars, communicating using a common logic embedded into the architecture’s infrastructure.

What does it do for you?

Without a service mesh, every app in your architecture needs to have communication logic coded in manually. Service meshes remove (or at least severely diminish) the need for this. Also, a service mesh makes it a lot easier to diagnose communication errors. Instead of scouring through every service in your architecture to find which app contains the failed communication logic, you instead only have to find the weak point in your proxy mesh network.

A single thing to configure

Implementing new policies is also simplified. Once out there in the mesh, new policies can be applied throughout the architecture. This goes a long way to safeguarding yourself from scattergun changes to your apps throwing the wider system into disarray.

Commonly used service meshes include Istio, Linkerd, and Consul. Using any of these will minimize downtime (by diverting requests away from failed services), provide useful performance metrics for optimizing communication, and allow developers to keep their eye on adding value without getting bogged down in connecting services.

The Three Pillars Of Observability

It is generally accepted that there are three important pillars needed in any decent observability solution. These are metrics, logs, and traces.

By adhering to these pillars, observability solutions can give you a clear picture of an individual app in an architecture or the infrastructure of the architecture itself.

An important note is that this generates a lot of data. Harvesting and administrating this data is time-consuming if done manually. If this process isn’t automated it can become a bottleneck in the development or project lifecycle. The last thing anybody wants is a solution that creates more problems than it solves.

Fortunately, automation and Artificial Intelligence are saving thousands of man hours every day for developers, engineers, and anybody working in or with IT. Microservices are no exception to this revolution, so there are of course plenty of tools available to ensure tedious data wrangling doesn’t become part of your day-to-day life.

Software Intelligence Platforms

Having a single agent provide a real-time topology of your microservice architecture has no end of benefits. Using a host of built-in tools, a Software Intelligence Platform can easily become the foundation of the smooth delivery of any project utilizing a large/complex microservice architecture. These platforms are designed to automate as much of the observation and analysis process as possible, making everything from initial development to scaling much less stressful.

A great software intelligence platform can:

  • Automatically detect components and dependencies.
  • Understand which component behaviors are intended and which aren’t.
  • Identify failures and their root cause.

Tracking Requests In Complex Environments

Since the first days of software engineering and development, traceability of data has been vital. 

Even in monolith architectures keeping track of the origin points of data, documentation, or code can be a nightmare. In a complex microservice environment composed of potentially hundreds of apps it can feel impossible.

This is one of the few areas in which a monolith has an operational advantage. When literally every bundle of code is compiled in a single artifact, troubleshooting or request tracking through the lifecycle is more straightforward. Everything is in the same place.

In an environment as complex and multidimensional as a microservices architecture, documentation and code bounces from container to container. Requests travel through a labyrinth of apps. Keeping tabs on all this migration is vital if you don’t want debugging and troubleshooting to be the bulk of your workload.

Thankfully, there are plenty of tools available (many of which are open source) to ensure tracking requests through the entire life-cycle is a breeze.

Jaeger and Zipkin: Traceability For Microservices

When developing microservices it’s likely you’ll be using a stack containing some DevOps tooling. It is safe to assume that most projects are at least utilizing containerization of some description.

Containers and microservices are often spoken of in the same context. There is a good reason for this. Many of the open source traceability tools developed for one also had the other very much in mind. The question of which best suits your needs largely depends on your containerization stack. 

If you are leaning into Kubernetes, then Jaeger will be the most functionally compatible. In terms of what it does, Jaeger has features like distributed transaction monitoring and root cause analysis that can be deployed across your entire system. It can scale with your environment and avoids single points of failure by way of supporting a wide variety of storage back ends.

If you’re more Docker-centric, then Zipkin is going to be much easier to deploy. This ease of use is aided by the fact that Zipkin runs as a single process. There are several other differences, but functionally Zipkin fills a similar need to Jaeger. They both allow you to track requests, data, and documentation across an entire life-cycle in a containerized, microservices architecture.
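If you want to experiment with either before committing, both projects publish all-in-one Docker images; the commands below are a rough sketch, and image tags or extra port mappings may need adjusting for your setup.

# Jaeger all-in-one: UI on http://localhost:16686
docker run -d --name jaeger -p 16686:16686 -p 4318:4318 jaegertracing/all-in-one

# Zipkin: UI and collector API on http://localhost:9411
docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin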

Logging Frameworks

The importance of logging cannot be overstated. If you don’t have effective systems for logging errors, changes, and requests, you are asking for nothing short of chaos and anarchy. As you can imagine, in a microservices architecture potentially containing hundreds of apps from which bugs and crashes can originate, a decent logging solution is a high priority.

Effective logging observability within a microservices architecture requires a standardized, system-wide approach to logging. Logging frameworks are a great way to achieve this. Logging is so fundamental that some of the earliest open-source tools available were logging frameworks. There’s plenty to choose from, and they all have long histories and solid communities for support and updates by this point.

The tool you need really boils down to your individual requirements and the language or framework you’re developing in. If you’re logging in .NET then something like NLog, log4net, or Serilog will suit. For Java, your choice may be between Log4j and Logback. There are logging frameworks targeting most programming languages. Regardless of your stack, there’ll be something available.

Centralizing Log Storage

Now that your apps have a framework in place to log deployments, requests, and everything else, you need somewhere to keep those logs until you need them. Usually, that is when something has gone wrong. The last thing you want to be doing on the more stressful days is wading through a few dozen apps’ worth of log data.

Like almost every problem on this list, the reason observability is necessary for your logging process is due to the complexity of microservices. In a monolith architecture, logs will be pushed from a few sources at most. In a microservice architecture, potentially hundreds of individual apps are generating log data every second. You need to know not just what’s happened, but where it happened amongst the maze of inter-service noise.

Rather than go through the incredibly time-consuming task of building a stack to monitor all of this, my recommendation is to deploy a log management and analysis tool like Coralogix to provide a centralized location to monitor and analyze relevant log data from across the entirety of your architecture.

When errors arise or services fail, any of the dozens of options available will quickly inform you of both the nature of the problem and its source. Log management tools hold your data in a single location. No longer will you have to travel from app to app searching for the minor syntax error which brought down your entire system.

In Short

There are limitless possibilities available for implementing a decent observability strategy when you’re working with microservices. We haven’t even touched upon many cloud-focused solutions, for example, or delved into the realms of web or mobile app-specific tools.

If you’re looking for the short answer of how to go about overcoming microservices issues caused by poor observability, it’s this: find a solution that allows you to track relevant metrics in organized logs so everything is easily traceable.

Of course, this is highly oversimplified, but if you’re looking purely for a nudge in the right direction the above won’t steer you wrong. With so many considerations and solutions available, it can feel overwhelming when weighing up your options. As long as you remember what you set out to do, the journey from requirement to deployment doesn’t have to be an arduous one.

Microservices architectures are highly complex at a topological level. Always keep that in mind when considering your observability solutions. The goal is to enable a valuable analysis of your data by overcoming this innate complexity of microservices. That is what good observability practice brings to the table.

The Ultimate Guide to Microservices Logging 

Microservice architecture is widely popular. The ease of building and maintaining apps, scaling CI/CD pipelines, as well as the flexibility it offers when it comes to pivoting technologies are some of the main reasons companies like Uber and Netflix are all in on this approach.

As the number of services in a microservice architecture rises, complexity naturally rises with it. Bugs and individual service failures can be very tricky to deal with, and having to spend hours digging deep to find the root of an unnamed error can be a daunting and unproductive task.

To effectively deal with the challenges of system errors, request chain breakdowns, or even simply to stay on top of your system architecture, logging is a vital tool. It is undeniably the cornerstone of maintaining and debugging your app efficiently. As will become clear, there are some fundamental differences when it comes to logging in microservices, as opposed to more conventional, pre-2011 architectures. Moving away from a monolithic architecture offers us many advantages, but also comes with greater responsibilities.

This post will break down the best practices, including a step-by-step approach to injecting a fantastic logging mechanism into a Node.js/Express.js App. We will start by building a quick Microservice REST API, then delve into the best practices in a practical manner. For logging, we will be using Winston, a popular npm package that offers us powerful tools to log within a Node environment. 

Prerequisites:

  • Node.js and npm installed locally.
  • Basic familiarity with Express.js and REST APIs.
  • A terminal with curl available for testing the endpoints.

Collecting Logs 

As is common with Microservices, requests can span multiple services, on several different machines, and can be tricky to deal with.

Scattered log files, each containing information for a single, or subset of services, often means difficult debugging and a lot of time spent searching for what exactly has caused the error – or what precise line of code has caused a process to simply stop functioning. There is an abundance of issues that can arise when using non-centralized log mechanisms, and it has increasingly become the norm to aggregate logs instead.

Logs that have been aggregated together generate immense value due to their centralized nature. What if we would like to keyword search for log instances? What if we would like to format our log data to be meaningful, flawlessly readable, and easily digestible? If we are dealing with scattered logs, this is simply too tricky to achieve.

Collecting logs saves time, resources, and in many cases, sanity. Depending on what your ultimate aim is, aggregating logs gives space for dashboard-style log analyzing, where log resources can be abstracted into beautiful user interfaces that will give you the power to really stay on top of monitoring your application.

This article walks you step-by-step through the process of setting up a logging solution based on Elasticsearch, Fluentd, and Kibana. It covers a practical, efficient solution to logging in Kubernetes.

Service Name

Adding the service name to every single log instance is the no-brainer beginning of good logging practices. Unmasking which specific service or request chain has caused an error is the only effective way to troubleshoot your application, and what better way to get to the root of errors than a centralized logging system that tags the service name onto every single instance? We want our logs to be accountable; such accountability only comes from the culprit always being clear.

You might be used to using the built-in JavaScript method that allows us to easily log output into our console, console.log(). Of course, for a plethora of typical use cases, this can be more than suitable.

However, it is important to clearly understand why console.log() is not enough for a sizeable Microservice architecture. Firstly, it’s virtually impossible to centralize your logging mechanism if it’s being handled by the console. Carrying forward the idea of a central logging file or database, it’s also advantageous if we can easily attach as much information as we need to any log instance. This is not achievable through console logging without fiddly, awkward parsing.

If we have many services running simultaneously, formatting the log data should be high on the list of priorities – we want the logged data to be automatable and clean.

As a practical start to logging in Microservices, let’s introduce the idea of making sure each logged output comes accompanied by its related service name. If Service A, for example, makes a request to Service B, which then fails whilst requesting information from Service C, it is imperative that we know that this failure has stemmed from the communication between Service B and C. It may seem trivial, but including the service name in the log output is a great way to make your logs work for you. When we have failures occurring simultaneously across multiple services, this will prove invaluable as a troubleshooting mechanism.

Setting up your Application

Let’s fire up a Node.js/Express.js app, inject Winston, and implement a logging service that will incorporate some of the best practices when it comes to Microservices. If you have an Express app already set up, skip this part of the tutorial.

The application will be a simple REST API for storing, retrieving, and modifying pet information. The small example will be for matching pets with potential buyers. We will start off with some simple console logging, then move onto using Winston and unleashing its full potential. Let’s begin.

Navigate to wherever you would like to begin the project, and create the following file structure:

./pets/img
./pets/pets.js

This can be done with the following commands: 

mkdir -p pets/img
touch pets/pets.js

Pets.js is where we will implement our first service, and the “img” directory will contain our pet images. From within our pets directory, we can initialize the npm project. We can initialize the project with the initial dependencies we need like this:

npm init -y
npm install express body-parser request

Great! We have our node app set up now, and we are ready to populate pets.js with our REST API service. Begin by downloading the following pet images that we will use to populate our pets/img directory. Save them as labeled:

PeterParrot.jpg

Rex.jpg

Luther.jpg

Mink.jpg

Now we can populate our first service, pets.js, with the following code:

const express = require("express");
const path = require("path");
const bodyParser = require("body-parser");

const app = express();
app.use(bodyParser.json());

//Attributes that a pet can have, capped at 2. E.g. if a pet has [1,4] = Loyal & Intelligent.
const attributes = [
  { id: 1, attName: "Loyal" },
  { id: 2, attName:"Aggressive" },
  { id: 3, attName: "Affectionate" },
  { id: 4, attName: "Intelligent" },
  { id: 5, attName: "Courageous" },
  { id: 6, attName: "Gentle"}
];
//List of pets contained in the application, with all relevant information.
const pets = [
  {
    id: 1,
    displayName: "PeterParrot",
    attributes: [1, 4],
    img: "PeterParrot.jpg",
  },
  {
    id: 2,
    displayName: "Mink",
    attributes: [4, 5],
    img: "Mink.jpg",
  },
  {
    id: 3,
    displayName: "Rex",
    attributes: [2, 5],
    img: "Rex.jpg",
  },
  {
    id: 4,
    displayName: "Luther",
    attributes: [1, 3],
    img: "Luther.jpg",
  },
];
  
//A simple GET route that returns the list of all pets.
app.get("/pets", (req, res) => {
  console.log("Returning the list of available pets");
  res.send(pets);
});

//A simple GET route that returns the list of all attributes.
app.get("/attributes", (req, res) => {
  console.log("Returning the list of possible attributes");
  res.send(attributes);
});
//A simple POST route that allows the modification of a single pet "profile"
app.post("/pet/**", (req, res) => {
  const petId = parseInt(req.params[0]);

  //Searching list of pets for id using param
  const petFound = pets.find((subject) => subject.id === petId);

  //Finding a pet according to his unique petID
  if (petFound) {
    for (let attribute in petFound) {
      if (req.body[attribute]) {
        //The attribute we are changing
        petFound[attribute] = req.body[attribute];
        console.log(
          `Changing the following: ${attribute} to ${req.body[attribute]} for the pet with the ID of: ${petId}`
        );
      }
    }
    res
      .status(202)
      .header({ Location: `http://localhost:8081/pet/${petFound.id}` })
      .send(petFound);
  } else {
    console.log(`Pet id not found. Try a different ID number`);
    res.status(404).send();
  }
});

app.use("/img", express.static(path.join(__dirname, "img")));
console.log(`listening on 8081`);
app.listen(8081);

As it stands, pets.js contains the following:

  • pets : Contains the list of pets, alongside information such as displayName, attributes and an accompanying image.
  • attributes : Contains the list of attributes that each pet has (Capped at 2).
  • 2 GET API endpoints that retrieve the pets and attributes respectively.
  • 1 POST API endpoint that allows the attribute of any pet to be modified.
  • A simple logging mechanism using console.log()

Let’s run our service and make sure it’s working correctly.

node pets.js 
> listening on 8081


For simplicity we will just use curl requests:

curl -i --request GET localhost:8081/pets

You should see a healthy output from curl, with a 200 status code and some JSON.

HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: application/json; charset=utf-8
Content-Length: 281
ETag: W/"119-KII8px6yhw8BnHe42pQdXZZJ0yk"
Date: Sat, 18 Jul 2020 20:20:42 GMT
Connection: keep-alive

[{"id":1,"displayName":"PeterParrot","attributes":[1,4],"img":"PeterParrot.jpg"},{"id":2,"displayName":"Mink","attributes":[4,5],"img":"Mink.jpg"},{"id":3,"displayName":"Rex","attributes":[2,5],"img":"Rex.jpg"},{"id":4,"displayName":"Luther","attributes":[1,3],"img":"Luther.jpg"}]

Buyer Service

Let’s create a service to match some buyers with some pets. We want our buyers to have one attribute that they desire from a pet, and to match them with a petID that has that desired attribute. Nothing fancy. Within the same pets directory, create a new file. Keep the pets.js service running on port 8081, or at least have it ready to run when the time comes.

touch buyers.js

We can now populate buyers.js with a very similar structure that we saw for pets.js:

const express = require("express");
const request = require("request");
const path = require("path");
const bodyParser = require("body-parser");

const app = express();

app.use(bodyParser.json());

//The base URL of the pets service that this service calls
const petsService = "http://localhost:8081";

//List of buyers, with the attribute they are most looking for in their pet.
const buyers = [
  {
    id: 1,
    buyerName: "John Doe",
    desiredAttribute: ["Loyal"],
    matchedPet: 0,
  },
  {
    id: 2,
    buyerName: "Adam Apple",
    desiredAttribute: ["Courageous"],
    matchedPet: 0,
  },
  {
    id: 3,
    buyerName: "Sally Smith",
    desiredAttribute: ["Aggressive"],
    matchedPet: 0,
  },
];

//A simple GET route that returns the list of buyers
app.get("/buyers", (req, res) => {
  console.log("Returning list of potential buyers...");
  res.send(buyers);
});

//A simple POST route that assigns a pet according to the attribute that the buyer is looking for.
app.post("/assign", (req, res) => {
  request.post(
    {
      headers: { "content-type": "application/json" },
      url: `${petsService}/pet/${req.body.petId}`,
      body: `{
          "busy": true
      }`,
    },
    (err, petResponse, body) => {
      if (!err) {

        //Parsing the integer, then trying to find a match with buyer id
        const buyerId = parseInt(req.body.buyerId);
       
        const buyer = buyers.find((subject) => subject.id === buyerId);
       
        //Record which pet this buyer has been matched with
        buyer.matchedPet = req.body.petId;
        console.log("Matched a buyer with a pet");
       
        res.status(202).send(buyer);


      } else {
        res
          .status(400)
          .send({ problem: `There was a problem:  ${err}` });
      }
    }
  );
});

app.use("/img", express.static(path.join(__dirname, "img")));

console.log(`Buyers service listening on port 8082`);
app.listen(8082);

As it stands, buyers.js contains the following:

  • buyers : Contains a list of buyers with the desired pet attribute.
  • 1 GET API endpoint that returns the list of buyers.
  • 1 POST API endpoint that matches a buyer with a pet that matches the desired attribute. The matching is done using petID and buyerID (We assume the actual matching of desired attributes has been done somewhere else).

We can test it in the same way we did for the pets.js service. As mentioned earlier, we need our pets.js service to be running as well. If it’s not, get it up and running before carrying on. To initialize the buyers.js service, we run the following:

node buyers.js
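With both services up, a quick request confirms the new endpoint responds, mirroring the check we ran against pets.js:

curl -i --request GET localhost:8082/buyers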

Our example application is now finished, embodying a very simple Microservice architecture, featuring two services that communicate with each other through API endpoints.

As it stands, every bit of logging is done through the use of console.log(). Every action that is being logged is currently happening in its respective service environment. Pet logs are coming through on the pet service console, and similarly, buyer log information is being streamed on the buyer console. This would be manageable if these were the only two services in our application, as we would easily be able to keep track of resources and there would be little need for any further action. Of course, when dealing with real-world examples, there can be many, much more complex services, making requests that span multiple other service instances.

Winston

Our application is ready to be used with Winston, which will give us powerful tools to take care of logging within our Microservices architecture. Check out our guide if you aren’t familiar with Winston.

Without further ado, let’s add Winston to our project:

npm install winston --save

Now that we have Winston as a dependency, create a new file within the pets directory, and call it logger_service.js. This will contain our logging service:

touch logger_service.js

Initially, instead of logging service updates to the console, all we would like to do is use Winston to write all the logs to a centralized file with a custom format. Let us envision what we would like our starter log file instances to look like:

TIMESTAMP | LOG LEVEL | MESSAGE | SERVICE NAME

As a starting point, this will be good for giving accountability to errors in a centralized manner. Each log update will accompany its service name and a timestamp. This is important within our Microservices architecture as we want to make sure we have outputs that are detailed and specific; identifying the service is of the highest priority. We will set up the Winston transport to save into a log file that will contain updates from both of our services.

Add the following to logger_service.js and we can break down what is happening under the hood: 

//Bringing Winston in
const winston = require("winston");

//Simple helper function to return the current date and time
const timeStamp = () => {
  return new Date(Date.now()).toUTCString();
};

//Here we create our Custom Logger class
class CustomLogger {

  //We want to attach the service name to each instance, so when we call it from our services it gets attached to the message
  constructor(service) {
    this.log_data = null;
    this.service = service;

    const logger = winston.createLogger({
      transports: [
        //Here we declare the winston transport, and assign it to our file: allLogs.log
        new winston.transports.File({
          filename: `./logs/allLogs.log`,
        }),
      ],

      format: winston.format.printf((info) => {
        //Here is our custom message
        let message = `${timeStamp()} | ${info.level} |  ${info.message} | From: ${service} `;

        return message;
      }),
    });

    this.logger = logger;
  }

  setLogData(log_data) {
    this.log_data = log_data;
  }

  //JavaScript does not support method overloading, so each severity level gets a
  //single helper that takes an optional second argument (obj) for extra metadata
  async info(message, obj) {
    this.logger.log("info", message, { obj });
  }

  async debug(message, obj) {
    this.logger.log("debug", message, { obj });
  }

  async error(message, obj) {
    this.logger.log("error", message, { obj });
  }
}

module.exports = CustomLogger;

So our custom logger is built! But let’s understand what’s going on.

//Bringing Winston in
const winston = require("winston");


//Simple function to return the current date and time
const timeStamp = () => {
  return new Date(Date.now()).toUTCString();
};
 

Here all we are doing is bringing Winston in, and creating a simple helper function that always returns the current date and time. This will be important as we want a timestamp on every single log instance.

class CustomLogger {

  //We want to attach the service name to the constructor, so when we call it from our services it gets attached to the message
  constructor(service) {
    this.log_data = null;
    this.service = service;

    const logger = winston.createLogger({
      transports: [
        //Here we declare the winston transport, and assign it to our file: allLogs.log
        new winston.transports.File({
          filename: `./logs/allLogs.log`,
        }),
      ],

      format: winston.format.printf((info) => {

        //Here is our custom message
        let message = `${timeStamp()} | ${info.level} |  ${info.message} | From: ${service} `;

        return message;
      }),
    });

Here is where we declare our class that will contain our custom logger. As seen, we declare a new transport and assign it to a log file:

./logs/allLogs.log

We could just as easily set the constructor up to accept another parameter, and split our service logs into different files. If we had many services, this would be suitable, and evidently quite simple to do. Our goal here however is to collect the log information together.
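For instance, if we ever did want per-service files, the transport declaration could interpolate the service name into the filename. This is a hypothetical variation, not something we use in this tutorial:

//Hypothetical variation: one log file per service instead of a shared allLogs.log
new winston.transports.File({
  filename: `./logs/${service}.log`,
}),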

//Here is our custom message
let message = `${timeStamp()} | ${info.level} |  ${info.message} | From: ${service} `;

We are also formatting the message variable that we will use to populate each element in the log file. For starters, we are keeping it simple and just including the service name, timestamp and information regarding the log update. We are also interpolating the severity level. The rest of the functions in the class are simply helper functions to allow for the log instances to be created successfully. See the Winston documentation for more.

Pets service

When we developed our pets service earlier, we logged some small bits of information to the console. Let’s start by plugging our logger_service into pets.js. Add the following lines to the top of pets.js:

const Logger = require('./logger_service')
const logger = new Logger('pets')

Here we are creating a new instance of Logger and supplying it with the “pets” service name, so that it gets labelled correctly in the log file. We can start swapping out our console.log statements and replacing them with Winston logging methods. Replace:

console.log("Returning the list of available pets");

With:

logger.info("Returning pets");

Here we are simply using the severity level of “info” to return a simple statement when our API request is fulfilled. Let’s replace a few others within the pets service. Change:

console.log("Returning the list of possible attributes");

To: 

logger.info("Returning the list of possible attributes");

Then change:

console.log(
          `Changing the following: ${attribute} to ${req.body[attribute]} for the pet with the ID of: ${petId}`
        );

To:

const preparedLog = `Changing the following: ${attribute} to ${req.body[attribute]} for the pet with the ID of: ${petId}`;
logger.info(preparedLog);

Here we are building the full string first, as we have set up our Winston transport to format plain strings. We can easily modify our formatting by using Winston’s string interpolation formatting.
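If you would rather interpolate inside the logger itself, Winston ships a splat() format that expands printf-style placeholders. A rough sketch of that alternative (not part of our logger_service.js) might look like this:

//Hypothetical alternative: enable printf-style interpolation in the transport format
const splatLogger = winston.createLogger({
  format: winston.format.combine(winston.format.splat(), winston.format.simple()),
  transports: [new winston.transports.File({ filename: "./logs/allLogs.log" })],
});

//Placeholders are filled in from the extra arguments at log time
splatLogger.info("Changing the following: %s to %s for the pet with the ID of: %d", attribute, req.body[attribute], petId);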

And finally, change:

console.log(`Pet id not found. Try a different ID number`);

To:

logger.error(`Pet id not found. Try a different ID number`);

To test it out, let’s run the pets service again using the same command we did last time. If the port happens to be in use, here’s a simple trick to kill the previous node process occupying that port:

lsof -i:PORTNUMBER
kill -9 PID

Run the pets service again:

node pets.js

To test out our logger, let’s perform a couple of GET and POST requests in succession, within a different terminal window:

curl -i --request GET localhost:8081/pets
curl -i --request GET localhost:8081/attributes
curl -i --request POST --header "Content-Type: application/json" --data '{"attributes": [3,4]}' localhost:8081/pet/1
curl -i --request POST --header "Content-Type: application/json" --data '{"petId": 8, "buyerId": 1}' localhost:8082/assign

Our console should be empty, and the fruits of our Winston setup should be located over at ./logs/allLogs.log. Your log file should look something like this:

Sat, 18 Jul 2020 22:48:40 GMT | info |  Returning pets | From: pets
Sat, 18 Jul 2020 22:49:35 GMT | info |  Returning the list of possible attributes | From: pets
Sat, 18 Jul 2020 23:09:27 GMT | info |  Changing the following: attributes to 3,4 for the pet with the ID of: 1 | From: pets
Sat, 18 Jul 2020 23:14:09 GMT | error |  Pet id not found. Try a different ID number | From: pets

All of our curl requests were successful, apart from the last one, which purposefully input an invalid pet ID. We can see that instead of being labeled info, it is labeled error. This ease of categorization is invaluable when scaling to many services. Now that our pets service is functioning with our Winston transport, we can move on to the buyer service. Let’s begin by injecting our custom logger as we did for the pets service:

const Logger = require('./logger_service');
const logger = new Logger('buyers');

We are making sure to pass “buyers” as our service name, so it can be properly labeled on the log file. Let’s modify the only console.log() statement:

console.log("Returning list of potential buyers...");

To:

logger.info("Returning list of potential buyers...");

To test Winston out a little bit more, let’s add a few more logged outputs:

(err, petResponse, body) => {
      if (!err) {
        //Parsing the integer, then trying to find a match with buyer id
        const buyerId = parseInt(req.body.buyerId);
       
        const buyer = buyers.find((subject) => subject.id === buyerId);
       
        if (buyer === undefined) { //THIS LINE
          logger.error("Could not find the buyerId") //THIS LINE
          res.status(400).send("Error"); //THIS LINE
        }
       
        if (buyer !== undefined) { //THIS LINE
          buyer.matchedPet = req.body.petId; //THIS LINE
          logger.info("Matched a buyer with a pet " , { sessionID:  req.sessionID }) //THIS LINE
          res.status(202).send(buyer); //THIS LINE
        } //THIS LINE

      } else {
        res
          .status(400)
          .send({ problem: `There was a problem:  ${err}` });
      }
}

All we have done here is supply the log with more valuable information. If there is an error in a pets.js request to buyers.js, we want to know where and why it happened. Did pets fail or did buyers fail? We have also added validation: if a buyer ID is not found, it is now logged and the appropriate response (400) is sent. We can add as much or as little information to the Winston transport as we would like. When dealing with Microservices, more detailed log elements can only play to your advantage if you want to avoid losing track of parts of your individual services. Especially when dealing with many services, detailed logs give you fail-safe opportunities when maintaining your application. Let’s run a few more curl requests to see our logging mechanism fully at work:

curl -i --request GET localhost:8082/buyers
curl -i --request POST --header "Content-Type: application/json" --data '{"petId": 1, "buyerId": 1}' localhost:8082/assign
curl -i --request POST --header "Content-Type: application/json" --data '{"petId": 1, "buyerId": 9}' localhost:8082/assign

Two successful requests, and one failed request (Incorrect buyerID). Our updated log file should look something like this:

Sat, 18 Jul 2020 22:48:40 GMT | info |  Returning pets | From: pets
Sat, 18 Jul 2020 22:49:35 GMT | info |  Returning the list of possible attributes | From: pets
Sat, 18 Jul 2020 23:07:04 GMT | info |  Returning the list of possible attributes | From: pets
Sat, 18 Jul 2020 23:09:27 GMT | info |  Changing the following: attributes to 3,4 for the pet with the ID of: 1 | From: pets
Sat, 18 Jul 2020 23:14:09 GMT | error |  Pet id not found. Try a different ID number | From: pets
Sat, 18 Jul 2020 23:48:25 GMT | info |  Returning list of potential buyers... | From: buyers
Sat, 18 Jul 2020 23:50:40 GMT | info |  Matched a buyer with a pet. | From: buyers
Sun, 19 Jul 2020 00:36:17 GMT | error |  Could not find the buyerId | From: buyers 

So now we have achieved a basic but efficient logging service. We get detailed log updates, with labelled service names and timestamps. Our successful and failed requests are both labelled in the log file, which is already a massive improvement over using the console.

Traceability

There are still some things missing from our logs. Our information messages are suitable, but would be incredibly vague if we were dealing with several different services. There are a few ways to tackle this problem and to ensure that when it comes down to troubleshooting, the process is easy and seamless. For starters, let’s add the sessionID to each log output. In our example, we don’t have sessionIDs set up, and to avoid bringing in another unnecessary dependency, we will be manually setting our req.sessionID values in order to further enrich our Winston logs. Navigate to our pets service, pets.js, and we will get started adding some sessionIDs.

req.sessionID = '123456';

Add this under each one of the 3 API endpoint functions in pets.js. At the end, the services will be posted again in full and you can ensure everything has been copied down correctly. Let’s do the same for our buyers service.

req.sessionID = '123456';

Add the same statement under each of the 2 API endpoint functions in buyers.js. We have now manually set the sessionID value for every single one of our endpoints. In order to incorporate the sessionID into our logs, we could technically inject it as a string and parse it together with the rest of the information that we are already sending to the log file. Winston, however, offers a much more elegant, scalable solution. We can modify our logger_service.js file to incorporate sessionIDs as meta information. Firstly, we need to ensure our transport is capable of dealing with being passed an object. Update the format block in logger_service.js with the following:

format: winston.format.printf((info) => {
    //Here is our custom message
    let message = `${timeStamp()} | ${info.level} |  ${info.message} | From: ${service}`;

    if (info.obj !== undefined) { //THIS LINE
        message = `${JSON.stringify(info.obj)} | ${timeStamp()} | ${info.level} |  ${info.message} | From: ${service}`; //THIS LINE
        } //THIS LINE

        return message;
      }),

Now, when we make our typical logger.info or logger.error calls, we can attach the sessionID as an extra meta tag. We are going to stringify it to make it work properly with the formatter we set up. We have also made sure that if we don’t want to pass any meta information, we don’t have to; our custom logger can automatically deal with this. All that is left to do is make sure we are attaching the sessionID to our log methods within our services. Starting with pets.js, let’s add an extra parameter to each of the logger calls. This includes the logger.error() call we have made too. Since we have our manual sessionIDs set up already, it’s just a question of passing the object.

logger.info(preparedLog , { sessionID:  req.sessionID } );
logger.error(`Pet id not found. Try a different ID number`, { sessionID: req.sessionID });

Let’s do the same for our buyers.js service. Remember, we should have the manual sessionIDs set up for this service already.

logger.info("Returning list of potential buyers...", { sessionID:  req.sessionID });
logger.error("Could not find the buyerId", { sessionID:  req.sessionID });

As usual, we will run some curl requests to test out our new log format. Make sure you are running fresh instances of both services before continuing; we don’t want to accidentally run our old logger or old services.

curl -i --request GET localhost:8082/buyers
curl -i --request POST --header "Content-Type: application/json" --data '{"petId": 1, "buyerId": 1}' localhost:8082/assign
curl -i --request POST --header "Content-Type: application/json" --data '{"petId": 1, "buyerId": 9}' localhost:8082/assign

Our new log file should look something like this:

Sat, 18 Jul 2020 22:48:40 GMT | info |  Returning pets | From: pets
Sat, 18 Jul 2020 22:49:35 GMT | info |  Returning the list of possible attributes | From: pets
Sat, 18 Jul 2020 23:07:04 GMT | info |  Returning the list of possible attributes | From: pets
Sat, 18 Jul 2020 23:09:27 GMT | info |  Changing the following: attributes to 3,4 for the pet with the ID of: 1 | From: pets
Sat, 18 Jul 2020 23:14:09 GMT | error |  Pet id not found. Try a different ID number | From: pets
Sat, 18 Jul 2020 23:48:25 GMT | info |  Returning list of potential buyers... | From: buyers
Sat, 18 Jul 2020 23:50:40 GMT | info |  Matched a buyer with a pet. | From: buyers
{"sessionID":"123456"} | Sun, 19 Jul 2020 23:46:55 GMT | info |  Returning list of potential buyers... | From: buyers
{"sessionID":"123456"} | Sun, 19 Jul 2020 23:47:01 GMT | info |  Matched a buyer with a pet  | From: buyers
{"sessionID":"123456"} | Sun, 19 Jul 2020 23:47:04 GMT | error |  Could not find the buyerId | From: buyers

Perfect! We have added our sessionID to every single new log instance. When dealing with many users across many services, having the sessionID as a handy starting point for breaking down a service failure is definitely a best practice. Additionally, the way we have set up the Winston transport allows us to send any object as metadata. Depending on what your needs are, why stop there? There are many things we could easily attach to the log instances should we need them. A non-exhaustive list of things you could add includes:

  • Username
  • HTTP Header Values
  • IP address of client request
  • Name of function
  • External interactions (Such as a call to a database)

All we would have to do is attach these values within the same object containing the sessionID. For example:

logger.info(preparedLog , { sessionID:  req.sessionID, username: 'john176', httpCode: 'POST', clientIP: '192.168.1.1'  } );

This would produce the following log instance:

{"sessionID":"123456","username":"john176","httpCode":"POST","clientIP":"192.168.1.1"} | Mon, 20 Jul 2020 00:06:25 GMT | info |  Changing the following: attributes to 3,4 for the pet with the ID of: 1 | From: pets

Extrapolate this to a hypothetical system with many services and it’s easy to see why this is so useful: every request made across the entire architecture is properly labelled and carries its own context, as the sketch below illustrates.
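To make that concrete, here is a minimal, hypothetical sketch of what a third service could look like if it reused the same custom logger. The service name, route, port, and metadata fields ("orders", /orders, 8083, clientIP) are assumptions for illustration only and are not part of the example app:

//orders.js — a hypothetical third service reusing the same custom logger (sketch only)
const express = require("express");
const bodyParser = require("body-parser");

const Logger = require("./logger_service");
const logger = new Logger("orders"); //the service name shows up in the "From:" field

const app = express();
app.use(bodyParser.json());

//A simple POST route that attaches its own sessionID and metadata to every log instance
app.post("/orders", (req, res) => {
  req.sessionID = "123456"; //manual sessionID, as in the other services

  logger.info("Creating a new order", {
    sessionID: req.sessionID,
    clientIP: req.ip,
  });

  res.status(201).send({ status: "created" });
});

console.log("Orders service listening on port 8083");
app.listen(8083);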

Troubleshooting

If hundreds of requests were made to our services each day, all with different sessionIDs and meta objects, searching through our log file by hand would quickly become impractical. Isolating every log instance that matches a particular sessionID, for example, is hard if we can’t quickly query that information. Unfortunately, Winston doesn’t offer much support for querying custom-formatted logs; with a default Winston transport we could query logs by date, but sorting by sessionID would still be tricky. On the plus side, since we have a centralized log file, we can use the built-in search tools of an IDE or text editor. For example, to find all log instances with a sessionID of 123456, we can simply search for that value in VS Code with Cmd-F. For a more repeatable approach, we can also filter the file with a small script, as in the sketch below.
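As a quick example, here is a minimal Node.js sketch (a hypothetical helper script, not part of the tutorial’s services) that prints every line of allLogs.log containing a given sessionID. It only assumes the log format we set up, where the stringified meta object appears at the start of each entry:

//filter_logs.js — hypothetical helper: print all log lines for a given sessionID
const fs = require("fs");
const readline = require("readline");

const sessionID = process.argv[2]; //e.g. node filter_logs.js 123456

const rl = readline.createInterface({
  input: fs.createReadStream("./logs/allLogs.log"),
  crlfDelay: Infinity,
});

rl.on("line", (line) => {
  //Our formatter prefixes each entry with the stringified meta object,
  //so a simple substring match on the sessionID is enough here.
  if (line.includes(`"sessionID":"${sessionID}"`)) {
    console.log(line);
  }
});

Running node filter_logs.js 123456 would print only the entries that carry that sessionID, regardless of which service produced them.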

Versioning your Logs

When working with microservices, services may be swapped in and out at a fast pace, so it is imperative that you keep track of the current version of your application. Things can quickly go downhill once versions get tangled, and it should be stressed once again that organization and detail are key in a microservice architecture. We would therefore like our Winston logger to read an environment variable containing our current app version and inject it into every log instance. Navigate to the root directory of the project and type:

touch .env

Here is where we will store our current app version. Add the following to .env :

APP_VERSION=1.0

We will need to add the dotenv dependency so that we can use custom environment variables:

npm i dotenv

Now all we have to do is ensure our Winston transport is attaching the app version to each log instance. Navigate over to logger_service.js, and let’s begin by adding the dotenv dependency at the top of the file:

require('dotenv').config()

Now that we have access to the environment variable for the current version of the app, let’s make sure it gets added to each of our log instances. Replace the custom message section of the format function with the following code:

//Here is our custom message
let message = `App v-:${process.env.APP_VERSION} | ${timeStamp()} | ${info.level} |  ${info.message} | From: ${service}`;

if (info.obj !== undefined) { //THIS LINE
    message = `App v-:${process.env.APP_VERSION} |${JSON.stringify(info.obj)} | ${timeStamp()} | ${info.level} |  ${info.message} | From: ${service}`; //THIS LINE
} //THIS LINE

We have added the variable process.env.APP_VERSION. Note that you may need to restart your services (and, depending on your setup, the terminal or IDE they run from) so that the Node processes are reinitialized and pick up the new environment variable. Running a quick curl request, we can see that our log format has changed:

App v-:1.0 |{"sessionID":"123456"} | Mon, 20 Jul 2020 01:12:17 GMT | info |  Returning list of potential buyers... | From: buyers

And there we have it! A scalable logging mechanism for microservices that incorporates several best practices. Using the example app we built, adding more services only requires wiring up the logger and validating the logging output within each new service. Our log format is consistent and labelled for the different types of messages, and we still retain plenty of flexibility to change the logging output, such as the formatting of specific objects or even the file each service logs to. To make sure you are up to date, here is the final code for each of our files:

pets.js

const express = require("express");
const path = require("path");
const bodyParser = require("body-parser");

const Logger = require('./logger_service')
const logger = new Logger('pets')




const app = express();
app.use(bodyParser.json());

//Attributes that a pet can have, capped at 2. E.g. if a pet has [1,4] = Loyal & Intelligent.
const attributes = [
  { id: 1, attName: "Loyal" },
  { id: 2, attName:"Aggressive" },
  { id: 3, attName: "Affectionate" },
  { id: 4, attName: "Intelligent" },
  { id: 5, attName: "Courageous" },
  { id: 6, attName: "Gentle"}
];

//List of pets contained in the application, with all relevant information.
const pets = [
  {
    id: 1,
    displayName: "PeterParrot",
    attributes: [1, 4],
    img: "PeterParrot.jpg",
  },
  {
    id: 2,
    displayName: "Mink",
    attributes: [4, 5],
    img: "Mink.jpg",
  },
  {
    id: 3,
    displayName: "Rex",
    attributes: [2, 5],
    img: "Rex.jpg",
  },
  {
    id: 4,
    displayName: "Luther",
    attributes: [1, 3],
    img: "Luther.jpg",
  },
];
 

//A simple GET route that returns the list of all pets.
app.get("/pets", (req, res) => {
  req.sessionID = '123456';
  logger.info("Returning pets");
  res.send(pets);
});

//A simple GET route that returns the list of all attributes.
app.get("/attributes", (req, res) => {
  req.sessionID = '123456';
  logger.info("Returning the list of possible attributes");
  res.send(attributes);
});

//A simple POST route that allows the modification of a single pet "profile"
app.post("/pet/**", (req, res) => {
  req.sessionID = '123456';
  const petId = parseInt(req.params[0]);

 
  const petFound = pets.find((subject) => subject.id === petId);

  //Finding a pet according to his unique petID
  if (petFound) {
    for (let attribute in petFound) {
      if (req.body[attribute]) {
        //The attribute we are changing
        petFound[attribute] = req.body[attribute];
        const preparedLog = `Changing the following: ${attribute} to ${req.body[attribute]} for the pet with the ID of: ${petId}`;
        logger.info(preparedLog, { sessionID: req.sessionID });
      }
    }
    res
      .status(202)
      .header({ Location: `http://localhost:8081/pet/${petFound.id}` })
      .send(petFound);
  } else {
    logger.error(`Pet id not found. Try a different ID number`, { sessionID: req.sessionID });
    res.status(404).send();
  }
});

app.use("/img", express.static(path.join(__dirname, "img")));

console.log(`Pets service listening on port 8081`);
app.listen(8081);

buyers.js

const express = require("express");
const request = require("request");
const path = require("path");
const bodyParser = require("body-parser");

const Logger = require('./logger_service')
const logger = new Logger('buyers');


const app = express();


app.use(bodyParser.json());

//The base URL of the pets service, which links the two services together
const petsService = "http://localhost:8081";

//List of buyers, with the attribute they are most looking for in their pet.
const buyers = [
  {
    id: 1,
    buyerName: "John Doe",
    desiredAttribute: ["Loyal"],
    matchedPet: 0,
  },
  {
    id: 2,
    buyerName: "Adam Apple",
    desiredAttribute: ["Courageous"],
    matchedPet: 0,
  },
  {
    id: 3,
    buyerName: "Sally Smith",
    desiredAttribute: ["Aggressive"],
    matchedPet: 0,
  },
];

//A simple GET route that returns the list of buyers
app.get("/buyers", (req, res) => {
  req.sessionID = '123456';
  logger.info("Returning list of potential buyers...", { sessionID:  req.sessionID });
  res.send(buyers);
});

//A simple POST route that assigns a pet according to the attribute that the buyer is looking for.
app.post("/assign", (req, res) => {
  req.sessionID = '123456';
  request.post(
    {
      headers: { "content-type": "application/json" },
      url: `${petsService}/pet/${req.body.petId}`,
      body: `{
          "busy": true
      }`,
    },
    (err, petResponse, body) => {
      if (!err) {


        //Parsing the integer, then trying to find a match with buyer id
        const buyerId = parseInt(req.body.buyerId);
       
        const buyer = buyers.find((subject) => subject.id === buyerId);
       
        if (buyer === undefined) { //THIS LINE
          logger.error("Could not find the buyerId", { sessionID:  req.sessionID }) //THIS LINE
          res.status(400).send("Error"); //THIS LINE
        }
       
        if (buyer !== undefined) { //THIS LINE
          buyer.matchedPet = req.body.petId; //THIS LINE
          logger.info("Matched a buyer with a pet " , { sessionID:  req.sessionID }) //THIS LINE
          res.status(202).send(buyer); //THIS LINE
        } //THIS LINE

      } else {
        res
          .status(400)
          .send({ problem: `There was a problem:  ${err}` });
      }
    }
  );
});

app.use("/img", express.static(path.join(__dirname, "img")));

console.log(`Buyers service listening on port 8082`);
app.listen(8082);

logger_service.js

//Bringing Winston in
const winston = require("winston");
require('dotenv').config()

//Simple function to return the current date and time
const timeStamp = () => {
  return new Date().toUTCString();
};


//Here we create our Custom Logger class
class CustomLogger {

 

  //We want to attach the service route to each instance, so when we call it from our services it gets attached to the message
  constructor(service) {

    this.log_data = null;
    this.service = service;

    const logger = winston.createLogger({
      transports: [
        //Here we declare the winston transport, and assign it to our file: allLogs.log
        new winston.transports.File({
          filename: `./logs/allLogs.log`,
          metaKey:'meta'
        }),
      ],

      format: winston.format.printf((info) => {


        //Here is our custom message
        let message = `App v-:${process.env.APP_VERSION} | ${timeStamp()} | ${info.level} |  ${info.message} | From: ${service}`;

        if (info.obj !== undefined) { //THIS LINE
          message = `App v-:${process.env.APP_VERSION} |${JSON.stringify(info.obj)} | ${timeStamp()} | ${info.level} |  ${info.message} | From: ${service}`; //THIS LINE
        } //THIS LINE

        return message;
      }),
    });

    this.logger = logger;
  }

  setLogData(log_data) {
    this.log_data = log_data;
  }

  //JavaScript doesn't support method overloading, so each log level takes an optional
  //meta object. When obj is omitted, info.obj is undefined and the plain format is used.
  async info(message, obj) {
    this.logger.log("info", message, {
      obj,
    });
  }

  async debug(message, obj) {
    this.logger.log("debug", message, {
      obj,
    });
  }

  async error(message, obj) {
    this.logger.log("error", message, {
      obj,
    });
  }
}

module.exports = CustomLogger;