We’re Making Our Debut In Cybersecurity with Snowbit

2021 was a crazy year, to say the least. Not only did we welcome our 2,000th customer, we announced our Series B AND Series C funding rounds, and on top of that, we launched Streama© – our in-stream data analytics pipeline.

But this year, we’re going to top that!

We’re eager to share that we are venturing into cybersecurity with the launch of Snowbit! This new venture will focus on helping cloud-native companies comprehensively manage the security of their environments.

As you know, observability and security are deeply intertwined and critical to the seamless operation of cloud environments. After becoming a full-stack observability player with the addition of metrics and tracing, it was natural for us to delve deeper into cybersecurity.

So what are we trying to solve?

Today we are witnessing accelerated cybersecurity risks, driven by the explosion in online activity since the onset of the pandemic. The acute global scarcity of cybersecurity talent has aggravated the situation, as most organizations are unlikely to have adequately staffed in-house security teams over the medium term: such teams are simply too expensive to build, and too difficult to hire for and keep up to date.

As Navdeep Mantakala, Co-founder of Snowbit says, “Rapidly accelerating cyberthreats are leaving many organizations exposed and unable to effectively deal with security challenges as they arise. Snowbit aims to address fundamental security-related challenges faced today including growing cloud complexity, increasing sophistication of attacks, lack of in-house cybersecurity expertise, and the overhead of managing multiple point security solutions.”

Also adding to the challenge is the increasing use of the cloud, both multi-provider infrastructure and SaaS, which is dramatically broadening the attack surface and its complexity. Relying on multiple point solutions to address specific use cases only increases the operational overhead.

How are we solving it?

Snowbit’s Managed Extended Detection and Response (MxDR) incorporates a SaaS platform and expert services. The platform gives organizations a comprehensive view of their cloud environment’s security and compliance (CIS, NIST, SOC, PCI, ISO, HIPAA). 

The Snowbit team will work to expand the existing capabilities of the Coralogix platform so that all data can be used to identify abnormal activity, misconfigurations, network issues, and vulnerabilities. This is rooted in the idea that every log can and should be a security log. Furthermore, the platform will automate threat detection and incident response via machine learning, an extensive set of pre-configured rules, alerts, dashboards, and more.

The MxDR offering is backed by a team of security analysts, researchers, and DFIR professionals stationed at Snowbit’s 24×7 Security Resource Center. From there, they provide guided responses that enable organizations to respond more decisively to threats detected in their environment.

“Observability forms the bedrock of cybersecurity, and as a result, Snowbit is strategic for Coralogix as it enables us to offer a powerful integrated observability and security proposition to unlock the value of data correlation,” said Ariel Assaraf, CEO of Coralogix. “Snowbit’s platform and services enable organizations to overcome challenges of cybersecurity talent and disparate tools to more effectively secure their environments.”

Our vision with Snowbit is to empower organizations across the globe to quickly, efficiently, and cost-effectively secure themselves against omnipresent and growing cyber risks. To enable this, Snowbit is looking to offer the broadest cloud-native managed detection and response offering available.

Make sure to sign up for updates so you can get notified once Snowbit launches. 

How Biden’s Executive Order on Improving Cybersecurity Will Impact Your Systems

President Joe Biden recently signed an executive order making adherence to cybersecurity standards a legal requirement for federal departments and agencies.

The move was not a surprise. It comes after a string of high-profile cyber-attacks and data breaches in 2020 and 2021. The frequency and scale of these events exposed a clear culture of lax cybersecurity practices throughout both the public and private sectors.

President Biden’s order brings into law many principles long espoused by cybersecurity advocacy groups, such as the National Institute of Standards and Technology (NIST)’s Five Functions. It is the latest step in a trend towards greater transparency and regulation of technology in the US.

The Executive Order on Improving the Nation’s Cybersecurity puts in place safeguards that have until now been lacking or non-existent. While regulations are only legally binding for public organizations (and their suppliers), many see it as a foreshadowing of further regulation and scrutiny of cybersecurity in the private sector.

Although private businesses are not directly impacted, the White House sent a memo to corporate leaders urging them to act as though the regulations were legally binding. It’s clear that businesses must take notice of Biden’s drive to safeguard US national infrastructure against cyber threats.

What’s in the Executive Order on Improving the Nation’s Cybersecurity?

The order spans many sections and covers a range of issues, but several stand out as likely to become relevant to the private sector.

Chief among these is a requirement for IT and OT providers who supply government and public bodies to store and curate data in accordance with new regulations. They must also report any potential incidents and cooperate with any government operation to combat a cyber threat.

The order also implies future changes for secure software development, with the private sector encouraged to develop standards and display labels confirming their products’ security and adherence to regulatory standards. Some also theorize that the government-only mandates for two-factor authentication, encryption, and cloud security could soon be extended to private organizations.

The key takeaway for businesses is that, whether it’s next year or a decade from now, it’s likely they’ll be required by law to maintain secure systems. If your security, logging, or systems observability is lacking, Biden’s executive order could be your last warning to get them up to scratch before regulations become legally binding.

How does this affect my systems?

Many enterprises are acting as though the executive order is legally binding. This is in no small part due to the White House’s memo urging businesses to do so. A common view is that it won’t be long before regulations outlined in the EO are expanded beyond government.

For suppliers to the government, any laws passed following Biden’s order immediately apply. This even extends to IT/OT providers whose own customers include government bodies. In short, if any part of your systems handles government data, you’ll be legally required to secure it according to the regulatory standards.

Data logging and storage regulations

Logging and storage are a key focal point of the EO. Compliant businesses will have system logs properly collected, maintained, and ready for access should they be required as part of an intelligence or security investigation.

This move is to enhance federal abilities to investigate and remediate threats, and covers both internal network logs and logging data from 3rd party connections. Logs will have to, by law, be available immediately on request. Fortunately, many end-to-end logging platforms make compliance both intuitive and cost-effective.

System visibility requirements

Under the EO, businesses will be required to share system logs and monitoring data when requested. While there aren’t currently legal mandates outlining which data this includes, a thorough and holistic view of your systems will be required during any investigation.

With the order itself stating that “recommendations on requirements for logging events and retaining other relevant data” are soon to come, and shall include “the types of logs to be maintained, the time periods to retain the logs and other relevant data, the time periods for agencies to enable recommended logging and security requirements, and how to protect logs”, it’s clear that future cybersecurity legislation won’t be vague. Compliance requirements, wherever they’re applied, will be specific.

In the near future, businesses found to have critical system visibility blind spots could face significant legal ramifications, especially if those blind spots become an exploited vulnerability in a national cybercrime or cybersecurity incident.

The legal onus will soon be on businesses to ensure their systems don’t contain invisible back doors into the wider national infrastructure. Your observability platform must provide full system visibility.

Secure services

The EO also included suggestions for software and service providers to create a framework for advertising security compliance as a marketable selling point.

While this mainly serves to create a competitive drive to develop secure software, it also encourages businesses to be scrupulous about the 3rd-party software and platforms they engage.

In the not-too-distant future, businesses utilizing non-compliant or insecure software or services will likely face legal consequences. Again, the ramifications will be greater should these insecure components be found to have enabled a successful cyberattack. Moving forward, businesses need to subject the 3rd-party services and software they deploy to unprecedented levels of scrutiny.

Security should always be the primary concern. While this should have been the case anyway, the legal framework set out by Biden’s executive order means that investing in only the most secure 3rd party tools and platforms could soon be a compliance requirement.

Why now?

The executive order didn’t come out of the blue. In the last couple of years, there have been several high-profile, incredibly damaging cyberattacks on government IT suppliers and critical national infrastructure.

Colonial Pipeline Ransomware Attack

The executive order was undoubtedly prompted by the Colonial Pipeline ransomware attack. On May 7th, 2021, ransomware created by the hacker group DarkSide compromised critical systems operated by the Colonial Pipeline Company. The attack led to Colonial Pipeline paying $4.4 million in ransom, and the subsequent pipeline shutdown and period of slowed operation caused an emergency fuel shortage declaration in 17 states.

SolarWinds Supply Chain Attack

The Colonial Pipeline ransomware attack was just the latest cybercrime event with a national impact. In December 2020, SolarWinds, an IT supplier with government customers across multiple executive branches and military/intelligence services, had its system security compromised via an exploitable update.

This ‘supply chain attack’ deployed trojans into SolarWinds customers’ systems through the update. The subsequent vulnerabilities opened a backdoor entrance into many highly classified government databases, including Treasury email traffic.

Why is it necessary?

While the damage of the Colonial Pipeline incident can be measured in dollars, the extent of the SolarWinds compromise has not yet been quantified. Some analysts believe the responsible groups could have been spying on classified communications for months. SolarWinds also had significant private sector customers, including Fortune 500 companies and universities, many of which could have been breached and still be unaware.

Again, these incidents are the latest in several decades marked by increasingly severe cyberattacks. Unless action is taken, instances of cybercrime that threaten national security will become not only more commonplace but more damaging.

Cybersecurity: An unprecedented national concern

Cybercrime is a unique threat. A single actor could potentially cause trillions of dollars in damages (assuming their goal is financial and not something more sinister). What’s more, the list of possible motivations for cybercriminals is far wider than for conventional attackers.

Whereas a state or non-state actor threatening US interests with a physical attack is usually politically or financially motivated (thus easier to predict), there have been many instances of ‘troll hackers’ targeting organizations for no reason other than to cause chaos.

When you factor this in with the constantly evolving global technical ecosystem, the lack of regulation looks increasingly reckless. The threat of domestic terrorism is seen as real enough to warrant tight regulation of air travel (for example). Biden’s executive order is a necessary step towards cybercrime being treated as the equally valid threat it is.

Cybersecurity: A necessary investment long before Biden’s EO

Biden’s EO has shaken up how both the government and private sector are approaching cybersecurity. However, as the executive order itself and the events that preceded it prove, it’s a conversation that should have been happening much sooner.

The key takeaway for businesses from the executive order should be that none of the stipulations and requirements are new. There is no guidance in the EO which cybersecurity advocacy groups haven’t been espousing for decades.

Security, visibility, logging, and data storage/maintenance should already be core focuses for your business’s IT teams. The security of your systems and IT infrastructure should be paramount, coming before any attempt to optimize them for productivity and revenue.

Fortunately, compliance with any regulations the EO leads to doesn’t have to be a challenge. 3rd party platforms such as Coralogix offer a complete, end-to-end observability and logging solution which keeps your systems both visible and secure.

What’s more, the optimized costs and enhanced functionality over other platforms mean compliance with Biden’s EO needn’t be an investment without returns.

The Value of Ingesting Firewall Logs

In this article, we are going to explore the process of ingesting logs into your data lake and monitoring them, and the value of importing your firewall logs into Coralogix. To understand the value of firewall logs, we must first understand what data is being exported.

A typical layer 3 firewall will export the source IP address, destination IP address, ports, and the action (for example, allow or deny). A layer 7 firewall will add more metadata to the logs, including application, user, location, and more.
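
To make this concrete, here is a hypothetical pair of log lines for the same blocked connection. The exact format varies by vendor, so treat these as illustrative only:

# Layer 3 view: action, protocol, IPs, and ports only
block,in,tcp,203.0.113.45,10.0.0.12,51234,443

# Layer 7 view: the same event enriched with application-level metadata
block,in,tcp,203.0.113.45,10.0.0.12,51234,443,app=HTTPS.BROWSER,user=jsmith,country=NL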

This data, when placed into an aggregator like Coralogix and visualized, gives you greater visibility into the traffic traversing your network, allowing you to detect anomalies and potential problems. A great example of this is detecting malware or data exfiltration to unknown destinations (more on this later).

Configuring Logstash

The first and most critical part of extracting the value of firewall logs is ingesting the correct data. When exporting logs from firewalls, Logstash is one method of ingestion. Coralogix also provides an agent for other Syslog services. This article assumes that you already have Logstash deployed and working. First, we will cover how to set up a configuration for ingestion. 

The recommended way to export firewall logs for ingestion into Logstash is using Syslog. Inside your Logstash conf.d folder, we are going to create our ingestion configuration files. 

To start with, we are going to enable Syslog with the following config:

01-syslog.conf

Note: We are using port 5140 rather than 514 for Syslog. This is because Logstash runs as an unprivileged user, meaning it does not have the privileges to bind to port 514 (ports below 1024 require root).
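
If your firewall can only send to the standard port 514, one common workaround is to redirect the privileged port to 5140 at the kernel level. This is a sketch assuming a Linux host running Logstash with iptables available:

# Redirect inbound Syslog traffic from privileged port 514 to Logstash on 5140
sudo iptables -t nat -A PREROUTING -p udp --dport 514 -j REDIRECT --to-port 5140
sudo iptables -t nat -A PREROUTING -p tcp --dport 514 -j REDIRECT --to-port 5140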

# TCP Syslog Ingestion (Port 5140)
input {
  tcp {
    type => "syslog"
    port => 5140
  }
}

# UDP Syslog Ingestion (Port 5140)
input {
  udp {
    type => "syslog"
    port => 5140
  }
}
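
Before pointing the firewall at Logstash, it’s worth sanity-checking that the listener is up. One quick test, assuming netcat is installed and you run it from the Logstash host itself, is to push a fake Syslog line at the UDP input:

# Send a minimal RFC 3164-style message to the UDP listener on port 5140
echo '<134>Jan  1 12:00:00 fw01 test: hello from netcat' | nc -u -w1 localhost 5140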

At this point, we need to apply a filter to configure what happens with the data passed via Syslog. You can find several pre-made filters for Logstash online, as it’s a popular solution. The following example is a configuration that would be used with a PFsense firewall.

In this configuration, we are filtering Syslog messages from two PFsense firewalls: the first at 10.10.10.1 and the second at 10.10.10.2. You will see that we match on the Syslog host and add tags that will be used to index the data in Coralogix. We will cover GROK filters in more detail in the next section.

filter {
  if [type] == "syslog" {
    if [host] =~ /10.10.10.1/ {
      mutate {
        add_tag => ["pfsense-primary", "Ready"]
      }
    }
    if [host] =~ /10.10.10.2/ {
      mutate {
        add_tag => ["pfsense-backup", "Ready"]
      }
    }
    if "Ready" not in [tags] {
      mutate {
        add_tag => [ "syslog" ]
      }
    }
  }
}

filter {
  if [type] == "syslog" {
    mutate {
      remove_tag => "Ready"
    }
  }
}

filter {
  if "syslog" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      locale => "en"
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
        replace => [ "@message", "%{syslog_message}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
    }
  }
}

Next, we create an output. The output defines where Logstash should export the ingested logs. In this case, we are shipping them to Coralogix.

100-output.conf

output {
  coralogix {
    config_params => {
      "PRIVATE_KEY" => "YOUR_PRIVATE_KEY"
      "APP_NAME" => "APP_NAME"
      "SUB_SYSTEM" => "SUB_NAME"
    }
    log_key_name => "message"
    timestamp_key_name => "@timestamp"
    is_json => true
  }
}
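
With all three files in place, it’s sensible to validate the combined pipeline before restarting the service. The binary path below assumes a standard package install, so adjust it for your environment:

# Parse and validate every pipeline file in conf.d, then exit without starting
/usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/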

Configuring Your Firewall for Syslog

Each firewall interface has a slightly different way to configure log forwarding, but they all follow a similar pattern. We are going to run through configuring log forwarding on a PFsense firewall as an example of what to expect. Log forwarding is extremely common, and most firewall vendors provide configuration details in their manuals.

On a PFsense firewall, the first step is to log into the firewall management interface. PFsense has a web interface that is accessible on the LAN interface. Once logged in, your screen should look like this:

pfsense web interface

To set up log forwarding, select Status and then System Logs:

configure log forwarding firewall

On the system logs page, go to “Settings”. On the “Settings” page, scroll to the bottom and tick “Enable Remote Logging”:

remote logging firewall

Here we can configure how logs will be shipped to Logstash:

ship firewall logs to logstash

In the above configuration, we are shipping all logs to the server 192.168.128.5 on port 5140, matching the configuration we deployed to our Logstash server. Once this is enabled, save, and your logs should start shipping to Logstash.

You may also need to configure what happens with individual firewall rules. For instance, do you wish to ship all firewall rule events or just block events? On a PFsense firewall, this is configured on each rule via the option ‘Log – Log packets that are handled by this rule’:

extra configuration options pfsense

Visualize Your Data in Kibana 

Now that we have data flowing from our firewall to Logstash and into Coralogix, it’s time to add some filters that enable us to transform data so that we can visualize and create patterns in Kibana. 

Our first step is to address log parsing, and then we can create some Kibana index patterns to visualize the data. Filters can be made with GROK; however, Coralogix has a log parsing feature that lets you create filters to manipulate the data you ingest much more quickly and easily.

Log Parsing

Firstly, let’s uncover what log parsing rules are. A log parsing rule enables Coralogix to take unstructured text and give it structure. When we complete this conversion, we can then control how the data is structured. For example, we can extract source IP addresses, destination IP addresses, the time, and the action. With this data, we can then create dashboards like the one below:

log parsing dashboard

The value of being able to process, parse, and restructure firewall log data is that you can design data structures completely customized to your requirements and business processes. Firewalls generate a large number of logs containing a wealth of data, but the sheer volume means that it’s impossible to review your logs manually.

Using a platform like Coralogix enables security engineers to quickly visualize data and ascertain what is happening on their network. It’s also important to retain this data so that, in the event of a security incident, the security team has the data required to review what happened and remediate any concerns.

Let’s quickly explore how easy it is to create a rule in Coralogix. Once inside Coralogix, click on Settings, and then Rules:

log parsing rules coralogix

You will be presented with the different rule types that can be designed:

logs parsing rules types

You will note it’s easy to select the right rule type for whatever manipulation you are looking to apply. A common rule is to replace an incoming log: you might want to match a pattern and then set the severity of the log to critical, for example. If we take a look at the replace rule, we can see just how easy it is to create.

We give the rule a name:

rule naming coralogix

We create a matching rule, to locate the logs we wish to manipulate:

rule matcher coralogix

Finally, we design what is going to be manipulated:

extract and manipulate firewall logs
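
As a concrete sketch of a replace rule, the patterns below are hypothetical and assume PFsense’s comma-separated filterlog format; they match any log containing a “,block,” field and prepend a marker that later severity mapping or alerting can key on:

Regex pattern
(.*,block,.*)

Replace pattern
severity=critical $1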

You can see just how easy Coralogix makes it to extract and manipulate your firewall logs. This enables you to create a robust data visualization and security alerting solution using the Coralogix platform.

It’s worth noting that GROK patterns can also be used in Logstash before the data hits Coralogix; Coralogix has a tutorial that runs through GROK patterns in more detail. That said, it often makes sense to simply forward all of your logs to Coralogix and create rules with the log parsing engine, as it’s much faster and far easier to maintain.

Creating Index Patterns In Kibana

Now that we have manipulated our data, we can start to visualize it in Kibana. An index pattern tells Kibana what Elasticsearch index to analyze. 

To create a pattern, you will need to jump over to the Coralogix portal and load up Kibana. In there, click on Index Patterns:

index patterns

An index pattern is created by typing the name of the index you wish to match; a wildcard can be used to match multiple indices for querying. For example, *:13168_newlogs* will catch any indices that contain 13168_newlogs. Click Next once you have matched your pattern.

creating index pattern step 1

Finally, we need to configure where Kibana will find the timestamp used for filtering records. This can be useful if you want to use metadata created by the firewall:

create index pattern coralogix

Now that you have the data configured, you can start to visualize your firewall logs in Kibana. We recommend checking out the Kibana dashboarding tutorial to explore what is possible with Kibana.

Conclusion

In this article, we have explored the value of ingesting firewall logs, uncovered the wealth of data these logs contain, and seen how they can be used to visualize what activities are taking place on our networks.

Building on what we have explored in this article, it should be clear that the more logs you consume and correlate, the greater the visibility. This can assist in identifying security breaches, incorrectly configured applications, bad user behavior, and badly configured networks, to name just a few of the benefits.

Coralogix’s tooling, like log parsing, enables you to ingest logs and simply transform them into meaningful data. Now that you are armed with the knowledge, start ingesting your firewall logs! You will be amazed by just how much you learn about your network.

Using Coralogix to Gain Insights From Your FortiGate Logs

FortiGate, a next-generation firewall from IT Cyber Security leaders Fortinet, provides the ultimate threat protection for businesses of all sizes. FortiGate helps you understand what is happening on your network, and informs you about certain network activities, such as the detection of a virus, a visit to an invalid website, an intrusion, a failed login attempt, and myriad others.

This post will show you how Coralogix can provide analytics and insights for your FortiGate logs.

FortiGate Logs

FortiGate log events are divided into types (Traffic Logs, Event Logs, Security Logs, etc.) and subtypes within each type; you can view the full documentation here. You may notice that FortiGate logs are structured in a Syslog format, with multiple key/value pairs forming textual logs.

First, you will need to parse the data into a JSON log format to enjoy the full extent of Coralogix’s capabilities and features. Then, using Coralogix alerts and dashboards, you can instantly diagnose problems, spot potential security threats, and get real-time notifications on any event that you might want to observe. Ultimately, this offers a better monitoring experience and more capability from your data with minimum effort.

There are two ways to parse the FortiGate logs: either on the integration side, or in the 3rd-party logging solution you are using, if it provides a parsing engine. If you are using Coralogix as your logging solution, you can use our advanced parsing engine to create a series of rules within the same parsing group that eventually form a JSON object from the key/value text logs. Let us review both options.

Via Logstash

In your logstash.conf, add the following KV filter:

    filter {
      kv {
        trim_value => "\""
        value_split => "="
        allow_duplicate_values => false
      }
    }

Note that the arguments “value_split” and “allow_duplicate_values” are not mandatory; I only added them here for reference (“value_split” defaults to “=” in any case).

Sample log
date=2019-05-10 time=11:37:47 logid="0000000013" type="traffic" subtype="forward" level="notice" vd="vdom1" eventtime=1557513467369913239 srcip=10.1.100.11 srcport=58012 srcintf="port12" srcintfrole="undefined" dstip=23.59.154.35 dstport=80 dstintf="port11" dstintfrole="undefined" srcuuid="ae28f494-5735-51e9-f247-d1d2ce663f4b" dstuuid="ae28f494-5735-51e9-f247-d1d2ce663f4b" poluuid="ccb269e0-5735-51e9-a218-a397dd08b7eb" sessionid=105048 proto=6 action="close" policyid=1 policytype="policy" service="HTTP" dstcountry="Canada" srccountry="Reserved" trandisp="snat" transip=172.16.200.2 transport=58012 appid=34050 app="HTTP.BROWSER_Firefox" appcat="Web.Client" apprisk="elevated" applist="g-default" duration=116 sentbyte=1188 rcvdbyte=1224 sentpkt=17 rcvdpkt=16 utmaction="allow" countapp=1 osname="Ubuntu" mastersrcmac="a2:e9:00:ec:40:01" srcmac="a2:e9:00:ec:40:01" srcserver=0 utmref=65500-742
Output
{
	"date": "2019-05-10",
	"time": "11:37:47",
	"logid": "0000000013",
	"type": "traffic",
	"subtype": "forward",
	"level": "notice",
	"vd": "vdom1",
	"eventtime": "1557513467369913239",
	"srcip": "10.1.100.11",
	"srcport": "58012",
	"srcintf": "port12",
	"srcintfrole": "undefined",
	"dstip": "23.59.154.35",
	"dstport": "80",
	"dstintf": "port11",
	"dstintfrole": "undefined",
	"srcuuid": "ae28f494-5735-51e9-f247-d1d2ce663f4b",
	"dstuuid": "ae28f494-5735-51e9-f247-d1d2ce663f4b",
	"poluuid": "ccb269e0-5735-51e9-a218-a397dd08b7eb",
	"sessionid": "105048",
	"proto": "6",
	"action": "close",
	"policyid": "1",
	"policytype": "policy",
	"service": "HTTP",
	"dstcountry": "Canada",
	"srccountry": "Reserved",
	"trandisp": "snat",
	"transip": "172.16.200.2",
	"transport": "58012",
	"appid": "34050",
	"app": "HTTP.BROWSER_Firefox",
	"appcat": "Web.Client",
	"apprisk": "elevated",
	"applist": "g-default",
	"duration": "116",
	"sentbyte": "1188",
	"rcvdbyte": "1224",
	"sentpkt": "17",
	"rcvdpkt": "16",
	"utmaction": "allow",
	"countapp": "1",
	"osname": "Ubuntu",
	"mastersrcmac": "a2:e9:00:ec:40:01",
	"srcmac": "a2:e9:00:ec:40:01",
	"srcserver": "0",
	"utmref": "65500-742"
}

Via Coralogix

In Settings –> Rules (available only for account admins), create a new group of rules with the following 3 regex-based replace rules. These rules should be applied consecutively (with an AND between them) on the FortiGate logs in order to format them as JSON. Don’t forget to add a rule matcher so that the parsing takes place only on your FortiGate data. Here are the rules:

  1. Regex pattern
    ([a-z0-9_-]+)=(?:")([^"]+)(?:")

    Replace pattern

    "$1":"$2",
  2. Regex pattern
    ([a-z0-9_-]+)=([0-9.-:]+|N/A)(?: |$)

    Replace pattern

    "$1":"$2",
  3. Regex pattern
    (.*),

    Replace pattern

    {$1}
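
To see how the three rules compose, here is an abbreviated fragment of the sample log from the Logstash section as it passes through each rule in turn:

Original:      type="traffic" srcip=10.1.100.11
After rule 1:  "type":"traffic", srcip=10.1.100.11
After rule 2:  "type":"traffic", "srcip":"10.1.100.11",
After rule 3:  {"type":"traffic", "srcip":"10.1.100.11"}

Rule 1 converts the quoted values, rule 2 converts the remaining unquoted numeric and IP values, and rule 3 wraps the result in braces while dropping the trailing comma.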

For the same sample log above, the result will be similar, and the log entry you get in Coralogix will be parsed as JSON.

FortiGate Dashboards

Here is an example FortiGate firewall overview dashboard we created using FortiGate data. The options are practically limitless, and you can create any visualization you can think of, as long as your logs contain the data you want to visualize. For more information on using Kibana, please visit our tutorial.

FortiGate firewall Overview

FortiGate Alerts

Coralogix user-defined alerts enable you to easily create any alert you have in mind, using complex queries and various condition heuristics. This lets you be more proactive with your FortiGate firewall data and get notified in real time of potential system threats, issues, and more. Here are some examples of alerts we created using typical FortiGate data.

The alert condition can be customized to fit your needs.

| Alert name | Description | Alert type | Query | Alert condition |
|---|---|---|---|---|
| FortiGate – new country deny action | New denied source IP | New Value | action:deny | Notify on a new value in the last 12H |
| FortiGate – more than usual deny action | More than usual access attempts with action denied | Standard | action:deny | More than usual |
| FortiGate – elevated risk ratio more than 30% | High apprisk ratio | Ratio | Q1 – apprisk:(elevated OR critical OR high); Q2 – _exists_:apprisk | Q1/Q2 > 0.3 in 30 min |
| FortiGate – unscanned transactions 2x compared to the previous hour | Double the unscanned transactions compared to the previous hour | Time relative | appcat:unscanned | Current/an hour ago ratio greater than 2x |
| FortiGate – critical risk from multiple countries | Alert if more than 3 unique destination countries with high/critical risk | Unique count | apprisk:(high OR critical) | Over 3 unique dst countries in the last 10 min |

To avoid noise from these alerts, Coralogix added a utility that lets you simulate how an alert would behave. At the end of the alert definition, click Verify Alert.

Need more help with FortiGate or any other log data? Click on the chat icon in the bottom right corner for quick advice from our logging experts.