This post will help you write effective Suricata rules to materially improve your security posture. We’ll begin with a breakdown of how a rule is constructed and then explore best practices, with examples, so you can catch as much malicious activity as possible with as few rules as possible.
Suricata is an open-source network intrusion detection system (NIDS) that provides real-time packet analysis and is part of the Coralogix STA solution. If you’re a Coralogix STA customer, be sure to also check out my earlier post on How to Modify an STA Suricata Rule.
Anatomy of Suricata Rules
Before diving into the different strategies for writing your best Suricata rules, let’s start off by dissecting an example Suricata Rule:
alert tcp $EXTERNAL_NET any -> 10.200.0.0/24 80 (msg:"WEB-IIS CodeRed v2 root.exe access"; flow:to_server,established; uricontent:"/root.exe"; nocase; classtype:web-application-attack; reference:url,www.cert.org/advisories/CA-2001-19.html; sid:1255; rev:7;)
alert: tells Suricata to report this behavior as an alert (it’s mandatory in rules created for the STA).
tcp: means that this rule will only apply to TCP traffic.
$EXTERNAL_NET: this is a variable defined in the Suricata configuration (suricata.yaml). By default, the variable HOME_NET is defined as any IP within these ranges: 192.168.0.0/16,10.0.0.0/8,172.16.0.0/12 and EXTERNAL_NET is defined as any IP outside of these ranges. You can specify IP addresses either by specifying a single IP like 10.200.0.0, an IP CIDR range like 192.168.0.0/16 or a list of IPs like [192.168.0.0/16,10.0.0.0/8]. Just note that spaces within the list are not allowed.
any: in this context, it means “from any source port”. Next comes the arrow ‘->’, which means “a connection to”. There is no ‘<-’ operator, but you can simply flip the arguments around the operator; you can also use the ‘<>’ operator to indicate that the connection direction is irrelevant for this rule. After the arrow come the destination IP range and the destination port. You can indicate a port range by using a colon, e.g. 0:1024 means ports 0 through 1024. Inside the round parentheses are directives that set the alert message, add metadata about the rule, and define additional checks.
msg: is a directive that simply sets the message that will be sent (to Coralogix in the STA case) when matching traffic is detected.
flow: is a directive that indicates whether the content we’re about to define as our signature needs to appear in the communication to the server (“to_server”) or to the client (“to_client”). This can be very useful if, for example, we’d like to detect the server response that indicates that it has been breached.
established: is a directive that will cause Suricata to limit its search for packets matching this signature to packets that are part of established connections only. This is useful to minimize the load on Suricata.
uricontent: is a directive that instructs Suricata to look for a certain text in the normalized HTTP URI. In this example, we’re looking for a URI that contains the text “/root.exe” (in more recent rule syntax, the same check is written as content combined with http_uri).
nocase: is a directive that indicates that we’d like Suricata to conduct a case-insensitive search.
classtype: is a metadata attribute that indicates which type of activity this rule detects.
reference: is a metadata attribute that links to another system for more information. In our example, the value url,<https://….> links to a URL on the Internet.
sid: is a metadata attribute that indicates the signature ID. If you are creating your own signature (even if you’re just replacing a built-in rule), use a value above 9,000,000 to prevent a collision with a pre-existing rule.
rev: is a directive that indicates the version of the rule.
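Putting these pieces together, a minimal custom rule of your own might look like this (the URI, message, and SID below are hypothetical examples, not a built-in rule):

# Hypothetical example - adjust the path, message and sid to your environment
alert http $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Example - suspicious admin path requested"; flow:to_server,established; content:"/admin/backup.php"; http_uri; nocase; classtype:web-application-attack; sid:9000001; rev:1;)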
There’s a lot more to learn about Suricata rules, which also support regular expressions (pcre), protocol-specific parsing (just like uricontent for HTTP), matching binary (non-textual) data written as hex byte values, and much more. If you’d like to know more, you can start here.
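For instance, a single rule can combine a regular expression on the normalized URI with a raw hex byte match on the HTTP request body. The path pattern, message, and SID here are assumptions made up for illustration:

# Hypothetical sketch: the "U" modifier runs the regex against the normalized
# URI, and the content between pipe characters matches the raw hex bytes
# 4d 5a ("MZ", the PE header magic) anywhere in the HTTP request body.
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"Example - executable upload to a random-looking path"; flow:to_server,established; pcre:"/\/uploads\/[a-f0-9]{32}\.exe$/U"; content:"|4d 5a|"; http_client_body; sid:9000002; rev:1;)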
Best Practices to Writing Effective Suricata Rules
- Target the vulnerability, not the exploit – Avoid writing rules for detecting a specific exploit kit because there are countless exploits for the same vulnerability and we can be sure that new ones are being written as you’re reading this. For example, many of the early signatures for detecting buffer overrun attacks looked like this:
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Buffer overrun detected."; content:"AAAAAAAAAAAAAA"; sid:9000010; rev:1;)
The reason, of course, is that to launch a successful buffer overrun attack, the attacker needs to fill the buffer of a certain variable and add his malicious payload at the end so that it gets executed. The characters he chooses to fill the buffer with are completely insignificant, and indeed, after such signatures appeared, many attack toolkits simply used a different letter or letters to fill the buffer and completely evaded this type of signature detection. A much better way is to detect these kinds of attacks by catching input that doesn’t match a field’s expected type and length, as sketched below.
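For example, rather than matching the filler characters, a rule along these lines flags any request where a field is far longer than it could ever legitimately be. The “username” parameter, the length limit of 64, and the SID are assumptions for illustration:

# Hypothetical sketch - flag an oversized "username" POST parameter instead of
# a specific filler pattern; the pcre "P" modifier inspects the request body.
alert http $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Oversized username parameter - possible buffer overrun attempt"; flow:to_server,established; content:"username="; http_client_body; pcre:"/username=[^&]{64,}/P"; sid:9000003; rev:1;)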
- Your peculiarity is your best asset, so use it – Every organization has things that make it unique. Many of these can be quite useful when you try to catch malicious activity in your organization – both external and internal. By using this deep internal knowledge to your advantage, you’ll essentially convert the problem from a technological one to a pure old-school intelligence problem, forcing attackers to have a much more intimate understanding of your organization in order to be able to hide their tracks effectively. These things can be technological in nature or based on your organization’s particular working habits and internal guidelines. Here are some examples:
- Typical Working Hours: Some organizations I worked at did not allow employees to work from home at all, and the majority of employees would have already left the office by 19:00. For similar organizations, it would make sense to set an alert to notify you of connections from the office after a certain hour. An attacker who installs malicious software in your organization would have to know that behavior and tune the malware to communicate with its Command & Control servers only during working hours, precisely when such communications would go unnoticed.
- Typical Browser: Suppose your organization has decided to use the Brave browser as its main browser, it gets installed on every new corporate laptop automatically, and you have removed the desktop shortcuts to the IE/Edge browser from all of your corporate laptops. If this is the case, any connection from the organization, whether to internal or external addresses, that uses a different browser such as IE/Edge should raise an alert.
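A sketch of such a check, assuming the User-Agent substrings below (“Trident” appears in Internet Explorer’s User-Agent string and “Edg/” in Chromium-based Edge’s; the messages and SIDs are made up):

# Hypothetical sketch - alert on IE/Edge User-Agent strings coming from inside
alert http $HOME_NET any -> any any (msg:"Unexpected IE browser in use"; flow:to_server,established; content:"Trident"; http_user_agent; sid:9000004; rev:1;)
alert http $HOME_NET any -> any any (msg:"Unexpected Edge browser in use"; flow:to_server,established; content:"Edg/"; http_user_agent; sid:9000005; rev:1;)

Note that this only covers traffic Suricata can parse as HTTP; for TLS connections the User-Agent isn’t visible without interception, so you’d need a different angle there.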
- IP Ranges based on Roles: If you can assign different IP ranges to different servers based on their role, for example all DB servers on 192.168.0.0/24, the web servers on 192.168.1.0/24, and so on, then it becomes possible and even easy to set up clever rules based on the expected behavior of each type of server. For example, database servers usually don’t initiate connections to other servers on their own, printers don’t try to connect to your domain controllers, etc.
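A sketch of such a rule, assuming the 192.168.0.0/24 database range from the example above (the message and SID are made up):

# Hypothetical sketch - flags:S matches the initial SYN packet, i.e. the DB
# server itself opening a new outbound connection, which it normally never does
alert tcp 192.168.0.0/24 any -> any any (msg:"Database server initiating an outbound connection"; flags:S; sid:9000006; rev:1;)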
- Unusual Connection Attempts: Many organizations use a public share on a file server to help their users share files between them, and use network-enabled printers to allow their users to print directly from their computers. That means that client computers should not connect to each other at all. Even if you have (wisely) chosen to block such access at the firewall or at the switch, the very attempt to connect from one client computer to another should raise an alert and be thoroughly investigated.
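A sketch of this idea, assuming a $CLIENT_NET address-group variable that you would define in suricata.yaml to cover your workstation ranges:

# Hypothetical sketch - $CLIENT_NET is a custom variable for workstation ranges
alert tcp $CLIENT_NET any -> $CLIENT_NET any (msg:"Client-to-client connection attempt"; flags:S; sid:9000007; rev:1;)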
- Uncommon Ports: Some organizations use a special library for communication optimization between services, so that all HTTP communication between servers uses a different port than the common ones (such as 80, 443, 8080, etc). In this case, it’s a good idea to create a rule that triggers on any communication over those normally common ports, since legitimate internal traffic should never use them.
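A sketch, assuming internal HTTP has been moved off the standard ports entirely (the port list, message, and SID are examples):

# Hypothetical sketch - any internal connection attempt to the standard web
# ports is suspicious if legitimate traffic was moved to a custom port.
alert tcp $HOME_NET any -> $HOME_NET [80,443,8080] (msg:"Connection attempt on a standard web port that should be unused internally"; flags:S; sid:9000008; rev:1;)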
- Honeytokens – On a battlefield like the Internet, where everyone can be just about anyone, deception works for defenders just as well as it does for attackers, if not better. Try tricks like renaming the built-in administrator account to a different, less attractive name, creating a new account named Administrator which you’ll never use, and then writing a Suricata rule that detects whether this user name, email, or password is ever used on the network. It would be next to impossible for attackers to notice that Suricata has detected their attempt to use the fake administrator account. Another example is to create fake products, customers, users, and credit card records in the database, and then matching Suricata rules for detecting them in the network traffic.
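A sketch of the account-based idea, assuming a made-up honeytoken string that never appears in legitimate traffic (this only catches it in cleartext protocols):

# Hypothetical sketch - "hnypot-admin" is a made-up honeytoken account name
alert tcp any any -> $HOME_NET any (msg:"Honeytoken account name seen on the wire"; content:"hnypot-admin"; nocase; sid:9000009; rev:1;)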
We hope you found this information helpful. For more information on the Coralogix STA, check out the latest features we recently released.