It feels like cybersecurity is dominating the newsfeeds, doesn’t it? There is a reason.
Cyberattacks and cybercrime have risen dramatically in the last five years. 2020 broke all records for data loss and the number of cyberattacks. Between 2019 and 2020, ransomware attacks alone rose by 62%, and in the same period the World Economic Forum identified cyberattacks and data theft as two of the biggest risks to the global economy.
Suffice it to say, the reason there’s a lot of chatter is that organizations are waking up to what cybersecurity professionals have been saying for years: cybersecurity should always be the top priority.
It’s not just the frequency that’s increased; the scale of attacks continues to grow too. From 2015 to 2020, total losses to cybercrime rose from around $1 billion to $4.2 billion. Of the 15 biggest data breaches in history, 7 occurred in the last three years. Cybercrime isn’t a threat that’s going away any time soon.
So, what’s to be done? If you’re one of the many businesses currently reassessing your cybersecurity policy, here’s what you can learn from some of the largest, most notorious, and most damaging cybersecurity incidents in history.
The first entry on our list is also the most recent. Through August and September of 2021, Russian tech powerhouse Yandex was hit by what’s thought to be the largest DDoS attack ever recorded.
Distributed denial-of-service (DDoS) attacks are one of the oldest tricks in the cybercriminal and hacker handbooks, and one of the simplest threats to understand: attackers flood your network with requests until your systems are overloaded, rendering them inoperable and disrupting your business.
According to Yandex, its servers were hit with almost 22 million requests per second (RPS). Thankfully for Yandex, and unusually for incidents like these, no user data was compromised (that we know of so far). DDoS attacks rarely end well for the businesses on the receiving end.
DDoS attacks cost small and medium-sized businesses in excess of $120k on average. For large enterprises, this figure is regularly over $2 million. DDoS attacks are nothing new, but they still account for a staggering amount of cybercrime.
DDoS attacks are targeted, and they need to be countered in real time; they aren’t something you can rely on your firewall to rebuff. This is why observability platforms are essential tools in any security architecture. Without full visibility and real-time analytics, it’s impossible to counter the waves of requests before they overwhelm your system.
With a modern observability platform, you can safeguard against DDoS activity. Your platform will flag suspicious activity as it happens, allowing you to isolate responsible clients and close connections, shutting off the compromised components or servers before the entire system is driven offline.
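To make that concrete, here’s a minimal sketch of the kind of per-client rate check an observability pipeline might run against incoming request logs. The threshold, window size, and blocking mechanism here are illustrative assumptions, not any particular platform’s API.

```python
import time
from collections import defaultdict

RPS_THRESHOLD = 1000      # illustrative per-client limit; tune to your real baseline
WINDOW_SECONDS = 1.0      # sliding window over which requests are counted

request_log = defaultdict(list)   # client_ip -> timestamps of recent requests
blocked_clients = set()

def record_request(client_ip, now=None):
    """Record one request; return False if the client should be cut off."""
    now = time.time() if now is None else now
    if client_ip in blocked_clients:
        return False
    # Drop timestamps that have fallen outside the sliding window.
    cutoff = now - WINDOW_SECONDS
    recent = [t for t in request_log[client_ip] if t >= cutoff]
    recent.append(now)
    request_log[client_ip] = recent
    if len(recent) > RPS_THRESHOLD:
        # Flag the client: close its connections before it saturates the system.
        blocked_clients.add(client_ip)
        return False
    return True
```

In practice this logic lives at the network edge and correlates far more signal than raw request counts, but the principle is the same: measure per-client behavior in real time and cut off outliers before they saturate the system.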
The Melissa virus was one of the first large-scale cybersecurity incidents to receive international press coverage.
The trojan malware embedded itself in Microsoft Outlook and spread via email inside infected Word documents. After its release in March 1999, Melissa spread until it had caused an estimated $80 million in damages.
It was the first widespread example of a format we’re now all familiar with. Users would receive an email from a known contact. The email would contain a document that, when opened, would install Melissa on the new machine (and forward itself to contacts in the new user’s Outlook address book).
Melissa’s activity created so much internal traffic within enterprise-level networks that it took many offline. Those impacted included Microsoft and the United States Marine Corps. It’s difficult to pinpoint the exact number of systems disrupted, but it was a wake-up call during the early years of digital adoption: cybersecurity matters.
Despite the Melissa incident happening over 20 years ago, email-spread trojans and malware remain a threat. Raising awareness can only cover so many bases; there will always be one or two people who fall for a spam email.
Not to mention that cybercriminals are always growing more sophisticated. Every so often a new strain will slip through the spam filter. In 2019, Business Email Compromise (BEC) attacks cost US businesses $1.7 billion in losses.
It is possible to safeguard against human error and an evolving threat landscape, however. By setting up Secure Email Gateways (SEGs) and monitoring them in real time, engineers are alerted to suspicious activity before it lands in the inbox. Catching trojans and malware before they embed themselves in your systems is much easier than removing them once they’re in.
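As a simplified illustration of the gateway-side check, the sketch below inspects an inbound message for the kind of attachment Melissa rode in on: an Office document capable of carrying macros. The extension list is an assumption made for the example; real SEGs combine content inspection, sandboxing, and sender reputation data.

```python
import email

# Extensions that can carry executable macros (illustrative, not exhaustive).
SUSPICIOUS_EXTENSIONS = {".doc", ".docm", ".xlsm", ".pptm"}

def attachments_to_quarantine(raw_message: bytes) -> list:
    """Return the filenames of attachments that should be held for inspection."""
    message = email.message_from_bytes(raw_message)
    flagged = []
    for part in message.walk():
        filename = part.get_filename()
        if filename and any(filename.lower().endswith(ext)
                            for ext in SUSPICIOUS_EXTENSIONS):
            flagged.append(filename)
    return flagged
```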
The 2010s saw many high-profile cybersecurity incidents of all kinds. With more of us than ever uploading our personal details to businesses’ data centers, data breaches became a particular focus. The 2013 Adobe breach was one of the largest of the decade.
It’s easy to see why the breach brought Adobe so much negative press. In October 2013, the details of over 38 million Adobe users were stolen, including almost 3 million credit card numbers. To call it a PR disaster for Adobe would be an understatement.
Adobe ended up facing a class action lawsuit, which it settled for an undisclosed amount; it’s known the company paid at least $1.2 million in legal fees alone. The breach led Adobe to completely restructure its approach to cybersecurity. For many affected customers, however, it was too little, too late.
The early 2010s were a time of mass migration from on-site infrastructure to the cloud, and it was this process that the Adobe hackers exploited. They targeted a backup server that was due to be decommissioned, and from there lifted around 40GB of data.
A key reason the breach was so damning for Adobe was that it could have been avoided. The vulnerable server still existed as part of the system, yet it had become a blind spot in Adobe’s security architecture. The solution Adobe relied on lacked full system visibility, so the hackers were free to operate on the soon-to-be-disconnected server completely undetected.
In our landscape of remote servers and as-a-Service cloud platforms, security systems need constant observability over all components in your system, even inactive ones. Nothing in your system can be considered isolated; all of it is reachable by a sufficiently determined hacker.
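One blunt but effective check falls out of this: anything on your decommission list should be verified as genuinely unreachable. The sketch below, using only Python’s standard library, probes a list of supposedly retired hosts for common open ports; the hostnames and ports are placeholders standing in for your own inventory.

```python
import socket

# Hypothetical inventory: servers believed to be decommissioned.
RETIRED_HOSTS = ["backup-01.internal.example.com", "legacy-db.internal.example.com"]
PORTS_TO_CHECK = [22, 80, 443, 3389]

def still_reachable(host, port, timeout=2.0):
    """Return True if the host accepts TCP connections on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in RETIRED_HOSTS:
    open_ports = [p for p in PORTS_TO_CHECK if still_reachable(host, p)]
    if open_ports:
        # A "decommissioned" server answering on any port is a live attack surface.
        print(f"ALERT: {host} is still reachable on ports {open_ports}")
```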
The Marriott International breach is a lesson in why it’s vital to regularly reassess your cybersecurity. For four years, Marriott’s systems were accessed through an undetected, unprotected back door. How did this happen? When Marriott acquired the smaller Starwood Hotels in 2016, it failed to ensure Starwood’s IT infrastructure was up to its own standards.
Fast forward to 2018, and Marriott found itself having to report that as many as 339 million guest records had been leaked. While Marriott responded appropriately, reporting the breach to its customers and the relevant UK authorities immediately, its failure to adequately secure its systems landed it with an £18.4m fine ($24.8m US).
Marriott made a crucial mistake when it acquired Starwood Hotels. Instead of migrating Starwood’s infrastructure over to its own, it allowed the smaller company to keep using its existing (insecure) systems. As soon as those systems were connected to Marriott’s main infrastructure, the door was open to the hackers already inside Starwood’s servers.
The simple fact of the Marriott breach, and why the company received such a hefty fine, is this: the breach could have been avoided. Hackers shouldn’t have been able to operate inside Marriott’s systems for the two years following the acquisition.
The Marriott ecosystem wasn’t unique. Many enterprise IT ecosystems are made up of the interlinked infrastructures of businesses under the same company umbrella. If you lack visibility over the other systems in that wider infrastructure, your own are never really secure.
With an observability and visibility platform as part of your security solution, breaches like Marriott’s can be safeguarded against. Automated discovery and ML/AI-driven alerting ensure that, when your infrastructure gains a new segment, any suspicious activity or exploited weakness is highlighted immediately.
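A trivial version of that discovery step is an inventory diff: compare what’s actually answering on the network with what’s on the books, and alert on anything unknown. The inputs below are stand-ins for whatever your scanner and asset database actually provide.

```python
def diff_inventory(discovered, known):
    """Split discovered hosts into unknown and missing assets."""
    return {
        "unknown": discovered - known,   # on the network, not in the books: investigate
        "missing": known - discovered,   # in the books, not responding: verify or retire
    }

# Illustrative inputs; in practice these come from a scanner and an asset database.
discovered_hosts = {"10.0.1.5", "10.0.1.9", "10.0.2.44"}
known_assets = {"10.0.1.5", "10.0.1.9"}

report = diff_inventory(discovered_hosts, known_assets)
for host in report["unknown"]:
    print(f"ALERT: unrecognized host {host} has joined the network")
```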
Rounding off our list is perhaps the most widely covered cybersecurity incident of the last decade. In May 2021, American oil pipeline and fuel supplier Colonial Pipeline Company was hit with one of the largest-scale ransomware attacks in history. The attack targeted computerized equipment, and the entire pipeline halted operations to keep the attack contained. Not only was the financial damage to the US economy astronomical, but Colonial Pipeline also confirmed it paid nearly $5m in ransom.
The group responsible, DarkSide, managed to enter Colonial’s systems by exploiting an unused company VPN account with a compromised password. How this password was acquired is still unknown. The account in question was considered inactive by Colonial’s teams but could still access internal systems (as evidenced by the success of the attack).
How much financial damage the incident caused, both to Colonial itself and to the wider US economy, is still being calculated. What took no time to figure out was that even a company as large, and as critical to national infrastructure, as Colonial Pipeline is vulnerable to cyberattack.
The Colonial Pipeline breach made clear just how far both the private and public sectors still have to go when it comes to cybersecurity. The attack is considered by some to have been the impetus behind Joe Biden’s executive order mandating high security standards across US government services.
Ultimately, yes, the attack could have been avoided. The group responsible gained entry with a single password, and once again remote access was the weak point the hackers exploited. If Colonial had taken a more robust, prioritized approach to securing its systems, the suspicious activity from the supposedly inactive account would have triggered an alert.
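An alert like that doesn’t require exotic tooling. As a sketch, the check below flags any successful login on an account that has been quiet for more than 90 days; the threshold and the shape of the login history are assumptions made for the example.

```python
from datetime import datetime, timedelta

DORMANCY_THRESHOLD = timedelta(days=90)  # assumed cutoff for "inactive"

def flag_dormant_login(username, login_time, last_seen):
    """Return True (and alert) if a dormant account has just authenticated."""
    previous = last_seen.get(username)
    dormant = previous is None or (login_time - previous) > DORMANCY_THRESHOLD
    last_seen[username] = login_time
    if dormant:
        print(f"ALERT: login on dormant account '{username}' at {login_time}")
    return dormant

# Example: a hypothetical account untouched since January authenticating in May.
history = {"vpn-svc-legacy": datetime(2021, 1, 10)}
flag_dormant_login("vpn-svc-legacy", datetime(2021, 5, 6), history)
```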
The attack is especially alarming when you consider how much of the US infrastructure relies on Colonial lines. The shortages from the brief switch-off caused average gas prices to rise above $3/gallon for the first time since 2014.
The lessons from these examples are clear. Common themes run throughout, and avoiding these perfect storms of system vulnerability isn’t as difficult as it seems.
Chiefly, it’s important to build a robust security culture across your entire staff, non-IT personnel included. In almost every example, it was a lackluster or short-sighted approach to cybersecurity, whether from those responsible for it or from staff ignoring warnings about spam, that led to an exploited vulnerability.
The other main lesson is that the technology you rely on needs to be adaptive. It’s not enough to rely on a library of already-known viruses and malware. You need a security system that can self-update and remain on top of the ever-evolving cyber threat ecosystem. Fortunately, many modern platforms can harness AI, machine learning, and cloud capabilities to automate this process, meaning you’re never using yesterday’s security to safeguard against tomorrow’s threats.
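The difference between static and adaptive detection is easy to show in miniature. Instead of matching against a fixed signature list, an adaptive check learns a rolling baseline from live data and flags deviations from it. The sketch below uses a simple mean-and-standard-deviation rule; production platforms use far richer models, but the self-updating baseline is the essential idea.

```python
from collections import deque
from statistics import mean, stdev

class RollingBaseline:
    """Flag values that deviate sharply from a continuously updated baseline."""

    def __init__(self, window=100, sigmas=3.0):
        self.history = deque(maxlen=window)  # baseline updates as data arrives
        self.sigmas = sigmas

    def is_anomalous(self, value):
        anomalous = False
        if len(self.history) >= 10:  # need some history before judging
            mu, sd = mean(self.history), stdev(self.history)
            anomalous = sd > 0 and abs(value - mu) > self.sigmas * sd
        self.history.append(value)  # the baseline keeps learning either way
        return anomalous

baseline = RollingBaseline()
for rate in [100, 105, 98, 102, 97, 101, 99, 103, 100, 104, 5000]:
    if baseline.is_anomalous(rate):
        print(f"ALERT: metric value {rate} deviates from the learned baseline")
```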
Finally, it’s obvious that system-wide monitoring and visibility are key. An observability platform is an essential part of any modern security solution. Many of the incidents above could have been avoided entirely if a robust observability platform had been in place. Every blind spot is a vulnerability, and any successful security solution has to be built around eliminating them. With a modern observability platform such as Coralogix, this is easier than it’s ever been.