How do Observability and Security Work Together?

There’s no question that the last 18 months have seen a pronounced increase in the sophistication of cyber threats. Global events are accelerating the development of ransomware and wiperware, leaving many enterprise security systems struggling to keep pace. This is where enterprise network monitoring comes in.

Here at Coralogix, we’re passionate about data observability and security and what the former can do for the latter. We’ve previously outlined key cyber threat trends such as trojans/supply chain threats, ransomware, the hybrid cloud attack vector, insider threats, and more. 

This article will revisit some of those threats and highlight new ones while showing why observability and security should be considered codependent. 

Firewall Observability

Firewalls are a critical part of any network’s security. They can give some of the most helpful information regarding your system’s security. A firewall is different from an intrusion detection system (which we discuss below) – you can think of a firewall as your front door and the intrusion detection system as the internal motion sensors. 

Firewalls are typically configured with a series of user-defined or pre-configured rules that block unauthorized network traffic.

Layer 3 vs. Layer 7 Firewalls

Two types of firewalls are common in the market today: Layer 3 and Layer 7. Layer 3 firewalls typically block specific IP addresses, either from a vendor-supplied list that is automatically updated for the user or a custom-made allow/deny list. A mixture of the two is also typical, allowing customers to benefit from global intelligence on malicious IP addresses while being able to block specific addresses that have previously attempted DDoS attacks, for example. 
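
As a minimal illustration of layer 3 filtering, the host-level rules below sketch a default-deny inbound policy with one deny-listed range and one allowed service. The addresses are placeholders (203.0.113.0/24 is a documentation range), and a production deployment would typically manage such rules through a dedicated firewall appliance or cloud security groups rather than raw iptables:

    # Drop anything not explicitly allowed
    iptables -P INPUT DROP
    # Keep replies to outbound connections working
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # Deny-list a range that previously attempted attacks (placeholder range)
    iptables -A INPUT -s 203.0.113.0/24 -j DROP
    # Allow inbound HTTPS from everyone else
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT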

Layer 7 firewalls are more advanced. They can analyze data entering and leaving your network at the packet level and filter the contents of those packets. This capability was initially used to filter malware signatures, preventing malicious actors from disrupting or encrypting a system. Today, more organizations are using layer 7 firewalls to control data egress as well as ingress. This is particularly useful in protecting against data breaches, insider threats, and ransomware, where data may be leaving your network. 

Given that it’s best practice to run both a layer 3 and a layer 7 firewall, and given the volume of data the latter generates, having an observability platform like Coralogix to collate and contextualize this data is critical.

Just a piece of the puzzle

Given that a firewall is just one tool in a security team’s arsenal, it’s essential to be able to correlate events at the firewall level with other system events, such as database failures, malware detection, or data egress. Fortunately, Coralogix ingests firewall logs and metrics using either Logstash or its own syslog agent, which means it can work with a wide variety of firewalls. Additionally, Coralogix’s advanced log parsing and visualization technologies allow security teams to simply overlay firewall events with other security metrics. Coralogix also provides bespoke integrations for a number of the most popular firewalls. 
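
As a rough sketch of the syslog route, a single rsyslog rule can forward firewall logs to whichever collector you ship from. The program name and destination below are placeholders; the actual Coralogix endpoint, port, and any token format depend on your account and chosen agent:

    # /etc/rsyslog.d/99-forward-firewall.conf (illustrative only)
    # Forward messages from the local firewall daemon over TCP to a collector
    :programname, isequal, "firewalld" @@collector.example.internal:514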

Firewall data in isolation isn’t that helpful. It can tell you what malicious traffic you’ve successfully blocked, but not what you’ve missed. That’s why adding context from other security tools is vital.

Intrusion Detection Systems and Observability

As mentioned above, if firewalls are the first line of defense, then intrusion detection systems are next in line. Intrusion detection is key because it can tell you the nature of a threat that has breached your system and highlight what your firewall might have missed. Remember, a firewall will only be able to tell you what it blocked or what it let in. 

Adding an intrusion detection system allows you to assess and neutralize threats that bypass other network security controls. Some intrusion detection systems pull data from OWASP to hunt for the most common malware and vulnerabilities, while others use crowdsourced data. 

By layering intrusion detection data, like that from Suricata, your SRE or security team will be able to detect attacks and identify the point of entry. Such context is vital in reengineering cyber defenses after an attack.
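
For a sense of what that layered data looks like at the rule level, here is an illustrative Suricata rule (not one of the project’s bundled rules) that flags repeated SSH connection attempts from a single source; the SID and thresholds are arbitrary:

    alert tcp any any -> $HOME_NET 22 (msg:"Possible SSH brute force"; flow:to_server; threshold: type threshold, track by_src, count 5, seconds 60; sid:1000001; rev:1;)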

Kubernetes Observability and Security 

A recent Red Hat survey found that 55% of Kubernetes deployments are slowed down due to security concerns. The same study found that 93% of respondents experienced at least one security incident in a Kubernetes environment over the last year. 

Those two statistics tell you everything you need to know. Kubernetes security is important. Monitoring Kubernetes is vital to maintaining cluster security, as we will explore below.

Pod Configuration Security

By default, Kubernetes applies no network policy restricting pod-to-pod traffic: any pod can communicate with any other unless you define rules that say otherwise. Beyond network rules, pod security is heavily defined by role-based access control (RBAC). It’s possible to monitor the security permissions assigned to a given user to ensure access isn’t over-provisioned.
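
To make that concrete, the sketch below shows a minimal Kubernetes NetworkPolicy that flips the default to deny-all ingress for one namespace (the namespace name is a placeholder); pods then only receive traffic that later, more specific policies explicitly allow:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: my-app          # placeholder namespace
    spec:
      podSelector: {}            # selects every pod in the namespace
      policyTypes:
        - Ingress                # no ingress rules listed, so all inbound traffic is denied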

Malicious Code

A common attack vector into a Kubernetes cluster is the containerized application itself. By monitoring host-level metrics and IP requests, you can limit your exposure to DDoS attacks, which could otherwise take the cluster offline. Using Prometheus as an operational, enterprise network monitoring tool is a good way of picking up vital metrics from containerized environments. 
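
A minimal Prometheus scrape configuration along these lines might use Kubernetes service discovery to pick up pods that opt in via a standard annotation; this is a common community convention rather than anything Coralogix-specific:

    scrape_configs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod                        # discover every pod via the Kubernetes API
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep                     # only scrape pods annotated prometheus.io/scrape: "true"
            regex: "true"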

Runtime Monitoring

A container’s runtime metrics will give you a good idea of whether it’s also running a secondary, malicious process. Runtime metrics to look out for include network connections, endpoints, and audit logs. By monitoring these metrics and using an ML-powered log analyzer, such as Loggregation, you can spot any anomalies which may indicate malicious activity.
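
On the audit-log side, a short Kubernetes audit policy is often enough to surface the events that matter for runtime anomaly detection. The sketch below (rules chosen purely for illustration) records access to Secrets and any interactive exec/attach into pods:

    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      # Record who reads or changes Secrets (metadata only, no payloads)
      - level: Metadata
        resources:
          - group: ""
            resources: ["secrets"]
      # Log exec/attach requests in full – a common sign of interactive tampering
      - level: RequestResponse
        resources:
          - group: ""
            resources: ["pods/exec", "pods/attach"]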

Monitoring for protection

With Kubernetes, several off-the-shelf security products may aid a more secure deployment. However, as you can see above, there is no substitute for effective monitoring for Kubernetes security.

Network Traffic Observability and Security

It should be abundantly clear why an effective observability strategy for your network traffic is critical. On top of the fundamentals discussed so far, Coralogix has many bespoke integrations designed to assist your network security and observability. 

Zeek

Zeek is an open-source network monitoring tool designed to enhance security through open-source and community participation. You can ship Zeek logs to Coralogix via Filebeat so that every time Zeek performs a scan, results are pushed to a single dashboard overlaid with other network metrics.
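
A stripped-down Filebeat configuration for this might look like the sketch below. The Zeek log path and the output endpoint are placeholders, and the exact output settings for your Coralogix account (region, keys, or Logstash endpoint) will differ:

    filebeat.inputs:
      - type: log
        paths:
          - /opt/zeek/logs/current/*.log            # adjust to your Zeek installation's log directory
    output.logstash:
      hosts: ["logstash.example.internal:5044"]     # placeholder shipping endpoint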

Cloudflare

Organizations around the world use Cloudflare for DDoS and other network security protection. However, your network is only as secure as the tools you use to secure it. Using the Coralogix audit log integration for Cloudflare, you can ensure that access to Cloudflare is monitored and any changes are flagged in a network security dashboard. 

Security Traffic Analyzer

Coralogix has built a traffic analyzer specifically for monitoring the security of your AWS infrastructure. The Security Traffic Analyzer connects directly to your AWS environment and collates information from numerous AWS services, including network load balancers and VPC traffic.

Application-level Observability

Often overlooked, application-level security is more important than ever. With zero-day exploits like Log4j becoming more and more common, having a robust approach to security from the code level up is vital. You guessed it, though – observability can help. 

To the edge, and beyond

Edge computing and serverless infrastructure are just two examples of the growing complexities you must consider with application-level security. Running applications on the edge can generate vast amounts of data, requiring advanced observability solutions to identify anomalies. Equally, serverless applications can lead to security and IAM issues, which have been the causes of some of the world’s biggest data breaches. 

Observability for Hybrid Cloud Security

In the world of hybrid cloud, observability and security are closely intertwined. The complexities of running systems in a mixture of on-premise and cloud environments give malicious actors, and your own security teams, a lot to work with. 

Centralized Logging

It’s unlikely that the security tooling for your cloud environments will be the same as that used on-premise. Across different systems, vendors will likely have different security tools, all with varied log outputs. A single repository for these outputs, which also parses them in a standardized fashion, is a key part of effective defense. Without this, your security teams may spend unnecessary time deciphering the nuances of two different products’ logs, trying to find a connection. 

Dashboarding

A single pane of glass is the only way to implement observability in a complex environment. Dashboards help spot trends and identify outliers, making sure that two teams with different perspectives are “singing from the same hymn sheet.” Combine effective dashboarding with holistic data collection, and you’re onto a winner.

Observability is Security 

At Coralogix, we firmly believe that the most important tool in your security arsenal is effective monitoring and observability. But it’s not just effectiveness that’s key, but also pragmatism. We firmly believe in the value of collecting holistic data, such as from Slack and PagerDuty, to tackle security incidents as well as to detect them. 

The bulk of this piece has addressed how observability can help detect malicious actors and security incidents. Our breadth of out-of-the-box integrations and the openness of our platform give organizations free rein to build security-centered SIEM tools. However, by analyzing otherwise overlooked data, such as from internal communications, website information, and marketing data, in conjunction with traditionally monitored metrics, you can really supercharge your defense and response.

Summary

Hopefully, you can see that security and observability are no longer separate concepts. As companies exploit increasingly complex technologies, generate more data, and deploy more applications, observability and security become bywords for one another. However, for your observability strategy to become part of your security strategy, you need the right platform. That platform will collate logs for you automatically while highlighting anomalies, integrate with every security tool in your arsenal and contextualize their data into one dashboard, and bring your engineers together to combat the technical threats facing your organization. 

How to Use SIEM Tools in the Modern World

In our highly connected world, organizations of all sizes need to be alert to the risk of cyberattacks. The genuine threats to today’s enterprises include data leaks, ransomware, and theft of commercial secrets or funds, with the potential for severe financial and reputational damage. 

Investing in tools to monitor your systems and alert you to suspicious activity as early as possible is vital for strengthening your security posture.

Until fairly recently, Security Information and Event Management (SIEM) tools have been the preserve of large corporations, requiring a high degree of technical and security expertise to operate and derive value from them. 

The good news is that modern-day SIEM systems automate much of the labor involved in determining baselines and thresholds and in correlating events across distributed systems. 

What’s more, triaging alerts can now be automated, with some also providing the ability to contain and mitigate threats as they emerge. This article will discuss how SIEM is evolving and how it can help you defend against cyber threats.

What is SIEM?

The earliest tools in this space, Security Information Management (SIM), focused primarily on recording and reporting log data for regulatory compliance. Then came Security Event Management (SEM) tools, which analyzed log and event data in real time to monitor threats and support incident response.

By combining these two functions, SIEM tools evolved to handle proactive attack monitoring, threat analysis, incident response, security forensics and reporting, and data retention for compliance purposes. 

With the data and functionality provided by a SIEM, security operations teams could analyze historical data to determine expected operating parameters and set conditions to trigger alerts when a threat is detected.

As we’ll discuss in more detail, SIEM vendors have continued to improve their offerings by leveraging innovations in machine learning, automation, and big data analysis. 

Modern SIEM solutions handle much larger volumes of data, using it to derive insights and identify anomalies that merit further investigation. They also apply automated workflows to contain and mitigate threats.

How do SIEM tools work?

A SIEM tool collects log data from the systems and applications running on multiple disparate hosts within your IT estate, including servers, routers, firewalls, and employee workstations. There are several ways SIEM tools collect data, and the most reliable methods include:

  • Using an agent installed on the device to collect from
  • Connecting to a device directly using a network protocol or API call
  • Accessing log files directly from storage, typically in Syslog format or CEF (Common Event Format) – see the sample record after this list
  • Using an event streaming protocol like NetFlow or SNMP
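
As a concrete illustration of the log-file method above, a single record in CEF consists of a fixed header of pipe-separated fields followed by key=value extensions; every value below is invented:

    CEF:0|ExampleVendor|ExampleFirewall|1.0|100|Blocked inbound connection|5|src=203.0.113.7 dst=10.0.0.12 dpt=22 proto=TCP act=blocked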

After collecting the log data in a central location, the SIEM tool normalizes and aggregates that data for analysis. SOC team members can query the log and event data to generate reports and investigate incidents. Dashboards provide the SOC team with visualizations of the data in real time, with alerts triggered when a possible threat or incident is detected.

‘Next Gen’ SIEMs build on this functionality using insights from historical data combined with external threat libraries and blocklists. Using statistical analysis, descriptive and predictive data mining, machine learning, simulation, and optimization, SIEM tools maintain a constantly evolving picture of normal operations and deliver critical insights to identify hidden threats.

Correlating event data

To protect your organization from cyber threats, it’s not enough to analyze logs from individual devices in isolation. With distributed systems, it’s common for attackers to move laterally through the network, looking for weaknesses that will give access to your data or resources.

To identify signs of an attack, you sometimes need to correlate events occurring in different parts of the system to form a bigger picture. For example, thousands of failed login attempts occurring in quick succession would be flagged as a potential brute-force attack, but an employee logging into the VPN from a different location is not necessarily suspicious. 

However, if that user then accesses a different server than normal – perhaps one containing sensitive data – or attempts to change their security settings or account permissions, things start to look more suspicious. These two events, taken together, suggest the beginnings of an attack, and the chain of events would be flagged as a potential threat to be investigated.

Event correlation relies on large numbers of data points, making it extremely difficult to perform manually. SIEM tools use statistical modeling and machine-learning to analyze the available data, automatically marking events as potentially suspicious and drawing correlations between data points to raise alerts when a sequence of events displays the hallmarks of malicious activity.

Powering intelligent alerts with Machine-Learning

One of the limitations of SIEM for securing your enterprise is the impact of false positives in triggering alert fatigue among security operators. When events that appear similar, at least on the surface, happen very frequently, it’s a natural human response to pay them less attention. 

Modern SIEM providers apply machine learning (ML) techniques to automatically refine alerts generated and adjust alert thresholds, reducing the likelihood of false positives and allowing security experts to use their time more productively.

Machine learning enables several other techniques for automating threat detection and minimizing the extent of an attack with UEBA and SOAR.

Identifying anomalies with UEBA

With User and Entity Behavior Analytics (UEBA), SIEM tools use machine learning and statistical analysis to identify common patterns of user behavior, which serve as a baseline for detecting anomalous activity.

For example, if an individual typically logs in for eight hours a day during local office hours, but one day stays logged in for over 24 hours or starts logging in during the middle of the night, it could indicate that their account has been compromised and an attacker is gaining access to your systems. 

In a similar vein, if you have a user who typically prints in the region of a hundred pages a week or writes a few gigabytes to removable storage each month, but one day starts exporting terabytes of data, you might be dealing with a malicious insider.

Modern SIEM tools combine insights from UEBA with the real-time analysis of system and network data to build a comprehensive picture of the activity within your IT estate. Using targeted algorithms and statistical modeling, they can adapt alert rules to filter out noise and zero in on the anomalies that merit further investigation.

Automating responses to threats

Security Orchestration, Automation and Response (SOAR) refers to the capability of modern SIEMs to automate the response to threats they have detected. Once suspicious activity is identified, a SIEM tool can assist with the orchestration of the various activities required to further investigate the threat by correlating events (as described above). 

Furthermore, they can automate certain runbook activities, such as opening tickets in tracking systems like Jira. This saves time for security experts, who can focus on the most impactful tasks.

For certain types of attacks, the SIEM tool can automatically apply steps to contain or mitigate the threat, such as notifying employees of a confirmed phishing attempt or quarantining the accounts of users who fell victim to the phishing attack before it was detected.

SOAR can also be used to apply basic hygiene measures automatically. For example, by automating the steps to de-provision user accounts when employees leave your organization, you ensure these essential tasks are performed promptly while freeing up IT staff to focus on more valuable work.

Built for the cloud

As we’ve seen, modern SIEM tools can identify emerging threats in distributed systems by analyzing huge volumes of data in real time. This level of data storage and analysis requires significant storage and computing power, which is why modern SIEM tools are often built for the cloud. 

The horizontal scalability of cloud-hosted storage combined with cloud-native big data analytics tools means you can feed in a wide range of data sources, maximizing data correlation and anomaly detection opportunities.

However, a possible downside of cloud-based tools is the potential for spiraling costs for data storage. SIEM providers that offer solutions for managing data and optimizing storage will help you manage costs over the long term.

Summary

SIEM tools are proving more critical than ever as cyberattacks grow in scale and sophistication. They are undeniably valuable for collating data and threats from across your IT environment into a single, easy-to-use dashboard. 

Many of the ‘Next Gen SIEM’ tools are configured to flag suspect patterns independently and sometimes even resolve the underlying issue automatically. The best SIEM tools are adept at using past trends to differentiate between actual threats and legitimate use, minimizing false alarms while ensuring optimal protection. 

By helping your security team work effectively and efficiently, a modern SIEM tool streamlines your operations while strengthening your organization’s security posture.

Guide: Smarter AWS Traffic Mirroring for Stronger Cloud Security

So, you’ve installed Coralogix’s STA and you would like to start analyzing your traffic and getting valuable insights, but you’re not sure whether you’re mirroring enough traffic – or you’re wondering if you might be mirroring too much data and could be getting more for less.

In order to detect everything, you have to capture everything; and in order to investigate security issues thoroughly, you need to capture every network packet. 

More often than not, the data once labeled irrelevant and thrown away is found to be the missing piece in the puzzle when slicing and dicing the logs in an attempt to find a malicious attacker or the source of an information leak.

However, as ideal as this might be, in reality, capturing every packet from every workstation and every server in every branch office is usually impractical and too expensive, especially for larger organizations. Just like in any other field of security, there is no real right or wrong here, it’s more a matter of whether or not it is worth the trade-off in particular cases.

There are several strategies that can be taken to minimize the overall cost of the AWS traffic monitoring solution and still get acceptable results. Here are some of the most commonly used strategies:

1. Mirror by Resource/Information Importance

  1. Guidelines: After mapping out the assets that are most critical to the organization from a business perspective, configure mirroring so that only traffic to and from those critical servers and services is mirrored and analyzed. For example, a bank will probably include all SWIFT-related servers; a software company will probably include all traffic to and from its code repository, release location, etc. 
  2. Rationale: The rationale behind this strategy is that mirroring the most critical infrastructures will still provide the ability to detect and investigate security issues that can harm the organization the most and will save money by not mirroring the entire infrastructure.
  3. Pros: By following this strategy, you will improve the visibility around the organization’s critical assets and should be able to detect issues related to your organization’s “crown jewels” (if alerts are properly set) and to investigate such issues.
  4. Cons: Since this strategy won’t mirror the traffic from non-crown jewels environments, you will probably fail to pinpoint the exact (or even approximate) path the attacker took in order to attack the organization’s “crown jewels”.
  5. Tips: If your organization uses a jump-box to connect to the crown jewels servers and environments, either configure the logs of that jump-box server to be as verbose as possible and store them on Coralogix with a long retention or mirror the traffic to the jumpbox server.

2. Mirror by Resource/Information Risk

  1. Guidelines: After mapping out all the paths and services through which the most critical data of the organization is being transferred or manipulated, configure the mirroring to mirror only traffic to and from those services and routes. The main difference between this strategy and the one mentioned above is that it is focused on sensitive data rather than critical services as defined by the organization.
  2. Rationale: The rationale behind this strategy is that mirroring all the servers and services that may handle critical information will still provide the ability to detect and investigate security issues that can harm the organization the most and will save money by not mirroring the entire infrastructure.
  3. Pros: You will improve the visibility around the critical data across services and environments, and you should be able to detect, by configuring the relevant alerts, attempts to modify or otherwise interfere with the handling and transfer of the organization’s sensitive data.
  4. Cons: Since this strategy won’t mirror traffic from endpoints connecting to the services and paths used for transmission and manipulation of sensitive data, it might be difficult or even impossible to detect the identity of the attacker and the exact or even approximate path taken by the attacker.
  5. Tips: Collecting logs from firewalls and WAFs that control the connections from and to the Internet and sending the logs to Coralogix can help a great deal in creating valuable alerts and by correlating them with the logs from the STA can help identify the attacker (to some extent) and his/her chosen MO (Modus Operandi).

3. Mirror by Junction Points

  1. Guidelines: Mirror the data that passes through the critical “junction points” such as WAFs, NLBs or services that most of the communication to the organization and its services goes through.
  2. Rationale: The idea behind this strategy is that in many organizations there are several “junction points” such as WAFs, NLBs, or services that most of the communication to the organization and its services goes through. Mirroring this traffic can cover large areas of the organization’s infrastructure by mirroring just a handful of ENIs.
  3. Pros: You will save money on mirroring sessions and avoid mirroring some of the data while still keeping a lot of the relevant information.
  4. Cons: Since some of the data (e.g. lateral connections between servers and services in the infrastructure) doesn’t necessarily traverse the mirrored junction points, it won’t be mirrored which will make it harder and sometimes even impossible to get enough information on the attack or even to be able to accurately detect it.
  5. Tips: Currently, AWS cannot mirror an NLB directly but it is possible and easy to mirror the server(s) that are configured as target(s) for that NLB. Also, you can increase the logs’ verbosity on the non-monitored environments and services and forward them to Coralogix to compensate for the loss in traffic information.

4. Mirror by Most Common Access Paths

  1. Guidelines: Mirror traffic from every server based on the expected and allowed set of network protocols that are most likely to be used to access it.
  2. Rationale: The idea behind this strategy is that servers that expose a certain service are more likely to be attacked via that same service. For example, an HTTP/S server is more likely to be attacked via HTTP/S than via other ports (at least at the beginning of the attack). Therefore, it makes some sense to mirror the traffic from each server based on the expected traffic to it.
  3. Pros: You will be able to save money by mirroring just part of the traffic that arrived or was sent from the organization’s servers. You will be able to detect, by configuring the relevant alerts, some of the indications of an attack on your servers.
  4. Cons: Since you mirror only the expected traffic ports, you won’t see unexpected traffic that is being sent or received to/from the server which can be of great value for a forensic investigation.
  5. Tips: Depending on your exact infrastructure and the systems and services in use, it might be possible to cover some of the missing information by increasing the services’ log verbosity and forwarding them to Coralogix.

5. Mirror Some of Each

  1. Guidelines: Randomly select a few instances of each role, region or subnet and mirror their traffic to the STA.
  2. Rationale: The idea behind this strategy is that an attacker cannot know which instances are mirrored and which are not. In addition, many of the tools used by hackers are generic and will try to propagate through the network without checking whether an instance is mirrored. Therefore, if the attacker tries to move laterally through the network (manually or automatically), or to scan for vulnerable servers and services, it is very likely that they will hit at least one of the mirrored instances (depending on the percentage of instances you have selected in each network region) and, if alerts were properly configured, an alert will be raised.
  3. Pros: A high likelihood of detecting security issues throughout your infrastructure, especially the more generic types of malware and malicious activities.
  4. Cons: Since this strategy will only increase the chances of detecting an issue, it is still possible that you will “run out of luck” and the attacker will penetrate the machines that were not mirrored. Also, when it comes to investigations it might be very difficult or even impossible to create a complete “story” based on the partial data that will be gathered.
  5. Tips: Since this strategy is based on a random selection of instances, increasing the verbosity of operating system and audit logs, as well as other services’ logs, and forwarding them to Coralogix for monitoring and analysis can sometimes help complete the picture in such cases.

In addition to whichever strategy you choose or develop, we would also recommend that you mirror the following. These will probably cost you next to nothing but can be of great value when you need to investigate an issue or detect security issues (manually and automatically):

  1. All DNS traffic – It is usually the smallest slice of traffic in terms of bytes/sec and packets/sec, but it can compensate for most of the blind spots that result from such trade-offs (a CLI sketch for mirroring only DNS traffic follows this list).
  2. Mirror traffic that should never happen – Suppose you have a publicly accessible HTTP server whose content is updated only by scp from another server. In this case, mirror the FTP traffic to that server: FTP is one of the most common methods for pushing new content to HTTP servers, and since legitimate updates only ever arrive over scp, any FTP access at all is suspicious. Mirroring that traffic and defining an alert on it will reveal attempts to replace the HTTP contents even before they succeed. This is just one example; there are many others (ssh or nfs to Windows servers; RDP, SMB, NetBIOS, and LDAP connections to Linux servers), and you can probably come up with more based on your particular environment. The idea is that an attacker who has no knowledge of the organization’s infrastructure will first have to scan hosts to see which operating systems they run and which services they host – for example, by trying to connect via SMB (a protocol mostly used by Windows computers) and, if there is a response, assuming the host is Windows. Of course, the same applies to Linux.
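
As referenced above, capturing only DNS traffic comes down to an ordinary mirror filter plus a session per source ENI. The sketch below uses the AWS CLI with placeholder resource IDs; in practice you would point --traffic-mirror-target-id at the target created by your STA installation:

    # Create a filter that accepts only DNS over UDP (placeholder IDs throughout)
    aws ec2 create-traffic-mirror-filter --description "DNS only"
    # Egress rule: DNS queries leaving the instance (UDP destination port 53)
    aws ec2 create-traffic-mirror-filter-rule --traffic-mirror-filter-id tmf-0123456789abcdef0 \
        --traffic-direction egress --rule-number 100 --rule-action accept --protocol 17 \
        --destination-port-range FromPort=53,ToPort=53 \
        --source-cidr-block 0.0.0.0/0 --destination-cidr-block 0.0.0.0/0
    # Ingress rule: DNS responses coming back (UDP source port 53)
    aws ec2 create-traffic-mirror-filter-rule --traffic-mirror-filter-id tmf-0123456789abcdef0 \
        --traffic-direction ingress --rule-number 100 --rule-action accept --protocol 17 \
        --source-port-range FromPort=53,ToPort=53 \
        --source-cidr-block 0.0.0.0/0 --destination-cidr-block 0.0.0.0/0
    # Mirror one instance's ENI to the STA's mirror target using that filter
    aws ec2 create-traffic-mirror-session --network-interface-id eni-0123456789abcdef0 \
        --traffic-mirror-target-id tmt-0123456789abcdef0 \
        --traffic-mirror-filter-id tmf-0123456789abcdef0 --session-number 1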

Cloud Access

In cloud infrastructures, instances and even the mirroring configuration are accessible via the Internet, which theoretically allows an attacker to find out whether an instance is mirrored and act accordingly. Because of this, it is even more important to make sure that access to the cloud management console is properly secured and monitored.

What is the Coralogix Security Traffic Analyzer (STA), and Why Do I Need It?

The widespread adoption of cloud infrastructure has proven to be highly beneficial, but it has also introduced new challenges and added costs – especially when it comes to security.

As organizations migrate to the cloud, they relinquish access to their servers and all information that flows between them and the outside world. This data is fundamental to both security and observability.

Cloud vendors such as AWS are attempting to compensate for this undesirable side effect by creating a selection of services which grant the user access to different parts of the metadata. Unfortunately, the disparate nature of these services only creates another problem: how do you bring it all together?

The Coralogix Cloud Security solution enables organizations to quickly centralize and improve their security posture, detect threats, and continuously analyze digital forensics without the complexity, long implementation cycles, and high costs of other solutions.

The Security Traffic Analyzer Can Show You Your Metadata

Using the Coralogix Security Traffic Analyzer (STA), you gain access to the tools that you need to analyze, monitor and alert on your data, on demand.

Here’s a list of data types that are available for AWS users:

[table id=43 /]

Well… That doesn’t look very promising, right? This is exactly the reason why we developed the Security Traffic Analyzer (STA).

What can the Security Traffic Analyzer do for you?

Simple Installation

When you install the STA, you get an AWS instance and several other related resources.

Mirror Your Existing VPC Traffic

You can mirror your server traffic to the STA (by using VPC traffic mirroring). The STA will automatically capture, analyze and optionally store the traffic for you while creating meaningful logs in your Coralogix account. You can also create valuable dashboards and alerts. To make it even easier, we created the VPC Traffic Mirroring Configuration Automation handler which automatically updates your mirroring configuration based on instance tags and tag values in your AWS account. This allows you to declaratively define your VPC traffic mirroring configuration.

Machine Learning-Powered Analysis

The STA employs ML-powered algorithms which alert you to potential threats with the complete ability to tune, disable and easily create any type of new alerts.

Automatically Enrich Your Logs

The STA automatically enriches the data passing through it – domain names, certificate names, and much more – using data from several other data sources. This allows you to create more meaningful alerts and reduce false positives without increasing false negatives.

What are the primary benefits of the Coralogix STA?

Ingest Traffic From Any Source

Connect any source of information to complete your security observability, including audit logs, CloudTrail, GuardDuty, or any other source. Monitor your security data in one of 100+ pre-built dashboards or easily build your own using our variety of visualization tools and APIs.

Customizable Alerts & Visualizations

The Coralogix Cloud Security solution comes with a predefined set of alerts, dashboards and Suricata rules. Unlike many other solutions on the market today, you maintain the ability to change any or all of them to tailor them to your organization’s needs.

One of the most painful issues that usually deters people from using an IDS solution is that they are notorious for their high false-positive rates, but Coralogix makes it unbelievably easy to solve these kinds of issues. Tuning dynamic ML-powered alerts, dashboards, and Suricata rules is just a matter of 2-3 clicks and you’re done.

Automated Incident Response

Although Coralogix focuses on detection rather than prevention, it is still possible to achieve both detection and better prevention by integrating Coralogix with any orchestration platform such as Cortex XSOAR and others. 

Optimized Storage Costs

Security logs need to be correlated with packet data in order to provide needed context to perform deep enough investigations. Setting up, processing, and storing packet data can be laborious and cost-prohibitive.

With the Coralogix Optimizer, you can reduce up to 70% of storage costs without sacrificing full security coverage and real-time monitoring. This new model enables you to get all of the benefits of an ML-powered logging solution at only a third of the cost and with more real-time analysis and alerting capabilities than before.

How Does the Coralogix STA Compare to AWS Services?

Here’s a full comparison between the STA and all the other methods discussed in this article:

[table id=44 /]

(1) Will be added soon in upcoming versions

As you can see, the STA is already the most effective solution for gaining back control and access to your metadata. In the upcoming versions, we’ll also improve the level of network visibility by further enriching the data collected, allowing you to make even more fine-grained alerting rules.

What’s the Most Powerful Tool in Your Security Arsenal? 

Trying to work out the best security tool is a little like trying to choose a golf club three shots ahead – you don’t know what will help you get to the green until you’re in the rough.

Traditionally, when people think about security tools, firewalls, IAM and permissions, encryption, and certificates come to mind. These tools all have one thing in common – they’re static. In this piece, we’re going to examine the security tools landscape and understand which tool you should be investing in.

Security Tools – The Lay of the Land

The options available today to the discerning security-focused organization are diverse. From start-ups to established enterprises, understanding who makes the best firewalls or who has the best OWASP top ten scanning is a nightmare. We’re not here to compare vendors, but more to evaluate the major tools’ importance in your repertoire.

Firewalls and Intrusion Detection Systems

Firewalls are a must-have for any organization, no one is arguing there. Large or small, containerized or monolithic, without a firewall you’re in big trouble.

Once you’ve selected and configured your firewall, ensuring it gives the protection you need, you might think you’ve uncovered a silver bullet. The reality is that you have to stay on top of some key parameters to make sure you’re maximizing the protection of the firewall or IDS. Monitoring outputs such as traffic, bandwidth, and sessions are all critical to understanding the health and effectiveness of your firewalls.

Permissions Tooling

The concept of Identity and Access Management has evolved significantly in the last decade or so, particularly with the rise of the cloud. The correct provisioning of roles, users, and groups for the purposes of access management is paramount for keeping your environment secure.

Staying on top of the provisioning of these accesses is where things can get a bit difficult. Understanding all of the permissions assigned to individuals, applications, and functions alike (through a service such as AWS CloudWatch) is hard to keep track of. While public CSPs have made this simpler, the ability to view permissions in the context of what’s going on in your system gives enhanced security and confidence.

Encryption Tooling

Now more than ever, encryption is at the forefront of any security-conscious individual’s mind. Imperative for protecting both data at rest and in flight, encryption is a key security tool. 

Once implemented, you need to keep track of your encryption, ensuring that it remains in place for whatever you’re trying to protect. Be it disk encryption or encrypted traffic on your network, it needs to be subject to thorough monitoring.

Monitoring is the Foundation

With all of the tool types that we’ve covered, there is a clear and consistent theme. Not only do all of the above tools have to be provisioned, they also rely on strong and dependable monitoring to assist proactive security and automation.

Security Incident Event Management

The ability to have a holistic view of all of your applications and systems is key. Not only is it imperative to see the health of your network, but if part of your application stack is underperforming it can be either symptomatic of, or inviting to, malicious activity.

SIEM dashboards are a vital security tool which use the concept of data fusion to provide advanced modelling and context to otherwise isolated metrics. Using advanced monitoring and alerting, the best SIEM products will not only dashboard your system health and events in real time, but also retain log data for a period of time to enable event timeline reconstruction.

The Power of Observability

Observability is the new thing in monitoring. It expands beyond merely providing awareness of system health and security status to giving cross-organizational insights which drive real business outcomes.

What does this mean for our security tooling? Well, observability practices drive relevant insights to the individuals most empowered to act on them. In the instance of system downtime, this would be your SREs. In the case of an application vulnerability, this would be your DevSecOps ninjas. 

An observability solution working in real time will not only provide telemetry on the health and effectiveness of your security tool arsenal, but will also give real-time threat detection. 

Coralogix Cloud Security

Even if you aren’t certain which firewall or encryption type comes out on top, you can be certain of Coralogix’s cloud security solution.

With a quick, 3-step setup and out-of-the-box functionality including real-time monitoring, you can be sure that your tools and engineers can react in a timely manner to any emerging threats. 

Easily connect any data source to complete your security observability, including Audit logs, Cloudtrail, GuardDuty or any other source. Monitor your security data in one of 100+ pre-built dashboards or easily build your own using our variety of visualization tools and APIs.

How to automate VPC Mirroring for Coralogix STA

After installing the Coralogix Security Traffic Analyzer (STA) and choosing a mirroring strategy suitable for your organization’s needs (if you haven’t yet, you can start by reading this), the next step is to set up the mirroring configuration in AWS. However, configuring VPC Traffic Mirroring in AWS is tedious and cumbersome – it requires you to create a mirror session per network interface of every mirrored instance, and, to add insult to injury, if that instance terminates for some reason and a new one replaces it, you’ll have to re-create the mirroring configuration from scratch. If you, like many others, use auto-scaling groups to automatically scale your services up and down based on actual need, the situation becomes completely unmanageable.

Luckily for you, we at Coralogix have already prepared a solution for that problem. In the next few lines I’ll present a tool we’ve written to address that specific issue, as well as a few use cases for it.

The tool we’ve developed can run as a pod in Kubernetes or inside a Docker container. It is written in Go to be as efficient as possible and requires only a minimal set of resources to run properly. While running, it reads its configuration from a simple JSON file, selects AWS EC2 instances by tags, selects network interfaces on those instances, and creates a VPC Traffic Mirroring session for each selected network interface to the configured VPC Mirroring Target, using the configured VPC Mirroring Filter.

The configuration used in this document instructs the sta-vpc-mirroring-manager to look for AWS instances that have the tags “sta.coralogixstg.wpengine.com/mirror-filter-id” and “sta.coralogixstg.wpengine.com/mirror-target-id” (regardless of the values of those tags), collect the IDs of their first network interfaces (those attached as eth0), and attempt to create a mirror session for each collected network interface to the mirror target specified by the “sta.coralogixstg.wpengine.com/mirror-target-id” tag, using the filter ID specified by the “sta.coralogixstg.wpengine.com/mirror-filter-id” tag on the instance that network interface is attached to.

To function properly, the instance hosting this pod should have an IAM role attached to it (or the AWS credentials provided to this pod/container should contain a default profile) with the following permissions:

  1. ec2:Describe* on *
  2. elasticloadbalancing:Describe* on *
  3. autoscaling:Describe* on *
  4. ec2:ModifyTrafficMirrorSession on *
  5. ec2:DeleteTrafficMirrorSession on *
  6. ec2:CreateTrafficMirrorSession on *
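
Expressed as an IAM policy document, that permission set might look roughly like the following sketch (Resource is left as "*" to match the list above; scope it down where your environment allows):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "ec2:Describe*",
            "elasticloadbalancing:Describe*",
            "autoscaling:Describe*",
            "ec2:CreateTrafficMirrorSession",
            "ec2:ModifyTrafficMirrorSession",
            "ec2:DeleteTrafficMirrorSession"
          ],
          "Resource": "*"
        }
      ]
    }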

Installation

This tool can be installed either as a Kubernetes pod or as a Docker container. Here are the detailed instructions for installing it:

Installation as a docker container:

  1. To download the docker image use the following command:
    docker pull coralogixrepo/sta-vpc-mirroring-config-manager:latest
  2. On the docker host, create a config file for the tool with the following content (if you would like the tool to report to the log what is about to be done without actually modifying anything set “dry_run” to true):
    {
      "service_config": {
        "rules_evaluation_interval": 10000,
        "metrics_exporter_port": ":8080",
        "dry_run": false
      },
      "rules": [
        {
          "conditions": [
            {
              "type": "tag-exists",
              "tag_name": "sta.coralogixstg.wpengine.com/mirror-target-id"
            },
            {
              "type": "tag-exists",
              "tag_name": "sta.coralogixstg.wpengine.com/mirror-filter-id"
            }
          ],
          "source_nics_matching": [
            {
              "type": "by-nic-index",
              "nic_index": 0
            }
          ],
          "traffic_filters": [
            {
              "type": "by-instance-tag-value",
              "tag_name": "sta.coralogixstg.wpengine.com/mirror-filter-id"
            }
          ],
          "mirror_target": {
            "type": "by-instance-tag-value",
            "tag_name": "sta.coralogixstg.wpengine.com/mirror-target-id"
          }
        }
      ]
    }
    
  3. Use the following command to start the container:
    docker run -d \
       -p <prometheus_exporter_port>:8080 \
       -v <local_path_to_config_file>:/etc/sta-pmm/sta-pmm.conf \
       -v <local_path_to_aws_profile>/.aws:/root/.aws \
       -e "STA_PM_CONFIG_FILE=/etc/sta-pmm/sta-pmm.conf" \
       coralogixrepo/sta-vpc-mirroring-config-manager:latest

Installation as a Kubernetes deployment:

    1. Use the following config map and deployment configurations:
      apiVersion: v1
      kind: ConfigMap
      data:
        sta-pmm.conf: |
          {
            "service_config": {
              "rules_evaluation_interval": 10000,
              "metrics_exporter_port": 8080,
              "dry_run": true
            },
            "rules": [
              {
                "conditions": [
                  {
                    "type": "tag-exists",
                    "tag_name": "sta.coralogixstg.wpengine.com/mirror-target-id"
                  },
                  {
                    "type": "tag-exists",
                    "tag_name": "sta.coralogixstg.wpengine.com/mirror-filter-id"
                  }
                ],
                "source_nics_matching": [
                  {
                    "type": "by-nic-index",
                    "nic_index": 0
                  }
                ],
                "traffic_filters": [
                  {
                    "type": "by-instance-tag-value",
                    "tag_name": "sta.coralogixstg.wpengine.com/mirror-filter-id"
                  }
                ],
                "mirror_target": {
                  "type": "by-instance-tag-value",
                  "tag_name": "sta.coralogixstg.wpengine.com/mirror-target-id"
                }
              }
            ]
          }
      
      metadata:
        labels:
          app.kubernetes.io/component: sta-pmm
          app.kubernetes.io/name: sta-pmm
          app.kubernetes.io/part-of: coralogix
          app.kubernetes.io/version: '1.0.0-2'
        name: sta-pmm
        namespace: coralogix
      
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        labels:
          app.kubernetes.io/component: sta-pmm
          app.kubernetes.io/name: sta-pmm
          app.kubernetes.io/part-of: coralogix
          app.kubernetes.io/version: '1.0.0-2'
        name: sta-pmm
        namespace: coralogix
      spec:
        selector:
          matchLabels:
            app.kubernetes.io/component: sta-pmm
            app.kubernetes.io/name: sta-pmm
            app.kubernetes.io/part-of: coralogix
        template:
          metadata:
            labels:
              app.kubernetes.io/component: sta-pmm
              app.kubernetes.io/name: sta-pmm
              app.kubernetes.io/part-of: coralogix
              app.kubernetes.io/version: '1.0.0-2'
            name: sta-pmm
          spec:
            containers:
              - env:
                  - name: STA_PM_CONFIG_FILE
                    value: /etc/sta-pmm/sta-pmm.conf
                  - name: AWS_ACCESS_KEY_ID
                    value: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                  - name: AWS_SECRET_ACCESS_KEY
                    value: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                image: coralogixrepo/sta-vpc-mirroring-config-manager:latest
                imagePullPolicy: IfNotPresent
                livenessProbe:
                  httpGet:
                    path: "/"
                    port: 8080
                  initialDelaySeconds: 5
                  timeoutSeconds: 1
                name: sta-pmm
                ports:
                  - containerPort: 8080
                    name: sta-pmm-prometheus-exporter
                    protocol: TCP
                volumeMounts:
                  - mountPath: /etc/sta-pmm/sta-pmm.conf
                    name: sta-pmm-config
                    subPath: sta-pmm.conf
            volumes:
              - configMap:
                  name: sta-pmm-config
                name: sta-pmm-config
      

Configuration

To configure instances for mirroring, all you have to do is make sure that the instances whose traffic you would like mirrored to your STA carry the tags “sta.coralogixstg.wpengine.com/mirror-filter-id” and “sta.coralogixstg.wpengine.com/mirror-target-id”, pointing at the correct IDs of the mirror filter and target respectively. To find the IDs of the mirror target and mirror filter that were created as part of the STA installation, open the CloudFormation Stacks page in the AWS Console and search for “TrafficMirrorTarget” and “TrafficMirrorFilter” in the Resources tab.

To assign a different mirroring policy to different instances – for example, to mirror traffic on port 80 from some instances and traffic on port 53 from others – simply create a VPC Traffic Mirror Filter manually with the correct filtering rules (just like in a firewall) and assign its ID to the “sta.coralogixstg.wpengine.com/mirror-filter-id” tag of the relevant instances.
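
For example, assigning both tags to an instance from the CLI might look like this (the instance, filter, and target IDs are placeholders to be replaced with your own values):

    aws ec2 create-tags --resources i-0123456789abcdef0 \
        --tags "Key=sta.coralogixstg.wpengine.com/mirror-filter-id,Value=tmf-0123456789abcdef0" \
               "Key=sta.coralogixstg.wpengine.com/mirror-target-id,Value=tmt-0123456789abcdef0"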

Pro Tip: You can use the AWS “Resource Groups & Tag Editor” to quickly assign tags to multiple instances based on arbitrary criteria.

Good luck!

How SIEM is evolving in 2020

The evolution of Security Information and Event Management (SIEM) is deeply intertwined with cloud computing, both in terms of technological breakthroughs the cloud provided and from its inherent security challenges. 

What are the Security Challenges in the Cloud?

With the rise of cloud computing, we no longer rely on long-lived resources. Ephemeral infrastructure obscures the identity of components, and even if you do have visibility, it doesn’t necessarily mean you can comprehend the meaning behind those components. In fact, with containers and functions being an integral part of modern cloud-native applications, visibility is now even harder to achieve.

Cost, particularly of network and bandwidth, also creates an unexpected security challenge. With on-premise infrastructure, there is no need to pay to mirror network traffic (which is required in order to monitor it); in contrast, public cloud providers charge a hefty sum for this functionality. As an example, in AWS you pay for each VPC mirroring session and also for the bandwidth going from the mirror to the monitoring system.

Rising logging and metric data costs put pressure on teams to collect less data, which has a negative impact on the effectiveness of a security monitoring solution, because it’s almost impossible to predict what the future needs of an investigation will be. On the other hand, the technical ability to scale properly using the cloud and ingest more data has had a tremendously positive impact. 

However, even that ended up creating analyst fatigue, with most SIEMs having low satisfaction rates due to false positives.

The scalability of the public cloud, combined with gross misconfigurations and over-exposure of resources to the internet, also widens the scope of attacks and security threats. Today, advanced persistent threats (APTs), ransomware, and geo-distributed Denial of Service (DDoS) attacks, among many other security issues, pose real peril to organizations of any size. Proper enterprise detection and response capabilities and services are vital to tackling cloud security challenges, and a good SIEM solution is one of the best weapons you can get.

SIEM solutions and their evolution

The term Security Information and Event Management (SIEM) is not new but, during the last few years, it has become incredibly popular. 

The concept behind it is rather simple yet ambitious: a SIEM solution collects and analyzes events – both in real time and by looking back at historical data – from multiple data sources within an organization to provide capabilities such as threat detection, security incident management, analytics, and overall visibility across the organization’s systems. 

SIEM-like systems have existed for several years, but it was only in the early 2000s that the term really came to life. At the time, cloud computing was practically non-existent, with only a handful of providers taking their first steps in the domain. 

The first generation of SIEM was fully on-premise, incredibly expensive, and plagued by problems with data source integrations and, above all, scalability (vertical scaling was the only way to go), leaving it unable to cope with ever-growing amounts of data.

As cloud computing grew in popularity and maturity, SIEM solutions evolved to benefit from it. The second generation of SIEM was born, and by leveraging those technology advancements – such as near-unlimited storage capacity and elastic computing resources – the scalability limitations were solved: growth could now happen horizontally, keeping pace with constant data growth. 

However, a new problem emerged: too much data. Security professionals and engineers using SIEM solutions became drowned in so much information that it was hard to find and understand what was actually relevant. A next generation of SIEM solutions started to appear, focused on operational capabilities and properly leveraging analytics and machine learning algorithms to find those needles in the haystack. 

What should you be looking for in a SIEM solution?

The new generation of SIEM effectively enables teams to gain meaningful visibility over the infrastructure and take relevant actions. 

Even people without a security background are now able to use a modern SIEM solution. The dashboards, alerts and recommended actions are now made to be comprehensive, straightforward and user friendly, while still enabling security professionals to leverage forensic capabilities within the solution if needed.

When looking for a SIEM solution, you should obviously look for one that can truly cope with data growth, and cloud-based managed solutions are naturally a good fit, but scalability by itself is not enough. Another aspect to keep in mind is data optimization capability: the ability to process log insights without actually retaining the log data, for example, is a key piece of functionality that enables massive cost savings in the long run. 

Integrations and Data Correlation

Without data, a SIEM is rendered useless, and with today’s wide technology landscape it is becoming incredibly hard to connect to all the existing data sources within an organization. Having good, readily available integration capabilities with third-party software and data sources is therefore important functionality to take into account. The value of SIEM comes from being able to correlate events (e.g. network traffic) with applications, cloud infrastructure, assets, transactions, and security feeds.

In the end, you want to get as much visibility and insight as possible into your cloud infrastructure by relying on raw traffic data (e.g. using traffic mirroring), because that enables you to detect anomalies such as advanced persistent threats (APTs) without the attacker even noticing. Raw network traffic capture and deep packet inspection is something one might easily overlook. 

Normally, web server logs wouldn’t contain the actual data (payload) sent to the server (e.g. database). For example, if someone sends a malicious query containing a SQL injection, it will go unnoticed unless your SIEM solution is capable of capturing and inspecting that raw data.

Enriching the Value of Existing Data

While the collection and correlation of the information you already have within the organization – especially network traffic events – is the most important data point for a SIEM solution, quite often that data alone is not enough to produce actionable, intelligible alerts and recommendations.

A SIEM solution should be capable of enriching the organization's data with additional relevant information. The techniques vary, but the obvious ones include extracting the geolocation of IP addresses, detecting suspicious strings that may indicate the use of Domain Generation Algorithms (DGAs), pulling public information about domain names (e.g., WHOIS data), and enriching log data with infrastructure details from the cloud provider.
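A minimal sketch of what such enrichment might look like, assuming hypothetical stub helpers (lookup_geoip, lookup_whois) in place of real GeoIP, WHOIS, and cloud-provider lookups, and a simple entropy heuristic as a crude stand-in for proper DGA detection:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Character entropy – algorithmically generated (DGA) names tend to score high."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def lookup_geoip(ip: str) -> str:
    return "unknown"   # stub – a real pipeline would query a GeoIP database

def lookup_whois(domain: str) -> dict:
    return {}          # stub – a real pipeline would query a WHOIS service

def enrich_event(event: dict) -> dict:
    """Attach enrichment fields to a raw log event."""
    domain = event.get("domain", "")
    label = domain.split(".")[0]
    # Crude DGA heuristic: a long, high-entropy leftmost label is suspicious.
    event["dga_suspect"] = len(label) > 12 and shannon_entropy(label) > 3.5
    event["geo"] = lookup_geoip(event.get("src_ip", ""))
    event["whois"] = lookup_whois(domain)
    return event

print(enrich_event({"src_ip": "198.51.100.9", "domain": "xkqjhzrpwmtlvb.com"}))
```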

A big part of that enrichment is only possible if the SIEM solution subscribes to and imports additional data sources, such as IP blacklists and third-party security feeds. These feeds often dictate the effectiveness of the SIEM solution, so having quality built-in feeds from cybersecurity vendors and other organizations is crucial.

Intelligence, Alerting and Insights

From the next generation of SIEM solutions we should expect nothing less than meaningful alerts that leave little room for interpretation. A solution that combines AI and machine learning-powered analytics with strong security knowledge will have a clear advantage.

Traditional, old-school SIEM solutions relied on static thresholds to produce alerts, which causes alert fatigue and, quite often, useless alerts and false positives. There were improvements, such as somewhat dynamic thresholds, but these only reduced false positives and still left the company exposed to attacks.

Instead, by leveraging machine learning algorithms coupled with expert security knowledge, a modern SIEM solution learns the behavior and natural patterns of systems and users and only alerts you to true causes for concern.
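The core idea can be sketched in a few lines: instead of one static threshold for everyone, compare each entity's current activity to its own historical baseline. The counts and the z-score cutoff below are illustrative only, not a recommendation.

```python
import statistics

def is_anomalous(history: list[int], current: int, z_cutoff: float = 3.0) -> bool:
    """Flag activity that deviates strongly from this entity's own baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid division by zero
    return (current - mean) / stdev > z_cutoff

build_server_history = [40, 55, 48, 60, 52]   # noisy but normal for this host
laptop_history = [0, 1, 0, 0, 2]              # nearly silent host

print(is_anomalous(build_server_history, 70))  # False – within its own baseline
print(is_anomalous(laptop_history, 70))        # True  – wildly out of character
```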

In fact, User and Entity Behaviour Analytics, or simply UEBA, is nowadays one of the must-have capabilities for a next-generation SIEM. 

UEBA, a term coined by Gartner a few years back, is the concept of detecting anomalous behavior of users and machines against a defined normal baseline. Profiling the user of a given system is incredibly helpful. For example, one could expect a machine used by a developer to execute several PowerShell commands in a given period of time, while the exact same behavior from a machine used by someone in marketing would be very suspicious.

Similarly, entity (i.e., machine) profiling is equally important. If a machine that is usually only used during working hours suddenly initiates network connections (e.g., a VPN) to a distant remote location at 4 AM, that is a strong indicator that something is wrong and should raise suspicion.
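A toy sketch of both ideas, with entirely hypothetical role baselines and thresholds, might look like this:

```python
from datetime import datetime

# Illustrative per-role baselines (made-up numbers): how much PowerShell activity
# per hour is "normal", and which hours count as working hours for that role.
ROLE_BASELINES = {
    "developer": {"max_powershell_per_hour": 30, "working_hours": range(7, 21)},
    "marketing": {"max_powershell_per_hour": 1,  "working_hours": range(8, 19)},
}

def score_event(role: str, powershell_count: int, event_time: datetime) -> list[str]:
    baseline = ROLE_BASELINES[role]
    findings = []
    if powershell_count > baseline["max_powershell_per_hour"]:
        findings.append("unusual PowerShell volume for this role")
    if event_time.hour not in baseline["working_hours"]:
        findings.append("activity outside this entity's normal hours")
    return findings

# A marketing workstation running 15 PowerShell commands at 4 AM trips both checks.
print(score_event("marketing", 15, datetime(2024, 5, 2, 4, 0)))
# The same activity from a developer machine during the day raises nothing.
print(score_event("developer", 15, datetime(2024, 5, 2, 14, 0)))
```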

Conclusion 

The evolution of SIEM solutions has been nothing short of remarkable. As mentioned above, a major driver has been cloud computing, both through the technology innovations it gave us and through the security challenges inherent to it.

As you move forward in developing your existing SIEM capabilities or adopting a whole new solution, it is important to choose one that brings true value to the organization and provides actionable, intelligible alerts and recommendations. As we covered in this article, scalability and good integration with the third-party systems and data sources your organization uses are key, as are data enrichment, intelligence, and adopting the UEBA concept combined with security expertise.

Want to read more about security? There is a lot more on our blog, including this article on AWS Security. Check it out! 

Does Complexity Equal Security?

“Teacher somewhere in India: The world you see is supported by a giant turtle.

Student: And what holds this giant turtle down?

Teacher: Another giant turtle, of course.

Student: And what is it that holds this one?

Teacher: Don’t worry – there are turtles all the way down!”

Throughout my years of experience, I have frequently encountered solutions (or at least attempted solutions) to security issues that were rather complicated and cumbersome. This has led me to wonder whether those who came up with them sincerely believed that their complexity and subtlety would improve the level of safety in their organizations.

In this article, I will attempt to support the claim that very often the opposite is true, and towards the end I will present an example of a situation where this rule is intentionally bent in order to improve the security of critical assets.

 

The Scenario

Suppose you have been tasked with protecting the building you live in. How would you start planning? Naturally, you would try to identify the most critical assets: the tenants, the letters in the mailboxes, and the bicycles locked up in the lobby. You would then try to predict the possible threats to these assets and their likelihood. Finally, you could reduce the likelihood of each threat occurring, or minimize its expected impact, through a variety of methods. What these methods have in common is that the way each one reduces risk, or the chance of a given threat materializing, is known to the person applying it.

Oftentimes, in the world of data security and cyber, those in charge of protecting systems do not understand deeply enough how a potential attacker thinks. Frequently, the result is a network that, paradoxically, is built for management and maintenance but is easy to break into.

 

A few examples of weak security:

DMZ – according to its common definition, a DMZ is a network located between the public network (usually the internet) and the corporate network. Controlled access to the servers located in it is permitted from both the internet and the corporate network, while access from the DMZ toward the internet or the corporate network is not.
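Conceptually, that rule set can be pictured as in the sketch below; the zone names and services are illustrative only.

```python
# Conceptual representation of the DMZ rules described above: traffic *to* the
# DMZ from either side can be selectively allowed, while traffic *from* the DMZ
# toward the internet or the corporate network is denied by default.
ALLOWED_FLOWS = {
    ("internet",  "dmz"): {"tcp/443"},           # public web access to DMZ servers
    ("corporate", "dmz"): {"tcp/443", "tcp/22"}, # internal access for maintenance
    # deliberately no ("dmz", "corporate") or ("dmz", "internet") entries
}

def is_allowed(src_zone: str, dst_zone: str, service: str) -> bool:
    return service in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(is_allowed("internet", "dmz", "tcp/443"))   # True
print(is_allowed("dmz", "corporate", "tcp/443"))  # False – DMZ cannot initiate inward
```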

In practice, however, many organizations end up with a number of DMZ networks and, at times, several corporate networks. As Bruce Schneier aptly put it in his excellent essay ‘The Feeling and Reality of Security’, we as human beings often respond to a perception of security rather than to security itself. The result can be a sharp increase in network complexity, and therefore in the complexity of operating and maintaining it. Such an approach demands more attention to small details on the part of network administrators – and, where there are multiple administrators, special attention to the coordination between them.

If we were to ask ourselves where human beings are weakest, near the top of the list we would find paying attention to a large number of details over time, maintaining concentration over time, and following complex processes… you can already see where this is going.

Another example: a technique frequently recommended to network administrators is to deploy two firewalls from two different manufacturers in a back-to-back configuration. The idea is that if a weakness is discovered in one manufacturer's product, the chances that the same weakness exists in the other's are rather low.

However, maintaining these firewalls requires significant resources, and the same policy needs to be applied to both of them in order to take full advantage of the setup. Because the devices come from two different manufacturers, keeping them updated and in sync becomes more complex over time.

Once again, if we ask ourselves what another commonly exploited human weakness is, one of the most prominent on the list would be a lack of consistency.

Parents of several children will recognize this: every educational principle they hold is applied to near perfection with the first child. With the arrival of a second child, that disciplined approach loosens, and by the third child much more has given way – leaving the eldest amazed at how things have changed.

In our case, when we ask the network administrator, he explains that he mainly works in a back-to-back configuration. On deeper examination, however, we may discover that he is actually using two firewalls from different manufacturers for different tasks: one handles the IPsec VPN and internet-facing services, while the other separates the network from several DMZ servers. In this way, traffic is filtered by the first firewall or by the second, but rarely by both.
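One way to picture the consistency problem is as a diff between the two rule sets: any rule enforced by only one of the firewalls is a potential gap. The sketch below uses simplified, made-up rules; in reality, devices from different vendors don't even share a rule syntax, which is precisely what makes keeping them consistent so hard.

```python
# Toy consistency check between two back-to-back firewalls. For the layered
# design to pay off, both should enforce the same policy, so rules present on
# only one of them are worth flagging.
fw_vendor_a = {
    ("any", "dmz", "tcp/443", "allow"),
    ("any", "corporate", "any", "deny"),
}
fw_vendor_b = {
    ("any", "dmz", "tcp/443", "allow"),
    ("any", "dmz", "tcp/8080", "allow"),   # only the second firewall permits this
}

only_on_a = fw_vendor_a - fw_vendor_b
only_on_b = fw_vendor_b - fw_vendor_a

for rule in sorted(only_on_a | only_on_b):
    print("policy drift:", rule)
```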

Another good example, from a different area:

Many programmers prefer to develop components on their own rather than use open-source components. In their view, you can never know whether someone has changed the publicly hosted code into something malicious. Code we control ourselves naturally feels less dangerous than code controlled by people we don't personally know. In fact, however, publicly available code is often more likely to be secure and well organized, precisely because it is open to the public and therefore more likely that someone will detect flaws in it and offer corrections.

Beyond the security risks mentioned above, proprietary code is also more likely to contain bugs, because such development requires continuous maintenance, fixing, and integration with the organization's existing components, especially when it comes to infrastructure development (queue jobs or pipeline components, for example). The “solutions” to such issues typically lead to cumbersome code that is difficult to maintain and full of bugs. In addition, the programmer who wrote the code often leaves the organization or changes roles – leaving the organization with code written by what is effectively an unknown source… which is exactly what writing proprietary code was supposed to avoid in the first place.

In many organizations, redundant communication equipment and parallel communication paths are deployed to reduce the risk of a communication failure resulting from the loss of, for example, a backbone switch.

It is worth noting that while this type of architecture significantly reduces the risk of communication loss due to equipment failure – and, if set up in an active configuration, likely also reduces the risk of an outage caused by DoS attacks – it simultaneously creates new risks: attacks on, or malfunctions in, loop-prevention mechanisms (Spanning Tree being one example).

It also results in more channels that need to be secured against eavesdropping, a higher chance that equipment failures will go undetected, and packet-based monitoring equipment that can no longer collect the information it needs easily and effectively.

To be clear, I am not saying that using redundant communication equipment to reduce the risk of outages or volumetric attacks is a bad idea. On the contrary – I am only arguing that everything should be taken into consideration when making such decisions. Communication equipment today is rather reliable, so organizations need to understand that redundancy isn't a magic solution without downsides, and that it needs to be weighed against the challenges and difficulties it introduces.

I promised that at the end of this post I would present an example of a situation where complexity is deliberately used to achieve a higher level of security. One such example, in my opinion, is obfuscation. Keep in mind that code written in languages such as C, C++, or Go is compiled into machine code, which is very difficult to reverse back into readable source, and in large projects nearly impossible. That is why much closed-source software is written in these languages: it makes it difficult for potential thieves to obtain the source code. However, for a variety of technical reasons, closed-source software often also needs to contain parts written in non-compiled or partially compiled languages (such as C# and Java), so some source code may need protection even when full compilation is not an option. The solution is the obfuscation process, which turns the code into hard-to-read code by, among other things, the following techniques (a toy example follows the list):

  • Collapsing the entire codebase, with all its classes and methods, into a single line.
  • Replacing variable, class, and function names with random strings.
  • Adding dummy code that does nothing.
  • Splitting functions into sub-functions.
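As a toy illustration of these techniques (hand-applied in Python rather than produced by a real obfuscator):

```python
# Readable original
def calculate_discount(price, customer_tier):
    if customer_tier == "gold":
        return price * 0.8
    return price

# The same logic after hand-applying the techniques above: a meaningless name,
# everything crammed onto one line, plus a decoy function that does nothing useful.
def _x1(a, b): return a * 0.8 if b == "gold" else a
def _x2(a): [a for _ in range(3)]; return None   # dummy code, never used

print(calculate_discount(100, "gold"), _x1(100, "gold"))  # both print 80.0
```

A real obfuscator applies these transformations mechanically across an entire codebase, which is what makes the result genuinely hard to read at scale.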

In conclusion, this post has shown how the fact that data security in organizations is managed by people is itself a source of complexity, often due to the biases inherent in the human brain. That complexity may contribute to the organization's data security, but at times it also undermines it. Nevertheless, I have also shown an example where complexity itself is deliberately used to improve data security.