Four Things That Make Coralogix Unique

SaaS Observability is a busy, competitive marketplace. Alas, it is also a very homogeneous industry. Vendors implement the features that have worked well for their competition, and genuine innovation is rare. At Coralogix, we have no shortage of innovation, so here are four features of Coralogix that nobody else in the observability world has.

1. Customer Support Is VERY Fast

Customer support is the difference between some interesting features and an amazing product. Every observability vendor has some form of customer support, but none of them are even close to our response time.

At Coralogix, we respond to customer queries with a median time of 19 seconds and resolve issues in under 40 minutes on average.

This is fine for now – but how will Coralogix scale this support?

We already have! Coralogix has over 2,000 customers, all of whom are getting the same level of customer support because we don’t tier our service. 1 gigabyte or 100 terabytes – everyone gets the same fantastic standard of service. Don’t believe me? Sign up for a trial account and test our service!

2. Coralogix is Built Differently

The typical flow for data ingestion follows a set of steps:

  1. Data is initially stored and indexed. 
  2. Indexed data then triggers a series of events downstream, such as dashboard updates and triggering alarms.
  3. Finally, cost optimization decisions and data transformations are made.

This flow adds latency and overhead, which slows down alarms, log ingestion, dashboard updates, and more. It also limits the decision-making capabilities of the platform: it's impossible to skip indexing and go straight to archiving because every process depends on indexed data. At Coralogix, we saw that this wouldn't work and endeavored to build our platform differently.

So how does Coralogix do it?

Coralogix leverages the Streama© architecture. Streama focuses on processing the data first and delaying storage and indexing until all the important decisions have been made. 

It is a side-effect-free architecture, meaning it is entirely horizontally scalable and adapts beautifully to meet huge daily demands. This makes the Coralogix platform dramatically more efficient.
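The ordering difference described above can be sketched in a few lines. This is a toy illustration, not the actual Streama implementation: the function names, stages, and event fields are all hypothetical.

```python
# A minimal sketch contrasting the two ingestion orders described above.
# All function and stage names here are illustrative, not Coralogix APIs.

def index_first_pipeline(event, store, alerts):
    """Typical flow: store and index first, then act on indexed data."""
    store.append(event)                      # 1. store + index (I/O happens first)
    if event["level"] == "ERROR":            # 2. downstream actions read indexed data
        alerts.append(event["msg"])
    # 3. cost/routing decisions come last, after storage was already paid for

def stream_first_pipeline(event, store, alerts):
    """Streama-style flow: analyze in-stream, decide, only then store."""
    if event["level"] == "ERROR":            # 1. alerting happens in-stream
        alerts.append(event["msg"])
    if event["level"] != "DEBUG":            # 2. routing decision before storage
        store.append(event)                  # 3. only data worth keeping is indexed

events = [
    {"level": "DEBUG", "msg": "cache hit"},
    {"level": "ERROR", "msg": "db timeout"},
]

hot_a, alerts_a = [], []
for e in events:
    index_first_pipeline(e, hot_a, alerts_a)   # everything lands in hot storage

hot, alerts = [], []
for e in events:
    stream_first_pipeline(e, hot, alerts)      # alert fires before any storage I/O

print(alerts)        # the ERROR alert fired without waiting on storage
print(len(hot_a), len(hot))  # stream-first kept the DEBUG line out of hot storage
```

The point of the sketch is only the ordering: in the second pipeline, alerting and routing decisions run before any storage write, so they cannot be slowed down by it.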

3. Archiving and Remote Query

Many observability providers allow customers to archive their data in low-cost storage, such as Amazon S3, where it is typically compressed. Customers must then rehydrate that data before they can access it.

There are some key issues with this approach:

  • Archived data is far less discoverable.
  • Historical data may be held hostage by a SaaS provider using proprietary compression.
  • Customers now have to pay again for a massive volume of data in hot storage.

So how does it work at Coralogix?

At Coralogix, we do not demand that data must be rehydrated before it can be queried. Instead, archives can be queried directly. Our remote query engine is fast: up to 5x faster than Athena, and capable of processing terabytes of data in seconds.

With support for schema-on-read and schema-on-write, Coralogix Remote Query is much more than a simple archiving solution. It's an entire data analytics platform capable of processing Lucene, SQL, and DataPrime queries.

Does Remote Query save customers money?

In summary, yes. Customers are migrating to Coralogix daily, and they constantly report cost savings. One of the most interesting behaviors in new customers is their willingness to hold less data in “frequent search.” This means customers are paying for less data in hot storage because that data is still easily and instantly accessible in the archive.

This behavior shift and our TCO Optimizer regularly drive cost savings of between 40% and 70%. Speaking of…

4. The Most Advanced Cost Optimization on the Market

Most observability providers have a tiered solution, especially regarding cost optimization. Spending enough money unlocks certain features, like tiered storage. Our competitors need to gatekeep their cost optimization features because they are not architected for this type of early decision-making in the data process. This means they can only afford to optimize their biggest customers. 

Coralogix is Perfect for the Cost Optimization Challenge

Our Streama© architecture means we can make very early decisions, long before storage and indexing. This allows us to make cost-optimization decisions for all of our customers. Whether it’s our Frequent Search, Monitoring, or Compliance use case, Coralogix and our unique architecture regularly drive down customer costs.

More than this, we also have features that allow our customers to transform their data on the fly, keeping only the necessary information and dropping everything they don't need. For example, Logs2Metrics allows our customers to transform their expensive logs into optimized metrics that can be retained for far longer at a fraction of the cost.
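The logs-to-metrics idea can be sketched in a few lines: verbose log events are collapsed into compact aggregates that are far cheaper to retain long-term. The field names and the aggregation shape below are illustrative assumptions, not the actual Logs2Metrics feature.

```python
# A rough sketch of logs-to-metrics: collapse verbose log events into
# compact counters/aggregates. Names and fields are made up for illustration.
from collections import defaultdict

logs = [
    {"route": "/checkout", "status": 200, "latency_ms": 35},
    {"route": "/checkout", "status": 500, "latency_ms": 900},
    {"route": "/checkout", "status": 200, "latency_ms": 41},
]

# One small metrics record per route replaces many full log events.
metrics = defaultdict(lambda: {"count": 0, "errors": 0, "latency_sum": 0})
for event in logs:
    m = metrics[event["route"]]
    m["count"] += 1
    m["errors"] += event["status"] >= 500   # bool adds as 0 or 1
    m["latency_sum"] += event["latency_ms"]

m = metrics["/checkout"]
print(m["count"], m["errors"], m["latency_sum"] / m["count"])
```

Three full log events became one small record that still answers the questions most dashboards ask: request count, error count, and average latency.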

Coralogix is Different in All the Best Ways

Coralogix is more than just a full-stack observability platform with some interesting tools. It's a revolutionary product that will scale to meet customer demands—every time. Our features, coupled with unprecedented customer support and incredible cost optimization, make us one of the few observability providers that will help you grow, help you optimize, and save you money in the process.

What’s Missing From Almost Every Alerting Solution in 2022?

Alerting has been a fundamental part of operations strategy for the past decade. An entire industry is built around delivering valuable, actionable alerts to engineers and customers as quickly as possible. We will explore what’s missing from your alerts and how Coralogix Flow Alerts solve a fundamental problem in the observability industry. 

What does everyone want from their alerts?

When engineers build their alerts, they focus on making them as useful as possible, but how do we define useful? While this is a complicated question, we can break the utility of an alert into a few easy points:

  • Actionable: The information that the alert gives you is usable, and tells you everything you need to know to respond to the situation, with minimal work on your part to piece together what is going on.
  • Accurate: Your alerts trigger in the correct situation, and they contain correct information.
  • Timely: Your alerts deliver the information you need as soon as you need it.

For many engineers, achieving these three qualities is a never-ending battle. Engineers are constantly chasing the smallest valuable set of alerts they can possibly have, to minimize noise and maximize uptime.

However, one key feature is missing from almost every alerting provider, and it goes right to the heart of observability in 2022.

The biggest blocker to the next stage of alerting

If we host our own solution, perhaps with an ELK stack and Prometheus, as is so common in the industry, we are left with some natural alerting options. Alertmanager integrates nicely with Prometheus, and Kibana comes with its own alerting functionality, so you have everything you need, right? Not quite.

Your observability data has been siloed into two specific datastores: Elasticsearch and Prometheus. As soon as you do this, you introduce an architectural complication.

How would you write an alert around your logs AND your metrics?

Despite how simple this sounds, the vast majority of SaaS observability providers and open-source tools do not support it. Metrics, logs, and traces are treated as separate pillars, and that separation filters down into our alerting strategies.

It isn’t clear how this came about, but you only need to look at the troubleshooting practices of any engineer to work out that it’s suboptimal. As soon as a metric alert fires, the engineer looks at the logs to verify. As soon as a log alert fires, the engineer looks at the metrics to better understand. It’s clear that all of this data is used for the same purpose, but we silo it off into separate storage solutions and, in doing so, make our life more difficult.

So what can we do?

The answer is twofold. Firstly, we need to bring all of our observability data into a single place, to build a single pane of glass for our system. Aside from alerting, this makes monitoring and general querying more straightforward. It removes the complex learning curve associated with many open-source tools, which speeds up the time it takes for engineers to become familiar with their chosen approach to observability. However, getting data into one place isn’t enough. Your chosen platform needs to support holistic alerting. And there is only one provider on the market – Coralogix.

Flow alerts cross the barrier between logs, metrics, and traces

There are many SaaS observability providers out there that will consume your logs, metrics, and traces, but none of them can tie all of this data together into a single, cohesive alert that completely describes an outage.

Flow alerts enable you to view your entire system holistically, without being constrained to a single data type. This brings some key benefits that directly address the biggest limitations in alerting:

  • Accurate: With flow alerts, you can track activity across all of your observability data, enabling you to outline precisely the conditions for an incident. This reduces noise because your alerts aren’t too sensitive or based on only part of the data. They’re perfectly calibrated to the behavior of your system.
  • Actionable: Flow alerts tell you everything that has happened, leading up to an incident, not just the incident itself. This gives you all of the information you need, in one place, to remedy an outage, without hunting for associated data in your logs or metrics. 
  • Timely: Flow alerts are processed within our Streama technology, meaning your alerts are processed and actioned in-stream, rather than waiting for expensive I/O and database operations to complete. 
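The benefits above rest on one mechanism: a flow alert fires only when a sequence of conditions, drawn from different telemetry types, occurs in order. Here is a simplified sketch of that idea; the stage definitions and event shapes are hypothetical, not the Coralogix Flow Alerts configuration.

```python
# A simplified state machine for a "flow"-style alert: it fires only when a
# sequence of conditions across different telemetry types occurs in order.
# Stage definitions and event tuples are made up for illustration.

stages = [
    lambda e: e == ("metric", "latency_high"),   # stage 1: metric condition
    lambda e: e == ("log", "retry_storm"),       # stage 2: log condition
    lambda e: e == ("trace", "span_errors"),     # stage 3: trace condition
]

def flow_alert(events):
    stage = 0
    for e in events:
        if stage < len(stages) and stages[stage](e):
            stage += 1
    return stage == len(stages)  # fire only if the whole story unfolded in order

events = [
    ("metric", "latency_high"),
    ("log", "retry_storm"),
    ("trace", "span_errors"),
]
print(flow_alert(events))                  # True: the full sequence occurred
print(flow_alert(list(reversed(events))))  # False: same events, wrong order
```

The ordering requirement is what makes the alert a narrative rather than a threshold: the same three events in a different order do not fire, because they do not tell the same story about the incident.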

Coralogix’s Streama Technology: The Ultimate Party Bouncer

Coralogix is not just another monitoring or observability platform. We’re using our unique Streama technology to analyze data without needing to index it so teams can get deeper insights and long-term trend analysis without relying on expensive storage. 

So you’re thinking to yourself, “that’s great, but what does that mean, and how does it help me?” To better understand how Streama improves monitoring and troubleshooting capabilities, let’s have some fun and explore it through an analogy that includes a party, the police, and a murder!

Grab your notebook and pen, and get ready to take notes. 

Not just another party 

Imagine that your event and metric data are people, and the system you use to store that data is a party. To ensure that everyone is happy and stays safe, you need a system to monitor who’s going in, help you investigate, and remediate any dangerous situations that may come up. 

For your event data, that would be some kind of log monitoring platform. For the party, that would be our bouncer.

Now, most bouncers (and observability tools) are concerned primarily with volume. They’re doing simple ticket checks at the door, counting people as they come in, and blocking anyone under age from entering. 

As the party gets more lively, people continue coming in and out, and everyone’s having a great time. But imagine what happens if, all of a sudden, the police show up and announce there’s been a murder. Well, shit, there goes your night! Don’t worry, stay calm – the bouncer is here to help investigate. 

They’ve seen every person who has entered the room and can help the police, right?

Why can’t typical bouncers keep up?

Nothing ever goes as it should; this much we know. Crimes are committed, and applications have bugs. The key, then, is how we respond when something goes wrong and what information we have at our disposal to investigate.

Suppose a typical bouncer is monitoring our party, and they’re just counting people as they come in and doing a simple ID check to make sure they’re old enough to enter. In that case, the investigation process starts only once the police show up. At this point, readily-available information is sparse. You have all of these people inside, but you don’t have a good idea of who they are.

This is the biggest downfall of traditional monitoring tools. All data is collected in the same way, as though it carries the same potential value, and then investigating anything within the data set is expensive. 

The police may know that the suspect is wearing a black hat, but they still need to go in and start manually searching for anyone matching that description. It takes a lot of time and can only be done using the people (i.e., data) still in the party (i.e., data store). 

Without a good way to analyze the characteristics of people as they’re going in and out, our everyday bouncer will have to go inside and count everyone wearing a black hat one by one. As we can all guess, this will take an immense amount of time and resources to get the job done. Plus, if the suspect has already left, it’s almost like they were never there.

What if the police come back to the bouncer with more information about the suspect? It turns out that in addition to the black hat, they’re also wearing green shoes. With this new information, this bouncer has to go back into the party and count all the people with black hats AND green shoes. It will take him just as long, if not longer, to count all of those people again.

What makes Streama the ultimate bouncer?

Luckily, Streama is the ultimate bouncer and uses some cool tech to solve this problem.

Basically, Streama technology differentiates Coralogix from the rest of the bunch because it’s a bouncer that can comprehensively analyze the people as they go into the party. For the sake of our analogy, let’s say this bouncer has Streama “glasses,” which allow him to analyze and store details about each person as they come in.

Then, when the police approach the bouncer and ask for help, he can already provide some information about the people at the party without needing to physically go inside and start looking around.

If the police tell the bouncer they know the murderer had on a black hat, he can already tell them that X number of people wearing a black hat went into the party. Even better, he can tell them that without those people needing to be inside still! If the police come again with more information, the bouncer can again give them the information they need quite easily.  

In some cases, the bouncer won't have the exact information needed by the police. That's fine; they can still go inside to investigate further if required. By monitoring the people as they go in, though, the bouncer and the police can save a significant amount of time, money, and resources in most situations.
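In code, the "Streama glasses" amount to summarizing each event's attributes at ingest time, so later questions are answered from the summary instead of rescanning stored data. This is a toy version of that idea, staying inside the analogy; the attribute names and helpers are not a real schema or API.

```python
# A toy version of the "Streama glasses": record attribute combinations as
# events are ingested, so questions like "how many black hats?" or "black hat
# AND green shoes?" are answered from the stream-time summary -- no rescan of
# the stored data. Attributes here belong to the analogy, not a real schema.
from collections import Counter

seen = Counter()

def ingest(person):
    # Summarize at the door: one counter entry per attribute combination.
    seen[frozenset(person)] += 1

def count_matching(*attributes):
    # Answer from the summary: sum every combination containing the attributes.
    want = set(attributes)
    return sum(n for combo, n in seen.items() if want <= combo)

for p in [{"black hat"},
          {"black hat", "green shoes"},
          {"red hat", "green shoes"},
          {"black hat", "green shoes"}]:
    ingest(p)

print(count_matching("black hat"))                 # 3
print(count_matching("black hat", "green shoes"))  # 2
```

Note that the answers remain correct even after the people (events) have left the party (aged out of storage): the summary was built in-stream, so nothing needs to still be inside to be counted.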

Additional benefits of Streama

Since you are getting the information about the data as it’s ingested, it doesn’t have to be kept in expensive hot storage just in case it’s needed someday. With Coralogix, you can choose to only send critical data to hot storage (and with a shorter retention period) since you get the insights you need in real-time and can always query data directly from your archive.

There are many more benefits to monitoring data in-stream aside from the incredible cost savings. However, that is a big one.

Data enrichment, dynamic alerting, metric generation from log data, data clustering, and anomaly detection occur without depending on hot storage. This gives better insights at a fraction of the cost and enables better performance and scaling capabilities. 

Whether you’re monitoring an application or throwing a huge party, you definitely want to make sure Coralogix is on your list!

We’re Thrilled To Share – Coralogix has Received AWS DevOps Competency

At Coralogix, we believe in giving companies the best of the best – that's what we strive for with everything we do. With that, we are happy to share that Coralogix has received the AWS DevOps Competency!

Coralogix started working with AWS on observability in 2017, and our partnership has grown immensely in the years since. So, what is our new AWS DevOps Competency status, and what does it mean for you?

As stated by Ariel Assaraf, CEO of Coralogix, “The dramatic surge in data volume is forcing companies to choose between cost and coverage. Achieving the AWS DevOps Competency designation validates our stateful streaming analytics technology, which combines real-time data analysis with stateful analytics—and decouples that from storage—to reduce costs and improve performance for customers.”

This designation recognizes that Coralogix’s stateful Streama© technology helps modern engineering teams gain real-time insights and trend analysis for logs, metrics, and security data with no reliance on storage or indexing.

What Does AWS Competency Mean for You?

What does DevOps Competency mean? AWS enables scalable, flexible, and cost-effective solutions for everyone from startups to global enterprises. To support the seamless integration and deployment of these solutions, AWS established the AWS Competency Program to help customers identify Consulting and Technology APN Partners with deep industry experience and expertise.

What’s the benefit for you, as an engineer? Building a sustainable, durable, and secured cloud system isn’t easy and requires extensive skills and a sustained effort. Knowing that a vendor was audited by AWS helps to know they are investing in the right skills and effort to provide the best and most secure possible service for you. So you can work with a peace-of-mind knowing that you’re in good hands. 

For more information, you can read the full press release here. You can also learn more about AWS solutions within Coralogix here.

Announcing our $55M Series C Round Funding to further our storage-less data vision

It’s been an exciting year here at Coralogix. We welcomed our 2,000th customer (more than doubling our customer base) and almost tripled our revenue. We also announced our Series B Funding and started to scale our R&D teams and go-to-market strategy.

Most exciting, though, was last September when we launched Streamaⓒ – our stateful streaming analytics pipeline.

And the excitement continues! We just raised $55 million for our Series C Funding to support the expansion of our stateful streaming analytics platform and further our storage-less vision.

Streamaⓒ technology

Streamaⓒ technology allows us to analyze your logs, metrics, and security traffic in real-time and provide long-term trend analysis without storing any of the data. 

The initial idea behind Streamaⓒ was to support our TCO Optimizer feature, which enables our customers to define how their data is routed and stored according to use case and importance.

“We started with 3 very big international clients spending half a million dollars a year for our service, and we reduced that to less than $200,000. So, we created massive savings, and that allowed them to scale,” CEO Ariel Assaraf explains. “Because they already had that budget, they could stop thinking about whether or not to connect new data. They just pour in a lot more data and get better observability.”

Then we saw that the potential of Streama goes far beyond simply reducing costs. We are addressing all of the major challenges brought by the explosive growth of data. When costs are reduced, scale and coverage are more attainable. Plus, Streamaⓒ depends only on CPU and automatically scales up and down to match your requirements, so we can deliver top-tier performance in the most demanding environments.

What’s next for Coralogix

Moving forward, our goal is to advance our storage-less vision and use Streamaⓒ as the foundation for what we call the data-less data platform.

There are two sides to this vision. On the one hand, we have our analytics pipeline, which provides all of the real-time and long-term insights you need to monitor your applications and systems without storing the data. On the other hand, we're providing powerful query capabilities for archived data that has never been indexed.

So, imagine a world where you can send all of your data for analysis without thinking about quotas, without thinking about retention, without thinking about throttling. Get best-in-class analytics with long-term trends and be able to query all the data from your own storage, without any issues of privacy or compliance.

With this new round of funding, we’re planning to aggressively scale our R&D teams and expand our platform to support the future of data.

Thank you to our investors!

We’re proud to partner with Greenfield Partners, who led this round, along with support from our existing investors at Red Dot Capital Partners, StageOne Ventures, Eyal Ofer’s – O.G. Tech, Janvest Capital Partners, Maor ventures, and 2B Angels.

We have a lot of ambitious goals that we expect to meet in the next few quarters, and this funding will help us get there even faster.

Learn more about Coralogix: https://coralogixstg.wpengine.com/