Loggly Alternatives


If you’re looking to gain more than just log management from your logging provider, Loggly isn’t the only company you should be considering. Here are a few pointers that may lead you to take some Loggly alternatives into account.

As one of the best-known log monitoring tools in the industry, Loggly seems to cover practically everything your company could possibly need, but does it? Loggly focuses primarily on collecting logs from cloud-based services using existing open standards like Syslog and HTTP instead of forcing you to install heavy, proprietary collection agents.

Small companies with modest needs (up to ~1M logs per day) will be able to enjoy the services Loggly offers, as long as they ingest a small amount of data per day and still have the majority of their code written by their core team. Larger companies, however, need to deal with huge amounts of log data and simply don’t know what to search, where to search, and most of all, when to search.

We, however, offer companies a set of tools that enables fast production analysis instead of relying on constant log searching.

Making Big Data Small

Coralogix is able to automatically turn millions of log entries into a set of log patterns and provide companies with all they need in order to analyze their production problems and solve them in minutes.

User Defined Alerts Are Not Enough

Unlike Loggly, which only alerts on user-defined errors, Coralogix does that and also alerts users to anomalies that the system detects automatically, without any pre-definitions.

How Long Does It Take to Retrieve a Query?

While Loggly users, unfortunately, need to wait minutes for a week’s worth of logs, Coralogix users receive that information in a whopping 6-7 seconds. In fact, Coralogix is in a league of its own when it comes to query speed: unlike Loggly, Coralogix is able to query half a billion logs in a mere 30 seconds.

Understanding Your Flow

Competing companies aren’t supplying their users with the means to understand their log data properly. Coralogix, on the other hand, automatically maps the software flows, detects production problems, and delivers pinpoint insights by automatically learning the system’s log sequences to detect software errors in real time. The algorithm identifies which logs arrive together and in what arrival ratio, and alerts the user when this ratio is broken.

Alerting to New Errors

You may notice certain error and critical logs and ignore them or neglect to take care of them. The problem with such known errors is that they are the ones that eventually cause the most damage to your system. Coralogix detects any anomaly in your logs and separates different logs into their original templates in order to alert users when a new error or critical log occurs in their system. All users have to do is wait for the daily digest email to understand and tackle the detected errors. You can then view the logs one by one or all together in Coralogix’s aggregated viewer – Loggregation.

In short, when it comes to the time it takes to request a query or to a system that is able to alert to anomalies that weren’t defined by the user, Coralogix definitely does it best.

If you too would like to feel the Coralogix difference, check out our pricing ranges and start now for free. We know you won’t regret it.

The Dark Side of Microservices

If you build it right and don’t just use it as a buzzword, I believe we can all agree that microservices are great and that they make your code & production environment simpler to handle.

Here are some of the clear benefits of a microservice architecture:

  • Software language agnostic – each service can be written in the most suitable language for the specific task you need from that service.
  • Your system is designed to scale with ease – many services with the same purpose can perform the same action concurrently without interrupting one another. So if you are using Docker with auto-discovery, for example, you can automatically scale your system according to your needs.
  • If a service crashes for any reason, others can replace it automatically, making your system robust and highly available.
  • Failure of one microservice does not affect another microservice, making your system highly decoupled and modular.

Despite all of the nice things written above, microservices have a very dark side to them, which, if not checked and managed, can wreak havoc on your production system and not only crash it but also exhaust your cloud resources. Here is an example of such a scenario:

Let’s assume that one of your microservices is responsible for pulling messages from your Kafka queue, processing each message, and inserting it into your Elasticsearch. Your service runs on Docker, is fully scaled, can work in parallel with many other services of the same kind, and is managed by an auto-discovery service.

Now let’s assume that you have a bug in your service which only happens when a certain type of data resides in the pulled messages. That bug causes the service to work slower and slower as time goes by (not due to a memory leak, but because the calculations required by that unexpected type of data are heavier), so your service pulls fewer messages each second. And if that’s not enough, your auto-discovery cannot communicate with this service because the CPU is so highly utilized that the handshake with the machine takes too long and fails after a 5-second timeout.

What happens next is not pleasant: your monitoring system will probably detect the backlog in your Kafka and alert your auto-scale system, which will start opening more and more microservices on machines your auto-discovery has no clue about. This will keep going until you run out of allocated machines or (if you’re smart) reach the limit you pre-defined in your auto-scale service. By the time you figure out what’s wrong and rush to shut down the zombie machines from the cloud console, it will probably be too late.

If you’ve seen The Walking Dead, this case is not far from it: you have dozens of zombie machines running hundreds of microservices. The only pieces of information you have left are the log records you inserted in your service, and if you did not insert solid logs, you are probably in a world of pain.

The moral of the story is that it is vital to log your microservice well and to make sure that your logs provide you with all the data you need to control it.

Here are a few simple rules of thumb to help you prevent catastrophe:

  1. Define periodic health checks to verify that your microservices work properly; if the results indicate a critical issue, you might want to pull the machines out of service automatically.
  2. Define a limit on your auto scale; that way, even if all other protections fail, you still don’t wake up to a huge bill.
  3. Insert a log entry after each important task that your service does. If that task happens hundreds of times a second, add a log entry after finishing a bulk of these tasks (including the bulk’s metadata, for example, the number of tasks it performed and their outcome).
  4. Make sure you add a GUID to your host names; that way your services will log with that GUID, allowing you to quickly distinguish between the different hosts and identify the root cause of potential disasters.
  5. Insert log entries which count the time it took to process each task your microservice did, and monitor this number; you want to know that your service is slowing down sooner rather than later.
  6. Do not only log exceptions; log error cases which do not result in exceptions, for example, when you’ve received a message from your Kafka which does not contain the data you anticipated. (A small sketch of rules 3-6 follows this list.)
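
To make these rules more concrete, here is a minimal Python sketch of rules 3-6, assuming a hypothetical worker that processes messages in bulks; the names (handle_bulk, process_message) are illustrative and not part of any specific framework.

```python
import logging
import socket
import time
import uuid

# Rule 4: tag every log line with a per-host GUID so records from
# different instances can be told apart.
HOST_GUID = f"{socket.gethostname()}-{uuid.uuid4().hex[:8]}"
logger = logging.getLogger("worker")
logging.basicConfig(
    level=logging.INFO,
    format=f"%(asctime)s %(levelname)s host={HOST_GUID} %(message)s",
)

BULK_SIZE = 500  # rule 3: log once per bulk instead of once per message


def process_message(msg):
    # placeholder for the real work (e.g., insert into Elasticsearch)
    pass


def handle_bulk(messages):
    """Process a bulk of messages and log its metadata and timing."""
    start = time.monotonic()
    ok, failed = 0, 0
    for msg in messages:
        try:
            process_message(msg)  # hypothetical business logic
            ok += 1
        except Exception:
            # Rule 6: log error cases, not only uncaught exceptions.
            logger.error("failed to process message: %r", msg)
            failed += 1
    elapsed = time.monotonic() - start
    # Rules 3 and 5: one summary entry per bulk, including duration,
    # so a slowdown is visible long before the queue backs up.
    logger.info("bulk done: size=%d ok=%d failed=%d duration_sec=%.2f",
                len(messages), ok, failed, elapsed)
```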

To summarize, microservices are an excellent piece of architecture, and if you build it right, this design will give your code flexibility, durability, and modularity. BUT you need to take into account that a microservice can go rogue, and plan for that day.

For questions and feedback don’t hesitate to email us at info@coralogixstg.wpengine.com

Why Are we Analyzing Only 1% of Our Logs?

Here at Coralogix, one of our main goals is what we call “Make Big Data small”: the idea is to allow our users to view a narrow set of log patterns instead of just indexing terabytes of data and providing textual search capabilities.

Why?

Because as cloud computing and open source lowered the bar for companies to create distributed systems at high scale, the amount of data they have to deal with has become overwhelming.

What most companies do is collect all their log data and index it, but they have no idea what to search, where to search, and most of all, when to search.

To examine our assumption, we did a little research on the data we have from 50 customers who generate over 2TB of log data on a daily basis. The disturbing lessons we’ve learned are below.

One definition I have to make before we start is “Log Template”.

What we call a log template is basically similar to the printf you have in your code, meaning it contains the log’s words split into constants and variables.

For instance if I have these 3 log entries:

  • User Ariel logged in at 13:24:45 from IP 1.1.1.1
  • User James logged in at 12:44:12 from IP 2.2.2.2
  • User John logged in at 09:55:27 from IP 3.3.3.3

Then the template for these entries would be:

  • User * logged in at * from *

And I can say that this template arrived 3 times.
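
To make the idea concrete, here is a rough Python sketch of template extraction, masking a few obvious variable types with hand-written regexes. This is an illustration only, not Coralogix’s actual clustering algorithm, which handles arbitrary formats without predefined rules.

```python
import re
from collections import Counter

# Crude masking rules for illustration only; a production clusterer is
# far more sophisticated than a pair of regexes.
MASKS = [
    (re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"), "*"),   # IP addresses
    (re.compile(r"\b\d{2}:\d{2}:\d{2}\b"), "*"),       # timestamps
    (re.compile(r"\bUser \w+\b"), "User *"),           # user names
]


def to_template(line: str) -> str:
    """Replace variable parts of a log line, keeping the constant skeleton."""
    for pattern, replacement in MASKS:
        line = pattern.sub(replacement, line)
    return line


logs = [
    "User Ariel logged in at 13:24:45 from IP 1.1.1.1",
    "User James logged in at 12:44:12 from IP 2.2.2.2",
    "User John logged in at 09:55:27 from IP 3.3.3.3",
]

templates = Counter(to_template(line) for line in logs)
for template, count in templates.items():
    print(f"{count}x  {template}")
# prints: 3x  User * logged in at * from IP *
```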

Now that we are on the same page, here’s what we’ve learned from clustering daily terabytes of log data for over 50 customers:

1) 70% of the log data is generated by just 5 different log templates. This demonstrates how our software usually has one main flow which is frequently used, while other features and capabilities are rarely in use. We kept going and found out that 95%(!) of your log data is generated by just 10 templates, meaning you are basically analyzing the same log records over and over while knowing almost nothing about the rest of your log data.

2) To support fact #1, we found out that over 90% of the queries run by users are on the top 5 templates. These statistics show how blinded we are by these templates’ dominance that we simply ignore other events.

3) 97% of your exception instances are generated by less than 3% of the exception types you have in your code. You know those “known errors” that always show up? They create so much noise that we fail to see the real errors in our systems.

4) 0.7% of your templates are of level Error and Critical, and they generate 0.025% of your traffic. This demonstrates just how easy it is to miss these errors, not to mention that most of them are generated by the same exceptions.

5) Templates that arrive less than 10 times a day are almost never queried (1 query every 20 days on average, across all 50 customers combined!). This is an amazing detail that shows how companies keep missing those rare events and only encounter them once they become a widespread problem.

Conclusions

The facts above show how our current approach to logging is shaped by the variance of the logs rather than by our own needs. We react to our data instead of proactively analyzing it according to our needs, because the masses of data are so overwhelming that we can’t see past them.

By automatically clustering log data back to its original structure, we allow our users to view all of their log data in a fast and simple way and quickly identify suspicious events that they might ignore otherwise.

Learn more about Coralogix Loggregation & Flow anomaly detection and how it can help you detect and solve your production problems faster.

How to Leverage Your Log Analytics

Here at Coralogix, we’ve mentioned before how log analytics has increasingly been gaining attention and is creating a way to cut back on software maintenance costs. In this post, we want to expand on the ways that log analytics can benefit developers.

Let’s look at three facts about the growing trends of general Big Data Analytics for 2015:

1) Analytics and Business Intelligence Standardization is declining
Companies using Big Data Analytics tools are straying away from the known management tools, with standardization dropping from 35% in 2014 to 28% in 2015.

2) Cloud-based data warehousing is on the rise
Cloud-based data warehousing services showed the biggest increase, jumping from 24% in 2014 to 34% in 2015.

3) Real-time technology is seeing real gains
Real-time capabilities are advancing with the development of complex event processing (CEP), mainly in the realms of financial services, security, telco, and intelligence agencies.

Taking these factors into consideration, it is evident that log analytics is going places this year, and developers can benefit greatly from it. Not only will these tools and services help enrich the platform for logging, but they can also help identify bugs quicker. With that in mind, we’ve come up with eight ways developers can gain greater insights and advantages from using log analytics.

1) Log Trigger Events: Log critical actions, such as user activity. This data can later be related to user cohorts for marketing, product management, and growth activity purposes, but it also helps developers optimize user flow.

2) Run Duration for Bigger Processes: For larger, intensive, or lengthy application processes, log start and stop times. These can be correlated to pegged servers, outages, or load issues if they arise. This data can help identify areas where the product is slow to respond, or cases where application processes affect infrastructure.

3) Log Deprecated Functionality: When deprecated functionality is called, log it so the team can be alerted. This will help QA and Development know when other team members are using outdated classes by accident (a small decorator sketch follows point 5 below).

4) Log Flag Changes: Some developers like to use flags in their code to indicate different states. Create a function for changing flags so that every change is logged and it is clear when flags are off or on. This lets you respond or alert when a particular flag that should have been turned off is still on in production.

5) Log Component Usage: Applications are built on a pile of 3rd-party component libraries, and these libraries pose a risk when they are not monitored. For all the components used, it’s worth logging their usage: component name and version. The downside is that this can be very chatty if not designed well, and it has to be backed by a real component-tracking process that acts on what is collected and the insights drawn from it.
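
Here is a small Python sketch of point 3: a decorator that logs a warning every time a deprecated function is called. The function names are hypothetical and only serve to illustrate the pattern.

```python
import functools
import logging

logger = logging.getLogger("deprecation")


def log_deprecated(replacement=None):
    """Decorator: emit a WARNING every time a deprecated function is called."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            hint = f", use {replacement} instead" if replacement else ""
            logger.warning("deprecated call: %s%s", func.__qualname__, hint)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@log_deprecated(replacement="fetch_user_v2")
def fetch_user(user_id):
    # hypothetical legacy function kept for backward compatibility
    ...
```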

The last three benefit the developers and their teams:

1) Standardize your log severity: an organization needs a standard for log severity, so that when a log is “Critical”/“Error” it is clear to the developer that there is a problem; and if, on the other hand, you decide to log everything above Information level, you want Information not to contain Debug-level logs (a minimal configuration sketch follows point 3 below).

2) Aggregate your log data: each log resembles an action on your system, but the same log created by the same line of code can be generated millions of times a day with different variables (e.g., ‘User ID xxxxxxx logged in’). To see the logs that matter to you and not get lost in millions of identical logs, you need an easy option to aggregate logs by their log templates.

3) Decide on an alert policy: which events should trigger alerts, who should be notified, and which alerts require an Email/SMS notification. All of these will make your life much easier and clearer, with a focal point for each issue that is brought up.
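
As a minimal illustration of point 1, here is a sketch using Python’s standard logging module: one shared configuration so every service logs the same severities in the same format. The format and threshold shown are just examples of a possible organization-wide standard.

```python
import logging


def configure_logging(level=logging.INFO):
    """One shared setup: consistent severities and format across all services.

    Keeping the threshold at INFO means DEBUG noise never reaches the
    aggregated stream, while ERROR/CRITICAL always signal a real problem.
    """
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
    )


configure_logging()
logging.getLogger("billing").info("User ID 12345 logged in")        # routine event
logging.getLogger("billing").error("payment gateway unreachable")   # needs action
```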

Coralogix brings a new actionable approach to the Log Analytics & Management. By analyzing every log that is generated by the software, Coralogix learns the system’s flow, automatically detects broken Log patterns, and provides their root cause in a single click. Coralogix finds the patterns of broken vs. correct behavior and returns a few log entries, which is a lot more manageable than the millions of log entries that Log Analytics users have to deal with today.

The 5 quality pillars any R&D must own

In today’s competitive world of software development, one of the key factors for a company’s success is the R&D’s capability of releasing and monitoring multiple high-quality versions in order to provide top-notch services to its customers.

With software systems becoming more complex and distributed, the need for version delivery and software monitoring grew. This caused an explosion of new startup companies targeting these markets. With these new companies, came various solutions for the different stages of the development and delivery process of software systems. These solutions attempt to enhance and make these stages more efficient.

Roughly, this quality cycle can be divided into 5 main sections. Below, we’ll walk you through the essence of each stage and share a few tools that stand out.

  1. Static code analysis: A static code analysis tool checks the source code in order to verify compliance with the predefined rules and best practices set by the tool provider per coding language. Static code analysis is great for preemptively detecting code defects and inconsistencies especially in large code bases with newly created and legacy code.

Code analysis tools worth mentioning:

  • Klocwork Insight: a thorough static code analysis solution for various purposes that detects vulnerabilities and defects on the fly while you code.
  • Checkmarx: A security-oriented code analysis tool that helps identify, track, and fix technical and logical security flaws from the root. Checkmarx provides broad support for various coding languages.
  • Parasoft: A strong rule-based code analysis tool, with integrations to automation and load testing, which allows building an E2E flow – supports C#, Java, and C++.
  2. QA automation: QA automation is a powerful supplement to manual QA testing. It ensures a thorough test procedure and verifies behavioral consistencies. QA automation tools are able to play back pre-recorded actions, compare the results to the defined expected behavior, and report success or failure to the QA engineer. After a small investment of creating these automated tests, they can be easily repeated and perform testing tasks that are impossible for manual testers.

QA automation tools worth mentioning:

  • Ranorex: A testing platform for web, desktop, and mobile applications, capable of UI tests, data-driven tests, and functional tests. Provides a unified dashboard with the test results.
  • LoadRunner: One of the first in the market – a performance-oriented automation tool that detects bottlenecks across the E2E flow.
  • Selenium: An open source project for browser automation, which provides various adjustable tools for recording and executing tests on web platforms.

As Selenium alone is not considered very intuitive, we recommend trying browser automation tools that rely on the Selenium engine, for example the great https://www.testim.io solution. (A bare-bones Selenium sketch is shown below.)
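
For a sense of what browser automation looks like in practice, here is a bare-bones Selenium sketch in Python; the URL and element IDs are hypothetical, and it assumes a local Chrome driver is installed.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Open a page, fill a form, assert the outcome. The URL and element IDs
# below are placeholders for illustration only.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title, "login flow failed"
finally:
    driver.quit()
```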


  3. Continuous delivery: Continuous delivery is an approach aimed at helping large organizations become as lean and agile as startups and adapt their software in line with user feedback. By releasing multiple small versions, companies can improve their quality and customer satisfaction dramatically and, if needed, even shift business focus.

Continuous delivery tools worth mentioning:

  • Jenkins: a continuous delivery enabler that provides both a continuous integration system, which allows developers to easily integrate changes into their project, and monitoring capabilities for externally running jobs (even on remote machines).
  • Travis: A GitHub-oriented solution; although not as flexible as Jenkins, it relies on common practices to enable a very simple integration (one file in the root of your code), which makes continuous integration and delivery almost seamless. If you are not using GitHub, Travis is probably not the tool for you.
  • Bamboo: a paid service (unlike the other tools listed in this category), which provides a very simple integration and user interface, but lacks the flexibility and plug-in variety of Jenkins.


  4. APM – Application performance monitoring: Two critical parts of the quality chain are application performance and application availability. APM tools integrate with software systems and understand the system’s infrastructure to detect availability issues, slow response times, and other performance-oriented problems.

APM tools worth mentioning:

  • New Relic: New Relic provides a deep view into the system, with a real-time performance and availability overview, a real-time alerting mechanism, and an API for custom performance monitoring apps.
  • Datadog: Datadog provides a search and visualization mechanism that allows the user to drill into the heaps of data Datadog collects. In addition, Datadog also offers a way to overlay data from different sources in order to find correlations between separate components (e.g., web server & web application).
  • Raygun: Raygun is a more disaster-oriented tool. Once an application crashes, Raygun collects the exceptions and crash reports and alerts its users.
  5. Log Analytics & Management: This is an emerging market with huge demand. Log Analytics & Log Management tools enable development, QA, and operations to get a hold of their log data and understand their system’s flow from it. Moreover, Log Analytics & Management tools are crucial for monitoring production systems and understanding the root cause of software problems.

Log Analytics & Management tools worth mentioning:

  • Splunk: Splunk is a powerful tool that can index and visualize an entire organization, from various IT services to any machine data emitted from its customers’ servers. Splunk works in an unstructured manner and is agnostic to the data that is being sent to it. Splunk offers both SaaS (Splunk Storm) and on-premise services.
  • Sumo Logic: Sumo Logic offers a Log Analytics solution that is in many ways very close to Splunk. On top of the log collection and indexing capabilities, Sumo Logic offers its “LogReduce” feature to aggregate log query results, as well as transaction tracking capabilities.
  • Logentries: Logentries is a relatively new player in the Log Analytics & Management market, which brings a fresh approach to the integration process with simple plug-ins. In terms of analytic capabilities, it does not bring many new capabilities to the table besides a log tagging mechanism, which can be helpful if you have a small, defined set of logs generated by your software.

Coralogix brings a new actionable approach to the Log Analytics & Management. By analyzing every log that is generated by the software, Coralogix learns the system’s flow, automatically detects broken Log patterns, and provides their root cause in a single click. Coralogix finds the patterns of broken vs. correct behavior and returns a few log entries, which is a lot more manageable than the millions of log entries that Log Analytics users have to deal with today.

To get a 1:1 online demo of Coralogix and join the growing movement of companies that are shifting towards meaningful results in the world of Log Analytics & Management, simply click “Book a demo”.

These 4 parameters will make or break your E-commerce

During our research here at Coralogix, we discovered a growing need for and awareness of Log Analytics as companies strive to control their log data and monitor their application and server logs.

One of the markets that has made the most out of Log Analytics is the E-commerce application market. E-commerce businesses are extremely dependent on the completeness of their transactions. They must analyze data from various sources to optimize their user experience, understand their customer behavior, and most importantly, make sure their stores are functioning at 100%, as each and every broken transaction or link can result in immediate money loss.

Each user action needs to be monitored at several levels in order to achieve the required coverage.

  • Web server logs: The requests and responses from/to web servers (Apache etc.). Monitoring the web server logs lets you understand the latency of the responses to client requests, as long latencies are a known cause of user bounce. This can be achieved by creating statistical models of the response times the web server returns, understanding their patterns, and identifying abnormal behavior or poor performance of servers.
  • Performance logs: These logs can provide a sense of the application’s efficiency and performance. By monitoring the machine’s CPU, thread count, handle count, and memory usage, one can evaluate whether the system is suffering from low performance that might require a hardware upgrade or a software performance enhancement. These parameters are very easy to monitor and comprehend, and we found them extremely valuable in identifying problems before they take place and preventing them.
  • HTTP errors: There are many possible HTTP errors. The first thing to do is to aggregate these errors and find out how many different error types there are (and how many times each type appears). The second step is to drill deeper into the errors and determine which requests cause them (a small parsing sketch follows the list below).

A few common HTTP error types:

  1. 400 Bad Request – usually returned by the web server after it receives an invalid request; can indicate problems with the links on your website.
  2. 404 Not Found – returned by the web server when it gets a request for a non-existing URL. A 404 does not necessarily mean you have a problem, because the user may actually have browsed to a non-existing URL. To understand if there is a real problem, you can either compare the request to similar requests on this URL or try browsing that URL yourself and verify whether it exists.
  3. 408 Request Timeout – returned by the server when the client did not produce the request within the defined timeframe. Aggregate these errors and check how many of them you have; too many might indicate a problem on your web server.
  4. 500 Internal Server Error – the web server encountered an unexpected condition that prevented it from fulfilling the client’s request. This is a ‘catch-all’ error generated by the web server: basically, something has gone wrong, but the server cannot be more specific about the error condition in its response to the client.

  • Application logs: The logs that were inserted by the developer in order to track the software’s behavior. The more detailed these logs are, the more value they produce. Application logs reflect the actual patterns of your system, and analyzing them can help identify software problems that cause major money losses you are not even aware of.
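
As a rough illustration of the web server log and HTTP error points above, here is a small Python sketch that aggregates status codes, error-prone URLs, and response times from an access log. It assumes the log was configured to append the response time in milliseconds as the last field; adjust the regex (and the hypothetical file name) to your own format.

```python
import re
from collections import Counter
from statistics import mean

# Assumes a combined-style access log with the response time (in ms)
# appended as the last field. Adjust LINE_RE to match your own format.
LINE_RE = re.compile(
    r'"\S+ (?P<url>\S+) [^"]*" (?P<status>\d{3}) \S+ .* (?P<ms>\d+)$'
)

status_counts = Counter()  # how many requests per status code
error_urls = Counter()     # which URLs generate 4xx/5xx responses
latencies = []             # response times in milliseconds

with open("access.log") as f:  # hypothetical file name
    for line in f:
        match = LINE_RE.search(line)
        if not match:
            continue
        status = match.group("status")
        status_counts[status] += 1
        if status.startswith(("4", "5")):
            error_urls[match.group("url")] += 1
        latencies.append(int(match.group("ms")))

print("requests per status code:", dict(status_counts))
print("top error URLs:", error_urls.most_common(3))
if latencies:
    print(f"avg latency: {mean(latencies):.0f} ms, max: {max(latencies)} ms")
# A spike in 5xx counts or in max latency is the kind of signal to alert on.
```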

Via the above information, one can measure the key parameters of an e-commerce application:

  1. Your download timing: how much time it takes your web server to deliver a web page to your client. This is extremely important, as customers these days tend to be very impatient with loading times and bounce very fast.
  2. Application response time: how much time does it take your server to respond to user actions, once again, high response time results in customer attrition and high bounce rates.
  3. Last visited pages: What are the pages that are often the last visited before users leave your store? This can be a good indication that these pages need to be optimized.
  4. Your peak times: Know the days and hours in which your store is most active. This can assist you in scheduling promotions, preparing to handle large amounts of traffic, or managing your support center.

There are many Log Analytics solutions out there that can handle such data sources and offer parsing and graphic tools. The problem with these tools, however, is that they require e-commerce owners to invest a lot of their valuable time in investigations and analysis instead of focusing on their main business, which is selling and supplying.

Coralogix combines methods from the worlds of cyber and intelligence to create the world’s first Actionable Log Analytics solution, with a unique algorithm that automatically detects problems on any software, presents them in one simple view, and offers their root cause in a single click. Hit the “Request a demo” button above and join a global movement that is shifting towards actionable and meaningful results in the world of Log Analytics.

Our Journey from Homeland Security to Actionable Log Analytics

5 years ago, when I finished my army service in the IDF 8200 intelligence unit, it was clear to me that my future was in the world of intelligence and cyber. It was only natural for me to start my first job at a successful homeland security firm.

Although my service prepared me quite well for this market, it was only when I was working for a global company that I was introduced to the 3 ground rules for success in the world of Intelligence and Cyber:

1) Recognize the routine and identify abnormal behavior

2) Provide real-time insights

3) Give the user the data he needs to take action

One phrase that was particularly emphasized was “Actionable”. We were always instructed to think of methodologies that would not just present our customers with the data we collect, but rather provide the information they need in order to take action; this is a must in the world of intelligence and cyber security, because a quick response to events is crucial for success.

A large part of my 4 years in that firm was spent on software debugging and endless log reading, hoping to find what I was looking for – that needle in a haystack.

One day I had lunch with a friend from work who was the company’s Tech & Innovation Leader. We talked about how much time we spend on this exhausting and inefficient process and how there is no viable solution out there. My friend then smiled and said that we should meet after work because he had something in mind he thought I would like.

When we started talking about Coralogix’s solution, it was quite different from what it is today. But 3 main rules have always led us:

1) Recognize the routine and identify abnormal behavior

2) Provide real-time insights

3) Give the user the data he needs in order to take action

In other words, we decided to bring the intelligence and cyber methodologies into the world of Log Analytics and provide Actionable insights that will allow software companies to not only retrieve their data and analyze it, but also to take action and proactively monitor their systems.

But what seemed simple at the time was, and still is, a huge challenge, since log data is nothing like network traffic; each log entry has its own individual style that varies with the way a certain developer expresses himself. We found that the algorithms and methodologies we knew from the cyber world were not sufficient to bring the value our customers need.

After a few weeks of investigation we ran out of solutions and started to re-think our concept of simply applying cyber algorithms to software logs. We understood that we would need to tackle the problem from a different angle, and by pure chance, we found just the right guy for the job – our neuroscientist friend (who was also my roommate back then). He offered a whole new perspective on the problem, one that comes from the world of protein sequencing. This approach allowed us to overcome the challenges log data presents and bring our anomaly detection model back to the world of cyber security.

Since then, 3 more IDF 8200 intelligence unit veterans have joined Coralogix and helped us make our dream a reality: a scalable Log Analytics platform that can connect to any software, collect all log types (regardless of their content), learn the system’s normal flows, automatically detect anomalies, and provide their root cause and an actionable solution. Coralogix is now changing the way companies perform their software maintenance and delivery and will be out on the market once we finish connecting the first 30 businesses that have already registered for our Beta version.

To sign up for our Beta version and be one of the first to enjoy Coralogix’s Actionable Log Analytics solution, just click the “Join the Beta” button above and become part of a global movement that is shifting towards actionable and meaningful results in the world of DevOps & Log Analytics.

Log Analytics will turn that junk on your servers into gold

Although many tend to think Log Analytics is a new concept, logs were actually born together with computers and were always used as a system monitoring tool.

However, nowadays, with the ever-increasing complexity of software and infrastructure, we see that Log Analytics is no longer a side task aimed at putting out fires, but a crucial element for software companies’ success.

Servers and applications generate enormous amounts of log records with valuable data emitted from a variety of processes. Those who decide to dive in and dig for gold discover a whole new world of proactive monitoring and business insights.

To better explain the value of Log Analytics, we’ve gathered some common use cases that give a sense of the value hidden within those cluttered text files.

Production monitoring:

Maybe the most common use of Log Analytics: today’s tools allow users to get a centralized view of all their production logs, define customized alerts, and view statistical displays which make the data a lot more accessible. This way software providers can react much faster to production problems and to the fallout of version releases.

Production troubleshooting:

We all know that long and exhaustive process of searching log files for errors or abnormal behavior. It can take hours, and more often than not, it is simply impossible to go over the huge amounts of log data. Log Analytics tools provide users the ability to quickly search and parse log files in order to reach the problem’s root cause.

Software QA:

The entire process of bug documentation today is based on an old-fashioned and time-consuming process:

1) A QA engineer witnesses a problem.

2) The QA engineer opens the relevant log files and reproduces the problem.

3) The QA engineer saves all the relevant log files on his PC.

4) The QA engineer documents the bug with the log files attached for the developer to solve.

Log Analytics tools allow QA to query the relevant logs from the time the problem was witnessed and send a link to the query results to the developer. There is no need to try and reproduce the problem, and the relevant logs reach the developer with no effort from the QA.

Business intelligence:

Between endless CRM and user analytics reports, priceless data is hidden regarding business performance: number of transactions, peak and low activity distribution, count of specific events according to user tagging and much more. All this valuable data is already there. Log Analytics tools make this data accessible, and some even extract the important parts for you.

So why isn’t everyone rolling up their sleeves, digging into their logs, and reaping the rewards of their work?

One of the biggest obstacles faced when starting to manage logs is that it requires a wide range of knowledge of the system (and its logs in particular). Today’s Log Analytics tools allow the user to parse the text, define alerts and configure graphic displays. Some even offer aggregation and tagging capabilities, but they are all dependent on the user’s capability to define what to search and what is important.

Coralogix changes the Log Analytics markets with a brand new concept: machine learning algorithms that automatically detect software problems, present them in a single view and provide their root cause in a click.

Want to try Coralogix? Click the “Request a demo” button above to sign up and get an offer that will change the way you maintain your production system.

$100M raised | 12 Months | 4 Companies | 1 Surging market

In the world of DevOps and continuous delivery, Log Analytics tools are quickly emerging as a crucial component for an effective strategy towards improved maintenance procedures and support of complex cloud infrastructures.

Just to provide a sense of scale: the software maintenance market is estimated at around $140B annually, of which $3B is invested annually in log management alone, with an amazing predicted growth rate of 12% a year for the next 5 years.

The complexity of log data and the scale of production systems are constantly increasing, making the traditional methods of writing log files and manually investigating them no longer adequate to analyze production systems.

The main challenge of Log Analytics companies is to simplify this process by using practical visualization methods and search tools that allow users to filter out noise and get the insights they need.

After years in which Splunk ruled supreme with its on-premise solution, Loggly’s SaaS alternative rose and was quickly adopted by the market, forcing Splunk to come out with its own SaaS solution “Splunk Storm”.

Following Loggly’s success, companies like LogRhythm, SumoLogic, and Logentries emerged and launched their own SaaS solutions for Log Analytics, each company with its own approach.

Loggly: The classic Log Analytics tool.
Provides powerful search and visualization tools, allows integration with other products but the focus remains on Log Management.
Loggly announced that it raised $15M in series C:
Loggly’s round C

LogRhythm: A security-oriented product (SIEM) with less value for maintenance purposes.
Provides strong pattern matching and reporting capabilities for security purposes; the drawback, however, is the solution’s price, which is quite a bit higher than the competition’s.
LogRhythm announced that it raised $40M in series E:
LogRhythm round E

SumoLogic: Unstructured big data management software.
Provides statistical views and data aggregation technologies, with their prime feature being “LogReduce”. Lately, SumoLogic has expanded its solution to the areas of business intelligence and security in order to position itself as an alternative to Splunk (it’s hard to miss the resemblance between the two products).
SumoLogic announced that it raised $30M in a venture investment, and it intends to go through an IPO in the future:
Sumo Logic venture investment

LogEntries: A relatively light Log Analytics tool.
Provides automatic log collection, log search, and personal tagging of logs; those who wish to use Logentries for more complicated analytics will mostly fall short.
Logentries announced that it raised $10M in series A:
LogEntries round A

All in all, it’s safe to assume that the Log Analytics market is here to stay and will continue growing as software systems become more complex.