8 Top Log Management Collectors Compared by Data Throughput
Veysi Soyvural Jun 13, 2021


In today’s fast-paced and competitive world, time is money. Getting a key alert a few minutes late translates to frustrated customers and lost revenue.

That means that modern DevOps teams have to be laser-focused on discovering and solving customer-impacting issues quickly. The response times of these teams are heavily influenced by the tools they use: analytics tools that can’t keep up with the high volume of data produced by modern applications translate directly into slower response times.

We are obsessed with DevOps performance at Edge Delta, so we ran a side-by-side comparison of the top log management collectors on the market. We used the open-source Vector Test Harness tool developed by Timber.io (https://github.com/timberio/vector-test-harness).

The results show that data throughput varies widely across the different agents, as well as across different functions of the same agent.


Testing Methodology

Testing Elements

Subject: The server on which the agent under test runs.

Producer: Generates log messages for the subject to process.

Consumer: Collects the messages that the subject outputs.
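
To make these roles concrete, here is a minimal Go sketch of a producer that streams synthetic log lines to a subject over TCP. The address and message rate are placeholders for illustration, not harness defaults.

```go
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Connect to the subject's TCP listener (address is a placeholder).
	conn, err := net.Dial("tcp", "subject:9000")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Emit one synthetic log line per millisecond.
	for i := 0; ; i++ {
		line := fmt.Sprintf("2021-06-13T10:00:00Z INFO request=%d handled\n", i)
		if _, err := conn.Write([]byte(line)); err != nil {
			log.Fatal(err)
		}
		time.Sleep(time.Millisecond)
	}
}
```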

Test Cases

  1. TCP to Blackhole Case: The agent ingests messages from TCP producers and discards them without forwarding, isolating pure ingestion performance.
  2. TCP to TCP Case: The agent ingests messages from TCP producers and pushes the processed messages to a TCP consumer.
  3. TCP to HTTP Case: The agent ingests messages from TCP producers and pushes the processed messages to an HTTP consumer.
  4. File to TCP Case: The agent ingests messages from a file and pushes the processed messages to a TCP consumer.
  5. Regex Parsing Case: The agent ingests messages from TCP producers, applies an Apache common-log-format regex processor, then pushes the processed messages to a TCP consumer (see the sketch after this list).
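
To make the regex parsing case concrete, here is a minimal Go sketch that applies an Apache common-log-format pattern to a single line. The exact pattern each agent uses in the harness may differ.

```go
package main

import (
	"fmt"
	"regexp"
)

// A common-log-format regex similar to what the harness applies in the
// regex parsing case (agents may use slightly different patterns).
var clf = regexp.MustCompile(
	`^(\S+) (\S+) (\S+) \[([^\]]+)\] "(\S+) (\S+) (\S+)" (\d{3}) (\d+|-)`)

func main() {
	line := `127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`
	m := clf.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	// Groups: 1=host 2=ident 3=user 4=time 5=method 6=path 7=proto 8=status 9=bytes
	fmt.Printf("host=%s method=%s path=%s status=%s bytes=%s\n",
		m[1], m[5], m[6], m[8], m[9])
}
```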

We built out substantial resources on Amazon Elastic Compute Cloud (EC2) to perform the tests and leveraged Terraform and Ansible to automate the test deployments. While the agent runs on the subject server, dstat collects OS resource stats, which are then used to calculate I/O throughput values. You can find more detailed information on the Vector Test Harness page.
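
To illustrate the last step, here is a minimal Go sketch that averages per-second I/O samples into a throughput figure. The two-column CSV format (bytes read, bytes written per second) is an assumption for illustration; dstat’s real CSV output has a different layout.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"os"
	"strconv"
)

func main() {
	// Assumed input: one row per second with two columns,
	// bytes read and bytes written (a simplification of dstat output).
	f, err := os.Open("io_samples.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	rows, err := csv.NewReader(f).ReadAll()
	if err != nil {
		log.Fatal(err)
	}

	var total float64
	for _, row := range rows {
		read, _ := strconv.ParseFloat(row[0], 64)  // malformed cells count as 0
		write, _ := strconv.ParseFloat(row[1], 64)
		total += read + write
	}
	if len(rows) > 0 {
		fmt.Printf("avg throughput: %.0f bytes/sec\n", total/float64(len(rows)))
	}
}
```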

The Vector team already benchmarked the performance of several tools, including Vector, Filebeat, Fluent Bit, Fluentd, Logstash, Splunk UF, and Splunk HF, and published the results. We supplemented the results obtained by Vector and extended the results table by adding Edge Delta agent results.

Results

The outstanding Edge Delta results aren’t by accident. The Edge Delta team earned them by focusing on architecture, robust code design, and rigorous performance testing. The Edge Delta agent is written in Go, which has a great impact on performance because it compiles to native machine code. Furthermore, our team cares about implementing code with minimum complexity and applying caching only where it pays off; simple, robust architectures always lead to better results. Additionally, our asynchronous, pipe-based implementation drastically improves performance.
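
As an illustration of that last point, here is a minimal Go sketch of an asynchronous, channel-based pipeline; this is a generic pattern, not Edge Delta’s actual implementation. Each stage runs on its own goroutine and hands data to the next over a buffered channel, so ingestion and processing overlap instead of blocking each other.

```go
package main

import (
	"fmt"
	"strings"
)

// ingest feeds raw lines into the pipeline on its own goroutine.
func ingest(lines []string) <-chan string {
	out := make(chan string, 64) // buffered to decouple stages
	go func() {
		defer close(out)
		for _, l := range lines {
			out <- l
		}
	}()
	return out
}

// process transforms lines concurrently with ingestion.
func process(in <-chan string) <-chan string {
	out := make(chan string, 64)
	go func() {
		defer close(out)
		for l := range in {
			out <- strings.ToUpper(l) // stand-in for real parsing work
		}
	}()
	return out
}

func main() {
	for l := range process(ingest([]string{"error: disk full", "info: ok"})) {
		fmt.Println(l)
	}
}
```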

Product and engineering teams often focus on features and widgets rather than core architecture. The Edge Delta team has always focused on end-to-end performance and rigorous testing because we believe it has a real impact on our customers and their businesses.

If you are interested in seeing the Edge Delta agent in action, get started with a free trial today!
