Last year, Splunk launched its new Splunk Cloud Platform Workload Pricing model. Historically, customers were billed based on the volume of data they ingested into their Splunk Cloud deployment. With data volumes exploding, however, observability costs have become a common pain point for customers.
The impact of high Splunk costs goes beyond overage charges. Organizations often have to predict which datasets they need to centralize – and discard the rest – to contain costs. In other scenarios, organizations will sample or filter out log lines to lower ingest volumes. With both of these workarounds, you’re ultimately sacrificing visibility.
The shift away from a volume-based pricing model shows that Splunk is trying to give its customers more flexibility in the systems they monitor. However, that does not mean you should expect to pay less – or even monitor more – by switching to Workload Pricing. Before diving into why this is the case, let’s break down the pricing model.
Breaking down the SVC pricing model
The Splunk Workload Pricing Model is based on two components:
- Splunk Virtual Compute (SVC): The compute, I/O, and memory required to monitor your data sources.
- Storage Blocks: The storage volume required to fulfill your data retention policies.
SVCs are the equivalent of vCPUs for Splunk Cloud consumption. The factors that have the biggest impact on SVC consumption are the volume of data you ingest, the queries you run on top of that data, and your ongoing dashboards and alerts.
When you engage Splunk, their team will assign a predetermined number of SVC unit credits based on the aggregate data sources you are monitoring and how you use the data once it is ingested:
- A dataset that isn’t queried frequently or analyzed in real-time will have a higher ingestion-to-SVC ratio (examples: compliance storage or a data lake).
- A dataset that is monitored with real-time queries and constant dashboards will have a lower ingestion-to-SVC ratio (examples: security or service monitoring).
The visual above demonstrates that customers can ingest more data into Splunk Cloud per SVC unit credit when supporting use cases that require a lower number of queries, dashboards, and alerts on raw data.
Enterprise Security, IT Service Intelligence (ITSI), and continuous monitoring use cases naturally consume SVC unit credits faster given their compute-intensive nature. That’s because these use cases require customers to continuously run high-volume batch processing jobs on raw datasets. Take, for example, a customer running a dashboard with nine panels. Every time the dashboard is rendered or viewed in the browser, each of the nine queries:
- Pulls all of the associated raw datasets from indexes
- Filters out the data that is not needed
- Begins crunching raw data to produce statistics and analytics
When there are a high number of dashboards, running those queries can be extremely taxing on the system – especially when they are viewed or loaded concurrently – thereby driving up compute and SVC consumption. The same logic applies to more complex alerts running every few minutes – you’re processing raw datasets each time the underlying query runs, causing a spike in compute.
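The multiplication at work here is easy to see with a back-of-the-envelope model. The sketch below is purely illustrative – the function name and every number in it are hypothetical, not Splunk’s actual SVC accounting – but it shows why per-panel queries over raw data add up so quickly, and how much work disappears if panels read from a pre-summarized dataset instead.

```python
# Hypothetical model of dashboard-driven compute (all numbers are illustrative).
# Each dashboard view fires one query per panel, and each query scans the raw
# index before filtering, so work scales with panels x views x events scanned.

def events_scanned_per_day(panels: int, views_per_day: int, raw_events_per_query: int) -> int:
    """Rough count of raw events processed per day by one dashboard."""
    return panels * views_per_day * raw_events_per_query

# A nine-panel dashboard viewed 50 times a day, each panel scanning 10M raw events:
naive = events_scanned_per_day(panels=9, views_per_day=50, raw_events_per_query=10_000_000)

# The same dashboard reading a pre-aggregated summary 1/1000th the size:
summarized = events_scanned_per_day(panels=9, views_per_day=50, raw_events_per_query=10_000)

print(f"raw: {naive:,} events/day vs. summarized: {summarized:,} events/day")
```

The ratio between the two figures is exactly the summarization factor – shrink the dataset each panel reads by 1000x and the scan work drops by 1000x, regardless of how many panels or viewers you add.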
If you consume all of your SVC credits, you’ll face overages. Additionally, you won’t be able to run another query until all other active queries are complete.
By charging for Storage Blocks, Splunk has decoupled your compute needs from your storage needs. In essence, you can purchase Storage Blocks to account for however long you’d like to retain your data.
How will moving to Splunk Workload Pricing impact your bill?
When quantifying the impact of moving to Splunk Workload Pricing, there is no one-size-fits-all answer. It ultimately depends on the data sources you are monitoring with Splunk Cloud.
That said, you should not expect a lower Splunk bill. SVC credits are highly correlated with the volume of data you ingest into Splunk Cloud. That is because the ingestion process consumes a significant amount of compute resources. As a result, we’ve seen customers continue to neglect datasets and/or sample and filter log lines after making the switch.
Moreover, customers migrating from Splunk Enterprise to Splunk Cloud should expect the initial ingestion process to create a spike in compute. Once your data is in Splunk Cloud, ongoing dashboards and real-time queries on large datasets can exhaust your SVC credits.
How you can use Edge Delta to lower SVC costs
Since the launch of Splunk Workload Pricing, we’ve seen customers use Edge Delta in multiple ways to streamline the adoption of the new model. Here are a couple of examples you can apply to your Splunk Cloud deployment.
Lowering ingestion-to-SVC ratio
Edge Delta uses distributed stream processing to analyze data as it’s created at the source. In doing so, Edge Delta uncovers insights, statistics, and aggregates that are streamed to Splunk in real-time. When you need raw data – like when an anomaly occurs – Edge Delta dynamically routes full-fidelity logs to your preferred observability platform. This approach dramatically lowers the volume of data you need to ingest into Splunk, which in turn alleviates your SVC needs. How?
First, as we covered above, the ingestion process is one of the biggest drivers of compute consumption. Reducing ingestion volumes by nature lowers your compute needs.
Additionally, Edge Delta plays a massive role in optimizing dashboard load times and query performance, ultimately reducing SVC usage. It does so by pre-processing the raw data into datasets that are easily digestible and queryable in Splunk, and making query results natively available. As a result, customers can reduce a five-minute query on raw logs into a five-second query on optimized data.
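To make the pre-processing idea concrete, here is a minimal sketch of rolling raw log lines up into per-minute, per-status counts before anything is shipped downstream. The log format, field names, and aggregation keys are assumptions for illustration – this is not Edge Delta’s actual pipeline – but it shows the core trade: one compact record per (minute, status) pair instead of one event per log line.

```python
# Hypothetical edge pre-aggregation sketch: instead of shipping every raw log
# line, roll lines up into per-minute, per-status counts and ship the aggregates.
# The log format and field names below are illustrative assumptions.
from collections import Counter

raw_lines = [
    "2024-05-01T12:00:03Z status=200 path=/api/items",
    "2024-05-01T12:00:41Z status=500 path=/api/items",
    "2024-05-01T12:00:59Z status=200 path=/api/login",
    "2024-05-01T12:01:10Z status=200 path=/api/items",
]

def aggregate(lines):
    counts = Counter()
    for line in lines:
        ts, status_field, _ = line.split(" ", 2)
        minute = ts[:16]                      # truncate to YYYY-MM-DDTHH:MM
        status = status_field.split("=")[1]
        counts[(minute, status)] += 1
    # Emit one compact record per (minute, status) instead of one per log line
    return [{"minute": m, "status": s, "count": c} for (m, s), c in sorted(counts.items())]

for record in aggregate(raw_lines):
    print(record)
```

A panel that charts error counts can now read these tiny summary records directly instead of re-scanning and re-counting the raw events on every render – which is where the five-minutes-to-five-seconds query improvement comes from.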
Integrating new services and data sources
By streaming your raw data through Edge Delta before ingesting into Splunk, you are ultimately optimizing the dataset – and clearing capacity to monitor more systems. As a result, Edge Delta has allowed customers to onboard new services and data sources into their Splunk deployment, gaining 100% observability without the need to purchase additional SVC units. In other words, you no longer need to neglect datasets.
Cutting Storage Block costs
All raw data that passes through Edge Delta is automatically routed to the low-cost storage target of your choice (like Amazon S3, Microsoft Azure Blob, or Google Cloud Storage). You can also route this data to a data lake or data warehouse provider. By storing raw data in these destinations instead of Splunk, you can meet your data retention needs in a more cost-effective manner. As a result, you don’t need to procure as many Storage Blocks as before.
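Part of why object storage is so much cheaper for retention is that raw logs compress extremely well before they are written out. The sketch below – a hypothetical batching step with the actual upload call omitted – gzips a batch of repetitive log lines using only the Python standard library to show the effect.

```python
# Hypothetical sketch: compress a batch of raw logs before shipping it to a
# low-cost object store such as Amazon S3 (the upload call itself is omitted).
import gzip
import io

# A deliberately repetitive batch; real log batches are less uniform but still
# commonly shrink 10-20x under gzip because of their repeated structure.
raw_batch = ("status=200 path=/api/items latency_ms=12\n" * 10_000).encode()

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    gz.write(raw_batch)
compressed = buf.getvalue()

print(f"raw: {len(raw_batch):,} bytes -> gzip: {len(compressed):,} bytes")
```

Compression compounds the savings: you are paying archive-tier object storage prices on an already-shrunken payload, rather than Storage Block prices on fully indexed data.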
The visual above demonstrates how Edge Delta can help you better support various use cases with Splunk Workload Pricing – both by reducing data ingest volumes and compute demands.
By using Edge Delta in tandem with Splunk, you can expect to optimize your SVC consumption by over 90%. With this level of optimization, customers have been able to gain 100% visibility into all of their existing data sources, as well as add new data sources without procuring more SVC credits. It’s because of this value that companies at all stages of SVC adoption are considering Edge Delta.