Understanding Pricing Models Based On Usage Metrics
Written by Openbridge Support

Openbridge does not use usage metrics like row counts, the amount of data stored, compute credits, etc., for pricing. However, those models are common in the industry. They are predicated on something most companies cannot control: data volumes that can fluctuate significantly based on the source and how that source is used. As a result, understanding the usage metrics behind those vendors' pricing models is crucial to managing costs effectively.

Let's delve into variations in different data sources and how those variations can impact costs in a usage-based pricing model.

1. Dynamic, Variable Data Sources

Data pours in from diverse sources, and each source contributes distinctively to the overall data volumes. For instance, consider the Amazon Product Ads report for 20 medium to large-sized advertisers. On average, they generate 2-4 GB of compressed data and 2 million records daily.

Using a vendor's best rates with sales discounts for this use case, the monthly charges would be approximately $1,500. Openbridge? A flat rate of $350.

While smaller advertisers generate lower volumes, as the number of advertisers or the volume of activity grows (e.g., more campaigns, more product ad versions), the cost metrics rise accordingly.

2. Static, Fixed Data Sources

Unlike Ads, some data sources deliver consistent, predictable data each day. For example, the Amazon Search Terms report churns out approximately 3.5 GB of compressed data and 3.3 million records daily. Since this is a global feed for Amazon, only one feed is needed, unlike Ads, which vary per advertiser. However, despite being one global feed, the volume of data is significant.

Using our Amazon Search Terms report as a reference point, this equates to about 105 GB of compressed data and nearly 100M rows per month. Using a vendor's best rates with sales discounts for this use case, the monthly charges would be $1,700. Openbridge? $50.

Another example is Adobe Analytics. Suppose you have ten daily warehouse report exports. On average, each export is about 1 GB, with 5,500,000 rows. Per day, these ten report exports total 10 GB and 55 million rows. Over the course of a month, they total 300 GB of data and 1.65 billion rows. Using a vendor's best rates with sales discounts for this use case, the monthly charges would be $7,000. Openbridge? $275.
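As a sketch, here is the arithmetic behind the Adobe Analytics example. The per-export figures are the averages quoted above; a 30-day billing month is an assumption.

```python
# Monthly usage totals for the Adobe Analytics warehouse-export example.
# Per-export averages are from the article; the 30-day month is assumed.
EXPORTS_PER_DAY = 10
GB_PER_EXPORT = 1.0           # average warehouse export size
ROWS_PER_EXPORT = 5_500_000   # average rows per export
DAYS_PER_MONTH = 30

daily_gb = EXPORTS_PER_DAY * GB_PER_EXPORT
daily_rows = EXPORTS_PER_DAY * ROWS_PER_EXPORT
monthly_gb = daily_gb * DAYS_PER_MONTH
monthly_rows = daily_rows * DAYS_PER_MONTH

print(f"Daily:   {daily_gb:.0f} GB, {daily_rows:,} rows")
print(f"Monthly: {monthly_gb:.0f} GB, {monthly_rows:,} rows")
# Daily:   10 GB, 55,000,000 rows
# Monthly: 300 GB, 1,650,000,000 rows
```

Every one of those monthly gigabytes and rows is a billable unit under a usage-metric model, which is why ten modest daily exports compound into a four-figure monthly charge.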

3. Compression and Storage Optimization

Compression techniques mitigate the costs associated with data storage and processing by reducing data size while retaining essential information. Pricing based on uncompressed data volumes can inflate charges by as much as 20x. For example, while compressed data averages 2-4 GB per day, the same data uncompressed can be 15-40 GB. As a result, storage costs can escalate rapidly if a vendor's pricing model does not use compressed storage as its metric.

4. Scaling Data Sources

As you scale to more data sources and accounts, the likelihood of highly variable costs increases as data volumes fluctuate. Let's look at daily, monthly, and yearly data volume examples across several sources to illustrate this point.

  • Amazon Product Ads: Daily volume - 2-4 GB (compressed), 2 million records.

  • Amazon Search Terms: Daily volume - 3.5 GB (compressed), 3.3 million records.

  • Google Ads Campaign Data: Daily volume - 84 MB (compressed), 111,000 rows.

  • Google Ads Search Terms: Daily volume - 22 MB (compressed), 50,000 rows.

You can see variations per data source, which often reflect what the source provides and how it is used (e.g., the number of campaigns, creative units, etc.). For 20 Google and Amazon Advertising accounts, the monthly volume of data would be approximately 150 GB and 160M rows.

Using a vendor's best rates with sales discounts and pre-paying for the year, the cost would be $30,000, or $2,500 monthly. Openbridge's cost for those same 20 advertisers across those sources? Approximately $400 per month.
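As a sketch, summing the per-source daily volumes from the list above over an assumed 30-day month shows how the totals accumulate. Amazon Product Ads is taken at the midpoint of its 2-4 GB range, so the result lands in the same ballpark as the rounded figures quoted above; the exact total depends on where in each quoted range a source falls.

```python
# Monthly rollup of the per-source daily volumes listed in the article.
# Amazon Product Ads uses the midpoint of its quoted 2-4 GB/day range;
# the 30-day month is an assumption.
DAYS = 30

daily_sources = {  # source: (MB per day, rows per day)
    "Amazon Product Ads":      (3_000, 2_000_000),
    "Amazon Search Terms":     (3_500, 3_300_000),
    "Google Ads Campaigns":    (84, 111_000),
    "Google Ads Search Terms": (22, 50_000),
}

monthly_gb = sum(mb for mb, _ in daily_sources.values()) * DAYS / 1_000
monthly_rows = sum(rows for _, rows in daily_sources.values()) * DAYS
print(f"~{monthly_gb:.0f} GB and ~{monthly_rows / 1e6:.0f}M rows per month")
# ~198 GB and ~164M rows per month
```

Note how the two largest feeds dominate the total: small sources like Google Ads Search Terms barely register, so cost exposure in a usage model is driven almost entirely by a handful of high-volume sources.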

Going back to our Adobe Analytics example, if you need to scale beyond the 10 report exports for a single report suite to 30 report suites, you will have 300 daily exports totaling 300 GB of data and over 1.6 billion records per day. This equals about 9 TB of data and nearly 50 billion records per month. Vendors' costs would exceed $40K a month, and they do not even advertise costs at that scale. Openbridge's cost for those same 300 Adobe warehouse exports? Approximately $2,000 per month.

5. Data Volume Spikes

Certain events can trigger significant spikes in data, adding another layer of complexity to pricing models based on usage metrics. Let's consider the example of the Amazon Finance API, which processes event-based feeds. Let's assume a moderately sized seller handles around 275,000 events daily, resulting in approximately 100 MB of data. However, during major events like Amazon Prime Day, data volumes can surge exponentially, surpassing typical averages by 3-5X. This surge in data volumes directly impacts pricing, leading to significantly higher costs during these event periods.

Let's examine the numbers for better clarity:

  • Average Monthly Data Volume: 8.2 million records, 3.1 GB of storage.

  • Average Prime Day Event Period Data Volume: 25 million+ records, 10 GB of storage.

In usage-based pricing models, 8.2 million records and 3.1 GB of storage typically cost about $250 per month. However, in a month that includes Prime Day, the increased row and data volumes can double that price to $500. During such events, spikes in data volumes result in disproportionately higher costs, making pricing highly variable based on events outside the seller's control.
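A quick sketch of the event-period spike, using the volumes above, shows the multipliers a usage meter sees during a month like Prime Day. (Purely linear metering would scale the bill by these same factors; vendor tiering may soften the jump, but not remove it.)

```python
# Ratio of Prime Day event-period volumes to an average month,
# using the Amazon Finance API figures from the article.
base = {"records": 8_200_000, "gb": 3.1}    # average monthly volumes
prime = {"records": 25_000_000, "gb": 10.0}  # Prime Day event period

for metric in base:
    ratio = prime[metric] / base[metric]
    print(f"{metric}: {ratio:.1f}x")
# records: 3.0x
# gb: 3.2x
```

A 3x spike in billable metrics, triggered by a sales event the seller cannot schedule around, is exactly the kind of variability that makes usage-based invoices hard to forecast.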

But I don't have much data!

While data usage metrics can significantly impact costs, it is essential to recognize the flip side. Usage models can be cost-effective with low data volumes and small use cases.

Unlike our search terms example, where Amazon sets the volumes, other datasets' volumes can vary significantly based on your usage of those sources.

For example, if you run a few campaigns or a handful of product ads, you may have a few hundred records daily, and your Amazon Ads monthly data may not break 100K records or 10MB of data. If this pattern is repeated across other data sources, like Google Ads, depending on the service, your costs may be negligible, or you may be able to operate in their "free" tier as long as they offer it.

For situations like this, usage-based pricing can be a very cost-effective path. However, it is essential to know where the line is between low or no cost and rapidly escalating costs.

Our Approach To Pricing

Unlike pricing based on row counts, data size, compute units, or other usage metrics, our approach to pricing is designed for businesses that demand flexibility, consistency, scalability, and cost control.

  1. Flexibility for Different Use Cases: Openbridge's pricing model caters to longer-term and short-term use cases, accommodating the diverse needs of businesses. Whether a company requires data integration and processing services for an extended period or a specific project, Openbridge's pricing model can adapt accordingly. This flexibility allows businesses to choose the pricing structure that aligns with their unique requirements, optimizing costs without sacrificing functionality.

  2. Consistency for Forecasting: Forecasting costs accurately is a crucial aspect of budgeting and financial planning for any business. Openbridge's pricing model is consistent, ensuring companies can predict and plan their data integration and processing expenses. By providing transparent and consistent pricing, Openbridge enables enterprises to avoid surprises and make informed decisions about resource allocation and investment.

  3. Scalability Despite Data Source Variability: Data volumes can vary significantly based on different data sources, creating challenges when it comes to scalability. Openbridge's pricing model overcomes this by allowing businesses to scale predictably, regardless of the variability in data volumes from upstream sources. This scalability ensures companies can handle data fluctuations without incurring unexpected costs, providing a reliable and cost-effective solution.

  4. Cost Control: One of the critical advantages of the Openbridge pricing model is that it empowers customers to control their costs. Unlike traditional consumption-based models where usage volumes determine costs, Openbridge's model allows businesses to manage and control their expenses proactively. By having the ability to define and manage the scope of their data integration and processing services, customers can avoid unwelcome surprises and have greater control over their overall expenditure. Success on Prime Day should be celebrated, not a source of increased charges for data processing.

While this model might not fit every use case, for those it does fit it delivers flexibility across different use cases, financial predictability and consistency for accurate forecasting, scalability despite data source variability, and customer cost control.
