

How to Reduce DynamoDB Costs: Expert Tips from Alex DeBrie

DynamoDB consultant Alex DeBrie shares where teams tend to get into trouble

DynamoDB pricing can be a blessing and a curse. When you’re just starting off, costs are usually quite reasonable and on-demand pricing can seem like the perfect way to minimize upfront costs. But then perhaps you face “catastrophic success” as an exponential increase in users floods your database…and your monthly bill far exceeds your budget.

The more predictable provisioned capacity model might seem safer. But if you overprovision, you’re burning money – and if you underprovision, your application might be throttled during a critical peak period. It’s complicated. Add in the often-overlooked costs of secondary indexes, ACID transactions, and global tables – plus the nuances of dealing with DAX – and you could find that your cost estimates were worlds away from reality.

Rather than learn these lessons the hard (and costly) way, why not take a shortcut: tap the expert known for helping teams reduce their DynamoDB costs. Enter Alex DeBrie, the guy who literally wrote the book on DynamoDB.

Alex shared his experiences at the recent Monster SCALE Summit. This article recaps the key points from his talk (you can watch his complete talk here).


Note: If you need further cost reduction beyond these strategies, consider ScyllaDB. ScyllaDB is an API-compatible DynamoDB alternative that provides better latency at 50% of the cost (or less), thanks to extreme engineering efficiency.

Learn more about ScyllaDB as a DynamoDB alternative

DynamoDB Pricing: The Basics

Alex began the talk with an overview of how DynamoDB’s pricing structure works. Unlike other cloud databases where you provision resources like CPU and RAM, DynamoDB charges directly for operations. You pay for:

  • Read Capacity Units (RCUs): Each RCU allows reading up to 4KB of data per request
  • Write Capacity Units (WCUs): Each WCU allows writing up to 1KB of data per request
  • Storage: Priced per gigabyte-month (similar to EBS or S3)
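The arithmetic behind those units is worth internalizing: DynamoDB rounds every operation up to the next 4KB (reads) or 1KB (writes) unit. A minimal sketch of that rounding:

```python
import math

def rcus_for_read(item_size_bytes: int, consistent: bool = True) -> float:
    """RCUs for one read: 4KB units, rounded up; eventually consistent reads cost half."""
    units = math.ceil(item_size_bytes / 4096)
    return units if consistent else units / 2

def wcus_for_write(item_size_bytes: int) -> int:
    """WCUs for one write: 1KB units, rounded up."""
    return math.ceil(item_size_bytes / 1024)

print(rcus_for_read(3500))         # a 3.5KB strong read still costs a full RCU
print(rcus_for_read(3500, False))  # ...and half that when eventually consistent
print(wcus_for_write(10 * 1024))   # a 10KB write costs 10 WCUs
```

The round-up matters: a 100-byte write costs exactly as much as a 1KB write.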

Then there are DynamoDB’s billing modes, which determine how you pay for that read and write capacity.

Provisioned Throughput is the traditional billing mode. You specify how many RCUs and WCUs you want available on a per-second basis. Basically, it’s a “use it or lose it” model. You’re paying for what you requested, whether you take advantage of it or not. If you happen to exceed what you requested, your workload gets throttled.

And speaking of throttling, Alex called out another important difference between DynamoDB and other databases. With other databases, response times gradually worsen as concurrent queries increase. Not so in DynamoDB. Alex explained, “As you increase the number of concurrent queries, you’ll still hit some saturation point where you might not have provisioned enough throughput to support the reads or writes you want to perform. But rather than giving you long-tail response times, which aren’t ideal, it simply throttles you. It instantly returns a 500 error, telling you, ‘Hey, you haven’t provisioned enough for this particular second. Come back in another second, and you’ll have more reads and writes available.’” As a result, you get predictable response times – to a limit, at least.

On-Demand Mode is more like a serverless, pay-per-request mode. Rather than declaring how much capacity you want in advance, you’re simply charged per request. As you throw reads and writes at your table, AWS charges you fractions of a cent each time. At the end of the month, they total up all those charges and send you a bill.
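To make the two modes concrete, here is a sketch of the same table definition under each billing mode, expressed as boto3-style request parameters (the `orders` table and `pk` key are made-up names for illustration):

```python
# boto3-style CreateTable parameters; "orders"/"pk" are illustrative names.
def table_params(billing_mode: str, rcu: int = 0, wcu: int = 0) -> dict:
    params = {
        "TableName": "orders",
        "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
        "BillingMode": billing_mode,
    }
    if billing_mode == "PROVISIONED":
        # "Use it or lose it": this capacity is billed every second, used or not.
        params["ProvisionedThroughput"] = {
            "ReadCapacityUnits": rcu,
            "WriteCapacityUnits": wcu,
        }
    return params

provisioned = table_params("PROVISIONED", rcu=100, wcu=50)
on_demand = table_params("PAY_PER_REQUEST")  # no capacity to declare; billed per request
# boto3.client("dynamodb").create_table(**provisioned)
```

Note that on-demand mode needs no `ProvisionedThroughput` block at all, which is exactly why it is the lower-effort starting point.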

Beyond the Basics

For an accurate assessment of your DynamoDB costs, you need to go beyond simply plugging your anticipated read and write estimates into a calculator (either the AWS-hosted DynamoDB cost calculator or the more nuanced DynamoDB cost analyzer we’ve designed). Many other factors – in your DynamoDB configuration as well as your actual application – impact your costs.
Critical DynamoDB cost factors that Alex highlighted in his talk include:

  • Table storage classes
  • WCU and RCU cost multipliers

Let’s look at each in turn.

Table Storage Classes

In DynamoDB, “table storage classes” define the underlying storage tier for your table’s data. There are two options: Standard for hot, frequently accessed data and Standard-IA for infrequently accessed, historical, or backup data.

  • Standard: This is the default table storage class. It provides high-performance storage optimized for frequent access, and it’s the cheapest class for operations. However, be aware that its storage is more expensive (about 25 cents per gigabyte-month in the cheapest regions).
  • Standard-IA (Infrequent Access): This is a lower-cost tier designed for infrequent access. If you have a table with a lot of data and you’re doing relatively few operations on it, you can use this option for cheaper storage (only about 10 cents per gigabyte-month). However, the tradeoffs are that you pay a premium on operations and you cannot reserve capacity.
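Switching classes is a configuration change, not a migration. A sketch of the UpdateTable parameters involved, assuming a hypothetical `orders` table:

```python
# boto3-style UpdateTable parameters; "orders" is a placeholder table name.
switch_to_ia = {
    "TableName": "orders",
    "TableClass": "STANDARD_INFREQUENT_ACCESS",  # or "STANDARD" to switch back
}
# boto3.client("dynamodb").update_table(**switch_to_ia)
```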

[Amazon’s tips on selecting the table storage class]

WCU and RCU Cost Multipliers

Beyond the core settings, there’s also an array of “multipliers” that can dramatically increase your capacity unit consumption. Factors such as item size, secondary indexes, transactions, global table replication, and read consistency can all cause costs to skyrocket if you’re not careful.

The riskiest cost multipliers that Alex called out include:

  • Item size: Although the standard RCU is 4KB and the standard WCU is 1KB, you can go beyond that (for a cost). If you’re reading a 20KB item, that’s going to be 5 RCUs (20KB / 4KB = 5 RCUs). Or if you’re writing a 10KB item, that’s going to be 10 WCUs (10KB / 1KB = 10 WCUs).
  • Secondary indexes: DynamoDB lets you use secondary indexes, but again – it will cost you. In addition to paying for the writes that go to your main table, you will also pay for all the writes to your secondary indexes. That can really drive up your WCU consumption.
  • ACID Transactions: You can configure ACID transactions to operate on multiple items in a single request in an all-or-nothing way. However, you pay quite a premium for this.
  • Global Tables: DynamoDB Global Tables replicate data across multiple regions, but you really pay the price due to increased write operations as well as increased storage needs.
  • Consistent reads: Consistent reads ensure that a read request always returns the most recent write. But, you pay higher costs compared to eventually consistent reads that might return slightly older data.
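These multipliers compound on top of each other. A rough back-of-the-envelope estimator for a single write, under simplifying assumptions (every index is touched by the write and projects the full item, and each replica region repeats the full write cost):

```python
import math

def write_units(item_kb: float, gsi_count: int = 0, transactional: bool = False,
                extra_regions: int = 0) -> int:
    """Rough WCU estimate for one write, stacking the multipliers above.

    Simplifying assumptions: every secondary index is touched and projects
    the full item; each replica region repeats the full write cost.
    """
    units = math.ceil(item_kb)     # 1 WCU per 1KB, rounded up
    units *= 1 + gsi_count         # each touched index is an extra write
    if transactional:
        units *= 2                 # transactional writes cost double
    units *= 1 + extra_regions     # global tables repeat the write per region
    return units

print(write_units(1))                              # 1 WCU: the happy path
print(write_units(10, gsi_count=2,
                  transactional=True,
                  extra_regions=1))                # 120 WCUs for "one" write
```

A 10KB transactional write with two indexes and one replica region costs 120x as much as a minimal plain write, which is why “mind your multipliers” is the headline advice below.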

How to Reduce DynamoDB Costs

Alex’s top tip is to “mind your multipliers.” Make sure you really understand the cost impacts of different options. Also, avoid any options that don’t justify their steep costs. In particular…

Watch Item Sizes

DynamoDB users tend to bloat their item sizes without really thinking about it. This consumes a lot of resources (disk/memory/CPU), so review your item sizes carefully:

  • Remove unused attributes
  • If you have large values, consider storing them in S3 instead
  • Shorten attribute names (attribute names count toward item size, so long names inflate the cost of every read and write)
  • If you have a smaller amount of frequently updated data and a larger amount of slow-moving data, consider splitting items into multiple different items (vertical partitioning)
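As a sketch of the vertical partitioning idea, the frequently updated fields can live in a small item that shares a partition key with a larger, slow-moving item (the key names and attributes are made up for illustration):

```python
# Illustrative items; "USER#123", "sk", and the attributes are made-up names.
hot_item = {
    "pk": "USER#123",
    "sk": "COUNTERS",   # tiny item, updated constantly
    "views": 1042,
}
cold_item = {
    "pk": "USER#123",
    "sk": "PROFILE",    # large item, rarely written
    "bio": "a few KB of profile text...",
}
# Each counter update now writes a ~100-byte item (1 WCU) instead of
# rewriting the full multi-KB profile (several WCUs).
```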

Limit Secondary Indexes

Secondary indexes are another common culprit behind unexpected DynamoDB costs. Be vigilant about spotting and removing secondary indexes that you don’t really need. Remember, they make you pay twice: once for storage and again on every write. You can also use projections to limit which attributes are copied into your secondary indexes, shrinking index items and, in some cases, avoiding index writes entirely.
Regularly review secondary indexes to ensure they are actually being used. Remove any index that isn’t being read, and weigh what you pay in index writes against the value of the reads it serves to determine whether the cost is justified.
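Projections are configured per index. A sketch of a GSI definition that projects only keys, keeping index items small (the index and attribute names are hypothetical):

```python
# boto3-style GSI definition; "by-status" and "status" are hypothetical names.
gsi = {
    "IndexName": "by-status",
    "KeySchema": [{"AttributeName": "status", "KeyType": "HASH"}],
    # KEYS_ONLY keeps index items small; writes that touch neither the index
    # key nor a projected attribute never hit the index at all.
    "Projection": {"ProjectionType": "KEYS_ONLY"},  # vs "INCLUDE" or "ALL"
}
```

The tradeoff: queries on this index return only keys, so fetching other attributes requires a follow-up read from the main table.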

Use Transactions Sparingly

Limit transactions. Alex put it this way: “AWS came out with DynamoDB transactions six or seven years ago. They’re super useful for many things, but I wouldn’t use them willy-nilly. They’re slower than traditional DynamoDB operations and more expensive. So, I try to limit my transactions to high-value, low-volume, low-frequency applications. That’s where I find them worthwhile — if I use them at all. Otherwise, I focus on modeling around them, leaning into DynamoDB’s design to avoid needing transactions in the first place.”
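For reference, a transactional write groups multiple operations into one all-or-nothing request, as in this boto3-style sketch (table, key, and attribute names are made up):

```python
# boto3-style TransactWriteItems parameters; table/item names are made up.
tx = {
    "TransactItems": [
        {"Put": {"TableName": "orders",
                 "Item": {"pk": {"S": "ORDER#1"}, "status": {"S": "placed"}}}},
        {"Update": {"TableName": "counters",
                    "Key": {"pk": {"S": "ORDER_COUNT"}},
                    "UpdateExpression": "ADD n :one",
                    "ExpressionAttributeValues": {":one": {"N": "1"}}}},
    ]
}
# boto3.client("dynamodb").transact_write_items(**tx)
# Both operations succeed or neither does -- and each item consumes
# roughly 2x the WCUs of a plain write.
```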

Be Selective with Global Tables

Global tables are critical if you need data in multiple regions, but make sure they’re really worth it. Given that they will multiply your write and storage costs, they should add significant value to justify their existence.

Consider Eventually Consistent Reads

Do you really need strongly consistent reads every time? Alex has found that in most cases, users don’t. “You’re almost always going to get the latest version of the item, and even if you don’t, it shouldn’t cause data corruption.”
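In the API, this is a single flag. A sketch of a GetItem request using the (default) eventually consistent mode, with hypothetical table and key names:

```python
# boto3-style GetItem parameters; "orders"/"pk" are hypothetical names.
req = {
    "TableName": "orders",
    "Key": {"pk": {"S": "ORDER#1"}},
    "ConsistentRead": False,  # the default: half the RCU cost of a strong read
}
# boto3.client("dynamodb").get_item(**req)
```

Since eventual consistency is the default, the cheaper option is what you get unless you explicitly ask for more.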

Choose the Right Billing Mode

On-demand DynamoDB costs about 3.5X the price of fully utilized provisioned capacity (quite an improvement over the previous 7X). However, achieving full utilization of provisioned capacity is difficult because overprovisioning is often necessary to handle traffic spikes. Generally, roughly 28-29% utilization is needed to make provisioned capacity cost-effective.
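That utilization threshold falls straight out of the 3.5X multiplier: provisioned capacity only wins once you actually use more than 1/3.5 ≈ 28.6% of what you pay for. A sketch:

```python
ON_DEMAND_MULTIPLIER = 3.5  # on-demand ~= 3.5x fully utilized provisioned capacity

def cheaper_mode(avg_utilization: float) -> str:
    """Provisioned wins once you use more than 1/3.5 (about 28.6%) of what you pay for."""
    return "provisioned" if avg_utilization > 1 / ON_DEMAND_MULTIPLIER else "on-demand"

print(f"break-even: {1 / ON_DEMAND_MULTIPLIER:.1%}")  # break-even: 28.6%
print(cheaper_mode(0.20))   # on-demand
print(cheaper_mode(0.40))   # provisioned
```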

For smaller workloads or those with unpredictable traffic, on-demand is often the better choice. Alex advises: “Use on-demand until it hurts. If your DynamoDB bill is under $1,000 a month, don’t spend too much time optimizing provisioned capacity. Instead, set it to on-demand and see how it goes. Once costs start to rise, then consider whether it’s worth optimizing. If you’re using provisioned capacity, aim for at least 28.8% utilization. If you’re not hitting that, switch to on-demand. Autoscaling can help with provisioned capacity – as long as your traffic doesn’t have rapid spikes. For stable, predictable workloads, reserved capacity (purchased a year in advance) can save you a lot of money.”

Review Table Storage Classes Monthly

Review your table storage classes every month. When deciding between storage classes, the key metric is whether your total operations costs exceed 2.4X your total storage costs. If they do, Standard storage is preferable; otherwise, Standard-IA is the better choice.
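The 2.4X rule of thumb follows from Standard-IA charging roughly 25% more for operations while charging roughly 60% less for storage (ratios assumed from the per-gigabyte prices quoted earlier): IA saves money when 0.60 × storage cost outweighs 0.25 × operations cost, i.e., when operations cost less than 2.4X storage. A sketch:

```python
def better_storage_class(monthly_ops_cost: float, monthly_storage_cost: float) -> str:
    """IA wins when its ~60% storage discount outweighs its ~25% operations premium.

    Break-even: 0.25 * ops == 0.60 * storage  =>  ops == 2.4 * storage.
    """
    if monthly_ops_cost > 2.4 * monthly_storage_cost:
        return "STANDARD"
    return "STANDARD_INFREQUENT_ACCESS"

print(better_storage_class(500.0, 100.0))  # ops-heavy table -> STANDARD
print(better_storage_class(100.0, 500.0))  # storage-heavy -> STANDARD_INFREQUENT_ACCESS
```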

Also, be aware that the optimal setting could vary over time. Per Alex, “Standard storage is usually cheaper at first. For example, writing a kilobyte of data costs roughly the same as storing it for five months, so you’ll likely start in standard storage. However, over time, as your data grows, storage costs increase, and it may be worth switching to standard IA.”

Another tip on this front: use TTL to your advantage. If you don’t need to keep data forever, use TTL to automatically expire it. This will help with storage costs.
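Enabling TTL is just configuration plus a timestamp attribute on each item. A boto3-style sketch with hypothetical names (the attribute must hold a Unix epoch timestamp in seconds):

```python
import time

# boto3-style UpdateTimeToLive parameters; "orders"/"expires_at" are made-up names.
ttl_config = {
    "TableName": "orders",
    "TimeToLiveSpecification": {"Enabled": True, "AttributeName": "expires_at"},
}
# boto3.client("dynamodb").update_time_to_live(**ttl_config)

# Each item then carries its own expiry as a Unix epoch timestamp (seconds):
item = {"pk": "ORDER#1", "expires_at": int(time.time()) + 90 * 24 * 3600}  # ~90 days
```

A nice bonus: TTL deletions consume no write capacity, so expired data costs nothing to remove.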

“DynamoDB pricing should influence how you build your application”

Alex left us with this thought: “DynamoDB pricing should influence how you build your application. You should consider these cost multipliers when designing your data model because you can easily see the connection between resource usage and cost, ensuring you’re getting value from it. For example, if you’re thinking about adding a secondary index, run the numbers to see if it’s better to over-read from your main table instead of paying the write cost for a secondary index. There are many strategies you can use.”

Browse our DynamoDB Resources 

Learn how ScyllaDB Compares to DynamoDB

 

About Cynthia Dunlop

Cynthia is Senior Director of Content Strategy at ScyllaDB. She has been writing about software development and quality engineering for 20+ years.
