Monster Scale Summit

Extreme scale engineering

Discover the latest trends and best practices impacting data-intensive applications.

DynamoDB Cost Optimization Considerations and Strategies

Alex DeBrie · 21 minutes

In This NoSQL Presentation

How to optimize performance and reduce costs with data modeling.

Monster Scale Summit 2025

Alex DeBrie, DeBrie Advisory, Principal and Founder

Alex DeBrie is the Principal and Founder of DeBrie Advisory.

Additional Details

Summary: DynamoDB expert Alex DeBrie explains DynamoDB’s pricing model and how to cut costs. He covers RCUs and WCUs, storage charges, and how billing is decoupled from performance via throttling. He walks through macro settings (provisioned vs. on-demand capacity, storage classes) and micro multipliers (item size, secondary indexes, transactions, global tables, consistency level, query patterns). He closes by urging regular audits: run the numbers monthly, trim indexes and item bloat, use TTL, and switch capacity modes when utilization changes.
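
To ground the pricing model, here is a minimal sketch of DynamoDB’s request-based billing in Python. The rates are illustrative on-demand prices (an assumption; actual rates vary by region and change over time, so verify against current AWS pricing):

```python
# Illustrative on-demand rates (assumptions, not current quotes):
READ_PRICE = 0.125 / 1_000_000    # per read request unit (one <=4 KB strongly consistent read)
WRITE_PRICE = 0.625 / 1_000_000   # per write request unit (one <=1 KB write)
STORAGE_PRICE = 0.25              # per GB-month, standard table class

def monthly_bill(read_units: float, write_units: float, storage_gb: float) -> float:
    """DynamoDB bills per request and per byte stored, not per instance:
    an idle table costs only its storage."""
    return (read_units * READ_PRICE
            + write_units * WRITE_PRICE
            + storage_gb * STORAGE_PRICE)

# e.g. 100M read units, 20M write units, 50 GB stored:
print(f"${monthly_bill(100e6, 20e6, 50):,.2f}")  # $37.50
```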

Topics discussed

  • What RCUs and WCUs represent and how DynamoDB bills per read/write instead of instance size.
  • How billing and performance are separated, with throttling replacing long tail latency as load grows.
  • What macro configuration choices (provisioned vs. on-demand capacity, standard vs. standard-IA storage classes) do to your bill.
  • How item size, secondary indexes, transactions, global tables, consistent reads, and query vs. batch get multiply RCUs/WCUs (see the sketch after this list).
  • Why auditing utilization, storage ratios, and index usage monthly keeps costs aligned with value.
  • How TTL, projections, vertical partitioning, and reduced attribute names shrink item size and write cost.
  • When provisioned capacity, autoscaling, or reserved capacity make sense compared to on-demand.
  • What tradeoffs exist between strongly consistent and eventually consistent reads.
  • How lessons apply to other systems (e.g., Postgres TOAST, S3/EBS storage economics).
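
The multipliers above compound on every request. Here is a hedged sketch of how they stack, following DynamoDB’s published rounding rules (reads billed in 4 KB chunks, writes in 1 KB chunks); the GSI and global-table terms are simplified worst cases, not an official calculator:

```python
import math

def read_units(item_bytes: int, consistent: bool = False, transactional: bool = False) -> float:
    """Reads are billed in 4 KB chunks: eventually consistent reads cost
    half a unit per chunk, strongly consistent one, transactional two."""
    chunks = math.ceil(item_bytes / 4096)
    if transactional:
        return chunks * 2
    return chunks if consistent else chunks * 0.5

def write_units(item_bytes: int, gsi_count: int = 0, regions: int = 1,
                transactional: bool = False) -> int:
    """Writes are billed in 1 KB chunks, then multiplied by every feature
    in play. The GSI term assumes each index projects the full item
    (worst case), and replicated writes are priced here at the base rate
    for simplicity."""
    chunks = math.ceil(item_bytes / 1024)
    base = chunks * (2 if transactional else 1)   # transactions double the base write
    gsi = chunks * gsi_count                      # one extra write per affected GSI
    return (base + gsi) * regions                 # global tables repeat the write per region

# A 3.5 KB item: a plain single-region write vs. a transactional write
# with two GSIs replicated to two regions is an 8x difference.
print(write_units(3584))                                              # 4
print(write_units(3584, gsi_count=2, regions=2, transactional=True))  # 32
```

Trimming the item under a chunk boundary or dropping one unused index cuts every product in that chain, which is the point of the audits below.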

Takeaways

  • Mind every multiplier: large items, extra indexes, transactions, global tables, and consistent reads all linearly increase RCUs/WCUs. Track them in code reviews and model design. Strip unused attributes, project only needed fields, and send blobs to S3 instead of DynamoDB.
  • Audit monthly: compare actual utilization to the ~29% break-even for provisioned vs. on-demand (see the break-even sketch after these takeaways), check the 2.4× ops-to-storage ratio for storage classes, and drop idle GSIs. Simple spreadsheets are enough; flip modes without touching application code.
  • Use transactions and global tables sparingly. Reserve them for high-value, low-volume operations or true multi-region needs, since they double writes, storage, and coordination cost.
  • Prefer eventually consistent reads and batch gets when correctness allows. You reduce leader routing and node fan-out, keeping both latency and cost predictable.
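
As a worked check of the ~29% figure above, here is a back-of-the-envelope break-even for writes, again with illustrative prices (assumptions; verify against current AWS rates before switching modes):

```python
PROVISIONED_WCU_HOUR = 0.00065      # $ per provisioned WCU-hour (assumed rate)
ON_DEMAND_WRITE_UNIT = 0.625 / 1e6  # $ per on-demand write request unit (assumed rate)

# One provisioned WCU can absorb up to 3600 writes in an hour; the same
# writes on demand would cost:
on_demand_equivalent = 3600 * ON_DEMAND_WRITE_UNIT

break_even = PROVISIONED_WCU_HOUR / on_demand_equivalent
print(f"{break_even:.1%}")  # ~28.9%: above this utilization, provisioned wins
```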

Top takeaway: Treat every multiplier—transactions, global tables, strong reads—as a deliberate tradeoff; only pay for them when they deliver clear value at scale.
