Apr 15, 2026 // aws

AWS DynamoDB Cost Optimization: Capacity Modes, Design Patterns, and the Charges That Surprise You

DynamoDB costs are non-obvious. The right capacity mode, table design, and read/write patterns can reduce your DynamoDB bill by 60-80%. Here's what to look at.

DynamoDB bills are one of the more common surprises in AWS cost audits. The service looks cheap at first — a few cents per million requests — until you’re running millions of requests per second and the bill is $40,000/month.

The cost levers in DynamoDB are non-obvious because they interact with table design decisions made months earlier. Here’s how to read a DynamoDB bill and what to change.


The two capacity modes

Everything in DynamoDB cost starts with capacity mode selection.

On-Demand mode:

  • Write request unit (WRU): $1.25 per million
  • Read request unit (RRU): $0.25 per million
  • No capacity provisioning, no planning required
  • Scales to any traffic level instantly

Provisioned mode:

  • Write capacity unit (WCU): $0.00065/hour ($0.47/month)
  • Read capacity unit (RCU): $0.00013/hour ($0.094/month)
  • With Reserved Capacity (1-year): ~50% discount
  • Requires capacity planning; auto-scaling available

When to use each:

On-Demand is appropriate for:

  • New tables with unpredictable traffic
  • Spiky workloads where average traffic sits far below peak
  • Dev/staging tables
  • Traffic under ~100 WCU / 200 RCU average sustained

Provisioned is appropriate for:

  • Stable, predictable traffic
  • Any table doing consistent throughput that you can measure
  • Traffic above ~100-200 average WCU where the per-unit cost difference adds up

The crossover math:

At 100 WCU sustained:

  • On-Demand: 100 writes/s × 3,600 s × 24 h × 30 d ≈ 259 million writes/month × $1.25/million = ~$324/month
  • Provisioned: 100 WCU × $0.47/month = $47/month

Provisioned is roughly 7x cheaper per unit of fully sustained throughput, and Reserved Capacity widens the gap further:

At 1,000 WCU sustained with 1-year Reserved Capacity (~50% off):

  • On-Demand: ~$3,240/month
  • Provisioned reserved: ~$235/month

The real crossover depends on your peak-to-average ratio. If your traffic is bursty (peak 10x average), On-Demand may be cheaper despite the higher per-unit cost, because Provisioned would have to be over-provisioned for the peak. As a rough rule, Provisioned wins once average utilization of your provisioned capacity stays above ~15%.
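A small calculator makes it easy to run the comparison for your own numbers. This is a sketch using us-east-1 list prices at the time of writing; the function names are illustrative, not an AWS API:

```python
# Hypothetical calculator comparing DynamoDB write costs across capacity modes.
# Assumed prices: On-Demand WRU $1.25/million; Provisioned WCU $0.00065/hour.

HOURS_PER_MONTH = 730
ON_DEMAND_WRU_PER_MILLION = 1.25
PROVISIONED_WCU_PER_HOUR = 0.00065

def on_demand_write_cost(avg_writes_per_sec, days=30):
    """Monthly On-Demand cost for a sustained write rate."""
    writes = avg_writes_per_sec * 86_400 * days
    return writes / 1e6 * ON_DEMAND_WRU_PER_MILLION

def provisioned_write_cost(wcu, reserved_discount=0.0):
    """Monthly Provisioned cost for a given WCU level (optional reserved discount)."""
    return wcu * PROVISIONED_WCU_PER_HOUR * HOURS_PER_MONTH * (1 - reserved_discount)

print(on_demand_write_cost(100))                            # ≈ 324.0
print(provisioned_write_cost(100))                          # ≈ 47.45
print(provisioned_write_cost(1000, reserved_discount=0.5))  # ≈ 237.25
```

Plug in your table's average throughput before deciding; the answer flips quickly as utilization drops.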


RCU math and the expensive reads

This is where most teams lose money without realizing it.

Read Capacity Unit (RCU) calculation:

  • 1 RCU = 1 strongly consistent read per second for items up to 4 KB
  • 1 RCU = 2 eventually consistent reads per second for items up to 4 KB
  • Items larger than 4 KB: rounded up to nearest 4 KB increment

A 10 KB item costs 3 RCUs per strongly consistent read (10/4 = 2.5, rounded up to 3).

A 100 KB item costs 25 RCUs per read.
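The rounding rule can be written down directly. A sketch — `rcu_per_read` is an illustrative helper, not an AWS API:

```python
import math

def rcu_per_read(item_kb, strongly_consistent=True):
    """RCUs consumed by one read: item size rounded up to the nearest
    4 KB increment; eventually consistent reads cost half."""
    units = math.ceil(item_kb / 4)
    return units if strongly_consistent else units / 2

print(rcu_per_read(10))                              # 3
print(rcu_per_read(100))                             # 25
print(rcu_per_read(10, strongly_consistent=False))   # 1.5
```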

The expensive pattern teams don’t notice: Storing large blobs (JSON documents, serialized objects) in DynamoDB items and reading them frequently. A 50 KB item read 1 million times per day costs:

  • 50/4 = 12.5 → 13 RCUs per read
  • 1,000,000 × 13 RCUs = 13 million RCUs/day
  • On-Demand: 13M × $0.25/million = $3.25/day = ~$98/month for one access pattern on one table

The fix: store large blobs in S3 and keep only the S3 key in DynamoDB. The item stays small (under 4 KB), and RCU consumption drops dramatically.
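One way to apply that fix is to split oversized attributes out before writing. A sketch — `split_item`, the size threshold, and the `s3://` pointer convention are all illustrative choices, not a standard API:

```python
import json

BLOB_THRESHOLD_BYTES = 4_096  # keep the DynamoDB item within one 4 KB read unit

def split_item(item, bucket, key_prefix):
    """Replace oversized attribute values with S3 pointer strings.
    Returns (slim_item, blobs) where blobs maps S3 keys to payloads the
    caller should upload to S3 before writing slim_item to DynamoDB."""
    slim, blobs = {}, {}
    for name, value in item.items():
        encoded = json.dumps(value).encode()
        if len(encoded) > BLOB_THRESHOLD_BYTES:
            s3_key = f"{key_prefix}/{name}.json"
            blobs[s3_key] = encoded
            slim[name] = f"s3://{bucket}/{s3_key}"
        else:
            slim[name] = value
    return slim, blobs

slim, blobs = split_item(
    {"pk": "USER#1", "profile": "x" * 50_000},  # 50 KB attribute
    bucket="my-blob-bucket", key_prefix="USER#1",
)
# slim["profile"] is now an S3 pointer; the DynamoDB item is a few bytes
```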


WCU math and write amplification

Write Capacity Unit (WCU) calculation:

  • 1 WCU = 1 write per second for items up to 1 KB
  • Items larger than 1 KB: rounded up to nearest 1 KB increment

The 1 KB threshold (vs. 4 KB for reads) means a write consumes 4x the capacity units per byte that a read does — and since a write unit also costs 5x more than a read unit on On-Demand, writes end up roughly 20x more expensive per byte than strongly consistent reads ($1.25/million per 1 KB vs. $0.25/million per 4 KB).

Write amplification from GSIs: Every Global Secondary Index (GSI) is a full copy of the projected attributes. A write to a table with 3 GSIs effectively costs 4x the WCUs of a write to a table with no GSIs (1 base table + 3 index writes).

If you’re writing to a high-throughput table and have GSIs with large attribute projections:

  1. Use KEYS_ONLY or specific attribute projections on GSIs instead of ALL
  2. Remove GSIs that aren’t being queried
  3. Consolidate access patterns to reduce GSI count
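A simplified estimate of this write amplification can be sketched as follows (it ignores the extra write DynamoDB charges when a GSI key changes — an approximation, not exact billing logic):

```python
import math

def wcus_per_write(item_kb, gsi_projected_kbs):
    """Approximate WCUs for one PutItem: the base table write plus one
    write per GSI, each rounded up to the nearest 1 KB increment."""
    base = math.ceil(item_kb)
    gsis = sum(math.ceil(kb) for kb in gsi_projected_kbs)
    return base + gsis

# 2 KB item, three GSIs projecting ALL attributes (2 KB each): 8 WCUs per write
print(wcus_per_write(2, [2, 2, 2]))        # 8
# Same item with KEYS_ONLY projections (~0.1 KB each, still rounded to 1 KB): 5 WCUs
print(wcus_per_write(2, [0.1, 0.1, 0.1]))  # 5
```

Even KEYS_ONLY projections pay the 1 KB minimum per GSI, which is why removing unused GSIs outright saves more than trimming projections.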

The Global Tables cost multiplier

Global Tables (multi-region replication) replicate every write to every region. Cost structure:

  • Replicated Write Request Units (rWRU): $1.875 per million (1.5x the On-Demand WRU price)
  • Each replica region charges rWRUs independently

A table replicated across 3 regions pays for every write in all 3 regions at the rWRU price — roughly 4.5x the write cost of a single-region On-Demand table (3 regions × 1.5x per region).

If you enabled Global Tables for a workload that doesn’t need true multi-region active-active (most don’t), this is a significant cost to reconsider. Read replicas for low-latency reads in other regions can be done more cheaply with DynamoDB Accelerator (DAX) or application-level caching.
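The multiplier is easy to compute. A sketch that assumes the 1.5x rWRU price applies in every region of the table, per current Global Tables pricing — verify against your own bill:

```python
ON_DEMAND_WRU = 1.25    # $/million writes, single-region
REPLICATED_WRU = 1.875  # $/million writes, Global Tables (1.5x)

def global_table_write_cost(millions_of_writes, regions):
    """Monthly write cost: every region charges rWRUs for every write."""
    if regions <= 1:
        return millions_of_writes * ON_DEMAND_WRU
    return millions_of_writes * REPLICATED_WRU * regions

print(global_table_write_cost(100, 1))  # 125.0
print(global_table_write_cost(100, 3))  # 562.5 — 4.5x the single-region cost
```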


DynamoDB Accelerator (DAX) — when it helps and when it doesn’t

DAX is an in-memory cache for DynamoDB that returns cached results for GetItem and Query calls. Cache hit = no RCU consumed. Cache miss = RCU consumed + DAX charges.

DAX pricing: $0.269/hour for a dax.r4.large node (3-node minimum for production = $580/month).

When DAX saves money: If you’re spending more than $580/month on DynamoDB reads and your access patterns have good cache hit rates (repetitive reads of the same items).

When DAX costs more: Workloads with low cache hit rates (high cardinality keys, write-heavy tables, infrequently accessed items). You pay DAX costs plus full DynamoDB read costs on cache misses.

Simpler alternative: Application-level caching with ElastiCache Redis is often cheaper for read-heavy workloads. DAX is only appropriate if you need sub-millisecond read latency and want AWS to manage the caching layer.
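The break-even works out simply: DAX saves money only when the cache hit rate exceeds the ratio of DAX cost to current read spend. A sketch that ignores DAX request charges and treats misses as full-price reads:

```python
def dax_breakeven_hit_rate(monthly_read_spend, monthly_dax_cost=580.0):
    """Minimum cache hit rate at which DAX breaks even.
    With hit rate h: total = dax + (1 - h) * reads, so break-even is
    h = dax / reads (capped at 1.0 when DAX can never pay off)."""
    return min(monthly_dax_cost / monthly_read_spend, 1.0)

print(dax_breakeven_hit_rate(2_000))  # 0.29 — need >29% hits to come out ahead
print(dax_breakeven_hit_rate(500))    # 1.0  — DAX can never pay for itself here
```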


Table design decisions that drive cost

Partition key selection

Poor partition key choice causes hot partitions. A hot partition means a small number of partition keys receive the majority of requests — those partition keys hit the 3,000 RCU / 1,000 WCU per-partition limit, throttling occurs, and you have to over-provision capacity to avoid it.

High-cardinality partition keys distribute load evenly. Low-cardinality partition keys (e.g., status with values active/inactive) concentrate load.

For Provisioned mode, hot partitions are the most common cause of throttling and over-provisioning costs.
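When a low-cardinality key is unavoidable, a common mitigation is write sharding: append a suffix to spread one logical key across N partitions, and fan reads out across all suffixes. A sketch — the `#` separator and shard count are arbitrary application conventions, not a DynamoDB feature:

```python
import random

NUM_SHARDS = 10

def sharded_pk(base_key, shard=None):
    """Build a partition key like 'active#7' to spread a hot logical key."""
    if shard is None:
        shard = random.randrange(NUM_SHARDS)  # writes pick a random shard
    return f"{base_key}#{shard}"

def all_shards(base_key):
    """Reads must query every shard key and merge the results."""
    return [sharded_pk(base_key, s) for s in range(NUM_SHARDS)]

print(sharded_pk("active", shard=7))  # active#7
print(len(all_shards("active")))      # 10
```

The trade-off is N queries per read instead of one, so sharding only pays off when the write-side throttling costs more than the read-side fan-out.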

Item size

Design for small items. The cost functions (1 WCU per KB for writes, 1 RCU per 4 KB for reads) penalize large items heavily. Attributes that are rarely accessed should be in a separate table or S3.

Single-table design

Consolidating multiple entity types into a single table (single-table design) reduces:

  • Number of tables (fewer table-level costs)
  • Cross-entity query RCU consumption (related items co-located on same partition)
  • GSI count when access patterns are carefully planned

Single-table design has a learning curve but is the canonical DynamoDB pattern for cost-efficient high-throughput workloads.
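As an illustration of the co-location benefit, here is a hypothetical key schema that keeps customers and their orders on the same partition (entity prefixes like `CUSTOMER#` are a widespread convention, not something DynamoDB requires):

```python
def customer_item(customer_id, name):
    """Customer profile item: PK identifies the partition, SK the entity."""
    return {"PK": f"CUSTOMER#{customer_id}", "SK": "PROFILE", "name": name}

def order_item(customer_id, order_id, total):
    # Same PK as the customer: one Query on the partition returns the
    # profile plus every order, instead of N separate reads or a join.
    return {"PK": f"CUSTOMER#{customer_id}", "SK": f"ORDER#{order_id}", "total": total}

print(customer_item("123", "Ada")["PK"])      # CUSTOMER#123
print(order_item("123", "9001", 42.0)["SK"])  # ORDER#9001
```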


TTL — the free delete

DynamoDB Time-To-Live (TTL) automatically deletes items when a specified timestamp attribute expires. TTL deletions are free — they don’t consume WCUs.

Any DynamoDB table storing time-bounded data (sessions, ephemeral tokens, cache entries, event logs) should use TTL. Without TTL, you’re paying for stored data indefinitely and paying WCUs to delete it manually.

TTL deletions happen within 48 hours of expiration (not exactly at expiration time). If exact deletion timing matters, don’t rely on TTL — use conditional writes or scheduled Lambdas.
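Enabling TTL is one call per table plus an epoch-seconds attribute on each item. A sketch — the table and attribute names are placeholders, and the boto3 call is shown commented so the snippet runs without AWS credentials:

```python
import time

TTL_ATTRIBUTE = "expires_at"  # plain Number attribute holding epoch seconds

def session_item(session_id, ttl_days=7):
    """Build a session item that DynamoDB deletes for free after expiry."""
    return {
        "PK": f"SESSION#{session_id}",
        TTL_ATTRIBUTE: int(time.time()) + ttl_days * 86_400,
    }

# One-time setup per table (requires credentials):
# import boto3
# boto3.client("dynamodb").update_time_to_live(
#     TableName="sessions",
#     TimeToLiveSpecification={"Enabled": True, "AttributeName": TTL_ATTRIBUTE},
# )

item = session_item("abc123")
print(item["expires_at"] > time.time())  # True
```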


Storage costs

DynamoDB storage: $0.25/GB-month (Standard class), $0.10/GB-month (Standard-Infrequent Access).

Standard-IA is appropriate for tables with low read/write frequency. The trade-off: Standard-IA charges roughly 25% more per request (~$0.31 vs. $0.25 per million read request units, ~$1.56 vs. $1.25 per million write request units on On-Demand). For truly infrequently accessed tables (archival, audit logs), IA is worth evaluating.

For tables with large storage footprints, TTL + archival to S3 is usually more cost-effective than paying $0.25/GB-month indefinitely for data you rarely access.
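Whether Standard-IA wins depends on storage size versus request volume. A sketch using the prices above (assuming the ~25% request uplift for IA; verify current prices for your region before deciding):

```python
def standard_monthly(gb, read_millions, write_millions):
    """Approximate monthly cost on the Standard table class (On-Demand)."""
    return gb * 0.25 + read_millions * 0.25 + write_millions * 1.25

def standard_ia_monthly(gb, read_millions, write_millions):
    """Approximate monthly cost on Standard-Infrequent Access."""
    return gb * 0.10 + read_millions * 0.31 + write_millions * 1.56

# 500 GB audit-log table, barely touched: IA wins
print(standard_monthly(500, 1, 1))        # ≈ 126.5
print(standard_ia_monthly(500, 1, 1))     # ≈ 51.87

# 20 GB hot table with heavy traffic: Standard wins
print(standard_monthly(20, 400, 100))     # ≈ 230.0
print(standard_ia_monthly(20, 400, 100))  # ≈ 282.0
</```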


Practical cost reduction checklist

For each high-cost DynamoDB table:

  1. Check capacity mode fit — Is traffic steady enough to justify Provisioned + Reserved Capacity? If yes, the savings can be 50-70% vs. On-Demand.

  2. Check item sizes — Are any items over 10 KB? Candidates for S3 offload of large attributes.

  3. Check GSI count and projections — Are all GSIs being used? Are projections ALL when KEYS_ONLY would suffice?

  4. Check for Global Tables — Is multi-region replication actually needed? Each unnecessary replica region adds another full set of replicated writes at the higher rWRU price.

  5. Enable TTL — For any table with time-bounded data.

  6. Check for scan operations — Scan reads every item in the table. At scale, a single Scan can consume more RCUs than the entire normal workload. Queries should always use the partition key or a GSI.

  7. Consider DAX — Only if read costs exceed DAX operational cost and cache hit rates will be high.


DynamoDB cost problems almost always trace back to design decisions made early in the project — item size, access patterns, and capacity mode. If your DynamoDB spend has grown beyond expectations, a cost audit is the right first step.


Nick Allevato is an AWS Certified Solutions Architect Professional with 20 years of infrastructure experience. He runs Cold Smoke Consulting, an independent AWS consulting practice.

