Apr 16, 2026 // aws

AWS ElastiCache: Redis vs Memcached and When to Use Each

ElastiCache Redis and Memcached both cache data, but they have very different capabilities, durability models, and cost profiles. Here's how to choose and what the gotchas are.

ElastiCache offers two engines: Redis and Memcached. They share a purpose — reduce database load by caching frequently accessed data — but beyond that they’re substantially different services with different use cases, failure models, and cost structures.

Here’s how to think through the choice.


What each engine does

Memcached is a simple, distributed key-value store. It stores strings. It’s multi-threaded. It has no persistence, no replication, and no cluster failover. When a Memcached node fails, the data on that node is gone. It’s fast, simple, and horizontally scalable.

Redis is a data structure server. It stores strings, hashes, lists, sets, sorted sets, bitmaps, hyperloglogs, and streams. It supports pub/sub messaging. It supports Lua scripting. It has persistence (RDB snapshots and AOF logging), replication (primary with read replicas), and automatic failover. It’s the more capable engine by a wide margin.

In practice, teams that start with Memcached because it’s simpler often end up needing Redis features anyway. Most teams building anything beyond simple string caching should start with Redis.


Feature comparison

Feature            | Redis                                              | Memcached
-------------------|----------------------------------------------------|------------------------
Data structures    | Strings, hashes, lists, sets, sorted sets, streams | Strings only
Persistence        | RDB snapshots + AOF                                | None
Replication        | ✓ (primary + replicas)                             | ✗
Automatic failover | ✓ (Multi-AZ)                                       | ✗
Cluster mode       | ✓ (sharding)                                       | ✓ (horizontal scaling)
Pub/sub            | ✓                                                  | ✗
Lua scripting      | ✓                                                  | ✗
Transactions       | ✓ (MULTI/EXEC)                                     | ✗
Streams            | ✓                                                  | ✗
Threading          | Single-threaded (I/O threads since Redis 6)        | Multi-threaded
Max item size      | 512 MB                                             | 1 MB

Cost comparison

ElastiCache pricing is by node type and hours running. Both engines use the same node types.

Sample pricing (us-east-1, On-Demand):

Node              | vCPUs | Memory    | On-Demand/hr | Reserved 1-yr (effective/hr)
------------------|-------|-----------|--------------|-----------------------------
cache.t4g.micro   | 2     | 0.5 GB    | $0.0128      | ~$0.008
cache.t4g.medium  | 2     | 3.1 GB    | $0.068       | ~$0.044
cache.r7g.large   | 2     | 13.07 GB  | $0.219       | ~$0.138
cache.r7g.xlarge  | 4     | 26.32 GB  | $0.438       | ~$0.277
cache.r7g.4xlarge | 16    | 105.81 GB | $1.751       | ~$1.106

Multi-AZ Redis (recommended for production): Requires a primary node + at least one replica. Cost doubles for the replica. A production cache.r7g.large cluster (primary + 1 replica) costs $0.219 × 2 × 730 hrs = ~$320/month On-Demand, ~$200/month Reserved.
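The arithmetic generalizes to any node type. A quick sketch using the sample On-Demand and Reserved rates from the table above (HOURS_PER_MONTH and monthly_cost are illustrative names; check current pricing before budgeting):

```python
# Estimate the monthly cost of a Multi-AZ Redis replication group.
# Hourly rates are the sample us-east-1 prices quoted above.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, num_nodes: int) -> float:
    """Total cost per month for all nodes in the group."""
    return hourly_rate * num_nodes * HOURS_PER_MONTH

# cache.r7g.large, primary + 1 replica
print(round(monthly_cost(0.219, 2)))  # ~$320/month On-Demand
print(round(monthly_cost(0.138, 2)))  # ~$201/month at the 1-yr Reserved rate
```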

Memcached multi-node: Each node is independent; you pay per node. No standby cost because there’s no replication.

Redis Serverless (new): Priced at $0.125/GB-hour of data stored plus $0.0034 per million ECPUs (request units). Removes capacity planning, but costs are harder to predict for high-throughput workloads. A good fit for spiky or unpredictable traffic; generally more expensive than provisioned nodes at sustained load.
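Ignoring ECPU request charges (which only widen the gap at high throughput), a rough storage-only break-even against a provisioned cache.r7g.large, using the sample rates above:

```python
# Storage-only break-even between Serverless ($/GB-hour stored) and a
# provisioned cache.r7g.large ($/hour flat). Request charges are ignored,
# so this is a floor, not a full comparison.
SERVERLESS_PER_GB_HR = 0.125
R7G_LARGE_PER_HR = 0.219

breakeven_gb = R7G_LARGE_PER_HR / SERVERLESS_PER_GB_HR
print(round(breakeven_gb, 2))  # ~1.75 GB stored
```

Above roughly 1.75 GB of steady-state data, the provisioned node is already cheaper on storage alone.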


When to use Redis

Session storage: HTTP session data in Redis with TTL is the canonical use case. Redis handles the expiration natively, replicates for durability, and fails over automatically. A lost Memcached node means lost sessions — users are logged out. Redis Multi-AZ means lost nodes trigger failover with minimal data loss.
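A minimal in-memory sketch of the SETEX/GET semantics Redis provides natively — TTLStore is an illustrative stand-in, not a real client (with redis-py the equivalent calls are r.setex(key, ttl, value) and r.get(key)):

```python
import time

class TTLStore:
    """In-memory stand-in for Redis SETEX/GET expiration semantics."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at)

    def setex(self, key, ttl_seconds, value):
        """Store a value that expires after ttl_seconds, like SETEX."""
        self._data[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        """Return the value, or None if missing or expired."""
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # lazily expire, as Redis does
            del self._data[key]
            return None
        return value

store = TTLStore()
store.setex("session:abc123", 1800, '{"user_id": 42}')  # 30-minute session
print(store.get("session:abc123"))
```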

Sorted sets for leaderboards/rankings: Redis sorted sets (ZADD, ZRANGE, ZRANGEBYSCORE) handle leaderboard data natively — ordered by score, efficiently queryable by rank range. Building this on Memcached requires external sorting logic.
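The sorted-set semantics can be sketched in memory — Leaderboard is an illustrative name, and with redis-py the real calls would be r.zadd("board", {member: score}) and r.zrevrange("board", 0, n - 1, withscores=True):

```python
class Leaderboard:
    """In-memory sketch of ZADD / ZREVRANGE leaderboard semantics."""

    def __init__(self):
        self._scores = {}  # member -> score, as in a Redis sorted set

    def zadd(self, member, score):
        """Insert or update a member's score, like ZADD."""
        self._scores[member] = score

    def top(self, n):
        """Highest n (member, score) pairs, like ZREVRANGE 0 n-1 WITHSCORES."""
        ranked = sorted(self._scores.items(), key=lambda kv: -kv[1])
        return ranked[:n]

board = Leaderboard()
for player, score in [("alice", 3200), ("bob", 4100), ("carol", 2750)]:
    board.zadd(player, score)
print(board.top(2))  # [('bob', 4100), ('alice', 3200)]
```

Redis does this server-side with O(log N) inserts, so clients never re-sort the whole set.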

Rate limiting: Redis atomic operations (INCR + EXPIRE) support fixed-window rate limiting without race conditions, and Lua scripting allows atomic multi-step patterns such as check-and-increment and sliding windows. Those multi-step atomic patterns aren’t possible in Memcached without external coordination.
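A minimal in-memory sketch of the INCR-per-window counter pattern in its simplest fixed-window form (FixedWindowLimiter is an illustrative name; in Redis the increment is atomic server-side, and EXPIRE drops old window keys automatically):

```python
import time

class FixedWindowLimiter:
    """Fixed-window counter: at most `limit` calls per `window_seconds`."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self._counts = {}  # (key, window_number) -> count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        window_number = int(now // self.window)   # same bucket within a window
        bucket = (key, window_number)
        count = self._counts.get(bucket, 0) + 1   # INCR equivalent
        self._counts[bucket] = count              # EXPIRE would reap old buckets
        return count <= self.limit

limiter = FixedWindowLimiter(limit=3, window_seconds=60)
results = [limiter.allow("user:42", now=1000.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```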

Pub/sub and real-time messaging: Redis pub/sub supports basic fanout messaging. Not a replacement for SQS or SNS at scale, but useful for lightweight real-time features (live notifications, chat, presence).

Geospatial queries: Redis geospatial commands (GEOADD, GEODIST, GEOSEARCH — GEORADIUS in versions before 6.2) store latitude/longitude and support proximity queries. Useful for “find stores near me” patterns without a full geospatial database.
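GEODIST returns great-circle distance between stored points; the haversine computation below sketches the same math (EARTH_RADIUS_M is a common approximation — Redis uses its own slightly different constant):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, in meters

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters between two (lon, lat) points."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# One degree of longitude at the equator is roughly 111 km
print(round(haversine_m(0.0, 0.0, 1.0, 0.0)))
```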

Caching with complex invalidation: Redis key patterns, pub/sub-based invalidation, and hash structures support more sophisticated cache invalidation strategies than the simple key-delete available in Memcached.


When to use Memcached

Simple object caching at very high throughput: Memcached’s multi-threaded architecture can handle more concurrent connections per node than Redis on equivalent hardware. For pure caching workloads where data loss on node failure is acceptable and you don’t need persistence or replication, Memcached performs well.

Horizontal scaling simplicity: Memcached’s client-side sharding model is simpler to reason about than Redis Cluster’s hash slot model. For teams that want to add cache nodes and have clients automatically distribute keys, Memcached’s consistent hashing approach is straightforward.
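The consistent-hashing idea can be sketched briefly — HashRing is an illustrative name, and real Memcached clients (e.g. ketama-style) use the same ring-with-virtual-nodes approach with tuned hash functions:

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring for client-side key sharding."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets `vnodes` points on the ring so keys
        # spread evenly and only ~1/N of keys move when a node changes.
        self._ring = []  # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):
                bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        """Walk clockwise from the key's hash to the next virtual node."""
        idx = bisect.bisect(self._ring, (self._hash(key), ""))
        if idx == len(self._ring):
            idx = 0  # wrap around the ring
        return self._ring[idx][1]

ring = HashRing(["cache-1", "cache-2", "cache-3"])
print(ring.node_for("user:42"))  # deterministic for a given node set
```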

When you truly don’t care about data loss: Caching computed results that can always be recomputed from the source — query results, rendered HTML fragments, computed aggregates — Memcached is a valid choice. The cache miss cost is the recomputation cost, which is often acceptable.

In practice, these cases are less common than they appear. Most teams that think they don’t care about cache durability discover they do when a Memcached node fails during peak traffic and the database gets hit by all the cache misses simultaneously.


ElastiCache Redis architecture patterns

Single-node (dev/staging only)

One primary node, no replicas. Appropriate for development and staging. Fails completely on node failure. Never use in production for anything that matters.

Multi-AZ with automatic failover

One primary node, one or more read replicas in different AZs. On primary failure, ElastiCache promotes a replica to primary automatically (typically 30-60 seconds). Use for all production workloads.

resource "aws_elasticache_replication_group" "main" {
  replication_group_id       = "app-cache"
  description                = "Application cache"
  node_type                  = "cache.r7g.large"
  num_cache_clusters         = 2  # primary + 1 replica
  automatic_failover_enabled = true
  multi_az_enabled           = true
  
  engine_version = "7.1"
  port           = 6379

  subnet_group_name  = aws_elasticache_subnet_group.main.name
  security_group_ids = [aws_security_group.elasticache.id]

  at_rest_encryption_enabled = true
  transit_encryption_enabled = true
  auth_token                 = var.redis_auth_token  # Redis AUTH password
  
  snapshot_retention_limit = 1  # Daily snapshot, 1 day retention
}

Cluster Mode (Redis Cluster)

Shards data across multiple node groups. Each shard has a primary + replicas. Use when data doesn’t fit in a single node or when you need throughput beyond what a single primary can provide. Adds client complexity — the client must be cluster-aware.

Most applications don’t need Redis Cluster. A single cache.r7g.2xlarge can handle hundreds of thousands of operations per second — more with pipelining. Reach for cluster mode only when you’ve hit actual throughput or memory limits.


Security configuration

Encryption in transit: Enable transit_encryption_enabled. Required for any production workload with sensitive data. Minor CPU overhead.

Encryption at rest: Enable at_rest_encryption_enabled. Encrypts data on disk (relevant for Redis persistence snapshots).

Redis AUTH: Set auth_token to require password authentication (ElastiCache requires transit encryption to be enabled to use an AUTH token). Combined with VPC security groups that restrict port 6379 to your application subnet, this is a defense-in-depth approach.

VPC placement: ElastiCache should always be in private subnets — never publicly accessible. Security groups should only allow inbound 6379 from your application security groups, not from anywhere.

resource "aws_security_group" "elasticache" {
  name   = "elasticache-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port       = 6379
    to_port         = 6379
    protocol        = "tcp"
    security_groups = [aws_security_group.app.id]  # Only from app tier
  }
}

Cost optimization for ElastiCache

Reserved Nodes: Like EC2 and RDS, ElastiCache offers 1-year and 3-year Reserved Nodes at 30-50% off On-Demand. For stable production clusters that have been running for several months, Reserved Nodes are a straightforward cost reduction.

Graviton instances (r7g, m7g, t4g): AWS Graviton-based ElastiCache nodes provide ~20% better price/performance than equivalent x86 instances (r6g vs. r5, etc.). Use Graviton by default unless you have a specific reason not to.

Right-size by memory utilization: Pull FreeableMemory from CloudWatch. If you’re consistently using less than 50% of node memory, the cluster is over-provisioned. Down-tier to the next smaller node type.

Eviction policy selection: The maxmemory-policy setting controls what happens when the cache is full. allkeys-lru (evict least recently used from all keys) is appropriate for pure caching workloads. volatile-lru (evict LRU only from keys with TTL) protects keys without TTL from eviction — useful when you’re mixing cacheable data with durable session data in the same cluster.
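The allkeys-lru behavior can be sketched with an OrderedDict (note this is exact LRU for illustration; Redis actually uses approximate LRU via random sampling):

```python
from collections import OrderedDict

class LRUCache:
    """Exact-LRU sketch of allkeys-lru eviction, capped by item count."""

    def __init__(self, max_items):
        self.max_items = max_items
        self._data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # touch: mark as most recently used
        return self._data[key]

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_items:
            self._data.popitem(last=False)  # evict the least recently used key

cache = LRUCache(max_items=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")       # 'a' is now the most recently used key
cache.set("c", 3)    # cache is full: 'b', the LRU key, is evicted
print(cache.get("b"))  # None -- evicted
print(cache.get("a"))  # 1 -- survived because it was touched
```

Redis caps by memory (maxmemory) rather than item count, but the recency logic is the same.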


The decision in one paragraph

For anything beyond the simplest pure-caching use case, use Redis. The durability, replication, data structures, and operational features (automatic failover, snapshots) are worth the slightly higher operational complexity. Memcached is appropriate for pure, ephemeral, simple-string caching at very high throughput where data loss on node failure is genuinely acceptable — a much narrower use case than most teams initially think.

If you’re sizing or designing an ElastiCache deployment for a production workload, I’m available to help scope it.


Nick Allevato is an AWS Certified Solutions Architect Professional with 20 years of infrastructure experience. He runs Cold Smoke Consulting, an independent AWS consulting practice.

