AWS offers three primary storage services that teams routinely confuse: EBS (block storage), EFS (file storage), and S3 (object storage). They have different performance characteristics, durability models, and cost structures — and using the wrong one costs money and creates operational problems.
Here’s how to choose.
The fundamental difference
EBS (Elastic Block Store) is block storage — the equivalent of a hard drive attached to a server. It can be mounted by only a single EC2 instance at a time (except with Multi-Attach, covered below). It’s accessed like a local filesystem. Latency is single-digit milliseconds.
EFS (Elastic File System) is a managed NFS file system. Multiple EC2 instances can mount and access the same file system simultaneously. It’s a shared filesystem — concurrent reads and writes from many machines to the same directory structure. Latency is low single-digit milliseconds, with some NFS protocol overhead.
S3 (Simple Storage Service) is object storage. You store and retrieve objects (files) via HTTP API calls — not filesystem mounts (unless using Mountpoint for Amazon S3 or a FUSE driver). No filesystem semantics, no locking, no directory operations in the traditional sense. Massively scalable. First-byte latency is typically 10-100 ms.
Cost comparison
| Service | Storage cost | I/O cost | Notes |
|---|---|---|---|
| EBS gp3 | $0.08/GB-month | $0.005/provisioned IOPS above 3,000 | Pay for provisioned capacity |
| EBS io2 | $0.125/GB-month | $0.065/provisioned IOPS | High-performance, NVMe |
| EFS Standard | $0.30/GB-month | None | Pay for actual storage used |
| EFS Standard-IA | $0.025/GB-month | $0.01/GB accessed | Infrequent access tier |
| S3 Standard | $0.023/GB-month | $0.0004/1000 GET, $0.005/1000 PUT | Pay for actual storage + requests |
| S3 Intelligent-Tiering | $0.023/GB-month (frequent) | Monitoring: $0.0025/1000 objects | Automatic tiering |
Key observations:
- EBS is priced on provisioned capacity — you pay whether you use it or not
- EFS Standard is 3.75x more expensive per GB than EBS gp3 (and roughly 13x S3 Standard), but has no I/O charges and supports concurrent access from many instances
- S3 is the cheapest at scale for data that doesn’t need filesystem semantics
- EFS Infrequent Access ($0.025/GB-month) is similar to S3 Standard pricing, making it competitive for large shared filesystems with mixed access patterns
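To make the break-even math concrete, here’s a minimal cost sketch using the per-GB list prices from the table above (request, IOPS, and data-access charges excluded for simplicity):

```python
# Storage-only monthly cost comparison, using the per-GB list prices
# from the table above. Request, IOPS, and data-access charges are
# deliberately excluded to keep the sketch simple.

PRICE_PER_GB_MONTH = {
    "EBS gp3": 0.08,
    "EFS Standard": 0.30,
    "EFS Standard-IA": 0.025,
    "S3 Standard": 0.023,
}

def monthly_cost(service: str, size_gb: float) -> float:
    """Storage-only monthly cost in USD for size_gb on service."""
    return PRICE_PER_GB_MONTH[service] * size_gb

for service in PRICE_PER_GB_MONTH:
    print(f"{service:16} 1 TB: ${monthly_cost(service, 1000):8.2f}/month")
```

At 1 TB, EFS Standard runs about $300/month against roughly $80 for gp3 and $23 for S3 Standard, which is why the "does this actually need a shared filesystem?" question matters.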
EBS: when to use it
Single EC2 instance workloads requiring low latency block access.
EBS is the right choice for:
- Operating system volumes (the root volume on most EC2 instances is an EBS volume)
- Database data files (MySQL, PostgreSQL, MongoDB, Elasticsearch indices) on self-managed instances
- Application data that requires filesystem semantics on a single server
- High-IOPS workloads — EBS io2 provides up to 64,000 IOPS per volume
Volume types:
- `gp3` — General purpose. 3,000 IOPS baseline, 125 MB/s throughput, configurable up to 16,000 IOPS and 1,000 MB/s. The default for most workloads.
- `gp2` — Older generation. IOPS scales with volume size (3 IOPS/GB). Use gp3 instead unless there’s a specific reason.
- `io2 Block Express` — Designed for databases requiring up to 256,000 IOPS and sub-millisecond latency. Oracle, SQL Server, high-traffic MySQL/PostgreSQL.
- `st1` — Throughput-optimized HDD. Sequential workloads (Kafka, Hadoop). Not suitable for random access.
- `sc1` — Cold HDD. Lowest-cost EBS. Infrequent access only.
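As a sketch of how gp3 pricing composes (capacity plus provisioned IOPS above the free baseline; extra-throughput billing omitted):

```python
# Monthly cost of an EBS gp3 volume: $0.08/GB-month for capacity plus
# $0.005 per provisioned IOPS above the free 3,000 baseline.
# (Throughput above the 125 MB/s baseline is also billed; omitted here.)

GP3_GB_MONTH = 0.08
GP3_IOPS_MONTH = 0.005
GP3_FREE_IOPS = 3000

def gp3_monthly_cost(size_gb: int, provisioned_iops: int = GP3_FREE_IOPS) -> float:
    extra_iops = max(0, provisioned_iops - GP3_FREE_IOPS)
    return size_gb * GP3_GB_MONTH + extra_iops * GP3_IOPS_MONTH

print(gp3_monthly_cost(500))         # 500 GB at the baseline 3,000 IOPS
print(gp3_monthly_cost(500, 10000))  # same volume with 7,000 extra IOPS billed
```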
EBS Multi-Attach: io1 and io2 volumes support Multi-Attach — mounting a single volume to up to 16 instances simultaneously. Requires cluster-aware filesystem (GFS2, OCFS2). Not appropriate for standard Linux filesystems (ext4, XFS) which aren’t designed for concurrent block-level writes. Niche use case (cluster databases, high availability applications with custom clustering logic).
EBS limitation: Volumes are Availability Zone-specific. A volume in us-east-1a cannot be attached to an instance in us-east-1b without snapshotting and creating a new volume. Plan your architecture accordingly — instances and their EBS volumes must be in the same AZ.
EFS: when to use it
Shared filesystem access from multiple EC2 instances simultaneously.
EFS is the right choice for:
- Shared content repositories that multiple application servers read/write concurrently
- Lift-and-shift of legacy applications that require NFS mounts
- Build artifacts and shared configuration files across an Auto Scaling Group
- Container workloads (ECS/EKS) requiring persistent shared storage across containers on different hosts
- Home directories in multi-user Linux environments (HPC, development environments)
Performance modes:
- `generalPurpose` — Default. Sub-millisecond metadata latency. 35,000 read IOPS, 7,000 write IOPS per file system. Appropriate for most workloads.
- `maxIO` — Higher aggregate throughput (10+ GB/s) at the cost of higher per-operation latency. For large-scale parallel workloads (HPC, genomics, media processing).
Throughput modes:
- `elastic` — Automatically scales throughput. Pay-per-use. Best default for workloads with variable throughput.
- `bursting` — Baseline of 50 KB/s per GB stored, with burst credits. Appropriate for small file systems with occasional throughput needs.
- `provisioned` — Set a fixed throughput. Use when you need predictable throughput higher than the bursting baseline.
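The bursting baseline is easy to estimate: 50 KB/s per GB stored works out to about 50 MB/s per TB.

```python
# EFS bursting mode: baseline throughput scales with stored data at
# 50 KB/s per GB, i.e. roughly 50 MB/s per TB stored.

BASELINE_KBPS_PER_GB = 50

def bursting_baseline_mbps(stored_gb: float) -> float:
    """Baseline throughput in MB/s for a file system storing stored_gb."""
    return stored_gb * BASELINE_KBPS_PER_GB / 1000  # KB/s -> MB/s

print(bursting_baseline_mbps(100))   # small file system: 5 MB/s baseline
print(bursting_baseline_mbps(1000))  # 1 TB: 50 MB/s baseline
```

A 100 GB file system exceeds its 5 MB/s baseline only while it has burst credits; if the workload sustains more than the baseline, use elastic or provisioned mode instead.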
Storage classes:
- `STANDARD` — $0.30/GB-month. Frequently accessed files.
- `STANDARD_IA` — $0.025/GB-month, plus a $0.01/GB data access fee. Files accessed less than once a month.
Enable EFS lifecycle management to automatically move files to STANDARD_IA after 7, 14, 30, 60, or 90 days of non-access. For typical shared application content (a mix of frequently and infrequently accessed files), this reduces effective storage cost significantly.
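A rough blended-cost sketch shows the effect. The 80% IA fraction and monthly read volume below are hypothetical, so substitute your own numbers:

```python
# Rough blended EFS cost once lifecycle management is on: part of the
# data stays hot in STANDARD ($0.30/GB-mo), the rest ages into
# STANDARD_IA ($0.025/GB-mo plus $0.01 per GB actually read back).
# The 80% IA fraction and 20 GB of reads below are hypothetical.

EFS_STANDARD = 0.30
EFS_IA_STORAGE = 0.025
EFS_IA_ACCESS = 0.01

def blended_efs_cost(total_gb: float, ia_fraction: float, ia_gb_read: float) -> float:
    hot_gb = total_gb * (1 - ia_fraction)
    cold_gb = total_gb * ia_fraction
    return hot_gb * EFS_STANDARD + cold_gb * EFS_IA_STORAGE + ia_gb_read * EFS_IA_ACCESS

print(blended_efs_cost(1000, 0.80, 20))  # vs. $300/month with no lifecycle policy
```

For 1 TB with 80% aged into IA, the blended bill is around $80/month instead of $300 with everything in Standard.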
S3: when to use it
Durable, scalable object storage for data accessed via API.
S3 is the right choice for:
- Static assets (images, CSS, JavaScript, media files)
- Data lake and analytics storage (Athena, EMR, Redshift Spectrum query S3 directly)
- Application-generated files (user uploads, exports, reports)
- Backup and archive storage
- Big data and ML training datasets
- Log archives
- Anything where you’re storing and retrieving files by name via API, not needing filesystem traversal
S3 is NOT a filesystem. Common misconceptions:
- There are no real directories — “folders” are just key prefixes (the `/` in `photos/2024/image.jpg` is part of the key, not a directory separator)
- No file locking — concurrent writes to the same key are last-writer-wins
- No append — you can’t append to an S3 object; you must write a replacement object (multipart upload helps with large objects, but still produces a new object)
- Consistency: S3 has been strongly consistent since December 2020, but older documentation and blog posts still describe the old eventual-consistency model
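The key-prefix point is easiest to see in code. This pure-Python sketch mimics how S3’s ListObjectsV2 API groups keys when you pass Prefix and Delimiter; the “folders” are nothing but string manipulation on flat keys:

```python
# S3 "folders" are just shared key prefixes. This simulates how a
# prefix + delimiter listing groups keys, the same way the S3
# ListObjectsV2 API's Prefix and Delimiter parameters do.

keys = [
    "photos/2024/image.jpg",
    "photos/2024/other.jpg",
    "photos/2023/old.jpg",
    "readme.txt",
]

def list_common_prefixes(keys, prefix="", delimiter="/"):
    """Return the 'subfolders' directly under prefix."""
    prefixes = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            prefixes.add(prefix + rest.split(delimiter)[0] + delimiter)
    return sorted(prefixes)

print(list_common_prefixes(keys, "photos/"))  # ['photos/2023/', 'photos/2024/']
```

Nothing was ever “created” for `photos/2024/`; it exists only because keys happen to start with that string, and it vanishes the moment the last such key is deleted.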
Storage classes:
| Class | Cost/GB-month | Retrieval | Use case |
|---|---|---|---|
| Standard | $0.023 | Immediate | Frequently accessed data |
| Intelligent-Tiering | $0.023 (frequent) → $0.0036 (archive) | Immediate to hours | Unknown or changing access patterns |
| Standard-IA | $0.0125 | Immediate | Infrequent access, needs fast retrieval |
| One Zone-IA | $0.01 | Immediate | Infrequent, non-critical, single AZ |
| Glacier Instant | $0.004 | Milliseconds | Archives accessed occasionally |
| Glacier Flexible | $0.0036 | Minutes to hours | Long-term archives |
| Glacier Deep Archive | $0.00099 | 12-48 hours | 7-10 year retention compliance archives |
Use S3 Lifecycle policies to automatically transition objects between classes as they age. Logs and backups older than 90 days rarely need immediate access — move them to Glacier Instant ($0.004/GB) or Glacier Deep Archive ($0.00099/GB) for dramatic storage cost reduction.
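For reference, a lifecycle configuration implementing that kind of policy looks like the following. The dict is the structure boto3’s put_bucket_lifecycle_configuration accepts; the bucket name and `logs/` prefix are placeholders:

```python
# Shape of an S3 lifecycle configuration matching the policy described
# above: objects under a hypothetical "logs/" prefix move to Glacier
# Instant Retrieval at 90 days and Deep Archive at 365 days. This dict
# is the structure boto3's put_bucket_lifecycle_configuration accepts:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle)

lifecycle = {
    "Rules": [
        {
            "ID": "archive-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER_IR"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

for t in lifecycle["Rules"][0]["Transitions"]:
    print(t["Days"], t["StorageClass"])
```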
Decision matrix
| Need | Use |
|---|---|
| Database data files (self-managed) | EBS |
| OS root volume | EBS |
| Shared filesystem, multiple instances | EFS |
| NFS mount for legacy app | EFS |
| ECS persistent storage across containers | EFS |
| Static web assets | S3 |
| User file uploads | S3 |
| Backup storage | S3 (+ Glacier lifecycle) |
| Data lake / analytics | S3 |
| ML training datasets | S3 |
| HPC shared scratch space | EFS (maxIO) |
| Low-latency high-IOPS database | EBS io2 |
| Logs archive | S3 (Glacier lifecycle) |
Common mistakes
Using EFS for data that only one instance needs: EFS is more expensive than EBS ($0.30 vs $0.08/GB-month) and adds NFS latency. If only one EC2 instance ever accesses the data, use EBS.
Storing large binary data in EBS when the workload doesn’t need low latency: Application-generated files (PDFs, exports, uploads) that get written once and read occasionally don’t need EBS. Store in S3, reference by key.
Using S3 where you need filesystem semantics: Applications that need readdir(), file locking, or append operations shouldn’t use S3 directly. A common pattern: use S3 for long-term storage, mount a small EBS volume as a working directory, sync to/from S3 as needed.
Not using EFS lifecycle policies: EFS Standard is expensive. Without lifecycle management, all files stay in Standard class indefinitely. Enable lifecycle management from day one.
If you’re evaluating storage architecture for a new workload or have growing storage costs you want to understand, I’m available to help.
Nick Allevato is an AWS Certified Solutions Architect Professional with 20 years of infrastructure experience. He runs Cold Smoke Consulting, an independent AWS consulting practice.