AWS S3 Cost Optimization: Where the Money Actually Goes

April 10, 2026 — Nick Allevato

S3 is one of those services where the billing seems straightforward — you pay for storage and requests — and then you get an AWS bill with a line item that surprises you. Teams that haven’t looked closely at S3 costs often discover they’ve been paying two to five times what’s necessary.

Here’s where the money goes and how to find it.


Storage class selection: most teams leave money on the table

S3 offers a range of storage classes with different cost, retrieval, and resilience tradeoffs: the six fixed classes below, plus Intelligent-Tiering (covered in the next section). Most teams use Standard for everything, which is correct for frequently accessed data and wrong for almost everything else.

Standard: $0.023/GB/month. Full availability, no retrieval fee. Right for data accessed weekly or more frequently.

Standard-IA (Infrequent Access): $0.0125/GB/month, about half the storage cost of Standard, but with a $0.01/GB retrieval fee and a 30-day minimum storage charge. Right for data you access less than once a month. Wrong for data with unpredictable access, or data read frequently even in small pieces: the retrieval fee quickly erases the storage savings.

One Zone-IA: $0.01/GB/month, 20% cheaper than Standard-IA, stored in one AZ instead of three. The availability SLA drops to 99.5% vs 99.9% for Standard-IA, and the data is lost if that AZ is destroyed. Right for reproducible data (thumbnails, processed outputs you can regenerate). Not for primary copies of critical data.

Glacier Instant Retrieval: $0.004/GB/month — 80% cheaper than Standard — with millisecond retrieval. Right for quarterly or annual access patterns. Long-term backups, audit logs, compliance data you might need to pull but rarely do.

Glacier Flexible Retrieval: $0.0036/GB/month, with retrieval times of minutes to hours. For archive data you’d access at most annually. Replacing tape archives.

Glacier Deep Archive: $0.00099/GB/month — the cheapest option, 12-hour retrieval. For long-term regulatory data you’re required to keep but expect to never access.

The practical question for each bucket: how frequently is the data actually accessed? CloudWatch storage metrics and S3 Storage Lens can tell you this.
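Once you know the access frequency, the choice between Standard and Standard-IA reduces to simple arithmetic. A minimal sketch, using the us-east-1 list prices quoted above (verify against current pricing before acting on it):

```python
# Back-of-envelope break-even: Standard vs Standard-IA,
# using the list prices quoted in this post (us-east-1).
STANDARD = 0.023      # $/GB-month storage
IA_STORAGE = 0.0125   # $/GB-month storage
IA_RETRIEVAL = 0.01   # $/GB retrieved

def monthly_cost_standard(gb_stored: float) -> float:
    return gb_stored * STANDARD

def monthly_cost_ia(gb_stored: float, gb_retrieved: float) -> float:
    return gb_stored * IA_STORAGE + gb_retrieved * IA_RETRIEVAL

# IA saves $0.0105 per GB stored but charges $0.01 per GB retrieved,
# so it wins as long as you retrieve less than ~105% of the stored
# bytes per month.
breakeven_retrieval_fraction = (STANDARD - IA_STORAGE) / IA_RETRIEVAL
print(breakeven_retrieval_fraction)  # ~1.05: read each byte about once a month and it's a wash
```

In other words, Standard-IA pays off unless you re-read the whole dataset roughly monthly or more.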


S3 Intelligent-Tiering: useful but not free

Intelligent-Tiering automatically moves objects between frequent and infrequent access tiers based on access patterns. It sounds ideal and often is — but there are two costs people miss:

Monitoring fee: $0.0025 per 1,000 objects per month. For a bucket with 100 million objects, that’s $250/month in monitoring fees regardless of your actual storage cost. Small objects are a poor fit: AWS doesn’t monitor or auto-tier objects smaller than 128KB at all (they’re simply billed at frequent-access rates), and for objects not much larger than that, the monitoring fee eats most of the tiering savings.

No retrieval charge for frequent access, but if your data has predictable access patterns (always hot, always cold), a fixed storage class is cheaper than Intelligent-Tiering because you avoid the monitoring fee.

When to use it: variable access patterns, objects larger than 128KB, no need to predict the access pattern. When to avoid it: millions of small objects, predictably hot data, predictably cold data where you’d be better served by a lifecycle policy.
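The ~128KB figure can be sanity-checked with a back-of-envelope derivation. This sketch assumes an object that would otherwise settle in the Archive Instant Access tier (priced like Glacier IR in this post); the exact cutoff shifts depending on which tier an object actually lands in:

```python
# Rough break-even object size for the Intelligent-Tiering monitoring fee,
# using the prices quoted in this post. Assumes the object would settle in
# the Archive Instant Access tier (~$0.004/GB-month).
MONITORING_PER_OBJECT = 0.0025 / 1000   # $/object-month
STANDARD = 0.023                        # $/GB-month
ARCHIVE_INSTANT = 0.004                 # $/GB-month

# Monitoring pays off only when per-object tiering savings exceed the fee.
savings_per_gb = STANDARD - ARCHIVE_INSTANT            # $0.019/GB-month
breakeven_bytes = MONITORING_PER_OBJECT / savings_per_gb * 1024**3
print(f"{breakeven_bytes / 1024:.0f} KB")              # ~138 KB, close to AWS's 128KB cutoff
```

The derived break-even lands in the same neighborhood as AWS’s own 128KB monitoring threshold, which is reassuring rather than coincidental.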


Lifecycle policies: the automation you should already have

S3 Lifecycle policies automatically transition objects between storage classes or delete them based on age. Every bucket that holds data that ages (logs, backups, processed outputs) should have lifecycle rules.

Common pattern for application logs:

{
  "Rules": [
    {
      "Status": "Enabled",
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER_IR"},
        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"}
      ],
      "Expiration": {"Days": 2555}
    }
  ]
}

This rule: after 30 days, move to IA (50% cheaper). After 90 days, Glacier Instant (80% cheaper). After a year, Deep Archive (95% cheaper). After 7 years, delete.

The mistake teams make: creating lifecycle rules with transitions that don’t account for minimum storage charges. Standard-IA and One Zone-IA carry a 30-day minimum, Glacier Instant and Flexible Retrieval a 90-day minimum, and Deep Archive 180 days. If you transition objects to Glacier Instant Retrieval after a few days and delete them on day 20, you still pay the full 90-day minimum even though the data is gone. (Note also that S3 won’t lifecycle-transition objects into the IA classes until they’re at least 30 days old.) Lifecycle rules should account for these minimums.
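A quick sketch of what the early-delete penalty looks like in practice, using the Glacier Instant Retrieval price quoted above:

```python
# Early-delete penalty sketch for Glacier Instant Retrieval,
# using the $0.004/GB-month price quoted in this post and
# its 90-day minimum storage charge.
GLACIER_IR = 0.004   # $/GB-month
MIN_DAYS = 90

def glacier_ir_storage_cost(gb: float, days_kept: int) -> float:
    """Storage cost in dollars; billed for at least MIN_DAYS even if deleted sooner."""
    billed_days = max(days_kept, MIN_DAYS)
    return gb * GLACIER_IR * billed_days / 30   # approximate a month as 30 days

# Transition 1 TB into Glacier IR, then delete after 15 days in the tier:
# you wanted ~$2 of storage but are billed the full 90-day minimum.
print(glacier_ir_storage_cost(1024, 15))   # ~$12.29 instead of ~$2.05
```

The same shape of penalty applies to the IA classes (30-day minimum) and Deep Archive (180-day minimum), just with different rates.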


Request costs: small costs that add up

S3 charges per API request. The costs look small in isolation:

  • PUT/COPY/POST/LIST: $0.005 per 1,000 requests
  • GET/SELECT: $0.0004 per 1,000 requests

At scale, these add up. Common patterns that generate unexpected request costs:

Excessive LIST operations. Some applications, particularly those using S3 as a file system, call ListObjectsV2 constantly. Each call returns at most 1,000 keys, so fully enumerating a bucket with 10 million objects takes 10,000 requests, about $0.05 per pass. That compounds quickly when a job does it every few minutes.

Small object uploads. Request charges are per request, not per byte: uploading 10 million 1KB files costs the same $50 in PUT requests as 10 million 1GB files, with vastly more request overhead per GB stored. If you’re uploading many small objects, batch or aggregate them first.

Cross-region requests. Standard request charges apply plus data transfer charges. Avoid cross-region S3 access patterns where possible.
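A quick sketch of how these per-request prices compound, at the rates quoted above:

```python
# Request-cost arithmetic at the list prices quoted in this post.
PUT_PER_1K = 0.005    # also covers COPY/POST/LIST requests
GET_PER_1K = 0.0004

def request_cost(puts: int, gets: int) -> float:
    return puts / 1000 * PUT_PER_1K + gets / 1000 * GET_PER_1K

# 10 million small-object uploads: $50 in PUT requests, regardless of object size.
print(request_cost(puts=10_000_000, gets=0))   # ~$50

# Enumerating a 10M-object bucket takes 10,000 LIST calls (1,000 keys each);
# doing that hourly is 7.2M calls/month, billed at the PUT rate.
hourly_listing = 10_000 * 24 * 30 / 1000 * PUT_PER_1K
print(hourly_listing)                          # ~$36/month just to enumerate keys
```

None of these numbers are alarming on their own; the point is that they scale linearly with request volume, not data volume.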

Check your S3 request costs in the AWS Cost Explorer, broken down by bucket and request type. Unexpected spikes usually indicate a new application pattern or a bug.


Data transfer costs: the most overlooked S3 expense

Data transfer is often the largest S3 cost in production workloads, and the most avoidable.

S3 to internet: $0.09/GB for the first 10TB/month. Hard to avoid entirely for data you’re serving directly to users, though fronting S3 with CloudFront often helps: transfer from S3 to CloudFront is free, and CloudFront’s egress pricing and free tier can beat direct S3 egress.

S3 to EC2/Lambda in the same region: Free. Data transfer within the same region between S3 and AWS compute is free.

S3 to another AWS region: $0.02/GB. Transferring data between regions adds up fast for large workloads.

NAT Gateway data processing: If you’re accessing S3 from resources in a private subnet without a VPC gateway endpoint, traffic goes through the NAT Gateway. NAT Gateway charges $0.045/GB for data processing. For a Lambda or ECS task in a private subnet that reads from S3, every GB you process costs $0.045 in NAT fees — for something that could be free.
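To put the NAT data-processing rate in perspective, a rough sketch at illustrative monthly volumes:

```python
# What a missing S3 gateway endpoint costs, at the NAT Gateway
# data-processing rate quoted in this post. Volumes are illustrative.
NAT_PER_GB = 0.045   # $/GB processed by NAT Gateway

for tb_per_month in (1, 10, 50):
    gb = tb_per_month * 1024
    print(f"{tb_per_month} TB/month through NAT: ${gb * NAT_PER_GB:,.2f}")
# 1 TB ≈ $46, 10 TB ≈ $461, 50 TB ≈ $2,304 per month,
# all avoidable with a free S3 gateway endpoint.
```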

Fix: Add an S3 VPC Gateway Endpoint. It’s free, takes 5 minutes to configure, and routes S3 traffic through the AWS network without NAT Gateway. This is one of the highest-ROI configurations in AWS and is missed constantly.

In CloudFormation:

S3GatewayEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: !Ref VPC
    ServiceName: !Sub 'com.amazonaws.${AWS::Region}.s3'
    RouteTableIds:
      - !Ref PrivateRouteTable


Versioning: storage accumulation risk

S3 versioning is useful for data protection but creates storage accumulation. Every overwrite creates a new version. Every delete creates a delete marker. Without lifecycle rules on non-current versions, your bucket accumulates every historical version of every object indefinitely.
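A sketch of how quickly versions accumulate, assuming for illustration a 10GB object overwritten once a day and left in Standard:

```python
# Versioning accumulation sketch: a hypothetical 10 GB object
# rewritten daily, with no noncurrent-version expiration rule.
# Standard price from this post.
STANDARD = 0.023   # $/GB-month
object_gb = 10
days = 365

versions = days                        # each overwrite leaves one noncurrent version
stored_gb = object_gb * (versions + 1) # live copy plus all noncurrent versions
print(stored_gb * STANDARD)            # ~$84/month after a year, vs $0.23 for the live copy alone
```

The live object never appears to grow in the console, which is what makes this failure mode easy to miss.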

If you have versioning enabled, lifecycle rules should include:

{
  "NoncurrentVersionExpiration": {
    "NoncurrentDays": 30
  },
  "AbortIncompleteMultipartUpload": {
    "DaysAfterInitiation": 7
  }
}

AbortIncompleteMultipartUpload is particularly easy to miss. Incomplete multipart uploads consume billable storage but don’t appear in the console’s object listings. They accumulate silently unless you have a rule to expire them.


S3 Storage Lens: where to start

AWS S3 Storage Lens provides organization-wide visibility into S3 usage and activity. The free tier shows storage, object count, and request activity across all buckets.

Enable Storage Lens at the organization or account level and look for:

  • Buckets with large amounts of Standard storage and low GET/HEAD request rates (candidates for IA or archival)
  • Buckets with versioning enabled and growing non-current object counts
  • Buckets with no lifecycle rules
  • Request distribution anomalies

Storage Lens makes the S3 audit systematic rather than bucket-by-bucket.


The audit approach

For a structured S3 cost review:

  1. Pull S3 costs by bucket in Cost Explorer, last 3 months
  2. Enable Storage Lens if not already on — shows access patterns at bucket level
  3. Identify the top 5 buckets by cost — focus there first
  4. For each bucket: check current storage class, access frequency, versioning status, lifecycle rules
  5. Add lifecycle rules where missing — especially for log/backup/output buckets
  6. Add S3 VPC Gateway Endpoint if not in place
  7. Review Intelligent-Tiering configurations for object count/size fit

Done systematically, this typically reduces S3 costs by 30-60% for teams that haven’t looked before.


When to bring in help

S3 cost optimization is a defined process that takes 1-2 days and consistently delivers measurable savings. If your S3 bill has never been reviewed and you’re running any significant workload, assume there are savings to find.

I include S3 as part of broader AWS cost audits. If you want to run through this systematically across your account, let’s talk.


Nick Allevato is an AWS Certified Solutions Architect Professional with 20 years of infrastructure experience. He runs Cold Smoke Consulting, an independent AWS consulting practice.