Migrating S3 Objects Safely: From Console Failures to CLI Reliability

When I first tried copying thousands of objects from one S3 bucket to another, I did what most engineers would do: I opened the AWS Management Console and started a bulk copy.

Big mistake.

About 5,000 objects transferred successfully… and then my AWS console session expired. I logged back in and discovered ~1,500 objects were missing from the destination. That’s when I realized: the console is the worst tool for large-scale S3 transfers.

Here’s how I fixed it — and how you can avoid the same pitfalls.


🚨 Why the AWS Console Fails for Large Copies

  • Session timeouts: The AWS console automatically logs you out after a few hours. Long copy jobs get cut off.

  • No retries: If an object fails midway, the console doesn’t retry.

  • No visibility: You don’t get a clean log of what was copied and what wasn’t.

Moral of the story: never rely on the console for bulk object transfers.


🔎 Step 1: Identify What’s Missing

After my session dropped, I needed to know which files actually made it. Luckily, the AWS CLI makes this simple:

# List source bucket contents
aws s3 ls s3://sharkro$$$$/tadagames/ --recursive > source.txt

# List destination bucket contents
aws s3 ls s3://zenob$$$$/tadagames/ --recursive > target.txt

# Compare the two
diff source.txt target.txt

This showed me which objects hadn't made it across.
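One refinement worth noting: aws s3 ls --recursive prints a last-modified timestamp on every line, and copied objects get a fresh timestamp in the destination, so a raw diff also flags objects that did copy fine. Comparing key names alone narrows the output to objects that are genuinely missing. A minimal sketch, assuming keys without embedded spaces (awk splits on whitespace):

# Keep only the key column (date, time, and size come first in aws s3 ls output)
awk '{print $4}' source.txt | sort > source-keys.txt
awk '{print $4}' target.txt | sort > target-keys.txt

# Keys present in the source but missing from the target
comm -23 source-keys.txt target-keys.txt > missing-keys.txt
wc -l missing-keys.txt

The sync in the next step picks these up automatically, but missing-keys.txt is a handy sanity check or input for a targeted re-copy.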


📦 Step 2: Use AWS CLI Sync Instead of Console

The real fix was using aws s3 sync. Unlike the console, sync:

  • Copies only new/missing files

  • Can be safely re-run

  • Provides logging

Here’s the exact command I used:

aws s3 sync s3://shark$$$$/tadagames/ \
    s3://zenob$$$$/tadagames/ \
    --source-region eu-west-1 \
    --region us-east-1 \
    --exact-timestamps \
    --delete

Explanation of flags:

  • --source-region → tells AWS where the source bucket lives (Ireland in this case).

  • --region → tells AWS the destination bucket’s region (N. Virginia).

  • --exact-timestamps → ensures only truly different files are recopied.

  • (optional) --delete → removes files from the destination that no longer exist in the source.

⚠️ Don’t use --delete unless you really want a perfect mirror.
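Before committing to a full run (especially with --delete), the --dryrun flag is a cheap safety net: it prints every copy and delete the sync would perform without touching either bucket. A quick preview using the same command as above:

aws s3 sync s3://shark$$$$/tadagames/ \
    s3://zenob$$$$/tadagames/ \
    --source-region eu-west-1 \
    --region us-east-1 \
    --exact-timestamps \
    --delete \
    --dryrun

If the planned deletions look wrong, drop --delete and rerun.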


✅ Step 3: Verify the Transfer

Once the sync was done, I verified both buckets:

aws s3 ls s3://sharkr$$$/tadagames/ \
    --recursive --human-readable --summarize

aws s3 ls s3://zen$$$$$/tadagames/ \
    --recursive --human-readable --summarize

This gave me object counts and total sizes at the end of each command. They matched, so I knew everything had copied successfully.
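If you would rather script the comparison than eyeball it, here is a minimal sketch that just compares object counts, reusing the same redacted bucket names as above (substitute your real ones):

# Count objects under the prefix on each side; the totals should match
src_count=$(aws s3 ls s3://shark$$$$/tadagames/ --recursive | wc -l)
dst_count=$(aws s3 ls s3://zenob$$$$/tadagames/ --recursive | wc -l)

echo "source: $src_count objects, destination: $dst_count objects"
if [ "$src_count" -eq "$dst_count" ]; then
    echo "counts match"
else
    echo "counts differ - rerun sync"
fi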


🔄 Step 4: Consider Alternatives for Bigger Transfers

If you’re moving tens of millions of objects or multi-terabyte datasets, aws s3 sync might not be enough. AWS offers stronger tools:

  • S3 Batch Operations – Create a manifest of objects and let AWS run a managed job to copy them. Perfect for billions of keys.

  • AWS DataSync – Managed service that handles retries, parallelism, and error tracking. Best for massive or ongoing migrations.

  • Cross-Region Replication (CRR) – If you need continuous sync between regions, set up replication rules at the bucket level.
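For the CRR route, the setup is a bucket-level replication rule rather than a one-off command. A minimal sketch, assuming versioning is already enabled on both buckets and that an IAM role exists which allows S3 to replicate on your behalf (the account ID, role name, and bucket placeholders below are illustrative, not from my setup):

# replication.json - replicate everything under tadagames/ to the destination bucket
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "ID": "tadagames-crr",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": { "Prefix": "tadagames/" },
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": { "Bucket": "arn:aws:s3:::DESTINATION_BUCKET" }
    }
  ]
}
EOF

# Attach the rule to the source bucket
aws s3api put-bucket-replication \
    --bucket SOURCE_BUCKET \
    --replication-configuration file://replication.json

Keep in mind that replication rules only apply to objects written after the rule is in place; existing objects still need a one-off sync (or S3 Batch Replication) to catch up.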


⚙️ Best Practices for Production

  1. Always use the CLI or automation, never the console.

  2. Run sync multiple times. It’s safe — only missing/different files are recopied.

  3. Log your operations. Example:

aws s3 sync s3://sharkro$$$$/tadagames/ \
    s3://zen$$$$/tadagames/ \
    --exact-timestamps \
    --source-region eu-west-1 \
    --region us-east-1 \
    | tee sync-log.txt
  4. Watch costs. Cross-region transfers incur data transfer charges. Factor this into migration planning.

  5. Plan retries. Interrupted runs or transient errors can leave objects behind; rerun sync until source and destination counts match (see the sketch after this list).
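Here is a minimal sketch of that retry loop, using the same redacted bucket names and regions as earlier; it reruns the sync until the object counts on both sides agree, with a cap so it can't loop forever:

# Rerun sync until source and destination counts match (at most 5 attempts)
for attempt in 1 2 3 4 5; do
    aws s3 sync s3://shark$$$$/tadagames/ s3://zenob$$$$/tadagames/ \
        --source-region eu-west-1 --region us-east-1 --exact-timestamps

    src=$(aws s3 ls s3://shark$$$$/tadagames/ --recursive | wc -l)
    dst=$(aws s3 ls s3://zenob$$$$/tadagames/ --recursive | wc -l)

    if [ "$src" -eq "$dst" ]; then
        echo "attempt $attempt: counts match ($src objects)"
        break
    fi
    echo "attempt $attempt: source has $src, destination has $dst - retrying"
done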


🚀 Final Thoughts

What started as a frustrating session timeout in the AWS console turned into a valuable lesson:

👉 The AWS CLI (or automation) is the only reliable way to migrate S3 data at scale.

Now, whether I’m moving 10 files or 10 million, I don’t touch the console — I script it, I log it, and I sleep easy knowing my objects aren’t lost in the cloud.

