S3 buckets are great for cheap storage, but a bucket that keeps growing becomes a money pit (at least for us regular folk). Luckily, AWS has a great post on what to do. The gist is basically:
- Buckets with 100K or more objects cannot be deleted through the AWS Console.
- Buckets with versioning enabled cannot be deleted through the AWS CLI.
- Personally, I have found that when both apply, you can't delete the bucket through either the console or the CLI.
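To see why the CLI falls short, here is a minimal sketch of the versioned-bucket case (the bucket name is hypothetical): `aws s3 rb --force` removes the current objects, but on a versioned bucket the old versions and delete markers remain, so the final bucket removal is rejected.

```shell
# Hypothetical bucket name, for illustration only.
BUCKET=my-versioned-bucket

# --force deletes the current objects first, but on a bucket with
# versioning enabled the old versions and delete markers survive,
# so the remove-bucket call typically fails with BucketNotEmpty.
aws s3 rb "s3://$BUCKET" --force
```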
One great tool is s3wipe. Its big advantage is that it deletes objects in parallel, so it is much faster. A great way to run it is with Docker, like so:
- Clone the repository
- Create a docker container with the following:
docker run --rm -it -v ~/.aws:/root/.aws -v $(pwd):/app -w /app python:2.7 bash
- Then, inside the container, you can run
pip install boto
- You can finish it off with
./s3wipe --path s3://bw-tf-backends-aws-example-logs --delbucket
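Putting the steps above together, a full run looks something like this (the GitHub repo URL is an assumption on my part; the bucket name is the example from above):

```shell
# Bucket to wipe -- substitute your own.
BUCKET=s3://bw-tf-backends-aws-example-logs

# Clone s3wipe (repo URL assumed) and start a throwaway Python 2.7
# container with your AWS credentials and the repo mounted inside.
git clone https://github.com/eschwim/s3wipe.git
cd s3wipe
docker run --rm -it \
  -v ~/.aws:/root/.aws \
  -v "$(pwd)":/app \
  -w /app \
  python:2.7 bash

# Inside the container:
pip install boto
./s3wipe --path "$BUCKET" --delbucket
```

s3wipe also has a --dryrun flag, which is worth a first pass before deleting anything for real.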
There you go, delete to your heart's content.