Nice job finding that! Yes, that was what was causing my problem, too.
After a little investigation, it looks like that 0-byte object is a
“directory” object. One way it comes into existence is if you use the AWS
S3 web interface to first create the directory (as you might do before
uploading files into the directory).
If you create an object directly by doing something like:
aws s3 cp test.txt s3://
… then one of those “directory” objects is not generated.
Aside from your solution, an easy way to remove a single one of those
0-byte directory objects is to delete it directly (in this case, the
“foo” directory object):
aws s3 rm s3://my.bucket.com/foo/
… and the “contained” files remain intact.
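To make the distinction concrete: what sets these marker objects apart is a size of zero and a key that ends in “/”. A minimal sketch of how a client could filter them out of a listing (using made-up sample data; `is_directory_marker` is just an illustrative helper, not part of any AWS SDK):

```python
def is_directory_marker(key_name, size):
    # A 0-byte object whose key ends in "/" is the kind of
    # "directory" placeholder the S3 web console creates.
    return size == 0 and key_name.endswith("/")

# Sample listing: (key, size) pairs, as a bucket listing might return them.
listing = [
    ("foo/", 0),           # console-created directory marker
    ("foo/test.txt", 11),  # a real file under the prefix
    ("bar.txt", 4),
]

markers = [k for k, s in listing if is_directory_marker(k, s)]
# markers now holds only the 0-byte "directory" keys.
```

Deleting the keys in `markers` would remove the placeholders while leaving the “contained” files untouched, which matches the `aws s3 rm` behavior above.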
With all that said, other clients don’t seem to have much trouble with
“directories” containing these “directory” objects, so Lucee should be able
to handle the situation as well; I’ll file a ticket. (Now that I know what
we’re dealing with, I finally understand why *s3cmd* throws a warning about
an empty object during some operations.)
Jamie

On Wed, May 20, 2015 at 5:41 AM, Tom Chiverton <@Tom_Chiverton> wrote:
I am definitely able to restore my ability to list affected top-level
buckets by running the following Python code.
It basically finds a one-byte object and removes it from the indicated
bucket:
import boto

# Connect with the credentials and bucket name defined elsewhere.
s3_conn = boto.connect_s3(aws_access_key, aws_secret_key)
bucket = s3_conn.get_bucket(bucket_name)
for key in bucket.list('extravision/'):
    if key.size == 1:
        key.delete()