Writing to S3 bucket

I’m tearing my hair out trying to do a simple file copy to an S3 bucket.

S3 extension 0.9.4.122 is installed, and Application.cfc includes:

<cfset this.s3.accesskeyid="#accesskey#">
<cfset this.s3.awssecretkey="#secretkey#">
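For context, the full Application.cfc is roughly this minimal sketch (the application name is a placeholder, and accesskey/secretkey are set elsewhere):

<cfcomponent>
    <cfset this.name = "s3test">
    <cfset this.s3.accesskeyid = "#accesskey#">
    <cfset this.s3.awssecretkey = "#secretkey#">
</cfcomponent>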

My page includes the following tag:
<cffile action="copy" source="c:\temp\temp.txt" destination="s3:///#bucketname#/temp.txt">
And I get the error:
Can't copy file [c:\temp\temp.txt] to [s3:///[bucketname]/temp.txt]; Access Denied

I have an IAM user whose credentials I use above and the access key’s “Last used” time corresponds with the last time I ran my script. The user’s permission includes the AmazonS3FullAccess policy below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

Is there something I have overlooked?

Thanks… Simon

OS: Windows 8
Java Version: Adopt 11.0.7
Tomcat Version: 9.0.35
Lucee Version: 5.3.7.47

I’ve no experience with S3. But is there any chance that this is a file permission issue with your c:\temp\temp.txt? Does Tomcat have file permissions to read it?

It doesn’t seem to be an issue with permissions reading the local file. There is no problem with:

<cffile action="copy" source="c:\temp\temp.txt" destination="c:\temp\temp1.txt">

Have you been able to read any data from your bucket? Are you receiving an Access Denied also then?

Thanks for the cues Andreas. No problems reading from any buckets.

The issue turned out to be the permissions on the bucket. If you want to write to a bucket, you need to turn off the “Block public access to buckets and objects granted through new access control lists (ACLs)” option, which otherwise overrides any permissions you set up through IAM. Interestingly, you can still read from a bucket with “Block all public access” enabled.
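A quick way to confirm the change is a simple write straight to the bucket, something like this sketch (same bucketname variable as above):

<cffile action="write" file="s3:///#bucketname#/write-test.txt" output="Hello from Lucee">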

Simon

I’ve just set up S3 to test this myself. I could do almost everything, just no file manipulation. I was getting this (possibly the same) stack trace:

org.jets3t.service.S3Service.putObject(S3Service.java:2214)
  at org.lucee.extension.resource.s3.S3._write(S3.java:816)

Glad you were much quicker. By the way, thanks for posting back the solution.

In IAM, if you select your user and click Security Credentials, does the Last Used time correspond with the last time you ran your script? If you are getting as far as the “putObject” call, I guess it does. If so, your script is able to log in and you need to check the policy (the one above works for me) and the “Block public access” settings in the Permissions tab for your bucket. Do you receive a similar “Access denied” error message?

My error was “Access Denied”, with the whole error outputting my credentials, because I had just set up the mapping quick and easy in the Web Administrator as shown in Micha’s video S3 for source code - YouTube. I also had no problems reading files, but writing files seemed to be a problem. The first two important lines of the stack trace showed the S3Service.putObject error, and after googling it was clear that it was some sort of IAM permission issue. Then I saw your post back. I didn’t look any further, deleted my bucket and created an IAM user/group. I’m going to experiment with it in more detail at some point in the near future.

Not sure if you got this fixed or not, but sadly, the issue is 100% with Lucee’s implementation of the fileUpload command. For some reason it WILL NOT work without the following settings on your bucket (the equivalent S3 API fields are sketched after this list):

Block all public access: Off
Block public access to buckets and objects granted through new access control lists (ACLs): Off
Block public access to buckets and objects granted through any access control lists (ACLs): Off
Block public access to buckets and objects granted through new public bucket or access point policies: On
Block public and cross-account access to buckets and objects through any public bucket or access point policies: On

So, take it for what you want, but I have confirmed this with two side-by-side servers, one running ACF and one running Lucee. Same credentials, same policies, same code. Good luck.

-jp

A code example helps 🙂

Dude,

Code example is pointless. Read the post.

-jp

Why pointless, don’t you want us to fix the problem?

I posted to help the person who thought their code was wrong. However, if you want to fix the issue I’d be happy to help. Your response seemed pedantic - as such, my short response.

Under the hood there is something about the way the Lucee version of fileUpload differs in implementation. It isn’t a syntax issue; again, the code example isn’t going to help you solve it. Show me the code under the hood and perhaps I can assist.

If I were to guess, the Lucee version does something with an ACL and sends that to AWS, thereby triggering the “new” flag on the security on their side.

I will post code when I am back at a computer.

-jp

We use fileCopy to copy a local file to S3 usually.
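For example, something along these lines (a minimal sketch; bucketname is a placeholder and the S3 credentials are assumed to be configured in Application.cfc as earlier in the thread):

<cfscript>
    // copy a local file onto the S3 virtual filesystem;
    // the credentials come from this.s3 in Application.cfc
    fileCopy( "c:\temp\temp.txt", "s3:///#bucketname#/temp.txt" );
</cfscript>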

Gents,

Somewhere this has gone off the rails. I do not need help; I was simply trying to help the original poster. Lucee has an issue with its implementation of fileUpload() when going to an S3 bucket. The basic policy that works for ACF does not work for Lucee, even when the code written and running on both servers is identical. For example:

//writeDump(fileToUpload);
uploadResult = FileUpload( destination=fileToUpload, fileField=FORM.fileObj, accept=LOCAL.validFileTypesAccept, nameConflict="overwrite" );
writeDump(uploadResult);

Works perfectly on ACF with a fully private S3 bucket using the general ‘Allow All S3’ policy. However, simply running it on Lucee, it will not allow putObject (used under the hood here) on the same bucket without turning off the two blocks stated in my original reply. So yeah, again, the discussion needs to focus on fixing what happens under the hood. I simply posted the work-around.

-jp

ok, confusion cleared up. sorry 🙂

I’ve been through Jira, and the only possibly related bug I could find is

https://luceeserver.atlassian.net/browse/LDEV-2336

but that is a slightly different issue, as that one was about silent failures. I have added a link to this thread on the task.

Interesting, but not exactly the same. I am certainly able to see the error with the dump, and it is your typical, almost useless AWS response: ‘Access Denied’. I would have to trigger it again for the exact text they send, but it isn’t helpful.

As I was saying earlier, the cause of the issue on Lucee is not likely to be incorrect syntax, since it works with the loose policy on AWS without any code change. Rather, it is almost as if there is a timestamp in the send that is wrong, or an ACL id, or something that makes AWS think the policy on the request is new.

Could I see the code Lucee translates the fileUpload to? I would probably be able to fix it.

-jp

I’m starting to think that mindframe might be on to something. I’m trying to lock down our S3 bucket so only operations authenticated with an access key + secret are allowed. When I turn on all of the Block Public Access settings on the bucket, write operations fail. If I turn on all of them EXCEPT “Block public access to buckets and objects granted through new access control lists (ACLs)”, I can write to the bucket.

We’re on Lucee version 5.3.7.47 and use a server-level mapping to define our connection to S3, which includes the access key + secret in the S3 URL.
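For context, the physical path of such a mapping is just an S3 URL with the credentials inline; expressed as an application-level mapping it would look roughly like this (a sketch with placeholder variables, assuming Lucee’s s3://key:secret@bucket URL form):

<cfset this.mappings["/s3files"] = "s3://#accesskey#:#secretkey#@#bucketname#/">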

I created this ticket, with some more insight into the root cause.
https://luceeserver.atlassian.net/browse/LDEV-4100