System-impairing bug in the Lucee S3 extension with third-party S3 providers

Hi,

We discovered a big (but very easy-to-fix) bug with the standard S3 extension for Lucee.

The bottom line is that the extension assumes the bucket name follows the same URL pattern for all S3-compatible providers, automatically modifying the "host" parameter and preventing the extension from connecting to third-party S3-compatible providers that use a different pattern.

So, for example, a bucket called foo on Amazon could be addressed like this:
foo.s3.amazonaws.com

However, many providers instead use this notation:
someOtherProvider.com/foo

(Notice the difference between the subdomain style and the subfolder style.)
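
To make the difference concrete, here is a rough illustration of how the two URL styles are composed. This is just a sketch, not the extension's actual code, and the names are placeholders:

<!--- Illustration only: how the two addressing styles differ;
      NOT the extension's actual code --->
<cfset bucketName = "foo">
<cfset host = "someOtherProvider.com">
<!--- subdomain (virtual-hosted) style, which the extension assumes --->
<cfset subdomainStyle = "https://" & bucketName & "." & host & "/">
<!--- subfolder (path) style, which many third-party providers use --->
<cfset subfolderStyle = "https://" & host & "/" & bucketName & "/">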

One way to work around this is the following (it works for the s3ListBucket function, but not for the other functions, as shown below):

<cfset res = s3listBucket(
    bucketName="",
    accessKeyId=application.s3.accessKeyId,
    secretAccessKey=application.s3.awsSecretKey,
    host="https://someOtherProvider.com/foo",
    timeout=30 )>

This forces the extension to request bucket access at this address:
https://someOtherProvider.com/foo

Note that the trick is to NOT specify a bucketName (leaving it as an empty string), so that no subdomain gets prepended to the provider's address, and to append the bucket name to the end of the provider's connection URL as a folder instead.

That works fine.

However, when we try the same trick with other functions, like the s3read() function, it fails with an error saying that the bucketName parameter cannot be left empty.
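
For illustration, the failing attempt looks something like this (a sketch only: the object path is a placeholder, and I'm assuming s3read() accepts the same credential/host arguments as s3listBucket):

<!--- Sketch of the failing attempt; "some/file.txt" is a placeholder --->
<cfset content = s3read(
    bucketName="",
    objectName="some/file.txt",
    accessKeyId=application.s3.accessKeyId,
    secretAccessKey=application.s3.awsSecretKey,
    host="https://someOtherProvider.com/foo",
    timeout=30 )>
<!--- throws: the bucketName parameter cannot be left empty --->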

So here’s our suggestions for fixing this:

  1. The easy way: Stop requiring the bucketName parameter to be filled in, on all functions (and test to make sure the trick above works with each of them).
  2. The correct way: Stop automatically prepending the bucketName parameter as a subdomain to the provider's URL; simply use whatever value we send in the "host" parameter and nothing more.
  3. Note that if you implement #2 above, you will likely break existing code that expects the subdomain to be added automatically, so one suggestion is to give these functions an optional last parameter called something like "strict", a boolean. If we send that parameter set to true, behave as in #2 above; otherwise keep the current behavior. This way you fix the issue and keep backwards compatibility (see the sketch after this list).
  4. Another option, which avoids adding the new parameter from #3, is to detect whether the user is connecting to s3.amazonaws.com and only prepend the subdomain in those cases (however, you might still break some third-party connections that do expect subdomain-style addressing; I don't have a list of those).
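
To make #3 concrete, a call with the proposed parameter might look like this. The "strict" parameter is purely hypothetical; it does not exist in the extension today:

<!--- Hypothetical sketch of suggestion #3: "strict" is NOT a real
      parameter of the current extension. With strict=true the host
      value would be used verbatim, with no subdomain prepended. --->
<cfset res = s3listBucket(
    bucketName="foo",
    accessKeyId=application.s3.accessKeyId,
    secretAccessKey=application.s3.awsSecretKey,
    host="https://someOtherProvider.com/foo",
    timeout=30,
    strict=true )>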

Here are the versions I’m using:

S3 Extension version: 2.0.1.25
Lucee Version: Lucee 5.4.6.9
OS: macOS Sequoia 15.0.1
Java Version: Whatever CommandBox installs
Tomcat Version: Whatever CommandBox installs

Any ideas on when this bug might be fixed?

We can’t connect to the S3 provider until this bug is gone.

You can sponsor a bug; it will help push development.

You can use something like this to remotely mount a cloud file system as a local drive.

You could try the latest RC.

How can I sponsor a bug? Depending on the cost, I'd be willing to pay for this to be fixed instead of relying on a third-party solution.

Maybe @Gert can help.

Hey there,

Just shoot me an email at gert(at)lucee.org and we’ll figure something out.

Have a great weekend

Gert


Gert,

I just sent you an email requesting support from you on a sponsored basis.

The email will arrive from jcelias at gmail dot com.

Without digging too deep, I think the problem is the host definition you are using: the host argument should not include any bucket information (foo) or protocol (https), just the host.
So instead of

<cfset res = s3listBucket(
    bucketName="",
    accessKeyId=application.s3.accessKeyId,
    secretAccessKey=application.s3.awsSecretKey,
    host="https://someOtherProvider.com/foo",
    timeout=30 )>

you simply do this:

<cfset res = s3listBucket(
    bucketName="foo",
    accessKeyId=application.s3.accessKeyId,
    secretAccessKey=application.s3.awsSecretKey,
    host="someOtherProvider.com",
    timeout=30 )>
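
The same convention should then carry over to the other functions as well; for example, an s3read() call might look like this (a sketch only: the object path is a placeholder, and the argument names mirror the call above):

<!--- Sketch: the same host/bucket convention applied to s3read();
      "some/file.txt" is a placeholder object path --->
<cfset content = s3read(
    bucketName="foo",
    objectName="some/file.txt",
    accessKeyId=application.s3.accessKeyId,
    secretAccessKey=application.s3.awsSecretKey,
    host="someOtherProvider.com",
    timeout=30 )>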

I have made the docs clearer about this argument to avoid future confusion about it.
