NPE declaring an S3 Mapping

We’re running into an NPE being logged when attempting to set up a mapping to an S3 path from within our Application.cfc:
<cfset this.mappings["/mappingname"] = variableRetrievedFromEnvVar>
with that variable set to s3://[KEY]:[SECRETKEY]@[bucketname]/[folder]
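
For context, the full setup is essentially the following sketch (the component body and env var name are illustrative; the env var is read via Lucee’s server.system.environment struct):

  <!--- Application.cfc (sketch; names are illustrative) --->
  <cfcomponent>
    <cfset this.name = "myapp">
    <!--- e.g. S3_MAPPING_PATH = s3://[KEY]:[SECRETKEY]@[bucketname]/[folder] --->
    <cfset variableRetrievedFromEnvVar = server.system.environment["S3_MAPPING_PATH"]>
    <cfset this.mappings["/mappingname"] = variableRetrievedFromEnvVar>
  </cfcomponent>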

This code has been working for years pointing at other buckets, including another newly created bucket in the same account and the same application. But for this bucket, which I had just created, and for another bucket which had been created several years ago, I am getting the following NullPointerException logged in the application log:

"ERROR","XNIO-2 task-3","06/13/2024","09:50:18","S3","java.lang.NullPointerException;java.lang.NullPointerException
	at java.base/java.lang.reflect.Method.invoke(Unknown Source)
	at org.lucee.extension.resource.s3.S3Properties.getApplicationData(
	at org.lucee.extension.resource.s3.S3ResourceProvider.loadWithNewPattern(
	at org.lucee.extension.resource.s3.S3ResourceProvider.getResource(
	at lucee.runtime.config.ConfigImpl.getResource(
	at lucee.runtime.config.ConfigWebUtil.getExistingResource(
	at lucee.runtime.MappingImpl.initPhysical(
	at lucee.runtime.MappingImpl.getPhysical(
	at lucee.runtime.MappingImpl.check(
	at lucee.runtime.config.ConfigWebHelper.getApplicationMapping(
	at lucee.runtime.config.ConfigWebImpl.getApplicationMapping(
	at lucee.runtime.listener.AppListenerUtil.toMappings(
	at lucee.runtime.listener.AppListenerUtil.toMappings(
	at lucee.runtime.listener.ModernApplicationContext.getMappings(
	at lucee.runtime.listener.AppListenerUtil.toResourceExisting(
	at lucee.runtime.listener.AppListenerUtil.loadResources(
	at lucee.runtime.orm.ORMConfigurationImpl._load(
	at lucee.runtime.orm.ORMConfigurationImpl.load(
	at lucee.runtime.listener.AppListenerUtil.setORMConfiguration(
	at lucee.runtime.listener.ModernApplicationContext.reinitORM(
	at lucee.runtime.listener.ModernApplicationContext.<init>(
	at lucee.runtime.listener.ModernAppListener.initApplicationContext(
	at lucee.runtime.listener.ModernAppListener._onRequest(
	at lucee.runtime.listener.MixedAppListener.onRequest(
	at lucee.runtime.PageContextImpl.execute(
	at lucee.runtime.PageContextImpl._execute(
	at lucee.runtime.PageContextImpl.executeCFML(
	at lucee.runtime.engine.Request.exe(
	at lucee.runtime.engine.CFMLEngineImpl._service(
	at lucee.runtime.engine.CFMLEngineImpl.serviceCFML(
	at lucee.loader.engine.CFMLEngineWrapper.serviceCFML(
	at lucee.loader.servlet.CFMLServlet.service(

I am using Lucee 5.4 (I have also tried other versions, with identical results).

After this NPE has been logged, attempting to access the S3 location using the mapping behaves as though the mapping is not defined.

The fact that this code has worked for years for other buckets suggests that the issue could be due to something on the AWS side… but I’m struggling to identify what it could be.

I have tested accessing the bucket path defined in the S3 mapping directly through the AWS CLI using the same AWS Key/Secret - and can read and write to the bucket as expected per the defined permissions.

I am not a Java expert… but as far as I can tell, the location within the S3 extension where this NPE is generated is the following:

  • org.lucee.extension.resource.s3.S3Properties.getApplicationData(
    Map attrs = (Map)getCustomAttributes.invoke((Object)null);

Which, I believe, is attempting to call lucee.runtime.listener.ClassicApplicationContext.getCustomAttributes() via reflection…

This code is running within the S3Properties public static Struct getApplicationData(PageContext pc) function, at a point where it appears to be retrieving the config values defined in the application (presumably the defaults defined within Application.cfc etc.), and not at a point where it is looking at anything specific to the S3 path defined in the mapping creation… which leaves me struggling to comprehend how the issue could have anything to do with the AWS configuration of the bucket.
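
For anyone following along, the application-level defaults in question are, as I understand it, the this.s3 settings Lucee supports in Application.cfc - something along these lines (placeholder values):

  <!--- Application.cfc: application-wide S3 defaults (placeholder values) --->
  <cfset this.s3 = {
      accessKeyId = "[KEY]",
      awsSecretKey = "[SECRETKEY]"
  }>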

Is this something anyone has encountered before, or is a possible cause apparent to anyone? I’m scratching my head here: on the one hand, I think it must be due to an error on our side in how we have configured the bucket (quite possible… but we have defined dozens of buckets in multiple Lucee applications in a similar way and never run into this issue before); on the other hand, the location and nature of the exception do not appear to be at a point where the extension is even looking at what we have defined - it rather appears to be checking first for general config…

We got this far in the investigation - identifying specifically where the trigger point is - using the FusionReactor (FR) production debugger. This was not easy, as the S3 extension just catches and logs the error in Java, without any context from the calling CFML, and the failure is effectively silent to the calling application (no exception is shown; the mapping simply appears not to be defined).

Any assistance in identifying the cause would be appreciated.

OS: Linux (6.6.12-linuxkit) running in docker container from ortussolutions/commandbox:3.9.4
Java Version: 11.0.23 (tried others)
Commandbox Version: 6.0.0 (tried others)
Lucee Version: 5.4 (tried others)

That sounds like a real pain. Good job debugging it thus far! We’ve had a lot of trouble with S3 mappings in various Lucee/S3 extension versions.

What version of the S3 extension are you using?

I found it easier to mount the S3 bucket as a volume at the OS level.

There are many utilities to do this, so Lucee doesn’t have to handle what should be an OS-level operation.

We are using the version included within Lucee - which, for all the versions we have been testing, has been the latest version available:

  • 17AB52DE-B300-A94B-E058BD978511E39E;name=S3;label=S3;version=

That was one of the things I checked, to see if there had been an update - but no luck. We’ve been running Lucee for some time with other clients using S3 mappings, without issue…
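
For reference, one way to confirm at runtime which extension version is actually loaded is via Lucee’s extensionList() - a sketch, assuming the returned query includes name and version columns:

  <cfset ext = extensionList()>
  <cfloop query="ext">
    <cfif ext.name EQ "S3">
      <cfoutput>S3 extension version: #ext.version#</cfoutput>
    </cfif>
  </cfloop>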

If we can’t identify the cause, then we will need to replace the S3 integration through the filesystem with an alternate approach which does not rely on the Lucee S3 extension - something we already have code for elsewhere in the application… but I’d rather understand the underlying cause if possible…

This is a new S3 bucket, right? There must be something different about it. Is it in the same region as your working buckets? Are the bucket policies configured the same way?

I have seen S3 change things like encryption/signing on newer buckets in such a way that old client code can’t access them. It could well be something like that. Sorry I don’t have more specific information.

That was my original thought - and may be the case… however…

I have been looking at 3 environments for this application, 2 of which are using new buckets, and one of which uses a bucket that was created and configured years previously.

The mapping is working for one of the new buckets, and not for the other.

Both buckets were created at the same time, with consistent settings applied across them… and I have tested accessing the non-working bucket using the same key:secret from the CLI, performing list/put actions… and all works as expected.

What’s more, as far as I can see, the area within the S3 extension where the exception is raised is not one where the specifics of the S3 path are being examined at all - rather, it is where the extension is attempting to retrieve the application’s global defaults for S3 access, before checking for path-specific access tokens etc.

As can be seen from the following location in the extension where the exception is thrown:

Note that the null check on line 181 does not execute the population of the getCustomAttributes property… which would suggest that it is already defined… and yet, when it is invoked, the underlying NPE is immediately thrown…

My understanding of the intricacies of how null checks work in Java is not sufficient to determine why this is failing… I would have assumed that if the check for getCustomAttributes == null is not true, then getCustomAttributes() should be invokable… but then again, looking at the code calling this, and at the object itself, getCustomAttributes does not appear to be defined anywhere else… which would suggest that this property should be null…

I’m at a loss here… if anyone has a specific understanding of how this is supposed to work, and any idea why it’s throwing an NPE in this case, that would be helpful.

At this point, I’m of a mind to just throw in the towel as far as depending on the Lucee S3 extension goes (as others have suggested). It’s just frustrating, to say the least, when the same config is used without issue in other locations… it makes me think the issue is on my side and that I’ve missed something obvious… but an NPE alone is not giving me much guidance as to what this may be…

Hey, Dan. I haven’t had time to scrutinize the Java code, but I wanted to share info about a few failure conditions that we have encountered with S3 mappings.

  1. If any of the values in our application.s3Config struct are null, we observe that all mappings break (not just S3 mappings). This might have been due to an NPE (that was caught and logged), but I don’t remember for sure. Anyway, it’s worth checking the config and looking for possible race conditions in defining it (see the sketch after this list).

  2. Invalid credentials in application.s3Config can break mappings.

  3. We’ve seen Java classpath issues; we resolved these by not using this.javaSettings.loadPaths in our applications.
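
A minimal sketch of the sanity check meant in point 1 (application.s3Config and its keys are our own names - substitute whatever your app uses):

  <cfscript>
    // Fail fast if any expected key is missing or null before mappings are defined
    for ( key in [ "accessKeyId", "awsSecretKey", "bucket" ] ) {
      if ( !structKeyExists( application.s3Config, key ) || isNull( application.s3Config[ key ] ) ) {
        throw( message = "application.s3Config.#key# is missing or null" );
      }
    }
  </cfscript>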


We have attempted this approach, using s3fs-fuse, added as another docker container to our solution.

We are successfully able to list / get / put files on the configured filesystem from within our application docker container - however, any attempt to access the path mapped through s3fs to S3 using Lucee file management functions (directoryExists / fileExists etc.) consistently results in a timeout being logged in Lucee, with nothing logged from s3fs.
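
For reference, this is essentially all we are doing (the mount path is hypothetical - substitute wherever your s3fs volume is mounted); each call hangs until the timeout is logged:

  <cfset mountPath = "/mnt/s3/">
  <cfoutput>#directoryExists( mountPath )#</cfoutput>
  <cfoutput>#fileExists( mountPath & "test.txt" )#</cfoutput>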

Is there any obvious cause for this? Can anyone recommend an approach which they have validated, and which works from the context of a docker container (both locally and running within AWS Elastic Beanstalk)?