CFModule vs CFImport - radically different performance?

Yeah, renaming the tags is feasible. Though, it would make them less reusable as a separate library.

Sandboxing the tags is an interesting idea. Right now, our emails are all generated via <cfinclude> tags inside a <cfmail> tag; so, I’d have to come up with some way to break that tight coupling (in a way that doesn’t add too much overhead / complexity). I’ll have to noodle on that. If you have any suggestions, I’m all ears :slight_smile:

Boom! <cfimport> now beats <cfmodule>

got it working with my cfml resource provider

1. Add a custom cfml resource provider mapping called `ct` to lucee-server.xml.

2. Set the vfsStoreFileSystem.cfc to point to the custom tag dir, or extend it, etc.

3. Create a web- or server-level mapping /tags pointing to the ct:/// resource (note that’s 3 /s, not 2!).

4. Add this to your Application.cfc (could be in onApplicationStart):

	public boolean function onRequestStart( String targetPage ){
		directoryCopy( source="./tags/", destination="ct://tags/", filter="*", createPath=true );
		Dump( DirectoryList( path="/tags/", listInfo="query" ) );
		return true;
	}

restart lucee for the new ct resource provider config to show up!
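To sanity-check the steps above after the restart, something like the following should confirm the `ct` scheme resolves and that the copied tags are visible (a sketch, assuming the provider registered cleanly and the copy in onRequestStart has already run):

```cfml
<!--- Sketch: verify the custom ct:/// resource provider after restart. --->
<cfscript>
	// True if the provider is registered and the copied directory exists.
	dump( directoryExists( "ct://tags" ) );
	// List the copied custom tags through the provider itself.
	dump( directoryList( path = "ct://tags", listInfo = "query" ) );
</cfscript>
```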


Holy cow!! This is awesome. I don’t really understand how providers work. But, given some of the things you said here, I want to go see - one more time - if I can get ram:/// to work with triple slashes and no built-in path.

@Zackster what does your <cfimport> statement have in it?

<cfimport prefix="tags" taglib="/tags" />
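For reference, the relative cost of the two invocation styles can be measured with a rough loop like this (a sketch only; `MyTag` is a placeholder for whatever custom tag you’re testing, and `/tags` is assumed to be a mapping to the tag directory):

```cfml
<!--- Rough micro-benchmark sketch: <cfmodule> vs an imported custom tag. --->
<cfimport prefix="tags" taglib="/tags" />

<cfset start = getTickCount() />
<cfloop from="1" to="1000" index="i">
	<cfmodule template="/tags/MyTag.cfm" />
</cfloop>
<cfoutput>cfmodule: #( getTickCount() - start )# ms<br /></cfoutput>

<cfset start = getTickCount() />
<cfloop from="1" to="1000" index="i">
	<tags:MyTag />
</cfloop>
<cfoutput>cfimport: #( getTickCount() - start )# ms</cfoutput>
```

The absolute numbers don’t matter much; the point is to compare the two blocks against each other in the same environment.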

Ok, well using some of your pointers above, I was at least able to get the ram:// version to compile and run. In the CFAdmin, I set up a web mapping for:

/mammoth => ram:///

Then, I restarted Lucee. Then, I put this in my test App.cfc:

component {

	this.name = "tagtest";

	public void function onApplicationStart() {

		directoryCopy(
			source = "./tags/",
			destination = "ram://tags/",
			recurse = true,
			filter = "*",
			createPath = true
		);

	}

}

Then, in the test page, I did:

<cfimport prefix="tags" taglib="/mammoth/tags/" />
// ...
<tags:MyTag> ... </tags:MyTag>

Unfortunately, it still runs really slow. Even with the ram:/// mapping set up to only check once. So, there must be something special that your custom provider is doing that the Ram VFS is not doing :frowning:
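One thing worth ruling out here is whether the files actually landed in the RAM filesystem in the first place; a quick check (assuming the onApplicationStart copy above already ran):

```cfml
<!--- Sketch: confirm the copied tags are visible through the ram:// scheme. --->
<cfdump var="#directoryList( path = 'ram://tags', listInfo = 'query' )#" />
```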

is ram:/// slower than a normal filesystem mapping set to once?

It seems to be basically the same. Maybe another Docker oddity, though.

Hmmm, I get this error sometimes with that mapping in place after a restart; then all I get is blank pages:

"ERROR","main","02/25/2021","16:17:02","configuration","-1;-1;java.lang.ArrayIndexOutOfBoundsException: -1
at lucee.runtime.config.ConfigImpl.getPageSources(
at lucee.runtime.config.ConfigImpl.getPageSources(
at lucee.runtime.PageContextImpl.getPageSources(
at lucee.runtime.component.ComponentLoader._search(
at lucee.runtime.component.ComponentLoader._search(
at lucee.runtime.component.ComponentLoader.searchComponent(
at lucee.runtime.PageContextImpl.loadComponent(


That’s a strange one! I found that I sometimes had to edit the page with the <cfimport> on it before Lucee would stop complaining – I guess I need to trigger a page-level compilation or something :man_shrugging:

bug logged [LDEV-3300] mapping causes crash and blank pages after a restart - Lucee

can you try creating the mapping to ram://tags rather than just ram://

I don’t think so – it shows up as “red” when I try to include the path:

the red is coz physical isn’t set for ram:// which may explain some of the performance difference???

Now you’re over my head :smiley:

As an aside, I don’t think I’ve seen iif() / de() code in a long time!

it’s all detective work…

you should file a performance bug about the ram mapping being slower than expected (not caching with once?)


You’ve probably done this already, but just in case, check to make sure your Docker instance has enough RAM. Maybe part of the issue is that there’s a lot of disk swapping occurring because there’s not enough memory available. That could explain why it’s slow using the RAM drive.

That’s a good question. I’m not really good at configuring all the Docker for Mac stuff. I think I have 4 CPUs, 10 GB RAM, and 1 GB of swap space allocated. I have no idea if that’s good or not :man_shrugging: Docker performance is all magic to me.

Let me preface by stating that I don’t use a Mac or Docker, so there might be other, more well-known issues at play that I’m unaware of.

However, memory allotment gets tricky when dealing with containers and VMs. The problem is you’re dealing with the memory allotment on your host (i.e. your Mac), on the VM/container, and then on the various things running inside the VM (including the JVM).

So if you gave your Docker container access to 10 GB of RAM, but your Mac only has 16 GB of RAM, you could very well be running into swapping issues on your Mac. I know I always have Firefox and multiple instances of Chrome running, and Chrome can really eat up memory. Add in all the various developer tools you might be running (IDE, Photoshop, database, etc.) and you could easily be having swapping issues at the Mac OS level.

Then there’s the Docker container. Not knowing what’s running in your Docker container, 10 GB may be enough, maybe it’s overkill, or maybe it’s not enough. It seems like a ton, but maybe there’s a bunch of microservices and a database running inside that container that chew up all that memory. Then there are the JVM memory settings. If those are set higher than the Docker memory settings, that’s going to cause issues. Or maybe the JVM settings are way too low, so it’s forcing a ton of garbage collection and the JVM is fighting memory issues even though the Docker container has plenty of memory available.
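To see what the JVM itself thinks it has to work with, you can ask `java.lang.Runtime` directly from a CFML page (a sketch; the MB rounding is just for readability):

```cfml
<!--- Sketch: inspect the JVM's view of its memory limits from inside Lucee. --->
<cfscript>
	rt = createObject( "java", "java.lang.Runtime" ).getRuntime();
	writeOutput( "Max heap: " & int( rt.maxMemory() / 1024 / 1024 ) & " MB<br />" );
	writeOutput( "Allocated: " & int( rt.totalMemory() / 1024 / 1024 ) & " MB<br />" );
	writeOutput( "Free (of allocated): " & int( rt.freeMemory() / 1024 / 1024 ) & " MB" );
</cfscript>
```

If the max heap reported here is close to (or above) what the Docker container is allowed, that’s a sign the two settings need to be reconciled.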

So it may be that all of this is fine, but if you haven’t looked things over, it’s worth doing just to make sure you haven’t accidentally created a bottleneck somewhere.


Ha ha, which basically sums up why some of this stuff is so hard to reason about! I appreciate the insight.

For what it is worth, I finally just deployed some test code to production to see what a “real world” performance scenario would be. Turns out, this is much less of an issue on our production *nix machines. In fact, file IO seems to be 68 times slower in my local Docker setup. So, I think the <cfimport> solution is still viable in production.
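For anyone wanting to reproduce that kind of environment comparison, a crude file-IO timing loop like this run in both places gives a rough ratio (a sketch only; the file name and iteration count are arbitrary):

```cfml
<!--- Sketch: rough file-IO timing, to compare local Docker vs production. --->
<cfscript>
	path = expandPath( "./iotest.txt" );
	start = getTickCount();
	for ( i = 1; i <= 100; i++ ) {
		fileWrite( path, "test #i#" );
		fileRead( path );
	}
	fileDelete( path );
	writeOutput( "100 write/read cycles: " & ( getTickCount() - start ) & " ms" );
</cfscript>
```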