Lucee 5.4 feedback; Docker and Heroku

  • I’d like to see the topic of secrets discussed: “Support for Secrets?”
  • Docker images should use proper versioning (lucee/lucee5 should mean the latest 5, for instance).
  • Official images shouldn’t be left to rot. For instance, this one hasn’t seen an update in a year. I think the community expects to get patches.
  • Configuration files (lucee-web.xml, scheduler.xml, etc.) may need to be re-thought. It should be easier to configure via environment variables and secrets. I’m having to bake templating into my images to be able to feed in jinja2 templates and populate them at runtime through a custom entrypoint. This results in a pile of package dependencies being baked into my image.
  • I just thought of this yesterday: there is at least one directory which probably ought to be treated as persistent: the mail spool. If the container is recreated while there is a flaky mail connection, Lucee ought to pick up where it left off when the container restarts. This should be called out in the Dockerfile’s VOLUME instruction and documented. There may be other directories like this that I haven’t thought of, but after having lost mail for a span, this one made itself obvious.
  • Consider joining forces with the Ortus team. The CommandBox Docker container is weird and complicated, but you learn to appreciate those complications after finding it necessary to haphazardly recreate those features when using the official Lucee image.
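
For what it’s worth, the templating workaround described above doesn’t strictly need a jinja2 toolchain. Here is a minimal sketch of the same idea in plain shell; this is not the official image’s mechanism, and all paths, placeholder names, and values are made up for illustration:

```shell
#!/bin/sh
# Minimal entrypoint-style templating sketch: substitute environment variables
# into a Lucee config template before the server starts. Paths and variable
# names are illustrative only.
render_config() {
  # Replace ${MAIL_HOST} and ${DB_PASSWORD} placeholders in $1, writing to $2.
  sed -e "s|\${MAIL_HOST}|${MAIL_HOST}|g" \
      -e "s|\${DB_PASSWORD}|${DB_PASSWORD}|g" "$1" > "$2"
}

export MAIL_HOST=smtp.example.com
export DB_PASSWORD=changeme
printf '<mail host="${MAIL_HOST}"/>\n' > /tmp/lucee-web.xml.tpl
render_config /tmp/lucee-web.xml.tpl /tmp/lucee-web.xml
cat /tmp/lucee-web.xml    # → <mail host="smtp.example.com"/>
```

In a real image this would run in the entrypoint before handing off to catalina.sh; a jinja2-based version does the same job, at the cost of baking a Python toolchain into the image.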

For the benefit of those who don’t have time to chase through to the github issues list:

@jamiejackson Yes, each Lucee minor version does have its own repo on Docker Hub, for the primary reason that we’re using automated build triggers.

If we drop the Docker Hub automated build triggers and build a more complex pipeline, then we could automate the builds of the various combinations of base image, Tomcat version, Java version, and Lucee version. The amount of work to get to that point hasn’t quite been justified yet, or I haven’t had enough free time :slightly_smiling_face: So to upgrade from Lucee 5.1 to 5.2 you’d change FROM lucee/lucee51-nginx:latest to FROM lucee/lucee52-nginx:latest

You can see the kinds of additional tools/scripts that need to be implemented for the solr image builds here:

It’s worth noting that Lucee 4.x is no longer officially supported. LAS itself only offers patches for significant security issues on this branch. That said, if you need later 4.x stack patching you can always lobby for specifically that; I’m sure the volunteers at Daemon (@justincarter @modius) who manage those images will assist.

As it stands, stack patching (ie. ubuntu/java/tomcat) is done every time a new Lucee build is re-imaged. Without a “trigger” (eg. someone escalating a specific concern) there is no impetus to build images for older Lucee builds that have not themselves been updated.

Specifically which features are you after in the official Lucee image?

A few that I can think of off the top of my head are:

  • FusionReactor installation (and I think other ForgeBox libraries, although I’m not using those, ATM.)
  • CFConfig to configure the admin (instead of relying on hardcoded lucee.web.xml or relying on template processing plumbing in the image). This utility is a big selling point for me.
  • Secrets support (it would be pretty easy to port secrets support from the commandbox image, though)
  • Lucee versioning is intuitive and the versions are kept up to date (even for, say, Lucee 4)

I’m sure @bdw429s and @Jon_Clausen could point out others, but those are the ones that come to mind for me.

I no longer need 4.x patches, but I think that folks that are still using it probably assume that they’re going to get Lucee 4 security patches and security patches from the upstream base images when those happen.

The same holds true of versions which I assume are still supported, though: (That hasn’t been updated in a year, either.)

I’m no expert on automated builds/triggers based on upstream images, but I think there are strategies out there that wouldn’t require community feedback to be the engine for rebuilds.

I have zero experience to back up the following ideas, but maybe they’ll start a conversation:

  • Rebuilding on Docker Hub based on upstream changes:
  • On a schedule (maybe daily), automation runs a docker pull on the tomcat base image. If a pull grabs something new, that’s the cue to rebuild.
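
To make that concrete, the daily check could be a small cron job. This is only a sketch under assumptions: in a real job the digest would come from the output of docker pull, and the “rebuild” branch would POST to a Docker Hub build-trigger URL (omitted here):

```shell
#!/bin/sh
# Sketch of a scheduled upstream-change check. In a real cron job, the digest
# would come from e.g.: docker pull tomcat:9 | awk '/^Digest:/ {print $2}'
# and the rebuild branch would hit a Docker Hub build-trigger URL.
STATE=$(mktemp)   # stores the last-seen upstream digest

check_upstream() {
  new="$1"
  old=$(cat "$STATE" 2>/dev/null)
  if [ "$new" != "$old" ]; then
    printf '%s\n' "$new" > "$STATE"
    echo rebuild        # upstream changed: trigger an image rebuild here
  else
    echo up-to-date     # nothing new: skip the rebuild
  fi
}

check_upstream sha256:aaa   # first run → rebuild
check_upstream sha256:aaa   # same digest → up-to-date
```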

The Docker images are rebuilt more frequently than the installers, and we’re happy to accommodate specific requests at the current volume of requests. We would love to improve the Docker image build process. However, an automated build process to accommodate versioning for Lucee and Tomcat, for example, is not trivial.

While improving the official LAS Docker image build process is important, Lucee 5.4 engineering would be focused on improving the ability to build apps within Docker images in general and not the official base images per se.

FusionReactor installation (and I think other ForgeBox libraries, although I’m not using those, ATM.)

Commercial products will not be included in the Lucee base image. However, the Lucee images we ship are designed to work as “base” images; ie. you would have your own Dockerfile that builds on the base image. Your project’s Dockerfile would be the appropriate place to put the FusionReactor installation steps.

For example;

FROM lucee/lucee52-nginx:latest
ENV LUCEE_JAVA_OPTS "-Xms512m -Xmx1024m"
ENV TZ "Australia/Sydney"
# Install FusionReactor;
RUN mkdir -p /opt/fusionreactor/ && wget -q -O /opt/fusionreactor/fusionreactor.jar

CFConfig to configure the admin (instead of relying on hardcoded lucee.web.xml or relying on template processing plumbing in the image). This utility is a big selling point for me.

It would be great to see all aspects of Lucee config configurable from the codebase and/or environment variables.

Based on the current state of Lucee configuration by environment, what’s missing to ensure you never need to hardcode/template lucee.web.xml?

Secrets support (it would be pretty easy to port secrets support from the commandbox image, though)

Secrets support in Docker is a little more difficult. Given the current rate of evolution of Docker Swarm vs K8S vs MESOS vs roll your own, I’m not sure there is a generic solution that should be integrated into a base image.

For example, several orchestration tools can automatically move secrets into the environment scope as part of the deployment.
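
For orchestrators that only mount secrets as files, a common workaround is the *_FILE convention used by many official Docker images: the entrypoint reads a mounted secret file into a plain env var. A minimal sketch, with illustrative names:

```shell
#!/bin/sh
# Sketch of the *_FILE convention: if DB_PASSWORD_FILE points at a mounted
# secret file, read its contents into DB_PASSWORD so the app only ever
# sees an environment variable.
file_env() {
  var="$1"
  eval "file=\${${var}_FILE:-}"          # e.g. $DB_PASSWORD_FILE
  if [ -n "$file" ] && [ -r "$file" ]; then
    eval "export $var=\"\$(cat \"\$file\")\""
  fi
}

printf 's3cret' > /tmp/db_password       # stand-in for /run/secrets/db_password
export DB_PASSWORD_FILE=/tmp/db_password
file_env DB_PASSWORD
echo "$DB_PASSWORD"    # → s3cret
```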

Lucee versioning is intuitive and the versions are kept up to date (even for, say, Lucee 4)

It’s fair to say the Ortus approach is very different. For example;

  • the CFML engine does not appear to be in the image; commandbox is utilised to download the relevant engine on docker run
  • Tomcat, the LAS distribution standard, is not the servlet container

FWIW, they’re not really “in” the CommandBox image, either, but that image makes it really easy and convenient to pull them in.

For what it’s worth, I’m already using the official image and we already have this figured out. However, the CommandBox image does it better (from what I’ve heard about it), and consumers of that image don’t need to know how to install it. It took us at least a few hours to figure out all the bits and pieces, and that’s how long it’s going to take the next guy, too, with the official image. It’s nice to be able to hit the ground running as with the CommandBox image, versus everybody rolling their own solutions.

Maybe out of old habit, we’re not doing much “admin”-type configuration in Application.cfc, so there’s probably more that we could (should?) do that way; however, I’ll list a few things:

  • Data sources
  • Gateways*
  • Scheduled tasks* (That’s scheduler.xml, but same idea.)
  • Lucee admin password*
  • Mail servers
  • Mappings
  • Error/404 templates
  • Caches

* I don’t think these are application-configurable. So these, I think, are problematic.
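
For reference, several of the items above map onto a CFConfig JSON file. This is a sketch only: the keys follow CFConfig’s documented format as I understand it, and the values, including the env-var placeholders (which CFConfig can substitute from the environment, as I understand it), are purely illustrative:

```json
{
  "adminPassword": "${LUCEE_ADMIN_PASSWORD}",
  "mailServers": [
    { "smtp": "${SMTP_HOST}", "port": 25 }
  ],
  "datasources": {
    "myDS": {
      "dbdriver": "MySQL",
      "host": "${DB_HOST}",
      "database": "myapp",
      "username": "${DB_USER}",
      "password": "${DB_PASSWORD}"
    }
  }
}
```

With the CommandBox image a file like this is picked up at server start; against a plain Lucee install, the cfconfig CLI can import it into the relevant context.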

Good point, I don’t have a lot of experience with other orchestrators. However, you gave me an idea for a way to further customize my entrypoint to convert Docker secrets to env variables; I’ll have to try that out. For any bits that need secrets/env-vars but must be configured via the admin, though, I’m not sure we have a workaround (other than monkeying with the config XMLs).

Now that I think about it, maybe something that would help is if lucee-web.xml and scheduler.xml were env-var-aware. (They’re not, presently, right?)

It’s definitely different, and it’s weird that it uses an engine (Lucee 4) to run the box commands (including downloading other engines), but it’s a very convenient base image to build on. It makes it a lot easier to configure the build to bake in the engine (with a warm-up during the build) and other dependencies.

I grant that I was surprised back when I discovered that the commandbox image uses WildFly vs. Tomcat.

That is exactly the idea behind LDEV-1746 - it would make them “env-var-aware”, or at least Java System Property aware, which can be specified via env-vars: (@modius how come my links never pull a preview?)

IMO there are two features that will make Lucee work better with Docker:


We should allow specifying external config files by separating the config files from the work directory.

The Lucee work directory cannot be shared between multiple instances, or even web contexts, because writing to log/tmp/etc. from multiple contexts at the same time will clash and fail.

Allowing a custom path to the config files to be specified will enable reuse of the config files outside of the containers (in deployments where the image does not contain the Application code), while keeping the work directories inside the containers.


We should make it easier to log outside of the container. Fluentd seems like an interesting option.

Would it help if we enforced a single context installation for container deployments?

It would help to have a generic log handler that ships to STDOUT with some config options.

Atlassian JIRA’s page metadata sucks an orange through a hose pipe :orange_heart:

Actually I am referring to a different scenario here. There are two primary methods for container deployment:

  1. Keep all of the Application data, config, etc. inside the image and build a new image each time you want to update

  2. Mount the Application data, config, etc. from the host machine

Both methodologies have their pros and cons, and you may choose either one according to the process that your organization prefers. For example, large organizations with more resources might choose method 1, while small organizations, or development environments, might prefer method 2.

The issue that I’m referring to relates to method 2 above. Let’s say that you have the Application code and configuration on the host’s file system, and you do not want to bake it into the image, so that you can make changes easily without rebuilding the image, redeploying, etc.

If you run one container then there’s no problem, but if you want to run multiple containers for high availability, for example, then you cannot mount the config directory anymore, because the work directory inside it cannot be shared between multiple instances.

By allowing the “config” and “work” directories to be separated, we can mount the config directories from the host but keep the ephemeral (e.g. work) directories inside the container.

While this is needed for development, I’d argue this is not a best practice or even common practice for deployments. Any container in a production scenario should not rely on a shared file resource on the host for configuration; with the possible exception of “secrets” management.

Why wouldn’t you bring in the config via the build process and environment variables?

Development is a primary reason to use method 2 above, but I’d also argue that for a small team it’s much faster to use it even beyond development. Maintaining a single copy of the Application and the config is easier and faster, and not every tiny tweak should warrant an image build and the whole deployment process.

Sure, in a perfect world everyone would follow all of the “best practices”, but not every organization can, or is willing to, pay the overhead that doing so incurs.

I still think that it’s a good idea to separate the config from the work directories, even when not using containers. It will make backing up your configuration much easier.

In theory… but in practice you have to build a container image; it’s a natural part of that development cycle. Deploying a change necessitates a new build and deploy. The beauty of Docker, even for small teams, is that this pipeline can be automated using standards-based and inexpensive tooling.

Horses for courses… :horse_racing: :horse_racing: :horse_racing:

But I do believe we should focus efforts on standardising Lucee configuration to support a best practice deployment in containers; “what is that?” is probably the first question we should be discussing.

I’d suggest that might involve:

  • configuration via environment variables (mostly in place)
  • easier extensions deployment via container build pipelines
  • no downloads, “phone homes”, jar version checks, or similar processes on start-up
  • security;
    – disable all non essential services eg. disable admin
    – generate admin passwords on startup (or other option for closing this vector)
    – precompile the app code base as part of the build pipeline and disallow changes
  • log shipping options; eg. pushing to STDOUT, syslog, etc
  • centralised stores for persisting state between re-deployments;
    – sessions (done, could always be better)
    – scheduled tasks/jobs/mail (non-existent)
    – event gateway queues (non-existent)
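
Put together, a deployment along those lines might look something like the compose file below. This is only a sketch: the image name is made up, and the LUCEE_* variables are hypothetical stand-ins for whatever switches end up existing:

```yaml
services:
  app:
    image: myorg/myapp:1.4.2          # engine, extensions, precompiled code baked in at build time
    environment:
      LUCEE_ADMIN_ENABLED: "false"    # hypothetical: admin completely disabled
      LUCEE_SESSION_STORE: "redis"    # hypothetical: centralised session state
      LUCEE_STARTUP_CHECKS: "false"   # hypothetical: no downloads/phone-homes on start-up
    logging:
      driver: syslog                  # ship logs off the container
```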

Yeah, this is the biggest annoyance in build processes. For some of my clients’ setups we use a warm-up image that has to start so that all of these things get pulled in, but that is less than ideal.

Very timely joke tweet:


It’s great that I can drop a *.lex into the deploy directory, but it really stinks that the installation process is so “secretive” that one can’t know it’s finished except through trial-and-error heuristics.

Sure, I solved the CFSpreadsheet installation puzzle a while back (after probably 8 hours of total effort, once other methods proved less than 100% reliable), but now I’m about to play with redis, and I dread the time and black magic that effort will require.

I really hope somebody responds to this and tells me that I’ve missed something, and that third-party extension installation is actually straightforward.
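
For what it’s worth, the heuristic we landed on assumes (based on observed behaviour, not documentation) that Lucee removes a .lex from the deploy directory once it has processed it, so we simply poll for the file to disappear. A sketch:

```shell
#!/bin/sh
# Poll until a deployed .lex disappears from the deploy directory, on the
# observed (undocumented) assumption that Lucee removes it once processed.
wait_for_extension() {
  lex="$1"
  timeout="${2:-120}"   # seconds to wait before giving up
  while [ -e "$lex" ]; do
    timeout=$((timeout - 1))
    [ "$timeout" -le 0 ] && return 1
    sleep 1
  done
  return 0
}

touch /tmp/redis-extension.lex
( sleep 1; rm -f /tmp/redis-extension.lex ) &   # stand-in for Lucee picking it up
wait_for_extension /tmp/redis-extension.lex 10 && echo installed   # → installed
```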

I’d like to see consideration given to making all the web admins available under a single URL.

With CDNs/proxies, clustering, load balancers, etc., it can become rather tricky to access a web admin, especially when locking down access to the Lucee admin by IP address.

In our Docker clusters we go the opposite way, we have no reason to ever allow access to the Lucee admin and so we want it to be completely disabled/unavailable in production environments. It would be great to have a build where it’s simply not present at all, or at the very least can be completely disabled by an environment variable, rather than removing mappings or using the web server to secure it :slight_smile:

I can see how that ticket might be useful for server installs with many apps in a single Lucee instance, but in a container environment I think it’s less useful, as each app runs in its own container. Exposing the admin is generally a bad idea, but particularly with Docker, configuration should come from the image and the environment rather than from twiddling settings in the admin.


My newer plugin installation approach:

Love the ideas:

  • admin as a plugin that is not activated by default
  • single sign-in page for web admins (aka 1 URI for the server, 1 URI for all webs)
  • no phone homes

Docker Windows Nano (or at least Core) containers:

Would love to see an official Windows Server 2016 (and shortly 2019) Nano, or at least Core, docker image (IIS/Lucee)… a lot of CF installs are on Windows, so it would make the journey easier for MS shops and those who prefer Windows. Choco + CommandBox included.
