Poor choice of words on my part; of course that is not carved in stone yet and is still open for discussion.
The idea to have this as the main topic has multiple reasons:
It is an important environment for the future of Lucee (at least in my opinion)
it could positively influence other apps like CommandBox, because the goals include extending support for env variables as a way to configure Lucee, improving startup time (VERY IMPORTANT), loading Lucee in single context mode …
what we have so far is outdated and needs an update
Configuration files (lucee-web.xml, scheduler.xml, etc.) may need to be rethought. It should be easier to configure via environment variables and secrets. At the moment I have to bake templating into my images so I can feed in Jinja2 templates and populate them at runtime through a custom entrypoint, which results in a pile of package dependencies being baked into my image.
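For context, the templating plumbing I'm talking about is roughly this shape (a sketch only; the template/config paths and the j2 CLI are my own choices, nothing official):

```sh
#!/bin/sh
# docker-entrypoint.sh (sketch): render config templates from env vars, then
# hand off to the normal container command. Paths are placeholders.
set -e

# Populate lucee-web.xml from a Jinja2 template using the container environment
j2 /opt/templates/lucee-web.xml.j2 > /opt/lucee/web/lucee-web.xml.cfc

exec "$@"
```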
I just thought of this yesterday: there is at least one directory which probably ought to be treated as persistent: the mail spool. If the container is recreated while there is a flaky mail connection, Lucee ought to pick up where it left off when the container restarts. This should be called out in a VOLUME instruction and documented. There may be other directories like this that I haven’t thought of, but after having lost mail for a span, this one made itself obvious. (Internal Lucee Directories that Should Be Persistent · Issue #41 · lucee/lucee-dockerfiles · GitHub)
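In Dockerfile terms the ask is something like the following (the spool path below is a placeholder; the real directory would need to be confirmed and documented):

```dockerfile
# Declare the mail spool as a volume so queued mail survives container recreation.
# NOTE: the path is a placeholder, not the confirmed spool location.
VOLUME ["/opt/lucee/web/mail-spool"]
```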
Consider joining forces with the Ortus team. The CommandBox Docker container is weird and complicated, but you learn to appreciate those complications after finding it necessary to haphazardly recreate those features when using the official Lucee image.
For the benefit of those who don’t have time to chase through to the github issues list:
@jamiejackson yes, each Lucee minor version does have its own repo on Docker Hub, for the primary reason that we’re using automated build triggers
If we drop the Docker Hub automated build triggers and build a more complex pipeline, then we could automate builds of the various combinations of base image, Tomcat version, Java version and Lucee version. The amount of work to get to that point hasn’t quite been justified yet, or I haven’t had enough free time. So to upgrade from Lucee 5.1 to 5.2 you’d change FROM lucee/lucee51-nginx:latest to FROM lucee/lucee52-nginx:latest
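In other words, the upgrade is a one-line change in a downstream Dockerfile:

```dockerfile
# Upgrading from Lucee 5.1 to 5.2 is just a base-image change:
# FROM lucee/lucee51-nginx:latest
FROM lucee/lucee52-nginx:latest
```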
It’s worth noting that Lucee 4.x is no longer officially supported; LAS itself only provides patches for significant security issues on this branch. That said, if you need support for later 4.x stack patching you can always lobby for specifically that; I’m sure the volunteers at Daemon (@justincarter, @modius) who manage those images will assist.
As it stands, stack patching (ie. Ubuntu/Java/Tomcat) is done every time a new Lucee build is re-imaged. Without a “trigger” (eg. someone escalating a specific concern) there is no impetus to build images for older Lucee builds that have not themselves been updated.
A few that I can think of off the top of my head are:
FusionReactor installation (and I think other ForgeBox libraries, although I’m not using those, ATM.)
CFConfig to configure the admin (instead of relying on a hardcoded lucee.web.xml or on template processing plumbing in the image). This utility is a big selling point for me (see the sketch after this list).
Secrets support (it would be pretty easy to port secrets support from the commandbox image, though)
Lucee versioning is intuitive and the versions are kept up to date (even for, say, Lucee 4)
I’m sure @bdw429s and @Jon_Clausen could point out others, but those are the ones that come to mind for me.
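To make the CFConfig point concrete, here is roughly what it replaces the XML templating with (a sketch only; the paths, format string and JSON file are assumptions based on CFConfig’s documented CLI, not anything in the official image):

```sh
# Sketch: importing admin settings with CFConfig instead of templating lucee-web.xml.
# The web context path and the .cfconfig.json contents are placeholders.
box cfconfig import from=/config/.cfconfig.json to=/opt/lucee/web toFormat=luceeWeb@5

# Individual settings can also be set directly, e.g. from an env variable:
box cfconfig set adminPassword="${LUCEE_ADMIN_PASSWORD}" to=/opt/lucee/web toFormat=luceeWeb@5
```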
I no longer need 4.x patches, but I think folks who are still using it probably assume that they’re going to get Lucee 4 security patches, as well as security patches from the upstream base images, when those happen.
The same holds true of versions which I assume are still supported, though: the Docker images for those haven’t been updated in a year, either.
I’m no expert on automated builds/triggers based on upstream images, but I think there are strategies out there that wouldn’t require community feedback to be the engine for rebuilds.
I have zero experience to back up the following ideas, but maybe they’ll start a conversation:
The Docker images are rebuilt more frequently than the installers, and we’re happy to accommodate specific requests at the current volume of requests. We would love to improve the Docker image build process. However, an automated build process that accommodates versioning for Lucee and Tomcat, for example, is not trivial.
While improving the official LAS Docker image build process is important, Lucee 5.4 engineering would be focused on improving the ability to build apps within Docker images in general and not the official base images per se.
FusionReactor installation (and I think other ForgeBox libraries, although I’m not using those, ATM.)
Commercial products will not be included in the Lucee base image. However, the Lucee images we ship are designed to work as “base” images; ie. you would have your own Dockerfile that builds on the base image. Your project’s Dockerfile would be the appropriate place to put the FusionReactor installation.
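As a rough illustration, a project Dockerfile building on the base image might pull in FusionReactor itself (a sketch under assumptions: the image tag, download URL, paths and agent options are illustrative placeholders, not official values):

```dockerfile
# Project Dockerfile (sketch) extending the official Lucee base image
FROM lucee/lucee52-nginx:latest

# Fetch the FusionReactor agent at build time (URL and path are placeholders)
ADD https://example.com/fusionreactor/fusionreactor.jar /opt/fusionreactor/fusionreactor.jar

# Register the agent with the JVM via Tomcat's setenv.sh
RUN echo 'CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/fusionreactor/fusionreactor.jar=name=myapp,address=8088"' \
    >> /usr/local/tomcat/bin/setenv.sh

# Add the application code on top
COPY ./www /var/www
```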
CFConfig to configure the admin (instead of relying on hardcoded lucee.web.xml or relying on template processing plumbing in the image). This utility is a big selling point for me.
It would be great to see all aspects of Lucee config configurable from the codebase and/or environment variables.
Based on the current state of Lucee configuration by environment, what’s missing to ensure you never need to hardcode/template lucee.web.xml?
Secrets support (it would be pretty easy to port secrets support from the commandbox image, though)
Secrets support in Docker is a little more difficult. Given the current rate of evolution of Docker Swarm vs K8S vs MESOS vs roll your own, I’m not sure there is a generic solution that should be integrated into a base image.
For example, several orchestration tools can automatically move secrets into the environment scope as part of the deployment.
Lucee versioning is intuitive and the versions are kept up to date (even for, say, Lucee 4)
It’s fair to say the Ortus approach is very different. For example:
the CFML engine does not appear to be in the image; CommandBox is utilised to download the relevant engine on docker run
Tomcat is not the servlet container, whereas Tomcat is the LAS distribution standard
FWIW, they’re not really “in” the CommandBox image, either, but that image makes it really easy and convenient to pull them in.
For what it’s worth, I’m already using the official image and we already have this figured out. However, the CommandBox image does it better (from what I’ve heard about it), and consumers of that image don’t need to know how to install it. It took us at least a few hours to figure out all the bits and pieces, and that’s how long it’s going to take the next guy, too, with the official image. It’s nice to be able to hit the ground running as with the CommandBox image, versus everybody rolling their own solutions.
Maybe out of old habit, we’re not doing much “admin” type configuration in Application.cfc, so there’s probably more that we could (should?) do that way; however, I’ll list a few things (a sketch of the application-configurable items follows the list):
Data sources
Gateways*
Scheduled tasks* (That’s scheduler.xml, but same idea.)
Lucee admin password*
Mail servers
Mappings
Error/404 templates
Caches
* I don’t think these are application-configurable. So these, I think, are problematic.
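For reference, the items above that are application-configurable can be expressed roughly like this in Application.cfc (a sketch only; the driver class, hosts and credentials are placeholders, not anyone’s actual setup):

```cfml
// Application.cfc -- illustrative sketch of per-application "admin" settings
component {
    this.name = "myapp";

    // Data source (connection details would normally come from env vars/secrets)
    this.datasources["mydb"] = {
        class: "org.postgresql.Driver",
        connectionString: "jdbc:postgresql://db:5432/mydb",
        username: server.system.environment.DB_USER ?: "",
        password: server.system.environment.DB_PASSWORD ?: ""
    };

    // Mappings
    this.mappings["/lib"] = expandPath("./lib");

    // Mail servers
    this.mailservers = [
        { host: "smtp.example.com", port: 587, username: "", password: "" }
    ];
}
```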
Good point, I don’t have a lot of experience with other orchestrators. However, you gave me an idea for a way to further customize my setenv.sh to convert Docker secrets to env variables; I’ll have to try that out. For any bits that want both secrets/env-vars and must use admin configuration, though, I’m not sure we have a workaround (other than monkeying with config XMLs).
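Something like the following is what I have in mind for the setenv.sh customization (a sketch only; it assumes Docker Swarm-style secrets mounted under /run/secrets, and the secret names are whatever you create):

```sh
# setenv.sh (sketch) -- export each mounted Docker secret as an env variable
# named after the secret file, so downstream config can read it from the environment.
for secret in /run/secrets/*; do
  [ -f "$secret" ] || continue
  name=$(basename "$secret")
  export "$name"="$(cat "$secret")"
done
```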
Now that I think about it, maybe something that would help is if lucee-web.xml and scheduler.xml were env-var-aware. (They’re not, presently, right??)
It’s definitely different, and it’s weird that it uses an engine (Lucee 4) to run the box commands (including how it downloads other engines), but it’s a very convenient base image to build on (and this is where you bake in the engine, with a warmup during the build). It makes it a lot easier to configure the build to bake in the engine and other dependencies.
I grant that I was surprised back when I discovered that the CommandBox image uses WildFly rather than Tomcat.
That is exactly the idea behind LDEV-1746: it would make them “env-var-aware”, or at least Java system property aware, and system properties can be specified via env vars:
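To illustrate the “specified via env-vars” part: a system property can already be surfaced from the environment before Lucee starts, e.g. in setenv.sh (the property name below is purely illustrative; the real names would come out of LDEV-1746):

```sh
# setenv.sh (sketch): turn an env variable into a Java system property that an
# env-var-aware config could then read. The property name is illustrative only.
export CATALINA_OPTS="$CATALINA_OPTS -Dlucee.admin.password=${LUCEE_ADMIN_PASSWORD}"
```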
IMO there are two features that will make Lucee work better with Docker:
Config
We should allow specifying external config files by separating the config files from the work directory.
The Lucee work directory cannot be shared between multiple instances, or even web contexts, because writing to log/tmp/etc. from multiple contexts at the same time will clash and fail.
Allowing a custom path to the config files would enable reuse of the config files outside of the containers (in deployments where the image does not contain the application code), while keeping the work directories inside the containers.
Logging
We should make it easier to log outside of the container. Fluentd seems like an interesting option.
Actually I am referring to a different scenario here. There are two primary methods for container deployment:
Keep all of the Application data, config, etc. inside the image and build a new image each time you want to update
Mount the Application data, config, etc. from the host machine
Both methodologies have their pros and cons, and you may choose either one according to the process that your organization prefers. For example, large organizations with more resources might choose method 1, while small organizations, or anyone in a development environment, might prefer method 2.
The issue that I’m referring to relates to method 2 above. Let’s say that you have the application code and configuration on the host’s file system, and you do not want to put them in the image, so that you can make changes easily without rebuilding the image, redeploying, etc.
If you run one container then there’s no problem, but if you want to run multiple containers, for high availability for example, then you cannot mount the config directory anymore because the work directory inside it cannot be shared between multiple instances.
By allowing “config” to be separated from the “work” directories, we could mount the config directories from the host but keep the ephemeral (e.g. work) directories inside the container.
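Concretely, the separation would allow a method-2 deployment along these lines (the paths and image tag are illustrative only; the separate config mount point is exactly the part that does not exist today):

```sh
# Sketch: app code and config mounted from the host, while the ephemeral work
# directories stay inside the container (what the proposed config/work split
# would enable; today the work directory lives inside the config tree).
docker run -d \
  -v /srv/myapp/code:/var/www \
  -v /srv/myapp/lucee-config:/opt/lucee/config \
  lucee/lucee52-nginx:latest
```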
While this is needed for development, I’d argue this is not a best practice or even common practice for deployments. Any container in a production scenario should not rely on a shared file resource on the host for configuration; with the possible exception of “secrets” management.
Why wouldn’t you bring in the config via the build process and environment variables?
Development is a primary reason to use method #2 above, but I’d also argue that for a small team it’s much faster to use it even beyond development. Maintaining a single copy of the Application and the config is easier and faster, and not every tiny tweak should warrant image building and the whole deployment process.
Sure, in a perfect world everyone will follow all of the “best practices”, but not every organization can, or is willing to, pay for the overhead that doing so incurs.
I still think that it’s a good idea to separate the config from the work directories, even when not using containers. It will make backing up your configuration much easier.
In theory… but in practice you have to build a container image; it’s a natural part of that development cycle. Deploying a change necessitates a new build and deploy. The beauty of Docker, even for small teams, is that this pipeline can be automated using standards-based and inexpensive tooling.
Horses for courses…
But I do believe we should focus efforts on standardising Lucee configuration to support best practice deployment in containers; “what is that?” is probably the first question we should be discussing.
I’d suggest that might involve:
configuration via environment variables (mostly in place)
easier extensions deployment via container build pipelines
zero downloads, “phone homes”, jar version checks, or similar processes on start-up
security;
– disable all non essential services eg. disable admin
– generate admin passwords on startup (or other option for closing this vector)
– precompile app code base and not allow changes as part of build pipeline
log shipping options; eg. pushing to STDOUT, syslog, etc (see the sketch after this list)
centralised stores for persisting state between re-deployments;
– sessions (done, could always be better)
– scheduled tasks/jobs/mail (non-existent)
– event gateway queues (non-existent)
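On the log shipping point above, one common container pattern is to link log files to the container’s standard streams at build time (a sketch only; the Lucee log paths are assumptions and the approach has caveats around log rotation):

```dockerfile
# Sketch: ship Lucee logs via STDOUT/STDERR by linking the log files to the
# container's standard streams. Paths are assumptions, not official locations.
RUN mkdir -p /opt/lucee/web/logs \
 && ln -sf /dev/stdout /opt/lucee/web/logs/application.log \
 && ln -sf /dev/stderr /opt/lucee/web/logs/exception.log
```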
Yeah, this is the biggest annoyance in build processes. For some of my clients’ setups we use a warm-up image that has to start so that all these things are pulled in, but that is less than ideal.