Lucee 5.4 feedback: Docker and Heroku

For every Lucee release we pick one main topic; for Lucee 5.4 this should be “Docker & Heroku”.

The idea is to make it easier to use Lucee with these environments.

So we would love your input on what you miss so far and what could be better, faster, more dynamic…


“Should be”? How about asking for some community input?

Poor choice of words on my part; of course that is not carved in stone yet and is still open for discussion.

The idea to have this as main topic has multiple reasons:

  1. It is an important environment for the future of Lucee (at least in my opinion)
  2. it could positively influence other projects like CommandBox, because the goals include extending support for env variables as a way to configure Lucee, improving startup time (VERY IMPORTANT), loading Lucee in single-context mode, …
  3. what we have so far is outdated and needs an update

Of course the main topic could also be:

  • update Hibernate
  • update the search
  • improve the REST interface

Next to the main topic we will of course also address feature requests from Jira, no question:
https://luceeserver.atlassian.net/issues/?jql=issuetype%20in%20(Enhancement%2C%20Task)%20AND%20status%20in%20("Added%20to%20TAG%20agenda"%2C%20"Awaiting%20Approval"%2C%20Backlog%2C%20"Further%20consultation%20required"%2C%20New)%20AND%20updated%20>%3D%20-52w%20ORDER%20BY%20key%20ASC%2C%20lastViewed%20DESC

I’d like to see LDEV-1746 given priority. It can help with Docker but also with configuration in general.

Allow ${system.property} in Lucee config files
https://luceeserver.atlassian.net/browse/LDEV-1746
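
If implemented, a config file like lucee-web.xml could then reference values supplied at startup instead of hardcoding them. A minimal sketch, assuming the ticket lands roughly as proposed (element and attribute names here are illustrative):

<data-source name="mydb"
             class="org.postgresql.Driver"
             dsn="jdbc:postgresql://${db.host}:5432/mydb"
             password="${db.password}"/>

The ${…} values would come from Java system properties, e.g. -Ddb.host and -Ddb.password passed in via LUCEE_JAVA_OPTS.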


  • I’d like to see the topic of secrets discussed: https://lucee.daemonite.io/t/support-for-secrets/3792
  • Docker images should use proper versioning (Use Conventional Image Version Tagging · Issue #40 · lucee/lucee-dockerfiles · GitHub). lucee/lucee5 should mean the latest 5, for instance.
  • Official images shouldn’t be left to rot. For instance, one of the Docker Hub images hasn’t seen an update in a year. I think the community expects to get patches. (Keep Images Up to Date · Issue #42 · lucee/lucee-dockerfiles · GitHub)
  • Configuration files (lucee-web.xml, scheduler.xml, etc.) may need to be rethought. It should be easier to configure via environment variables and secrets. I’m having to bake templating into my images so I can feed in jinja2 templates and populate them at runtime through a custom entrypoint (see the sketch after this list), which results in a pile of package dependencies being baked into my image.
  • I just thought of this yesterday: There is at least one directory which probably ought to be thought of as persistent: the mail spool. If the container is recreated while there is a flaky mail connection, Lucee ought to pick up where it left off when the container restarts. This should be called out in VOLUME and documented. There may be other directories like this that I haven’t thought of, but after having lost mail for a span, this one made itself obvious. (Internal Lucee Directories that Should Be Persistent · Issue #41 · lucee/lucee-dockerfiles · GitHub)
  • Consider joining forces with the Ortus team. The CommandBox Docker container is weird and complicated, but you learn to appreciate those complications after finding it necessary to haphazardly recreate those features when using the official Lucee image.
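
For reference, the entrypoint templating mentioned above amounts to something like this. It is a sketch of my own setup (the paths and template names are mine; j2 is the j2cli renderer, which by default fills templates from environment variables):

#!/bin/sh
# render each config template against the current environment before boot
for tpl in /templates/*.j2; do
  j2 "$tpl" > "/opt/lucee/web/$(basename "$tpl" .j2)"
done
# hand off to the image's normal startup process
exec catalina.sh run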

For the benefit of those who don’t have time to chase through to the github issues list:

@jamiejackson yes, each Lucee minor version does have its own repo in Docker Hub, primarily because we’re using automated build triggers

If we drop the Docker Hub automated build triggers and build a more complex pipeline, we could automate builds of the various combinations of base image, Tomcat version, Java version, and Lucee version. The amount of work to get to that point hasn’t quite been justified yet, or I haven’t had enough free time :slightly_smiling_face: So to upgrade from Lucee 5.1 to 5.2 you’d change FROM lucee/lucee51-nginx:latest to FROM lucee/lucee52-nginx:latest

you can see the kinds of additional tools/scripts that need to be implemented for the solr image builds here:
docker-solr/tools at afe43e97be7aa764656f3e0aa068bed90f6bdd27 · docker-solr/docker-solr · GitHub
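
To give a feel for what such a pipeline would involve, the core of it boils down to a loop over the version matrix, with the real effort going into testing and publishing each combination (the versions, build args, and tags below are hypothetical):

#!/bin/sh
# build one image per (Lucee, Tomcat) combination we want to publish
for lucee in 5.1.4.42 5.2.1.9; do
  for tomcat in 8.5 9.0; do
    docker build \
      --build-arg LUCEE_VERSION="$lucee" \
      --build-arg TOMCAT_VERSION="$tomcat" \
      -t "lucee/lucee:$lucee-tomcat$tomcat" .
  done
done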

It’s worth noting that Lucee 4.x is no longer officially supported; LAS itself only offers significant security patching on this branch. That said, if you need support for later 4.x stack patching you can always lobby for specifically that; the volunteers at Daemon (@justincarter @modius) who manage those images will, I’m sure, assist.

As it stands, stack patching (ie. Ubuntu/Java/Tomcat) is done every time a new Lucee build is re-imaged. Without a “trigger” (eg. someone escalating a specific concern) there is no impetus to build images for older Lucee builds that have not themselves been updated.

Specifically which features are you after in the official Lucee image?

A few that I can think of off the top of my head are:

  • FusionReactor installation (and I think other ForgeBox libraries, although I’m not using those, ATM.)
  • CFConfig to configure the admin (instead of relying on a hardcoded lucee-web.xml or on template-processing plumbing in the image). This utility is a big selling point for me.
  • Secrets support (it would be pretty easy to port secrets support from the commandbox image, though)
  • Lucee versioning is intuitive and the versions are kept up to date (even for, say, Lucee 4)

I’m sure @bdw429s and @Jon_Clausen could point out others, but those are the ones that come to mind for me.


I no longer need 4.x patches, but I think that folks that are still using it probably assume that they’re going to get Lucee 4 security patches and security patches from the upstream base images when those happen.

The same holds true of versions that I assume are still supported, though: one of those Docker Hub images hasn’t been updated in a year, either.

I’m no expert on automated builds/triggers based on upstream images, but I think there are strategies out there that wouldn’t require community feedback to be the engine for rebuilds.

I have zero experience to back up the following ideas, but maybe they’ll start a conversation:

The Docker images are rebuilt more frequently than the installers, and we’re happy to accommodate specific requests at the current volume of requests. We would love to improve the Docker image build process. However, an automated build process to accommodate versioning for Lucee and Tomcat, for example, is not trivial.

While improving the official LAS Docker image build process is important, Lucee 5.4 engineering would be focused on improving the ability to build apps within Docker images in general and not the official base images per se.


FusionReactor installation (and I think other ForgeBox libraries, although I’m not using those, ATM.)

Commercial products will not be included in the Lucee base image. However, the Lucee images we ship are designed to work as “base” images; ie. you would have your own Dockerfile that builds on the base image. Your project’s Dockerfile would be the appropriate place to put calls to FusionReactor.

For example;

FROM lucee/lucee52-nginx:latest
ENV LUCEE_JAVA_OPTS "-Xms512m -Xmx1024m"
ENV TZ "Australia/Sydney"
# Install FusionReactor; http://www.fusion-reactor.com/
RUN mkdir -p /opt/fusionreactor/ && wget -q -O /opt/fusionreactor/fusionreactor.jar https://intergral-dl.s3.amazonaws.com/FR/FusionReactor-6.x.x/fusionreactor.jar
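
Note that downloading the jar alone doesn’t activate it: FusionReactor runs as a Java agent, so you would also extend the JVM options along these lines (the instance name and port here are illustrative):

ENV LUCEE_JAVA_OPTS "-Xms512m -Xmx1024m -javaagent:/opt/fusionreactor/fusionreactor.jar=name=lucee,address=8088"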

CFConfig to configure the admin (instead of relying on a hardcoded lucee-web.xml or on template-processing plumbing in the image). This utility is a big selling point for me

It would be great to see all aspects of Lucee config configurable from the codebase and/or environment variables.

https://lucee.daemonite.io/t/lucee-configuration-options/321

Based on the current state of Lucee configuration by environment, what’s missing to ensure you never need to hardcode/template lucee-web.xml?


Secrets support (it would be pretty easy to port secrets support from the commandbox image, though)

Secrets support in Docker is a little more difficult. Given the current rate of evolution of Docker Swarm vs K8S vs MESOS vs roll your own, I’m not sure there is a generic solution that should be integrated into a base image.

For example, several orchestration tools can automatically move secrets into the environment scope as part of the deployment.


Lucee versioning is intuitive and the versions are kept up to date (even for, say, Lucee 4)

It’s fair to say the Ortus approach is very different. For example;

  • the CFML engine does not appear to be in the image; commandbox is utilised to download the relevant engine on docker run
  • Tomcat is not the servlet container, though Tomcat is the LAS distribution standard

FWIW, they’re not really “in” the CommandBox image, either, but that image makes it really easy and convenient to pull them in.

For what it’s worth, I’m already using the official image and we already have this figured out. However, the CommandBox image does it better (from what I’ve heard about it), and consumers of that image don’t need to know how to install it. It took us at least a few hours to figure out all the bits and pieces, and that’s how long it’s going to take the next guy, too, with the official image. It’s nice to be able to hit the ground running as with the CommandBox image, versus everybody rolling their own solutions.

Maybe out of old habit, we’re not doing much “admin” type configuration in Application.cfc, so there’s probably more that we could (should?) do that way; however, I’ll list a few things:

  • Data sources
  • Gateways*
  • Scheduled tasks* (That’s scheduler.xml, but same idea.)
  • Lucee admin password*
  • Mail servers
  • Mappings
  • Error/404 templates
  • Caches

* I don’t think these are application-configurable. So these, I think, are problematic.

Good point, I don’t have a lot of experience with other orchestrators. However, you gave me an idea for a way to further customize my setenv.sh to convert Docker secrets to env variables – I’ll have to try that out. For any bits that want both secrets/env-vars and must use admin configuration, though, I’m not sure we have a workaround (other than monkeying with the config XMLs).
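
A minimal sketch of that setenv.sh addition, assuming Docker’s default /run/secrets mount point (secret names become variable names, so they need to be valid shell identifiers):

# export each mounted Docker secret as an environment variable
for f in /run/secrets/*; do
  [ -f "$f" ] && export "$(basename "$f")=$(cat "$f")"
done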

Now that I think about it, maybe something that would help is if lucee-web.xml and scheduler.xml were env-var-aware. (They’re not, presently, right??)

It’s definitely different, and it’s weird that it uses an engine (Lucee 4) to run the box commands (including how it downloads other engines), but it’s a very convenient base image to build on: it makes it a lot easier to configure the build to bake in the engine (with a warm-up during the build) and other dependencies.

I grant that I was surprised back when I discovered that the commandbox image uses WildFly vs. Tomcat.

That is exactly the idea behind LDEV-1746 - it would make them “env-var-aware”, or at least Java system property aware, and system properties can be specified via env vars:

LDEV-1746 (@modius how come my links never pull a preview?)
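
Paired with LDEV-1746, the flow would be: pass a system property through the JVM options, then reference it in the config file. Assuming the image forwards LUCEE_JAVA_OPTS to the JVM as the official images do (the property name here is illustrative):

docker run -e 'LUCEE_JAVA_OPTS=-Ddb.password=s3cret' my-lucee-image
# ...and in lucee-web.xml: password="${db.password}"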

IMO there are two features that will make Lucee work better with Docker:

Config

We should allow external config files to be specified, by separating the config files from the work directory.

The Lucee work directory cannot be shared between multiple instances, or even web contexts, because writing to log/tmp/etc. from multiple contexts at the same time will clash and fail.

Allowing a custom path to the config files would enable reusing the config files outside of the containers (in deployments where the image does not contain the Application code), while keeping the work directories inside the containers.
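
Under that proposal, a multi-instance deployment could share read-only config from the host while each container keeps its own work directory. A sketch, with a hypothetical config-path switch (nothing like it exists in Lucee yet):

# each replica mounts the same config read-only; log/tmp/work stay internal
docker run -v /srv/lucee/config:/lucee/config:ro \
  -e 'LUCEE_JAVA_OPTS=-Dlucee.config.dir=/lucee/config' \
  my-lucee-image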

Logging

We should make it easier to log outside of the container. Fluentd seems like an interesting option.
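
Docker can already ship a container’s STDOUT to Fluentd via its logging driver, so the missing piece is mostly getting Lucee’s logs onto STDOUT in the first place:

# route this container's stdout/stderr to a Fluentd collector
docker run --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  my-lucee-image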


Would it help if we enforced a single context installation for container deployments?

It would help to have a generic log handler that ships to STDOUT with some config options.

Atlassian JIRA’s page metadata sucks an orange through a hose pipe :orange_heart:

Actually I am referring to a different scenario here. There are two primary methods for container deployment:

  1. Keep all of the Application data, config, etc. inside the image and build a new image each time you want to update

  2. Mount the Application data, config, etc. from the host machine

Both methodologies have their pros and cons, and you may choose either one according to the process your organization prefers. For example, large organizations with more resources might choose method 1, while small organizations, or teams in a development environment, might prefer method 2.

The issue that I’m referring to relates to method 2 above. Let’s say that you have the Application code and configuration on the host’s file system, and you do not want to make it part of the image, so that you can make changes easily without requiring a rebuild of the image, deployment, etc.

If you run one container then there’s no problem, but if you want to run multiple containers, for high availability for example, then you cannot mount the config directory anymore, because the work directory inside it cannot be shared between multiple instances.

By separating the “config” from the “work” directories, we can mount the config directories from the host but keep the ephemeral (e.g. work) directories inside the container.

While this is needed for development, I’d argue it is not a best practice, or even common practice, for deployments. A container in a production scenario should not rely on a shared file resource on the host for configuration, with the possible exception of “secrets” management.

Why wouldn’t you bring in the config via the build process and environment variables?

Development is a primary reason to use method 2 above, but I’d also argue that for a small team it’s much faster to use it even beyond development. Maintaining a single copy of the Application and the config is easier and faster, and not every tiny tweak should warrant an image build and the whole deployment process.

Sure, in a perfect world everyone would follow all of the “best practices”, but not every organization can, or is willing to, pay for the overhead that doing so incurs.

I still think that it’s a good idea to separate the config from the work directories, even when not using containers. It will make backing up your configuration much easier.

In theory… but in practice you have to build a container image – it’s a natural part of that development cycle. Deploying a change necessitates a new build and deploy. The beauty of Docker, even for small teams, is that this pipeline can be automated using standards-based and inexpensive tooling.

Horses for courses… :horse_racing: :horse_racing: :horse_racing:

But I do believe we should focus efforts on standardising Lucee configuration to support a best-practice deployment in containers; “what is that?” is probably the first question we should be discussing.

I’d suggest that might involve:

  • configuration via environment variables (mostly in place)
  • easier extension deployment via container build pipelines (see the sketch after this list)
  • zero downloads, “phone homes”, jar version checks, and similar processes on start-up
  • security;
    – disable all non-essential services, eg. disable the admin
    – generate admin passwords on startup (or another option for closing this vector)
    – precompile the app code base as part of the build pipeline and do not allow changes
  • log shipping options; eg. pushing to STDOUT, syslog, etc
  • centralised stores for persisting state between re-deployments;
    – sessions (done, could always be better)
    – scheduled tasks/jobs/mail (non-existent)
    – event gateway queues (non-existent)
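
On the extension-deployment point above, the build-time mechanism that exists today is dropping .lex files into the server deploy folder, e.g. (the path reflects the official image layout as I understand it; adjust for yours):

COPY extensions/*.lex /opt/lucee/server/lucee-server/deploy/

The catch is that Lucee only picks these up once the engine starts, which is exactly the warm-up problem discussed below.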

Yeah, this is the biggest annoyance in build processes. As part of some of my clients’ setups we use a warm-up image that has to start so that all these things are pulled in, but that is less than ideal.
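
For reference, that warm-up amounts to starting the engine once during the build so the downloads land in an image layer instead of happening at runtime. A rough sketch for a Tomcat-based image (the fixed sleep is a crude stand-in for a proper readiness check):

# start Tomcat so Lucee deploys extensions/jars, then shut it down;
# everything downloaded is now baked into this image layer
RUN catalina.sh start && \
    sleep 60 && \
    catalina.sh stop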