Docker build issue with LUCEE_ENABLE_WARMUP=true

OS: lucee/lucee:6.2-nginx
Java Version: 21
Tomcat Version: 11
Lucee Version: 6.2.2.91

Hi Everyone,

First post here, so I'd like to say hi and introduce myself. I'm an engineer based in Brisbane, AU, and lately I've been tasked with setting up a Lucee Docker environment that will ultimately run on an Azure Web App for Containers. Our devs have used ColdFusion on Windows for years and we are aiming to move away from that onto Lucee. I haven't used Lucee before and I'm getting my head around it slowly, but I have a question I'd like help with.

I'm experiencing an issue with my Docker build environment where Tomcat stops prematurely when using the LUCEE_ENABLE_WARMUP=true variable, preventing Lucee from completing the processing of my .CFConfig.json config file. This file is copied to the deploy folder during the build process (and removed after Tomcat runs so it doesn't get processed again at runtime). I've found a temporary workaround: adding a sleep between starting and stopping Tomcat and removing the ARG LUCEE_ENABLE_WARMUP from my Dockerfile, which allows Tomcat to run long enough for Lucee to finish processing. However, this solution is not ideal. I'm looking for a more robust fix to ensure that the warm-up process keeps Tomcat running long enough for Lucee to complete its processing. Can anyone provide a solution or guidance on how to resolve this issue?

Thanks

James


James: first, welcome. As for your challenges, perhaps someone will recognize something immediately to recommend.

Until then, it may help if you could clarify a couple of things, as there are many ways to run Lucee (and the other CFML engines) via containers. And that may influence both what you're experiencing and what might be recommended as a solution. So:

1 - Can you clarify whether you're building your own image from scratch or using one of the available images (such as those on Docker Hub)? Even there you have multiple options: several forms from the Lucee team, or CommandBox images from Ortus (which support Lucee, ACF, or BoxLang; and to be clear, Adobe offers ACF images as well).

If you’re not creating your own image, it may help to hear which you’re using.

2 - Are you doing a multi-stage build, or not? As you may know, when using that LUCEE_ENABLE_WARMUP env var, the container exits after loading up what's in the /deploy folder. It may help to hear where/how in your Dockerfile/compose file/k8s manifest you may be relying on that env var.

3 - Perhaps most important: are you in fact loading that /deploy folder up with stuff (.lex and/or .lco files), which would slow the warmup (the goal being to do it only once, then reuse the resulting image)? Or would you say you're putting nothing in there?

And this goes back to the first question: some folks build images from scratch (and have to “specify everything” they need), while others are satisfied with the pre-built images (and may not put anything into the deploy folder).
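For context, that warmup env var is typically used in a build-time RUN step, along these lines (a sketch only; the image tag and paths here assume the official lucee/lucee images, so adjust for your own setup):

```dockerfile
# Sketch of the usual warmup pattern with the official Lucee images
FROM lucee/lucee:6.2-nginx

# Anything placed in /deploy (.lex, .lco, .CFConfig.json) should be
# processed during the one-off warmup run below
COPY .CFConfig.json /opt/lucee/server/lucee-server/deploy/

# With LUCEE_ENABLE_WARMUP=true, Tomcat starts, Lucee warms up,
# and the process exits so the warmed state is baked into the layer
RUN LUCEE_ENABLE_WARMUP=true /usr/local/tomcat/bin/catalina.sh run
```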

Indeed, I’ll note that if you’re using the Lucee images, the Lucee docs clarify that:
Lucee Docker images are already pre-warmed and Lucee 6.2 includes several improvements which make deployment faster.

I’m writing from a phone so not testing/checking things myself. But I hope the thoughts may help you (and in the future, others who may find the thread). I welcome corrections if I’ve misstated anything.

Yeah as Charlie says, our docker images are prewarmed

Theory goes that contents of the deploy folder should be processed before any requests are served, warmup simply exits after the first request hits the web context.

I’ll have a play with this and see what’s going on under the hood.

Greets from an Aussie from Melbourne in Berlin

Also sent from my phone, but HT to Charlie writing such a long reply on his phone.

Thanks for the detailed response from a phone @carehart! I realise the simplest thing would have been to share the config. So here is my Dockerfile which should clarify much of what you’ve queried. At the moment I am running locally on Docker Desktop with the lucee/lucee:6.2-nginx image.

FROM lucee/lucee:6.2-nginx

# Avoid interactive prompts during package installation
ENV DEBIAN_FRONTEND=noninteractive

# Install OpenSSH server and generate host keys
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && echo "root:Docker!" | chpasswd \
    && ssh-keygen -A \
    && mkdir /run/sshd \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Copy our custom SSH configuration
COPY config/sshd/sshd_config /etc/ssh/sshd_config

# Copy our application code
COPY www /var/www

# Create a custom startup script
COPY init.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/init.sh

# Add .CFConfig.json so it deploys at build
COPY config/lucee/.CFConfig.json /opt/lucee/server/lucee-server/deploy/

# Cache-buster to invalidate this layer and force the warmup step to
# re-run. This is set as runtime variable in the GitHub workflow or
# Docker build --build-arg
ARG BUILD_CACHE_BUSTER

# Start Tomcat to explode lucee and install bundles/extensions during
# the build stage.
# ISSUE WITH LUCEE_ENABLE_WARMUP=true
# https://dev.lucee.org/t/docker-build-issue-with-lucee-enable-warmup-true/15375
# RUN LUCEE_ENABLE_WARMUP=true /usr/local/tomcat/bin/catalina.sh run
RUN timeout 60 /usr/local/tomcat/bin/catalina.sh run || true

# Remove .CFConfig.json so it doesn't deploy at runtime
RUN rm -f /opt/lucee/server/lucee-server/deploy/.CFConfig.json

# Expose ports: 2222 for SSH and 8888 for Lucee (redundant but clear)
EXPOSE 2222 8888

# Use the custom startup script as the entrypoint
ENTRYPOINT ["init.sh"]

The critical part being the RUN timeout 60 /usr/local/tomcat/bin/catalina.sh run || true, which starts Tomcat for a short period to allow Lucee to start up and process the .CFConfig.json file. Using RUN LUCEE_ENABLE_WARMUP=true /usr/local/tomcat/bin/catalina.sh run starts Tomcat but stops it before Lucee is done.

This is what I'm using to build, for reference:


docker build --build-arg BUILD_CACHE_BUSTER=$(date +%s) --tag lucinda-crm:nginx .

And to run:


docker run -d --name lucinda-nginx -p 8080:8888 -p 2222:2222 --env LUCEE_LOGGING_FORCE_APPENDER=console --env LUCEE_ADMIN_PASSWORD=pass lucinda-crm:nginx

Note: Ignore the lack of an nginx port; that's on the roadmap, as I was using the Tomcat-only container to start with. This allows me to get to the Admin to check if my settings applied (or by looking at the extensions installed on the CLI).

init.sh

#!/bin/bash
# Start the SSH server in the background
/usr/sbin/sshd -D -e -f /etc/ssh/sshd_config &

# Start Tomcat (Lucee) in the foreground, keeping the container alive
exec catalina.sh run
#exec supervisord -c /etc/supervisor/supervisord.conf

sshd_config

ListenAddress 0.0.0.0
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Port 2222

Have you tried dropping your .CFConfig.json into the lucee-server/context folder directly?
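In the Dockerfile above, that would be a one-line change (a sketch; the source path follows your existing layout, and /context is the server context folder in the official images):

```dockerfile
# Place the config in the server context instead of the /deploy folder,
# so no warmup pass is needed to merge it at build time
COPY config/lucee/.CFConfig.json /opt/lucee/server/lucee-server/context/
```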

Not yet, I’ll give that a try later today and see how it goes and post back with results.


Are you installing extensions via the config? That's the only thing which takes any extra time.

Otherwise, as long as Lucee updates the context CFConfig, the next time you start Docker it will be no different.

The 6.2.3 SNAPSHOTs do have some improvements to the config merging via deploy (it needed some extra handling for arrays):

https://luceeserver.atlassian.net/browse/LDEV-5758

There also is onBuild, which gives you a lot more control


Bug filed with the repo:

https://luceeserver.atlassian.net/browse/LDEV-5808

Thanks for the bug report!

I've had a chance now to test other options, and for the benefit of visitors to this thread:

As an example, I have the Ajax Extension added to my .CFConfig.json amongst some of the other extensions already included in the Docker image. The Ajax Extension isn't included in the Lucee Docker images.

Using lucee/lucee:6.2-nginx
COPY .CFConfig.json to /opt/lucee/server/lucee-server/context/
Result: The process runs correctly with LUCEE_ENABLE_WARMUP=true, but the Ajax Extension gets added and then removed in the same run, which I presume relates to https://luceeserver.atlassian.net/browse/LDEV-5758

Using lucee/lucee:6.2.3.30-SNAPSHOT-nginx-tomcat11.0-jdk21-temurin-noble
COPY .CFConfig.json to /opt/lucee/server/lucee-server/context/
Result: Works as expected with LUCEE_ENABLE_WARMUP=true and the Ajax Extension is installed, although all the extensions available in the Docker image are merged in rather than just the ones I specify in .CFConfig.json. I presume that's the expected behaviour vs. dropping the file in /deploy, which removes any extensions not defined in .CFConfig.json.

Using lucee/lucee:6.2.3.30-SNAPSHOT-nginx-tomcat11.0-jdk21-temurin-noble
COPY .CFConfig.json to /opt/lucee/server/lucee-server/deploy/
Result: With LUCEE_ENABLE_WARMUP=true this fails, per the issue seen with lucee/lucee:6.2-nginx

Using lucee/lucee:6.2-nginx
RUN LUCEE_EXTENSIONS="6E2CB28F-98FB-4B51-B6BE6C64ADF35473;name=AjaxExtension;label=Ajax Extension;version=1.0.0.5" /usr/local/tomcat/bin/catalina.sh run
Result: Works with LUCEE_ENABLE_WARMUP=true. I also tried adding updated versions of the extensions already included in the Docker image (for example the Lucee admin and Lucee documentation extensions) and this worked well. I added those extra extension updates as a test, in case Lucee was simply processing the Ajax Extension fast enough before Tomcat stops. This approach also merges with the Docker-included extensions, and obviously doesn't include the other desired configuration found in .CFConfig.json.

The outcome is that I can either keep using a sleep with lucee/lucee:6.2-nginx, or COPY into the /context folder using the latest snapshot image, or, if I want to be specific about extensions, perhaps use the light version of the image.

Are the SNAPSHOTS considered stable for production use?

Quick note that I'm using this workaround to make the build more efficient:

RUN /usr/local/tomcat/bin/catalina.sh run & \
    until grep -q "Server startup in .* milliseconds" /usr/local/tomcat/logs/catalina.$(date +%Y-%m-%d).log 2>/dev/null; do \
        sleep 1; \
    done && \
    echo "\nWarmup processing finished. Stopping Tomcat...\n" && \
    /usr/local/tomcat/bin/catalina.sh stop
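The inline polling loop above can be factored into a small helper that also adds a timeout, so a failed startup aborts the build instead of hanging it (a sketch; nothing here is specific to Tomcat, and the function name is my own):

```shell
#!/bin/sh
# Generic "poll a log file until a pattern appears" helper, the same
# idea as the inline until-loop in the RUN step above.
wait_for_line() {
  file=$1
  pattern=$2
  tries=${3:-60}   # give up after this many 1-second polls
  i=0
  while [ "$i" -lt "$tries" ]; do
    # grep -q exits 0 as soon as the pattern matches
    if grep -q "$pattern" "$file" 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1   # pattern never appeared within the timeout
}
```

Usage would then look like wait_for_line "/usr/local/tomcat/logs/catalina.$(date +%Y-%m-%d).log" "Server startup in .* milliseconds" 120 followed by the catalina.sh stop, and a non-zero return fails the RUN step.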

Removing extensions isn't currently supported via this approach; start with the light image and then define all the extensions you want.
