5.3.9 stops logging while in a docker container

OS: Ubuntu 18.04
Docker: 19.03.6

I’ve tried a wide variety of configurations. No matter what I do, logging stops after some period of time (usually around 25 minutes). Lucee continues to run in the container (I can see it doing the work I want it to do), but no logs are generated.

I have not spun up 5.3.9 outside of Docker, so I don’t know if this is a 5.3.9 issue or something in the dockerization of it.
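For what it’s worth, this is roughly how I’m confirming the stoppage is visible at the Docker level, and not just in Lucee’s log files (the container name `lucee-task` is a placeholder; substitute your own):

```shell
# Note the current UTC time, then look at the last line Docker captured,
# to pin down exactly when stdout went quiet.
date -u +"%Y-%m-%dT%H:%M:%SZ"

# Guarded so it degrades gracefully on machines without Docker.
if command -v docker >/dev/null 2>&1; then
  docker logs --timestamps --tail 1 lucee-task
else
  echo "docker not available; run this on the Docker host"
fi
```

If `docker logs` goes quiet at the same moment the log files do, the problem is upstream of Docker’s stdout capture.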

Anyone else experiencing this unknown stoppage?


Somewhat related to

@Brian_Harcourt Are you hitting this 5.3.9 regression?


Note: that ticket is fixed in the snapshot build.


Hmm - @bdw429s - thanks for the idea but it doesn’t look like that’s it. (Unless I built the container incorrectly).

I’ve tried A/B testing so many configurations that I’m sure I’m just missing something. At this point I’m just trying to follow any documentation I can find. Namely:

<system err="default" out="default" />

in lucee-server.xml, and the logger of concern in lucee-web.xml.cfm, using the config injected by the Admin page; specifically:

  <logger appender-arguments="streamtype:output" appender-class="lucee.commons.io.log.log4j2.appender.ConsoleAppender" 
    layout-arguments="pattern:&quot;%p&quot;,&quot;%d{yyyy-MM-dd HH:mm:ssXXX}&quot;,&quot;%t&quot;,&quot;%c{1}&quot;,&quot;%m&quot;%n" 
    layout-class="org.apache.logging.log4j.core.layout.PatternLayout" level="info" name="taskqueue"/>

I cribbed the pattern from a previous working config; maybe Log4j2 doesn’t like patterns rendered this way?

In short, any ideas you could point me to would be appreciated.


@Terry_Whitney - interesting note! I discounted this as not the issue, since tomcat9 isn’t a systemd service in my container; it’s just ‘catalina.sh run’ in the foreground. But this looks like a place to start understanding STDOUT in containers - which seems to be exactly what I’m missing.
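One way to poke at the STDOUT question: check where a process’s stdout file descriptor actually points. A small sketch (inspecting the current shell, `$$`, as a stand-in so it runs anywhere; inside the container the JVM is typically PID 1 under `catalina.sh run`):

```shell
# On Linux, /proc/<pid>/fd/1 is a symlink to wherever stdout goes.
# In a Docker container this is normally a pipe read by the logging
# driver; if it ever points at /dev/null or a deleted file, output
# silently vanishes even though the process keeps running.
ls -l /proc/$$/fd/1
```

Inside a running container that would be something like `docker exec <container> ls -l /proc/1/fd/1` (container name is whatever yours is).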

Even if it’s not the cause, running a secure, performance-enhanced, bug-free version of Tomcat will only help you troubleshoot the issue further.

Just spit-balling in public... here’s my current theory:
This is a task engine. Once per minute, a scheduled task fires a request to a script that says “run tasks”.

The request sets down an application-scoped marker, then finds a task and processes it to completion, then finds another task and processes it to completion, and so on, until it runs out of tasks or sees that another request has started. (If, on completing a task, it finds the application marker has changed, then a new request has begun and this request ends.) The request timeout is 10 minutes, but I don’t believe any requests actually time out; they simply see the hand-off and stop.
Thus, a second thread may be logging at the same time as a previous thread.
This happens all the time, so I can’t imagine it would be much of an issue.

But the task engines with higher loads (where requests are more likely to overlap) are the engines that stop logging. So I’m wondering about the new Log4j2 infrastructure, and whether it might be tripped up by a thread being shut down while it is still logging.

The last log record that comes out is always:
org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["http-nio-8888"]
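That `Destroying ProtocolHandler` line is what Tomcat emits when a connector is shut down, so one thing worth checking is whether Tomcat is actually stopping (or restarting), rather than just Lucee going quiet. A sketch, again with a made-up container name `lucee-task`:

```shell
# Pull every connector lifecycle message Docker captured; repeated
# "Starting"/"Destroying" pairs would suggest real restarts rather
# than a one-off shutdown. Guarded so it runs where Docker is absent.
if command -v docker >/dev/null 2>&1; then
  docker logs lucee-task 2>&1 \
    | grep -E 'ProtocolHandler \["http-nio-8888"\]' || true
else
  echo "docker not available; run this on the Docker host"
fi
```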

If this spurs thoughts in anyone as to how to assess this further, they’d be welcome.