AWS Container logging for Fargate

Docker image lucee/lucee:5.3-nginx running on AWS Fargate

I need the logs to go to the console so they are centrally collected in CloudWatch. How do you stop the log rotation and send the logs to console out?

12 factor apps.

– Bill

If anybody has used the 12 Factor App principles to build an app, I would love to see how they solved the centralized logging issue.

Something we’ve been lobbying for is the ability to push Lucee logs to syslog or equivalent. At the moment you need an agent/sidecar/something to process the text logs into a log drain and pipe to somewhere. For now streaming logs from multiple lucee containers into cloudwatch is non-trivial.
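That said, on Fargate anything the container writes to stdout/stderr is captured by the task's log driver, so once the logs reach the console you can route them to CloudWatch with the awslogs driver in the task definition. A minimal sketch (the container name, log group, and region here are placeholders, not from this thread):

```json
{
  "containerDefinitions": [
    {
      "name": "lucee-app",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/lucee-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "lucee"
        }
      }
    }
  ]
}
```

With that in place, the remaining problem is exactly the one described above: getting Lucee's file-based logs onto stdout in the first place.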


Dumb question (since I haven’t looked at it), but isn’t that the point of Log4j appenders? That you can set one for Lucee, and that it would be a Tomcat appender (or something like that)?

OK, I have added a symlink to the internal logger… I am testing now…

This was added to my Dockerfile:

RUN cd /opt/lucee/server/lucee-server/context/logs && ( for i in `ls *.log`; do ln -sf /proc/1/fd/1 $i ; done ) && cd /opt/lucee/web/logs && ( for i in `echo application.log exception.log gateway.log mail.log remoteclient.log requesttimeout.log scheduler.log scope.log trace.log`; do ln -sf /proc/1/fd/1 $i ; done )

FROM lucee/lucee:5.3-nginx
# FROM lucee/lucee:

# NGINX configs
COPY app/config/nginx/ /etc/nginx/
# Patch Server
# COPY patches/ /opt/lucee/server/lucee-server/patches/
# Lucee configs
# COPY config/lucee/ /opt/lucee/web/
# Code
# remove hg it has a security vulnerability
RUN apt-get update -y
RUN apt-get remove --auto-remove mercurial -y
RUN apt-get purge mercurial -y
RUN apt-get install awscli -y
RUN sed -i 's/<cfLuceeConfiguration.*>/<cfLuceeConfiguration hspw="1e4ff17304e0a1c6baa8f5d8057ffedd61f6f7aa6500e5fae27c6cfed833b518" salt="4FDA588E-318A-445C-898736AA1F229A69" version="5.2">/g' /opt/lucee/server/lucee-server/context/lucee-server.xml
RUN sed -i 's/<cfLuceeConfiguration.*>/<cfLuceeConfiguration hspw="1e4ff17304e0a1c6baa8f5d8057ffedd61f6f7aa6500e5fae27c6cfed833b518" salt="4FDA588E-318A-445C-898736AA1F229A69" version="5.2">/g' /opt/lucee/web/lucee-web.xml.cfm
RUN ls -la
ADD app/www /var/www
RUN ls -la /var/www

RUN cd /opt/lucee/server/lucee-server/context/logs && ( for i in `ls *.log`; do ln -sf /proc/1/fd/1 $i ; done ) && cd /opt/lucee/web/logs && ( for i in `echo application.log exception.log gateway.log mail.log remoteclient.log requesttimeout.log scheduler.log scope.log trace.log`; do ln -sf /proc/1/fd/1 $i ; done )
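The symlink trick in that RUN line can be tried outside Docker too. Here is a standalone sketch; the demo directory is made up (inside the image the paths would be the Lucee log directories), and `/proc/1/fd/1` is PID 1's stdout, which is what Docker and Fargate capture:

```shell
# Demo of the symlink-to-stdout trick from the Dockerfile above.
# LOG_DIR stands in for /opt/lucee/web/logs etc. inside the image.
LOG_DIR=/tmp/lucee-log-demo
mkdir -p "$LOG_DIR"
touch "$LOG_DIR/application.log" "$LOG_DIR/exception.log"

# Replace every *.log with a symlink to the container's stdout.
# ln -sf works even though /proc/1/fd/1 only resolves inside a running container.
for f in "$LOG_DIR"/*.log; do
  ln -sf /proc/1/fd/1 "$f"
done

ls -l "$LOG_DIR"   # each .log is now a symlink -> /proc/1/fd/1
```

The catch is that Lucee's log rotation can replace a symlink with a fresh file, so this is a workaround rather than a proper fix.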

One would think that whoever is building the Docker container would adjust the logging to support containers. They also need to use OpenJDK 11 and not the Oracle Java… Better yet, use OpenJDK 11 on Alpine. The container is too large, meaning startup/autoscaling response times will be slow.

This is my docker-compose file…

version: '3'

services:
  app:
    build:
      context: .
      dockerfile: app/Dockerfile
    image: ltt-app:dev
    env_file: docker/app.env
    container_name: ltt-app-dev
    ports:
      - "80:80"
      - "8888:8888"
    volumes:
      - ./app/www:/var/www
    depends_on:
      - postgres

  admin:
    build:
      context: .
      dockerfile: admin/Dockerfile
    image: ltt-admin:dev
    env_file: docker/admin.env
    container_name: ltt-admin-dev
    ports:
      - "81:80"
      - "8889:8888"
    volumes:
      - ./admin/www:/var/www
    depends_on:
      - postgres

  web:
    build:
      context: .
      dockerfile: web/Dockerfile
    image: ltt-web:dev
    env_file: docker/web.env
    container_name: ltt-web-dev
    ports:
      - "82:80"
      - "8890:8888"
    volumes:
      - ./web/www:/var/www
    depends_on:
      - postgres

  postgres:
    image: postgres:9.6.11-alpine
    container_name: ltt-postgres-dev
    ports:
      - "5432:5432"
    restart: always
    hostname: postgres
    env_file: docker/postgres.env

  wait-postgres:
    image: waisbrot/wait
    depends_on:
      - postgres
    environment:
      - TARGETS=postgres:5432
    links:
      - postgres

  flyway:
    image: boxfuse/flyway
    container_name: ltt-flyway-dev
    command: -url=jdbc:postgresql://postgres/ltt -user=dbo_ltt -password=change_password -connectRetries=300 clean migrate info
    volumes:
      - ../infrastructure/database/sql/:/flyway/sql
      - ../infrastructure/database/seed/:/flyway/seed
    environment:
      - FLYWAY_LOCATIONS=filesystem:/flyway/sql, filesystem:/flyway/seed
    depends_on:
      - wait-postgres

What if we dropped Lucee into Alpine Tomcat…

Only 67.85 MB in size!!!

Maybe… but I’m not clever enough to work it out :slight_smile:

In containers I don’t want any text logs written – they just get destroyed when the container redeploys. It would be super sweet if Lucee would just stream all Lucee logs to syslog, and had a defined template with examples of how to restructure the logs if needed.

There’s an option to set a datasource as a destination; seems like a small jump to having other useful choices.

There’s an awful lot of choice in the build matrix for the official Lucee docker images.

We’ve moved to general support for OpenJDK; the last Oracle Java build was a month ago.

With respect to Tomcat Alpine… that was part of our build matrix until Tomcat support was dropped for Alpine on account of OpenJDK compatibility issues:

Note: The official Tomcat images have removed support for Alpine and so the Lucee -alpine variant can no longer be supported. If the Tomcat base images add support for Alpine in the future then we will look to support the -alpine variant again.

Tomcat project said:

Alpine/musl is not officially supported by the OpenJDK project, so this reflects that – see “Project Portola” for the Alpine porting efforts which I understand are still in need of help

If that changes… let us know and we’ll reinstate Alpine as an option.

Alpine is out, since it is not supported by Tomcat. I’ll stick with the lucee/lucee:5.3-nginx version for now. Working through the DSN issues and mail service issues. I think I have a fix for the logging issues. Now I need to fix the security issues. It looks like removing or updating Python 2.7/3.7 might fix the two highs. The journey continues… The goal is to use Lucee to build a 12 factor app using Fargate on AWS.

Thank you for the help!!!

– Bill

How do you set the log settings, via cfadmin or otherwise? I think setting the output to console will send the logs to CloudWatch. Initial testing was positive.
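For the XML route (rather than the admin UI), Lucee's log settings live in the `<logging>` block of lucee-web.xml.cfm / lucee-server.xml, and Lucee 5 offers a console appender alongside the file-based "resource" one. An untested sketch, assuming the default logger names; check your own config for the exact attributes:

```xml
<!-- Hypothetical sketch: switch logs from file ("resource") to the
     console appender so Docker/Fargate pick them up on stdout -->
<logging>
    <logger appender="console" layout="classic" level="info" name="application"/>
    <logger appender="console" layout="classic" level="error" name="exception"/>
</logging>
```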

Don’t know if it is supported, but I would check how it is done in the source of the admin. I think it happens in:


I would try it the way it is done there.