File write permissions on lucee docker image

Hi guys,

I love the Lucee Docker image as well. For now, my setup is very basic.
My Dockerfile contains only the following two lines:

FROM lucee/lucee4:latest
COPY www /var/www

However, it seems I need to tweak the Lucee configuration to get file-creation permissions right, because right now, on my Ubuntu 16.04 LTS host, any file-creating function fails, including CFFILE action="write" and action="append". There is no error; it simply fails to write to or append to the target file.



The Lucee Docker images should be able to create files just fine, so it sounds like a file system permission error.

Are you using a volume for the directory that you are writing the files to? Perhaps the root of the volume needed some initial owner/permissions set, or perhaps you uploaded some directories that have different owner/permissions set?

Here are my file permissions under lucee/www.
Anything wrong?

userme@ubuntu-s-2vcpu-4gb-nyc1-01:~/lucee$ pwd
userme@ubuntu-s-2vcpu-4gb-nyc1-01:~/lucee$ ls -l www
total 388
-rw-rw-r-- 1 userme userme    885 Jan 26 23:18 application.cfc
-rw-rw-rw- 1 userme userme    549 Jan 28 18:45 textfile.json    (this is the target file to be appended to)

There could be a couple of different ways to approach it depending on what you need to achieve.

  1. If your code and assets are inside your Dockerfile, and you are building an image and running your application from a container image, then you might be able to do away with the volumes. e.g. typically all code and static assets are placed into the image at build time, and content managed assets are stored on a CDN. This can help avoid the need to deal with “data volumes” that are used to store critical files.

  2. If you changed the owner of your webroot back to root:root, it may resolve your issue in the short term, depending on how you need to interact with those files and with which user accounts.

  3. The underlying issue here is that Tomcat images, like many base images, run as root, and the container has no concept of your host's UID or GID for "userme". You can get the values for that user with id userme, and set the UID:GID when you do docker run using the --user flag, for example docker run ... --user=1001:1001 .... Alternatively, you can set the user property in your docker-compose.yml if you are using that.

A combination of the first and third approaches might be best: run as a known non-root user, and reduce dependence on volumes that mount directories from the host (particularly if you are scaling across multiple host nodes). At the moment we are using the Tomcat base image as-is and leaving those decisions up to the end user.
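Combining options 1 and 3 might look something like the sketch below. The 1001:1001 values and the image name are examples; substitute the output of id for your own user and your own image tag:

```shell
# Look up the UID/GID of the host user that owns the files
# ("userme" is the user name from this thread)
uid=$(id -u userme)
gid=$(id -g userme)

# Run the container as that non-root user
# (image name "myluceeimage" is a placeholder for your built image)
docker run -d --rm -p 8888:8888 --user="${uid}:${gid}" myluceeimage
```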


Very helpful, much appreciated.

Great to know about the --user flag for docker run.
Now, cat /etc/passwd has the following entry;
does that mean
docker run --user=0:0 would run it under the root account?
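For reference, UID 0 and GID 0 do map to root on Linux; a quick way to check:

```shell
# root's UID:GID from the passwd database; prints 0:0 on a typical system
grep '^root:' /etc/passwd | cut -d: -f3,4

# same answer via id
id -u root   # prints 0
```

Inside the container, docker run --rm --user=0:0 lucee/lucee4 whoami should likewise print root.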

Yeah, I'm going with your options 1 and 3.

Hmm, not sure why, but the following code still fails without an error:

<cfset sepid = "328">
<cfset hashvalue = "U2D7W83IWNSWIE6723">

<cffile action="append" file="#expandPath('./textfile.json')#" output="#sepid#:#hashvalue#" mode="777">

Is Lucee writing logs out under the WEB-INF dir under www?

Well, since it's a lucee4 Docker image:
I created a directory called lucee under my home directory, then created a www subdirectory under lucee,
so it now looks like this:
I then created a simple Dockerfile under lucee, and the Dockerfile reads:

FROM lucee/lucee4:latest
COPY www /var/www

and its webroot looks like this:

I then create CFML files, place them under the www subdirectory,
and then build and run the Lucee Docker container. Nothing else, no WEB-INF etc.

I wonder if there’s something else that I could try with the current super basic Dockerfile…


All of your diagnostics that you’ve posted are from the PoV of your host, not your container. You need to diagnose the status within the built container, not your host.

cd ~/lucee
docker build -t testimage .
docker run -d --name testimage testimage

Now lucee is running with your built image.

docker exec -it testimage bash
ls -ld /
ls -ld /var
ls -ld /var/www
ls -l /var/www

Verify all that works

Determine the processes running.

apt-get update
apt-get install procps
ps awwux

Following these steps, it looks to me like most things are good based on what you've provided; you SHOULD be able to write to that file. So here's the next question: did you try to append or write to /var/www/textfile.json?

I don't mean expandPath, I mean the ACTUAL ABSOLUTE path.

Or did you try to cfoutput the expandPath results and verify you're actually getting the file path you think you are?

When I copy your example code into application.cfc and run it, it works. And by it works, I mean:

docker exec -it testimage cat /var/www/textfile.json

Returns the proper result.

If you’re expecting ~userme/lucee/www to change on the host, you don’t want to create a build, you want a volume mount.


docker stop testimage
docker rm testimage

Interactive dev environment

docker run -d --rm -p 8888:8888 --name testimage -v /home/userme/lucee/www:/var/www lucee/lucee4 
docker logs testimage

Or run without the name, swap -d for -it, and just ctrl-c when done. No build necessary.

So based on what you're reporting, my guess is it IS writing to the file, in /var/www, in the container, which is a COPY of your files. But that's not where you're looking. :)
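One way to see the copy-vs-mount distinction concretely, as a sketch (the container name testimage and the ~/lucee/www path are from the steps above):

```shell
# The image got a COPY of www at build time, so the two files can diverge:
# the container's copy has the appended line...
docker exec testimage cat /var/www/textfile.json

# ...while the host copy is unchanged
cat ~/lucee/www/textfile.json
```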



You’re a genius!

Indeed, the data were successfully appended to /var/www/textfile.json.

Regarding “docker run -d --rm -p 8888:8888 --name testimage -v /home/userme/lucee/www:/var/www lucee/lucee4”,
I have a few questions.
I usually issue the following command to run my lucee container while testing it:
docker run --rm -p 8888:8888 myluceecontainern
Then if I find something is not right or deficient, I Ctrl+C to kill it.

Question: for your “-v /home/userme/lucee/www:/var/www lucee/lucee4” part,
I understand -v maps /home/userme/lucee/www to /var/www.
Then, what's the lucee/lucee4 at the end for?

When I feel I'm ready, I'll run it in the background and keep it alive even after the ssh session ends:
(docker run --rm -p 8888:8888 myluceecontainern &) (say, this is option 2)

Question: it seems my option 2 is similar to your -d option.
Is there any difference?
If so, what?


It seems the -v volume mount only works for me with -d (detached mode), but it's working now. I'm very grateful for your help, Joe.

The volume mount should definitely work whether there’s the -d option or not. I do it all the time.

docker run --rm -p 8888:8888 myluceecontainern

Runs your myluceecontainern image.

However, since you’re just mounting your source directly into the lucee image, we don’t need your built image at all, we can call the lucee/lucee4 image directly:

docker run --rm -p 8888:8888 -v /home/userme/lucee/www:/var/www lucee/lucee4

Accomplishes the same thing but runs lucee/lucee4 as the image (:latest is implicit) rather than the one you built. That’s the only difference.

And yes, if you DON'T use -d, I'd normally use -it (interactive, terminal) so that I can see everything interactively, and Ctrl-C to make it go away.

If you DO use -d, then it’ll go in the background immediately and not show any output - but you can GET the console output using docker logs on the container.

i.e. docker ps
Get the container id
docker logs -f containerid

At that point you're watching the output from the container just like when not using -d, but when you Ctrl-C you abort the LOGS process, not the LUCEE process…

With -d you’d shut down the container with that container id, docker stop containerid, etc.

Your option 2 IS similar. Sometimes when you fork a process (not Docker) into the background with &, it's still attached to your tty, so when you close ssh it'll kill the process. Since you're not running with -it, it'll probably just work that way too, or you could introduce setsid. But the -d option is literally there to make the container go into the background, so you might as well use it :) Especially because, architecturally, the Docker daemon is running runc and creating the container in another process anyway, so it's already in the background; Docker is doing MORE work to show you the output from that background process. Might as well tell it we don't care and just use -d.
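The two backgrounding styles side by side, as a sketch (image and container names are the ones used earlier in this thread):

```shell
# Shell-level backgrounding: the docker client process is forked with &,
# and may still die with your tty/ssh session
docker run --rm -p 8888:8888 myluceecontainern &

# Docker-level backgrounding: the daemon owns the container,
# which survives your logout
docker run -d --rm -p 8888:8888 --name testimage myluceecontainern
docker logs -f testimage   # Ctrl-C stops the log follow, not the container
```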

Bonus points: I’m too lazy to type all that crap out, so I just create a docker-compose.yml file:

version: '2.2'
services:
  lucee:
    image: lucee/lucee4:latest
    init: true
    container_name: my-lucee-dev
    ports:
      - 8888:8888
    volumes:
      - /home/userme/lucee/www:/var/www
    environment:
      - TZ=US/Eastern
    hostname: lucee-dev.local

(container_name, environment/TZ, and hostname are all optional; I can't stand UTC containers)

You need docker-compose: (one-ish liner here)

And then you just hop into your lucee folder and:

docker-compose up
for it to stay on your console, or

docker-compose up -d
to go in the background.

docker-compose down
when you’re done.

Other things you might consider: creating ~/lucee/log and mapping it into your container. I'll map system logs, Tomcat logs, Lucee logs, etc. with volume mounts as well, so that 1) they persist, and 2) I can access them easily. Sure, Lucee is giving you the basic Tomcat output, but if you want to dig into the access log, system logs, or catalina logs directly, you're going to end up exec bash'ing into the container and mucking around. It can be more convenient to map those outside the container for troubleshooting, especially if later you decide you want to move up to a lucee4-nginx image and now have MORE logs to deal with.
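A sketch of what that log mapping could look like. The /usr/local/tomcat/logs path assumes the standard Tomcat base image layout; verify the actual log directory inside your container before relying on it:

```shell
# Create a host-side log directory and mount it over the container's
# Tomcat log directory so logs persist and are easy to read from the host
mkdir -p ~/lucee/log
docker run -d --rm -p 8888:8888 \
  -v /home/userme/lucee/www:/var/www \
  -v /home/userme/lucee/log:/usr/local/tomcat/logs \
  lucee/lucee4
```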

[Obligatory discourse on docker logging in general, using ELK/EFK, logstash, filebeat, syslog replication and any other method of NOT volume mapping logs deliberately omitted]


Very cool, many thanks, but I'm beat now; will look into it again later.

Looks interesting!