How to create a basic Docker image for Lucee 5.2 on Ubuntu?

Hi,

I’d like to understand the manual process of creating a Docker image for Lucee 5.2.x on Ubuntu.
My OS env: Ubuntu 18.04 with Docker 17.x or 18.x installed.
And I’m getting comfortable as a Docker user.

How should I go about it?

  1. Download Lucee 5.2.x for Ubuntu?
  2. Create a Dockerfile for this task?
    See the code under “A simple example Dockerfile:” in the
    “Create a Dockerfile that installs the application” section
    of the following link:
    Containerizing a legacy application: an overview
    Questions:
    (a) What would the <REQUIRED UBUNTU PACKAGES> be for Lucee in a Docker image?
    (b) What would the image’s (Lucee) setup.sh look like in our case? Is there a relevant sample shell script?
    (c) What would the image’s (Lucee) startup.sh look like? Is there a relevant sample shell script? For this part, maybe I can look into the Lucee config dir/files etc…
    How about web app root mapping?
    That is, once the image is successfully built, where (which path) do we add the web app (myapp.cfm)?

Then, how do we add/give the image a tag name?

I don’t need to push it for now.

Some guidance would be much appreciated.

Maybe have a quick read of how to get started: Containers are NOT VMs

Generally I would create a Dockerfile in the root of my project using the CommandBox Docker images rather than Lucee’s directly, as I would have more control. But in the interest of brevity, you can create a Dockerfile in the root of your project like so:

FROM lucee/lucee52-nginx:latest

# NGINX configs
COPY config/nginx/ /etc/nginx/
# Lucee server configs
COPY config/lucee/ /opt/lucee/web/
# Deploy codebase to container
COPY www /var/www

Then you can do:
docker build -t myapp .
which is “Docker, build with tag (-t myapp) from this directory (.)”,
and when you want to run it you can do:
docker run -P myapp
which is “Docker, run the image myapp and publish all the exposed ports from inside the container to random ports on the outside”.
You would of course still need to add your Lucee and nginx configs.
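
Since -P publishes the container’s exposed ports to random high ports on the host, you then need to look up which host port to browse to. A rough sketch, assuming the lucee52-nginx image exposes port 80 (the --name is just an example):

# build and tag the image from the current directory
docker build -t myapp .

# run detached, publishing all exposed ports to random host ports
docker run -d -P --name myapp-test myapp

# see which host port the container's port 80 was mapped to
docker port myapp-test 80
# prints something like 0.0.0.0:32768, so browse to http://localhost:32768/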

I shall do a separate post about running with commandbox.

Running Lucee from the CommandBox images (see Docker) is actually much simpler IMHO. (Read more about CommandBox here: https://commandbox.ortusbooks.com/)

Say you have index.cfm with <cfoutput>#now()#</cfoutput> as your application.

I would create a server.json with the following:

{
    "app":{
        "cfengine":"Lucee@5.2"
    },
    "web":{
        "HTTP":{
            "port":"8080"
        }
    }
}

The above says to use Lucee@5.2 as the cfengine and port 8080 as the HTTP port of our app. Next I create a Dockerfile with the following:

FROM ortussolutions/commandbox
COPY . /app

which says to use ortussolutions/commandbox as the base image and then copy my whole app into the image.
Now we build our image:

docker build -t markdrew/myapp .

And now to run it, publishing the container’s port 8080 as a port on my machine (say, for example, port 80):

docker run -p 80:8080 markdrew/myapp:latest

(In the docker ps output below I actually used port 90 instead of 80, as port 80 is being used for something else on my machine)

After some startup you can now go to http://localhost/ and see your app working!

(this is but the start, as you will ask about networks and databases… this is where we go onto something like docker-compose, but that is another story)

If you are in doubt about what containers are running, you can do:
docker ps
which will show you something like:

CONTAINER ID        IMAGE                               COMMAND                  CREATED             STATUS                   PORTS                                                                           NAMES
df18b7cab85b        markdrew/myapp:latest               "/bin/sh -c $BUILD_D…"   4 minutes ago       Up 4 minutes (healthy)   8443/tcp, 0.0.0.0:90->8080/tcp                                                  sleepy_poincare

Very helpful, Mark, appreciated. Yes, I can start by using a base image first. Also, I’d like to learn about Docker image creation and push (deployment) in general terms, not limited to CF/Lucee.

I failed at Step 2/4 (see below); nginx is installed and /etc/nginx exists.

me@myhost:~/lucee$ docker build -t myapp .
Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM lucee/lucee52-nginx:latest
# Executing 1 build trigger
 ---> Running in 61c4d977e7fa
Removing intermediate container 61c4d977e7fa
 ---> 60e307649f5d
Step 2/4 : COPY config/nginx/ /etc/nginx/
COPY failed: stat /var/lib/docker/tmp/docker-builder312317834/config/nginx: no such file or directory

Thanks.

Start with the basics, such as an app that just has a single index.cfm file in the webroot.

Your directory structure will look like this;

~/MyLuceeApp
├── Dockerfile
└── www
    └── index.cfm

Your Dockerfile will use a Lucee image as a base, and it will copy the contents of the www folder into the container’s webroot, /var/www;

FROM lucee/lucee52:latest
COPY www /var/www

Put some test code in your index.cfm, do a build, run the container (Tomcat will listen on port 8888), then give it a test :slight_smile:
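
The build/run/test steps might look something like this (the image and container names are just examples; 8888 is the Tomcat port mentioned above):

cd ~/MyLuceeApp

# build the image from the Dockerfile in this directory
docker build -t myluceeapp .

# run it, publishing Tomcat's 8888 on the host
docker run -d -p 8888:8888 --name myluceeapp-test myluceeapp

# hit the test page
curl http://localhost:8888/index.cfm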

From there, the next steps are usually to start using docker-compose to specify your container runtime configuration, and to use volumes so that you can map your code into the container to change and test it on the fly, rather than doing a build each time.
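
As a taster, a minimal docker-compose.yml for the layout above might look roughly like this (the service name is an assumption on my part; /var/www is the webroot used above):

version: "3"
services:
  myapp:
    build: .
    ports:
      - "8888:8888"
    volumes:
      # map your local code over the image's webroot so edits show up
      # without rebuilding the image
      - ./www:/var/www

Then docker-compose up builds and starts it in one go.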

Justin, you rock. Exactly, start simple.

It built and ran.

Terminal 1 is serving the container, so I opened terminal 2 (also an SSH session) and tried curl http://localhost:8888, which gave connection refused.
So I tried curl http://localhost, and it pulled the content of the nginx server.

docker ps indicates myapp1 is listening on tcp 80 and tcp 8080.

[edit]: I added
EXPOSE 8888
to the Dockerfile, and yet it does not help.

hmm, … thanks.

What’s the command that you used to start the container?

You might need to check the port that you have exposed, e.g. -p 8888:8888 (the first port is the one that you are binding to on the host, the second port is the one that the container is listening on).

In the Lucee Docker images Tomcat listens on port 8888, the same as the installer and the express versions.

And if you are using the Lucee nginx images then nginx will be listening on port 80, so your port binding might be something like -p 80:80

docker run myapp1

docker ps result:
80/tcp, 443/tcp, 8080/tcp

fyi, the content of my index.cfm under /www is
<cfoutput>#now()#</cfoutput>

thanks.

It looks like you aren’t publishing the port when you run the container, which is why you can’t browse to it.

If you’re using the Lucee nginx image, try;

docker run -p 80:80 myapp1

If you’re using the Lucee image (just Tomcat, no nginx), try;

docker run -p 8888:8888 myapp1
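
Either way, docker ps should then show the binding in the PORTS column (something like 0.0.0.0:8888->8888/tcp), and the curl test from the host should work:

docker ps

# nginx image, published on host port 80
curl http://localhost/

# plain Tomcat image, published on host port 8888
curl http://localhost:8888/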

Perfect, many thanks Justin.

And say I install MS SQL Server for Ubuntu onto this Ubuntu host, would Lucee 5.2 support it as well?
If so, how can I learn more about its usage in such a setting?

I haven’t used SQL Server on Linux yet but I don’t see any reason why it wouldn’t work, AFAIK it uses the same protocol so the existing drivers should be fine.

For local development I have my containers connect directly to MySQL Server or SQL Server running locally (not in Docker).

When running database servers in Docker containers you do need to be careful about data persistence, which means you’d need to specify volumes to bind mount the data to the host – on a single host that might be fine, but in a clustered environment it’s a little more difficult as you’d need shared file storage in case you need to move the container from one host to another.
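
For example, a SQL Server container with its data on a named volume might look roughly like this (the image tag, environment variables and data path are taken from Microsoft’s published image and may differ for your version; the password is a placeholder):

# create a named volume so the database files outlive the container
docker volume create mssqldata

# run SQL Server for Linux with its data directory on that volume
docker run -d --name mssql \
  -e 'ACCEPT_EULA=Y' \
  -e 'SA_PASSWORD=YourStrong!Passw0rd' \
  -p 1433:1433 \
  -v mssqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2017-latest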

In production scenarios, such as in cloud hosting environments, it’s probably easier to lean on the hosted DB server products such as Azure SQL Database or Amazon RDS, etc. Docker isn’t necessarily always the best solution. It can be used for DBs, but the best option may depend on many factors :slight_smile:

Excellent, thank you Justin.

So, it seems that if we have app1, app2, … app{n} under such a setting (each as a container), running each one creates its own Lucee runtime instance. If app1 fails for whatever reason, it won’t have any adverse effect on app2 and the other app{n}. Such isolation therefore provides a higher degree of app availability than the traditional Lucee service, where all the apps live under “one roof” of a single running Lucee instance. Is that understanding correct?

Also, wouldn’t many such apps running at the same time on one host drain the server (hardware)’s RAM?

This is where you would get into clusters (swarms) and/or Kubernetes:
Docker Swarm: https://docs.docker.com/engine/swarm/

Kubernetes: https://kubernetes.io/

@markdrew excellent point, thanks.

Now, back to the Dockerfile: if I bring back the COPY command for the Lucee config, it bombs out. Do you folks have any idea how to fix it? Again, I’m on Ubuntu 18.04.

For the COPY command, the first parameter is the local source path and the second is the destination inside the image. So for the nginx Dockerfile you want a config/nginx directory created on your machine, inside the directory where you run docker build from (the build context). Docker will copy those files to /etc/nginx in the image.
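
In other words, relative to the build context, the layout for the earlier nginx-based Dockerfile would need to look something like this (the file names under config/ are only placeholders):

~/lucee
├── Dockerfile
├── config
│   ├── nginx          # COPY config/nginx/ /etc/nginx/
│   │   └── nginx.conf
│   └── lucee          # COPY config/lucee/ /opt/lucee/web/
│       └── lucee-web.xml.cfm
└── www                # COPY www /var/www
    └── index.cfm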

BTW, Ortus has container videos. Here is one.

“wouldn’t many such apps running at the same time on one host drain the server (hardware)'s RAM?”

Yes, one Lucee container can need a gig of RAM. So one container per site doesn’t scale for us.

I’ve nearly got name-based virtual hosting working as a way to share related sites in a single container, however.

See the other thread for the final problem with IP addresses, though.

@kabutotx excellent, thanks.

Interesting

That’s right, using containers for your apps, in addition to the obvious packaging and deployment benefits, gives you better process isolation.

In the old days, having app1 and app2 run inside the same JVM meant that some bad code in one app could bring down the other app. For example, app1 fills the available heap space, leaving little memory for app2, and depending on your JVM settings for garbage collection and other things, the JVM might not recover. When you deploy app1 and app2 in separate containers they will have their own JVMs and are much less likely to affect each other. There are of course exceptions to this, such as apps eating up CPU resources, I/O resources, etc.

Running separate containers will have a small overhead, but at the same time RAM is cheap and the benefits of running apps in separate containers generally far outweigh the benefits of running many apps in a single container. Resource requirements depend on your particular app, so it’s going to be different for everyone, but in our clusters we typically have 10-15 containers running per host. There’s no intrinsic “drain” on resources, you just plan the resources in your cluster according to your apps.
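
If you do want to cap what each container can take from the host, Docker lets you set limits per container at run time. The numbers below are arbitrary examples, not recommendations, and the JVM heap inside the container still needs to be sized to fit within the limit:

# cap this container at 1 GB of RAM and 1.5 CPUs
docker run -d -p 8888:8888 \
  --memory 1g \
  --cpus 1.5 \
  --name app1 \
  myapp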
