Lucee in Docker - ignoring X-Forwarded-For

I’m trying to get Lucee, in Docker, to see the real client IP.

I’ve been able to log in to the Docker container, run tshark, and verify the request is sending


as well as the things like

X-Tomcat-DocRoot: /wwwroot/clients/i/indirect/dev\r\n

that mod_cfml needs.

The server.xml has the Valve

           requestAttributesEnabled="true" />
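
The truncated line above is presumably the tail of a RemoteIpValve definition; a typical one looks something like this (the header names here are assumptions — adjust to whatever your proxy actually sends):

```xml
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto"
       requestAttributesEnabled="true" />
```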

But if I dump the CGI scope, remote_addr is

And, oddly, #GetHttpRequestData().headers# says x-forwarded-for is set correctly as expected, so the Valve isn’t working? Or Lucee is twiddling something else when it builds the CGI scope?

So, it works for me, but I’m not using mod_cfml, just Tomcat and Lucee.



        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log." suffix=".txt" requestAttributesEnabled="true"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />

From the Lucee source, it looks like REMOTE_ADDR just references .getRemoteAddr in the HTTPServletRequest. You could test this:

<cfset obj = getPageContext().getRequest() />
<cfset e = obj.getAttributeNames() />
<cfloop condition="e.hasMoreElements()">
  <cfset key = e.nextElement() />
  <cfoutput>#key# => #obj.getAttribute(key)#<br/></cfoutput>
</cfloop>

That’ll show you the remote address from Tomcat, plus the request attributes that are set. If they’re not set, you know you need to look at the valve.

Or mod_cfml is doing something else tricky; I wouldn’t know.


Are you using the plain Lucee image or the combined Lucee-Nginx image?

It’s possible that it’s behaving differently when nginx (or something else) is being used to proxy the requests. Perhaps the valve in Tomcat needs to include for internalProxies and trustedProxies like Joe’s config does.
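
As a sketch, the valve accepts regex-valued `internalProxies` and `trustedProxies` attributes; something like the following would tell Tomcat to trust X-Forwarded-For values arriving from Docker’s default bridge network (the 172.17.x.x range and header name here are assumptions — match them to your actual network and proxy):

```xml
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       internalProxies="172\.17\.\d{1,3}\.\d{1,3}"
       requestAttributesEnabled="true" />
```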

Also worth noting that the standard Lucee images don’t use mod_cfml; with the default configuration they are intended for single-app containers.


It’s using “FROM lucee/lucee52-nginx”, then tweaks the Nginx configuration etc. for virtual hosting.

I didn’t know about internalProxies so will take a look at that tomorrow and report back, cheers.

I’ve tried all sorts of things, and couldn’t get anything to work.

It may be complicated by the Nginx+Lucee container itself being behind an Amazon Elastic Load Balancer.

What does work is adding

	public boolean function onRequestStart(string targetPage){
		cgi.REMOTE_ADDR = cgi['x-real-ip'];
		return true;
	}

But what didn’t work, and really should have, was

 sed -i "s/X-Forwarded-For/x-real-ip/" /usr/local/tomcat/conf/server.xml

The getPageContext() based output returns

A CFDUMP of getHttpRequestData().headers lists x-forwarded-for and x-real-ip as the expected value (i.e. the office outbound gateway). Still feel like I must be missing something about how the remote IP valve is meant to work.

in one of our nginx configs for docker we have a couple lines like this if it helps

# Show real IP address, not the proxy
real_ip_header X-Forwarded-For;
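
For context, `real_ip_header` comes from nginx’s ngx_http_realip_module, and on its own it only takes effect for sources listed via `set_real_ip_from`. A fuller fragment might look like this (the address range is an assumption — use your actual proxy/ELB range):

```nginx
# Trust X-Forwarded-For only when the request comes from a known proxy range
set_real_ip_from 10.0.0.0/8;
real_ip_header   X-Forwarded-For;
# Walk the chain right-to-left past all trusted proxies
real_ip_recursive on;
```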

OK, so I pull lucee/lucee52-nginx:latest

I run it:
docker run -p 9876:80 -it lucee/lucee52-nginx:latest

I see this:

So the base config isn’t doing x-forwarded-for, it’s doing x-real-ip.

So let’s dump the CGI scope

And it’s correct ( is my host)

Which I find interesting, considering the server.xml says to use X-Forwarded-For, which isn’t even in the headers… maybe they got stripped, because the nginx config says:

  proxy_set_header X-Forwarded-Host $host;
  proxy_set_header X-Forwarded-Server $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Real-IP $remote_addr;

One thing to note, if you have ELB in front, you’re going to have TWO proxies to strip off. I’d imagine remote_addr in nginx will be the ELB address, and proxy_add_x_forwarded_for will be something like: ClientIP, ELBIP

(i.e., from Wikipedia)

The general format of the field is:

X-Forwarded-For: client, proxy1, proxy2

where the value is a comma+space separated list of IP addresses, the left-most being the original client, and each successive proxy that passed the request adding the IP address where it received the request from. 
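
That left-most-entry rule can be sketched in CFML (the header value here is a made-up example of a client behind nginx and an ELB):

```cfml
<!--- hypothetical X-Forwarded-For value: client, then two proxies --->
<cfset xff = "203.0.113.50, 10.0.0.5, 172.17.0.2" />
<!--- the left-most list entry is the original client IP --->
<cfset clientIp = trim( listFirst( xff ) ) />
<cfoutput>#clientIp#</cfoutput>
```

Note that trusting the left-most entry blindly is only safe when every hop in the chain is a proxy you control, which is exactly what the `trustedProxies`/`internalProxies` settings express.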

So you may have to add both nginx and the ELB as trusted proxies


In the end I just added this to the Nginx config inside our Docker container :)

proxy_set_header REMOTE_ADDR $http_x_real_ip;
proxy_set_header REMOTE_HOST $http_x_real_ip; 

And this to the Nginx in Amazon’s “outer” instance:

real_ip_header X-Forwarded-For;

Brute force :)

Hey @thefalken

My apologies for not noticing this thread earlier. I am curious if you recall exactly where you were placing your RemoteIpValve config. Valves can be placed within an Engine, Host, or Context inside the server.xml. If you placed the RemoteIpValve within the default context where several other valves are, rather than the Engine, that could be why it wasn’t working. The default context only gets hit when there’s no other context to service a request. So it’s possible that your requests were simply bypassing your RemoteIpValve config.

Either way, I’m glad you found a solution. =)

It was going in server.xml, directly above the mod_cfml.core valve, so nested inside the Host tag.

Should it be somewhere else? Inside Engine, but outside Host? That would be neater…

Yeah, that would make the RemoteIpValve load only in the default context, which you would rarely, if ever, hit directly.

I did some digging on this and while Valves are supported in wider scopes, it is up to the valve specifically to support being loaded in those wider contexts (like Engine). Looking at the documentation here:

It says “This Valve may be attached to any Container…” but does not mention the wider contexts. So, my guess would be that it doesn’t support it. Instead, you might try loading the RemoteIpValve using the /opt/lucee/tomcat/conf/context.xml - which is theoretically applied to all contexts in that Tomcat instance. There’s even a commented Valve example there. This means you should get the RemoteIpValve functionality in ALL your contexts, regardless of whether those contexts were made by mod_cfml or not.
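
A context.xml placement might look something like this minimal sketch (the path and header name mirror the discussion above and are assumptions for your setup):

```xml
<!-- /opt/lucee/tomcat/conf/context.xml -->
<Context>
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <!-- Applied to every context Tomcat creates, including ones made by mod_cfml -->
    <Valve className="org.apache.catalina.valves.RemoteIpValve"
           remoteIpHeader="X-Real-IP"
           requestAttributesEnabled="true" />
</Context>
```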

Not that it matters too much since you already found a solution, but an interesting concept for anyone else who may be facing a similar situation.

Hope this helps!


Jordan, you hit it on the nose!
Thanks, this is still an issue with Docker Swarm where you have multiple sites running in one box.
Moved it and boom, it works.

