Has anyone else experienced Lucee using a lot more memory since upgrading from .78 to .92?
We’ve been on a code freeze, so we don’t think it’s anything we’ve done, but suddenly two of our EC2 instances started running out of RAM and going into swap. Despite a 2 GB max in setenv.sh, Lucee keeps shooting up to 6+ GB.
I would analyze a heap dump to see what is in memory.
Any tips on how to do that? It happened on two different instances with completely different code bases/sites and resolved when we downgraded back to .78.
There’s an admin heap dump extension…
Good tip Zac, I don’t even think I knew that. I usually use jmap to generate a heap dump.
jmap -dump:live,format=b,file=C:/heapdump.bin <PID>
Replace the path to where you want to write the dump and the PID of the java process.
It requires that you have the JDK installed. A heap dump is a binary file as big as your heap was, and you can analyze it with something like the Eclipse Memory Analyzer Tool (MAT). The binary files zip very well. This is not for the faint of heart, though. I’d spend a couple of minutes taking a look if you want to send it to me, but don’t post the heap dump publicly, since it contains everything in your server’s memory, including potentially sensitive stuff.
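On Linux, the capture-and-compress step described above might look like the sketch below. The `dump_heap` function name and the default output path are my own for illustration; it assumes the JDK (not just a JRE) is installed and that you run it as the same user as the Lucee/Tomcat process:

```shell
# Sketch: capture a live-object heap dump with jmap and gzip it for transfer.
dump_heap() {
    pid="$1"
    out="${2:-/tmp/heapdump.hprof}"
    if [ -z "$pid" ]; then
        echo "usage: dump_heap <java-pid> [output-file]" >&2
        return 1
    fi
    # "live" forces a full GC first so the dump only contains reachable objects
    jmap -dump:live,format=b,file="$out" "$pid" || return 1
    gzip -9 "$out"    # .hprof files are heap-sized but compress very well
    echo "wrote ${out}.gz"
}
```

Then open the decompressed `.hprof` in MAT and start with its "Leak Suspects" report.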
Are you sure this is a Java heap issue?
I’m experiencing issues with metaspace usage.
Do you have debugging enabled?
There’s a recent regression which repeats every error message three times!!!
Which, when combined with
"error parsing large json object dumps out entire json string as exception message",
will eat your memory, since the debugging logs are kept in memory.
In my case it has nothing to do with debugging.
But memory usage growing well beyond the configured heap limit is what I can see (and what rd444 reports).
No debugging here either…
Which versions of java are you using?
How long does it take to occur after a restart?
11.0.3 (AdoptOpenJDK) 64bit
circa 6 hours
And are you sure this is a Java heap issue?
Did you take a look at the memory with a tool like VisualVM?
It was definitely the Lucee process on Linux, as restarting it would reset the usage. Memory debugging is above my skill level. Our setenv.sh was set to a max of 1 GB on one server and 2 GB on another, and both had Lucee using over 5 GB.
Do you have a setting like -XX:MaxMetaspaceSize=1g in the config?
I have observed something similar, and in my case the heap is not the problem, but metaspace is.
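If you suspect metaspace rather than heap, you can watch it without a full profiler using jstat (a JDK tool). A sketch, assuming the `jstat -gc` output includes the MC/MU (metaspace capacity/used, in KB) columns as it does on JDK 8+; the `metaspace` function name is mine:

```shell
# Sketch: print metaspace used/capacity for a running JVM via jstat.
# Parses the header row, so it works regardless of column position.
metaspace() {
    pid="$1"
    if [ -z "$pid" ]; then
        echo "usage: metaspace <java-pid>" >&2
        return 1
    fi
    jstat -gc "$pid" | awk '
        NR==1 { for (i=1; i<=NF; i++) col[$i]=i }
        NR==2 { printf "metaspace used: %.0f KB of %.0f KB\n", $col["MU"], $col["MC"] }'
}
```

Running it every few minutes (e.g. via watch or cron) should show whether MU keeps climbing toward your crash.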
An older version of Lucee (or Railo) had a simpler Java memory display function in the Admin, which I adapted to my needs years ago.
This will display where your Java memory is going, directly from Lucee:
<cffunction name="printMemory" returntype="string">
	<cfargument name="usage" type="query" required="yes">
	<!--- sum each column across all memory pools in the query --->
	<cfset var used = arraySum(queryColumnData(arguments.usage, "used"))>
	<cfset var max = arraySum(queryColumnData(arguments.usage, "max"))>
	<cfset var init = arraySum(queryColumnData(arguments.usage, "init"))>
	<cfset var pused = ceiling(used / max * 100)>
	<cfset var pfree = 100 - pused>
	<cfset var ret = "">
	<cfsavecontent variable="ret">
		<cfoutput>
		<div title="#pfree#% available (#round((max-used)/1024/1024)#mb), #pused#% in use (#round(used/1024/1024)#mb)">
			#arguments.usage.type#: #round(used/1024/1024)#mb used of #round(max/1024/1024)#mb
		</div>
		</cfoutput>
	</cfsavecontent>
	<cfreturn ret>
</cffunction>
Maybe you can post the output here?
Which version of the Lucee loader are you using? You can find it at the bottom of the Services > Update page in the Admin.
CATALINA_OPTS="-Xms1g -Xmx2g -XX:+HeapDumpOnOutOfMemoryError";
is our setenv.sh file.
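For what it's worth, if metaspace turns out to be the culprit, capping it makes the JVM throw OutOfMemoryError: Metaspace instead of growing without bound, and -XX:HeapDumpPath controls where the dump from -XX:+HeapDumpOnOutOfMemoryError lands. A sketch of an extended setenv.sh; the 512m cap and the dump path are illustrative values, not recommendations:

```shell
# setenv.sh sketch: cap metaspace and direct OOM heap dumps to a known path.
# Sizes here are illustrative; tune them for your instance.
CATALINA_OPTS="-Xms1g -Xmx2g \
 -XX:MaxMetaspaceSize=512m \
 -XX:+HeapDumpOnOutOfMemoryError \
 -XX:HeapDumpPath=/var/log/tomcat/heapdumps"
export CATALINA_OPTS
```

Make sure the dump path exists and is writable by the Tomcat user, and that the disk has room for a heap-sized file.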
Did you find a solution for the problem?
Did you try the script above, so you can see more than just the heap?
We had to downgrade on our production servers and haven’t been able to schedule a test there. Here’s the output from our dev server, though it doesn’t have live traffic going against it, so it might be useless info:
That’s not that helpful from a server that isn’t crashing.
If this problem persists (in my case it does), you may add
<cflog file="memory_consumption" application="no" text="Type: #type#; Name: #name#; Used: #ceiling(used/1024/1024)#MB; Percent: #ceiling(used/max*100)#%">
just before the closing
</cfloop>. I’ve got a scheduled task on that page, so I write a log file with the memory consumption. Although this won’t stop the server from crashing, you can at least see what part of memory fills up.