Allocated Memory for Request much higher than Used Memory

Trying to reduce the memory usage of an app. I’m using FusionReactor to look at Requests by Memory.

The top request in the report appears to have an extraordinarily high amount of Allocated Memory, relative to the Used Memory.

I’m trying to determine:

  1. What would cause this? The request itself is pretty simple, and while the payload being returned is in the MBs, it’s nothing like the usage being displayed here.
  2. Is this actually an issue? Does this report in FR give me actionable data, or does the fact that this request is at the top not matter, because the used memory is actually much lower?
  3. Why does the Transaction Allocated Memory appear to be “off”, given that the container itself has a resource limit of 4GB?

OS: Docker container with eclipse-temurin:11.0.24_8-jre-focal as the base image (so, Ubuntu)
Java Version: 11.0.24+8
Lucee Version: lucee-light@5.4.3+15

Matt, this is a very frequent cause of concern/confusion. The way I explain it is that the tracking is of all memory (heap) allocation during THE LIFE of the request (meaning allocations made WITHIN that request, not related to any others, to be clear).

Elaboration follows, so this is a classically longer reply from me. Some may get value out of just the first paragraph alone; for others interested to know more, read on.

We all tend to think this “transaction allocated memory” is showing some sort of “peak”…as in the “most memory used” at one time in the request.

But that interpretation doesn’t make sense if it reports gigs of memory (or even 10s of gigs) more than the xmx/heap max for our instance (whether Lucee or CF or any other Java app that FR is monitoring).

Indeed, that “really high amount” shows us that the JVM is doing GCs (as it should, during the request) and so is recovering objects when they’re no longer in use. But the feature is tracking ALL those allocations in that request, without regard to their later cleanup by the GCs.
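
To make that concrete, here’s a contrived CFML sketch (the sizes and counts are hypothetical, just to illustrate the arithmetic). A request like this would show on the order of 10GB of “transaction allocated memory” in FR, while the heap actually in use at any moment stays tiny:

<cfscript>
	// Contrived illustration: each iteration allocates a roughly 1MB
	// string, which becomes garbage on the next iteration. Cumulative
	// allocation over the request is ~10,000 x 1MB (about 10GB), and
	// that cumulative figure is what FR tracks. Yet the GCs keep
	// recovering the old strings, so "used" heap never grows much
	// beyond one string's worth.
	for (i = 1; i <= 10000; i++) {
		chunk = repeatString("x", 1000000);
	}
</cfscript>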

So, that’s the big picture. There’s of course still the question of “what the hell is allocating so much memory??”

And that “allocation of memory during the life of the request” would basically be down to whatever the request is doing: perhaps returning a lot of data from a query, or from an API call, or the like. Or perhaps it’s looping over such data, or some other large set of data.

Or maybe your code runs some CFML feature that does a lot of work in the background (not based on a large amount of your CFML, per se), like spreadsheet or PDF processing, or the like.

These are just SOME ideas. It could really be anything.

Sadly, FR does not break down the memory use (or allocations) at the request level beyond what you see there (let alone at the CFML level).

There IS the FR memory profiling (heap snapshot) feature, but that’s across the entire heap and ALL requests running (or that have run, creating objects in heap), so that likely won’t help here.

If somehow you could run that request and NO OTHER, between a couple of GCs, then the feature to compare two such heap snapshots might be helpful for you, at least to see what KIND of objects were created. It does track both differences in the SIZE of objects and their COUNT. But the effort often leaves people wanting, as it reports low-level Java object types, which one might struggle to relate to CFML-generated objects.

And now, knowing what I’ve said, even without the heap profile you may look at what your request is doing and perhaps see something differently. You’re a smart guy, so I’ll keep hope alive for you on both of the last two points. :slight_smile:

Finally, it should be clear now that the issue has nothing directly to do with the size of what’s RETURNED to the user (you could return some file that did not need to be loaded into memory). But of course if you do produce a LOT of content as the response, you might find there’s something about that process which could be driving up heap allocations.

Again, though, since the GCs are keeping up (Lucee is not going down on you), the allocations alone are not necessarily where you need to look to understand whatever originally led you to this indication on the request details page (or on the related “requests by memory” page, which some may notice).

And that heap profile feature MAY help you there. I’ve also done some videos/presentations on solving memory problems with FR (done before the heap profile feature was made available).

Or of course I help people directly with such problems. Sometimes I help connect dots far more readily than some might on their own. It’s their call. As always, I don’t charge for time that’s found not to be valuable.

Sadly, I can’t cover/refund the cost for the time it takes to read my long replies like this. :slight_smile: I just never know who will read it, what they do know, what they want or don’t want to know, etc.

Thanks so much for the detailed response @carehart! Really appreciate your insight here.

I just want to clarify this one point you made (that the tracking is of all memory allocation “during THE LIFE of the request”):

Does this mean that this tracking includes memory allocated to different requests, if they happened during the life of this request? That is to say, it’s tracking memory allocated during the request, not necessarily by the request?

Ah, ok. Funny how even careful attempted use of language can still leave gaps!

First, to be clear, no it is NOT about “memory allocated to different requests”. It’s only THIS request. But I can see how you could wonder.

Second, FWIW, I did go on to elaborate how:

“And that would be about whatever the request does:...”

Still, as it seems even those references to “the request” weren’t enough, I will revise your quoted first reference to make this more clear, for the sake of future readers who may not read these replies. (And I see a typo in another paragraph that I also want to correct.)

And so now I also hope that the rest of what I said may make more sense, or may seem more worth considering, if this misinterpretation was hindering you along the way. :slight_smile:

:grinning:

Thanks for your patience and for clarifying - I just wanted to be 100% certain about that point.

I’m getting closer to narrowing down what portions of the request are driving up the memory usage. This is an FW/1 app, and I found that some portion of the memory was being allocated while the framework rendered the response. In light of your above explanation, this makes sense.

There are no other obvious memory culprits, but on Development I’m trying to narrow down each step of the request’s lifecycle.
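
For anyone curious, here’s a rough sketch of how I’m measuring allocation per step. This assumes a HotSpot JVM, whose ThreadMXBean implementation exposes getThreadAllocatedBytes; stepUnderTest() is a hypothetical stand-in for one lifecycle step:

<cfscript>
	// Measure heap bytes allocated by THIS request's thread across one
	// step, via HotSpot's com.sun.management.ThreadMXBean. Unlike a
	// before/after reading of used heap, this figure isn't distorted
	// by GCs running mid-step.
	mx = createObject("java", "java.lang.management.ManagementFactory").getThreadMXBean();
	tid = createObject("java", "java.lang.Thread").currentThread().getId();

	before = mx.getThreadAllocatedBytes(tid);
	stepUnderTest(); // hypothetical stand-in for one lifecycle step
	after = mx.getThreadAllocatedBytes(tid);

	writeOutput("allocated by this step: " & numberFormat((after - before) / 1024 / 1024, "0.00") & " MB");
</cfscript>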

Again, really appreciate your help with this - I’m leaving it with a better understanding of the data that FR is presenting (and how to track down the causes of higher memory requests).

Sweet, and glad to have helped. You’ve helped so many over the years. Happy to return the karma!

Best do a heap dump. If you like, I can take a short look at it and maybe give you some pointers.

You can configure the JVM to create a heap dump when it runs out of memory, which EVERYONE should ALWAYS do (a sketch of those JVM flags follows the script below), or you can create a heap dump on the fly with the following code (it may not work with Java 21, depending on the config):

<cfscript>
	setting requesttimeout=10000;

	// Lucee internals: HeapDumper writes an .hprof file; ResourceUtil
	// resolves a path into a Lucee Resource object
	HeapDumper=createObject('java','lucee.commons.surveillance.HeapDumper');
	ResourceUtil=createObject('java','lucee.commons.io.res.util.ResourceUtil');

	// dumps go in a "dumps" subfolder next to this template
	dir=getDirectoryFromPath(getCurrentTemplatePath())&"dumps/";

	function createIt() {
		// timestamped .hprof file name, e.g. 2024-09-01-12-30-45.hprof
		var dest=dir&dateTimeFormat(now(),"yyyy-mm-dd-HH-nn-ss")&".hprof";
		var res=ResourceUtil.toResourceNotExisting(getPageContext(), dest);
		// second argument true: presumably live objects only, as with
		// HotSpotDiagnosticMXBean.dumpHeap's "live" flag
		HeapDumper.dumpTo(res, true);

		// hprof files compress well, so zip the dump and delete the original
		var zip=res&".zip";
		zip action="zip" file=zip overwrite=true {
			zipparam source=res entrypath="dump.hprof";
		}
		fileDelete(res);
	}

	if(!directoryExists(dir)) directoryCreate(dir);

	// create a new dump, then redirect back to the listing
	if(!isNull(url.create)) {
		createIt();
		location url="#cgi.script_name#" addtoken=false;
	}
	// delete an existing dump
	else if(!isNull(url.delete)) {
		fileDelete(dir&url.delete);
		location url="#cgi.script_name#" addtoken=false;
	}
	// stream a dump back to the browser as a zip
	else if(!isNull(url.get)) {
		content file=dir&url.get type="application/zip";
		abort;
	}
	list=directoryList(path:dir,listInfo:'query',filter:"*.hprof.zip");
</cfscript>
<h1>Heap Dump</h1>

<cfoutput query=list>
	<a href="#cgi.script_name#?get=#list.name#">#list.name# (#int(list.size/1000)/1000#mb)</a> 
	<a href="#cgi.script_name#?delete=#list.name#">[delete]</a><br>
</cfoutput>


<a href="?create=1">Create Heap Dump</a>
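
For the automatic dump-on-OOM option mentioned above, these are the standard HotSpot flags (the dump path here is just an example; adjust it to your setup):

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/path/to/dumps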

Thank you @micstriit!

That’s a super handy script - appreciate you taking the time to help :grinning:

I got it working (Java 17) and the heap dump popped into Eclipse Memory Analyzer - really powerful tool.

Matt, did your assessment of the heap dump get you to resolution, as to what in the request was allocating so much memory? People often find it quite challenging to connect the dots between the java objects identified by such heap dump analysis and what those relate to in CFML or Lucee/CF. But sometimes it’s more clear. Did it work well for you?

Also, something to consider is that you were pursuing the high amount of ALLOCATED memory (as reported by FR, in a given request’s details), and I’d pointed out how that tracks all allocations over the LIFE of the request. And your question was why it was so much higher than the reported USED heap (which is itself a reflection of current heap use of the entire JVM, not specific to this request). There could have been many GCs over that “life of the request” that would leave the “heap used in the entire JVM” far lower than the “heap allocated during the life of the request”.

One reason I repeat/clarify that point is that any heap dump would itself be a “point in time” representation of what was IN THE HEAP at that time. As such, it’s possible that a heap dump might not really identify what objects are being “allocated but later GCed” over the life of the request.
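
If you do want a dump that catches those transient objects, one option is to trigger it from INSIDE the suspect request, partway through the work, while the objects are still reachable. A minimal sketch, reusing the Lucee internals from Micha’s script above (firstHalfOfWork() and secondHalfOfWork() are hypothetical stand-ins for the parts of the request under suspicion):

<cfscript>
	// Dump mid-request, so objects that would later be GCed are still
	// reachable and therefore appear in the dump.
	HeapDumper=createObject("java","lucee.commons.surveillance.HeapDumper");
	ResourceUtil=createObject("java","lucee.commons.io.res.util.ResourceUtil");

	firstHalfOfWork(); // hypothetical stand-in for the suspect work

	dest=getDirectoryFromPath(getCurrentTemplatePath())&"mid-request.hprof";
	HeapDumper.dumpTo(ResourceUtil.toResourceNotExisting(getPageContext(), dest), true);

	secondHalfOfWork(); // hypothetical stand-in for the rest of the request
</cfscript>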

I appreciate that all this may be too much for some readers to consider, keep in their heads, or care about. But I know that you and some others WOULD appreciate understanding all this well, and we all can learn new things, myself included. I do look forward to hearing how things went for you, and anything we might all learn from it. :slight_smile:
