Large query issue

I’ve got an application still running on Lucee 4.5 / Windows Server. It has a rather complex query, running over two (MSSQL) databases and using a lot of SQL joins. The query has about 200 field names and returns approx. 200,000 records. So far so good; it all works under Lucee 4.5 (although the query takes about 2 minutes to get all the data; that doesn’t matter, it’s a job that doesn’t run often).

Problem now: if I try to fire the same query within Lucee 5.*, the template runs indefinitely and doesn’t return anything. It DOES work when I set “maxrows=100”, for example; then the first 100 rows are returned as expected. It also DOES run if I don’t fetch all fields from the query (e.g. only SELECT id FROM …).

Probably there’s a memory problem which prevents Lucee from handling that large amount of data. Any ideas how I can get this to work under Lucee 5.*?
The server is Windows Server 2016 with 16 GB of RAM.

What happens in terms of memory and CPU usage? Was debugging enabled?

Also, which exact versions of Lucee and MSSQL extensions are you using?

Does it occur with the latest SNAPSHOT?

Memory/CPU don’t bump up much; everything stays more or less quiet, and debugging is disabled.
I’ve run the query itself in SQL Server Management Studio and I get the same problem there as well; the query takes forever… So it’s probably not a Lucee issue but an MSSQL one. I’m using MSSQL 2017. The query works on another machine which has MSSQL 2016/2012. Any hints on how to check this in MSSQL?

Could be index fragmentation or the execution plan. I would start with the execution plan in SSMS first. Check the compatibility level of the database to see if it is running in 2017 mode or something older.
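A quick way to check the compatibility level in SSMS (sketch; the database name is a placeholder):

```sql
-- 140 = SQL Server 2017, 130 = 2016, 110 = 2012
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'MyDb';
```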

which version of Lucee?

There are entire courses dedicated to the topics of MSSQL query optimization.

I’d start with sp_whoisactive while the query is running and see if your query is blocking or being blocked by other queries. You’ll see reads and CPU metrics. You can also see blocker information in Activity Monitor.
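sp_whoisactive is not built in; assuming you have installed Adam Machanic’s free procedure, a minimal call looks like:

```sql
-- Shows currently running queries with reads/CPU and any blocking chains.
EXEC sp_whoisactive;
```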



Add `SET STATISTICS TIME, IO ON;` to the top of your statement so you get information on runtime, logical reads + writes and such after the query finishes (assuming it ever does).
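A minimal sketch, assuming the standard SET STATISTICS switches are what is meant here:

```sql
SET STATISTICS TIME ON;  -- CPU and elapsed time per statement
SET STATISTICS IO ON;    -- logical/physical reads per table

-- ... your big query here ...
```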

Turn on Query Store in your database, and use it to analyze the execution plans while they run and/or for past queries within the database.
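Enabling Query Store is a one-liner (the database name here is a placeholder):

```sql
ALTER DATABASE MyDb SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);
```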

Use sp_Blitz to analyze your cached execution plans and indexes (or missing indexes), and use sp_BlitzFirst to see how your overall server configuration is w.r.t. best practices. Speaking of best practices, make sure your server setup follows the published best-practice checklists.
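Assuming you have installed the First Responder Kit, the two checks mentioned above are simply:

```sql
EXEC sp_Blitz;       -- overall server health / configuration check
EXEC sp_BlitzFirst;  -- what is hurting performance right now
```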

You could also leverage a wait-stats monitoring script to see your locking/wait stats over time and look for bottlenecks. Or use a more automated tool for alerting: Red Gate SQL Monitor is great for real-time alerts on long-running queries or blocking queries (with a real-time tree view), and SolarWinds DPA (Database Performance Analyzer) is great for tracking wait stats, point-in-time query information, execution plans and blockers on a regular basis.

Look at the actual execution plan in SSMS, perhaps using TOP 100, TOP 1000, TOP 10000 etc. to make it run faster and hopefully glean some information on where the greatest cost is.
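For example (a sketch; the view name stands in for your real join query):

```sql
-- Enable “Include Actual Execution Plan” (Ctrl+M) in SSMS, then step up the row count:
SELECT TOP (1000) *
FROM dbo.MyBigView;   -- placeholder for the 200-column join query
```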

SQL Sentry Plan Explorer is another great tool to look at execution plans and do analysis of total waits/reads/writes/cpu in a tree view as well as pictorially.


And also realize that if your query in Lucee uses parameterization, when you run it in SSMS and replace the parameters with actual values, you will get DIFFERENT results (because the query optimizer will create a plan including the values you replaced, not for a parameter that might change in the future).
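A hedged illustration of the difference (table and column names are made up):

```sql
-- Parameterized, roughly as Lucee sends it: the plan is compiled for @id as a parameter.
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE CustomerId = @id',
    N'@id int',
    @id = 42;

-- With a literal, as often pasted into SSMS: the plan is compiled for the value 42.
SELECT * FROM dbo.Orders WHERE CustomerId = 42;
```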

Consider a technique like temporary stored procedures if that is the case.
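A sketch of the temporary-stored-procedure trick (names are hypothetical); a temp procedure gets its own cached, parameterized plan, so it behaves more like the query Lucee sends:

```sql
CREATE PROCEDURE #TestBigQuery @id int
AS
    SELECT * FROM dbo.Orders WHERE CustomerId = @id;
GO

EXEC #TestBigQuery @id = 42;  -- dropped automatically when the session ends
```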

And as you can tell, I’m a fan of Brent Ozar. Consider using SQL ConstantCare for ongoing alerts (e.g. you had poison waits today, your server needs more RAM, consider Query Store, etc.), and/or, if you want to go down the road of query tuning, his classes on:
Index Tuning
Query Tuning
Server Tuning

His “How to think like the SQL Engine” talk is free and a good place to start when you start trying to figure out why SQL made the decisions it did.

It could also be the translation of query parameters to Unicode (enabled by default), which can result in indexes being bypassed.

If you see different performance between Lucee and querying the database directly, this may be the reason (if you are passing string parameters)
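If you are on the Microsoft JDBC driver, one way to test this is the sendStringParametersAsUnicode property in the datasource connection string (host and database name are placeholders):

```
jdbc:sqlserver://dbhost:1433;databaseName=MyDb;sendStringParametersAsUnicode=false
```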

cfquery with lazy=“true” on it will keep only one row in memory at a time, similar to how JDBC in Java works by default. You could loop one billion rows or 10 rows with the same memory usage, as long as you discard each row after each iteration. If you loop the entire query all at once with cfloop query=“…” and end the request quickly, you should be much more efficient. Running queries inside a loop is also dangerous, so try not to do that; the connection won’t be closed until the request ends. A regular cfquery builds the entire result before you can loop over it, which is why it needs more memory. Increasing the JVM -Xmx would be your only other option.
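A minimal sketch, assuming a datasource named “myDSN”:

```cfml
<!--- lazy="true" streams rows instead of materializing all ~200,000 at once --->
<cfquery name="bigQuery" datasource="myDSN" lazy="true">
    SELECT * FROM dbo.MyBigView
</cfquery>
<cfloop query="bigQuery">
    <!--- process one row at a time; avoid nesting further queries in here --->
</cfloop>
```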

I customized Lucee to be able to close the query whenever I want, and to automatically close it after a loop (which I use all the time now); it has greatly reduced memory requirements and the chance of crashing due to memory usage. Being able to close result sets and other things quickly is the only way to make Lucee more stable; otherwise you have to subdivide batch jobs into multiple requests. There are several ways to crash Lucee because of the lack of control we have over connections.
