Announcing Lucee 5.2.5 (Final)


#1

Hello Lucee-verse! Happy New Year to everyone!

We’re hoping everyone had a great 2017, and great holidays as well. Team Lucee managed to find several hours of rest and relaxation during the holidays (:laughing:), while we were busy planning the 2018 development schedule.

To get things rolling right away, today we’re announcing the final release of Lucee 5.2.5.

To recap, we shipped 5.2.5 as a Release Candidate on Oct. 10, 2017, with the goal of leaving that in RC status for a bit longer than usual (typically one month), while we worked on wrapping up the 2017 development schedule, and also planned out 2018.

The 5.2.5 release is really solid, as evidenced by the fact that we didn’t have any regressions reported during the 2+ months of testing in the community. And, quite a few customers found that 5.2.5 squashed a lot of bugs that had been lingering over the prior release or two, including some fairly serious ones related to threading and memory management problems.

In short, 5.2.5 is an especially good time to upgrade, if you haven’t done so recently. You can head over to the latest Lucee final releases downloads page, and also check out the Change Log there.

Next, the January 2018 Sprint is underway, and we still have room for adding tickets to this current sprint, so if Santa didn’t deliver your favorite ticket in 5.2.5, please let us know. The January sprint will produce Lucee 5.2.6-RC, which is already on snapshot 29. (Truth be told, we started on 5.2.6 development well before the January sprint!)

Later this week, I’ll be sharing the official 2018 development schedule, which we tweaked a bit based on the experiences (good and bad) of 2017, which was Lucee’s first full year of formal/published sprint activity. Also, Team Lucee is growing, with a new developer and PM joining in late 2017. And more! Stay tuned.

That’s it for now, but we have lots of other news to report as we kick off 2018.

Thanks for listening, and once again, from everyone at LAS: Happy New Year!


#2

Awesome work Team LAS! 2018 is off to a roaring start :lion:


#6

But is 5.2.5 really stable? There have been quite a few reports here of people rolling back to earlier releases due to memory problems.


#7

Do you mean 2018?


#8

News like this should rate a blog entry on lucee.org. People who just visit the website would never know the amount of continual work that is going into Lucee.


#9

OK, I heard that many people had issues with 5.2.4.x and that the 5.2.5.x RC was really stable. So I hope others will chime in here and back up that statement of mine.


#10

I have been running some tests on 5.2.5.20 and that fixes a lot of issues we had with previous versions, although I don’t know how stable it is as we are just moving over to it.

The lucee.org site should carry these posts too. Maybe an admin tag could trigger an https://ifttt.com/ script?


#11

It would be great to get the below into the next sprint :wink:


#12

Stating the obvious, but software like Lucee is complex, and a version that may be “stable” in one situation could be unstable in another.

We started having apparent memory leak problems with 4.5 over a year ago which we isolated to a particular application, but couldn’t determine the root cause (despite having FusionReactor, which just showed waiting/blocked threads piling up).

Having upgraded a client installation from 4.5 to 5.2.3.35 and found it very stable (still running smoothly as I write), we optimistically did the same with our own, only for it to bomb immediately forcing a roll back.

It was only when we got positive reports from others similarly affected that we gave the 5.2.5.20 SNAPSHOT a try, and as I’ve mentioned in other posts, it has indeed proved very stable… for us… in our environment… running our particular apps with their particular workloads.

But sadly it would appear even this version is not stable for everyone.

I’m certainly happy to commend this release based on our experience, but YMMV.


#13

5.2.5 is not stable for me. I’m stuck on 5.2.1.9 right now. See
https://luceeserver.atlassian.net/browse/LDEV-1640.


#14

All News related posts are served from here:
https://dev.lucee.org/c/news

Unfortunately, the development of the new website is lagging behind the move in comms. We do update the main website’s blog section with a link through to here, but it looks like that has been delayed a little over the break.

When the new site is released this year, new posts will be dynamically displayed on the main site.


#15

This is our experience too. We appear to have no issues with the current release.


#16

The LAS team grew in 2017, adding Java development resources, project management and admin support. Members’ and supporters’ contributions will start to make a real difference to our resourcing capacity in the coming year.


#17

@modius and @Julian_Halliwell Out of curiosity, are either of you using query caching in your applications? I’ve been reviewing a heap dump with @sbleon today on https://luceeserver.atlassian.net/browse/LDEV-1640 and have discovered that 93% of his heap space is being consumed by cached queries. We’re still digging into it, so it’s just a lead at this point, but it could explain why the issue is completely absent for some people and present for others.


#18

Do you mean query of queries sort of stuff, or cachedwithin sort of stuff?

While it’s possible somewhere in our portfolio, it is not something we typically do. If we need to cache something, we would more likely externalise the data in memcached or similar.

So, without any sort of close scrutiny, I’d say no; we don’t use query caching.
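
For readers less familiar with the distinction being drawn above, here is a minimal CFML sketch of the two styles (datasource name and columns are hypothetical, not taken from any of the affected apps):

```cfml
<!--- cachedwithin: Lucee stores the result in its query cache and
      reuses it for the same SQL/datasource within the timespan --->
<cfquery name="users" datasource="myDSN"
         cachedwithin="#createTimespan(0, 1, 0, 0)#">
    SELECT id, name FROM users
</cfquery>

<!--- Query of Queries: an in-memory SQL query against an existing
      query object; no server-side query cache is involved --->
<cfquery name="activeUsers" dbtype="query">
    SELECT * FROM users WHERE name LIKE 'A%'
</cfquery>
```

Only the first form puts entries into the server’s query cache, which is the mechanism under suspicion in this thread.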


#19

You might be onto something, Brad. No, we hardly use the built-in cachedwithin etc. query caching at all. In fact, we reduced its use in favour of our own caching mechanism a while ago, in case it was contributing to the memory problems we were experiencing at the time.


#20

To add to this, Leon is making heavy use of cachedwithin=0, whose behavior was changed recently in this ticket:
https://luceeserver.atlassian.net/browse/LDEV-907

However, it looks as though it may have been implemented in such a way that even though Lucee returns an un-cached query, it doesn’t actually clear the items from the cache. See this related ticket:
https://luceeserver.atlassian.net/browse/LDEV-1480

If some developers are using cachedwithin=0 AND that value incorrectly leaves the items in the cache AND the RamCache has no limits set, then it seems reasonable that this could be the source of memory issues that only affect some people.

This is a total guess right now, though. I need to hear back from some other people who are still claiming to have memory issues on 5.2.5.20 to see if this profile possibly fits their usage.
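
As a hedged illustration of the usage pattern in question (datasource and variable names are hypothetical): per LDEV-907, cachedwithin=0 is meant to bypass the cache and return a fresh result, but per the concern in LDEV-1480, a previously cached entry may nonetheless remain in the RamCache, which is unbounded unless limits are configured:

```cfml
<!--- Intended to force a fresh (un-cached) query result.
      The suspicion is that cached entries for this statement are
      not evicted, so an unlimited RamCache slowly fills the heap. --->
<cfquery name="orders" datasource="myDSN" cachedwithin="0">
    SELECT * FROM orders
    WHERE customer_id = <cfqueryparam value="#custId#"
                                      cfsqltype="cf_sql_integer">
</cfquery>
```

If that reading is right, apps that never set RamCache size/element limits and lean on cachedwithin=0 would be exactly the ones to see unbounded heap growth.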


Lucee 5.2.x Java Heap Issues
#21

Just to quickly document another snafu we’ve encountered switching to this release. As mentioned, we’ve been using the 5.2.5.20-SNAPSHOT in production since its release in October because it solved our memory issues, but now that 5.2.5.20 is officially released we want to be back on the stable release update channel.

This should have been straightforward, being the same version, but after switching the .lco files to the latest in one of our instances and restarting, we found that request execution times were terrible. There were no memory problems or stuck requests as before, but high CPU and very long-running requests which eventually timed out.

This server has a lot of different applications running on it, some constantly busy, others only accessed at certain times. After 10 minutes or so things seemed to settle down, but when a new application was accessed for the first time, the same thing started again: CPU thrashing, long requests and timeouts (the logs were full of Java thread death events).

Eventually I found a solution: clearing the WEB-INF\cfclasses folder before the first request prevented the issue, and the app loaded normally with none of the symptoms described.

An edge case of “side”-grading, but just in case anyone else is in the same situation.


5.2.7.63 upgrade and cfclasses
#22

I’ve been doing some load testing on my server using Webserver Stress Tool 8 with simulations of 45 users and a random number of clicks per user over a 20 minute period.

The webserver is running:
Win 2012 R2
4 × X6560 Xeon processors / 8 GB RAM
IIS 8.5
Lucee 5.2.5.20
For testing purposes max heap was set to 2GB

Running Mura 7 connecting to a MySQL database

The site I’m testing ran at around 13% average heap and 8-10% non-heap on version 5.1.1.65. It also ran at about 13% heap on 5.2.1.9, but I saw a gradual increase in non-heap (about 1-2% over a 24 hr period). There haven’t been any code changes to the site.

When the simulation first starts, the Java heap begins to climb; Lucee appears to reclaim much of the heap, but as the test continues the heap keeps climbing and Lucee doesn’t reclaim more than about 20% of it (sometimes as little as 2%). At the beginning of each test Lucee reclaims about 20% of the heap. During each test, CPU fluctuated between 66% and 99%.

On average, at the end of each 20-minute test the heap was at 75-80%. When I started the initial test I had 3% heap and 3% non-heap (non-heap climbs to about 10% and stays steady around 10% throughout the duration of the tests). If I re-initiated the test, there was an initial reclaiming of about 20% of the heap; however, if I increased the duration of the test to 40 minutes, the server eventually became unresponsive with 95% heap, 10% non-heap and CPU railing at 99%.

I ran the test on the same server on a simple static site with no database connections, and there was nominal heap growth and the heap was fully reclaimed.

I think this leans towards the hypothesis of cached queries not being released…