Application.log full of unwanted OSGI logs

@justincarter is the lucee/lucee:5.3.7.34-RC-nginx image updating OK?

It looks like Lucee is doing that logging for the OSGI bundles that it’s loading. It shouldn’t be specific to using the Docker images, I would guess it’s reproducible in a standard install or in Lucee Express as well?

Docker builds for a specific version number only happen once, so 5.3.7.34-RC was built 22 days ago, and once that image is tagged it will not change (any new builds will have a new version number)

From the comments in the ticket I suspect it’s not fixed in Lucee yet;

https://luceeserver.atlassian.net/browse/LDEV-2516

Pothys - MitrahSoft [June 15, 2020, 9:30 AM]: I’ve checked this with 5.3.7.34-RC and with the latest snapshot, 5.3.8.5_SNAPSHOT. The latest RC does not fix this issue; it still throws the same error.

I think there’s been a subsequent RC ? Further up the thread ?


I suppose what I want @justincarter is a Docker image for 5.3.7.34-RC3 - not the original RC.
I can’t find it listed on https://download.lucee.org/ either so have no idea what to do next to help.
This is killing our logging. It’s impossible to find anything.

I’m not sure what you mean by RC3; RCs are just a snapshot flagged as an RC, so 5.3.7.34 is the RC?

I suppose I should have asked:

Is there a later release in the 5.3.x series than the .34 RC?
If not, when might one be due? (Assumed to have the bug fix, but I can’t test until it makes it into a Docker image.)

The ticket is still “Open” so it doesn’t seem like the issue is fixed yet;
https://luceeserver.atlassian.net/browse/LDEV-2516

The Docker images are essentially builds based on the version numbers that you see on the Lucee Downloads page:
https://download.lucee.org/

I don’t think there’s any such release as 5.3.7.34-RC3, the version number from the downloads page is 5.3.7.34-RC and that’s the version number for the corresponding Docker image. That is also the newest RC, and there are no SNAPSHOT builds after 5.3.7.34 in the 5.3.7.x line. If you wanted to try something newer than that you’d need to try the 5.3.8.20-SNAPSHOT;

[screenshot of the Lucee downloads page showing the available builds]

@cfmitrah Can you confirm this is still a bug in the latest RC and latest SNAPSHOT? If that’s the case, IMO it should be fixed in 5.3.7.x before the release.


Ok, @justincarter I will check now.


I’ve checked this and can reproduce it with the latest version 5.3.8.20-SNAPSHOT and with 5.3.7.34-RC too.

  • But you can’t see this on the latest snapshots and RCs unless you create a separate “application” log.
  • The issue logs to application.log in WEB-INF.
  • But we couldn’t see an application.log in that folder,
  • because that log isn’t available in the web admin under Settings/Logging.
  • So if we create an “application” log with its level set to INFO, we can see the entry below at level INFO.

"INFO","ajp-nio-8009-exec-8","07/31/2020","16:59:13","OSGI","add bundle:test\lib\jsoup\jsoup-1.10.2.jar"

the missing application.log is another bug/regression?

bug filed [LDEV-2990] - Lucee

Yup, certainly, and see https://lucee.daemonite.io/t/no-application-log-written-but-others-are/7088/17 because even if you try to fake it the contents are wrong

Just to resurrect this, the issue with no application.log at all is sorted, but that just makes it even more annoying that it’s full of OSGI spam, multiple lines on every single request.

This translates directly into increased storage/transfer costs for sites that have any Java classes loaded.

I’ve submitted a simple one-line PR, because that’d be easier for everyone than hand-hacking .jar files, with unknown impacts beyond making it log less

Does anyone know when the next release, with hopefully this fix in, might be ?

just truncate the log

set -o noclobber
tail -n 1000 /pathoflogfile/logfile > /pathoflogfile/logfile.trimmed
mv /pathoflogfile/logfile.trimmed /pathoflogfile/logfile

alternatively, you could set up logrotate
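For reference, a minimal logrotate sketch. The Lucee log path below is an assumption (adjust to your install); copytruncate matters because Lucee keeps the file handle open, so the file must be truncated in place rather than moved:

```
# /etc/logrotate.d/lucee (sketch; path is an assumption)
/opt/lucee/tomcat/webapps/ROOT/WEB-INF/lucee/logs/application.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```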

Hi Terry.

That’d work on a small scale, but we’ve got a fleet of servers, all shipping logs out to places like Datadog and/or AWS CloudWatch Logs etc.

These services charge for storage and transfer, and the I/O is a pain point too when lots of requests are happening.

It’s not a long term solution I feel.
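In the interim, a filter just before the shipper can drop only the OSGI lines. A sketch using grep, with made-up sample lines mimicking the application.log format (field layout is an assumption based on the entry quoted above):

```shell
# Build a sample log (two fabricated lines in Lucee's CSV-ish format),
# then drop any line whose fifth field is "OSGI".
printf '%s\n' \
  '"INFO","ajp-nio-8009-exec-8","07/31/2020","16:59:13","OSGI","add bundle:test\lib\jsoup\jsoup-1.10.2.jar"' \
  '"ERROR","ajp-nio-8009-exec-2","07/31/2020","17:01:02","application","something we actually care about"' \
  > application.log
grep -v '","OSGI","' application.log > application.filtered.log
cat application.filtered.log   # prints only the ERROR line
```

That keeps real application entries flowing to Datadog/CloudWatch while the noise never leaves the box.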

I should charge for my support here… :wink:

you could easily use cfexecute to call a script task to purge the file
you could alternatively just dump the file to /dev/null
logrotate is used by enterprises worldwide, and the format is pretty straightforward
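The /dev/null variant can be sketched like this; the demo runs in a scratch directory, and the real target would be your WEB-INF/lucee/logs/application.log (path is an assumption):

```shell
# Symlink the log to /dev/null so every write is discarded.
LOGDIR=$(mktemp -d)            # scratch dir standing in for the logs dir
ln -sf /dev/null "$LOGDIR/application.log"
echo '"INFO",...,"OSGI","add bundle: ..."' >> "$LOGDIR/application.log"
wc -c < "$LOGDIR/application.log"   # prints 0: the write went nowhere
```

It’s a blunt instrument, of course: real application errors vanish along with the OSGI noise.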

you could put on your Java hat and do something like this in conf/logging.properties

handlers = 1spring.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

1spring.org.apache.juli.FileHandler.level = SEVERE
1spring.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
1spring.org.apache.juli.FileHandler.prefix = springframework.

java.util.logging.ConsoleHandler.level = SEVERE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

org.springframework.handlers = 1spring.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

Or you could limit the number of logged lines by changing the Lucee init call

2>&1 | head -10 >> "$CATALINA_OUT" &

Or you could just try to remove all logging, and suffer

server.xml

<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
           prefix="localhost_access_log." suffix=".txt"
           pattern="%h %l %u %t &quot;%r&quot; %s %b" />

then in /conf

conf/logging.properties

handlers = 1catalina.org.apache.juli.FileHandler,


Comment all that out, then point everything else your application logs at /dev/null.

Or, or, and hear me out here, Lucee could just not have started spamming non-application INFO events to the application log file :slight_smile: - I’ve been running *nix servers for 30 years (my first job was as an admin on a SPARC cluster). I know about logrotate :slight_smile:

Putting workarounds in place for a (now fixed) bug could easily lead to other issues, which is why we’re waiting on a Lucee release.

Well, my take, as I see many enterprise companies making a fortune on open-source products:

If you’re making money on an open-source product and it’s not doing what you want, either contribute to the project and ask nicely for the feature, fork it and code it the way you want, or change platforms.

The application.log growing to infinity isn’t unique to Lucee.

If it’s that big of an issue, then throw resources at it, as there is a business case for the feature, and if not, then make one.
If you can’t do that, then is it really a needed “feature”, or is it just a want?

The Lucee forum is filled with some of the biggest leaders in the ColdFusion realm. It’s really not too hard to ask @carehart @pfreitag @bennadel or others for their consulting services.


I agree!

It’s why I contributed the fix.

I just asked when the next Lucee release might be; not sure how we’ve ended up here.