Massive amounts of debug logging in 5.3.5.92

OS: Docker running on 64bit Amazon Linux/2.14.3
Java Version: 11.0.6
Tomcat Version: 9.0.33
Lucee Version: 5.3.5.92

I recently upgraded our production environment to Lucee 5.3.5.92 (from 5.2.8.50). Additionally, I upgraded to the newest version of the S3 extension to combat concurrency errors.

After about 5 hours of normal weekday usage, the 120GB hard drive was filled by massive (1-3GB) log files automatically saved by AWS Elastic Beanstalk (named something like “/var/log/eb-docker/containers/eb-current-app/eb-063d69495c9f-stdouterr.log”). Inspecting the files shows enormous amounts of DEBUG-level logging related to file uploads and their subsequent handling (inspection and S3 copy). The logs include the full contents of the uploaded files, which is the main reason they grow so large.

The Lucee and Tomcat log files aren’t bloating in this manner; it’s only the log files on the EC2 instance outside the Docker container. All of the logging within the Lucee admin is set to ERROR level. This makes me suspect that Elastic Beanstalk is picking up some sort of console logging, perhaps output from Java, but that’s just a guess. So far, I’m unable to replicate this issue outside of our AWS Elastic Beanstalk stack.

In my testing of all combinations of Lucee 5.3.5.92, Lucee 5.2.8.50, extension-s3 0.9.4.122, and extension-s3 0.9.4.117, this only happens with Lucee 5.3.5.92, regardless of the S3 extension version. Effectively, the only difference that triggers this problem is the FROM lucee/lucee:5.3.5.92 line at the top of our Dockerfile.

After looking around this forum and the related Jira tickets, it appears this issue might be somewhat related to this thread: https://lucee.daemonite.io/t/memory-usage-jump-5-3-5-78-to-92/6813

Does this issue warrant a new Jira ticket (where I would provide some test code and samples from the log files), or is this a known issue? In the meantime, is it possible to turn this debug-level logging off, perhaps with some JAVA_OPTS passed into the Docker container?
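
In case it helps anyone (or in case someone can confirm it’s the right direction), the workaround I’m considering is pointing the JVM at an override log4j configuration that silences the root logger. This assumes the DEBUG output is coming from log4j 1.x somewhere on the classpath, which I haven’t verified yet, and the file name and path below are just placeholders:

```
# quiet-log4j.properties (hypothetical override; assumes log4j 1.x is emitting the DEBUG lines)
# Activated via JAVA_OPTS, e.g. something like this in the Dockerfile:
#   ENV JAVA_OPTS="${JAVA_OPTS} -Dlog4j.configuration=file:/opt/quiet-log4j.properties"
log4j.rootLogger=ERROR, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%-5p - %m%n
```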

Here’s a sample of the log file, with redactions and a truncation:

DEBUG - Pooling connection [{s}->https://[BUCKET_NAME].s3.amazonaws.com:443][null]; keep alive indefinitely
DEBUG - Notifying no-one, there are no waiting threads
DEBUG - Released HttpMethod as its response data stream is fully consumed
DEBUG - Parsing XML response document with handler: class org.jets3t.service.impl.rest.XmlResponsesSaxParser$ListBucketHandler
DEBUG - Examining listing for bucket: [BUCKET_NAME]
DEBUG - Found 0 objects in one batch
DEBUG - Found 0 common prefixes in one batch
DEBUG - File name has no extension, mime type cannot be recognised for: aws-s317783892662367369622tmp
DEBUG - Creating Object with key dev-a-s/10k.txt in bucket [BUCKET_NAME]
DEBUG - Uploading object data with Content-Length: 10000
DEBUG - Setting maximal read-ahead mark limit for markable input stream org.jets3t.service.io.RepeatableFileInputStream assuming it doesn't use in-memory storage: 10000
DEBUG - Input stream marked at 0 bytes
DEBUG - httpclient.read-throttle=0
DEBUG - s3service.s3-endpoint=null
DEBUG - s3service.enable-storage-classes=true
DEBUG - s3service.s3-endpoint=null
DEBUG - Creating object bucketName=[BUCKET_NAME], objectKey=dev-a-s/10k.txt, storageClass=null. Content-Type=text/plain Including data? true Metadata: {x-amz-content-sha256=6e904788d0eb0feee634f4738163b34a2f63dd921e5c6583de081c975dd4d16c, x-amz-acl=public-read, md5-hash=9aaee861df25d65d6d5a40869379caa2, Content-Length=10000, Content-MD5=mq7oYd8l1l1tWkCGk3nKog==, Content-Type=text/plain} ACL: AccessControlList [owner=null, grants=[]]
DEBUG - s3service.disable-dns-buckets=false
DEBUG - s3service.s3-endpoint=s3.amazonaws.com
DEBUG - s3service.s3-endpoint-virtual-path=
DEBUG - s3service.s3-endpoint-https-port=443
DEBUG - S3 URL: https://[BUCKET_NAME].s3.amazonaws.com:443/dev-a-s/10k.txt
DEBUG - Performing PUT request for 'https://[BUCKET_NAME].s3.amazonaws.com:443/dev-a-s/10k.txt', expecting response codes: [200,204]
DEBUG - Headers: [Date: Thu, 23 Apr 2020 09:08:27 GMT, x-amz-content-sha256: 6e904788d0eb0feee634f4738163b34a2f63dd921e5c6583de081c975dd4d16c, x-amz-acl: public-read, x-amz-meta-md5-hash: 9aaee861df25d65d6d5a40869379caa2, Content-MD5: mq7oYd8l1l1tWkCGk3nKog==, Content-Type: text/plain]
DEBUG - httpclient.retry-max=5
DEBUG - Adding authorization for Access Key '[AWS_KEY]'.
DEBUG - s3service.s3-endpoint=s3.amazonaws.com
DEBUG - storage-service.request-signature-version=AWS2
DEBUG - s3service.s3-endpoint=s3.amazonaws.com
DEBUG - Canonical string ('|' is a newline): PUT|mq7oYd8l1l1tWkCGk3nKog==|text/plain|Thu, 23 Apr 2020 09:08:27 GMT|x-amz-acl:public-read|x-amz-content-sha256:6e904788d0eb0feee634f4738163b34a2f63dd921e5c6583de081c975dd4d16c|x-amz-meta-md5-hash:9aaee861df25d65d6d5a40869379caa2|/[BUCKET_NAME]/dev-a-s/10k.txt
DEBUG - Get connection: {s}->https://[BUCKET_NAME].s3.amazonaws.com:443, timeout = 0
DEBUG - [{s}->https://[BUCKET_NAME].s3.amazonaws.com:443] total kept alive: 3, total issued: 0, total allocated: 3 out of 20
DEBUG - Getting free connection [{s}->https://[BUCKET_NAME].s3.amazonaws.com:443][null]
DEBUG - Stale connection check
DEBUG - CookieSpec selected: default
DEBUG - Auth cache not set in the context
DEBUG - Proxy auth state: UNCHALLENGED
DEBUG - Attempt 1 to execute request
DEBUG - Sending request: PUT /dev-a-s/10k.txt HTTP/1.1
DEBUG -  >> "PUT /dev-a-s/10k.txt HTTP/1.1[\r][\n]"
DEBUG -  >> "Date: Thu, 23 Apr 2020 09:08:27 GMT[\r][\n]"
DEBUG -  >> "x-amz-content-sha256: 6e904788d0eb0feee634f4738163b34a2f63dd921e5c6583de081c975dd4d16c[\r][\n]"
DEBUG -  >> "x-amz-acl: public-read[\r][\n]"
DEBUG -  >> "x-amz-meta-md5-hash: 9aaee861df25d65d6d5a40869379caa2[\r][\n]"
DEBUG -  >> "Content-MD5: mq7oYd8l1l1tWkCGk3nKog==[\r][\n]"
DEBUG -  >> "Content-Type: text/plain[\r][\n]"
DEBUG -  >> "Authorization: AWS [AWS_KEY]:[AWS_SECRET][\r][\n]"
DEBUG -  >> "Content-Length: 10000[\r][\n]"
DEBUG -  >> "Host: [BUCKET_NAME].s3.amazonaws.com:443[\r][\n]"
DEBUG -  >> "Connection: Keep-Alive[\r][\n]"
DEBUG -  >> "User-Agent: JetS3t/0.9.4 (Linux/4.14.171-105.231.amzn1.x86_64; amd64; en; JVM 11.0.6)[\r][\n]"
DEBUG -  >> "Expect: 100-continue[\r][\n]"
DEBUG -  >> "[\r][\n]"
DEBUG - >> PUT /dev-a-s/10k.txt HTTP/1.1
DEBUG - >> Date: Thu, 23 Apr 2020 09:08:27 GMT
DEBUG - >> x-amz-content-sha256: 6e904788d0eb0feee634f4738163b34a2f63dd921e5c6583de081c975dd4d16c
DEBUG - >> x-amz-acl: public-read
DEBUG - >> x-amz-meta-md5-hash: 9aaee861df25d65d6d5a40869379caa2
DEBUG - >> Content-MD5: mq7oYd8l1l1tWkCGk3nKog==
DEBUG - >> Content-Type: text/plain
DEBUG - >> Authorization: AWS [AWS_KEY]:[AWS_SECRET]
DEBUG - >> Content-Length: 10000
DEBUG - >> Host: [BUCKET_NAME].s3.amazonaws.com:443
DEBUG - >> Connection: Keep-Alive
DEBUG - >> User-Agent: JetS3t/0.9.4 (Linux/4.14.171-105.231.amzn1.x86_64; amd64; en; JVM 11.0.6)
DEBUG - >> Expect: 100-continue
DEBUG -  << "HTTP/1.1 100 Continue[\r][\n]"
DEBUG -  << "[\r][\n]"
DEBUG - Receiving response: HTTP/1.1 100 Continue
DEBUG - << HTTP/1.1 100 Continue

[THIS LINE CONTAINS THE FULL CONTENTS OF THE UPLOADED FILE AND HAS BEEN MANUALLY TRUNCATED]
DEBUG -  >> "1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 
[THIS LINE CONTAINS THE FULL CONTENTS OF THE UPLOADED FILE AND HAS BEEN MANUALLY TRUNCATED]

DEBUG -  << "HTTP/1.1 200 OK[\r][\n]"
DEBUG -  << "x-amz-id-2: [LOOKS_SENSITIVE][\r][\n]"
DEBUG -  << "x-amz-request-id: 98D012D910ADE495[\r][\n]"
DEBUG -  << "Date: Thu, 23 Apr 2020 09:08:28 GMT[\r][\n]"
DEBUG -  << "ETag: "9aaee861df25d65d6d5a40869379caa2"[\r][\n]"
DEBUG -  << "Content-Length: 0[\r][\n]"
DEBUG -  << "Server: AmazonS3[\r][\n]"
DEBUG -  << "[\r][\n]"
DEBUG - Receiving response: HTTP/1.1 200 OK
DEBUG - << HTTP/1.1 200 OK
DEBUG - << x-amz-id-2: [LOOKS_SENSITIVE]
DEBUG - << x-amz-request-id: 98D012D910ADE495
DEBUG - << Date: Thu, 23 Apr 2020 09:08:28 GMT
DEBUG - << ETag: "9aaee861df25d65d6d5a40869379caa2"
DEBUG - << Content-Length: 0
DEBUG - << Server: AmazonS3
DEBUG - Connection can be kept alive indefinitely
DEBUG - Response for 'PUT'. Content-Type: , Headers: [x-amz-id-2: [LOOKS_SENSITIVE], x-amz-request-id: 98D012D910ADE495, Date: Thu, 23 Apr 2020 09:08:28 GMT, ETag: "9aaee861df25d65d6d5a40869379caa2", Content-Length: 0, Server: AmazonS3]
DEBUG - Response entity: org.apache.http.conn.BasicManagedEntity@19cb4b9
DEBUG - Entity length: 0
DEBUG - Received response code 200; matches one of expected set ([200, 204])? true
DEBUG - Released connection is reusable.
DEBUG - Releasing connection [{s}->https://[BUCKET_NAME].s3.amazonaws.com:443][null]
DEBUG - Pooling connection [{s}->https://[BUCKET_NAME].s3.amazonaws.com:443][null]; keep alive indefinitely
DEBUG - Notifying no-one, there are no waiting threads
DEBUG - Processing REST metadata items
DEBUG - Removed header prefix x-amz- from key: x-amz-content-sha256=>content-sha256
DEBUG - Removed header prefix x-amz- from key: x-amz-acl=>acl
DEBUG - Ignoring metadata item: Server=AmazonS3
DEBUG - Leaving header item unchanged: ETag="9aaee861df25d65d6d5a40869379caa2"
DEBUG - Ignoring metadata item: md5-hash=9aaee861df25d65d6d5a40869379caa2
DEBUG - Removed header prefix x-amz- from key: x-amz-request-id=>request-id
DEBUG - Leaving HTTP header item unchanged: Content-Length=10000
DEBUG - Removed header prefix x-amz- from key: x-amz-id-2=>id-2
DEBUG - Parsing date string 'Thu, 23 Apr 2020 09:08:28 GMT' into Date object for key: Date
DEBUG - Leaving header item unchanged: Date=Thu Apr 23 09:08:28 UTC 2020
DEBUG - Leaving HTTP header item unchanged: Content-MD5=mq7oYd8l1l1tWkCGk3nKog==
DEBUG - Leaving HTTP header item unchanged: Content-Type=text/plain
DEBUG - s3service.disable-dns-buckets=false
DEBUG - s3service.s3-endpoint=s3.amazonaws.com
DEBUG - s3service.s3-endpoint-virtual-path=
DEBUG - s3service.s3-endpoint-https-port=443
DEBUG - S3 URL: https://[BUCKET_NAME].s3.amazonaws.com:443/
DEBUG - Added request parameter: max-keys=10
DEBUG - Added request parameter: prefix=dev-a-s/10k.txt
DEBUG - Added request parameter: delimiter=,
DEBUG - Performing GET request for 'https://[BUCKET_NAME].s3.amazonaws.com:443/?max-keys=10&prefix=dev-a-s%2F10k.txt&delimiter=%2C', expecting response codes: [200]
DEBUG - Headers: [Date: Thu, 23 Apr 2020 09:08:27 GMT]
DEBUG - httpclient.retry-max=5
DEBUG - Adding authorization for Access Key '[AWS_KEY]'.
DEBUG - s3service.s3-endpoint=s3.amazonaws.com
DEBUG - storage-service.request-signature-version=AWS2
DEBUG - s3service.s3-endpoint=s3.amazonaws.com
DEBUG - Canonical string ('|' is a newline): GET|||Thu, 23 Apr 2020 09:08:27 GMT|/[BUCKET_NAME]/
DEBUG - Get connection: {s}->https://[BUCKET_NAME].s3.amazonaws.com:443, timeout = 0
DEBUG - [{s}->https://[BUCKET_NAME].s3.amazonaws.com:443] total kept alive: 3, total issued: 0, total allocated: 3 out of 20
DEBUG - Getting free connection [{s}->https://[BUCKET_NAME].s3.amazonaws.com:443][null]
DEBUG - Stale connection check
DEBUG - CookieSpec selected: default
DEBUG - Auth cache not set in the context
DEBUG - Proxy auth state: UNCHALLENGED
DEBUG - Attempt 1 to execute request
DEBUG - Sending request: GET /?max-keys=10&prefix=dev-a-s%2F10k.txt&delimiter=%2C HTTP/1.1
DEBUG -  >> "GET /?max-keys=10&prefix=dev-a-s%2F10k.txt&delimiter=%2C HTTP/1.1[\r][\n]"
DEBUG -  >> "Date: Thu, 23 Apr 2020 09:08:27 GMT[\r][\n]"
DEBUG -  >> "Authorization: AWS [AWS_KEY]:XtD5Ov/zIglrhtQNUikb/QxTVCg=[\r][\n]"
DEBUG -  >> "Host: [BUCKET_NAME].s3.amazonaws.com:443[\r][\n]"
DEBUG -  >> "Connection: Keep-Alive[\r][\n]"
DEBUG -  >> "User-Agent: JetS3t/0.9.4 (Linux/4.14.171-105.231.amzn1.x86_64; amd64; en; JVM 11.0.6)[\r][\n]"
DEBUG -  >> "[\r][\n]"
DEBUG - >> GET /?max-keys=10&prefix=dev-a-s%2F10k.txt&delimiter=%2C HTTP/1.1
DEBUG - >> Date: Thu, 23 Apr 2020 09:08:27 GMT
DEBUG - >> Authorization: AWS [AWS_KEY]:XtD5Ov/zIglrhtQNUikb/QxTVCg=
DEBUG - >> Host: [BUCKET_NAME].s3.amazonaws.com:443
DEBUG - >> Connection: Keep-Alive
DEBUG - >> User-Agent: JetS3t/0.9.4 (Linux/4.14.171-105.231.amzn1.x86_64; amd64; en; JVM 11.0.6)
DEBUG -  << "HTTP/1.1 200 OK[\r][\n]"
DEBUG -  << "x-amz-id-2: [LOOKS_SENSITIVE][\r][\n]"
DEBUG -  << "x-amz-request-id: 744F56DBFC15E855[\r][\n]"
DEBUG -  << "Date: Thu, 23 Apr 2020 09:08:28 GMT[\r][\n]"
DEBUG -  << "x-amz-bucket-region: us-east-1[\r][\n]"
DEBUG -  << "Content-Type: application/xml[\r][\n]"
DEBUG -  << "Transfer-Encoding: chunked[\r][\n]"
DEBUG -  << "Server: AmazonS3[\r][\n]"
DEBUG -  << "[\r][\n]"
DEBUG - Receiving response: HTTP/1.1 200 OK
DEBUG - << HTTP/1.1 200 OK
DEBUG - << x-amz-id-2: [LOOKS_SENSITIVE]
DEBUG - << x-amz-request-id: 744F56DBFC15E855
DEBUG - << Date: Thu, 23 Apr 2020 09:08:28 GMT
DEBUG - << x-amz-bucket-region: us-east-1
DEBUG - << Content-Type: application/xml
DEBUG - << Transfer-Encoding: chunked
DEBUG - << Server: AmazonS3
DEBUG - Connection can be kept alive indefinitely
DEBUG - Response for 'GET'. Content-Type: application/xml, Headers: [x-amz-id-2: [LOOKS_SENSITIVE], x-amz-request-id: 744F56DBFC15E855, Date: Thu, 23 Apr 2020 09:08:28 GMT, x-amz-bucket-region: us-east-1, Content-Type: application/xml, Transfer-Encoding: chunked, Server: AmazonS3]
DEBUG - Response entity: org.apache.http.conn.BasicManagedEntity@6ea70871
DEBUG - Entity length: -1
DEBUG - Received response code 200; matches one of expected set ([200])? true
DEBUG - xmlparser.sanitize-listings=true
DEBUG - Sanitizing XML document destined for handler class org.jets3t.service.impl.rest.XmlResponsesSaxParser$ListBucketHandler
DEBUG -  << "26a[\r][\n]"
DEBUG -  << "<?xml version="1.0" encoding="UTF-8"?>[\n]"
DEBUG -  << "<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>[BUCKET_NAME]</Name><Prefix>dev-a-s/10k.txt</Prefix><Marker></Marker><MaxKeys>10</MaxKeys><Delimiter>,</Delimiter><IsTruncated>false</IsTruncated><Contents><Key>dev-a-s/10k.txt</Key><LastModified>2020-04-23T09:08:28.000Z</LastModified><ETag>&quot;9aaee861df25d65d6d5a40869379caa2&quot;</ETag><Size>10000</Size><Owner><ID>b02bbd08bc9bcff8972392ceb9f9f7bb965b038f624f2544f6099a6b3839b017</ID><DisplayName>tabbott</DisplayName></Owner><StorageClass>STANDARD</StorageClass></Contents></ListBucketResult>"
DEBUG -  << "[\r][\n]"
DEBUG -  << "0[\r][\n]"
DEBUG -  << "[\r][\n]"
DEBUG - Released connection is reusable.
DEBUG - Releasing connection [{s}->https://[BUCKET_NAME].s3.amazonaws.com:443][null]
DEBUG - Pooling connection [{s}->https://[BUCKET_NAME].s3.amazonaws.com:443][null]; keep alive indefinitely
DEBUG - Notifying no-one, there are no waiting threads
DEBUG - Released HttpMethod as its response data stream is fully consumed
DEBUG - Parsing XML response document with handler: class org.jets3t.service.impl.rest.XmlResponsesSaxParser$ListBucketHandler
DEBUG - Examining listing for bucket: [BUCKET_NAME]
DEBUG - Created new object from listing: S3Object [key=dev-a-s/10k.txt, bucket=[BUCKET_NAME], lastModified=Thu Apr 23 09:08:28 UTC 2020, dataInputStream=null, storageClass=STANDARD, Metadata={ETag="9aaee861df25d65d6d5a40869379caa2", Last-Modified=Thu Apr 23 09:08:28 UTC 2020, Content-Length=10000}]
DEBUG - Found 1 objects in one batch
DEBUG - Found 0 common prefixes in one batch

Does it also happen with the latest snapshot?

My initial testing shows that the latest snapshot (5.3.7.11-SNAPSHOT) fixes this issue. I’m going to deploy it to production late tonight and see how it goes tomorrow. I previously tested 5.3.6.53-RC, but that version didn’t resolve the problem.

Thanks!

If this is still an issue in the 5.3.6 RC, we need to get it fixed before that goes stable. It looks like the public RC was 5.3.6.53 but the latest snapshot is 5.3.5.55. Can you see if one of those last two snapshots has the fix in the 5.3.6 series? If not, we need to holler about it.

Alrighty, I was mistaken: 5.3.7.11-SNAPSHOT does NOT fix this problem. I only thought it was fixed because someone on my team had completely turned off Docker logging without telling me.

So, Jira ticket time?

Yep, file a bug and add the labels regression and logging.

I just wanted to mention for any future readers of this thread:

The source of the problem was a custom jar in our project. That jar contained a log4j.properties file that defined a ConsoleAppender. The jar is old and its code is no longer executed, but somehow newer versions of Lucee pick up those log4j settings from the classpath and start sending debug-level logging to the console. Elastic Beanstalk then captured the console output and logged it to files that eventually used up all of the available disk space.
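
For future readers, the offending configuration looked roughly like the following. This is an illustrative reconstruction, not the exact file from our jar: a root logger at DEBUG wired to a ConsoleAppender is all it takes, and the “DEBUG - message” format in the log sample above is consistent with a simple pattern layout like this one. You can check your own jars for bundled configs with something like unzip -l yourlib.jar | grep log4j (yourlib.jar being whatever jars you deploy).

```
# Illustrative log4j.properties (a reconstruction, not our exact file).
# Bundled inside any jar on the classpath, a config like this routes all library logging
# (JetS3t, httpclient, etc.) to stdout at DEBUG level, which Elastic Beanstalk then writes to disk.
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%-5p - %m%n
```

Removing the bundled file (or overriding it with a quieter configuration, as sketched earlier in the thread) should stop the console output.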