Cfcache max files problem?

Hello,
We keep having a problem with cfcache when there are too many files in a directory (> 30,000).

The file system is EXT4 (Ubuntu 20.04) and Lucee 5.4.8.2.

The problem is that when we try to delete files using a wildcard URL (cfcache delete), they simply aren’t deleted when the directory contains that many files. With only a few thousand files, everything works fine.

We have already split them into different directories, each with its own cfcache (the first three characters of the URL determine the cfcache directory), but we still exceed 30,000 files from time to time.

Does anyone have any ideas?

My only other idea is that if a URL from a cfcache directory with more than 25,000 files needs to be deleted, I simply delete the entire contents of the directory. :frowning:
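A minimal sketch of that fallback in CFML, assuming you know the on-disk path of each cfcache directory (the path and the 25,000 threshold below are illustrative, not from any Lucee API):

```cfml
<cfscript>
// Hypothetical path to one of the split cfcache directories
cacheDir = "/var/cfcache/abc";

// Count the cached files; if the directory has grown past the point
// where a targeted wildcard delete still works, wipe it entirely.
files = directoryList( cacheDir, false, "path" );
if ( arrayLen( files ) > 25000 ) {
    directoryDelete( cacheDir, true ); // recurse = true
    directoryCreate( cacheDir );       // recreate the empty directory
}
</cfscript>
```

The obvious cost is that every URL in that directory gets re-rendered on its next request, so this only makes sense as a last resort when the targeted delete silently fails.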

But maybe someone has a better idea?

Thanks!
Carsten

Is there an exception thrown?

Lucee 7 defaults to not treating the query string as an identifier (to match ACF), which would reduce the number of cached items as well.

https://luceeserver.atlassian.net/browse/LDEV-5722

The other solution would be to use a proper cache provider like Redis, EhCache, etc. cfcache is pretty simple, and rather than us attempting to badly re-invent the wheel, my advice is to leverage one of these tried and trusted solutions.

Thank you very much. I’ll have to check whether EhCache and Redis offer the option of deleting entries from the cache using wildcards. That would be very important in my case.

But maybe someone else has run into this problem and found a solution without switching to a different cache provider.

?

No, sorry, there is no exception.

What happens if you follow these steps:

  1. Use cacheGetAllIDs() to obtain an array of all the IDs in the cache region;
  2. Use cacheRemove() to delete all the cached objects whose ID is in the array.
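The two steps above could look something like this in CFML. This is a sketch, assuming a named cache region (the name `urlcache` and the `/products/*` filter are hypothetical) and that your Lucee version accepts the optional filter and cache-name arguments shown:

```cfml
<cfscript>
// Hypothetical name of the cache region backing this cfcache directory
cacheName = "urlcache";

// 1. Get the IDs in the region. Lucee's cacheGetAllIds() accepts an
//    optional wildcard filter, which lets the cache do the matching
//    instead of pulling every ID back and filtering in CFML.
ids = cacheGetAllIds( "/products/*", cacheName );

// 2. Remove each matching entry; the second argument tells
//    cacheRemove() not to throw if an ID has already expired.
for ( id in ids ) {
    cacheRemove( id, false, cacheName );
}
</cfscript>
```

Whether this is faster than the wildcard cfcache delete on a 30,000-file directory would need testing, since the underlying provider still has to enumerate the region, but it at least gives you an explicit list of IDs to inspect when deletions silently fail.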