We are using the ImageResizer module with the DiskCache plugin and the following configuration:
<diskCache autoClean="true" hashModifiedDate="true" subfolders="256" cacheAccessTimeout="15000"/>
<cleanupStrategy startupDelay="00:05"
minDelay="00:00:20"
maxDelay="00:05"
optimalWorkSegmentLength="00:00:04"
avoidRemovalIfCreatedWithin="12:00"
avoidRemovalIfUsedWithin="1.00:00"
prohibitRemovalIfUsedWithin="00:05"
prohibitRemovalIfCreatedWithin="00:10"
maximumItemsPerFolder="4096"/>
With this cleanup configuration we have 253 subfolders with 8,000-10,000 files each and 3 subfolders with 200,000-300,000 files (in the imagecache folder). Cleanup appears to be working only partially. What could be the reason for this behaviour, and how can the cleanup process be monitored?
It sounds like you have over 3 million active files; with that many files you need to raise the 'subfolders' setting to 16384. If your goal is to constrain the cache size, reduce the 'maximumItemsPerFolder' value instead.
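Based on your posted configuration, the element would become something like this (a sketch; keep your other attributes as they are):

<diskCache autoClean="true" hashModifiedDate="true" subfolders="16384" cacheAccessTimeout="15000"/>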
NTFS gets very slow with large numbers of files in a single directory. It's likely that those 3 directories have grown so large that a directory listing causes a timeout or error, so they can't be cleaned automatically.
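To monitor cleanup, one option is to periodically count the files per subfolder and watch whether the oversized directories shrink over time. A minimal sketch (the imagecache path is an assumption; adjust it to your deployment):

import os

# Assumed location of the ImageResizer disk cache; adjust to your site.
CACHE_ROOT = r"C:\inetpub\wwwroot\imagecache"

counts = []
for entry in os.scandir(CACHE_ROOT):
    if entry.is_dir():
        # Count only files; the disk cache stores cached images flat
        # inside each hash-bucket subfolder.
        n = sum(1 for f in os.scandir(entry.path) if f.is_file())
        counts.append((entry.name, n))

# Print the largest folders first; these are the cleanup stragglers.
for name, n in sorted(counts, key=lambda t: -t[1])[:10]:
    print(f"{name}: {n} files")

print(f"total subfolders: {len(counts)}, total files: {sum(n for _, n in counts)}")

This only reads directory metadata, so it's safe to run against a live cache, though listing the 200,000+ file folders may itself be slow on NTFS. ImageResizer's diagnostics page (/resizer.debug.ashx) should also confirm that the DiskCache plugin is installed and running.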
Keep in mind that after you delete the cache directory and change the 'subfolders' setting, you may experience high CPU usage until the cache is re-populated. If you have multiple cores and are using the default GDI+ pipeline (WIC and FreeImage are not affected), you may also want to turn on web gardens to match your core count (the application pool's processModel.maxProcesses setting in IIS).
In general, it's a bad idea to constrain your cache so tightly that files are written and then deleted within minutes (which appears to be the case here). If you don't have sufficient local disk space, it's better to use one of the memory cache plugins or a front-end cache like Varnish.