To be perfectly honest, I am not sure what changed. Nor am I 100% confident that this actually started exactly a month ago; that's just around the time I noticed something was up with this particular unit. It was running fine on both Bullseye and Bookworm... Maybe the question is: what changed a month ago? Did this work fine on an older Bullseye system (which had fim 0.5.3-4) but not on a newer Bookworm one (fim 0.5.3-10)? But those are minor Debian patch releases, and there are no reported bugs in fim upstream.
Are you perhaps serving bigger images than you used to? If there are images dumped straight from a recent cellphone on your server, some of these can consume huge amounts of memory. The 3a+ (especially) does not have lots of memory, and it looks like the fim process got too big.
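To put rough numbers on that: a compressed JPEG is small on disk, but once decoded for display it expands to roughly width × height × bytes-per-pixel. A quick back-of-envelope sketch (the 3-bytes-per-pixel RGB assumption is mine; fim's internal representation may differ):

```python
# Rough estimate of the memory a decoded image occupies in RAM.
# Assumes 3 bytes per pixel (RGB); the viewer's actual per-pixel
# cost may be higher once scaling buffers are included.
def decoded_size_mb(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel / (1024 * 1024)

# A typical 48 MP phone photo (8000 x 6000 pixels):
print(round(decoded_size_mb(8000, 6000)))  # ~137 MB once decoded
```

A couple of photos that size held in a cache at once would easily exhaust the 512 MB on a Pi 3A+.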
However, I think you may be on to something with the "bigger images" theory. I migrated my Nextcloud instance to an LXC container a while back and discovered that in doing so I lost a script I had deployed years ago to resize uploaded images. I hadn't noticed this until a few days ago, when I was checking Nextcloud configs to see whether something weird was going on with davfs mounts. A small subset of images uploaded after that script was lost are uncompressed full-size images (16+ MB). Given that the images are displayed in random order and the time between crashes is also random, I am beginning to think that....
...may be the root cause here. If the random display order hits 2-5 of these larger images in succession, that would produce the OOM condition. I also see that fim caches images in memory.
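For anyone rebuilding a resize step like the one I lost: the core of it is just clamping the longest edge while preserving aspect ratio. A minimal sketch of that calculation (the 2048 px limit and the function name are my own choices, not from the original script; the actual resampling would then be done by an image library such as Pillow or by ImageMagick):

```python
# Compute new dimensions so the longest edge is at most max_edge,
# preserving aspect ratio. Images already within the limit are
# returned unchanged.
def clamp_dimensions(width, height, max_edge=2048):
    longest = max(width, height)
    if longest <= max_edge:
        return width, height  # already small enough
    scale = max_edge / longest
    return round(width * scale), round(height * scale)

print(clamp_dimensions(8000, 6000))  # (2048, 1536)
```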
I've updated my global fimrc to:
Code:
if(_max_cached_images==''){_max_cached_images=1;}
if(_max_cached_memory==''){_max_cached_memory=31250;}

I'm going to run this for a couple of days and see whether the fix holds, but thank you for finding those configs!
Statistics: Posted by skybolt_1 — Sat Aug 02, 2025 5:02 pm