It’s a special case. Windows caches reads aggressively but writes through to disk for safety, and not even auto-adapting settings can cover every workload.
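For context, the write policy is a per-handle choice in the Win32 API. Here is a minimal sketch (Python via ctypes; the kernel32 calls and flag values are the documented ones, the path is hypothetical) of an application opting into write-through while leaving the read cache alone:

    import ctypes
    from ctypes import wintypes

    # Flag values from winnt.h / the CreateFileW documentation.
    GENERIC_WRITE = 0x40000000
    CREATE_ALWAYS = 2
    FILE_ATTRIBUTE_NORMAL = 0x00000080
    FILE_FLAG_WRITE_THROUGH = 0x80000000  # writes reach the disk before returning
    FILE_FLAG_NO_BUFFERING = 0x20000000   # would bypass the system cache entirely

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateFileW.restype = wintypes.HANDLE
    kernel32.CreateFileW.argtypes = [
        wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD,
        wintypes.LPVOID, wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE,
    ]
    kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

    # Reads on this handle still go through the cache; writes go straight through.
    handle = kernel32.CreateFileW(
        "D:\\scratch\\test.tmp",  # hypothetical path
        GENERIC_WRITE,
        0,                        # no sharing
        None,
        CREATE_ALWAYS,
        FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
        None,
    )
    if handle == ctypes.c_void_p(-1).value:  # INVALID_HANDLE_VALUE
        raise ctypes.WinError(ctypes.get_last_error())
    kernel32.CloseHandle(handle)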
Caching like this has a longer history in the data center. FancyCache, since renamed PrimoCache, appeared as a reaction to Linux’s bcache (written by a Googler), which in turn cloned earlier designs to bring Linux up to par with the likes of Dillon’s DragonFly BSD swapcache and, of course, ZFS’s tiered and metadata caching, which is slightly different but serves the primary use case and is what they want to bring btrfs up to. Of course, the best would be no discrete memory hierarchy at all, in principle. Like the System/38 (AS/400), where there isn’t even a concept of closing a program; they all live happily in unified, secure, capability-based memory forever. I thought NVMe would bring us closer to that forgotten utopia; instead, here we are.
So in our case Chia writes a lot but then discards a lot! With a sufficiently large cache we can absorb those writes and only write out the final result.
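As a toy illustration (my own sketch, not PrimoCache’s actual algorithm) of why a deferred-write cache pays off here: any block discarded before the flush deadline never touches the disk at all.

    import time

    class DeferredWriteCache:
        """Toy write-back cache: dirty blocks sit in RAM until a flush deadline."""

        def __init__(self, flush_delay_s):
            self.flush_delay_s = flush_delay_s
            self.dirty = {}          # block id -> (data, time it was dirtied)
            self.bytes_flushed = 0   # the only I/O the disk ever sees

        def write(self, block, data):
            self.dirty[block] = (data, time.monotonic())  # absorbed in RAM

        def discard(self, block):
            # File deleted before the deadline: the write simply never happens.
            self.dirty.pop(block, None)

        def flush_expired(self):
            now = time.monotonic()
            for block, (data, stamp) in list(self.dirty.items()):
                if now - stamp >= self.flush_delay_s:
                    self.bytes_flushed += len(data)
                    del self.dirty[block]

    # Chia-like pattern: lots of sort/merge scratch, one surviving result.
    cache = DeferredWriteCache(flush_delay_s=60.0)
    for i in range(1000):
        cache.write(("scratch", i), b"x" * 4096)
    for i in range(1000):
        cache.discard(("scratch", i))        # temp files deleted before any flush
    cache.write(("plot", 0), b"x" * 4096)    # the final file

    cache.flush_delay_s = 0.0                # pretend the deadline has passed
    cache.flush_expired()
    print(cache.bytes_flushed)               # 4096: only the final file is written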
This is a quick comparison for k=25, which finishes quicker but otherwise has the same characteristics, pushing the same GB/hour as k=32. Average seconds per 600 MB micro-plot for a run of 36 jobs, 12 in parallel on 12 cores:
So no difference in time at all. But only a tenth of the bytes were written, just the final files, compared to 239,194,228,224 bytes in the test without it.
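The ratio checks out against the numbers above (a back-of-the-envelope check, assuming decimal megabytes for the plot size):

    total_without_cache = 239_194_228_224      # bytes written in the uncached test
    final_files = 36 * 600 * 1000**2           # 36 micro-plots x 600 MB each
    print(final_files / total_without_cache)   # ~0.09, i.e. roughly a tenth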
PrimoCache is very dumb in its caching strategies, and it’s brittle and prone to data loss too. But for this case, or for compiling many small files… I think Windows could be tortured into supplying something like this by itself, but it isn’t available to mortals.
Of course, for k=32 everything is pushed out of memory outright, especially with parallel throughput, so there are few savings; and considering there’s no gain in time, it makes no sense at all for real-world plots, outside of terabyte-memory-equipped datacenter blades.
Of course, a world where everything is designed to run as tidily as StackOverflow, without monsters behind the scenes, would be the loveliest.