OK, I can't claim this as fact since I've only played with this for two days, but here goes:
128 buckets and default RAM:
Total time = 17832.250 seconds. CPU (121.360%) Mon May 24 19:13:24 2021
64 buckets and 8000 MiB:
Total time = 14897.512 seconds. CPU (138.340%) Wed May 26 15:30:39 2021
That was a solo plot. 128 GB DDR4 3200, AMD 5800X, FireCuda 2TB (which is also the system disk), outputting to a 7,200 RPM SATA drive.
Most other tests show almost no impact from extra RAM, and messing with buckets is not advisable.
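For anyone wondering where these knobs actually live: assuming the official Chia CLI plotter, the bucket count and RAM buffer are the `-u` and `-b` flags on `chia plots create`. A minimal sketch of building such a command from Python (the paths and values are placeholders, not recommendations):

```python
import subprocess  # only needed if you actually launch the command

def build_plot_cmd(buckets=128, buffer_mib=3389, threads=2,
                   tmp_dir="/mnt/nvme/tmp", dst_dir="/mnt/hdd/plots"):
    """Assemble a `chia plots create` invocation.

    -u sets the sort bucket count, -b the RAM buffer in MiB
    (3389 MiB is the plotter's default buffer).
    """
    return [
        "chia", "plots", "create",
        "-k", "32",              # plot size (k=32, the standard size)
        "-u", str(buckets),      # number of sort buckets
        "-b", str(buffer_mib),   # RAM buffer in MiB
        "-r", str(threads),      # thread count
        "-t", tmp_dir,           # temp (plotting) directory
        "-d", dst_dir,           # final destination directory
    ]

# The 64-bucket / 8000 MiB run from the post above would look like:
cmd = build_plot_cmd(buckets=64, buffer_mib=8000, threads=4)
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

So "messing with buckets" is just changing `-u`; nothing lower-level than that is being touched.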
Oh wow, I didn't expect that sort of response. Is this the official forum? Where it says 'Forum Leader', is that for the whole forum?
Saying 'is not advisable' with no reason given is kinda meaningless, I guess.
Why is it not advisable to mess about with buckets? Will it do bad things to my computer? Why is the option there? What do you know about this that I don't? Probably bucket loads (pew-pew), since I'm less than three days in and only taking a moderate interest.
All I know is they don't put buttons on things unless you're meant to press them. OK, it's not a button, it's a value, but I need to touch it, play with it, and see what it does.
Anyway, if we all stuck to the rules, GFX cards wouldn't be running 1100 over stock speed at 20-60% of power.
And no, you don't really need to answer most of that; I'm just curious as to why, and thought 'Why?' on its own was a bit short.
To give a bit more context about the RAM (I can't speak to the buckets): there are a handful of relatively recent YouTube videos, and posts here on the forums, of folks running tests with extra RAM, and the speed increase is barely noticeable.
I, me myself, personally, tested Default/4/8 with 1/2/3/4 parallel plots and noticed no difference.
I think the principal reason is that the disk is the bottleneck in almost every case.
Barring some 2 GHz CPU with 8 GB of RAM, it's always the disk (even a Samsung 980 Pro) that gets choked up.
I'm growing convinced the only way to max out the CPU during a plotting run is to get your hands on a $1,500-2,000 enterprise PCIe 4.0 SSD; otherwise it's always the disk that creates wait time.
I think there just isn't enough data to fill each of the 128 buckets, so only about 3.2 GB gets used. Theoretically, decreasing the bucket count to 64 will utilize more memory and could increase plotting speed.
I've just started 64- and 32-bucket runs to get more experience =))
“I'm growing convinced the only way to max out the CPU during a plotting run is to get your hands on a $1,500-2,000 enterprise PCIe 4.0 SSD; otherwise it's always the disk that creates wait time.”
I'm trying to get around that by also having some jobs plot directly to HDD. They're slow, but my concurrent job count and plots per day are higher than when relying only on the NVMe. Still testing to see where the limits are, though.
Seems no speed difference between 64 and 32 buckets.
Plot size is: 32
Buffer size is: 8000MiB
Using 64 buckets
Using 4 threads of stripe size 65536
Total time = 11561.171 seconds. CPU (148.130%)
Plot size is: 32
Buffer size is: 16000MiB
Using 32 buckets
Using 4 threads of stripe size 65536
Total time = 11820.506 seconds. CPU (148.930%)
I couldn't do a direct comparison with 128 buckets for now: I ran 64 and 32 in parallel (so write saturation affected both) but 128 alone.
Sorry if I missed something, but has someone actually tested how much RAM a plot can actually consume?
I usually don't see any more RAM consumed in Task Manager when I increase the amount available in the GUI and don't change the buckets.
Here are my top 10 single-plot times. As usual, this is running in the background on my desktop, so other things I'm doing or running on the machine can have some impact.
Specs: Core i7-6700 3.4 GHz (4C/8T), 16 GB RAM, Samsung 870 EVO 1TB SATA temp
| Phase 1 | Phase 2 | Phase 3 | Total |
| --- | --- | --- | --- |
| 02h 17m 16s | 01h 03m 22s | 02h 30m 56s | 06h 02m 58s |
| 02h 30m 13s | 01h 05m 22s | 02h 25m 22s | 06h 12m 40s |
| 02h 24m 51s | 01h 11m 02s | 02h 37m 32s | 06h 29m 41s |
| 02h 35m 48s | 01h 09m 02s | 02h 44m 02s | 06h 42m 32s |
| 02h 39m 42s | 01h 28m 51s | 02h 40m 39s | 07h 01m 52s |
| 03h 02m 03s | 01h 11m 40s | 02h 38m 17s | 07h 04m 51s |
| 02h 38m 12s | 01h 16m 13s | 02h 59m 42s | 07h 08m 23s |
| 03h 03m 06s | 01h 11m 43s | 02h 42m 24s | 07h 09m 11s |
| 02h 52m 49s | 01h 17m 15s | 03h 02m 40s | 07h 32m 14s |
| 03h 08m 21s | 01h 17m 00s | 02h 55m 20s | 07h 38m 23s |
I did manage to score a top time with 64 buckets, but only when giving it a lot more RAM (@leadfarmer, there's that run you asked for).
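If you want to sort or average times in this `NNh NNm NNs` format rather than scanning by eye, a tiny helper is enough (assumes the exact layout shown in the table):

```python
import re

def to_seconds(t):
    """Convert a time like '02h 17m 16s' to whole seconds."""
    h, m, s = (int(x) for x in re.match(r"(\d+)h (\d+)m (\d+)s", t).groups())
    return h * 3600 + m * 60 + s

# A few of the total times from the table above.
times = ["06h 02m 58s", "06h 12m 40s", "06h 29m 41s"]
best = min(times, key=to_seconds)
print(best)  # 06h 02m 58s
```

The same key function works for `sorted(...)` or for computing an average with `sum(map(to_seconds, times)) / len(times)`.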
Same here. I'm reading about scrappier solutions that I might be able to add to my build (e.g. small 10k RPM SAS drives).
Keep us posted!
Agreed, that initial response was … less than optimal. Anyway, I have played with buckets and memory in Windows. Short answer: it doesn't matter much… but for fun, try it; nothing terrible happens to your PC. In summary, fewer buckets means more memory used, with minor speed variations. I gave the test plots at all bucket counts (32, 64, 128) 14 GB of memory to use. They took, respectively (rounding up), 13, 7, and 4 GB. The remainder went unused. 'Winners'? 64 and 128. 32 seemed to lag.
I started by doing single-bucket-size test plots. Some were pretty fast, I thought. Then I did a mashup of all three sizes together (10 plots total). Surprise: run together, they all finished in about the same time (variable, up to a 40-minute difference, with 32 buckets lagging, as I mentioned). So it's back to 128 buckets / 4 GB, as suggested.
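Treating the roughly 13/7/4 GB figures above as data points, RAM use looks close to inversely proportional to bucket count: buckets × GB stays in the same ballpark for all three runs. A quick back-of-envelope check (the 448 constant below is just the middle product from these numbers, nothing official):

```python
# Observed (buckets, approx peak GiB used) from the 14 GB test above.
observed = [(32, 13), (64, 7), (128, 4)]

# If RAM use ~ sort_data / buckets, then buckets * ram should be near-constant.
products = [b * g for b, g in observed]
print(products)  # [416, 448, 512] -- same ballpark, consistent with inverse scaling

def est_ram_gib(buckets, total_gib=448):
    """Rough rule of thumb: halving the bucket count doubles the RAM needed."""
    return total_gib / buckets

print(est_ram_gib(64))  # 7.0
```

Which matches the thread's takeaway: 128 buckets fits comfortably in the default buffer, while 64 or 32 only make sense if you have spare RAM to burn.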
My translation of his response is: "take an extra second and do a search; this topic has been exhaustively tested and documented". But that's just me.
Thanks for running that one. As I suspected. Definitely not worth dedicating that much RAM, but good to know we're getting similar results with tons of RAM + 64 buckets. So if someone does have loads of RAM, they should run 64 buckets!
The difference is almost negligible, though. If you subtract the difference in the copy phase (which is certainly due to external factors, since buckets etc. play no role there), there is very little difference between the 64/13600 and the 128/3600 runs. Certainly not enough to draw any conclusions: with just one sample each way and such a minor gap, the variance from doing other stuff on the machine would swamp it.
Yeah you won’t see me running 64 buckets. 128 is clearly superior for 99% of people.
I have been experimenting with buckets for the last 24 hours.
System: Ryzen 3900X, 32 GB CL16 3666 (OC) RAM, MSI X570 MEG Ace (3 NVMe M.2 slots, with WD SN750 1TB drives in the chipset slots… and they're faster than the HP EX920 in the M.2_1 slot).
Also a 4 TB Seagate from 2018 (a cold spare I had, CMR)… so slow… still running.
But phase 2 (a more apples-to-apples comparison given the number of concurrent plots; I had too many running yesterday in phase 1 and cancelled one) is faster with 64 buckets by 33 minutes.
2 separate WD SN750 1TB drives:
- 1 at 128 buckets / 4096 MiB RAM
- 1 at 64 buckets / 8000 MiB RAM (peak RAM usage under 8 GB, btw)
I use Swar. I had 8 jobs running at once, with a 10-minute stagger between jobs (1 job/disk) and a max of 3 in stage one. The 64-bucket SN750 1TB job started first, and it was passed by the 128-bucket job.
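For reference, a stagger like that just shifts each job's start by a fixed offset; the phase caps are a separate limit on top. A trivial sketch of the start schedule (not Swar's actual code, just the arithmetic it implies):

```python
def start_times(n_jobs, stagger_min=10):
    """Minute offsets at which each job kicks off with a fixed stagger."""
    return [i * stagger_min for i in range(n_jobs)]

# The setup above: 8 jobs, 10 minutes apart.
print(start_times(8))  # [0, 10, 20, 30, 40, 50, 60, 70]
```

With phase 1 running a couple of hours per plot, a 10-minute stagger means most of those 8 jobs overlap in phase 1, which is why the "3 max in stage one" cap matters: it holds back later starts until earlier plots leave phase 1.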
ATM, TL;DR: 64 buckets > 128 for HDD if you have extra RAM (but not 100% scientific; the Swar jobs weren't identical).
128 buckets > 64 for NVMe (and probably SATA) SSD.
The SSD result was tested with a more rigorous methodology: I had 2 samples of 128 passing 64 in the same Swar config job.