64 buckets vs 128 buckets

Per the above, and also per what I’ve read, the consensus is that buckets are already set optimally; there’s no reason to mess with the buckets setting.


The one place where setting the buckets value smaller does help is if you are plotting on a hard disk. I’ve forgotten the exact numbers, but I believe it was roughly a 10% improvement. I’m guessing this cuts down on the number of seeks.

Oh ok… my 32GB is otherwise a wasted resource… I just want to find a way to utilize it.

I don’t think that’s true any more, though. I think that was only true of older versions of the plotter. Lots of outdated info out there; they have significantly improved the plotter since 1.0.0!


I experimented with buckets and set 64 along with tuning some other params.
My per-plot time has dropped to 8.5 hours from over 12 hours.
I am doing all this in the cloud, where I don’t have NVMe available, but I try to optimize with whatever SSD is available.
My params are
-k 32 -b 7400 -u 64 -n 1 -r 6 -t temp1 -2 temp2
temp1 and temp2 are separate SSD drives.
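
For anyone wanting to reproduce this, the full CLI command would look something like the line below (the directory paths and the -d destination are placeholders I’ve added, not part of the post above):

chia plots create -k 32 -b 7400 -u 64 -n 1 -r 6 -t /path/to/temp1 -2 /path/to/temp2 -d /path/to/plots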

I strongly recommend not messing with the buckets setting. It’s already optimal at default.

In my experience (limited tests), doubling the RAM (6780 MiB) for each plot seemed to have a significant effect on plot times, but giving it even more (8192 MiB) did not yield further improvements. I read that 4608 MiB would be a sweet spot, but I have not tested that yet.

My tests showed almost no improvement when increasing memory. It went from 6:22 to 6:18 …


Hey, I got that same strange result as you, with the 8GB allocation being slower. I also have no direct answer as to why that would be the case.



I have been plotting w/ the GUI with M:6144 T:4 Buckets:64 for a couple of days; 12 staggered on a TR with 80GB mem. It’s early to draw conclusions, but nothing too dramatic has happened. I need to wait some time, as I’ve set some 128-bucket streams going alongside the 64-bucket ones to compare, and it goes… slowly. Memory certainly peaks early in the cycle, then falls considerably and stays much lower overall. One goal was to try to minimize drive access w/ fewer, larger buckets, to preserve SSD life a bit. That does seem to be the case, as the NVMes are generally under 50% busy, more like 10-25%.

One thing I did learn is that you can’t set, say, 160 or 96 buckets; it just goes to 128 when run. Also, as I read more about bucket sort, it seems there are two competing processes: the one that sorts INTO the buckets, and the one that sorts WITHIN the buckets. The former is faster, the latter less so. So amongst all the variables, those two are relatively important.
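
To make those two competing processes concrete, here is a minimal bucket-sort sketch in Python. It is a generic illustration under my own assumptions, not the plotter’s actual code: the pass that distributes entries INTO buckets is a cheap linear scan, while the sort WITHIN each bucket is where most of the comparison work (and the in-memory sort buffer) goes.

    # Minimal bucket-sort sketch (illustration only, not Chia's plotter code).
    def bucket_sort(entries, num_buckets=128, key_bits=32):
        # Only power-of-two bucket counts work here, which echoes the
        # observation above that e.g. 96 or 160 buckets aren't accepted.
        bucket_bits = num_buckets.bit_length() - 1      # 128 buckets -> 7 bits

        # Stage 1: distribute entries INTO buckets by their top bits.
        # A single linear pass -- comparatively cheap.
        buckets = [[] for _ in range(num_buckets)]
        for e in entries:
            buckets[e >> (key_bits - bucket_bits)].append(e)

        # Stage 2: sort WITHIN each bucket.
        # This is where most of the comparison work happens, and each
        # bucket has to fit in the in-memory sort buffer.
        result = []
        for b in buckets:
            result.extend(sorted(b))
        return result

    # Fewer buckets -> each bucket is larger, so the within-bucket sort
    # needs more memory; more buckets -> smaller per-bucket sorts.
    print(bucket_sort([0xF0000000, 0x10, 0x80000001, 0x7FFFFFFF]))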


I’m further on in testing and have some interesting results on 64 vs 128 buckets on a TR 16c/32t. With M:6144 T:4 Buckets:64, the average plot time for 12 at a time was ~11.535 hrs. Today’s result with M:4520 T:4 Buckets:128 is ~13.1 hrs. There is the memory difference, but 64 buckets does appear to be a bit more efficient, depending on whether memory played a part.

The biggest change was in Phase 1: from ~16,500 s with 64 buckets to ~22,500 s with 128. Phase 3 was also up, from ~14,900 s to ~15,500 s.

Just setting some new plots going; I was using 4000, so I’ll try 4608.
It probably won’t be scientific, but I’ll try to keep an eye on it and see how it feels.

Since then I learned that 4608 was the old number for k32 before the algorithm was optimized in 1.0.4, so now it is 3390. I have some test runs with double that, 6780, that suggest a further speedup, but when I tried increasing it further to 8192 MiB this, oddly enough, had a negative effect.


Well, I have ended testing. The last test was to mix 10 plots between 32B, 64B, and 128B. I gave them all 14GB memory and all 4T, so there was no QS for memory reasons. All running together in the mix, they ALL ran in nearly consistent, similar times. So as far as I can tell, fewer buckets buys you nothing and uses considerably more memory per plot. Fewer buckets may take less of a toll on your SSD, but not enough to matter to plot times, it seems. So 4GB, 3T, 128B is what I’m using now. Of course, YRMV.
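
For reference, my reading of that final “4GB, 3T, 128B” configuration as CLI flags would be roughly the line below (the -b value in MiB is my assumption about how the GUI memory field maps; it wasn’t spelled out above):

-k 32 -b 4000 -r 3 -u 128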


And how many plots do you get per day with this config?

I heard reducing buckets → increasing RAM means fewer writes to your Temp drive (which could make it last longer)


I’m currently using 8GB - 4T - 64B and these are my findings:

  • Even though my setup has a 12-core Ryzen, I don’t see any plotting-time benefit from parallelizing many processes, so I’m doing one plot at a time.
  • Reducing the number of buckets and increasing the RAM (if you have it) up to 8GB got me a 15% better plotting time.
  • Fewer buckets mean less TBW on our NVMes, so they will last longer, as @Ryu007 pointed out previously.

I hope it helps!


In my experience (with Madmax at least) changing the bucket size does not affect the total bytes written per plot on temp drives.
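
For anyone who wants to verify this themselves, madmax exposes the bucket count through -u (and, if I recall correctly, -v for the phase 3/4 buckets). The paths and key placeholders below are illustrative, not taken from any post here:

chia_plot -n 1 -r 16 -u 128 -t /mnt/tmp/ -2 /mnt/tmp2/ -d /mnt/plots/ -f <farmer_key> -p <pool_key>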


I think using more, smaller buckets was a trick to manage parallel plotting of 16 or more plots back when we were all using the inbuilt plotter with one core at a time. It makes sense to use more RAM and fewer buckets if you have enough RAM (that is, actual RAM in use by the system rather than allocated to a RAM drive).

Adding my experience with 64 vs 128 buckets: doing 16 four-thread plots in parallel, divided over 4 NVMes (4 per NVMe device) with Tmp2 on enterprise SSDs, on my 20-core (40-thread) dual Xeon rig, it made no difference.
I let the system plot for 4 days in a row with each setting (4 days with 64 buckets & Mem: 8000, then 4 days with 128 buckets & Mem: 4000).
With 64 buckets, the average total plot times were the same as with 128. So in my case there’s no need to bother, and I’ll stick with 128, as there is no statistically relevant difference in the greater scheme of things. 64 buckets just eats more memory and offers me no benefits.
I don’t worry about wear on the devices as they are write-intensive enterprise grade devices I had lying around.
