Oh, wow, didn’t expect that sort of response. Is this the official forum? Where it says ‘Forum Leader’ is that like for the whole forum?
Saying ‘is not advisable’ with no reason is kinda … meaningless I guess.
Why is it not advisable to mess about with buckets? Will it do bad things to my computer? Why is it there? What is it you know that I don’t about this? Probably bucket loads (pew-pew) since I am less than 3 days in and only taking a moderate interest.
All I know is they don’t put buttons on things unless you are meant to press them … OK, it’s not a button it’s a value but I need to touch it and play with it and see what it does.
Anyway, if we all stuck to the rules, GFX cards wouldn’t be running 1100 over stock speed at 20-60% of power.
And no, you really don’t need to answer most of that, just curious as to why and thought ‘Why?’ was a bit short.
To give a bit more context about the RAM (I can’t speak to the buckets): there are a handful of relatively recent YouTube videos and posts here on the forums of folks doing tests with

- default RAM
- 4 GB
- 8 GB
- 32 GB

and the speed increase is barely noticeable.
I, me myself, personally, tested Default/4/8 with 1/2/3/4 parallel plots and noticed nothing.
I think the principal reason is that the disk is the bottleneck in almost every case.
Barring some setup with a 2 GHz CPU and 8 GB of RAM, it’s always the disk (even the Samsung 980 Pro) that gets choked up.
I’m growing convinced the only way to MAX out CPU during a plotting run is to get your hands on some $1500-2000 enterprise PCIe 4.0 SSD - otherwise it’s always the disk that creates wait time.
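To put rough numbers on that, here is a back-of-envelope sketch. The ~1.3-1.8 TB of temp I/O per k=32 plot is an assumed community ballpark, not a figure from this thread; the point is that raw sequential transfer accounts for well under an hour on any SSD, so the multi-hour plot times come from the small, scattered sort I/O that keeps the drive busy.

```python
# Assumed ballpark (community figure, not measured in this thread):
# a k=32 plot does on the order of 1.3-1.8 TB of temp-disk I/O.
temp_io_tb = 1.6  # midpoint of the assumed range

# Pure sequential-transfer time at a few typical drive speeds.
for drive, mb_per_s in [("SATA SSD", 500), ("PCIe 3.0 NVMe", 2000), ("PCIe 4.0 NVMe", 5000)]:
    hours = temp_io_tb * 1_000_000 / mb_per_s / 3600
    print(f"{drive}: ~{hours:.2f} h of raw transfer")
```

Since even a SATA SSD could stream that much data in about an hour, 6+ hour plot times mean the drive spends most of its time waiting on scattered sort reads and writes rather than sequential throughput, which is why the faster (especially enterprise) SSDs shave off wait time.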
I think there isn’t enough data to fill each of 128 buckets; only about 3.2 GB gets used. Theoretically, decreasing the number of buckets to 64 will utilize more memory and increase plotting speed.
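If that theory is right, the usable buffer should scale inversely with the bucket count. A quick sketch of that model (the 3.2 GB figure is the one quoted above; the one-bucket-at-a-time inverse scaling is my assumption, not something taken from the Chia docs):

```python
# Model (assumption): the plotter sorts one bucket at a time, so the memory
# it can actually use scales inversely with the number of buckets.
ram_at_128_gb = 3.2  # usable memory at 128 buckets, per the figure above

for buckets in (128, 64, 32):
    print(f"{buckets} buckets: ~{ram_at_128_gb * 128 / buckets:.1f} GB")
```

Those estimates land close to the roughly 4/7/13 GB that others in this thread report actually being consumed at 128/64/32 buckets.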
> I’m growing convinced the only way to MAX out CPU during a plotting run is to get your hands on some $1500-2000 enterprise PCIe 4.0 SSD - otherwise it’s always the disk that creates wait time.
I’m trying to get around that by also having some jobs plot directly to HDD. They are slow, but my concurrent-job count and plots per day are higher than when relying only on the NVMe. Still testing to see what the limits are, though.
Sorry if I missed something, but did someone actually test how much RAM a plot can actually consume?
I usually don’t see any more RAM consumed in Task Manager when I increase the amount available in the GUI and don’t change the buckets.
Here are my top 10 single-plot times. As usual, this is running in the background on my desktop, so other things I’m doing or running on the machine can have some impact.
Specs: Core i7-6700 3.4 GHz (4C/8T), 16 GB RAM, Samsung 870 EVO 1TB SATA temp
| Phase1Cpu | Phase2Cpu | Phase3Cpu | Phase4Cpu | Phase1Time | Phase2Time | Phase3Time | Phase4Time | CopyTime | TotalTime | Threads | Buffer | Buckets |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2.826 | 0.976 | 0.981 | 0.86 | 02h 17m 16s | 01h 03m 22s | 02h 30m 56s | 11m 22s | 14m 36s | 06h 02m 58s | 6 | 13600 MB | 64 |
| 2.759 | 0.978 | 0.964 | 0.815 | 02h 30m 13s | 01h 05m 22s | 02h 25m 22s | 11m 42s | 20m 10s | 06h 12m 40s | 6 | 3600 MB | 128 |
| 2.661 | 0.861 | 0.863 | 0.521 | 02h 24m 51s | 01h 11m 02s | 02h 37m 32s | 16m 15s | 15m 26s | 06h 29m 41s | 6 | 6780 MB | 128 |
| 2.757 | 0.979 | 0.981 | 0.852 | 02h 35m 48s | 01h 09m 02s | 02h 44m 02s | 13m 38s | 18m 24s | 06h 42m 32s | 6 | 6900 MB | 64 |
| 2.128 | 0.909 | 0.861 | 0.666 | 02h 39m 42s | 01h 28m 51s | 02h 40m 39s | 12m 39s | 15m 28s | 07h 01m 52s | 4 | 6780 MB | 128 |
| 2.302 | 0.977 | 0.987 | 0.818 | 03h 02m 03s | 01h 11m 40s | 02h 38m 17s | 12m 49s | 15m 26s | 07h 04m 51s | 6 | 13600 MB | 32 |
| 2.195 | 0.881 | 0.912 | 0.747 | 02h 38m 12s | 01h 16m 13s | 02h 59m 42s | 14m 15s | 15m 30s | 07h 08m 23s | 4 | 8192 MB | 64 |
| 1.896 | 0.984 | 0.965 | 0.845 | 03h 03m 06s | 01h 11m 43s | 02h 42m 24s | 11m 57s | 13m 46s | 07h 09m 11s | 3 | 3400 MB | 128 |
| 2.147 | 0.908 | 0.904 | 0.661 | 02h 52m 49s | 01h 17m 15s | 03h 02m 40s | 19m 29s | 15m 26s | 07h 32m 14s | 4 | 9000 MB | 64 |
| 1.508 | 0.883 | 0.876 | 0.625 | 03h 08m 21s | 01h 17m 00s | 02h 55m 20s | 17m 41s | 15m 28s | 07h 38m 23s | 2 | 6780 MB | 128 |
I did manage to score a top time with 64 buckets, but only when giving it a lot more RAM (@leadfarmer, there’s that run you asked for).
Agreed, that initial response was … less than optimal. Anyway, I have played with buckets and memory in Windows. Short answer: it doesn’t matter much … but for fun, try it; nothing terrible happens to your PC. In summary, fewer buckets means more memory use and minor speed variations. I gave the test plots at every bucket count (32/64/128) 14 GB of memory to use. They took, respectively (I’m rounding up), 13, 7, and 4 GB; the remainder was not used. ‘Winners’? 64 and 128 were it. 32 seemed to lag.
I started by doing single test plots at each bucket size. Some were pretty fast, I thought. Then I did a mashup of all three sizes together (10 total). Surprise: they all came out at about the same time when run together (variable, up to a 40-minute difference, with, as I mentioned, 32 buckets lagging). So it’s back to 128 buckets / 4 GB as suggested.
Thanks for running that one. As I suspected. For sure not worth it to dedicate that much RAM, but good to know we are getting similar results with tons of RAM + 64 buckets. So if someone does have loads of RAM, they should run 64 buckets!
The difference is almost negligible, though. If you subtract the difference in the copy phase (which is certainly due to external factors, since buckets etc. play no role there), there is very little difference between the 64/13600 and the 128/3600 runs. Certainly not enough to draw any conclusions, as the variance from doing other things on the machine would be too high for such a minor difference with only 1 sample each way.
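For concreteness, here is that subtraction, using the total and copy times of the two runs as listed in the table above:

```python
# Total and copy times for the two runs, transcribed from the table above.
def to_sec(h=0, m=0, s=0):
    return h * 3600 + m * 60 + s

runs = {
    "64 buckets / 13600 MB": (to_sec(6, 2, 58), to_sec(m=14, s=36)),
    "128 buckets / 3600 MB": (to_sec(6, 12, 40), to_sec(m=20, s=10)),
}

# Subtract the copy phase, which buckets/buffer do not influence.
for name, (total, copy) in runs.items():
    print(f"{name}: {(total - copy) / 60:.1f} min excluding copy")
```

That works out to roughly 348 vs 352 minutes: about a four-minute gap over a six-hour plot, well within run-to-run noise on a desktop that is also doing other work.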
I have been experimenting with buckets for the last 24 hrs.
System is a Ryzen 3900X, 32 GB CL16 3666 (OC) RAM, on an MSI X570 MEG Ace (3 NVMe M.2 slots, with WD SN750 1TB drives in the chipset slots … and they are faster than the HP EX920 in the M.2_1 slot).
Sample disks:
4TB Seagate from 2018 (a cold spare I had, CMR) … so slow … still running.
But phase 2, which is a more apples-to-apples comparison given the number of concurrent plots (I had too many running in phase 1 yesterday and cancelled one), is faster with 64 buckets by 33 min.
2 separate WD SN750 1TB drives:
- 1 at 128 buckets / 4096 MB RAM
- 1 at 64 buckets / 8000 MB RAM (peak RAM usage under 8 GB, btw)
I use Swar. Had 8 jobs running at once, with a 10-minute stagger between jobs (1 job/disk) and a max of 3 in stage one. The 64-bucket SN750 1TB job started first, and 64 was passed by the 128-bucket job.
ATM, TL;DR: 64 buckets > for HDD if you have extra RAM (but not 100% scientific / identical Swar jobs).
128 buckets > for NVMe (and probably SATA) SSD.
The 128-bucket result is the more scientifically tested one: I had 2 samples of 128 passing 64 in the same Swar config job.