Hi there, professionals!
I did my research, tried making some plots, and now I'm sitting at my new HW and I don't know what the optimal settings are.
Yesterday I tried to run 10 queues (7 on the 2 TB temp drive, 3 on the 1 TB temp drive) at the same time, but the finish time was over 40,000 sec (bad).
Settings were: 3800 MB RAM, 2 threads per plot.
I am using the GUI app.
Ryzen 9 3900X
48 GB RAM @ 3000 MHz
1x 2 TB M.2 (Gen3)
1x 1 TB M.2 (Gen3)
I was thinking of trying 4 threads (per plot) and running parallel plots (5-6 at once) with about a 180-minute stagger (phase 1); after that, 20-24 of the processor's threads should be in use.
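The staggered launch plan above can be sketched out to see when each queue would start. This is just my own illustration (the `stagger_schedule` name is made up, not part of any plotting tool):

```python
from datetime import datetime, timedelta

def stagger_schedule(n_queues, stagger_minutes, start=None):
    """Launch times for n plot queues started with a fixed delay
    between them (e.g. the 180-minute stagger described above)."""
    start = start or datetime.now()
    return [start + timedelta(minutes=i * stagger_minutes)
            for i in range(n_queues)]

# Example: 6 queues, 180-minute stagger, starting at 08:00.
for t in stagger_schedule(6, 180, datetime(2021, 5, 5, 8, 0)):
    print(t.strftime("%H:%M"))
```

The idea of the stagger is that phase 1 (the CPU-heavy part) of each queue finishes before the next one begins, so the queues spend most of their time in different phases and don't all hammer the CPU and temp drive at once.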
Can anybody with more experience help me optimize it?
Thanks for any reply…
No professional here, but I'll tell you how I do it on my i9-10850K.
I have 32 GB RAM and only a 1 TB Gen3 drive.
I'm running 4 plots in parallel with 4 threads and 3380 MB RAM each (the usual setup), with a 10-minute stagger between them. That gets me 4 plots done in 5 hours.
Maybe try something like that for a start and see.
Hope I was of some help (also a GUI user).
Hello! Plotting is limited by three components: CPU, RAM, and temp space. Use the rules below to find the limits of your system:
- CPU limit: (Cores + Threads) / 2
- RAM limit: (Total System RAM in MB) / 3400
- Temp space limit: (Total Temp Space in GB) / 256 GB
Reply back with the limits of each component.
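These three rules can be computed in one go; the smallest of the three limits is your maximum number of parallel plots. A quick sketch (the function name is mine, and the 3400 MB / 256 GB per-plot figures are the rule-of-thumb values from the rules above, for a standard k=32 plot):

```python
def plot_limits(cores, threads, ram_mb, temp_gb):
    """Rule-of-thumb parallel-plot limits from the three rules above.
    Assumes ~3400 MB RAM and ~256 GB temp space per k=32 plot."""
    cpu_limit = (cores + threads) // 2
    ram_limit = ram_mb // 3400
    temp_limit = temp_gb // 256
    # The tightest constraint caps how many plots can run in parallel.
    return min(cpu_limit, ram_limit, temp_limit)

# Example: 12-core/24-thread CPU, 48 GB RAM, ~3 TB of temp space.
print(plot_limits(12, 24, 48 * 1024, 3000))
```

Note these are upper bounds, not targets; as the rest of this thread shows, running at the limit can still be slower overall than a smaller, staggered setup.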
- CPU limit: (Cores + Threads) / 2 = **12**
- RAM limit: (Total System RAM in MB) / 3400 = **14**
- Temp space limit: (Total Temp Space in GB) / 256 GB = **10**
So yes, that means I can do 10 plots at once, but the plotting time with that setup was very slow (40K sec)… (Time for phase 1 = 17692.607 seconds. CPU (140.740%))
Now I'm running 3 plots in parallel across 3 queues (all three with a 90-minute delay, 4000 MB / 4 threads per plot), and phase 1 is almost twice as fast. But CPU load sits between 50-80%, so I think I'm not using my full potential.
Time for phase 1 = 9895.486 seconds. CPU (204.420%) Wed May 5 15:22:43 2021
Time for phase 1 = 9771.293 seconds. CPU (203.750%) Wed May 5 15:21:24 2021
Time for phase 1 = 9970.187 seconds. CPU (203.170%) Wed May 5 15:26:04 2021
It looks like there is some bottleneck…
2nd plot in 1st queue
Time for phase 1 = 12497.629 seconds. CPU (191.130%) Wed May 5 17:36:06 2021
2nd plot in 2nd queue
Time for phase 1 = 12255.946 seconds. CPU (192.900%) Wed May 5 17:32:50 2021
2nd plot in 3rd queue
Time for phase 1 = 12554.653 seconds. CPU (191.080%) Wed May 5 17:39:12 2021
What are the brand and model of the NVMe drives you are using?
Viper VPN100 - 2TB
ADATA XPG SX8200 Pro 1TB
It looks like the SX8200 suffers a performance loss after its SLC cache fills up.
I'm not sure if the same thing happens with the VPN100, but I suspect it does. You can test it with iometer if you want to find out. Each plot writes about 1.4 TB, so it's easy to overrun the cache when plotting in parallel.
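A back-of-the-envelope check makes the cache-overrun point concrete. The 1.4 TB written per plot comes from the post above; the SLC cache size below is a hypothetical placeholder, since neither drive's actual dynamic cache size is given in this thread:

```python
# Temp writes per plot, from the figure quoted above (~1.4 TB = 1400 GB).
TEMP_WRITES_PER_PLOT_GB = 1400

def total_temp_writes_gb(parallel_plots):
    """Total temp-drive writes when several plots share one drive."""
    return parallel_plots * TEMP_WRITES_PER_PLOT_GB

assumed_slc_cache_gb = 200  # hypothetical example value, not a spec

# With 7 plots on the 2 TB drive, writes dwarf any plausible SLC cache,
# so most writes land at the (much slower) post-cache TLC speed.
print(total_temp_writes_gb(7), ">>", assumed_slc_cache_gb)
```

So even a generous cache only absorbs a small fraction of a parallel workload's writes, which is consistent with the slowdown on the second plot in each queue above.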
There are more parameters than just service life and speed…
I can still exchange those disks thanks to the 14-day statutory return period…
So you are recommending Samsung drives? They have a lower TBW rating than the Viper VPN100…
Or are there any others you can recommend?
Thanks for the answer.
I'm still trying to find the answer to that myself; SSD vendors don't advertise the sustained-write metric that iometer measures on a drive. From the graph you can see that the 970 PRO stays consistent, but its TBW rating is only 1200 (which is still a lot).
Thank you very much. So I'd rather exchange that Viper for a Samsung Pro, test it, and post the results afterwards…