I stopped plotting for a while back… but I got a few drives coming in, and figured why not try some MadMax K33s to experiment with GUI v1.3.0. Nothing too fancy: Win10, a Threadripper Pro 3955WX, a worn-out Samsung 980 Pro 1TB as t1, and an XPG Gammix S70 Blade 1TB as t2. I don’t have enough memory for ImDisk to make a K33-sized RAM drive, so a lot could be optimized, but I am seeing ~80 min plot times on K33s.
It’s not a contest, but for fun, what times with what equipment are you getting plotting K33s… or K34s?
Approximately 6 hours, using a Ryzen 9 5950x with a 2 TB Samsung 980 Pro for all temp files.
I have two of the Samsung drives in the PC, so I plot two K34s at the same time. (My OS and blockchain are on yet another NVMe drive.)
Phase 1 and phase 3 consume a lot of CPU cycles (especially phase 1).
A single plot often utilizes 100% of the CPU cycles.
So perhaps my 6 hours could be cut to 4 or 5 hours, if I plotted only one at a time?
But your 80 minutes makes my supposedly powerful rig seem like a snail.
Do you think that if I ran one K34 at a time, and split the temp processing between my two 980 Pro drives, that I could cut my times below 3 hours? (it would have to be below 3 hours to make it worthwhile).
I would not call a Threadripper “Nothing too fancy”.
As you have RAM that is not being used right now, maybe try something like PrimoCache. They offer a 30-day evaluation download.
I used it with MadMax, on a box that had only 64GB of RAM, and it reduced t2 writes by about 50% (and, of course, also sped up the process). Assuming that you have 128GB of RAM, you may save about the same.
Although, I am not sure whether there are open-source projects similar to PrimoCache.
When I first replied with my K34 timing, I was sort of guessing. I was not in front of my Chia rigs.
I just checked, and with two K34 plots running simultaneously, each having their own dedicated temp NVMe drive, I was averaging 5½ hours, and sometimes as long as 5¾ hours for both K34 plots to complete. So I was doing a bit better than my original 6 hour guess. But in 5½ hours I had two new plots.
I just kicked off a new K34 plot, solo, and I am using:
-t ssd1 and -2 ssd2.
Let’s see how much better my timing will be in this configuration.
By the way, I noticed that the -2 ssd2 drive gets hammered right away when the processing starts.
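For reference, a solo run like the one described here maps onto a chia_plot invocation along these lines. This is a sketch, not the poster's actual command: the destination path is a hypothetical placeholder, and the flag set (`-t`, `-2`, `-u`, `-v`, `-r`, `-k`) is what recent MadMax builds accept; check `chia_plot --help` for your version. Assembled as a small Python snippet so the flag-to-drive mapping is explicit:

```python
# Sketch: building the MadMax chia_plot command line for a solo K34 run.
# Temp paths follow the log above (temp1 on H:\, temp2 on I:\); the
# destination D:\plots\ is a hypothetical placeholder.
args = {
    "-k": "34",         # plot size (K34)
    "-r": "32",         # threads
    "-u": "256",        # phase 1 buckets (2^8)
    "-v": "256",        # phase 3+4 buckets (2^8)
    "-t": "H:/",        # temp1: the 1TB drive
    "-2": "I:/",        # temp2: the 500GB drive
    "-d": "D:/plots/",  # final destination (hypothetical)
}

cmd = ["chia_plot"]
for flag, value in args.items():
    cmd += [flag, value]

print(" ".join(cmd))
```

Splitting `-t` and `-2` across two physical drives, as in this run, is what spreads the temp I/O load.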
OK, the K34 is done. Things to note: the 1TB 980 Pro has 2PB written, and the 500GB model shows half the read/write speed of the 1TB in Samsung Magician.
Multi-threaded pipelined Chia k34 plotter
Number of Plots: 1
Crafting plot 1 out of 1 (2022/03/15 17:49:22)
Number of Threads: 32
Number of Buckets P1: 2^8 (256)
Number of Buckets P3+P4: 2^8 (256)
Working Directory: H:\ (1TB)
Working Directory 2: I:\ (500GB)
Plot Name: plot-k34-2022-03-15-17-49-07ea5bb4a8bb…
Phase 1 took 7000.17 sec
Phase 2 took 2584.7 sec
Phase 3 took 3142.42 sec, wrote 87509646088 entries to final plot
Phase 4 took 669.227 sec, final plot size is 461565012613 bytes
Total plot creation time was 13396.7 sec (223.279 min)(3hrs, 43.279 min)
So the K34 has 2x the hash content of a K33; however, it is about 30% slower when comparing the same number of hashes created. Or were there some hardware changes between those two runs?
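For concreteness, here is the arithmetic behind that comparison, using the times quoted in this thread. This is a rough sketch: it treats a K34 as exactly twice the hash content of a K33, and it ignores the fact that the two runs used different t2 drives.

```python
# Rough per-hash-content comparison from the times quoted in this thread.
k33_minutes = 80.0       # the ~80 min K33 plot time
k34_minutes = 223.279    # the solo K34 run logged above

# A k(N+1) plot holds roughly 2x the entries of a k(N) plot, so two
# K33s are about the same amount of hash content as one K34.
equivalent_k33_pair = 2 * k33_minutes            # 160 min of K33 work
slowdown = k34_minutes / equivalent_k33_pair     # ~1.40x the wall time
throughput_drop = 1 - equivalent_k33_pair / k34_minutes  # ~28% less throughput

print(f"K34 took {slowdown:.2f}x as long per unit of hash content")
print(f"i.e. roughly {throughput_drop:.0%} lower throughput")
```

So ~40% more wall time works out to roughly 28% lower throughput, which is consistent with the "about 30% slower" estimate.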
A worn-out Samsung 980 Pro 1TB as t1 and an XPG Gammix S70 Blade 1TB as t2 were used for the K33 run.
A worn-out Samsung 980 Pro 1TB as t1 and an OEM 980 Pro 500GB as t2 were used for the K34 run.
I just restarted a new K34 using the same SSDs as in the K33 run, so as to get an apples-to-apples K33-vs-K34 comparison. Oh yes, I am syncing at the same time as plotting these three plot files. I haven’t used this workstation in a while, but at least that doesn’t change while plotting.
The speed difference between your 1TB 980 Pro and your 500GB 980 Pro might be due to the number of chips on the SSD.
I have seen/read about larger capacity SSDs being faster, due to their controller being able to spread the work out over more areas of its NAND storage chips. The 1TB might have 2x the number of chips, and be able to write half as much to each of them, simultaneously.
As to it having 2PB written…
I don’t think that matters. I have never heard of an SSD slowing down due to use. In fact, I have never heard of one being worn out. Some fail, but not due to the amount of writes.
If no one in the Chia world has had one wear out (I have not seen anyone comment in this forum that that has happened to them), that speaks volumes for the durability and longevity of these SSDs.
If monitoring software is reporting that it has exceeded its life expectancy, or is recommending that it be replaced, etc., I think that it is doing so based on a pre-defined TBW value. It probably gets the TBW value from the manufacturer, and if it sees you have exceeded that value, you get the warning.
I suspect that the manufacturer probably did not know how many TBW their SSD could handle when they released it, and played it safe with a conservative estimate. By now they must know. But why would they change their long published number? They probably have people changing out perfectly good SSDs with new purchases.
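As an aside, the "TB written" figure these tools warn about comes from the drive's SMART data. For NVMe drives, the spec defines the "Data Units Written" counter in units of 1000 × 512 bytes, so converting it is simple arithmetic. The counter value below is hypothetical, chosen to land near the 2PB figure mentioned earlier in the thread:

```python
# Sketch: converting the NVMe SMART "Data Units Written" counter (as
# reported by tools like smartctl) into terabytes written. Per the NVMe
# spec, one data unit is 1000 * 512 = 512,000 bytes.
data_units_written = 4_000_000_000  # hypothetical counter reading

bytes_written = data_units_written * 512_000
tb_written = bytes_written / 1e12

print(f"{tb_written:.0f} TB written")  # prints "2048 TB written" (~2 PB)
```

Comparing that number against the manufacturer's rated TBW is all the monitoring software is really doing.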
I created a K34, solo, using two 2TB Samsung 980 Pro NVMe drives.
One drive for temp1, and the other drive for temp2. My OS and Chia installation are both on yet another SSD.
It took 3½ hours.
When I create two at a time, directing the temp directory to each of my two NVMe drives, it takes 6 hours, which is an average of 3 hours each. So I get overall better results when I process two K34 plots, simultaneously.
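The arithmetic behind "overall better results" is simply plots per hour, using the two timings just quoted:

```python
# Plots-per-hour comparison from the times in this post.
solo_hours = 3.5       # one K34 at a time
parallel_hours = 6.0   # two K34s running simultaneously

solo_rate = 1 / solo_hours          # ~0.286 plots/hour
parallel_rate = 2 / parallel_hours  # ~0.333 plots/hour

print(f"solo: {solo_rate:.3f} plots/h, parallel: {parallel_rate:.3f} plots/h")
```

Each parallel plot is individually slower, but the machine produces more plots per hour overall.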
By the way, I do not know how many TBW I have on those NVMe drives. But they have created approximately 5,000 plots (mostly K32), and they have not shown any degradation in performance.
I am re-plotting with K34, and filling in any free space gaps with K33 and an occasional K32, to ensure that the vast majority of my plots survive the day when K32 plots are no longer supported. It takes a long time to re-plot, so I am not waiting for an announcement that will have an unknown lead time, nor do I know if, when that time comes, whether or not I can devote the time to re-plot.
As I have only one NVMe drive in my plotter, can anyone tell me whether having two separate drives for -t and -2 is faster than putting both on one NVMe? If so, by approximately how much?