K33/K34 Plot Times ⏲

I stopped plotting for a while back… But I got a few drives coming in, and figured why not try some MadMax K33s to experiment w/GUI v1.3.0. Nothing too fancy: Win10, a TR 3955WX Pro, a worn-out Samsung 980 Pro 1TB as t1, and an XPG Gammix S70 Blade 1TB as t2. I don’t have enough memory for ImDisk to make a K33-sized RAM drive. So a lot could be optimized, but I’m seeing ~80 min plot times on K33s.
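On the ImDisk point: MadMax’s tmpdir2 working set roughly doubles with each k increment. A rough sketch of what a -2 RAM disk would need (the 110 GiB k32 figure is reported later in this thread; the k33/k34 numbers are just that figure doubled, extrapolated rather than measured):

```python
# Rough RAM-disk sizing for MadMax tmpdir2 (-2), assuming the
# requirement roughly doubles with each k increment.
# 110 GiB for k32 is reported later in this thread; k33/k34 are extrapolated.
K32_TMPDIR2_GIB = 110

def tmpdir2_estimate_gib(k: int) -> int:
    """Estimated -2 RAM disk size in GiB for plot size k (k >= 32)."""
    return K32_TMPDIR2_GIB * 2 ** (k - 32)

for k in (32, 33, 34):
    print(f"k{k}: ~{tmpdir2_estimate_gib(k)} GiB")
```

So a K33-sized RAM drive would want on the order of 220 GiB free, which explains why it doesn’t fit on most 128GB/256GB boxes.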

It’s not a contest, but for fun: what times, with what equipment, are you getting plotting K33s… or K34s?

Approximately 6 hours, using a Ryzen 9 5950x with a 2 TB Samsung 980 Pro for all temp files.
I have two of the Samsung drives in the PC, so I plot two K34s at the same time. (my OS and blockchain are on yet another NVMe drive).

Phase 1 and phase 3 consume a lot of CPU cycles (especially phase 1).
A single plot often utilizes 100% of the CPU cycles.

So perhaps my 6 hours could be cut to 4 or 5 hours, if I plotted only one at a time?

But your 80 minutes makes my supposedly powerful rig seem like a snail.

Do you think that if I ran one K34 at a time, and split the temp processing between my two 980 Pro drives, that I could cut my times below 3 hours? (it would have to be below 3 hours to make it worthwhile).

I would not call a Threadripper “Nothing too fancy”.

I’ll try K34 next. I changed the title from K34 to K33, my bad. We’ll see how the K34s work out; yours may be great!

Close to two hours (1:59:26 ;-) on an i7-10700 with -t and -2 on a 2TB FireCuda 510 (Gen3 x4 NVMe).

Took ~2.8 TBW on the NVMe.

K34 took 5:30 hours, with ~5.75 TBW

As you have RAM that is not being used right now, maybe try something like PrimoCache. They have a 30-day eval download.

I used it with MM on a box that only had 64GB RAM, and it reduced t2 writes by about 50% (and, of course, also sped up the process). Assuming that you have 128GB RAM, you may save about the same.

Although, I am not sure whether there are open-source projects similar to PrimoCache.



K34 is cooking 🙂

When I first replied with my K34 timing, I was sort of guessing. I was not in front of my Chia rigs.

I just checked, and with two K34 plots running simultaneously, each having their own dedicated temp NVMe drive, I was averaging 5½ hours, and sometimes as long as 5¾ hours for both K34 plots to complete. So I was doing a bit better than my original 6 hour guess. But in 5½ hours I had two new plots.

I just kicked off a new K34 plot, solo, and I am using:
-t ssd1 and -2 ssd2.
Let’s see how much better my timing will be in this configuration.

By the way, I noticed that the -2 ssd2 drive gets hammered right away when the processing starts.

Just for you, I’ve used a 980 Pro 1TB as t1 and an OEM-branded 980 Pro 500GB as t2 on a solo K34. I’m at 2 hr and 48%, so…

OK, K34 done. Things to note: the 1TB 980 Pro has 2PB written, and the 500GB shows 1/2 the R/W speed of the 1TB in Samsung Magician.

Multi-threaded pipelined Chia k34 plotter
Number of Plots: 1
Crafting plot 1 out of 1 (2022/03/15 17:49:22)
Number of Threads: 32
Number of Buckets P1: 2^8 (256)
Number of Buckets P3+P4: 2^8 (256)
Working Directory: H:\ (1TB)
Working Directory 2: I:\ (500GB)
Plot Name: plot-k34-2022-03-15-17-49-07ea5bb4a8bb…

Phase 1 took 7000.17 sec
Phase 2 took 2584.7 sec
Phase 3 took 3142.42 sec, wrote 87509646088 entries to final plot
Phase 4 took 669.227 sec, final plot size is 461565012613 bytes
Total plot creation time was 13396.7 sec (223.279 min)(3hrs, 43.279 min)
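As a quick sanity check on the log above, the four phase times should add up to just under the reported total (the plotter does a little extra work outside the phases). A sketch using the numbers from this run:

```python
# Phase times (seconds) from the MadMax k34 log above.
phases = {"P1": 7000.17, "P2": 2584.7, "P3": 3142.42, "P4": 669.227}
reported_total = 13396.7  # "Total plot creation time" from the log

phase_sum = sum(phases.values())
print(f"sum of phases: {phase_sum:.1f} s")  # slightly under the total
print(f"total: {reported_total / 60:.1f} min "
      f"({reported_total / 3600:.2f} h)")
```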

So, the K34 has 2x the hash content of a K33; however, it is about 30% slower when comparing the same number of hashes created. Or were there some H/W changes between those two runs?

  1. A worn-out Samsung 980 Pro 1TB as t1 and an XPG Gammix S70 Blade 1TB as t2 - was used for the K33 run.

  2. A worn-out Samsung 980 Pro 1TB as t1 and an OEM 980 Pro 500GB as t2 - was used for the K34 run.

I just restarted a new K34 using the same SSDs as in 1), so as to get an apples-to-apples K33 <> K34 comparison. Oh yes, I am syncing at the same time as plotting these three plot files. I haven’t used this workstation in a while. But at least that doesn’t change while plotting.
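The “same number of hashes” comparison can be put into numbers using the times reported so far in this thread (~80 min for the K33 run, 223.3 min for the K34 run). A sketch only, since the t2 drive differed between the runs:

```python
# A k34 plot holds ~2x the entries of a k33, so divide the k34 time
# by two to get a "k33-equivalent" time. Times are from this thread.
k33_minutes = 80.0     # K33 run (980 Pro t1, S70 Blade t2)
k34_minutes = 223.279  # K34 run (980 Pro t1, OEM 500GB t2)

k34_per_k33_equivalent = k34_minutes / 2
slowdown = k34_per_k33_equivalent / k33_minutes - 1
print(f"k34 per k33-equivalent: {k34_per_k33_equivalent:.1f} min")
print(f"slowdown vs the k33 run: {slowdown:.0%}")  # ~40% on these numbers
```

On these particular numbers the per-hash slowdown comes out nearer 40% than 30%, which fits the suspicion that the slower 500GB t2 drive (or the hardware change) contributed.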


The speed difference between your 1TB 980 Pro and your 500GB 980 Pro might be due to the number of chips on the SSD.

I have seen/read about larger capacity SSDs being faster, due to their controller being able to spread the work out over more areas of its NAND storage chips. The 1TB might have 2x the number of chips, and be able to write half as much to each of them, simultaneously.

As to it having 2PB written…
I don’t think that matters. I have never heard of an SSD slowing down due to use. In fact, I have never heard of one being worn out. Some fail, but not due to the amount of writes.

If no one in the Chia world has had one wear out (I have not seen anyone in this forum comment that it has happened to them), that speaks volumes for the durability and longevity of these SSDs.

If monitoring software is reporting that it has exceeded its life expectancy, or is recommending that it be replaced, etc., I think it is doing so based on a pre-defined TBW value. It probably gets that value from the manufacturer, and if it sees you have exceeded it, you get the warning.

I suspect that the manufacturer probably did not know how many TBW their SSD could handle when they released it, and played it safe with a conservative estimate. By now they must know. But why would they change their long published number? They probably have people changing out perfectly good SSDs with new purchases.

I created a K34, solo, using two 2TB Samsung 980 Pro NVMe drives.
One drive for temp1, and the other drive for temp2. My OS and Chia installation are both on yet another SSD.

It took 3½ hours.
When I create two at a time, directing the temp directory to each of my two NVMe drives, it takes 6 hours, which is an average of 3 hours each. So I get overall better results when I process two K34 plots, simultaneously.
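In throughput terms (a quick sketch using the times quoted above):

```python
# Compare plots/hour: one K34 at a time vs two in parallel.
solo_hours = 3.5      # one K34, temps split across both NVMe drives
parallel_hours = 6.0  # two K34s at once, one dedicated temp drive each

solo_rate = 1 / solo_hours
parallel_rate = 2 / parallel_hours
print(f"solo:     {solo_rate:.3f} plots/hour")
print(f"parallel: {parallel_rate:.3f} plots/hour "
      f"(+{parallel_rate / solo_rate - 1:.0%})")
```

So running two at once yields roughly a 17% higher plot rate, even though each individual plot takes longer.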

By the way, I do not know how many TBW I have on those NVMe drives. But they have created approximately 5,000 plots (mostly K32), and they have not shown any degradation in performance.

I am re-plotting with K34, and filling any free-space gaps with K33 and an occasional K32, to ensure that the vast majority of my plots survive the day when K32 plots are no longer supported. It takes a long time to re-plot, so I am not waiting for an announcement that will have an unknown lead time; nor do I know whether, when that time comes, I can devote the time to re-plot.

So the final K34 file is 440 GB? What’s the size of K33?

GiB to GB is times 1.07374182, give or take ;-)
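Working that out with the final plot size from the log above (a quick sketch):

```python
# Final k34 plot size reported by MadMax, in bytes.
plot_bytes = 461_565_012_613

gb = plot_bytes / 1000**3   # decimal gigabytes
gib = plot_bytes / 1024**3  # binary gibibytes
print(f"{gb:.1f} GB = {gib:.1f} GiB (factor {gb / gib:.8f})")
```

So the file is ~461.6 GB in decimal units, or ~429.9 GiB in the binary units most OS file browsers display.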


Do you also have the number of entries for k32, k33 and k34 at hand?

Sorry, no, I did not write those down when plotting.

I have this info from experimenting with K32/33/34 (all on an i7-10700 and a 2TB Seagate FireCuda 510; the final K32 run used -2 on a 110GB ramdisk):

sudo nvme smart-log /dev/nvme1
                     start          k33 (delta)  after k33      k32 (delta)  after k32      k34 (delta)  after k34      k32 ramdisk  after k32 ramdisk
data_units_read      3,486,783,819  3,946,812    3,490,730,631  856,703      3,491,587,334  12,659,118   3,504,246,452  652,113      3,504,898,565
data_units_written   3,641,904,700  5,980,470    3,647,885,170  2,817,830    3,650,703,000  12,304,664   3,663,007,664  774,192      3,663,781,856
host_read_commands   9,109,544,087  5,180,588    9,114,724,675  1,324,841    9,116,049,516  16,840,333   9,132,889,849  1,253,579    9,134,143,428
host_write_commands  4,081,685,164  6,223,838    4,087,909,002  2,914,745    4,090,823,747  12,636,143   4,103,459,890  817,499      4,104,277,389

TBW (lifetime at start: 1,695.894)
                     k33        k32        k34        k32 (ramdisk)
TBW for run          2.785      1.312      5.730      0.361

Time taken (MadMax plotter, seconds)
P1                   3079       1698       7887       1006
P2                   1519        709       4290        514
P3                   2456       1138       7375        543
P4                    112         47        307         49
Total                7166       3592      19859       2112
(hh:mm:ss)           01:59:26   00:59:52   05:30:59   00:35:12
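On how the TBW row above can be derived from `nvme smart-log`: `data_units_written` counts 512,000-byte units (1000 × 512-byte sectors, per the NVMe spec). Dividing by 2^40 reproduces the values in the table, so they appear to be binary TiB rather than decimal TB (a sketch; worth verifying against your own drive’s reporting):

```python
# Convert nvme smart-log data_units_written deltas to terabytes written.
# One NVMe "data unit" = 1000 * 512 bytes = 512,000 bytes.
DATA_UNIT_BYTES = 512_000

def tbw(delta_units: int) -> float:
    """Terabytes (binary, 2^40 bytes) written for a run's data-unit delta."""
    return delta_units * DATA_UNIT_BYTES / 2**40

# Per-run data_units_written deltas from the table above.
runs = {"k33": 5_980_470, "k32": 2_817_830,
        "k34": 12_304_664, "k32 ramdisk": 774_192}
for name, delta in runs.items():
    print(f"{name}: {tbw(delta):.3f}")
```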

As I have only one NVMe in my plotter, can anyone tell me if having two separate drives for -t and -2 is faster than both on one NVMe only? If so, by approximately how much?

T7910, single E5-2699v3, 256GB RAM, 4 × 200GB Intel SSDs in RAID 0: K33 in 66 minutes.

As above but with 2 CPUs and the memory split equally, times increased to just over 70 minutes.

I do have a weird problem where some runs take longer (90 minutes instead of the 70); I haven’t had time to investigate why.

Don’t think I have the RAM/storage for K34, so I haven’t tried that yet.

The above system with two CPUs can do 80 K32s a day, running two MadMax instances in parallel.