Bladebit disk plotting performance

The logic behind using less RAM is that you use less RAM per unit of output. You can, of course, upgrade your RAM and plot more, or even switch to RAM-only plotting.

I really hope so.

I think that the point was a bit different. It was not really about using more or less RAM, but rather about being able to use whatever RAM is available, instead of sticking to the fixed sizes we have right now (e.g., MM 128 GB, BB 512 GB). At least, that's how I read that comment.

When I started using MM, I only had 32 GB RAM in that box. With that amount of RAM, I tried using PrimoCache in front of the -2 folder, and that already made a big difference. That experiment made me upgrade the RAM to 128 GB. Some systems, though, can only hold 64 GB. This is even more obvious for BB, as it needs 512 GB. So having a bit more flexible code would potentially benefit the majority of smaller farmers.

Although (IMO) BB's target was basically bigger farms, so it was natural to stick with those requirements. I guess the main reason to go with BB-disk is to have a second option for smaller farms (a niche MM owns right now, and it looks like he is not that interested in improving it, as he is focusing on his blockchain project).

Actually, one feature that a lot of people without dedicated plotters have asked for is the ability to specify "below normal" process priority. Even if the plotter always ran at that priority, nothing would change for those with a dedicated plotter (no competing processes), but it would make a big difference for the other group.
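Until such a flag exists, the OS can impose the priority from the outside. A minimal sketch for a Windows box, with placeholder keys and paths like the ones used elsewhere in this thread (on Linux, prefixing the plotter with nice -n 10 does the equivalent):

:: launch the plotter at below-normal priority so interactive apps stay responsive
start /belownormal /wait bladebit.exe -f xxxx -c xxxx diskplot --cache 110G -t1 E:\ D:\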

I suppose it depends on how you take the statement “The RAM usage should come down too”: better efficiency, or just using less RAM and more storage. By what you're saying, it's the former, with the option to use as little or as much RAM as you have.

I have a dedicated T7910 with 512 GB of RAM for plotting, so I'm hoping that when plotting a K33 or above it will use as much RAM as possible and fall back to storage when it doesn't have sufficient RAM, rather like we used to do with PrimoCache on Windows, but using the RAM to the max intelligently.

BB disk's goal is to use RAM + disk, i.e., to be more efficient than MM at using RAM. By the way, you can already plot in RAM only with MM too.

I’m curious to know if BB disk will use less TBW than MM with ramdisk.

I know :wink: (20 chrs)

Finished plotting in 3329.31 seconds ( 55.5 minutes )…

With MM I spent 28 minutes with the same config.

I just want to post an update to my earlier criticisms of this plotter (in this thread) and take back virtually everything negative I said about BB Disk plotting :+1: … now that I'm familiar with BB Disk v2.0.0 Beta 1.

Why? Because I've been able to produce the last 100 TB or so of K32 plots in record time… compared to even the excellent MM plotter, and certainly light years faster than the lame OG plotter. And I'm using Windows, for those that think Windows is no good.

How good is good? The best I've done on 16c/32t with a 108 GB cache and one plotting SSD is 14 min. Most are more like 14.2-14.6 min for whatever reason. I save them off to a large 4-6 TB SSD before sending them off to HDs. If I do the copy to HD whilst plotting, that seems to add a minute or two to plotting. Big deal.

Before BB, using MM, the best I could ever do was 21 minutes, but more often 22-25 minutes, even using two separate SSDs for t1 & t2… or a RAM cache. I've tried all the tricks I could manage.

In any event, BB Disk is freaking wonderful :muscle: the amount of time saved over prior plotting is phenomenal. If only I could get back the month or more of my life (sort of) wasted plotting 24/7 before BB Disk!

The final version, whenever it arrives, should be even better… maybe k33s and k34s? Or maybe faster yet?

Good info. If you could also add your box specs, that would be helpful. Also, could you add your command line? (I would also like to see the output, so I could compare it to what I see on my boxes; that might make it easier to catch problems.)

From the stats that Chia posted, it looks like BB disk performance may at this moment depend heavily on the CPU used, so mileage may vary. I have an i9-10900 with 128 GB RAM, and my results were basically similar to MM, maybe in the range of 1 minute faster (~30 mins/plot), so I gave up testing/tuning. I think that CPU generation was also close to a toss-up on Chia's chart, so my results support that.

Also, from my testing, so far it just doesn't work on Ubuntu (I had to abort the plot, as the first phase alone was taking longer than a full MM plot). This was on a dual Xeon E5-2695 v2 with 256 GB DDR3 RAM (using 48 threads). Not sure whether Linux itself is the problem, or maybe the dual-CPU setup. (I had to go with Linux on this box, as on Windows MM just couldn't handle it at all.)

I also plot to NVMe and have a script that moves plots to HDs; I am using two NVMes (one for tmp, one for dst) plus as much RAM cache as I can give it (just so we are on the same page).
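For reference, the mover can be as small as a loop; here is a minimal batch-file sketch (not my actual script), with e:\xfr and z:\plots as placeholder paths. It is safe to run while plotting because the plotter writes to a .plot.tmp name and only renames to .plot once the file is complete:

:: move any finished plots from the NVMe dst drive to a HD, then wait and repeat
:loop
move /y e:\xfr\*.plot z:\plots\
timeout /t 60 >nul
goto loop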

My setup >
Threadripper Pro 3955WX, 128 GB 3200 MHz memory
16 cores / 32 threads (31 used), 110G cache, 2 TB or 4 TB SSDs

For Chia plots, with the BB Chia executable (NFT) >

.\bladebit -f xxxxxxx -c yyyyyyyy --threads 31 -n 1 diskplot -b 128 --cache 110G -t1 f:\ -t2 f:\ e:\

F:\ being a 2 TB SSD, E:\ being an SSD destination; all SSDs formatted with a 64 KB allocation size.

Occasionally it just quits after “Sorting F7 and Writing C Tables”, likely because it's still beta software. But it works well enough to get the plots done. I couldn't find any old plot outputs, so I made a new one just now >

Started plot.
Running Phase 1
Table 1: F1 generation
Generating f1...
Finished f1 generation in 4.51 seconds.
Table 1 I/O wait time: 0.00 seconds.
 Table 1 Disk Write Metrics:
  Average write throughput 2328.25 MiB ( 2441.35 MB ) or 2.27 GiB ( 2.44 GB ).
  Total size written: 549685.65 MiB ( 576387.18 MB ) or 536.80 GiB ( 576.39 GB ).
  Total write commands: 162034.

Table 2
 Sorting      : Completed in 13.97 seconds.
 Distribution : Completed in 3.11 seconds.
 Matching     : Completed in 15.65 seconds.
 Fx           : Completed in 17.28 seconds.
Completed table 2 in 51.28 seconds with 4294945340 entries.
Table 2 I/O wait time: 0.06 seconds.
 Table 2 I/O Metrics:
  Average read throughput 4665.01 MiB ( 4891.62 MB ) or 4.56 GiB ( 4.89 GB ).
  Total size read: 954942.09 MiB ( 1001329.36 MB ) or 932.56 GiB ( 1001.33 GB ).
  Total read commands: 69685.
  Average write throughput 4639.07 MiB ( 4864.42 MB ) or 4.53 GiB ( 4.86 GB ).
  Total size written: 100030.68 MiB ( 104889.77 MB ) or 97.69 GiB ( 104.89 GB ).
  Total write commands: 641.

Table 3
 Sorting      : Completed in 21.35 seconds.
 Distribution : Completed in 4.36 seconds.
 Matching     : Completed in 15.58 seconds.
 Fx           : Completed in 17.52 seconds.
Completed table 3 in 63.41 seconds with 4294825232 entries.
Table 3 I/O wait time: 0.10 seconds.
 Table 3 I/O Metrics:
  Average read throughput 7371.59 MiB ( 7729.68 MB ) or 7.20 GiB ( 7.73 GB ).
  Total size read: 65726.77 MiB ( 68919.51 MB ) or 64.19 GiB ( 68.92 GB ).
  Total read commands: 49152.
  Average write throughput 4028.83 MiB ( 4224.53 MB ) or 3.93 GiB ( 4.22 GB ).
  Total size written: 145594.96 MiB ( 152667.38 MB ) or 142.18 GiB ( 152.67 GB ).
  Total write commands: 16898.

Table 4
 Sorting      : Completed in 21.27 seconds.
 Distribution : Completed in 4.27 seconds.
 Matching     : Completed in 15.40 seconds.
 Fx           : Completed in 18.73 seconds.
Completed table 4 in 64.01 seconds with 4294736669 entries.
Table 4 I/O wait time: 0.11 seconds.
 Table 4 I/O Metrics:
  Average read throughput 6299.91 MiB ( 6605.94 MB ) or 6.15 GiB ( 6.61 GB ).
  Total size read: 98491.70 MiB ( 103276.03 MB ) or 96.18 GiB ( 103.28 GB ).
  Total read commands: 49152.
  Average write throughput 4124.28 MiB ( 4324.62 MB ) or 4.03 GiB ( 4.32 GB ).
  Total size written: 145591.65 MiB ( 152663.91 MB ) or 142.18 GiB ( 152.66 GB ).
  Total write commands: 16898.

Table 5
 Sorting      : Completed in 20.83 seconds.
 Distribution : Completed in 6.03 seconds.
 Matching     : Completed in 16.18 seconds.
 Fx           : Completed in 19.21 seconds.
Completed table 5 in 73.38 seconds with 4294549336 entries.
Table 5 I/O wait time: 1.90 seconds.
 Table 5 I/O Metrics:
  Average read throughput 5690.23 MiB ( 5966.64 MB ) or 5.56 GiB ( 5.97 GB ).
  Total size read: 98489.58 MiB ( 103273.81 MB ) or 96.18 GiB ( 103.27 GB ).
  Total read commands: 49152.
  Average write throughput 2793.24 MiB ( 2928.92 MB ) or 2.73 GiB ( 2.93 GB ).
  Total size written: 145586.02 MiB ( 152658.01 MB ) or 142.17 GiB ( 152.66 GB ).
  Total write commands: 16898.

Table 6
 Sorting      : Completed in 20.55 seconds.
 Distribution : Completed in 3.12 seconds.
 Matching     : Completed in 15.65 seconds.
 Fx           : Completed in 17.55 seconds.
Completed table 6 in 65.74 seconds with 4294000885 entries.
Table 6 I/O wait time: 0.20 seconds.
 Table 6 I/O Metrics:
  Average read throughput 6081.15 MiB ( 6376.55 MB ) or 5.94 GiB ( 6.38 GB ).
  Total size read: 98485.33 MiB ( 103269.35 MB ) or 96.18 GiB ( 103.27 GB ).
  Total read commands: 49152.
  Average write throughput 2543.27 MiB ( 2666.81 MB ) or 2.48 GiB ( 2.67 GB ).
  Total size written: 112809.40 MiB ( 118289.23 MB ) or 110.17 GiB ( 118.29 GB ).
  Total write commands: 16898.

Table 7
 Sorting      : Completed in 20.18 seconds.
 Distribution : Completed in 1.45 seconds.
 Matching     : Completed in 15.34 seconds.
 Fx           : Completed in 16.65 seconds.
Completed table 7 in 57.95 seconds with 4292895662 entries.
Table 7 I/O wait time: 0.11 seconds.
 Table 7 I/O Metrics:
  Average read throughput 5742.75 MiB ( 6021.71 MB ) or 5.61 GiB ( 6.02 GB ).
  Total size read: 65712.27 MiB ( 68904.31 MB ) or 64.17 GiB ( 68.90 GB ).
  Total read commands: 49152.
  Average write throughput 1889.98 MiB ( 1981.79 MB ) or 1.85 GiB ( 1.98 GB ).
  Total size written: 79968.36 MiB ( 83852.91 MB ) or 78.09 GiB ( 83.85 GB ).
  Total write commands: 16770.

Sorting F7 & Writing C Tables
Completed F7 tables in 26.23 seconds.
F7/C Tables I/O wait time: 13.49 seconds.
Finished Phase 1 in 406.73 seconds ( 6.8 minutes ).
Running Phase 2
Finished marking table 6 in 4.34 seconds.
Table 6 I/O wait time: 0.00 seconds.
Finished marking table 5 in 19.26 seconds.
Table 5 I/O wait time: 0.00 seconds.
Finished marking table 4 in 19.36 seconds.
Table 4 I/O wait time: 0.00 seconds.
Finished marking table 3 in 19.23 seconds.
Table 3 I/O wait time: 0.00 seconds.
Finished marking table 2 in 19.52 seconds.
Table 2 I/O wait time: 0.00 seconds.
 Phase 2 Total I/O wait time: 0.00 seconds.
Finished Phase 2 in 82.02 seconds ( 1.4 minutes ).
Running Phase 3
Compressing tables 1 and 2.
Step 1 Allocated 5238.35 / 6227.12 MiB
Step 2 using 3.38 / 6.08 GiB.
Table 1 now has 3429306582 / 4294945340 ( 79.85% ) entries.
Table 1 I/O wait time: 1.64 seconds.
Finished compressing tables 1 and 2 in 56.91 seconds.
Compressing tables 2 and 3.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.35 / 6.08 GiB.
Table 2 now has 3439685709 / 4294825232 ( 80.09% ) entries.
Table 2 I/O wait time: 2.03 seconds.
Finished compressing tables 2 and 3 in 62.15 seconds.
Compressing tables 3 and 4.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.35 / 6.08 GiB.
Table 3 now has 3465791647 / 4294736669 ( 80.70% ) entries.
Table 3 I/O wait time: 1.89 seconds.
Finished compressing tables 3 and 4 in 63.41 seconds.
Compressing tables 4 and 5.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.35 / 6.08 GiB.
Table 4 now has 3532368859 / 4294549336 ( 82.25% ) entries.
Table 4 I/O wait time: 1.79 seconds.
Finished compressing tables 4 and 5 in 62.93 seconds.
Compressing tables 5 and 6.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.35 / 6.08 GiB.
Table 5 now has 3712554780 / 4294000885 ( 86.46% ) entries.
Table 5 I/O wait time: 3.14 seconds.
Finished compressing tables 5 and 6 in 65.36 seconds.
Compressing tables 6 and 7.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.36 / 6.08 GiB.
Table 6 now has 4292895662 / 4292895662 ( 100.00% ) entries.
Table 6 I/O wait time: 4.03 seconds.
Finished compressing tables 6 and 7 in 64.43 seconds.
Writing P7 parks.
Finished writing P7 parks in 21.48 seconds.
P7 I/O wait time: 11.49 seconds
Finished Phase 3 in 396.74 seconds ( 6.6 minutes ).
Total plot I/O wait time: 43.73 seconds.
Waiting for plot file to complete pending writes...
Completed pending writes in 0.00 seconds.
Finished writing plot plot-k32-2022-10-31-19-59-4280d8adf2d8388a986cbfe9b08cd1063745e2bb712e9527800e9424599a9d1f.plot.tmp.
Final plot table pointers:
 Table 1:       1289587592 ( 0x000000004cdd8b88 )
 Table 2:       3243812258 ( 0x00000000c158a5a2 )
 Table 3:         46071949 ( 0x0000000002bf008d )
 Table 4:       1249417711 ( 0x000000004a7899ef )
 Table 5:       2723392573 ( 0x00000000a253ac3d )
 Table 6:        634841964 ( 0x0000000025d6eb6c )
 Table 7:        905346605 ( 0x0000000035f67e2d )
 C 1    :              252 ( 0x00000000000000fc )
 C 2    :          1717416 ( 0x00000000001a34a8 )
 C 3    :          1717592 ( 0x00000000001a3558 )

Finished plotting in 885.52 seconds ( 14.8 minutes ).
Renaming plot to 'e:\plot-k32-2022-10-31-19-59-4280d8adf2d8388a986cbfe9b08cd1063745e2bb712e9527800e9424599a9d1f.plot'

Cool! Thank you. By the way, when you dump those logs, just highlight them and hit Ctrl+E; that will control the height of the block.

thanks!

I use the setup below. I think I downloaded v2.0.0-alpha2.

It gets stuck at Table 7. Did I do something wrong?

The plotter is an HP Z420, Xeon E5-2680 v2, 128 GB DDR3. A cheap NVMe is E:, and the destination is D:.

MadMax with a 110 GB ramdisk can do one plot in 48-50 minutes.

Command line is
./bladebit -t 20 -f xxxx -c xxxx diskplot --cache 110G -t1 E: D:

Started plot.
Running Phase 1
Generating f1…
F1 working heap @ 256 buckets: 193.00 / 4062.88 MiB
Minimum IO buffer size required per bucket @ 256 buckets: 124.00 MiB
F1 IO size @ 256 buckets: 3869.87 MiB
Finished f1 generation in 47.31 seconds.
Table 1 I/O wait time: 0.00 seconds.
Table 2
Completed table 2 in 139.07 seconds with 4294890673 entries.
Table 2 I/O wait time: 8.13 seconds.
Table 3
Completed table 3 in 433.06 seconds with 4294763643 entries.
Table 3 I/O wait time: 260.25 seconds.
Table 4
Completed table 4 in 569.69 seconds with 4294615904 entries.
Table 4 I/O wait time: 339.32 seconds.
Table 5
Completed table 5 in 482.62 seconds with 4294281721 entries.
Table 5 I/O wait time: 253.79 seconds.
Table 6
Completed table 6 in 391.39 seconds with 4293633692 entries.
Table 6 I/O wait time: 197.44 seconds.
Table 7

Then it just dies. Memory usage is full, and the CPU does no work.

Why would you use the alpha when the beta was released about a month ago? Actually, there is also an RC1 that was posted earlier today.

Drop your cache to 50 GB or so and see whether that works. Maybe you have some RAM drive running that is taking up RAM.

Also, use a threads-1 value (19 in your case), as it looks like BB's worker threads are not really cooperating with the drive I/O.
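For example, applying both suggestions to your command (keys are placeholders, as in your post):

./bladebit -t 19 -f xxxx -c xxxx diskplot --cache 50G -t1 E: D: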

Thanks.

I don’t know what I have, beta or alpha.

When I tried to download RC1, Windows said the file has a virus.

Anyway I will try again.

Threads - 1: I will do 19. What is the idea behind this setting?

-t1 is the NVMe, -t2 is the storage drive, correct?

Thanks a lot, bro.

Not sure. If the code is written properly (no unneeded bumping up of thread priorities), it should not be a problem to specify the exact thread count, or actually even a bit more (e.g., MM has no such issues). However, for whatever reason, when you specify all of them, the crunching threads do not yield to the I/O threads, so tmp and final writes really suffer.

Defender doesn't like new binaries downloaded from unknown sources, and as this is a brand-new file, Microsoft hasn't collected enough data yet to let it through. Although, if in doubt, you can scan it after unpacking and see whether it finds anything (I don't think it will, but …).

Thank you again for that post!

I ran one plot with your settings (except -t2), and this time the plotting time dropped to 29.6 minutes. It looks like -b 128 may have helped (before, I got a tad above 30 mins). So, that would be somewhere close to 2 mins faster than MM.

Your output really helped, as I could see all tasks scaling about the same (i.e., everything on my box was ~100% slower). My I/O waits were shorter, though. The i9-10900 is PCIe 3, so a bit slower (possibly not by much once the write cache is exhausted), but most likely, because the crunching runs slower, the writes are spread out more in time, so there is less waiting on drive writes.

Any reason that you used both -t1 and -t2 when you point them at the same NVMe? I tried before to use different NVMes for those (with the -a flag), but it just used one NVMe at a time, alternating between them. Basically, a waste of time.
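(The split variant I tried looked something like the line below; drive letters are placeholders, and the final argument is the destination.)

.\bladebit -f xxxxxxx -c yyyyyyyy --threads 19 -n 1 diskplot -b 128 --cache 110G -t1 d:\ -t2 e:\ f:\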

In case you want to compare, below is the output from my plot. Also, to have it all in one place:
i9-10900, 128 GB 3,600 MHz RAM, 2x WD Black, PCIe 3, Win 11
bb-v2-b1 -f xxxxxxx -c yyyyyyyy --threads 19 -n 1 diskplot -b 128 --cache 110G -t1 d:\nvme1\tmp e:\nvme2\xfr

Started plot.
Running Phase 1
Table 1: F1 generation
Generating f1...
Finished f1 generation in 7.64 seconds.
Table 1 I/O wait time: 0.00 seconds.
 Table 1 Disk Write Metrics:
  Average write throughput 4628.39 MiB ( 4853.22 MB ) or 4.52 GiB ( 4.85 GB ).
  Total size written: 32895.48 MiB ( 34493.41 MB ) or 32.12 GiB ( 34.49 GB ).
  Total write commands: 257.

Table 2
 Sorting      : Completed in 26.47 seconds.
 Distribution : Completed in 7.14 seconds.
 Matching     : Completed in 33.90 seconds.
 Fx           : Completed in 33.43 seconds.
Completed table 2 in 115.96 seconds with 4294911497 entries.
Table 2 I/O wait time: 0.11 seconds.
 Table 2 I/O Metrics:
  Average read throughput 3782.95 MiB ( 3966.71 MB ) or 3.69 GiB ( 3.97 GB ).
  Total size read: 32895.48 MiB ( 34493.41 MB ) or 32.12 GiB ( 34.49 GB ).
  Total read commands: 32768.
  Average write throughput 2184.05 MiB ( 2290.14 MB ) or 2.13 GiB ( 2.29 GB ).
  Total size written: 100029.87 MiB ( 104888.92 MB ) or 97.69 GiB ( 104.89 GB ).
  Total write commands: 641.

Table 3
 Sorting      : Completed in 40.10 seconds.
 Distribution : Completed in 10.11 seconds.
 Matching     : Completed in 35.19 seconds.
 Fx           : Completed in 33.94 seconds.
Completed table 3 in 130.07 seconds with 4294794780 entries.
Table 3 I/O wait time: 0.32 seconds.
 Table 3 I/O Metrics:
  Average read throughput 4321.70 MiB ( 4531.63 MB ) or 4.22 GiB ( 4.53 GB ).
  Total size read: 65726.10 MiB ( 68918.81 MB ) or 64.19 GiB ( 68.92 GB ).
  Total read commands: 49152.
  Average write throughput 1924.62 MiB ( 2018.11 MB ) or 1.88 GiB ( 2.02 GB ).
  Total size written: 145593.73 MiB ( 152666.09 MB ) or 142.18 GiB ( 152.67 GB ).
  Total write commands: 16898.

Table 4
 Sorting      : Completed in 41.50 seconds.
 Distribution : Completed in 10.02 seconds.
 Matching     : Completed in 34.38 seconds.
 Fx           : Completed in 37.45 seconds.
Completed table 4 in 133.68 seconds with 4294667331 entries.
Table 4 I/O wait time: 0.19 seconds.
 Table 4 I/O Metrics:
  Average read throughput 3634.39 MiB ( 3810.93 MB ) or 3.55 GiB ( 3.81 GB ).
  Total size read: 98490.82 MiB ( 103275.11 MB ) or 96.18 GiB ( 103.28 GB ).
  Total read commands: 49152.
  Average write throughput 1935.38 MiB ( 2029.40 MB ) or 1.89 GiB ( 2.03 GB ).
  Total size written: 145589.47 MiB ( 152661.63 MB ) or 142.18 GiB ( 152.66 GB ).
  Total write commands: 16898.

Table 5
 Sorting      : Completed in 41.95 seconds.
 Distribution : Completed in 10.27 seconds.
 Matching     : Completed in 34.64 seconds.
 Fx           : Completed in 37.49 seconds.
Completed table 5 in 135.50 seconds with 4294247433 entries.
Table 5 I/O wait time: 0.17 seconds.
 Table 5 I/O Metrics:
  Average read throughput 3709.38 MiB ( 3889.56 MB ) or 3.62 GiB ( 3.89 GB ).
  Total size read: 98487.89 MiB ( 103272.04 MB ) or 96.18 GiB ( 103.27 GB ).
  Total read commands: 49152.
  Average write throughput 1959.08 MiB ( 2054.24 MB ) or 1.91 GiB ( 2.05 GB ).
  Total size written: 145577.30 MiB ( 152648.87 MB ) or 142.17 GiB ( 152.65 GB ).
  Total write commands: 16898.

Table 6
 Sorting      : Completed in 40.69 seconds.
 Distribution : Completed in 7.03 seconds.
 Matching     : Completed in 34.13 seconds.
 Fx           : Completed in 34.99 seconds.
Completed table 6 in 128.28 seconds with 4293447696 entries.
Table 6 I/O wait time: 0.15 seconds.
 Table 6 I/O Metrics:
  Average read throughput 3259.82 MiB ( 3418.17 MB ) or 3.18 GiB ( 3.42 GB ).
  Total size read: 98478.34 MiB ( 103262.02 MB ) or 96.17 GiB ( 103.26 GB ).
  Total read commands: 49152.
  Average write throughput 1861.20 MiB ( 1951.61 MB ) or 1.82 GiB ( 1.95 GB ).
  Total size written: 112796.59 MiB ( 118275.80 MB ) or 110.15 GiB ( 118.28 GB ).
  Total write commands: 16898.

Table 7
 Sorting      : Completed in 38.00 seconds.
 Distribution : Completed in 3.67 seconds.
 Matching     : Completed in 34.08 seconds.
 Fx           : Completed in 32.66 seconds.
Completed table 7 in 118.44 seconds with 4291942198 entries.
Table 7 I/O wait time: 0.72 seconds.
 Table 7 I/O Metrics:
  Average read throughput 3436.04 MiB ( 3602.95 MB ) or 3.36 GiB ( 3.60 GB ).
  Total size read: 65703.82 MiB ( 68895.45 MB ) or 64.16 GiB ( 68.90 GB ).
  Total read commands: 49152.
  Average write throughput 1699.48 MiB ( 1782.04 MB ) or 1.66 GiB ( 1.78 GB ).
  Total size written: 79953.37 MiB ( 83837.18 MB ) or 78.08 GiB ( 83.84 GB ).
  Total write commands: 16770.

Sorting F7 & Writing C Tables
Completed F7 tables in 34.52 seconds.
F7/C Tables I/O wait time: 10.62 seconds.
Finished Phase 1 in 804.25 seconds ( 13.4 minutes ).

=============================================================

Running Phase 2
Finished marking table 6 in 6.89 seconds.
Table 6 I/O wait time: 0.00 seconds.
Finished marking table 5 in 35.24 seconds.
Table 5 I/O wait time: 0.00 seconds.
Finished marking table 4 in 35.47 seconds.
Table 4 I/O wait time: 0.00 seconds.
Finished marking table 3 in 35.51 seconds.
Table 3 I/O wait time: 0.00 seconds.
Finished marking table 2 in 35.70 seconds.
Table 2 I/O wait time: 0.00 seconds.
 Phase 2 Total I/O wait time: 0.00 seconds.
Finished Phase 2 in 149.13 seconds ( 2.5 minutes ).

=============================================================

Running Phase 3
Compressing tables 1 and 2.
Step 1 Allocated 5238.35 / 6227.12 MiB
Step 2 using 3.38 / 6.08 GiB.
Table 1 now has 3429246123 / 4294911497 ( 79.84% ) entries.
Table 1 I/O wait time: 0.63 seconds.
Finished compressing tables 1 and 2 in 120.86 seconds.
Compressing tables 2 and 3.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.35 / 6.08 GiB.
Table 2 now has 3439616475 / 4294794780 ( 80.09% ) entries.
Table 2 I/O wait time: 0.58 seconds.
Finished compressing tables 2 and 3 in 133.26 seconds.
Compressing tables 3 and 4.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.35 / 6.08 GiB.
Table 3 now has 3465596326 / 4294667331 ( 80.70% ) entries.
Table 3 I/O wait time: 0.68 seconds.
Finished compressing tables 3 and 4 in 133.04 seconds.
Compressing tables 4 and 5.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.35 / 6.08 GiB.
Table 4 now has 3532009817 / 4294247433 ( 82.25% ) entries.
Table 4 I/O wait time: 0.57 seconds.
Finished compressing tables 4 and 5 in 134.57 seconds.
Compressing tables 5 and 6.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.35 / 6.08 GiB.
Table 5 now has 3711978836 / 4293447696 ( 86.46% ) entries.
Table 5 I/O wait time: 0.71 seconds.
Finished compressing tables 5 and 6 in 137.95 seconds.
Compressing tables 6 and 7.
Step 1 Allocated 6227.12 / 6227.12 MiB
Step 2 using 3.36 / 6.08 GiB.
Table 6 now has 4291942198 / 4291942198 ( 100.00% ) entries.
Table 6 I/O wait time: 2.37 seconds.
Finished compressing tables 6 and 7 in 133.16 seconds.
Writing P7 parks.
Finished writing P7 parks in 30.02 seconds.
P7 I/O wait time: 8.03 seconds
Finished Phase 3 in 822.93 seconds ( 13.7 minutes ).

=============================================================

Total plot I/O wait time: 25.98 seconds.
Waiting for plot file to complete pending writes...
Completed pending writes in 0.00 seconds.
Finished writing plot plot-mmx-k32-2022-10-31-21-35-df5390021bb7809f5610e2dd048d93936f663a30ed9a07add62d426ce7dfc9d8.plot.tmp.
Final plot table pointers:
 Table 1:       1289302228 ( 0x000000004cd930d4 )
 Table 2:       3243261034 ( 0x00000000c1503c6a )
 Table 3:         45237675 ( 0x0000000002b245ab )
 Table 4:       1247784237 ( 0x000000004a5fad2d )
 Table 5:       2720302224 ( 0x00000000a2248490 )
 Table 6:        629412290 ( 0x00000000258411c2 )
 Table 7:        896037481 ( 0x0000000035687269 )
 C 1    :              268 ( 0x000000000000010c )
 C 2    :          1717052 ( 0x00000000001a333c )
 C 3    :          1717228 ( 0x00000000001a33ec )

Finished plotting in 1776.31 seconds ( 29.6 minutes ).
Renaming plot to 'e:\mmx\xfr\plot-mmx-k32-2022-10-31-21-35-df5390021bb7809f5610e2dd048d93936f663a30ed9a07add62d426ce7dfc9d8.plot'

22:05

cache:  110G
tmp1:   d:\mmx\tmp\
xfr:    e:\mmx\xfr\
threads: 19

plots:    1
Avg plot: 29:43
Elapsed:  00:29:43
started:  2022-10-31 21:35:23
ended:    2022-10-31 22:05:06

I tried to download it just now, and sorry about my previous comment. I thought that Defender had just stated that this is not a verified binary; however, it clearly marks it as a severe threat.

So, let Chia figure it out and fix it; hold off on downloading it for now.

Maybe it doesn't make a difference, but note the slashes >
./bladebit -t 20 -f xxxx -c xxxx diskplot --cache 110G -t1 E:\ D:\

The I/O wait times are terribly long. Perhaps it's your NVMe; that alone would kill any time advantage. Also, you may not have enough available memory. 99G is the minimum cache; try that, then work your way up if it works. Close any memory hogs to get more free memory (anti-virus, others). Threads need to be 19, not 20; one is needed for I/O work or it slows considerably.
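Putting those together, something like this (keys still placeholders):

./bladebit -t 19 -f xxxx -c xxxx diskplot --cache 99G -t1 E:\ D:\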

I downloaded it and tried to use it, but it constantly failed, as mentioned in my first post, after “Sorting F7 and Writing C Tables”. Many attempts, no go. It needs work, I guess. I reverted back to the prior version to plot what I showed.

No, no reason; I was trying two NVMes, but it was no better, so I just never changed the command line. As long as the single NVMe is really good, it seems to work well.
