Optimizing plotting on an AMD Ryzen 5950x (or any other 16c/32t CPU)

Great info… I’m running an all-core overclock… Are you running stock, a PBO OC, or an all-core OC?

@DougieC

Here’s another one (from thechiafarmer.com) which will move the plots between machines. Checks for finished plots every 30 seconds.

@echo off
:loop
set "source=D:\plot"
set "destination=\\<Your 2nd machine>\<your plot folder>"
robocopy "%source%" "%destination%" /mov *.plot
timeout /t 30
goto loop
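For Linux plotters, a rough equivalent can be sketched in shell (the paths here are placeholders, not from the thread). It does one pass and exits, so you’d schedule it from cron every minute or so rather than busy-looping:

```shell
#!/bin/sh
# Move finished plots to the destination (e.g. an NFS/SSHFS mount of the
# second machine), then exit. SRC and DST are placeholder paths.
move_plots() {
    src="$1"; dst="$2"
    for f in "$src"/*.plot; do
        [ -e "$f" ] || continue   # glob matched nothing: no finished plots yet
        mv -- "$f" "$dst"/        # same effect as robocopy's /mov
    done
}

move_plots "${SRC:-/mnt/plot}" "${DST:-/mnt/remote/plots}"
```

Only completed `*.plot` files are touched, so in-progress `.tmp` files are left alone.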

I’ve read conflicting things about the number of plots that can be done in parallel on an SSD. For example, does a nominal 2TB drive have enough space for 8 K32 plots? Does the brand or model of the SSD matter?

Wondering because if 8 plots are possible I could convert one SSD to a staging drive.

In my experience, it’s not about capacity, it’s more about IOPS. Two 500 GB drives in RAID 0 are superior to a single 1 TB drive, and probably rival a 2 TB drive’s plotting performance.


Interesting. Does that mean 256GB/plot is a soft number?

I’d say 250 GB is more of an average. Temp usage starts smaller, balloons up (past 250 GB, I believe), and then scales down again.

I bought mine on eBay, far more expensive than the original list price, but I wanted to try.
You should check how many PCIe lanes are really free. On AMD, to use all four in parallel you should place the card in the first PCIe slot, where the graphics card typically resides. In the second slot you might only get 1 or 2 SSDs working.
So currently I run it with two disks in soft-RAID mode and it works without issues. What is really great here is the cooling of the disks.

Hello Harris
I’ve been busy configuring my PC with your suggested configuration.
my plotman config is this:

    tmp:
            - /mnt/temp1      # 2x 980 Pro 1TB RAID 0
            - /mnt/temp2      # 1x 970 Pro 1TB + 1x 980 Pro 1TB RAID 0
            - /mnt/temp3      # 1x Corsair MP600 2TB NVMe

    dst:
            - /mnt/buffer     # 2x ADATA SU800 1TB SSD RAID 0

    tmpdir_stagger_phase_major: 2
    tmpdir_stagger_phase_minor: 1
    tmpdir_stagger_phase_limit: 1
    tmpdir_max_jobs: 6
    global_max_jobs: 12
    global_stagger_m: 30
    polling_time_s: 20

    plotting:
            k: 32
            e: False          # Use -e plotting option
            n_threads: 8      # Threads per job
            n_buckets: 128    # Number of buckets to split data into
            job_buffer: 4500  # Per-job memory

the analyzer shows this after about 20 hours:
+-------+----+-------------+--------------+--------------+--------------+--------------+--------------+
| Slice | n  |   %usort    |   phase 1    |   phase 2    |   phase 3    |   phase 4    |  total time  |
+=======+====+=============+==============+==============+==============+==============+==============+
| x     | 47 | μ=100.0 σ=0 | μ=5.9K σ=633 | μ=4.1K σ=236 | μ=6.9K σ=605 | μ=472.0 σ=46 | μ=17.4K σ=1K |
+-------+----+-------------+--------------+--------------+--------------+--------------+--------------+

The total number of plots in 24 h was about 32–34, which is far from what you reported.
What do you think is the source of the problem?

Sinners are at least 20 characters since I see 2 dozen here

“Sinners are at least 20 characters since I see 2 dozen here”
I don’t get that…

Not sure, but the TLC-based 980 Pro is inferior to the MLC-based 970 Pro for sustained-write plotting workloads. Also, motherboards have different PCIe lane allocations, so check whether there’s a bottleneck there. You didn’t state your chipset or how your drives are connected.

The main thing to change is tmpdir_stagger_phase_limit: 1; this is holding up the queue, so try at least 4.
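In plotman.yaml terms that would look like the following (only the limit changes; the other stagger settings stay as posted):

```yaml
tmpdir_stagger_phase_major: 2
tmpdir_stagger_phase_minor: 1
tmpdir_stagger_phase_limit: 4   # was 1; lets more jobs proceed past the stagger phase
```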

If that is still not producing satisfactory results then try the following:

  1. If you have 64GB RAM then disable swap.
  2. Do a periodic manual TRIM with sudo fstrim -v /mountpoint
  3. Reduce threads to 4
  4. Remove RAID0 and plot to individual SSDs with XFS (sudo mkfs.xfs -m crc=0 /dev/nvmeXXX). Don’t combine different SSD models in RAID0.
  5. Ensure the latest firmware updates are installed on the SSDs.
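For item 2, the TRIM can be automated. A sketch of an hourly root crontab (the mountpoints and log path are placeholders; adjust for your drives):

```shell
# Root crontab (sudo crontab -e): trim the plotting SSDs hourly and log
# the output so you can verify it actually ran.
0 * * * * /sbin/fstrim -v /mnt/temp1 >> /var/log/fstrim.log 2>&1
0 * * * * /sbin/fstrim -v /mnt/temp2 >> /var/log/fstrim.log 2>&1
0 * * * * /sbin/fstrim -v /mnt/temp3 >> /var/log/fstrim.log 2>&1
```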

Thank you
I will test and let you know

Just so you know, it seems having the node and farmer running eats an unbelievable 8–10% of plotting potential on this CPU: stopping them takes me from about 49 daily plots to 56. That puts it very close to the potential of a 5900, which gets me 46 daily plots (and the 5900 machine has only 2 SSDs instead of 3, so that isn’t the limitation).

It has never been optimal to do anything other than plotting on a plotting machine, especially considering the easy addition of a cheap old PC, laptop, NUC or Raspberry Pi to perform node duties.

I’m not sure it’s the farmer anymore. I just did a manual TRIM and got 25-minute plots again. I’ve got TRIM on cron every hour, and I know it works because I pipe the output to a file and the file does change. I wonder if I’ll manage to tweak the script to trim in between plots, lol.
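For serial plotting, one hypothetical way to trim between plots instead of on a timer (the plot command, mountpoints, and log path below are all placeholders):

```shell
#!/bin/sh
# Hypothetical sketch: run fstrim after each plot rather than hourly.
# TRIM_CMD is overridable (e.g. for a dry run); mountpoints are placeholders.
TRIM_CMD="${TRIM_CMD:-sudo fstrim -v}"

plot_then_trim() {
    # "$@" is whatever command produces one plot
    "$@" || return 1
    $TRIM_CMD /mnt/temp1 >> "$HOME/fstrim.log" 2>&1
    $TRIM_CMD /mnt/temp2 >> "$HOME/fstrim.log" 2>&1
}

# Usage (placeholder command and paths):
# while true; do plot_then_trim chia_plot -t /mnt/temp1/ -d /mnt/buffer/; done
```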


What cooler are you using? I tried an Arctic 360 cooler and it would crash… switched to a Cooler Master ML360 and the crashing stopped.

Update! The new plotters are pushing the 5950X to new levels. Using identical hardware as before (link):

  1. pechy’s chiapos combined takes it to 58 plots per day with “classic” parallel plotting (24.8 minutes per plot).

     +-------+----+-------------+--------------+--------------+--------------+--------------+--------------+
     | Slice | n  |   %usort    |   phase 1    |   phase 2    |   phase 3    |   phase 4    |  total time  |
     +=======+====+=============+==============+==============+==============+==============+==============+
     | x     | 58 | μ=100.0 σ=0 | μ=5.6K σ=122 | μ=6.0K σ=117 | μ=5.6K σ=105 | μ=544.2 σ=41 | μ=17.7K σ=76 |
     +-------+----+-------------+--------------+--------------+--------------+--------------+--------------+
    
  2. madMAx takes it to 81 plots per day with serial plotting (temp1 on 2x P3600 RAID0, temp2 on 2x 970 PRO RAID0)

     Phase 1 took 466.472 sec
     Phase 2 took 314.143 sec
     Phase 3 took 258.27 sec, wrote 21877287047 entries to final plot
     Phase 4 took 27.5885 sec, final plot size is 108836088690 bytes        
     Total plot creation time was 1066.49 sec
    

= 17.77 minutes per plot :ok_hand:


What about bucket size and the final destination?
A single plot at a time?

I guess these are “predicted” numbers? How about moving plots? Do the 81 plots per day include moving them to the destination drive?

Don’t forget to enable PBO in the BIOS, because my mobo (MSI X570 Unify) defaults it to Auto and the BIOS doesn’t actually turn it on.
With PBO enabled I got better times.
Also get a good cooler, because it’s hell… :smiley:
