Optimizing plotting on an AMD Ryzen 5950x (or any other 16c/32t CPU)

What you’re saying makes sense to me, and your observations about plot times and speed degradation are consistent with what I see. But I’m not too concerned with individual plot time.

My thinking is that our total throughput will be limited by our stagger time in an otherwise unconstrained system, correct? For example, if the stagger time is set to 1 hour, it doesn’t matter whether each plot finishes in 10 seconds or 10 hours; the system will not exceed 24 plots in a 24-hour period.

If we are targeting 50/day, any stagger value greater than 28.8 minutes guarantees that the system will fall short of our target.
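
The arithmetic behind that 28.8-minute figure is just the day length divided by the target; a quick sketch in Python to make the relationship explicit:

    # Ceiling on plots/day if a new plot can start at most once per stagger interval.
    def max_plots_per_day(stagger_minutes):
        return 24 * 60 / stagger_minutes

    print(max_plots_per_day(60))    # 24.0 -- a 1-hour stagger caps the system at 24/day
    print(max_plots_per_day(28.8))  # 50.0 -- the break-even stagger for a 50/day target
    print(max_plots_per_day(30))    # 48.0 -- a 30-minute stagger caps it at 48/day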

Does that make sense, or perhaps I am misunderstanding something?

2 Likes

Sure, I can follow that train of thought. :slight_smile:

No, I’m not following; the stagger only affects the start time of the first plot, and all subsequent plots kick off immediately afterwards. I think we have different understandings of the word “stagger”. For me it is a one-time value that determines how offset the plotters are from each other, and it is only set once, forever, e.g. “plotter 5 will be on phase 1 while plotter 6 is on phase 3 – they’ll always be four hours apart”.

There might be ways of applying a stagger every single time a plotter begins a plot, but I personally only use stagger values when starting the plotters initially.
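
To make the two readings of “stagger” concrete, here is a rough sketch with made-up numbers (not anyone’s actual config): interpretation A is a one-time offset between plotters, interpretation B is a minimum delay between any two job starts.

    # Interpretation A: one-time offset. Plotter 1 starts at t=0, plotter 2 at t=offset;
    # after that each plotter starts its next plot as soon as the previous one finishes.
    offset, plot_time = 240, 600                            # minutes (hypothetical)
    plotter1 = [n * plot_time for n in range(3)]            # [0, 600, 1200]
    plotter2 = [offset + n * plot_time for n in range(3)]   # [240, 840, 1440]

    # Interpretation B: per-job stagger (the plotman/Swar sense). A new job may start
    # no sooner than `stagger` minutes after the previous job started.
    stagger = 30
    job_starts = [n * stagger for n in range(5)]            # [0, 30, 60, 90, 120]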

Hmm, now I’m confused about how you’re staggering plots. Do your plots start in a way that is consistent with Quindor’s screenshot from post #112? Or are there times when you have more than one plot starting at once?

In particular, the “wall” column in plotman reports the time a plot has been running. So for plots #0-5 we can see his system was unconstrained and each plot was offset by exactly 30 minutes, his stagger time. Plots 6-10 have some additional variability in the timing offset, presumably due to some secondary plot limit (indicating a minor system constraint).

So in this case, with a stagger of 30 minutes, Quindor would be able to achieve 48 plots/day as an absolute maximum. In practice it looks like he would land just shy of 48, because some plots (i.e. plot 9) are having to wait up to 35 minutes before starting.
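
Put differently, the effective rate is set by the average interval between plot starts rather than the nominal stagger; a rough illustration (the intervals below are made up, not read off the screenshot):

    # Effective plots/day from observed gaps between plot starts.
    intervals_min = [30, 30, 30, 30, 30, 32, 31, 33, 35, 31]
    avg = sum(intervals_min) / len(intervals_min)   # 31.2 minutes
    print(24 * 60 / 30)    # 48.0 -- ceiling with a perfect 30-minute stagger
    print(24 * 60 / avg)   # ~46.2 -- effective rate once some plots wait longer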

That is not correct; it will always wait until its stagger time is up and then check the other values to see if it’s allowed to start a new job. If not, it waits until it’s allowed to, and then the stagger starts counting again. Check my stream screenshot again, you’ll see the stagger time is 1024s/1800s :slight_smile:

I see, so we’re using different versions of “stagger”. I should probably say “offset”; I let everything constantly plot, just offset from everything else by {x} minutes.

It’s possible I’ve been using the word wrong! Let’s clarify here.

2 Likes

Excellent data! Thanks for sharing! I just got my 4x 970 Pros in. I can only use 3 for now, until I get an NVMe-to-PCIe card; I will be testing 4 once that arrives. If I can plot 15 in parallel on 4x 1TB, I’ll stick with that, but if not, I’ll do 5x 1TB. I may do 5x 1TB anyway just to get more I/O bandwidth.

I just started using the 970 Pros, but they already seem to be ~25-30% faster than the 2TB FireCuda 520s I’ll be sending back. I will have to see the final results when the plots finish, and will post them here.
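
For what it’s worth, the temp-space math for 15 parallel plots on 4x 1TB is tight. A rough check, assuming the commonly quoted ~239 GiB of peak temp space per k=32 plot (with bitfield); the exact fit depends on how well the phases are staggered:

    TEMP_PER_PLOT_GIB = 239                    # commonly cited peak temp usage for k=32
    drive_gib = 1000 * 10**9 / 2**30           # a "1TB" drive is ~931 GiB
    per_drive = int(drive_gib // TEMP_PER_PLOT_GIB)
    print(per_drive)       # 3 plots per 1TB drive at peak temp usage
    print(per_drive * 4)   # 12 across four drives; 15 only works if the plots are
                           # staggered so they never all hit peak usage at once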

2 Likes

Anyone have a motherboard they recommend for the 5950x? Just got one today (Cambridge Microcenter has 20+)

I’m partial to anything with dual M.2 slots (make sure they’re full-bandwidth PCIe 4.0), 2.5 Gbps Ethernet, and 20 Gbps USB 3.2 or Thunderbolt.

2 Likes

I think I’ve settled on this one, the MSI MAG X570 Tomahawk - Specification MAG X570 TOMAHAWK WIFI | MSI Global - The Leading Brand in High-end Gaming & Professional Creation

I was eyeing the ASUS ROG Strix X570-E, but they’re all sold out or going for way too much.

Hi all, I’m looking for an ideal config for my setup.
It’s currently a Ryzen 5900X with 64 GB DDR4 3200 MHz on an X570 board.
I’ve installed Ubuntu 20.04 and am running the Swar plot manager.
My SSDs are:
2TB XPG Gammix NVMe
2TB XPG Gammix NVMe
1TB MSI Crucial NVMe
400GB Intel SSD
400GB Intel SSD

(The 2TB drives are in the M.2 slots on the motherboard and the 1TB is in a PCIe adapter.)
I would like to RAID the Intel drives using a controller in the future, as I get more of them and cables for the controller.

global:
  max_concurrent: 25

jobs:  
  - name: 1tb
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: /mnt/ssd4
    temporary2_directory:
    destination_directory: /mnt/hdd2/plot
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 3400
    max_concurrent: 4
    max_concurrent_with_start_early: 2
    stagger_minutes: 50
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 12
    temporary2_destination_sync: false

  - name: 2tb
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: /mnt/ssd2
    temporary2_directory:
    destination_directory: /mnt/hdd/plot
    size: 32
    bitfield: true
    threads: 8
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 6
    max_concurrent_with_start_early: 8
    stagger_minutes: 40
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 40
    temporary2_destination_sync: false

  - name: 2tb2
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: /mnt/ssd
    temporary2_directory:
    destination_directory: /mnt/hdd2/plot
    size: 32
    bitfield: true
    threads: 8
    buckets: 128
    memory_buffer: 6144
    max_concurrent: 6
    max_concurrent_with_start_early: 8
    stagger_minutes: 40
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 40
    temporary2_destination_sync: false
#test    
  - name: intel1
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: /mnt/ssd400
    temporary2_directory:
    destination_directory: /mnt/hdd2/plot
    size: 32
    bitfield: true
    threads: 2
    buckets: 128
    memory_buffer: 6144
    max_concurrent: 1
    max_concurrent_with_start_early: 1
    stagger_minutes: 10
    max_for_phase_1: 1
    concurrency_start_early_phase: 5
    concurrency_start_early_phase_delay: 12
    temporary2_destination_sync: false
    
  - name: intel2
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: /mnt/ssd4002
    temporary2_directory:
    destination_directory: /mnt/hdd2/plot
    size: 32
    bitfield: true
    threads: 2
    buckets: 128
    memory_buffer: 3400
    max_concurrent: 2
    max_concurrent_with_start_early: 2
    stagger_minutes: 200
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 30
    temporary2_destination_sync: false

1) I would be glad if you could point out the best way to configure this on the 2TB and 1TB drives.
2) My Intel SSDs: is there maybe a point in using them as a temp2 directory for the 2TB NVMes?
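
Before tuning the individual jobs, it may be worth tallying what the config can commit at peak (max_concurrent on every job at once). This is just a back-of-the-envelope sketch using the numbers above; thread oversubscription is normal since only phase 1 uses the full thread count, but the memory total is worth comparing against the installed 64 GB:

    # (threads, memory_buffer MiB, max_concurrent) copied from the config above.
    jobs = {
        "1tb":    (4, 3400, 4),
        "2tb":    (8, 4000, 6),
        "2tb2":   (8, 6144, 6),
        "intel1": (2, 6144, 1),
        "intel2": (2, 3400, 2),
    }
    threads = sum(t * c for t, _, c in jobs.values())
    mem_gib = sum(m * c for _, m, c in jobs.values()) / 1024
    print(threads)   # 118 threads requested vs 24 hardware threads on a 5900X
    print(mem_gib)   # ~85 GiB of buffers requested (worst case) vs 64 GB installed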

Hi @codinghorror! I saw your post on the Ryzen 5950X. I am struggling to get it to work on W10… can I ask you some things?

thanks!

@harris we also have confirmation that Linux is 10% faster at plotting than Windows, so that explains the inability to hit 50 in Windows as well. 10% of 50 is 5!

I saw that! Nice to see confirmation there.

So, will you be joining the world famous command-line club for your future builds? :grinning:

Hi, chia farmers!

Today I hit 55 plots in a day. I’ll share my settings.

  • OS: Clear Linux (which is made by Intel)
  • CPU: 5950x, PBO enabled (running at 4.7 GHz most of the time)
  • RAM: 32 GB x 4, 3200 MHz CL18
  • SSD: Samsung 980 Pro 2TB x 4 on an Asus Hyper M.2 card (RAID 0 using mdadm, filesystem: ext4)

plotman.yaml

scheduling:
        # Run a job on a particular temp dir only if the number of existing jobs
        # before tmpdir_stagger_phase_major tmpdir_stagger_phase_minor
        # is less than tmpdir_stagger_phase_limit.
        # Phase major corresponds to the plot phase, phase minor corresponds to
        # the table or table pair in sequence, phase limit corresponds to
        # the number of plots allowed before [phase major, phase minor]
        tmpdir_stagger_phase_major: 2
        tmpdir_stagger_phase_minor: 1
        # Optional: default is 1
        tmpdir_stagger_phase_limit: 5

        # Don't run more than this many jobs at a time on a single temp dir.
        tmpdir_max_jobs: 20

        # Don't run more than this many jobs at a time in total.
        global_max_jobs: 21

        # Don't run any jobs (across all temp dirs) more often than this, in minutes.
        global_stagger_m: 20

        # How often the daemon wakes to consider starting a new plot job, in seconds.
        polling_time_s: 20

plotting:
        k: 32
        e: False             # Use -e plotting option
        n_threads: 4         # Threads per job
        n_buckets: 128       # Number of buckets to split data into
        job_buffer: 6144

9 Likes

Even though the maximum number of temp jobs is limited to 20, I never exceed 15 concurrent jobs because each plot finishes within 6 hr ± 5 min.
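
That lines up with the config: with a 20-minute global stagger and ~6-hour plots, the steady-state job count and the daily ceiling fall out of two ratios (a rough model, not exact plotman behaviour):

    plot_hours, stagger_min = 6, 20
    in_flight = plot_hours * 60 / stagger_min   # 18.0 jobs in flight if nothing else limits
    ceiling = 24 * 60 / stagger_min             # 72.0 plots/day if every slot were used
    print(in_flight, ceiling)
    # Presumably tmpdir_stagger_phase_limit: 5 delays some starts beyond 20 minutes,
    # which is what holds concurrency nearer 15 and the achieved rate nearer 55/day.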

2 Likes

Well done! :clap: thanks for sharing this!

Very impressive, thanks for sharing! Is there any significance to the 6144 memory value? And does the 5950x seem to be the bottleneck with this setup?

In fact, I just want to utilize my huge amount of memory, that’s all.
I don’t think there’s a bottleneck, since CPU I/O wait time is under 10 sec. 4x 980 Pro RAID 0 rocks!
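
One plausible reading of the 6144 value: with global_max_jobs at 21, the job buffers sum to just under the 128 GB installed. A back-of-the-envelope check (worst case; not every job holds its full buffer at all times):

    job_buffer_mib, global_max_jobs = 6144, 21
    print(job_buffer_mib * global_max_jobs / 1024)   # 126.0 GiB vs 4 x 32 GB = 128 GB installed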

2 Likes

I have a 5950x and 64 GB RAM, with a single Force MP600 2TB due to limited SSD availability.

So I am running 8 plots in parallel, staggered, through plotman, allocating 4 threads each.

I can confirm 20-22k seconds per plot.
What amazes me is that I easily get 28 plots per day with just one NVMe drive.

I am expecting to receive two more drives tomorrow. But I believe this shows that plotting is less bottlenecked by the NVMe drives than I originally thought.
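
A quick consistency check with the figures quoted above: 8 parallel jobs at ~21k seconds each put the theoretical ceiling just above 30 plots/day, so 28 observed is most of the way there.

    concurrency = 8
    plot_seconds = (20_000 + 22_000) / 2        # ~21k s per plot, as reported
    print(concurrency * 86_400 / plot_seconds)  # ~32.9 plots/day ceiling; 28 observed is ~85%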