Optimizing plotting on an AMD Ryzen 5950X (or any other 16c/32t CPU)

Hi there, may I know your NVMe brand and all the details please?

I'm also running a 5950X with 64GB RAM.
1x 2TB NVMe Seagate BarraCuda Q5 (PCIe Gen3)
2x 1TB NVMe XPG
Using 2 threads per plot

I'm also using Swar's plot manager.

On the 2TB NVMe I run 6 max concurrent with a 30-minute stagger,
and on the 2x 1TB NVMes 3 max concurrent with a 30-minute stagger.

Plotting is not going well; the 2TB NVMe sits at 100% utilization almost all the time, even while only the first plot is running.

Do you have any idea what the problem is? Can anyone give any recommendations?

Thanks

Hi

I'm using a Sabrent 2TB NVMe:

https://www.amazon.co.uk/dp/B07MTQTNVR

If you're filling up your NVMe, have you checked whether you have any old temp files still on it?

As far as I'm aware there's no server version of Chia, just the CLI. I believe @Harris is talking about the server version of Clear Linux, meaning installing it without the desktop GUI and getting a bump in speed because you're running less overhead.

It depends on your comfort with Linux, though. I can do 90% of what I want through the CLI, but for when I get stuck it's still nice to have the GUI to fall back on. Maybe when I get a little more proficient I can move to a similar setup. I've also toyed with the idea of switching to Clear Linux; it looks dope.

Probably the best part about Chia so far for me has been getting better with the CLI and picking up Linux/networking skills I didn't have before. Even if I just break even, I suppose it'll have been worth it for that…


I use this as a .bat file (run as admin):

@echo off
:loop
REM Move any finished plots from each temp drive to the destination drive
robocopy X:\ D:\ /mov *.plot
robocopy Y:\ D:\ /mov *.plot
robocopy Z:\ D:\ /mov *.plot
REM Wait 30 seconds, then check again
timeout /t 30
goto loop

Save it as a .bat file and run it as administrator.
X, Y, and Z are my temp drives, and D is my destination drive.
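
For the Linux plotters in the thread, a rough bash equivalent would look something like this (just a sketch: the /ssd/ssd0-style temp mounts and the /dest/dest12 destination are placeholders, so swap in your own paths):

#!/bin/bash
# Sweep finished plots from each temp drive to the destination every 30 seconds.
# The plotter only renames a file to *.plot once it is complete, so matching *.plot is safe.
while true; do
    for tmp in /ssd/ssd0 /ssd/ssd1 /ssd/ssd2; do
        mv -n "$tmp"/*.plot /dest/dest12/ 2>/dev/null
    done
    sleep 30
done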


I format the NVMe before running it.

Maybe the Seagate drive just isn't a great fit for Chia plotting? Is anyone else having the same issues?

I'm running a:
5950X
64GB RAM (3600MHz)
1x 2TB NVMe Samsung 980 Pro
1x 1TB NVMe Samsung 980

I'm running 10 plots: 7 on the 2TB SSD and 3 on the 1TB SSD, with 3 threads.
Does anyone have a better suggestion?

Try Swar Plot Manager.

Switch off journaling and enable dioread_nolock on top of that (both ext4 options); that takes XFS out of the picture.
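
On an ext4 temp drive that looks roughly like this (a sketch only; /dev/nvme0n1p1 and /mnt/plot-temp are placeholders for your own device and mount point):

# The journal has to be removed while the filesystem is unmounted
sudo umount /mnt/plot-temp
sudo tune2fs -O ^has_journal /dev/nvme0n1p1
# Remount with dioread_nolock (noatime is a common extra for plot temp drives)
sudo mount -t ext4 -o noatime,dioread_nolock /dev/nvme0n1p1 /mnt/plot-temp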

+1. Does anyone have a good Swar Plot Manager config for 40+ plots?

Here's my 5950X config, for what it's worth. I use 14x 300GB SAS drives (7 pairs in RAID 0) plus 5x Samsung 980 NVMe drives. I'm usually between 55 and 60 plots per day with the config below. Unfortunately I haven't had any time to tweak anything: the CPU runs constantly over 90% and the CPU load ranges from 65 to 95+, so it could definitely use some serious tuning to even things out a bit:

max_concurrent: 50
max_for_phase_1: 14
minimum_minutes_between_jobs: 15

  - name: 0-Samsung_1TB
    max_plots: 999
    temporary_directory: /ssd/ssd0
    temporary2_directory:
    destination_directory:
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 3
    max_concurrent_with_start_early: 4
    initial_delay_minutes: 0
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 1-Samsung_1TB
    max_plots: 999
    temporary_directory: /ssd/ssd1
    temporary2_directory:
    destination_directory:
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 3
    max_concurrent_with_start_early: 4
    initial_delay_minutes: 5
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 2-Samsung_1TB
    max_plots: 999
    temporary_directory: /ssd/ssd2
    temporary2_directory:
    destination_directory:
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest12
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 3
    max_concurrent_with_start_early: 4
    initial_delay_minutes: 10
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 3-Samsung_2TB
    max_plots: 999
    temporary_directory: /ssd/ssd3
    temporary2_directory:
    destination_directory:
      - /dest/dest14
      - /dest/dest15
      - /dest/dest12
      - /dest/dest13
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 3
    max_concurrent_with_start_early: 4
    initial_delay_minutes: 15
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 4-Samsung
    max_plots: 999
    temporary_directory: /ssd/ssd4
    temporary2_directory:
    destination_directory:
      - /dest/dest15
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 3
    max_concurrent_with_start_early: 4
    initial_delay_minutes: 20
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 5-SAS_1
    max_plots: 999
    temporary_directory: /ssd/ssd5
    temporary2_directory:
    destination_directory:
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 1
    max_concurrent_with_start_early: 2
    initial_delay_minutes: 0
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 5-SAS_2
    max_plots: 999
    temporary_directory: /ssd/ssd6
    temporary2_directory:
    destination_directory:
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 1
    max_concurrent_with_start_early: 2
    initial_delay_minutes: 10
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 5-SAS_3
    max_plots: 999
    temporary_directory: /ssd/ssd7
    temporary2_directory:
    destination_directory:
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 1
    max_concurrent_with_start_early: 2
    initial_delay_minutes: 20
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 5-SAS_4
    max_plots: 999
    temporary_directory: /ssd/ssd8
    temporary2_directory:
    destination_directory:
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 1
    max_concurrent_with_start_early: 2
    initial_delay_minutes: 30
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 5-SAS_5
    max_plots: 999
    temporary_directory: /ssd/ssd9
    temporary2_directory:
    destination_directory:
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 1
    max_concurrent_with_start_early: 2
    initial_delay_minutes: 40
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 5-SAS_6
    max_plots: 999
    temporary_directory: /ssd/ssd10
    temporary2_directory:
    destination_directory:
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 1
    max_concurrent_with_start_early: 2
    initial_delay_minutes: 50
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
  - name: 5-SAS_7
    max_plots: 999
    temporary_directory: /ssd/ssd11
    temporary2_directory:
    destination_directory:
      - /dest/dest12
      - /dest/dest13
      - /dest/dest14
      - /dest/dest15
      - /dest/dest16
      - /dest/dest17
    size: 32
    bitfield: true
    threads: 5
    buckets: 128
    memory_buffer: 8000
    max_concurrent: 1
    max_concurrent_with_start_early: 2
    initial_delay_minutes: 60
    stagger_minutes: 60
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

Man, this looks like the best way to go with the 5950X: use as many drives as possible so that you can use all of the I/O power available. How much RAM do you have? Are your 980s regular or Pro? What model are your SAS drives? 55-60 plots a day is the dream, bro.

980 Pros (3x 1TB & 2x 2TB), 128GB of memory (it uses just over half of that), and the drives are $14.95 eBay specials: Dell 300GB 10K 6Gbps 2.5-inch SAS HDD PGHJG (renewed). I just need to stagger things a bit better so the CPU load doesn't shoot to the moon. When it gets too high, plots start slowing down, but it all smooths back out eventually. If I could clean up the spikes I think it would do 60+ pretty easily. I also eventually want to see what happens if I use 3 or more of the SAS drives in RAID 0 instead of 2.

CPU Usage: 92.5%
RAM Usage: 64.95/125.80 GiB (52.6%)
Plots Completed Yesterday: 57
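
For anyone wanting to try the same RAID 0 pairing, a rough mdadm sketch (device names and the mount point are placeholders, so check yours with lsblk first):

# Stripe two of the 300GB SAS drives into a single plotting temp target
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /ssd/ssd5
sudo mount /dev/md0 /ssd/ssd5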


Thank you for your input.

Are you using Windows or Linux?

As I calculate it, the threads used for your setup come out to 35, but you only have 32 threads available. How does that not max out your CPU or make it hit 100%?

I'm running Ubuntu Linux 20.04.2 LTS. The CPU can't get to 100% because the CPU load is so high. My CPU load should be around 30, not 65-95, which means I have too many processes waiting for the CPU due to my overcommit of cores/threads. Some of it is iowait, but that's not terrible, all things considered. If I had fewer jobs running in phase 1, the CPU would be much happier. I just need some extra time to analyze the timings, play with them, and tweak everything.
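
If anyone wants to watch the same numbers on their own box, load average and iowait are easy to check from the CLI (iostat and mpstat come from the sysstat package):

# 1/5/15-minute load averages
uptime
# Per-CPU utilisation including %iowait, refreshed every 5 seconds
mpstat -P ALL 5
# Per-device stats; a temp drive stuck near 100% in %util is the bottleneck
iostat -x 5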

I have these specs:
2x Intel Xeon Gold 6248R (48 cores / 96 threads total)
256GB RAM
16x 1TB Samsung 980
I start 48 plots in parallel, but each plot takes a very long time (>20 hours). Is anything wrong?
My plot settings:
3 plots per drive, 5,000MB RAM, 4 threads, no stagger
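
A rough back-of-envelope on that setup, assuming the usual ~240 GiB of temp space per k=32 plot: 48 plots × 4 threads = 192 threads against 96 available, and 48 × 5,000 MB ≈ 240 GB of RAM against 256 GB, so the CPU is heavily oversubscribed and memory is nearly full, and with no stagger all 48 jobs hit phase 1 at the same time. Temp space itself is fine (3 × ~240 GiB ≈ 720 GiB per 1 TB drive), so staggering the starts and capping phase 1 concurrency is usually the first thing to try.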

I have also managed to do 50+ with a 5950X, although sometimes a plot freezes (Memtest passed OK).

It is much more efficient to use a 5800X: it can make 35 plots and the rig costs a lot less. Here are my settings in Swar (I can't remember if I changed the stagger, but it can only be less):

5950X - 64GB @ 3600 CL 18 - 2 x 2TB MP600 + 2 x 2TB Gigabyte Gen.4 (Copper one)

  - name: Z-600
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: Z:
    temporary2_directory:
    destination_directory: D:
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 5
    max_concurrent_with_start_early: 5
    initial_delay_minutes: 0
    stagger_minutes: 30
    max_for_phase_1: 2
    concurrency_start_early_phase: 20
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 128
    enable_cpu_affinity: true
    cpu_affinity: [ 16, 17, 18, 19, 20, 21, 22 ]

  - name: Y-600
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: Y:
    temporary2_directory:
    destination_directory: F:
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 5
    max_concurrent_with_start_early: 5
    initial_delay_minutes: 0
    stagger_minutes: 30
    max_for_phase_1: 2
    concurrency_start_early_phase: 20
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 128
    enable_cpu_affinity: true
    cpu_affinity: [ 23, 24, 25, 26, 27, 28, 29 ]

  - name: X-Gig
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: X:
    temporary2_directory:
    destination_directory: E:
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 5
    max_concurrent_with_start_early: 5
    initial_delay_minutes: 0
    stagger_minutes: 30
    max_for_phase_1: 2
    concurrency_start_early_phase: 20
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 128
    enable_cpu_affinity: true
    cpu_affinity: [ 0, 1, 2, 3, 4, 5, 6, 7 ]

  - name: W-Gig
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: W:
    temporary2_directory:
    destination_directory: G:
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 5
    max_concurrent_with_start_early: 5
    initial_delay_minutes: 0
    stagger_minutes: 30
    max_for_phase_1: 2
    concurrency_start_early_phase: 20
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 128
    enable_cpu_affinity: true
    cpu_affinity: [ 8, 9, 10, 11, 12, 13, 14, 15 ]


Is the Ubuntu kernel better than Fedora, Unity, LTS, or Clear Linux (desktop or server)? I have an ASUS X570 TUF Gaming Plus motherboard with an AMD Ryzen 9 5900X processor. Are you using the official Chia blockchain software or the Swar platform?

If you're using a 5950X, you should upgrade to 21.04 at all costs to get the performance improvements specifically for AMD CPUs in kernel 5.11.
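
If you're on 20.04, the standard route is below (back up first; upgrading to a non-LTS release needs Prompt=normal in /etc/update-manager/release-upgrades):

# Check the current kernel: 20.04 shipped 5.4, 21.04 ships 5.11
uname -r
# Upgrade the release once Prompt=normal is set
sudo do-release-upgrade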


Which motherboard are you using?

Aorus Elite X570. If I had thought more about it, I would have purchased another one.