Swar Settings for Rig

Hey everyone,

I have the following build:

  • 5900x
  • 64GB Trident Z 3200 CL14
  • B550 Master (has bifurcation for up to 3 PCIe 4.0 x4 NVMe SSDs)
  • 1x Corsair MP600 2TB
  • 1x Adata S11 Pro 512GB
  • 6x 10TB WD Red Pro

From the starter guide I know that my bottleneck is the SSDs, which cap me at about 10 parallel plots.

My question is: How can I optimize how many plots to run at the same time?

I thought about running 2 jobs: the first one for the 2TB SSD and 5 HDDs, and the second for the slower SSD with 1 HDD.

So the settings would be:
Global: max_concurrent = 10

Job A (2TB):
max_plots = 8
temporary_directory = 2TB SSD
destination_directory = HDD 1-5
threads = 16
memory_buffer = 27200 (do I multiply 3400 by max_concurrent here? See the sanity check after this list)
max_concurrent = 8
max_concurrent_with_start_early = 8
max_for_phase_1 = 2 (since phase 1 carries the heaviest load, maybe cap it at 2?)
concurrency_start_early_phase = ? (set this to the recommended 4, or something else?)
concurrency_start_early_phase_delay = ? (I honestly don't know what to set this to; 1 hour?)
stagger_minutes = ? (my biggest question: I thought about starting at 60 and lowering it if plots finish faster than new ones are started)
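
If memory_buffer turns out to be per plot (the assumption behind my question above), my rough RAM math looks like this; treat it as a sketch, not gospel:

  memory_buffer = 3400              # MiB per plot (assumed per-plot, not per-job)
  # Job A: 3400 MiB x 8 plots = 27200 MiB
  # Job B: 3400 MiB x 2 plots =  6800 MiB (if Job B runs the remaining 2 of 10)
  # total: ~34000 MiB (~34 GB), comfortably inside 64 GB either way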

Would be great if I could get some insight. Anyone else with a similar setup who can provide some data?


I'm using the Windows GUI, so I'm no pro with configs, but go check the YouTube channel Coin Breakthrough.
His video might help you out.


Thanks for your suggestion, I'll check out his channel, though he doesn't seem to provide any insight on Swar Plot Manager.

Job A (2TB):
max_plots = 10
temporary_directory = 2TB SSD
destination_directory = HDD 1-5
threads = 4
memory_buffer = 3400
max_concurrent = 9
max_concurrent_with_start_early = 10
max_for_phase_1 = 3
concurrency_start_early_phase = 4
concurrency_start_early_phase_delay = 1
stagger_minutes = 60

I'm building a similar system, other than one additional 2TB M.2. I am planning to start with the above settings.


Hey Jeff, thanks much for your settings, will check them out!

One question regarding threads: is the number on a per-plot basis or for the whole process (i.e., across your 9 concurrent plots)?

Going by the forum's guide, we should calculate (cores + threads) / 2, which for the 5900X (12 cores / 24 threads) is 18.

So with 1 plot = 2 threads, shouldn't it be 18 threads total, then?

It's on a per-plot basis. Each phase 1 plot will use 4 threads, and phases 2-4 use only 1 thread each.
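
To make that concrete, a rough worst-case sketch with the settings I posted above, assuming phases 2-4 really stay single-threaded:

  # max_concurrent = 9, max_for_phase_1 = 3, threads = 4
  # phase 1:    3 plots x 4 threads = 12 threads
  # phases 2-4: 6 plots x 1 thread  =  6 threads
  # worst case = 18 of the 5900X's 24 hardware threads,
  # which matches the guide's (cores + threads) / 2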


Try this:

- name: JobA
  max_plots: 890
  farmer_public_key:
  pool_public_key:
  temporary_directory: ssd A
  temporary2_directory:
  destination_directory: HDDs
  size: 32
  bitfield: true
  threads: 4
  buckets: 128
  memory_buffer: 3800
  max_concurrent: 8
  max_concurrent_with_start_early: 9
  stagger_minutes: 60
  max_for_phase_1: 4
  concurrency_start_early_phase: 4
  concurrency_start_early_phase_delay: 0
  temporary2_destination_sync: false

- name: JobB
  max_plots: 890
  farmer_public_key:
  pool_public_key:
  temporary_directory: ssd B
  temporary2_directory:
  destination_directory: HDDs
  size: 32
  bitfield: true
  threads: 4
  buckets: 128
  memory_buffer: 3800
  max_concurrent: 2
  max_concurrent_with_start_early: 2
  stagger_minutes: 60
  max_for_phase_1: 1
  concurrency_start_early_phase: 4
  concurrency_start_early_phase_delay: 0
  temporary2_destination_sync: false
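
Note that ssd A, ssd B, and HDDs are placeholders for your real paths. If I remember the config format right, destination_directory also accepts a list and Swar rotates new plots across it, e.g. (drive letters are just examples):

  destination_directory:
    - F:\Plots
    - G:\Plots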

Thanks a bunch @majo and @JeffJN

My settings are now:

max_concurrent: 10
max_for_phase_1: 4
minimum_minutes_between_jobs: 3

- name: SSD1
  max_plots: 999
  farmer_public_key:
  pool_public_key:
  temporary_directory:
    - D:\Plotter
  temporary2_directory:
  destination_directory:
    - F:\Plots
    - G:\Plots
    - H:\Plots
    - I:\Plots
  size: 32
  bitfield: true
  threads: 4
  buckets: 128
  memory_buffer: 3408
  max_concurrent: 7
  max_concurrent_with_start_early: 7
  initial_delay_minutes: 0
  stagger_minutes: 60
  max_for_phase_1: 3
  concurrency_start_early_phase: 4
  concurrency_start_early_phase_delay: 0
  temporary2_destination_sync: false
  exclude_final_directory: false
  skip_full_destinations: true
  unix_process_priority: 10
  windows_process_priority: 32
  enable_cpu_affinity: false
  cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

- name: SSD2
  max_plots: 999
  farmer_public_key:
  pool_public_key:
  temporary_directory:
    - E:\Plotter
  temporary2_directory:
  destination_directory:
    - J:\Plots
    - K:\Plots
  size: 32
  bitfield: true
  threads: 4
  buckets: 128
  memory_buffer: 3408
  max_concurrent: 3
  max_concurrent_with_start_early: 3
  initial_delay_minutes: 0
  stagger_minutes: 60
  max_for_phase_1: 1
  concurrency_start_early_phase: 4
  concurrency_start_early_phase_delay: 0
  temporary2_destination_sync: false
  exclude_final_directory: false
  skip_full_destinations: true
  unix_process_priority: 10
  windows_process_priority: 32
  enable_cpu_affinity: false
  cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]
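
A quick consistency check on how I read the global limits against the per-job ones (my interpretation; worth verifying against the Swar README):

  # global max_concurrent: 10  =  SSD1 (7) + SSD2 (3)  -> both jobs can run at full width
  # global max_for_phase_1: 4  =  SSD1 (3) + SSD2 (1)  -> the phase 1 caps add up exactly
  # worst-case RAM: 10 plots x 3408 MiB = 34080 MiB (~34 GB) of 64 GB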

1st SSD is the MP600 Pro and much faster; it gets 4 HDDs and 3 concurrent plots in phase 1, with 7 plots in total.

2nd SSD is the slower ADATA S11 Pro, which loses steam once its SLC cache is full, so it gets 1 plot in phase 1 and 2 HDDs.
