Chia Swar optimizations with a 3900X and 4 × 1 TB SSDs

Greetings plotters :slight_smile:
I have this hardware:
ASUS X570 motherboard
4 × 1 TB Samsung 970 Pro and Evo NVMe SSDs (not RAID)
8 × HDDs
Ryzen 9 3900X CPU (12C/24T)
64 GB RAM

I use Swar with the settings below and get almost 18-20 plots a day. I'm running on Linux.

If I use a 60-minute stagger, I limit myself to 24/day. That's just the math, I haven't tried that setting.
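That ceiling is just minutes per day divided by the stagger, ignoring everything else; a quick sketch of the arithmetic:

    # Upper bound on plot starts per day imposed by the stagger alone
    # (ignores max_concurrent, max_for_phase_1 and actual plot times).
    MINUTES_PER_DAY = 24 * 60
    stagger_minutes = 60
    print(MINUTES_PER_DAY // stagger_minutes)  # 24 starts/day at most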

So I need some optimizations; can anybody help a bit? :slight_smile:

Global:

max_concurrent: 12
max_for_phase_1: 4
minimum_minutes_between_jobs: 8

  - name: 970pro
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: /mnt/ssd2
    temporary2_directory:
    destination_directory: /mnt/hdd2
    size: 32
    bitfield: true
    threads: 8
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 3
    max_concurrent_with_start_early: 4
    initial_delay_minutes: 1
    stagger_minutes: 10
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: true
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

  - name: 970 evo
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: /mnt/ssd1
    temporary2_directory:
    destination_directory: /mnt/hdd1
    size: 32
    bitfield: true
    threads: 8
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 3
    max_concurrent_with_start_early: 4
    initial_delay_minutes: 1
    stagger_minutes: 10
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: true
    exclude_final_directory: false
    skip_full_destinations: true
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

First thing that comes to mind is enabling TRIM on those drives. If you search for "trim script" here on the forum you'll find some that enable it every hour (I'm on Windows myself).
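On Linux, this is roughly what those trim scripts boil down to; a minimal sketch, assuming the temp drives are mounted at /mnt/ssd1-/mnt/ssd4 and the script runs as root (systemd's fstrim.timer is the usual hands-off alternative):

    # Minimal sketch: trim the plotting temp SSDs once an hour on Linux.
    import subprocess
    import time

    MOUNTS = ["/mnt/ssd1", "/mnt/ssd2", "/mnt/ssd3", "/mnt/ssd4"]  # assumed mount points
    INTERVAL_SECONDS = 60 * 60  # once per hour, as suggested above

    while True:
        for mount in MOUNTS:
            # fstrim -v reports how much was discarded; a failure just prints an error
            subprocess.run(["fstrim", "-v", mount], check=False)
        time.sleep(INTERVAL_SECONDS)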

Your Swar settings look OK to me, so that's why I'm thinking system settings rather than Swar settings.

Very similar system to mine, doing 30+ plots/day, so let me just recap some of the things that helped me get it running properly.

  • Update BIOS
  • Update chipset drivers
  • Enable CPU affinity (don't ask me why)

Also, I noticed that things slow down when the drives get too full.
If you have 2 Evos + 2 Pros, I would suggest trying 2× RAID 0 and then running 5 plots per RAID array.

Otherwise I suggest doing a few fewer plots, just to see how that goes: run just 2 on each drive, or try spreading it out a bit more.
Still, like I said, with the settings you have you should be getting more plots I think, so system/OS settings might be the culprit.

This is close to what I am running:

max_concurrent: 9
max_for_phase_1: 4
minimum_minutes_between_jobs: 0   


  - name: 970pro
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory:
      - /mnt/ssd1
      - /mnt/ssd2
      - /mnt/ssd3
      - /mnt/ssd4
    temporary2_directory:
    destination_directory:
      - /mnt/hdd1
      - /mnt/hdd2
      - /mnt/hdd3
      - /mnt/hdd4
    size: 32
    bitfield: true
    threads: 8
    buckets: 128
    memory_buffer: 4600
    max_concurrent: 9
    max_concurrent_with_start_early: 9
    initial_delay_minutes: 0
    stagger_minutes: 35
    max_for_phase_1: 4
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    enable_cpu_affinity: true
    cpu_affinity: [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 ]

Thanks for all the feedback. Is that config right for my setup? I have 4 SSDs, so why should I use a stagger? Also, max_for_phase_1 is 4. Isn't the point of waiting to prevent too many parallel operations in phase 1? A new phase 1 won't start until another phase 1 ends, so I thought I don't need to wait. Am I right?

I obviously do not want RAID, because I may want to change drives (they are 2 different models), so it is better to use the SSDs separately.

Another question: is there any difference in separating the jobs?

max_concurrent is 9, but I think my setup can handle maybe 10-14:
  • Temp space: ~250 GB per plot × 4 = 1 TB, so 4 plots per drive × 4 drives = 16
  • RAM: 3400 MB × 16 = ~54,400 MB, which fits in 64 GB
  • CPU: (12 cores + 24 threads) / 2 = 18
My reference is: How Many Plots Can I Make a Day? – The Chia Farmer
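As a quick sketch of that arithmetic (the ~250 GB temp per plot, 3400 MB RAM per plot, and the (cores + threads) / 2 rule are rough rules of thumb from that guide, not exact figures):

    # Rough parallel-plot ceiling for this box, using the rule-of-thumb figures above.
    TEMP_PER_PLOT_GB = 250      # assumed temp space per k32 plot
    RAM_PER_PLOT_MB = 3400      # per-plot RAM figure used above
    total_temp_gb = 4 * 1000    # 4 x 1 TB NVMe temp drives
    total_ram_mb = 64 * 1024    # 64 GB RAM
    cores, threads = 12, 24     # Ryzen 9 3900X

    temp_limit = total_temp_gb // TEMP_PER_PLOT_GB  # 16
    ram_limit = total_ram_mb // RAM_PER_PLOT_MB     # 19
    cpu_limit = (cores + threads) // 2              # 18

    # Temp space is the binding constraint, so roughly 16 plots in parallel at most.
    print(min(temp_limit, ram_limit, cpu_limit))    # 16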

Yes, you're right, but I find that setting a stagger helps balance out the total system load better.
(Phase 1 is CPU and write heavy, phase 2 uses the most disk space, phase 3 is the most I/O heavy.)

I find it sometimes easier to just use one job with all the disks in one list, because it is easier to set the totals and the timings between plots.

Yes, you could also do 20 at a time if you want, but the question is what the result will be in plots/day.
I see very little difference between running 8, 10, 12, or 14 plots. It all turns out around 30 plots/day for me.

I think the most important thing is to start a bit lower and then see what you can add to that. Running 8-9 plots, you should be hitting 28-30 plots per day. If it stays well below that, then something is up with your system that you need to fix first.

Thanks for the feedback! Your answers have satisfied and informed me. I will share the results within the next 24 hours.

My final config is:
max_concurrent: 9
max_for_phase_1: 4
minimum_minutes_between_jobs: 0

  - name: 970pro
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory:
      - /mnt/ssd1
      - /mnt/ssd2
      - /mnt/ssd3
      - /mnt/ssd4
    temporary2_directory:
    destination_directory:
      - /mnt/hdd1
      - /mnt/hdd2
      - /mnt/hdd3
      - /mnt/hdd4
    size: 32
    bitfield: true
    threads: 8
    buckets: 128
    memory_buffer: 4600
    max_concurrent: 9
    max_concurrent_with_start_early: 9
    initial_delay_minutes: 0
    stagger_minutes: 35
    max_for_phase_1: 4
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: true
    enable_cpu_affinity: true
    cpu_affinity: [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23 ]