Plots stop at 96.25%

I am getting this weird result on my 3970X with 128 GB RAM, plotting to three 1 TB 970 Pros and an MP600 Pro. Three plots stopped at 96.25% for the second time: I deleted my first plotting attempt on this machine, and now it has just happened again!

The other plots are still running. Any idea what caused this?

Here are my Swar settings:

  - name: 970 a
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: D:\
    temporary2_directory:
      - F:\
    destination_directory: 
      - F:\
    size: 32
    bitfield: true
    threads: 6
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 4
    max_concurrent_with_start_early: 4
    stagger_minutes: 40
    max_for_phase_1: 2
    concurrency_start_early_phase: 3
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false

  - name: 970 b
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: K:\
    temporary2_directory:
      - F:\
    destination_directory: 
      - F:\
    size: 32
    bitfield: true
    threads: 6
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 4
    max_concurrent_with_start_early: 4
    stagger_minutes: 40
    max_for_phase_1: 2
    concurrency_start_early_phase: 3
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false

  - name: 970 c
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: L:\
    temporary2_directory:
      - F:\
    destination_directory: 
      - F:\
    size: 32
    bitfield: true
    threads: 6
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 4
    max_concurrent_with_start_early: 4
    stagger_minutes: 40
    max_for_phase_1: 2
    concurrency_start_early_phase: 3
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false

  - name: MP 600
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: E:\
    temporary2_directory:
      - F:\
    destination_directory: 
      - F:\
    size: 32
    bitfield: true
    threads: 6
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 4
    max_concurrent_with_start_early: 4
    stagger_minutes: 40
    max_for_phase_1: 2
    concurrency_start_early_phase: 3
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false

My first guess would be that you are overfilling your F: drive by using it as a temp 2 directory.


And that’s exactly what it is… not really overfilling it, but overloading it, because I am using an external SSD.

I would get rid of the temporary2_directory and use only the first temporary directory. It would also be better to add some delay between jobs using the global setting minimum_minutes_between_jobs; I would set it to 15–20 minutes so it isn’t overloading the drive. Something like the sketch below.
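For illustration, here is a minimal sketch of that change, assuming the usual Swar config.yaml layout with a global section above the jobs list (the key names mirror your posted job entries; only minimum_minutes_between_jobs is added). The first job would look like this, and the other three would change the same way:

global:
  # existing global values stay as they are; this just adds the
  # 15-20 minute delay between job starts suggested above
  minimum_minutes_between_jobs: 15

jobs:
  - name: 970 a
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: D:\
    temporary2_directory:        # left empty: the whole plot is built on D:\
    destination_directory:
      - F:\                      # F:\ now only receives the finished ~100 GB plot file
    size: 32
    bitfield: true
    threads: 6
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 4
    max_concurrent_with_start_early: 4
    stagger_minutes: 40
    max_for_phase_1: 2
    concurrency_start_early_phase: 3
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false

That way each NVMe handles its own plot’s temp I/O, and the external F: drive only sees the final copy.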


You have nine processes fighting to write at least 100 GB each, while another three are trying to read back over 100 GB each, all at the same time, to and from that poor F: drive. It’s a wonder it hasn’t just melted or exploded by now.