Swinging for the fences (the super-close ones)

First you need to look at how long it takes for your CPU to complete phase 1. For me it takes roughly 14100 seconds to complete phase 1 using 2 threads and 4 GB of RAM, so I would set the plots to restart every 480 minutes. Then I would queue them up as follows:
Time: 00:00:00 Plot 1 - Temp 1 SSD1, Temp 2 HDD1, x3
Time: 00:00:00 Plot 2 - Temp 1 SSD2, Temp 2 HDD2, x3
Time: 00:30:00 Plot 3 - Temp 1 SSD1, Temp 2 HDD3, x3
Time: 00:30:00 Plot 4 - Temp 1 SSD2, Temp 2 HDD4, x3
Time: 01:00:00 Plot 5 - Temp 1 SSD1, Temp 2 HDD5, x3
Time: 01:00:00 Plot 6 - Temp 1 SSD2, Temp 2 HDD6, x3
Time: 04:00:00 Plot 7 - Temp 1 SSD1, Temp 2 HDD7, x3
Time: 04:00:00 Plot 8 - Temp 1 SSD2, Temp 2 HDD8, x3
Time: 04:30:00 Plot 9 - Temp 1 SSD1, Temp 2 HDD9, x3
Time: 04:30:00 Plot 10 - Temp 1 SSD2, Temp 2 HDD10, x3
Time: 05:00:00 Plot 11 - Temp 1 SSD1, Temp 2 HDD11, x3

This was the best I could come up with using the GUI, and it should have some safeguards built in. If it works right it should give 33 plots a day. This is based on how long my CPU takes to do phase 1; the times need to be adjusted for what your system is doing. I would plug that in and then go learn how to automate the process before having to re-plot for pools. That would be my plan.
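For anyone wanting to sanity-check that arithmetic, here is a rough Python sketch (the numbers are mine from above - swap in your own phase 1 time and restart interval):

    # Back-of-envelope check on the schedule above. All numbers are assumptions
    # from my own machine: ~14100 s phase 1 under load, 480 min restart
    # interval, 11 queue slots each looping 3 times ("x3").
    PHASE1_MIN = 14100 / 60                       # ~235 min of phase 1 per plot
    CYCLE_MIN = 480                               # each slot restarts every 8 h
    SLOTS = 11                                    # queue entries in the table

    runs_per_slot_per_day = 24 * 60 / CYCLE_MIN   # = 3, hence the "x3"
    plots_per_day = SLOTS * runs_per_slot_per_day

    print(f"phase 1 ~ {PHASE1_MIN:.0f} min, plots/day ~ {plots_per_day:.0f}")  # 33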

1 Like

That base time for phase 1 is really just a starting point, because once you fire up all the other plots in parallel, that time triples. Really the only way to see what your system is capable of is to let-r-rip, settle in for about 24 hours, and then get real times for each phase with all the plots running in parallel. Then you can really tweak your stagger and number of parallel plots depending on available resources and overlap in the plot timing.
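If you want those real phase times pulled from your logs rather than eyeballed, a rough sketch like this works. It relies on the standard "Time for phase N = X seconds" lines the chia plotter writes; the log folder is a made-up example, so point it at wherever your logs actually land:

    import re
    from pathlib import Path

    # Hypothetical log folder - change to wherever your plotter logs are kept.
    LOG_DIR = Path("~/chialogs").expanduser()

    # The chia plotter prints "Time for phase N = X seconds" for each phase.
    PHASE_RE = re.compile(r"Time for phase (\d) = ([\d.]+) seconds")

    for log in sorted(LOG_DIR.glob("*.log")):
        phases = PHASE_RE.findall(log.read_text(errors="ignore"))
        if phases:
            # Report each phase's wall time in minutes for easy comparison.
            times = {f"P{p}": round(float(s) / 60) for p, s in phases}
            print(log.name, times)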

1 Like

I was just about to write what @WolfGT wrote - I've tried this a bunch of times, and ALL phases become totally unpredictable with respect to runtime as more start up, which means that eventually (the 3 times I tried it, it was < 8 hrs) too many critical phases snowball into one another and the plotter falls over :(

My base time for that plot was with 9-12 plots running at the same time on a 10850k. I should have clarified that.

Edit to add
My assumption is that the 11700k and a 10850k are going to perform at a similar level on a per-thread basis.

This is what makes Swar useful - it is actually reading your logs as jobs progress, so it knows what phase each job is at and acts accordingly. You can set a maximum number of jobs to have in phase 1 (allowing you to optimize your CPU) and a maximum number of jobs to run (allowing you to ensure that you don't overflow disk space).

It also lets you round-robin across all of your temp and destination disks, which helps you better balance resources and bandwidth for final copies.
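Not Swar's actual code, but the round-robin idea is simple enough to sketch in Python (the disk paths are made up, purely to show the mechanism):

    from itertools import cycle

    # Illustrative only: hand each new plot job the next temp/destination
    # pair in turn so no single disk absorbs all of the I/O.
    temp_disks = [f"/mnt/nvme{i}" for i in range(4)]       # hypothetical 4x NVMe
    dest_disks = [f"/mnt/hdd{i:02d}" for i in range(12)]   # hypothetical 12 HDDs

    temps, dests = cycle(temp_disks), cycle(dest_disks)

    def next_job_dirs():
        """Return the (temp, destination) pair for the next plot job."""
        return next(temps), next(dests)

    for _ in range(5):
        print(next_job_dirs())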

It took me a bit of time to figure it out, but doing so actually gave me better insight into how Chia uses resources. I really recommend trying it.

Here is my current status with my very similar 10850k machine.

Specs:
10850k 10c/20t
64GB DDR4
4x 1TB NVMe
12x 14TB external USB 3.0 drives

I have experimented with different stagger values and N plots in parallel. I read somewhere that the optimum number of parallel plots is (number of cores + number of threads) / 2, which equals 15 for me; another suggestion, the one I settled on, was number of cores × a factor of 1.3, which equals 13 for a 10850k (if your memory and disks can support this). Read about what your bottleneck is here: How Many Plots Can I Make a Day? – The Chia Farmer
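Those two rules of thumb, evaluated for this CPU (starting points, not gospel):

    # Two sizing heuristics quoted above, evaluated for a 10850k (10c/20t).
    cores, threads = 10, 20

    rule_a = (cores + threads) // 2   # (10 + 20) / 2 = 15 parallel plots
    rule_b = round(cores * 1.3)       # 10 * 1.3 = 13 parallel plots

    print(rule_a, rule_b)             # 15 13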

As you want to utilize as much of your CPU as possible over time, staggering is the way to go. Phase 1 is multi-threaded, the rest single-threaded. From my benchmarks P1 accounts for ~42% of the total plot time, so I tried to ensure that ~42% of all parallel plots were in P1: 0.42 × 12 ≈ 5 plots in P1. Then in Swar there is something called "start early phase" that lets you kick off one more plot while the last one is finishing up, copying, etc., so I added 1: 12 + 1 = 13 parallel plots with a max of 5 in P1.
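The same sizing written out as a quick check (the 42% share comes from my benchmarks - measure your own logs before trusting it):

    # Size the phase 1 cap so ~42% of parallel plots are in P1 at any moment.
    parallel_plots = 12
    p1_share = 0.42                   # fraction of total plot time spent in P1

    max_for_phase_1 = round(p1_share * parallel_plots)   # ~5 plots in P1
    max_with_start_early = parallel_plots + 1            # 13 incl. early start

    print(max_for_phase_1, max_with_start_early)         # 5 13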

I run 2 threads and 4000 MB per plot.

With this I get a somewhat continuous utilization of:
5 plots in P1 × 2 threads = 10 threads
6-7 plots in P2-P4 × 1 thread = 6-7 threads
= 16-17 threads in use at all times.

Stagger value of 45 min, calculated from a complete runtime of 9 h 40 min per plot: 580 min / 13 plots ≈ 45 min (is this a correct way to calculate?)
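For what it's worth, that stagger arithmetic written out:

    # Spread 13 plot starts evenly across one complete plot runtime.
    total_runtime_min = 9 * 60 + 40             # 580 min per plot
    parallel_plots = 13

    stagger_min = total_runtime_min / parallel_plots
    print(f"stagger ~ {stagger_min:.0f} min")   # ~45 min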

Right now I get ~30 plots a day with this Swar config (I know I could tweak this further, and run Linux…).

    - name: 4x1tb-nvme
      max_plots: 999
      farmer_public_key:
      pool_public_key:
      temporary_directory: D:\
      temporary2_directory:
      destination_directory: K:\plots
      size: 32
      bitfield: true
      threads: 2
      buckets: 128
      memory_buffer: 4000
      max_concurrent: 12
      max_concurrent_with_start_early: 13
      stagger_minutes: 45
      max_for_phase_1: 5
      concurrency_start_early_phase: 4
      concurrency_start_early_phase_delay: 0
      temporary2_destination_sync: false

4 Likes

Greg, I sincerely want to thank you for the push - after this post I literally thought "I don't want to tell the guy I didn't get Swar running…" and popped open the config and went through it again.

It makes A LOT more sense to me now than it did 4 days ago, interestingly enough.

Dead serious, if you hadnā€™t been pushing I wouldnā€™t have done that so it means a lot.

Lemme know if this passes the eyeball-sniff-test!
(REMOVED - see post in next reply - more up to date config)

2 Likes

I'm literally reading your post and adjusting my Swar config as a reference… THANK YOU for posting in such detail.

Here is what I have now:

Make sure you update the chia location to the Windows version (edited with your username and version number).

1 Like

Oh, I'm on Ubuntu 20.04 so I think I'm ok?

Also update the log location. It is very convenient to send the logs to the plotter directory in your chia folder.

Glad if we can share knowledge!

Perhaps lower your threads from 4 to 2 and set max_for_phase_1 = 6 and max_concurrent = 14?

P1:
2 plots × 2 threads = 4
2 plots × 2 threads = 4
1 plot × 2 threads = 2
1 plot × 2 threads = 2
= 12 threads on a max of 6 plots

P2 to P4:
20 - 12 = 8 threads left for the single-threaded plots

Just an idea, not sure it's the best :)

OH, got it. I guess I'm not used to seeing the GUI on Linux.

Good call, here are the changes I made.

  • Changed global max_concurrent to 12 (I'll adjust up/down as I get timings in, I think)
  • Changed global max_for_phase_1 to 6.

As for lowering threads, I was trying to figure out how threads and max_concurrent interact with each other on the jobs.

For each job I have configured, is it threads × max_concurrent = TOTAL THREADS?

Yes, that is correct. But during plotting, each assigned thread is not used 100% of the time, so you can actually over-commit. Start conservative and then play with it. You can actually change the config live without starting over; the new settings just get applied to every new plot after the change.
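As a quick illustration of that budget, using the 2-thread / max_for_phase_1 = 6 / max_concurrent = 14 numbers suggested above:

    # Rough thread budget: P1 plots each use their assigned threads; plots in
    # P2-P4 are effectively single-threaded. Slight over-commit is fine since
    # assigned threads are not pegged at 100%.
    threads_per_plot = 2
    max_for_phase_1 = 6
    max_concurrent = 14
    cpu_threads = 20                                    # e.g. a 10850k

    p1_threads = threads_per_plot * max_for_phase_1     # 12
    later_threads = max_concurrent - max_for_phase_1    # 8 plots x 1 thread
    committed = p1_threads + later_threads

    print(f"{committed} of {cpu_threads} threads committed")   # 20 of 20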

1 Like

*if you type "python manager.py restart"

And agreed - at some point you just have to start with a basic setting you think will work and see how it goes; perfecting it requires reading the way your system behaves with various changes.

1 Like

And don't be too hasty with your changes. Let the system settle in for at least 24 hours before reading too much into the phase times; the first 10 or so plots are going to be faster than they will be once things settle in, because they were not running with the system under full load.

3 Likes

Roger that - I'll be firing up this config tonight, probably in 6 hours or so (the ones I have running in the GUI are all ~35% done but already in Phase 3… weird percentages with these ones).

Quick update, for anyone wondering: using the new-fangled 6-disk RAID0 setups from the JBOD (which also helps with farming) to do 8 at a time has hit 27 plots/day.

Tonight I'm using Swar and we'll see if we can beat that.

2 Likes

2.7 TB/day is very respectable. You've derived this number from the spreadsheet though, right? i.e., you haven't actually done 27 plots yet? Just be mindful that extrapolation with Chia is often woefully inaccurate. Chia is a fickle mistress!!
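(For anyone checking the conversion behind that figure - a k=32 plot occupies roughly 101.4 GiB on disk:)

    # Sanity check on the ~2.7 TB/day figure: a k=32 plot is ~101.4 GiB.
    plots_per_day = 27
    plot_size_gib = 101.4

    daily_tib = plots_per_day * plot_size_gib / 1024
    print(f"~{daily_tib:.2f} TiB/day")   # ~2.67, i.e. roughly 2.7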