(Tip) "Pen and paper" method for finding the right staggers for your plots

Just thought someone might find this helpful OR point out how silly this is and I can learn something new.

Challenge: How can I figure out the optimal staggered starts/delays for scheduling my plots?

Solution: Run a small sample of single plots (for best-case timings) and parallel plots, collect the stage timings in a spreadsheet, convert those timings into blocks based on the averages, and then lay them out visually to see what a fully utilized machine looks like.

Here are my sample recordings (I'm adding more over time to improve my guesstimates):
[screenshot: sample timing recordings]

And here, I converted the times into 'blocks' (for visual representation) of 45 mins… originally I did 30 mins but it was too fine-grained I think and the spreadsheet got a bit unwieldy:

If you can read the sub-note there on the bottom right, you see I have 2x1TB drives for plotting (in a RAID 0 array) - so I have 2,000 GB to use for plotting.

I also see that from the logs each plot is taking 270 GB of working space, so I have:
2,000 GB / 270 GB = 7.4 plots will fit

(NOTE: I know Chia uses 'GiB', but GiB and GB only differ by about 7%, so I'm not splitting hairs here)

So I know that no more than 7 plots' worth of temp files will ever fit on the plotting drive at once.
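For reference, here's the same temp-space math as a quick sketch (the 2,000 GB and 270 GB figures are straight from the post above):

```
plot_space_gb = 2 * 1000   # 2x 1TB drives in RAID 0
temp_per_plot_gb = 270     # working space per plot, from the logs

max_concurrent = plot_space_gb // temp_per_plot_gb
print(max_concurrent)      # 7 -> never allow more than 7 plots' temp files at once
```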

So then I set out organizing and staggering my plot runs until I had a layout (highlighted in black) where the maximum number of simultaneous plots is 7 and then the oldest ones start closing out and being moved to farming.

The result was a 90 min stagger in my particular setup - I'm running 20 plots in that configuration over the next few days to see what the result is.
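If you'd rather not lay the blocks out by hand in a spreadsheet, the same check can be scripted. This is only a sketch of the idea, not the spreadsheet itself; the ~600-minute loaded runtime below is my own assumption, chosen to reproduce the 7-plot peak, not a number measured in the post:

```
def max_in_flight(stagger_min, runtime_min, num_plots=20):
    """Return the most plots ever running at once for a fixed stagger and runtime."""
    starts = [i * stagger_min for i in range(num_plots)]
    peak = 0
    for t in range(starts[-1] + runtime_min):
        peak = max(peak, sum(1 for s in starts if s <= t < s + runtime_min))
    return peak

# ~600 min per plot under load (assumed), started every 90 min:
print(max_in_flight(stagger_min=90, runtime_min=600))  # 7 -> fits the temp drive
```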

I hope/have a feeling I may be able to pull that in a bit.

Anyway - just a silly "pen and paper" method I used. Hopefully it helps someone!


There are a couple of tricks. Each plot isn't using the full 270 GB the whole time.

I'm confused. Is this the $5000 plotter you built? 7 plots in parallel at a 90 minute stagger is really slow considering the cost. Am I reading this wrong? What am I missing? How many plots per day are you estimating?

I have to be missing something.

Yep it's that plotter - you aren't missing anything.

The real killer seems to be filling up the RAID 0 array (which happens with any more than 7 simultaneous plots) - then everything locks up.

When I sat and watched my sets of parallel plots (4 then 6 then 7), I noticed two things:

  1. With 7 parallel plots, the CPU is pegged for most of the run.
  2. The plotting drive stays full the entire time until the very end, when the finished plot is moved off to SATA and the tmp files are deleted.

I was thinking I could stagger the plots more closely, assuming there were "space intensive" phases that would then compact the file and I could sneak a few more on there… it seems not.

I expected this to run faster so maybe Iā€™m seriously missing something.

Ubuntu 20.04

EDIT: Something I've never dabbled with is the per-plot thread count or memory buffer… if people have found those to really impact things I could give them a try.
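For anyone wanting to experiment with those knobs, the stock `chia plots create` CLI exposes them as flags; the values below are just an example, not a recommendation (`-r` is threads per plot, `-b` is the memory buffer in MiB, and the paths are placeholders):

```
chia plots create -k 32 -n 1 -r 4 -b 4096 -t /mnt/plot-temp -d /mnt/farm
```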

Ok, so this is interesting… I'm continuing to collect samples of the logs from runs as they complete, and I just grabbed one from a plot that ran almost completely by itself (to get a baseline). It's almost exactly 2x as fast as the slowest plot I recorded when I fired up 7 at one time:

If I use that 4.5-hr runtime per plot as a more realistic baseline, I should be able to stack a lot more, more aggressively.

Let me update my visualization to see what it looks like…

Hmmm, yeah, with these numbers I should be able to start a plot every 45 mins, which is exactly 2x as fast as I'm plotting now.
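As a sanity check on the 45-min figure, here's the rough arithmetic, using the 4.5-hr baseline and the 7-plot temp-space cap from earlier in the thread:

```
baseline_min = 4.5 * 60   # ~270 min per plot when run alone
max_concurrent = 7        # temp-space cap (2,000 GB / 270 GB)

# Tightest stagger that never exceeds the cap, *if* plots kept their solo runtime:
print(baseline_min / max_concurrent)  # ~38.6 min, rounded up to 45 min for safety
```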

Alright, I'm queuing every 45 mins now - let's see how this goes.

I'll share another screenshot tonight once it's running 7 in parallel - to show system utilization and whatnot.

Hi, this is a super nice and useful thread.

If I may offer a tip, I strongly recommend using Swar's or plotman (I prefer Swar's); they have a lot of features that the original GUI doesn't.

Thank you @Saburo for the nice comment and great suggestion!

Yes I definitely plan on using Swar's, I'm just trying to understand this machine better before jumping into a tool like that - I went through the config file and didn't understand what some of the configuration values should be (yet) - so this is my continued research into that.

Ok, here's the status so far using 45-min offsets… I've got 7 plots in flight right now and the oldest one is 55% done. If I were a betting man, I'd expect the 8th one to start up before that one finishes. Total disk space remaining on the plotting drive is 684 GB, so we might be OK here… it'll be down to the wire for sure:

Also, something I've noticed over the hours this has been running: as more plots stack up, the CPU load is slowly creeping towards 16.0, which is 100% utilization of cores/threads (it's an 8-core / 16-thread CPU):

So from that aspect it's more efficient to stack them like this - but disk read/write is staying under the 500 MB/sec range, nothing spiking up into GB/sec on the drives… I can't tell if I'm overloading the drive's buffer with so much context switching.

Iā€™m going to grab some total runtimes from these plots as they wrap up and see if they are running dramatically slower.

From a disk I/O perspective, it might be faster to run fewer plots in parallel and leave some CPU idle if that means the disk does less choppy I/O, I guess.

EDIT: Just did a prelim check on 1 of the plots running while the machine is 'maxed out' with 7 simultaneous plots and it is in fact running 1/2 as fast as the baseline run.

I have found that when I run multiple plots close together (multiple in various stages), it just makes all the plots drag on for hours. I was running Ubuntu at that time and now I'm on Windows. I'm currently averaging low 5 hrs by spacing them 120 mins apart, so more or less just one plot is at each stage at a time.

Setup: Ryzen 3600X / 32GB RAM / 2TB 970 Evo Plus (temp) / 3TB SATA (temp2) / USB (final)
I guess we will start using these for our signatures soon lol

Plots are 4 threads, 3390 MiB RAM.


I JUST compared the timings of Ph 1 and 2 on one of the loaded plots and you are right - it's absolutely running 1/2 as fast as my baseline plot.

Did you notice a big increase by doubling the threads? I think I'm headed in the same direction as you (fewer parallel plots and less resource contention so they go faster) - so I'm thinking of upping the thread count from the default also.

Actually, I can't say I did. I should have run the same thing with 2 threads, which, if I remember correctly, was putting me close to 6 hrs per plot on Ubuntu.

At this moment I'm doing multiple at the same time: one 4T at phase 3/4, one 4T which is in 2/4, and one 2T at phase 1/4, with another to start at the 60-min mark. We'll see what happens. I'm trying to get the feel for the temp2 drive, which has helped even though it's an HDD.

Here is another average from the last plots on this setup. Done with

What tools gives you those charts for disk read/write and CPU usage?

Was that question to me?

If so, it is Stacer on Linux (Ubuntu 20.04)

Hello. What I don't understand is the obsession with shaving milliseconds off your x parallel plotting. Dudes, your storage space is finite, and you have time. What difference does it make if you end up filling your storage in x days or x+1 days? Enjoy the ride lol. Your CPU will thank you!! :stuck_out_tongue_winking_eye:

The reason I am fine-tuning my plotting is that I know very soon I will most likely be replotting for a pool, and I would rather do that tuning now than waste time then. Your earning potential is directly proportional to your total plot space in comparison to the total netspace. The netspace is growing. So the faster you can get to your full capacity, the more you earn. That is why.

As an example, would you rather have a full 16TB drive of plots right now or in two months? I can tell you the math is simple. It is worth a lot more now.


I feel like if you had been watching the netspace grow in Chia for the last 2 months, you'd answer this yourself.

100 TB 2 months ago would have you mining multiple coins a day, 100 TB today will have you mining 1 coin a month if you are lucky.
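To put rough numbers on that, here's a back-of-envelope sketch. The proportional relationship (your share of netspace times the daily reward pool) is the standard estimate; the 4,608 blocks/day and 2 XCH per block figures, and the netspace values, are my own assumptions for illustration, not numbers from this thread:

```
def expected_xch_per_day(my_tib, netspace_eib, blocks_per_day=4608, reward_xch=2):
    """Rough expected daily winnings: your share of netspace * daily reward pool."""
    my_share = my_tib / (netspace_eib * 1024 * 1024)  # TiB -> EiB
    return my_share * blocks_per_day * reward_xch

print(expected_xch_per_day(100, 1))   # ~0.88 XCH/day with 100 TiB against 1 EiB
print(expected_xch_per_day(100, 15))  # ~0.06 XCH/day against 15 EiB -- same plots, far less reward
```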

There is a crushing time-element to this :frowning:

I tried to do the same thing with some code: keep piling on until you fill up the temp drive. It kinda/sorta worked. To be really useful, it would have to take into account how much each plot slows down. So I ended up using plotman.
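For what it's worth, here's a rough sketch of that "keep piling on until the temp drive fills" idea, not the actual code from the post. The paths, the 300 GB headroom threshold, and the 5-minute polling interval are placeholders, and as noted above it ignores how much each extra plot slows the others down:

```
import shutil, subprocess, time

TEMP_DIR = "/mnt/plot-temp"
PLOT_CMD = ["chia", "plots", "create", "-t", TEMP_DIR, "-d", "/mnt/farm"]
HEADROOM_GB = 300        # only start another plot if this much temp space is free
CHECK_INTERVAL_S = 300   # re-check free space every 5 minutes

running = []
while True:
    running = [p for p in running if p.poll() is None]   # drop finished plots
    free_gb = shutil.disk_usage(TEMP_DIR).free / 1e9
    if free_gb > HEADROOM_GB:
        running.append(subprocess.Popen(PLOT_CMD))       # pile one more on
    time.sleep(CHECK_INTERVAL_S)
```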