Just thought someone might find this helpful, OR point out how silly this is so I can learn something new.

**Challenge**: How can I figure out the optimal staggered starts/delays for scheduling my plots?

**Solution**: Run a small sample of plots, both single (for best-case timings) and parallel; record the stage timings in a spreadsheet, convert the averaged timings into time blocks, and then lay the blocks out visually to see what a fully utilized machine looks like.

Here are my sample recordings (I'm adding more over time to improve my guesstimates):

And here, I converted the times into "blocks" (for visual representation) of 45 minutes each. Originally I tried 30-minute blocks, but that was too fine-grained and the spreadsheet got a bit unwieldy:
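If you'd rather do the minutes-to-blocks conversion in code than by hand, here's a quick sketch. The 45-minute block size comes from above; the rounding rule (nearest block, minimum 1 so short stages stay visible) is my own choice, not part of the original method:

```python
def to_blocks(minutes, block_size=45):
    """Convert a stage duration in minutes to a whole number of
    spreadsheet blocks, rounding to the nearest block (minimum 1)."""
    return max(1, round(minutes / block_size))
```

So a 100-minute stage becomes 2 blocks, and even a 20-minute stage still gets 1 block rather than disappearing from the layout.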

If you can read the sub-note there on the bottom right, you see I have 2x1TB drives for plotting (in a RAID 0 array) - so I have 2,000 GB to use for plotting.

I also see from the logs that each plot takes 270 GB of working space, so I have:

2,000 GB / 270 GB = 7.4 plots will fit

(NOTE: I know Chia uses "GiB", but from what I gathered GiB is always within ~10% of the plain GB figure, so I'm not splitting hairs here.)

So I know that no more than **7 plots** in existence on the plot drive will ever fit.

So then I set out organizing and staggering my plot runs until I had a layout (highlighted in black) where at most 7 plots run simultaneously, with the oldest ones closing out and moving to farming as new ones start.

The result was a 90-minute stagger for my particular setup. I'm running 20 plots in that configuration over the next few days to see what the result is.
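If you want to sanity-check a stagger before committing days of plotting to it, you can simulate the staggered starts and count how many plots are in flight at once. This is just a sketch of that idea: the 630-minute total plot time below is a made-up placeholder, not my measured timing; plug in your own averaged numbers:

```python
def peak_concurrency(plot_minutes, stagger_minutes, n_plots):
    """Max number of plots occupying temp space at the same time,
    assuming each plot holds its working space for its full duration."""
    events = []
    for i in range(n_plots):
        start = i * stagger_minutes
        events.append((start, +1))                 # plot starts: temp space claimed
        events.append((start + plot_minutes, -1))  # plot finishes: space freed
    events.sort()  # on a tie, -1 sorts first: space frees before the next start
    peak = current = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

# Hypothetical 630-minute plot time with a 90-minute stagger over 20 plots:
print(peak_concurrency(630, 90, 20))  # 7, which fits the 7-plot ceiling
```

Tightening the stagger shrinks the gap between starts, so the peak goes up; this lets you find the smallest stagger that still stays under your drive's limit.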

I hope (and have a feeling) I may be able to pull that stagger in a bit.

Anyway - just a silly "pen and paper" method I used. Hopefully it helps someone!