Swinging for the fences (the super-close ones)

Ok cool - glad I’m getting closer.

And yes, def just extrapolating from the spreadsheet. I won’t buy the yacht just yet - thanks for the heads up :slight_smile:

Another vote here for NOT running the JBOD in RAID - for SSDs it helps, but for HDDs it generally hinders, and a number of people here have observed a 30%+ performance improvement by running 1 plot per HDD - each running as a separate drive, but all in parallel - while watching for Phase 1 and destination-drive bottlenecks using a plot manager - I use Swar (rough sketch below).
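
In case it is useful, here is roughly what the "one job per HDD" layout looks like in Swar's config.yaml. Drive letters and values are made up and the key names are from memory, so treat it as a sketch and check them against your own config:

```yaml
# Sketch: one Swar job per HDD so each drive plots independently, in parallel.
# Paths and values are placeholders; key names may differ by Swar version.
jobs:
  - name: hdd-d
    temporary_directory: D:\plot-temp    # plot lives entirely on this HDD
    destination_directory: Z:\plots      # shared final destination
    max_concurrent: 1                    # only 1 plot per HDD at a time
    max_for_phase_1: 1
  - name: hdd-e
    temporary_directory: E:\plot-temp
    destination_directory: Z:\plots
    max_concurrent: 1
    max_for_phase_1: 1
```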

I appreciate the feedback - I actually run out of cores/threads at this point and then have 12 idle SATA drives… putting them into 2x 6-disk RAIDs is just a way for me to use them for something - and the performance kept pace with the 3 parallel plots I had running on my Samsung 980 Pros… so pretty decent. I was surprised.

I am assuming you have a staging drive/folder, but just in case you don’t, a staging NVMe drive/folder can also help squeeze out more plots. That way, your plotter moves on to the next job as soon as it has finished copying to the staging NVMe drive/folder.

  • NVME1 → 3 Parallel Plots → NVME1\stage\folder → Robocopy → SATA1
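
For what it's worth, the Robocopy step in that chain can be a single command left running. A minimal sketch with placeholder paths: /MOV deletes each plot from the staging folder once it has copied cleanly, /J uses unbuffered I/O which suits these huge files, and /MOT:1 makes Robocopy re-check the staging folder every minute.

```bat
:: Move finished plots from the NVMe staging folder to a SATA drive.
:: Paths are placeholders for your own layout.
robocopy C:\nvme1\stage D:\sata1\plots *.plot /MOV /J /MOT:1
```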

I personally like that in @swar's plot manager it is possible to add multiple final destinations.

Ideally there would be an option in Swar to assign a staging drive and final destination(s), to keep all settings in one configuration.

Do you mean utilizing the /tempdir2 feature?

I hadn’t looked into it much - but I’m confused why, in your example, having it in a subfolder on the same plotting drive would help? Same IOPS/bandwidth demands on the same device?

I think you can, or I’m misunderstanding you.

You can set separately:

  • temp
  • temp2
  • final destination

So for each queue you can define a staging drive and a final destination to copy to, and then with start early phase you can have it start a new plot before the last one finishes if you want (rough config sketch below).
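
For anyone wondering what that looks like in practice, here is a sketch of a single Swar job with all three directories plus the start-early settings. Paths are placeholders and the key names are from memory of Swar's config.yaml, so double-check them against your own copy:

```yaml
# Sketch of one Swar job (placeholder paths; key names may differ by version)
jobs:
  - name: nvme-queue
    temporary_directory: C:\nvme1\temp       # temp: Phase 1 scratch space
    temporary2_directory: D:\nvme2\temp2     # temp2: optional, later phases
    destination_directory: E:\staging        # what Swar treats as "final"
                                             # (reportedly can also be a list)
    max_concurrent: 3
    max_concurrent_with_start_early: 4       # let one extra plot start early
    concurrency_start_early_phase: 4         # ...once a plot reaches Phase 4
```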

It is my understanding that a staging drive is where you put the plots after they are finished. My final destination is on another machine, so the copying process will take some time.

In your scheme it would be:
Temp (nvme for the first phase)
Temp2 (optional: Where the compressing is done)
Staging drive (where the finished plots are stored coming from temp2)
Final destination (NAS)

As per above, people use Robocopy or other scripts to copy from the staging drive (which in Swar is the final destination) to the slow storage location.

My final destination is a NAS. I have a temp1 and temp2 but then write directly to the NAS as the final destination. Why do you have a staging drive? Are you just using it so you can use a script to throttle the upload?

I don’t have a staging drive at the moment. But if I could write the finished plot to e.g. a smaller SSD on the plotter, the whole plot would be finished faster than copying it over the network to a remote location, no?

I see your point.

Although depending on how many plots get to Phase 3 at the same time and the size of the 2nd temp, you can still use temp2 as a staging drive of sorts. Just depends on your use case, I suppose.

Yes, it would finish faster. But, the upload of the file will still take just as long. So that means your plotting will outrun your uploads and sooner or later, your staging drive will fill up because it can’t keep up.

Normally, the way people use a staging drive is as a USB drive. Then they plot to it, let it get close to full, then swap it with an empty drive and take the full drive over and directly transfer the contents to the final destination.

Well my system will for sure not be fast enough to have plotting outrun the uploads.

But I can imagine that on a fast system you could gain a significant speedup by offloading to a local staging drive rather than to a remote location. And if the plot finishes faster, a new plot can also start faster.

The manual swap is something I prefer to avoid. I want to set it and let it run (slowly but steadily).

I run about 100-110 plots a day, MOST days - a combination of gigabit ethernet and sneakernet. The times I run under that are usually when some part of my pipeline fills up, everything turns to sh**, and I have to restart the pipelines, leading to a 12h dip in plot production. I have resorted to partly copying direct to the farm and partly copying to one plotter with 20TB hot-swappable drives, walking those over and restocking with blank drives, so I can at least keep 2 gigabit segments active and still have some bandwidth left over to serve the farming with decent timings.

It is not the best arrangement, but I only have about 50TB left to fill on my disks, so I will live with it - after that I will be overplotting with pool versions, which will be easier to manage and less time-critical.


I don’t think I will see that problem ever :rofl:

I have been thinking about buying a faster plotter, but for me personally, I don’t think it matters too much if I fill my space in 2 weeks or 2 months.

I understand a faster plotter can have benefits when you are on a different strategy.

You do what you can. But your plots (however many you can hold in your storage) are worth a lot more in two weeks (or now) than they will be in two months. That is why we are working hard to speed up our plotting.


In my setup, a staging drive is very helpful because I am using USB external hard drives as the final destination. I configured 3 x 2TB NVMe drives into RAID0, then created temp folders for plotting and for completed plots. Once I run out of storage, I stop the Robocopy script (or whichever copy script), insert a fresh external hard drive, and modify the script to point at the new drive without interrupting the plotting. I got the tip and script from Optimizing Plotters in Windows – The Chia Farmer

I do the exact same thing, highly recommended.

HEY MAN! It’s been a little over 24 hours since your post - how’s the spreadsheet stacking up to real-world usage? Are you on track to hit your goals??

Real-world Swar output is 30 plots/day, but I noticed %wa (I/O wait) was around 2, so I dropped max concurrent to 10 (from 12) and max Phase 1 to 5 (from 6), and I'm waiting to see what that turns up.
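
For reference, that change is just two values in the Swar config, something like the snippet below. Whether each limit lives in the global section or per job depends on the Swar version, so treat the placement as an assumption:

```yaml
# Sketch of the tuning described above (key placement may vary by version)
global:
  max_concurrent: 10      # was 12
jobs:
  - name: nvme-queue
    max_for_phase_1: 5    # was 6
```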