2nd Temporary Directory Experiments

I think both scenarios are worth seeing real numbers for. But to really know the benefits, you have to have a baseline to compare to. So run a series of plots in parallel from the original setup for at least a couple of days to let them settle into a pattern; then you'll have something to improve from. I'd like to see the results.

I just modified my plotters last night. They now both have two NVME’s running in RAID 0 as temp1. Both systems have a SATA SSD to be used as temp2 but I don’t have that turned on currently. The reason I didn’t turn it on was that I suspected that temp1 I/O was not going to be my bottleneck now. It appears I was right. The temp1 drive is smokin’ fast. I now have 10 in parallel. The bottleneck is the CPU. But I think it is going to run well. I’ll post up numbers when they get settled in.

Yep, testing as we speak on the test machine. I mirrored your settings quite closely because they line up with what I would attempt on a similar NVMe. So basically, I have a WD Black 750SN 2TB as Temp -1 and a Samsung 970 Pro as Temp -1. Testing and monitoring it as we speak to see what results I get and how the times measure up to the ones you posted above.

The one question I have is about your 30 minute stagger delay. It's fine as things start up, but as we know, P1 will take longer than 1.5 hours, so the hard limit of P1 = 3 will kick in and start to dictate the actual delay before another plot kicks off. Eventually it settles into a pattern where, when P1 #3 moves to P2, another P1 plot will begin, provided the concurrent max of 8 (9) also allows another to start; those hard caps regulate it. Over time the 30 minute delay is meaningless, as the hard caps settle into a pattern anyhow. I assume this is how you see yours running during your trials. Ideally, I would think setting the offset delay very close to the actual steady-state number would help it smooth out from the start and regulate sooner. Minor point, I'm sure.
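To put rough numbers on that (purely illustrative assumptions, not anyone's measured times), here is the back-of-the-envelope version of why the configured stagger stops mattering once the phase 1 cap saturates:

```python
# Sketch of why the stagger stops mattering once the hard caps take over.
# Assumed numbers (hypothetical, not measured): phase 1 takes ~2.0 hours,
# at most 3 plots may be in phase 1 at once, configured stagger is 30 min.

phase1_hours = 2.0      # assumed phase 1 duration
max_phase1 = 3          # hard cap on concurrent phase 1 jobs
stagger_minutes = 30    # configured stagger delay

# With the phase 1 cap saturated, a new plot can only start when one of the
# 3 phase 1 jobs finishes, i.e. roughly every phase1_hours / max_phase1.
effective_start_interval_min = phase1_hours * 60 / max_phase1

print(f"configured stagger: {stagger_minutes} min")
print(f"steady-state start interval: {effective_start_interval_min:.0f} min")
# With these assumptions the system settles at one new plot start every ~40
# minutes regardless of the 30 minute stagger, which only matters during ramp-up.
```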

Do you find that it balances out so that, more or less, you see 4 plots in P1/P2 and 4 in P3/P4/P5 when your times are at 8 hours total? Then I wonder, with the second drive, how much it could handle in P3/P4/P5 before it would have an issue, and what would cause it to bottleneck. This leads to my idea of running 3 NVMes (P1/P2) into a single NVMe (P3/P4/P5): would that Temp -2 handling triple the traffic from the other 3 become the bottleneck?
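As a very rough sanity check on the 3-into-1 idea, here is a sketch of the average write load such a shared Temp -2 would see. All of the numbers (plots per day per Temp -1, and treating the load as roughly one finished plot's worth of data per plot) are assumptions for illustration, not measurements from this thread:

```python
# Very rough estimate of the write load a single Temp -2 drive would see if
# three Temp -1 NVMes each feed it their phase 3/4/5 output. Assumed numbers.

plot_size_gib = 101.4          # approx size of a finished k=32 plot file
plots_per_day_per_temp1 = 12   # assumed output of each Temp -1 volume
temp1_volumes = 3

gib_per_day = plot_size_gib * plots_per_day_per_temp1 * temp1_volumes
avg_write_mib_s = gib_per_day * 1024 / (24 * 3600)

print(f"~{gib_per_day:.0f} GiB/day onto Temp -2, average ~{avg_write_mib_s:.0f} MiB/s")
# The average sits well under SATA speeds; the real question is how often the
# phase 3/4 bursts from the three plotters overlap on the shared drive.
```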

Also curious to see your results with RAID 0 as temp1 and 10 in parallel, and how that handles and produces, as we might then try combining our 8 drives into 4 RAID 0 pairs as 4x Temp -1. If the CPU is the bottleneck, I don't think we would have that issue, as we have a Threadripper 3970X, 256 GB RAM, liquid cooled, and can overclock the CPU; that machine only plots, nothing else.

The bottlenecks have been driving me crazy these last 3 days as I have tried so many different configurations to get times like others have.

Thanks for replying, appreciate all your efforts on this thread. Hope I can add some test information to it.

Exactly. I was just not sure where it would settle in and I didn’t want the stagger to get in the way so I set it to what I thought was low and let the phase 1 limit determine the happy spot.

I have discovered that with my new configuration, temp1 is so busy working on active plots that the phase 5 time has doubled. You can look at the ethernet traffic and it is half of what it could be. That means that having the final file sitting on the temp1 drive waiting to be copied off is slowing down the system. I just made the change to start using the temp2 directory; I put the drives in there just in case I would need them. I'll see if it makes a difference.
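For anyone following along who hasn't turned on the second temp directory yet, this is roughly what enabling it looks like. The -t, -2, and -d flags are the standard `chia plots create` options; the paths and the -r/-b tuning values here are made-up examples, not my actual settings:

```python
import subprocess

# Hedged example of launching one plot with a separate temp2 directory.
# Paths and tuning values are hypothetical placeholders.
cmd = [
    "chia", "plots", "create",
    "-k", "32",
    "-t", "/mnt/nvme_raid0",   # temp1: fast NVMe RAID 0 volume
    "-2", "/mnt/sata_ssd",     # temp2: SATA SSD takes the late-phase output
    "-d", "/mnt/farm/plots",   # final destination
    "-r", "2",                 # threads per plot (assumed)
    "-b", "3389",              # RAM per plot in MiB (assumed)
]
subprocess.run(cmd, check=True)
```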


Looking forward to hearing how a SATA drive handles 2 NVMe drives feeding it.

How many plots in parallel are you testing on this setup now?

10, and that is going to be the max. My processor can't take any more. The drives are fine, but I have hit my processing limit. To be expected. I built this from a "Build a budget plotter" article.

I don't quite understand it, and this is really early to say something, but somehow the 970 Evo Plus drives are outperforming the 980 Pro's. WTF? The only difference between the setups is the drive types and their sizes. The 970's are 2TB each and the 980's are 1TB each. Could the extra space really make that much of a difference in speed? Of course I'll know more tomorrow as they settle into a smooth routine. I've been doing a lot of tweaking today to try to get it just right, so that has messed with the numbers so far.

As mentioned above, I did decide to use the SATA SSD’s as temp2 drives on each. I’ll know later tonight how that is working out (or not working out).


Is your mobo's PCIe 3.0 or 4.0?

The spec sheet says

  • Dual Ultra-Fast M.2 with PCIe Gen3 X4 & One Supports SATA protocol

I guess the "Pro" line is PCIe 4.0 and the Evo is 3.0. If your mobo is PCIe 3.0, it doesn't matter whether it's a Pro or an Evo. My guess.

The 980 pro says PCIe® Gen 4.0 x 4, NVMe® 1.3c and the 970 Evo Plus says PCIe Gen 3.0 x 4, NVMe 1.3.

So does that mean my motherboard is slowing the 980 down to Gen 3 speeds?


That is my guess. Try benchmarking both NVMes in CrystalDiskMark just to make sure both drives deliver the same speed if your mobo's lanes are PCIe 3.0.

It is not the extra space. I have actually read on other threads here that the Samsung 970s are better overall than the 980s.

The 970s seem to be the SSD of choice for plotting. My 970 outperforms my WD 750SN as well, even though the 970 is 1TB and the 750SN is 2TB.

Can’t wait for the results. Are you feeding the SATA SSD’s with 2 NVMe drives or single?

It really sucks about the 980's. I thought I would get a bump by getting those. I'm sure if my board supported Gen 4, it would be different. My bad for overlooking that. I could have saved $80 and just gotten 970's. As for performance, they seem to have leveled off and are performing about identically.

I have the two NVME's set up in a Storage Space (the Windows version of RAID 0) feeding the SATA SSD as the 2nd temp drive. I don't have final numbers yet, but it is doing fine; it is not getting backed up or overloaded. But it also isn't doing what I was hoping. I was hoping that by pulling phase 5 (the uploading) over to the 2nd temp drive, I would get back to the 15-minute uploads, but that didn't happen. The upload speed is still between 25 and 30 minutes per plot. I think it is the fact that my processor is maxed. I am doing one final adjustment: I have gaps in my stagger because of the way I originally started it all, so I am pausing the plot manager for the next 3 hours to work that out. From then on, I think a 40-minute stagger will work. If it does, I should get 36 per day per system. But we'll see if it can truly keep up with the 40-minute stagger.
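The 36-per-day figure is just the stagger arithmetic, assuming the machine really can start a plot every 40 minutes around the clock:

```python
# Plots per day implied by a fixed stagger (assumes it is sustained 24/7).
stagger_min = 40
plots_per_day = 24 * 60 / stagger_min
print(plots_per_day)  # 36.0 plots per day per system
```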


Man, your CPU does not support PCIe 4.0.


https://ark.intel.com/content/www/us/en/ark/products/199335/intel-core-i7-10700k-processor-16m-cache-up-to-5-10-ghz.html?wapkw=i7-10700k

That is obvious now. Neither does the motherboard. So, oh well, I guess I have some 4.0 NVME's if I ever build a better system. At least I didn't lose a ton of money on it.

Plan for you:

  1. Earn XCH
  2. Upgrade CPU
    :rofl:

I've also been trying out the secondary temp folder today. Not with another SSD, but just using the final HDD as the second temp.
Using another SSD never made much sense to me, unless you have something suitable lying around that you cannot use for plotting but is OK as a second temp.
If I have to buy another SSD, I might as well just get another plotting drive.

In any case, from my experiment today I can see that the phase 3 and 4 times are the same whether I run them against the SATA HDD or finish them on the plot drive (3x 1TB WD Black SN750, RAID 0). So I save the copy time and lighten the load on the plot drive.

I'm going to try to set it up so that my plots alternate between 6 different HDDs as the secondary temp and see if that makes a difference for the plot times, as this should reduce the load on the plotting SSDs.
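In case it's useful to anyone, a minimal sketch of what that rotation could look like, just cycling the -2 directory across the HDDs. The mount points and the temp1/final paths are hypothetical:

```python
from itertools import cycle

# Rotate the secondary temp (-2) across several HDDs so no single disk
# takes every plot's phase 3/4/5 output. Paths are placeholders.
temp2_dirs = cycle([f"/mnt/hdd{i}" for i in range(1, 7)])

def next_plot_args(temp1="/mnt/nvme_raid0", final="/mnt/farm/plots"):
    """Return chia CLI args for the next plot, rotating the -2 directory."""
    return ["chia", "plots", "create",
            "-t", temp1, "-2", next(temp2_dirs), "-d", final]

# Preview the first few commands the plot manager would launch.
for _ in range(3):
    print(" ".join(next_plot_args()))
```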

P.S.
Why would you want PCIe Gen 4? You won't ever use that speed for plotting anyway. The SSD is not fast enough to keep up with the PCIe bandwidth while plotting, only for short bursts.
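Rough numbers to back that up. The per-plot temp-write total and plot time below are assumptions for illustration; the bus figures are the usual ~0.985 GB/s per PCIe 3.0 lane and roughly double that for 4.0:

```python
# Compare PCIe 3.0/4.0 x4 bandwidth against the sustained write rate of one plot.
pcie3_x4_gbs = 4 * 0.985       # ~GB/s for a PCIe 3.0 x4 link
pcie4_x4_gbs = 4 * 1.969       # ~GB/s for a PCIe 4.0 x4 link

plot_temp_writes_gb = 1600     # rough total temp writes for one k=32 plot (assumed)
plot_hours = 8                 # assumed total plot time
avg_write_gbs = plot_temp_writes_gb / (plot_hours * 3600)

print(f"PCIe 3.0 x4 ~ {pcie3_x4_gbs:.1f} GB/s, PCIe 4.0 x4 ~ {pcie4_x4_gbs:.1f} GB/s")
print(f"average per-plot write rate ~ {avg_write_gbs * 1000:.0f} MB/s")
# Even with many plots in parallel, the sustained rate sits far below either
# bus limit; Gen 4 only helps during short bursts, if the SSD can sustain them.
```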


Using an SSD makes sense when your final drive is on a NAS. Using the final destination as the 2nd temp is not an option for me.