Only getting 30 plots/day: Ryzen 5950X, 128GB RAM, 4x FireCuda 520

I have a Ryzen build with the specs above, running Ubuntu 20.04.
I am running 5 parallel plot jobs per NVMe with a 90 min stagger.
During my first week or so I was getting plots every 8-9 hours, around 40-50 a day.

But over the last week that has gone down to around 30 plots a day.
I have plotted a total of roughly 400-500 plots with this machine (100+ plots per NVMe).

When I tested the drive health using Seagate's utilities, the drives listed at 91% health.
When I run benchmarks on them using gnome-disks I get ~1.9 GB/s write speeds, which is about what it has always been.
But I notice that after I kill off my plots and go back to do another benchmark, I only get speeds of around 300-500 MB/s.

Have my NVMes degraded that much already? Or do you guys have any tips to get my plotting speed back to where I expect it? 30 a day from 4x NVMe has been a huge drawback for me.

chia plots create -k 32 -b 5000 -e -r 3 -u 128 -n
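
For context, each of those jobs also points at a temp and destination directory; a complete invocation looks roughly like this (the paths and plot count below are just placeholder examples, not my actual values):

chia plots create -k 32 -b 5000 -e -r 3 -u 128 -n 1 -t /mnt/nvme0/tmp -d /mnt/hdd/plots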

Yes, it is most likely the drives. What else could it be? Try different drives and see if you get different results.

(You can also try doing a TRIM or re-formatting the drives.)
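
A quick one-off TRIM on Ubuntu looks something like this (device name and mount point below are examples, adjust to your drives):

sudo fstrim -av                  # trim every mounted filesystem that supports discard
# or, after unmounting, discard the whole device (this wipes all data on it):
sudo umount /mnt/nvme0
sudo blkdiscard /dev/nvme0n1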

Thanks for getting back to me. From your experience so far, how many plots did you get from an NVMe before you started noticing a slowdown?

Run CrystalDiskInfo and let us know what the health is of each drive. Should answer this question.


He said they were listed at 91% health in the first post, but it never hurts to re-check.


If you run the benchmark right after you kill the plots, the cache will still be recovering, so that's why it's not running at max speed at that time. It should still do a bit more than 300 MB/s though.

You probably already tried, but reformat the drives and see if that helps. 100 plots per drive should not be nearly enough to kill those drives; 91% health seems about right.

Pretty sure CrystalDiskInfo is Windows-only. I am using Ubuntu 20.04, and I have been considering doing a Windows install on this machine.
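
In the meantime I can probably pull similar SMART data on Linux with smartmontools or nvme-cli, something like this (device names are examples):

sudo apt install smartmontools nvme-cli
sudo smartctl -a /dev/nvme0n1    # check "Percentage Used" and "Media and Data Integrity Errors"
sudo nvme smart-log /dev/nvme0   # similar SMART data, including temperature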

That's good info bro, thank you. How long do you think it would take the cache to clear? I have left around 10 minutes in between checks and gotten the same results.

I have tried reformatting the drives, and even reinstalling Ubuntu.

I have not tried TRIM; never even heard of it actually, and I am a pretty strong tech.
Anyway, I will give it a try the next plotting session I start.


It's weird; there are multiple people reporting similar problems like this. No idea what's going on, but TRIM is a good thing to do on Linux anyway. If you look in one of the other topics about this problem you'll find some scripts for it as well.

Tom's Hardware:
Seagate’s FireCuda 520 absorbed nearly 370GB of data before its write performance tanked. Once the pSLC cache is full, the write speed fell to an average of 600 MBps. After letting it idle for a bit, the cache should mostly recover. After 30 seconds, the drive recovered 16-20GB, and after just a minute, it recovered 63GB of pSLC space. Bear in mind, the cache capacity is dynamic. After an intense write workload, it only recovered 118GB of cache after idling for 5 minutes, and only 100GB after idling up to 30 minutes.


TRIM is super important for the health of SSDs. Please, please, please enable it. Otherwise performance will degrade over time, and it can also reduce lifespan.
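
On Ubuntu 20.04 the simplest way to keep TRIM running regularly is the built-in systemd timer (weekly by default):

sudo systemctl enable --now fstrim.timer
systemctl list-timers fstrim.timer    # confirm when it will next run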


Now, I believe you are using three threads (I use Swar and am a command-line noob), and 5 jobs per NVMe is what I am deciphering??? What size are the NVMes?

A bit of untested theory with regard to CCX design: keep threads per job at 4 or under. Each chiplet is 4 cores, so in theory the scheduler will keep a job within its boundary.

Going 5+ threads, it has to transmit across the Infinity Fabric to another CCX, which adds latency. So if you are going to go past 4 threads... make it count.

So, theory: what if it schedules one of the jobs with 3 threads on, say, CCX #1? All good. Then, with one thread left on CCX #1, it might schedule the next plot on the remaining CCX #1 thread and 2 threads on CCX #2???

TL;DR: so in theory, use 2 or 4 threads??
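
If you want to test that theory, you could pin each job to a set of cores with taskset; a rough sketch (the core ranges, paths, and thread count here are assumptions, check your actual layout with lscpu):

lscpu --extended    # see which CPU numbers belong to which cores
taskset -c 0-3 chia plots create -k 32 -b 5000 -e -r 4 -u 128 -t /mnt/nvme0/tmp -d /mnt/hdd/plots
taskset -c 4-7 chia plots create -k 32 -b 5000 -e -r 4 -u 128 -t /mnt/nvme1/tmp -d /mnt/hdd/plots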

Also not tested myself, but apparently Ubuntu 20.04 doesn't have all the latest Ryzen 5000 optimizations/drivers. 21.04 (default kernel) apparently does (maybe try an updated kernel first on 20.04).
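
If you want to try a newer kernel on 20.04 without reinstalling, the HWE stack should do it (package name as documented for 20.04, verify before installing):

uname -r    # current kernel version
sudo apt install --install-recommends linux-generic-hwe-20.04
sudo reboot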


As a reference, I get 30 plots a day with two and a half NVMes (2x WD SN750 1TB, plus an HP EX920 1TB that also has other files and games on it), a Ryzen 3900X, and 32GB 3600 CL16 RAM, on Windows, with Swar and I think only 10-11 plots running, staggered.

Does anyone have the CPU affinity layout for Ryzen 3000 and 5000? It might help @chiatroll and myself.

Windows doesn't seem to move activity much across threads... but I never really know either.

You need to TRIM them; the lower the health, the more regularly you need to TRIM them to keep them working well. Though it's a bit odd to go down to 30 plots a day with that setup, even if your SSDs have degraded.

I would also check whether any overclock has been reset, such as the XMP profile.
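
You can sanity-check that from inside Linux without rebooting into the BIOS, for example:

sudo dmidecode -t memory | grep -i speed    # configured RAM speed; should match your XMP profile if it is still active
lscpu | grep -i mhz                         # current/max CPU clocks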

In either case, I too like TRIM


I feel like hourly TRIMs have helped my plotting speed. I actually bumped it up to 6 jobs per NVMe, but I am noticing it really slowing down some of them. On my first rotation I was getting plots in 6-7 hours; now I'm getting plots in around 10 hours after my staggered jobs have joined the mix. There may be some value in running 4 jobs per NVMe, it just feels like there is so much wasted temp storage; at that point I might as well have gotten 1TB NVMes instead of 2TB.

The FireCuda 520 has very slow sustained (post-cache) write speeds, which is what you're experiencing.

If you move to the madmax plotter, you'll probably easily get 50 plots a day. I believe the FireCuda 520 has a pretty large pseudo-SLC cache (EDIT: yep, as above, 370GB), so with the reduced NVMe writes of that plotter, you'd come close to writing entirely within said cache.

If you could RAID0 them in pairs and alternate between each RAID0 as tmp1, you'd definitely stay entirely within the cache all the time, as it would have plenty of time to refresh at 30-ish minutes between plots; pretty sure the cache is well and truly refreshed after 30 minutes of idle time.
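
A rough sketch of that, assuming the madmax chia_plot binary is already built and two of the drives are paired into one array (device names, mount points, and keys below are placeholders):

sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mkfs.ext4 -E discard /dev/md0
sudo mkdir -p /mnt/raid0a && sudo mount -o discard /dev/md0 /mnt/raid0a
./chia_plot -n 1 -r 16 -u 128 -t /mnt/raid0a/ -d /mnt/hdd/plots/ -f <farmer_key> -p <pool_key>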


Another possible reason might be SSD overheating? Did you check the sensors?
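
On Linux you can read the NVMe temperature sensors directly, e.g. (device names are examples):

sudo nvme smart-log /dev/nvme0 | grep -i temp
sudo smartctl -a /dev/nvme0n1 | grep -i temp
# most Gen4 NVMes will thermal-throttle without a heatsink or airflow, so check these mid-plot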