I’ve ordered several M.2 NVMe drives to test on my plotters (2× 3900X and 2× 2700X), all running freshly installed Ubuntu 20.04 dedicated to plotting.
I was running RAID 0 at first, but since the slower of two drives bottlenecks the faster one, I now run two separate temp drives and let plotman manage them.
Here is the write speed I see during plotting (I usually increase the number of parallel plots until IOwait sits around 6–8%):
Samsung 970 PRO 1TB: 180 MB/s (IOwait ~30% with 5 parallel plots)
Running the benchmark showed only 550 MB/s for the 970 PRO, which is already on the latest firmware (updated under Windows 10); on Windows 10 the same benchmark showed around 1500 MB/s for a 64 GB test.
I’ve looked around and found other people reporting similarly poor performance, but I never managed to find a solution.
I’ve only used Linux recently, and only for the plotting system, so my knowledge is limited to what I can find by searching online.
# Format the NVMe drive at /dev/nvme0n1 (adjust for your setup)
sudo mkfs.xfs -f -m crc=0 /dev/nvme0n1
# Mount the filesystem with the 'discard' option, which enables TRIM.
sudo mount -t xfs -o discard,noatime,nodiratime /dev/nvme0n1 /mnt/temp00
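To make the mount survive a reboot, the same options can go in /etc/fstab. A sketch, assuming the device and mount point from the commands above (using the UUID from `blkid` instead of the raw device name is safer if the drive order can change):

```
# /etc/fstab entry for the XFS temp drive (example; adjust device and mount point)
/dev/nvme0n1  /mnt/temp00  xfs  discard,noatime,nodiratime  0  0
```

After editing fstab, `sudo mount -a` applies the entry without rebooting.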
I have the 1 TB 970 Pro and it’s insanely fast compared to the 2 TB Sabrent I have (both latest models). The 1 TB can only handle 3 plots at a time, but it’s nearly twice as fast as the 2 TB running 7 plots…
Bear in mind I’m not using Windows…
They’re 2 separate systems, but set up the same way, so maybe I’m doing something wrong…
I’ve followed pretty much this video using Ubuntu Desktop 20.04:
The drives are formatted XFS, now with TRIM enabled; freshly installed, formatted, and benchmarked.
I want to stress that I’m not diagnosing this from the benchmark alone: during actual plotting I get around 180 MB/s with 4 plots running, and IOwait climbs to 30%!
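For anyone who wants to reproduce the IOwait figure without a monitoring tool, it can be computed straight from /proc/stat. A minimal sketch, assuming the standard Linux "cpu" line layout (the 1-second sample window is arbitrary):

```shell
# Sample cumulative CPU counters twice, one second apart, and report
# the percentage of time spent waiting on I/O during that interval.
# Fields on the "cpu" line: user nice system idle iowait ...
read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 _ < /proc/stat
total=$(( (u2+n2+s2+i2+w2) - (u1+n1+s1+i1+w1) ))
wait=$(( w2 - w1 ))
echo "iowait: $(( 100 * wait / total ))%"
```

Tools like atop, iostat, or vmstat report the same number with less typing; this just shows where it comes from.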
From the fstab(5) man page:

    The fourth field (fs_mntops).
    This field describes the mount options associated with the filesystem.

    It is formatted as a comma-separated list of options. It contains at
    least the type of mount (ro or rw), plus any additional options
    appropriate to the filesystem type (including performance-tuning
    options). For details, see mount(8) or swapon(8).

    Basic filesystem-independent options are:

    defaults
        use default options: rw, suid, dev, exec, auto, nouser, and async.
My understanding is that noatime implies nodiratime, so if you have noatime in your fstab options for a drive, you won’t need nodiratime in the options, too.
I found the cause of the results I was getting… It’s the built-in GNOME disk benchmark. I tested my drives with fio instead, using sequential reads and writes, and they hit their rated speeds. When I ran fio with simultaneous random reads and writes, I got numbers similar to the GNOME benchmark.
Notice how the GNOME disk benchmark reads and writes at the same time? I suspect some drives just show results like this when reading and writing simultaneously.
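To reproduce both numbers, here’s a sketch of a fio job file with two sections: a sequential write (the workload drives are rated for) and a 50/50 random mixed read/write (roughly what the GNOME benchmark does). The file name, size, and target path are placeholders; point `filename` at the drive under test:

```
; bench.fio (sketch) -- run one section at a time with:
;   fio --section=seq-write bench.fio
[global]
filename=/mnt/temp00/fio-test   ; placeholder path on the drive under test
size=4G
ioengine=libaio
direct=1
iodepth=32

; Sequential 1M writes -- should approach the drive's rated speed.
[seq-write]
rw=write
bs=1M

; Simultaneous 4k random reads and writes -- roughly what the GNOME
; disk benchmark does, and far slower on most drives.
[mixed-randrw]
rw=randrw
rwmixread=50
bs=4k
```

The gap between the two sections is what makes the GNOME numbers look so bad: mixed small random I/O is a much harder workload than a pure sequential stream.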
That’s correct, but I leave it in anyway. I prefer not to rely on implied behavior… It’s a habit from my professional background. Not everyone knows this, so I want it to be clear to anyone looking over the fstab that nodiratime is declared. It doesn’t cost me anything, so I do it.
Hi guys, I’m trying to solve speed issues on Ubuntu with a 2 TB EVO Plus. When the drive is at 80%+ usage, atop shows a maximum of 260 MB/s writes.
The reason I ask is that I’m struggling with staggered plots on my Ryzen 3600X with 32 GB of RAM. A single plot finishes in almost 6 hours, but when staggering multiple plots it goes to over 10 hours, and that’s with never more than 2 plots in phase 1 and 2 cores for each.