XFS temp mount performance tweaks

What are your best practices for mount options on XFS drives? I was running the defaults; after some additional research I changed my mount options, but I don't see any major performance change.
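My actual fstab line didn't paste in, but for context, a hypothetical entry using the kind of options under discussion would look like this (the UUID and mount point are placeholders, not my real values):

```
# Hypothetical /etc/fstab entry for an XFS plotting temp drive
# UUID and mount point are placeholders
UUID=xxxx-xxxx  /mnt/plots  xfs  noatime,nodiratime,discard,inode64  0 0
```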

It could be you weren't IO-limited to begin with.

Also, some of those options don't really improve anything:
I think inode64 is the default already.
discard is not the default because it's considered slower, although I'm not sure if that's also the case for our workload. Anyway, you still want it because it means less wear on your SSD.
The effect of noatime is probably very tiny. It should save a few IOPS, which was important on HDDs, but with SSDs that doesn't matter much anymore.

I'm not familiar enough with the other options to judge whether you should expect a bigger change.

When I format them, I use the crc=0 option.

I believe I am IO-bound, given that when I added "discard" my performance decreased from 32 plots per day to 29 ppd :confused: Is there any need for discard if the SSD is only filled to 60-70%?

Can you give an example of the CLI command? What is the impact of crc=0?

This has all the information:

$ man mkfs.xfs


It surprises me that it's that big of a difference. Are you sure discard was the only thing you changed between those ppd numbers?
What is the iowait % with and without discard? That is, IMO, the best indicator of whether you're IO-bound.
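iowait shows up in tools like iostat, vmstat, and top, but you can also read it straight from /proc/stat. A small sketch of the arithmetic (the sample line below is made-up data, not from a real machine):

```shell
# Sketch: iowait %, computed from the aggregate "cpu" line of /proc/stat.
# Fields after "cpu": user nice system idle iowait irq softirq steal ...
# The sample is made-up; on a live box use the output of: head -1 /proc/stat
# Note this gives the average since boot; for a current figure, diff two
# snapshots a few seconds apart (or just use iostat/vmstat).
sample='cpu  4705 150 1120 16250 1800 0 25 0 0 0'
echo "$sample" | awk '{total=0; for (i=2; i<=NF; i++) total+=$i; printf "iowait: %.1f%%\n", 100*$6/total}'
```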

If you don't send TRIM commands, which is what discard does, the SSD doesn't know only 60-70% is in use. That's why you want to enable discard.
The alternative is to regularly run fstrim on your temp drives. I don't know how often is needed, but once a week, which is the default in Ubuntu IIRC, is nowhere near enough. We fill the whole drive on an hourly basis, so fstrim should run more often than that.
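If you go the fstrim route, a crontab fragment along these lines would trim hourly (`/mnt/plots` is a placeholder for your temp-drive mount point, and the fstrim path varies by distro):

```
# crontab entry: trim the temp drive every hour instead of relying
# on the weekly distro default
0 * * * * /sbin/fstrim /mnt/plots
```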

I also mount with nodiratime, but I doubt it's a big additional speedup.

It is always good to talk with people smarter than me so I can learn something new - thank you all for your replies!

@BramS thanks for the suggestion. For everyone else: you need to add "-m crc=0"
@XiMMiX: you were right; in fact, it wasn't IO-bound after all. But on this subject: what is a recommended "maximum" iowait? 8-10%? Obviously the lower the better; I'm asking about a tolerable level.
@avifreedman good point, it was missing in my initial list
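For anyone finding this later, the full format command with that flag would look something like this (the device path is a placeholder, and reformatting wipes the drive, so double-check it):

```shell
# Format a temp drive as XFS without metadata CRCs (destructive!)
# /dev/sdX1 is a placeholder for your actual partition
mkfs.xfs -f -m crc=0 /dev/sdX1
```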

8-10% for me would still be acceptable. The thing is, to really know where the limit is you have to test and see whether one more or one fewer parallel plot changes the number of plots per day.
For example, I was running a single plot on some old SATA SSD, not IO-bound, and plot time was about 32k seconds. Adding a second parallel plot pushed iowait up to 5-10% and plot time up to 33k seconds. That longer plot time was easily offset by the fact that I could run 2 in parallel.
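Spelling out that arithmetic with the plot times above:

```shell
# Effective throughput: 1 plot at ~32k s vs 2 parallel plots at ~33k s each
awk 'BEGIN { printf "1 plot:  %.2f plots/day\n", 86400/32000 }'   # prints 2.70
awk 'BEGIN { printf "2 plots: %.2f plots/day\n", 2*86400/33000 }' # prints 5.24
```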

noatime already implies nodiratime, so it's not needed.

There's a million mount options for XFS, and I believe a whole suite of tools for after mounting… check the XFS docs. Massive list.
XFS = best practices.
Funny though that it's not recommended by Chia; they suggest stuff like FAT and NFS instead. Silly.

It depends on the use case for those best practices. I found that Btrfs performs better for my setup.