Best Linux filesystem/settings for temp drives

As the title suggests, what is deemed the best filesystem for the temp drive on Linux, along with any modifications to that filesystem? Currently I am using ext4 with no modifications.

And does it really make any difference?

I went with XFS because of this post and another one from 2019 that showed XFS outpacing the others.

For the farming/storage drives I went with ext4 because once a plot is there, there is almost no I/O, but for plotting… man, every ms counts with all the read/write I/O.

EDIT: If you are plotting on an SSD, don’t forget to trim every hour or on some other very frequent cycle - here’s a quick writeup I did.
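For anyone who wants the short version, a cron entry like this runs TRIM hourly - /mnt/plotting is just a placeholder mount point, so substitute your own temp directory:

# add with `sudo crontab -e`; /mnt/plotting is a placeholder
0 * * * * /usr/sbin/fstrim /mnt/plotting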


Thanks for the reply.

Do you still have to set up the trim if you use the following (and is it the same for XFS)?

sudo mkfs.btrfs -f -d raid0 -m raid0 /dev/nvme0n1 /dev/nvme2n1
sudo mount -t btrfs -o ssd,nodatacow,discard=async,noatime /dev/nvme0n1 /mnt/pool

btrfs I’m not sure about - the TRIM command is a signal to the SSD’s hardware controller itself to do something on the drive, so it operates at an even lower level than the filesystem - so I believe the answer is ‘yes’, but the only part I don’t know is whether btrfs already takes care of trimming on its own.

I’d google for it, but otherwise assume you should probably be doing it hourly as well.
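One quick sanity check before scheduling anything: confirm the drive actually advertises TRIM support. Non-zero DISC-GRAN and DISC-MAX values in the output mean discard is supported:

# check discard/TRIM support for a drive
lsblk --discard /dev/nvme0n1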

btrfs seems to work better for us than XFS; XFS slowed down after a few days, even with trims between larger batches of plots.

likely because of discard=async
(type btrfs (rw,noatime,nodatasum,nodatacow,nobarrier,ssd,discard=async,space_cache,commit=60,subvolid=5,subvol=/))

Hey guys, I’m also on this quest of finding the best configs for the temp drives. I heard the btrfs filesystem with continuous trim and a 64 KB allocation unit size are the best configs so far. What do you guys think? I’m currently formatting my temp drives with these commands:

# create raid 0s - may need to change directories depending on disk names
sudo mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

# create and mount the filesystems
sudo mkfs.btrfs -n 65536 -f /dev/md1 # create a filesystem on the array
sudo mount /dev/md1 /mnt/raid0disk1 # mount the filesystem on the directory

# add RAID array to /etc/fstab file so that it mounts automatically at boot time
echo '/dev/md1 /mnt/raid0disk1 btrfs noatime,nofail,discard 0 0' >> /etc/fstab
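After creating the array it’s worth verifying it came up as expected before plotting to it; two standard checks (nothing here is specific to my setup):

# confirm the array is assembled and running as raid0
cat /proc/mdstat
# confirm the mount point and its active mount options
findmnt /mnt/raid0disk1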

Do you guys have any concerns about this, or anything you would change in my config? What are your current configs?


Are you using all these parameters - rw, noatime, nodatasum, etc.? How do you feel about this so far?

rw is a must, and it’s the default (it allows you to read and write files).

No real difference noticed for noatime/nodatasum

But I also haven’t tested on identical systems, so no clue - I don’t have a testbed at the moment.

xpt, thank you for this. I applied your solution to a couple of 2 TB FireCuda 520s on a 570 MoBo / 5950X / 64 GB RAM machine and it works great. I had to create the /mnt/raid0disk1 mount point and set permissions on it, but after that it runs fine with the Swar plot manager. I only have one problem with the last part…

The echo '…' >> /etc/fstab line would not work, as it gave me an access-denied permissions error. Any ideas on how to make this work?

Did you sudo or try as root (sudo su)?

If not, try editing it with sudo (or as root) using nano.

Grant read, write, and eXecute permissions on the fstab file (you actually just need read/write, but I usually grant all permissions anyway).
sudo chmod a=rwx /etc/fstab
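Side note on why the plain echo fails even with sudo: the >> redirection is done by your non-root shell before sudo ever runs. A workaround that avoids loosening permissions on /etc/fstab is to pipe through tee:

echo '/dev/md1 /mnt/raid0disk1 btrfs noatime,nofail,discard 0 0' | sudo tee -a /etc/fstab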

@xpt
How do you prepare your disks for mdadm?

Do you use fdisk to just create a regular partition, i.e., fdisk /dev/sdc, then n, p, 1? I see some instructions to use t, then fd, for RAID auto-detect. Also, do you use GPT, i.e., parted /dev/sdc mklabel gpt?

Thanks

I don’t use any of these… I just let mdadm create the RAID 0 arrays and then format the filesystem with mkfs.btrfs. No need to prepare the disks; I let mdadm and mkfs do that for me.
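One caveat: if the drives previously held a filesystem or another array, mdadm may complain about existing signatures. In that case wiping them first is a common step - this is destructive and the device names below are just examples, so double-check yours:

# wipe old filesystem/RAID signatures (destroys existing data!)
sudo wipefs -a /dev/nvme0n1 /dev/nvme1n1
# then create the array and filesystem on the bare devices as usual
sudo mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
sudo mkfs.btrfs -f /dev/md1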

Here are my results:

F2FS

Phase 1 took 567.552 sec
Phase 2 took 393.629 sec
Phase 3 took 298.24 sec
Phase 4 took 32.6371 sec
Total plot creation time was 1292.11 sec (21.5351 min)

EXT4

Phase 1 took 573.549 sec
Phase 2 took 399.32 sec
Phase 3 took 299.316 sec
Phase 4 took 35.3708 sec
Total plot creation time was 1307.61 sec (21.7935 min)

BTRFS

Phase 1 took 573.896 sec
Phase 2 took 401.02 sec
Phase 3 took 308.365 sec
Phase 4 took 30.433 sec
Total plot creation time was 1313.76 sec (21.896 min)

BTRFS with mount option: nodatacow,discard=async

Phase 1 took 569.164 sec
Phase 2 took 393.637 sec
Phase 3 took 301.906 sec
Phase 4 took 45.1006 sec
Total plot creation time was 1309.86 sec (21.831 min)

XFS

Phase 1 took 568.539 sec
Phase 2 took 392.519 sec
Phase 3 took 295.443 sec
Phase 4 took 33.8041 sec
Total plot creation time was 1290.37 sec (21.5062 min)

XFS with mount option: discard

Phase 1 took 569.33 sec
Phase 2 took 392.917 sec
Phase 3 took 295.741 sec
Phase 4 took 34.8529 sec
Total plot creation time was 1292.9 sec (21.5483 min)

  • It seems XFS with the default mount options is the fastest. However, I am not sure about long-term performance, as XFS mounts with nodiscard by default.

  • XFS with the discard mount option performs almost identically to F2FS, which already has some kind of discard built into its hybrid TRIM by default. Therefore, using F2FS eliminates the extra setup that XFS would require via a custom mount option (i.e., you can’t point to the partition’s label name if you are not using the default mount options).

  • It seems BTRFS with the default mount options takes last place as far as performance goes. BTRFS with the nodatacow,discard=async mount options performs better, but it’s still behind EXT4.

  • BTRFS and EXT4 have similar performance regardless of the mount option tested.

  • XFS and F2FS also have similar performance regardless of the mount option tested.

These results were measured on a single Corsair MP600 Pro (no RAID setup). All filesystems tested were formatted with GNOME Disks on Ubuntu 21.04.
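For anyone who wants to reproduce this from the command line instead of GNOME Disks, these should be roughly the equivalent formatting commands (/dev/nvme0n1 is a placeholder; I used GNOME Disks with the defaults, not these exact flags):

# CLI equivalents for formatting each filesystem (destructive!)
sudo mkfs.f2fs -f /dev/nvme0n1
sudo mkfs.ext4 -F /dev/nvme0n1
sudo mkfs.xfs -f /dev/nvme0n1
sudo mkfs.btrfs -f /dev/nvme0n1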


I use btrfs with rw,noatime,nodatasum,nodatacow,nobarrier,ssd,discard=async,space_cache and it works really well. I trim hourly.
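If you’d rather not run the hourly trim from cron, an alternative on systemd distros (assuming util-linux ships fstrim.timer, which defaults to weekly) is a drop-in override that reschedules it hourly:

# /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=hourly

# then reload systemd and enable the timer
sudo systemctl daemon-reload
sudo systemctl enable --now fstrim.timer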

Did you test the performance against other filesystems?

As far as I know, the discard=async mount option is there so you don’t need to perform TRIM manually yourself. I don’t know whether the other mount options improve speed in any way.

I will perform the test again out of curiosity when I finish my current batch :sweat_smile:


Awesome, I am curious :) I don’t think this is a real-world test for me, though, as I am running 30 plots in parallel. Unfortunately, I cannot test it, as it would just require too much downtime, and I highly doubt madmax could beat my current config doing 80 plots per day.

XFS on a single NVMe + RAM disk, ~18 min/plot
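For reference, the RAM disk part is usually just a tmpfs mount; the mount point and size here are assumptions (madmax plotting is often paired with roughly 110 GiB of tmpfs for the second temp dir):

# create a tmpfs RAM disk; size and mount point are placeholders
sudo mkdir -p /mnt/ram
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram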

Why the hell would you add extra overhead with SW RAID or fancy stuff like BTRFS???

Many have tried; SW RAID is always slower than a single fast NVMe, such as the famous Corsair MP600.