Optimal mdadm raid 0 options for plotting

I’m going to be using 4 SATA SSDs in RAID 0 and I’m wondering what chunk size and other mdadm options would be optimal for that.

Would changing those settings make a difference in the speed or lifespan of the SSDs, or should I just keep the defaults?

I’ll then mount the RAID partition through fstab with the discard option. Does discard work the same with RAID as it does with individual drives?

I use XFS and mount with noatime and nodiratime. I don’t bother with discards but have fstrim set up to run every other day. If doing the same, pay attention to sunit and swidth. Most distros should get that right if you have the block layer set up properly. sunit should be bs * 2 and swidth should be that times the number of drives. Makes a huge difference in my testing.
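A minimal sketch of that setup, assuming a 4-drive RAID 0 at /dev/md0 with the mdadm default 512 KiB chunk and /mnt/raid as the mount point (both assumptions; substitute your own values). Here su is set to the RAID chunk size and sw to the number of data drives, the usual convention from which XFS derives sunit/swidth:

# su = RAID chunk size, sw = number of data drives in the stripe
sudo mkfs.xfs -d su=512k,sw=4 /dev/md0

# fstab entry with the mount options mentioned above (no discard; fstrim handles trimming)
# /dev/md0  /mnt/raid  xfs  noatime,nodiratime,nofail  0  0

# check the stripe geometry XFS actually picked up
xfs_info /mnt/raid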
…oh, and if you’re using something other than enterprise-grade drives (or even if you are), I suggest overprovisioning. I do that on all the SSDs I use (NVMe and SATA) and it makes quite a bit of difference. I use 5% OP.
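One common way to get that 5% OP (described here as a general method, not necessarily what this poster does) is to trim the whole drive and then partition only 95% of it, leaving the rest unallocated for the controller:

# WARNING: blkdiscard erases everything on the device; only do this on an empty drive
sudo blkdiscard /dev/sda
sudo parted /dev/sda --script mklabel gpt mkpart primary 0% 95%
# then build the RAID 0 from the partitions (e.g. /dev/sda1, /dev/sdb1, ...) instead of the whole disks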


Thanks, noatime/nodiratime sound useful!

What are you configuring to provide for a 5% OP?

Also, why no discards?

I’ve got two Intel DC S3700s in RAID 0 using mdadm.

Formatted in XFS.

I played around with chunk sizes: 64, 256, 512 (default), 1024, and 2048 KiB.

The smaller chunk sizes increased my plot times: 64 added 3-4 minutes compared to the mdadm default of 512.

Going up one size to 1024 lowered my plot time by roughly 1 minute versus the default of 512. A chunk size of 2048 equaled 512 in plot time.

I’m using a chunk size of 1024 for my system (see the sketch after the specs below)…

Dell T5810
128 GB Ram
E5-2683 V4
Two Intel DC S3700 (400 GB) in RAID 0 with mdadm
Ubuntu desktop
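
For reference, a sketch of how an array like that could be created (the device names are placeholders; substitute your own):

sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 --chunk=1024 /dev/sda /dev/sdb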

mdadm runs fine with default settings.
more drives = more speed.
be sure to trim.
have fun

Plot storage should be JBOD XFS.

When plotting Chia on multiple NVMe drives, it is recommended to use RAID 0. Here are the mdadm settings for plotting on multiple NVMe drives:

  1. Install mdadm:
sudo apt-get update
sudo apt-get install mdadm
  2. Create the RAID 0 array:
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

Note: Replace /dev/nvme0n1, /dev/nvme1n1, /dev/nvme2n1, and /dev/nvme3n1 with the correct device names for your NVMe drives.

  3. Format the RAID array:
sudo mkfs.xfs /dev/md0
  4. Create a mount point:
sudo mkdir /mnt/raid
  5. Mount the RAID array:
sudo mount /dev/md0 /mnt/raid
  6. Verify the RAID array is mounted:
df -h

You should see the mounted RAID array /dev/md0 listed.
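
One extra step worth mentioning (a general note, not part of the original instructions): to have the array reassemble automatically at boot, record it in mdadm.conf. A sketch for Debian/Ubuntu-style systems:

# show the array status
cat /proc/mdstat

# append the array definition so it is assembled at boot (config path varies by distro; Debian/Ubuntu shown)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u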

To make a fstab entry for the RAID array you created using mdadm, follow these steps:

  1. Open the /etc/fstab file in a text editor with administrative privileges:
sudo nano /etc/fstab
  2. Add a new line at the end of the file to specify the mount point and options for the RAID array:
/dev/md0 /mnt/raid xfs defaults,nofail 0 0

Note: Replace /dev/md0 and /mnt/raid with the correct device name and mount point for your RAID array.
  3. Save and close the file by pressing Ctrl+X, then Y, and then Enter.
  4. To test the fstab entry, unmount the RAID array by running:

sudo umount /mnt/raid
  5. Remount the RAID array by running:
sudo mount -a

This command will read the /etc/fstab file and attempt to mount all entries listed in the file. If there are no errors, the RAID array should be mounted at the specified mount point.

  6. Verify the RAID array is mounted:
df -h

You should see the mounted RAID array listed with the correct mount point and options.

Note: It is important to test the fstab entry before rebooting to ensure the RAID array is correctly mounted at boot time.
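
As a side note (not from the original steps), md device names can change between boots (e.g. /dev/md0 showing up as /dev/md127), so an fstab entry keyed on the filesystem UUID is often more robust:

# find the filesystem UUID on the array
sudo blkid /dev/md0
# example fstab line (the UUID is a placeholder)
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/raid  xfs  defaults,nofail  0  0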

There are several optimization flags you can add when creating a RAID array with mdadm to improve its performance. Here are some examples:

  1. --chunk: This option specifies the chunk size for the array. The default chunk size is 512 KiB; larger chunks tend to favor large sequential I/O while smaller chunks can suit small random I/O, so benchmark with your own workload. For example:
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/nvme0n1 /dev/nvme1n1
  2. --bitmap: This option adds a write-intent bitmap, which speeds up resynchronization after an unclean shutdown or after a temporarily removed device is re-added. It only applies to redundant levels (RAID 1/5/6/10), not RAID 0. For example, with RAID 1:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/nvme0n1 /dev/nvme1n1
  3. --write-mostly and --write-behind: These options apply only to RAID 1, not RAID 0. --write-mostly marks a device so that reads are directed away from it when possible, and --write-behind (which requires a write-intent bitmap) lets writes to --write-mostly devices complete asynchronously. For example:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 --bitmap=internal --write-behind=256 /dev/nvme0n1 --write-mostly /dev/nvme1n1
  4. --run: This option skips the usual confirmation prompts so the array can be created non-interactively. For example:
sudo mdadm --create --verbose --run /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

Note: It is important to carefully consider the impact of each option and how it may affect the reliability, performance, and recovery time of the RAID array. It is also recommended to test the array and benchmark its performance after applying optimization flags to determine if they are having the desired effect.
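
For the benchmarking mentioned above, here is a minimal fio sketch (the job parameters are arbitrary examples, not tuned values) that drives large sequential writes against the mounted array:

# 4 jobs writing 4 GiB each in 1 MiB sequential blocks, bypassing the page cache
sudo fio --name=seqwrite --directory=/mnt/raid --rw=write --bs=1M --size=4G --numjobs=4 --ioengine=libaio --direct=1 --group_reporting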

To trim an mdadm RAID array, you need to perform the following steps:

  1. Check if your system and disks support the trim command by running the following command:
sudo hdparm -I /dev/sdX | grep TRIM

Note: Replace /dev/sdX with the appropriate device name of your disk; hdparm works for SATA drives, while for NVMe (or any drive) you can check discard support with lsblk --discard. If the output shows that your disk supports TRIM, you can proceed with the next steps. If not, you cannot use trim on that disk.

  2. Check whether discard is enabled on the RAID array by running the following command:
sudo cat /sys/block/md0/queue/discard_{granularity,max_bytes}

Note: Replace md0 in the path with the appropriate device name of your RAID array. If both values are non-zero, discard already works on the array and you can skip straight to the fstrim step below. If either value is 0, the md device is not accepting discards.

  3. There is no mdadm flag that turns trim on or off. For RAID 0, the md layer simply passes discards through to the member devices, so discard support on the array reflects whether every underlying drive (and your kernel) supports it. If the values above are 0, re-check the individual drives (hdparm -I for SATA, lsblk --discard in general), make sure you are running a reasonably recent kernel, and then repeat the check:

sudo cat /sys/block/md0/queue/discard_{granularity,max_bytes}

The output should show a non-zero discard granularity and a non-zero maximum discard size.

Once discard is available on your mdadm RAID array, you can use the fstrim command to trim unused blocks on the file system. For example, to trim the /mnt/raid file system, you can run the following command:

sudo fstrim /mnt/raid

Note: It is recommended to periodically trim your RAID array to maintain write performance and reduce write amplification on the SSDs.
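
For that periodic trimming, one convenient option on systemd-based distros (a general suggestion, not from the original post) is the fstrim timer that ships with util-linux:

# enable the built-in weekly timer that runs fstrim on all eligible mounted filesystems
sudo systemctl enable --now fstrim.timer
systemctl status fstrim.timer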

Hope that clears everything up about mdadm.
