When plotting Chia on multiple NVMe drives, RAID 0 is a common choice: plot files are temporary scratch data, so the lack of redundancy is acceptable, and striping maximizes throughput. Here is a typical mdadm setup for plotting on multiple NVMe drives:
- Install mdadm:
sudo apt-get update
sudo apt-get install mdadm
- Create the RAID 0 array:
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
Note: Replace /dev/nvme0n1, /dev/nvme1n1, /dev/nvme2n1, and /dev/nvme3n1 with the correct device names for your NVMe drives.
- Format the RAID array:
sudo mkfs.xfs /dev/md0
- Create a mount point:
sudo mkdir /mnt/raid
- Mount the RAID array:
sudo mount /dev/md0 /mnt/raid
- Verify the RAID array is mounted:
df -h
You should see the mounted RAID array /dev/md0 listed.
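The create step above can be wrapped in a small helper that builds the mdadm command from a device list and prints it for review instead of running it. This is a sketch; `build_raid0_cmd` is a hypothetical helper, not part of mdadm, and the device names are examples:

```shell
#!/bin/sh
# Build (but do not run) the mdadm command for a RAID 0 array from a
# list of member devices, so it can be reviewed before executing.
# build_raid0_cmd is a hypothetical helper, not part of mdadm.
build_raid0_cmd() {
    md_dev=$1
    shift
    printf 'mdadm --create --verbose %s --level=0 --raid-devices=%d' \
        "$md_dev" "$#"
    for dev in "$@"; do
        printf ' %s' "$dev"
    done
    printf '\n'
}

build_raid0_cmd /dev/md0 /dev/nvme0n1 /dev/nvme1n1
```

Passing the device count through `--raid-devices=$#` keeps the flag in sync with however many drives you list.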
To add an fstab entry for the RAID array you created with mdadm, follow these steps:
- Open the /etc/fstab file in a text editor with administrative privileges:
sudo nano /etc/fstab
- Add a new line at the end of the file to specify the mount point and options for the RAID array:
/dev/md0 /mnt/raid xfs defaults,nofail 0 0
Note: Replace /dev/md0 and /mnt/raid with the correct device name and mount point for your RAID array, and make sure the filesystem type matches what you formatted with (xfs in the example above).
- Save and close the file by pressing Ctrl+X, then Y, and then Enter.
- To test the fstab entry, unmount the RAID array by running:
sudo umount /mnt/raid
- Remount the RAID array by running:
sudo mount -a
This command reads the /etc/fstab file and attempts to mount every entry listed in it. If there are no errors, the RAID array should be mounted at the specified mount point.
- Verify the RAID array is mounted:
df -h
You should see the mounted RAID array listed with the correct mount point and options.
Note: It is important to test the fstab entry before rebooting to ensure the RAID array is correctly mounted at boot time.
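A small helper can compose the fstab line for you. Referencing the filesystem by UUID is more robust than /dev/md0, because md device numbers can change across reboots; `make_fstab_entry` is a hypothetical helper, and the UUID below is a placeholder:

```shell
#!/bin/sh
# Compose an fstab line for the array. Referencing the filesystem by
# UUID is more robust than /dev/md0, since md device numbering can
# change across reboots. make_fstab_entry is a hypothetical helper.
make_fstab_entry() {
    uuid=$1
    mountpoint=$2
    fstype=$3
    printf 'UUID=%s %s %s defaults,nofail 0 0\n' \
        "$uuid" "$mountpoint" "$fstype"
}

# Look up the real UUID with: sudo blkid -s UUID -o value /dev/md0
make_fstab_entry YOUR-UUID-HERE /mnt/raid xfs
```

The nofail option keeps the system booting even if the array is missing, which matters for scratch storage like plotting space.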
There are several optimization flags you can add when creating a RAID array with mdadm to improve its performance. Here are some examples:
- --chunk: This option sets the chunk (stripe) size for the array. The default is 512 KiB. Larger chunks tend to favor large sequential transfers such as plotting, while smaller chunks can help small random I/O; benchmark rather than guess. For example:
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/nvme0n1 /dev/nvme1n1
- --bitmap: This option enables a write-intent bitmap, which speeds up resynchronization after an unclean shutdown or a temporarily removed device. Bitmaps apply only to RAID levels with redundancy (1, 4, 5, 6, 10); mdadm will refuse one on a RAID 0 array. For example:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 --bitmap=internal /dev/nvme0n1 /dev/nvme1n1
- --write-mostly and --write-behind: These RAID 1 options tune how a mirror handles I/O; they do not apply to RAID 0. --write-mostly marks the device(s) that follow it so reads are directed to the other mirror(s) whenever possible, and --write-behind=N (which requires a bitmap) allows up to N outstanding writes to write-mostly devices to be acknowledged before they complete. For example:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 --bitmap=internal --write-behind=256 /dev/nvme0n1 --write-mostly /dev/nvme1n1
- --run: This option starts the array immediately without asking for confirmation, which is useful in scripts. For example:
sudo mdadm --create --verbose --run /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
Note: It is important to carefully consider the impact of each option and how it may affect the reliability, performance, and recovery time of the RAID array. It is also recommended to test the array and benchmark its performance after applying optimization flags to determine if they are having the desired effect.
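To act on that benchmarking advice, it helps to generate one create command per candidate chunk size and test each in turn (for example with fio) before committing. This is a sketch; `chunk_sweep` is a hypothetical helper and the device names are examples:

```shell
#!/bin/sh
# Print one mdadm create command per candidate chunk size (in KiB) so
# each can be benchmarked in turn (e.g. with fio) before committing.
# chunk_sweep is a hypothetical helper; device names are examples.
chunk_sweep() {
    for chunk in "$@"; do
        printf 'mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 --chunk=%s /dev/nvme0n1 /dev/nvme1n1\n' \
            "$chunk"
    done
}

chunk_sweep 64 128 256 512 1024
```

Remember that recreating the array destroys its contents, so run the sweep before putting any plots on it.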
To trim an mdadm RAID array, TRIM (discard) support must be present end to end: in the member drives, in the md device, and in the filesystem.
- Check that each member drive supports TRIM:
lsblk --discard /dev/nvme0n1
Note: Replace /dev/nvme0n1 with the appropriate device name of your disk. Non-zero values in the DISC-GRAN and DISC-MAX columns mean the drive supports discard. (hdparm -I ... | grep TRIM only reports TRIM for SATA drives, not NVMe.)
- Check that the md device passes discards through:
cat /sys/block/md0/queue/discard_{granularity,max_bytes}
Note: Replace md0 with the appropriate device name of your RAID array. If both values are non-zero, TRIM is available on the array. On modern kernels, a RAID 0 array passes discards through automatically when every member supports them; no extra mdadm configuration is needed. (Write-intent bitmaps are unrelated to TRIM, and RAID 0 does not support them anyway.)
Once TRIM support is confirmed, you can use the fstrim command to trim unused blocks on the file system. For example, to trim the /mnt/raid file system, run:
sudo fstrim /mnt/raid
Note: It is a good idea to trim the array periodically (for example with the systemd fstrim.timer, which runs weekly) to keep SSD write performance from degrading as the drives fill. Trimming helps performance; it does not prevent data loss.
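The sysfs check above can be scripted: a non-zero queue/discard_max_bytes under /sys/block/&lt;dev&gt; means discards can reach the device. `supports_trim` is a hypothetical helper, not a standard tool:

```shell
#!/bin/sh
# Report whether a block device accepts discards: a non-zero
# queue/discard_max_bytes in sysfs means TRIM can reach the device.
# supports_trim is a hypothetical helper, not a standard tool.
supports_trim() {
    f=$1/queue/discard_max_bytes
    [ -r "$f" ] && [ "$(cat "$f")" -gt 0 ]
}

if supports_trim /sys/block/md0; then
    echo "md0: TRIM available"
else
    echo "md0: TRIM not available (or no such array)"
fi
```

You could run this check from a cron job or a pre-plot script to catch an array that silently lost discard support after a reconfiguration.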
Hope that clears everything up about mdadm.