How to format multiple drives and then create mountpoints and fstab entries?

My JBODs have finally been delivered, and now I'm facing a problem: I have drives from sda through roughly sdhz.
It's going to take a hell of a lot of time to format them all to ext4, create mountpoints, and add them by UUID to fstab.
Are there any commands out there that can help me automate this a bit?


I parse the blkid output to generate the fstab file. For formatting I use a bash script.
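For anyone wanting a starting point, here is a minimal sketch of that approach (not the exact script from this post). It assumes whole-disk ext4 filesystems, mountpoints named after the kernel device, and an output file name of my choosing; review fstab.generated before merging it into /etc/fstab, and exclude your OS disk from the glob if it falls in the sdX range:

#!/usr/bin/env bash
# Sketch: generate fstab lines from blkid for every ext4 data drive.
# Assumes whole-disk filesystems (no partition table) on /dev/sda..sdhz.
for dev in /dev/sd? /dev/sd??; do
    [ -b "$dev" ] || continue
    uuid=$(sudo blkid -s UUID -o value "$dev") || continue  # skips unformatted disks
    name=$(basename "$dev")
    sudo mkdir -p "/mnt/$name"
    printf 'UUID=%s /mnt/%s ext4 defaults,nofail,x-systemd.device-timeout=4 0 2\n' \
        "$uuid" "$name"
done > fstab.generated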

Can you post the bash script? I'd like to see it.
I go the manual route with GParted and give friendly names to the drives.

I always use striped JBOD sets of at least 4-8 drives for better space utilization and faster speed. Don't use ext4; it is crap for managing large spaces. Use XFS.


Use the -L option in mkfs to add a label when creating the filesystem. I label mine with the serial number of the drive. I then create a mountpoint named after the slot where the drive is loaded. If a drive goes bad, you get the location and the serial number so you know which one to pull.
The fstab entry mounts by label rather than UUID.
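A sketch of that scheme, in case it helps. The script name, device, and slot argument below are examples, not from this post; note that ext4 labels cap at 16 characters, so long serials get truncated:

#!/usr/bin/env bash
# Usage: ./prep-drive.sh /dev/sdab jbod1-slot07   (example names)
set -euo pipefail
dev=$1; slot=$2
serial=$(lsblk -dno SERIAL "$dev" | tr -d '[:space:]' | cut -c1-16)  # ext4 label limit
sudo mkfs.ext4 -m 0 -L "$serial" "$dev"   # label = drive serial
sudo mkdir -p "/mnt/$slot"                # mountpoint = physical slot
echo "LABEL=$serial /mnt/$slot ext4 defaults,nofail,x-systemd.device-timeout=4 0 2" |
    sudo tee -a /etc/fstab
sudo mount "/mnt/$slot"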


Here you go.


If a drive crashes, this leaves you searching for a UUID that isn't there. It also leaves you searching for a serial number that is no longer reported by the system. I think these scripts would work great for folks with 10 drives in a box, but they fall apart when dealing with 20+ drives. Not a problem if everything continues to run smoothly, but a mess if something breaks.

You can just use the nofail option, which eliminates the delay and potential problems like dropping to emergency mode if a drive fails.


That addresses the boot concern, but it doesn't take care of the maintenance side: how do you find the drive that failed and replace it? When formatting the drive, the drive is healthy and can report its serial number. When inserting the drive, you know which JBOD and slot you are putting it in. The upfront effort of using those two pieces of info to mount your drives will make maintaining them much easier. Alternatively, some JBODs offer chassis service features to identify which slots are active; however, these tend to be more expensive.
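If you do end up hunting after the fact, lsblk can at least show what the system still sees, so you can compare against your slot map; the failed drive is simply the one missing from the list:

lsblk -o NAME,SERIAL,LABEL,SIZE,MOUNTPOINT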

You would usually have a map of each JBOD and name the system disks according to the row/column of the physical disk, or something similar using the mount point, serial number, or another identifier that matches easily in both software and hardware.

Yes, as described in my earlier post in this thread.

To create the filesystem, use:
sudo mkfs.ext4 -m 0 -T largefile4 -L $LABEL /dev/$DEV_ID
Don't use plain ext4 without the largefile4 tuning; it sets a much lower inode ratio, which frees more space for data.
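To batch that across a whole shelf, a loop like this is one way. A sketch only: the device ranges and serial-derived labels are my assumptions, and mkfs is destructive, so triple-check the device list before running:

#!/usr/bin/env bash
# DESTRUCTIVE: formats every device it matches. Assumes /dev/sda is the OS disk.
set -euo pipefail
for dev in /dev/sd{b..z} /dev/sd{a..h}{a..z}; do
    [ -b "$dev" ] || continue
    label=$(lsblk -dno SERIAL "$dev" | tr -d '[:space:]' | cut -c1-16)  # ext4 label limit
    sudo mkfs.ext4 -m 0 -T largefile4 -L "$label" "$dev"
done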
You may also use XFS, but in my testing with 8 TB drives and C7 bladebit plots:

  • ext4: 89 plots per drive
  • XFS: 94 plots per drive
  • ext4 + largefile4: 95 plots per drive

Don't make the same mistake I did; get the most from your drives :slight_smile:

Here is a sample fstab entry:

UUID=$UUID /mnt/$MNT_DIR ext4 defaults,nofail,x-systemd.device-timeout=4 0 2

or

LABEL=$LABEL /mnt/$MNT_DIR ext4 defaults,nofail,x-systemd.device-timeout=4 0 2

I prefer to use labels instead of UUIDs, and this fstab format will let your OS boot even with failed or missing drives.
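To generate those label-based lines in bulk, something like this works as a sketch (mountpoint-per-label is my placeholder convention; review the output before appending it to /etc/fstab):

lsblk -dno FSTYPE,LABEL | awk '$1 == "ext4" && $2 != "" {
    print "LABEL=" $2 " /mnt/" $2 " ext4 defaults,nofail,x-systemd.device-timeout=4 0 2"
}'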

I've used this a few times and it works great with DS424x enclosures to identify problematic drives; it blinks the LED on the drive you're searching for. Although I found that after a round of LED on and LED off, I'd have to reboot the enclosure to get that particular slot working again with a new drive in it.
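For anyone without vendor tooling: on Linux, the ledctl utility from the ledmon package can usually drive those locate LEDs too, assuming the enclosure speaks standard SES-2 (the device name below is an example):

sudo ledctl locate=/dev/sdab      # blink the slot LED for sdab
sudo ledctl locate_off=/dev/sdab  # stop blinking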
