Copy Plots to SMR Drive

I happen to have several SMR drives that I would like to put into use. Copy times are horrible! Anyone out there know if there is a way to get plots onto an SMR drive in a reasonable time?

Currently I am using rsync and limiting the transfer speed to 2.5 MB/s, and I do not have any issues. But each plot takes upwards of 12 hours to copy over.
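
For reference, a bandwidth-limited rsync like the one described might look roughly like this (paths are placeholders; --bwlimit takes KB/s, so about 2560 for 2.5 MB/s):

rsync --progress --bwlimit=2560 /plots/plot-k32-xxxx.plot /mnt/smr/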

Have you tried basic OS-level commands like cp or mv? What file systems are on the HDDs?

Since an SMR drive needs to do multiple steps (PC-assisted, I think) to rewrite already-used space on the drive, I would suggest reformatting, then writing to the drive only in a single stream. That may alleviate the extra steps it needs to perform when writing on a ‘used’ track…maybe.
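
A quick way to check how the kernel classifies the drive (host-aware or host-managed SMR shows up here; drive-managed SMR usually just reports none) — assuming the drive is sdb:

cat /sys/block/sdb/queue/zoned   # prints none, host-aware, or host-managed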


It has to be a user error (something wrong with the setup), as it is rather unlikely that a drive would take 12 hours instead of 15 minutes. Sure, SMR drives suck, but mostly (or only) in RAID setups, not that much as single drives.

Thanks for the feedback. Yes, the drives are newly partitioned and formatted, and I am copying in a linear stream. So what I am doing is probably the best it is going to get.

It is only taking 12 hours because I am limiting the speed. If I let the drives run at full speed, it starts raising my iowait, which ends up affecting everything else on the machine.
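
For anyone watching the same thing, iostat from the sysstat package shows it per device:

iostat -x 5   # %iowait on the CPU line, w_await and %util per drive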

I am going to try and raise the speed a bit more, but I'll just live with the slow copy. It is what it is. Got the drives for free, can't complain!

What's the model number of these drives? And are you running Linux?
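
If you're not sure, something like this should list them:

lsblk -o NAME,MODEL,SIZE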


Yes, running Linux. Ubuntu Server.

Have you done a quick format or a regular format? What's the filesystem and block size?

The file system is ext4. Honestly, I don't know what block size it is; I'll see if I can find it. I just used the defaults.
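
For what it's worth, the block size can be read back from the superblock (assuming the partition is something like /dev/sdb1; ext4 defaults to 4096):

sudo tune2fs -l /dev/sdb1 | grep "Block size"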

Really, none of that matters.

You just don't want to be deleting data and rewriting.
Think of them as write-once drives.

Just write your plots to the drive, one time.
Format with XFS; it is perfect for plot storage with the proper flags.
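
Something along these lines, for example (device and mount point are placeholders; the exact flags are up to you):

sudo mkfs.xfs -f /dev/sdb1                   # format; -f overwrites any old filesystem
sudo mkdir -p /mnt/plots
sudo mount -o noatime /dev/sdb1 /mnt/plots   # noatime avoids metadata writes on every read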

Curious, though, about the read performance and how the harvester handles the drive. Back the drives with a RAM/SSD cache to be safe.

Let me know the results of this, though.

mkdir test
sudo mount /dev/sdxx test   # sdxx is a placeholder for your device
cd test

sudo su

# write a 1 GiB test file
sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync

# read it back (served from the page cache)
dd if=tempfile of=/dev/null bs=1M count=1024

# drop the page cache
sync; echo 3 > /proc/sys/vm/drop_caches

# read it again, this time from the disk
dd if=tempfile of=/dev/null bs=1M count=1024

Once copies have finished, reads are perfectly fine. Response times are sub-100 ms.

I do not understand your problem…
Why not limit the I/O of your copy command to use only idle I/O resources with ionice -c 3 <your_command>, instead of limiting the copy speed?
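
A minimal sketch of that, with placeholder paths:

ionice -c 3 rsync --progress /plots/plot-k32-xxxx.plot /mnt/smr/   # class 3 (idle) only uses I/O when nothing else wants it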


To be honest, I never explored that idea. I will try that on the next set of copies.