Seeking help: how to auto-move plots from SSD/NVMe (ext4) to HDD (NTFS) in Ubuntu 20.04

I am a newbie to the Ubuntu/Linux world. Currently I have madmax set up to copy plots to an internal 14TB SAS HDD (ext4) for testing, which takes around 5 minutes per copy; that is still OK for the moment. My plan is to set the destination to a temporary SSD/NVMe drive, which is much faster and may only take 2-3 minutes per copy. The rest of my disks are all NTFS-formatted so they can be used for Windows farming later.

But I am still learning how to auto-move plots from the SSD/NVMe (ext4) to the HDD (NTFS) in Linux.

Does the different file system matter? If anyone could advise, many thanks in advance.

Background: dual Xeon E5-2670 v2 CPUs with 288GB RAM in total, 256GB of which is set up as a ramdisk for temp and temp2.
If I set the destination directly to a 14TB NTFS drive, the system always crashed during plot 2 when writing to disk; my guess is that it may be due to not having enough space.
I am now looking for a way to dump the finished 110GB plot out of memory quickly, or for anything else that may cause this problem.

Linux supports NTFS, but you may need to install a package. Have you tried mounting and writing to the drives yet?

Ubuntu currently supports NTFS with the default setup. I tested with the mv command and it works from ext4 to NTFS. I just don't know how to run it automatically at, say, a 60-second interval. Does anyone have experience with Linux commands for this?

I would recommend rsync, mainly because it allows you to limit the bandwidth of the transfer, so going from a fast drive to a slow drive will be more consistent instead of bursts of speed mixed with slowing to a crawl. It is a very powerful file transfer tool; take a look at some examples of using it.
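As a rough sketch of that rsync idea (the function name, mount points, and the bandwidth cap below are placeholders I made up, not paths from this thread; adjust them to your own mounts):

```shell
#!/bin/bash
# move_plots: copy every *.plot file from a source directory to a
# destination at a capped rate, then delete the source copies.
# 100000 KB/s (~100 MB/s) is just an example value; tune it to what
# your HDD can sustain without bursting.
move_plots() {
    local src="$1" dst="$2"
    rsync -a --bwlimit=100000 --remove-source-files "$src"/*.plot "$dst"/
}

# Example usage with hypothetical mount points:
# move_plots /mnt/nvme /mnt/ntfs/plots
```

`--remove-source-files` makes rsync behave like a move: each plot is deleted from the source only after it has been fully transferred, which is safer than mv across file systems if the copy is interrupted.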

#!/bin/bash
while true; do
    mv /ext4/*.plot /ntfs/tmp/
    mv /ntfs/tmp/*.plot /ntfs/plots/
    sleep 60
done

Save it as "automove" and make it executable with chmod +x.

Run it in a screen session or as a background service.


You might want to look at cronjobs. This is exactly the sort of thing cronjobs do. To schedule a job you run the command:

crontab -e

and then edit the file to create a job. You’ll have to learn the notation if you want to create a custom schedule, but here are a few examples (pick one):

# Run this script every minute
* * * * * /home/jerry2021/

# Run this script every 10 minutes
*/10 * * * * /home/jerry2021/

# Run this script every hour
0 * * * * /home/jerry2021/

Save and close. Your script will then run at (roughly) the scheduled time.

“But what should this mover script be???” I hear you asking telepathically through the monitor.

It can be anything you'd like; a bash script is just a bunch of bash commands you want to run when you execute the script. A basic script will look like this:

#!/bin/bash
# Jerry's Awesome Mover Script

# Move anything ending with *.plot from my other NVMe to
# my NTFS drive.
mv /media/jerry2021/my-other-nvme/*.plot /media/jerry2021/my-ntfs-drive/plots/

Finally, you’ll need to make the mover script executable.

chmod +x

Now you can test/run the script by using the command


If you haven’t already, add this script to the crontab, and that’s it. It’ll run this script at the scheduled time.

Feel free to modify the script to be as complex as you like. Prefer rsync? Then use rsync instead of mv. Want to check if there is a plot before running mv? Add some if statements, etc. This is the power and joy of Linux scripting.
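For example, the "add some if statements" idea might look like the sketch below. The function name is mine, and the lsof check is one possible way (my assumption, not something from this thread) to skip a plot that madmax is still copying out:

```shell
#!/bin/bash
# move_finished_plots: move *.plot files from src to dst, skipping any
# file that some process still has open for writing (e.g. madmax is
# still copying it into place).
move_finished_plots() {
    local src="$1" dst="$2" f
    for f in "$src"/*.plot; do
        # If the glob matched nothing, $f is the literal pattern -- skip it
        [ -e "$f" ] || continue
        # Skip files that are still open in another process
        if command -v lsof >/dev/null && lsof "$f" >/dev/null 2>&1; then
            continue
        fi
        mv -- "$f" "$dst"/
    done
}

# Example usage with hypothetical paths:
# move_finished_plots /media/jerry2021/my-other-nvme /media/jerry2021/my-ntfs-drive/plots
```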

It sounds like a lot of work, but really once you understand it, you can write some really funky scripts.



Although I have just given you an answer to your original question, a question of my own popped up: why do you want to temporarily write the plot to an NVMe, and then move it to an NTFS drive?

When madmax writes the plot to a destination directory, it runs the copy in the background and continues plotting (unless you told madmax to wait for the copy with the -w flag).

It seems a little unnecessary to first copy to a temporary NVMe; there doesn't seem to be anything gained. You can just make the NTFS drive the destination directory, and it shouldn't slow anything down.


Thanks a lot for your detailed answer.
As for your question, I think it depends on the situation.
In my case, I am using the full 256GB ramdisk as the temp and temp2 drives, so there is not much temp space left.
I already set the destination to the NTFS drive for testing, and it crashed every time; I guess this is due to the tight memory available. With a big temp disk, like a 1TB NVMe as the first temp disk, there should be no such crashing problem.
So in my case it is necessary to use a fast destination drive to dump the finished 110GB plot out of memory quickly.
I do it not for efficiency but to prevent the system from crashing...
But as a newbie, I am willing to share with everyone on the forum in case something else is causing the problem...


I will try it during the weekend…

If your total memory is 256GB and you used all 256GB for temp and temp2, it will crash.

You should add an NVMe disk for temp; then you can run two processes, which will be much faster than that.

Actually, I did add extra RAM and have 288GB in total, with 256GB set up as the ramdisk.
Sorry if I did not make that very clear.

Still, your help is much appreciated, and I will try your suggestion as well.
At this point the testing is more about the fun of exploring the tech itself, since I have done most of my plotting anyway. So I will try different methods and share them with you all.


Ah, I see. That makes sense. :+1:

I wish 256GB of RAM would fit in my system; running it as tempdir1 & tempdir2 sounds like fun.

If your system is still crashing because it is taking too long to copy the file out of the ramdisk, you could use the -w option in madmax; it will then wait for the copy out of memory before starting the next job, so you don't run out of memory.

Like you said, it will copy the plot from RAM to NVMe so you won't have to wait too long; then use the cronjob or a script to copy it from NVMe to NTFS. But you probably won't need -w if you have enough RAM and your system is stable.

Good luck and have fun.

So, I recently had to take my own advice here and automate a plot moving script and I wanted to add something I discovered (my original post is no longer editable).

Turns out, if you run this as-is, the cron job will start a new move job every minute, even if the previous job is still running. To prevent this you have two options.

  1. Flock - This locks the command so it isn’t run again if one is already running.
  2. run-one - A wrapper for flock, but (slightly) easier to use.


To use flock to lock a command that is already executing your cronjob would look like this:

* * * * * /usr/bin/flock -n /tmp/mover.lock /path/to/your/

In this command we use an arbitrary file called /tmp/mover.lock, which flock uses to track the lock; flock will create this file if it doesn't exist. While the script is executing, a lock is held on this file, and subsequent flock commands will see the lock and fail accordingly; -n tells flock to fail with exit status 1 if the file is locked. This allows the previous run to continue while the new cron job does nothing.
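That exit-1 behaviour of -n is easy to see in a quick test (the lock file name here is arbitrary, and this is just a demonstration, not part of the mover setup):

```shell
#!/bin/bash
# While a background flock holds the lock, a second "flock -n" on the
# same file fails immediately with exit status 1 instead of waiting.
lock=$(mktemp)

flock -n "$lock" sleep 2 &   # hold the lock for 2 seconds
sleep 0.5                    # give the background job time to acquire it

if flock -n "$lock" true; then
    echo "acquired"
else
    echo "locked"            # this branch runs while the lock is held
fi
wait
```

In the cronjob, that "locked" branch is simply the new invocation exiting without doing anything, leaving the running mover undisturbed.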


run-one is a wrapper for flock. It is available in the Ubuntu apt repositories and can be installed with:

sudo apt install run-one

This behaves in much the same way as the previous example, but with the added benefit of not having to specify your own lockfile. The cronjob example becomes:

* * * * * /usr/bin/run-one /path/to/your/

Just thought I should add this comment in case anyone else is trying to replicate this and ran into the same problem. I should probably consolidate this into a [HOW-TO] later…

