Could you let me know what command I can use to save the output of the “chia plots check” command? I have a bunch of drives with 1.3 PB of plots to check, but I would like to let it run overnight and make sure the result is saved into a log file which I can review afterwards.
Yeah, that command outputs all of its text to stderr (for no good reason; it should use stdout instead), so you need to run something like:
chia plots check -l -n 1 2> file.txt
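Once the run finishes, you can pull the per-plot results out of that saved file instead of scrolling through it. Below is a minimal sketch; note that the exact line format (“Proofs N / M, R”) is an assumption about how chia reports proof counts and may differ between versions, so adjust the regex to match your actual log:

```python
import re

# Assumed log line format: "Proofs <found> / <expected>, <ratio>".
# Verify against your own file.txt and tweak the pattern if needed.
PROOF_RE = re.compile(r"Proofs\s+(\d+)\s*/\s*(\d+)")

def low_quality_plots(log_text: str, threshold: float = 0.5):
    """Return (found, expected, ratio) tuples for checks below threshold."""
    results = []
    for line in log_text.splitlines():
        m = PROOF_RE.search(line)
        if m:
            found, expected = int(m.group(1)), int(m.group(2))
            ratio = found / expected if expected else 0.0
            if ratio < threshold:
                results.append((found, expected, ratio))
    return results
```

Run it over the contents of file.txt in the morning and only the suspect plots surface.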
The plot check is a rather useless command. It always uses 0 as the seed for the check. If you re-run the check with a different seed (using the same starting point and the same number of tests), you will get different results.
So the only reason to run it is with a minimal number of tests (6, I think), just to check whether the plot is busted or not. Also, if there is a bad cluster under a plot, this check will likely not catch it, so you would need to complement it with a chkdsk / fsck test.
There is also a ‘-l’ option that checks for duplicates, although it only checks the local plots (so it is basically useless if the duplicates are spread across different harvesters). The second problem with this option is that once it finishes checking for duplicates, it proceeds with the plot checks, so it is better to either monitor it and stop that lengthy process, or narrow the number of checks in case you want to run it unattended.
NirSoft has a free Windows utility that searches files for pretty much everything under the sun.
I use it to check for duplicate filenames, based strictly on the filename (you can include byte-for-byte comparisons too, but that would take an eternity for plots, and is overkill for finding duplicate plots).
Having it check only for duplicates, based on the filename, you get the results in a few seconds. And if you want to verify whether it is really capable of finding duplicates, create a file with the same name and store it somewhere else within the filesystems being searched. It will find it.
I have two Windows boxes, and I am able to peruse files on either box via “Map Network Drive”. So NirSoft’s program is able to check 100% of my plots in a few seconds.
I give it the drive letters to check, plus the filename parameter “*.plot”, plus the drop-down choice of comparing filenames only.
If you have lots of drive letters, it should still work. But you will have to specify all of them.
I use NTFS mount points for all of my drives (similar to Linux mounting file-systems). So I have only two drive letters where my plots reside (one drive letter for each of my two computers).
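For anyone who wants the same filename-only duplicate check without a GUI, here is a minimal cross-platform sketch. The directory paths you pass in are placeholders (drive letters, NTFS mount points, mapped network drives), and this is my own illustration rather than what NirSoft's tool does internally:

```python
from collections import defaultdict
from pathlib import Path

def find_duplicate_plots(roots):
    """Group *.plot files by bare filename across all given roots.

    Returns only names that appear in two or more locations. Comparison
    is by filename alone, matching the fast NirSoft mode described
    above (no byte-for-byte reads of the plots).
    """
    seen = defaultdict(list)
    for root in roots:
        for path in Path(root).rglob("*.plot"):
            seen[path.name].append(str(path))
    return {name: paths for name, paths in seen.items() if len(paths) > 1}
```

For example, `find_duplicate_plots(["D:/", "E:/", "//otherbox/plots"])` would list every plot name that shows up under more than one of those roots, which also covers duplicates across harvesters as long as their storage is reachable from one machine.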
It is a very good file-searching utility.
Looks like a cool utility.
I guess my point about that was really about false advertising and the lack of clarification by the Chia team (even when explicitly asked).
Thanks a lot, this command worked for me.
If I find a plot with <0.6 health with n = 30, would you recommend deleting this plot and replotting?
That return value has rather little meaning / value.
There is a problem with the CLI documentation, as it does not provide a full description of the check parameters. However, when you run “chia plots check -h” you will also get:
--challenge-start INTEGER Begins at a different [start] for -n [challenges]
You can re-run your test on that plot as follows:
chia plots check -n 30 --challenge-start 100
This time, using the same number of tests but starting from 100 rather than 0, you will get a different result. So which result is valid? Neither one. There are billions of hashes in a k32 plot, and the test uses just 30 challenges to check the hash distribution; as such, it is a rather worthless check.
Also, if you check the code, the test always uses 0 as the starting seed. If you change the code and re-run the test, it will likewise produce different results.
Sure, you could run the test with n = 1,000, and that value will start providing more reasonable results; but from what I have seen, a high n value basically always gives good results. That implies it is rather difficult to produce a plot with a badly skewed distribution. On the other hand, you may assume that if the distribution is not even, the plot will fail to provide proofs for some challenges but will provide more / better proofs for challenges that hit the dense hash regions.
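The noise at low n can be seen with a toy model. Below, each challenge is treated as an independent coin flip that yields a proof with some true probability p (p = 0.9 here is purely illustrative, not a real plot's value); the observed “health” is just the sample mean, so its spread shrinks like 1/sqrt(n):

```python
import random

def observed_ratio(p, n, rng):
    """Fraction of n simulated challenges that produce a proof."""
    return sum(rng.random() < p for _ in range(n)) / n

rng = random.Random(42)
p = 0.9  # hypothetical true proof rate, for illustration only

# Repeat the whole "chia plots check" experiment 1000 times at each n.
small = [observed_ratio(p, 30, rng) for _ in range(1000)]
large = [observed_ratio(p, 1000, rng) for _ in range(1000)]

def spread(xs):
    return max(xs) - min(xs)
```

With n = 30 the observed ratio swings widely from run to run (which is why a single 0.6 reading means little), while with n = 1000 the estimates cluster tightly around the true rate.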