Tips for Chia harvesting using S3-compatible APIs

urghhhhhhhhh

just want to make it as simple and “as safe” as possible; next I will try awscli… Thanks!

rsync is definitely not as safe as possible for this kind of storage and file size;
it’s pretty easy to get I/O errors.

if you use goofys as the file system for the upload, you should increase the HTTP timeout to something like 2 minutes
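A hedged sketch of such a mount, assuming goofys’s --http-timeout flag and an S3-compatible endpoint; the bucket name, endpoint URL, and mount point are placeholders:

```shell
# Sketch: mount an S3-compatible bucket with a 2-minute HTTP timeout so
# large plot uploads are less likely to abort on slow requests.
# Bucket, endpoint, and mount point below are placeholders.
goofys --http-timeout 120s \
       --endpoint https://s3.example-provider.com \
       my-chia-plots /mnt/plots
```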


but the time per proof is more than 5 seconds. What do you suggest?

yes, I am using goofys, so we should set it to 2 minutes, yeah?

From your experience, do you think ns3 storage is really suitable for Chia mining?

With --inplace it failed as well; now changing the timeout to 120s.
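For reference, a minimal sketch of the rsync invocation being discussed, assuming a goofys mount point; the source path and mount point are placeholders:

```shell
# Sketch: copy a plot with --inplace (write directly into the destination
# file instead of a temporary copy) and a 120 s I/O timeout.
# Source path and mount point are placeholders.
rsync --inplace --timeout=120 --progress \
      /mnt/local-plots/example.plot /mnt/goofys-bucket/
```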

trying awscli at the same time as well 🙂

Any idea what kind of test tool is this?

@taijicoinmaster That is the regular chia software; you just need to change the log level from WARNING to INFO in your config file.
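A minimal sketch of that change, assuming the default mainnet config path (~/.chia/mainnet/config/config.yaml); adjust the path if yours differs:

```shell
# Sketch: switch the log level from WARNING to INFO so the harvester logs
# its proof lookup times, then restart the harvester to pick up the change.
# The config path assumes a default mainnet install.
sed -i 's/log_level: WARNING/log_level: INFO/' ~/.chia/mainnet/config/config.yaml
chia start harvester -r
```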

@Chida chia check does full proofs? You have 30 seconds to solve those, so 9.5 seconds would be successful.

If it helps, here are the check times I am seeing (time chia check -n 5):

| Chiapos version | Run | Checks | Time (seconds) | Challenge solve time (s) |
| --- | --- | --- | --- | --- |
| 1.0.2 | 1 | 4 | 42.648 | 10.662 |
| 1.0.2 | 2 | 4 | 49.843 | 12.46075 |
| 1.0.2 | 3 | 4 | 46.918 | 11.7295 |
| 1.0.2 | 4 | 5 | 56.748 | 11.3496 |
| 1.0.3 | 1 | 5 | 14.25 | 2.85 |
| 1.0.3 | 2 | 5 | 13.596 | 2.7192 |
| 1.0.3 | 3 | 5 | 12.563 | 2.5126 |

Even before upgrading to chiapos 1.0.3 times were well under 30 seconds. After upgrading to chiapos 1.0.3 challenge solve times are ~2 seconds.
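For anyone reproducing these numbers, a sketch of the command, assuming the current CLI spells it chia plots check:

```shell
# Sketch: run 5 proof challenges against each plot and time the whole check.
time chia plots check -n 5
```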

Do we need to upgrade this manually?

So this is what you see when you run “chia plots check”?

With 120s timeout, it failed too.

So far only awscli is working.

Is this time a “cold run”? Are you using goofys?

I just signed up for an account on CrowdStorage, and I plan to switch from Wasabi to CrowdStorage soon. I have some questions:

  1. Amazon S3 and Wasabi have regions, but here in CrowdStorage I can’t pick a desired region when creating a bucket. May I know how to do this?

  2. This is the first time I’ve learned that S3 buckets have no direct rename function; I was charged deletion fees, as I found after checking my billing. I plot directly to the bucket, so in order to save storage space on the cloud machine, can we have a patch to chiapos soon?

  3. I’d like step-by-step instructions to get the parallel PR cherry-picked.

Thanks

@jacobcs if I use a server in us-east, the time is good with CrowdStorage!

Plot with a local folder as the destination, then immediately move the file to CrowdStorage;
the aws cli mv command is pretty good for that.
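A hedged sketch of that move, assuming an S3-compatible endpoint; the bucket name, endpoint URL, and file path are placeholders:

```shell
# Sketch: move a finished plot from the local final dir to an
# S3-compatible bucket (deletes the local copy once the upload succeeds).
# Bucket, endpoint, and path are placeholders.
aws s3 mv /mnt/final/example.plot s3://my-chia-plots/ \
    --endpoint-url https://s3.example-provider.com
```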

Thank you! I thought of modifying the sequence in chiapos to rename the plotted file first, before uploading it to the bucket, so it will not use more space locally. I think it’s possible; it could be a patch to chiapos.

This way, it would help save space on the local computer.

You do not need extra space for moving it right afterwards vs. letting chiapos do it;

both require the same space (the size of one plot).

I see, yes, both of them require the same space locally. In order to rename locally, we should use the same directory for the temp2 dir and the final dir, right?
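If so, a minimal sketch under that assumption: point -2 (temp2) and -d (final destination) at the same local folder so chiapos can rename in place. All paths are placeholders.

```shell
# Sketch: temp2 and the final dir are the same local folder, so the
# finished plot is renamed (not copied) to *.plot before you move it
# to the bucket. Paths are placeholders.
chia plots create -k 32 -t /mnt/tmp -2 /mnt/final -d /mnt/final
```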

I just read the code here:

    if (tmp_2_filename.parent_path() == final_filename.parent_path()) {
        fs::rename(tmp_2_filename, final_filename, ec);
        if (ec.value() != 0) {
            std::cout << "Could not rename " << tmp_2_filename << " to " << final_filename
                      << ". Error " << ec.message() << ". Retrying in five minutes."
                      << std::endl;
        } else {
            bRenamed = true;
            std::cout << "Renamed final file from " << tmp_2_filename << " to "
                      << final_filename << std::endl;
        }
    }

So, if the final dir and the temp2 dir are the same folder, chiapos will rename the file to *.plot. I think that is also the easiest way: create the final plot file locally first, then move it to the final S3 bucket.

I also thought they only need to rename the file locally first and then copy/move it to the final S3 bucket. The Chia developers probably didn’t anticipate that users would store plots in S3 buckets, which don’t support a rename command.

I compiled the newest chiapos 1.0.3 and found that the seek time got even longer than with 1.0.2.

In 1.0.2, most are around 5 seconds.