Tips for Chia harvesting using S3-compatible APIs

Will a Wasabi bucket work for this?

Support/admin on the Chia Keybase said it will not work.

We transferred around 1,000 plots to S3. We had some failures too, but the S3 CLI has worked best so far; even then, roughly 1 in 100 still ended up invalid.

With s3fs every transfer failed. goofys plus repeatedly re-running rsync on I/O errors also worked sometimes, but Wasabi/CrowdStorage sometimes charged us for rsync's bad behavior: three months of storage per deleted part…
And that was on a 1 Gbit upload link running constantly above 930 Mbit.

What do you use to transfer? rsync, or something else? Everything failed for me, even wget against a web server I set up for the purpose. Are you using goofys?

For manual transfers we now use the AWS S3 CLI:
https://docs.aws.amazon.com/cli/latest/reference/s3/
For automated transfers, an S3 library for our programming language.

goofys also worked with rsync, though:
goofys + rsync with -W and --inplace
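
A minimal sketch of that combination (bucket name, mount point, and plot filename are placeholders, not from this thread):

# mount the bucket with goofys
goofys BUCKETNAME /mnt/plots
# -W transfers whole files (no delta algorithm); --inplace avoids rsync's temporary file
rsync -avW --inplace /plots/plot-k32-XXXX.plot /mnt/plots/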

I am using Wasabi; there isn't any CLI :(

You can use the AWS CLI with Wasabi…
export AWS_ACCESS_KEY_ID=SECRETID
export AWS_SECRET_ACCESS_KEY=SECRET
aws s3 mv LOCALFILE s3://BUCKETNAME --endpoint-url https://s3.eu-central-1.wasabisys.com
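
If large transfers stall or fail partway, the CLI's multipart settings can also be tuned via aws configure set; a sketch (example values, not from this thread):

# more parallel part uploads and bigger parts for ~100 GB plot files
aws configure set default.s3.max_concurrent_requests 20
aws configure set default.s3.multipart_chunksize 64MB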

Wasabi and CrowdStorage (PolyCloud) are too slow at seeks (even with the parallel-reads patch).

Set it up and test with a COLD run, because the second run is fast only because of caching. For example:
try "chia plots check -n 5" and afterwards "chia plots check -n 6".
If you use the same number, the seeks are the same (and cached); if you add 1, the last proof needs fresh seeks, and the added time is the time for one proof.
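
Concretely, something like this (a sketch; absolute times depend on your plots and backend):

# cold run: 5 challenges per plot
time chia plots check -n 5
# one extra challenge; the difference vs. the cached run is roughly one cold proof
time chia plots check -n 6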

Timings for PolyCloud:

Another test, for Wasabi, also with a k25 plot:

All take more than 5 seconds per proof.

Any other feedback?
@jacobcs @MisterOutofTime

I just patched goofys with these settings and it's a lot faster:

const MAX_READAHEAD = uint32(131072 * 100)
const READAHEAD_CHUNK = uint32(131072)

Will report back, still working on it though.
It took more than 40 seconds to find a proof before the changes; a few minutes ago, in debug mode with a pile of debug prints, it was at around 9 seconds with the patch above.
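
For anyone wanting to try the same patch: the two constants live in goofys's source (in internal/file.go in the checkouts I've seen; verify in yours), and the rebuild goes roughly like this:

git clone https://github.com/kahing/goofys.git
cd goofys
# edit MAX_READAHEAD / READAHEAD_CHUNK, then rebuild and remount
go build
./goofys BUCKETNAME /mnt/plots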

@Chida It is my understanding that time chia plots check -n 5 runs 5 challenges, so you’d want to divide that time by 5 to see the time for a single challenge.

The best thing to do is upload some plots, then enable INFO logging and watch the actual lookup times.
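
A sketch, assuming the default mainnet paths:

# raise the log level, then restart the services
chia configure --set-log-level INFO
# harvester lookups show up as "Found x proofs. Time: y s" lines
tail -f ~/.chia/mainnet/log/debug.log | grep -i proofs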

@MisterOutofTime Do you know how many bytes Chia actually needs to read at each seek position? I’ve been considering patching goofys like you did, with a readahead size closer to what Chia actually uses.

@jacobcs

PolyCloud: 364 seconds / 38 proofs ≈ 9.6 seconds per proof, which is over the 5-second warning threshold.

Do you agree?

Enable the FUSE debug option and it will log it.
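
With goofys that would be the --debug_fuse flag; a sketch with placeholder names:

# -f keeps goofys in the foreground so the FUSE debug output is visible
goofys -f --debug_fuse BUCKETNAME /mnt/plots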

And goofys is still too slow, by the way.

@MisterOutofTime What cloud service are you using? If you are using PolyCloud, results are best if your server is in N. Virginia.

Currently Backblaze Europe, with 14 ms latency.

In my testing with Backblaze they were able to respond in time for most checks, but there were still periodic spikes where challenges couldn’t get done in time. That was using their US location, though, which is split between California and Arizona, so I couldn’t help wondering whether the response time depended on which location the object ended up in.

Oh yes, I have this installed… will try it out, thanks!

FYI, for those wanting to upgrade to chiapos 1.0.3: you can do it with this command (inside the chia virtualenv):

pip3 install chiapos==1.0.3

You can verify you are on the right version with:

(venv) ubuntu@chia:~/chia-blockchain$ pip3 show chiapos | grep Version
Version: 1.0.3

Also don’t forget to restart chia to make sure it runs the new code.
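
For example, assuming the services are run via the CLI:

# -r restarts the services so the new chiapos gets picked up
chia start farmer -r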

Three commands to run, right? I have the access keys written in the config.
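
If the keys are already in the AWS config, the two exports can be skipped; the equivalent ~/.aws/credentials entry looks roughly like this (placeholder values):

[default]
aws_access_key_id = SECRETID
aws_secret_access_key = SECRET

After that, only the aws s3 mv command is needed.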

It failed without --inplace :(

Trying it now with --inplace.

Try the AWS CLI; it really works best, and it retries automatically.
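
The retry behavior can also be turned up explicitly; a sketch (AWS CLI v2 settings, example values):

aws configure set default.retry_mode standard
aws configure set default.max_attempts 10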

JFI: without --inplace you also pay for the temporary file rsync creates at Wasabi…

You could also try --append-verify.
