Plot *re-packs* Compression

It already does this. Each plot has a 1 in 512 chance of passing the plot filter. When a plot passes that 1 in 512 check, it is then (partially at least) read to see if there are proofs within it. As this sequence of reads takes place, more data is read until either a proof is found or it isn't. This whole process happens every 9-10 seconds (at each and every challenge, assuming your full node is in sync and your connection to the blockchain is good). If you add heavy compression using fancy algorithms to this process, there would not be enough time between challenges to decompress plots without a massive amount of work. So pointless.
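To make that timing constraint concrete, here is a rough back-of-envelope sketch (not Chia's actual harvester code). It only assumes the publicly documented 1-in-512 plot filter and a challenge arriving roughly every 9-10 seconds, as described above; the 9.4 s constant and the function names are my own for illustration:

```python
# Back-of-envelope: how many plots must be read at each challenge,
# and how much time each eligible plot has before the next challenge.
# Constants are assumptions taken from the discussion above.

PLOT_FILTER = 512           # 1-in-512 chance a plot passes the filter
CHALLENGE_INTERVAL_S = 9.4  # approximate seconds between challenges

def plots_passing_filter(total_plots: int) -> float:
    """Expected number of plots that pass the filter per challenge."""
    return total_plots / PLOT_FILTER

def time_budget_per_plot(total_plots: int) -> float:
    """Seconds available per eligible plot before the next challenge."""
    eligible = max(plots_passing_filter(total_plots), 1.0)
    return CHALLENGE_INTERVAL_S / eligible

if __name__ == "__main__":
    for farm_size in (100, 1_000, 10_000):
        print(f"{farm_size:>6} plots -> "
              f"~{plots_passing_filter(farm_size):.1f} eligible per challenge, "
              f"~{time_budget_per_plot(farm_size):.2f} s budget each")
```

The point being: on a large farm, dozens of plots pass the filter at every challenge, so any decompression step has to fit inside a budget of a second or two per plot, every 9-10 seconds, forever.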

Then there's the caching question: if you were able to decompress plots on the fly fast enough, where are you going to store the decompressed version (or parts of it)? RAM? Temporary fast disk space (SSD/NVMe)?

On the subject of compression, I think a 10% reduction in plot size footprint will be good going… The plotting process already squeezes a large amount of non-repeating data down into a file of roughly 101.4 GiB. Making that footprint smaller while maintaining performance will not be easy without large amounts of real-time work. We don't want to go down the rabbit hole of 'proof of space and time and work' with Chia…
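For a sense of scale, here is some illustrative arithmetic (the 101.4 GiB figure is the approximate size of a standard k32 plot; the 10% reduction is just the hypothetical figure discussed above):

```python
# Illustrative only: how many more plots fit on 10 TiB of disk
# if the plot footprint shrinks by the hypothetical 10%.

K32_PLOT_GIB = 101.4   # approximate size of a standard k32 plot
REDUCTION = 0.10       # hypothetical 10% footprint reduction

plots_now = (10 * 1024) / K32_PLOT_GIB
plots_reduced = (10 * 1024) / (K32_PLOT_GIB * (1 - REDUCTION))

print(f"Plots per 10 TiB today:   {plots_now:.1f}")
print(f"Plots per 10 TiB at -10%: {plots_reduced:.1f}")
```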

So the '10% compression', in my view, should not be considered compression in the traditional sense at all, but rather further optimisation of the current plotting algorithm so that a file uses a slightly smaller footprint while containing the same (or a similar) number of potential proofs.