It was me who told Chris about that lol, so cut him some slack, he’s not a dev.
Of course you can dynamically create the JSON files, but who does that? Usually you don’t give file extensions to dynamically generated query content.
It just makes it seem like you’re manually editing the files via notepad…
Regarding the “plots are not random data” claim: they are essentially generated by hash functions, so the output is statistically indistinguishable from random data. I don’t see how they are not random. The only way to “compress” them is with clever encoding schemes, like Chia already did. But finding patterns in the data and compressing based on those? No way…
The current plot format is a bit inefficient because of the way it’s structured to make lookups fast; that’s the few % gain Brahm is talking about. Essentially there are unused areas in the plots because the contents are variable-sized, but to keep lookups fast they are stored in worst-case, fixed-size chunks.
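To illustrate the fixed-size-slot overhead, here’s a quick back-of-the-envelope sketch. The entry sizes are made up for illustration and are not the real plot format parameters; the point is just that worst-case slots waste the average slack per slot:

```python
import random

random.seed(1)

# Hypothetical variable-length entries of 30-32 bytes each.
# These sizes are illustrative, NOT Chia's actual plot layout.
entries = [random.randint(30, 32) for _ in range(100_000)]

slot = max(entries)              # worst-case fixed slot size
used = sum(entries)              # bytes actually needed
allocated = slot * len(entries)  # bytes actually stored on disk

waste = 1 - used / allocated
print(f"wasted space: {waste:.1%}")  # on the order of a few percent
```

With a tighter size spread the waste shrinks, which is why the recoverable overhead is only a few percent rather than 20%.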
Given that your plotter offers different levels of compression, it implies you are probably compressing larger chunks of data at the higher levels. But again, this seems implausible given the random nature of the data.
Regarding the statement “a plot is created from a single plot id”: try to compress the output of sequentially running blake3(n), where n is an integer counting up from 0. By that logic the data “essentially comes from” the value zero, so it must be easy to compress? It isn’t — a deterministic origin doesn’t make the output compressible.
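You can run that experiment yourself in a few lines. A minimal sketch, using Python’s stdlib blake2b as a stand-in for blake3 (which isn’t in the standard library): hash a counter, concatenate the digests, and throw zlib at the result.

```python
import hashlib
import zlib

# Generate "plot-like" data: digests of sequential integers.
# blake2b stands in for blake3, which is not in the stdlib.
data = b"".join(
    hashlib.blake2b(n.to_bytes(8, "little")).digest()
    for n in range(10_000)
)

compressed = zlib.compress(data, level=9)
ratio = len(compressed) / len(data)
print(f"original:   {len(data)} bytes")
print(f"compressed: {len(compressed)} bytes ({ratio:.3f}x)")
# Even though every byte is derived from a simple counter,
# zlib finds essentially nothing to exploit: the output is
# about the same size as the input, or slightly larger.
```

A general-purpose compressor sees only the hash output, not the counter that produced it, and hash output has no exploitable statistical structure.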
To get a 20% reduction you’d have to radically change something, including the whole plotting process and the plot format. But in that case you wouldn’t have different levels of compression, just a single fixed gain or none. I never really looked into compressing the plots while developing my plotter; I figured it would be a waste of time.
In any case, if this is real, @Dawson is a genius for sure.