Here is the problem. When you create plots with different k sizes, you are trying to figure out how to cover as much of the free space as possible.
However, the problem is that I don't think there is a metric that shows the hash / TB ratio for those different k values, or how it fluctuates between plots of the same k value. Sure, there are plenty of strong opinions, but all of those are just BS.
My take (potentially unfounded) is that hash / TB is constant, and those higher k value plots are just less hash-dense, thus take more space, falsely implying that we are making progress in reducing that dead space.
This is the point I was trying to make with my take on those partial k32 plots. I really don't give a rat's ass about k values; I do care about hash density and plot creation energy efficiency, and Chia is rather quiet about that part (yeah, the green company produced the least green plotter and brags about how good it is in the lab for testing plotting correctness, potentially once a quarter). Instead of making those metrics obvious, it keeps pushing BS features (changed icons, new buttons without any meaning behind them).
So, really no offense, as that is not your fault, or even a fault at all (rather a lack of guidance from Chia), but my take is that this chase to kill all the dead space is a good example of "barking up the wrong tree." Although maybe "dead horse management" would be a better expression in this context.
And yes, NTFS is a robust file system, but that robustness comes with a size penalty. There were some excellent posts on this forum about ext / xfs formats that give a bit more space, and not just by killing journaling, but through a few other attributes that fine-tune them to host larger files. Whether it is worth pursuing is, I guess, a personal choice, depending on how familiar the user is with Linux and fine-tuning those things.
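For what it's worth, the kind of tuning those posts described looks roughly like this. This is a sketch, not a recommendation: `/dev/sdX` is a placeholder, the commands are destructive, and `-T largefile4` depends on your distro's `mke2fs.conf`, so double-check before running anything.

```shell
# ext4 tuned for a write-once plot drive (WIPES /dev/sdX!):
#   -m 0            no reserved root blocks (the default 5% is wasted here)
#   -T largefile4   far fewer inodes, sized for multi-MiB-and-up files
#   -O ^has_journal drop the journal; acceptable for static plot storage
mkfs.ext4 -m 0 -T largefile4 -O ^has_journal /dev/sdX

# xfs defaults are already large-file friendly:
mkfs.xfs /dev/sdX

# noatime avoids metadata writes on every plot read during farming
mount -o noatime /dev/sdX /mnt/plots
```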
By the way, someone posted a calculator on this forum for that dead space vs. k values trade-off; just search for it.
Oh, and to be clear, the reason I tend to believe that higher k plots do not offer a higher (or even equal) hash rate is as follows. If we take a plot (e.g., k32, but it really doesn't matter), it has three components: 1. a table describing that plot (say, plot name, how many hashes, …), 2. tables that hold all the hash entries, 3. the hashes themselves. So, when we increase the k value, we potentially get rid of one #1 table, but we need some extra table(s) to join the #2 tables that hold the lower-k number of hashes, while the number of hashes exactly doubles. Therefore, if we see a more-than-double increase in space usage, it is most likely attributable to more #2 tables being employed. Again, I didn't look at the plot layout, so I am just guessing.
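To put a rough number on that, here is a back-of-the-envelope sketch. The plot sizes are the commonly cited nominal GiB figures for k32–k35, and "one plot holds about 2**k entries" is my assumption, not something I pulled from the plot format spec, so treat the output as illustrative only.

```python
# Back-of-the-envelope: hash entries per TiB for different k values.
# Assumptions (mine): a plot holds roughly 2**k final entries, and the
# nominal sizes below are the commonly cited GiB figures per k value.
NOMINAL_GIB = {32: 101.4, 33: 208.8, 34: 429.8, 35: 884.1}

def entries_per_tib(k: int) -> float:
    """Entries per TiB of disk, under the 2**k-entries-per-plot assumption."""
    size_tib = NOMINAL_GIB[k] / 1024.0
    return (2 ** k) / size_tib

for k in sorted(NOMINAL_GIB):
    print(f"k{k}: {entries_per_tib(k):.3e} entries/TiB")
```

Under these assumptions the entries/TiB figure slowly drops as k grows (each k step doubles the entries but slightly more than doubles the size), which is consistent with the "higher k plots are less hash-dense" take above.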
Actually, the "plot check" test is a very similar thing. Chia has advertised it as a "check" for plot quality, but that is completely wrong. If one checks the code, that check always uses the same seed for all runs. The implication is that if that seed is changed (through the code, or by a command line param), the results may well come out opposite to the previous test, making the check basically worthless as a quality measure. (Yes, I have run such tests, and have seen plots flip from extremely lucky to extremely unlucky depending on the seed used.) Instead of making and advertising that binary as a quality check, it should be repositioned as just a plot integrity check that could eventually scrutinize plots more thoroughly and provide more stats for a given plot (and thus potentially help in analyzing different k value plots). As with the different k values and plot sizes, plenty of people take various positions on that check, making the water even murkier.
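A toy simulation shows why a fixed seed makes that kind of check misleading. This is not Chia's actual proof lookup, just a stand-in using SHA-256; the only point it illustrates is that a plot's apparent "luck" over a small challenge set depends entirely on which seed generated the challenges.

```python
import hashlib

def hits(plot_id: str, seed: str, n_challenges: int = 100) -> int:
    """Toy stand-in for a plot-quality check: count challenges this plot
    'wins'. A challenge is won when the first byte of
    SHA256(plot_id:challenge) falls below a threshold (~31% hit rate).
    The real check differs; only the seed-dependence matters here."""
    wins = 0
    for i in range(n_challenges):
        # The seed fully determines the challenge set...
        challenge = hashlib.sha256(f"{seed}:{i}".encode()).hexdigest()
        # ...and thus fully determines the plot's apparent luck.
        digest = hashlib.sha256(f"{plot_id}:{challenge}".encode()).digest()
        wins += digest[0] < 80  # arbitrary cutoff: ~80/256 expected rate
    return wins

# The same plot scores differently depending only on the seed, so a
# single hard-coded seed says little about plot quality.
for seed in ("seed-A", "seed-B", "seed-C"):
    print(seed, hits("plot-0001", seed))
```

Running this with a handful of seeds scatters the score around the expected rate; a check that hard-codes one seed is just sampling one point of that scatter.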