K35 repeated plot creation failure

My computers are rock stable. I am not changing anything for what is nothing more than a curiosity-driven attempt at creating K35 plots.

I have been willing to try different Chia settings. But I am not going to tinker with my hardware settings.

You could be right. It is just not important enough for me to find out.

I have no such hardware.

Well ok then :man_shrugging:

If your 4 TB RAID is the only part you are using in both systems, then I would say it must be the troublemaker.
I don't think it's a bug in the Chia plotter, especially since it ran fine the first few times and only now causes problems.

It is not a single part.

These are two independent systems.

Each one has its own, local, dedicated 4 TB RAID 0.

Oh, OK, I misunderstood that. But anyway, I think it is more a hardware problem than a software problem. All the plotting errors I have had, and those I have read about from others, were normally hardware problems.

What an amazingly wonderful, creative idea! Send that to the dev channel and say we need a variable-size K32 plot creator, allowing only one plot like that per drive. That would be so useful for all our unfilled drives!

1 Like

You can find in my posts that I already got a reply from the Chia team about this, they’re aware of it and might implement it.

My work-around is finding the right combination of K34 + K33 + K32 plots to fill drives to as close to zero gigabytes remaining as possible.

I wish that there was a calculator for this, as I had to figure it out by trial and error, multiplying various quantities of each plot size against each drive size.

For some of my 18 TB drives, I got it down to less than 1 GB of free space, using 100% K34 plots. Although to do so, I had to move many TBs of plots around. Left to the obvious packing, I always fell short of this goal by approximately 1 GB. So I moved the largest K34 plots off of that drive, scoured all of my other drives for their smallest K34 plots, and the file transfer dancing began.

Eventually, I end up with 18 TB drives having only 100 MB or less of free space. But eventually I also run out of K34 files small enough to take the place of the larger ones.

The above aside, I am still able to be relatively space efficient by using the right combination of K34 + K33 + K32 files, with the priority on packing in as many larger K sizes as possible.
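For anyone who wants to skip the trial and error, the multiplication is easy to brute-force. Here is a minimal sketch in Python, using the commonly quoted approximate plot sizes (my assumption; measure your own files, which vary by a few hundred MiB):

# Brute-force the K32/K33/K34 mix for a given amount of free space.
# The sizes below are approximations in GiB, not exact file sizes.
K34, K33, K32 = 429.9, 208.8, 101.4

def best_mix(free_gib, favor_large=False):
    best = None
    for n34 in range(int(free_gib // K34) + 1):
        rem34 = free_gib - n34 * K34
        for n33 in range(int(rem34 // K33) + 1):
            rem33 = rem34 - n33 * K33
            n32 = int(rem33 // K32)
            left = rem33 - n32 * K32
            # Rank either by leftover space, or by big plots first.
            key = (-n34, -n33, left) if favor_large else (left, -n34)
            if best is None or key < best[1]:
                best = ((n34, n33, n32, round(left, 1)), key)
    return best[0]  # (K34 count, K33 count, K32 count, GiB left over)

# An 18 TB drive is roughly 16763 GiB before filesystem overhead.
print(best_mix(16763))                    # tightest fit
print(best_mix(16763, favor_large=True))  # as many K34s as possible first

The favor_large flag matches the preference discussed further down: pack the big plots first and accept a few extra GB of dead space.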

100% of my drives are NTFS formatted.
I have often wondered if some other file system would be more efficient, even if it means giving up journaling, which is hardly needed for drives with virtually no write activity after being filled.

Or would using different allocation units, or different numbers of tracks or sectors per track, allow for better efficiency? After all, the default format settings are for all-purpose use, whereas storing huge plot files is a specific use case.
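For what it is worth, NTFS can be formatted with a larger allocation unit, which shrinks the on-disk allocation bitmap (one bit per cluster) and should trim metadata overhead on a drive holding only a few dozen huge files. Something along the lines of (hedged; the accepted /A values depend on your Windows version):

format X: /FS:NTFS /A:64K /Q

The per-file waste stays small either way, since slack space is capped at one cluster per file.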

1 Like

With 4K blocks and the lowest possible number of inodes, an EXT4 filesystem takes about 160 MB; that is nearly the same as NTFS with 4K clusters.
With 1 MB clusters, EXT4 takes about 40 MB. I think 120 MB less won't help you much and is not worth the hassle.

Edit: these are the filesystem overheads for an 18 TB HDD.

I always format my drives like this:
sudo mke2fs -I 128 -m 0 -N 10000 -t ext4 -O ^has_journal,extent -U 00000000-0000-0000-0000-000000000[drive number] /dev/[drive]
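For anyone copying that, as I read the mke2fs flags: -I 128 picks the small 128-byte inode size, -m 0 removes the reserved-blocks percentage for root, -N 10000 caps the inode count (still plenty for a disk holding a few dozen plots), -O ^has_journal drops the journal while extent keeps extent-based allocation, and -U just stamps a predictable UUID onto each drive.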

2 Likes

Here is the problem. When you create those plots with different k sizes, you are trying to figure out how to cover most of the free space.

However, the problem is that I don't think there is a metric showing the hash / TB ratio for those different k values, or how it fluctuates across small differences among plots of a given k value. Sure, there are plenty of strong opinions, but all of those are just BS.

My take (potentially unfounded) is that the hash / TB ratio is constant, and that those higher k value plots are not any more hash-dense; they just take more space, falsely implying that we are making progress in reducing that dead space.
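For what it is worth, the consensus code seems to answer this directly. If I read chia-blockchain's pos_quality.py correctly (treat the formula as my assumption), a plot's chance of winning is weighted by expected_plot_size(k) = (2k + 1) * 2^(k - 1), which roughly tracks the file size. A quick sketch:

# Weight the Chia consensus assigns to a plot of a given k, per my
# reading of chia-blockchain's pos_quality.py (an assumption, not gospel).
def expected_plot_size(k):
    return ((2 * k) + 1) * (2 ** (k - 1))

# Commonly quoted approximate file sizes in GiB (measured values vary).
approx_gib = {32: 101.4, 33: 208.8, 34: 429.9}

base = expected_plot_size(32) / approx_gib[32]
for k in (32, 33, 34):
    density = expected_plot_size(k) / approx_gib[k] / base
    print(k, round(density, 4))  # win weight per GiB, normalized to k32

On those numbers the weight per GiB comes out flat to within about 0.2% across k32 through k34, i.e. the chance per terabyte is constant by design. Incidentally, the same formula puts one k34 at the weight of about 4.25 k32 plots, which also matches their size ratio.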

This is the point I was trying to make with my take on those partial k32 plots. I really don't give a rat's ass about k values; I do care about hash density and the energy efficiency of plot creation, and Chia is rather quiet about that part (yeah, the green company produced the least green plotter, and brags about how well it tests plotting correctness in the lab, potentially once a quarter). Instead of making this obvious, it pushes BS features (changed icons, new buttons without any meaning behind them).

So, really no offense, as that is not your fault, or even a fault at all (rather a lack of guidance from Chia), but my take is that this chase to kill all the dead space is a good example of “barking up the wrong tree.” :slight_smile: Although maybe “dead horse management” would be a better expression in this context.

And yes, NTFS is a robust file system, but that robustness comes with a size penalty. There were some excellent posts on this forum about ext / xfs formats that give a bit more space, not just by killing journaling but through a few other attributes that tune them for hosting larger files. Whether it is worth pursuing is, I guess, a personal choice that depends on how familiar the user is with Linux and with fine-tuning those things.

By the way, someone posted a calculator on this forum for that dead space vs. k values; just search for it.

Oh, and to be clear, the reason I tend to believe that the higher k plots do not offer higher (or even equal) hash density is as follows. If we take a plot (e.g., k32, but it really doesn't matter), it has three components: 1. a table describing the plot (say, plot name, how many hashes, ...); 2. tables that hold all the hash entries; 3. the hashes themselves. So, when we increase the k value, we potentially get rid of one #1 table, but we need some extra table(s) to join the #2 tables that hold the lower-k number of hashes, where the number of hashes exactly doubles. Therefore, if we see an increase in space usage, it is most likely attributable to more #2 tables being employed. Again, I didn't look at the plot layout, so I am just guessing.

Actually, the “plot check” test is a very similar thing. Chia has advertised it as a check of plot quality, but that is really completely wrong. If one checks the code, that check always uses the same seed. What that implies is that if the seed is changed (through the code, or by a command line param), the results will most likely be the opposite of the previous test, making the check basically worthless as a quality measure. (Yes, I have run such tests, and have seen plots flip from extremely lucky to extremely unlucky depending on the seed used.) Instead of making and advertising that binary as a quality check, it should be changed into a plain plot integrity check, one that could eventually scrutinize plots more deeply and provide more stats for a given plot (and thus potentially be helpful in analyzing different k value plots). As with the different k values and plot sizes, plenty of people take various positions on that check, making the water even murkier.

Those calculators focus on filling disk space, rather than making the most of the higher K plot sizes.

For example (I am making up numbers), those calculators would sooner recommend “10” K34 plots and “110” K32 plots, if that would leave you with 2 GB of free space.

I would prefer to use “25” K34 plots and “3” K32 plots, even if I end up with 4 GB of free space. Yet the latter choice is not offered by any plot calculator.

In short, those calculators are solely focused on filling plot space, with no option to favor packing in the larger K sizes.

I have one main reason for using higher K sizes:
Future proofing.

In terms of chances of winning, four K32 plots probably offer the same chances as one K34 plot (or close enough).

So I see no downside to using higher K values, and there is the upside of not being concerned about whether K32 plots will ever be phased out. Even if K32 plots are never phased out, it makes no difference, as my chances of winning with only one K34 plot are still the same as with only four K32 plots.

Yes… poor choice of words. The developers should call it an “integrity” check.
It caught my bad K35 plot issue.

And I have done that “seed” value change for checks, and the results of the so-called “quality” vary wildly.

1 Like

That was my original point. I would like to say to the plotter: this is my drive, fill it up with one plot to 100% and let me know when that is done. I really don't care about k values, just hash density (and thus reducing dead disk space), and of course the energy efficiency of generating such a plot. So, I would like to see one more stat in the UI with respect to those plots: energy efficiency. Something like: the Chia plotter will consume 100 energy units for a given k value, where MM would use 60, and BB 40. Same for k values: that a plot of a given k value is XYZ hashes / TB dense.

We have discussed that future-proofing already, so no need to beat on it anymore. In your case, seeing how much work you put into reducing that dead space, the argument makes even more sense. On the other hand, in his latest video JM mentioned that he has an over-1 PB farm, and all his plots are k32. If he is not worried about future-proofing, I have no reason to be (at least that is my take).

1 Like

A yottabyte is the largest unit approved as a standard size by the International System of Units (SI). The yottabyte is about 1 septillion bytes, or, as an integer, 1,000,000,000,000,000,000,000,000 bytes. That storage volume is equivalent to a quadrillion gigabytes (GB) or a million trillion megabytes.

Can easily store K88 Plots :joy: :joy: :joy:

2 Likes

Just use a very high number of proofs, and all plots that have no errors will converge to the same “quality”. It is just a statistics problem: if you use a low value like the default (30), you will get a “wild” result.
In the end, 4x K32 plots == 1x K34, and the quality of all of them is the same, as long as they had no plotting errors.

And by the way, this shows that it doesn't matter whether you use K32 or K34 to fill the same number of TB; at the end you have the same chances to win a block.
The ups and downs of K32 and K34 lie elsewhere :slight_smile:
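To see why a low challenge count looks “wild”, here is a minimal simulation sketch in Python. It assumes each challenge yields on average one proof, which is my understanding of how healthy plots behave; the -n flag of chia plots check sets the challenge count:

import random

def poisson1():
    # Knuth's method for one Poisson(1) draw: count uniform draws
    # until their running product falls below e^-1.
    limit, k, p = 2.718281828 ** -1, 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def check_ratio(n_challenges):
    # proofs found / challenges issued for one simulated "plot check"
    return sum(poisson1() for _ in range(n_challenges)) / n_challenges

for n in (30, 1000):
    ratios = sorted(check_ratio(n) for _ in range(1000))
    print(n, ratios[0], ratios[-1])  # spread across 1000 healthy plots

With 30 challenges the simulated ratio swings roughly from 0.5 to 1.6 across identical healthy plots, while at 1000 challenges it stays near 0.9 to 1.1, which matches the lucky/unlucky flips described above.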

That is the point, isn't it? The default value is worthless, and at a high value the result is basically the same for all plots.

Still, there are plenty of people who run that check at low values and then replot the plots that didn't score well.

That clears things up for me… I see what you are saying.

But the way plots are crafted, they require temp space during the process. So unless the formula for crafting plots changes, we cannot create one plot that fills a whole hard drive.

That is a different story. For instance, if you have enough resources to create a k=n plot and a k=n+1 plot with the same energy efficiency, and those take the same space per hash (same hash / TB), why bother with the lower n value? Why do I as a farmer need to deal with that nonsense; aren't computers good at calculating things? On the other hand, when one hits the wall and creating an n+1 plot takes 5x the energy / time, it is kind of pointless as well (either upgrade the hardware to get better efficiency for n+1 (e.g., more RAM, or NVMe), or stick with plain n values).

Also, a plot is just a container, so we can imagine a 200 GB k32 plot, or one 300 GB plot that stores one k33 and two k32 plots inside. What I am trying to say is that whether a different container is used (storing plots of different k values), or the space is broken into multiple plots, I don't really want to deal with one plot at a time, but rather one disk at a time.

1 Like

There is:
https://plot-plan.chia.foxypool.io/

1 Like

That calculator is not designed to fill space with a focus on favoring the larger K sizes.

See my reply to @Jacek, eight comments above your last comment.

1 Like