K33/K34 Plot Times ⏲

Common sense would say yes, and I don’t know why the devs would make it so much larger (~6.1GB more than 2x k32s); what exactly it’s filled with, I have no idea. Seems a bit wasteful of disk space if you ask me.

1 Like

In that case, mixing those different kXY plots to use more HD space is just a waste of time, isn’t it? The end result is more disk space used for the same number of hashes.

Using k32 as a hash unit, disk utilization should be judged by how many k32 hash units a drive holds, not by how much disk space was consumed.

The point of higher k-values is not really to store more hashes, but rather to thwart short-range replotting attacks. If those plots were just straightforward hash storage, we would not need to deal with those higher k-values. So my take is that this is where that extra overhead comes from.

Not entirely. I used to believe this was the case, but there are actually a constant value plus 2^k entries in the tables, so the extra space should be due to more entries and therefore more proofs. It could also be explained by fewer entries being trimmed out, since there are more overall entries to match against in the backpropagation phase. So not really less density, just different. This also means that if a k33 doesn’t have a proof, it REALLY doesn’t have a proof, whereas two k32s that are similar enough to pass the filter may have a slightly higher chance of one of the two having a proof. It’s all pretty convoluted when dealing with probabilities and statistics over such huge numbers and hashes.
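To put rough numbers on that, here is a quick sketch (the plot sizes below are just the approximate figures commonly quoted for uncompressed plots, not something pulled from the plotter source):

```python
# Rough sketch only: each of a plot's 7 tables holds on the order of 2^k
# entries, and the per-entry cost grows slowly with k, so a k33 comes out a
# bit larger than two k32s rather than exactly double. The sizes below are
# approximate figures for uncompressed plots (assumptions, not plotter output).

APPROX_PLOT_GIB = {32: 101.4, 33: 208.8, 34: 429.8}

for k, gib in APPROX_PLOT_GIB.items():
    print(f"k{k}: ~2^{k} = {2**k:,} entries per table, "
          f"~{gib} GiB, {gib / APPROX_PLOT_GIB[32]:.2f}x a k32")

overhead = APPROX_PLOT_GIB[33] - 2 * APPROX_PLOT_GIB[32]
print(f"one k33 vs two k32s: ~{overhead:.1f} GiB extra")
```

That ~6 GiB of extra space is the same overhead mentioned at the top of the thread.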

I believe generally the smallest k-size allowed on the network will give you the biggest chance of having a proof.

1 Like

Take it for what it’s worth, but on Keybase last week, support mentioned in answer to some question that a k33 has a slightly better chance of winning. I immediately said, "(!!!) So k33s are luckier?"

The reply was “Not so, they are the same on a plotted-space metric” (not how he actually phrased it, but close).

So the slightly better chance of winning with a k33 is offset by the slightly bigger plotted disk space. That’s how I understand it.

2 Likes

It was phase two which used to take much longer when the system was on a go-slow. The other thing I’ve just noticed is that since doing your mods my transfer speed has increased, going from 60MB/s to between 100 and 115MB/s.

What is a replotting attack (or a short-range replotting attack)?

Here you go:

Although, I am not really sure whether the prevention needs to be symmetric (i.e., whether both plot creation and proof retrieval need to get harder, or maybe just plot creation, in which case it is irrelevant here).

Although, if that is not the case (proof retrieval doesn’t need to change), then we can assume that a plot is just a “dumb” container of hashes, so to store 2x the hashes you just need 2x the space.

I really don’t know much about it, so that is just my thinking out loud.

1 Like

Lots of stuff there that I did not know about.

1 Like

The basic concept is that if you can complete phase 1 fast enough for the plot filter, you don’t need to do the rest of the plot or store it; you would effectively always have a plot passing the filter. With the 1/512 filter, that would be statistically equivalent to having 512 plots (or ~50 TB of k32). Of course, all farmers know passing the filter is still a very far cry from winning a block. The replotting “attack” is really just a way to simulate netspace with continuous hashing, which is basically just proof of work, and it is only viable when the cost of completing phase one fast enough is cheaper than 50 TB of storage. In the extremely unlikely event that this actually does happen in the near future, the Chia team can just change the plot filter to 1/256, and the “simulated space” is then only worth 25 TB.
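For anyone checking those figures, the arithmetic is just the filter rate times an (assumed) approximate k32 plot size:

```python
# Back-of-envelope for the "simulated space" of a replot-on-demand attacker:
# always passing a 1/F filter is statistically like holding F plots.
# ~101.4 GiB per k32 is an assumed approximate size, not a measured one.

K32_GIB = 101.4

for plot_filter in (512, 256):
    equivalent_tib = plot_filter * K32_GIB / 1024
    print(f"1/{plot_filter} filter: worth ~{plot_filter} k32 plots "
          f"~= {equivalent_tib:.1f} TiB of real plots")
```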

2 Likes

My understanding is that this is not just about passing the filter, but rather generating a full plot that is eligible for looking up proofs for that filter. Otherwise, it would just be inflating the netspace figure, which is not really an attack (it would not lead to any wins).

Although, maybe I missed something with that explanation. I have no blockchain background, and cannot say that I really do understand that section.

Yes, it has all the table entries, so it has all the proofs a normal plot would have, but once the next filter comes that’s all discarded and the next plot is generated. Just because a plot passes the filter doesn’t mean it has the proof for the given challenge, so it’s not a very worthwhile method, especially considering that much compute will cost a lot more in electricity than the equivalent space. Short of a very major hardware change, we’re pretty far from retiring k32, and personally I believe k33 wouldn’t be very far behind and might get skipped when k32 does get retired.

1 Like

I now have 512GB of 2400MHz DDR4 with all channels/slots populated. K32 is down to 18.6 minutes, and K33 is now down to 57 minutes. I need more SSD space before I can try a K34.

It would be interesting to try Bladebit for K32s.

Edit: 2x K32 in parallel, started at the same time, takes 27.5 minutes, so an average of just under 14 minutes per plot, or 104.7 plots per day.
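Just plugging the quoted numbers in:

```python
# Two k32 plots finishing every 27.5 minutes, run around the clock.
plots_per_batch = 2
batch_minutes = 27.5

minutes_per_plot = batch_minutes / plots_per_batch          # 13.75
plots_per_day = plots_per_batch * 24 * 60 / batch_minutes   # ~104.7
print(f"{minutes_per_plot:.2f} min/plot, {plots_per_day:.1f} plots/day")
```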

K34 in 128 minutes on an M1 Ultra using 2x Samsung 980 Pros via Thunderbolt (RAID0).

2 Likes

What in the heck. Why would ANYONE plot k33 right now… or for the next 10 years?

  • 2 k32 plots have a higher probability of winning vs 1 k33 or k34

It’s not like k33 and k34 are more likely to win than a k32.

Say you have 1 k32, 1 k33, and 1 k34… they all stand an equal chance to win, but the latter two use far more space…

So why would you not wait as long as possible to plot the larger plot sizes…

I understand doing a couple for fun, but c’mon… that’s it.

Incorrect. The odds of winning are relative to the space occupied by the plot. Relative to k32s, k33s & k34s take up slightly more than 2x & 4x the space, and so have a slightly better chance of winning. In other words, if a k32 has a chance of 1, then a k33 has a chance of 2 plus a bit, and a k34 has a chance of 4 plus a bit more still.

And why do k33 & k34? First, of course, because we can. Second, with a careful mixing of k sizes, more of a drive can be occupied with plots, and a more fully plotted drive has a slightly better chance to win blocks (see the sketch below). Drives are expensive, so why not use them to their fullest potential? Third, future-proofing, should that time arrive when k32s are no longer sufficient to stop various attacks on the proofing of plots.

Lastly, there are potentially other benefits to the farmer during the processing and proofing of these higher-k plots due to their different construction vs. k32s.
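A small sketch of the first two points, using approximate uncompressed plot sizes as assumptions: win chance scales with the space a plot occupies, and mixing k sizes can shrink the unplottable slack on a drive.

```python
# (1) win chance scales with occupied space, so per-terabyte nothing changes;
# (2) mixing k sizes can leave less unplottable slack on a drive.
# Plot sizes below are assumed approximate values, not measured ones.

APPROX_PLOT_GIB = {32: 101.4, 33: 208.8, 34: 429.8}

# (1) Chance relative to a single k32 is just the size ratio.
for k, gib in APPROX_PLOT_GIB.items():
    ratio = gib / APPROX_PLOT_GIB[32]
    print(f"k{k}: {ratio:.2f}x the chance of one k32, in {ratio:.2f}x the space")

# (2) Filling an 18 TB drive (raw capacity, ignoring filesystem overhead).
drive_gib = 18e12 / 2**30

k32_only = int(drive_gib // APPROX_PLOT_GIB[32])
idle_k32_only = drive_gib - k32_only * APPROX_PLOT_GIB[32]

# Try every possible count of k33s, fill the rest with k32s, keep the tightest fit.
idle_mixed, n33 = min(
    ((drive_gib - n * APPROX_PLOT_GIB[33]) % APPROX_PLOT_GIB[32], n)
    for n in range(int(drive_gib // APPROX_PLOT_GIB[33]) + 1)
)
n32 = int((drive_gib - n33 * APPROX_PLOT_GIB[33]) // APPROX_PLOT_GIB[32])

print(f"k32 only: {k32_only} plots, {idle_k32_only:.1f} GiB left idle")
print(f"mixed:    {n32} k32 + {n33} k33, {idle_mixed:.1f} GiB left idle")
```

Real plots vary slightly in size from one to the next, so in practice you would leave a bit more headroom than an exact fit like this suggests.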

1 Like

Learn something new every day. I thank you.

Quick update. I’ve had some issues with k34s: crashes, unreasonably long process times, etc. But that’s a story for another day.

But concentrating on k33s - now that’s going much, much better… I’ve produced a few hundred so far @ ~59 minutes/plot, running two concurrent MM instances on my TR 16-core system.

I’ve tested many scenarios, and found that giving both 32 threads (no affinity) with -t 512 -2 256, using three consumer 1TB SSDs, makes the best combo and uses the resources I have available best.

So: separate temp (t1) SSDs, with a common t2 SSD, staggered after phase 1 - it works well and can be left running for days without the two losing their sync with each other, phase-wise. Must be some natural resonance that keeps everything in sync… just happy it’s a set-and-forget process now :grinning:
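For anyone wanting to script a similar layout, here is a minimal sketch; all paths, keys, the stagger time, and the k33-capable binary name are placeholders/assumptions, and the flag names are MadMax’s as I remember them, so check the plotter’s --help on your build before relying on this.

```python
# A minimal sketch of the setup described above: two MadMax instances with
# separate temp-1 SSDs, a shared temp-2 SSD, and a stagger so they stay out
# of phase with each other.
import subprocess
import time

PLOTTER = "./chia_plot_k34"       # k33/k34-capable MadMax binary (assumed name)
FINAL_DIR = "/mnt/farm/"          # destination drive (placeholder)
SHARED_T2 = "/mnt/ssd_t2/"        # common temp-2 SSD (placeholder)
FARMER_KEY = "<your farmer public key>"
CONTRACT = "<your pool contract address>"
STAGGER_SECONDS = 30 * 60         # rough offset so phase 1 runs don't collide

def launch(t1_dir: str) -> subprocess.Popen:
    # -k plot size, -n -1 loop forever, -r threads, -t/-2 temp dirs, -d destination
    cmd = [PLOTTER, "-k", "33", "-n", "-1", "-r", "16",
           "-t", t1_dir, "-2", SHARED_T2, "-d", FINAL_DIR,
           "-f", FARMER_KEY, "-c", CONTRACT]
    return subprocess.Popen(cmd)

first = launch("/mnt/ssd_t1a/")   # temp-1 SSD for instance 1 (placeholder)
time.sleep(STAGGER_SECONDS)
second = launch("/mnt/ssd_t1b/")  # temp-1 SSD for instance 2 (placeholder)

first.wait()
second.wait()
```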

The real work will be meshing them in with existing k32s to fully utilize 1/2 PB of already-plotted k32 disks in 12, 14, 16, and 18 TB sizes!

I hear Bladebit ‘disk’ is on the horizon… :dizzy: can’t wait!

I don’t envy you; it was bad enough getting just over 200TB of disks full by using a mixture of existing K32s and freshly plotted K33s.

I keep searching for info on Bladebit ‘disk’ but haven’t managed to find anything; I also can’t wait.

It is currently being tested, and people on the Keybase chia plotting channel are trying it (you can follow how it’s going there). It’s not ready for prime time just yet, but hopefully soon.

1 Like

AMD 5700G@4400, FCLK=2000, 64GB@4000/18, 2TB GEN3 NVMe
K34 r=16 → 192 - 202 min

AMD 5950X@4300, FCLK=2000, 64GB@4000/18, 2TB GEN4 NVMe
K34 r=23 → 153 - 160 min