SpacePool GH C29 Difficulty

Hi, it looks like SpacePool now offers the ability to set a static difficulty. Does anyone know if that input corresponds to the suggested difficulty on the GH GitHub page? Namely, that for C29 a difficulty of 10,000 should be used? When I put in 10,000, SpacePool says that corresponds to 0 partials per day, which doesn’t seem right. Currently my setting is 8 partials per hour, which doesn’t seem all that bad in terms of computational effort? But probably I just don’t understand what difficulty means.

Thank you

How much effective space do you have?

I have 760 TiBe of C30s, and I’m running a difficulty of 5000 on Foxy Pool; I’ve had 14 partials in the last 24 hours.

The thing I’ve found with such a high diff is that you get large swings. The worst of them hit over the weekend, when my average effective capacity dropped as low as 170 TiB just 24 hours after sitting at 1.18 PiB. My 7-day and 30-day averages, though, are just above my actual size.

How did you choose 5000? I chose 500 for 200 TB of C31, but I have no idea why 50,000 is the recommended difficulty.

There’s only this, really:
"Partial difficulty is important for maximum farm size, especially for C9 / C19 / C20 and C29 to C33 (higher difficulty is better).

Solo farming roughly corresponds to a partial difficulty of 800k (800000)."
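
That snippet doesn’t give a formula, but you can sanity-check a pool’s partials estimate with the commonly cited rule of thumb that one k32 plot at difficulty 1 submits roughly 10 partials per day, so partials scale linearly with space and inversely with difficulty. A minimal sketch under that assumption (the constant is an approximation, and effective TiB already folds in compression):

```python
# Rough partials-per-day estimate. Assumes the commonly cited rule of thumb
# that a single k32 plot at difficulty 1 submits ~10 partials per day.
K32_TIB = 101.4 / 1024            # raw k32 plot size, ~0.099 TiB
PARTIALS_PER_K32_AT_DIFF_1 = 10   # approximation, not an exact constant

def partials_per_day(effective_tib: float, difficulty: float) -> float:
    k32_equivalents = effective_tib / K32_TIB
    return k32_equivalents * PARTIALS_PER_K32_AT_DIFF_1 / difficulty

print(partials_per_day(760, 5000))   # ~15.3/day; close to the 14 reported above
print(partials_per_day(50, 10_000))  # ~0.5/day; a small farm at diff 10k
                                     # can legitimately round to "0 per day"
```

So SpacePool’s “0 partials per day” is probably just this estimate rounding down for a small farm at a high diff.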

GPU usage / power consumption depends on: 1. processing plots that generate partials, and 2. processing plots that fail to deliver partials (in the log lines reading “XYZ eligible plots with ZXY partials”, XYZ is #2 and ZXY is #1).

The higher the difficulty, the less #1 processing needs to be done (this is an expensive job, but a less frequent one). Most likely less compute is also needed to process #2 (more frequent, but less expensive), though I’m not sure about that (this is most likely where Max does a better job than NoSSD, though).

Therefore, if the GPU is loaded up to the max, raising the difficulty can let you squeeze in more plots.

On the other hand, if the card is not fully loaded, setting the difficulty higher lets you lower the GPU voltage, thus reducing power consumption while running the same number of plots (e.g., for @Ronski’s 3060 Ti farm, he should be able to drop the card from 200 W down to 100-120 W).

But, as @Ronski and @drhicom stated, with those higher diffs some cider is usually needed when checking the pool daily.


Is it that we are trying to reduce the chance of getting two partials to process at once, by setting the difficulty so high that 99.9% of the time we get just a single partial?

My understanding is that on a single harvester we cannot get multiple partials (I was scanning my logs for it but never found more than one; though absence of proof is not proof of absence). Not sure about multiple harvesters; maybe it happens on those setups, so it may be a concern there. Although, a 10k diff on a 3060 Ti means a handful of partials per day and not that many eligible plots in total, so fighting multiple partials per challenge would be a really extreme case, especially on a single-harvester setup.
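
For a rough numerical feel, you can treat partial arrivals as a Poisson process over signage points (mainnet issues one every 9.375 s, i.e. 9216 per day); a quick sketch under that modeling assumption:

```python
import math

# Chance of 2+ partials landing on the same signage point, modeling partial
# arrivals as Poisson. Mainnet issues a signage point every 9.375 s,
# i.e. 9216 per day.
SP_PER_DAY = 9216

def p_two_or_more(partials_per_day: float) -> float:
    lam = partials_per_day / SP_PER_DAY       # expected partials per challenge
    return 1 - math.exp(-lam) * (1 + lam)     # P(N >= 2) for Poisson(lam)

p = p_two_or_more(15)                         # ~the 14-15/day rate from above
print(f"per challenge: {p:.2e}")              # ~1.3e-06
print(f"expected per day: {p * SP_PER_DAY:.4f}")  # ~0.012/day
```

At ~15 partials/day that works out to one double-partial challenge roughly every 80 days, which fits never having seen one in the logs.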

My concern is more about challenge processing “spillage”. When you check your GPU, the load looks like a square wave; the more saturated the GPU is, the less off time the card has. The problem comes from the randomness of eligible plots found for every challenge. What it implies is that some challenges will have more eligible plots (with no proof) than can be processed in one time slot (9.375 sec), so the processing spills over into the next time slot, basically pushing the results close to 19 secs. The more loaded the GPU is, the more often you will see two time slots overlapping, and three-slot overlaps will happen too (and so on, as this is a compounding effect). Once you start seeing 3 slots overlapping, you are in a ~27-sec processing regime, which means that if you have a partial to process during this overlap, it has a high chance of being over 27 secs (stale). Those 3+ overlaps are (to my understanding) where partials go stale.
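
To illustrate the compounding, here is a purely toy Monte Carlo; the work distribution and load levels are made-up numbers, and the 9.375 s slot is the only real constant in it:

```python
import random

# Toy Monte Carlo of the "spillage" effect: per-challenge GPU work is random,
# and any work that doesn't fit in one 9.375 s signage-point slot spills into
# the next, so a saturated card builds a backlog that compounds.
SLOT_SECS = 9.375

def deep_overlap_rate(mean_work_secs: float, challenges: int = 100_000) -> float:
    backlog = 0.0
    deep = 0
    for _ in range(challenges):
        # an exponential spread stands in for the randomness of
        # eligible-plot counts per challenge
        work = random.expovariate(1 / mean_work_secs)
        backlog = max(0.0, backlog + work - SLOT_SECS)
        if backlog > SLOT_SECS:   # results now land 2+ slots late, i.e. the
            deep += 1             # ~27 s regime where partials can go stale
    return deep / challenges

for load in (5.0, 7.0, 8.5):      # mean GPU seconds of work per challenge
    print(f"{load} s avg -> {deep_overlap_rate(load):.3%} deep overlaps")
```

The point is just the shape: the deep-overlap rate stays near zero until the average load approaches the slot length, then climbs fast.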

This goes back to what I wrote before. My guess is that #2 (processing eligible plots that don’t produce partials) dominates the GPU, but higher diffs most likely lower the per-plot processing needs (allowing more plots with less overlap).

If one doesn’t care about GPU power usage, and the card is not fully loaded, the diff level is really not that critical, as there are not that many (if any) three-challenge-long overlaps. Again, a 3-challenge overlap does not imply a 27-sec partial processing time, as the found partial has to be coming from the first challenge.

By the way, I was looking at processing times vs. the number of eligible plots and the eventual plots with proofs found. Of course, in general, the more eligible plots found, the more GPU cycles are needed, but the correlation is not really clean (sometimes a challenge with more eligible plots takes less time than one with fewer). Also, for challenges with a partial, the processing time was not linear in the number of eligible plots (meaning those partials really don’t add that much to challenge processing, implying that optimization is also happening at the eligible-plots-without-proof level).
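
If you want to eyeball that correlation on your own farm, something like the sketch below works against the stock Chia harvester log format (“N plots were eligible for farming … Found M proofs. Time: T s.”); Gigahorse’s lines may differ slightly, so treat the regex as a starting point:

```python
import re
import sys

# Scan a harvester log and bucket lookup times by eligible-plot count, to
# check the eligible-plots vs processing-time correlation discussed above.
# Matches the stock Chia harvester line; adjust the regex for GH if needed.
LINE = re.compile(
    r"(\d+) plots were eligible for farming .* "
    r"Found (\d+) proofs?\. Time: ([\d.]+) s"
)

def scan(path):
    with open(path) as f:
        for line in f:
            m = LINE.search(line)
            if m:
                yield int(m[1]), int(m[2]), float(m[3])

if __name__ == "__main__":
    rows = list(scan(sys.argv[1]))
    for n in sorted({e for e, _, _ in rows}):
        times = [t for e, _, t in rows if e == n]
        print(f"{n:>3} eligible: avg {sum(times) / len(times):.2f} s "
              f"over {len(times)} challenges")
```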

At least, that is my read about how to look at those diffs with respect to what is happening on the GPU.

So, my take is: look at what you have going on on the GPU, lower the GPU power, check back on the GPU; if there are not that many overlaps, bump up the diff, then go back to lowering the GPU power, and so on. That’s why I prefer specifying the number of partials per hour rather than the diff level (which has a different meaning for different farm sizes / card processing power).

The diff is basically a value derived from proofs per challenge (for mainnet), and not really easily human-digestible. It can be seen as a different / indirect representation of netspace size. Somehow, diff was the term initially adopted for pools, and we just got stuck with it.
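
Following that partials-per-hour preference, the earlier estimate can be inverted to turn a target rate into a diff (same caveat as before: the ~10 partials/day per k32 at difficulty 1 is an approximation, not a pool API):

```python
# Turn a partials-per-hour target into a difficulty to request from the pool,
# inverting the earlier ~10 partials/day per k32 at difficulty 1 estimate.
K32_TIB = 101.4 / 1024

def difficulty_for(effective_tib: float, partials_per_hour: float) -> int:
    partials_per_day = partials_per_hour * 24
    return round((effective_tib / K32_TIB) * 10 / partials_per_day)

print(difficulty_for(200, 8))  # ~105 for 200 TiBe at the OP's 8 partials/hour
```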

500 will basically double your GPU load, that’s why 50k is recommended. But 10k is still much better than just 500.

The reason is how GH 3.0 plots work; it’s just the nature of the beast.

If fewer partials are calculated with a higher difficulty, will that reduce the chance of winning a block? I don’t think so, but I don’t know why…

It won’t, no. Making blocks is completely separate from pooling: partial difficulty only controls how often you prove your space to the pool, while your plots are still checked against the network’s own difficulty on every challenge.