madMAx released GH 3.0 today, with C29-C33 compression.

By the way, you are running your farm with 250 diff. You have some brown (over 9 sec) lookup times, and not too many (6 over the last 24 hours) missing partial slots (the bars on the 1-day Stats chart represent 15-minute slots). Maybe you could try pushing your diff to 500 or so; that will most likely lower those high-lookup outliers, although you may want to keep monitoring the missing-proof bars and Effective Capacity. I have been running my farm at ~3.5x the calculated value for the past 24h. I will let my side run like that for another day, and tomorrow I will try pushing it to 30x or so (to see how many proofs there will be per hour).

Both our farms are close to GPU capacity (based on lookup times). So, pushing those diffs up right now, before completely switching to c3x, will give some feedback about EC / payments ahead of finally pushing diff to 20k. Basically, switching from c19 to c30 gives about 30% more proofs, so not that big a change.

What you could also possibly do is compute your avg lookup times from your harvester log lines and compare that with what you have in your pool stats. In my case, I see about a 2-sec difference, and I am not sure how to account for that (my guess is that most of it comes from the pool side, as once the harvester produces that line, it pushes the result directly to the farmer, which makes a REST call to the pool, so there is not much room elsewhere to hide that extra 2-sec delay).
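One way to get that harvester-side average is to scrape the lookup times straight out of Chia's debug.log. A minimal sketch, assuming the standard harvester line format where each lookup ends with "Time: X.XXXXX s. Total N plots" (the sample lines below are made up for illustration; in practice pass an open debug.log file object):

```python
import re
from statistics import mean

# Chia harvester lines end with e.g. "... Found 0 proofs. Time: 0.51234 s. Total 900 plots"
TIME_RE = re.compile(r"Time: ([\d.]+) s\. Total \d+ plots")

def lookup_times(lines):
    """Pull every lookup time (in seconds) out of an iterable of log lines."""
    return [float(m.group(1)) for m in map(TIME_RE.search, lines) if m]

# Hypothetical sample lines; replace with: lookup_times(open("debug.log"))
sample = [
    "2024-05-01T12:00:01 harvester chia.harvester.harvester: INFO "
    "2 plots were eligible for farming aabbcc... Found 0 proofs. Time: 0.51234 s. Total 900 plots",
    "2024-05-01T12:00:10 harvester chia.harvester.harvester: INFO "
    "1 plots were eligible for farming aabbcc... Found 1 proofs. Time: 9.20000 s. Total 900 plots",
]
times = lookup_times(sample)
print(f"lookups: {len(times)}, avg: {mean(times):.3f} s, max: {max(times):.3f} s")
```

Run it over the same window your pool reports on; any roughly constant offset left over should then be on the farmer/pool side of the pipeline.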


I had some weird issues that I’ve finally resolved (about 24 hours ago), if you look back at my farm over the last week, it all went a bit Pete Tong as they say.

First of all I thought it was related to the bug in Giga30, but when I applied Giga31 my EC carried on decreasing and various errors continued in the logs.

I’ll copy a post from Foxy Pool Discord.

All but one of my drives are mounted under C:\plot (mount points), but when I replot I also assign them drive letters (for my plot-mover software, so it can get the amount of free space etc.). This worked perfectly last time I replotted; this time it seems to be causing an issue. Chia is not aware of the drive letters - they are not listed in the config, only the mount points.

Yesterday when I rebooted I stopped plotting, so the last log only had GH running, and no plots being moved or deleted. Last night I removed the drive letters, and there have been no key errors since then. The plots with key errors all existed, but bizarrely they were all referenced by drive letter, not mount point. I've just stopped Chia, renamed the log, and restarted, so we'll see what today's log brings. My pool EC is recovering as well.

My pool EC is back where it should be, and no key errors, so it was something to do with the drives having both a mount point and a drive letter assigned - weird!

Can’t remember the exact number of plots, but I have a mixture of c19, c30 and c31. I think I’ll replot the rest of the c19s to c30; c31 is pushing it a bit with a 3060. I’ve also updated my diff to 500.


My understanding is that a 3060 should be able to support ~500 TB of physical space, so it should be no sweat for it to handle only c19 or c30 plots (the same from a GPU perspective). As such, you should also be able to run a decent number of c31 plots. If you don’t have too many c31 plots right now, maybe something is not set up right with your 3060 (as your current lookup avg is too high). I assume that those high-lookup-time outliers are due to the c3x plots (i.e., a really high diff should knock off some of those).

Although, my concern is still about those high diff levels, especially for smaller farms (e.g., like yours). What we know is what Max said, and that is the theory behind pools. On the other hand, what Felix said represents more how a particular pool does its internal calculations. Therefore, if you fully switch to c3x plots, you will kind of be forced to push the diff level to 20k, and that may decimate the number of proofs your harvester produces per day. So, slowly bumping it up right now while monitoring EC / payments may provide some feedback on where your farm should be.

By the way, here is my 3060 Ti GPU usage (nvtop; not sure what the equivalent would be on the Win side):

The widths of those processing peaks reflect the number of eligible plots (constant with respect to diff) and found proofs (inversely proportional to diff). What I have noticed on my farm is that the distribution of both eligible plots and found proofs is not that even; rather, there are clusters of both. Those lookup-time outliers happen when those clusters overlap and the needed GPU processing spills into the next slot. I had plenty of those overlaps before I pushed the diff to ~3.5x. I think it is worth watching a chart like that while switching / adding plots.


Windows Afterburner has a chart display.
I think “GPU utilization” is one of the options on it.

Where does the GTX 1080 Ti fall in the charts, with its 11 GB of VRAM?
Will it plot the new C32 format plots, or is it limited to C31?

To jack6070 - the A5000 is ROUGHLY equal to the 3090, but slightly slower clocks at a LOT less power consumption.

Yes, but as your farm gets bigger, there is less variation.

Diff 20k is like farming solo with a 40x to 50x bigger farm. Or put another way, like farming solo with 40 to 50x lower netspace.


When I said c31s are too much, I was really thinking about after the filter change; currently I have 829 c31 plots.

These are the ProofofSpace results I got.

c30

Partial Difficulty: 10000 (0.00555701 % chance)
Max Farm Size @512: 0.654555 PiB (physical)
Max Farm Size @256: 0.327278 PiB (physical)
Max Farm Size @128: 0.163639 PiB (physical)
Average time to compute quality: 0.274606 sec
Maximum time to compute full proof: 0 sec

c31

Partial Difficulty: 10000 (0.00555701 % chance)
Max Farm Size @512: 0.335287 PiB (physical)
Max Farm Size @256: 0.167644 PiB (physical)
Max Farm Size @128: 0.0838218 PiB (physical)
Average time to compute quality: 0.477824 sec
Maximum time to compute full proof: 0 sec

Currently I could just about farm all c31, but after June I would hit issues. I was thinking of picking up a 3060 Ti if I could find one cheap, but that doesn’t really solve anything; it just gives slightly more headroom.

Once the sun comes out I will start replotting more c30 plots; with the constant rain and cloud, running the plotter 24/7 depletes my battery, which means I start drawing more expensive electricity from the grid.


This is the problem that I am actually worrying about (the farm getting bigger). At this point, I am looking at three farms that each sit on just one 3060 or 3060 Ti. None of those three farmers wants to expand their farm past what they have right now, roughly what one of those GPU cards supports. I would assume that this is similar to where other farms of that size are right now.

For the past ~24 hours, two of those farms have had diff bumped to 1k (one from 200, the other from 300, making it around a 5x and a 3.5x diff jump respectively). The farm with the 5x jump (from 200 diff) used to have around 500 partials/day. With just 1k diff, the number of partials dropped to around 30/day (roughly what Felix mentioned as kind of a recommended borderline); if the diff is further bumped up to 20k, that number will drop to about 5 partials/day. Also, during those 24 hours, his EC dropped by about 15%. I understand that 24h is zilch as far as EC fluctuation goes; however, both our farms hit their lowest EC of the past 30 days during the same time (sure, that could be just a coincidence). I have asked my friend to back off a bit, and will ask him to back it off more tomorrow morning (when he wakes up).
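The partial-rate arithmetic here is just inverse scaling: partials/day scale inversely with diff, while each partial is credited proportionally more EC. A sketch, using the ~500/day at 200 diff baseline from above:

```python
def expected_partials(partials_per_day, old_diff, new_diff):
    """Expected partials/day after a diff change: the rate scales
    inversely with diff (each partial is worth proportionally more EC)."""
    return partials_per_day * old_diff / new_diff

# Baseline from the post: ~500 partials/day at diff 200.
print(expected_partials(500, 200, 1000))   # expected rate at diff 1k  -> 100.0
print(expected_partials(500, 200, 20000))  # expected rate at diff 20k -> 5.0
```

Worth noting: if the ~500/day baseline is right, simple scaling predicts ~100/day at diff 1k, so the observed ~30/day would itself already be on the low side.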

I will continue to push my farm slowly to 20k, just to see what will happen.

Both of those farms are still on mostly c19 plots and close to GPU capacity. If we switch to c30 plots, that will increase the number of partials by ~30%, so in his case to about 7/day (not enough to make a difference).

The third farm is the one that @Ronski is running. He is also sitting on a 3060 with about 250-300 TB of physical space. He has already started transitioning to c30 plots. A few hours ago, he bumped up his diff by about 3x (to 500), and so far so good.

When farming solo, there are no partials / daily payments to worry about, as the only thing that counts is full proofs. So, regardless of what the diff is, the ETW will just follow the netspace. My understanding of how pools should work is exactly like yours: sometimes fewer proofs, sometimes more, and over a longer timescale it evens out. However, right now my feeling is that when Chia put together the sample pool code, such high diffs were not a thing, and pool diffs were meant to be low, just to smooth daily payments out. Potentially, there is a dropped-proof-count penalty in the code that will penalize smaller farms.

We only started playing with those diffs a couple of days ago, so the data is basically worthless. As mentioned, I don’t mind if my EC / payments drop a bit for the next few days while I push my diff up to collect a bit more data. Although, I would like to see more data that could give us a clue whether the pool side is working properly, or whether there is some old Chia code that may need to be tweaked before jumping to c3x plots / 20k diffs.

For clarity I have 323 TiB raw.

Is there a way from the cmd line to see how many of each plot I have? Usually I have the GUI open, but not at the moment.


Don’t know, as I also only used GUI to check on that.

By the way, if you want to keep your current plots (with the current raw HD space), you could look into adding an Nvidia P102-100 next to your 3060. Those cards run around $60 or so and are roughly equivalent to a 1080 Ti (good for farming, worthless for plotting - too narrow a bus). Maybe that would give you enough GPU cycles to keep your farm as is with little extra expense (sure, the power draw will jump by 100-150 W).


EC != payouts. Yes, EC will vary more when using high diff, but if there is any issue it will show up as lower payouts, not as lower EC. Average EC over a long time should be exactly the same, no matter the diff.

For example, let’s say you found a partial at 200k diff, but the pool didn’t win any block that day. Your EC will show something greater than zero, but your payout will be zero.

But even in this case, I think it will average out over time. Since getting 200 partials at 1k diff is the same as a single partial at 200k diff.
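That averaging intuition is easy to check with a toy simulation. A sketch, assuming partial arrivals are Poisson and each partial is credited `diff` points, so both configurations below earn the same expected points per day (the 200k points/day figure is just an illustrative target):

```python
import math
import random
from statistics import mean, pstdev

def poisson(lam, rng):
    """Knuth's algorithm for one Poisson-distributed draw with mean lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def daily_points(diff, expected_points=200_000, days=2000, seed=42):
    """Simulate points/day: partials ~ Poisson(expected_points / diff),
    each partial worth `diff` points."""
    rng = random.Random(seed)
    lam = expected_points / diff
    return [poisson(lam, rng) * diff for _ in range(days)]

for diff in (1_000, 200_000):
    pts = daily_points(diff)
    print(f"diff {diff:>7}: mean {mean(pts):>9.0f}/day, stdev {pstdev(pts):>9.0f}")
```

Both means land near 200k points/day, but the day-to-day swing at diff 200k is an order of magnitude larger: same long-run average, much noisier EC.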


We are on the same page about how it should work.

Although, when I switched to Foxy Pool about a month ago, it took about 3 days for daily payments to catch up. Based on what you said, after 24 hours payments should be at the full level. So, at least there is some pool code that throttles those payments during the initial switch (if I recall right). If the expected partials at 20k diff are in the range of 5/day, it could happen that you see a day+ without any partials, and depending on what triggers that initial throttling code, it could be hit during those times (causing a slow ramp-up / lower payments). Although maybe there is a slow ramp-down period that would cancel it out.
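How likely a blank day actually is can be put into numbers: with Poisson arrivals, the chance of zero partials over a window is just e^(-expected count). A quick check (the 5/day rate is from above; the lower what-if rate is illustrative):

```python
import math

def p_zero_partials(partials_per_day, days=1.0):
    """Probability of seeing no partials over `days`, assuming Poisson arrivals."""
    return math.exp(-partials_per_day * days)

print(f"{p_zero_partials(5):.4f}")    # at ~5/day a fully blank day is rare (~0.7%)
print(f"{p_zero_partials(1.5):.4f}")  # at ~1.5/day, blank days happen >20% of the time
```

So a full day of silence only becomes commonplace if the real rate drops well below the expected 5/day.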

That farm had around 30 partials/day at 1k diff; so the expected partials at 20k diff would be around 5/day. I think the farm size is around 600 TB raw, sitting on a 3060.

Again, I am not saying that we have a problem yet, just that at least 2-3 of us have started experimenting with diff levels and are collecting data for now. None of us wants to expand our farms. So, if there is a potential problem, it could manifest on smaller farms that are pushing their GPUs close to the limits.

For now, no need to worry about it; this is just heads-up info. Maybe some other folks with smaller farms are already mostly plotted with c3x plots and will chime in.


Shortly after upping my diff from 20 to 5000:

image

EDIT: lol the last partial was actually a block:

image

That should not be set up this way… it will cause issues. You have P:\ and D:\ as temp, and then you have those same drives as destinations for plots. As they fill up and are written to, using them as temp will not only make it very difficult for those drives to do double duty, but they will likely run out of space eventually. If they are all SSDs, they will do better as temps, but they will still get filled, and everything will stop at that point.

Best is separate SSDs as temps, and other drives for plot storage.


You realize that as difficulty goes up on a pool, you approach basically what a solo farmer does. As the pool has less and less knowledge (touch) of your plots over time, it gets harder and harder for it to ‘guess’ your farm size, due to the variability of lookups over time.

This is a real problem for pools in general, and it can’t be “fixed” without increasing the farmer load (more lookups by having a smaller diff) or decreasing the compression to accomplish the same directly.

Please help me set up plotting on two 12 GB GPU 3060s. I use the -r 2 option as well as -M 64, otherwise an out-of-memory error occurs. But at the same time, for some reason, plotting is very slow and processor resources are heavily used, although in single-GPU mode the CPU is almost idle (see screenshot). I have 256 GB of RAM.

So why don’t you just make 5 plots to one drive to see if it works, then expand:

cuda_plot_k32_v3-2 -n 1 -x 8444 -M 128 -C 30 -t g:\nvme\ -d h:\C30\ -f fffff -c ccccc

Deleted, as @drhicom keeps doing ninja edits :stuck_out_tongue_closed_eyes:

Change -M 64 to -M 128 and try that.

I tried -M 128, but the `out of memory` error still occurred.