NoSSD Chia Pool update! GPU mining, 50.3GiB plots, >200% reward, new CPU & GPU plotters

You don’t have a phone?

NFS mount the drive to the machine(s) with the GPU.
Or Samba, if you're using (shudder!) Windows, should work.

I’ve been “remote computing” for MANY MONTHS on NoSSD

I have a phone.
It is a LANDLINE, not a cell phone.
A real Western Electric Bell TouchTone™ telephone (via a box that does Voice over IP and nothing else).

Not having cellular is NOT the same as not having a phone. I get tired of people who ASSUME that lie.

So I assume your computer is not running Windows, or you'd have the Telegram app installed on that. Or is the Telegram app itself what you don't want near you?

I run LINUX.
I dunno why I have to type “at least 20 characters” in one of these messages on this forum.

I've said this to you before. Repeatedly. And you are still not getting it.

Your farm and my farm are different sizes.

I have significantly more HDDs than you. I'm not flexing; I'm trying (for the fourth time) to explain that remote compute doesn't work for me because of ISP bandwidth caps and 100k plots (it would be 200k if the new version comes out).

Watch

100k plots = 200,000 MB/day = 6,000,000 MB/month = ~6 TB/month
My ISP only sells 1 TB/month plans.

Local network bandwidth would be no issue until about 1M plots, and then you'd just need 10 GbE NICs.
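To sanity-check the arithmetic above, here is a quick back-of-the-envelope in Python. The ~2 MB/day per plot wire-traffic figure is my assumption, inferred from the 100k-plot / 200,000 MB/day numbers quoted:

```python
# Back-of-the-envelope check of the harvester bandwidth math above.
# Assumption: ~2 MB/day of wire traffic per plot (implied by the quoted totals).
plots = 100_000
mb_per_plot_per_day = 2          # assumed per-plot daily traffic
days_per_month = 30

mb_per_day = plots * mb_per_plot_per_day      # 200,000 MB/day
mb_per_month = mb_per_day * days_per_month    # 6,000,000 MB/month
tb_per_month = mb_per_month / 1_000_000       # ~6 TB/month

isp_cap_tb = 1
print(f"Monthly traffic: ~{tb_per_month:.0f} TB vs ISP cap of {isp_cap_tb} TB")
```

Whatever the exact per-plot figure is, a 1 TB/month cap is nowhere near ~6 TB/month of harvester traffic.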

I am trying to remote compute across the Atlantic, where my Plex server is in Europe. European datacenters have much higher power prices than my home (by a factor of about 6).

To the best of MY knowledge, you have NEVER explained your situation to me before.

Why are you hosting your HDD servers and your “compute” servers in different places?
And if European hosting is so expen$ive, why are you hosting there at all?

I'm also pretty sure "remote compute" is going to have the SAME bandwidth limitation, since, to the best of my knowledge, it's ALSO going to need access to the entire plot to DO the computation on it.

 tailscale0:
       2024-06    158.14 GiB  /   64.95 GiB  /  223.09 GiB  /  422.14 GiB
     yesterday     46.04 GiB  /   18.93 GiB  /   64.97 GiB
         today     38.57 GiB  /   15.88 GiB  /   54.45 GiB  /   63.75 GiB

This is the same machine using remote compute from gigachad.

To be fair, he has posted several times on here about his situation, although I forget the exact details, as I suspect most people will.

We don't all remember that he lives in the USA (I think), weirdly has his storage in Europe and his compute server at home, and, in this day and age, has an ISP that caps his total monthly data, which is not common, certainly not in the UK.

You really can’t expect people to remember that from months ago.

He did post and explain the data he has seen on the wire a few times already.

Also, assuming that remote compute would be getting the same amount of data as the harvester just shows ignorance about how remote compute works.

It's there …
It came out on 13.06.
v3.0 beta

Here are benchmarks from 3.0 beta on Linux that I captured for a couple of GPUs that I have. This may be helpful for those deciding what level of compression to use.

4070 Super
Results are shown for plot filter 512 and for all mining GPUs combined
This PC is capable to mine on:
C1: 4057359 plots (1ms/quality)
C2: 2748267 plots (2ms/quality)
C3: 1376836 plots (3ms/quality)
C4: 862206 plots (5ms/quality)
C5: 450080 plots (9ms/quality)
C10: 970939 plots (4ms/quality)
C11: 335636 plots (12ms/quality)
C12: 209885 plots (21ms/quality)
C13: 127683 plots (34ms/quality)
C14: 70777 plots (59ms/quality)
C15: 30938 plots (122ms/quality)
C30: 74660 plots (57ms/quality)
C31: 63505 plots (67ms/quality)
C32: 49919 plots (84ms/quality)
C33: 36758 plots (113ms/quality)
C34: 22796 plots (178ms/quality)
C35: 15035 plots (260ms/quality)
C36: 7879 plots (448ms/quality)
C37: 0 plots (unsupported with current GPU configuration)
C38: 0 plots (unsupported with current GPU configuration)

3080 TI
Results are shown for plot filter 512 and for all mining GPUs combined
This PC is capable to mine on:
C1: 4460476 plots (1ms/quality)
C2: 2216318 plots (2ms/quality)
C3: 964439 plots (4ms/quality)
C4: 506226 plots (8ms/quality)
C5: 256317 plots (16ms/quality)
C10: 825298 plots (5ms/quality)
C11: 318355 plots (13ms/quality)
C12: 202152 plots (21ms/quality)
C13: 117587 plots (36ms/quality)
C14: 61628 plots (68ms/quality)
C15: 27677 plots (135ms/quality)
C30: 73411 plots (58ms/quality)
C31: 60552 plots (70ms/quality)
C32: 45519 plots (92ms/quality)
C33: 33043 plots (125ms/quality)
C34: 22251 plots (182ms/quality)
C35: 15229 plots (257ms/quality)
C36: 8108 plots (437ms/quality)
C37: 0 plots (unsupported with current GPU configuration)
C38: 0 plots (unsupported with current GPU configuration)


I wanted to share my experience so far with the 3.0 beta on Linux. Given the benchmarks and my very modest farm of only 25 × 8 TB spindles, I opted for C36 compression. I stood up 2 dedicated plotting rigs, and plotting is quite good, with plot generation times (to SSD) of 1:50 (1 min 50 s) on a 3090 and 2:50 on a 3080 Ti while plowing to other disks.

Mining (a.k.a. harvesting) is a bit more troublesome. The mining rig uses a 4070 Super, which, according to the benchmark, should be able to support "C36: 7879 plots (448ms/quality)". With 5347 C36 plots, I am seeing warnings in the output like this:

WARNING: High GPU load #1:100%
WARNING: Too many timeout qualities! 737 qualities out of 15734 were not processed in time. If no other application is loading the system, launch the benchmark to see if you are using too many plots or compression level is too high for this PC

I really don't know what to make of those, as they represent only about 5% of my "qualities". The good news is that this has not yet shown up as stale shares on my nossd.com status page. Also worth noting: power usage on this rig increased from 341 W to 472 W, since the GPU is now at 95% load constantly.

So, I am left with a bit of a dilemma. Do I:

  1. Wait for a newer client version and hope performance improves?
  2. Re-plot at a lower level like C32?

My other option is to swap a 3080 Ti in place of the 4070 Super, but I am reluctant to do that since it has the potential to use much more power: TDP for the Ti is 350 W versus 220 W for the Super.

Thoughts and suggestions are welcomed.

Your issue is that the benchmark figures given are still for plot filter 512, not the 256 filter in effect now. Efficiency isn't likely to double in a new version, so replotting to a lower level is probably best.

Ah. Good catch. I was aware of the filter change, and I noticed that the benchmark says 512, but I did not put the two together. If I understand the filter change correctly, it basically means the miner has to do twice the work it did before. So a reasonable estimate is to cut the benchmark plot numbers in half, meaning my 4070 Super could only support about 3940 C36 plots. I am currently replotting at C33, where the Super should be able to support up to about 18k plots. I will only have about 6k, so that should put my GPU at about 30% load, reducing power consumption back to something more to my liking.
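The halving estimate is just arithmetic on the benchmark numbers, assuming mineable plot capacity scales linearly with the plot filter value (an assumption, not anything NoSSD has documented):

```python
# Scale filter-512 benchmark capacities down to filter 256.
# Assumption: capacity scales linearly with the plot filter value.
bench_filter = 512
actual_filter = 256

# 4070 Super figures taken from the benchmark output quoted earlier.
bench = {"C36": 7879, "C33": 36758}

scaled = {lvl: n * actual_filter // bench_filter for lvl, n in bench.items()}
print(scaled)  # C36 drops to ~3939 plots, C33 to ~18379
```

By that estimate, ~6k plots at C33 against an ~18k capacity is roughly a third of the GPU's headroom, which matches the ~30% load figure above.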

Didn't NoSSD add the ability to benchmark your hardware at any filter value you want? That was in the latest non-beta build.

It's in the beta version too; people don't do basic research.


Although I have not played with the beta build yet, I would think it still has all the previous switches active.

Finally, NoSSD v3.0 is now available to the public.
As far as I’m concerned, C33 seems to be the way to go,
considering energy costs and market prices.


If I use CPU plotting with the --no-temp option, I get a segmentation fault.
I have 512 GB of RAM and two CPUs.
Why does that happen?