NoSSD Chia Pool update! GPU mining, 50.3GiB plots, >200% reward, new CPU & GPU plotters

Are the performance measures for C10 - C15 compression with a plot filter of 512 or 256?

Unlikely; for my comparison (using bladebit):

1070 Ti: 4.75 min plot
3900X: 26 min plot

Both these systems use about the same power (watts at the wall) while plotting.

That’s a very big difference in efficiency right there: with GPU plotting I’ll only be drawing that power for roughly 5.5x less time than with CPU plotting before my farm is replotted.

NoSSD CPU plotter is also quite efficient compared to the other CPU plotters out there, so the difference might be less.

Edit:
Actually we can see from the website:
13900K makes 3 min plots
GPU makes 1.67 min plots
A 13900K at full blast uses more power than most GPUs and takes about twice as long to make a plot, so there you have it.
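For anyone who wants to run their own numbers, here’s a minimal sketch of the energy-per-plot math. The plot times are the ones quoted above; the wall-power figures are assumptions you’d replace with your own measurements:

```python
# Back-of-envelope energy-per-plot comparison. Plot times are from the posts
# above; the power draws are placeholder assumptions, not measurements.

def energy_per_plot_wh(power_watts: float, plot_minutes: float) -> float:
    """Energy consumed per plot, in watt-hours (power * time)."""
    return power_watts * plot_minutes / 60.0

cpu = energy_per_plot_wh(power_watts=330, plot_minutes=3.0)    # 13900K, assumed ~330 W
gpu = energy_per_plot_wh(power_watts=300, plot_minutes=1.67)   # GPU plotter, assumed ~300 W

print(f"CPU: {cpu:.1f} Wh/plot")   # ~16.5 Wh
print(f"GPU: {gpu:.1f} Wh/plot")   # ~8.4 Wh
```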

@Anthony
It looks like there’s not much performance difference from one GPU to the next, and the plotter is on a fixed timer? So roughly what is the minimum GPU needed to make a 1 min 40 sec plot?

512
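(For context: the plot filter sets what fraction of plots is eligible per signage point, so halving it from 512 to 256 roughly doubles the decompression work a compressed-plot farm has to do. A minimal sketch, with a hypothetical farm size:)

```python
# Expected number of plots passing the Chia plot filter per signage point.
# Each passing compressed plot costs decompression work, so a smaller filter
# value means proportionally more farming compute.

def expected_passes(total_plots: int, plot_filter: int) -> float:
    """On average, 1 in `plot_filter` plots passes each signage point."""
    return total_plots / plot_filter

farm_plots = 10_000  # hypothetical farm
print(expected_passes(farm_plots, 512))  # ~19.5 plots to check
print(expected_passes(farm_plots, 256))  # ~39.1 -- roughly double the work
```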

@Voodoo
GPU plotter performance mostly depends on PCIe speed and RAM bandwidth, so PCIe 4.0 cards are preferred with DDR5 system memory.

The only risk is having to replot, though, right? No one is going to farm to their bag…

No one should, but many actually do not take care of their keys very well.

But indeed, if you send your XCH to a real cold wallet, then the risk is having to replot (and losing a bit of XCH before you find out).

The biggest issue is if you want to be part of a closed-source pool: you’re not signing your own blocks.

This is not a purely economic argument but also one of principle. It’s a difficult one imo. I understand that people like NoSSD need to keep their stuff closed to make money, and their innovations are seriously impressive. But I like the decentralization of open-source farmers signing their own blocks. Do I want to sacrifice my principles to make more money… :thinking:

What is the ROI? Energy vs performance?

I have a 22 PiB farm at C8 with GH, and it seems the energy bill is too high. Beat my statistics.

@BlackEgo

The answer requires knowledge of so many unknown variables. Our miner is more efficient than GH, that’s all I can say.
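To make those unknowns concrete, here’s a rough skeleton of the calculation; every number in it is a placeholder you’d fill in from your own farm and electricity contract:

```python
# Rough ROI skeleton: monthly reward vs. electricity cost. All inputs are placeholders.

def roi_ratio(reward_usd_per_month: float, farm_watts: float, usd_per_kwh: float) -> float:
    """Monthly reward divided by monthly energy cost; > 1 means power is covered."""
    energy_cost = farm_watts / 1000.0 * 24 * 30 * usd_per_kwh
    return reward_usd_per_month / energy_cost

# Placeholder example only -- not a measured farm:
print(roi_ratio(reward_usd_per_month=500.0, farm_watts=800.0, usd_per_kwh=0.30))
```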

Kudos! Classy and impressive to just develop and deliver at this level without hype.

These compression levels are intriguing and I’m sure most are curious about GPU utilization/power while farming at these levels.

While it’s nice to know the max farm size for those two GPUs, having an idea of utilization across an array of common GPU models at different compression levels using a constant farm size would be helpful.

For instance, what would average utilization be for 500 TB using an A4000 at the different C10 through C15 levels?

Does your software/client farm plots over SMB/Samba or NFS mounts, and how does this perform?

@Whatshisname

We’ve included a benchmark in our software. You can get the best estimate of the space/performance tradeoff by running it on the target machine. We have special switches to farm on network shares and other high-latency storage.

Can we get a “--no-mining” option for a computer that we’re only using for plotting?

Just remove the wallet address from the command line


Awesome. Thank you!


Sorry, what is the difficulty with farming on an AMD GPU? Thanks for the explanation.

You haven’t disclosed how many hard drives you have in this 22 PiB farm, so it’s difficult to say how to make improvements.

Which version for CPU plotting? Old 1.2 or new 2.0?

Well, you can do both. I currently use FlexFarmer but also run a non-farming Chia node to contribute to the network.

The risk is in running a piece of software that’s closed source: you cannot inspect it to see what it will do on your system (or, more precisely, you can, but it’s not trivial).

It is an IT security risk, not an economic opportunity cost risk.

Well, they win. I am out.

@idds Change “3d” to “cuda”

Question: is there a minimum CUDA version / GPU generation needed?
