Dr. Nick has also joined the Chia team to work on the future of plotting!
The DrPlotter technology will remain unchanged and we’re removing all fees for its use.
Chia Network Acquires DrPlotter Technology and Expertise - Chia Network
As was said long ago, this is what they should have done.
The question is whether they will run it into the ground the same way they did with BB, as it looks like the problem with CNI is at the management level, not the dev level.
So when will there be a Windows version? One that won't need an RTX 4090 card?
As soon as the RTX 5090 sees the light of day.
It is funny where they say removing the fees will not require a replot, just a data copy…
-
I thought there were no fees, just a share of GPU power? Just stop sharing the GPU!?
-
The data copy is the hardest part for me, lol
Edit 3. I just switched everything off
Edit 2:
each DrPlot you create contains a small set of developer proofs, which will occasionally be solved just like one of your own proofs
So that all makes more sense now
My feeling is that he stored his own plots in with your plots, and I think I've seen this mentioned elsewhere. If that is the case, it could mean slightly smaller effective plots for you.
It would be interesting to find out the method used though.
Still need a Windows version to test on another server…
Only if you have a 3090 or 4090
Then it needs more work before most people can use it.
It needs 24 GB of VRAM; that's the problem.
As someone with a small farm, I don't think the issue is the fees; it's the price of Chia!
I think @drhicom has a valid question. If you read that announcement, there are only two things of interest there (for me, at least). The first is that the main focus is still on having non-compressed plots. The second is that Nick will only "initially" be working on his plotter (i.e., before the new plot format kicks in).
So, the question is why they purchased it. From my perspective, it's a stop-gap measure to draw people away from NoSSD and prevent them from launching a majority attack. Having only that 4x compression running on a 3090/4090 doesn't accomplish much, as potentially only a relatively small share of farmers (netspace) is sitting on it. So, if lower compression levels and Windows support (as in GH and NoSSD) are not added, that purchase doesn't make any sense.
It would really make more sense if they purchased either NoSSD (removing that threat completely today) or GH (slowly drawing people away from NoSSD starting tomorrow). Apparently, they decided to go for Nick as the least expensive option; however, it is the one that, as of today, is rather worthless.
Nick went for the 24 GiB plots because he saw a gap in the market; I'm pretty certain he said he could do bigger plots, which would of course not require so much VRAM and thus be more accessible. So it's entirely possible they may release plot sizes that are easier for the average user to produce, but at the pace CNI moves, I'm not so sure.
As for NoSSD, have you seen their netspace lately? It's down from over 11 EiB (IIRC) to currently 9.32 EiB, and I can't see that decline stopping for a while unless they come up with something else.
Netspace is also decreasing in general; some must be replotting to high GH compression levels, but that still isn't making up for those packing up and leaving.
We are in agreement on most likely every point you mentioned. The question is how CNI wants to handle it (there is no mention of lower compression levels in that PR article).
The only thing really worth focusing on (as far as NoSSD goes) is their borderline capability for a majority attack (roughly 33% of netspace plus one timelord 2x faster than the current one). They could also keep more people around if they lowered their fees and/or added more compression levels; at least that would be a signal for me that they need that 33% (42%, i.e. 12 EB, does not require a 2x faster timelord). Also, they are the only party right now that will be pretty much done once the new plot format is incorporated, so that may be extra motivation for them (Max right now has a very good motivation to focus only on MMX).
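To make the arithmetic behind those percentages explicit, here is a minimal Python sketch. The 33% and 42% thresholds are quoted from the post itself, and the total netspace figure is an assumption chosen so that 12 EiB works out near 42%; nothing here derives the timelord requirement.

```python
# Minimal sketch: what share of total netspace a given amount of attacker space represents.
# ASSUMPTION: total netspace of ~28.5 EiB, picked so that 12 EiB lands near the 42%
# quoted in the post. The 33%/42% attack thresholds come from the post, not from this code.

TOTAL_NETSPACE_EIB = 28.5  # assumed total netspace

def netspace_share(attacker_eib: float, total_eib: float = TOTAL_NETSPACE_EIB) -> float:
    """Fraction of total netspace controlled by an attacker holding `attacker_eib`."""
    return attacker_eib / total_eib

for eib in (9.3, 12.0):  # roughly NoSSD's current pool size, and the post's 12 EB figure
    print(f"{eib:5.1f} EiB -> {netspace_share(eib):.1%} of an assumed {TOTAL_NETSPACE_EIB} EiB netspace")
```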
Hello there! I am new here!
It is about time Chia stopped the parasites in our network. I've never seen that in any company I own shares in, and those companies pay me dividends.
What I see is corporate/gov use. It means ASICs in the future, I guess. The 24 GB VRAM move may be the first step. How many can afford a $2000 GPU? Or a $5k datacenter GPU?
Still, I do not understand the mixed philosophy of Chia. It is supposed to be green, yet lately there has been an outbreak of high compression and a rapid decrease in efficiency.
CPU farming is still the king of efficiency/profit. What is the point of having +20% space if you use double the energy, while profit drops, because nobody has free energy, no matter how many claims there are about "paid-off" PV on the roof.
That statement is simply baseless and wrong. If I have 1 PB (and I do, approximately), with Gigahorse C30 I have 2.34 PB equivalent. That's a bit more than 20%, yes? The disks alone, just idling, take 650 watts, so that's far short of "double the energy" that 2.34x the disks would consume. Running full tilt with the GPU, energy varies from 750 to 1050 watts; let's say 900 watts average.
It is absolutely better for profit, absolutely better for energy use per XCH, better in every way than building an equivalent-reward-sized farm without compression, which in my example would use approximately 1500 watts of disk energy at idle alone, and more when actually CPU farming.
Of course, if you want to use some compression while CPU farming, then you'll get somewhat better results, but nowhere near GPU compression efficiency.
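To put the numbers above side by side, here is a minimal Python sketch using only the wattage and capacity figures quoted in this post; the watts-per-effective-PB metric is my own framing, not something stated by the poster.

```python
# Sketch comparing the two setups described above, using only the figures quoted in the post.
# The metric (average wall watts per PB of effective space) is an assumed framing.

def watts_per_effective_pb(total_watts: float, effective_pb: float) -> float:
    """Average wall power divided by effective (post-compression-equivalent) capacity."""
    return total_watts / effective_pb

# 1 PB raw with Gigahorse C30 -> 2.34 PB effective, ~900 W average while GPU farming (per the post).
compressed = watts_per_effective_pb(900, 2.34)

# Same 2.34 PB of reward-equivalent space with uncompressed plots:
# ~1500 W of disk idle power alone (per the post), before any CPU farming overhead.
uncompressed = watts_per_effective_pb(1500, 2.34)

print(f"GPU-compressed : {compressed:6.1f} W per effective PB")
print(f"Uncompressed   : {uncompressed:6.1f} W per effective PB (idle disks only)")
```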
you are doing it wrong, darling.
I have 1 PiB, 1.2 PiBe @ 420Wh operating. HDDs idling only, 340Wh. Even at $10/XCH I still have positive cash flow.
Let's say I doubled it like you; I would need 2x 3090 or at least 1x 4090, GPUs alone costing, let's say, $2000. Since nobody likes to talk about wattage, I am going by official Chia figures and some sparse discussion.
Based on Max's spreadsheets, an RTX 3060 (I have a 3060 Ti) should handle C8 at 1 PiB raw with a power limit and clocked down; it allegedly consumes 100 W in nvtop. I can confirm <40Wh with a clocked-down RTX 3060 Ti for 1 PiB raw, just for the official C4 compression.
In reality that means 120-150 W, because the driver cannot read motherboard/PCIe consumption or losses in the PSU. The wall reading is the fortune-teller most don't like either.
That would be 2x 260 W, or 350 W. If you are just starting, it may be a good idea to buy $2-3k in GPUs instead of $8k in HDDs. I already had the HDDs, so for me it was pointless. I kept CPU farming, compressed with just the official Chia C4, because my view was that parasitic 3rd parties in our network wouldn't last forever. It looks like by the end of the year my vision comes true once again.
The filter halving (despite over-design on my side) screwed everything up, and I was forced to use the GPU anyway, as was my plan B during the replot. Instead of 1.2 PiBe @ ~425Wh, I am at about 440-450Wh… too early to say. Proof times are below 1 sec again.
I am betting that if I hook up my old overclocked Ryzen 9 3900X, it will go down again… maybe it will be even more efficient.
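Applying the same framing to the figures in this post, here is a minimal sketch, with the caveat that it reads the post's "Wh" figures as average watts (an assumption, and exactly the unit question raised a few replies further down).

```python
# Sketch applying the same watts-per-effective-capacity framing to this post's figures.
# ASSUMPTION: the "Wh" values are treated as average watts; that reading is disputed
# in the replies below.

SETUPS = {
    # label: (average watts, effective capacity in PiB-equivalent)
    "CPU, official C4 (pre-halving)": (425, 1.2),   # "1.2PiBe @ ~425Wh"
    "GPU fallback (post-halving)":    (445, 1.2),   # midpoint of "440-450Wh"
}

for label, (watts, pibe) in SETUPS.items():
    print(f"{label:32s}: {watts / pibe:6.1f} W per effective PiB")
```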
The whole GPU/compression hype still doesn't support the math. Max himself has some weird misunderstandings about investing and power usage. Boy, how fiercely he defended his bullshit… I am still laughing now. It is hard to discuss anything with dreamers.
I have even created a thread about GPU farming… there is awkward silence; it is either pure ignorance or just the usual poverty.
I would say "Fuck off", but the 20-character minimum makes me say "My friend, learn your units".
You sound like Max's twin. It looks like other parts of the world do not use SI units or the standard measure of time, ROFL.
Are you charged for sparks per minute or what? Or bullshit per watt?
Do we need to repeat the awkward embarrassment we experienced in the "Compression myth" thread I posted a few months ago? ROFL
There were also many fraumen with excess fembot hormones who fiercely defended their bullshit at any cost while shifting the blame onto me, ROFL.