What do you think about this?
It appears to be written by someone who understands chia and plotting. I’d say it is bullshit but then I probably thought that about nossd.
The one glaring problem is their use of kW per day as a measure of farm power use. How could they code this and still be such an idiot?
(It’s probably that drjo… twit)
Quite easily, I would think; lots of people get confused between the amount of electricity used over time (energy, measured in kWh) and the instantaneous power required to run an electrical item (measured in kW).
Whoever it is, they joined GitHub in 2012 and seem to have been involved in various projects over the years.
Initial thoughts, oh no here we go again
Actually, if you check the other repositories, that person was involved in some way with Chia development starting around 5 years ago (committed to the chia project). It looks like they stopped just before Chia went mainnet. Also, it looks to me like that person had a high interest in the Hellman attack early on (basically what NoSSD first officially used, and which the whole Chia community forgot about). Most likely any Chia devs who worked around that time could comment on that person, and maybe also on why he was not hired early on.
I guess if this were more of a scam, there would also be a Windows binary for download, and there is none right now.
Although, if the project does what it advertises, we are getting closer to PoW even without ASICs. That implies we may have a 128 plot filter soon.
Sorry, I got mixed up again. Those two extra repositories don’t look like they’re signed by drnick, just clean pulls.
Interesting fee structure, sounds like they use a small amount of space to farm their own plots, to generate revenue.
To continue this journey, I’ve stepped away from incorporating a randomized fee or possibly changing fee on farming revenue. Instead, a minimal portion of user resources is allocated to support my own plots and farm. Most importantly, all the performance stats I present already factor in these contributions – ensuring that there are no hidden costs. What you see is precisely what you get. This contribution structure is steadfast and unchanging, ensuring that the results you see now will remain consistent in the future. This stability offers ease in planning and peace of mind.
Must be quite power intensive, minimum GPU is a 3090.
PS Quite a bit of discussion in the Madmax Discord.
I’m the developer for DrPlotter – didn’t mean to be a twit and use the kW per day metric, I’m happy to correct this and present a better alternative that’s less of a glaring problem (is kWh per day the correct term?). I just needed something that was easy to compare, some people assume GPU just equals more power.
Tomorrow I would like to post a topic to share a bit more about my background and the history of the project. I have done one or two posts here in the past, but haven’t been active.
I welcome all feedback, good and bad. Don’t hold back.
Interesting. I will keep an eye on it.
yes, 1 kW for 1 hour = 1 kWh, 1 kW for 24 hours = 24 kWh, 24 kW for 1 hour = 24 kWh, etc
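In code form, the conversion is just power multiplied by hours (a minimal sketch; the 1 kW farm draw is just an example figure):

```python
# Energy (kWh) = power (kW) * time (hours).
power_kw = 1.0          # constant power draw of the farm, in kW
hours_per_day = 24
energy_kwh_per_day = power_kw * hours_per_day
print(energy_kwh_per_day)  # 24.0 -> "1 kW for 24 hours = 24 kWh"
```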
I haven’t tested the binaries, but I know it’s possible, so most likely it’s legit.
The problem with this method is that it doesn’t work with low compression; you need 20 GB / 24 GB of VRAM or more at a minimum, plus a lot of power draw.
I have to say the compute seems very optimized; I would have expected the farm size to be smaller. It’s better than mine and NoSSD’s C20.
You can go even higher, 560% and 650%, but then you need 32 GB and 48 GB of VRAM, and a lot more compute of course.
I was just thinking Chia farming was getting boring again
Not for me, as I am not blessed with any 3090s, but gotta love how slick the presentation looks, so kudos to @drnick
I would suggest, however, changing the specs presentation a bit (apart from the kWh thing mentioned). Right now it compares an equal number of plots between the different compressions and the original. I think it’s better to compare based on equal physical space, or at least add that as a second comparison. For existing farmers that will make much more sense to compare and evaluate.
For new users, actually the space savings might be important so the comparison as it is now, is also good.
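To illustrate the equal-physical-space comparison, here is a minimal sketch; the compression ratio and plot size below are illustrative assumptions, not DrPlotter’s actual specs:

```python
# Hypothetical comparison: effective farm size at equal physical space.
physical_tib = 100.0             # fixed disk space an existing farmer has
uncompressed_plot_tib = 0.1013   # a k32 plot is roughly 101.3 GiB
compression_ratio = 4.0          # assumed: compressed plot takes 1/4 the space

plots_uncompressed = physical_tib / uncompressed_plot_tib
plots_compressed = physical_tib / (uncompressed_plot_tib / compression_ratio)

# At equal physical space, effective (farming) size scales with the ratio.
effective_tib = plots_compressed * uncompressed_plot_tib
print(f"{plots_uncompressed:.0f} plots vs {plots_compressed:.0f} plots on the same disks")
print(f"effective size: {effective_tib:.0f} TiB from {physical_tib:.0f} TiB physical")
```

The point for existing farmers: with disks fixed, effective size grows linearly with the compression ratio, which an equal-plot-count comparison hides.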
Hey Voodoo, thanks for the kudos, much appreciated.
I had a version of the presentation where the plot count was fixed to 10,000 plots. The problem is that GPU utilization varies, so when summing up the cost you’re not getting the full potential, and comparing Eco3x and Pro4x on 10,000 plots with different GPU utilizations wasn’t a balanced comparison. I tried a version displaying hardware costs per PiB, but then it was potentially confusing in a different way.
I did a quick edit of JM’s TCO model; that one is pretty thorough, but not the easiest to understand at a glance. I didn’t want to present any $ figures or factors that change when netspace/price/etc. change, just stick to the constants.
Thanks Max, I’ll revise it to kWh.
I spent a lot of time on all the kernels, sometimes a few weeks on just one function, only to throw it out or gain a few % here and there. GPU optimization is a bit of a black art at times, and it’s also too easy to come across 1-in-a-trillion bugs. Sometimes it really paid off, and I got a few breakthroughs.
I’m constantly impressed with how quickly you can push out releases. I suspect you’ve kept this cat in the bag for about as long as I’ve been working on it. Please feel free to PM me anytime.
Have I understood it correctly that the DrPlotter and the DrSolver both need 24GB VRAM? i.e. if you only have 1 GPU with 24GB then you can only plot or only farm?
Yes, that is a shortcoming; without an additional GPU, your plots will be delayed until they are finished. Either you commit some resources to a dedicated plotter, or you re-use the GPU for solving after plotting.
Not necessarily, I could get two 3090s and plot and farm, then when the filter changes to 256 I have the extra 3090 all ready to go
Kinda glad I didn’t replot from OG in a way…
Looks like some of my 3090’s I hung on to after the Ethereum merge might come out of retirement…
At 800 USD+ for a used 3090, I think I will pass until a new generation of GPUs lands from Nvidia and drives down the crazy prices of older cards. I’ll stick with NoSSD and see if maybe they do something to shrink their plot sizes further without the huge jump in compute. That, and I don’t Linux, yet……
Not only do I not want to buy a 3090, it won’t fit in my farmer, which also runs Windows 10
Yeah I know how it goes, you can spend days not achieving anything, and later get a big boost from a new random idea.
I’m not planning to compete with you on this, the pie is getting smaller and smaller with the plot filter reductions.
When I found this method originally I did some table 1+2 testing on GPU and concluded it only made sense on FPGA, due to the high GPU power draw and a significant compute advantage for FPGA (which would offset the increased price). It seems you managed to optimize it well though.
If you are up for it, please share your upcoming post on the background/history of your project to reddit as well.
If you are using a new reddit account let me know what your user name is prior to posting, otherwise you will be blocked by the automod. You can dm me here, or send modmail on the subreddit. https://www.reddit.com/r/chia/
Any chance of using a 7900 XTX 24 GB?