Gluster plotting machine?

I am interested in plotting k33 or k34 plots, partly to future-proof my setup, but also to have fewer plots to check. This will go a long way with the additional storage I now have and will push me to 2PB of space.

Obviously, using bladebit to plot k32 is one option, and I can already do that as I have one machine with 768GB of RAM. Another option is madmax for k33 and k34, either all in RAM or a hybrid of NVMe and RAM.

I have looked into getting a system with closer to 2TB of RAM, but the cost of 128GB RAM sticks is absurd compared to 64GB sticks (3x the cost).
I stumbled across gluster last night, and it seems it's possible to create a filesystem spanning multiple nodes in a clustered environment. So why not make a tmpfs drive that spans the RAM of two or three servers? The only other thing I would need is some PCIe NICs connected back-to-back for fast communication between the machines.

Is this realistic, or will the throughput of even a 25 or 100GbE connection be too slow for the memory lookups?
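For a rough sanity check, here's the back-of-envelope math I've been doing. All the numbers are assumptions on my part: ~1.5 TB of temp writes per k32 plot for madmax (roughly 4x that for k34), and ballpark usable throughput per link type:

```python
# Back-of-envelope: can a network-backed tmpfs keep up with plotting I/O?
# All figures below are rough assumptions, not measurements.

TMP_WRITES_K32_TB = 1.5                      # approx. temp data written per k32 plot
TMP_WRITES_K34_TB = TMP_WRITES_K32_TB * 4    # k34 temp traffic scales roughly 4x

# Usable throughput in GB/s (line rate minus protocol overhead, roughly)
links = {
    "1 GbE": 0.11,
    "25 GbE": 2.8,
    "100 GbE": 11.0,
    "local DDR4 (dual channel)": 40.0,
}

for name, gb_per_s in links.items():
    secs = TMP_WRITES_K34_TB * 1000 / gb_per_s   # time to push k34 temp writes
    print(f"{name:>26}: {secs / 60:6.1f} min just moving k34 temp writes")
```

If those assumptions are anywhere near right, 25 GbE alone adds over half an hour of pure transfer per k34 plot (and reads roughly double the traffic), so the interconnect looks like the bottleneck, not the RAM.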

Sam

Wouldn't it be cheaper to simply use many consumer-grade gaming PCs? :wink:

My rig:

AMD Ryzen 9 3950X @ PBO 4.2 all cores: 600 CHF
Corsair H150i AIO cooler: 200 CHF
G.Skill Ripjaws 128GB 3200MHz kit clocked to 3600MHz: 600 CHF
ASUS ProArt B550: 180 CHF (used)
Corsair MP600 2TB (maybe smaller now): 250 CHF
Seasonic Prime TX 750W: 250 CHF
any GPU: 50 CHF

~2000 CHF/USD rig with <20 min / 0.05 CHF / 50 W per plot, or 72 plots/day.

The Threadripper way may be 5000 CHF for 15 min/plot, or 92 plots/day. I doubt it has a better plot-per-watt/price ratio than cheap consumer stuff.
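Quick math on those two (hardware amortized over one year of 24/7 plotting, which is my assumption; energy is excluded since it's roughly a wash per plot):

```python
# Plots/day and amortized hardware cost per plot for the two rigs above.

def plots_per_day(minutes_per_plot: int) -> int:
    return 24 * 60 // minutes_per_plot

consumer = plots_per_day(20)      # -> 72, matches the figure above
threadripper = plots_per_day(15)  # -> 96 theoretical (92 quoted, with overhead)

# Amortize hardware over one year of continuous plotting (an assumption).
print(f"consumer:     {2000 / (consumer * 365):.3f} CHF/plot hardware")      # ~0.076
print(f"threadripper: {5000 / (threadripper * 365):.3f} CHF/plot hardware")  # ~0.143
```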

How much is your plotter?

price/watt/plot?

I seriously doubt any old server-grade stuff has a better price/watt/plot/speed ratio.

The HW above is easily resold for 50% of its original price even 3 years later :wink: Where do you sell an old used server with >512GB RAM? You will lose a fortune… I recall selling my pimped dual-Xeon Dell T5500 workstation with 48GB… bought used, sold at a loss of at least 30%.


OP has been AWOL for a bit, but I agree with your point. A beefy consumer rig (or rigs) and madmax might be better suited for k33s and k34s than investing in fields of RAM. I kind of miss the early days of this project and all the YouTube videos with plotting boxes lined up on shelves. (I know most people on here would rather not go back to the OG plotter days, but I like the aesthetic.)


I guess after the next halving, k33 will soon follow. Efficiency will be the cornerstone of Chia… you cannot replot 1PB in 1 year :wink:

I have seen some kids working on GPU plotting… they're allegedly hitting 9 min with a GPU/CPU/RAM combo. It may eventually become an official plotter, as MadMax did.

I already have an RX 6800 XT in my workstation… if I can cut plotting time by 50%… why not… a 3000 CHF rig doing 140+ plots/day is romantic :smiley:


OP here; I've been lurking, I never really posted much.

I've found from having many plotting machines that DDR3 is not much faster than using two different NVMe drives. Also, AMD CPUs are incredibly slow; for performance, what seems to matter most is core count on Intel. You can find a lot of cheap dual-CPU servers to load Linux on, with not much RAM but decent 18-core CPUs. Load up a few PCIe cards with NVMe in RAID and you can plot k34 pretty quickly; rough sizing below.
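For anyone planning the NVMe RAID: the final plot sizes are the usual published figures, and the temp multiplier is my own ballpark assumption extrapolated from k32 madmax needing ~220 GiB of temp space:

```python
# Final plot sizes and rough temp-space estimates for planning an NVMe RAID.
# The 2.2x temp multiplier is an assumption extrapolated from k32 madmax.

FINAL_GIB = {32: 101.4, 33: 208.8, 34: 429.8}  # published final plot sizes

for k, final in FINAL_GIB.items():
    tmp = final * 2.2
    print(f"k{k}: final ~{final:.0f} GiB, temp ~{tmp:.0f} GiB")
# k34 lands around ~950 GiB of temp, so budget ~1 TiB of fast scratch per plot.
```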


Those old servers have crap efficiency. My old dual-Xeon Dell T5500 workstation drew 500 W at full load… an Apple MBP 15" 2017 gives the same performance at <100 W.

I am currently running a Ryzen 9 5950X @ 4.3 GHz at ~18 min/plot @ 170 W. I doubt you can manage that with a 10-year-old server on a >12nm process :wink:

MadMax doesn't scale very well beyond 12 cores on tourist NVMe… 16 cores get braked by the NVMe… perhaps a $5k Intel Optane can utilize 16 cores… but then you do not plot at $0.05/plot anymore :wink:

Dual CPU… NUMA collisions, numactl control. Can you fit an old server with 2 × 350GB RAM to avoid the NVMe bottlenecks?


Sure, but those are much less expensive. You can potentially assemble a Dell T7610 with 512 GB RAM for about $700, and it will give you about 15 min plots (either MM or BB). Yes, it will draw around 450-650 W (depending on the CPUs used). Depending on how many plots you need to do, the upfront cost may easily justify the power consumed.

You don't need 350 GB RAM to do MM plots fully in RAM; you "just" need 247 GB RAM or so. Thus, you can easily run 2 MM instances plotting k32 plots fully in RAM on a 512 GB box. Of course, your main bottleneck will be offloading plots from RAM when they are completed; see the quick numbers below.
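Quick numbers on that, taking the 247 GB figure and assuming ~20 min per k32 plot per instance (the pace is my assumption):

```python
# Two in-RAM MM instances on a 512 GB box, and the drain rate needed.

ram_per_instance_gb = 247
print(512 - 2 * ram_per_instance_gb, "GB headroom left for the OS")  # -> 18 GB

# Each instance finishes a ~101.4 GiB plot; assume ~20 min/plot (an assumption).
plot_bytes = 101.4 * 1024**3
rate_mb_s = 2 * plot_bytes / (20 * 60) / 1e6
print(f"~{rate_mb_s:.0f} MB/s sustained just to drain finished plots")  # ~181 MB/s
```

That is more than a single HDD can sustain, so you would need two destination drives (or a fast buffer) to keep both instances fed.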

I never had NUMA problems on either Ubuntu or Rocky Linux. If you did, maybe the culprit was your H/W.

Anyway, all those plotters are basically the old generation, so there's no point in banging on them anymore. (Most likely, without the MM plotter, Chia would not be where it is right now.) New plotters are coming, and most likely all will be GPU-assisted, so there will be no point in going either the 5950X or the Xeon route anymore. Although thinking that maybe a 4090 Ti will be needed is kind of sad (due to the cost).


I'm not doing k32, I am doing k34. I have a 5950X overclocked, and it is incredibly slow compared to a Xeon (50%-100% more time on AMD). I don't have issues with NUMA collisions or control. Wattage is not the only thing to look at; just because one is an old server doesn't mean it's not better than a 5950X, and yes, I've tweaked my settings on both and have run both for a very long time. And for your information, when you have a 2PB+ farm, the fewer lookups the better; that is why I run k34. Just to add a few smilies, not sure why you did: :wink: :wink: :wink:


People who buy such a machine dream about at least 2PB, I guess. The more you plot, the more you lose.

$700 initial cost for 2.6-3.8x higher running costs doesn't look businesswise extraordinary to me :wink: Translated to the discussion above:

$0.05/52 W per plot vs. $0.13-0.19/plot… 20k plots ≈ 2PB = $1600-2800 extra, so you just "saved" -$300 to -$1500.
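Checking my own arithmetic, assuming the ~2000 CHF/USD consumer rig from earlier in the thread as the baseline:

```python
# Net result of the $700 server vs a ~$2000 consumer rig over 20k plots (~2 PB).

plots = 20_000
cheap = 0.05                     # $/plot on the consumer rig
server = (0.13, 0.19)            # $/plot range on the server

extra = [round((s - cheap) * plots) for s in server]
print("extra running cost:", extra)              # -> [1600, 2800]

upfront_saving = 2000 - 700                      # baseline rig price is assumed
print("net:", [upfront_saving - e for e in extra])   # -> [-300, -1500]
```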

It looks like a smart decision, same as plotting "future-proof" plots. I always wondered what sane person does that :stuck_out_tongue:

I highly doubt even the stupidest, most clueless crypto kid would "force" thousands of other kids to plot 20EiB of data, and 1 year later say "k32 is dead… plot k33"… 6 months later… "surprise… plot k34".

The point of >k32 is beyond my business mind, but if you like it, do it. My guess is k33 may come after the halving… 3 years later we will have more efficient CPUs, per Say's/Moore's law.

Why would you waste energy/time on something that has zero advantage… it's beyond my beautiful mind. It may be something to do with crypto paranoia and Lambo kids.

There are some kids working on GPU plotting… they've managed 9 min now. My RX 6800 XT idles at 100 W… fully loaded, ~200 W. Allegedly you do not need a high-performance CPU… but you still end up with the same price for the setup, with less efficiency.

Unless there is some miracle in storage systems… and we finally get a normal tourist SSD with 30TB at 500 MB/s… your bottleneck will be final storage, because mechanical HDDs will still do just ~150 MB/s.

`copy took 945.182 sec, 109.813 MB/s avg`: that's normal 1Gb Ethernet, ~16 min per plot. If lucky, you get the copy down to 10 min. You would need extra-fast storage to buffer the imbalance… your costs go up.
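That ~16 min follows straight from the plot size and the link speed (note: the log's "MB/s" appears to actually be MiB/s, since the numbers only line up that way):

```python
# Why the copy above took ~945 s: one k32 plot over 1 GbE.

plot_mib = 101.4 * 1024      # k32 final size, ~101.4 GiB, in MiB
rate_mib_s = 109.813         # from the log line (labelled MB/s, but really MiB/s)

secs = plot_mib / rate_mib_s
print(f"{secs:.0f} s  (~{secs / 60:.1f} min)")   # -> ~946 s, ~15.8 min
```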

Unless there is something extraordinarily fast… I'd rather stick with the "slow", most efficient, and cheap tourist stuff on the market. If I get bored, I can still sell my rig to gamers for the same price I bought it for used :wink:

To whom do you sell a 15-year-old dual-CPU server with >256GB RAM? If Chia stays <$30, it takes 5 years just to repay an HDD that may die by then. How do you repay all this crap you bought and "future-proofed"?

If we reach times where special HW is needed… Chia will die the way Eth and Satoshi-era mining did. Chia will most likely become business people's domain in the future, and we get screwed. You guys dream about Chia as a sure thing… do not forget, it is still an experimental start-up with a questionable future… even if it is officially working with the World Bank mafia.


Thank you for your thesis. It proves to me you have no idea what you're talking about, have not messed with multiple plotters, and don't understand why k34 is better.


Says the guy who cannot do basic RTFM, UTFG, and math. ROFL.

It is written right there, in the basic documentation.

It has been proven by various pioneers that there are no benefits. There is a benefit for utility companies, because it takes an excessive amount of energy for nothing.

Yet you wanna preach to others about your exceptionality.

There are numerous discussions on why higher k values are better; if you had a large farm you would understand, and the final storage does not become a bottleneck :wink: I suggest you read up; there are numerous reasons to use higher k values. Waiting for k32 to die and then replotting is a greater waste of resources than some ancient Xeon processor :wink:

But you are getting off topic. This thread was meant to discuss the possibility of lowering entry costs by deploying gluster. At this point, with recent changes like GPU-assisted plotting and plotting-as-a-service where you can pay pennies for a plot, it seems moot. If you really think the return on investment takes that long, then why not just buy Chia instead?

Also, your math doesn't check out: 4x plot pricing for Xeon? Do you have reference systems for Xeon and AMD that you have tested yourself on k32-k34? Have you factored in the wait time for a k32 plot to write to the final destination, since it takes longer than the plotting time? Have you factored in the cheapness of used Xeon processors compared to a new AMD system when calculating costs?

Why are you limiting yourself to k32 when you will need to replot everything in a few years? You just increased your plot pricing and time, when you could be plotting new plots instead. There is a whole thread dedicated to k34 plotting, with reported times and the best systems, and the consensus is that AMD sucks.

If you want to discuss k34 further, take it to the k34 forum/posts; this is about gluster, which seems to be a non-starter. It was an idea, but I haven't seen anybody actually do it. You would need a 100GbE direct I/O link between the computers to keep up with CPU I/O; anything less and you would have a bottleneck.

As written before, anything beyond a single CPU plus hybrid or full-RAM plotting is a waste of resources, and storage will always be the bottleneck.

Unless you plan to plot 20EiB of plots… I doubt a multi-node cluster with InfiniBand connections is for normal people playing with crypto tech :wink:

In 5-10 years, when there is a need to replot… Say's/Moore's law will bring the same plotting speed for k33… then 5-10 years later for k34. Why bother with 3-10x longer plotting times now for future "advantages"?

How many years/months do you need to take advantage of your theory? Right here, right now, in your wallet… that's what matters. The rest is speculation.

I see no economic argument for plotting them now. I have no server plotting machine; I have a cheap/tourist one that currently does $0.05/plot at >78 plots/day (counting consumables such as NVMe wear and energy). The HW can be resold anytime with little or no loss.

I have seen a few threads on forums, and I haven't seen any real numbers from guys like you. Just "feel"-based tuning, and some poor arguments about initial costs or questionable long-term benefits.

What is your price per plot if you claim k34 is better?

I haven't seen any real advantages, despite lots of the usual bullshit in forums.

I have heard the common argument "fill the space". I have 6 × 100-140TB LVM JBODs with 0-30GB of space left (I use that mainly for backups).

A 3-10x longer plotting time for a 2-4x "advantage" is very smart.

The bottom line is always efficiency and ROI. You can feel whatever you want, but the resources in your pocket always tell the story.

I do it for cash flow; you may be a Samaritan or a developer… your choice, but you have to pay the piper in the end.

Current ROI with this setup is 5-6 years per 18TB HDD. What is your ROI?

This thread is about gluster; please direct all k34 questions to those forums/threads.

So why do you spam it with pointless theories and attack any opposition trying to show you the true way?

I already mentioned that with newer plotting techniques and code, gluster is moot, plus there's the barrier to entry from the extra hardware needed to reduce the I/O bottleneck. This thread was started a long time ago as a theory, which, again, I have shown won't work economically.

If you have any further comments on gluster please post them…

It couldn't be profitable in the old-school plotting days, and it is still not profitable now :wink:

Any state-of-the-art tourist PC will always have better $/plot efficiency than any fancy stuff. It reminds me of car tuners; they usually aim to prove the concept at all costs.

You can buy 10 tourist PCs, and it will still be cheaper than any enterprise/datacenter cluster. Just look at how Backblaze disrupted the enterprise/datacenter illusion of the IBM, Oracle, and HPE mega-corps :wink:

Whoever takes Chia seriously, long-term, thinks in ROI terms as pro investors do. It always comes down to cost vs. profit :wink:

You can dream whatever you like and fight fiercely to protect your bullshit, but NUMBERS NEVER LIE :stuck_out_tongue: You may look cool, but you always have to pay the piper. We'd rather look poor and earn lots of money.