Old Intel Xeon servers for plotting: how old is too old?

Well, I use Swar.

I run a max of 2 plots in phase 1 with a 90-minute stagger,
and a max of 6 parallel plots in total. I had some system issues yesterday, so the output was just 7.

Today I increased to a max of 3 plots in phase 1, but it doesn’t seem to work well. Perhaps I should try it with 2 threads. It takes a bit of testing and tuning to find what works best.
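In Swar’s config.yaml those limits map onto a job entry roughly like this - a sketch from memory, so treat the exact key spellings as assumptions; the values just mirror what I described above:

```yaml
# illustrative Swar job entry - key names from memory, values as described above
jobs:
  - name: xeon_tmp
    max_concurrent: 6        # at most 6 parallel plots in total for this job
    max_for_phase_1: 2       # no more than 2 plots in phase 1 at once
    stagger_minutes: 90      # 90 minutes between plot starts
    threads: 4               # threads per plot - 2 may be the next thing to try
    temporary_directory: /mnt/plot-tmp
    destination_directory: /mnt/plot-dst
```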


Access to cheap registered DDR3 is the main reason why old Xeons are such a great deal - especially the rack-mounted servers, which are noisier and so not as popular as the Dell Precision-class workstations of the T5600/T5610 era. (The T76xx are good as well, but the extra disk capacity comes at a hefty price.) The T58xx series are good on CPU, but they require DDR4, which somewhat compromises the CPU edge.


@ianj out of interest, how do you get the CPU to max out like that? I am running plotman on an E5-2673 v3 @ 2.4GHz but only getting:

CPU Usage: 17.3%
RAM Usage: 11.03/62.73GiB(18.7%)
Plots Completed Yesterday: 0
Plots Completed Today: 2

and approx 15-hour plots. I’m sure they can be faster than that. I am using 2x 1TB NVMe as tmp and a 14TB as destination.

Is there any secret sauce I need to add to plotman.yaml?
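For context, the knobs in question live in plotman.yaml’s scheduling and plotting sections - a sketch with illustrative values, not a tuned recipe:

```yaml
# plotman.yaml excerpt - illustrative values only
scheduling:
  tmpdir_stagger_phase_major: 2   # next job may start once the previous hits phase 2
  tmpdir_stagger_phase_minor: 1
  tmpdir_max_jobs: 5              # cap per tmp drive
  global_max_jobs: 10             # cap across the whole machine
  global_stagger_m: 30            # minutes between plot starts
  polling_time_s: 20
plotting:
  k: 32
  n_threads: 4                    # phase 1 threads per plot
  n_buckets: 128
  job_buffer: 3389                # MiB of RAM per plot
```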

I really wish more people saw this… I have v1, v2, and v3 Xeons in single and dual-socket configurations… Unless you are overpaying for the CPU (e.g. the recent price jumps in some of the 2699v3s), you should save money by going this route and get lots of performance… not to mention the expandability argument, PCIe-lane-wise…


Well, running with bitfield enabled took 10k seconds less… guess that settles it.

Next challenge: finding a cheap HBA to get a Hitachi DF-F850-DBL 12-bay disk box running for my farm :crossed_fingers:

Any suggestions?

Which SSD? I use 2x HP ioDrive II (2.4TB each), which are PCIe data-centre grade - they are not technically as fast as an NVMe (“only” 2GB/sec) but they can SUSTAIN a higher throughput, and they have a stupidly higher TBW of 34,000 (vs around 3,000 for the best NVMe). The 2x E5-2670 machine has been set to allow 5 plots in phase 1 per SSD, and each SSD is separate, so combined it will allow up to 10 in phase 1 - it does get pretty bogged down - I could perhaps reduce that to 4, but hey - if it works, don’t break it.

I have a T5610 with 2x E5-2650v2 and only 1x HP ioDrive SSD in it (plus a pair of regular SATA SSDs in RAID 0, but they are SO SLOW in comparison) - and 2x E5-2670v2 in my drawer - I could probably switch 1x SSD to the faster machine, upgrade to the E5-2670v2 and get 35-40 a day - but I’d have downtime doing it - also that machine has an HBA and 8 SAS drives in it (part of the farm) and the SSDs take a slot each … it would probably be OK, but I’m not keen on upsetting something that works.

2x 2.4TB SSD means 2 sets of 10 - i.e. up to 20 in parallel - a regular 2TB NVMe would be capped at 8 in parallel.

It “broke” last night/yesterday - I have a small (3TB) target directory/root drive which I network-copy off - well, it filled up and screwed up Linux and Swar - lost half a day’s work - it’s back up now but behind on the last 2 days.

Dell PERC H310 or H200 (same card, different connector orientation) - both can be run as hardware RAID, but they’re even better in IT mode (no RAID, but faster and simpler as a JBOD HBA). The card has 2x SFF-8087 ports, each supporting 4x SAS drives (so 8 out of the box), but each SFF-8087 can also be used to connect to an HP SAS expander to break out to 32 drives - so one HBA in the PC can support 64 drives - powering them is the challenge.
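Once flashed, you can sanity-check the card from Linux with stock tools - a sketch; SAS2008 is how these cards generally report themselves, and sas2flash is LSI’s own utility:

```bash
# an H310/H200 in IT mode shows up as an LSI SAS2008-based HBA
lspci | grep -i sas2008
# list every SAS/SATA disk the kernel can see behind it
lsscsi
# with LSI's sas2flash installed, confirm the firmware is the IT variant
sudo sas2flash -listall
```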

They support 6Gb/s SAS, which with a typical 4TB disk works out to about 160MB/sec in practice (the disk is the limit, not the link) - I don’t know the max capacity disk it will support though - all my drives are 4TB and less.

Don’t go for the H700 - it insists on a RAID-0 setup for each drive as it doesn’t support passthrough, and you can’t hot-swap - that is why it is half the price.

I don’t really have enclosures per se - I have 32 disks piled up with 8x SAS breakout cables (about £75), 8x 4-way SATA power breakouts (about £40-45) and a DESK … lol. I’ll get some plywood to wrap them in over the weekend, and I already have a half dozen fans for a DIY QUIET enclosure - the HP SAS expander is £40-50 and an SFF-8087-to-external adapter (into the expander) about £15. I think some guy made a YT video where he 3D-printed a case but used the same approach for the disks/connectivity - he doesn’t mention power, which is a notable issue - a PC PSU only supplies 15-20A on 5V, which is about 20 disks MAX. I am powering them all off a single 12V server PSU with a few basic DC->DC converters providing 12V/5V to all the disks (3 enclosures, 96 disks - but all small).
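As a rough sanity check on that 5V budget (the per-drive figure is an assumption - check your drive labels, but 3.5" disks typically pull ~0.7-1A on the 5V rail):

```bash
# a 20 A 5 V rail divided by per-drive draw, no safety margin included
echo "20 / 1.0" | bc -l   # 20 drives at 1 A each
echo "20 / 0.7" | bc -l   # ~28 drives if they only pull 0.7 A
```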

All good fun!

Remember I said it was down last evening and much of today - it will run at 33/day once it ramps up.

Hi all. I’ve got a good deal on a Dell T7810 with 2x Xeon E5-2695 v4 2.1GHz, 18c/36t each (36 cores/72 threads total) and 128GB RAM.

If I pair it with 1x 8TB Intel P4510 enterprise SSD, should I be able to do 30+ plots in parallel?
And what total plot time should I expect from this machine, if anyone has one?
Just debating if it’s worth the investment.
Would really appreciate any input.

I don’t know exactly how fast a P4510 is, but if you have the option I would go with 2/3/4 smaller disks.
30 plots on a single drive seems like asking a bit much, even from a good enterprise disk.
You also have to factor in the interface throughput - it’s fast, but not that fast.

I think it uses a Gen 3 PCIe x4 interface - which probably caps it at about 3GB/sec - 2x NVMe can PEAK faster, but it is likely the enterprise drive will be able to sustain 1-2GB/sec whereas many SSDs cannot sustain 200MB/sec - also the random I/O on the enterprise drive will likely be far better. But I would agree that 2 comparable but smaller drives would fare better than 1 larger but similar drive.

I checked - it’s PCIe 3.1 x4, NVMe - 3,000MB/sec peak, but it’s still TLC so likely to cap at about 1-2GB/sec sustained - and it’s VERY VERY expensive.

My HP ioDrive II (old but solid) is 2.4TB and can sustain 10-11 plots in parallel - if you have 3 slots, that will likely sustain a higher throughput than the single 8TB - at a fraction of the price and a similar or higher TBW.

I’m not saying the HP ioDrive is better - but 3 of them almost certainly are - if you have the slots - and the CPU.

Thanks ianj, really appreciate the input.

Funny thing is, a brand-new 8TB Intel P4510 is still cheaper than 2x used HP ioDrive II 2.4TB right now lol. The prices are crazy!!!
I can only find 8TB enterprise SSDs in stock right now; anything smaller is backordered or double the price from scalpers :frowning:

Just looked while I was writing this and found 2x 4TB Intel P4500 SSDs for pretty much the same price.
You sound like you know what you’re talking about; maybe I should buy them instead of the single 8TB.

I heard there are some good Gen4 M.2 NVMe drives out there rated up to 3,600 TBW, cheaper than Intel enterprise, but I’m not sure how accurate those numbers will be after several months of heavy 24/7 usage. Failure and dealing with warranty is a pain, and losing all that time not being able to plot will cost you so much more in the end, imo. So I stay away from them. I have 2x 2TB P4510s in my Ryzen 3900X plotter that I bought before the Chia rush and they are solid, no issues whatsoever - only 3% of TBW used after 900+ k32 plots.

So I was thinking about investing in another plotter and saw this Dell T7810 with 2x Xeon 2695 18c/36t for a decent price. It has an LGA2011-3 motherboard, so I’m pretty sure it has 4 PCIe slots. Would you recommend buying it? Would really appreciate any feedback.

I can’t say really, as it depends on your ongoing commitment - I bought 3x 2.4TB ioDrive II in mid-May, JUST before they exploded in price by 50% - had a few teething problems with drivers in Linux, but once they settled they were great.

I have capped my disk collection and it’s not far away, so I am now planning to ramp down my production - currently about 2,300 plots, but 400 not online (waiting for a SAS expander) - and I will cap at around 3,000-3,500 plots - that is my full commitment. My 2x Sabrent 1TB were FANTASTIC value and great for me, as I will probably be decommissioning them before I destroy them - they can happily retire to a regular PC and a diet of games and software development. I cannot personally tolerate paying $25-$35/TB, and supplies of cheap small disks are drying up as well.

If you intend on knocking out 500-1,000 plots pretty quickly, then it’s probably a no-brainer - if, like me, you are within reach of a peak, then it might be time to hand the torch to the next plotter.

My commitment was limited - and I have just about recovered it - from now on it is “free money” - but I don’t think it’s a big earner now the netspace is exploding.

If you already have a large commitment in disks and need to deploy it as soon as possible, then I understand the need to scale up plotting ASAP.

My Sabrents cost me £75-80 each for 2x 1TB (I have a 3rd, but it has a reprieve) and are now at around 40% - a great deal for something I will use and not resell.

If I had to build plotting machines at scale for as little as possible, I would definitely stick to the old rack-mounted E5-26xx series - and would be loath to purchase Threadripper-class machines and £2K SSDs unless I had a use planned for them after plotting.

I have a couple of T5610s with dual 2690 v2s (10 cores each) + 64GB RAM + dual 9211-8i (IT mode) with 16x 300GB 15K SAS drives (8 drives connected to each HBA).

I’m having a hard time getting more than 8 jobs to run in parallel on these SAS drives. 8 jobs will finish in about 11 hours; adding more parallel jobs (more than 7 or 8) slows the whole plotting down pretty badly.

I’ve tested this setup on an identical second T5610 and I get the same behavior. Is this happening because I’m using dual HBAs and not a SAS expander?

I feel the board is getting maxed out on PCIe throughput somehow.

I’d appreciate any ideas or suggestions on what the problem is.

This is just supposition, not experience, as I don’t have a setup with 2x HBA and 16x SAS disks.

I think your HBA will max out at about 4GB/sec - so plotting to more than 16 disks on it would be a bottleneck - a SAS expander should not, in theory, help beyond 16 disks - I get about 11-12h off 8x 3.5" 7200RPM SAS drives.

64GB RAM is good for 20+ - are you staggering your plots? My 2650v2s are maxed out (and then some) on 20 parallel plots, and that is staggered - the early phases are very CPU-intensive - if you are using a plot manager like Swar, it will show your CPU load.
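If you want to see which it is, watch per-device utilisation while the jobs run (iostat is in the sysstat package; device names will differ on your box):

```bash
# per-disk stats every 5 seconds: %util pegged near 100 on the SAS drives
# points at a disk/HBA bottleneck; idle disks plus busy cores point at CPU
iostat -x 5
# quick overall view of the CPU run queue and I/O wait
vmstat 5
```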

That makes sense… Any recommendation for a better HBA that can handle more throughput?

I upgraded from a PERC 6 to an LSI H200 with IT-mode firmware. Smoking fast.
In my experience with Chia (the past week), the trick is the IT-mode firmware on the HBAs. It presents the disks to my hypervisor as just disks…
So, with the right hypervisor setup: a software RAID 0 on ZFS, lz4 compression, with limits on RAM usage - ’cause ZFS will eat RAM for breakfast.
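A minimal sketch of that sort of pool, assuming three NVMe devices and an 8GiB ARC cap - pool name and device paths are made up, adjust to your hardware:

```bash
# stripe (RAID 0) across three NVMe drives - zero redundancy, tmp data only
sudo zpool create -o ashift=12 plottmp nvme0n1 nvme1n1 nvme2n1
# lz4 is nearly free on CPU and shaves a little off the plotting tmp files
sudo zfs set compression=lz4 plottmp
# cap the ARC so ZFS doesn't eat all the RAM (8 GiB; applies on module reload)
echo "options zfs zfs_arc_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
```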
I use my hypervisor to manage multiple virtual harvester nodes on the same hardware, in Ubuntu containers… CLI only.
The containers are unaware of each other, and each one thinks it has a full 1.5TB NVMe to work with. I virtualize everything to them, so balanced for me is 3 nodes plotting in parallel with plotman. This is to maximize resource utilization.

With only one container I couldn’t get my CPU past 60 percent, even using 8 threads per job - roughly 7-hour single plot times… but with all 3 plotting in parallel to the same NVMe, the containers “share” NVMe space as they all plot. It took a week to get my timings straight with plotman; many, many crashes later I’m cooking 30+ a day.
T710 11th-gen PowerEdge MONSTER, 12 cores / 24 threads:

  • tmp1: 3x PCIe Gen2 M.2 NVMe, 500GB each, each in its own slot (ZFS RAID 0 with the works)
  • tmp2: 8x 4TB (IT-mode HBA), again ZFS RAID 0
  • dst: 70TB NFS in a RAIDZ1 (virtualized from the same machine), with USB 3.2 adapters to many, many externals
  • processors: 2x X5690s (Westmere) :slight_smile:
  • only 92GB DDR3

She might not knock out a plot in under 6 hours, by any means… and with all 3 nodes plotting in parallel, 3 plots each, she takes a little while to get going
({7 hours}), but after that it’s clockwork: 3 plots finished every couple of hours.
Still hardly breaks a sweat, though - but any more pressure and she crashes hard.
RAM issues, I assume.
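The destination pool is along these lines - a sketch with hypothetical pool and device names; RAIDZ1 gives one disk of parity across the set:

```bash
# RAIDZ1 across the USB externals - survives a single drive failure
sudo zpool create farm raidz1 sda sdb sdc sdd sde
# export it over NFS so the plotting containers can write to it
sudo zfs set sharenfs=on farm
```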

Hi guys. I made approx 50 plots/24hrs on 2x 2678v3 (12C/24T each) + 256GB RAM on the very first day. Then I messed up the settings and it slowed down to approx 36-40 plots/24hrs.

The key for a 48-thread system is to use 4x 2TB NVMe drives (3x 2TB minimum) and not to run more than 5 plots on each drive, as the speed will go down once the drive fills with data. I also have pretty decent results on a single-CPU 2678v3 with 4x 1TB set to RAID0 via mdadm; it easily makes 24-26 plots/24hrs. I also recommend using an SSD as an intermediate destination drive. I started to use one as a buffer between tmp and dst: it receives the plot from the tmp drive and sends it to the storage HDD afterwards. nvme → ssd → hdd will release your NVMe for the next plot faster than a direct copy nvme → hdd.
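The mdadm RAID0 part is only a few commands if you want to try it (device names are examples; remember that everything on the array is disposable tmp data):

```bash
# stripe four 1TB NVMe drives into one fast tmp device
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
sudo mkfs.ext4 /dev/md0    # journaling note below
sudo mkdir -p /mnt/plot-tmp && sudo mount /dev/md0 /mnt/plot-tmp
```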

Besides all of the above, you have to consider some other optimisations - unlock full turbo on all cores/threads with the Haswell 26xx v3 CPUs, put fstrim into your crontab (to trim all SSD drives), disable journaling on your filesystem, etc…
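The journaling one in command form - for disposable tmp filesystems only, and tune2fs wants the filesystem unmounted:

```bash
# create the tmp filesystem without a journal in the first place...
sudo mkfs.ext4 -O ^has_journal /dev/md0
# ...or strip the journal from an existing (unmounted!) ext4 volume
sudo tune2fs -O ^has_journal /dev/md0
```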


slavedriver’s answer here is important: forget SAS/SATA - you need NVMe on these old Xeons. I’m running 2690v3s, and until I bought myself some PCIe adapters for M.2, things were pretty pointless. Dual M.2 adapters can be had for $50!

I fstrim all SSDs hourly from my crontab.
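For anyone who wants the same, it’s a one-line crontab entry (the fstrim path may be /usr/sbin on some distros; -a trims every mounted filesystem that supports it):

```bash
# crontab -e as root: trim all SSDs at the top of every hour, verbosely
0 * * * * /sbin/fstrim -av
```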

That only works if your system supports PCIe bifurcation… when we’re talking about “old” Xeons, most people have a v2 or older, and most of those systems don’t support PCIe bifurcation - and the PCIe cards that can do it for you (ones with a PCIe switch onboard) are prohibitively expensive.
