My Xeon Dell 7810 server arrived

Hi everyone,

A few weeks ago I posted a few questions about building a Xeon plotting server, and I received my 2012 Xeon server yesterday. Here's the setup:

[image: server specs]

Plus 32GB of DDR memory. It's a Dell 7810 server.
I also bought two NVMe-to-PCIe adapters and 2x 1TB Corsair MP600 drives, pooled together with Windows Storage Spaces.

I just started plotting 2 hours ago, so I don't have a whole lot to report just yet, but I am hoping to use this thread to share my experiences and hopefully help other people who want to build a plotter out of an old Xeon server.

I am using Swar to plot and this is my config:
  - name: mp600
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: F:
    temporary2_directory:
    destination_directory: E:\plots
    size: 32
    bitfield: true
    threads: 18
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 6
    max_concurrent_with_start_early: 6
    stagger_minutes: 85
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 30
    temporary2_destination_sync: false

I am hoping to fine-tune these parameters soon and share the results with you guys.


Waiting for your results…

The bloody machine crashed, and I'm not quite sure why. I'm thinking it got too hot, so I opened up the case to get some more airflow, as I didn't buy any extra fans to cool the thing properly.
Let's see how it goes until tomorrow.

If you mean Windows Storage Spaces, I really wouldn't bother. As far as I could tell there is a way to create RAID 0 from PowerShell, but just creating a "simple" space in the GUI is JBOD. Run a disk benchmark and you will see no performance increase from "simple".
If you really want to try software RAID 0, ignore Storage Spaces and just create a striped volume from Disk Management instead.
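If you don't have a benchmark tool handy, even a rough sequential-write check will show it. A minimal Python sketch along these lines would do; the drive letters are placeholders, point one at the pooled volume and one at a bare drive:

    import os
    import time

    def seq_write_mib_s(path, total_gib=4, chunk_mib=64):
        # Write total_gib of data to path in chunk_mib chunks, return rough MiB/s.
        chunk = os.urandom(chunk_mib * 1024 * 1024)
        writes = (total_gib * 1024) // chunk_mib
        start = time.time()
        with open(path, "wb") as f:
            for _ in range(writes):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())   # make sure the data actually hit the disk
        elapsed = time.time() - start
        os.remove(path)
        return (total_gib * 1024) / elapsed

    # Placeholder drive letters: compare the pooled volume vs. a single drive.
    for target in (r"F:\bench.tmp", r"D:\bench.tmp"):
        print(target, round(seq_write_mib_s(target)), "MiB/s")

It only measures sequential writes, so treat it as a sanity check rather than a proper benchmark.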

What's the benefit of creating a striped volume instead of using Storage Spaces?

What are you trying to achieve with storage spaces?

I just wanted to see my two drives as one, and to have the plot move to that one drive when plotting finishes.

Ok, fair enough. It would be interesting to see if you get any noticeable performance difference from this vs. just listing both disks in Swar. No idea if Storage Spaces adds much overhead.

I read somewhere that Storage Spaces adds negligible overhead, and then this post here makes me wonder if Chia is really all that I/O hungry.

I'm not surprised by that post. I can run a single plot on my NVMe in just under 7 hours and a single plot on a really old, slow HDD in around 13; the difference should be way more than that.
Where the NVMe/SSD helps is parallelisation. I tried 2 jobs in parallel on my HDD for kicks and it literally crashed the head, requiring a power cycle.

I think the reason people use NVMe over SATA SSDs is that it's easier to find more durable ones vs. enterprise SSDs.


I think the CPU speed is quite low. From my experience, you need cores that run at high frequencies. I am plotting with both AMD Ryzen and Intel CPUs, and while I managed to OC the AMD CPU, the Intel processors are still faster…

Creating a RAID of 2 x 1TB NVMe drives is the right move to ensure you can run 7 plots at once (change that in your config). I would recommend a stagger of 60 minutes rather than 85, but you need to see what works best for you.
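Rough space math behind the 7-plot figure, in Python; the ~239 GiB temp space per k32 plot is the commonly quoted approximation and the usable capacity is an estimate, so check your own numbers:

    # Approximate figures: ~239 GiB temp space per k32 plot (bitfield on),
    # and a nominal 1 TB drive gives roughly 931 GiB of usable space.
    TEMP_PER_K32_GIB = 239
    USABLE_RAID_GIB = 2 * 931   # 2 x 1 TB NVMe striped together

    plots = 7
    need_gib = plots * TEMP_PER_K32_GIB
    print(f"{plots} plots need ~{need_gib} GiB of {USABLE_RAID_GIB} GiB "
          f"({USABLE_RAID_GIB - need_gib} GiB headroom)")

Kept as two separate 1 TB volumes you only fit 3 temp plots per drive (6 total), which is why pooling or striping them matters here.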

Interesting post. But didn't we already establish that it's the on-average better latency of NVMe vs. SATA (all else equal) that provides the benefit? That would align well with the 'spurts' story.

I am not 100% sure I can create a RAID with my 2x 1TB NVMe drives, because my motherboard is so old that it doesn't support it.
So I created the pool with Windows Storage Spaces. After I run this for 24 hours and have 24 hours of stats to show you guys, I am going to stop the plotting, run some benchmarks on the Storage Spaces volume, then delete that config and benchmark the individual drives to see what I get.

Agree on the clock speed, but this is my server and that’s what I have :slight_smile:

What I can tell you is that it's producing more plots than my 2020 Intel i7, if only because it's a dedicated machine and my i7 can't be, so I am happy with that.

Alright guys, my first 24 hours running this machine are over. Only 8 plots in 24 hours. Definitely not good enough.

What's happening is that many jobs are getting stuck in phase 3, and as you can see it's taking a long time to finish.
Also interesting is that phase 1 takes virtually the same time to complete once all the jobs are loaded in parallel and competing for threads and I/O.

I am not quite sure what to change in my parameters to try to improve this. This is my current Swar config:

  - name: mp600
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: F:\
    temporary2_directory:
    destination_directory: E:\plots
    size: 32
    bitfield: true
    threads: 18
    buckets: 128
    memory_buffer: 4000
    max_concurrent: 6
    max_concurrent_with_start_early: 6
    stagger_minutes: 65
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 30
    temporary2_destination_sync: false

And 7 parallel jobs as the global configuration, but it's never gone past 6 jobs in parallel.
I think I am going to try increasing stagger_minutes to 85 and dropping threads to 15 and see what happens.
I don't have any real reasoning behind that, so if anyone has any tips for me, they would be much appreciated.
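One way to sanity-check the stagger value; the ~11 hour per-plot time below is just a guess for illustration, not something I've measured on this machine:

    # Rough steady-state concurrency: how many plots overlap if you start
    # one every stagger_minutes and each takes plot_hours end to end.
    plot_hours = 11          # assumed total time per plot (a guess)
    max_concurrent = 6       # from the Swar config above

    for stagger_minutes in (65, 85):
        uncapped = plot_hours * 60 / stagger_minutes
        steady = min(max_concurrent, uncapped)
        print(f"stagger {stagger_minutes} min -> ~{uncapped:.1f} overlapping jobs, "
              f"capped at {steady:.0f} by max_concurrent")

If that guess is anywhere near right, both 65 and 85 minutes end up capped at 6 jobs by max_concurrent, so the stagger mainly changes how much the phase 1 runs overlap.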

Once again, I am on a Xeon at 2.5 GHz with 12c/24t, 32 GB of RAM, and 2 TB of Corsair MP600 storage, with each drive connected to a PCIe slot on the machine using an adapter.

I am running similar machines. Mine have 64 GB of RAM each and 2 x E52999 v4 CPUs, and they are slow as shit compared to my other computers with straight M.2 connections. I suspect the adapter might be the limiting factor here. Not sure if there is any way to optimize the setup in the BIOS or anything.

My NVMes are 100% the limiting factor right now. When all jobs are running in parallel, the drive is at 100% most of the time, while I'm barely using half of the memory and the CPU is at ~70%.
I haven't checked the BIOS, to be honest; I don't know if a 10-year-old motherboard lets me create a RAID 0 with two NVMe drives connected to the PCIe slots. I am going to have to check.
That's why I am using Windows Storage Spaces, but I actually want to test the performance of the drive with Storage Spaces and without.

Why are you assigning 18 threads in your config? This number is PER PLOT, not total! It should be 8 max, or 4 if you have the RAM and SSD to handle more parallel plots.
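Rough thread accounting against the config above, assuming the 12c/24t CPU mentioned earlier in the thread and that only phase 1 is multithreaded:

    # Thread demand during phase 1 with the posted config:
    # max_concurrent: 6, max_for_phase_1: 2, other phases use 1 thread each.
    hw_threads = 24
    phase1_jobs = 2
    other_jobs = 6 - phase1_jobs

    for threads_per_plot in (18, 8, 4):
        demand = phase1_jobs * threads_per_plot + other_jobs * 1
        print(f"threads: {threads_per_plot:2d} -> ~{demand} threads wanted "
              f"vs {hw_threads} available")

With 18 threads per plot, the two phase 1 jobs alone already oversubscribe the CPU.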


Good point, but because the number of threads is only used in phase 1 (all other phases use 1 thread), I thought I could get away with a lot of threads to try to get phase 1 completed very fast. In the beginning phase 1 was finishing in 2.5 hours, but then it jumped to almost 5 hours when 6 jobs were running at the same time.

Why should it be max 8?

Based on research (also on this forum) and my own tests, 4 is the sweet spot; above that, returns diminish, with virtually no difference between 8 and 12. Of course you should check it on your machine, but I doubt you'll see better results.

In the beginning phase 1 was finishing in 2.5 hours, but then it jumped to almost 5 hours when 6 jobs were running at the same time.

This is an I/O bottleneck, and TBH it's quite big. In my case solo plot performance is 1:45, which jumps to 2:50 with 11 plots; on the other PC it goes from 2:20 to 3:30.
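Rough arithmetic on those numbers, treating the 1:45 / 2:50 figures as phase 1 times and phase 1 throughput as a crude proxy for overall throughput:

    # Per-plot phase 1 slows down, but total throughput still scales.
    solo_h, parallel_h, parallel_jobs = 1.75, 2.83, 11   # 1:45 and 2:50

    slowdown = parallel_h / solo_h
    gain = (parallel_jobs / parallel_h) / (1 / solo_h)
    print(f"each phase 1 is ~{slowdown:.1f}x slower, "
          f"but overall rate is ~{gain:.1f}x higher")

So the bottleneck hurts per-plot times a lot, but parallelising is still a big net win.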