Xeon E5-26xx slow plotting

2.5 GHz, 128 GB DDR4

I have 2 of these builds so I hope it runs OK!
I completed one yesterday, but I am limited on temp SSD space right now; I will be testing over the next week. I hope to get 40 a day :crossed_fingers: :crossed_fingers:

Give me settings pls

I’m running a dual Xeon 2690v1 machine with 128GB RAM. It isn’t the fastest either. If I run individual plots, my laptop with a modern Intel 8-core destroys it. I haven’t really messed with it much. I just started my first batch of parallel plots to see how that turns out.

If you can throw HDDs on it you should be able to do 1 plot per spindle, up to 15 or 16 if you’re not also farming. With the staggers set up the right way, it might be faster to do a plot per 2 drives with temp2 on the dest disks, but even the simple 1-per-spindle approach is a good place to start.
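To make that concrete, a 1-per-spindle job in Swar's manager would look roughly like this, repeated once per temp HDD. This is only a sketch: the job name, mount points, thread count, memory and stagger values are placeholder assumptions, not tested settings.

  • name: hdd-spindle-1                      # hypothetical name; repeat one job per temp HDD
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: /mnt/hdd-temp-1     # placeholder mount point for this spindle
    temporary2_directory:                    # for the "plot per 2 drives" variant, point this at the dest disk
    destination_directory: /mnt/dest-1       # placeholder destination path
    size: 32
    bitfield: true
    threads: 2                               # assumed modest value; the HDD temp is the bottleneck, not CPU
    buckets: 128
    memory_buffer: 3400
    max_concurrent: 1                        # one plot per spindle
    max_concurrent_with_start_early: 1
    initial_delay_minutes: 0
    stagger_minutes: 30                      # assumed stagger; tune to taste
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32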

Your single 2TB Evo drive has 2GB of LPDDR4 cache, and that drive is your bottleneck, not the system itself. It’s not the fastest drive. Getting 40 plots a day out of any system requires enough NVMe/temporary drives to run enough plots in parallel; no system on this planet will get you 40 plots per day with a single 970 Plus 2TB NVMe drive. I mention this in my video, but if I were to build something from scratch around your system I would do (and am doing) the following:

With a 2699v3 and 14x 400GB HGST MLC SAS SSDs (HUSMM8040ASS200, $80-$100 each), connected through a single 16i or 16e LSI HBA:

RAID configuration for the drives:
Pair the 400GB drives in RAID 0: 14 drives = 7 RAID 0 arrays of 800GB each

Swar config:

  • name: 400gx2-1
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: $RAID0-PATH
    temporary2_directory:
    destination_directory: $DEST-PATH
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 3
    max_concurrent_with_start_early: 3
    initial_delay_minutes: 0
    stagger_minutes: 60
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32

Repeat the above configuration for each of the 7 RAID 0 arrays. You should hit ~21 plots in less than 12 hours (probably closer to 8-10 hours): 7 arrays x 3 concurrent plots = 21 plots in flight, and at roughly two batches per day that nets you more than 40 plots per 24 hours on your current system. Otherwise you’ll need 2 more 2TB NVMe drives, for a total of 3x 2TB NVMe drives, but then you’ll need a different config (and those drives don’t have the best endurance…). A Swar config for 3x 2TB NVMe drives looks like this:

  • name: Samsung2TB-Evo1
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: $Temp-Path
    temporary2_directory:
    destination_directory: $Dest-Path
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 7
    max_concurrent_with_start_early: 7
    initial_delay_minutes: 0
    stagger_minutes: 60
    max_for_phase_1: 3
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32

  • name: Samsung2TB-Evo2
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: $Temp-Path
    temporary2_directory:
    destination_directory: $Dest-Path
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 7
    max_concurrent_with_start_early: 7
    initial_delay_minutes: 0
    stagger_minutes: 60
    max_for_phase_1: 3
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32

  • name: Samsung2TB-Evo3
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: $Temp-Path
    temporary2_directory:
    destination_directory: $Dest-Path
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 3390
    max_concurrent: 7
    max_concurrent_with_start_early: 7
    initial_delay_minutes: 0
    stagger_minutes: 60
    max_for_phase_1: 3
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32

Plotman config for the HGST drives:

user_interface:
  use_stty_size: True

directories:
  tmp:
    - /mnt/400g-x2-1
    - /mnt/400g-x2-2
    - /mnt/400g-x2-3
    - /mnt/400g-x2-4
    - /mnt/400g-x2-5
    - /mnt/400g-x2-6
    - /mnt/400g-x2-7
  dst:
    - /mnt/$DESTINATION-PATH

scheduling:
  tmpdir_stagger_phase_major: 2
  tmpdir_stagger_phase_minor: 1
  tmpdir_stagger_phase_limit: 2
  tmpdir_max_jobs: 3
  global_max_jobs: 21
  global_stagger_m: 8
  polling_time_s: 20

plotting:
  k: 32
  e: False          # Use -e plotting option
  n_threads: 4      # Threads per job
  n_buckets: 128    # Number of buckets to split data into
  job_buffer: 4000  # Per job memory

5 Likes

You are probably oversubscribing your temp drives if it’s taking 13+ hours to plot. Also, you are comparing a CPU that costs $700 or more to one that costs $25 on eBay, and a CPU made in 2019 to a CPU made in 2012; the Ryzen CPU alone probably costs more than the entire 2680 system does. That seems like a pretty unfair comparison, as I could probably buy 2 or 3 of the 2680 systems for the cost of a single 3950x build.

Describe both the 3950x and the 2680 builds… what drives are you plotting against on each, and how many do you have in each machine? VERY interested in your reply.

1 Like

Well put. I’m loving your vids - excellent info. I’m also loving the 2699v3 → I used more of my budget early doors on HDs than on expensive CPUs. I’d love a Threadripper, but I’m ticking along nicely with the 2699 and using it in 2 machines - allows some excellent parallel plotting. Thanks Sloth, to you and your dog :laughing: :dog:

1 Like

She has become quite the sensation… She says WOOF THANK YOU

I actually think she is more popular than me at this point – and I’m totally okay with that. She is so adorable… :slight_smile:

1 Like

Thanks for the reply! I found the culprit: I think it was the XPG NVMe that was making the plotting slower than usual. I replaced it with the 2TB Samsung 970 Evo Plus and plots are going along much faster, about 8-9 hours with 7 parallel plots running.

1 Like

I have an i9 10850k plotter, 64GB DDR4, 2x 2TB NVMe drives: 32 plots a day.
A T7810 with 2x E5-2699v3, 96GB DDR4, 2x 2TB NVMe, 4x 300GB SAS and 4 HDDs: 12 plots a day at most.
The price of both configurations is the same; one Xeon alone costs about $500.
I just lost money. It is not possible to do 40 plots a day on the Xeon.
If anyone is wondering whether to buy a Xeon, I strongly advise against it; it is old garbage. Ryzen or modern Intel are on top: you buy, connect, and plot.


(screenshot: i9 system)

So you are saying you have a system with 2x 2699 v3’s and are only able to get 12 plots a day, and that means Xeons suck at plotting? I think you likely have something configured incorrectly. What 2TB NVMe drives are you using in your 10850k? What drives are in the 2699 v3? Are you using cheap adapters in the 2699 v3 build and native motherboard M.2 slots on the 10850k?

I am getting 40 plots a day with just one of those 2699 v3’s

EDIT: The 2699 v3 has gone up in price WAY too much… I literally bought one 3 weeks ago for $230… I do not recommend buying those CPUs right now… you are right – $500 is a rip-off for that CPU.

EDIT 2: All of that having been said – your 2699 v3 system is capable of as much as or more than your 10850k system… you just need to configure it correctly… troubleshoot it – figure out if your NVMe drives are in a PCIe 2.0 slot, or disconnect those SAS drives and try plotting to JUST the NVMe drives… I’ve had weird behaviors with 10k SAS drives while plotting… I’ve seen entire systems slow down from a single bad drive… (yes, this includes NVMe plots as well)

EDIT 3: I have seen a few cheapo NVMe-to-PCIe adapters cause slowness like what you describe… I wonder if it’s the adapters you’re using? (Sounds weird, I know… but it is something I’ve seen in the wild personally)

1 Like

I’m glad to see you.

I use 2x Xeon 2650v2 / 64GB DDR3 RAM / 2TB ADDLINK NVMe, r3500/w3000, PCIe 3.0 (Windows is installed on this drive).

I started my first plot; the remaining time shown in Chia Plot Status was 18 hr.

Now, about 11 hr in, the remaining time still shows 18 hr - the same as at the start - and the plot is only at 22% after 10+ hr of plotting :joy:

What is wrong? How can I make it faster? Help me please.

EDIT 1: I use an X79 server board from China. I checked with CrystalDiskInfo and the NVMe is running at PCIe 3.0 x4.

For me it is also strange: the disk speed tests show good speeds, so the NVMe adapter is probably OK. Maybe the problem is the CPU clock speed, 2.2 GHz. I will buy an ASUS adapter and check it; I don’t know how else to solve it.


Attach a picture of the actual plot manager, with all of your plots… what is your config? I wrote an incredibly lengthy response to you above and provided a detailed config for both plotman AND Swar… :frowning:

Look at your CPU… in the screenshots your Xeon is at 24%… your i9 is maxed out… this really screams “configuration issue” to me… it’s not the system… it’s how it’s set up, I’m 99.999% sure of this.

1 Like

This is a single variable that should contain the location of your chia executable file. This is the blockchain executable.

WINDOWS EXAMPLE: C:\Users\Swar\AppData\Local\chia-blockchain\app-1.1.5\resources\app.asar.unpacked\daemon\chia.exe

LINUX EXAMPLE: /usr/lib/chia-blockchain/resources/app.asar.unpacked/daemon/chia

LINUX2 EXAMPLE: /home/swar/chia-blockchain/venv/bin/chia

MAC OS EXAMPLE: /Applications/Chia.app/Contents/Resources/app.asar.unpacked/daemon/chia

chia_location: C:\Users\ja\AppData\Local\chia-blockchain\app-1.1.6\resources\app.asar.unpacked\daemon\chia.exe

manager:

These are the config settings that will only be used by the plot manager.

check_interval: The number of seconds to wait before checking to see if a new job should start.

log_level: Keep this on ERROR to only record when there are errors. Change this to INFO in order to see more detailed logging. Warning: INFO will write a lot of information.

check_interval: 60
log_level: ERROR

log:

folder_path: This is the folder where your log files for plots will be saved.

folder_path: C:\Plotter

view:

These are the settings that will be used by the view.

check_interval: The number of seconds to wait before updating the view.

datetime_format: The datetime format that you want displayed in the view. See here for formatting: datetime — Basic date and time types — Python 3.9.5 documentation

include_seconds_for_phase: This dictates whether seconds are included in the phase times.

include_drive_info: This dictates whether the drive information will be shown.

include_cpu: This dictates whether the CPU information will be shown.

include_ram: This dictates whether the RAM information will be shown.

include_plot_stats: This dictates whether the plot stats will be shown.

check_interval: 60
datetime_format: “%Y-%m-%d %H:%M:%S”
include_seconds_for_phase: false
include_drive_info: true
include_cpu: true
include_ram: true
include_plot_stats: true

notifications:

These are different settings in order to be notified when the plot manager starts and when a plot has been completed.

DISCORD

notify_discord: false
discord_webhook_url: https://discord.com/api/webhooks/0000000000000000/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

IFTTT (via Webhooks): this function will send the title as value1 and the message as value2.

notify_ifttt: false
ifttt_webhook_url: https://maker.ifttt.com/trigger/{event}/with/key/{api_key}

PLAY AUDIO SOUND

notify_sound: false
song: audio.mp3

PUSHOVER PUSH SERVICE

notify_pushover: false
pushover_user_key: xx
pushover_api_key: xx

TELEGRAM

notify_telegram: false
telegram_token: xxxxx

TWILIO

notify_twilio: false
twilio_account_sid: xxxxx
twilio_auth_token: xxxxx
twilio_from_phone: +1234657890
twilio_to_phone: +1234657890

instrumentation:

This setting is here in case you wanted to enable instrumentation using Prometheus.

prometheus_enabled: false
prometheus_port: 9090

progress:

phase_line_end: These are the settings that will be used to dictate when a phase ends in the progress bar. It is supposed to reflect the line at which the phase will end so the progress calculations can use that information with the existing log file to calculate a progress percent.

phase_weight: These are the weights to assign to each phase in the progress calculations. Typically, Phase 1 and 3 are the longest phases so they will hold more weight than the others.

phase1_line_end: 801
phase2_line_end: 834
phase3_line_end: 2474
phase4_line_end: 2620
phase1_weight: 33.4
phase2_weight: 20.43
phase3_weight: 42.29
phase4_weight: 3.88

global:

These are the settings that will be used globally by the plot manager.

max_concurrent: The maximum number of plots that your system can run. The manager will not kick off more than this number of plots total over time.

max_for_phase_1: The maximum number of plots that your system can run in phase 1.

minimum_minutes_between_jobs: The minimum number of minutes before starting a new plotting job; this prevents multiple jobs from starting at the exact same time. This will alleviate congestion on the destination drive. Set to 0 to disable.

max_concurrent: 20
max_for_phase_1: 16
minimum_minutes_between_jobs: 5

jobs:

These are the settings that will be used by each job. Please note you can have multiple jobs, and each job should be in YAML format in order for it to be interpreted correctly. Almost all the values here will be passed into the Chia executable file.

Check for more details on the Chia CLI here: CLI Commands Reference · Chia-Network/chia-blockchain Wiki · GitHub

name: This is the name that you want to give to the job.

max_plots: This is the maximum number of jobs to make in one run of the manager. Any restart of the manager will reset this variable. It is only here to help with short-term plotting.

[OPTIONAL] farmer_public_key: Your farmer public key. If none is provided, this variable will not be passed to the chia executable, which results in your default keys being used. This is only needed if you have chia set up on a machine that does not have your credentials.

[OPTIONAL] pool_public_key: Your pool public key. Same information as the above.

temporary_directory: Can be a single value or a list of values. This is where the plotting will take place. If you provide a list, it will cycle through each drive one by one.

[OPTIONAL] temporary2_directory: Can be a single value or a list of values. This is an optional parameter to use in case you want to use the temporary2 directory functionality of Chia plotting.

destination_directory: Can be a single value or a list of values. This is the final directory where the plot will be transferred once it is completed. If you provide a list, it will cycle through each drive one by one.

size: This refers to the k size of the plot. You would type in something like 32, 33, 34, 35… in here.

bitfield: This refers to whether you want to use bitfield or not in your plotting. Typically, you want to keep this as true.

threads: This is the number of threads that will be assigned to the plotter. Only phase 1 uses more than 1 thread.

buckets: The number of buckets to use. The default provided by Chia is 128.

memory_buffer: The amount of memory you want to allocate to the process.

max_concurrent: The maximum number of plots to have for this job at any given time.

max_concurrent_with_start_early: The maximum number of plots to have for this job at any given time, including phases that started early.

initial_delay_minutes: This is the initial delay that is used when initiating the first job. It is only ever considered once. If you restart the manager, it will still adhere to this value.

stagger_minutes: The number of minutes to wait before the next plot for this job can get kicked off. You can even set this to zero if you want your plots to get kicked off immediately when the concurrent limits allow for it.

max_for_phase_1: The maximum number of plots on phase 1 for this job.

concurrency_start_early_phase: The phase in which you want to start a plot early. It is recommended to use 4 for this field.

concurrency_start_early_phase_delay: The maximum number of minutes to wait before a new plot gets kicked off when the start-early phase has been detected.

temporary2_destination_sync: This field will always submit the destination directory as the temporary2 directory. These two directories will be kept in sync so that they are always submitted as the same value.

exclude_final_directory: Whether to skip adding destination_directory to the harvester for farming.

skip_full_destinations: When this is enabled, it will calculate the sizes of all running plots and the future plot to determine if there is enough space left on the drive to start a job. If there is not, it will skip the destination and move on to the next one. Once all are full, it will disable the job.

unix_process_priority: UNIX only. This is the priority that plots will be given when they are spawned. UNIX values must be between -20 and 19. The higher the value, the lower the priority of the process.

windows_process_priority: Windows only. This is the priority that plots will be given when they are spawned. Windows values vary and should be set to one of the following:

- 16384 (BELOW_NORMAL_PRIORITY_CLASS)
- 32 (NORMAL_PRIORITY_CLASS)
- 32768 (ABOVE_NORMAL_PRIORITY_CLASS)
- 128 (HIGH_PRIORITY_CLASS)
- 256 (REALTIME_PRIORITY_CLASS)

enable_cpu_affinity: Enable or disable CPU affinity for plot processes. Systems that plot and harvest may see improved harvester or node performance when excluding one or two threads from the plotting process.

cpu_affinity: List of CPUs (or threads) to allocate to plot processes. The default example assumes you have a hyper-threaded 4-core CPU (8 logical cores). This config will restrict plot processes to logical cores 0-5, leaving logical cores 6 and 7 for other processes (6 restricted, 2 free).

  • name: G
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: G:\plot
    destination_directory: H:\plots
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4600
    max_concurrent: 7
    max_concurrent_with_start_early: 8
    stagger_minutes: 60
    max_for_phase_1: 4
    concurrency_start_early_phase: 2
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

  • name: F
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: F:\plot
    destination_directory: H:\plots
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4600
    max_concurrent: 1
    max_concurrent_with_start_early: 1
    stagger_minutes: 1
    max_for_phase_1: 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

  • name: D
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: D:\plot
    destination_directory: H:\plots
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4600
    max_concurrent: 1
    max_concurrent_with_start_early: 1
    stagger_minutes: 1
    max_for_phase_1: 1
    concurrency_start_early_phase: 2
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

  • name: I
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: I:\plot
    destination_directory: H:\plots
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4600
    max_concurrent: 1
    max_concurrent_with_start_early: 1
    stagger_minutes: 3
    max_for_phase_1: 1
    concurrency_start_early_phase: 2
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

  • name: E
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: E:\plot
    destination_directory: H:\plots
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4500
    max_concurrent: 1
    max_concurrent_with_start_early: 1
    stagger_minutes: 5
    max_for_phase_1: 1
    concurrency_start_early_phase: 2
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

  • name: K
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: K:\plot
    destination_directory: H:\plots
    size: 32
    bitfield: true
    threads: 4
    buckets: 128
    memory_buffer: 4500
    max_concurrent: 1
    max_concurrent_with_start_early: 1
    stagger_minutes: 7
    max_for_phase_1: 1
    concurrency_start_early_phase: 2
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false
    exclude_final_directory: false
    skip_full_destinations: false
    unix_process_priority: 10
    windows_process_priority: 32
    enable_cpu_affinity: false
    cpu_affinity: [ 0, 1, 2, 3, 4, 5 ]

What Xeon motherboard do you have? Why is my NVMe SSD only working at about 10-30%?

Why is the CPU not running at around 100%? Maybe the BIOS is set wrong? CPU temperature is around 75 degrees Celsius.

Thank God you are here! Lol
I bought 2 T7810s and loaded them up with dual E5-2699v4 processors and 64 gigabytes of RAM in each. I have two Samsung 970 Pros in those M.2 adapters in each and am getting crazy slow results. I did not RAID them, as I hear conflicting info on that. I am getting 14 plots a day on average per unit.

Here is my Swar config for the two drives in one unit. Any insight you have would be greatly appreciated!

  • name: Y
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: Y:\Chia
    temporary2_directory:
    destination_directory: E:\Plots
    size: 32
    bitfield: true
    threads: 8
    buckets: 128
    memory_buffer: 5000
    max_concurrent: 7
    max_concurrent_with_start_early: 7
    stagger_minutes: 60
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false

  • name: Z
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: Z:\Chia
    temporary2_directory:
    destination_directory: E:\Plots
    size: 32
    bitfield: true
    threads: 8
    buckets: 128
    memory_buffer: 5000
    max_concurrent: 7
    max_concurrent_with_start_early: 7
    stagger_minutes: 60
    max_for_phase_1: 2
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false

PS: Love your channel!

1 Like

Those Pro drives are fast; you should be getting about 16 plots per day per 2TB NVMe drive. You could change max_concurrent to 8 per drive and max_for_phase_1 from 2 to 4 (assuming they are 2TB drives)… What slots are the NVMe drives in? What adapter did you use? Earlier in this thread a guy realized his slowness was his adapter as well… how long do the plots take?
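For reference, that tweak would look roughly like this in your Y job (same idea for Z). This is only a sketch based on the config you posted: only max_concurrent and max_for_phase_1 change, and I'm assuming max_concurrent_with_start_early gets bumped to match, which is my guess rather than something stated above.

  • name: Y
    max_plots: 999
    farmer_public_key:
    pool_public_key:
    temporary_directory: Y:\Chia
    temporary2_directory:
    destination_directory: E:\Plots
    size: 32
    bitfield: true
    threads: 8
    buckets: 128
    memory_buffer: 5000
    max_concurrent: 8                      # was 7 - one more plot in flight per 2TB drive
    max_concurrent_with_start_early: 8     # assumed bump to match max_concurrent
    stagger_minutes: 60
    max_for_phase_1: 4                     # was 2 - allow more plots in phase 1
    concurrency_start_early_phase: 4
    concurrency_start_early_phase_delay: 0
    temporary2_destination_sync: false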