Codinghorror, I’ve been curious what your improvement was like going from 9 to 12 simultaneous plots on the 5950x and what you’ve discovered since then. I saw in the ‘Ordered new Plotter…’ thread that you are skeptical 50/day is achievable. I’ve been closing in on 50/day so I wanted to share my experience and get some other opinions.
To add some data for comparison here:
- AMD 5950X with PBO enabled (~4.6 GHz all-core while plotting)
- 4x16 GB RAM
- plot parameters: -k 32 -r 4 -u 128 -b 4000
- Running plotman, so a few parameters below are specific to the plotman implementation.
- temp drives: 2x 2 TB NVMe on PCIe Gen 3 x4 lanes, 1x 1 TB NVMe on PCIe Gen 2 x4 lanes (I have an X470 board, so no PCIe Gen 4 for me)
- dest drives: local HDDs
- stagger: 30 minutes
- max plots: 14
- max plots per temp: 6 per 2 TB NVMe, 3 on the 1 TB NVMe
- max plots in phase 1: 5
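For anyone who wants to reproduce this, the list above roughly corresponds to the following plotman.yaml fragment. This is a sketch based on plotman's sample config: key names may differ between plotman versions, the paths are hypothetical placeholders, and the per-tmpdir override syntax is an assumption.

```yaml
# Sketch only -- check against your plotman version's sample config.
directories:
  tmp:
    - /mnt/tmp/nvme0   # 2 TB, PCIe Gen 3 x4 (mobo M.2)
    - /mnt/tmp/nvme2   # 2 TB, PCIe Gen 3 x4 (adapter card)
    - /mnt/tmp/nvme1   # 1 TB, PCIe Gen 2 x4 (mobo M.2)
  tmp_overrides:
    /mnt/tmp/nvme1:
      tmpdir_max_jobs: 3       # smaller/slower drive gets fewer jobs
  dst:
    - /mnt/dst/hdd0            # local HDDs

scheduling:
  global_max_jobs: 14          # "max plots: 14"
  global_stagger_m: 30         # 30-minute stagger
  tmpdir_max_jobs: 6           # per-2 TB-NVMe limit
  tmpdir_stagger_phase_major: 2
  tmpdir_stagger_phase_minor: 1
  tmpdir_stagger_phase_limit: 5   # "max plots in phase 1: 5"

plotting:
  k: 32            # -k 32
  n_threads: 4     # -r 4
  n_buckets: 128   # -u 128
  job_buffer: 4000 # -b 4000
```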
plotman analyze 2021-05-07*
+-------+----+-------------+--------------+--------------+---------------+--------------+--------------+
| Slice | n | %usort | phase 1 | phase 2 | phase 3 | phase 4 | total time |
+=======+====+=============+==============+==============+===============+==============+==============+
| x | 46 | μ=100.0 σ=0 | μ=9.7K σ=797 | μ=5.4K σ=679 | μ=10.0K σ=977 | μ=699.2 σ=84 | μ=25.8K σ=2K |
+-------+----+-------------+--------------+--------------+---------------+--------------+--------------+
plotman analyze 2021-05-08*
+-------+----+-------------+---------------+--------------+---------------+--------------+--------------+
| Slice | n | %usort | phase 1 | phase 2 | phase 3 | phase 4 | total time |
+=======+====+=============+===============+==============+===============+==============+==============+
| x | 47 | μ=100.0 σ=0 | μ=10.0K σ=765 | μ=5.5K σ=725 | μ=10.1K σ=974 | μ=702.4 σ=77 | μ=26.2K σ=2K |
+-------+----+-------------+---------------+--------------+---------------+--------------+--------------+
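As a sanity check on whether 50/day is within reach, the mean total times above pin down the steady-state throughput. A quick sketch (`plots_per_day` is a hypothetical helper, using the μ ≈ 25.8K s total from the 2021-05-07 slice and the 14 parallel jobs from my config):

```python
# Steady-state throughput: with max_jobs plots always in flight, each
# finishing in avg_total_s seconds on average, completions per day are
# max_jobs * (seconds per day / seconds per plot).
def plots_per_day(max_jobs: int, avg_total_s: float) -> float:
    return max_jobs * 86400 / avg_total_s

# 14 parallel plots at a 25.8K s mean total time:
print(round(plots_per_day(14, 25800), 1))  # → 46.9
```

That lines up with the n=46 and n=47 daily counts in the tables; getting to 50 would need the mean total time down to about 24.2K s at the same parallelism.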
Interesting observations:
$ iostat -h
kB_read kB_wrtn Device
79.0T 72.4T nvme0n1 (2TB mobo m.2 NVMe Gen 3 x4)
49.7T 45.4T nvme1n1 (1TB mobo m.2 NVMe Gen 2 x4)
74.8T 68.5T nvme2n1 (2TB no name PCI Gen 3 x4x4 adapter card)
The ratio between my gen 3 and gen 2 temp drives is almost exactly 1.6, matching the raw transfer-rate ratio of PCIe gen 3 vs gen 2 (8 GT/s vs 5 GT/s). I suspect this is where gains can be made in my setup.
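The ratio is easy to check from the iostat totals above (a quick sketch; the numbers are the kB_read/kB_wrtn totals in TB for the two mobo M.2 slots):

```python
# Cumulative read/written totals (TB) from `iostat -h`:
gen3_read, gen3_written = 79.0, 72.4   # nvme0n1, PCIe Gen 3 x4
gen2_read, gen2_written = 49.7, 45.4   # nvme1n1, PCIe Gen 2 x4

read_ratio = gen3_read / gen2_read          # throughput ratio, reads
write_ratio = gen3_written / gen2_written   # throughput ratio, writes
print(round(read_ratio, 2), round(write_ratio, 2))  # → 1.59 1.59
```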
iowait is typically 1-1.5 hours per plot by the end of phase 4. This seems like a lot, though it was 2-2.5 hours before I added the 1 TB NVMe in the gen 2 slot.
CPU and RAM usage hover around 50% as reported by glances; I have never seen either above 65%. Meanwhile glances continually flags iowait as critical, typically around 10%.
I have been trying to migrate my 1 TB NVMe to my dual-slot NVMe PCIe adapter card, but I get I/O errors after a few hours with the second drive installed.
I wonder if an SSD staging drive would help: at 46 plots per day and ~700 seconds per phase 4, I am spending ~9 hours per day copying plots to HDD.
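The ~9 h figure is just the daily plot count times the mean phase 4 time from the tables above (a quick sketch with those numbers):

```python
# Total daily time spent in phase 4 (final write/copy to dst), in hours:
daily_plots = 46          # plots completed per day
phase4_seconds = 700      # mean phase 4 time from `plotman analyze`
hours_copying = daily_plots * phase4_seconds / 3600
print(round(hours_copying, 1))  # → 8.9
```

Since those copies land on spinning disks, an SSD staging tier would move that time off the critical path even if total bytes moved stays the same.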