I have an issue where plotman shows a duplicate shadow job for every job, with the same plot id. This forces me to double the job max to get the right number of plots running, and it also prevents me from suspending or resuming jobs, because those commands error out on duplicate plot ids.
I don't know whether this is related to the other problem I'm having on this system: it should handle 8 staggered plots just fine, but my total plot times are pretty bad.
Here is my system:
Linux Mint
i7-10700 (8c/16t)
32 GB RAM
2x 2TB Inland NVMe drives (tmp)
2x destination HDDs
Here is what I see in plotman:
This is fixed for me on the development branch now:
`pip install git+https://github.com/ericaltendorf/plotman@development`
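To confirm the reinstall actually picked up a new build, a generic check of the installed distribution can help (this is plain stdlib tooling, not a plotman command; the package name passed in is up to you):

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(package: str):
    """Return the installed version string for a package, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


# After the reinstall above, e.g. installed_version("plotman") should report
# the freshly installed version; an uninstalled name returns None.
print(installed_version("no-such-package-xyz"))
```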
opened 06:24PM - 13 May 21 UTC
**Describe the bug**
Plotman is tracking each plot twice.
**To Reproduce**
1. Install current plotman `pip install --force-reinstall git+https://github.com/ericaltendorf/plotman@development`.
2. Run `plotman interactive`
**Expected behavior**
Each listed job has a unique plot id.
**System setup:**
- OS: Ubuntu 20.04 LTS
**Config**
<details> <summary>full configuration</summary>
```yaml
# Options for display and rendering
user_interface:
  # Call out to the `stty` program to determine terminal size, instead of
  # relying on what is reported by the curses library. In some cases,
  # the curses library fails to update on SIGWINCH signals. If the
  # `plotman interactive` curses interface does not properly adjust when
  # you resize the terminal window, you can try setting this to True.
  use_stty_size: True

# Where to plot and log.
directories:
  # One directory in which to store all plot job logs (the STDOUT/
  # STDERR of all plot jobs). In order to monitor progress, plotman
  # reads these logs on a regular basis, so using a fast drive is
  # recommended.
  log: /var/log/chia

  # One or more directories to use as tmp dirs for plotting. The
  # scheduler will use all of them and distribute jobs among them.
  # It assumes that IO is independent for each one (i.e., that each
  # one is on a different physical device).
  #
  # If multiple directories share a common prefix, reports will
  # abbreviate and show just the uniquely identifying suffix.
  tmp:
    - /chia/plotter0
    - /chia/plotter1
    - /chia/plotter2

  # Optional: Allows overriding some characteristics of certain tmp
  # directories. This contains a map of tmp directory names to
  # attributes. If a tmp directory and attribute is not listed here,
  # it uses the default attribute setting from the main configuration.
  #
  # Currently supported override parameters:
  #   - tmpdir_max_jobs
  tmp_overrides:
    # In this example, /chia/plotter2 is larger than the other tmp
    # dirs and it can hold more plots than the default.
    "/chia/plotter2":
      tmpdir_max_jobs: 6

  # Optional: tmp2 directory. If specified, will be passed to
  # chia plots create as -2. Only one tmp2 directory is supported.
  # tmp2: /mnt/tmp/a

  # One or more directories; the scheduler will use all of them.
  # These again are presumed to be on independent physical devices,
  # so writes (plot jobs) and reads (archivals) can be scheduled
  # to minimize IO contention.
  dst:
    - /chia/plot-tmp

  # Archival configuration. Optional; if you do not wish to run the
  # archiving operation, comment this section out.
  #
  # Currently archival depends on an rsync daemon running on the remote
  # host, and that the module is configured to match the local path.
  # See code for details.
  archive:
    rsyncd_module: chia
    rsyncd_path: /chia
    rsyncd_bwlimit: 1000000  # Bandwidth limit in KB/s
    rsyncd_host: 10.0.0.40
    rsyncd_user: ubuntu
    # Optional index. If omitted or set to 0, plotman will archive
    # to the first archive dir with free space. If specified,
    # plotman will skip forward up to 'index' drives (if they exist).
    # This can be useful to reduce io contention on a drive on the
    # archive host if you have multiple plotters (simultaneous io
    # can still happen at the time a drive fills up.) E.g., if you
    # have four plotters, you could set this to 0, 1, 2, and 3, on
    # the 4 machines, or 0, 1, 0, 1.
    # index: 0

# Plotting scheduling parameters
scheduling:
  # Run a job on a particular temp dir only if the number of existing jobs
  # before [tmpdir_stagger_phase_major : tmpdir_stagger_phase_minor]
  # is less than tmpdir_stagger_phase_limit.
  # Phase major corresponds to the plot phase, phase minor corresponds to
  # the table or table pair in sequence, phase limit corresponds to
  # the number of plots allowed before [phase major, phase minor].
  tmpdir_stagger_phase_major: 2
  tmpdir_stagger_phase_minor: 1
  # Optional: default is 1
  tmpdir_stagger_phase_limit: 8

  # Don't run more than this many jobs at a time on a single temp dir.
  tmpdir_max_jobs: 4

  # Don't run more than this many jobs at a time in total.
  global_max_jobs: 24

  # Don't run any jobs (across all temp dirs) more often than this, in minutes.
  global_stagger_m: 30

  # How often the daemon wakes to consider starting a new plot job, in seconds.
  polling_time_s: 20

# Plotting parameters. These are pass-through parameters to chia plots create.
# See documentation at
# https://github.com/Chia-Network/chia-blockchain/wiki/CLI-Commands-Reference#create
plotting:
  k: 32
  e: False          # Use -e plotting option
  n_threads: 4      # Threads per job
  n_buckets: 128    # Number of buckets to split data into
  job_buffer: 3390  # Per job memory
  # If specified, pass through to the -f and -p options. See CLI reference.
  # farmer_pk: ...
  # pool_pk: ...
```
</details>
**Additional context & screenshots**
Note that 14 jobs are listed but only 7 actual plots are running.
![image](https://user-images.githubusercontent.com/43181229/118169332-aefea380-b3dd-11eb-9a7e-7a61602a80e0.png)
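For anyone triaging: the symptom is one plotting process tracked under two job entries with the same plot id. A minimal sketch of the kind of deduplication the fix needs (hypothetical illustration only, not plotman's actual code; field names are made up):

```python
def dedupe_jobs(jobs):
    """Collapse duplicate job entries that share a plot id, keeping the first."""
    seen = set()
    unique = []
    for job in jobs:
        if job["plot_id"] not in seen:  # "plot_id" key is a hypothetical name
            seen.add(job["plot_id"])
            unique.append(job)
    return unique


jobs = [
    {"pid": 111, "plot_id": "9b0a8f"},
    {"pid": 111, "plot_id": "9b0a8f"},  # shadow duplicate of the same process
    {"pid": 222, "plot_id": "c41d2e"},
]
print(len(dedupe_jobs(jobs)))  # 2 unique plots despite 3 tracked jobs
```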