Ubuntu madmax plotting problem - takes far longer than before

Back in August I installed Ubuntu on a spare drive, set up madmax, and successfully completed a plot in just under 28 minutes.

I've not used Linux since, but I've decided to start replotting to NFT plots and thought it would be quicker to use Linux. Well, it should be, but it isn't.

So I swapped back to my Linux disk, had an issue mounting the Intel RAID 0 array, but sorted that out. Then I set it plotting; the trouble is it now takes a very long time, and I've no idea why, or how to work out why.

Plot in August

Plot Name: plot-k32-2021-08-07-23-50-0cf995468c15e4502075760c8c771cf4e9786047933bd222c9e9d1896fcd7000
[P1] Table 1 took 10.3658 sec
[P1] Table 2 took 107.454 sec, found 4294972186 matches
[P1] Table 3 took 116.241 sec, found 4294939769 matches
[P1] Table 4 took 149.154 sec, found 4294857166 matches
[P1] Table 5 took 144.882 sec, found 4294837600 matches
[P1] Table 6 took 139.92 sec, found 4294693068 matches
[P1] Table 7 took 107.253 sec, found 4294323997 matches
Phase 1 took 775.284 sec
[P2] max_table_size = 4294972186
[P2] Table 7 scan took 10.1827 sec
[P2] Table 7 rewrite took 29.5417 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 24.6915 sec
[P2] Table 6 rewrite took 42.7078 sec, dropped 581312049 entries (13.5356 %)
[P2] Table 5 scan took 28.8993 sec
[P2] Table 5 rewrite took 53.7792 sec, dropped 761997452 entries (17.7422 %)
[P2] Table 4 scan took 31.5449 sec
[P2] Table 4 rewrite took 40.0991 sec, dropped 828829447 entries (19.2982 %)
[P2] Table 3 scan took 31.3667 sec
[P2] Table 3 rewrite took 39.7457 sec, dropped 855087655 entries (19.9092 %)
[P2] Table 2 scan took 33.8771 sec
[P2] Table 2 rewrite took 39.254 sec, dropped 865582831 entries (20.1534 %)
Phase 2 took 421.838 sec
Wrote plot header with 268 bytes
[P3-1] Table 2 took 23.9975 sec, wrote 3429389355 right entries
[P3-2] Table 2 took 26.8031 sec, wrote 3429389355 left entries, 3429389355 final
[P3-1] Table 3 took 41.4292 sec, wrote 3439852114 right entries
[P3-2] Table 3 took 27.4749 sec, wrote 3439852114 left entries, 3439852114 final
[P3-1] Table 4 took 43.1801 sec, wrote 3466027719 right entries
[P3-2] Table 4 took 27.0932 sec, wrote 3466027719 left entries, 3466027719 final
[P3-1] Table 5 took 43.9725 sec, wrote 3532840148 right entries
[P3-2] Table 5 took 28.2409 sec, wrote 3532840148 left entries, 3532840148 final
[P3-1] Table 6 took 45.5923 sec, wrote 3713381019 right entries
[P3-2] Table 6 took 29.0528 sec, wrote 3713381019 left entries, 3713381019 final
[P3-1] Table 7 took 29.2331 sec, wrote 4294323997 right entries
[P3-2] Table 7 took 34.4663 sec, wrote 4294323997 left entries, 4294323997 final
Phase 3 took 405.008 sec, wrote 21875814352 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 70.1642 sec, final plot size is 108827243588 bytes
Total plot creation time was 1672.37 sec (27.8728 min)

Plot from today. The first one I tried yesterday took over half an hour just for phase one. P1 Table 1 takes about the same time; after that it just slows to a crawl.

chiamining@chiamining-Precision-Tower-5810:~/chia-plotter/build$ ./chia_plot -k 32 -x 8444 -n 1 -r 18 -K 2 -u 256 -v 128 -G False -t /media/chiamining/IntelRaid0/ -2 /mnt/ramdisk/ -d /media/chiamining/ChiaPlotsDest/ -w -c -f
Multi-threaded pipelined Chia k32 plotter - 974d6e5
(Sponsored by Flexpool.io - Check them out if you’re looking for a secure and scalable Chia pool)

Final Directory: /media/chiamining/ChiaPlotsDest/
Number of Plots: 1
Crafting plot 1 out of 1
Process ID: 4254
Number of Threads: 18
Number of Buckets P1: 2^8 (256)
Number of Buckets P3+P4: 2^7 (128)
Pool Puzzle Hash: d5ea399b658453fbc84bc52076a879ad3fa39524aa00539d9511ebbc2322acd6
Farmer Public Key: 88f4ae9716dedee91fa73cc13cc1073fb332380d7424c7308a43f84be2c88193bcf450be3501406dd55d3b5df24f7919
Working Directory: /media/chiamining/IntelRaid0/
Working Directory 2: /mnt/ramdisk/
Plot Name: plot-k32-2021-11-14-15-52-38712183cdd4e560a06b9d104ea6e032047d2b498a1b0d55431e9ecaf6bc0cf4
[P1] Table 1 took 10.333 sec
[P1] Table 2 took 230.803 sec, found 4295055912 matches
[P1] Table 3 took 316.503 sec, found 4295173653 matches
[P1] Table 4 took 344.627 sec, found 4295161896 matches

Being a Linux noob, I've got no idea how to work out why it's running so slowly now. I'm using a 110 GB RAM disk and a RAID 0 array. Switching back to my Windows install, a plot completes in 33 minutes.

The CPU is an Intel Xeon E5-2699 v3 (18 cores, 2.30 GHz) with 128 GB of RAM.

I just let it do one plot on Ubuntu, and it took 95 minutes!

Final Directory: /media/chiamining/ChiaPlotsDest/
Number of Plots: 1
Crafting plot 1 out of 1
Process ID: 34893
Number of Threads: 18
Number of Buckets P1: 2^8 (256)
Number of Buckets P3+P4: 2^7 (128)
Pool Puzzle Hash:
Farmer Public Key:
Working Directory: /media/chiamining/IntelRaid0/
Working Directory 2: /mnt/ramdisk/
Plot Name: plot-k32-2021-11-14-16-44-9d9c49f81f81ac5e7cc2c3021cf3a9f1f27f18f794aae893adf893cfcbd06450
[P1] Table 1 took 10.5978 sec
[P1] Table 2 took 234.109 sec, found 4295015071 matches
[P1] Table 3 took 316.268 sec, found 4295062340 matches
[P1] Table 4 took 320.625 sec, found 4295049271 matches
[P1] Table 5 took 319.456 sec, found 4295078545 matches
[P1] Table 6 took 327.654 sec, found 4295130700 matches
[P1] Table 7 took 316.291 sec, found 4295108539 matches
Phase 1 took 1845.19 sec
[P2] max_table_size = 4295130700
[P2] Table 7 scan took 10.9992 sec
[P2] Table 7 rewrite took 29.946 sec, dropped 0 entries (0 %)
[P2] Table 6 scan took 21.0133 sec
[P2] Table 6 rewrite took 260.025 sec, dropped 581394180 entries (13.5361 %)
[P2] Table 5 scan took 36.2158 sec
[P2] Table 5 rewrite took 247.61 sec, dropped 762033244 entries (17.742 %)
[P2] Table 4 scan took 39.6356 sec
[P2] Table 4 rewrite took 242.13 sec, dropped 828928810 entries (19.2996 %)
[P2] Table 3 scan took 42.9501 sec
[P2] Table 3 rewrite took 243.418 sec, dropped 855166653 entries (19.9105 %)
[P2] Table 2 scan took 41.4467 sec
[P2] Table 2 rewrite took 31.125 sec, dropped 865639881 entries (20.1545 %)
Phase 2 took 1260.85 sec
Wrote plot header with 252 bytes
[P3-1] Table 2 took 24.3879 sec, wrote 3429375190 right entries
[P3-2] Table 2 took 310.219 sec, wrote 3429375190 left entries, 3429375190 final
[P3-1] Table 3 took 42.295 sec, wrote 3439895687 right entries
[P3-2] Table 3 took 296.316 sec, wrote 3439895687 left entries, 3439895687 final
[P3-1] Table 4 took 42.9569 sec, wrote 3466120461 right entries
[P3-2] Table 4 took 294.934 sec, wrote 3466120461 left entries, 3466120461 final
[P3-1] Table 5 took 44.3302 sec, wrote 3533045301 right entries
[P3-2] Table 5 took 303.709 sec, wrote 3533045301 left entries, 3533045301 final
[P3-1] Table 6 took 46.1873 sec, wrote 3713736520 right entries
[P3-2] Table 6 took 304.48 sec, wrote 3713736520 left entries, 3713736520 final
[P3-1] Table 7 took 32.4928 sec, wrote 4295108539 right entries
[P3-2] Table 7 took 348.416 sec, wrote 4294967296 left entries, 4294967296 final
Phase 3 took 2095.03 sec, wrote 21877140455 entries to final plot
[P4] Starting to write C1 and C3 tables
[P4] Finished writing C1 and C3 tables
[P4] Writing C2 table
[P4] Finished writing C2 table
Phase 4 took 512.799 sec, final plot size is 108835462691 bytes
Total plot creation time was 5713.98 sec (95.2331 min)

Anyone have any ideas as to why it's taking three times longer?

So you were getting 28-minute plot times in Windows, but are now getting 90-minute plots in Ubuntu.
My suggestion would be to make sure the plotting directory is where the plotting disk is actually mounted.
Check where your drive is actually mounted using "df -h".
If it's not mounted at all, find out what device name it has using "fdisk -l", or do "ls -lha /dev/disk/by-label/ | grep nameOfDrive" and then mount it with "mount /dev/disk/by-label/nameOfDrive /mnt/plotterDirectory".
Once mounted, the Ubuntu user needs to be able to write to both the plotting directories and the destination. You do so as root by typing "chown user:user /mnt/plotterDrive", and likewise for the destination drive.
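
For example, a minimal sequence, assuming the RAID volume's label is IntelRaid0 and you want it at /mnt/plotterDirectory (both names are placeholders; adjust to your setup):

sudo mkdir -p /mnt/plotterDirectory
sudo mount /dev/disk/by-label/IntelRaid0 /mnt/plotterDirectory
sudo chown chiamining:chiamining /mnt/plotterDirectory
df -h /mnt/plotterDirectory   # should show the RAID device, not your OS drive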

Either it’s this, or you didn’t actually create the ramdisk (or both). I create mine manually after I’ve booted up with the following command:
sudo mount -t tmpfs -o size=110G tmpfs /mnt/ram/
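
To confirm it actually mounted, you can point df at the mountpoint; if it reports a tmpfs of the expected size rather than your root filesystem, you're good:

df -h /mnt/ram   # should show a ~110G tmpfs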

The plotter does not check which drives are actually mounted at the paths you give it; it simply uses them as directories. If your disks and RAM disk are not mounted there, it will be plotting on your OS drive, since that is where those directories live when nothing is mounted on them.
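
A quick way to verify that a path really is a mountpoint, and not just an ordinary directory on the OS drive, is findmnt; it prints the backing device and filesystem if something is mounted there, and nothing at all if not:

findmnt /mnt/ramdisk
findmnt /media/chiamining/IntelRaid0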

Perhaps while fixing the RAID 0 config you undid the mounting of the ramdisk in /etc/fstab.
/tmp/ramdisk is just a mountpoint, so if it isn't actually mounted during boot, madmax would use whatever disk /tmp/ramdisk sits on for the -2 temp directory.

/etc/fstab should have a line something like:

myramdisk /tmp/ramdisk tmpfs defaults,size=110G,x-gvfs-show 0 0
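
After adding or fixing that line you don't need to reboot; you can apply fstab immediately and check the result:

sudo mount -a             # mounts everything in fstab that isn't mounted yet
df -h /tmp/ramdisk        # should now show the tmpfs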

Oops, cross-posted with dpak90's reply from a minute earlier ;-)


@dpak90

28 minutes was in Ubuntu when I tried it back in August; now it's 95 minutes.

I'll try some of what you've suggested tomorrow night. I'm going to leave it plotting overnight and through tomorrow in Windows, as it's late here now.

You say to use "df -h"; do I just enter that from the terminal? It certainly appears to be writing to both the RAID drive and the RAM disk; they both have files written to them, which I've had to delete when I've cancelled plotting. And on the RAID drive I can see the Windows recycle bin folder and, IIRC, a system information file, so that seems to be mounted correctly. Can't be sure about the RAM disk without being in Ubuntu.

@xkredr59

I do have those statements, or very similar, in fstab - I checked yesterday.

Thank you both, I’ll do further investigation tomorrow night.

When you see the Windows recycle bin on the RAID 0 drive, I guess the drive is still formatted as NTFS?
Linux/Ubuntu can read and write Windows NTFS, but it is quite a bit slower than native Linux filesystems such as ext4 or XFS.
I formatted my 4x RAID 0 SSDs with XFS; maybe you did too in your earlier attempts with madmax on Linux? Reformatting the disks to NTFS for Windows in the meantime could then be the difference.
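
If so, a minimal sketch for moving the RAID partition back to a native filesystem, assuming the device and mountpoint names shown in the df -h output below, and that you're happy to wipe the partition's contents:

sudo umount /media/chiamining/IntelRaid0                  # must be unmounted before formatting
sudo mkfs.ext4 /dev/mapper/isw_cegajfdide_IntelRaid0p1    # or: sudo mkfs.xfs -f on the same device
sudo mount /dev/mapper/isw_cegajfdide_IntelRaid0p1 /media/chiamining/IntelRaid0
sudo chown chiamining:chiamining /media/chiamining/IntelRaid0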

Entering df -h from the terminal will show you which disks are mounted and where, including the ramdisk.

Hope this makes sense to someone.

chiamining@chiamining-Precision-Tower-5810:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 2.9M 13G 1% /run
/dev/sda5 219G 11G 197G 6% /
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
tmpfs 112G 0 112G 0% /mnt/ramdisk
/dev/loop1 56M 56M 0 100% /snap/core18/2246
/dev/loop0 128K 128K 0 100% /snap/bare/5
/dev/loop2 56M 56M 0 100% /snap/core18/2128
/dev/loop3 261M 261M 0 100% /snap/kde-frameworks-5-core18/32
/dev/loop4 57M 57M 0 100% /snap/kdiskmark/59
/dev/loop6 219M 219M 0 100% /snap/gnome-3-34-1804/66
/dev/loop5 66M 66M 0 100% /snap/gtk-common-themes/1519
/dev/loop7 52M 52M 0 100% /snap/snap-store/518
/dev/loop8 219M 219M 0 100% /snap/gnome-3-34-1804/72
/dev/sda1 511M 4.0K 511M 1% /boot/efi
/dev/loop9 51M 51M 0 100% /snap/snap-store/547
/dev/loop10 66M 66M 0 100% /snap/gtk-common-themes/1515
/dev/loop11 43M 43M 0 100% /snap/snapd/13831
/dev/loop12 33M 33M 0 100% /snap/snapd/13640
tmpfs 13G 16K 13G 1% /run/user/125
tmpfs 13G 32K 13G 1% /run/user/1000
/dev/sde1 5.5T 2.6T 2.9T 48% /media/chiamining/ChiaPlotsDest
/dev/mapper/isw_cegajfdide_IntelRaid0p1 531G 113M 531G 1% /media/chiamining/IntelRaid0

fstab

> # /etc/fstab: static file system information.
> #
> # Use ‘blkid’ to print the universally unique identifier for a
> # device; this may be used with UUID= as a more robust way to name devices
> # that works even if disks are added and removed. See fstab(5).
> #
> #
> # / was on /dev/sda5 during installation
> UUID=d63f5b5e-2b43-469d-bee9-6f859e414137 / ext4 errors=remount-ro 0 1
> # /boot/efi was on /dev/sda1 during installation
> UUID=BE10-1088 /boot/efi vfat umask=0077 0 1
> /swapfile none swap sw 0 0
> tmpfs /mnt/ramdisk tmpfs rw,size=112G 0 0

chiamining@chiamining-Precision-Tower-5810:~$ sudo fdisk -l
Disk /dev/loop0: 4 KiB, 4096 bytes, 8 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 55.51 MiB, 58191872 bytes, 113656 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop2: 55.45 MiB, 58130432 bytes, 113536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop3: 260.73 MiB, 273375232 bytes, 533936 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop4: 56.42 MiB, 59146240 bytes, 115520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop5: 65.22 MiB, 68378624 bytes, 133552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop6: 218.102 MiB, 229629952 bytes, 448496 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop7: 51.4 MiB, 53522432 bytes, 104536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 186.32 GiB, 200049647616 bytes, 390721968 sectors
Disk model: INTEL SSDSC2BA20
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x1d9107f0

Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1113583615 1113581568 531G 7 HPFS/NTFS/exFAT

Disk /dev/sdc: 186.32 GiB, 200049647616 bytes, 390721968 sectors
Disk model: INTEL SSDSC2BA20
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sda: 223.58 GiB, 240057409536 bytes, 468862128 sectors
Disk model: INTEL SSDSC2BW24
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa452783c

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 1050623 1048576 512M b W95 FAT32
/dev/sda2 1052670 468860927 467808258 223.1G 5 Extended
/dev/sda5 1052672 468860927 467808256 223.1G 83 Linux

Disk /dev/sdd: 186.32 GiB, 200049647616 bytes, 390721968 sectors
Disk model: INTEL SSDSC2BA20
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/isw_cegajfdide_IntelRaid0: 531 GiB, 570157301760 bytes, 1113588480 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 131072 bytes / 393216 bytes
Disklabel type: dos
Disk identifier: 0x1d9107f0

Device Boot Start End Sectors Size Id Type
/dev/mapper/isw_cegajfdide_IntelRaid0p1 2048 1113583615 1113581568 531G 7 HPFS/NTFS/exFAT

Disk /dev/sde: 5.47 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: EFRX-68L0BN1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 89B747C5-5C4D-4C0F-A0A3-DBF05EC37726

Device Start End Sectors Size Type
/dev/sde1 2048 11721043967 11721041920 5.5T Microsoft basic data

Disk /dev/loop8: 219 MiB, 229638144 bytes, 448512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop9: 50.98 MiB, 53432320 bytes, 104360 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop10: 65.1 MiB, 68259840 bytes, 133320 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop11: 42.19 MiB, 44232704 bytes, 86392 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop12: 32.45 MiB, 34017280 bytes, 66440 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
chiamining@chiamining-Precision-Tower-5810:~$

Tried to format the RAID array to ext4, but it failed, and now I can't write to the drive at all. I really wish it was as simple and easy as Windows; Linux is always so cryptic.

On the right of the screenshot it refers to the device as /dev/dm-0, but the error message has /dev/dm-1.

Also tried with GParted, but I get errors with that as well.

And then I can't even mount it.

Edit: after a reboot I've managed to format it to ext4, and chia_plot is now running.

Things are looking better now: System Monitor shows the CPU threads constantly at 90-100% (yesterday they were not), and P1 Table 2 took 99 seconds.