Plotting process measurements

I believe you can run a max of 8 parallel plots with a 2 TB SSD.
Have you tried? What is the maximum number of plots per day you have achieved?

1 Like

Not tested yet. I have no free space for plots now :sleepy: I'm waiting for the pool protocol release, then I'll do a full replot and find my max PPD

1 Like

Hi All

I have a Ryzen 5 3600, 16 GB RAM, and a Corsair 2 TB MP600.
I am getting 19 plots in 24 hours. Is that good?

It’s not bad. But I believe you can do more! :wink:

I reckon that with more RAM you could run maybe one extra thread…

Yeah, about half of my 3900X. So you're doing well!

1 Like

How are you launching them? Plotman? Swar?
How many in parallel?

I’m only getting 24 from my 5800X on dual MP600s.

I use plotman, and I don't use file journaling. That's extra writes; no need for that on a precious NVMe.
How much RAM do you have?

1 Like

How do you mount your temp drives when you don’t use journaling? I am running RAID 0 and XFS on my two MP600s…

Your filesystem is your problem. Everyone believes ext4 is slow.
Correctly configured, it's marginally faster, with fewer writes, because there is no journal logging.

I’m using this:

[lv@fedora ~]$ mount
/dev/nvme1n1 on /temp1 type xfs (rw,noatime,nodiratime,seclabel,swalloc,attr2,discard,largeio,inode64,allocsize=65536k,logbufs=8,logbsize=32k,noquota,x-gvfs-show)
/dev/nvme0n1 on /temp2 type xfs (rw,noatime,nodiratime,seclabel,swalloc,attr2,discard,largeio,inode64,allocsize=65536k,logbufs=8,logbsize=32k,noquota,x-gvfs-show)
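
For reference, persisting mount options like these across reboots would look something like the following in /etc/fstab. This is a sketch based on the output above, not the poster's actual fstab; the runtime-only entries (seclabel, x-gvfs-show) are dropped because they are applied by SELinux and the desktop environment, not by fstab:

```
# /etc/fstab — NVMe temp plot drives, mirroring the mount options shown above
/dev/nvme1n1  /temp1  xfs  rw,noatime,nodiratime,swalloc,attr2,discard,largeio,inode64,allocsize=65536k,logbufs=8,logbsize=32k,noquota  0 0
/dev/nvme0n1  /temp2  xfs  rw,noatime,nodiratime,swalloc,attr2,discard,largeio,inode64,allocsize=65536k,logbufs=8,logbsize=32k,noquota  0 0
```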

[lv@fedora ~]$ xfs_info /temp1/
meta-data=/dev/nvme1n1           isize=256    agcount=4, agsize=62512790 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0, rmapbt=0
         =                       reflink=0    bigtime=0
data     =                       bsize=4096   blocks=250051158, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=122095, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[lv@fedora ~]$ xfs_info /temp2/
meta-data=/dev/nvme0n1           isize=256    agcount=4, agsize=31256726 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0, rmapbt=0
         =                       reflink=0    bigtime=0
data     =                       bsize=4096   blocks=125026902, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=61048, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Metadata CRC error detected at 0x55abce4c1bee, xfs_agf block 0x3a382231/0x200
xfs_info: cannot init perag data (74). Continuing anyway.
meta-data=/dev/nvme0n1           isize=512    agcount=4, agsize=122094662 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=488378646, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=238466, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
medusa@chiaminator:~$ sudo xfs_info /dev/nvme1n1
Metadata CRC error detected at 0x56545b11cbee, xfs_agf block 0x3a382231/0x200
xfs_info: cannot init perag data (74). Continuing anyway.
meta-data=/dev/nvme1n1           isize=512    agcount=4, agsize=122094662 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=488378646, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=238466, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

This is my output. I have some options set to 1 instead of 0 like yours. Do they make a difference? And what about these CRC errors… What do you mean by ‘using next’?

Would you mind explaining the best options for ext4 and Chia plotting?

I formatted my XFS without CRC:
mkfs.xfs -m crc=0 /dev/nvmeXn1
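
As a sketch, the full format-and-verify sequence could look like this. These commands are destructive and need root; /dev/nvmeXn1 is a placeholder for your actual temp drive, and the mount point is illustrative:

```shell
# WARNING: mkfs destroys all data on the target device.
# /dev/nvmeXn1 is a placeholder — substitute your real NVMe temp drive.
mkfs.xfs -f -m crc=0 /dev/nvmeXn1        # format without metadata checksums

mkdir -p /temp1
mount -o noatime,nodiratime,discard /dev/nvmeXn1 /temp1

# Confirm the filesystem was created with checksums disabled:
xfs_info /temp1 | grep crc               # should report crc=0
```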

1 Like

Ext4 is fast, but as soon as multiple writes occur, you get bottlenecks. I reckon journaling has to do with this. Journaling is bad for our use case: it does multiple writes, which further shortens the already short lives of our NVMe drives. I found a lovely patch that fixes the inherent slowness of ext4:
ext4 vs xfs on SSD - Percona Database Performance Blog

This is the game changer. I'm getting about 17 plots a day, and I only have 16 GB RAM on a Ryzen 3600.

Look at this
Linux: How to disable/enable journaling on an ext4 filesystem - FoxuTech
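
The usual way to turn off the journal on an existing ext4 filesystem (what the FoxuTech article above walks through) is tune2fs. The filesystem must be unmounted first, and /dev/nvmeXn1 is again a placeholder:

```shell
# Unmount first — tune2fs cannot remove the journal from a mounted filesystem.
umount /dev/nvmeXn1

# Drop the has_journal feature (the '^' prefix disables a feature).
tune2fs -O ^has_journal /dev/nvmeXn1

# Check the filesystem after the change, then confirm the feature is gone.
e2fsck -f /dev/nvmeXn1
tune2fs -l /dev/nvmeXn1 | grep features   # 'has_journal' should no longer appear

# Alternatively, create a journal-less ext4 from scratch (destroys all data):
# mkfs.ext4 -O ^has_journal /dev/nvmeXn1
```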

There are several other optimizations, and I am busy testing them

Try my suggestions and let me know how they perform

It's dated 15 Mar 2012, so it seems to be very old :wink:
Have you really tested and compared this?

Indeed. I'm running it at the moment and it is working well. I'm also looking at preserving the NVMe: no journaling means fewer writes. You can't switch off journaling on XFS.

Latest update: 20 plots in 24 hours on that same rig