Struggling with iowait

Hello, I just built my plotting machine and I'm struggling:

I have an i7-10700 with 64 GB of RAM and 3x1 TB SSDs running in RAID 0. Everything is running on Ubuntu Server 20.04.

I was planning on 3 plots in phase 1 at a time, each with 4 threads and 6 GB of RAM, for a total of 8 parallel plots.

This morning (the first morning) I woke up to 7 plots from yesterday that had been running for more than 24 hours, stuck in the process. I killed the most delayed ones, suspended the middle ones, and started running only 2 in parallel.

Now I have only one last plot running. I will stop the machine to try to boost performance from the BIOS and update Chia.

Anyway, with only 1 plot running, I am seeing high I/O wait. Do you have any clue how I can improve this? Thanks.
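In case it is useful to anyone, iowait can be sampled straight from /proc/stat with no extra tools; a minimal sketch:

```shell
# Sample the CPU counters from /proc/stat one second apart and report
# the share of time spent in iowait over that interval.
# Fields on the "cpu" line: user nice system idle iowait ...
read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 _ < /proc/stat
total=$(( (u2 + n2 + s2 + i2 + w2) - (u1 + n1 + s1 + i1 + w1) ))
wait=$(( w2 - w1 ))
echo "iowait over the last second: $(( 100 * wait / total ))%"
```

`iostat -x 5` (from the sysstat package) reports the same thing per device, and its %util column is more useful for spotting which drive is actually saturated.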

You don’t mention the most important spec: what drives are you running? If they are just SATA SSDs, there is your problem.


I have 3x1 TB SATA SSDs in RAID 0.

The I/O of a normal SATA SSD is the bottleneck. There is no way you will run 8 in parallel on SATA drives; even running 3 in parallel will only get you a couple of plots a day. You need something faster. Look into NVMe drives.


On a 4-thread CPU with 8 GB of RAM and a 1 TB SSD, I was running 2 plots in parallel in 11 hours.
With 16 threads, 64 GB of RAM, and 3 TB, I was expecting to at least double that (6 plots in the same 11 hours), and instead it is taking 24 hours.
The system drive (sda) is a 250 GB NVMe, but I believe it is actually a SATA M.2 drive.
That is why I imagine something else is going on.

Drive I/O is a huge factor in plotting speed. That is why you see everyone using NVMe drives. It doesn’t matter how much CPU and RAM you throw at it; they won’t make the drives any faster.

You need to specify the exact model of SSD. Chia plotting requires very good sustained read and write speeds.
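For reference, the exact model strings (and whether each drive is SATA or NVMe) can be pulled straight from the system with lsblk, which ships with util-linux:

```shell
# List each block device with its model string, size, and transport
# (sata vs nvme), so the exact SSD models can be posted without
# opening the case.
lsblk -o NAME,MODEL,SIZE,TRAN
```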


Also make sure you have TRIM enabled. It is not enabled by default on Ubuntu, for the NVMe drives at least.

I kind of beg to differ on the SATA SSD comment. Maybe with his config (3 drives) you’re not going to get decent parallel results, but one of my main plotters is a 10-drive RAID 0. I run 19 concurrent plots on it with little I/O wait and it’s producing ~50/day.

Well, that’s a little different: you are combining the I/O capability of 10 drives. When I tried to use a single SATA drive it would get maybe 125 MB/s write and the same read, so his 3 drives get about 1/3 of what an NVMe would get. Your 10 can perform pretty well, but that is another level from what he has.
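To put your own number on that, a crude sequential-write check with dd, run from inside the temp directory, is enough for a sanity check. The numbers will be optimistic, since real plotting I/O is far more mixed than a single sequential stream:

```shell
# cd /mnt/md0   # run this from inside the plotting temp directory
# Write 256 MiB, forcing it to disk before dd reports a speed;
# increase count for a longer, more stable run. Clean up afterwards.
dd if=/dev/zero of=ddtest bs=1M count=256 conv=fdatasync
rm ddtest
```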


I’m plotting with 6 SSDs and 1 NVMe, no RAID, and I’m doing around 20 a day. I could get more, but I need to save some resources since it’s my main work computer. It’s usually running about 10 in parallel at any given time; the most I’ve had it up to is 16.

It could be better but like I said it’s not a dedicated plotter.

3950X with 128 GB, 5x1 TB SSD, 1x500 GB SSD, 1x500 GB NVMe, Ubuntu Desktop 20.04

Interesting information.
These are my SSDs:
1x Crucial BX500 1 TB
2x Team EX2 1 TB

Right now I am running 2 plots in phase 1 with 4 threads each, plus 2 plots in phase 3, and I am still seeing high iowait.
I am thinking about breaking up the RAID, maybe keeping it across the 2 EX2s and using the Crucial as temp2.
I am also checking whether I can swap the motherboard for a Z-series board and get a bit of overclock on the 10700 (I read it can easily reach 4.4 GHz per core compared to the 2.9 GHz base clock).

Do you think this should be enough to help the iowait?

I think it is enabled; this is how the RAID 0 is mounted:

/dev/md0 /mnt/md0 xfs defaults,nofail,discard 0 0


Just out of curiosity, what hardware are you running?

I see a lot of users advocating against real-time discard on Linux, for NVMe at least, and using periodic batched TRIM instead (fstrim).
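For what it’s worth, whether discards actually make it through the md layer can be checked before deciding between the two:

```shell
# Show discard (TRIM) support per block device; non-zero DISC-GRAN
# and DISC-MAX columns mean discards are passed through, zeros mean
# they are not.
lsblk --discard
```

If that checks out, the batched approach is to drop `discard` from the fstab mount options and enable the weekly timer with `sudo systemctl enable --now fstrim.timer` instead.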