Is it recommended to fill every last space of a HDD for Chia plots?

I am currently plotting but I want to ask for other plotters out there - is it safe to fill a HDD to the brim with plots? I know there’s this article here that shows you how to do it: Tip for using that last bit of space on your drives

However, there is also the common practice of leaving 20% of space free, since a nearly full HDD will slow down and its performance will suffer.

Wouldn't this in turn affect the plots stored on it when farming Chia? (i.e. they might not make the 28-second response time to the Timelord.)

I think this is my favorite answer:

To determine how much free space a system requires, one must account for two variables:

  1. The minimum space required to prevent unwanted behavior, which itself may have a fluid definition. Note that it's unhelpful to define required free space by this definition alone, as that's the equivalent of saying it's safe to drive 80 mph toward a brick wall right up until the point at which you collide with it.
  2. The rate at which storage is consumed, which dictates an additional, variable amount of space to be reserved, lest the system degrade before the admin has time to react.

First let’s assume we can disregard point 2, because once the drive is full we won’t be writing any more data. In fact, most of the other answers in that thread can be ignored for the specific use case of farming Chia.

For the first point, let's define unwanted behavior. Here, unwanted behavior means poor farming performance: specifically, being unable to answer challenges in time because of slow disk access. If you read through the rest of the answers, poor performance is mentioned in the context of writing new data to an almost-full disk. Because there isn't a large amount of free space, a contiguous allocation might not be possible, so the OS must fragment the data across the smaller empty spots on the drive. Again, this won't be a problem with Chia plots, because they are huge files that are written sequentially to the disk once and therefore will not be fragmented. So at this point our theory is that Chia farming should not be affected by these performance concerns. Let's test it!

Luckily I already have, being the author of that thread you mention. :slight_smile: I’ve been running over 60 SATA disks with this method, all almost completely full except for a few gigabytes and all connected through port multipliers and PCI add-in cards. Performance is still excellent. In fact, I’m not only farming Chia but also 5 other forks, all at once. They all live in their own virtual machines and farm the same plots from the host drives. Even with the equivalent of 6 farmers hitting the same plots at once, I have no performance issues whatsoever and consistently win proofs (mostly on forks lol)!


Thank you, this answer is spot on.


Just make sure you write one plot at a time to your HDD and that you cool the drives properly.


If you are using Windows, it is recommended to leave enough free space for one plot on your HDD in case you need to run CHKDSK to repair bad sectors after a failure. That way, if a plot gets corrupted because of a bad sector, you have a chance of recovering it.

If a sector goes bad on your hard drive and you lose a plot because of it, you can delete the plot, fix the sector and re-plot in under an hour with Madmax.

Compare that hour lost with all that space you aren't using over time. The expected value of that extra space is far greater if you use it to farm, even if you have to replot every once in a while when a sector goes bad. This is basic logic, but it doesn't work for ANY OTHER USE CASE - only Chia. It is the same logic that says "don't use RAID parity or mirroring for Chia plots," for exactly the same reason: in the Chia world, space itself is the premium, not necessarily the data in that space. You can easily "recreate" the data if you lose it. Therefore, if you aren't using as much of your space as possible, you are leaving Chia on the table. :slight_smile:
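That expected-value argument is easy to sanity-check with some back-of-the-envelope numbers. A rough sketch (the failure rate and replot time below are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope: is the last plot's worth of space worth farming
# if a bad sector occasionally forces a one-hour replot?
# All numbers are illustrative assumptions, not measurements.

hours_per_year = 365 * 24
replots_per_year = 2      # assumed bad-sector incidents per year
replot_hours = 1          # rough MadMax replot time, as claimed above

# Fraction of the year the extra plot is actually farming:
uptime = 1 - (replots_per_year * replot_hours) / hours_per_year
print(f"extra plot farms {uptime:.2%} of the year")
# versus 0% if that space just sits reserved "just in case"
```

Even with a pessimistic failure rate, the filled space farms essentially all year, while reserved space earns nothing.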


I've filled my drives with as many plots as they can hold, leaving only whatever space at the end isn't big enough to fit a whole plot.
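For reference, the leftover-space arithmetic looks something like this (a quick sketch; the k32 plot size is approximate and the drive capacity is the decimal marketing figure):

```python
# How many k32 plots fit on an 18 TB drive, and what's left over?
drive_bytes = 18 * 10**12        # 18 TB as marketed (decimal bytes)
plot_bytes = int(101.4 * 2**30)  # ~101.4 GiB per k32 plot (approximate)

plots = drive_bytes // plot_bytes
leftover_gb = (drive_bytes - plots * plot_bytes) / 10**9
print(f"{plots} plots, ~{leftover_gb:.0f} GB left over")
```

The tail end is always smaller than one plot, so it goes unused no matter how carefully you fill.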

I’ve not run into any noticeable performance issues.

Once they’re full I mount them as read-only, so I don’t expect there to be any corruption of data either. :man_shrugging:

Then again, it’s early days.


I had installed a brand new 18TB HDD, formatted it real quick, and started writing plots… After the 3rd plot or so, I noticed some read failures and hang-ups. The Chia harvester crashed with I/O errors, and chia plots check failed with similar errors…

I had to run a filesystem check/repair that took several hours (on a nearly empty drive!). The first plot was corrupted and could not be recovered.

The moral of the story: better to run a full HDD check before filling it up.

Do you use any RAM caching software for HDD writes?

No caching software. I had the same model and capacity of HDD before this one, and it didn't have problems.

But I will check every new HDD before writing plots to it.

Just checking. Do you overclock your memory or CPU?

No overclocking on cpu.
Memory uses predefined D.O.C.P., no manual fiddling with any parameters.

Name the XMP Settings…

I watched a YouTube video (SerpentXSF YouTube channel) mentioning that it's not recommended to OC your RAM or CPU while plotting, otherwise it would crash, but that's specifically for the MadMax plotter. Not sure if it's the same for the Chia GUI though.

You cannot fill a drive to the brim with 108 GB files, so I end up with 72 GB free on every drive. Then you run turboplot (a GPU plotter faster than MadMax) to make 4 GB plots and fill that 72 GB on all drives.

Then you run the Foxy-Pool miner in dual-mining mode for Burst & BHD, and you will make more than with Chia.
Lots of people have already quit making Chia plots and just do BHD/Burst; you can also mine these coins on HPOOL.

GPU plotting and BHD (Bitcoin-HDD) are 5+ year old technology, but it's 100X better than anything Chia ever did, and BHD is a clone of Burst, just like Chia, dating from 2017.

I think this is an old wives' tale.

I use the XFS format on Linux, which means xtra-large files. Of my 10TB disk I only get about 9TB usable, which means almost 10% is getting used for internal bookkeeping of the disk's data structures, given these formats were designed for super-size files…

I will say this: NTFS is shit. The problem here is that newbs are largely tethered to Windows from birth.

Move on to Linux, dudes.

I don't think it matters much whether the drive is full or not, if you're using the correct software for the task.

NTFS is like 40 years old, from the 1980s. Sort of FAT on steroids, your grandfather's file system, which originally only supported 4 GB DOS HDDs.

Cut the leash to Windows, get a life.

Almost everything in your comment is mistaken. First of all, XFS does not mean "xtra-large files"; the X was intended to be a placeholder, but the creators never changed it later. Where XFS excels is not in storing large files but in parallel I/O, which makes it an ideal filesystem for plotting. The reason your 10TB drive only shows about 9.1TB has absolutely nothing to do with filesystem bookkeeping or journaling: manufacturers count in TB (10^12 bytes) while computers report in TiB (2^40 bytes). This means all hard drives will show slightly less in the OS than what's written on the label.
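The decimal-vs-binary arithmetic is easy to verify:

```python
# "10 TB" on the label vs. what the OS reports:
# manufacturers count decimal bytes (10**12), the OS counts binary (2**40).
label_bytes = 10 * 10**12
reported_tib = label_bytes / 2**40
print(f"{reported_tib:.2f} TiB")   # roughly the "9.1TB" you see in the OS
```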

While I'll agree Linux can be better than Windows at many tasks, farming is pretty independent of filesystems and operating systems. For plotting it absolutely makes sense for the performance gains, but for farming it doesn't particularly matter. I would say the biggest advantage of farming on Linux is the command-line tools for parsing the log files quickly, as well as the third-party tools developed specifically for Linux. Not as critical, but NTFS came about in the 90s, not the 80s.



Y'know, I hadn't even considered how a filesystem would affect performance or capacity. I just loaded everything up with bog-standard ext4. For my ultra-budget Pi farm, I'm pretty sure ext4 is a safe bet.

Oh, and I removed the reserved blocks for system and root to reclaim some disk space. Because I like living on the edge. (Ref: How to Free reserved space on ext4 Partitions)
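For scale, here's roughly what ext4's default reservation costs on a big farming drive (a sketch; the 5% figure is the ext4 default and the plot size is approximate):

```python
# ext4 reserves 5% of blocks for root by default; on a 10 TB drive
# that's several plots' worth of space. (Plot size is approximate.)
drive_bytes = 10 * 10**12
reserved_bytes = int(0.05 * drive_bytes)   # ext4 default reservation
plot_bytes = int(101.4 * 2**30)            # ~k32 plot size
print(f"reserved: {reserved_bytes / 10**9:.0f} GB "
      f"(~{reserved_bytes / plot_bytes:.1f} plots)")
```

The standard knob for reclaiming it on ext4 is `tune2fs -m 0` on the partition, which is what articles like the one linked above walk through.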

