Can I plot and farm on same machine? How?

Hi all

I’m trying to plot on a GPU and farm using the CPU. Is this possible, and how?

Hardware-wise:
I have 60 TB of disk space in a fast RAID 5 array (6 × 14 TB).
The machine is a dual-processor E5-2690 (24 cores / 48 threads) with 640 GB of memory and an RTX 3080 Ti.

I’m running MadMax chia-gigahorse-farmer version 2.1.1.giga22 and cuda_plot_k32 version 2.0.0-6ec48cb

I also have Chia Blockchain version 2.1.1

So the question is: how do I plot using the GPU only and farm using the CPU only?


I’ll be the guy to offer the obvious: plot at a lower compression level that’s appropriate for CPU farming, for starters. From there, the GitHub repo (GitHub - madMAx43v3r/chia-gigahorse) has a section on CPU farming about 3/4 of the way down. I’ll admit that I haven’t farmed with it and can’t help setting that up, but it should be achievable should you decide to farm on this machine.

The cuda plotter should handle the GPU part of your mission, so that should be the easy part.

60 TB is very little space - just plot it using the GPU, and later farm using the GPU as well.
With a 3080 Ti, you should be able to plot this in less than a day.


Only compression levels C1-C7 and C11-C13 can be farmed with a CPU, so that is one thing.
I think C6 or C13 is probably best.

To use the CPU only for farming, run this in a terminal (in the Gigahorse folder) and then start the farmer in the same window:
Windows (PowerShell): $env:CHIAPOS_MAX_GPU_DEVICES = 0
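On Linux, the equivalent is to export the same variable before launching the farmer from that shell. A minimal sketch - the launch command in the comment is an assumption and depends on your Gigahorse install:

```shell
# Tell Gigahorse to use zero GPU devices for farming, so proof
# recomputation falls back to the CPU.
export CHIAPOS_MAX_GPU_DEVICES=0

# Then start the farmer from this same shell, e.g.:
# ./chia.bin start farmer   # binary name depends on your install

echo "CHIAPOS_MAX_GPU_DEVICES=$CHIAPOS_MAX_GPU_DEVICES"
```

The variable only affects processes started from the same shell, which is why the farmer must be launched in the same window.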


Forget about the RAID array for harvesting.


You should be able to run the plotter and harvester/recompute server on the same GPU. At least, I am using a 3090 to run a plotter and remote compute combined (~0.5 PB).
For that small amount of space it should be no issue, even using NVMe for plotting. You will be done in less than a week. I would also go for high compression and later use the GPU for farming. Don’t go for CPU farming any more if you can reserve your GPU for it.
Of course, if you need your GPU elsewhere later, plot the CPU-farmable formats as suggested above.


And create a spanned drive?

Not exactly sure what you mean. I would not recommend using any shared media like NFS, Samba… I create plots locally and copy them (automatically) into their final place. Harvesters access local drives only, but I use a remote harvester connected to a farmer.
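For reference, a remote harvester is usually set up with the standard Chia CLI pattern. A dry-run sketch - the farmer IP and certificate path are placeholders, and I’m assuming Gigahorse’s bundled binary accepts the same subcommands as the official `chia` CLI:

```shell
farmer_ip="192.168.1.10"       # placeholder: your farmer's address
ca_dir="/path/to/farmer/ca"    # placeholder: copy of the farmer's CA certs

# Dry run: each step is echoed; remove the echoes to actually run them.
echo chia init -c "$ca_dir"                               # trust the farmer's certificates
echo chia configure --set-farmer-peer "${farmer_ip}:8447" # point the harvester at the farmer
echo chia start harvester                                 # harvester only, no full node
```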

I’d NOT recommend making any complex drive array.
Just plot to separate drives & add them as folders.
The data for farming is not unique, so there is no reason to protect it with RAID.
The volume of data is HUGE, so there is no reason to combine a few HDDs into one big one (because if any HDD in the chain crashes, you lose the whole array).
I’ve heard that guys use the free space on each HDD to make a “complex” HDD, but I’m not sure what happens if that “complex” HDD crashes - whether it takes the main HDDs down with it or not.
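The “separate drives added as folders” approach is easy to script. A dry-run sketch, assuming some example mount points (`chia plots add -d` is the standard CLI for registering a plot directory):

```shell
# Assumed mount points, one per physical HDD - no RAID, no spanning.
plot_dirs="/mnt/plots1 /mnt/plots2 /mnt/plots3"

for dir in $plot_dirs; do
  # Dry run: print the command; drop the echo to actually register the folder.
  echo chia plots add -d "$dir"
done
```

If one drive dies, you lose only the plots on that drive, not the whole pool.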


My brother tried this, but it just caused problems; he gave up on the idea after having to replot some drives.


Windows has a “Storage Spaces” feature, which allows you to pool drives together for creating a virtual drive with the capacity of all of the pooled drives.

I have never used Storage Spaces, but it seems to be similar to a RAID setup.

I searched for information, and watched videos, but they always demonstrated with empty drives.

Perhaps you can take the unused space of each of your drives, and make it “unallocated”.
Then, perhaps the Storage Spaces tool might see it as free space that can be used to create a pool?

If you test it, please reply with your results.

Exactly that way - you are making a logical disk from unallocated parts of different physical HDDs.
And no, I will not try it, sorry. The cost of failure would be quite high in terms of replotting.

I hate to steal the OP’s post, but I get 75%+ invalid/stale plots when I plot and farm/harvest on the same machine. I have plenty of CPU, GPU, and RAM. I’m using multiple RAID arrays - 6 volumes @ 70 TB each.
But for some reason, when I plot on any one volume, all the plots get tagged as stale regardless of which volume I plot and farm from. My machine has 2× E5-2690 procs (12 cores / 24 threads each), 512 GB RAM, and an RTX 4060 Ti with 16 GB VRAM. My guess is the GPU doesn’t have enough compute power to farm/harvest and plot at the same time, even though it’s a 4060 Ti. Once I get my 3080 back from the shop, I’m gonna try remote farming.

Is there any way I can get the farming/harvesting done on the CPU?

Any thoughts/ideas?


My 4060 Ti has no issues, but I am on NoSSD with 2 PB, plotting the last 400 TB while farming.

It’s easy to do with scripts. I have done it with spare space on many drives.

Basically, you create a .vhd (virtual disk) file on each disk that has some spare capacity. It has to be bigger than something like 4-5 GB (the minimum size Storage Spaces can use), and obviously it is usually less than the space needed to store another plot - otherwise you would just put another plot on there.

Done across many disks, you can get enough space to store extra plots. Let’s say on average you have 70 GB (edit: GB of course, not TB) of free space per disk (not enough to store an OG plot, for example) and you have 20 disks - you can create a Storage Spaces drive of roughly 1.4 TB, another 13 plots for ‘free’.
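The arithmetic checks out; a quick sketch, where the ~105 GB plot size is an assumption (roughly a k32):

```shell
free_per_disk_gb=70   # average free space per disk, from the example above
disks=20
plot_size_gb=105      # assumed: roughly a k32 plot

pool_gb=$((free_per_disk_gb * disks))    # 70 * 20 = 1400 GB (~1.4 TB)
extra_plots=$((pool_gb / plot_size_gb))  # 1400 / 105 -> 13 plots
echo "pooled: ${pool_gb} GB -> ${extra_plots} extra plots"
```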

Yes, it’s not a lot, and yes, unless you add resilience to the Storage Spaces drive (I don’t, I just use a non-resilient striped volume…) you might lose some plots on the Storage Spaces drive but hey, if that happens, you’ve likely lost the drive hosting one of the Storage Spaces .vhd files anyway, so you’ll lose a lot more plots on the host drive than the ones in the Storage Spaces drive.

Worst case - replot a few plots if you need to recreate the Storage Spaces drive from scratch…

I can share scripts if anyone is interested…

Please tell me how you get 70TB of free space per disk


You can’t - you either have to have it already or you don’t.

The point is, if you do have some free space per disk, but not enough to store a plot, if you have many disks, you can combine the free space into one new ‘virtual’ disk and use that to store more plots.

Example of one of my systems:

You can see each disk only has a small amount of free space (typically 512 MB, although some have more due to the Storage Spaces limitation on .vhd file size - see above).

@rallbright was referring to where you wrote:

You probably meant to write 70GB.


You are correct… I am so used to thinking in TB these days… :face_with_hand_over_mouth:

Edit: original post edited to avoid further confusion… :+1: