Total newb needs 128GB RAM config help

Less than 24 hours ago I didn’t know that Chia existed, and I’m running around in circles with the same questions. I really need a solid guide; I don’t know which ones to trust, and some of them contradict each other.

Everything is working and it is plotting and farming.

I’m using the GUI. After reading other comments I don’t know if that’s best, but since I don’t yet know what I’m looking for, it should do for getting to grips with the basics.

When I do parallel plots, do I use X threads per plot, and do I increase memory and/or buckets?

I don’t see in the log what separates these ‘stages’, or how to tell them apart to calculate my stagger.

My setup:

AMD 5800X (bought as a temporary measure months ago; it will become a 5950X when I can get one)
128GB DDR4 3200
2TB FireCuda NVMe; I have 4 x WD SN850 en route and already have an Asus PCIe adapter card for them.

For a 5800X, 128GB of RAM is not needed; 32GB is optimal for that chip, and 64GB for the 5950X. As a rule of thumb: 2TB of temp space per 32GB of memory per 16 threads. However, your 5950X will benefit slightly from 128GB of RAM because you can run a few more plots in parallel. The 4 x SN850 is a good choice; more drives is better.

For you: run 20-ish in parallel, staggered every 50 minutes. Kick off two at the same time, going to different destination drives.
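That stagger plan can be sketched as a start-time table. This is a hypothetical helper, not part of any Chia tooling, and the destination names are placeholders:

```python
from datetime import timedelta

def stagger_schedule(total_plots=20, pair_size=2, stagger_minutes=50,
                     destinations=("dest_a", "dest_b")):
    """Two jobs start together, each writing to a different destination
    drive; the next pair starts stagger_minutes later, and so on."""
    schedule = []
    for i in range(total_plots):
        wave = i // pair_size                       # which pair this job belongs to
        start = timedelta(minutes=wave * stagger_minutes)
        dest = destinations[i % len(destinations)]  # alternate destination drives
        schedule.append((start, dest))
    return schedule

# First four jobs: two at t=0, two 50 minutes later
for start, dest in stagger_schedule()[:4]:
    print(start, dest)
```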

Here are some more specifics on what 5950X people are doing:


Buckets: leave at the default.
Memory: the default is OK, or you can increase it a bit, but it makes very little difference. As long as your log files show mostly uniform sort, not quicksort, you’re fine. If you have a lot of memory you can increase it up to 6GB or so, but again, it won’t make much difference.

Threads: most people see a benefit using 3-4 instead of 2; above 4 there is not much additional benefit.
If you use more threads per plot, you can run fewer plots at the same time, so that is the trade-off.
Also note that only phase 1 uses multithreading; the rest is all single-core. So most people stagger in such a way that not all plots are in phase 1 at the same time, leaving more threads free so you can run more plots.
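As a back-of-the-envelope check on that staggering advice, you can estimate how many jobs will overlap in phase 1 at once. This is a rough sketch; the phase-1 duration used below is an assumption, not a measurement:

```python
import math

def concurrent_phase1(phase1_hours, stagger_minutes):
    """Rough upper bound on how many jobs sit in phase 1 at once:
    a new job starts every stagger_minutes, and each job spends
    phase1_hours in the multithreaded phase."""
    return math.ceil(phase1_hours * 60 / stagger_minutes)

# With a ~2h phase 1 and a 50-minute stagger, about 3 jobs overlap in phase 1.
print(concurrent_phase1(2.0, 50))
```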

Phase times: in the log file for each plot you can find how long each phase took, in seconds.

The GUI is terrible for plotting. Better to use Swar’s plot manager; many people here are using it.

Since he has so much extra memory, he could move buckets to 64 and increase the memory setting per plot. That could potentially have a performance impact.

This system isn’t the build I wanted, but in January it was all I could get in parts. I use virtual machines (off at the moment while doing this) and got the RAM at a good price, plus I intend to build a Threadripper by the end of the year. At this rate I will probably skip the 5950X and put in 32GB of 3600, or maybe 4000 (lol, not with a chip shortage I won’t).

So right now it’s the 4 NVMe drives and the 5800X I will be using. I’m not going to run the FireCuda down, but I also have a WD 1TB NVMe I can sit on-board to give me 5 plotting drives. It’s my spare though, so I think I’ll stick to the 4 x 1TB, because I bought them specifically for this, to die a gloriously profitable Ferengi death.

20-ish threads? I take it that’s for the 5950X, which is wow, better than I thought, so maybe I won’t skip the 5950X and will just get one ASAP. And when you say destination drive, I take it you mean the final directory? Sorry, I’m not familiar enough yet with what’s being said.

With the current 5800X setup I was going to start with 4 plots to each NVMe drive, staggered 50 minutes apart, increasing the RAM for each to 24GB. I get the feeling I’m not going to get away with that, though, and it has its flaws because I haven’t understood it correctly.

It does 1 plot in 5 hours on its own, if that’s any help.

I’d do 4 jobs kicked off at the same time, 1 for each NVMe. 4 jobs doing phase 1 on a 5800X is fine. Limit max phase 1 to 4 and you will keep kicking off more jobs as earlier ones move to phase 2. You can easily have 12 jobs going at once in total. 7000 MiB of memory for each will be fine with 4 threads; each will realistically use around 4000 but may briefly use 6000.
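The “limit max phase 1” rule can be sketched as a tiny gate function. This is hypothetical and loosely mirrors what plot managers such as Swar implement; it is not their actual API:

```python
def can_start_new_job(jobs_in_phase1, limit=4):
    """Start another plot only while fewer than `limit` jobs are
    still in the CPU-heavy phase 1 (simplified sketch)."""
    return jobs_in_phase1 < limit

print(can_start_new_job(3))  # True: room for one more phase-1 job
print(can_start_new_job(4))  # False: wait until a job moves to phase 2
```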

Plot time is not important; a consistent flow of plots in parallel is. If you have 64 threads and are getting a job done every 30 minutes, you are doing 48 plots a day. Plot time is just something to be concerned about before you get going.
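The throughput arithmetic in that paragraph comes down to a single line (a trivial sketch of the claim, nothing more):

```python
def plots_per_day(minutes_between_finishes):
    """Daily throughput depends only on how often a plot finishes,
    not on how long any individual plot takes."""
    return 24 * 60 // minutes_between_finishes

print(plots_per_day(30))  # a finish every 30 minutes -> 48 plots a day
```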

One thing I don’t get: when it says number of threads, if it is running 4 plots staggered in parallel and the number of threads is 2, does that mean that by the time the 4th kicks in it will be using 8 threads in total, or will each job stick to using 2 threads? I have the same question about memory.

Phase 1 uses multithreading but the rest use a single core? Does that mean it will use the assigned number of threads for phase 1 and then drop down to a single thread after that?

Phase 1 is the super CPU-heavy one that can utilize more than 1 thread. The other phases, not so much.

You can also oversubscribe threads, e.g. 4 plots in parallel with 6 threads each (24 threads total) on a 16-thread CPU.

Also worth noting you can set CPU affinity, though the OS should handle it reasonably well on its own. The tough thing is multiple CPUs on a single rig, and NUMA nodes. Ugh!

I have done tests on my system comparing 128, 64 and 32 buckets. So far 128 seems slightly better than 64 on average, and 32 is distinctly worse (all runs had sufficient RAM and threads assigned).

Thanks again 🙂

It takes just under 5 hours to do a solo plot, and I get this in my log:

Forward propagation table time: 1150.298 seconds. CPU (153.890%) Mon May 24 16:28:44 2021
Time for phase 1 = 7951.724 seconds. CPU (149.700%)

Forward propagation table time: 1010.503 seconds. CPU (155.750%) Tue May 25 13:22:40 2021
Time for phase 1 = 7014.564 seconds. CPU (150.400%)
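For anyone wanting to pull these timings out automatically, a small script can scan the log for the phase lines. This is a hypothetical helper; the regex assumes the exact line format of the excerpt above:

```python
import re

# Matches lines like "Time for phase 1 = 7014.564 seconds. CPU (150.400%)"
PHASE_RE = re.compile(r"Time for phase (\d) = ([\d.]+) seconds")

log = """\
Forward propagation table time: 1010.503 seconds. CPU (155.750%) Tue May 25 13:22:40 2021
Time for phase 1 = 7014.564 seconds. CPU (150.400%)
"""

for m in PHASE_RE.finditer(log):
    phase, seconds = int(m.group(1)), float(m.group(2))
    print(f"phase {phase}: {seconds:.0f} s ({seconds / 3600:.2f} h)")
# phase 1: 7015 s (1.95 h)
```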

I’ve really only done a couple of test plots; I’ll test some parallel plots in a bit once solo #3 is done. By the weekend I will put in the 4 NVMe drives. I have other work to do on my machine, so I’ll do it all at once, otherwise they would be in there by now.

Has anyone tried doubling it, or can you not do that / does it flood?

You can’t; 128 is the maximum. I tried 🙂

Interesting. I am using a 5950X with 64GB of memory, specifying 6 threads and 6000 memory, and I never see a single thread use more than 3.7GB. I only seem to use 40-45GB of memory in my system in total.

Does 128GB really get used?

I have a second 64GB of memory arriving today because I want to run other things at the same time, but I’m not sure the Chia jobs will use more memory?

If I didn’t run virtual machines I’d be sitting on an Intel with fewer cores and 32GB of RAM. I just upgraded from a 5820K with 62GB of 2400 DDR4. Hmmmm, that machine needs a real purpose again too, and I just happen to have another 2 Asus 4 x NVMe cards 😄

I bought the cards with the intent of eventually adding 4 x 2TB drives to each, then decided I couldn’t justify spending £3.5k on storage, no matter how badly I wanted it. But my wife will really, really appreciate this excuse. I think, anyway.

I don’t see much point, TBH. Right now I only have storage for about a week of the current setup’s output; I need more storage drives and am sort of working on that at the moment as well. I just need to learn some server setup.

Unless he uses 64 buckets and allocates a ton of memory, no.

For most configurations this is true, and I see no reason to change this to 64 for 99% of users.

Hmm… if I have 128GB of RAM, would switching to 64 buckets be a performance benefit?

In my experience, no, but each machine is different, so try it. My fastest plot on my machine was achieved with 6T / 6780 MiB / 128 buckets, but that is for single plots. 3T / 3400 MiB / 128 buckets is better in my case if I go for plots per 24h.