Farming with 4 GB of memory (Raspberry Pi 4B)

I used to run my farm on a Raspberry Pi, but newer chia versions eat all 4 GB of memory and the device becomes unresponsive (most of it taken by the full_node process).

Is there a way to lower full_node memory usage?

Thanks!

Many people have lowered their peer count to ease things on less capable nodes.

Give the forum a search; there is lots of info on how to do it.

Chia recommends 8 GB RPis, but some people still run it on 4 GB. You have two choices.

With chia code, you need to:

  1. move your db off your SD card to the fastest media you have
  2. upgrade your blockchain db to v2
  3. create a big swap file that is not on your SD card
  4. lower the peer count down to 10 (don’t go lower)
  5. move your logs off your SD card and/or drop the log level to WARNING
  6. overclock your CPU, and give it some extra cooling
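For the peer-count and logging steps, the settings live in ~/.chia/mainnet/config/config.yaml. A sketch of the keys involved (the values are suggestions, the log path shown is a hypothetical non-SD mount, and chia uses Python-style level names, so WARNING rather than WARN):

```yaml
# ~/.chia/mainnet/config/config.yaml -- relevant excerpts only
full_node:
  target_peer_count: 10            # fewer peers, less connection overhead
logging:
  log_level: WARNING               # far fewer writes than the default INFO
  log_path: /mnt/usb-ssd/chia-log  # hypothetical mount point off the SD card
```

Restart the node after editing; the log level can also be set with `chia configure --set-log-level WARNING` instead of editing the file by hand.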

Alternatively, you can dump the chia node and go with FlexFarmer. It has much lower hardware requirements (it will run on an RPi 3, I think). However, you will not be able to use any monitoring tools, as its logs are different.

Thanks, I tried lowering full_node.target_peer_count to 10, but the chia_full_node process still takes the full 4 GB of memory; then the machine starts swapping and becomes unusable.

This is a headless Ubuntu setup from which I removed all unnecessary services; it uses 123 MB of RAM at boot, and 99% of the RAM is eaten by this process. The OS is installed on an external SSD (no SD card). I don’t see what more I could do on the system side.

What is chia_full_node using all this memory for? Are there other settings that may influence memory usage?

I don’t think there are any settings that directly specify or limit RAM usage (for any chia process). The only thing you can do is lower the processing overhead, and all the items above help with that. It may be that 4 GB is just not enough (with the latest chia version).

The problem with full_node is that the process basically deals with all resources: peers (extra db access), the db (block handling), and block dispatching (wasteful cycles on the one core it uses, most likely timeouts), and it is rather poorly written. That said, full_node is actually two kinds of processes: a main one that deals with all those resources, and workers that only crunch blocks (using additional cores). Usually the main process chokes one core, and everything slows down at that point (the other cores sit basically idle, waiting for blocks to be processed).

So the exercise is potentially not so much to lower RAM usage as to make better use of disk resources (faster disk thrashing). That said, the db requires a bit of memory to be useful, and you have the swap file. Maybe you could add one more SSD and put your swap on it; that would give memory swapping a bit more bandwidth. Also, since Chia recommends 8 GB of RAM, with your 4 GB RPi I would create an 8-16 GB swap file on that separate SSD. You could also try moving your db to one of the plotting HDDs, so the swap on that SSD gets even more bandwidth. If that yields even small improvements, it would suggest getting another small, fast SSD just for your swap file.
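Creating a dedicated swap file on a second SSD can be sketched like this; the mount point /mnt/swap-ssd and the 8 GB size are assumptions, so adjust them to your setup:

```shell
# Assumes a second SSD is already mounted at /mnt/swap-ssd (hypothetical path)
sudo fallocate -l 8G /mnt/swap-ssd/swapfile   # reserve 8 GB up front
sudo chmod 600 /mnt/swap-ssd/swapfile         # swap files must not be world-readable
sudo mkswap /mnt/swap-ssd/swapfile            # write the swap signature
sudo swapon /mnt/swap-ssd/swapfile            # enable it immediately
swapon --show                                 # confirm the new swap area is active

# To keep it across reboots, add this line to /etc/fstab:
# /mnt/swap-ssd/swapfile none swap sw 0 0
```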

By the way, what does free say about memory utilization? Could you post a screenshot of that, and maybe a top screenshot as well?
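For reference, both numbers can be grabbed in one go; a quick sketch (the top flags assume the usual procps top that ships with Ubuntu):

```shell
free -h                           # overall RAM and swap usage, human-readable
top -b -n 1 -o %MEM | head -n 12  # one batch snapshot, processes sorted by memory
```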

By the way, is your node synced?

Also, I checked Flex pool’s top 10 farmers, and all but 2 are using FlexFarmer (not on RPis, I guess). The biggest one is just a tad over 10 PB. Due to those non-chia-like logs, that is a no-go solution for me, but I have a bit more headroom on my NUC.

Running it on a Raspberry Pi 4 GB as well. I had to use a 2 GB swap file, and my install is also on an SSD, not an SD card.

Silly me, I forgot to add a swap partition. I always add one, but this time I forgot, and since I always have a swap partition I forgot to check. Thanks eichof!

So I added a swap file (not a partition; I followed this guide, and apparently the difference doesn’t matter on an SSD these days), and now the little guy is perfectly stable, with swap usage fluctuating between 3 and 4 GB (node synced).

My full node Windows VM gobbles up 32 GB of RAM in about 3 days… I can’t imagine such stress on a poor Pi. Further, I give my VM 6 cores and frequently see spikes that max out all cores at 3.6 GHz, and I don’t even harvest with my full node… so yikes, poor Pis.
They’re a good idea in theory, and I’m all for saving the planet, but I just don’t think a Pi is really suited for this task… maybe a Pi harvester could work, though.

I’m running it on a Pi and it is absolutely stable with 9 external disks of 14 TB each. I have around 3.2 GB RAM usage on my Pi. Normally CPU usage is 20-30% @ 1.8 GHz.

Must be that swap you’re all using.
My main node is using 15.7 GB today, lol.

How do you power those, and what interface are they connected over?

I use the normal power supply plugged into a power rail, but I have ordered some cables and plugs, so in the future I will solder it to an old PC power supply; I think that will have better efficiency. I also use USB 2.0 with a hub. One of the USB 3 ports is used for the SSD, and I had some performance issues when I put the drives on the second one, so USB 2 is still enough.

I’m sorry, there are a couple of things I can’t go without saying; no offense intended.
So you’re using a Pi that has USB 3.0, but you’re harvesting over a USB 2.0 connection, from external hard drive enclosures, 9 disks… Take that USB 2.0 connection’s max speed and divide it by 9, because when a lookup happens it will need to read from every disk to find said plot. There will be a massive bottleneck…

Especially if you only have 8 GB of RAM at most…

  • If you spend that much on hard drives, you must use a USB 3.0 connection. You’re shooting yourself in the foot, my friend.
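The back-of-the-envelope math behind that: USB 2.0 signals at 480 Mbit/s, and the ~35 MB/s real-world figure below is an assumption (bulk transfers rarely reach the theoretical rate):

```shell
# USB 2.0 signaling rate: 480 Mbit/s, i.e. 60 MB/s on paper
echo $((480 / 8))    # prints 60
# Real-world USB 2.0 bulk throughput is closer to ~35 MB/s (assumed figure);
# shared across 9 drives behind one hub, integer math:
echo $((35 / 9))     # prints 3, i.e. roughly 3 MB/s per drive if all read at once
```

Worth noting that a plot lookup reads only a few small chunks per plot, so seek latency tends to matter more than raw bandwidth; the figure above is about worst-case contention.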

I dare ask: why do you have an SSD plugged into the Pi? Performance or reliability issues? With them both plugged into USB 3.0, what purpose does that serve, and why is it more important than your precious plots?
Don’t Pis use a micro SD for the OS? I know you’re not plotting on it… please explain.

What do your lookup times look like? I’m genuinely interested.

Wiring to an old PC power supply is a great option, but it must be done with absolute care and sureness in one’s abilities, or else you’ll see all your drives fry.

But honestly, how many blocks have you won with this configuration?
I’m just very confused and interested.

Maybe your main fault is using Windows for farming. :wink:
I use the USB 2.0 ports because I have the SSD on USB 3.0 for better I/O performance.
I don’t see any problems with that. Keep in mind USB 3.0 helps when you have to move a lot of GB, which is not the case when you are farming. The bottleneck will also be the HDD drives, so you never get the full performance out of it anyway; I tried moving some plots over USB 3.0 and over USB 2.0, and got almost the same speed.
As for the wiring, if you can handle a volt meter properly it’s no problem; I made my own cables for my main PC.

Windows is rock-dumb simple, the way it should be.
But I keep my farmer node and Ubuntu LXC harvesters separate, in the same Debian environment, if you follow, as per the recommendation to farm on many machines for increased speeds.
The harvester protocol is wonderful, especially when contained in a virtual environment within Proxmox.
Savvy?

I’ve connected and wired over 100 drives myself in my JBOD chassis, SATA/SAS over 4-gigabit fiber, split among many harvester containers in a Hyper-V server. Each harvester is allocated the minimum specs as outlined on the holy GitHub, and each uses 4 GB of memory… but they are just harvesters.
It’s really complicated to explain why this is optimal. My harvesters and my Chia full node are both virtual, on the same machine, sharing the same processor… When harvesting happens, it’s lightning, instantaneous, because the data is already in the harvester’s RAM, the same RAM shared by the full node requesting the information.
Exploiting the harvester protocol in a virtual environment.

It’s a Tesla vs. a Honda Civic hybrid… one is vastly more efficient.

Just trying to clarify here…
You’re running a full node and harvesters on the same box, but all in different VMs?

I got flamed on this forum and even temp-banned when I said you need server-grade hardware to be successful with chia. Well, when people do not listen, they get hurt. I tried to warn them…

The server hardware most people are using is old and cheap, and far worse than a good modern PC.

So, there’s that.

Not different VMs. They are LXC containers: isolated processes serving one farmer, which is available at any one time across 3 servers.
High availability.
So I keep my harvesters separate, as they have drives connected to them on the physical box and stay in place.

Seems an odd way to go about it from my perspective, but if it works, it works…
I can see the benefits of doing it that way, but you’re the first I’ve seen do it.