Chia 2.1.4 released - has improvements to help with the recent issues

If you’ve been having issues lately then it’s well worth updating. You may also want to manually change the peer count to 40, as going by the release notes it won’t be updated automatically on existing installs.
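
A minimal sketch of that manual change, assuming the default mainnet config location (~/.chia/mainnet/config/config.yaml) and that you simply want to match the new default; restart the full node afterwards so the value is picked up:

full_node:
  # total peer connections (inbound + outbound); 2.1.4 lowers the default
  # from 80 to 40, but only for freshly generated configs
  target_peer_count: 40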

If you are running Gigahorse, then you’ll need to wait for Max to release a matching update.

Fixed

  • Update chia_rs to 0.2.15 for AMD K10 architecture (fixes #16386)

Changed

  • improved CPU usage due to tight loop in send_transaction()
  • improve performance of total_mempool_fees() and total_mempool_cost()
  • reduced the default maximum peer count to 40 from 80 (only applies to new configs)
  • changed to normal SQLite db sync option (previously was full)
  • reduced the mempool size to 10 blocks from 50 blocks (improves performance)
  • improve performance of the mempool by batch fetching items from the db

Who gets to bug Max for the Gigahorse update???


I’ll let you :wink:



:rofl: :joy: :upside_down_face: :beers: Need to wake him up!!!

Are these the lines that need to be changed?

target_outbound_peer_count: 40
target_peer_count: 40  <- this was 80

The target_peer_count one; not sure about the other one.


What is the best number for the other line? What is yours?
In the GUI, I sometimes see peers that are at a lower sync height. When I see them, I delete the connection. Is that the right thing to do, or is it an unnecessary action? Why do some peer connections have lower sync heights?

That 40 number is pulled out of thin air and really should not be seen as a “works for all” value. It is basically a crude patch for bad network-side code that doesn’t do any bandwidth / load management to adjust that number on the fly.

Also, from a topology point of view, just 3 nodes would do. Although with a low peer count there is a higher chance of being frequently forked out, as we don’t know which peers the node ends up connected to.

Lastly, that number becomes relevant mostly during dust storms, as at that time each connection adds extra load on the main full_node process, and that process is limited to just 1 core (when it gets stuck, it gets stuck, period).

My take is that low-end nodes should run with something around 10. If the node has some headroom and the network connection is solid, I would go for 20. Either way, 40 is really too big.

As for those two lines, target_peer_count is the total number of connections (inbound and outbound), while target_outbound_peer_count specifies only outbound connections.

If port 8444 is open, it is better to lower the outbound count and keep the total higher, as this helps with overall netspace robustness (more limited nodes can connect to you; those open ports are a rather scarce resource on the network). If that port is closed, the lower of the two will be the controlling number.
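
As a purely illustrative sketch of that advice (the exact numbers are assumptions, not recommendations), a node with port 8444 open and some headroom might run with something like:

full_node:
  # outbound connections only - kept low so inbound slots stay available
  target_outbound_peer_count: 4
  # total connections (inbound + outbound) - the higher of the two
  target_peer_count: 20

The point of the split is that inbound slots are the scarce resource network-wide, so spending your connection budget on them helps other nodes more than extra outbound connections would.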

You should not be doing it. For one, it is a 24/7 job that you are volunteering for, and it really leads nowhere. More to the point, if you see a bunch of such nodes, it usually indicates that you are connecting to too many peers and that it is actually your node that cannot handle those connections. The kicker is that when you boot some of those nodes off, your connection count drops, your node recovers for a moment, and everything looks great. However, because the node will try to maintain that dumb target number, it will immediately start adding new connections, potentially overloading itself again.

There were people (on this forum as well) who advertised doing that, but only because they had no clue how the protocol works.

And again, there is no magic number to put there, so there is no point in copying what others have or advertise. If the node stalls during dust storms, the number is too high.

When you power your node down, it falls behind and needs to resync on start, right? Some people are starting from scratch, and syncing for them takes a week or so. Some nodes are just overwhelmed and stop syncing (dust storms?). Your own node may also be overwhelmed and unable to maintain all of its connections, thus showing bogus numbers. The only one of these we can control is the last, by lowering that connection count. The other nodes should simply be left to sync.


Thank you very much for your help. I am now enlightened on this subject; I could not find answers to my questions before.
I have installed 2.1.4 and I hope there will be an improvement in performance on my farm.
