That 40 number is pulled from thin air, and really should not be seen as a “works for all” value. It is basically a crude patch for network-side code that doesn’t do any bandwidth / load management to adjust that number on the fly.
Also, from a topology point of view, just 3 nodes would do. Although with such a low peer count there is a higher chance of being frequently forked out, as we don’t know which peers the node is connected to.
Lastly, that number becomes relevant mostly during dust storms, as at that time each connection adds extra load on the main full_node process, which is limited to a single core (when it gets stuck, it gets stuck, period).
My take is that for low-end nodes, we should run with something around 10. If the node has some headroom and the network connection is solid, I would go for 20. Either way, 40 is really too big.
As far as those two lines go, `target_peer_count` is the total number of connections (inbound and outbound). `target_outbound_peer_count` specifies only outbound connections.
If port 8444 is open, it is better to lower the outbound count and keep the total higher, as this helps with overall netspace robustness (more limited nodes can connect to you; open ports are a rather scarce resource on the network). If that port is closed, the lower of the two will be the controlling number.
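To make that concrete, here is a sketch of what those two lines look like in `config.yaml` under the `full_node` section. The specific values (20 / 8) are just an illustration of the "lower outbound, higher total" idea for a modest node with port 8444 open, not recommended settings:

```yaml
full_node:
  # Total connections, inbound + outbound combined.
  target_peer_count: 20
  # Of those, how many connections we dial out ourselves.
  # Keeping this low leaves more of the total budget for
  # inbound peers that cannot accept connections themselves.
  target_outbound_peer_count: 8
```

If 8444 is closed, no one can dial in, so the node will only ever hold roughly `target_outbound_peer_count` connections, regardless of how high the total is set.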
You should not be doing it. For one, this is a 24/7 job you are volunteering for, and it really leads nowhere. However, if you see a bunch of such nodes, it usually indicates that you are connecting to too many peers, and it is actually your node that cannot handle those connections. The kicker is that when you boot some of those nodes, your connection count drops, your node recovers for a moment, and all looks great. However, because the node will try to maintain that dumb target number, it will immediately start adding new connections, potentially overloading itself again.
There were people (also on this forum) who advertised doing that, but the only reason they did so was that they had no clue how the protocol works.
And again, there is no magic number to put there, so there is no point in following what others have / advertise. If the node stalls during dust storms, it means the number is too high.
When you power your node down, it falls behind and needs to resync on start, right? Some people start from scratch, and syncing for them takes a week or so. Some nodes are just overwhelmed and stop syncing (dust storms?). Your node may be overwhelmed too, and because of that it cannot maintain all its connections, thus showing bogus numbers. The only case we can control is the last one, by lowering that connection number. The other nodes should be given the chance to sync.