As long as Chia is not fixing it, there is not much you can do. It is a serious bug, but it affects us farmers rather than the network or Chia directly.
When you shut down Chia, monitor the start_full_node processes and hope that they exit. You can also check your blockchain db folder: once those processes exit, there should be only two sqlite files left (the main db and the related peers db), since the *-wal and *-shm files are SQLite temporaries. I watch for those start_full_node processes, usually give them a minute or so, and then just kill the main one if it still lingers. Note that the process that hangs is the main start_full_node (the controller); the other ones are usually just sitting there waiting to crunch blocks.
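That wait-then-kill routine can be scripted. Below is a minimal sketch, with the caveats that the db path assumes the default mainnet location (adjust for your install), and the one-minute grace period matches what I do by hand, not anything official:

```shell
#!/bin/sh
# Sketch of the shutdown check described above.
# DB_DIR is an assumption (default mainnet location); adjust as needed.
DB_DIR="$HOME/.chia/mainnet/db"

# Succeeds (exit 0) when no SQLite temp files remain in the given folder,
# i.e. the write-ahead log has been flushed back into the main db.
db_writes_done() {
    for f in "$1"/*-wal "$1"/*-shm; do
        [ -e "$f" ] && return 1   # a temp file still exists
    done
    return 0
}

# Give lingering start_full_node processes about a minute to exit...
i=0
while pgrep -f start_full_node >/dev/null; do
    i=$((i + 1))
    [ "$i" -ge 12 ] && break
    sleep 5
done

# ...then kill whatever is left, but only once the db looks quiescent.
if pgrep -f start_full_node >/dev/null; then
    if db_writes_done "$DB_DIR"; then
        pkill -f start_full_node
    else
        echo "sqlite temp files still present in $DB_DIR; not killing" >&2
    fi
fi
```

The temp-file check is the important part: killing the writer while a *-wal file is still being flushed is exactly how the db gets corrupted.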
UPDATE:
Actually, thinking more about what you asked:
It is a bug, of course. However, db writes only happen while blocks are being processed (servicing peers is, I think, just db reads). So you could unplug the Ethernet cable: that would essentially stop block processing, and within a few seconds (10?) the process would be done with its db writes and safe to kill.
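The cut-the-network-first idea could be scripted roughly as below. This is a sketch, not a tested recipe: `eth0` and the settle times are assumptions (use your actual interface name), and taking an interface down needs root, which is why doing this remotely is awkward.

```shell
#!/bin/sh
# Sketch only: stop peer traffic first so block processing (and thus
# db writes) stops, then shut down. Interface name and timings are
# assumptions, not measured values.
sudo ip link set eth0 down   # no peers, so no new blocks to process
sleep 10                     # give in-flight db writes time to finish
chia stop all -d             # regular shutdown, including the daemon
sleep 60                     # grace period before forcing anything
pkill -f start_full_node     # kill whatever still lingers
sudo ip link set eth0 up     # restore the network
```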
That said, doing this every time Chia is about to shut down would be a major pain (most of us access that box remotely). The other option is to put more pressure on Chia to finally get it fixed.
This problem may also be related to slow syncs (where the main start_full_node process saturates its core), as it comes down to synchronization between different tasks. Fixing it properly therefore has the potential to speed up syncing as well (e.g., fewer issues during mild dust storms).