1.3.1 Upgrade, how's it going?

Just checking in with the community to see if anyone has stepped up to 1.3.1? I'm still at .11, waiting for the shakeout and a stable version. I know changes were added in this version, so hopefully it's better for those who choose to upgrade. We look forward to your updates.

Upgraded from 1.3.0 to 1.3.1 on Ubuntu 20.04 desktop.
Used the downloadable installer (.deb file).
On first run it uninstalls/removes the earlier version; running it again installs the new version.
Normally that's all there is to it, but this time the first run kind of stalled.
After a reboot it worked as it should; probably just a one-time glitch on my system.
1.3.1 runs fine; farmer and pool reward addresses are visible in the GUI again. I had no problem with my PlotNFT on 1.3.0 to begin with, but 1.3.1 should fix the issues some people had there.
Several other issues are fixed, as listed in the release notes, that I hadn't even noticed up till now, but I believe them ;-)

The only issue for me now is that I wanted to switch to 1.3.x for the v2 database with its smaller footprint, but in the meantime the db and wallet together have grown to 95 GB (!), so I don't have enough disk space (120 GB) to run the v1-to-v2 converter…


I wonder what's in 1.3.1 that's not in the 1.3.137 beta?

How large is your boot drive, where chia is installed along with the database?

When you run the db upgrade, you can explicitly specify src/dst locations, so you can put your destination on a separate drive (e.g., your plotting drive) and, after it is done, swap your dbs.
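A sketch of what that looks like; the option names here are my assumption from memory, and both paths are hypothetical examples, so verify with `chia db upgrade --help` on your version first:

```shell
# Hedged sketch: flag names and paths are assumptions, check --help first.
SRC="$HOME/.chia/mainnet/db/blockchain_v1_mainnet.sqlite"   # hypothetical v1 db path
DST="/mnt/plots/tmp/blockchain_v2_mainnet.sqlite"           # destination on a roomier drive

chia db upgrade --input "$SRC" --output "$DST"

# When it finishes, move the v2 file into the db folder (or point
# config.yaml at its location) and restart chia.
```

This sidesteps the disk-space problem because only the finished v2 file ever needs to live on the boot drive.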


Really good to know, thanks :smiley:

Upgraded from 1.2.11 to 1.3.1 tonight, and upgraded the DB, all appears to have gone well.


Installed on a minor farm and it worked okay. Then I realised it had knocked all the other farms off the pool. Upgraded all of them and they returned to communicating with the pool. Upgraded the DB successfully on the minor unit and am currently running the same upgrade on all systems.
I would say the farms reduce their efficiency during the db upgrade but otherwise seem to be going okay. I also note that the systems stop farming on completion of the DB update; the Chia app then needs the database path in config.yaml updated and a restart.

Updated from 1.2.11 to 1.3.1 yesterday.

I wanted it to re-generate config.yaml from a new prototype, so before installing I deleted the old config.yaml. Yes, it created a new config with a lot more options and optimized defaults, but for some reason it didn't recognize the blockchain db and started syncing from scratch… Of course I didn't like that, so I restored everything from backup and re-ran the update. The blockchain db then synced quickly.

I still wanted to update the config file with the new stuff, so I manually compared "old" vs "new" and selectively modified values in the live config. At some point the GUI crashed (must have been due to my meddling) and I had to force-quit the chia processes. Ditched the GUI and ran the CLI (as usual).

The wallet, it seems, had to sync from scratch no matter what, and it took some 8 hours to finish, thanks to the dust-storm garbage of the last months.

Yet to perform db upgrade. Just waiting for things to stabilize.

I guess it would be nice if a clean config.sample.vX.Y.Z were generated during the install.

Also, I don't understand why they strip comments from config.yaml, and this happens not just during updates but during restarts. For example, I comment out unused pools; those comments are first stripped, and then on the next restart the uncommented pools are regenerated. Kind of backwards.


I guess the chia software deserializes it into memory as an object and then re-serializes it whenever configuration changes need to be persisted programmatically. The comments are discarded at the very beginning, during deserialization. It all depends on the Python library they use; I am not familiar with it, but perhaps it is possible to tweak it so it preserves comments (and send a pull request).
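A toy demonstration of why the comments can't come back: once a "load" step discards them, the subsequent "dump" step has nothing to write out. This is not chia's actual code, just a sketch of the load/dump cycle using sed:

```shell
# Toy sketch: a parse step that drops comments, followed by a dump of
# what survived - the comments are gone for good, as in a YAML
# load/dump round-trip. The pool URLs are made-up examples.
config='# my pool list
pool_url: https://pool.example
# backup_pool_url: https://backup.example'

# "load" = strip comments and blank lines; "dump" = print the remainder
printf '%s\n' "$config" | sed -e 's/[[:space:]]*#.*$//' -e '/^$/d'
```

Only the uncommented `pool_url` line survives the round-trip; both comment lines are lost.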


So I migrated DB yesterday. It took ~80 minutes overall on my system.

I've been watching the debug log, CPU, memory, and IO. There was no impact on farming.
Some observations:

  • Mild CPU spike - up to 25% peaks (Intel i7 6-core CPU)
  • The migration process peaked at around 1 GB RAM usage
  • Up to 60% IO load on the disk hosting the chia DB files, but no significant bottleneck in read/write speed, just internal IOPS (Samsung 970 EVO Plus 2 TB NVMe, dedicated only to chia DB files)

So the bottleneck is the quality of the disk hosting the DB files. It was a breeze on a high-quality NVMe; a low-quality SSD would take longer, and an HDD would be worst.


You serialize/deserialize objects that are not meant for user editing. Whatever is meant to be user-editable should not be handled like that. I mean, it can be deserialized, but it should not be re-serialized (i.e., clobbering part of the user's editing).

I have just finished an offline upgrade (chia was stopped). I also have a 970 EVO Plus, but 1 TB. I used the default params (no params), so both the src and dst dbs were on the same NVMe.

The NVMe load was also around 60%. However, as the process deals with rather small chunks, it was averaging around 100-120 MBps (at least during the very first phase). My understanding is that NVMe speed may degrade faster than SSD speed as read/write chunks get smaller, so it may not be that bad on an SSD. Also, the read load was about 3x higher than the write load, so spreading this process across two drives will potentially not improve those speeds.

Also, the db is a live file in which updates can modify potentially any record. Therefore, if possible, I would prefer to do this when chia is not running. We have seen too many problems with db corruption, and we don't have any tools to check the integrity of those dbs.

Maybe the best option would be to create a copy of the blockchain db before upgrading to v1.3.x (it takes about a minute or two on NVMe), then run the newly updated chia to check for potential issues while upgrading the backup db to v2. This way, whenever it is convenient, a fast chia restart will pick up the new db and do a quick resync (yes, that v2 db would need to be moved to the db folder and config.yaml pointed at it; it also requires more disk space).
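A sketch of that copy-first workflow; the paths and option names (including `--no-update-config`) are assumptions from memory, so verify with `chia db upgrade --help` before relying on them:

```shell
# Hedged sketch of the copy-first workflow; flags and paths are
# assumptions, verify against your chia version.
DB_DIR="$HOME/.chia/mainnet/db"   # hypothetical default db location
cp "$DB_DIR/blockchain_v1_mainnet.sqlite" "$DB_DIR/blockchain_v1_backup.sqlite"

# Keep farming on the original v1 db while the backup copy is converted;
# --no-update-config (if available) leaves config.yaml untouched for now.
chia db upgrade --input "$DB_DIR/blockchain_v1_backup.sqlite" \
                --output "$DB_DIR/blockchain_v2_mainnet.sqlite" \
                --no-update-config

# Later, at a convenient time: point config.yaml's database_path at the
# v2 file and restart chia, which then quickly resyncs the gap.
```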


That's a basic hygiene principle; it goes without saying. Every time you need to stop the farm, do a backup.

To your point about the live upgrade, SQLite is able to vacuum a database while it is in use, so no surprise there.


I like the new update and the stability of network connections. The logs have been error-free for several days, although previously there were up to 20 errors per day.

I had no issues upgrading three full nodes and six harvesters from 1.2.11 to 1.3.1. The DB upgrade was easy and worked well. After the upgrade, everything started running without getting stuck, and all the previous settings came across without issue.

Most were source installs from git on Ubuntu 20. One full node had previously used the Debian package because of a bug running the GUI. Upgrading the source and using the previous deb install worked seamlessly.


Mine were Windows, and no problems so far…

All good. Been running fine since.

The chia show command doesn't work on the 64-bit Raspberry Pi OS.