Running out of space

Hi, I'm checking on next steps so I don't get into a world of pain…

Win 10, 250 GB SSD as the C drive
Running Chia 1.2.10, 100 TB farm (local and 1 harvester)
The Chia install is on the C drive at C:\Users\admin\.chia\mainnet and the folder is 122 GB
Currently have 3 GB of free space…

I've read I need to be on 1.3 for a hard fork, and the SQLite DB v1 is replaced with v2, which will shrink the DB, but I need space to replace v1 with v2! So I'm stuck between a rock and a hard place.

Any advice appreciated so I don't crash it all.

You'll have to get another SSD.
Stop the farm, move the entire .chia directory to the new drive, then create a directory junction (similar to a symlink) from the old location to the new location.
Start the farm.
The farm will “think” it still uses the old .chia location, but in reality all I/O will be on the other drive.
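
A minimal sketch of those steps from a Command Prompt (mklink is a cmd built-in), assuming the new drive is D: and the default install path under C:\Users\admin — adjust to your setup, and note the chia command needs your Chia environment on PATH:

```
:: shut down the farmer/full node and the daemon first
chia stop all -d

:: move the whole .chia directory to the new drive
robocopy C:\Users\admin\.chia D:\.chia /E /MOVE

:: create a directory junction so the old path still resolves to the new drive
mklink /J C:\Users\admin\.chia D:\.chia

:: start everything back up
chia start farmer
```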


Get any bigger SSD (500 GB and up), clone the current 250 GB SSD to it, swap in the clone, and continue. Super simple, super easy. I've done this process to go from 500 GB → 1 TB on two PCs this year already.


Worst case you can skip the migration, delete your v1 blockchain DB, and let 1.3.x re-sync from the start. This will take a few days, or you can download a v2 database to start with from a place like chia-database.com.
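
In command terms that worst case is roughly the following (a sketch; paths assume the default Windows install, so double-check before deleting anything):

```
:: stop the node, remove the v1 database, then let 1.3.x sync from scratch
chia stop all -d
del C:\Users\admin\.chia\mainnet\db\blockchain_v1_mainnet.sqlite
chia start node
```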

If you have an external HDD, move the blockchain DB there and run the DB upgrade with the source on the external drive, writing the output to your 250 GB C drive. The DB upgrade command provides options for specifying the input and output files, rather than using the defaults.
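
For example (a sketch, assuming the v1 DB was moved to an external drive mounted as E: and default file names; run chia db upgrade --help on your version to confirm the exact flags):

```
chia db upgrade --input E:\blockchain_v1_mainnet.sqlite --output C:\Users\admin\.chia\mainnet\db\blockchain_v2_mainnet.sqlite
```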

Or, as suggested, upgrade your OS drive from 250 GB to something larger that will allow you to upgrade in place and have more storage.


First migrate the database to an SSD with sufficient space and install 1.3.x, then upgrade the v1 DB. Whether you need to migrate back to the original SSD depends on your situation. This is a tutorial on database migration:
https://spacefarmers.io/wiki/guides/farming/movedb
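
The gist of the guide is pointing the full node at the DB's new location in config.yaml. A sketch, assuming a hypothetical D:\chia-db target; relative paths are resolved against the mainnet root, and you should verify an absolute path works on your install:

```
# C:\Users\admin\.chia\mainnet\config\config.yaml
full_node:
  # default is db/blockchain_v1_CHALLENGE.sqlite under the mainnet root
  database_path: D:\chia-db\blockchain_v1_mainnet.sqlite
```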


Good evening. First, treat yourself to a birthday present and get a Samsung EVO 500 GB SSD. Second, copy your files off to another USB disk for CYA, delete your node DB file, and install Chia 1.3.4. Bite the bullet and let it sync for a couple of days. You wouldn't have caught up when you were at 1.2.10 anyway. I'm trying beta 1.3.51 farming with no issues on the upgrade so far on one of my test nodes.


I have many videos that will help you with most of these issues. Go check them out.


Thanks all for the advice.

The farm went offline last night; the problem escalated pretty quickly. I had 3 GB free when I posted the thread, and it had all gone within 24 hours. I cleared a few GB, restarted Chia, and it's synced and hasn't used anywhere near that much space since. Is this a bug? I've been farming for nearly a year, and 3 GB a day would equate to over 1 TB in that time! Seems stable now.

Has anyone tried installing the SQLite toolset and shrinking the DB? I'm an MS SQL dev, so it seems a natural step…

The v2 DB is already shrunk.
Not saying it couldn't be shrunk further, but maybe it would slow everything else down.
I've not heard of anyone attempting their own further shrink.


Prior to upgrading to v2, I tried to vacuum the v1 database to see if there would be any space savings. It took a couple of hours to run (on my slower machine) and there was virtually no reduction in size. Unfortunately, I don't remember the exact numbers, but the difference was so negligible there would be no point in doing it, so I didn't save any of the details.
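
For reference, the vacuum itself is a one-liner with the stock sqlite3 shell (run it with the node stopped; note that VACUUM rewrites the whole file, so you need roughly the DB's size in free space, which is exactly what the OP lacks):

```
sqlite3 C:\Users\admin\.chia\mainnet\db\blockchain_v1_mainnet.sqlite "VACUUM;"
```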

There is probably no space to be gained from vacuuming a v2 database at this point. The other day I analyzed the storage use of the v2 database, and almost all usage is in the coin_record table. coin_record accounts for 71.1% of the entire database, storing about 16.7 GiB of data plus 18.65 GiB for its indexes. The full_blocks table accounts for another 24.2%, with very little space used for indexes.
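
A sketch of that kind of per-table breakdown, assuming your sqlite3 build includes the dbstat virtual table (it requires SQLITE_ENABLE_DBSTAT_VTAB; sqlite3_analyzer is an alternative):

```
-- bytes used per table and index, largest first
SELECT name, SUM(pgsize) AS bytes
FROM dbstat
GROUP BY name
ORDER BY bytes DESC;
```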

There are few or no deletes happening in the DB, so there is little fragmentation, and trying to shrink the DB at this point will have little to no effect.


I also ran vacuum a few times, and every time I got about a 10% decrease in size. I was running it offline, so it took about 30 minutes each time. The task was utilizing just one core (barely), but the NVMe was struggling due to small chunk reads. So, at least from what I observed, the process was not really CPU-bound; it was mostly limited by the media.

Also, as vacuum mostly defragments the file (plus reclaims space from deleted records), results may depend on how long ago the DB was created (i.e., a DB freshly resynced from scratch will not compress much).

That said, all of that was on a v1 DB, and IMO it is not worth the effort.

By the way, this rapid DB growth is due to the dust storm that started in early February. Since then the v1 DB has grown by about 30 GB per month, although early indications are that the v2 DB is growing at about 15 GB per month right now.
