Chia Blockchain Database Download

Hello to all!

In service of the Chia community I am pleased to announce the launch of a new Chia blockchain database download site, https://chiadownload.net.

The site offers a fast, direct download of the v2 blockchain database, with a new version published every 24 hours. The database offered for download was synced from genesis using Chia version 1.3.4.

Given recent events in the community, I saw this as a fun opportunity to do some simple Chia-related development and offer a useful public service to the Chia farming family.

The database is compressed with xz, which reduces the download size by almost 50% while still allowing fast decompression. SHA-256 hashes are calculated prior to upload and are stored on a separate web server from the downloads, so an attacker would need to compromise both the web server and the cloud storage to tamper with both the checksum and the blockchain download.
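For anyone who wants to check a download by hand rather than trust a GUI tool, here’s a minimal sketch of the verification step. The file path and published hash are placeholders, not actual values from the site:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 of a file, streaming in 1 MiB chunks so a
    multi-gigabyte archive never has to fit in RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(archive_path, published_hash):
    """Compare the local file's digest against the checksum published on the
    (separate) checksum server; tolerate case and whitespace differences."""
    return sha256_of_file(archive_path) == published_hash.strip().lower()
```

The point of fetching the hash from the second server is that both values have to agree before you trust the archive.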

Note: I don’t offer a v1 database download. If demand for it is high, I’ll consider it. Currently, the downloads are hosted on DreamHost DreamObjects, which offers affordable storage, very fast downloads, and inexpensive bandwidth.

Please reach out to me here, via DM, or email, with questions, comments, or feedback.

Thank you for reading and happy farming!


P.S. The backend is still catching up to the chain, so the current database download is a bit behind. I expect it to be synced in another 24-48 hours, at which point the download will be within 24 hours of the current block.

We should move ahead and leave the v1 database behind. Anyone not running a recent version (1.3.3 or 1.3.4) is holding us all back. (We don’t support Windows 7 anymore either, just as an example.)


Where might the download link for the v2 database be? The download on the page is for a db from Nov 2021.

Still syncing. Maybe the download link should be hidden until the db is within X blocks of the chain tip (with a ‘still syncing’ message in the meantime).

I did download it earlier today and tried to decompress it with tar, but after about 2 GB tar barfed. I didn’t try to use xz directly, though.

Also, looking at the FAQ: how can one audit that db? If there is a way to audit it, maybe some pools would be interested in running such an audit and publishing a statement that the db with checksum XYZ is verified. That would add another level of trust to this service.

I would assume that one can go through that db, read record after record, and do a field comparison. Maybe that would not take too long (a few hours?).

It also seems that you’re only providing a Linux-friendly archive right now: blockchain_v2_mainnet_2022-04-24_1150846.tar.xz

FAQ: " * Windows – Extract using 7-zip or WinRAR"

Maybe zip would be the easiest format for most people, though.


I don’t think xz decompression with tar should use that much memory, but I could be wrong. How much RAM did that machine have?
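For what it’s worth, xz decompression is streamed, so extraction shouldn’t need much RAM regardless of archive size. As a sanity check, Python’s tarfile module can unpack a .tar.xz with a modest, roughly constant memory footprint (the paths here are just placeholders):

```python
import tarfile

def extract_tar_xz(archive_path, dest_dir):
    """Extract a .tar.xz archive. tarfile streams the data through lzma,
    so memory use stays modest even for a multi-GB blockchain database."""
    with tarfile.open(archive_path, mode="r:xz") as tar:
        tar.extractall(path=dest_dir)
```

If plain `tar` keeps failing, something like this at least isolates whether the problem is the archive or the tooling.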

I think doing a row-for-row comparison of the full_blocks table would be an effective way to audit the DB. I’m not sure looking at the coin records matters as much, but both could probably be done quickly. The audit would need to be regular, as I could otherwise replace the download with a modified version only some of the time.
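One low-memory way to do that comparison is to stream each table in key order and fold every row into a single digest; if two databases produce the same digest, the tables match row for row. This is only a sketch, and the table and key names are my guesses at the v2 schema, so adjust them to match the real layout:

```python
import hashlib
import sqlite3

def table_digest(db_path, table="full_blocks", key="header_hash"):
    """Stream every row of `table` in key order and fold it into one SHA-256.

    Two databases with identical table contents yield identical digests, so
    an auditor and the download host can each run this locally and compare
    one short hex string instead of shipping gigabytes of rows around.
    """
    digest = hashlib.sha256()
    with sqlite3.connect(db_path) as conn:
        for row in conn.execute(f"SELECT * FROM {table} ORDER BY {key}"):
            digest.update(repr(row).encode())
    return digest.hexdigest()
```

A pool could publish `table_digest(...)` for a known-good node alongside the archive’s checksum, which is roughly the “official verification” idea suggested above.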

The checksums are only meant to provide a way to verify that the download isn’t corrupted or modified in transit; they are not suitable for anything else.

I considered zip briefly, but since xz reduced the download size quite a bit and seemed readily available on Windows, I chose that. I guessed that most people would already have 7-Zip or WinRAR, or be willing to download one of them. I’ll try a zip as well; it shouldn’t be a problem to offer both.

Thanks for the feedback!

As I understand it, tar is just a wrapper around xz, but maybe it really does native xz de-/compression. Not sure about it.

That box has 8GB.

I was thinking that some pool could run a db audit and publish a statement that the db behind that checksum was verified. That would make it an “official” verification.

I don’t have 7-Zip or WinRAR, so I like tgz :) But, as @drhicom suggested, zip may be an easier option for some people (it wouldn’t need a FAQ entry).

Hi, do you have any ETA for the backend sync?

I think it will be fully synced within 4-6 more hours. A slightly updated database version will appear on the site shortly. As soon as it’s caught up I’ll post another update.

An up-to-date compressed db, for free. Welcome!

LOL, well, that estimate was egregiously wrong… I guess at this rate it’ll be another couple of days.

Node running; Syncing 1721626/1898919 (177293 behind).
Last block time: Sun, 20 Mar 2022 10:07:44 -0700

Sorry for the delays. Yesterday I moved the sync to a system with an 8-core Xeon E5-2630 v3 @ 2.40GHz and 8GB RAM, which hasn’t really helped at all. Neither CPU nor disk has been the bottleneck in the syncing process: disk write speed barely cracks 40 MB/s, and it only writes every 5-10 seconds; there is plenty of memory available; and the system load average stays under 2.0 across the 8 cores.

After this is all said and done, I’m going to modify the Chia source and create a graph showing how block additions get progressively slower as the DB grows. I’ll also release some scripts that let anyone audit this database against any other Chia DB, so it can be verified on any given day. This shouldn’t strictly be necessary, since maliciously changing blocks in the database would break the chain, which the software would detect.


That speed (40 MB/s) is potentially close to the max of what that drive can do. Those are small chunks being rapidly written to that SSD, so the throughput will never be close to what is advertised for such a drive (NVMe or SSD). I see the same speed on my Samsung 970 NVMe with 10 physical cores / workers.

On the other hand, if you see disk activity only every 5-10 seconds, that implies your block-crunching processes cannot handle the workload. Either you don’t have enough cores / workers, or RAM is slowing the crunching down. Block crunching is purely CPU/RAM bound.

You could try to monitor when the blocks are retrieved from a peer, how many workers pick them up and choke their respective cores, and finally when the main process writes the results to the db. I think that is your 5-10 second cycle (the main process will choke its core at that point, while the workers go idle).

Also, block fetching happens in 32-block chunks (iirc), so depending on how many workers you have (should be 7-8 on that CPU), processing of those blocks will be serialized, and the db write will most likely happen only once all of those blocks are crunched. Assuming the workers are the bottleneck, the only way to improve that would be a CPU with more cores, or a second CPU.

By the way, the closer your box gets to fully synced, the slower the process becomes. It is really painful near the end, because at that point the main process starts engaging with other peers, slowing things down further. Cutting down the number of peers is really helpful (I would go down to 5 or so during the sync).
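If I remember the config layout correctly, the peer count lives in ~/.chia/mainnet/config/config.yaml under the full_node section; a sketch of the change (the key name is from memory, so double-check it against your own config before editing):

```yaml
full_node:
  # Default is 80; dropping it during an initial sync reduces the time
  # the node spends chatting with peers instead of crunching blocks.
  # Restore the original value once the node is caught up.
  target_peer_count: 5
```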

Your experience further explains why Chia needs to get this fixed, and why a db download is needed until that happens.

Hello all. Is a download of the latest database still a thing? I can’t seem to find one.

My farmer broke a few days ago, and although I’ve got a backup of a v1 database, I can’t work out how to upgrade it to v2 in the Ubuntu GUI, and after a reinstall it starts downloading the v2 database from scratch.

Apologies to all for the delays in getting synced. I underestimated how long it would actually take to sync from scratch. The good news: this morning the first full v2 database download was posted!

You can now download a copy that was updated in the last 24 hours from chiadownload.net.

If you want to attempt the migration yourself, you’ll need to use the CLI to perform the migration. See CLI Commands Reference · Chia-Network/chia-blockchain Wiki · GitHub

In short, using version 1.3.0 or later, you would run chia db upgrade. If your v1 database is in the default location, this should migrate it to v2. You can run the upgrade with or without the node running.
