For the past two to three weeks, the mempool has been about 20x to 30x higher than normal. As dust storms go it is really not that big; however, this one is lasting a really long time. A potential side effect is that the blockchain DB has started to grow rather fast. If I am not mistaken, three or four weeks ago it was still well below 60GB, or rather a tad over 50GB, whereas today it is already over 70GB. So the current DB growth is potentially 10-20GB/month. One problem is that syncing from scratch will take longer and longer, but another is that DB size affects DB access speed, and this will put further strain on nodes that may not be set up in the best way.
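To sanity-check that growth figure, here is a rough back-of-envelope calculation. The start size, current size, and elapsed weeks are my own loose observations from above, not official numbers:

```python
# Rough estimate of Chia DB growth based on observed sizes (approximate values).
weeks = 3.5        # "three / four weeks ago"
start_gb = 52      # "a tad over 50GB"
now_gb = 70        # "already over 70GB"

growth_per_week = (now_gb - start_gb) / weeks
growth_per_month = growth_per_week * 4.33   # average weeks per month

print(f"~{growth_per_week:.1f} GB/week, ~{growth_per_month:.0f} GB/month")
```

With these assumed numbers it lands around 20GB/month, so the 10-20GB/month ballpark above seems plausible, if anything on the low side.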
I guess we can only do so much by moving the DB to NVMe or reducing peers; however, this should be a signal to Chia to get the DB problem fixed, as more farmers may not be able to catch up.
I think the Chia team should require a minimum fee of 1 mojo on each transfer (this would stop the storms). I'm sure the dust storms are responsible for the fast growth of the DB. Is this growth sustainable for farmers and for the blockchain itself?
And another question: why would anyone want to cause these dust storms? I'm sure there are people interested in taking this project down, out of fear that it will make other blockchains obsolete.
At other times I think the Chia team is allowing this in order to check the responsiveness of their blockchain…
But Chia team, this situation is not sustainable!
We can run bluebox timelords to compact 'mostly empty' blocks on the blockchain. This shrinks the DB and makes syncing faster, because validation of compacted blocks is much quicker. I just started reading up on this subject and plan to run one on a spare machine.
v1.3 shrinks the DB considerably.
I disagree. The network is self-regulating. There can only be so many transactions per minute. If you want your payments to go through faster, add a small fee. Fees will establish themselves as network adoption increases. No need for countermeasures.
Just some examples:
Maybe I'm trying out a new payment system for a new project, testing how much throughput Chia can handle and how reliable the network is. There are many possible causes.
Guess what happens if chia gets more adopted:
- Transaction fees rise
- Transaction amount rises
- DB will grow over time
Whether some unknown individual runs a "dust storm", or whether you manage to stop that dust storm with minimum fees, doesn't change anything. In the end, transaction volume will rise.
The Chia blockchain is too big for the short time it has been running. Compare it to Bitcoin, which is only about 380GB after 14 years.
The new v2 database, introduced with the upcoming v1.3, is now 38GB after nearly a full year.
So not that extreme, taking into account that it caters for a higher number of possible transactions per second.
Bitcoin can allegedly handle 4.6 transactions per second, Chia 20.
I don't know the exact numbers, but if each and every transaction gets logged, it seems natural to me that higher transaction throughput leads to a faster-growing blockchain.
The Bitcoin blockchain itself supports a maximum of 7 transactions per second. There are solutions that work around this limit, but comparing the base blockchains, Bitcoin vs Chia is 7 vs 20.
The Hoff has recently stated 35 for Chia.
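Those TPS numbers alone suggest why the two chains grow at different rates. A hypothetical sketch of the relationship, assuming sustained full capacity and a made-up average transaction size (`avg_tx_bytes` is not a measured figure for either chain):

```python
# Hypothetical: how sustained transaction rate translates into chain growth.
# avg_tx_bytes = 500 is an assumed placeholder, not a real on-chain average.
def yearly_growth_gb(tps: float, avg_tx_bytes: int = 500) -> float:
    seconds_per_year = 365 * 24 * 3600
    return tps * avg_tx_bytes * seconds_per_year / 1e9

for name, tps in [("Bitcoin", 7), ("Chia", 20)]:
    print(f"{name}: ~{yearly_growth_gb(tps):.0f} GB/year at full capacity")
```

The point is only the ratio: all else being equal, roughly 20/7, i.e. about 3x faster growth at full capacity, so comparing raw DB sizes per year of operation is misleading.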
Well, my understanding is that blueboxes reduce the size by a single-digit percentage, so not that much. Although I may be wrong about that.
As for speed of block validation, that is not the problem at all. If you check how resources are spent right now, it is mostly the main start_full_node process that is hosed (100% utilization of its core), and that process is not crunching any data. The sub start_full_node processes crunch the data, but they are mostly starved. That said, when you sync from scratch, during roughly the first 50% of syncing the bottleneck is those sub start_full_node processes, so faster validation would help during that phase (it would shorten a sync from scratch overall, but not help with staying synced).
Let's hope that in addition to cleaning up redundancies in the DB, they also address the code around it, as to me that is the real problem right now (the choking of the main start_full_node process). That is why people are being knocked off right now, not the block-crunching power (the sub-processes that crunch blocks are mostly idle).
Fully agree. The code offers plenty of room for optimisation.
Just compare with what vanilla plotting did (it needed 8 NVMe drives to saturate a processor): I'd plot maybe 8 plots a day max. With Madmax I reach 24 plots a day with a 6-core CPU and 2 NVMe drives.
I once had a private project crunching through some 2TB of data. At the beginning it would take a week to process, but it worked.
At some point I got so annoyed that I spent days optimizing the code. It then took only 1-2 hours to crunch through the data. That's how software development works: first get it working, then optimize.