Dust storm Feb 11

Is it just me or is this dust storm going on for a while?

I don’t think it’s unusual for them to last a couple days.

I’ve been getting some of these:
2022-02-12T10:00:35.717 full_node chia.full_node.full_node: WARNING Block validation time: 2.99 seconds, pre_validation time: 0.93 seconds, cost: 5248784388, percent full: 47.716%
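If you want to see how often those warnings show up, here is a minimal sketch (not an official tool) that tallies them from the debug log; the default path under ~/.chia/mainnet/log and the message format (taken from the line above) are assumptions, so adjust for your setup:

```python
# Rough sketch: tally "Block validation time" entries from the Chia debug log
# and report how many exceeded a threshold. Log path and message format are
# assumptions based on the line quoted above; adjust for your install.
import re
from pathlib import Path

LOG = Path.home() / ".chia" / "mainnet" / "log" / "debug.log"
THRESHOLD = 2.0  # seconds; arbitrary cutoff for "slow" validations

pattern = re.compile(r"Block validation time: ([\d.]+) seconds")
times = []
for line in LOG.read_text(errors="ignore").splitlines():
    m = pattern.search(line)
    if m:
        times.append(float(m.group(1)))

slow = [t for t in times if t > THRESHOLD]
if times:
    print(f"{len(times)} validation entries, {len(slow)} over {THRESHOLD} s, max {max(times):.2f} s")
else:
    print("no 'Block validation time' lines found")
```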

What does the dust storm in the log file look like?

If your node is capable, you won’t notice anything; if it’s not, you’ll possibly see rising response times or lose sync.

The best way to tell it’s a storm is to look at graphchia.com: you’ll see a high transaction count but a low average transaction amount.

Pretty continuous high transaction volume, with the average transaction in the 0.01-0.04 XCH range. I like it. All nodes need a lot more of this, done in as many ways as possible, so everything works transparently.


It’s easier to look at the mempool size.
Still on the same link, though.
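If you’d rather check from your own machine than from a chart site, the full node RPC reports the mempool size. A minimal sketch, assuming a default install (RPC port 8555 and the standard certificate paths under ~/.chia/mainnet/config/ssl):

```python
# Minimal sketch: query the local full node's get_blockchain_state RPC and
# print the mempool size. Assumes the default RPC port (8555) and standard
# cert locations under ~/.chia/mainnet/config/ssl; adjust for your setup.
from pathlib import Path

import requests  # third-party: pip install requests

ssl_dir = Path.home() / ".chia" / "mainnet" / "config" / "ssl" / "full_node"
cert = (str(ssl_dir / "private_full_node.crt"),
        str(ssl_dir / "private_full_node.key"))

resp = requests.post(
    "https://localhost:8555/get_blockchain_state",
    json={},
    cert=cert,
    verify=False,  # the node uses its own self-signed CA
)
state = resp.json()["blockchain_state"]
print("mempool size:", state["mempool_size"])
```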

Yeah, looking at those charts, this is more like a smoke belch than dust, let alone a storm.

If your node is affected by this one, this may be a good time to check why it suffers, as this is really just a small increase over the daily averages.

You may want to check:

  1. Your CPU usage (whether your node is choking; see the sketch after this list)
  2. Your stales on the pool side
  3. The status of your connected peers (whether most of them look healthy and you see the Up/Down column being updated for most of them)
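For the CPU point, a quick sketch along these lines can help; it uses the third-party psutil package, and the process-name matching is just an assumption about how the Chia daemons show up on your box:

```python
# Rough sketch: show overall CPU load plus the CPU share of the Chia full node
# process(es). Uses psutil (pip install psutil); matching on "chia" and
# "full_node" in the command line is an assumption about process naming.
import psutil

print(f"Total CPU: {psutil.cpu_percent(interval=1):.0f}%")
for proc in psutil.process_iter(attrs=["pid", "cmdline"]):
    try:
        cmd = " ".join(proc.info["cmdline"] or [])
        if "chia" in cmd.lower() and "full_node" in cmd:
            print(f"pid {proc.info['pid']}: {proc.cpu_percent(interval=0.5):.1f}% CPU")
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue
```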

Your logs are mostly focused on your local stuff (harvesting, etc.), so there may not be much there to look for. Although, in December, my node started producing tons of worthless entries, making those logs rotate every 5-10 minutes (and rendering them useless). Actually, that extra log traffic may contribute to killing DB performance if the logs and DBs share the same media (which is why you may want to do more than just symlink the whole mainnet folder).

Oddly, my old 8350 with a cheap older-gen SSD has fared fine. Nothing over 5 seconds, and it’s normal for that box to be in the 1 to 5 second range for over 90% of my responses.

More fuel to the fire that it wasn’t my hardware causing the disconnects that the .11 release seems to have brought with it.

That is actually what I wanted to stress above. Those are your internal lookup times, so they should not be affected by those storms (unless the CPU is hosed, but I don’t think that is the case with these storms). Eventual problems come from the node not being able to send that data to the pool, or the pool also being screwed up and not accepting those proofs on time. Therefore, stales are a better indicator.

Indeed, I was just referencing my earlier disconnects, where someone advised it could be my CPU or SSD not being up to the task, which this kinda proves is not the issue.

I blame a buggy .11 release, seeing as many have the same disconnects.


My lookups (solo, no pools) stay >98% under 1 sec, >60% under 0.5 sec, with a few around 0.2 sec. OS, logs & DB are on a 500 GB SATA SSD. Occasional quick bursts up to 20% CPU recently, on an i5-11400. That’s on the .11 release, with no issues since release.
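For anyone who wants to pull similar percentages out of their own logs, here is a rough sketch; the "Time: … s." harvester line format and the default log path are assumptions, so adjust to your install:

```python
# Rough sketch: build a distribution of harvester lookup times from the Chia
# debug log. The "Time: ... s." line format and log path are assumptions.
import re
from pathlib import Path

LOG = Path.home() / ".chia" / "mainnet" / "log" / "debug.log"
pattern = re.compile(r"Time: ([\d.]+) s\.")

times = [float(m.group(1))
         for line in LOG.read_text(errors="ignore").splitlines()
         if (m := pattern.search(line))]

if times:
    for cutoff in (0.2, 0.5, 1.0, 5.0):
        share = 100 * sum(t < cutoff for t in times) / len(times)
        print(f"< {cutoff} s: {share:.1f}%")
    print(f"max {max(times):.2f} s over {len(times)} lookups")
else:
    print("no lookup time lines found")
```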

I’d want to try and eliminate multi-second responses. A few seconds is quite a long time to get an answer out, no matter what’s allowed. And what if you get a proof to verify? I’d think an FX-8350 and that SSD should be good enough, so the delay is likely elsewhere. A farmer with remote plot files, perhaps?

My times steadily increased the more plots I added. Nearly all are connected via USB though, and with 30 disks connected via one port, I’d imagine that’s why.

Ah… I see… I use one 16-port + 2x 7-port, all USB 3.0 hubs, but they all seem to respond OK. Perhaps 30 is too many?

That’s my guess.
Glad you’ve had no issues with .11; so many people have had the same issue I had:
connected, but it just stops syncing and needs a restart, after which it’s fine again.

Fingers crossed that v1.3 will fix it all and not cause more problems :face_with_monocle:


Won on Feb 12 CET / Feb 11 US time. Guess I should thank the dust storm :wink: