"Latest Block Challenges" shows mixed hashes instead of a series

Guys, is this situation normal?
Usually the challenges show the same hash and appear in sequence; right now they are out of sequence and varied.

Thanks!

There are 5 different challenges in the last attempted proof.
Is this normal or should I restart the chia stack?

I'm seeing this on my node as well. Does anyone know if this is normal?

Nope, not normal. You’re missing a lot of signage points. Running 1.1.5? Do you have more than 10 connected peers?
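A quick way to double-check from the command line, assuming the chia CLI venv is activated (exact output formatting can differ between versions):

    # sync status plus the list of connected peers
    chia show -s -c

    # overall farming status (plot count, estimated network space, time to win)
    chia farm summary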

Yes, running 1.1.5 with 50 peers connected.
It has now sorted itself out without needing a restart.
It seems a lot of the signage point timestamps were old, which means I missed them at that point.
Does anyone know the reason for this?
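For anyone who wants to watch the signage point timing themselves: with the log level set to INFO in ~/.chia/mainnet/config/config.yaml, the full node logs a line for each signage point it finishes (the exact wording may vary between versions), so gaps stand out quickly:

    # follow signage points as they arrive
    tail -f ~/.chia/mainnet/log/debug.log | grep -i "signage point"

There should be 64 signage points per sub-slot, roughly one every 9-10 seconds, so long silences mean points are being missed.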

Everything is exactly the same for me.

Same here! Farming on Ubuntu, with 5 full node peers connected.

@Blueoxx So how do we resolve this situation?
Do we leave it and let it resolve on its own, or should we restart the stack?

Oh, apologies, I didn’t see the response for some reason. Here are some more questions:

  • Are you plotting on this machine?
  • Does this happen when you transfer plots to this machine?
  • Are you running the chia GUI on any other machine on your LAN?

Are you plotting on this machine?

Yes, there are 4-6 parallel plot jobs running on this machine.

Does this happen when you transfer plots to this machine?

This machine does not receive any plot transfers.

Are you running the chia GUI on any other machine on your LAN?

No, the GUI and the other components of the stack are not running on the other machines on the network; the rest are Ubuntu harvesters only.
Is it possible that parallel plotting sometimes takes up TOO many resources, causing a lag?

Thanks! :slight_smile:

I’m wondering if that’s what’s happening, possibly when the plotters are writing the plot files out to storage. But you say this happens all the time, right?

It happens randomly!
The node going out of sync doesn’t make sense when plots are being written to their final destination.
However, plotting taking up a lot of resources might explain it.

Actually, a few of us (myself included) have been discovering that if the communication link to your plots (be it Ethernet, USB, or SATA) gets saturated (such as with writing a plot to storage) you experience some farming lag. Super weird and makes no sense? Yes. Is there a solution? I haven’t seen one yet. I think wolf’s implementation of using a completely different network for moving plots might be the closest to a solution I’ve seen.

I found out I had this issue when transferring plots to a USB drive that was on a Hub with other USB drives. Next time a plot is being copied to disk, check and see if you’re experiencing the issue.
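One way to check is to watch the harvester’s proof-lookup times in the log while the copy is running (this needs INFO-level logging, and the exact message text may differ between versions); lookups that jump from fractions of a second to several seconds during the copy point at the I/O link being saturated:

    # the harvester logs one of these lines per signage point, including the lookup time
    grep "plots were eligible for farming" ~/.chia/mainnet/log/debug.log | tail -n 20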

I am doing this in the cloud, and the storage is a cloud-attached volume as well! There is a possibility that the storage connection’s throughput quota is saturated.
However, my plots are staggered at 30-minute intervals, and the time to copy one plot is under 600 s.
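One way to verify would be to watch the volume’s utilisation while a plot is being copied, assuming the sysstat package is installed on the VM:

    # extended per-device stats every 5 seconds; %util near 100 and a rising await
    # during a copy would suggest the attached volume is hitting its limit
    iostat -x 5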

So this problem returned!
Running 1.1.6, not much load on the server, and not much writing to the destination going on either!

I’ve been looking at it but have only caught one instance.

This problem KEEPS on occurring! :frowning:
It’s just doing 12 parallel plots staggered by 30 minutes!
Are there any adverse effects of this condition?

Yes. This is why you always have to leave 2 threads and 2 GB of RAM free for non-plotting tasks.
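For reference, the per-job usage can be capped when each plot is started; these are the standard chia plots create flags, with placeholder temp/final paths:

    # -r = threads per job, -b = buffer size in MiB; keep some headroom for the node and farmer
    chia plots create -k 32 -r 2 -b 3389 -t /path/to/temp -d /path/to/final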

But my available resources are ample! :frowning:

Could it be your network connection? High packet loss occurring somewhere along the route?

https://packetlosstest.com/
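If you’d rather check from the command line, a long ping run reports the loss percentage in its summary, and mtr (if installed) shows on which hop the loss occurs; 8.8.8.8 is just a stable public host used as an example here:

    # 100 pings; the summary line reports the packet loss percentage
    ping -c 100 8.8.8.8

    # per-hop loss report along the route (requires the mtr package)
    mtr -rwc 100 8.8.8.8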