Hi, I was wondering: what if my farmer only has an uptime of 99 or 98% (or even less), so I miss some signage points while the farmer is down? Is my chance to win XCH drastically reduced because of this, or does it more or less stay the same? Who would like to share their knowledge on this?
It’s all time and space, man.
You gotta be there with your cabbages or they ain’t gonna sell themselves.
From my understanding it is pretty linear.
If you spend x% of your time unsynced, whether that is due to your farmer not running or because you have peering problems, you miss x% of the challenges and thus (on average) x% of the rewards.
Has anyone tried just connecting for the challenges every ten minutes? I suppose the failure rate isn't worth the hassle vs idling.
The bigger question is whether you can somehow force deep idle on (or turn off) plot-only HDDs while they are not being challenged, and still keep it all well under 30 seconds to spin them back up again.
In theory, if the odds even out to 1 in 512 plots every 10 minutes, a 5TB HDD with 45 plots would only have to spin up about every 2 hours.
If spin-up time including plot access is consistently possible within this 30-second eligibility threshold, this would be a cost argument for smaller old HDDs or 2.5-inch USB portable-class drives with 5TB max capacity each. Spin-up-induced wear failures would be a lesser concern then; if they eventually fail, simply replace them at low cost and replot.
At best I expect 1/10th of the power usage overall, which would make quite a significant difference when running 20+ disk harvesters, where you would otherwise have 3 to 5 watts at idle per disk.
On second thought, this seems like a serious flaw in the PoC green argument: why potentially rule out cold storage at all with that arbitrary 30-second cutoff for all(!) plot sizes? Why not allow multiples of this lookup time for higher k values? It would take longer to determine a challenge winner, but I don't see why this has to be immediate.
Or even broadcast an eligible plot list somewhat in advance, so farmers know when to spin up. Farmers could even specialize in a fraction of those challenges. In the extreme, if you go all in on the 1/512 chance, you would only have to power up twice a week (10 min × 512 equals about 3.5 days).
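Here is a small Python sketch of that arithmetic; the 1-in-512 filter and the 10-minute spacing between challenges are the simplifying assumptions of this post, not exact protocol timings:

```python
# Expected spin-up interval for a drive, under this post's assumptions.
FILTER = 512          # on average 1 plot in 512 passes the filter per challenge
CHALLENGE_MIN = 10    # assumed minutes between challenges

def expected_spinup_interval_min(plots_on_drive: int) -> float:
    """Average minutes between filter hits for a single drive."""
    return FILTER / plots_on_drive * CHALLENGE_MIN

for plots in (45, 1):
    hours = expected_spinup_interval_min(plots) / 60
    print(f"{plots:3d} plots -> spin up roughly every {hours:.1f} hours")
# 45 plots -> about every 1.9 hours; 1 plot -> about every 85 hours (~3.5 days)
```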
You would drastically reduce the lifetime of your disks if you are going to spin them up and down so often.
You would also probably end up using more power - spinning up is when HDDs are at their peak draw.
I can't imagine spinning it up for a few seconds once every 2 hours using less power overall than just running the disk for those 2 hours.
I admit that taking 10 seconds to spin up may affect the chance of winning a reward - even if I can meet a challenge, I would be far down the list if there were multiple plots that could meet that challenge and the fastest wins. How often this is the case could determine the choice outright.
I concede the lifetime effect, but it really depends on the risk/reward balance between power usage/costs vs the remaining disk lifetime (unless it is pooling and subject to increased challenges for the pool).
Personally I have LOTS of 3TB disks, each with 27 plots - I don't mind if they fail more often vs saving 90% on power. That calculation has yet to be made, but it is not so cut and dried, and very individual, based on the age of the media and acquisition cost.
I don't know this for certain and I will need to do my research here. Some drives, especially older drives, have horrible burst power efficiency. There was always a myth in the server community that when starting up a server or loading up a new bank of drives, you didn't want to spin up or activate multiple SAS/SATA ports at the same time because you could overdraw the burst rating of the backplane or PSU.
That only requires a surge to 200-300% of steady-state power draw, for just a few seconds of access - how rarely would accesses need to happen to save a LOT of power?
My main concern is how often a successful challenge response still loses because it is slower than normal - I am guessing this will get worse as netspace gets larger.
I haven't measured the current usage on my disks yet, but I will be doing so soon, as I have to size DC-DC 5V converters running off a server PSU which only offers 12V.
Exactly this; even 2.5-inch USB drives draw slightly below 3 W for me on average. Consumer drives are totally fine with spinning up and down a few times a day for their estimated lifetime. The power burst is a myth.
3.5-inch drives are somewhere around 5 to 8 W.
chiapower.org has a nice chart showing spin-up peak power draw at about 5x idle: 20-25 W for a mere 6 seconds vs the indefinite draw of the idle power.
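A quick back-of-the-envelope check of the spin-up burst concern with those rough figures (25 W for ~6 seconds at spin-up, ~5 W at idle; assumptions from this thread, not measurements):

```python
# Energy of one spin-up burst vs. time spent idling at the same cost.
SPINUP_W, SPINUP_S = 25.0, 6.0   # assumed peak draw and duration of one spin-up
IDLE_W = 5.0                     # assumed steady idle draw of a 3.5" drive

spinup_energy_j = SPINUP_W * SPINUP_S         # ~150 J per spin-up
breakeven_idle_s = spinup_energy_j / IDLE_W   # idle seconds that cost the same

print(f"one spin-up burst ~ {spinup_energy_j:.0f} J "
      f"~ {breakeven_idle_s:.0f} s of idling")
# -> ~150 J, i.e. one burst costs roughly 30 s of idle time, so spinning
#    down between rare challenges comes out well ahead.
```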
At k32, an 8TB HDD with 72 plots has about a 1 in 8 chance of being eligible for a given challenge.
With k32, that means idling for 80 minutes at 8 W vs Chia-optimized use: a spin-up plus challenge evaluation taking another 5 seconds.
Let's say the whole cycle can be completed in 0.5 minutes at an average of 10 W with a forced spin-down after the challenge, vs 80 minutes at 5 W (taking the low end of idle) of guaranteed pointless idling. So 5 Wmin vs 400 Wmin (watt-minutes).
That's only about 1.25% of the power needed before. Of course you will have overhead from controllers and hosts, but for setups with 8+ disks, reducing power use to 20% or less should easily be doable.
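The same comparison as a tiny script, with the assumptions above spelled out (5 W idle, a 0.5-minute wake cycle at an average of 10 W, one filter hit per roughly 80 minutes for this drive):

```python
# Watt-minutes per filter hit: always-on idle vs. spin-down between hits.
IDLE_W = 5.0                      # assumed idle draw
IDLE_MIN_BETWEEN_HITS = 80.0      # expected minutes between filter hits
CYCLE_W, CYCLE_MIN = 10.0, 0.5    # assumed average draw and length of one wake cycle

always_on_wmin = IDLE_W * IDLE_MIN_BETWEEN_HITS   # ~400 Wmin per hit
spin_down_wmin = CYCLE_W * CYCLE_MIN              # ~5 Wmin per hit

print(f"always on: {always_on_wmin:.0f} Wmin, spin-down: {spin_down_wmin:.0f} Wmin "
      f"({spin_down_wmin / always_on_wmin:.1%} of the original)")
# -> roughly 1% per drive, before controller and host overhead
```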
I also found this at the end of the chiapower.org site, so this is not a novel thought of mine.
It is also finally a good reason for higher-k plots in the not-so-far future: reducing spin-up wear and power usage!
At k34, an 8TB HDD would only be challenged every ~6 hours, and so on. Consumer HDDs should be fine with this type of wear, and only powering them on for 30 seconds would mean minimal temperature-change stress too. I simply can't find any data that covers this use case, since the old stories of hot datacenter HDDs failing don't apply to this scenario.
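A rough sketch of how that wake-up interval grows with k, under the same simplified assumptions as before (1/512 filter, one challenge every 10 minutes, plot size roughly doubling per k step from ~101.4 GiB at k32; real plots are slightly larger):

```python
# Plots per 8 TB drive and expected wake-up interval for a few k values.
K32_PLOT_GIB = 101.4
DRIVE_GIB = 8e12 / 2**30          # an 8 TB drive expressed in GiB

for k in (32, 33, 34):
    plot_gib = K32_PLOT_GIB * 2 ** (k - 32)   # rough doubling per k step
    plots = int(DRIVE_GIB // plot_gib)
    interval_h = 512 / plots * 10 / 60
    print(f"k{k}: ~{plots} plots per 8 TB -> wake up roughly every {interval_h:.1f} h")
# k32 -> ~1.2 h, k33 -> ~2.4 h, k34 -> ~4.7 h (in the ballpark of the ~6 hours above)
```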
That is excellent information, given that mine are older 7,200 rpm SAS 3TB drives with 27 plots on them - if that can be reduced to just a few startups a day, it will allow me to run the whole setup on FAR LESS power at the cost of some added wear and more frequent failures. I don't know what my resale overhead is for these drives, so it comes down to the average failure cost compared with the power saving + MY TIME to track failed disks and replot (if I feel it's worth it).
However, it would be more likely to be a problem in a pool, as the pool would likely use more frequent challenges to assess the plot collection - so it might only be viable with solo farming.
My collection is 90% plotted, but it will be something to consider when I replot for pools (I might not join a pool as I have over 3000 plots). There seem to be few downsides to large plots, especially if my 2nd pass will largely be using SAS HDD temporary drives rather than constrained SSDs, and they are good for 900GB.
I am hoping to get away with 1 primary farmer (an efficient Ryzen 3600) with 3 SAS expanders - 96 disks (and maybe a few more internal) - and a non-committed backup machine for farming should the primary have problems; that limits the power use as much as possible. The Ryzen 3600 will also stand in for other duties, as my day job is as a software developer and I can use it for remote tasks.
Sometimes they will power up anyway, because some disks will be committed to serving media or to backup/archival duties.
It looks like at least half of my disks (the Seagates) will spin up from standby in about 8 seconds - which is completely unacceptable if challenges are addressed serially, but (barely) acceptable if they are executed in parallel - with a power saving of about 55%.
However, I can set them to unload the heads/servo and lower the RPM for a 35% saving and a recovery time of only 1 s, which looks really useful (especially if challenges are addressed in parallel).
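A rough per-drive, per-day comparison of those two low-power states; the 9 W always-on baseline is only a placeholder guess for an older 7,200 rpm SAS drive, while the 55%/35% savings and the wake times are the figures from above:

```python
# Daily energy saved per drive for the two low-power states described above.
BASELINE_W = 9.0        # assumed always-on idle draw (placeholder value)
HOURS_PER_DAY = 24

modes = {
    "full standby (~8 s wake)": 0.55,               # fraction of baseline saved
    "low RPM, heads unloaded (~1 s wake)": 0.35,
}

for name, saving in modes.items():
    wh_saved = BASELINE_W * saving * HOURS_PER_DAY
    print(f"{name}: ~{wh_saved:.0f} Wh saved per drive per day")
# Full standby saves more, but the ~8 s wake risks missing the response window
# if lookups run serially; the low-RPM state trades some saving for a ~1 s wake.
```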