NoSSD Chia Pool, +30% reward with new compressed plots, fast plotting without SSD

It was me who told Chris about that, lol, so cut him some slack, he’s not a dev.

Of course you can dynamically create the JSON files, but who does that? Usually you don’t give file extensions to dynamic query content.

It just makes it seem like you’re manually editing the files via notepad…

Regarding the “plots are not random data” claim: they are essentially created by hash functions, so I don’t see how they are not random. The only way to “compress” them is to use clever encoding schemes, like Chia already did. But finding patterns in the data and compressing based on that? No way…

The current plot format is a bit inefficient due to the way it’s structured to make lookups faster, that’s the few % that Brahm is talking about. Essentially there are unused areas in the plots due to variable sized contents, but to make lookups fast they are stored in worst-case fixed size chunks.

Given that your plotter has different levels of compression, it implies that you are probably compressing larger chunks of data with higher levels of compression. But again this seems implausible due to the random nature of the data.

Regarding the statement “a plot is created from a single plot id”: try to compress the output of sequentially running blake3(n), where n is an integer from 0 to whatever. The data essentially comes from the value zero, so it must be easy to compress?
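The point above is easy to check yourself. A minimal sketch (blake3 is not in the Python standard library, so blake2b stands in here; the argument is the same for any cryptographic hash): generate hashes of sequential integers and try a general-purpose compressor on the result.

```python
import hashlib
import zlib

# "Plot-like" data: hashes of sequential integers, starting from 0.
# blake2b stands in for blake3 (not in the standard library).
data = b"".join(
    hashlib.blake2b(n.to_bytes(8, "little"), digest_size=32).digest()
    for n in range(100_000)
)

compressed = zlib.compress(data, level=9)
print(len(data), len(compressed), len(compressed) / len(data))
```

Even though the whole stream is derived from nothing but a counter, zlib cannot shrink it at all; the “compressed” output is essentially the same size (ratio ≈ 1.0), because compressibility depends on patterns in the bytes, not on how little information generated them.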

In order to get a 20% reduction you’d have to radically change something, including the whole plotting process and the plot format. But in that case you wouldn’t have different levels of compression, just a single specific gain or none. I never really looked into compressing the plots while developing my plotter; I just figured it would be a waste of time.

In any case, if this is real, @Dawson is a genius for sure.

2 Likes

Most pools, if not all but one, are in the red. This approach can be used to minimize hardware (“H/W”) expenses.

Actually, I spoke with another small pool, and getting real-time data from the DB for their website was the main cost-driving factor for them, and the primary consideration when trying to enhance their website.

Again, I am not trying to say that this approach is either good or bad, rather stating that drawing any conclusions based on such an approach without seeing how the backend operates is just BS.

1 Like

@Max what would you say if I wrote a program that does basically this to query plots for PoS:

  1. make a plot using plotid
  2. query plot for PoS
  3. delete plot

Isn’t that the ultimate compression of a plot, down to 32 bytes?
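The thought experiment above is the extreme end of a space-time tradeoff, and it can be sketched in a few lines. Everything here is an illustrative toy, not the real chiapos plotter/prover: the “plot” is just a table of hashes recreated from the 32-byte plot id on every lookup.

```python
import hashlib

def make_toy_plot(plot_id: bytes, entries: int = 1024) -> list[bytes]:
    # "Plotting": expensive precomputation derived purely from plot_id.
    return [
        hashlib.sha256(plot_id + n.to_bytes(4, "big")).digest()
        for n in range(entries)
    ]

def query(plot_id: bytes, challenge: int) -> bytes:
    plot = make_toy_plot(plot_id)        # 1. make the plot
    proof = plot[challenge % len(plot)]  # 2. query it for a "proof"
    del plot                             # 3. delete it again
    return proof

plot_id = bytes(32)  # the only thing ever stored: 32 bytes
assert query(plot_id, 7) == query(plot_id, 7)  # deterministic, storage-free
```

The catch, of course, is that every lookup pays the full plotting cost in CPU and RAM, which is exactly why this “compression” is pointless in practice.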

This is quite an exaggeration, but OK. I’ll give you a much faster way to verify our claims:

  1. download our pool client or use our docker repository
  2. make a compressed plot yourself (use the -c 5 switch for maximum compression)
  3. query it for proofs using the --query switch I’ve added and discussed before
  4. come here and publish results

But please be honest! Can the community trust you?

if this is real, @Dawson is a genius for sure

Thank you!

Please don’t make such claims. At this point rather the opposite is the case.

1 Like

That’s a space-time tradeoff, basically the re-plotting attack. Due to hardware and power costs it makes no sense though.

Sure I can test your software, but that’s quite a bit of work.

@Jacek I know, but some developers and pool owners take our claims too personally. I’ve investigated the core structure of the plot format, while others focused only on speed optimizations, maintaining plot format compatibility. By the way, can you also help us verify our claims and make a plot and test it? I know it’s difficult and time consuming. But seriously, guys, we’ve been here for almost a week and no one on this forum has tested our client yet.

@Dawson what’s the difficulty you use for partials?

Hi Max, if you mean share difficulty, we use 18 to keep approximately 1 share per day from each plot, to lower the load on client machines.

Well, maybe you didn’t do a good enough job when starting this thread. When the information is sparse, all those irrelevant but either present or unspoken details draw people’s attention, and once people start making such claims, others start piling on, assuming that what was said before was true. So, again, I would not just bang on what others do, but rather focus on what you can do to move this forward.

As mentioned already, I am not knowledgeable enough to run a test and draw any conclusions from it. So that would just be a dumb monkey test that is worthless.

As far as my position goes, even assuming that what you are saying is real, the whole structure around it doesn’t really speak to me. There are just too many other issues that are more or less a no-go for me.

Also, as I mentioned, from my point of view, assuming that all of this is true, a better option for you would be to open up and partner with someone that already has an established name (a bigger pool, a plotter provider). Starting a pool right now is more or less suicide (IMO), and asking anyone to replot at this point is a tall order. Any advantage you think you have has really short legs: once the cat is out of the bag, others will figure out how to do the same. So tomorrow your advantage may be gone, and you’ll still be a pool that’s trying to get started.

Also, in my opinion, having a client/harvester that only supports your proprietary format is a big mistake (due to the replotting it requires). A proprietary client just smells too much like the next HPool, or any other OG pool; we have seen plenty of people on this forum asking for help with their lost XCH. I would think people could eventually consider testing a drop-in proprietary harvester that acts as a normal harvester with standard plots, but also reports your proprietary plots when connected to your pool. That would be a much easier solution to test.

2 Likes

@Jacek we considered this.

Our plotting algorithm is fast and doesn’t even require an SSD to run; it’s also able to plot to multiple HDDs at once, improving plotting performance even further.

You would have to replot into the new compressed plot format anyway. If we sold the code to HPool, you would need to replot your plots for HPool. Why do you think that is better than joining our pool? You would have to pay twice in that case: for my work and for their name.

The size of a pool doesn’t matter much. The minimum threshold for a pool is basically to mine a block a week. Mining with a pool is only better than solo mining because you share the block revenue and don’t have to wait too long. One block per week is already OK, and it doesn’t look too difficult to reach that goal.
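Rough arithmetic for that “one block per week” threshold, assuming Chia’s consensus target of about 4608 blocks per day; the netspace figure below is an assumption for illustration, not a measured value:

```python
# Expected blocks are proportional to a pool's share of total netspace.
BLOCKS_PER_DAY = 4608             # Chia consensus target: 32 blocks / 10 min
NETSPACE_PIB = 35 * 1024          # ASSUMPTION: ~35 EiB of total netspace

share_needed = 1 / (BLOCKS_PER_DAY * 7)        # fraction for 1 block/week
space_needed_pib = NETSPACE_PIB * share_needed
print(f"{space_needed_pib:.2f} PiB")
```

Under that assumed netspace, a bit over 1 PiB of pooled plots is already enough to expect one block per week, which supports the point that the bar is not especially high.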

OK, here are some observations from the first plot I made, with -c 5.

It only took 24 min, compared to 42 min normally with my plotter on that machine (same as on my GitHub). It used 121 GB of RAM, with a 225G temp file on disk that was never touched. It’s using direct I/O, so no data is cached in RAM by the kernel. No data was ever read from disk either, so it’s all working out of those 121 GB of RAM.

There seem to be two phases per table: the first with 100% CPU load and the second with 50% down to 20% CPU load, decreasing with each table. This is totally different from normal Chia plotting with its 4 phases; it’s basically like phase 3 only, which itself has two phases per table.

Plot size is 88.1G as advertised. I guess it’s not “compressed” yet, but it’s already smaller than the usual 102G.

So all very strange, now the question is if it’s real or not…

EDIT: more thoughts:

Since their “finalization” step achieves the same ~11% reduction in file size irrespective of compression level, it seems that all the magic is already happening in the first step. The ~11% reduction from that second step appears to be what the traditional phase 2 does: purging unused entries from the tables, which accounts for around that percentage.

It seems unreal, but it would also be a massive waste of time to scam just 1 or 2 people with potentially no profit.

2 Likes

Well, you got @madMAx43v3r’s attention already, so buy a ticket, fly to meet him, and show him what you’ve got. :slight_smile:

Otherwise, the best you can get is:

And that is not really helping.

1 Like

@Max you can already use the --query command-line switch to probe it for PoS. Do it multiple times; you know, some challenges get only 1 response, some 2, and some don’t give any response at all.

PS: Please don’t forget that only the first 8 characters of the query, i.e. 32 bits, are used in the Chia plot challenge for size-32 (k=32) plots. Altering the rest of the challenge string won’t affect the result.
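A rough illustration of that claim: for a size-k plot, only the top k bits of the 256-bit challenge select what is looked up, so for k=32 only the first 8 hex characters matter. The real chiapos lookup is of course more involved; this sketch only shows the prefix extraction.

```python
def challenge_prefix(challenge_hex: str, k: int = 32) -> int:
    # Top k bits of a 256-bit (64 hex char) challenge.
    return int(challenge_hex, 16) >> (256 - k)

a = "deadbeef" + "00" * 28   # 64 hex chars = 256 bits
b = "deadbeef" + "ff" * 28   # same first 8 chars, different tail
print(challenge_prefix(a) == challenge_prefix(b))  # True: tail is ignored
```

So when testing with --query, varying anything past the first 8 hex characters should return the same proofs.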

Did that; proofs are indeed valid, and I got roughly the expected number of proofs too. Only tried ~10 times though, so I cannot say if it’s 100%, 90%, or 80%.

But we know that simply dropping data from the plot quickly makes it unusable. Dropping 20% would make it only 25% as efficient, and I can see that’s not what is happening.
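A back-of-the-envelope check of that “drop 20% → only 25% as efficient” figure, assuming (my assumption, for this estimate) that a full proof must chain through surviving entries at roughly six table levels, each kept independently with probability 0.8:

```python
# Probability that a full proof survives random deletion of 20% of the
# data, if it must hit surviving entries at each of ~6 table levels.
keep = 0.80
levels = 6   # assumption for this estimate
efficiency = keep ** levels
print(f"{efficiency:.2f}")   # ~0.26, close to the quoted 25%
```

The exact exponent depends on how the tables are traversed, but the qualitative point stands: naive deletion degrades proof yield far faster than it saves space.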

2 Likes


Max, you are our HERO!!! :clap: :clap: :clap:

So it appears you’re simply doing phase 1 and storing the data in a more efficient way right away. The finalization step is then just a phase 2 on top.

The big question is how you managed to compress phase 1 already below chia’s official end result, without any major additional compute as far as I can see.

I don’t think we saw that coming when this topic was opened. Didn’t even know he had an account here :sweat_smile:

1 hour later…here’s my first results

#realboss

1 Like

Well I just created it, initially to take the blame for the JSON file thing.

I guess to make sure it’s really me you have to ping me on discord :wink:

1 Like

Everyone, join our pool! :slightly_smiling_face: We have plenty of other features in addition to (now confirmed) compressed plots!

1 Like