
Please can we have a batch blockhash endpoint to provide a source of global/permanent pseudorandom values for onchain games? #3879

Open
ZedZeroth opened this issue Jul 30, 2024 · 17 comments


@ZedZeroth

ZedZeroth commented Jul 30, 2024

Games need random values that are permanently accessible to all players / the whole system so that "everyone" knows what the outcome was.

Examples could be (1) whether one player hits or misses another, (2) what the weather is or was at any given time, (3) whether some random event did or did not happen.

Blockhashes are an ideal source of such (pseudo)random data for onchain games. Players cannot easily manipulate blockhashes, so their values are unpredictable; and once produced they remain static, providing the same value to all ordinals forever.
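For example, here is a minimal sketch (the function name and threshold are hypothetical, not code from my collection) of reducing a blockhash to a shared outcome that every viewer computes identically:

```js
// Hypothetical sketch: derive a shared pseudorandom outcome from a blockhash.
// Everyone reading the same block gets the same result, forever.
function outcomeFromBlockhash(blockhash, sides = 100) {
  // Interpret the last 8 hex digits as an integer and reduce modulo `sides`.
  const n = parseInt(blockhash.slice(-8), 16);
  return n % sides;
}

// e.g. a random event that occurs with ~20% probability:
// const eventHappened = outcomeFromBlockhash(hash) < 20;
```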

I have implemented such randomness in my "chaindrop lottery" that determines whether certain ordinals in the collection receive upgrades or not: https://palindromes.io/flowers/

I am pulling the historic fortnightly difficulty adjustment retarget blockhashes, but these sequential requests are currently the primary bottleneck slowing down the loading of the ordinals:

```js
// Fetch every `inc`-th blockhash from `bgn` up to `end`, defaulting `end`
// to the current chain tip. Each hash is a separate sequential request,
// which is the bottleneck described above.
async harvest(bgn, inc, end) {
  if (!end) end = parseInt(await this.rec.text('/blockheight'), 10);
  const hsh = [];
  for (let i = bgn; i <= end; i += inc) {
    const h = await this.rec.text('/blockhash/' + i);
    hsh.push(h);
  }
  return hsh;
}
```

Ideally such an endpoint would take "start block", "increment" and "end block" parameters and return a JSON array of the relevant blockhashes. In my case, I start at the first difficulty-adjustment blockhash after the halving (840672) and increment by 2016 until the current block height is reached. I think this is sustainable, as it only adds ~26 blockhashes per year. If performance is a concern, the endpoint's parameters could be limited to daily/weekly/fortnightly increments. [Edit: @gmart7t2 says that pagination should fix this]
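With such an endpoint, the harvesting loop above would collapse to a single request. A sketch (the path and the JSON-array response shape are assumptions about the proposed API, not something that exists today):

```js
// Sketch: fetch all relevant blockhashes in one request, assuming a
// hypothetical /blockhashes/<start>/<increment>/<end> endpoint that
// returns a JSON array of hashes.
async function harvest(bgn, inc, end) {
  const res = await fetch(`/blockhashes/${bgn}/${inc}/${end}`);
  return await res.json(); // one request instead of one per blockhash
}
```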

Unless there is an alternative source of continually generated and permanently stored random values, I think such an endpoint is essential for the creation of fully functional onchain games.

Thanks

Edit - I think the essential features are:

  1. New values are regularly generated.
  2. The values themselves are unpredictable/random (cannot be manipulated).
  3. The values are stored permanently so that they can always be referred to.

I can't think of any other viable source of such new permanent random data.

@gmart7t2 (Contributor)

This seems reasonable. The result should be paginated, like the recently added child inscriptions endpoint.

@ZedZeroth (Author)

@owenstrevor I was wondering if the above might be of interest to you? :)

@cryptoni9n (Collaborator)

@ZedZeroth would something like this work? (where blockhashes/1/10 = blocks 1 through 10)

[image: regtest output of the proposed blockhashes endpoint]

@ZedZeroth (Author)

Hey, thanks for getting back to me. My concern is that the step size is very important. In most cases apps won't want consecutive hashes. For example, in my flowers collection, upgrades are only pulled from difficulty-adjustment blockhashes, i.e. every 2016 blocks. So in 100 years they will need to pull 2600 hashes, but not consecutive ones.

Something like blockhashes/0/2016/10000 or blockhashes/0/10000/2016, returning blocks 0, 2016, 4032, 6048, 8064 and then stopping. I can go into more detail with examples and use cases, but the short version is that I have been thinking about onchain gamification very deeply for the last few months, and the ability to pull hashes spaced apart with a fixed step size would be essential. Onchain games will not want events such as upgrades, environmental changes etc. to be determined by every single blockhash, primarily because all historic hashes would need to be fetched on every new load, forever.

[Really I should be using "step" rather than "inc" in my initial code to make this intention clearer.]

@cryptoni9n (Collaborator)

Something more like this, then? (where blockhashes/1/2016/10000 = starting block / interval / total records requested)

[image: example endpoint output]

@ZedZeroth (Author)

Yes, that looks amazing :) Could leaving the third argument out default to pulling them all, up to the current blockhash? I.e. if no total is specified, it keeps returning hashes until they run out?

@gmart7t2 (Contributor)

The problem with being able to leave the last argument out is that we probably need the last argument to be a page number; you wouldn't want to return thousands of hashes in a single result.

@cryptoni9n (Collaborator)

cryptoni9n commented Aug 2, 2024

I've been working on this more today. I've added pagination and am handling the cases where only some of the parameters are provided.

I feel like this is a good time to ask @raphjaph or @casey to take a look at this issue and make sure that it is something they don't mind merging if it can be done in a safe way.

The API format is server/blockhashes/(start_block_height)/(incremental_interval_amount)/(page_number), where each page is 100 rows.

Some example scenarios:
curl -s -X GET "http://localhost/blockhashes/0/2/2" | jq will display page 2 of the dataset (pages are 0-indexed), i.e. records 200-299: the blockhashes for blocks 400-598.

curl -s -X GET "http://localhost/blockhashes/0/2" | jq will display the 0th page of the dataset, i.e. records 0-99: the blockhashes for blocks 0, 2, ..., 198. Not providing a value for the page defaults to the 0th page.

curl -s -X GET "http://localhost/blockhashes/0" | jq will display the 0th page of the dataset, i.e. records 0-99: the blockhashes for blocks 0-99. Not providing a value for the interval or page defaults to an interval of 1 and the 0th page.

Requesting a page beyond what is available just returns the last available page. For example, blockhashes/0/2016/10 and blockhashes/0/2016/11 return the same dataset.
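A client could then page through until the data runs out, along these lines (an untested sketch against the draft format above):

```js
// Untested sketch: page through /blockhashes/<start>/<interval>/<page>
// (100 rows per page). Since requesting past the end repeats the last
// page, a repeated first row is treated as a stop signal.
async function harvestAll(start, interval) {
  const hashes = [];
  let prevFirst;
  for (let page = 0; ; page++) {
    const res = await fetch(`/blockhashes/${start}/${interval}/${page}`);
    const rows = await res.json(); // assumed: a JSON array of hashes
    if (!rows.length || rows[0] === prevFirst) break; // past the end
    prevFirst = rows[0];
    hashes.push(...rows);
    if (rows.length < 100) break; // a short page is the last page
  }
  return hashes;
}
```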

[image: example paginated output]

@ZedZeroth (Author)

@cryptoni9n Thank you very much for your help with this; it's really appreciated :) Yes, I would be very happy for @raphjaph / @casey to take a look. I strongly believe this endpoint is essential for further gamification of ordinals, and I can provide (many) more example use cases if needed!

So the "total" is no longer needed because we can now just pull as many pages of 100 as we need, correct?

> Requesting a page beyond what is available just returns the last available page. For example, blockhashes/0/2016/10 and blockhashes/0/2016/11 return the same dataset.

Would it make sense to also add negative numbers that work backwards from the current hash? I can see that being useful, with a game environment changing based on a fixed-size set of shifting recent hashes.
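In the meantime this could be emulated client-side by resolving the tip first, e.g. (a sketch; assumes the count fits on one page of 100):

```js
// Hypothetical sketch: work backwards from the current tip by resolving
// /blockheight first, then calling the absolute-height endpoint.
async function harvestRecent(count, interval) {
  const tip = parseInt(await (await fetch('/blockheight')).text(), 10);
  const start = tip - (count - 1) * interval; // oldest of the recent hashes
  return (await fetch(`/blockhashes/${start}/${interval}/0`)).json();
}
```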

@cryptoni9n (Collaborator)

So the "total" is no longer needed because we can now just pull as many pages of 100 as we need, correct?

right

> Requesting a page beyond what is available just returns the last available page. For example, blockhashes/0/2016/10 and blockhashes/0/2016/11 return the same dataset.

> Would it make sense to also add negative numbers that work backwards from the current hash? I can see that being useful, with a game environment changing based on a fixed-size set of shifting recent hashes.

This seems more difficult and also a little harder to verify. Perhaps if we get the OK to move forward, we can implement the changes as described, and come back with a second request for the reverse if it turns out it is still needed and makes sense.

@ZedZeroth (Author)

ZedZeroth commented Aug 5, 2024 via email

@ZedZeroth (Author)

@raphjaph Any chance you could take a look at this please? @cryptoni9n already has some code working, so we just need your agreement that this is a good idea before proceeding with a PR. Thanks :)

@raphjaph (Collaborator)

Games typically have loading screens. What speaks against having a loading screen that fetches all the blockhashes you need and then proceeds to the gameplay? JavaScript also has async, so all those requests could run in parallel. Have you done some tests on that?
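Roughly like this, replacing the sequential awaits with one parallel batch (an untested sketch):

```js
// Sketch: fire all /blockhash/<height> requests in parallel during a
// loading screen instead of awaiting them one at a time.
async function harvestParallel(bgn, inc, end) {
  const heights = [];
  for (let h = bgn; h <= end; h += inc) heights.push(h);
  return Promise.all(
    heights.map(h => fetch('/blockhash/' + h).then(res => res.text()))
  );
}
```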

@ZedZeroth (Author)

> Games typically have loading screens. What speaks against having a loading screen that fetches all the blockhashes you need and then proceeds to the gameplay? JavaScript also has async, so all those requests could run in parallel. Have you done some tests on that?

Hi. Thanks for getting back to me. Yes, I've done tests and have existing collections that make use of loading screens/animations; bear in mind that I've had live collections using combinations of blockhash and child data since February. But there are two issues. Firstly, loading data from ordinals servers is much slower than you might expect, with a lot of variation between platforms. Secondly, and much more seriously, marketplaces (e.g. ME) throttle the number of requests, causing collection pages not to load at all when multiple ordinals each try to make multiple requests.

Would this endpoint be any different from the Pizza Pets batch child information endpoint that was added recently? Like some of my collections, Pizza Pets needs lots of child inscription data. They could have used a loading screen, but that ends up being slow on the user end and often breaks when viewing multiple assets. My existing collections that read child inscriptions were suffering from long loading times and broken marketplace views. This was fixed when the batch child information endpoint was added, as requested by the Pizza Pets team to make their assets work better.

So I'm effectively asking for the same thing, just for blockhashes instead of child inscription data. I have found a general and important use case for blockhashes, somewhat equivalent to the child inscription use case. I also have no doubt that other projects (art / collectibles / games etc.) will want to make use of blockhash randomness in similar ways.

Is there any harm in adding this endpoint? And does it introduce any concerns that don't apply to the recently added batch children data endpoint? Thanks

@ZedZeroth (Author)

@raphjaph It's worth pointing out that I have been in an ongoing discussion with ME for 7 months now on "the other side" of this problem. I'm asking them to allow more requests, because currently I can't reduce the number of requests on the recursion side. They're not really cooperating, mainly because their team is too busy with other things.

If we look at my Flowers4Finney as an example, I chose a relatively low randomness requirement, with new permanent variation introduced only once every two weeks. That means that in 4 years each ordinal will need to pull ~100 blockhashes, so a full ME collection page would need to allow ~10K simultaneous requests as it loads. The paginated batch endpoint, by contrast, would need only 1 request per ordinal, or ~100 requests for the full page.

Perhaps it might be helpful to get other progressive builders involved in this discussion @c12hz @lifofifoX. Or some comments from @cryptoni9n & @gmart7t2.

@cryptoni9n (Collaborator)

> Perhaps it might be helpful to get other progressive builders involved in this discussion @c12hz @lifofifoX. Or some comments from @cryptoni9n & @gmart7t2.

Hi Zed - I don't know what value, if any, my comments on this may have, but here's the way I see it. If an enhancement request like this can be merged with code that 1) provides new functionality to ord and its users and 2) does so in a safe and smart way, where no unreasonable server stress, tech debt or sketchiness ensues, then I am all for it. Specifically, helping a builder that has already shown such innovation would be an honor and a priority. However, Raph and Casey know best about how this functionality could potentially impact the protocol, so I defer to them.

@ZedZeroth (Author)

> Perhaps it might be helpful to get other progressive builders involved in this discussion @c12hz @lifofifoX. Or some comments from @cryptoni9n & @gmart7t2.
>
> Hi Zed - I don't know what value, if any, my comments on this may have, but here's the way I see it. If an enhancement request like this can be merged with code that 1) provides new functionality to ord and its users and 2) does so in a safe and smart way, where no unreasonable server stress, tech debt or sketchiness ensues, then I am all for it. Specifically, helping a builder that has already shown such innovation would be an honor and a priority. However, Raph and Casey know best about how this functionality could potentially impact the protocol, so I defer to them.

Thank you for your comments, and for your help with the code above :)
