At LabWeek'24, it was recommended that we tweak the tasking algorithm (deal sampling) so that:
New deals get tested soon after they are announced on the chain.
All miners have a reasonably recent Spark RSR, i.e. each miner must be tested frequently enough.
A possible solution is to move away from the current uniform sampling and implement something like the following:
Pick 1/3 of tasks by uniformly sampling deals made in the last 6 hours (assuming we already have a real-time view of all storage deals + fast finality) or the last week (in Spark v1 with manual deal ingestion)
Pick per-miner tasks as follows:
Build a list of active miners not included in the tasks defined in the first step.
For each miner in that list, pick one of that miner's deals and add it to the tasks.
If the resulting list is larger than 1/3 of the tasks for this round, trim it.
Pick the remaining tasks by uniformly sampling all deals older than 6 hours (or 1 week, depending on the parameter used in the first step).
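The three steps above could be sketched roughly like this (a hypothetical illustration, not the Spark implementation; the `Deal` shape, function name, and parameters are all made up for the example):

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Deal:
    miner: str          # hypothetical miner ID, e.g. "f01234"
    payload_cid: str    # hypothetical payload CID
    announced_at: float # epoch seconds when the deal appeared on chain

def sample_tasks(deals, tasks_per_round, now, recency_window=6 * 3600):
    """Sketch of the proposed three-step sampling.

    recency_window is 6 hours here; it would be 1 week in Spark v1
    with manual deal ingestion.
    """
    third = tasks_per_round // 3
    recent = [d for d in deals if now - d.announced_at <= recency_window]
    older = [d for d in deals if now - d.announced_at > recency_window]

    # Step 1: uniform sample of recently announced deals.
    tasks = random.sample(recent, min(third, len(recent)))

    # Step 2: one deal per active miner not already covered by step 1.
    covered_miners = {t.miner for t in tasks}
    deals_by_miner = {}
    for d in deals:
        deals_by_miner.setdefault(d.miner, []).append(d)
    per_miner = [
        random.choice(miner_deals)
        for miner, miner_deals in deals_by_miner.items()
        if miner not in covered_miners
    ]
    random.shuffle(per_miner)
    tasks += per_miner[:third]  # trim if larger than 1/3 of the round

    # Step 3: fill the remainder by uniform sampling over older deals.
    chosen = set(tasks)
    pool = [d for d in older if d not in chosen]
    remaining = tasks_per_round - len(tasks)
    tasks += random.sample(pool, min(remaining, len(pool)))
    return tasks
```

One consequence of this design worth noting: step 2 guarantees every active miner is eligible for at least one task per round (subject to the 1/3 trim), which is what keeps each miner's Spark RSR reasonably fresh.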