refactor(meta): deprecate dispatcher table and compose dispatcher by fragment relation #20541
Conversation
Rest LGTM
```diff
@@ -57,79 +55,3 @@ impl From<DispatcherType> for PbDispatcherType {
 }
 }
 }
```
Can we create a migration to drop the table and also remove this file?
Shall we keep the `dispatcher` table for a release? The backfill of `fragment_relation` depends on the `dispatcher` table. I'm afraid that if something goes wrong after we drop the `dispatcher` table and the `fragment_relation` table gets rolled back and also dropped in the `down` migration, then we will lose the dispatcher information forever and cannot backfill the `fragment_relation` table.
> I'm afraid that if something goes wrong after we drop the `dispatcher` table and the `fragment_relation` table gets rolled back and also dropped in the `down` migration

🤔 The backfill of `fragment_relation` is done in the previous migration `m20250106_072104_fragment_relation.rs`; if that migration fails, `actor_dispatcher` won't be dropped. Actually, the `down` logic is mostly untrustworthy and irreversible; we will not execute it or rely on it to perform a downgrade. Currently, the only consistency guarantee in rw is `backup` and `restore`. The only issue I can think of is that `fragment_relation` is not included in the backup, which has been fixed in this PR.

It's fine to keep it for now, as it doesn't have any negative effects and aids in recovering abnormal data and pinpointing issues. Shall we leave a TODO for it?
Sure. I will create an issue to track it.
I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.
What's changed and what's your intention?

With the `FragmentRelation` table introduced, we no longer have to persist the dispatcher information. Therefore, in this PR, we deprecate the `dispatcher` table. In all places that used the `dispatcher` table, we switch to the `FragmentRelation` table.

Previously, `dispatcher` was used either to resolve the logical connection between fragments, or to load the existing physical dispatchers. For resolving the logical connection between fragments, we can use `FragmentRelation` directly. For loading existing physical dispatchers, we can recompose them by combining the information in the `FragmentRelation` table with the vnode distribution of the upstream and downstream actors.

Checklist
Documentation
Release note
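To illustrate the recomposition idea described above, here is a minimal, self-contained sketch. The type and field names (`FragmentRelation`, `Dispatcher`, `compose_dispatcher`, `fragment_actors`) are hypothetical simplifications, not the actual RisingWave meta-model types; it only shows that a logical fragment edge plus the downstream fragment's actor list (derived from its vnode distribution) carries enough information to rebuild what the deprecated `dispatcher` table persisted.

```rust
use std::collections::HashMap;

// Hypothetical simplified types; the real ones live in RisingWave's meta model.
#[derive(Debug, Clone, PartialEq)]
enum DispatcherType {
    Hash,
    Broadcast,
    Simple,
    NoShuffle,
}

// A logical edge between fragments, as persisted in `FragmentRelation`.
struct FragmentRelation {
    source_fragment_id: u32,
    target_fragment_id: u32,
    dispatcher_type: DispatcherType,
    dist_key_indices: Vec<u32>,
}

// The physical dispatcher previously persisted in the `dispatcher` table.
#[derive(Debug)]
struct Dispatcher {
    r#type: DispatcherType,
    dist_key_indices: Vec<u32>,
    downstream_actor_ids: Vec<u32>,
}

// Recompose the physical dispatcher for one edge: the logical relation plus
// the downstream fragment's actors (from its vnode distribution) suffice.
fn compose_dispatcher(
    relation: &FragmentRelation,
    fragment_actors: &HashMap<u32, Vec<u32>>,
) -> Option<Dispatcher> {
    let downstream_actor_ids = fragment_actors.get(&relation.target_fragment_id)?.clone();
    Some(Dispatcher {
        r#type: relation.dispatcher_type.clone(),
        dist_key_indices: relation.dist_key_indices.clone(),
        downstream_actor_ids,
    })
}

fn main() {
    // Downstream fragment 2 has actors 10 and 11 under its vnode distribution.
    let mut fragment_actors = HashMap::new();
    fragment_actors.insert(2u32, vec![10u32, 11]);

    let rel = FragmentRelation {
        source_fragment_id: 1,
        target_fragment_id: 2,
        dispatcher_type: DispatcherType::Hash,
        dist_key_indices: vec![0],
    };

    let d = compose_dispatcher(&rel, &fragment_actors).unwrap();
    println!("{:?} -> {:?}", d.r#type, d.downstream_actor_ids);
}
```

The point of the sketch is the direction of the data flow: nothing in `Dispatcher` needs to be stored separately once the relation and the actor distribution are both available, which is why the table can be deprecated.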