automatically archive old blueprints and inventory collections #7278
Comments
I think this could happen after updates too, right? If I have a system on release N that has 10 blueprints, 1 of which is the current target, and I'm going to update to release N+1, I have to still be able to read the existing current target, so presumably I can still read the other 9 too. Once I've made a new blueprint from release N+1, then I can delete all 10 of the old ones in one go. I'm not sure there's a meaningful technical difference between these, but "don't delete old stuff until we're running new stuff" and "delete all the old stuff at once instead of deleting most of it before the upgrade and the last of it after" both seem appealing.
Hah, sorry for kinda making the same comment twice, but: I think this would be more valuable after the update than before, right? If we do it before, we know what the system looked like before the upgrade, but for any ongoing work for release N+2, it's much more useful to know what the system looked like after the upgrade. Maybe the first time we do this we collect both? Or if it's not too onerous, collect both every time?
Yeah, maybe doing it both before and after is best. Doing it before feels like it gives us a bit of a safety net if anything during or immediately after the upgrade goes wrong. But I can see the appeal of having that information from after, too.
tl;dr: I propose that we run `omdb db reconfigurator-save` before mupdate for every release. Maybe we could store these into a debug dataset on the Scrimlet, sort of like a log file? (Maybe we could just put them into a directory that already gets archived for log files?)

Why: when a blueprint is replaced as the current target, it's no longer useful to the system, but it never gets deleted unless an operator explicitly does so. This wouldn't be a big deal (it's true for many other database records, too) except that keeping old blueprints around in CockroachDB makes it much harder for us to evolve the system, because the schema needs to be able to represent those old blueprints. For example, suppose we want to add a new blueprint field in release N that's always filled in by release N+1: if we were still keeping around blueprints from release N-1, we'd have to be able to represent blueprints that don't have the field set, even though that's illegal in the software today. The same also applies to inventory collections, but they get deleted automatically after a few minutes.
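To make the schema-evolution pain concrete, here's a minimal sketch (the type and field names are invented for illustration, not Omicron's actual ones): as long as pre-release-N blueprints can still appear in the database, the in-memory type has to model the new column as an `Option`, and every consumer carries an error path for a state the current software considers illegal.

```rust
// Hypothetical sketch of the problem; `new_field` stands in for any
// column added in release N.

// While release N-1 blueprints may still exist in the database, the
// row type must allow the field to be absent (NULL).
struct BlueprintRowV1 {
    id: u64,
    new_field: Option<String>,
}

// Once all pre-release-N blueprints are archived or deleted, the field
// can become required and the Option (plus its error path) goes away.
struct BlueprintV2 {
    id: u64,
    new_field: String,
}

fn upgrade(row: BlueprintRowV1) -> Result<BlueprintV2, String> {
    Ok(BlueprintV2 {
        id: row.id,
        new_field: row
            .new_field
            .ok_or_else(|| format!("blueprint {}: missing new_field", row.id))?,
    })
}

fn main() {
    // An old row with the field unset is "illegal" to the new software...
    let old = BlueprintRowV1 { id: 1, new_field: None };
    assert!(upgrade(old).is_err());

    // ...while any blueprint written by release N converts cleanly.
    let new = BlueprintRowV1 { id: 2, new_field: Some("abc".into()) };
    assert!(upgrade(new).is_ok());
}
```

Archiving old blueprints before (or right after) the update is what lets the `Option` and the error path be deleted rather than maintained forever.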
At the same time, historical blueprints and collections can be useful for understanding how a system has changed over time. In debugging tricky path-dependent problems in production systems we might well want to go look at very old blueprints and collections.
Also: with the proposal above, since we're using `reconfigurator-save`, we'll wind up with a bunch of other related useful state (e.g., inventory collections that went into some of those blueprints).