
feat: store the state/db of the celestia-nodes on rolling updates #35

Closed · Bidon15 opened this issue Sep 6, 2022 · 8 comments

Labels: enhancement (New feature or request)

Bidon15 (Member) commented Sep 6, 2022

We need to make sure that we are not syncing from height 1 after every rolling update of celestia-node. We should keep investigating this and confirm that our pipeline works as expected after updates.

Ref: #33

@Bidon15 Bidon15 added the "enhancement" (New feature or request) label on Sep 6, 2022
@Bidon15 Bidon15 assigned sysrex and unassigned jbowen93 on Sep 6, 2022
Bidon15 (Member, Author) commented Mar 10, 2023

Grooming 10/03/2023: @smuu, can you please describe your ideas/proposals?

renaynay (Member) commented:

@smuu Why would this require an entirely new node/build? Why is it not possible to just upgrade the binary and run start again?

renaynay (Member) commented:

@smuu by autoscaling testnet, do you mean being able to spin up nodes without having to sync them?

smuu (Member) commented Mar 14, 2023

There are two scenarios.

  1. Rolling update: We have a set of nodes (e.g. 3) and want to upgrade their binaries while keeping at least three nodes reachable. This means spinning up a fourth node with the new binary; once it is up and running, we shut down one old node, and the process repeats until all nodes are updated (see the sketch below).
  2. (Auto-)scaling: When devnet and testnet are heavily utilized, we would like to spin up new nodes quickly without waiting for them to sync.
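
A minimal sketch of what scenario 1 could look like on the infrastructure side, assuming the nodes run as a Kubernetes StatefulSet defined with client-go. The names, image reference, storage size, and mount path are illustrative assumptions, not the actual deployment:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newCelestiaStatefulSet gives each node its own PersistentVolumeClaim,
// so a rolling binary update re-attaches the existing node store instead
// of re-syncing from height 1.
func newCelestiaStatefulSet(image string, replicas int32) *appsv1.StatefulSet {
	labels := map[string]string{"app": "celestia-node"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "celestia-node"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "celestia-node",
			Replicas:    &replicas,
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// Pods are replaced one at a time when the image changes.
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "celestia-node",
						Image: image,
						VolumeMounts: []corev1.VolumeMount{{
							Name:      "node-store",
							MountPath: "/home/celestia", // assumed node store location
						}},
					}},
				},
			},
			// One claim per pod; it survives pod replacement and restarts.
			VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
				ObjectMeta: metav1.ObjectMeta{Name: "node-store"},
				Spec: corev1.PersistentVolumeClaimSpec{
					AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
					Resources: corev1.ResourceRequirements{
						Requests: corev1.ResourceList{
							corev1.ResourceStorage: resource.MustParse("100Gi"),
						},
					},
				},
			}},
		},
	}
}

func main() {
	// Illustrative image reference; replace with the real release tag.
	ss := newCelestiaStatefulSet("ghcr.io/celestiaorg/celestia-node:v0.8.2", 3)
	fmt.Println(ss.Name, *ss.Spec.Replicas)
}
```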

renaynay (Member) commented:

  1. A rolling update could mean partial downtime for certain bootstrappers in turns (e.g. taking 2 down at a time), but IMO there is no need to start nodes from scratch or shut down old ones.
  2. I can understand that use case; yes, that would be nice (one snapshot-based approach is sketched below).
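
For the second scenario (scaling up without waiting for a full sync), one possible approach, assuming the cluster's CSI driver supports VolumeSnapshots, is to pre-populate each new node's claim from a snapshot of an already-synced store. A hypothetical continuation of the sketch above (same imports); the names and size are illustrative:

```go
// pvcFromSnapshot returns a claim pre-populated from an existing
// VolumeSnapshot of a synced node store, so a freshly scaled-up node
// starts near the snapshotted height instead of height 1.
func pvcFromSnapshot(name, snapshotName string) *corev1.PersistentVolumeClaim {
	apiGroup := "snapshot.storage.k8s.io"
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			// DataSource makes the CSI driver restore the snapshot into the
			// new volume before the pod first mounts it.
			DataSource: &corev1.TypedLocalObjectReference{
				APIGroup: &apiGroup,
				Kind:     "VolumeSnapshot",
				Name:     snapshotName,
			},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("100Gi"),
				},
			},
		},
	}
}
```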

smuu (Member) commented Mar 14, 2023

Let's create a ticket about what needs to be done to support this on the node side. I think @celestiaorg/devops is willing to support development, as it would help us on the infrastructure side.

renaynay (Member) commented:

@smuu scoping this out would take some thought + time. We can't really prioritise this now (I mean today/this week) but we can definitely talk about it in our Q2 planning if it's something you all really need immediately.

I created a tracking issue that's linked above.

Bidon15 (Member, Author) commented May 12, 2023

Grooming 12/05/2023:

Closing as part of #123

@Bidon15 Bidon15 closed this as not planned on May 12, 2023