
Question: Persistent volumes in services #75

tegamckinney opened this issue Dec 30, 2016 · 3 comments

Comments

@tegamckinney

I am wondering how you guys attach volumes to your services. Based on the stack tasks module, I see the volume nodes defined, but they are not exposed or used.

Are you just running stateless containers and haven't needed to mount volumes, or have you found a different way of doing this?
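
For context, this is roughly what wiring a host volume through an ECS task definition looks like in Terraform; the resource, names, and paths below are illustrative assumptions, not something the stack tasks module currently exposes.

```hcl
# Hypothetical sketch, not part of the stack modules: declare a host volume
# on the task definition and mount it into the container.
resource "aws_ecs_task_definition" "app" {
  family = "app"

  # "data" maps a path on the container instance into the container.
  volume {
    name      = "data"
    host_path = "/data"
  }

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "app:latest"
      memory    = 256
      essential = true
      mountPoints = [
        {
          sourceVolume  = "data"
          containerPath = "/var/lib/app"
        }
      ]
    }
  ])
}
```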

@achille-roussel
Contributor

We have different use cases for this, each with different AMIs; we create clusters for each type of service we have to run:

  • Some services are fully stateless and therefore can run in a cluster that has no special volumes; this is how most of our services run.

  • Some services (like NSQ) are run outside of ECS because in our current setup we run one per EC2 instance in some clusters, so the service configuration is compiled into the AMI along with the volumes they need. This is a middle ground between stateful and stateless: the service is tied to the host, so it's fine to discard the volume when the host is terminated.

  • Some volumes need to persist even if the instances they are attached to go away. In this case we built a separate set of terraform modules to configure a cluster with a fixed number of hosts and volumes, and we don't use an ASG for that (see the sketch after this list). The services running in this cluster can then assume that a volume is going to be available and that its content will be persisted across restarts of services and hosts.
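
A rough sketch of what that third setup could look like; the variable names, counts, and sizes here are assumptions for illustration, not the actual modules:

```hcl
# Illustrative only: a fixed set of instances with dedicated EBS volumes
# instead of an auto scaling group, so the volumes outlive the services
# running on any individual host. var.ami and var.subnet_ids are assumed
# inputs, not real variables from this repo.
resource "aws_instance" "stateful" {
  count         = 3
  ami           = var.ami
  instance_type = "m4.large"
  subnet_id     = element(var.subnet_ids, count.index)
}

resource "aws_ebs_volume" "data" {
  count             = 3
  availability_zone = aws_instance.stateful[count.index].availability_zone
  size              = 100
}

resource "aws_volume_attachment" "data" {
  count       = 3
  device_name = "/dev/xvdf"
  volume_id   = aws_ebs_volume.data[count.index].id
  instance_id = aws_instance.stateful[count.index].id
}
```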

It would be great if ECS had a way of expressing that we want to mount a volume to a running service, but unfortunately that's not the case.

@egarbi

egarbi commented Jan 23, 2017

@achille-roussel For the third option, what if you used an EFS volume (created by terraform in the module), mounted under /data by user_data, as a shared file system for Docker containers?
That way, if an instance dies, a new one will be created and the shared /data will remain intact, allowing ECS services to keep their data.
I'm not sure about performance with EFS, but according to AWS it is intended to be not just an NFS server but a real shared file system; it would be something similar to how GlusterFS works (if I understood it correctly).
What do you think?
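
For illustration, a minimal sketch of that idea; the variable names, instance settings, and security group wiring are assumptions, not part of the actual stack modules:

```hcl
# Hypothetical sketch of the EFS approach described above: the file system
# and its mount targets are created by Terraform, and each instance mounts
# it under /data from user_data at boot. var.subnet_ids, var.efs_security_group,
# var.ami, and var.region are assumed inputs.
resource "aws_efs_file_system" "data" {}

resource "aws_efs_mount_target" "data" {
  count           = length(var.subnet_ids)
  file_system_id  = aws_efs_file_system.data.id
  subnet_id       = var.subnet_ids[count.index]
  security_groups = [var.efs_security_group] # must allow NFS (TCP 2049) from the instances
}

resource "aws_launch_configuration" "ecs" {
  image_id      = var.ami
  instance_type = "m4.large"

  # Mount the shared file system at boot so containers can bind-mount /data.
  user_data = <<-EOF
    #!/bin/bash
    mkdir -p /data
    mount -t nfs4 -o nfsvers=4.1 "${aws_efs_file_system.data.id}.efs.${var.region}.amazonaws.com:/" /data
  EOF
}
```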

@achille-roussel
Contributor

I would have concerns about using this for hosting a DB or similar, but if you want to share config files, save state to disk between restarts, or do anything else that doesn't require high IOPS, then it should be a great solution.
