Added possibility for additional storage mounts #76

Open
wants to merge 2 commits into master

Conversation

@wilwer commented Apr 29, 2022

In a customer environment it has become necessary to provide an additional storage mount (for geodata). The customer would very much prefer that the provided Helm chart include the possibility to mount additional storage that is not otherwise used by FME Server. This PR aims to fulfill that request.

@garnold54 (Contributor)

Hi! Thanks for the pull request! We will review your pull request for inclusion in the helm chart. Just to let you know, we keep an internal copy of our helm charts as well at Safe, and we sync those with this public repository. So if we do end up merging this change, it's possible we will implement it internally first and have it sync to this repo from there. I will update this pull request when the change is merged on this public repo.

Some questions about this change:

  • Why not store the data in the FME Server system share in the Data directory?
  • This new volume is being mounted in both the core pod and the engine pod. Why does the core pod need access to it? I am assuming this volume will contain data to be processed by an FME Engine. I ask because if it is mounted in both, those pods either need to run on the same host or a ReadWriteMany volume needs to be used (see the sketch after this list). If a ReadWriteMany volume is needed, why not use the one we already have mounted for the FME Server system share? You also get the benefit of being able to access the data through the FME Server web UI.
  • How is this data volume going to be populated? Is it data that is meant to be read by an FME Engine for processing, or is the idea that FME would write out to this volume? If so, how will you access the data written out?
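
For reference, a ReadWriteMany claim would look roughly like the sketch below; every name and size in it is a placeholder, not a value from this chart.

```yaml
# Hypothetical PVC that pods on different nodes (e.g. core and engine)
# could mount read/write at the same time. The chosen storage class must
# actually support the ReadWriteMany access mode (NFS, CephFS, ...).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: geodata-claim            # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client   # placeholder; any RWX-capable class
  resources:
    requests:
      storage: 50Gi              # placeholder size
```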

Cheers!

@TinoM (Contributor) commented May 9, 2022

Hi @garnold54,

I've discussed this with @wilwer because we are supporting this project and have similar requirements in another project.
So, the main aspects are:

  • On the cluster running FME Server, there are many additional pods running software that needs to exchange data with FME Server.
  • There are existing persistent volumes in the cluster, managed by the cluster admins, that we would like to access from FME Server.
  • Mounting these in the engine pods is necessary for engines to read/write them, and mounting them into the core (web) container is necessary for read/write access when creating network mounts in the FME Server Resources UI.

Until now we have hardcoded these additional PV claims by editing the YAML for the engine and core deployments, but of course we then have to merge these changes into every FME Helm chart release.
So the plan is to add the possibility to define an array of PVCs via values.yml that is automatically mounted into the engine and core pods; a rough sketch follows below. (Most likely the additional_* YAML files will be stored outside the FME Helm charts, because their lifetime is completely different from that of the FME Server deployment.)
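
As a rough illustration of what we have in mind (the key names below are only a suggestion, not something the chart provides today), values.yml could carry a list like this:

```yaml
# Hypothetical values.yml excerpt -- the key names are only a suggestion.
# Each entry references a pre-existing PVC managed by the cluster admins
# and would be mounted into both the core and engine pods.
additionalVolumes:
  - name: geodata
    claimName: geodata-claim        # existing PVC in the cluster
    mountPath: /data/geodata
    readOnly: false
  - name: exchange
    claimName: exchange-claim
    mountPath: /data/exchange
    readOnly: true
```

The engine and core deployment templates would then render one volumes/volumeMounts pair per entry, roughly like:

```yaml
# Hypothetical template excerpt for the engine and core deployments;
# indentation would follow the existing templates.
volumeMounts:
{{- range .Values.additionalVolumes }}
  - name: {{ .name }}
    mountPath: {{ .mountPath }}
    readOnly: {{ .readOnly | default false }}
{{- end }}
volumes:
{{- range .Values.additionalVolumes }}
  - name: {{ .name }}
    persistentVolumeClaim:
      claimName: {{ .claimName }}
{{- end }}
```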

The goal is to provide something like the "old" way of defining additional network resources in fmeServerConfig.txt, but via Helm files in a cluster. (It would be nice if FME Server automatically showed them under Resources.)

@garnold54 (Contributor)

Ok, I understand the requirements better now! I will file an issue internally for us to take a look at providing this functionality. Thanks!

@kev-andrews

Refreshed this PR in #107.
