feat: integrate flagsmith client into API layer #2447
Conversation
/awards worst PR title ever
😬 Yep, you got me...
```python
import pytest


@pytest.fixture()
def reset_globals(mocker):
    # Reset the module-level globals between tests so that cached
    # state doesn't leak from one test to the next.
    mocker.patch("integrations.flagsmith.client._defaults", None)
    mocker.patch("integrations.flagsmith.client._flagsmith_client", None)
    yield
```
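For context, a minimal sketch of how a test might consume this fixture; `get_client()` is a hypothetical accessor used only for illustration, not necessarily the entry point the module actually exposes:

```python
def test_client_is_cached_between_calls(reset_globals):
    # get_client() is a hypothetical accessor; the real module may
    # expose a different entry point.
    from integrations.flagsmith.client import get_client

    first = get_client()
    second = get_client()

    # Because the fixture reset the module-level globals before this
    # test ran, the first call builds the client and the second call
    # returns the cached instance.
    assert first is second
```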
I don't love this, but managing the global variables is a bit painful (though necessary, I think), and I don't think it's too bad since the fixture is used by every test in this module.
The tests in the previous iteration here were arguably cleaner, but I felt the code to consume the Flagsmith client itself was messier.
Part of the issue we have is the need to be able to switch Flagsmith off entirely and rely purely on the defaults. There's an argument that we should define an option in our SDKs to run in an 'offline only' mode, for which we already have an issue here.
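For illustration, here is a sketch of what such an 'offline only' option could look like in the Python SDK. The `offline_mode` and `offline_handler` arguments and the `LocalFileHandler` helper are assumptions for the sake of the example, not the API that shipped at the time of this discussion:

```python
from flagsmith import Flagsmith
from flagsmith.offline_handlers import LocalFileHandler  # assumed helper

# Run entirely from a local environment document; no requests are
# ever made to the Flagsmith API.
flagsmith = Flagsmith(
    offline_mode=True,
    offline_handler=LocalFileHandler("/app/flagsmith-environment.json"),
)

flags = flagsmith.get_environment_flags()
```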
@khvn26, I do agree with you here; however, not including some default data became very painful. The reason for this is that I don't want to enforce the logic in the Docker build, since we have a docker-compose.yml in the root of the repo that builds the Docker image from scratch. In that case, though, we could fall back to requiring a FLAGSMITH_ON_FLAGSMITH_SERVER setup. Either of these options, however, would require us either to ship with the environment key (no no!) or to require users to provide one (also not great).

As such, what I have done here is update the logic so that it masks any data in the environment document that could be considered sensitive (and isn't used by the engine). Another option I considered is removing this constraint and allowing the default handler to be a final fallback if the OfflineHandler fails. What are your thoughts?
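As a rough illustration of the masking approach described above (the key names here are hypothetical; the real environment document and the set of masked fields may differ):

```python
SENSITIVE_KEYS = {"api_key", "client_api_key"}  # hypothetical key names

def mask_sensitive_data(document: dict) -> dict:
    # Recursively replace the values of sensitive keys that the engine
    # does not need with a placeholder before the document is shipped.
    masked = {}
    for key, value in document.items():
        if key in SENSITIVE_KEYS:
            masked[key] = "*****"
        elif isinstance(value, dict):
            masked[key] = mask_sensitive_data(value)
        elif isinstance(value, list):
            masked[key] = [
                mask_sensitive_data(item) if isinstance(item, dict) else item
                for item in value
            ]
        else:
            masked[key] = value
    return masked
```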
I feel like we have to store the feature state in the repo/app. We also can't rely on people using Docker: what if you want to use buildpacks or Heroku or similar?
Yep, that's a good point. I think, in that case, the solution I have here is the best we can do. Keen to know what @khvn26 thinks, however.
Yep, I think having a hard dependency on Docker is not the right path. This feature behaviour configuration just so happens to be encoded in a JSON file, and I don't think there's much wrong with that. Projects like https://cerbos.dev/ encode permissions in JSON!
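For example, loading those checked-in defaults might look something like this (the file path and function name are assumptions, not the PR's actual code):

```python
import json
from pathlib import Path

# Hypothetical path to the checked-in environment document that
# encodes the default feature behaviour.
DEFAULTS_PATH = Path(__file__).parent / "defaults" / "environment.json"

def load_default_flags() -> dict:
    # The defaults ship with the app itself, so no Docker-specific
    # build step (and no network call) is needed to obtain them.
    with open(DEFAULTS_PATH) as f:
        return json.load(f)
```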
I'm trying to find documentation on how I can use this without polluting the only project that can be used in the self-hosted open source version with Flagsmith flags.
Hi @mvrhov, that's not currently possible. I would recommend creating a separate organisation to contain the project for Flagsmith flags.
Changes
Add Flagsmith to the API.
The main point of friction I found when implementing this is that we really need to be able to set the client into offline mode. If we could do that, the logic to disable Flagsmith requests for all self-hosted deployments would be a lot simpler.
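To make that friction concrete, here is a rough sketch of the kind of workaround this forces. The setting names and `get_client` are illustrative (modelled on the module-level globals that the test fixture above resets), not a verbatim copy of the PR's code:

```python
from django.conf import settings
from flagsmith import Flagsmith

# Module-level caches, mirroring the globals the test fixture resets.
_defaults = None
_flagsmith_client = None

def get_client():
    """Return a Flagsmith client, or None for self-hosted deployments.

    Without first-class offline support in the SDK, the self-hosted
    path has to be special-cased here so that no network requests are
    made, and callers fall back to the bundled defaults themselves.
    """
    global _flagsmith_client
    if not settings.FLAGSMITH_ON_FLAGSMITH_SERVER_API_URL:
        return None  # caller falls back to _defaults
    if _flagsmith_client is None:
        _flagsmith_client = Flagsmith(
            environment_key=settings.FLAGSMITH_SERVER_SIDE_ENVIRONMENT_KEY,
            api_url=settings.FLAGSMITH_ON_FLAGSMITH_SERVER_API_URL,
        )
    return _flagsmith_client
```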
How did you test this code?
Added unit tests.