
Fixes model selection response in LLMOpenAIModelOnboardingStep picker #50

Merged
vishnuravi merged 6 commits into main from onboarding-fix on Mar 18, 2024

Conversation

vishnuravi
Member

@vishnuravi vishnuravi commented Mar 17, 2024

Fixes model selection response in LLMOpenAIModelOnboardingStep picker

♻️ Current situation & Problem

The LLMOpenAIModelOnboardingStep contains a picker that currently tags model options with the formatted description of the model (e.g., "GPT 4 Turbo") rather than the raw value of the model itself (e.g., "gpt-4-turbo"). As a result, the formatted description is passed to the action, and creating a functioning LLMOpenAISchema from this value subsequently fails because the schema expects the raw value.
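The mismatch described above can be sketched as follows. This is a minimal, illustrative SwiftUI example, not the actual SpeziLLM source: the `OpenAIModel` enum, its cases, and the `ModelPicker` view are hypothetical stand-ins for the real types in the PR.

```swift
import SwiftUI

// Hypothetical model enum; the actual type in the repository may differ.
enum OpenAIModel: String, CaseIterable, Identifiable {
    case gpt4Turbo = "gpt-4-turbo"
    case gpt35Turbo = "gpt-3.5-turbo"

    var id: String { rawValue }

    /// Human-readable description shown in the picker.
    var formattedDescription: String {
        switch self {
        case .gpt4Turbo: return "GPT 4 Turbo"
        case .gpt35Turbo: return "GPT 3.5 Turbo"
        }
    }
}

struct ModelPicker: View {
    @State private var selectedModel = OpenAIModel.gpt4Turbo.rawValue

    var body: some View {
        Picker("Model", selection: $selectedModel) {
            ForEach(OpenAIModel.allCases) { model in
                // Before the fix, the tag was the formatted description
                // (e.g. "GPT 4 Turbo"), so that string ended up in the
                // selection. Tagging with the raw value (e.g. "gpt-4-turbo")
                // means the action receives a value from which a working
                // LLMOpenAISchema can be constructed.
                Text(model.formattedDescription)
                    .tag(model.rawValue)
            }
        }
    }
}
```

The key point is that SwiftUI's `Picker` reports whatever value is passed to `.tag(_:)`, independent of the label text, so the tag must match the type and value the downstream consumer expects.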

⚙️ Release Notes

Fixes the picker to pass the raw value of the model to the action so that it can be used to create an LLMOpenAISchema.

📚 Documentation

Documentation is up to date.

✅ Testing

  • Adds a UI test to ensure that the LLMOpenAIModelOnboardingStep passes the correct raw value for the user's choice to the action.
  • Adds a --resetSecureStorage flag to fix an issue with the existing UI tests failing due to old values for the OpenAI API key remaining in secure storage between tests.
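The `--resetSecureStorage` flag described above could be exercised from a UI test roughly as follows. This is an illustrative sketch only: the test class and method names are hypothetical, and the PR does not show how the app under test interprets the flag.

```swift
import XCTest

final class LLMOpenAIOnboardingUITests: XCTestCase {
    func testModelSelectionPassesRawValue() throws {
        let app = XCUIApplication()
        // Pass the flag so the app under test clears any previously
        // stored OpenAI API key, giving each test a clean secure
        // storage state instead of inheriting values from earlier runs.
        app.launchArguments = ["--resetSecureStorage"]
        app.launch()

        // ... drive the onboarding picker and assert that the raw
        // model value (e.g. "gpt-4-turbo") reaches the action ...
    }
}
```

Handling reset logic behind a launch argument keeps it out of production code paths while letting every test start from a known keychain state.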

📝 Code of Conduct & Contributing Guidelines

By submitting this pull request, you agree to follow our Code of Conduct and Contributing Guidelines.


codecov bot commented Mar 17, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 30.17%. Comparing base (ca37910) to head (656eaed).

Additional details and impacted files


@@           Coverage Diff           @@
##             main      #50   +/-   ##
=======================================
  Coverage   30.17%   30.17%           
=======================================
  Files          66       66           
  Lines        2891     2891           
=======================================
  Hits          872      872           
  Misses       2019     2019           
Files Coverage Δ
...enAI/Onboarding/LLMOpenAIModelOnboardingStep.swift 100.00% <100.00%> (ø)

Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ca37910...656eaed. Read the comment docs.

@vishnuravi
Member Author

The UI test failure is unrelated to the current PR; it appears to be due to the token being cached in the keychain from previous tests and not being reset at the beginning of each test.

Screenshot 2024-03-17 at 10 40 21 AM

@vishnuravi vishnuravi changed the title Fixes incorrect tags in LLMOpenAIModelOnboardingStep picker Fixes model selection response in LLMOpenAIModelOnboardingStep picker Mar 17, 2024
@vishnuravi vishnuravi marked this pull request as ready for review March 17, 2024 15:12
Member

@philippzagar philippzagar left a comment


Thanks @vishnuravi for the fix, greatly appreciated! Only had some minor comments, feel free to address them and then merge the PR, no additional review needed 👍

@philippzagar
Member

> The UI test failure is unrelated to the current PR, it appears to be due to the token being cached in the keychain from previous tests and not being reset at the beginning of each test.
>
> Screenshot 2024-03-17 at 10 40 21 AM

Thanks for raising the issue!
As of now, this isn't a big problem, since the runners reset the simulator instance for every UI test, so it only affects local UI tests. Still, it's very nice to see that you included an easy workaround by resetting the keychain!

@vishnuravi
Member Author

> > The UI test failure is unrelated to the current PR, it appears to be due to the token being cached in the keychain from previous tests and not being reset at the beginning of each test.
> >
> > Screenshot 2024-03-17 at 10 40 21 AM
>
> Thanks for raising the issue! As of now, this isn't a big problem as the runners reset the simulator instance for every UI test, therefore only affects local UI tests. Still, very nice to see that you included an easy workaround by resetting the keychain!

The screenshot is from the artifact from the runner, so it looks like the runner didn't reset the simulator in that instance.

@philippzagar
Member

> > > The UI test failure is unrelated to the current PR, it appears to be due to the token being cached in the keychain from previous tests and not being reset at the beginning of each test.
> > >
> > > Screenshot 2024-03-17 at 10 40 21 AM
> >
> > Thanks for raising the issue! As of now, this isn't a big problem as the runners reset the simulator instance for every UI test, therefore only affects local UI tests. Still, very nice to see that you included an easy workaround by resetting the keychain!
>
> The screenshot is from the artifact from the runner, so it looks like the runner didn't reset the simulator in that instance.

That's interesting; I've never experienced that before on the runners. So, the keychain state is kept between different UI tests?
Maybe related to (but should only affect the macOS keychain): StanfordSpezi/SpeziStorage#19

@vishnuravi vishnuravi merged commit d6819a1 into main Mar 18, 2024
10 checks passed
@vishnuravi vishnuravi deleted the onboarding-fix branch March 18, 2024 02:54
@PSchmiedmayer
Member

PSchmiedmayer commented Mar 18, 2024

Thank you for the fix and additions @vishnuravi 🚀
Thank you for the in-depth review @philippzagar 💪
