You could fine-tune a separate adapter for each language and compose them in parallel on top of the base model, saving memory and storage. This is a more modular approach.
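For illustration, here is a minimal sketch of training one such per-language adapter, assuming the AdapterHub `adapters` library on top of Hugging Face Transformers (the checkpoint, adapter name, and save path are placeholders):

```python
from adapters import AutoAdapterModel

# Load the shared base model once; its weights stay frozen.
model = AutoAdapterModel.from_pretrained("bert-base-multilingual-cased")

# Add a small bottleneck adapter for one language and make it the
# only trainable part: train_adapter() freezes the base model.
model.add_adapter("fr")
model.train_adapter("fr")

# ... run your usual fine-tuning loop here ...

# Save just the adapter weights: a few megabytes instead of a
# full model checkpoint per language.
model.save_adapter("./adapters/fr", "fr")
```

Repeating this per language gives a set of small adapters that all share the same frozen base model.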
Why do most people fine-tune the full model instead of just an adapter? Should we educate people more on that topic?
You could then use a parallel adapter forward pass, as implemented by the `Parallel` composition block, which is already supported in the Hugging Face ecosystem:
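As a rough sketch of that composition, again assuming the AdapterHub `adapters` library (the adapter names are placeholders):

```python
import adapters.composition as ac
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("bert-base-multilingual-cased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# One previously fine-tuned adapter per language.
model.add_adapter("en")
model.add_adapter("de")

# The Parallel block replicates each input batch internally, so
# both adapters run their own forward pass over the same frozen
# base model in a single call.
model.active_adapters = ac.Parallel("en", "de")

inputs = tokenizer("Hello world", return_tensors="pt")
outputs = model(**inputs)  # results for "en" and "de" stacked along the batch
```

In practice you would load the already-trained adapters with `load_adapter()` instead of adding fresh ones.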
A video on full fine-tuning vs. adapters:
https://www.youtube.com/watch?v=s2BF_gC0X1o