We cloned the following repo:
git clone https://huggingface.co/Efficient-Large-Model/NVILA-15B
We used a custom dataset and preprocessed it into a .pkl file to be used for fine-tuning. We then ran the following bash command:
bash scripts/NVILA/stage3_9tile.sh runs/train/NVILA-15B /home/sample_ft/M3IT/data/captioning/coco/captioning_coco_train.pkl
Error:
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/anaconda3/envs/vila_adv/lib/python3.10/site-packages/transformers/utils/hub.py", line 403, in cached_file
[rank0]: resolved_file = hf_hub_download(
[rank0]: File "/root/anaconda3/envs/vila_adv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
[rank0]: validate_repo_id(arg_value)
[rank0]: File "/root/anaconda3/envs/vila_adv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
[rank0]: raise HFValidationError(
[rank0]: huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'runs/train/stage2_9tile/model'. Use repo_type argument if needed.
[rank0]: The above exception was the direct cause of the following exception:
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/user/VILA/llava/train/train_mem.py", line 49, in <module>
[rank0]: train()
[rank0]: File "/home/user/VILA/llava/train/train.py", line 497, in train
[rank0]: config = LlavaLlamaConfig.from_pretrained(model_args.model_name_or_path, resume=resume_from_checkpoint)
[rank0]: File "/root/anaconda3/envs/vila_adv/lib/python3.10/site-packages/transformers/configuration_utils.py", line 545, in from_pretrained
[rank0]: config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
[rank0]: File "/root/anaconda3/envs/vila_adv/lib/python3.10/site-packages/transformers/configuration_utils.py", line 574, in get_config_dict
[rank0]: config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
[rank0]: File "/root/anaconda3/envs/vila_adv/lib/python3.10/site-packages/transformers/configuration_utils.py", line 633, in _get_config_dict
[rank0]: resolved_config_file = cached_file(
[rank0]: File "/root/anaconda3/envs/vila_adv/lib/python3.10/site-packages/transformers/utils/hub.py", line 469, in cached_file
[rank0]: raise EnvironmentError(
[rank0]: OSError: Incorrect path_or_model_id: 'runs/train/stage2_9tile/model'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
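For context on why the traceback ends in an HFValidationError: `from_pretrained` first looks for a local folder at the given string; only when none exists does it treat the string as a Hub repo id, and a repo id may contain at most one slash. A simplified sketch of that resolution order (this mirrors, but is not, the actual huggingface_hub validation code):

```python
import os
import re

# Approximation of the Hub rule: 'repo_name' or 'namespace/repo_name',
# i.e. at most one slash (this is NOT the library's exact regex).
REPO_ID_RE = re.compile(r"^[A-Za-z0-9][\w.-]*(/[\w.-]+)?$")

def diagnose(path_or_repo_id: str) -> str:
    """Roughly mimic how from_pretrained resolves a model string."""
    if os.path.isdir(path_or_repo_id):
        return "local checkpoint"
    if REPO_ID_RE.match(path_or_repo_id):
        return "candidate Hub repo id"
    return "invalid: not a local dir and not a valid repo id"

# The string from the traceback has three slashes, and the directory does
# not exist relative to the current working directory, hence the error.
print(diagnose("runs/train/stage2_9tile/model"))
```

So the script is most likely being launched from a directory in which `runs/train/stage2_9tile/model` does not exist, which makes transformers fall through to the (failing) Hub lookup.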
The documentation says to use the sft.sh file for supervised fine-tuning in the case of NVILA-8B. We followed the analogous process for the NVILA-15B model, i.e. scripts/NVILA/stage3_9tile.sh. That script requires a data-mixtures argument, which stage3_9tile.sh passes as an empty string, and we are also facing a model path mismatch.
Is this a git repo issue? Kindly suggest a solution.
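One way to rule out the path mismatch before launching training is a small pre-flight check that the model argument resolves to a real checkpoint directory containing a `config.json` (passing an absolute path also removes any dependence on the script's working directory). `preflight` here is a hypothetical helper for illustration, not part of the VILA repo:

```python
import os

def preflight(model_dir: str) -> str:
    """Return the absolute path if model_dir looks like a usable HF checkpoint."""
    abs_dir = os.path.abspath(model_dir)
    if not os.path.isdir(abs_dir):
        raise FileNotFoundError(f"no such directory: {abs_dir}")
    if not os.path.isfile(os.path.join(abs_dir, "config.json")):
        raise FileNotFoundError(f"{abs_dir} exists but has no config.json")
    return abs_dir

# Hypothetical usage with the cloned NVILA-15B checkpoint:
# print(preflight("runs/train/NVILA-15B"))
```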