export llama 3.2-1B error with executorch 4.0 and main branch #7263

Open

fighting300 opened this issue Dec 10, 2024 · 1 comment
Labels
module: llm (LLM examples and apps, and the extensions/llm libraries)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

fighting300 commented Dec 10, 2024

🐛 Describe the bug

  1. When I export Llama 3.2-1B with ExecuTorch 0.4.0, I get the warnings below:

python -m examples.models.llama2.export_llama --checkpoint /Users/tanwei/.llama/checkpoints/Llama3.2-1B-Instruct/consolidated.00.pth --params /Users/tanwei/.llama/checkpoints/Llama3.2-1B-Instruct/params.json -kv --use_sdpa_with_kv_cache -X -qmode 8da4w --group_size 128 -d fp32
/Users/tanwei/Desktop/Model/executorch/examples/models/llama2/model.py:101: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See
https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models
for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
checkpoint = torch.load(checkpoint_path, map_location=device, mmap=True)
W1210 16:18:22.692745 49662 site-packages/torch/_export/__init__.py:64] +============================+
W1210 16:18:22.693156 49662 site-packages/torch/_export/__init__.py:65] | !!! WARNING !!! |
W1210 16:18:22.693295 49662 site-packages/torch/_export/__init__.py:66] +============================+
W1210 16:18:22.693485 49662 site-packages/torch/_export/__init__.py:67] capture_pre_autograd_graph() is deprecated and doesn't provide any function guarantee moving forward.
W1210 16:18:22.693619 49662 site-packages/torch/_export/__init__.py:68] Please switch to use torch.export.export_for_training instead.

/Users/tanwei/miniconda3/envs/executorch/lib/python3.10/site-packages/executorch/exir/emit/_emitter.py:1512: UserWarning: Mutation on a buffer in the model is detected. ExecuTorch assumes buffers that are mutated in the graph have a meaningless initial state, only the shape and dtype will be serialized.
warnings.warn(
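
For reference (this is only about the FutureWarning, not the export failure itself): the warning comes from model.py calling torch.load with the default weights_only=False. A minimal sketch of the safer call it recommends, reusing the checkpoint path from the command above:

import torch

# Sketch only: load the Llama 3.2-1B checkpoint with weights_only=True, as the
# FutureWarning suggests. A plain state_dict checkpoint such as
# consolidated.00.pth contains only tensors, so the restricted unpickler
# should accept it unchanged.
checkpoint_path = "/Users/tanwei/.llama/checkpoints/Llama3.2-1B-Instruct/consolidated.00.pth"
checkpoint = torch.load(
    checkpoint_path,
    map_location="cpu",
    mmap=True,
    weights_only=True,  # restrict unpickling to tensors and allowlisted types
)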

  2. When I export Llama 3.2-1B-QLoRA, I get the error below:
Traceback (most recent call last):
  File "/Users/tanwei/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/tanwei/miniconda3/envs/executorch/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/tanwei/Desktop/Model/executorch/examples/models/llama2/export_llama.py", line 30, in <module>
    main()  # pragma: no cover
  File "/Users/tanwei/Desktop/Model/executorch/examples/models/llama2/export_llama.py", line 26, in main
    export_llama(modelname, args)
  File "/Users/tanwei/Desktop/Model/executorch/examples/models/llama2/export_llama_lib.py", line 411, in export_llama
    builder = _export_llama(modelname, args)
  File "/Users/tanwei/Desktop/Model/executorch/examples/models/llama2/export_llama_lib.py", line 510, in _export_llama
    _prepare_for_llama_export(modelname, args)
  File "/Users/tanwei/Desktop/Model/executorch/examples/models/llama2/export_llama_lib.py", line 444, in _prepare_for_llama_export
    _load_llama_model(
  File "/Users/tanwei/Desktop/Model/executorch/examples/models/llama2/export_llama_lib.py", line 694, in _load_llama_model
    model, example_inputs, _ = EagerModelFactory.create_model(
  File "/Users/tanwei/Desktop/Model/executorch/examples/models/model_factory.py", line 44, in create_model
    model = model_class(**kwargs)
  File "/Users/tanwei/Desktop/Model/executorch/examples/models/llama2/model.py", line 146, in __init__
    model_args: ModelArgs = ModelArgs(
TypeError: ModelArgs.__init__() got an unexpected keyword argument 'quantization_args'
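
As a temporary workaround (just a sketch, not an official fix): the TypeError suggests that the ModelArgs in the 0.4.0 release does not declare the quantization_args key carried by the QLoRA params.json, so dropping unknown keys before constructing ModelArgs avoids the crash. The import path and the QLoRA params.json path below are assumptions:

import dataclasses
import json

# Assumed import path in the 0.4.0 example code; adjust if ModelArgs lives elsewhere.
from examples.models.llama2.llama_transformer import ModelArgs

# Hypothetical path to the QLoRA checkpoint's params.json.
params_path = "/Users/tanwei/.llama/checkpoints/Llama3.2-1B-Instruct-QLORA/params.json"
with open(params_path) as f:
    params = json.load(f)

# Keep only the keys that this ModelArgs dataclass actually declares, so extra
# entries such as "quantization_args" do not raise a TypeError.
known_fields = {f.name for f in dataclasses.fields(ModelArgs)}
model_args = ModelArgs(**{k: v for k, v in params.items() if k in known_fields})

If a newer checkout's ModelArgs does accept quantization_args, exporting from that newer tree is probably the cleaner route than filtering, which is why I also tried the main branch below.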

  3. When I build from the main branch, I get the error below:

No source file tensor_impl_ptr.cpp when building the wheel:
CMake Error at extension/tensor/CMakeLists.txt:20 (add_library):
Cannot find source file:

  /Users/tanwei/Desktop/Model/executorch/extension/tensor/tensor_impl_ptr.cpp

Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm
.ccm .cxxm .c++m .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90
.f95 .f03 .hip .ispc

CMake Error at extension/tensor/CMakeLists.txt:20 (add_library):
No SOURCES given to target: extension_tensor

CMake Generate step failed. Build files cannot be regenerated correctly.
error: command '/Users/tanwei/miniconda3/envs/executorch/bin/cmake' failed with exit code 1
error: subprocess-exited-with-error

× Building wheel for executorch (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /Users/tanwei/miniconda3/envs/executorch/bin/python /Users/tanwei/miniconda3/envs/executorch/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /var/folders/db/rnry4hjj5rv6fjkxjgh2pt580000gn/T/tmpwnm_wksl
cwd: /Users/tanwei/Desktop/Model/executorch
Building wheel for executorch (pyproject.toml) ... error
ERROR: Failed building wheel for executorch
Failed to build executorch
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (executorch)
Traceback (most recent call last):
  File "/Users/tanwei/Desktop/Model/executorch/./install_requirements.py", line 185, in <module>
    subprocess.run(
  File "/Users/tanwei/miniconda3/envs/executorch/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/Users/tanwei/miniconda3/envs/executorch/bin/python', '-m', 'pip', 'install', '.', '--no-build-isolation', '-v', '--extra-index-url', 'https://download.pytorch.org/whl/nightly/cpu']' returned non-zero exit status 1.
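
For what it's worth, the CMake error means the build expects extension/tensor/tensor_impl_ptr.cpp but the file is not present in my checkout, which usually points to a stale or partially updated source tree. A small check of the paths CMake complained about (only the file names are taken from the error, the rest is illustrative):

from pathlib import Path

# Sketch: confirm the source named in the CMake error is actually present in
# the checkout before re-running ./install_requirements.py.
repo = Path("/Users/tanwei/Desktop/Model/executorch")
missing_src = repo / "extension/tensor/tensor_impl_ptr.cpp"  # file named in the CMake error
cmakelists = repo / "extension/tensor/CMakeLists.txt"        # build file that references it
print(missing_src, "exists" if missing_src.exists() else "MISSING")
print(cmakelists, "exists" if cmakelists.exists() else "MISSING")

In my case the file really was absent, which is why I tried adding it by hand in step 4 below.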

  4. When I add the missing file manually, I get the error below:

ld: symbol(s) not found for architecture arm64
c++: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [extension/llm/custom_ops/libcustom_ops_aot_lib.dylib] Error 1
make[2]: *** [extension/llm/custom_ops/CMakeFiles/custom_ops_aot_lib.dir/all] Error 2
make[1]: *** [extension/llm/custom_ops/CMakeFiles/custom_ops_aot_lib.dir/rule] Error 2
make: *** [custom_ops_aot_lib] Error 2
error: command '/Users/tanwei/miniconda3/envs/executorch/bin/cmake' failed with exit code 2
error: subprocess-exited-with-error

× Building wheel for executorch (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /Users/tanwei/miniconda3/envs/executorch/bin/python /Users/tanwei/miniconda3/envs/executorch/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /var/folders/db/rnry4hjj5rv6fjkxjgh2pt580000gn/T/tmpv4fe0kvx
cwd: /Users/tanwei/Desktop/Model/executorch
Building wheel for executorch (pyproject.toml) ... error
ERROR: Failed building wheel for executorch
Failed to build executorch
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (executorch)
Traceback (most recent call last):
  File "/Users/tanwei/Desktop/Model/executorch/./install_requirements.py", line 185, in <module>
    subprocess.run(
  File "/Users/tanwei/miniconda3/envs/executorch/lib/python3.10/subprocess.py", line 524, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/Users/tanwei/miniconda3/envs/executorch/bin/python', '-m', 'pip', 'install', '.', '--no-build-isolation', '-v', '--extra-index-url', 'https://download.pytorch.org/whl/nightly/cpu']' returned non-zero exit status 1.

Versions

Collecting environment information...
PyTorch version: 2.6.0.dev20241112
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 15.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.31.1
Libc version: N/A

Python version: 3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 08:22:19) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M4 Pro

Versions of relevant libraries:
[pip3] executorch==0.4.0a0+6a085ff
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnx-weekly==1.18.0.dev20241203
[pip3] onnxruntime==1.20.1
[pip3] onnxscript==0.1.0.dev20241204
[pip3] torch==2.5.0
[pip3] torchao==0.5.0+git0916b5b2
[pip3] torchaudio==2.5.0
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0
[conda] executorch 0.4.0a0+6a085ff pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.6.0.dev20241112 pypi_0 pypi
[conda] torchao 0.5.0+git0916b5b2 pypi_0 pypi
[conda] torchaudio 2.5.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241112 pypi_0 pypi

fighting300 changed the title from "export llama 3.2-1B error with executorch 4.0 and main" to "export llama 3.2-1B error with executorch 4.0 and main branch" on Dec 10, 2024
Jack-Khuu added the module: llm and triaged labels on Dec 17, 2024
@Jack-Khuu
Contributor

@lucylq Is this up your alley?
