nn_mb_wlu fails on non-cpu devices with Placeholder storage has not been allocated on MPS device! #140

Open
cregouby opened this issue Dec 27, 2023 · 1 comment
Comments

@cregouby
Collaborator

cregouby commented Dec 27, 2023

This happens because the nn_mb_wlu() function is currently device agnostic: it does not move its internal weight tensor to the input's device, so the underlying prelu kernel ends up mixing a CPU weight with an MPS input (see the sketch after the traceback below).

Reprex

library(tabnet)
x <- torch::torch_randn(2, 2)$to(device="mps")
torch::nnf_elu(x, alpha = 1)
#> torch_tensor
#>  1.0502 -0.2216
#>  0.6809  1.0610
#> [ MPSFloatType{2,2} ]
nnf_mb_wlu(x)
#> Error in (function (self, weight) : Placeholder storage has not been allocated on MPS device!
#> Exception raised from Placeholder at /Users/dfalbel/Documents/actions-runner/mlverse-m1/_work/libtorch-mac-m1/libtorch-mac-m1/pytorch/aten/src/ATen/native/mps/OperationUtils.mm:263 (most recent call first):
#> frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 188 (0x1071c0958 in libc10.dylib)
#> frame #1: at::native::mps::Placeholder::Placeholder(MPSGraphTensor*, at::Tensor const&, NSArray<NSNumber*>*, bool, MPSDataType) + 1336 (0x157236630 in libtorch_cpu.dylib)
#> frame #2: at::native::prelu_mps(at::Tensor const&, at::Tensor const&) + 748 (0x157244e80 in libtorch_cpu.dylib)
#> frame #3: c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&), &torch::autograd::VariableType::(anonymous namespace)::_prelu_kernel(c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, at::Tensor const&>>, at::Tensor (c10::DispatchKeySet, at::Tensor const&, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, at::Tensor const&) + 1312 (0x1559b46d0 in libtorch_cpu.dylib)
#> frame #4: at::_ops::_prelu_kernel::call(at::Tensor const&, at::Tensor const&) + 284 (0x153d0ef98 in libtorch_cpu.dylib)
#> frame #5: at::native::prelu(at::Tensor const&, at::Tensor const&) + 1640 (0x15302772c in libtorch_cpu.dylib)
#> frame #6: at::_ops::prelu::call(at::Tensor const&, at::Tensor const&) + 284 (0x15398cb2c in libtorch_cpu.dylib)
#> frame #7: at::prelu(at::Tensor const&, at::Tensor const&) + 40 (0x148217518 in liblantern.dylib)
#> frame #8: _lantern_prelu_tensor_tensor + 320 (0x148216ee4 in liblantern.dylib)
#> frame #9: cpp_torch_namespace_prelu_self_Tensor_weight_Tensor(XPtrTorchTensor, XPtrTorchTensor) + 76 (0x12665e80c in torchpkg.so)
#> frame #10: _torch_cpp_torch_namespace_prelu_self_Tensor_weight_Tensor + 340 (0x1262091d4 in torchpkg.so)
#> frame #11: R_doDotCall + 268 (0x1031b030c in libR.dylib)
#> frame #12: bcEval + 101932 (0x1031f852c in libR.dylib)
#> frame #13: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #14: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #15: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #16: Rf_eval + 1308 (0x1031df35c in libR.dylib)
#> frame #17: do_docall + 644 (0x10317e644 in libR.dylib)
#> frame #18: bcEval + 29540 (0x1031e6a64 in libR.dylib)
#> frame #19: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #20: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #21: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #22: bcEval + 27460 (0x1031e6244 in libR.dylib)
#> frame #23: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #24: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #25: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #26: bcEval + 27460 (0x1031e6244 in libR.dylib)
#> frame #27: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #28: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #29: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #30: bcEval + 27460 (0x1031e6244 in libR.dylib)
#> frame #31: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #32: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #33: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #34: bcEval + 27460 (0x1031e6244 in libR.dylib)
#> frame #35: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #36: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #37: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #38: bcEval + 27460 (0x1031e6244 in libR.dylib)
#> frame #39: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #40: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #41: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #42: Rf_eval + 1308 (0x1031df35c in libR.dylib)
#> frame #43: do_eval + 1396 (0x1031ffe34 in libR.dylib)
#> frame #44: bcEval + 29540 (0x1031e6a64 in libR.dylib)
#> frame #45: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #46: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #47: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #48: bcEval + 27460 (0x1031e6244 in libR.dylib)
#> frame #49: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #50: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #51: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #52: bcEval + 27460 (0x1031e6244 in libR.dylib)
#> frame #53: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #54: forcePromise + 164 (0x1031f9ca4 in libR.dylib)
#> frame #55: Rf_eval + 728 (0x1031df118 in libR.dylib)
#> frame #56: do_withVisible + 64 (0x1032001c0 in libR.dylib)
#> frame #57: do_internal + 400 (0x103246f10 in libR.dylib)
#> frame #58: bcEval + 30012 (0x1031e6c3c in libR.dylib)
#> frame #59: Rf_eval + 584 (0x1031df088 in libR.dylib)
#> frame #60: R_execClosure + 3084 (0x1031fbd0c in libR.dylib)
#> frame #61: Rf_applyClosure + 524 (0x1031fa58c in libR.dylib)
#> frame #62: bcEval + 27460 (0x1031e6244 in libR.dylib)
#> frame #63: Rf_eval + 584 (0x1031df088 in libR.dylib)

Created on 2023-12-29 with reprex v2.0.2
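
For reference, a minimal sketch of where the mismatch likely comes from and how it could be addressed (this assumes nn_mb_wlu() ultimately calls torch::nnf_prelu() with a weight tensor created on the CPU, which the prelu_mps frame in the traceback suggests):

library(torch)

x <- torch_randn(2, 2)$to(device = "mps")
w <- torch_tensor(0.25)                 # weight created on the CPU, as the module presumably does

# nnf_prelu(x, w)                       # CPU weight + MPS input -> "Placeholder storage ..." error

# Moving the weight onto the input's device avoids the mismatch:
nnf_prelu(x, w$to(device = x$device))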

@cregouby cregouby self-assigned this Dec 27, 2023
@cregouby cregouby changed the title from "nn_mb_wlu failes on non-cpu devices with Placeholder storage has not been allocated on MPS device!" to "nn_mb_wlu fails on non-cpu devices with Placeholder storage has not been allocated on MPS device!" Dec 27, 2023
@cregouby
Collaborator Author

This is related to mlverse/torch#1128, which is an upstream pytorch/pytorch issue.
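
Until that upstream issue is resolved, a possible user-side workaround (just a sketch, not an official recommendation) is to evaluate the activation on a CPU copy and move the result back:

library(tabnet)
library(torch)

x <- torch_randn(2, 2)$to(device = "mps")

# Hypothetical workaround: run nnf_mb_wlu() on the CPU, then return the result to MPS.
y <- nnf_mb_wlu(x$to(device = "cpu"))$to(device = x$device)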

cregouby pushed a commit that referenced this issue Dec 29, 2023