
Error mapreducing over a 0 dimensional array #1141

Closed
GiggleLiu opened this issue Sep 9, 2021 · 1 comment
Labels
bug Something isn't working

Comments


GiggleLiu commented Sep 9, 2021

MWE:

julia> using CUDA

julia> sum(CUDA.CuArray(fill(randn(), ())))
ERROR: MethodError: no method matching ndims(::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{0}, Nothing, typeof(identity), Tuple{CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}}})
Closest candidates are:
  ndims(::CUDA.CUSPARSE.CuSparseMatrix) at /home/leo/.julia/packages/CUDA/9T5Sq/lib/cusparse/array.jl:166
  ndims(::AbstractFFTs.Plan) at /home/leo/.julia/packages/AbstractFFTs/JebmH/src/definitions.jl:15
  ndims(::Base.Iterators.ProductIterator) at iterators.jl:983
  ...
Stacktrace:
  [1] check_reducedims(R::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}, A::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{0}, Nothing, typeof(identity), Tuple{CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}}})
    @ Base ./reducedim.jl:209
  [2] mapreducedim!(f::typeof(identity), op::typeof(Base.add_sum), R::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}, A::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{0}, Nothing, typeof(identity), Tuple{CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}}}; init::Float64)
    @ CUDA ~/.julia/packages/CUDA/9T5Sq/src/mapreduce.jl:168
  [3] _mapreduce(f::typeof(identity), op::typeof(Base.add_sum), As::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}; dims::Colon, init::Nothing)
    @ GPUArrays ~/.julia/packages/GPUArrays/UBzTm/src/host/mapreduce.jl:62
  [4] mapreduce(::Function, ::Function, ::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}; dims::Function, init::Nothing)
    @ GPUArrays ~/.julia/packages/GPUArrays/UBzTm/src/host/mapreduce.jl:28
  [5] mapreduce(::Function, ::Function, ::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer})
    @ GPUArrays ~/.julia/packages/GPUArrays/UBzTm/src/host/mapreduce.jl:28
  [6] _sum(f::Function, a::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}, ::Colon; kw::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Base ./reducedim.jl:894
  [7] _sum(f::Function, a::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}, ::Colon)
    @ Base ./reducedim.jl:894
  [8] _sum(a::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}, ::Colon; kw::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Base ./reducedim.jl:893
  [9] _sum(a::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}, ::Colon)
    @ Base ./reducedim.jl:893
 [10] #sum#722
    @ ./reducedim.jl:889 [inlined]
 [11] sum(a::CuArray{Float64, 0, CUDA.Mem.DeviceBuffer})
    @ Base ./reducedim.jl:889
 [12] top-level scope
    @ REPL[78]:1

CUDA version 3.4.2, Julia version 1.7.0-beta3, OS: Ubuntu 20.04

I guess it would be better to fix the method defined in ~/.julia/packages/CUDA/9T5Sq/src/mapreduce.jl:168?

Edit: defining the following method fixes the problem:

Base.ndims(::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{0}, Nothing, typeof(identity), Tuple{CuArray{Float64, 0, CUDA.Mem.DeviceBuffer}}}) = 0

Where are other related functions defined?
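The workaround above is pinned to one concrete `Broadcasted` signature (`Float64` element type, `identity`, a single `CuArray` argument), so any other broadcast over a 0-dimensional `CuArray` would still fail. A more general sketch (hypothetical and untested on GPU hardware) could instead read the dimensionality off the `N` parameter of `CuArrayStyle{N}`, which every `Broadcasted` over `CuArray`s carries as its style:

```julia
using CUDA
using Base.Broadcast: Broadcasted

# Hypothetical generalization of the workaround above: dispatch on the
# broadcast style's N parameter rather than one concrete Broadcasted type.
# Broadcasted{S} is a UnionAll, so this matches any axes/function/argument
# combination whose style is CuArrayStyle{N}.
Base.ndims(::Broadcasted{CUDA.CuArrayStyle{N}}) where {N} = N
```

This covers `sum`, `prod`, and other reductions over 0-dimensional (and N-dimensional) device arrays in one method, at the cost of piracy on `Base.ndims`; a proper fix would live in CUDA.jl or GPUArrays.jl itself.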

@GiggleLiu GiggleLiu added the bug Something isn't working label Sep 9, 2021

maleadt commented Oct 5, 2021

Sorry, forgot about this issue.

Many of these functions live in GPUArrays, so let's close this in favor of JuliaGPU/GPUArrays.jl#362.
There's a PR there too, JuliaGPU/GPUArrays.jl#363, but that doesn't define ndims.

@maleadt maleadt closed this as completed Oct 5, 2021