Merge pull request #77 from DrChainsaw/flux0.13
Updates for Flux 0.13
DrChainsaw authored Apr 10, 2022
2 parents 93e6349 + f572a3d commit f9b0cdb
Showing 14 changed files with 174 additions and 124 deletions.
4 changes: 2 additions & 2 deletions Project.toml
@@ -1,6 +1,6 @@
name = "NaiveNASflux"
uuid = "85610aed-7d32-5e57-bb50-4c2e1c9e7997"
version = "2.0.4"
version = "2.0.5"

[deps]
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
@@ -14,7 +14,7 @@ Setfield = "efcf1570-3423-57d1-acb7-fd33fddbac46"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"

[compat]
Flux = "0.12"
Flux = "0.13"
Functors = "0.2"
JuMP = "0.19, 0.20, 0.21, 0.22, 0.23, 1"
NaiveNASlib = "2"
2 changes: 1 addition & 1 deletion src/NaiveNASflux.jl
@@ -4,7 +4,7 @@ using Reexport
@reexport using NaiveNASlib
using NaiveNASlib.Extend, NaiveNASlib.Advanced
import Flux
using Flux: Dense, Conv, ConvTranspose, DepthwiseConv, CrossCor, LayerNorm, BatchNorm, InstanceNorm, GroupNorm,
using Flux: Dense, Conv, ConvTranspose, CrossCor, LayerNorm, BatchNorm, InstanceNorm, GroupNorm,
MaxPool, MeanPool, Dropout, AlphaDropout, GlobalMaxPool, GlobalMeanPool, cpu
import Functors
using Functors: @functor
106 changes: 62 additions & 44 deletions src/constraints.jl
@@ -1,74 +1,72 @@

"""
DepthwiseConvAllowNinChangeStrategy(newoutputsmax::Integer, multipliersmax::Integer, base, [fallback])
DepthwiseConvAllowNinChangeStrategy(allowed_new_outgroups::AbstractVector{<:Integer}, allowed_multipliers::AbstractVector{<:Integer}, base, [fallback])
GroupedConvAllowNinChangeStrategy(newoutputsmax::Integer, multipliersmax::Integer, base, [fallback])
GroupedConvAllowNinChangeStrategy(allowed_new_outgroups::AbstractVector{<:Integer}, allowed_multipliers::AbstractVector{<:Integer}, base, [fallback])
`DecoratingJuMPΔSizeStrategy` which allows both nin and nout of `DepthwiseConv` layers to change independently.
`DecoratingJuMPΔSizeStrategy` which allows both nin and nout of grouped `Conv` layers (i.e `Conv` with `groups` != 1) to change independently.
Might cause optimization to take very long time so use with care! Use [`DepthwiseConvSimpleΔSizeStrategy`](@ref)
if `DepthwiseConvAllowNinChangeStrategy` takes too long.
Might cause optimization to take a very long time, so use with care! Use [`GroupedConvSimpleΔSizeStrategy`](@ref)
if `GroupedConvAllowNinChangeStrategy` takes too long.
The elements of `allowed_new_outgroups` determine how many extra elements in the output dimension of the weight
shall be tried for each existing output element. For example, for a `DepthwiseConv((k1,k2), nin=>nout))` there
are `nout / nin` elements in the output dimension. With `allowed_new_outgroups = 0:3` it is allowed to insert
0, 1, 2 or 3 new elements in the output dimension between each already existing element (so with `nout / nin`
elements the maximum increase is `3 * nout / nin`).
shall be tried for each existing output element. For example, for a `Conv((k1,k2), nin=>nout; groups=nin)` one
must insert integer multiples of `nout / nin` elements at a time. With `nout / nin = k` and `allowed_new_outgroups = 0:3` it is allowed to insert 0, `k`, `2k` or `3k` new elements in the output dimension between each already existing element.
The elements of `allowed_multipliers` determine the total number of allowed output elements, i.e the allowed
ratios of `nout / nin`.
If `fallback` is not provided, it will be derived from `base`.
"""
struct DepthwiseConvAllowNinChangeStrategy{S,F} <: DecoratingJuMPΔSizeStrategy
struct GroupedConvAllowNinChangeStrategy{S,F} <: DecoratingJuMPΔSizeStrategy
allowed_new_outgroups::Vector{Int}
allowed_multipliers::Vector{Int}
base::S
fallback::F
end
DepthwiseConvAllowNinChangeStrategy(newoutputsmax::Integer, multipliersmax::Integer,base,fb...) = DepthwiseConvAllowNinChangeStrategy(0:newoutputsmax, 1:multipliersmax, base, fb...)
GroupedConvAllowNinChangeStrategy(newoutputsmax::Integer, multipliersmax::Integer,base,fb...) = GroupedConvAllowNinChangeStrategy(0:newoutputsmax, 1:multipliersmax, base, fb...)


function DepthwiseConvAllowNinChangeStrategy(
function GroupedConvAllowNinChangeStrategy(
allowed_new_outgroups::AbstractVector{<:Integer},
allowed_multipliers::AbstractVector{<:Integer},
base, fb= recurse_fallback(s -> DepthwiseConvAllowNinChangeStrategy(allowed_new_outgroups, allowed_multipliers, s), base))
return DepthwiseConvAllowNinChangeStrategy(collect(Int, allowed_new_outgroups), collect(Int, allowed_multipliers), base, fb)
base, fb= recurse_fallback(s -> GroupedConvAllowNinChangeStrategy(allowed_new_outgroups, allowed_multipliers, s), base))
return GroupedConvAllowNinChangeStrategy(collect(Int, allowed_new_outgroups), collect(Int, allowed_multipliers), base, fb)
end


NaiveNASlib.base(s::DepthwiseConvAllowNinChangeStrategy) = s.base
NaiveNASlib.fallback(s::DepthwiseConvAllowNinChangeStrategy) = s.fallback
NaiveNASlib.base(s::GroupedConvAllowNinChangeStrategy) = s.base
NaiveNASlib.fallback(s::GroupedConvAllowNinChangeStrategy) = s.fallback

NaiveNASlib.add_participants!(s::DepthwiseConvAllowNinChangeStrategy, vs=AbstractVertex[]) = NaiveNASlib.add_participants!(base(s), vs)
NaiveNASlib.add_participants!(s::GroupedConvAllowNinChangeStrategy, vs=AbstractVertex[]) = NaiveNASlib.add_participants!(base(s), vs)
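For illustration, such a strategy might be constructed like this (a sketch only: qualified names are used since this diff does not show whether the strategies are exported, and `NaiveNASlib.DefaultJuMPΔSizeStrategy` is taken from the call sites further down):

    using NaiveNASflux, NaiveNASlib

    # 0:10 new output groups per existing element, nout/nin multipliers 1:10,
    # solved on top of NaiveNASlib's default JuMP strategy; the fallback is
    # derived from the base strategy since none is given explicitly.
    s = NaiveNASflux.GroupedConvAllowNinChangeStrategy(10, 10, NaiveNASlib.DefaultJuMPΔSizeStrategy())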


"""
DepthwiseConvSimpleΔSizeStrategy(base, [fallback])
GroupedConvSimpleΔSizeStrategy(base, [fallback])
`DecoratingJuMPΔSizeStrategy` which only allows nout of `DepthwiseConv` layers to change.
`DecoratingJuMPΔSizeStrategy` which only allows nout of grouped `Conv` layers (i.e `Conv` with `groups` != 1) to change.
Use if [`DepthwiseConvAllowNinChangeStrategy`](@ref) takes too long to solve.
Use if [`GroupedConvAllowNinChangeStrategy`](@ref) takes too long to solve.
The elements of `allowed_multipliers` determine the total number of allowed output elements, i.e the allowed
ratios of `nout / nin`.
ratios of `nout / nin` (where `nin` is fixed).
If `fallback` is not provided, it will be derived from `base`.
"""
struct DepthwiseConvSimpleΔSizeStrategy{S, F} <: DecoratingJuMPΔSizeStrategy
struct GroupedConvSimpleΔSizeStrategy{S, F} <: DecoratingJuMPΔSizeStrategy
allowed_multipliers::Vector{Int}
base::S
fallback::F
end

DepthwiseConvSimpleΔSizeStrategy(maxms::Integer, base, fb...) = DepthwiseConvSimpleΔSizeStrategy(1:maxms, base, fb...)
function DepthwiseConvSimpleΔSizeStrategy(ms::AbstractVector{<:Integer}, base, fb=recurse_fallback(s -> DepthwiseConvSimpleΔSizeStrategy(ms, s), base))
return DepthwiseConvSimpleΔSizeStrategy(collect(Int, ms), base, fb)
GroupedConvSimpleΔSizeStrategy(maxms::Integer, base, fb...) = GroupedConvSimpleΔSizeStrategy(1:maxms, base, fb...)
function GroupedConvSimpleΔSizeStrategy(ms::AbstractVector{<:Integer}, base, fb=recurse_fallback(s -> GroupedConvSimpleΔSizeStrategy(ms, s), base))
return GroupedConvSimpleΔSizeStrategy(collect(Int, ms), base, fb)
end
NaiveNASlib.base(s::DepthwiseConvSimpleΔSizeStrategy) = s.base
NaiveNASlib.fallback(s::DepthwiseConvSimpleΔSizeStrategy) = s.fallback
NaiveNASlib.base(s::GroupedConvSimpleΔSizeStrategy) = s.base
NaiveNASlib.fallback(s::GroupedConvSimpleΔSizeStrategy) = s.fallback

NaiveNASlib.add_participants!(s::DepthwiseConvSimpleΔSizeStrategy, vs=AbstractVertex[]) = NaiveNASlib.add_participants!(base(s), vs)
NaiveNASlib.add_participants!(s::GroupedConvSimpleΔSizeStrategy, vs=AbstractVertex[]) = NaiveNASlib.add_participants!(base(s), vs)
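The simpler strategy above only needs the allowed `nout / nin` multipliers (same caveats as the sketch above):

    # Keep nin fixed and only allow nout to become 1, 2, 3 or 4 times nin.
    s = NaiveNASflux.GroupedConvSimpleΔSizeStrategy(4, NaiveNASlib.DefaultJuMPΔSizeStrategy())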


recurse_fallback(f, s::AbstractJuMPΔSizeStrategy) = wrap_fallback(f, NaiveNASlib.fallback(s))
@@ -86,10 +84,11 @@ function NaiveNASlib.compconstraint!(case, s::DecoratingJuMPΔSizeStrategy, lt::
NaiveNASlib.compconstraint!(case, NaiveNASlib.base(s), lt, data)
end
# To avoid ambiguity
function NaiveNASlib.compconstraint!(case::NaiveNASlib.ScalarSize, s::DecoratingJuMPΔSizeStrategy, lt::FluxDepthwiseConv, data)
function NaiveNASlib.compconstraint!(case::NaiveNASlib.ScalarSize, s::DecoratingJuMPΔSizeStrategy, lt::FluxConvolutional, data)
NaiveNASlib.compconstraint!(case, NaiveNASlib.base(s), lt, data)
end
function NaiveNASlib.compconstraint!(::NaiveNASlib.ScalarSize, s::AbstractJuMPΔSizeStrategy, ::FluxDepthwiseConv, data, ms=allowed_multipliers(s))
function NaiveNASlib.compconstraint!(::NaiveNASlib.ScalarSize, s::AbstractJuMPΔSizeStrategy, ::FluxConvolutional, data, ms=allowed_multipliers(s))
ngroups(data.vertex) == 1 && return

# Add constraint that nout(l) == n * nin(l) where n is integer
ins = filter(vin -> vin in keys(data.noutdict), inputs(data.vertex))
@@ -114,25 +113,26 @@ function NaiveNASlib.compconstraint!(::NaiveNASlib.ScalarSize, s::AbstractJuMPΔ
end
end
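One way to express the `nout(l) == n * nin(l)` coupling noted in the comment above, sketched standalone in JuMP with binary multiplier variables (an assumed formulation for illustration only, not necessarily the exact constraints this function generates):

    using JuMP

    model = Model()
    ninval = 4                          # pretend nin is already fixed for this sketch
    ms = 1:4                            # allowed nout/nin multipliers
    @variable(model, pick[ms], Bin)     # exactly one multiplier gets picked
    @variable(model, noutvar >= 0, Int)
    @constraint(model, sum(pick) == 1)
    @constraint(model, noutvar == sum(m * ninval * pick[m] for m in ms))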

allowed_multipliers(s::DepthwiseConvAllowNinChangeStrategy) = s.allowed_multipliers
allowed_multipliers(s::DepthwiseConvSimpleΔSizeStrategy) = s.allowed_multipliers
allowed_multipliers(s::GroupedConvAllowNinChangeStrategy) = s.allowed_multipliers
allowed_multipliers(s::GroupedConvSimpleΔSizeStrategy) = s.allowed_multipliers
allowed_multipliers(::AbstractJuMPΔSizeStrategy) = 1:10


function NaiveNASlib.compconstraint!(case::NaiveNASlib.NeuronIndices, s::DecoratingJuMPΔSizeStrategy, t::FluxDepthwiseConv, data)
function NaiveNASlib.compconstraint!(case::NaiveNASlib.NeuronIndices, s::DecoratingJuMPΔSizeStrategy, t::FluxConvolutional, data)
NaiveNASlib.compconstraint!(case, base(s), t, data)
end
function NaiveNASlib.compconstraint!(case::NaiveNASlib.NeuronIndices, s::AbstractJuMPΔSizeStrategy, t::FluxDepthwiseConv, data)
function NaiveNASlib.compconstraint!(case::NaiveNASlib.NeuronIndices, s::AbstractJuMPΔSizeStrategy, t::FluxConvolutional, data)
ngroups(data.vertex) == 1 && return
# Fallbacks don't matter here since we won't call it from below here, just add default so we don't accidentally crash due to some
# strategy which hasn't defined a fallback
if 15 < sum(keys(data.outselectvars)) do v
layertype(v) isa FluxDepthwiseConv || return 0
ngroups(v) == 1 && return 0
return log2(nout(v)) # Very roughly determined...
end
return NaiveNASlib.compconstraint!(case, DepthwiseConvSimpleΔSizeStrategy(10, s, NaiveNASlib.DefaultJuMPΔSizeStrategy()), t, data)
return NaiveNASlib.compconstraint!(case, GroupedConvSimpleΔSizeStrategy(10, s, NaiveNASlib.DefaultJuMPΔSizeStrategy()), t, data)
end
# The number of allowed multipliers can probably be better tuned, perhaps based on current size.
return NaiveNASlib.compconstraint!(case, DepthwiseConvAllowNinChangeStrategy(10, 10, s, NaiveNASlib.DefaultJuMPΔSizeStrategy()), t, data)
return NaiveNASlib.compconstraint!(case, GroupedConvAllowNinChangeStrategy(10, 10, s, NaiveNASlib.DefaultJuMPΔSizeStrategy()), t, data)
#=
For benchmarking:
using NaiveNASflux, Flux, NaiveNASlib.Advanced
@@ -154,37 +154,48 @@ function NaiveNASlib.compconstraint!(case::NaiveNASlib.NeuronIndices, s::Abstrac
=#
end

function NaiveNASlib.compconstraint!(::NaiveNASlib.NeuronIndices, s::DepthwiseConvSimpleΔSizeStrategy, t::FluxDepthwiseConv, data)
function NaiveNASlib.compconstraint!(::NaiveNASlib.NeuronIndices, s::GroupedConvSimpleΔSizeStrategy, t::FluxConvolutional, data)
model = data.model
v = data.vertex
select = data.outselectvars[v]
insert = data.outinsertvars[v]


ngroups(v) == 1 && return
nin(v)[] == 1 && return # Special case, no restrictions as we only need to be an integer multiple of 1

ngroups = div(nout(v), nin(v)[])
if size(weights(layer(v)), indim(v)) != 1
@warn "Handling of convolutional layers with groups != nin not implemented. Model might not be size aligned after mutation!"
end

# Neurons mapped to the same weight are interleaved, i.e layer.weight[:,:,1,:] maps to y[1:ngroups:end] where y = layer(x)
for group in 1:ngroups
neurons_in_group = select[group : ngroups : end]
ngrps = div(nout(v), nin(v)[])

for group in 1:ngrps
neurons_in_group = select[group : ngrps : end]
@constraint(model, neurons_in_group[1] == neurons_in_group[end])
@constraint(model, [i=2:length(neurons_in_group)], neurons_in_group[i] == neurons_in_group[i-1])

insert_in_group = insert[group : ngroups : end]
insert_in_group = insert[group : ngrps : end]
@constraint(model, insert_in_group[1] == insert_in_group[end])
@constraint(model, [i=2:length(insert_in_group)], insert_in_group[i] == insert_in_group[i-1])
end

NaiveNASlib.compconstraint!(NaiveNASlib.ScalarSize(), s, t, data, allowed_multipliers(s))
end

function NaiveNASlib.compconstraint!(case::NaiveNASlib.NeuronIndices, s::DepthwiseConvAllowNinChangeStrategy, t::FluxDepthwiseConv, data)
function NaiveNASlib.compconstraint!(case::NaiveNASlib.NeuronIndices, s::GroupedConvAllowNinChangeStrategy, t::FluxConvolutional, data)
model = data.model
v = data.vertex
select = data.outselectvars[v]
insert = data.outinsertvars[v]

ngroups(v) == 1 && return
nin(v)[] == 1 && return # Special case, no restrictions as we only need to be an integer multiple of 1?

# Step 0:
# Flux 0.13 changed the grouping of weights so that size(layer.weight) = (..., nin / ngroups, nout)
# We can get back the shape expected here through weightgroups = reshape(layer.weight, ..., nout / groups, nin)
# Step 1:
# Neurons mapped to the same weight are interleaved, i.e layer.weight[:,:,1,:] maps to y[1:ngroups:end] where y = layer(x)
# where ngroups = nout / nin. For example, nout = 12 and nin = 4 mean size(layer.weight) == (..,3, 4)
@@ -193,12 +204,15 @@ function NaiveNASlib.compconstraint!(case::NaiveNASlib.NeuronIndices, s::Depthwi
#
ins = filter(vin -> vin in keys(data.noutdict), inputs(v))
# If inputs to v are not part of problem we have to keep nin(v) fixed!
isempty(ins) && return NaiveNASlib.compconstraint!(case, DepthwiseConvSimpleΔSizeStrategy(allowed_multipliers(s), base(s)), t, data)
isempty(ins) && return NaiveNASlib.compconstraint!(case, GroupedConvSimpleΔSizeStrategy(allowed_multipliers(s), base(s)), t, data)
# TODO: Check if input is immutable and do simple strat then too?
inselect = data.outselectvars[ins[]]
ininsert = data.outinsertvars[ins[]]

#ngroups = div(nout(v), nin(v)[])
if size(weights(layer(v)), indim(v)) != 1
@warn "Handling of convolutional layers with groups != nin not implemented. Model might not be size aligned after mutation!"
end
ningroups = nin(v)[]
add_depthwise_constraints(model, inselect, ininsert, select, insert, ningroups, s.allowed_new_outgroups, s.allowed_multipliers)
end
@@ -213,6 +227,10 @@ function add_depthwise_constraints(model, inselect, ininsert, select, insert, ni
# Inserting one new input element at position i will get us noutgroups new consecutive outputs at position i
# Thus nout change by Δ * noutgroups.

# Note: Flux 0.13 changed the grouping of weights so that size(layer.weight) = (..., nin / ngroups, nout)
# We can get back the shape expected here through weightgroups = reshape(layer.weight, ..., nout / groups, nin)
# All examples below assume the pre-0.13 representation!

# Example:

# dc = DepthwiseConv((1,1), 3 => 9; bias=false);
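A quick check of the Flux 0.13 weight layout referred to in the comments above (a sketch assuming Flux ≥ 0.13):

    using Flux

    c = Conv((1, 1), 4 => 8; groups=4, bias=false)
    size(c.weight)                            # (1, 1, 1, 8) == (k..., nin ÷ groups, nout)
    wg = reshape(c.weight, 1, 1, 8 ÷ 4, 4)    # back to the (k..., nout ÷ groups, nin) shape the comments use
    size(wg)                                  # (1, 1, 2, 4)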
26 changes: 17 additions & 9 deletions src/mutable.jl
@@ -63,7 +63,9 @@ function mutate(m::MutableLayer; inputs, outputs, other = l -> (), insert=neuron
end
end

function mutate(lt::FluxParLayer, m::MutableLayer; inputs=1:nin(m)[], outputs=1:nout(m), other= l -> (), insert=neuroninsert)
mutate(lt::FluxParLayer, m::MutableLayer; kwargs...) = _mutate(lt, m; kwargs...)

function _mutate(lt::FluxParLayer, m::MutableLayer; inputs=1:nin(m)[], outputs=1:nout(m), other= l -> (), insert=neuroninsert)
l = layer(m)
otherdims = other(l)
w = select(weights(l), indim(l) => inputs, outdim(l) => outputs, otherdims...; newfun=insert(lt, WeightParam()))
@@ -72,19 +74,25 @@ function mutate(lt::FluxParLayer, m::MutableLayer; inputs=1:nin(m)[], outputs=1:
end
otherpars(o, l) = ()

function mutate(lt::FluxDepthwiseConv{N}, m::MutableLayer; inputs=1:nin(m)[], outputs=1:nout(m), other= l -> (), insert=neuroninsert) where N
function mutate(lt::FluxConvolutional{N}, m::MutableLayer; inputs=1:nin(m)[], outputs=1:nout(m), other= l -> (), insert=neuroninsert) where N

if ngroups(lt, layer(m)) == 1
return _mutate(lt, m; inputs, outputs, other, insert)
end

l = layer(m)
otherdims = other(l)

ngroups = div(length(outputs), length(inputs))
# TODO: Handle other cases than ngroups == nin
newingroups = 1

# inputs and outputs are coupled through the constraints (which hopefully were enforced) so we only need to consider outputs
currsize =size(weights(l))
wo = select(reshape(weights(l), currsize[1:N]...,:), N+1 => outputs, otherdims...; newfun=insert(lt, WeightParam()))
newks = size(wo)[1:N]
w = collect(reshape(wo, newks...,ngroups, :))
w = collect(reshape(wo, newks...,newingroups, :))
b = select(bias(l), 1 => outputs; newfun=insert(lt, BiasParam()))
newlayer(m, w, b, otherpars(other, l))
newlayer(m, w, b, (;groups= length(inputs) ÷ newingroups, otherpars(other, l)...))
end
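Putting it together, resizing a grouped convolution could look roughly like this (hypothetical sketch: `conv2dinputvertex`, `fluxvertex`, `Δnout!` and `nout` are assumed from the NaiveNASflux/NaiveNASlib 2.x APIs and do not appear in this diff, so treat the names and the resulting size as illustrative):

    using NaiveNASflux, Flux

    iv = conv2dinputvertex("in", 4)
    v  = fluxvertex(Conv((3, 3), 4 => 8; groups=4), iv)
    Δnout!(v => 4)    # nout must stay an integer multiple of nin(v) == 4, so e.g. 8 -> 12
    nout(v)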

function mutate(lt::FluxRecurrent, m::MutableLayer; inputs=1:nin(m)[], outputs=1:nout(m), other=missing, insert=neuroninsert)
@@ -131,15 +139,15 @@ function mutate(t::FluxParInvLayer, m::MutableLayer; inputs=missing, outputs=mis
ismissing(outputs) || return mutate(t, m, outputs; insert=insert)
end

function mutate(lt::FluxDiagonal, m::MutableLayer, inds; insert=neuroninsert)
function mutate(lt::FluxScale, m::MutableLayer, inds; insert=neuroninsert)
l = layer(m)
w = select(weights(l), 1 => inds, newfun=insert(lt, WeightParam()))
b = select(bias(l), 1 => inds; newfun=insert(lt, BiasParam()))
newlayer(m, w, b)
end

function mutate(::FluxLayerNorm, m::MutableLayer, inds; insert=neuroninsert)
# LayerNorm is only a wrapped Diagonal. Just mutate the Diagonal and make a new LayerNorm of it
# LayerNorm is only a wrapped Scale. Just mutate the Scale and make a new LayerNorm of it
proxy = MutableLayer(layer(m).diag)
mutate(proxy; inputs=inds, outputs=inds, other=l->(), insert=insert)

@@ -197,7 +205,7 @@ newlayer(m::MutableLayer, w, b, other=nothing) = m.layer = newlayer(layertype(m)

newlayer(::FluxDense, m::MutableLayer, w, b, other) = Dense(w, b, deepcopy(layer(m).σ))
newlayer(::FluxConvolutional, m::MutableLayer, w, b, other) = setproperties(layer(m), (weight=w, bias=b, σ=deepcopy(layer(m).σ), other...))
newlayer(::FluxDiagonal, m::MutableLayer, w, b, other) = Flux.Diagonal(w, b)
newlayer(::FluxScale, m::MutableLayer, w, b, other) = Flux.Scale(w, b)
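The `LayerNorm`-wraps-`Scale` relationship used by the `FluxLayerNorm` method above can be checked directly (assuming Flux ≥ 0.13):

    using Flux

    ln = LayerNorm(5)
    ln.diag isa Flux.Scale    # true with the default affine=true, so mutating ln.diag is enough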


"""
@@ -233,7 +241,7 @@ julia> lazy(ones(Float32, 2, 5)) |> size
(3, 5)
julia> layer(lazy)
Dense(2, 3, relu) # 9 parameters
Dense(2 => 3, relu) # 9 parameters
```
"""
mutable struct LazyMutable <: AbstractMutableComp
15 changes: 10 additions & 5 deletions src/neuronutility.jl
@@ -62,7 +62,7 @@ function l2_squeeze(x, dimskeep=1:ndims(x))
dims = filter(i -> i ∉ dimskeep, 1:ndims(x))
return sqrt.(dropdims(sum(x -> x^2, x, dims=dims), dims=Tuple(dims)))
end
l2_squeeze(z::Flux.Zeros, args...) = z
l2_squeeze(z::Number, args...) = z
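For reference, `l2_squeeze` reduces with an L2 norm over every dimension not in `dimskeep`; the qualified name below assumes the helper stays internal (not exported):

    using NaiveNASflux

    NaiveNASflux.l2_squeeze(ones(2, 3), 1)    # ≈ [√3, √3]: sums squares over dim 2, keeps dim 1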

"""
mean_squeeze(f, x, dimkeep)
@@ -90,14 +90,19 @@ neuronutility(l) = neuronutility(layertype(l), l)
# Default: L2 norm of weights + bias. Not a very good metric, but should be better than random
# Maybe do something about state in recurrent layers as well, but CBA to do it right now
neuronutility(::FluxParLayer, l) = l2_squeeze(weights(l), outdim(l)) .+ l2_squeeze(bias(l))
function neuronutility(::FluxDepthwiseConv, l)
wm = l2_squeeze(weights(l), outdim(l))
function neuronutility(::FluxConvolutional{N}, l) where N
ngroups(l) == 1 && return l2_squeeze(weights(l), outdim(l)) .+ l2_squeeze(bias(l))

kernelsize = size(weights(l))[1:N]
weightgroups = reshape(weights(l), kernelsize..., nout(l) ÷ ngroups(l), nin(l)[])

wm = l2_squeeze(weightgroups, indim(l))
bm = l2_squeeze(bias(l))

(length(wm) == 1 || length(wm) == length(bm)) && return wm .+ bm
# use this to get insight on whether to repeat inner or outer:
# cc = DepthwiseConv(reshape([1 1 1 1;2 2 2 2], 1, 1, 2, 4), [0,0,0,0,1,1,1,1])
# cc(fill(10, (1,1,4,1)))
# cc = DepthwiseConv(reshape(Float32[1 1 1 1;2 2 2 2], 1, 1, 4, 2), Float32[0,0,0,0,1,1,1,1])
# cc(fill(10f0, (1,1,4,1)))
return repeat(wm, length(bm) ÷ length(wm)) .+ bm
end
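A hypothetical spot check of the grouped branch above (assumes Flux ≥ 0.13 and that the `ngroups`/`weights`/`bias` helpers behave as they are used in this file; not taken from the diff):

    using Flux, NaiveNASflux

    c = Conv((1, 1), 4 => 8; groups=4)        # grouped conv: 4 groups, 8 output channels
    length(NaiveNASflux.neuronutility(c))     # expected 8: one utility value per output channel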


2 comments on commit f9b0cdb

@DrChainsaw (Owner, Author)

@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/58287

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the GitHub interface, or via:

git tag -a v2.0.5 -m "<description of version>" f9b0cdba94048ce37a21e3c308044953ecca8790
git push origin v2.0.5
