(0.90.10) on_architecture method for all Oceananigans' types (#3490)
* first commit

* couple of more additions

* importing on_architecture here and there

* on_architecture for split explicit

* keep deprecated arch_array

* typo

* remove useless methods

* no inline for tuples

* AbstractSerialArchitecture to disambiguate

* a comment

* export AbstractSerialArchitecture

* import on_architecture in OutputReaders

* still export arch_array

* Update src/AbstractOperations/grid_metrics.jl

Co-authored-by: Navid C. Constantinou <[email protected]>

* Update src/Architectures.jl

Co-authored-by: Navid C. Constantinou <[email protected]>

* Update src/Architectures.jl

Co-authored-by: Navid C. Constantinou <[email protected]>

* Update src/Advection/weno_reconstruction.jl

Co-authored-by: Navid C. Constantinou <[email protected]>

* Update src/Architectures.jl

Co-authored-by: Navid C. Constantinou <[email protected]>

* some changes to distributed on_architecture

* remove using on_architecture

* remove wrong docstring

* some more cleaning

* include("distributed_on_architecture.jl")

* add distributed_on_architecture

* bump patch release

* bugfix

* correct distributed on_architecture

* remove double include

* hopefully last bugfix

* adding a couple of methods

---------

Co-authored-by: Navid C. Constantinou <[email protected]>
simone-silvestri and navidcy committed Mar 7, 2024
1 parent 3d9668b commit fd3b52c
Showing 91 changed files with 482 additions and 197 deletions.
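The headline change in one minimal sketch (assuming Oceananigans 0.90.10 is installed; the commented GPU line additionally assumes CUDA.jl and a compatible device):

using Oceananigans.Architectures: CPU, GPU, on_architecture, arch_array

data = rand(Float64, 16, 16)

# New generic entry point: move data to the target architecture.
cpu_data = on_architecture(CPU(), data)    # identity for an Array already on the CPU
# gpu_data = on_architecture(GPU(), data)  # would return a CuArray on a CUDA device

# The old name is kept but deprecated: it warns and forwards to on_architecture.
legacy = arch_array(CPU(), data)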
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "Oceananigans"
uuid = "9e8cae18-63c1-5223-a75c-80ca9d6e9a09"
authors = ["Climate Modeling Alliance and contributors"]
version = "0.90.9"
version = "0.90.10"

[deps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
2 changes: 1 addition & 1 deletion src/AbstractOperations/AbstractOperations.jl
@@ -22,7 +22,7 @@ using Oceananigans.Operators: interpolation_operator
using Oceananigans.Architectures: device
using Oceananigans: AbstractModel

import Oceananigans.Architectures: architecture
import Oceananigans.Architectures: architecture, on_architecture
import Oceananigans.BoundaryConditions: fill_halo_regions!
import Oceananigans.Fields: compute_at!, indices

9 changes: 9 additions & 0 deletions src/AbstractOperations/binary_operations.jl
@@ -216,3 +216,12 @@ Adapt.adapt_structure(to, binary::BinaryOperation{LX, LY, LZ}) where {LX, LY, LZ
Adapt.adapt(to, binary.▶a),
Adapt.adapt(to, binary.▶b),
Adapt.adapt(to, binary.grid))


on_architecture(to, binary::BinaryOperation{LX, LY, LZ}) where {LX, LY, LZ} =
BinaryOperation{LX, LY, LZ}(on_architecture(to, binary.op),
on_architecture(to, binary.a),
on_architecture(to, binary.b),
on_architecture(to, binary.▶a),
on_architecture(to, binary.▶b),
on_architecture(to, binary.grid))
11 changes: 9 additions & 2 deletions src/AbstractOperations/conditional_operations.jl
@@ -1,6 +1,6 @@
using Oceananigans.Fields: OneField
using Oceananigans.Grids: architecture
using Oceananigans.Architectures: arch_array
using Oceananigans.Architectures: on_architecture
import Oceananigans.Fields: condition_operand, conditional_length, set!, compute_at!, indices

# For conditional reductions such as mean(u * v, condition = u .> 0))
@@ -106,7 +106,7 @@ end
@inline condition_operand(func::Function, op::AbstractField, ::Nothing, mask) = ConditionalOperation(op; func, condition=TrueCondition(), mask)

@inline function condition_operand(func::Function, operand::AbstractField, condition::AbstractArray, mask)
condition = arch_array(architecture(operand.grid), condition)
condition = on_architecture(architecture(operand.grid), condition)
return ConditionalOperation(operand; func, condition, mask)
end

@@ -134,6 +134,13 @@ Adapt.adapt_structure(to, c::ConditionalOperation{LX, LY, LZ}) where {LX, LY, LZ
adapt(to, c.condition),
adapt(to, c.mask))

on_architecture(to, c::ConditionalOperation{LX, LY, LZ}) where {LX, LY, LZ} =
ConditionalOperation{LX, LY, LZ}(on_architecture(to, c.operand),
on_architecture(to, c.func),
on_architecture(to, c.grid),
on_architecture(to, c.condition),
on_architecture(to, c.mask))

Base.summary(c::ConditionalOperation) = string("ConditionalOperation of ", summary(c.operand), " with condition ", summary(c.condition))

compute_at!(c::ConditionalOperation, time) = compute_at!(c.operand, time)
10 changes: 9 additions & 1 deletion src/AbstractOperations/derivatives.jl
@@ -118,10 +118,18 @@ compute_at!(∂::Derivative, time) = compute_at!(∂.arg, time)
##### GPU capabilities
#####

"Adapt `Derivative` to work on the GPU via CUDAnative and CUDAdrv."
"Adapt `Derivative` to work on the GPU."
Adapt.adapt_structure(to, deriv::Derivative{LX, LY, LZ}) where {LX, LY, LZ} =
Derivative{LX, LY, LZ}(Adapt.adapt(to, deriv.∂),
Adapt.adapt(to, deriv.arg),
Adapt.adapt(to, deriv.▶),
nothing,
Adapt.adapt(to, deriv.grid))

on_architecture(to, deriv::Derivative{LX, LY, LZ}) where {LX, LY, LZ} =
Derivative{LX, LY, LZ}(on_architecture(to, deriv.∂),
on_architecture(to, deriv.arg),
on_architecture(to, deriv.▶),
deriv.abstract_∂,
on_architecture(to, deriv.grid))

5 changes: 5 additions & 0 deletions src/AbstractOperations/grid_metrics.jl
@@ -130,6 +130,11 @@ Adapt.adapt_structure(to, gm::GridMetricOperation{LX, LY, LZ}) where {LX, LY, LZ
GridMetricOperation{LX, LY, LZ}(Adapt.adapt(to, gm.metric),
Adapt.adapt(to, gm.grid))

on_architecture(to, gm::GridMetricOperation{LX, LY, LZ}) where {LX, LY, LZ} =
GridMetricOperation{LX, LY, LZ}(on_architecture(to, gm.metric),
on_architecture(to, gm.grid))


@inline Base.getindex(gm::GridMetricOperation, i, j, k) = gm.metric(i, j, k, gm.grid)

indices(::GridMetricOperation) = default_indices(3)
5 changes: 5 additions & 0 deletions src/AbstractOperations/kernel_function_operation.jl
@@ -80,6 +80,11 @@ Adapt.adapt_structure(to, κ::KernelFunctionOperation{LX, LY, LZ}) where {LX, LY
Adapt.adapt(to, κ.grid),
Tuple(Adapt.adapt(to, a) for a in κ.arguments)...)

on_architecture(to, κ::KernelFunctionOperation{LX, LY, LZ}) where {LX, LY, LZ} =
KernelFunctionOperation{LX, LY, LZ}(on_architecture(to, κ.kernel_function),
on_architecture(to, κ.grid),
Tuple(on_architecture(to, a) for a in κ.arguments)...)

Base.show(io::IO, kfo::KernelFunctionOperation) =
print(io,
summary(kfo), '\n',
7 changes: 7 additions & 0 deletions src/AbstractOperations/multiary_operations.jl
@@ -150,3 +150,10 @@ Adapt.adapt_structure(to, multiary::MultiaryOperation{LX, LY, LZ}) where {LX, LY
Adapt.adapt(to, multiary.args),
Adapt.adapt(to, multiary.▶),
Adapt.adapt(to, multiary.grid))

on_architecture(to, multiary::MultiaryOperation{LX, LY, LZ}) where {LX, LY, LZ} =
MultiaryOperation{LX, LY, LZ}(on_architecture(to, multiary.op),
on_architecture(to, multiary.args),
on_architecture(to, multiary.▶),
on_architecture(to, multiary.grid))

6 changes: 6 additions & 0 deletions src/AbstractOperations/unary_operations.jl
@@ -130,3 +130,9 @@ Adapt.adapt_structure(to, unary::UnaryOperation{LX, LY, LZ}) where {LX, LY, LZ}
Adapt.adapt(to, unary.arg),
Adapt.adapt(to, unary.▶),
Adapt.adapt(to, unary.grid))

on_architecture(to, unary::UnaryOperation{LX, LY, LZ}) where {LX, LY, LZ} =
UnaryOperation{LX, LY, LZ}(on_architecture(to, unary.op),
on_architecture(to, unary.arg),
on_architecture(to, unary.▶),
on_architecture(to, unary.grid))
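Every operation type above follows the same recipe: rebuild the struct with each field passed through on_architecture, letting the identity fallback handle fields that need no conversion. A hypothetical sketch of the pattern applied to a user-defined wrapper (MyOperand and its fields are illustrative, not Oceananigans types):

using Oceananigans.Architectures: CPU
import Oceananigans.Architectures: on_architecture

# An illustrative container pairing gridded data with auxiliary parameters.
struct MyOperand{D, P}
    data :: D
    parameters :: P
end

# Rebuild the wrapper with each field moved to the target architecture;
# numbers, functions, and `nothing` simply hit the identity fallback.
on_architecture(to, op::MyOperand) = MyOperand(on_architecture(to, op.data),
                                               on_architecture(to, op.parameters))

op = MyOperand(rand(8, 8), (ν = 1e-4,))
op_cpu = on_architecture(CPU(), op)   # with GPU() instead, op.data would become a CuArray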
3 changes: 2 additions & 1 deletion src/Advection/Advection.jl
@@ -32,12 +32,13 @@ using OffsetArrays

using Oceananigans.Grids
using Oceananigans.Grids: with_halo, coordinates
using Oceananigans.Architectures: arch_array, architecture, CPU
using Oceananigans.Architectures: architecture, CPU

using Oceananigans.Operators

import Base: show, summary
import Oceananigans.Grids: required_halo_size
import Oceananigans.Architectures: on_architecture

abstract type AbstractAdvectionScheme{B, FT} end
abstract type AbstractCenteredAdvectionScheme{B, FT} <: AbstractAdvectionScheme{B, FT} end
6 changes: 6 additions & 0 deletions src/Advection/centered_reconstruction.jl
@@ -76,6 +76,12 @@ Adapt.adapt_structure(to, scheme::Centered{N, FT}) where {N, FT} =
Adapt.adapt(to, scheme.coeff_zᵃᵃᶠ), Adapt.adapt(to, scheme.coeff_zᵃᵃᶜ),
Adapt.adapt(to, scheme.buffer_scheme))

on_architecture(to, scheme::Centered{N, FT}) where {N, FT} =
Centered{N, FT}(on_architecture(to, scheme.coeff_xᶠᵃᵃ), on_architecture(to, scheme.coeff_xᶜᵃᵃ),
on_architecture(to, scheme.coeff_yᵃᶠᵃ), on_architecture(to, scheme.coeff_yᵃᶜᵃ),
on_architecture(to, scheme.coeff_zᵃᵃᶠ), on_architecture(to, scheme.coeff_zᵃᵃᶜ),
on_architecture(to, scheme.buffer_scheme))

# Useful aliases
Centered(grid, FT::DataType=Float64; kwargs...) = Centered(FT; grid, kwargs...)

8 changes: 4 additions & 4 deletions src/Advection/reconstruction_coefficients.jl
@@ -243,15 +243,15 @@ end

# Stretched reconstruction coefficients for `Centered` schemes
@inline function calc_reconstruction_coefficients(FT, coord, arch, N, ::Val{1}; order)
cpu_coord = arch_array(CPU(), coord)
cpu_coord = on_architecture(CPU(), coord)
r = ((order + 1) ÷ 2) - 1
s = create_reconstruction_coefficients(FT, r, cpu_coord, arch, N; order)
return s
end

# Stretched reconstruction coefficients for `UpwindBiased` schemes
@inline function calc_reconstruction_coefficients(FT, coord, arch, N, ::Val{2}; order)
cpu_coord = arch_array(CPU(), coord)
cpu_coord = on_architecture(CPU(), coord)
rleft = ((order + 1) ÷ 2) - 2
rright = ((order + 1) ÷ 2) - 1
s = []
@@ -264,7 +264,7 @@ end
# Stretched reconstruction coefficients for `WENO` schemes
@inline function calc_reconstruction_coefficients(FT, coord, arch, N, ::Val{3}; order)

cpu_coord = arch_array(CPU(), coord)
cpu_coord = on_architecture(CPU(), coord)
s = []
for r in -1:order-1
push!(s, create_reconstruction_coefficients(FT, r, cpu_coord, arch, N; order))
@@ -280,5 +280,5 @@ end
push!(stencil, stencil_coefficients(i, r, cpu_coord, cpu_coord; order))
end
end
return OffsetArray(arch_array(arch, stencil), -1)
return OffsetArray(on_architecture(arch, stencil), -1)
end
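The coefficient builders share a staging pattern: the coordinate is first brought to the CPU with on_architecture(CPU(), coord), the small stencil table is assembled there, and only the finished table is shipped to the target architecture wrapped in an OffsetArray. A rough sketch of that idea (the per-cell reduction below is a placeholder, not the actual stencil algebra):

using OffsetArrays
using Oceananigans.Architectures: CPU, on_architecture

function build_coefficient_table(arch, coord; order = 3)
    cpu_coord = on_architecture(CPU(), coord)             # stage the coordinate on the CPU
    table = [sum(view(cpu_coord, i:i+order-1)) / order    # placeholder per-cell reduction
             for i in 1:length(cpu_coord)-order+1]
    return OffsetArray(on_architecture(arch, table), -1)  # ship to `arch`, shift indices by -1
end

table = build_coefficient_table(CPU(), collect(0:0.1:1))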
4 changes: 2 additions & 2 deletions src/Advection/stretched_weno_smoothness.jl
@@ -61,7 +61,7 @@ end

function calc_smoothness_coefficients(FT, beta, coord, arch, N; order)

cpu_coord = arch_array(CPU(), coord)
cpu_coord = on_architecture(CPU(), coord)

order == 3 || throw(ArgumentError("The stretched smoothness coefficients are only implemented for order == 3"))

@@ -104,7 +104,7 @@ function create_smoothness_coefficients(FT, r, op, cpu_coord, arch, N; order)
end
end

return OffsetArray(arch_array(arch, stencil), -1)
return OffsetArray(on_architecture(arch, stencil), -1)
end

@inline dagger(ψ) = (ψ[2], ψ[3], ψ[1])
7 changes: 7 additions & 0 deletions src/Advection/upwind_biased_reconstruction.jl
@@ -85,6 +85,13 @@ Adapt.adapt_structure(to, scheme::UpwindBiased{N, FT}) where {N, FT} =
Adapt.adapt(to, scheme.buffer_scheme),
Adapt.adapt(to, scheme.advecting_velocity_scheme))

on_architecture(to, scheme::UpwindBiased{N, FT}) where {N, FT} =
UpwindBiased{N, FT}(on_architecture(to, scheme.coeff_xᶠᵃᵃ), on_architecture(to, scheme.coeff_xᶜᵃᵃ),
on_architecture(to, scheme.coeff_yᵃᶠᵃ), on_architecture(to, scheme.coeff_yᵃᶜᵃ),
on_architecture(to, scheme.coeff_zᵃᵃᶠ), on_architecture(to, scheme.coeff_zᵃᵃᶜ),
on_architecture(to, scheme.buffer_scheme),
on_architecture(to, scheme.advecting_velocity_scheme))

# Useful aliases
UpwindBiased(grid, FT::DataType=Float64; kwargs...) = UpwindBiased(FT; grid, kwargs...)

8 changes: 8 additions & 0 deletions src/Advection/vector_invariant_advection.jl
@@ -229,6 +229,14 @@ Adapt.adapt_structure(to, scheme::VectorInvariant{N, FT, M}) where {N, FT, M} =
Adapt.adapt(to, scheme.divergence_scheme),
Adapt.adapt(to, scheme.upwinding))

on_architecture(to, scheme::VectorInvariant{N, FT, M}) where {N, FT, M} =
VectorInvariant{N, FT, M}(on_architecture(to, scheme.vorticity_scheme),
on_architecture(to, scheme.vorticity_stencil),
on_architecture(to, scheme.vertical_scheme),
on_architecture(to, scheme.kinetic_energy_gradient_scheme),
on_architecture(to, scheme.divergence_scheme),
on_architecture(to, scheme.upwinding))

@inline U_dot_∇u(i, j, k, grid, scheme::VectorInvariant, U) = horizontal_advection_U(i, j, k, grid, scheme, U.u, U.v) +
vertical_advection_U(i, j, k, grid, scheme, U) +
bernoulli_head_U(i, j, k, grid, scheme, U.u, U.v)
1 change: 0 additions & 1 deletion src/Advection/vector_invariant_upwinding.jl
@@ -140,7 +140,6 @@ Adapt.adapt_structure(to, scheme::CrossAndSelfUpwinding) =
Adapt.adapt(to, scheme.δu²_stencil),
Adapt.adapt(to, scheme.δv²_stencil))


Base.show(io::IO, a::VelocityUpwinding) =
print(io, summary(a), " \n",
"KE gradient and Divergence flux cross terms reconstruction: ", "\n",
8 changes: 8 additions & 0 deletions src/Advection/weno_reconstruction.jl
@@ -160,6 +160,14 @@ Adapt.adapt_structure(to, scheme::WENO{N, FT, XT, YT, ZT, WF, PP}) where {N, FT,
Adapt.adapt(to, scheme.buffer_scheme),
Adapt.adapt(to, scheme.advecting_velocity_scheme))

on_architecture(to, scheme::WENO{N, FT, XT, YT, ZT, WF, PP}) where {N, FT, XT, YT, ZT, WF, PP} =
WENO{N, FT, WF}(on_architecture(to, scheme.coeff_xᶠᵃᵃ), on_architecture(to, scheme.coeff_xᶜᵃᵃ),
on_architecture(to, scheme.coeff_yᵃᶠᵃ), on_architecture(to, scheme.coeff_yᵃᶜᵃ),
on_architecture(to, scheme.coeff_zᵃᵃᶠ), on_architecture(to, scheme.coeff_zᵃᵃᶜ),
on_architecture(to, scheme.bounds),
on_architecture(to, scheme.buffer_scheme),
on_architecture(to, scheme.advecting_velocity_scheme))

# Retrieve precomputed coefficients (+2 for julia's 1 based indices)
@inline retrieve_coeff(scheme::WENO, r, ::Val{1}, i, ::Type{Face}) = @inbounds scheme.coeff_xᶠᵃᵃ[r+2][i]
@inline retrieve_coeff(scheme::WENO, r, ::Val{1}, i, ::Type{Center}) = @inbounds scheme.coeff_xᶜᵃᵃ[r+2][i]
62 changes: 37 additions & 25 deletions src/Architectures.jl
@@ -1,8 +1,9 @@
module Architectures

export AbstractArchitecture
export CPU, GPU, MultiGPU
export device, architecture, array_type, arch_array, unified_array, device_copy_to!
export AbstractArchitecture, AbstractSerialArchitecture
export CPU, GPU
export device, architecture, unified_array, device_copy_to!
export array_type, on_architecture, arch_array

using CUDA
using KernelAbstractions
@@ -16,20 +17,27 @@ Abstract supertype for architectures supported by Oceananigans.
"""
abstract type AbstractArchitecture end

"""
AbstractSerialArchitecture
Abstract supertype for serial architectures supported by Oceananigans.
"""
abstract type AbstractSerialArchitecture <: AbstractArchitecture end

"""
CPU <: AbstractArchitecture
Run Oceananigans on one CPU node. Uses multiple threads if the environment
variable `JULIA_NUM_THREADS` is set.
"""
struct CPU <: AbstractArchitecture end
struct CPU <: AbstractSerialArchitecture end

"""
GPU <: AbstractArchitecture
Run Oceananigans on a single NVIDIA CUDA GPU.
"""
struct GPU <: AbstractArchitecture end
struct GPU <: AbstractSerialArchitecture end

#####
##### These methods are extended in DistributedComputations.jl
@@ -56,32 +64,30 @@ child_architecture(arch) = arch
array_type(::CPU) = Array
array_type(::GPU) = CuArray

arch_array(::CPU, a::Array) = a
arch_array(::CPU, a::CuArray) = Array(a)
arch_array(::GPU, a::Array) = CuArray(a)
arch_array(::GPU, a::CuArray) = a
# Fallback
on_architecture(arch, a) = a

arch_array(::CPU, a::BitArray) = a
arch_array(::GPU, a::BitArray) = CuArray(a)
# Tupled implementation
on_architecture(arch::AbstractSerialArchitecture, t::Tuple) = Tuple(on_architecture(arch, elem) for elem in t)
on_architecture(arch::AbstractSerialArchitecture, nt::NamedTuple) = NamedTuple{keys(nt)}(on_architecture(arch, Tuple(nt)))

arch_array(::GPU, a::SubArray{<:Any, <:Any, <:CuArray}) = a
arch_array(::CPU, a::SubArray{<:Any, <:Any, <:CuArray}) = Array(a)
# On architecture for array types
on_architecture(::CPU, a::Array) = a
on_architecture(::GPU, a::Array) = CuArray(a)

arch_array(::GPU, a::SubArray{<:Any, <:Any, <:Array}) = CuArray(a)
arch_array(::CPU, a::SubArray{<:Any, <:Any, <:Array}) = a
on_architecture(::CPU, a::CuArray) = Array(a)
on_architecture(::GPU, a::CuArray) = a

arch_array(::CPU, a::AbstractRange) = a
arch_array(::CPU, ::Nothing) = nothing
arch_array(::CPU, a::Number) = a
arch_array(::CPU, a::Function) = a
on_architecture(::CPU, a::BitArray) = a
on_architecture(::GPU, a::BitArray) = CuArray(a)

arch_array(::GPU, a::AbstractRange) = a
arch_array(::GPU, ::Nothing) = nothing
arch_array(::GPU, a::Number) = a
arch_array(::GPU, a::Function) = a
on_architecture(::CPU, a::SubArray{<:Any, <:Any, <:CuArray}) = Array(a)
on_architecture(::GPU, a::SubArray{<:Any, <:Any, <:CuArray}) = a

arch_array(arch::CPU, a::OffsetArray) = OffsetArray(arch_array(arch, a.parent), a.offsets...)
arch_array(arch::GPU, a::OffsetArray) = OffsetArray(arch_array(arch, a.parent), a.offsets...)
on_architecture(::CPU, a::SubArray{<:Any, <:Any, <:Array}) = a
on_architecture(::GPU, a::SubArray{<:Any, <:Any, <:Array}) = CuArray(a)

on_architecture(arch::AbstractSerialArchitecture, a::OffsetArray) = OffsetArray(on_architecture(arch, a.parent), a.offsets...)

cpu_architecture(::CPU) = CPU()
cpu_architecture(::GPU) = CPU()
@@ -120,5 +126,11 @@ end
@inline convert_args(::GPU, args) = CUDA.cudaconvert(args)
@inline convert_args(::GPU, args::Tuple) = map(CUDA.cudaconvert, args)

# Deprecated functions
function arch_array(arch, arr)
@warn "`arch_array` is deprecated. Use `on_architecture` instead."
return on_architecture(arch, arr)
end

end # module
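Because the new method recurses through tuples, named tuples, and OffsetArrays, whole bundles of fields can be moved in one call. A small sketch (CPU-only here; swapping in GPU() assumes a CUDA device is available):

using OffsetArrays
using Oceananigans.Architectures: CPU, on_architecture

velocities = (u = rand(4, 4), v = rand(4, 4))   # a NamedTuple of plain arrays
halo_data  = OffsetArray(rand(6), -1)           # an array with shifted indices

moved_velocities = on_architecture(CPU(), velocities)  # recurses into u and v
moved_halo       = on_architecture(CPU(), halo_data)   # offsets are preserved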

6 changes: 6 additions & 0 deletions src/BoundaryConditions/boundary_condition.jl
@@ -1,4 +1,5 @@
import Adapt
import Oceananigans.Architectures: on_architecture

"""
struct BoundaryCondition{C<:AbstractBoundaryConditionClassification, T}
@@ -69,6 +70,11 @@ end
Adapt.adapt_structure(to, b::BoundaryCondition{Classification}) where Classification =
BoundaryCondition(Classification(), Adapt.adapt(to, b.condition))


# Adapt boundary condition struct to be GPU friendly and passable to GPU kernels.
on_architecture(to, b::BoundaryCondition{Classification}) where Classification =
BoundaryCondition(Classification(), on_architecture(to, b.condition))

#####
##### Some abbreviations to make life easier.
#####
7 changes: 7 additions & 0 deletions src/BoundaryConditions/continuous_boundary_function.jl
@@ -217,3 +217,10 @@ Adapt.adapt_structure(to, bf::ContinuousBoundaryFunction{LX, LY, LZ, S}) where {
nothing,
Adapt.adapt(to, bf.field_dependencies_indices),
Adapt.adapt(to, bf.field_dependencies_interp))

on_architecture(to, bf::ContinuousBoundaryFunction{LX, LY, LZ, S}) where {LX, LY, LZ, S} =
ContinuousBoundaryFunction{LX, LY, LZ, S}(on_architecture(to, bf.func),
on_architecture(to, bf.parameters),
on_architecture(to, bf.field_dependencies),
on_architecture(to, bf.field_dependencies_indices),
on_architecture(to, bf.field_dependencies_interp))

2 comments on commit fd3b52c

@navidcy (Collaborator) commented on fd3b52c Mar 7, 2024

@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/102490

Tip: Release Notes

Did you know you can add release notes too? Just add markdown formatted text underneath the comment after the text "Release notes:" and it will be added to the registry PR, and if TagBot is installed it will also be added to the release that TagBot creates. i.e.

@JuliaRegistrator register

Release notes:

## Breaking changes

- blah

To add them here just re-invoke and the PR will be updated.

Tagging

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the GitHub interface, or via:

git tag -a v0.90.10 -m "<description of version>" fd3b52cb8a49c70c3236643430ce38a2c1c89f00
git push origin v0.90.10
