Releases · FluxML/Flux.jl
v0.14.9
Flux v0.14.9
Merged pull requests:
- Restore type stability of `conv_transpose_dims` (#2365) (@ToucheSir)
- Hotfix for new OneElement on GPU (#2368) (@ToucheSir)
v0.14.8
Flux v0.14.8
Merged pull requests:
- Use stable API for AMDGPU RNG conversion (#2360) (@ToucheSir)
- Non-diff shape handling in norm layers (#2363) (@ToucheSir)
v0.14.7
Flux v0.14.7
Merged pull requests:
- Bump actions/checkout from 4.0.0 to 4.1.0 (#2340) (@dependabot[bot])
- Use new `public` feature (#2342) (@mcabbott)
- Bump thollander/actions-comment-pull-request from 2.4.2 to 2.4.3 (#2347) (@dependabot[bot])
- Bump actions/checkout from 4.1.0 to 4.1.1 (#2348) (@dependabot[bot])
- Fix test that is not broken anymore (#2349) (@devmotion)
- CompatHelper: add new compat entry for Statistics at version 1, (keep existing compat) (#2351) (@github-actions[bot])
- Fix typo in docs/src/tutorials/2021-01-26-mlp.md (#2353) (@poludmik)
- Fixing typos in documentation (#2355) (@poludmik)
- Bump AMDGPU.jl compat to 0.7 (#2356) (@pxl-th)
- Bump AMDGPU compat to 0.8 (#2359) (@pxl-th)
Closed issues:
- Android/iOS support (#2357)
v0.14.6
Flux v0.14.6
Merged pull requests:
- Adding tooling for `JuliaFormatter`. (#2323) (@codetalker7)
- Fix typo (#2329) (@christiangnrd)
- Bump AMDGPU compat to 0.6 (#2332) (@pxl-th)
- Bump dorny/paths-filter from 2.9.1 to 2.11.1 (#2333) (@dependabot[bot])
- Bump actions/checkout from 2.2.0 to 4.0.0 (#2334) (@dependabot[bot])
- update CUDA compat (#2338) (@CarloLucibello)
v0.14.5
Flux v0.14.5
Merged pull requests:
- Bump actions/checkout from 1.0.0 to 3.6.0 (#2324) (@dependabot[bot])
- Bump actions/checkout from 3.6.0 to 4.0.0 (#2326) (@dependabot[bot])
- rename "AMD" backend to "AMDGPU" (#2328) (@CarloLucibello)
v0.14.4
Flux v0.14.4
Merged pull requests:
- allow get_device("Metal") and informative error messages (#2319) (@CarloLucibello)
v0.14.3
Flux v0.14.3
Closed issues:
- No error from negative learning rates (#1982)
- Implement data movement across GPU devices. (#2302)
- `train!` using Metal and stateful optimizers fails (#2310)
- Warning: `sort(d::Dict; args...)` is deprecated, use `sort!(OrderedDict(d); args...)` instead. (#2312)
- Does `withgradient` have lower precision than simply calling the function? (#2315)
- Warning: `sort(d::Dict; args...)` is deprecated, use `sort!(OrderedDict(d); args...)` instead. (#2320)
Merged pull requests:
- Bump thollander/actions-comment-pull-request from 2.4.0 to 2.4.2 (#2307) (@dependabot[bot])
- Implement interface for data transfer across GPU devices. (#2308) (@codetalker7)
- Removing deprecated method call in `GPU_BACKEND_ORDER`. (#2314) (@codetalker7)
- Added entry for RobustNeuralNetworks.jl in ecosystem.md (#2317) (@nic-barbara)
- Allow Optimisers.jl v0.3 (#2318) (@CarloLucibello)
v0.14.2
Flux v0.14.2
Closed issues:
- Mixed precision training. (#543)
- have buildkite run GPU tests only (#2271)
- Allow old silent behavior for `gpu` (#2293)
- `@autosize` macro is not working (#2296)
- Why Flux is Significantly Slower than Pytorch? (#2300)
- Huber Loss Fails with Metal GPU (#2305)
Merged pull requests:
- Adding device objects for selecting GPU backends (and defaulting to CPU if none exists). (#2297) (@codetalker7)
- Run only GPU tests on buildkite. (#2301) (@codetalker7)
- Avoid broadcast-related type instabilities with huber_loss (#2306) (@jeremiahpslewis)
v0.14.1
v0.14.0
Flux v0.14.0
Flux now requires Julia 1.9 in order to take advantage of package extensions. CUDA is no longer loaded automatically, which speeds up loading when not using a GPU, or when using a non-NVIDIA one.
Previously deprecated functions removed are:
- `Flux.stop` and `Flux.skip`, in favour of `break`/`continue`
- The macro `@epochs`, in favour of a `for` loop
- `Flux.zeros` and `Flux.ones`, in favour of `zeros32` and `ones32`.
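The removals above can be sketched as a migration snippet. This is a hedged illustration, not code from the release itself: the training-loop contents are hypothetical placeholders, and the `zeros32`/`ones32` lines show the Base-Julia equivalents stated in these notes.

```julia
# Migration sketch for Flux v0.14 (function names per the release notes;
# the training loop body is a hypothetical placeholder).

# CUDA is no longer loaded automatically; load it explicitly when needed:
# using Flux, CUDA

# `@epochs n ...` is removed; use a plain `for` loop instead:
for epoch in 1:3
    # Flux.train!(loss, model, data, opt_state)  # your usual training step
end

# `Flux.stop()` / `Flux.skip()` inside callbacks become `break` / `continue`
# inside the explicit loop above.

# `Flux.zeros` / `Flux.ones` are removed; `zeros32` / `ones32` return
# Float32 arrays, equivalent to these Base calls:
w = zeros(Float32, 3, 3)   # what `Flux.zeros32(3, 3)` produces
b = ones(Float32, 3)       # what `Flux.ones32(3)` produces
```

The explicit `for` loop makes early stopping ordinary control flow, which is why the `Flux.stop`/`Flux.skip` helpers became unnecessary.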
Closed issues:
- Choosing model serialization format(s) for cross-framework support (like HuggingFace) (#1907)
- Very slow "using Flux" (#1961)
- SSIM loss (#2165)
- buildkite failure on julia v1.6 (#2214)
- Sysimage compilation failed (#2242)
- PackageCompiler fails with Flux on embedded ARM/no GPU (#2262)
- how to make CUDA functionalities an extension (#2265)
- Recurrent layers can't be applied to views of OneHotArrays (#2279)
- Question about using loop in loss function (#2280)
Merged pull requests:
- add CUDA extension (#2268) (@CarloLucibello)
- fix doc of `PairwiseFusion` (#2281) (@ctarn)
- Update AMDGPU & Metal compat & add CI job (#2282) (@pxl-th)
- Cleanup for v0.14 release (#2283) (@CarloLucibello)
- tag v0.14 (#2284) (@CarloLucibello)
- Update training.md (#2286) (@dreivmeister)
- Fix typos in 0.14 docs (#2287) (@christiangnrd)