A high-performance optimization library that leverages parallel computing for large-scale optimization problems. DAPAO provides efficient implementations of classical and modern optimization algorithms including Steepest Descent, Newton, Modified Newton, BFGS, L-BFGS, and Newton-CG.
- Parallel Execution: Utilizes Julia's native parallel computing capabilities for faster convergence on multi-core systems (see the sketch after this list)
- Distributed Computing Support: Scales to multiple nodes for handling very large optimization problems
- High Performance: Optimized implementations of gradient-based algorithms
- Flexible API: A simple interface that works for small test functions and large-scale problems alike
- Extensible Design: Easy to add custom optimization methods
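Since DAPAO builds on Julia's native parallelism, worker processes need to be available before the library is loaded. The snippet below is a minimal sketch using only the standard Distributed library; the assumption that DAPAO automatically picks up available workers is ours, not documented API. Thread-level parallelism can likewise be enabled by starting Julia with `julia -t auto`.

```julia
# Minimal sketch: provision local workers with the standard Distributed
# library, then load DAPAO on every process. Whether DAPAO uses these
# workers automatically is an assumption to verify against the source.
using Distributed
addprocs(4)              # spawn 4 local worker processes
@everywhere using DAPAO  # load the package on the driver and all workers
```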
You can install DAPAO from the Julia REPL using the package manager:
```julia
using Pkg
Pkg.add("DAPAO")
```
Or, in pkg mode (press `]` in the REPL):

```
add DAPAO
```
- Steepest Descent
- Newton
- Modified Newton
- BFGS
- L-BFGS
- Newton-CG
- Implement BFGS method
- Implement L-BFGS method
- Pass all tests for different functions in test/runtest.jl (run as sketched after this list)
- Test scalability
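The test suite referenced above can be run through Julia's package manager, assuming the tests are wired to the standard `Pkg` test target:

```julia
# Run the package's test suite via the standard Pkg entry point.
using Pkg
Pkg.test("DAPAO")
```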
Basic usage example:

```julia
using DAPAO
using LinearAlgebra

# Define your objective function and its gradient
function f(x, p)
    return x[1]^2 + 2x[2]^2
end

function grad(x, p)
    g = zeros(2)
    g[1] = 2x[1]
    g[2] = 4x[2]
    return g
end

# Optimize using Steepest Descent
optfunc = OptimizationFunction(f, grad)
x0 = [-1.2, 1.0]
prob = OptimizationProblem(optfunc, x0)
sol = solve(prob, SteepestDescent())
println("Minimum found at: ", sol.x)
println("Minimum value: ", sol.f_val)
```
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.