Optimization

Optimization includes a class of methods to find global or local optima for discrete or continuous objectives, ranging from evolutionary-based algorithms to swarm-based ones.

  1. Evolution Strategy (ES): The basic algorithm in Python
  2. Evolution Strategy (ES): With a single adaptive mutation step-size in Python (a minimal sketch of an adaptive ES appears after this list)
  3. Hill Climbing: This is a local search algorithm that always tries to go up the hill of the fitness landscape. However, it can get stuck in local maxima. The Python code along with an example is provided here (see the sketch after this list).
  4. Simulated Annealing: This is a metaheuristic that tries to find the global optimum (minimum). It can be viewed as a combination of Hill Climbing and a random walk. The Python code with two examples is given (a sketch follows the list).
  5. Gradient (Steepest) Descent Method: This method uses the negative of the gradient of the given objective function as the direction to update the parameters of the function, in order to get closer to its minimum. Two applications of the Gradient Descent method are provided here (a sketch follows the list).
  6. Convex sets and convex functions: Sets can be convex or non-convex. Functions can be convex, concave, both, or neither. If a function f is convex, then -f is concave. In this post, we examine the conditions for a function to be convex. Convex functions are important for a large class of optimization problems (the standard conditions are restated after this list).
  7. Subgradient method: The subgradient method extends gradient descent to nondifferentiable convex functions. Here, we first introduce subgradients and subdifferentials, and then present the subgradient method. Two examples are also provided in Python (a sketch follows the list).
  8. Exhaustive search: This is a one-dimensional optimization algorithm, so it is suited to one-dimensional objective functions. We express the algorithm in Python and then use it in an example (a sketch follows the list). Note that the objective function should be unimodal. More efficient one-dimensional optimization algorithms will be discussed in the future.
  9. Coordinate Descent: Also called coordinate search, this is an optimization method in which we search along each coordinate of the objective function, one at a time. As a result, we need a one-dimensional optimization algorithm inside Coordinate Descent. Here, two versions are used: one uses the negative of the partial derivatives, and the other uses the Exhaustive Search reviewed in the previous post (a sketch follows the list).
  10. Newton's method for Optimization: We may use the negative of the product of the inverse of the Hessian matrix (the Hessian must be positive definite) and the gradient vector to move toward the minimizer of the objective function. Recall that gradient descent uses only the negative of the gradient vector as the search direction; Newton's method is thus a generalization of gradient descent, and it often converges faster than gradient descent. Here, we show that for some functions, including nonconvex ones, Newton's method may fail (a sketch follows the list).

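The sketches below are minimal, illustrative implementations of several of the methods listed above; the objective functions, parameter values, and function names in them are assumptions chosen for illustration, not code taken from the repository's notebooks. First, for items 1 and 2, a (1+1)-ES with a single mutation step-size adapted by the 1/5 success rule:

```python
# A minimal (1+1)-Evolution Strategy sketch with a single adaptive step-size
# (the 1/5 success rule). The sphere objective is an illustrative assumption.
import numpy as np

def es_one_plus_one(f, x0, sigma=1.0, iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    fx = f(x)
    for _ in range(iterations):
        child = x + sigma * rng.standard_normal(x.shape)  # Gaussian mutation
        fc = f(child)
        success = fc < fx                                  # minimization
        if success:
            x, fx = child, fc
        # 1/5 success rule: enlarge sigma after a success, shrink it otherwise
        sigma *= 1.22 if success else 0.82
    return x, fx

sphere = lambda v: float(np.sum(v ** 2))   # minimized at the origin
print(es_one_plus_one(sphere, x0=[3.0, -2.0]))
```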
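For item 3, a hill-climbing sketch under the same caveat: it samples a random neighbor and moves only when the fitness improves, so it can stall at a local maximum.

```python
# A minimal hill-climbing sketch (maximization); the landscape is illustrative.
import math
import random

def hill_climb(fitness, x0, step=0.1, iterations=1000, seed=1):
    rng = random.Random(seed)
    x, fx = x0, fitness(x0)
    for _ in range(iterations):
        neighbor = x + rng.uniform(-step, step)   # random nearby candidate
        fn = fitness(neighbor)
        if fn > fx:                               # accept only uphill moves
            x, fx = neighbor, fn
    return x, fx

fitness = lambda x: math.sin(3 * x) - 0.1 * x ** 2   # several local maxima
print(hill_climb(fitness, x0=2.0))
```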
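For item 4, a simulated-annealing sketch: worse moves are accepted with probability exp(-delta/T) and the temperature is cooled geometrically; the proposal step, cooling schedule, and test function are illustrative assumptions.

```python
# A minimal simulated-annealing sketch (minimization).
import math
import random

def simulated_annealing(f, x0, T0=1.0, cooling=0.995, step=0.5,
                        iterations=5000, seed=2):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iterations):
        candidate = x + rng.uniform(-step, step)        # random-walk proposal
        fc = f(candidate)
        delta = fc - fx
        # Always accept improvements; accept worse moves with prob. exp(-delta/T)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling                                    # geometric cooling
    return best, fbest

f = lambda x: x ** 4 - 3 * x ** 2 + x   # an illustrative non-convex function
print(simulated_annealing(f, x0=3.0))
```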
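For item 5, a gradient-descent sketch with a fixed learning rate; the quadratic objective and its hand-coded gradient are illustrative.

```python
# A minimal gradient-descent sketch: step against the gradient.
import numpy as np

def gradient_descent(grad, x0, lr=0.1, iterations=100):
    x = np.array(x0, dtype=float)
    for _ in range(iterations):
        x -= lr * grad(x)          # move in the steepest-descent direction
    return x

# f(x, y) = (x - 1)^2 + 2*(y + 2)^2, gradient written out by hand
grad = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 2)])
print(gradient_descent(grad, x0=[0.0, 0.0]))   # approaches (1, -2)
```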
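For item 6, the standard textbook convexity conditions referred to above (stated here for reference, not quoted from the post):

```latex
% Definition: f is convex on a convex set C if, for all x, y in C and t in [0, 1],
\[
f\big(t\,x + (1-t)\,y\big) \;\le\; t\,f(x) + (1-t)\,f(y).
\]
% First-order condition (f differentiable):
\[
f(y) \;\ge\; f(x) + \nabla f(x)^{\top}(y - x) \quad \text{for all } x, y \in C.
\]
% Second-order condition (f twice differentiable): the Hessian is positive semidefinite,
\[
\nabla^{2} f(x) \succeq 0 \quad \text{for all } x \in C.
\]
```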
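For item 7, a subgradient-method sketch on the nondifferentiable convex function f(x) = |x - 2| (an assumed example) with a diminishing step size 1/k; since the method is not a descent method, the best iterate seen so far is returned.

```python
# A minimal subgradient-method sketch (minimization).
def subgradient_method(f, subgrad, x0, iterations=200):
    x, best, fbest = x0, x0, f(x0)
    for k in range(1, iterations + 1):
        x = x - (1.0 / k) * subgrad(x)     # step along a negative subgradient
        if f(x) < fbest:                   # track the best iterate so far
            best, fbest = x, f(x)
    return best, fbest

f = lambda x: abs(x - 2)
# sign(x - 2) is a valid subgradient; any value in [-1, 1] is valid at x = 2
subgrad = lambda x: 1.0 if x > 2 else (-1.0 if x < 2 else 0.0)
print(subgradient_method(f, subgrad, x0=4.0))   # best iterate approaches x = 2
```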
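For item 8, an exhaustive (grid) search sketch over an interval [a, b]: evaluate equally spaced points and return the best one together with a bracket around it; the grid size and the test function are illustrative.

```python
# A minimal one-dimensional exhaustive-search sketch (minimization).
def exhaustive_search(f, a, b, n=1000):
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]            # equally spaced grid
    values = [f(x) for x in xs]
    i = min(range(n + 1), key=lambda k: values[k])    # index of the best point
    bracket = (xs[max(i - 1, 0)], xs[min(i + 1, n)])  # interval around the minimizer
    return xs[i], bracket

f = lambda x: (x - 1.3) ** 2 + 0.5   # unimodal test function (illustrative)
print(exhaustive_search(f, a=-5.0, b=5.0))   # minimizer near x = 1.3
```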
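For item 9, a coordinate-descent sketch that cycles through the coordinates and minimizes along each axis with a simple 1-D grid search standing in for the inner one-dimensional optimizer; the objective and the search bounds are assumed for illustration.

```python
# A minimal coordinate-descent sketch (minimization).
import numpy as np

def line_min(f, lo=-5.0, hi=5.0, n=2001):
    """Crude 1-D minimizer over [lo, hi] used as the inner solver."""
    ts = np.linspace(lo, hi, n)
    return ts[np.argmin([f(t) for t in ts])]

def coordinate_descent(f, x0, sweeps=20):
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        for j in range(x.size):                 # one coordinate at a time
            def f_along_j(t, j=j):              # objective restricted to axis j
                y = x.copy()
                y[j] = t
                return f(y)
            x[j] = line_min(f_along_j)
    return x, f(x)

f = lambda v: (v[0] - 2) ** 2 + (v[1] + 1) ** 2 + 0.5 * v[0] * v[1]
print(coordinate_descent(f, x0=[0.0, 0.0]))
```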
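For item 10, a sketch of Newton's method using the step -H(x)^{-1} grad f(x), computed by solving a linear system rather than inverting the Hessian; the convex quadratic objective, its gradient, and its Hessian are hand-coded illustrations.

```python
# A minimal Newton's-method sketch (minimization); assumes a positive-definite Hessian.
import numpy as np

def newton(grad, hess, x0, iterations=20):
    x = np.array(x0, dtype=float)
    for _ in range(iterations):
        # Newton step: solve H(x) * step = grad(x) instead of inverting H(x)
        x -= np.linalg.solve(hess(x), grad(x))
    return x

# f(x, y) = (x - 1)^2 + 2*(y + 2)^2 + x*y  (convex quadratic)
grad = lambda v: np.array([2 * (v[0] - 1) + v[1], 4 * (v[1] + 2) + v[0]])
hess = lambda v: np.array([[2.0, 1.0], [1.0, 4.0]])
print(newton(grad, hess, x0=[5.0, 5.0]))   # exact after one step for a quadratic
```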