Feature description
The current machine_learning directory in TheAlgorithms/Python lacks implementations of neural network optimizers, which are fundamental to training deep learning models effectively. To fill this gap and provide educational, reference-quality implementations, I propose creating a new module, neural_network/optimizers, with the following optimizers added in this order (a brief sketch of the first two update rules follows the list):
- Stochastic Gradient Descent (SGD)
- Momentum SGD
- Nesterov Accelerated Gradient (NAG)
- Adagrad
- Adam
- Muon (a recent optimizer using Newton-Schulz orthogonalization)
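For concreteness, here is a minimal NumPy sketch of the first two update rules in the list (plain SGD and momentum SGD). The function names and the `learning_rate`/`momentum` defaults are illustrative assumptions, not a proposed final API:

```python
import numpy as np


def sgd_update(
    params: np.ndarray, grads: np.ndarray, learning_rate: float = 0.01
) -> np.ndarray:
    """Plain SGD: step each parameter against its gradient."""
    return params - learning_rate * grads


def momentum_update(
    params: np.ndarray,
    grads: np.ndarray,
    velocity: np.ndarray,
    learning_rate: float = 0.01,
    momentum: float = 0.9,
) -> tuple[np.ndarray, np.ndarray]:
    """Momentum SGD: reuse a decaying running sum of past gradients (the velocity)."""
    # Classical momentum form is assumed here; Nesterov's variant would evaluate
    # the gradient at the look-ahead point instead.
    velocity = momentum * velocity - learning_rate * grads
    return params + velocity, velocity
```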
The sequence is ordered by increasing complexity and practical adoption, which keeps contributions and reviews incremental. Each optimizer will include well-documented code, clear usage examples, type hints, and comprehensive doctests or unit tests, in the style sketched below.
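To illustrate that documentation style (type hints plus doctests), here is a hedged sketch of how the Adagrad entry might look; the class name, constructor signature, and doctest values are assumptions for discussion, not a finished implementation:

```python
from __future__ import annotations

import numpy as np


class Adagrad:
    """Adagrad: scale each parameter's step by its accumulated squared gradients.

    >>> optimizer = Adagrad(learning_rate=1.0)
    >>> params = np.array([1.0, 2.0])
    >>> grads = np.array([0.5, -0.5])
    >>> bool(np.allclose(optimizer.update(params, grads), [0.0, 3.0], atol=1e-6))
    True
    """

    def __init__(self, learning_rate: float = 0.01, epsilon: float = 1e-8) -> None:
        self.learning_rate = learning_rate
        self.epsilon = epsilon
        self.cache: np.ndarray | None = None  # running sum of squared gradients

    def update(self, params: np.ndarray, grads: np.ndarray) -> np.ndarray:
        if self.cache is None:
            self.cache = np.zeros_like(params)
        self.cache += grads**2
        # Per-parameter step: larger accumulated gradients shrink the effective rate.
        return params - self.learning_rate * grads / (np.sqrt(self.cache) + self.epsilon)
```

Each file would follow the same pattern: a short class or function, full type hints, and doctests runnable with the standard doctest module.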
This multi-step approach ensures maintainable growth of the module and benefits learners by covering the optimizers most commonly used in practice.
Feedback and suggestions on this plan are welcome before implementation begins.