
Add neural network optimizers module to enhance training capabilities #13662

@Adhithya-Laxman


Feature description

The current machine_learning directory in TheAlgorithms/Python lacks implementations of neural network optimizers, which are fundamental to training deep learning models effectively. To fill this gap with educational, reference-quality implementations, I propose creating a new module, neural_network/optimizers, and adding the following optimizers in sequence:

  • Stochastic Gradient Descent (SGD)
  • Momentum SGD
  • Nesterov Accelerated Gradient (NAG)
  • Adagrad
  • Adam
  • Muon (a recent optimizer using Newton-Schulz orthogonalization; see the sketch further below)

This order introduces the optimizers in increasing complexity, roughly following how widely they are used in practice, which keeps contributions incremental and easy to review. Each optimizer will have well-documented code, clear usage examples, type hints, and comprehensive doctests or unit tests, for example in the style sketched below.
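
To make the intended style concrete, here is a minimal sketch of what the first submission could look like. The module path neural_network/optimizers/sgd.py, the class name SGD, and the update signature are placeholders of my own, not a settled API; the actual implementation will follow reviewer feedback.

```python
"""Sketch only: a possible plain SGD implementation for the proposed
neural_network/optimizers module. Names and signatures are illustrative."""

import numpy as np


class SGD:
    """Vanilla stochastic gradient descent: params -= learning_rate * gradients.

    >>> optimizer = SGD(learning_rate=0.1)
    >>> params = np.array([1.0, 2.0])
    >>> gradients = np.array([0.5, 0.5])
    >>> optimizer.update(params, gradients)
    array([0.95, 1.95])
    """

    def __init__(self, learning_rate: float = 0.01) -> None:
        self.learning_rate = learning_rate

    def update(self, params: np.ndarray, gradients: np.ndarray) -> np.ndarray:
        """Return the parameters after a single descent step."""
        return params - self.learning_rate * gradients


if __name__ == "__main__":
    import doctest

    doctest.testmod()
```

Momentum SGD, NAG, Adagrad, and Adam would follow the same structure, each adding only its own state (velocity, accumulated squared gradients, moment estimates, and so on).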

This multi-step approach ensures maintainable growth of the module and benefits learners by covering the optimizers most commonly used in practice.
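
Since Muon is probably the least familiar item on the list, below is a rough sketch of the Newton-Schulz orthogonalization step it is built around, using the classical cubic iteration X <- 1.5·X - 0.5·X·Xᵀ·X. As I understand it, Muon applies a specially tuned polynomial of this kind to its momentum-based update, so treat this purely as an illustration of the underlying idea; the function name and step count are placeholders.

```python
"""Sketch only: classical Newton-Schulz iteration for (approximately)
orthogonalizing a matrix, the building block Muon's update relies on.
Muon uses a tuned polynomial variant; the cubic below is the textbook form."""

import numpy as np


def newton_schulz_orthogonalize(matrix: np.ndarray, steps: int = 25) -> np.ndarray:
    """Approximate the nearest (semi-)orthogonal matrix to ``matrix``.

    >>> m = np.array([[3.0, 1.0], [1.0, 2.0]])
    >>> q = newton_schulz_orthogonalize(m)
    >>> bool(np.allclose(q @ q.T, np.eye(2), atol=1e-6))
    True
    """
    # Rescale so all singular values are <= 1, which the iteration
    # requires in order to converge.
    x = matrix / (np.linalg.norm(matrix) + 1e-12)
    for _ in range(steps):
        # Each step pushes every singular value of x toward 1 while
        # leaving the singular vectors unchanged.
        x = 1.5 * x - 0.5 * x @ x.T @ x
    return x


if __name__ == "__main__":
    import doctest

    doctest.testmod()
```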

Feedback and suggestions on this plan are welcome before implementation begins.

Labels: enhancement
