My Notes on Machine Learning

Machine Learning is a sub-field of artificial intelligence that uses data to train predictive models.

3 Types of machine learning

  1. Supervised learning - Minimize an error function using labeled training data.
  2. Unsupervised learning - Find patterns using unlabeled training data. Examples include principal component analysis and clustering.
  3. Reinforcement learning - An agent interacts with an environment and learns to take actions that maximize a cumulative reward.
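The supervised case can be sketched in a few lines: fit a line to labeled (x, y) pairs by gradient descent on the mean squared error. The data, learning rate, and iteration count below are illustrative assumptions.

```python
import numpy as np

# Supervised learning sketch: minimize mean squared error on labeled data.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.05, size=100)  # true w=3, b=0.5

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    err = (w * x + b) - y
    # Gradients of MSE = mean(err**2) with respect to w and b.
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(round(w, 2), round(b, 2))  # close to the true parameters
```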

2 Types of machine learning problems

  1. Regression - predicting a continuous value. (Example: house prices)
  2. Classification - predicting a discrete value. (Examples: pass or fail, hot dog/not hot dog)
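A minimal sketch of the contrast, using made-up house data: the regression model returns a continuous number, while the classifier returns a discrete label (here just a threshold at the median price).

```python
import numpy as np

# Made-up data: square footage and price (in $1000s).
sqft = np.array([800, 1200, 1500, 2000, 2400], dtype=float)
price = np.array([120, 180, 220, 300, 350], dtype=float)

# Regression: predict a continuous value with a least-squares line.
w, b = np.polyfit(sqft, price, deg=1)
predicted_price = w * 1800 + b          # a float

# Classification: predict a discrete label by thresholding.
label = "expensive" if predicted_price > np.median(price) else "cheap"
print(round(predicted_price, 1), label)
```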

Input features

Features - the inputs to a machine learning model. They are the measurable properties being observed. Examples of features are pixel brightness in computer vision tasks or the square footage of a house in home price prediction.

Feature selection - The process of choosing the features. It is important to pick features that correlate with the output.

Dimensionality Reduction - Reducing the number of features while preserving the important information. A simple example is using the area of a house as a feature rather than using its width and length separately. Other examples include singular value decomposition, variational auto-encoders, t-SNE (for visualization), and max pooling layers in CNNs.
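Principal component analysis is one concrete dimensionality-reduction method; a minimal sketch via singular value decomposition, projecting made-up (width, length) house data onto its first principal component:

```python
import numpy as np

# Illustrative correlated 2-D data: house width and length.
rng = np.random.default_rng(0)
width = rng.uniform(10, 20, size=50)
length = width * 1.5 + rng.normal(scale=0.5, size=50)
X = np.column_stack([width, length])

Xc = X - X.mean(axis=0)                 # center the features
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
reduced = Xc @ Vt[0]                    # 1 feature instead of 2
explained = S[0] ** 2 / np.sum(S ** 2)  # fraction of variance kept
print(reduced.shape, round(explained, 3))
```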

Data

In supervised learning, data is typically split into training, validation, and test sets.

An example is a single instance from your dataset.
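The split can be sketched in plain Python; the 80/10/10 ratios below are a conventional choice, not a fixed rule.

```python
import random

# Stand-ins for (input, label) pairs.
examples = list(range(100))
random.Random(0).shuffle(examples)      # shuffle before splitting

n = len(examples)
train = examples[: int(0.8 * n)]
val = examples[int(0.8 * n): int(0.9 * n)]
test = examples[int(0.9 * n):]
print(len(train), len(val), len(test))  # 80 10 10
```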

Machine learning models and applications

Neural Networks - Neural networks are a suitable model when the input is a fixed-size set of features.

Convolutional Neural Networks - CNNs are suitable models for computer vision problems.

Transformers - are designed to handle sequential data, such as text, in a way that allows far more parallelization than earlier sequence models such as recurrent neural networks. See the 2017 paper Attention Is All You Need.
The Transformer is the architecture behind ChatGPT. Example: nanoGPT
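The core operation of the Transformer is scaled dot-product attention; a minimal NumPy sketch with assumed shapes (a sequence of 4 tokens with 8-dimensional queries, keys, and values):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # token-to-token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over rows
    return weights @ V                   # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```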

Computer Vision

These are common computer vision tasks and state of the art methods for solving them.

NLP Natural Language Processing

Deep Learning for NLP Deep learning models and nlp applications such as sentiment analysis, translation and dialog generation.

Transfer learning

Transfer learning - storing knowledge gained while solving one problem and applying it to a different but related problem.
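A toy sketch of the idea, with heavy assumptions: random weights stand in for a "pretrained" feature extractor, which is kept frozen while only a small linear head is fit (by least squares) for the new task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor (random weights as a stand-in;
# real transfer learning reuses weights learned on a large source task).
W_pretrained = rng.normal(size=(16, 4))

# New task's data.
X_new = rng.normal(size=(200, 16))
y_new = X_new @ rng.normal(size=16)

features = np.tanh(X_new @ W_pretrained)    # reuse frozen features
head, *_ = np.linalg.lstsq(features, y_new, rcond=None)  # fit head only
pred = features @ head
print(head.shape, pred.shape)
```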

Explainability

Feature visualization - In computer vision, generating images representative of what neural networks are looking for. Examples: TensorFlow Lucid, Activation Atlas

Feature attribution - In computer vision, determining and representing which pixels contribute to a classification. Examples: saliency maps, deconvolution, CAM, Grad-CAM
tf-explain - a TensorFlow visualization library.
fastai heatmap - uses Grad-CAM

LIME - Local Interpretable Model-agnostic Explanations
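A toy sketch of feature attribution: a saliency map approximated with finite differences, measuring how much each "pixel" of a made-up 4x4 input changes a made-up linear model's score. Real saliency maps compute this gradient by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))   # stand-in linear "model"
image = rng.normal(size=(4, 4))     # stand-in input

def score(img):
    return float(np.sum(weights * img))

# Finite-difference approximation of d(score)/d(pixel) for each pixel.
eps = 1e-4
saliency = np.zeros_like(image)
for i in range(4):
    for j in range(4):
        bumped = image.copy()
        bumped[i, j] += eps
        saliency[i, j] = (score(bumped) - score(image)) / eps

# For a linear model the saliency map equals the weights.
print(np.allclose(saliency, weights, atol=1e-5))  # True
```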
