| Books and Resources | Status of Completion |
| --- | --- |
| 1. Machine Learning From Scratch | ✅ |
| 2. A Comprehensive Guide to Machine Learning | ✅ |
| 3. Hands On Machine Learning with Scikit Learn, Keras and TensorFlow | ✅ |
| 4. Speech and Language Processing | |
| 5. Machine Learning Crash Course | ✅ |
| 6. Deep Learning with PyTorch: Part I | ✅ |
| 7. Dive into Deep Learning | ✅ |
| 8. Logistic Regression Documentation | ✅ |
| 9. Deep Learning for Coders with Fastai and PyTorch | ✅ |
| 10. Approaching Almost Any Machine Learning Problem | |
| 11. PyImageSearch | |

| Research Papers |
| --- |
| 1. Practical Recommendations for Gradient based Training of Deep Architectures |
Day1 of 300DaysOfData!
- Gradient Descent and Cross Validation: Gradient Descent is an iterative approach to approximating the Parameters that minimize a Differentiable Loss Function. Cross Validation is a resampling procedure used to evaluate Machine Learning Models on a limited Data sample; it has a single parameter that determines the number of groups the data is split into. On my Journey of Machine Learning and Deep Learning, Today I have read in brief about fundamental Topics such as Calculus, Matrices, Matrix Calculus, Random Variables, Density Functions, Distributions, Independence, Maximum Likelihood Estimation and Conditional Probability. I have also read about and Implemented Gradient Descent and Cross Validation. I am starting this Journey from scratch and I am following the Book: Machine Learning From Scratch. I have presented the Implementation of Gradient Descent and Cross Validation here in the Snapshots, and a minimal illustrative sketch is included below. I hope you will also spend some time reading the Topics from the Book mentioned above. I am excited about the days to come!!
- Book:
- Machine Learning From Scratch
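
As a rough illustration of the two ideas above (not the book's exact code), here is a minimal NumPy sketch of batch Gradient Descent for Linear Regression together with a hand-rolled K-Fold Cross Validation; the learning rate, epoch count, fold count and synthetic data are arbitrary choices.

```python
import numpy as np

def gradient_descent(X, y, lr=0.1, epochs=1000):
    """Fit linear regression weights by minimizing MSE with batch gradient descent."""
    X = np.c_[np.ones(len(X)), X]               # add a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = 2 / len(X) * X.T @ (X @ w - y)   # gradient of MSE with respect to w
        w -= lr * grad
    return w

def k_fold_mse(X, y, k=5):
    """Estimate test MSE with k-fold cross validation."""
    idx = np.random.permutation(len(X))
    scores = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = gradient_descent(X[train], y[train])
        X_val = np.c_[np.ones(len(fold)), X[fold]]
        scores.append(np.mean((X_val @ w - y[fold]) ** 2))
    return np.mean(scores)

X = np.random.randn(200, 3)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * np.random.randn(200)
print("cross validated MSE:", k_fold_mse(X, y))
```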
Day2 of 300DaysOfData!
- Ordinary Linear Regression: Linear Regression is a linear approach to modelling the relationships between a scalar response or dependent variable and one or more explanatory variables or independent variables. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Ordinary Linear Regression, Parameter Estimation, Minimizing Loss and Maximizing Likelihood along with the Construction and Implementation of the LR from the Book Machine Learning From Scratch. I have also started reading the Book A Comprehensive Guide to Machine Learning which focuses on Mathematics and Theory behind the Topics. I have read about Regression, Ordinary Least Squares, Vector Calculus, Orthogonal Projection, Ridge Regression, Feature Engineering, Fitting Ellipses, Polynomial Features, Hyperparameters and Validation, Errors and Cross Validation from this book. I have presented the Implementation of Linear Regression along with Visualizations using Python here in the Snapshots. I hope you will also spend some time reading the Topics and Books mentioned above. Excited about the days ahead!!
- Books:
- Machine Learning From Scratch
- A Comprehensive Guide to Machine Learning
Day3 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Regularized Regression such as Ridge Regression and Lasso Regression, Bayesian Regression, GLMs, Poisson Regression along with Construction and Implementation of the same from the Book Machine Learning From Scratch. I have also read the Book A Comprehensive Guide to Machine Learning which focuses on Mathematics and Theory behind the Topics. I have read about Maximum Likelihood Estimation or MLE and Maximum a Posteriori or MAP for Regression, Probabilistic Model, Bias Variance Tradeoff, Metrics, Bias Variance Decomposition, Alternative Decomposition, Multivariate Gaussians, Estimating Gaussians from Data, Weighted Least Squares, Ridge Regression, and Generalized Least Squares from this Book. I have presented the Implementation of Ridge Regression, Lasso Regression along with Cross Validation, Bayesian Regression and Poisson Regression using Python here in the Snapshot, and a minimal sketch of Regularized Regression is included below. I hope you will also spend some time reading the Topics and Books mentioned above. Excited about the days ahead!!
- Books:
- Machine Learning From Scratch
- A Comprehensive Guide to Machine Learning
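
A minimal sketch of the Regularized Regression ideas above, assuming synthetic data: Ridge Regression is solved in closed form (bias term omitted for brevity) and then both Ridge and Lasso are cross validated through Scikit Learn. The penalty strengths are arbitrary, not tuned values.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import cross_val_score

# Synthetic data: only the first two features actually matter.
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.randn(100)

# Ridge in closed form: w = (X^T X + alpha * I)^(-1) X^T y
alpha = 1.0
w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
print("closed-form ridge weights:", np.round(w_ridge, 2))

# The same penalties through scikit-learn, scored with 5-fold cross validation.
for model in (Ridge(alpha=1.0), Lasso(alpha=0.1)):
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(type(model).__name__, "mean R^2:", scores.mean().round(3))
```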
Day4 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Discriminative Classifiers such as Binary and Multiclass Logistic Regression, The Perceptron Algorithm, Parameter Estimation, Fisher's Linear Discriminant and Fisher Criterion along with Construction and Implementation of the same from the Book Machine Learning From Scratch. I have also read the Book A Comprehensive Guide to Machine Learning which focuses on Mathematics and Theory behind the Topics. I have read about Kernels and Ridge Regression, Linear Algebra Derivation, Computational Analysis, Sparse Least Squares, Orthogonal Matching Pursuit, Total Least Squares, Low rank Formulation, Dimensionality Reduction, Principal Component Analysis, Projection, Changing Coordinates, Minimizing Reconstruction Errors and Probabilistic PCA from this Book. I have presented the Implementation of Binary and Multiclass Logistic Regression, The Perceptron Algorithm and Fisher's Linear Discriminant using Python here in the Snapshot, and a minimal sketch of the Perceptron Algorithm is included below. I hope you will also spend some time reading the Topics and Books mentioned above. Excited about the days ahead!!
- Books:
- Machine Learning From Scratch
- A Comprehensive Guide to Machine Learning
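
A rough sketch of the classic Perceptron update rule on a linearly separable toy dataset (not the book's implementation); the data and the number of epochs are invented for illustration.

```python
import numpy as np

def perceptron(X, y, epochs=20):
    """Classic perceptron: w <- w + y_i * x_i whenever example i is misclassified.
    Labels are expected to be in {-1, +1}."""
    X = np.c_[np.ones(len(X)), X]          # bias term
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:         # misclassified (or on the boundary)
                w += yi * xi
    return w

# Linearly separable toy data: two Gaussian blobs.
rng = np.random.RandomState(1)
X = np.vstack([rng.randn(50, 2) + 2, rng.randn(50, 2) - 2])
y = np.array([1] * 50 + [-1] * 50)

w = perceptron(X, y)
preds = np.sign(np.c_[np.ones(len(X)), X] @ w)
print("training accuracy:", (preds == y).mean())
```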
Day5 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Generative Classifiers such as Linear Discriminant Analysis or LDA, Quadratic Discriminant Analysis or QDA, Naive Bayes, Parameter Estimation and Data Likelihood along with Construction and Implementation of the same from the Book Machine Learning From Scratch. I have also read the Book A Comprehensive Guide to Machine Learning which focuses on Mathematics and Theory behind the Topics. I have read about Generative and Discriminative Classification, Bayes Decision Rule, Least Squares Support Vector Machines, Feature Extension, Neural Network Extension, Binary and Multiclass Logistic Regression, Loss Function, Training, Multiclass Extension, Gaussian Discriminant Analysis, QDA and LDA Classification and Support Vector Machines from this Book. I have presented the Implementation of LDA, QDA and Naive Bayes along with Visualizations using Python here in the Snapshot. I hope you will also spend some time reading the Topics and Books mentioned above. Excited about the days ahead!!
- Books:
- Machine Learning From Scratch
- A Comprehensive Guide to Machine Learning
Day6 of 300DaysOfData!
- Decision Trees: A Decision Tree is an interpretable machine learning model for Regression and Classification. It is a flowchart-like structure in which each internal node represents a Test on an attribute and each branch represents the outcome of the Test. On my Journey of Machine Learning and Deep Learning, Today I have read about Decision Trees such as Regression Trees and Classification Trees, Building Trees, Making Splits and Predictions, Hyperparameters, Pruning and Regularization along with Construction and Implementation of the same from the Book Machine Learning From Scratch. I have also read the Book A Comprehensive Guide to Machine Learning which focuses on Mathematics and Theory behind the Topics. I have read about Decision Tree Learning, Entropy and Information, Gini Impurity, Stopping Criteria, Random Forests, Boosting and AdaBoost, Gradient Boosting and KMeans Clustering from this Book. I have presented the Implementation of Regression Trees and Classification Trees using Python here in the Snapshot, and a minimal sketch of Gini Impurity based splitting is included below. I hope you will also spend some time reading the Topics and Books mentioned above. Excited about the days ahead!!
- Books:
- Machine Learning From Scratch
- A Comprehensive Guide to Machine Learning
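
To make the splitting criterion concrete, here is a small sketch of Gini Impurity and an exhaustive search for the best threshold on a single feature; the toy data is invented for illustration and this is not the book's tree implementation.

```python
import numpy as np

def gini(y):
    """Gini impurity of a set of class labels: 1 - sum_k p_k^2."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Scan candidate thresholds on one feature and return the one that
    minimizes the weighted Gini impurity of the two children."""
    best = (None, np.inf)
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (t, score)
    return best

x = np.array([2.0, 3.5, 1.0, 6.0, 7.5, 8.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_split(x, y))   # a threshold of 3.5 gives a perfectly pure split here
```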
Day7 of 300DaysOfData!
- Tree Ensemble Methods: Ensemble Methods combine the outputs of multiple simple Models, which are often called Learners, in order to create a final Model with lower variance. Due to their high variance, Decision Trees often fail to reach a level of precision comparable to other predictive algorithms, and Ensemble Methods minimize that variance. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Tree Ensemble Methods such as Bagging for Decision Trees, Bootstrapping, Random Forests and Procedure, Boosting, AdaBoost for Binary Classification, Weighted Classification Trees, The Discrete AdaBoost Algorithm and AdaBoost for Regression along with Construction and Implementation of the same from the Book Machine Learning From Scratch. I have presented the Implementation of Bagging, Random Forests and AdaBoost along with different base estimators using Python here in the Snapshot, and a minimal sketch is included below. I hope you will also spend some time reading the Topics and Book mentioned above. Excited about the days ahead !!
- Book:
- Machine Learning From Scratch
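
A short sketch of the Ensemble Methods above using Scikit Learn rather than the from-scratch versions; the dataset is synthetic and the estimator counts are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

models = {
    "bagged trees": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=42),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "adaboost (stumps)": AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                            n_estimators=100, random_state=42),
}
for name, model in models.items():
    # 5-fold cross validated accuracy for each ensemble.
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```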
Day8 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Neural Networks from the Book Machine Learning From Scratch. I have read about Model Structure, Communication between Layers, Activation Functions such as ReLU, Sigmoid, The Linear Activation Function, Optimization, Back Propagation, Calculating Gradients, Chain Rule and Observations, Loss Functions along with Construction using The Loop Approach and The Matrix Approach and Implementation of the same from this Book. I have also read the Book A Comprehensive Guide to Machine Learning which focuses on Mathematics and Theory behind the Topics. I have read about Convolutional Neural Networks and Layers, Pooling Layers, Back Propagation for CNN, ResNet and Visual Understanding of CNNs from this Book. Besides, I have seen a couple of videos of Neural Networks and Deep Learning. I have presented the simple Implementation of Neural Networks with The Functional API and The Sequential API using TensorFlow here in the Snapshot. I hope you will also spend some time reading the Topics and Books mentioned above. Excited about the days ahead !!
- Books:
- Machine Learning From Scratch
- A Comprehensive Guide to Machine Learning
Day9 of 300DaysOfData!
- Reinforcement Learning: In Reinforcement Learning, the Learning system, called an Agent, can observe the environment, select and perform actions and get rewards in return, or penalties in the form of negative rewards. It must learn by itself what the best policy is to get the most reward over time. On my Journey of Machine Learning and Deep Learning, Today I have started reading and Implementing from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have read briefly about The Machine Learning Landscape viz. Types of Machine Learning Systems such as Supervised and Unsupervised Learning, Semisupervised Learning, Reinforcement Learning, Batch Learning and Online Learning, Instance Based Learning and Model Based Learning from this Book. I have presented the simple Implementation of Linear Regression and KNearest Neighbors along with a simple plot using Python here in the Snapshot. I hope you will also spend some time reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day10 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read about the Main Challenges of Machine Learning such as Insufficient Quantity of Training Data, Non representative Training Data, Poor Quality Data, Irrelevant Features, Overfitting and Underfitting the Training Data and Testing and Validating, Hyperparameter Tuning and Model Selection and Data Mismatch from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have started working on California Housing Prices Dataset which is included in this Book. I will build a Model of Housing Prices in California in this Project. I have presented the simple Implementation of Data Processing and few techniques of EDA using Python here in the Snapshot. I have also presented the Implementation of Sweetviz Library for Analysis here. I really appreciate Chanin Nantasenamat for sharing about this Library in one of his videos. I hope you will also spend some time reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
- Chanin Nantasenamat Video on Sweetviz
- California Housing Prices
Day11 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have learned and Implemented about Creating categories from attributes, Stratified Sampling, Visualizing Data to gain insights, Scatter Plots, Correlations, Scatter Matrix and Attribute Combinations from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have continued working with California Housing Prices Dataset which is included in this Book. This Dataset was based on Data from the 1990 California Census. I will build a Model of Housing Prices in California in this Project. I am still working on the same. I have presented the Implementation of Stratified Sampling, Correlations using Scatter Matrix and Attribute combinations using Python here in the Snapshots. I have also presented the Snapshots of Correlations using Scatter plots here. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead !!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
- California Housing Prices
Day12 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have learned and Implemented about Preparing the Data for Machine Learning Algorithms, Data Cleaning, Simple Imputer, Ordinal Encoder, OneHot Encoder, Feature Scaling, Transformation Pipeline, Standard Scaler, Column Transformer, Linear Regression, Decision Tree Regressor and Cross Validation from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have continued working with California Housing Prices Dataset which is included in this Book. This Dataset was based on Data from the 1990 California Census. I will build a Model of Housing Prices in California in this Project. The Notebook contains almost every Topic mentioned above. I have presented the Implementation of Data Preparation, Handling missing values, OneHot Encoder, Column Transformer, Linear Regression, Decision Tree Regressor along with Cross Validation using Python here in the Snapshots, and a minimal Pipeline sketch is included below. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead !!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
- California Housing Prices
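
A compressed sketch of the preparation Pipeline described above, using a tiny made-up stand-in for the housing data rather than the real Dataset; column names such as `median_income` and `ocean_proximity` mirror the Dataset but the values are invented.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny stand-in for the housing data: numeric columns plus one categorical column.
df = pd.DataFrame({
    "median_income": [8.3, 7.2, np.nan, 5.6],
    "housing_median_age": [41, 21, 52, 36],
    "ocean_proximity": ["NEAR BAY", "INLAND", "NEAR BAY", "INLAND"],
})
y = [452600, 358500, 352100, 341300]

num_cols = ["median_income", "housing_median_age"]
cat_cols = ["ocean_proximity"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), num_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),
])

model = Pipeline([("prep", preprocess), ("reg", LinearRegression())])
model.fit(df, y)
print(model.predict(df.head(2)))
```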
Day13 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have learned and Implemented about Random Forest Regressor, Ensemble Learning, Tuning the Model, Grid Search, Randomized Search, Analyzing the Best Models and Errors, Model Evaluation, Cross Validation and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have completed working with California Housing Prices Dataset which is included in this Book. This Dataset was based on Data from the 1990 California Census. I have built a Model using Random Forest Regressor of California Housing Prices Dataset to predict the price of the Houses in California. I have presented the Implementation of Random Forest Regressor and Tuning the Model with Grid Search and Randomized Search along with Cross Validation using Python here in the Snapshot. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
- California Housing Prices
Day14 of 300DaysOfData!
- Confusion Matrix: Confusion Matrix is a better way to evaluate the performance of a Classifier. The general idea of Confusion Matrix is to count the number of times instances of Class A are classified as Class B. This approach requires having a set of predictions so that they can be compared to the actual targets. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Classification, Training a Binary Classifier using Stochastic Gradient Descent, Measuring Accuracy using Cross Validation, Implementation of CV, Confusion Matrix, Precision and Recall and their Curves and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of SGD Classifier in MNIST Dataset along with Precision and Recall using Python here in the Snapshots. I have also presented the curves of Precision and Recall here, and a minimal sketch of this evaluation workflow is included below. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. I am excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
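
A small sketch of the evaluation workflow above, using Scikit Learn's bundled digits dataset as a stand-in for MNIST; the "is this digit a 5?" task and the SGD settings follow the spirit of the chapter, not its exact notebook.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import confusion_matrix, precision_score, recall_score
from sklearn.model_selection import cross_val_predict

# Binary task: "is this digit a 5?"
X, y = load_digits(return_X_y=True)
y_is_5 = (y == 5)

clf = SGDClassifier(random_state=42)
# cross_val_predict returns out-of-fold predictions, so every prediction is "clean".
y_pred = cross_val_predict(clf, X, y_is_5, cv=3)

print(confusion_matrix(y_is_5, y_pred))
print("precision:", precision_score(y_is_5, y_pred).round(3))
print("recall   :", recall_score(y_is_5, y_pred).round(3))
```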
Day15 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about The ROC Curve, Random Forest Classifier, SGD Classifier, Multi Class Classification, One vs One and One vs All Strategies, Cross Validation, Error Analysis using Confusion Matrix, Multi Class Classification, KNeighbors Classifier, Multi Output Classification, Noises, Precision and Recall Tradeoff and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have completed the Topic Classification from this Book. I have presented the Implementation of The ROC Curve, Random Forest Classifier in Multi Class Classification, The One vs One Strategy, Standard Scaler, Error Analysis, Multi Label Classification and Multi Output Classification using Scikit Learn here in the Snapshots. I hope you will also work on the same. I hope you will also spend some time reading the Topics and Book mentioned above. I am excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day16 of 300DaysOfData!
- Ridge Regression: Ridge Regression is a regularized Linear Regression viz. a regularization term is added to the cost function which forces the learning algorithm to not only fit the Data but also keep the model weights as small as possible. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Training the Models, Linear Regression, The Normal Equations and Computational Complexity, Cost Function and Gradient Descent such as Batch Gradient Descent, Convergence Rate, Stochastic Gradient Descent, Mini batch Gradient Descent, Polynomial Regression and Poly Features, Learning Curves, Bias and Variance Tradeoff, Regularized Linear Models such as Ridge Regression and few more related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Polynomial Regression, Learning Curves and Ridge Regression along with Visualization using Python here in the Snapshots. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day17 of 300DaysOfData!
- Elastic Net: Elastic Net is a middle ground between Ridge Regression and Lasso Regression. Its regularization term is a simple mix of both Ridge and Lasso's regularization terms, controlled by the mix ratio r. When r equals 0, it is equivalent to Ridge Regression and when r equals 1, it is equivalent to Lasso Regression. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Lasso Regression, Elastic Net, Early Stopping, SGD Regressor, Logistic Regression, Estimating Probabilities, Training and Cost Function, Sigmoid Function, Decision Boundaries, Softmax Regression or Multinomial Logistic Regression, Cross Entropy and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have just started reading the Topic Support Vector Machines. I have presented the simple Implementation of Lasso Regression, Elastic Net, Early Stopping, Logistic Regression and Softmax Regression using Scikit Learn here in the Snapshots. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day18 of 300DaysOfData!
- Support Vector Machines: A Support Vector Machine or SVM is a very powerful and versatile Machine Learning model which is capable of performing Linear and Nonlinear Classification, Regression and even outlier detection. SVMs are particularly well suited for classification of complex but medium sized datasets. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Support Vector Machines, Linear SVM Classification, Soft Margin Classification, Nonlinear SVM Classification, Polynomial Regression, Polynomial Kernel, Adding Similarity Features, Gaussian RBF Kernel, Computational Complexity, SVM Regression which is Linear as well as Nonlinear and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Nonlinear SVM Classification using SVC and Linear SVC along with Visualization using Python here in the Snapshots. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day19 of 300DaysOfData!
- Voting Classifiers: Voting Classifiers are classifiers which aggregate the predictions of different Classifiers and predict the class that gets the most votes. The majority vote classifier is called a Hard Voting Classifier. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Ensemble Learning and Random Forests, Voting Classifiers such as Hard Voting and Soft Voting Classifiers and few more topics related to the same. Actually, I have also started working on a Research Project with an amazing Team. I have presented the Implementation of Hard Voting and Soft Voting Classifiers using Scikit Learn here in the Snapshots. I hope you will spend some time working on the same and reading the Topics mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day20 of 300DaysOfData!
- The CART Training Algorithm: Scikit Learn uses the Classification and Regression Tree or CART algorithm to train Decision Trees, which is also called Growing Trees. Its working principle is splitting the Training set into two subsets using a single feature and a threshold. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Decision Functions and Predictions, Decision Trees, Decision Tree Classifier, Making Predictions, Gini Impurity, White Box Models and Black Box Models, Estimating Class Probabilities, The CART Training Algorithm, Computational Complexities, Entropy, Regularization Hyperparameters, Decision Tree Regressor, Cost Function and Instability from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the simple Implementation of Decision Tree Classifier and Decision Tree Regressor along with Visualization of the same using Python here in the Snapshots. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day21 of 300DaysOfData!
- Bagging and Pasting: It refers to the approach which uses the same Training Algorithm for every predictor but trains them on different random subsets of the Training set. When sampling is performed with replacement, it is called Bagging and when sampling is performed without replacement, it is called Pasting. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Ensemble Learning and Random Forests, Voting Classifiers, Bagging and Pasting in Scikit Learn, Out of Bag Evaluation, Random Patches and Random Subspaces, Random Forests, Extremely Randomized Trees Ensemble, Feature Importance, Boosting, AdaBoost, Gradient Boosting and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Bagging Ensembles, Decision Trees, Random Forest Classifier, Feature Importance, AdaBoost Classifier and Gradient Boosting using Python here in the Snapshots. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day22 of 300DaysOfData!
- Manifold Learning: Manifold Learning refers to the Dimensionality Reduction Algorithms that work by modeling the manifold on which the training instances lie. It relies on the manifold hypothesis, which holds that most real world high dimensional datasets lie close to a much lower dimensional manifold. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Gradient Boosting, Early Stopping, Stochastic Gradient Boosting, Extreme Gradient Boosting or XGBoost, Stacking and Blending, Dimensionality Reduction, Curse of Dimensionality, Approaches for Dimensionality Reduction, Projection and Manifold Learning and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Gradient Boosting with Early Stopping along with Visualization using Scikit Learn here in the Snapshots. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day23 of 300DaysOfData!
- Incremental PCA: Incremental PCA or IPCA refers to algorithms in which we split the Training set into mini batches and feed an IPCA Algorithm one mini batch at a time. It is useful for large Training sets and also for applying PCA online. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Principal Component Analysis or PCA, Preserving the Variance, Principal Components, Projecting Down the Dimensions, Explained Variance Ratio, Choosing the Right Number of Dimensions, PCA for Compression and Decompression, Reconstruction Error, Randomized PCA, SVD, Incremental PCA and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of PCA, Randomized PCA and Incremental PCA along with Visualizations using Scikit Learn here in the Snapshots, and a minimal sketch is included below. I hope you will spend some time working on the same. I hope you will also spend some time reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
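
A minimal sketch of Incremental PCA on random data, feeding the estimator one mini batch at a time; the array shape, batch count and number of components are arbitrary.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# 1,000 samples with 50 features, processed 100 rows at a time.
X = np.random.randn(1000, 50)

ipca = IncrementalPCA(n_components=10)
for batch in np.array_split(X, 10):
    ipca.partial_fit(batch)                     # feed one mini batch at a time

X_reduced = ipca.transform(X)
print(X_reduced.shape)                          # (1000, 10)
print(ipca.explained_variance_ratio_.sum())     # variance preserved by 10 components
```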
Day24 of 300DaysOfData!
- Clustering: Clustering Algorithms are the algorithms whose goal is to group similar instances together into Clusters. It is a great tool for Data Analysis, Customer Segmentation, Recommender Systems, Search Engines, Image Segmentation, Dimensionality Reduction and many more. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Kernel Principal Component Analysis, Selecting a Kernel and Tuning Hyperparameters, Pipeline and Grid Search, Locally Linear Embedding, Dimensionality Reduction Techniques such as Multi Dimensional Scaling, Isomap and Linear Discriminant Analysis, Unsupervised Learning such as Clustering and KMeans Clustering Algorithm and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Kernel PCA and Grid Search CV, and KMeans Clustering Algorithm along with a Visualization using Python here in the Snapshots. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day25 of 300DaysOfData!
- Image Segmentation: Image Segmentation is the task of partitioning an Image into multiple segments. In Semantic Segmentation, all the pixels that are part of the same object type get assigned to the same segment. In Instance Segmentation, all pixels that are part of the same individual object are assigned to the same segment. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about KMeans Algorithms, Centroid Initialization, Accelerated KMeans and Mini Batch KMeans, Finding the Optimal Numbers of Clusters, Elbow rule and Silhouette Coefficient score, Limitations of KMeans, Using Clustering for Image Segmentation and Preprocessing such as Dimensionality Reduction and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Clustering Algorithms for Image Segmentation and Preprocessing along with Visualizations using Python here in the Snapshots, and a minimal sketch of KMeans based color segmentation is included below. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
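
A short sketch of color segmentation with KMeans on one of Scikit Learn's bundled sample images (loading it requires Pillow); the number of clusters is an arbitrary choice.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_sample_image

# Cluster the pixels of a sample image into 8 representative colors.
image = load_sample_image("china.jpg")                  # shape (427, 640, 3)
pixels = image.reshape(-1, 3).astype(float)

kmeans = KMeans(n_clusters=8, n_init=10, random_state=42).fit(pixels)
segmented = kmeans.cluster_centers_[kmeans.labels_]     # replace each pixel by its centroid color
segmented = segmented.reshape(image.shape).astype(np.uint8)
print(segmented.shape, "segmented using", len(kmeans.cluster_centers_), "colors")
```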
Day26 of 300DaysOfData!
- Gaussian Mixtures Model: A Gaussian Mixture Model or GMM is a probabilistic Model that assumes that the instances were generated from the mixture of several Gaussian distributions whose parameters are unknown. All the instances generated from a single Gaussian Distribution form a cluster that typically looks like an Ellipsoid. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about using Clustering Algorithms for Semi Supervised Learning, Active Learning and Uncertainty Sampling, DBSCAN, Agglomerative Clustering, Birch Algorithms, Mean Shift and Affinity Propagation Algorithms, Spectral Clustering, Gaussian Mixtures Model, Expectation Maximization Algorithm and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Clustering Algorithms for Semi supervised Learning and DBSCAN along with Visualizations using Python here in the Snapshots. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day27 of 300DaysOfData!
- Anomaly Detection: Anomaly Detection also called Outlier Detection is the task of detecting instances that deviate strongly from the norm. These instances are called anomalies or outliers while the normal instances are called inliers. It is useful in Fraud Detection and more. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Gaussian Mixture Models, Anomaly Detection using Gaussian Mixtures, Novelty Detection, Selecting the Number of Clusters, Bayesian Information Criterion, Akaike Information Criterion, Likelihood Function, Bayesian Gaussian Mixture Models, Fast MCD, Isolation Forest, Local Outlier Factor, One Class SVM and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have just started Neural Networks and Deep Learning from this Book. I have presented the Implementation of Gaussian Mixture Model along with Visualizations using Python here in the Snapshots. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day28 of 300DaysOfData!
- Rectified Linear Unit Function or ReLU: It is continuous but not differentiable at 0, where the slope changes abruptly, which can make Gradient Descent bounce around. It works very well and has the advantage of being fast to compute. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Introduction to Artificial Neural Networks with Keras, Biological Neurons, Logical Computations with Neurons, The Perceptron, Hebbian Learning, Multi Layer Perceptron and Backpropagation, Gradient Descent, Hyperbolic Tangent Function and Rectified Linear Unit Function, Regression MLPs, Classification MLPs, Softmax Activation and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Building an Image Classifier using the Sequential API along with Visualization using Keras here in the Snapshots, and a minimal sketch is included below. I hope you will spend some time working on the same and reading the Topics and Book mentioned above. I am excited about the days ahead !!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
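
A minimal sketch of an image classifier built with the Keras Sequential API on Fashion MNIST, in the spirit of the chapter; the layer sizes and epoch count are arbitrary.

```python
from tensorflow import keras

# Fashion MNIST: 28x28 grayscale images, 10 clothing classes.
(X_train, y_train), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0     # scale pixels to [0, 1]

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, validation_split=0.1)
print(model.evaluate(X_test, y_test))
```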
Day29 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Creating the Model using Sequential API, Compiling the Model, Loss Function and Activation Function, Training and Evaluating the Model, Learning Curves, Using the Model to make Predictions, Building the Regression MLP using the Sequential API, Building Complex Models using the Functional API, Deep Neural Networks and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Building Regression MLP using Sequential API and Functional API here in the Snapshots. I hope you will gain some insights and you will spend some time working on the same. I hope you will also spend some time reading and Implementing the Topics from the Book mentioned above. I am excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day30 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Building the Complex Models using Functional API, Deep Neural Network Architecture, ReLU Activation Function, Handling Multiple Inputs in the Model, Mean Squared Error Loss Function and Stochastic Gradient Descent Optimizer, Handling Multiple Outputs or Auxiliary Output for Regularization and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Handling Multiple Inputs using Keras Functional API along with the Implementation of Handling Multiple Outputs or Auxiliary Output for Regularization using the same here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. I am excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day31 of 300DaysOfData!
- Callbacks and Early Stopping: Early Stopping is a method that allows you to specify an arbitrarily large number of Training epochs and stop training once the Model stops improving on the validation dataset. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Building Dynamic Models using the Sub Classing API, Sequential API and Functional API, Saving and Restoring the Model, Using Callbacks, Model Checkpoints, Early Stopping, Weights and Biases and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Building Dynamic Models using the Sub Classing API along with the Implementation of Using Callbacks and Early Stopping here in the Snapshots, and a minimal sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead!!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
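
A small sketch of Early Stopping and Model Checkpointing with Keras callbacks on made-up regression data; the patience value and the file name `best_model.keras` are arbitrary choices, and the `.keras` format assumes a recent version of Keras.

```python
import numpy as np
from tensorflow import keras

# Toy regression data just to exercise the callbacks.
X = np.random.randn(1000, 8)
y = X @ np.random.randn(8) + 0.1 * np.random.randn(1000)

model = keras.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="adam")

callbacks = [
    # Stop once the validation loss has not improved for 10 epochs,
    # rolling the weights back to the best epoch seen so far.
    keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
    # Keep a copy of the best model on disk as training goes.
    keras.callbacks.ModelCheckpoint("best_model.keras", save_best_only=True),
]
history = model.fit(X, y, epochs=200, validation_split=0.2,
                    callbacks=callbacks, verbose=0)
print("stopped after", len(history.history["loss"]), "epochs")
```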
Day32 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Visualization using TensorBoard, Learning Curves, Fine Tuning Neural Network Hyperparameters, Randomized Search CV, Regressor, Libraries to optimize Hyperparameters such as Hyperopt, Talos and few more, Number of Hidden Layers, Number of Neurons per Hidden Layer, Learning Rate, Batch size and Other Hyperparameters and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have also spent some time reading the Paper Practical Recommendations for Gradient based Training of Deep Architectures. Here, I have read about Deep Learning and Greedy Layer Wise Pretraining, Online Learning and Optimization of Generalization Error and few more related to the same. I have presented the Implementation of Tuning Hyperparameters, Keras Regressors and Randomized Search CV here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
- Paper:
- Practical Recommendations for Gradient based Training of Deep Architectures
Day33 of 300DaysOfData!
- Vanishing Gradient: During Backpropagation, the Gradients often get smaller and smaller as the Algorithm progresses down to the lower layers, which prevents the Training from converging to a good solution. This is the Vanishing Gradient Problem. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Training Deep Neural Networks, Vanishing and Exploding Gradient Problems, Glorot and He Initialization, Non Saturating Activation Functions, Batch Normalization and its Implementation, Logistic and Sigmoid Activation Function, SELU Activation Function, ReLU Activation Function and Variants, Leaky ReLU and Parametric Leaky ReLU and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Leaky ReLU and Batch Normalization here in the Snapshot, and a minimal sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
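
A minimal sketch of a network where each hidden layer uses He Initialization, Batch Normalization and Leaky ReLU, as discussed above; the layer sizes are arbitrary.

```python
from tensorflow import keras

# A small fully connected classifier: Dense -> BatchNorm -> LeakyReLU per hidden layer.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(300, kernel_initializer="he_normal"),
    keras.layers.BatchNormalization(),
    keras.layers.LeakyReLU(),
    keras.layers.Dense(100, kernel_initializer="he_normal"),
    keras.layers.BatchNormalization(),
    keras.layers.LeakyReLU(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd", metrics=["accuracy"])
model.summary()
```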
Day34 of 300DaysOfData!
- Gradient Clipping: Gradient Clipping is a technique to lessen the exploding Gradients problem which simply clips the Gradients during backpropagation so that they never exceed some threshold; it is mostly used in Recurrent Neural Networks. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Gradient Clipping, Batch Normalization, Reusing Pretrained Layers, Deep Neural Networks and Transfer Learning, Unsupervised Pretraining, Restricted Boltzmann Machines, Pretraining on an Auxiliary Task, Self Supervised Learning, Faster Optimizers, Gradient Descent Optimizer, Momentum Optimization, Nesterov Accelerated Gradient and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the simple Implementation of Transfer Learning using Keras and Sequential API here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day35 of 300DaysOfData!
- Adam Optimization: Adam which stands for Adaptive Moment Estimation combines the ideas of Momentum Optimization and RMSProp where Momentum Optimization keeps track of an exponentially decaying average of past gradients and RMSProp keeps track of an exponentially decaying average of past squared gradients. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about AdaGrad Algorithm, Gradient Descent, RMSProp Algorithm, Adaptive Moment Estimation or Adam Optimization, Adamax, Nadam Optimization, Training Sparse Models, Dual Averaging, Learning Rate Scheduling, Power Scheduling, Exponential Scheduling, Piecewise Constant Scheduling, Performance Scheduling and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Exponential Scheduling and Piecewise Constant Scheduling here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day36 of 300DaysOfData!
- Deep Neural Networks: A default Deep Neural Network configuration that will work fine in most cases without requiring much Hyperparameter Tuning is: Kernel Initializer as LeCun Initialization, Activation Function as SELU, Normalization as None, Regularization as Early Stopping, Optimizer as Nadam, Learning Rate Schedule as Performance Scheduling. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Avoiding Overfitting Through Regularization, L1 and L2 Regularization, Dropout Regularization, Self Normalization, Batch Normalization, Monte Carlo Dropout, Max Norm Regularization, Activation Functions like SELU and Leaky ReLU, Nadam Optimization and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of L2 Regularization and Dropout Regularization using Keras here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day37 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Custom Models and Training with TensorFlow, High Level Deep Learning APIs, IO and Preprocessing, Lower Level Deep Learning APIs, Deployment and Optimization, TensorFlow Architecture, Tensors and Operations, Keras Low Level API, Tensors and Numpy, Sparse Tensors, Arrays, String Tensors, Custom Loss Functions, Saving and Loading the Models containing Custom Components and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have also started reading a Book Speech and Language Processing. Here, I have read about Regular Expressions, Text Normalization, Tokenization, Lemmatization, Stemming, Sentence Segmentation, Edit Distance and few more Topics related to the same. I have presented the simple Implementation of Custom Loss Function here in the Snapshot. I hope you will also spend some time reading the Topics from the Books mentioned above and below. Excited about the days ahead !!
- Books:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
- Speech and Language Processing
Day38 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Custom Activation Functions, Initializers, Regularizers and Constraints, Custom Metrics, MAE and MSE, Streaming Metric, Custom Layers, Custom Models, Losses and Metrics based on Models Internals and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have also continued reading the Book Speech and Language Processing. Here, I have read about Regular Expressions, Basic Regular Expression Patterns, Disjunction, Range, Kleene Star, Wildcard Expression, Grouping and Precedence, Operator Hierarchy, Greedy and Non Greedy matching, Sequence and Anchors, Counters and few more Topics related to the same. I have presented the Implementation of Custom Activation Functions, Initializers, Regularizers, Constraints and Custom Metrics here in the Snapshots. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Books:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
- Speech and Language Processing
Day39 of 300DaysOfData!
- Prefetching and Data API: Prefetching means loading a resource before it is required, to decrease the time spent waiting for that resource. In other words, while the Training Algorithm is working on one batch, the dataset will already be working in parallel on getting the next batch ready, which will improve the performance dramatically. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Loading and Preprocessing Data using TensorFlow, The Data API, Chaining Transformations, Shuffling the Dataset, Gradient Descent, Interleaving Lines From Multiple Files, Parallelism, Preprocessing the Dataset, Decoding, Prefetching, Multithreading and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the simple Implementation of Data API using TensorFlow here in the Snapshot, and a minimal sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Books:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
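
A short sketch of a `tf.data` input pipeline that chains slicing, shuffling, batching and prefetching; the array shapes, buffer size and batch size are arbitrary.

```python
import numpy as np
import tensorflow as tf

# Toy features and labels to build the pipeline from.
X = np.random.randn(10_000, 8).astype("float32")
y = (X[:, 0] > 0).astype("float32")

dataset = (
    tf.data.Dataset.from_tensor_slices((X, y))
    .shuffle(buffer_size=1_000)       # randomize the order of examples
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)       # prepare the next batch while the model trains on this one
)

for features, labels in dataset.take(1):
    print(features.shape, labels.shape)   # (32, 8) (32,)
```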
Day40 of 300DaysOfData!
- Embedding and Representation Learning: An Embedding is a trainable dense vector that represents a category. The better the representation of the categories, the easier it will be for the Neural Network to make accurate predictions, so Embeddings must learn useful representations of the categories. This is called Representation Learning. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about The Features API, Column Transformer, Numerical and Categorical Features, Crossed Categorical Features, Encoding Categorical Features using One Hot Vectors and Embeddings, Representation Learning, Word Embeddings, Using Feature Columns for Parsing, Using Feature Columns in the Models and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the simple Implementation of The Features API in Numerical and Categorical Columns along with Parsing here in the Snapshots. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Books:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day41 of 300DaysOfData!
- Convolutional Layer: The most important building block of a CNN is the Convolutional Layer. Neurons in the first Convolutional Layer are not connected to every single pixel in the Input Image but only to pixels in their receptive fields. Similarly, each Neuron in the second Convolutional Layer is connected only to neurons located within a small rectangle in the first layer. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Deep Computer Vision using Convolutional Neural Networks, The Architecture of the Visual Cortex, Convolutional Layer, Zero Padding, Filters, Stacking Multiple Feature Maps, Padding, Memory Requirements, Pooling Layer, Invariance, Convolutional Neural Network Architectures and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the simple Implementation of Convolutional Neural Network Architecture here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Books:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day42 of 300DaysOfData!
- ResNet Model: Residual Network or ResNet won the ILSVRC 2015 Challenge, developed by Kaiming He et al., using an extremely deep CNN composed of 152 Layers. This Network uses Skip connections, which are also called Shortcut connections: the signal feeding into a layer is also added to the output of a layer located a bit higher up the stack. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about LeNet-5 Architecture, AlexNet CNN Architecture, Data Augmentation, Local Response Normalization, GoogLeNet Architecture, Inception Module, VGGNet, Residual Network or ResNet, Residual Learning, Xception or Extreme Inception, Squeeze and Excitation Network or SENet and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of ResNet 34 CNN using Keras here in the Snapshot, and a minimal sketch of a residual block is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Books:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
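
A rough sketch of a basic residual block with a skip connection using the Keras Functional API; this is a simplified illustration, not the full ResNet 34 from the Snapshot, and the layer sizes are arbitrary.

```python
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(inputs, filters, strides=1):
    """A basic ResNet block: two 3x3 convolutions plus a skip connection.
    When shapes change, the shortcut uses a 1x1 convolution so they match."""
    x = layers.Conv2D(filters, 3, strides=strides, padding="same", use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.Conv2D(filters, 3, strides=1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)

    shortcut = inputs
    if strides > 1 or inputs.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, strides=strides, use_bias=False)(inputs)
        shortcut = layers.BatchNormalization()(shortcut)

    return layers.Activation("relu")(layers.Add()([x, shortcut]))

inputs = keras.Input(shape=(224, 224, 3))
x = layers.Conv2D(64, 7, strides=2, padding="same")(inputs)
x = residual_block(x, 64)
x = residual_block(x, 128, strides=2)
outputs = layers.Dense(1000, activation="softmax")(layers.GlobalAveragePooling2D()(x))
model = keras.Model(inputs, outputs)
model.summary()
```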
Day43 of 300DaysOfData!
- Xception Model: Xception which stands for Extreme Inception is a variant of GoogLeNet Architecture which was proposed in 2016 by François Chollet. It merges the ideas of GoogLeNet and ResNet Architecture but it replaces the Inception modules with a special type of layer called a Depthwise Separable Convolution. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Using Pretrained Models from Keras, GoogLeNet and Residual Network or ResNet, ImageNet, Pretrained Models for Transfer Learning, Xception Model, Convolutional Neural Network, Batching, Prefetching, Global Average Pooling and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have presented the Implementation of Pretrained Models such as ResNet and Xception for Transfer Learning here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Books:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day44 of 300DaysOfData!
- Semantic Segmentation: In Semantic Segmentation, each pixel is classified according to the class of the object it belongs to but the different objects of the same class are not distinguished. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented about Classification and Localization, Crowdsourcing in Computer Vision, Intersection Over Union metric, Object Detection, Fully Convolutional Networks or FCNs, VALID Padding, You Only Look Once or YOLO Architecture, Mean Average Precision or MAP, Convolutional Neural Networks, Semantic Segmentation and few more Topics related to the same from the Book Hands On Machine Learning with Scikit Learn, Keras and TensorFlow. I have just completed learning from this Book. I have presented the Implementation of Classification and Localization along with the Visualization here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time reading the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Books:
- Hands On Machine Learning with Scikit Learn, Keras and TensorFlow
Day45 of 300DaysOfData!
- Empirical Risk Minimization: Training a Model means learning good values for all the weights and the biases from Labeled examples. In Supervised Learning, a Machine Learning Algorithm builds a Model by examining many examples and attempting to find a Model that minimizes loss which is called Empirical Risk Minimization. On my Journey of Machine Learning and Deep Learning, Today I have started learning from the Machine Learning Crash Course of Google. Here, I have learned about Machine Learning Philosophy, Fundamentals of Machine Learning and Uses, Labels and Features, Labeled and Unlabeled Example, Models and Inference, Regression and Classification, Linear Regression, Weights and Bias, Training and Loss, Empirical Risk Minimization, Mean Squared Error or MSE, Reducing Loss, Gradient Descent and few more Topics related to the same. I have presented the simple Implementation of Basic Recurrent Neural Network here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Course mentioned above and below. Excited about the days ahead !!
- Course:
- Machine Learning Crash Course
Day46 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have learned from Machine Learning Crash Course of Google. Here, I have learned and Implemented about Learning Rate or Step size, Hyperparameters in Machine Learning Algorithms, Regression, Gradient Descent, Optimizing Learning Rate, Stochastic Gradient Descent or SGD, Batch and Batch Size, Minibatch Stochastic Gradient Descent, Convergence, Hierarchy of TensorFlow Toolkits and few more Topics related to the same. I have also spent some time reading the Book Speech and Language Processing. Here, I have read about Regular Expressions and Patterns, Precision and Recall, Kleene Star, Aliases for Common Characters, RE Operators for Counting and few more Topics related to the same. I have presented the simple Implementation of Recurrent Neural Network and Deep RNN using Keras here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Course and Book mentioned above and below. Excited about the days ahead !!
- Course:
- Machine Learning Crash Course
- Book:
- Speech and Language Processing
Day47 of 300DaysOfData!
- Feature Vector and Feature Engineering: Feature Engineering means transforming Raw Data into a Feature Vector, which is the set of Floating Point values representing an example of the Dataset. On my Journey of Machine Learning and Deep Learning, Today I have learned from Machine Learning Crash Course of Google. Here, I have learned and Implemented about Generalization of Model, Overfitting, Gradient Descent and Loss, Statistical and Computational Learning Theories, Stationarity of Data, Splitting of Data and Validation Set, Representation and Feature Engineering, Feature Vector, Categorical Features and Vocabulary, One Hot Encoding and Sparse Representation, Qualities of Good Features and few more Topics related to the same. I have presented the simple Implementation of RNN along with GRU here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Course and Book mentioned above and below. Excited about the days ahead !!
- Course:
- Machine Learning Crash Course
Day48 of 300DaysOfData!
- Scaling Features: Scaling means converting floating point Feature Values from their Natural range into Standard range such as 0 to 1. If the Feature set contains multiple Features, then Feature Scaling helps Gradient Descent to converge more quickly. On my Journey of Machine Learning and Deep Learning, Today I have learned from Machine Learning Crash Course of Google. Here, I have learned and Implemented about Scaling Feature Values, Handling Extreme Outliers, Binning, Scrubbing the Data, Standard Deviation, Feature Cross and Synthetic Feature, Encoding Nonlinearity, Stochastic Gradient Descent, Cross Product, Crossing One Hot Vectors, Regularization For Simplicity, Generalization Curve, L2 Regularization, Early Stopping, Lambda and Learning Rate and few more Topics related to the same. I have presented the simple Implementation of Linear Regression Model using Sequential API here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Course mentioned above and below. Excited about the days ahead !!
- Course:
- Machine Learning Crash Course
Day49 of 300DaysOfData!
- Prediction Bias: Prediction Bias is a quantity that measures how far apart the average of predictions is from the average of labels in the Dataset. Prediction Bias is completely a different quantity than Bias. On my Journey of Machine Learning and Deep Learning, Today I have learned from Machine Learning Crash Course of Google. Here, I have learned and Implemented about Logistic Regression and Calculating Probability, Sigmoid Function, Binary Classification, Log Loss and Regularization, Early Stopping, L1 and L2 Regularization, Classification and Thresholding, Confusion Matrix, Class Imbalance and Accuracy, Precision and Recall, ROC Curve, Area Under Curve or AUC, Prediction Bias, Calibration Layer, Bucketing, Sparsity, Feature Cross and One Hot Encoding and few more Topics related to the same. I have presented the simple Implementation of Normalization and Binary Classification using Keras here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Course mentioned above and below. Excited about the days ahead !!
- Course:
Day50 of 300DaysOfData!
- Categorical Data and Sparse Tensors: Categorical Data refers to Input Features that represent one or more Discrete items from a finite set of choices. Sparse Tensors are the tensors with very few non zero elements. On my Journey of Machine Learning and Deep Learning, Today I have learned from Machine Learning Crash Course of Google. Here, I have learned and Implemented about Neural Networks, Hidden Layers and Activation Functions, Nonlinear Classification and Feature Crosses, Sigmoid Function, Rectified Linear Unit or ReLU, Backpropagation, Vanishing and Exploding Gradients, Dropout Regularization, Multi Class Neural Networks, Softmax, Logistic Regression, Embeddings, Collaborative Filtering, Sparse Features, Principal Component Analysis, Word2Vec and few more Topics related to the same. I have presented the simple Implementation of Deep Neural Networks in Multi Class Classification here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Course mentioned above and below. Excited about the days ahead !!
- Course:
Day51 of 300DaysOfData!
- Deep Learning: Deep Learning is the general class of Algorithms which falls under Artificial Intelligence and deals with training Mathematical entities named Deep Neural Networks by presenting instructive examples. It uses large amounts of Data to approximate Complex Functions. On my Journey of Machine Learning and Deep Learning, Today I have started reading and Implementing from the Book Deep Learning with PyTorch. Here, I have learned about Core PyTorch, Deep Learning Introduction and Revolution, Tensors and Arrays, Deep Learning Competitive Landscape, Utility Libraries, Pretrained Neural Network that recognizes the subject of an Image, ImageNet, Image Recognition, AlexNet and ResNet, Torch Vision Module and few more Topics related to the same from here. I have presented the Implementation of Obtaining Pretrained Neural Networks for Image Recognition using PyTorch here in the Snapshot. A minimal sketch of loading a pretrained network is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
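- Sketch: A minimal, illustrative sketch of obtaining a pretrained ResNet from Torch Vision and running it on a single image. It assumes torchvision and PIL are installed and that a local image such as dog.jpg exists; the file name is hypothetical.
```python
import torch
from torchvision import models, transforms
from PIL import Image

# Obtain a pretrained ResNet and put it in evaluation mode.
resnet = models.resnet101(pretrained=True)
resnet.eval()

# Preprocessing expected by ImageNet trained models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("dog.jpg")           # hypothetical input image
batch = preprocess(img).unsqueeze(0)  # add the batch dimension
with torch.no_grad():
    out = resnet(batch)
print(out.argmax(dim=1))              # index of the predicted ImageNet class
```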
Day52 of 300DaysOfData!
- The GAN Game: GAN stands for Generative Adversarial Network, where Generative means something is being created, Adversarial means the two Neural Networks are competing to outsmart each other, and Network means the models are Neural Networks. A Cycle GAN can turn Images of one Domain into Images of another Domain without the need for us to explicitly provide matching pairs in the Training set. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Pretrained Models, Generative Adversarial Network or GAN, ResNet Generator and Discriminator Models, Cycle GAN Architecture, Torch Vision Module, Deep Fakes, A Neural Network that turns Horses into Zebras and few more Topics related to the same from here. I have presented the Implementation of Cycle GAN that turns Horses into Zebras using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day53 of 300DaysOfData!
- Tensors and Multi Dimensional Arrays: Tensors are the Fundamental Data Structure in PyTorch. A Tensor is an array, that is, a Data Structure which stores a collection of numbers that are accessible individually using an index and that can be indexed with multiple indices. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about A Pretrained Neural Network that describes the scenes, NeuralTalk2 Model, Recurrent Neural Network, Torch Hub, Fundamental Building Block: Tensors, The world as Floating Point Numbers, Multidimensional Arrays and Tensors, Lists and Indexing Tensors, Named Tensors, Einsum, Broadcasting and few more Topics related to the same from here. I have presented the simple Implementation of Indexing Tensors and Named Tensors using PyTorch here in the Snapshot. A minimal sketch of the same is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
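- Sketch: A minimal, illustrative sketch of Indexing Tensors and the Named Tensors API (still an experimental feature of PyTorch); the tensor values here are random placeholders.
```python
import torch

# Indexing tensors: ranges along each dimension.
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])
print(points[1:])       # all rows after the first
print(points[1:, 0])    # first column of all rows after the first

# Named tensors: attach names to dimensions (experimental API).
img_t = torch.randn(3, 5, 5, names=('channels', 'rows', 'columns'))
weights = torch.tensor([0.2126, 0.7152, 0.0722], names=('channels',))

# Align the weights to the image so broadcasting happens by name.
weights_aligned = weights.align_as(img_t)
gray = (img_t * weights_aligned).sum('channels')
print(gray.names)       # ('rows', 'columns')
```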
Day54 of 300DaysOfData!
- Tensors and Multi Dimensional Arrays: Tensors are the Fundamental Data Structure in PyTorch. A Tensor is an array, that is, a Data Structure which stores a collection of numbers that are accessible individually using an index and that can be indexed with multiple indices. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Named Tensors, Changing the names of Named Tensors, Broadcasting Tensors, Unnamed Dimensions, Tensor Element Types, Specifying the Numeric Data Type, The Tensor API, Creation Operations, Indexing, Random Sampling, Serialization, Parallelism, Tensors Storage, Referencing Storage, Indexing into Storage and few more Topics related to the same from here. I have presented the simple Implementation of Named Tensors, Tensor Datatype Attributes and Tensor API using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day55 of 300DaysOfData!
- Encoding Color Channels: The most common way to encode Colors into numbers is RGB, where a color is defined by three numbers representing the Intensity of Red, Green and Blue. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Tensors Metadata such as Size, Offset and Stride, Transposing Tensors without Copying, Transposing in Higher Dimensions, Contiguous Tensors, Managing Tensors Device Attribute such as moving to GPU and CPU, Numpy Interoperability, Generalized Tensors, Serializing Tensors, Data Representation using Tensors, Working with Images, Adding Color Channels, Changing the Layout and few more Topics related to the same from here. I have presented the Implementation of Working with Images such as Changing the Layout and Permute method along with Contiguous Tensors using PyTorch here in the Snapshot. A minimal sketch of the same is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
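- Sketch: A minimal, illustrative sketch of changing the layout of an image tensor with permute and making it contiguous; a randomly generated 64 x 64 image stands in for a real one.
```python
import torch

# A single image as loaded from disk: H x W x C layout.
img_arr = torch.randint(0, 256, (64, 64, 3), dtype=torch.uint8)

# PyTorch modules expect C x H x W, so permute the dimensions.
img_chw = img_arr.permute(2, 0, 1)
print(img_chw.shape)               # torch.Size([3, 64, 64])

# permute returns a view sharing the same storage, so it may not be contiguous.
print(img_chw.is_contiguous())     # False
img_contig = img_chw.contiguous()  # copies into a new, contiguous storage
print(img_contig.is_contiguous())  # True
```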
Day56 of 300DaysOfData!
- Continuous, Ordinal and Categorical Values: Continuous Values are strictly ordered values that can be measured with units. Ordinal Values are ordered values with no fixed numerical relationship between them. Categorical Values are enumerations of possibilities with no ordering. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Normalizing the Image Data, Working with 3D Images or Volumetric Image Data, Representing the Tabular Data, Loading the Data Tensors using Numpy, Continuous Values, Ordinal Values, Categorical Values, Ratio Scale and Interval Scale, Nominal Scale, One Hot Encoding and Embeddings, Singleton Dimensions and few more Topics related to the same from here. I have presented the Implementation of Normalizing the Image Data, Volumetric Data, Tabular Data and One Hot Encoding using PyTorch here in the Snapshots. A minimal sketch of One Hot Encoding and Normalization is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
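- Sketch: A minimal, illustrative sketch of One Hot Encoding categorical targets with scatter_ and normalizing continuous features; the target scores and feature matrix are hypothetical.
```python
import torch

# Hypothetical categorical targets: quality scores from 0 to 9.
target = torch.tensor([6, 3, 9, 5])

# One hot encoding with scatter_: one row per sample, one column per class.
onehot = torch.zeros(target.shape[0], 10)
onehot.scatter_(1, target.unsqueeze(1), 1.0)
print(onehot)

# Normalizing a continuous feature matrix column wise (zero mean, unit variance).
data = torch.randn(4, 3) * 5 + 2
normalized = (data - data.mean(dim=0)) / data.std(dim=0)
print(normalized.mean(dim=0), normalized.std(dim=0))
```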
Day57 of 300DaysOfData!
- Continuous, Ordinal and Categorical Values: Continuous Values are strictly ordered values that can be measured with units. Ordinal Values are ordered values with no fixed numerical relationship between them. Categorical Values are enumerations of possibilities with no ordering. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Continuous and Categorical Data, PyTorch Tensor API, Finding Thresholds in Tabular Data, Advanced Indexing, Working with Time Series Data, Adding Time Dimension in Data, Shaping the Data by Time Period, Tensors and Arrays and few more Topics related to the same from here. I have presented the Implementation of Working with Categorical Data, Time Series Data and Finding Thresholds using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day58 of 300DaysOfData!
- Encoding and ASCII: Every written character is represented by a code, a sequence of bits of appropriate length, so that each character can be uniquely identified; this mapping is called an Encoding. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Working with Time Series Data, Ordinal Variables, One Hot Encoding and Concatenation, Unsqueeze and Singleton Dimension, Mean, Standard Deviation and Rescaling Variables, Text Representation, Natural Language Processing and Recurrent Neural Networks, Converting the Text into Numbers, Project Gutenberg Corpus, One Hot Encoding of Characters, Encoding and ASCII, Embeddings and Processing the Text and few more Topics related to the same from here. I have presented the Implementation of Time Series Data and Text Representation using PyTorch here in the Snapshot. A minimal sketch of One Hot Encoding characters is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
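- Sketch: A minimal, illustrative sketch of One Hot Encoding the characters of a line of text using their ASCII codes; the sentence used here is an arbitrary placeholder.
```python
import torch

line = "Deep learning with a tensor per character"   # hypothetical line of text
clean = line.lower().strip()

# One hot encode each character using its ASCII code (codes above 127 map to 0).
letter_t = torch.zeros(len(clean), 128)
for i, letter in enumerate(clean):
    index = ord(letter) if ord(letter) < 128 else 0
    letter_t[i][index] = 1

print(letter_t.shape)   # one 128 wide one hot row per character
```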
Day59 of 300DaysOfData!
- Loss Function: Loss Function is a function that computes a single numerical value that the learning process will attempt to minimize. The calculation of loss typically involves taking the difference between the desired outputs for some training samples and the outputs the Model actually produces when fed those samples. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about One Hot Encoding and Vectors, Data Representation using Tensors, Text Embeddings, Natural Language Processing, The Mechanics of Learning, Johannes Kepler's Lesson in Modeling, Eccentricity, Parameter Estimation, Weight, Bias and Gradients, Simple Linear Model, Loss Function or Cost Function, Mean Square Loss, Broadcasting and few more Topics related to the same from here. I have presented the simple Implementation of Representing Text, Mechanics of Learning and Simple Linear Model using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day60 of 300DaysOfData!
- Gradient Descent: Gradient Descent is a first order iterative Optimization Algorithm for finding a local minimum of a Differentiable Function. Simply put, the Gradient is the vector of derivatives of the Function with respect to each Parameter. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Cost Function or Loss Function, Optimizing Parameters using Gradient Descent, Decreasing Loss Function, Parameter Estimation, Mechanics of Learning, Scaling Factor and Learning Rate, Evaluations of Model, Computing the Derivative of Loss Function and Linear Function, Defining Gradient Function, Partial Derivative and Iterating the Model, The Training Loop and few more Topics related to the same from here. I have presented the Implementation of Loss Function, Computing Derivatives, Gradient Function and Training Loop here in the Snapshot. A minimal sketch of the same is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
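- Sketch: A minimal, illustrative sketch of a Simple Linear Model, a Mean Square Loss Function, a hand written Gradient Function and the Training Loop. The input and target values are hypothetical placeholders.
```python
import torch

t_u = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3, 48.9])   # hypothetical inputs
t_c = torch.tensor([0.5, 14.0, 15.0, 28.1, 11.0, 6.0])     # hypothetical targets

def model(t_u, w, b):
    return w * t_u + b                     # simple linear model

def loss_fn(t_p, t_c):
    return ((t_p - t_c) ** 2).mean()       # mean square loss

def grad_fn(t_u, t_c, t_p):
    # Partial derivatives of the loss with respect to w and b (chain rule).
    dloss_dtp = 2 * (t_p - t_c) / t_p.size(0)
    return torch.stack([(dloss_dtp * t_u).sum(), dloss_dtp.sum()])

params = torch.tensor([1.0, 0.0])          # initial w and b
learning_rate = 1e-4
for epoch in range(1, 501):
    w, b = params
    t_p = model(t_u, w, b)
    loss = loss_fn(t_p, t_c)
    params = params - learning_rate * grad_fn(t_u, t_c, t_p)
    if epoch % 100 == 0:
        print(f"Epoch {epoch}, Loss {loss.item():.4f}")
```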
Day61 of 300DaysOfData!
- Hyperparameter Tuning: Training updates a Model's parameters, while hyperparameters control how that Training goes. Hyperparameters are generally set manually, and Hyperparameter Tuning refers to choosing them. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Gradient Descent, Optimizing the Training Loop, Overtraining, Convergence and Divergence, Learning Rate, Hyperparameter Tuning, Normalizing the Inputs, Visualization or Plotting the Data, Argument Unpacking, PyTorch's Autograd and Backpropagation, Chain Rule, Linear Model and few more Topics related to the same from here. I have presented the simple Implementation of Training Loop and Gradient Descent along with Visualization using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day62 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Gradient Descent, PyTorch's Autograd and Backpropagation, Chain Rule and Tensors, Grad Attribute and Parameters, Simple Linear Function and Simple Loss Function, Accumulating Grad Functions, Zeroing the Gradients, Autograd Enabled Training Loop, Optimizers and Vanilla Gradient Descent and Optim Submodule of Torch and few more Topics related to the same from here. I have presented the simple Implementation of a Linear Model and Loss Function along with an Autograd Enabled Training Loop using PyTorch here in the Snapshot. A minimal sketch of the same is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
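- Sketch: A minimal, illustrative sketch of an Autograd Enabled Training Loop with the gradients zeroed before every backward pass; the data is the same hypothetical input and target pair used in the previous sketch, scaled for stability.
```python
import torch

t_u = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3, 48.9]) * 0.1   # scaled inputs
t_c = torch.tensor([0.5, 14.0, 15.0, 28.1, 11.0, 6.0])

def model(t_u, w, b):
    return w * t_u + b

def loss_fn(t_p, t_c):
    return ((t_p - t_c) ** 2).mean()

params = torch.tensor([1.0, 0.0], requires_grad=True)
learning_rate = 1e-1

for epoch in range(1, 1001):
    if params.grad is not None:
        params.grad.zero_()              # zero the accumulated gradients
    t_p = model(t_u, *params)
    loss = loss_fn(t_p, t_c)
    loss.backward()                      # autograd fills params.grad
    with torch.no_grad():
        params -= learning_rate * params.grad
    if epoch % 200 == 0:
        print(f"Epoch {epoch}, Loss {loss.item():.4f}")
```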
Day63 of 300DaysOfData!
- Stochastic Gradient Descent: The term Stochastic in Stochastic Gradient Descent or SGD comes from the fact that the Gradient is typically obtained by averaging over a random subset of all Input samples. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Optimizers, Vanilla Gradient Descent Optimization, Stochastic Gradient Descent, Momentum Argument, Minibatch, Learning Rate and Params, Optim Module, Neural Network Models, Adam Optimizers, Backpropagation, Optimizing Weights, Training, Validation and Overfitting, Evaluating the Training Loss, Generalizing to the Validation Set, Overfitting and Penalization Terms and few more Topics related to the same from here. I have presented the Implementation of SGD and Adam Optimizer along with the Training Loop here in the Snapshots. It is the continuation of the previous Snapshot, and a minimal sketch of the same is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
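- Sketch: A minimal, illustrative sketch of driving the same Training Loop with an Optimizer from the optim module; swapping optim.SGD for optim.Adam changes only the constructor line. The data is the same hypothetical pair used above.
```python
import torch
import torch.optim as optim

t_u = torch.tensor([35.7, 55.9, 58.2, 81.9, 56.3, 48.9]) * 0.1
t_c = torch.tensor([0.5, 14.0, 15.0, 28.1, 11.0, 6.0])

params = torch.tensor([1.0, 0.0], requires_grad=True)
optimizer = optim.SGD([params], lr=1e-1)   # or: optim.Adam([params], lr=1e-1)

for epoch in range(1, 1001):
    t_p = params[0] * t_u + params[1]
    loss = ((t_p - t_c) ** 2).mean()
    optimizer.zero_grad()     # zero the gradients before the backward pass
    loss.backward()
    optimizer.step()          # update the parameters in place
    if epoch % 200 == 0:
        print(f"Epoch {epoch}, Loss {loss.item():.4f}")
```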
Day64 of 300DaysOfData!
- Activation Functions: Activation Functions are nonlinear, which allows the overall network to approximate more complex functions. They are differentiable so that Gradients can be computed through them. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about using a Neural Network to fit the Data, Artificial Neurons, The Learning Process and Loss Function, Non Linear Activation Functions, Weights and Biases, Composing a Multilayer Network, Understanding the Error Function, Capping and Compressing the Output Range, Tanh and ReLU Activations, Choosing the Activation Functions, The PyTorch NN Module and few more Topics related to the same from here. I have presented the simple Implementation of Linear Model and Training Loop using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day65 of 300DaysOfData!
- Activation Functions: Activation Functions are nonlinear, which allows the overall network to approximate more complex functions. They are differentiable so that Gradients can be computed through them. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about The PyTorch NN Module, Simple Linear Model, Batching Input Data, Optimizing Batches, Mean Square Error Loss Function, Training Loop, Neural Networks, Sequential Model, Tanh Activation Function, Inspecting Parameters, Weights and Biases, OrderedDict Module, Comparing to the Linear Model, Overfitting and few more Topics related to the same from here. I have presented the simple Implementation of Sequential Model and OrderedDict Submodule using PyTorch here in the Snapshot. A minimal sketch of the same is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
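- Sketch: A minimal, illustrative sketch of building a Sequential Model with named submodules through an OrderedDict and inspecting its Parameters; the layer sizes are arbitrary.
```python
import torch.nn as nn
from collections import OrderedDict

# A small fully connected model with named submodules.
seq_model = nn.Sequential(OrderedDict([
    ('hidden_linear', nn.Linear(1, 8)),
    ('hidden_activation', nn.Tanh()),
    ('output_linear', nn.Linear(8, 1)),
]))

print(seq_model)
# Inspecting parameters by name: weights and biases of each named layer.
for name, param in seq_model.named_parameters():
    print(name, param.shape)
```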
Day66 of 300DaysOfData!
- Computer Vision: Computer Vision is an Interdisciplinary scientific field that deals with how computers can gain high level understanding from digital images or videos. It seeks to understand and automate tasks that the human visual system can do. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have started the new Topic Learning From Images. I have learned about Simple Image Recognition, CIFAR10 which is a Dataset of Tiny Images, Torch Vision Module, The Dataset Class, Iterable Dataset, Python Imaging Library or PIL Package, Dataset Transforms, Arrays and Tensors, Permute Function and few more Topics related to the same. I have presented the simple Implementation of Torch Vision Module along with CIFAR10 Dataset using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day67 of 300DaysOfData!
- Computer Vision: Computer Vision is an Interdisciplinary scientific field that deals with how computers can gain high level understanding from digital images or videos. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Permutation Function, Normalizing the Data, Stacking, Mean and Standard Deviation, Torch Vision Module and Submodules, CIFAR10 Dataset, PIL Package, Image Recognition, Building the Dataset, Building a fully connected Neural Networks Model, Sequential Model, Simple Linear Model, Classification and Regression Problems, One Hot Encoding and Softmax and few more Topics related to the same from here. I have presented the Implementation of Normalizing the Data, Building the Dataset and Neural Network Model using Torch Vision Modules here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day68 of 300DaysOfData!
- Softmax Function: Softmax Function is a function that takes a vector of values and produces another vector of the same dimension whose values can be interpreted as Probabilities. Softmax is a monotone function, so lower values in the input correspond to lower values in the output. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Representing Output as Probabilities and Softmax Function, PyTorch's NN Module, Backpropagation, A Loss for Classification, MSE Loss, Negative Log Likelihood or NLL Loss, Log Softmax Function, Training the Classifier, Stochastic Gradient Descent, Hyperparameters, Minibatches and few more Topics related to the same from here. I have presented the Implementation of Softmax Function, Building Neural Network Model and Training Loop using PyTorch here in the Snapshot. A minimal sketch of Softmax and the NLL Loss is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
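- Sketch: A minimal, illustrative sketch of the Softmax Function and of combining Log Softmax with the NLL Loss, which is equivalent to Cross Entropy Loss; the scores and the target class are hypothetical.
```python
import torch
import torch.nn as nn

x = torch.tensor([[0.5, 2.0, 1.0]])        # hypothetical class scores for one sample

softmax = nn.Softmax(dim=1)
print(softmax(x))                          # probabilities summing to 1 along dim 1

# Log softmax plus NLL loss, numerically preferable to log(softmax(x)).
log_probs = nn.LogSoftmax(dim=1)(x)
target = torch.tensor([1])                 # hypothetical correct class index
loss = nn.NLLLoss()(log_probs, target)
print(loss)                                # equals nn.CrossEntropyLoss()(x, target)
```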
Day69 of 300DaysOfData!
- Cross Entropy Loss: Cross Entropy Loss is the negative log likelihood of the predicted distribution under the target distribution. The combination of the Log Softmax Function and the NLL Loss Function is equivalent to using Cross Entropy Loss. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Gradient Descent, Minibatches and Data Loader, Stochastic Gradient Descent, Neural Network Model, Log Softmax Function, NLL Loss Function, Cross Entropy Loss Function, Trainable Parameters, Weights and Biases, Translation Invariant, Data Augmentation, Torch Vision and NN Modules and few more Topics related to the same from here. I have presented the Implementation of Building Deep Neural Network, Training Loop and Model Evaluation using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day70 of 300DaysOfData!
- Translational Invariance: Translational Invariance makes the Convolutional Neural Network invariant to translation, which means that if we translate the Inputs then the CNN will still be able to detect the class to which the Input belongs. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have started reading the Topic Using Convolutions to Generalize. I have learned about Convolutional Neural Network, Translation Invariant, Weights and Biases, Discrete Cross Correlations, Locality or Local Operations on Neighborhood Data, Model Parameters, Multi Channel Image, Padding the Boundary, Kernel Size, Detecting Features with Convolutions and few more Topics related to the same. I have presented the simple Implementation of CNN and Building the Data using PyTorch here in the Snapshot. A minimal sketch of a Convolutional Layer is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
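- Sketch: A minimal, illustrative sketch of a single Convolutional Layer applied to a CIFAR10 sized image; the input tensor is a random placeholder.
```python
import torch
import torch.nn as nn

# A convolutional layer: 3 input channels (RGB), 16 output channels, 3 x 3 kernel.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # padding keeps H and W unchanged
print(conv.weight.shape)   # torch.Size([16, 3, 3, 3])
print(conv.bias.shape)     # torch.Size([16])

img = torch.randn(1, 3, 32, 32)   # a hypothetical batch of one 32 x 32 RGB image
out = conv(img)
print(out.shape)           # torch.Size([1, 16, 32, 32])
```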
Day71 of 300DaysOfData!
- Down Sampling: Down Sampling is the scaling of an Image by half, which is the equivalent of taking four neighboring pixels as input and producing one pixel as Output. The Down Sampling principle can be implemented in different ways. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Kernel Size, Padding the Image, Edge Detection Kernel, Locality and Translation Invariant, Learning Rate and Weight Update, Max Pooling Layer and Down Sampling, Stride, Convolutional Neural Networks, Receptive Field, Tanh Activation Function, Simple Linear Model, Sequential Model, Parameters of the Model and few more Topics related to the same from here. I have presented the Implementation of Convolutional Neural Network, Plotting the Image and Inspecting the Parameters of the Model using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day72 of 300DaysOfData!
- Down Sampling: Down Sampling is the scaling of an Image by half, which is the equivalent of taking four neighboring pixels as input and producing one pixel as Output. The Down Sampling principle can be implemented in different ways. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Sub Classing the NN Module, The Sequential or The Modular API, Forward Function, Linear Model, Max Pooling Layer, Padding the Data, Convolutional Neural Network Architecture, ResNet, Kernel Size and Attributes, Tanh Activation Function, Model Parameters, The Functional API, Stateless Modules and few more Topics related to the same from here. I have presented the Implementation of Sub Classing the NN Module using The Sequential API and The Functional API using PyTorch here in the Snapshot. A minimal sketch of the same is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
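- Sketch: A minimal, illustrative sketch of Sub Classing the NN Module, keeping stateful layers as attributes and using the Functional API for stateless operations such as pooling; the layer sizes are arbitrary.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
        self.fc1 = nn.Linear(8 * 8 * 8, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        # Stateless operations (activation, pooling) use the functional API.
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = out.view(-1, 8 * 8 * 8)
        out = torch.tanh(self.fc1(out))
        return self.fc2(out)

model = Net()
print(model(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 2])
```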
Day73 of 300DaysOfData!
- Down Sampling: Down Sampling is the scaling of an Image by half, which is the equivalent of taking four neighboring pixels as input and producing one pixel as Output. The Down Sampling principle can be implemented in different ways. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about The Torch NN Module, The Functional API, Convolutional Neural Network and The Training, The Data Loader Module, Forward and Backward Pass of the Network, Stochastic Gradient Descent Optimizer, Zeroing the Gradients, Cross Entropy Loss Function, Model Evaluation and Gradient Descent and few more Topics related to the same from here. I have presented the Implementation of Training Loop and Model Evaluation using PyTorch here in the Snapshot. Actually, it is the continuation of yesterday's Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day74 of 300DaysOfData!
- Down Sampling: Down Sampling is the scaling of an Image by half, which is the equivalent of taking four neighboring pixels as input and producing one pixel as Output. The Down Sampling principle can be implemented in different ways such as Max Pooling. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Saving and Loading the Model, Weights and Parameters of the Model, Training the Model on GPU, The Torch NN Module and Sub Modules, Map Location Keyword, Designing Model, Long Short Term Memory or LSTM, Adding Memory Capacity or Width to the Network, Feed Forward Network, Overfitting and few more Topics related to the same from here. I have presented the Implementation of Adding Memory Capacity or Width to the Network using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day75 of 300DaysOfData!
- L2 Regularization: L2 Regularization adds a penalty proportional to the sum of the squares of all the weights in the Model, whereas L1 Regularization penalizes the sum of the absolute values of all the weights in the Model. L2 Regularization is also referred to as Weight Decay. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Convolutional Neural Network, L2 Regularization and L1 Regularization, Optimization and Generalization, Weight Decay, The PyTorch NN Module and Sub Modules, Stochastic Gradient Descent Optimizer, Overfitting and Dropout, Deep Neural Networks, Randomization and few more Topics related to the same from here. I have presented the Implementation of L2 Regularization and Dropout Layer using PyTorch here in the Snapshot. A minimal sketch of the same is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
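- Sketch: A minimal, illustrative sketch of adding a Dropout Layer to a model and applying Weight Decay (L2 Regularization) through the Optimizer; the model, minibatch and labels are hypothetical placeholders.
```python
import torch
import torch.nn as nn
import torch.optim as optim

# Dropout is a layer; L2 regularization is the weight_decay argument of the optimizer.
model = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Dropout(p=0.4),      # randomly zeroes 40% of activations during training
    nn.Linear(32, 2),
)
optimizer = optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-3)

x = torch.randn(8, 64)                       # hypothetical minibatch of features
y = torch.randint(0, 2, (8,))                # hypothetical class labels
loss = nn.CrossEntropyLoss()(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

model.eval()                # switches dropout off at evaluation time
```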
Day76 of 300DaysOfData!
- L2 Regularization: L2 Regularization adds a penalty proportional to the sum of the squares of all the weights in the Model, whereas L1 Regularization penalizes the sum of the absolute values of all the weights in the Model. L2 Regularization is also referred to as Weight Decay. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Dropout Module, Batch Normalization and Non Linear Activation Functions, Regularization and Principled Augmentation, Convolutional Neural Networks, Minibatch and Standard Deviation, Deep Neural Networks and Depth Module, Skip Connections Mechanism, ReLU Activation Function, Implementation of Functional API and few more Topics related to the same from here. I have presented the Implementation of Batch Normalization and Deep Neural Networks and Depth Module using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day77 of 300DaysOfData!
- Identity Mapping: When the output of earlier activations is used as an input to later layers in addition to the standard feed forward path, it is called an Identity Mapping or Skip Connection. Identity Mapping alleviates the issue of vanishing gradients. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Convolutional Neural Networks, Skip Connections, ResNet Architecture, Simple Linear Layer, Max Pooling Layer, Identity Mapping, Highway Networks, UNet Model, Dense Networks and Very Deep Neural Networks, Sequential and Functional API, Forward and Backpropagation, Torch Vision Module and Sub Modules, Batch Normalization Layer, Custom Initializations and few more Topics related to the same from here. I have presented the Implementation of ResNet Architecture and Very Deep Neural Networks using PyTorch here in the Snapshots. A minimal sketch of a Residual Block is also included below the book reference. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
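- Sketch: A minimal, illustrative sketch of a Residual Block with a Skip Connection and Batch Normalization; the channel count and input size are arbitrary.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """A minimal residual block: convolution plus batch norm, then add the input back."""
    def __init__(self, n_chans):
        super().__init__()
        self.conv = nn.Conv2d(n_chans, n_chans, kernel_size=3, padding=1, bias=False)
        self.batch_norm = nn.BatchNorm2d(n_chans)
        nn.init.kaiming_normal_(self.conv.weight, nonlinearity='relu')

    def forward(self, x):
        out = F.relu(self.batch_norm(self.conv(x)))
        return out + x      # the skip connection (identity mapping)

block = ResBlock(n_chans=32)
print(block(torch.randn(1, 32, 16, 16)).shape)   # the shape is preserved
```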
Day78 of 300DaysOfData!
- Voxel: A Voxel is the 3D equivalent to the familiar 2D pixel. It encloses a volume of space rather than an area. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about CT Scan Dataset, Voxel, Segmentation, Grouping and Classification, Nodules, 3D Convolutions, Neural Networks, Downloading the LUNA Dataset, Data Loading, Parsing the Data, Training and Validation Set and few more Topics related to the same from here. I have started working with LUNA Dataset which stands for Lung Nodule Analysis 2016. The LUNA Grand Challenge is the combination of an open dataset with high quality labels of patient CT scans: many with lung nodules and a public ranking of classifiers against the data. I have presented the Implementation of Preparing the Data using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day79 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Data Loading and Parsing the Data, CT Scan Dataset, Data Pipeline and few more Topics related to the same from here. Besides, I have also learned about Auto Encoders, Recurrent Neural Networks and Long Short Term Memory or LSTM, Data Processing, One Hot Encoding, Random Splitting of Training and Validation Dataset and few more. I have continued working with LUNA Dataset which stands for Lung Nodule Analysis 2016. The LUNA Grand Challenge is the combination of an open dataset with high quality labels of patient CT scans: many with lung nodules and a public ranking of classifiers against the data. I have presented the simple Implementation of Data Preparation using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day80 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about Loading the Individual CT Scans Dataset, 3D Nodules Density Data, SimpleITK Library, Hounsfield Units, Voxels, Batch Normalization, Loading a Nodule using the Patient Coordinate System, Converting between Millimeters and Voxel Addresses, Array Coordinates, Matrix Multiplication and few more Topics related to the same from here. Besides I have also learned about Auto Encoders using LSTM, Stateful Decoder Model and Data Visualization. I have continued working with LUNA Dataset which stands for Lung Nodule Analysis 2016. I have presented the Implementation of Conversion between Patient Coordinates and Arrays Coordinates on CT Scans Dataset using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day81 of 300DaysOfData!
- Voxel and Nodules: A Voxel is the 3D equivalent to the familiar 2D pixel. It encloses a volume of space rather than an area. A mass of tissue made of proliferating cells in the lung is called a Tumor. A small Tumor just a few millimeters wide is called a Nodule. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Deep Learning with PyTorch. Here, I have learned about PyTorch Dataset Instance Implementation, LUNA Dataset Class, Cross Entropy Loss, Positive and Negative Nodules, Arrays and Tensors, Caching Candidate Arrays, Training and Validation Datasets, Data Visualization and few more Topics related to the same from here. Besides I have also learned about Normalization of Data, Variance Threshold, RDKIT Library and few more Topics related to the same. I have presented the Implementation of Preparing the LUNA Dataset using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day82 of 300DaysOfData!
- Tagging Algorithms: The problem of learning to predict classes that are not mutually exclusive is called Multilabel Classification. Auto Tagging Problems are best described as Multilabel Classification Problems. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about A Motivating Example on Machine Learning, Learning Algorithms, Training Process, Data, Features, Models, Objective Functions, Optimization Algorithms, Supervised Learning, Regression, Binary, Multiclass and Hierarchical Classification, Cross Entropy and Mean Squared Error Loss Functions, Gradient Descent, Tagging Algorithms and few more Topics related to the same from here. I have presented the Implementation of Preparing the Data, Normalization, Removing Low Variance Features and Data Loaders using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day83 of 300DaysOfData!
- Reinforcement Learning: Reinforcement Learning gives a very general statement of a problem in which an agent interacts with an environment over a series of time steps, receives observations and must choose actions. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Search Algorithms, Recommender Systems, Sequence Learning, Tagging and Parsing, Machine Translation, Unsupervised Learning, Interacting with an Environment and Reinforcement Learning, Data Manipulation, Mathematical Operations, Broadcasting Mechanisms, Indexing and Slicing, Saving Memory in Tensors, Conversion to Other Datatypes and few more Topics related to the same from here. I have presented the Implementation of Mathematical Operations, Tensors Concatenation, Broadcasting Mechanisms and Datatypes Conversion using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day84 of 300DaysOfData!
- Tensors: Tensors refer to algebraic objects describing n dimensional arrays with an arbitrary number of axes. Vectors are first order Tensors and Matrices are second order Tensors. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Data Processing, Reading the Dataset, Handling the Missing Data, Categorical Data, Conversion to the Tensor Format, Linear Algebra such as Scalars, Vectors, Length, Dimensionality and Shape, Matrices, Symmetric Matrix, Tensors, Basic Properties of Tensor Arithmetic, Reduction, Non Reduction Sum, Dot Products, Matrix Vector Products and few more Topics related to the same from here. I have presented the Implementation of Data Processing, Handling the Missing Data, Scalars, Vectors, Matrices and Dot Products using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day85 of 300DaysOfData!
- Method of Exhaustion: The ancient process of finding the area of a curved shape such as a circle by inscribing polygons that better and better approximate the shape is called the Method of Exhaustion. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Matrix Multiplication, L1 and L2 Norms, the Frobenius Norm, Calculus, Method of Exhaustion, Derivatives and Differentiation, Partial Derivatives, Gradient Descents, Chain Rule, Automatic Differentiation, Backward for Non Scalar Variables, Detaching Computation, Backpropagation, Computing the Gradient with Control Flow and few more Topics related to the same from here. I have presented the Implementation of Matrix Multiplication, the L1, L2 and Frobenius Norms, Derivatives and Differentiation, Automatic Differentiation and Computing the Gradient using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day86 of 300DaysOfData!
- Method of Exhaustion: The ancient process of finding the area of a curved shape such as a circle by inscribing polygons that better and better approximate the shape is called the Method of Exhaustion. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Probabilities, Basic Probability Theory, Sampling, Multinomial Distribution, Axioms of Probability Theory, Random Variables, Dealing with Multiple Random Variables, Joint Probability, Conditional Probability, Bayes Theorem, Marginalization, Independence and Dependence, Expectation and Variance, Finding Classes and Functions in a Module and few more Topics related to the same from here. I have presented the Implementation of Multinomial Distribution, Visualization of Probabilities, Derivatives and Differentiation using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day87 of 300DaysOfData!
- Hyperparameters: The parameters that are tunable but not updated in the training loop are called Hyperparameters. Hyperparameter Tuning is the process by which hyperparameters are chosen, and it typically requires adjusting them based on the results of the Training Loop. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Linear Regression, Basic Elements of Linear Regression, Linear Model and Transformation, Loss Function, Analytic Solution, Minibatch Stochastic Gradient Descent, Making Predictions with the Learned Model, Vectorization for Speed, The Normal Distribution and Squared Loss, Linear Regression to Deep Neural Networks, Biological Interpretation, Hyperparameter Tuning and few more Topics related to the same from here. I have presented the Implementation of Vectorization for Speed and Normal Distributions using Python here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day88 of 300DaysOfData!
- Hyperparameters: The parameters that are tunable but not updated in the training loop are called Hyperparameters. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Linear Regression Implementation From Scratch, Data Pipeline, Deep Learning Frameworks, Generating the Artificial Dataset, Scatter Plot and Correlation, Reading the Dataset, Minibatches, Features and Labels, Parallel Computing, Initializing the Model Parameters, Minibatch Stochastic Gradient Descent, Defining the Simple Linear Regression Model, Broadcasting Mechanism, Vectors and Scalars and few more Topics related to the same from here. I have presented the Implementation of Generating the Synthetic Dataset, Generating the Scatter Plot, Reading the Dataset, Initializing the Model Parameters and Defining the Linear Regression Model using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day89 of 300DaysOfData!
- Linear Regression: Linear Regression is a linear approach to modelling the relationship between a scalar response, also known as the dependent variable, and one or more explanatory variables, also known as independent variables. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Linear Regression, Defining the Loss Function, Defining the Optimization Algorithm, Minibatch Stochastic Gradient Descent, Training the Model, Tensors and Differentiation, Concise Implementation of Linear Regression, Generating the Synthetic Dataset, Model Evaluation and few more Topics related to the same from here. I have presented the Implementation of Defining the Loss Function, Minibatch Stochastic Gradient Descent, Training and Evaluating the Model, Concise Implementation of Linear Regression and Reading the Dataset using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day90 of 300DaysOfData!
- Linear Regression: Linear Regression is a linear approach to modelling the relationship between a scalar response, also known as the dependent variable, and one or more explanatory variables, also known as independent variables. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Softmax Regression, Classification Problem, Network Architecture, Parameterization Cost of Fully Connected Layers, Softmax Operation, Vectorization for Minibatches, Loss Function, Log Likelihood, Softmax and Derivatives, Cross Entropy Loss, Information Theory Basics, Entropy and Surprisal, Model Prediction and Evaluation, The Image Classification Dataset and few more Topics related to the same from here. I have presented the Implementation of Image Classification Dataset, Visualization, Softmax Regression and Operation along with Model Parameters using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day91 of 300DaysOfData!
- Activation Functions: Activation Functions decide whether a neuron should be activated or not by calculating the weighted sum and further adding bias with it. They are differentiable operators. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Cross Entropy Loss Function, Classification Accuracy and Training, Softmax Regression, Model Parameters, Optimization Algorithms, Multi Layer Perceptrons, Hidden Layers, Linear Models Problems, From Linear to Nonlinear Models, Universal Approximators, Activation Functions like RELU Function, Sigmoid Function, Tanh Function, Derivatives and Gradients and few more Topics related to the same from here. I have presented the Implementation of Softmax Regression Model, Classification Accuracy, RELU Function, Sigmoid Function, Tanh Function along with Visualizations using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day92 of 300DaysOfData!
- Activation Functions: Activation Functions decide whether a neuron should be activated or not by calculating the weighted sum and further adding bias with it. They are differentiable operators. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Implementation of Multi Layer Perceptrons, Initializing Model Parameters, RELU Activation Functions, Cross Entropy Loss Function, Training the Model, Fully Connected Layers, Simple Linear Layer, Softmax Regression and Function, Stochastic Gradient Descent, Sequential API, High Level APIs, Learning Rate, Weights and Biases, Tensors, Hyperparameters and few more Topics related to the same from here. I have presented the Implementation of Multi Layer Perceptrons, RELU Activation Function, Training the Model and Model Evaluations using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day93 of 300DaysOfData!
- Multi Layer Perceptrons: The simplest deep neural networks are called Multi Layer Perceptrons. They consist of multiple layers of Neurons. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Model Selection, Underfitting, Overfitting, Training Error and Generalization Error, Statistical Learning Theory, Model Complexity, Early Stopping, Training, Testing and Validation Dataset, K-Fold Cross Validation, Dataset Size, Polynomial Regression, Generating the Dataset, Training and Testing the Model, Third Order Polynomial Function Fitting, Linear Function Fitting, High Order Polynomial Function Fitting, Weight Decay, Normalization and few more Topics related to the same from here. I have presented the Implementation of Generating the Dataset, Defining the Training Function and Polynomial Function Fitting using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day94 of 300DaysOfData!
- Multi Layer Perceptrons: The simplest deep neural networks are called Multi Layer Perceptrons. They consist of multiple layers of neurons, each fully connected to those in the layer below, from which they receive input, and to those in the layer above, which they in turn influence. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about High Dimensional Linear Regression, Model Parameters, Defining L2 Normalization Penalty, Defining the Training Loop, Regularization and Weight Decay, Dropout and Overfitting, Bias and Variance Tradeoff, Gaussian Distributions, Stochastic Gradient Descent, Training Error and Test Error and few more Topics related to the same from here. I have presented the Implementation of High Dimensional Linear Regression, Model Parameters, L2 Normalization Penalty, Regularization and Weight Decay using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day95 of 300DaysOfData!
- Dropout and Co-adaptation: Dropout is the process of injecting noise while computing each internal layer during forward propagation. Co-adaptation is the condition in a neural network in which each layer relies on a specific pattern of activations in the previous layer. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Dropout, Overfitting, Generalization Error, Bias and Variance Tradeoff, Robustness through Perturbations, L2 Regularization and Weight Decay, Co-adaptation, Dropout Probability, Dropout Layer, Fashion MNIST Dataset, Activation Functions, Stochastic Gradient Descent, The Sequential and Functional API and few more Topics related to the same from here. I have presented the Implementation of Dropout Layer, Training and Testing the Model using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day96 of 300DaysOfData!
- Dropout and Co-adaptation: Dropout is the process of injecting noise while computing each internal layer during forward propagation. Co-adaptation is the condition in a neural network in which each layer relies on a specific pattern of activations in the previous layer. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Forward Propagation, Backward Propagation and Computational Graphs, Numerical Stability, Vanishing and Exploding Gradients, Breaking the Symmetry, Parameter Initialization, Environment and Distribution Shift, Covariate Shift, Label Shift, Concept Shift, Non stationary Distributions, Empirical Risk and True Risk, Batch Learning, Online Learning, Reinforcement Learning and few more Topics related to the same from here. I have presented the Implementation of Data Preprocessing and Data Preparation using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
- Predicting Housing Prices
Day97 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Training and Building Deep Networks, Downloading and Caching Datasets, Data Preprocessing, Regression Problems, Accessing and Reading the Dataset, Numerical and Discrete Categorical Features, Optimization and Variance, Arrays and Tensors, Simple Linear Model, The Sequential API, Root Mean Squared Error, Adam Optimizer, Hyperparameter Tuning, K-Fold Cross Validation, Training and Validation Error, Model Selection, Overfitting and Regularization and few more Topics related to the same from here. I have presented the Implementation of Simple Linear Model, Root Mean Squared Error, Training Function and K-Fold Cross Validation using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
- Predicting Housing Prices
Day98 of 300DaysOfData!
- Constant Parameters: Constant Parameters are the terms that are neither the result of the previous layers nor updatable parameters in the Neural Networks. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about K-Fold Cross Validation, Training and Predictions, Hyperparameters Optimization, Deep Learning Computation, Layers and Blocks, Softmax Regression, Multi Layer Perceptrons, ResNet Architecture, Forward and Backward Propagation Function, RELU Activation Function, The Sequential Block Implementation, MLP Implementation, Constant Parameters and few more Topics related to the same from here. I have presented the Implementation of MLP, The Sequential API Class and Forward Propagation Function using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
- Predicting Housing Prices
Day99 of 300DaysOfData!
- Constant Parameters: Constant Parameters are the terms that are neither the result of the previous layers nor updatable parameters in the Neural Networks. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Parameter Management, Parameter Access, Targeted Parameters, Collecting Parameters from Nested Block, Parameter Initialization, Custom Initialization, Tied Parameters, Deferred Initialization, Multi Layer Perceptrons, Input Dimensions, Defining Custom Layers, Layers without Parameters, Forward Propagation Function, Constant Parameters, Xavier Initializer, Weight and Bias and few more Topics related to the same from here. I have presented the Implementation of Parameter Access, Parameter Initialization, Tied Parameters and Layers without Parameters using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day100 of 300DaysOfData!
- Invariance and Locality Principle: The Translation Invariance principle states that our network should respond similarly to the same patch regardless of where it appears in the image. The Locality Principle states that the network should focus on local regions without regard to the contents of the image in distant regions. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Fully Connected Layers to Convolutions, Translation Invariance, Locality Principle, Constraining the MLP, Convolutional Neural Networks, Cross Correlation, Images and Channels, File IO, Loading and Saving Tensors, Loading and Saving Model Parameters, Custom Layers, Layers with Parameters and few more Topics related to the same from here. I have presented the Implementation of Layers with Parameters, Loading and Saving the Tensors and Model Parameters using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day101 of 300DaysOfData!
- Invariance and Locality Principle: Translation Invariance principle states that our network should respond similarly to the same patch regardless of where it appears in the image. Locality Principle states that the network should focus on local regions without regard to the contents of the image in distant regions. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Convolutional Neural Networks, Convolutions for Images, The Cross Correlation Operation, Convolutional Layers, Constructor and Forward Propagation Function, Weight and Bias, Object Edge Detection in Images, Learning a Kernel, Back Propagation, Feature Map and Receptive Field, Kernel Parameters and few more Topics related to the same from here. I have presented the Implementation of Cross Correlation Operation, Convolutional Layers and Learning a Kernel using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of the Cross Correlation Operation is included just after this entry. Excited about the days ahead !!
- Book:
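Since the Snapshots are not reproduced here, below is a minimal sketch of the Cross Correlation Operation described above, assuming a single channel input; the `corr2d` name, the shapes and the edge detection kernel are only an illustration.

```python
import torch

def corr2d(X, K):
    """Compute the 2D cross correlation of input X with kernel K."""
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            # Elementwise product of the current window with the kernel, then sum.
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

# Object edge detection example: a 1x2 kernel that responds to horizontal changes.
X = torch.ones((6, 8))
X[:, 2:6] = 0
K = torch.tensor([[1.0, -1.0]])
print(corr2d(X, K))
```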
Day102 of 300DaysOfData!
- Maximum Pooling: Pooling Operators consist of a fixed shape window that is slid over all the regions in the input according to its stride, computing a single output for each location, which is either the maximum or the average value of the elements in the pooling window. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Padding and Stride, Strided Convolutions, Cross Correlations, Multiple Input and Multiple Output Channels, Convolutional Layer, Maximum Pooling Layer and Average Pooling Layer, Pooling Window and Operators, Convolutional Neural Networks, LeNet Architecture, Supervised Learning, Convolutional Encoder, Sigmoid Activation Function and few more Topics related to the same from here. I have presented the Implementation of CNN, Implementation of Padding, Stride and Pooling Layers, Multiple Channels using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of the Pooling Operation is included just after this entry. Excited about the days ahead !!
- Book:
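A minimal sketch of the Pooling Operation described above, assuming a single channel input and a window that slides with stride 1; the `pool2d` name and the example tensor are only an illustration.

```python
import torch

def pool2d(X, pool_size, mode='max'):
    """Slide a (p_h, p_w) window over X and take the max or mean of each window."""
    p_h, p_w = pool_size
    Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            window = X[i:i + p_h, j:j + p_w]
            Y[i, j] = window.max() if mode == 'max' else window.mean()
    return Y

X = torch.arange(9, dtype=torch.float32).reshape(3, 3)
print(pool2d(X, (2, 2)))           # maximum pooling
print(pool2d(X, (2, 2), 'avg'))    # average pooling
```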
Day103 of 300DaysOfData!
- VGG Networks: VGG Networks construct a network using reusable convolutional blocks. VGG Models are defined by the number of convolutional layers and output channels in each block. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Convolutional Neural Networks, Supervised Learning, Deep CNN and AlexNet, Support Vector Machine and Features, Learning Representations, Data and Hardware Accelerator Problems, Architectures of LeNet and AlexNet, Activation Functions such as ReLU, Networks using CNN Blocks, VGG Neural Networks Architecture, Padding and Pooling, Convolutional Layers, Dropout, Dense and Linear Layers and few more Topics related to the same from here. I have presented the Implementation of AlexNet Architecture and VGG Networks Architecture along with CNN Blocks using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of a VGG Block is included just after this entry. Excited about the days ahead !!
- Book:
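A minimal sketch of a reusable VGG style convolutional block as described above; the channel numbers in the example are only an illustration, not the full VGG 11 configuration.

```python
import torch.nn as nn

def vgg_block(num_convs, in_channels, out_channels):
    """A reusable VGG block: several 3x3 convolutions followed by 2x2 max pooling."""
    layers = []
    for _ in range(num_convs):
        layers.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
        layers.append(nn.ReLU())
        in_channels = out_channels
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# A small VGG style stack built from blocks; channel numbers are illustrative.
net = nn.Sequential(vgg_block(2, 1, 64), vgg_block(2, 64, 128))
print(net)
```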
Day104 of 300DaysOfData!
- VGG Networks: VGG Networks construct a network using reusable convolutional blocks. VGG Models are defined by the number of convolutional layers and output channels in each block. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Network In Network or NIN Architecture, NIN Blocks and Model, Convolutional Layer, RELU Activation Function, The Sequential and Functional API, Global Average Pooling Layer, Networks with Parallel Concatenations or GoogLeNet, Inception Blocks, GoogLeNet Model and Architecture, Maximum Pooling Layer, Training the Model and few more Topics related to the same from here. I have presented the Implementation of NIN Block and Model, Inception Block and GoogLeNet Model using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day105 of 300DaysOfData!
- Batch Normalization: Batch Normalization continuously adjusts the intermediate output of the neural network by utilizing the mean and standard deviation of the minibatch so that the values of the intermediate output are more stable. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Batch Normalization, Training Deep Neural Networks, Scale Parameter and Shift Parameter, Batch Normalization Layers, Fully Connected Layers, Convolutional Layers, Batch Normalization during Prediction, Tensors, Mean and Variance, Applying BN in LeNet, Concise Implementation of BN using high level API, Internal Covariate Shift, Dropout Layer, Residual Networks or ResNet, Function Classes, Residual Blocks and few more Topics related to the same from here. I have presented the Implementation of Batch Normalization Architecture using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of the Batch Normalization computation is included just after this entry. Excited about the days ahead !!
- Book:
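A minimal sketch of the Batch Normalization computation described above, covering both Fully Connected and Convolutional inputs; the function signature is my own simplification rather than the exact code from the Book.

```python
import torch

def batch_norm(X, gamma, beta, moving_mean, moving_var,
               eps=1e-5, momentum=0.9, training=True):
    """Simplified batch normalization for fully connected (2D) or convolutional (4D) inputs."""
    if not training:
        # During prediction, use the moving statistics accumulated while training.
        X_hat = (X - moving_mean) / torch.sqrt(moving_var + eps)
    else:
        if X.dim() == 2:
            mean = X.mean(dim=0)
            var = ((X - mean) ** 2).mean(dim=0)
        else:
            # Per channel statistics for convolutional layers.
            mean = X.mean(dim=(0, 2, 3), keepdim=True)
            var = ((X - mean) ** 2).mean(dim=(0, 2, 3), keepdim=True)
        X_hat = (X - mean) / torch.sqrt(var + eps)
        moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
        moving_var = momentum * moving_var + (1.0 - momentum) * var
    # Scale (gamma) and shift (beta) are the learnable parameters.
    return gamma * X_hat + beta, moving_mean, moving_var
```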
Day106 of 300DaysOfData!
- Batch Normalization: Batch Normalization continuously adjusts the intermediate output of the neural network by utilizing the mean and standard deviation of the minibatch so that the values of the intermediate output are more stable. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Densely Connected Neural Networks or DenseNet, Dense Blocks, Batch Normalization, Activation Functions and Convolutional Layer, Transition Layer, Residual Networks or ResNet, Function Classes, Residual Blocks, Residual Mapping, Residual Connection, ResNet Model, Maximum and Average Pooling Layers, Training the Model and few more Topics related to the same from here. I have presented the Implementation of ResNet Architecture and ResNet Model using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of a Residual Block is included just after this entry. Excited about the days ahead !!
- Book:
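A minimal sketch of a Residual Block with a Residual Connection as described above; the `use_1x1conv` option for matching shapes on the shortcut follows the usual ResNet design, and the example shapes are only an illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Residual(nn.Module):
    """A ResNet style residual block: two 3x3 convolutions plus a skip connection."""
    def __init__(self, in_channels, out_channels, use_1x1conv=False, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=stride)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        # Optional 1x1 convolution so the shortcut matches the output shape.
        self.conv3 = (nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride)
                      if use_1x1conv else None)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)

    def forward(self, X):
        Y = F.relu(self.bn1(self.conv1(X)))
        Y = self.bn2(self.conv2(Y))
        if self.conv3:
            X = self.conv3(X)
        return F.relu(Y + X)   # residual connection: add the input back before the activation

blk = Residual(3, 6, use_1x1conv=True, stride=2)
print(blk(torch.rand(4, 3, 6, 6)).shape)   # torch.Size([4, 6, 3, 3])
```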
Day107 of 300DaysOfData!
- Sequence Models: The prediction beyond the known observations is called Extrapolation. The estimating between the existing observations is called Interpolation. Sequence Models require specialized statistical tools for estimation such as Auto Regressive Models. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about DenseNet Model, Convolutional Layers, Recurrent Neural Networks, Sequence Models, Interpolation and Extrapolation, Statistical Tools, Autoregressive Models, Latent Autoregressive Models, Markov Models, Reinforcement Learning Algorithms, Causality, Conditional Probability Distribution, Training the MLP, One step ahead prediction and few more Topics related to the same from here. I have presented the Implementation of DenseNet Architectures and Simple Implementation of RNNs using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day108 of 300DaysOfData!
- Tokenization and Vocabulary: Tokenization is the splitting of a string or text into a list of tokens. Vocabulary is the dictionary that maps string tokens into numerical indices. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Text Preprocessing, Corpus of Text, Tokenization Function, Sequence Models and Dataset, Vocabulary, Dictionary, Multilayer Perceptron, One step ahead prediction, Multi step ahead prediction, Tensors, Recurrent Neural Networks and few more Topics related to the same from here. I have presented the Implementation of Reading the Dataset, Tokenization and Vocabulary using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of Tokenization and a Vocabulary is included just after this entry. Excited about the days ahead !!
- Book:
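A minimal sketch of Tokenization and a Vocabulary that maps string tokens to numerical indices, as described above; the `Vocab` class here is a stripped down illustration that only handles the unknown token, not the full version from the Book.

```python
import collections

def tokenize(lines, token='word'):
    """Split each line of text into word or character tokens."""
    return [line.split() if token == 'word' else list(line) for line in lines]

class Vocab:
    """Map string tokens to numerical indices, ordered by frequency."""
    def __init__(self, tokens, min_freq=0):
        counter = collections.Counter(tok for line in tokens for tok in line)
        self.idx_to_token = ['<unk>'] + [tok for tok, freq in counter.most_common()
                                         if freq >= min_freq]
        self.token_to_idx = {tok: idx for idx, tok in enumerate(self.idx_to_token)}

    def __getitem__(self, token):
        return self.token_to_idx.get(token, 0)   # unknown tokens map to index 0

lines = ['the time machine', 'the time traveller']
vocab = Vocab(tokenize(lines))
print([vocab[tok] for tok in tokenize(lines)[0]])
```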
Day109 of 300DaysOfData!
- Sequential Partitioning: Sequential Partitioning is the strategy that preserves the order of split subsequences when iterating over minibatches. It ensures that the subsequences from two adjacent minibatches during iteration are adjacent in the original sequence. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Language Models and Sequence Dataset, Conditional Probability, Laplace Smoothing, Markov Models and NGrams, Unigram, Bigram and Trigram Models, Natural Language Statistics, Stop words, Word Frequencies, Zipf's Law, Reading Long Sequence Data, Minibatches, Random Sampling, Sequential Partitioning and few more Topics related to the same from here. I have presented the Implementation of Unigram, Bigram and Trigram Model Frequencies, Random Sampling and Sequential Partitioning using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day110 of 300DaysOfData!
- Recurrent Neural Networks: Recurrent Neural Networks are networks that use recurrent computation for hidden states. The hidden state of an RNN can capture historical information of the sequence up to the current time step. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Recurrent Neural Networks or RNN, Hidden State, Neural Networks without Hidden States, RNNs with Hidden States, RNN Layers, RNN based Character Level Language Models, Perplexity, Implementation of RNN from Scratch, One Hot Encoding, Vocabulary, Initializing the Model Parameters, RNN Model, Minibatch and Tanh Activation Function, Prediction and Warm up period, Gradient Clipping, Backpropagation and few more Topics related to the same from here. I have presented the Implementation of the RNN Model, Gradient Clipping and Training the Model using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of Gradient Clipping is included just after this entry. Excited about the days ahead !!
- Book:
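A minimal sketch of Gradient Clipping as described above, assuming `net` is an `nn.Module` whose parameters already hold gradients; the gradients are rescaled so that their global L2 norm never exceeds `theta`.

```python
import torch

def grad_clipping(net, theta):
    """Rescale gradients so that their global L2 norm never exceeds theta."""
    params = [p for p in net.parameters() if p.requires_grad]
    norm = torch.sqrt(sum(torch.sum(p.grad ** 2) for p in params))
    if norm > theta:
        for p in params:
            p.grad[:] *= theta / norm   # shrink every gradient by the same factor
```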
Day111 of 300DaysOfData!
- Recurrent Neural Networks: Recurrent Neural Networks are networks that use recurrent computation for hidden states. The hidden state of an RNN can capture historical information of the sequence up to the current time step. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Implementation of Recurrent Neural Networks, Defining the RNN Model, Training and Prediction, Backpropagation through Time, Exploding Gradients, Vanishing Gradients, Analysis of Gradients in RNNs, Full Computation, Truncating Time Steps, Randomized Truncation, Gradient Computing strategies in RNNs, Activation Functions, Regular Truncation and few more Topics related to the same from here. I have presented the Implementation of Recurrent Neural Networks, Training and Prediction using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day112 of 300DaysOfData!
- Gated Recurrent Units: Gated Recurrent Units or GRUs are a gating mechanism in Recurrent Neural Networks that controls when the hidden state should be updated and when it should be reset. It aims to solve the vanishing gradient problem which comes with standard RNNs. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Modern Recurrent Neural Networks, Gradient Clipping, Gated Recurrent Units or GRUs, Memory cell, Gated Hidden State, Reset Gate and Update Gate, Broadcasting, Candidate Hidden State, Hadamard Product Operator, Hidden State, Initializing Model Parameters, Defining the GRU Model, Training and Prediction and few more Topics related to the same from here. I have presented the Implementation of Gated Recurrent Units, GRU Model, Training and Prediction using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of a single GRU time step is included just after this entry. Excited about the days ahead !!
- Book:
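A minimal sketch of a single GRU time step with the Reset Gate, Update Gate and Candidate Hidden State described above; the parameter names and random initialization are only an illustration of the weight matrices and biases involved.

```python
import torch

def gru_step(X, H, params):
    """One GRU time step: update gate Z, reset gate R, candidate hidden state H_tilde."""
    W_xz, W_hz, b_z, W_xr, W_hr, b_r, W_xh, W_hh, b_h = params
    Z = torch.sigmoid(X @ W_xz + H @ W_hz + b_z)            # update gate
    R = torch.sigmoid(X @ W_xr + H @ W_hr + b_r)            # reset gate
    H_tilde = torch.tanh(X @ W_xh + (R * H) @ W_hh + b_h)   # candidate hidden state
    return Z * H + (1 - Z) * H_tilde                        # new hidden state

num_inputs, num_hiddens, batch = 8, 16, 4
params = []
for _ in range(3):   # one (W_x, W_h, b) triple each for Z, R and H_tilde
    params += [torch.randn(num_inputs, num_hiddens) * 0.01,
               torch.randn(num_hiddens, num_hiddens) * 0.01,
               torch.zeros(num_hiddens)]
X, H = torch.randn(batch, num_inputs), torch.zeros(batch, num_hiddens)
print(gru_step(X, H, params).shape)   # torch.Size([4, 16])
```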
Day113 of 300DaysOfData!
- Long Short Term Memory: Long Short Term Memory or LSTM is a type of Recurrent Neural Networks capable of learning order dependence in sequence prediction problems. LSTM has Input Gates, Forget Gates and Output Gates that control the flow of information. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Long Short Term Memory or LSTM, Gated Memory Cell, Input Gate, Forget Gate and Output Gate, Candidate Memory Cell, Tanh Activation Function, Sigmoid Activation Function, Memory Cell, Hidden State, Initializing Model Parameters, Defining the LSTM Model, Training and Prediction, Gated Recurrent Units or GRUs, Gaussian Distribution and few more Topics related to the same from here. I have presented the Implementation of Long Short Term Memory or LSTM Model, Training and Prediction using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day114 of 300DaysOfData!
- Long Short Term Memory: Long Short Term Memory or LSTM is a type of Recurrent Neural Networks capable of learning order dependence in sequence prediction problems. LSTM has Input Gates, Forget Gates and Output Gates that control the flow of information. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Deep Recurrent Neural Networks, Functional Dependencies, Bidirectional Recurrent Neural Networks, Dynamic Programming in Hidden Markov Models, Bidirectional Model, Computational Cost and Applications, Machine Translation and Dataset, Preprocessing the Dataset, Tokenization, Vocabulary, Padding Text Sequences and few more Topics related to the same from here. I have presented the Implementations of Downloading the Dataset, Preprocessing, Tokenization and Vocabulary using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day115 of 300DaysOfData!
- Encoder and Decoder Architecture: Encoder takes a variable length sequence as the input and transforms it into a state with a fixed shape. Decoder maps the encoded state of a fixed shape to a variable length sequence. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Encoder and Decoder Architectures, Machine Translation Model, Sequence Transduction Models, Forward Propagation Function, Sequence to Sequence Learning, Recurrent Neural Networks, Embedding Layer, Gated Recurrent Units or GRU Layers, Hidden States and Units, RNN Encoder and Decoder Architecture, Vocabulary and few more Topics related to the same from here. I have presented the Implementation of Encoder, Decoder Architectures and RNN Encoder Decoder for Sequence to Sequence Learning using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day116 of 300DaysOfData!
- Sequence Search: Greedy Search selects, at each time step, the token with the highest conditional probability when generating an output sequence from the input sequence. Beam Search is an improved version of Greedy Search with a hyperparameter named beam size. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Softmax Cross Entropy Loss Function, Sequence Masking, Teacher Forcing, Training and Prediction, Evaluation of Predicted Sequences, BLEU or Bilingual Evaluation Understudy, RNN Encoder Decoder, Beam Search, Greedy Search, Exhaustive Search, Attention Mechanisms, Attention Cues, Nonvolitional Cue and Volitional Cue, Queries, Keys and Values, Attention Pooling and few more Topics related to the same from here. I have presented the Implementation of Sequence Masking, Softmax Cross Entropy Loss, Training RNN Encoder Decoder Model and BLEU using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day117 of 300DaysOfData!
- Attention Pooling: Attention Pooling selectively aggregates values or sensory inputs to produce the output. It implies the interaction between queries and keys. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Attention Pooling or Nadaraya Watson Kernel Regression, Queries or Volitional Cues and Keys or Non Volitional Cues, Generating the Dataset, Average Pooling, Non Parametric Attention Pooling, Attention Weight, Gaussian Kernel, Parametric Attention Pooling, Batch Matrix Multiplication, Defining the Model, Training the Model, Stochastic Gradient Descent, MSE Loss Function and few more Topics related to the same from here. I have presented the Implementation of Attention Mechanisms, Non Parametric Attention Pooling, Batch Matrix Multiplication, NW Kernel Regression Model, Training and Prediction using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day118 of 300DaysOfData!
- Attention Pooling: Attention Pooling selectively aggregates values or sensory inputs to produce the output. It implies the interaction between queries or volitional cues and keys or non volitional cues. Attention Pooling is the weighted average of the training outputs. It can be parametric or nonparametric. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Attention Scoring Functions, Gaussian Kernel, Attention Weights, Softmax Activation Function, Masked Softmax Operation, Text Sequences, Probability Distribution, Additive Attention, Queries, Keys and Values, Tanh Activation Function, Dropout and Linear Layer, Attention Pooling and few more Topics related to the same from here. I have presented the Implementation of Masked Softmax Operation and Additive Attention using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day119 of 300DaysOfData!
- Attention Pooling: Attention Pooling selectively aggregates values or sensory inputs to produce the output. It implies the interaction between queries or volitional cues and keys or non volitional cues. Attention Pooling is the weighted average of the training outputs. It can be parametric or nonparametric. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Scaled Dot Product Attention, Queries, Keys and Values, Additive Attention, Attention Pooling, Bahdanau Attention, RNN Encoder Decoder Architecture, Hidden States, Embedding, Defining Decoder with Attention, Sequence to Sequence Attention Decoder and few more Topics related to the same from here. I have presented the Implementation of Scaled Dot Product Attention and Sequence to Sequence Attention Decoder Model using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of Scaled Dot Product Attention is included just after this entry. Excited about the days ahead !!
- Book:
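A minimal sketch of Scaled Dot Product Attention as described above; for brevity it omits the masked softmax over valid lengths that the full version uses, and the example shapes are only an illustration.

```python
import math
import torch
import torch.nn as nn

class DotProductAttention(nn.Module):
    """Scaled dot product attention: softmax(Q K^T / sqrt(d)) V."""
    def __init__(self, dropout=0.0):
        super().__init__()
        self.dropout = nn.Dropout(dropout)

    def forward(self, queries, keys, values):
        d = queries.shape[-1]
        # Scores have shape (batch, number of queries, number of key value pairs).
        scores = torch.bmm(queries, keys.transpose(1, 2)) / math.sqrt(d)
        attention_weights = torch.softmax(scores, dim=-1)
        return torch.bmm(self.dropout(attention_weights), values)

queries, keys = torch.normal(0, 1, (2, 1, 4)), torch.ones((2, 10, 4))
values = torch.arange(40, dtype=torch.float32).reshape(1, 10, 4).repeat(2, 1, 1)
print(DotProductAttention()(queries, keys, values).shape)   # torch.Size([2, 1, 4])
```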
Day120 of 300DaysOfData!
- Multi Head Attention: Multi Head Attention is the design for attention mechanisms which runs through an attention mechanism several times in parallel. Instead of performing single attention pooling, queries, keys and values can be transformed into learned linear projections which are fed into attention pooling in parallel. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Bahdanau Attention, Recurrent Neural Networks Encoder Decoder Architecture, Training the Sequence to Sequence Model, Embedding Layer, Attention Weights, GRU, Heatmaps, Multi Head Attention, Queries, Keys and Values, Attention Pooling, Additive Attention and Scaled Dot Product Attention, Transpose Functions and few more Topics related to the same from here. I have presented the Implementation Multi Head Attention using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day121 of 300DaysOfData!
- Multi Head Attention: Multi Head Attention is the design for attention mechanisms which runs through an attention mechanism several times in parallel. Instead of performing single attention pooling, queries, keys and values can be transformed into learned linear projections which are fed into attention pooling in parallel. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Multi Head Attention, Queries, Keys and Values, Attention Pooling, Scaled Dot Product Attention, Self Attention and Positional Encoding, Recurrent Neural Networks, Intra Attention, Comparing CNNs, RNNs and Self Attention, Padding Tokens, Absolute Positional Information, Relative Positional Information and few more Topics related to the same from here. I have presented the Implementation of Positional Encoding using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of Positional Encoding is included just after this entry. Excited about the days ahead !!
- Book:
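A minimal sketch of fixed sine and cosine Positional Encoding as described above; `num_hiddens` is assumed to be even, and the dropout used in the Book's version is omitted for brevity.

```python
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Add fixed sine and cosine position signals to a sequence of embeddings."""
    def __init__(self, num_hiddens, max_len=1000):
        super().__init__()
        self.P = torch.zeros((1, max_len, num_hiddens))
        position = torch.arange(max_len, dtype=torch.float32).reshape(-1, 1)
        div = torch.pow(10000, torch.arange(0, num_hiddens, 2, dtype=torch.float32) / num_hiddens)
        self.P[:, :, 0::2] = torch.sin(position / div)   # even dimensions use sine
        self.P[:, :, 1::2] = torch.cos(position / div)   # odd dimensions use cosine

    def forward(self, X):
        # X has shape (batch size, sequence length, num_hiddens).
        return X + self.P[:, :X.shape[1], :].to(X.device)

X = torch.zeros((2, 60, 32))
print(PositionalEncoding(32)(X).shape)   # torch.Size([2, 60, 32])
```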
Day122 of 300DaysOfData!
- Transformer Architecture: Transformer is an architecture for transforming one sequence into another one with the help of two parts, Encoder and Decoder. It makes the use of Self Attention mechanisms. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Transformer, Self Attention, Encoder and Decoder Architecture, Sequence Embeddings, Positional Encoding, Position Wise Feed Forward Networks, Residual Connection and Layer Normalization, Encoder Block and Multi Head Self Attention, Transformer Decoder, Queries, Keys and Values, Scaled Dot Product Attention and few more Topics related to the same from here. I have presented the Implementation of Position Wise Feed Forward Networks, Residual Connection and Layer Normalization, Encoder, Decoder Block and Transformer Decoder using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day123 of 300DaysOfData!
- Transformer Architecture: Transformer is an architecture for transforming one sequence into another one with the help of two parts, Encoder and Decoder. It makes the use of Self Attention mechanisms. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Decoder Architecture, Self Attention, Encoder Decoder Attention, Position Wise Feed Forward Networks, Residual Connections, Transformer Decoder, Embedding Layer, Sequential Blocks, Training the Transformer Architecture and few more Topics related to the same from here. I have also read about Logistic Regression, Sigmoid Activation Function, Weights Initialization, Gradient Descent, Cost Function and more. I have presented the Implementation of Logistic Regression from Scratch using NumPy, Transformer Decoder and Training using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day124 of 300DaysOfData!
- Transformer Architecture: Transformer is an architecture for transforming one sequence into another one with the help of two parts, Encoder and Decoder. It makes the use of Self Attention mechanisms. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Optimization Algorithms and Deep Learning, Objective Function and Minimization, Goal of Optimization, Generalization Error, Training Error, Risk Function and Empirical Risk Function, Optimization Challenges, Local Minimum and Global Minimum, Saddle Points, Hessian Matrix and Eigenvalues, Vanishing Gradients, Convexity, Convex Sets and Functions, Jensen's Inequality and few more Topics related to the same from here. I have presented the Implementation of Local Minima, Saddle Points, Vanishing Gradients and Convex Functions using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day125 of 300DaysOfData!
- Gradient Descent: Gradient Descent is an optimization algorithm which is used to minimize the differentiable function by iteratively moving in the direction of steepest descent as defined by the negative of the Gradient. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Convexity and Second Derivatives, Constrained Optimization, Lagrangian Function and Multipliers, Penalties, Projections, Gradient Clipping, Stochastic Gradient Descent, One Dimensional Gradient Descent, Objective Function, Learning Rate, Local Minimum and Global Minimum, Multivariate Gradient Descent and few more Topics related to the same from here. I have presented the Implementation of One Dimensional Gradient Descent, Local Minima and Multivariate Gradient Descent using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day126 of 300DaysOfData!
- Gradient Descent: Gradient Descent is an optimization algorithm which is used to minimize the differentiable function by iteratively moving in the direction of steepest descent as defined by the negative of the Gradient. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Multivariate Gradient Descent, Adaptive Methods, Learning Rate, Newtons Method, Taylor Expansion, Hessian Function, Gradient and Backpropagation, Nonconvex Function, Convergence Analysis, Linear Convergence, Preconditioning, Gradient Descent with Line Search, Stochastic Gradient Descent, Loss Functions and few more topics related to the same from here. I have presented the Implementation of Newtons Method, Non Convex Functions and Stochastic Gradient Descent using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day127 of 300DaysOfData!
- Stochastic Gradient Descent: Stochastic Gradient Descent is an iterative method for optimizing an objective function with suitable differentiable properties. It is a variation of the gradient descent algorithm that calculates the error and updates the model. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Stochastic Gradient Descent, Dynamic Learning Rate, Exponential Decay and Polynomial Decay, Convergence Analysis for Convex Objectives, Stochastic Gradient and Finite Samples, Minibatch Stochastic Gradient Descent, Vectorization and Caches, Matrix Multiplications, Minibatches, Variance, Implementation of Gradients and few more topics related to the same from here. I have presented the implementation of Stochastic Gradient Descent and Minibatch Stochastic Gradient Descent using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day128 of 300DaysOfData!
- Stochastic Gradient Descent: Stochastic Gradient Descent is an iterative method for optimizing an objective function with suitable differentiable properties. It is a variation of the gradient descent algorithm that calculates the error and updates the model. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about The Momentum Method, Stochastic Gradient Descent, Leaky Averages, Variance, Accelerated Gradient, An Ill Conditioned Problem and Convergence, Effective Sample Weight, Practical Experiments, Implementation of Momentum with SGD, Theoretical Analysis, Quadratic Convex Functions, Scalar Functions and few more topics related to the same from here. I have presented the implementation of Momentum Method, Effective Sample Weight and Scalar Functions using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of the Momentum update is included just after this entry. Excited about the days ahead!!
- Book:
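A minimal sketch of the Momentum Method described above: the velocity is a leaky average of past gradients and the parameters move along it; the function signature and state handling are my own simplification.

```python
import torch

def sgd_momentum(params, velocities, lr, momentum):
    """Minibatch SGD with momentum: keep a leaky average of past gradients."""
    with torch.no_grad():
        for p, v in zip(params, velocities):
            v[:] = momentum * v + p.grad   # leaky average (velocity) of gradients
            p[:] -= lr * v                 # move the parameter along the velocity
            p.grad.zero_()
```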
Day129 of 300DaysOfData!
- Stochastic Gradient Descent: Stochastic Gradient Descent is an iterative method for optimizing an objective function with suitable differentiable properties. It is a variation of the gradient descent algorithm that calculates the error and updates the model. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Adagrad Optimization Algorithms, Sparse Features and Learning Rates, Preconditioning, Stochastic Gradient Descent Algorithm, The Algorithms, Implementation of Adagrad from Scratch, Deep Learning and Computational Constraints, Learning Rates and few more Topics related to the same from here. I have presented the implementation Adagrad Optimization Algorithm from Scratch using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead!!
- Book:
Day130 of 300DaysOfData!
- RMSProp Optimization Algorithm: RMSProp is a gradient based optimization algorithm that utilizes the magnitude of recent gradients to normalize the gradients. It deals with Adagrad's radically diminishing learning rates. It divides the learning rate by an exponentially decaying average of squared gradients. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about RMSProp Optimization Algorithm, Learning Rate, Leaky Averages and Momentum Method, Implementation of RMSProp from scratch, Gradient Descent Algorithm, Preconditioning and few more topics related to the same from here. I have presented the implementation of RMSProp Optimization Algorithm from scratch using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead!!
- Book:
Day131 of 300DaysOfData!
- RMSProp Optimization Algorithm: RMSProp is a gradient based optimization algorithm that utilizes the magnitude of recent gradients to normalize the gradients. It deals with Adagrad's radically diminishing learning rates. It divides the learning rate by an exponentially decaying average of squared gradients. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Adadelta Optimization Algorithms, Learning Rates, Leaky Averages, Momentum, Gradient Descent, Concise Implementation of Adadelta, Adam Optimization Algorithms, Vectorization and Minibatch SGD, Weighting Parameters, Normalization, Concise Implementation of Adam Algorithms and few more topics related to the same from here. I have presented the Implementation of Adadelta Optimization Algorithm and Adam Optimization Algorithm from scratch using PyTorch here in the Snapshot. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day132 of 300DaysOfData!
- Adam Optimizer: Adam uses exponential weighted moving averages also known as Leaky Averaging to obtain an estimate of both the momentum and the second moment of the gradient. It combines the features of many optimization algorithms. It uses EWMA on minibatch Stochastic Gradient Descent. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Adam and Yogi Optimization Algorithms, Variance, Minibatch SGD, Learning Rate Scheduling, Weight Vectors, Convolutional Layer, Linear Layer, Max Pooling Layer, Sequential API, RELU, Cross Entropy Loss, Schedulers, Overfitting and few more topics related to the same from here. I have presented the implementation of LeNet Architecture and Yogi Optimization Algorithm using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of the Adam update is included just after this entry. Excited about the days ahead !!
- Book:
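A minimal sketch of one Adam update as described above, keeping an EWMA of the gradients (momentum) and of their squares (second moment) with bias correction; the signature and default hyperparameters are only an illustration.

```python
import torch

def adam_step(params, states, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-6):
    """One Adam update; states holds a (v, s) pair of tensors per parameter, t is the step count."""
    with torch.no_grad():
        for p, (v, s) in zip(params, states):
            v[:] = beta1 * v + (1 - beta1) * p.grad          # EWMA of gradients (momentum)
            s[:] = beta2 * s + (1 - beta2) * p.grad ** 2     # EWMA of squared gradients
            v_hat = v / (1 - beta1 ** t)                     # bias correction
            s_hat = s / (1 - beta2 ** t)
            p[:] -= lr * v_hat / (torch.sqrt(s_hat) + eps)
            p.grad.zero_()
```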
Day133 of 300DaysOfData!
- Adam Optimizer: Adam uses exponential weighted moving averages also known as Leaky Averaging to obtain an estimate of both momentum and also the second moment of the gradient. It combines the features of many optimization algorithms. It uses EWMA on minibatch Stochastic Gradient Descent. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Learning Rate Scheduling, Square Root Scheduler, Factor Scheduler, Learning Rate and Polynomial Decay, Multi Factor Scheduler, Piecewise Constant, Optimization and Local Minimum, Cosine Scheduler and few more topics related to the same from here. I have presented the implementation of Multi Factor Scheduler and Cosine Scheduler using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day134 of 300DaysOfData!
- Adam Optimizer: Adam uses exponential weighted moving averages also known as Leaky Averaging to obtain an estimate of both momentum and also the second moment of the gradient. It combines the features of many optimization algorithms. It uses EWMA on minibatch Stochastic Gradient Descent. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Model Computational Performance, Compilers and Interpreters, Symbolic Programming and Imperative Programming, Hybrid Programming, Dynamic Computations Graph, Hybrid Sequential, Acceleration by Hybridization, Multi Layer Perceptrons, Asynchronous Computation and few more topics related to the same from here. I have presented the implementation of Hybrid Sequential, Acceleration by Hybridization and Asynchronous Computation using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day135 of 300DaysOfData!
- Adam Optimizer: Adam uses exponential weighted moving averages also known as Leaky Averaging to obtain an estimate of both momentum and also the second moment of the gradient. It combines the features of many optimization algorithms. It uses EWMA on minibatch Stochastic Gradient Descent. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Asynchronous Computation, Barriers and Blockers, Improving Computation and Memory Footprint, Automatic Parallelism, Parallel Computation and Communication, Training on Multiple GPUs, Splitting the Problem, Data Parallelism, Network Partitioning, Layer Wise Partitioning, Data Parallel Partitioning and few more topics related to the same from here. I have presented the implementation of Initializing Model Parameters and Defining LeNet Model using PyTorch here in the Snapshot. I am still working on the Implementation of LeNet Model. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day136 of 300DaysOfData!
- Adam Optimizer: Adam uses exponential weighted moving averages also known as Leaky Averaging to obtain an estimate of both momentum and also the second moment of the gradient. It combines the features of many optimization algorithms. It uses EWMA on minibatch Stochastic Gradient Descent. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Training on Multiple GPUs, LeNet Architecture, Data Synchronization, Model Parallelism, Data Broadcasting, Data Distribution, Optimization Algorithms, Implementation Back Propagation, Model Animation, Cross Entropy Loss Function, Convolutional Layer, RELU Activation Function, Matrix Multiplication, Average Pooling Layer and few more topics related to the same from here. I have presented the implementation of Data Distribution, Data Synchronization and Training Function using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day137 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Optimization and Synchronization, ResNet Neural Networks Architecture, Convolutional Layer, Batch Normalization Layer, Strides and Padding, The Sequential API, Parameter Initialization and Logistics, Minibatch Gradient Descent, Training ResNet Model, Stochastic Gradient Descent Optimizer, Cross Entropy Loss Function, Back Propagation, Parallelization and few more topics related to the same from here. I have presented the implementation of ResNet Architecture, Initialization and Training the Model using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day138 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Computer Vision Applications, Image Augmentation, Deep Neural Networks, Common Image Augmentation Method such as Flipping and Cropping, Horizontal Flipping and Vertical Flipping, Changing the Color of Images, Overlying Multiple Image Augmentation Methods, CIFAR10 Dataset, Torch Vision Module and Random Color Jitter Instance and few more topics related to the same from here. I have presented the Implementation of Flipping and Cropping the Images and Changing the Color of Images using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day139 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Image Augmentation, CIFAR10 Dataset, Using a Multi GPU Training Model, Fine Tuning the Model, Overfitting, Pretrain Neural Network, Target Initialization, ResNet Model, ImageNet Dataset, Normalization of RGB Images, Mean and Standard Deviation, Torch Vision Module, Flipping and Cropping Images, Adam Optimization, Cross Entropy Loss Function and few more topics related to the same from here. I have presented the implementation of Training the Model with Image Augmentation and Normalization of Images using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day140 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Fine Tuning the Model, Pretrain Neural Networks, Normalization of Images, Mean and Standard Deviation, Defining and Initializing the Model, Cross Entropy Loss Function, Data Loader Class, Learning Rate and Stochastic Gradient Descent, Model Parameters, Transfer Learning, Source Model and Target Model, Weights and Biases and few more topics related to the same from here. I have presented the implementation of Normalization of Images, Flipping and Cropping the Images and Training Pretrained Model using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day141 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Object Detection and Object Recognition, Image Classification and Computer Vision, Images and Bounding Boxes, Target Location and Axis Coordinates and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Regular Expressions, Disjunction, Grouping and Precedence, Precision and Recall, Substitution and Capture Groups, Lookahead Assertions, Words, Corpora and few more topics related to the same. I have presented the simple implementation of Object Detection and Bounding Boxes using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day142 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Computer Vision, Anchor Boxes, Object Detection Algorithms, Bounding Boxes, Generating Multiple Anchor Boxes, Computation Complexity, Sizes and Ratios and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Text Normalization, Unix Tools for Crude Tokenization and Normalization, Word Tokenization, Named Entity Detection, Penn Treebank Tokenization and few more topics related to the same from here. I have presented the implementation of Generating Anchor Boxes, Object Detection and Bounding Boxes using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day143 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Computer Vision, Generating Multiple Anchor Boxes, Batch Size, Coordinate Values, Intersection Over Union Algorithm, Jaccard Index, Computation Complexity, Sizes and Ratios and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Byte Pair Encoding Algorithm for Tokenization, Subword Tokens, Wordpiece and Greedy Tokenization Algorithm, Maximum Matching Algorithm, Word Normalization, Lemmatization and Stemming, The Porter Stemmer and few more Topics related to the same from here. I have presented the implementation of Generating Anchor Boxes and Intersection Over Union Algorithm using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. A minimal sketch of the Intersection Over Union computation is included just after this entry. Excited about the days ahead !!
- Book:
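A minimal sketch of the Intersection Over Union or Jaccard Index between two sets of corner format bounding boxes, as described above; the example boxes are only an illustration.

```python
import torch

def box_iou(boxes1, boxes2):
    """Intersection over Union between two sets of (x1, y1, x2, y2) boxes."""
    area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    areas1, areas2 = area(boxes1), area(boxes2)
    # Corners of the intersection rectangles, broadcast to shape (n1, n2, 2).
    upper_left = torch.max(boxes1[:, None, :2], boxes2[:, :2])
    lower_right = torch.min(boxes1[:, None, 2:], boxes2[:, 2:])
    wh = (lower_right - upper_left).clamp(min=0)
    inter = wh[:, :, 0] * wh[:, :, 1]
    return inter / (areas1[:, None] + areas2 - inter)

a = torch.tensor([[0.0, 0.0, 2.0, 2.0]])
b = torch.tensor([[1.0, 1.0, 3.0, 3.0]])
print(box_iou(a, b))   # tensor([[0.1429]]): 1 / (4 + 4 - 1)
```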
Day144 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Computer Vision, Labeling Training Set Anchor Boxes, Object Detection and Image Recognition, Ground Truth Bounding Box Index, Anchor Boxes and Offset Boxes, Intersection Over Union and Jaccard Algorithm and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Sentence Segmentation, The Minimum Edit Distance Algorithm, Viterbi Algorithm, N Gram Language Models, Probability, Spelling Correction and Grammatical Error Correction and few more topics related to the same from here. I have presented the implementation of Labeling Training Set Anchor Boxes and Initializing Offset Boxes using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day145 of 300DaysOfData!
- Image Segmentation: Image Segmentation is the process of partitioning digital image into multiple segments or set of pixels. The goal of segmentation is to simplify the representation of image into something meaningful and easier to analyze. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Non Maximum Suppression Algorithms, Prediction Bounding Boxes, Ground Truth Bounding Boxes, Confidence Level, Batch Size, Intersection Over Union Algorithm or Jaccard Index, Aspect Ratios, Bounding Boxes for Prediction, Multi Box Target Function, Anchor Boxes and few more topics related to the same from here. I have presented the implementation of Initializing Multi Box Anchor Boxes and Initializing Prediction Bounding Boxes using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day146 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Multiscale Object Detection, Generating Multiple Anchor Boxes, Object Detection, Single Shot Multiple Detection Algorithm, Category Prediction Layer, Bounding Boxes Prediction Layer, Concatenating Predictions for Multiple Scales, Height and Width Down Sample Block, CNN Layer, RELU and Max Pooling Layer and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have read about Part of Speech Tagging, Information Extraction, Named Entity Recognition, Regular Expressions and few more topics related to the same from here. I have presented the implementation of Initializing Category Prediction Layer and Height & Width Down Sample Block using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead!!
- Book:
Day147 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Single Shot Multi Box Detection Algorithm, The Base Neural Network, Height Width Down Sample Block, Category Prediction Layer, Bounding Box Prediction Layer, Multiscale Feature Blocks, The Sequential API and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about N Gram Language Models, Chain Rule of Probability, Markov Models, Maximum Likelihood Estimation, Relative Frequency, Evaluating Language Models, Log Probabilities, Perplexity, Generalization & Zeros, Sparsity and few more topics related to the same from here. I have presented the implementation of Base SSD Network and Complete SSD Model using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead!!
- Book:
Day148 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Single Shot Multi Box Detection Model, Implementation of Tiny SSD Model, Forward Propagation Function, Data Reading and Initialization, Object Detection, Multi Scale Feature Block, Global Max Pooling Layer and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Unknown Words or Out of Vocabulary Words, OOV Rate, Smoothing, Laplace Smoothing, Text Classification, Add One Smoothing, MLE, Add K Smoothing and few more topics related to the same from here. I have presented the implementation of Single Shot Multi Box Detection Model and Dataset Initialization using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day149 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Softmax Activation Function, Convolutional Layer, Training the Single Shot Multi Box Detection Model, Multi Scale Anchor Boxes, Cross Entropy Loss Function, L1 Normalization Loss Function, Average Absolute Error, Accuracy Rate, Category and Offset Losses and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Backoff and Interpolation, Katz Backoff, Kneser Ney Smoothing, Absolute Discounting, The Web and Stupid Backoff, Perplexity Relation to Entropy and few more topics related to the same from here. I have presented the implementation of Training Single Shot Multi Box Detection Model, Loss and Evaluation Functions using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day150 of 300DaysOfData!
- Image Segmentation: Image Segmentation is the process of partitioning digital image into multiple segments or set of pixels. The goal of segmentation is to simplify the representation of image into something meaningful and easier to analyze. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Region Based Convolutional Neural Networks, Fast RCNN, Faster RCNN, Mask RCNN, Category Prediction Layer, Bounding Boxes Prediction Layer, Support Vector Machines, RoI Pooling Layer and RoI Alignment Layer, Pixel Level Semantics, Image Segmentation and Instance Segmentation, Pascal VOC2012 Semantic Segmentation, RGB, Data Preprocessing and few more topics related to the same from here. I have presented the implementation of Semantic Segmentation and Data Preprocessing using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day151 of 300DaysOfData!
- Sequence to Sequence Model: Sequence to Sequence Neural Networks can be built with a modular and reusable Encoder and Decoder Architecture. The Encoder Model generates a Thought Vector which is a Dense and fixed Dimension Vector representation of the Data. The Decoder Model uses Thought Vectors to generate Output Sequences. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Dataset Classes for Custom Semantic Segmentation, RGB Channels, Normalization of Images, Random Cropping Operation, Sequence to Sequence Recurrent Neural Networks, Label Encoder, One Hot Encoder, Encoding and Vectorization, Long Short Term Memory or LSTM and few more topics related to the same from here. I have presented the implementation of Dataset Classes for Custom Semantic Segmentation using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day152 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Transposed Convolutional Layer, CNNs, Basic 2D Transposed Convolution, Broadcasting Matrices, Kernel Size, Padding, Strides and Channels, Analogy to Matrix Transposition, Matrix Multiplication and Matrix Vector Multiplication and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Naive Bayes and Sentiment Classification, Text Categorization, Spam Detection, Probabilistic Classifier, Multinomial NB Classifier, Bag of Words, MLP, Unknown and Stop Words and few more topics related to the same from here. I have presented the implementation of Transposed Convolution, Padding, Strides and Matrix Multiplication using PyTorch here in the Snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
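A small sketch of a Basic 2D Transposed Convolution, assuming plain PyTorch (the shapes and unit weights are illustrative): the output spatial size follows (input - 1) * stride - 2 * padding + kernel, so larger strides spread the input out rather than shrinking it.

```python
import torch
from torch import nn

# A 1x1x2x2 input upsampled by a 2x2 kernel: output spatial size is
# (input - 1) * stride - 2 * padding + kernel, so 2x2 -> 3x3 with stride 1.
x = torch.arange(4, dtype=torch.float32).reshape(1, 1, 2, 2)

tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=1, padding=0, bias=False)
nn.init.ones_(tconv.weight)          # fixed weights so the output is easy to verify
print(tconv(x).shape)                # torch.Size([1, 1, 3, 3])

# With stride 2 the same kernel spreads the input out to a 4x4 map.
tconv2 = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2, bias=False)
nn.init.ones_(tconv2.weight)
print(tconv2(x).shape)               # torch.Size([1, 1, 4, 4])
```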
Day153 of 300DaysOfData!
- Transposed Convolution: In a Transposed Convolution, Stride and Padding do not correspond to the number of zeros added around the image and the amount of shift in the kernel when sliding it across the input, as they would in a standard convolution operation. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Fully Convolutional Neural Networks, Semantic Segmentation Principles, Transposed Convolutional Layer, Constructing a Pretrained Neural Networks Model, Global Average Pooling Layer, Flattening Layer, Image Processing and Upsampling, Bilinear Interpolation Kernel Function and few more topics related to the same from here. I have presented the implementation of Fully Convolutional Layer, Pretrained NNs, Bilinear Interpolation Kernel Function and Transposed Convolutional Layer using PyTorch here in the Snapshot. A sketch of a Bilinear Interpolation Kernel used to initialize a Transposed Convolutional Layer is also included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
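A sketch of a Bilinear Interpolation Kernel used to initialize a Transposed Convolutional Layer for upsampling, following the common recipe from Dive into Deep Learning; the channel counts and image size here are only illustrative, and the indexing trick assumes equal input and output channels.

```python
import torch
from torch import nn

def bilinear_kernel(in_channels, out_channels, kernel_size):
    """Weights that make a transposed convolution perform bilinear upsampling.
    Assumes in_channels == out_channels for the diagonal assignment below."""
    factor = (kernel_size + 1) // 2
    if kernel_size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = (torch.arange(kernel_size).reshape(-1, 1),
          torch.arange(kernel_size).reshape(1, -1))
    filt = (1 - torch.abs(og[0] - center) / factor) * \
           (1 - torch.abs(og[1] - center) / factor)
    weight = torch.zeros((in_channels, out_channels, kernel_size, kernel_size))
    weight[range(in_channels), range(out_channels), :, :] = filt
    return weight

# Doubling the spatial resolution of a 3-channel image with a fixed bilinear kernel.
upsample = nn.ConvTranspose2d(3, 3, kernel_size=4, stride=2, padding=1, bias=False)
upsample.weight.data.copy_(bilinear_kernel(3, 3, 4))
x = torch.rand(1, 3, 32, 32)
print(upsample(x).shape)     # torch.Size([1, 3, 64, 64])
```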
Day154 of 300DaysOfData!
- Neural Style Transfer Algorithms: It is the task of changing the style of an image in one domain to the style of an image in another domain. It manipulates images or videos in order to adopt the appearance of another image. On my Journey of Machine Learning and Deep Learning, Today I have read and Implemented from the Book Dive into Deep Learning. Here, I have learned about Softmax Cross Entropy Loss Function, Stochastic Gradient Descent, CNNs, Neural Networks Style Transfer, Composite Images, RGB Channels, Normalization and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Optimizing Naive Bayes for Sentiment Analysis, Sentiment Lexicons, Naive Bayes as Language Models, Precision, Recall and FMeasure, Multi Label and Multinomial Classification and few more topics related to the same from here. I have started working on Style Transfer using Neural Networks. The Notebook is mentioned below though I am still working on it.
- Book:
Day155 of 300DaysOfData!
- Neural Style Transfer Algorithms: It is the task of changing the style of an image in one domain to the style of an image in another domain. It manipulates images or videos in order to adopt the appearance of another image. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Neural Networks Style Transfer, Convolutional Neural Networks, Reading the Content and Style Images, Preprocessing and Postprocessing the Images, Extracting Image Features, Composite Images, VGG Neural Networks, Squared Error Loss Function, Total Variation Loss Function, Normalization of RGB Channels of Images and few more topics related to the same from here. I am still working on Style Transfer using Neural Networks. The Notebook is mentioned below though I am still working on it. I have presented the implementation of Function for Extracting Features and Squared Error Loss Function using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day156 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Creating and Initializing the Composite Images, Synchronization Functions, Adam Optimizer, Gram Matrix, Convolutional Neural Networks, Neural Networks Style Transfer, Loss Functions and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Test Sets and Cross Validation, Statistical Significance Testing, Naive Bayes Classifiers, Bootstrapping, Logistic Regression, Generative and Discriminative Classifiers, Feature Representation, Sigmoid Classification, Weight and Bias Term and few more topics related to the same from here. I have completed working on Style Transfer using Neural Networks. The Notebook is mentioned below but I am still updating it. A small sketch of the Gram Matrix and Style Loss computation is also included below.
- Book:
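A small sketch of the Gram Matrix together with style and content losses as used in Neural Style Transfer, assuming feature maps of shape (1, channels, height, width); the random tensors stand in for VGG features and are purely illustrative.

```python
import torch

def gram(x):
    """Gram matrix of a feature map x with shape (1, channels, height, width)."""
    num_channels, n = x.shape[1], x.numel() // x.shape[1]
    x = x.reshape(num_channels, n)
    return torch.matmul(x, x.T) / (num_channels * n)

def style_loss(y_hat, gram_y):
    # Compare the Gram matrix of the composite image's features with the
    # precomputed Gram matrix of the style image's features.
    return torch.square(gram(y_hat) - gram_y).mean()

def content_loss(y_hat, y):
    # Squared error between composite and content features.
    return torch.square(y_hat - y.detach()).mean()

features_style = torch.rand(1, 64, 32, 32)
features_composite = torch.rand(1, 64, 32, 32, requires_grad=True)
loss = style_loss(features_composite, gram(features_style)) + \
       content_loss(features_composite, torch.rand(1, 64, 32, 32))
loss.backward()
print(loss.item())
```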
Day157 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Computer Vision, Image Classification, CIFAR10 Dataset, Obtaining and Organizing the Dataset, Augmentation and few more topics related to the same. Apart from that, I have learned about Data Scraping and Scrapy, Named Entity Recognition and SpaCy, Trained Transformer Model using SpaCy, Geocoding and few more topics related to the same from here. I have completed working on Style Transfer using Neural Networks Notebook. I have started working on Object Recognition on Images: CIFAR10 Notebook. All the Notebooks are mentioned below. I have presented the implementation of Obtaining and Organizing the CIFAR10 Dataset here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned above and below. Excited about the days ahead !!
- Book:
Day158 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Computer Vision, Image Classification, Image Augmentation and Overfitting, Normalization of RGB Channels, Data Loader and Validation Set and few more topics related to the same from here. Apart from that, I have learned about Stanford NER Algorithms, NLTK, Named Entity Recognition and few more topics related to the same. I have completed working on Style Transfer using Neural Networks Notebook. I have started working on Object Recognition on Images: CIFAR10 Notebook. All the Notebooks are mentioned below. I have presented the implementation of Obtaining and Organizing the Dataset, Image Augmentation and Normalization using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
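The entry above mentions Image Augmentation and Normalization of RGB Channels for CIFAR10. A minimal sketch using torchvision transforms follows; the resize/crop sizes and the normalization statistics are commonly used values, not necessarily the ones in the original notebook.

```python
import torchvision
from torchvision import transforms

# Augmentation is applied to the training images only; the test transform
# just converts to tensors and normalizes with per-channel statistics.
transform_train = transforms.Compose([
    transforms.Resize(40),
    transforms.RandomResizedCrop(32, scale=(0.64, 1.0), ratio=(1.0, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010]),
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010]),
])

train_set = torchvision.datasets.CIFAR10(root='data', train=True, download=True,
                                         transform=transform_train)
test_set = torchvision.datasets.CIFAR10(root='data', train=False, download=True,
                                        transform=transform_test)
print(len(train_set), len(test_set))   # 50000 10000
```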
Day159 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Computer Vision, ResNet Model and Residual Blocks, Xavier Random Initialization, Cross Entropy Loss Function, Defining Training Functions, Stochastic Gradient Descent, Learning Rate Scheduler, Evaluation Metrics and few more topics related to the same. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Sentiment Classification, Learning in Logistic Regression, Conditional MLE, Cost Function and few more topics related to the same from here. I am working on Object Recognition on Images: CIFAR10 Notebook. The Notebook is mentioned below. I have presented the implementation of Defining a Training Function using PyTorch here in the Snapshot. A minimal sketch of a Training Function with a Learning Rate Scheduler is also included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
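A minimal sketch of a Training Function with Stochastic Gradient Descent, Cross Entropy Loss and a Learning Rate Scheduler as referenced above; the hyperparameters and the StepLR choice are illustrative assumptions.

```python
import torch
from torch import nn

def train(net, train_loader, num_epochs, lr, device='cpu'):
    """A minimal training loop: SGD with momentum, a step learning-rate
    scheduler, and cross entropy loss, reporting average loss per epoch."""
    net.to(device)
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(net.parameters(), lr=lr,
                                momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=0.9)
    for epoch in range(num_epochs):
        net.train()
        total_loss, total_samples = 0.0, 0
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = loss_fn(net(images), labels)
            loss.backward()
            optimizer.step()
            total_loss += loss.item() * labels.size(0)
            total_samples += labels.size(0)
        scheduler.step()           # decay the learning rate once per epoch
        print(f'epoch {epoch + 1}, loss {total_loss / total_samples:.4f}')
```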
Day160 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about ImageNet Dataset, Obtaining and Organizing the Dataset, Image Augmentation such as Flipping and Resizing the Image, Changing Brightness and Contrast of Image, Transfer Learning and Features, Normalization of Images and few more topics related to the same from here. I have completed working on Object Recognition on Images: CIFAR10 Notebook. I have started working on Dog Breed Identification: ImageNet Notebook. All the Notebooks are mentioned below. I have presented the implementation of Image Augmentation and Normalization, Defining Neural Networks Model and Loss Function using PyTorch here in the Snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day161 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Defining the Training Functions, Computer Vision, Hyperparameters, Stochastic Gradient Descent Optimization Function, Learning Rate Scheduler and Optimization, Training Loss and Validation Loss and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Gradient for Logistic Regression, SGD Algorithm, Minibatch Training and few more topics related to the same from here. I am working on Dog Breed Identification: ImageNet Notebook. The Notebook is mentioned below. I have presented the implementation of Defining the Training Function using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead!!
- Book:
Day162 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Pretrained Text Representations, Word Embedding and Word2Vec, One Hot Vectors, The Skip Gram Model and Training, The Continuous Bag of Words Model and Training, Approximate Training, Negative Sampling, Hierarchical Softmax, Reading and Processing the Dataset, Subsampling, Vocabulary and few more topics related to the same from here. Apart from that, I have also read about Improving Chemical Autoencoders Latent Space and Molecular Diversity with Hetero Encoders. I am working on Dog Breed Identification: ImageNet Notebook. The Notebook is mentioned below. I have presented the implementation of Reading and Preprocessing the Dataset, Subsampling and Comparison using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day163 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Subsampling, Extracting Central Target Words and Context Words, Maximum Context Window Size, Penn Tree Bank Dataset and Pretraining Word Embedding and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Regularization and Overfitting, Manhattan Distance, Lasso and Ridge Regression, Multinomial Logistic Regression, Features in MLR, Learning in MLR, Interpreting Models, Deriving Gradient Equation and few more topics related to the same from here. I have completed working on Dog Breed Identification: ImageNet Notebook. I have presented the implementation of Extracting Central Target Words and Context Words using PyTorch here in the snapshot. I hope you will gain some insights. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day164 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Subsampling and Negative Sampling, Word Embedding and Word2Vec, Probability, Reading into Batches, Concatenation and Padding, Random Minibatches and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Vector Semantics and Embeddings, Lexical Semantics, Lemmas and Senses, Word Sense Disambiguation, Word Similarity, Principle of Contrast, Representation Learning, Synonymy and few more topics related to the same from here. I have presented the implementation of Negative Sampling using PyTorch here in the snapshot. I hope you will gain some insights. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day165 of 300DaysOfData!
- Subsampling: Subsampling is a method that reduces data size by selecting a subset of the original data. The subset is controlled by a sampling threshold parameter. Subsampling attempts to minimize the impact of high frequency words on the training of a word embedding model. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Word Embedding, Batches, Loss Function and Padding, Center and Context Words, Negative Sampling, Data Loader Instance, Vocabulary, Subsampling, Data Iterations, Mask Variables and few more topics related to the same from here. I have presented the implementation of Reading Batches and Function for Loading PTB Dataset using PyTorch here in the snapshots. A small sketch of Subsampling high frequency words is also included below. I hope you will gain some insights. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
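A small sketch of Subsampling high frequency words as described above, assuming a tokenized corpus given as lists of words; the threshold value is an illustrative choice.

```python
import math
import random
import collections

def subsample(sentences, threshold=1e-4):
    """Randomly drop high-frequency tokens: a token w is kept with
    probability min(1, sqrt(threshold / f(w))), where f(w) is its relative
    frequency in the corpus."""
    counter = collections.Counter(tok for line in sentences for tok in line)
    total = sum(counter.values())

    def keep(token):
        return random.uniform(0, 1) < math.sqrt(
            threshold / (counter[token] / total))

    return [[tok for tok in line if keep(tok)] for line in sentences]

corpus = [['the', 'cat', 'sat', 'on', 'the', 'mat'],
          ['the', 'dog', 'barked', 'at', 'the', 'cat']]
print(subsample(corpus, threshold=0.1))   # frequent tokens like 'the' are often dropped
```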
Day166 of 300DaysOfData!
- Word Embedding: Word Embedding is a term used for the representation of words for text analysis typically in the form of a real valued vector that encodes the meaning of the word such that the words that are closer in the vector space are expected to be similar in meaning. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Word Embedding, Word2Vec, The Skip Gram Model, Embedding Layer, Word Vector, Skip Gram Model Forward Calculation, Batch Matrix Multiplication, Binary Cross Entropy Loss Function, Negative Sampling, Mask Variables and Padding, Initializing Model Parameters and few more topics related to the same from here. I have presented the implementation of Embedding Layer, Skip Gram Model Forward Calculation and Binary Cross Entropy Loss Function using PyTorch here in the snapshot. I hope you will gain some insights. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
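The entry above mentions the Embedding Layer, Skip Gram Model Forward Calculation and Batch Matrix Multiplication. A minimal sketch follows, with illustrative vocabulary and embedding sizes.

```python
import torch
from torch import nn

# Embedding layers for center words and context/noise words. The skip-gram
# forward pass scores each (center, context) pair with a dot product,
# implemented as a batch matrix multiplication.
vocab_size, embed_size = 20, 4
embed_v = nn.Embedding(vocab_size, embed_size)   # center-word vectors
embed_u = nn.Embedding(vocab_size, embed_size)   # context-word vectors

def skip_gram(center, contexts_and_negatives):
    v = embed_v(center)                        # (batch, 1, embed_size)
    u = embed_u(contexts_and_negatives)        # (batch, max_len, embed_size)
    return torch.bmm(v, u.permute(0, 2, 1))    # (batch, 1, max_len) of scores

center = torch.tensor([[1], [2]])
contexts = torch.tensor([[2, 3, 4, 5], [6, 7, 8, 9]])
print(skip_gram(center, contexts).shape)       # torch.Size([2, 1, 4])
```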
Day167 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Training Skip Gram Model, Loss Function, Applying Word Embedding Model, Negative Sampling, Word Embedding with Global Vectors or Glove, Conditional Probability, The Glove Model, Cross Entropy Loss Function and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Word Relatedness, Semantic Field, Semantic Frames and Roles, Connotation and Sentiment, Vector Semantics, Embeddings and few more topics related to the same from here. I have presented the implementation of Training Word Embedding Model using PyTorch here in the snapshot. I hope you will gain some insights. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day168 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Subword Embedding, FastText and Byte Pair Encoding, Finding Synonyms and Analogies, Pretrained Word Vectors, Token Embedding, Central Words and Context Words and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Words and Vectors, Vectors and Documents, Term Document Matrices, Information Retrieval, Row Vector and Context Matrix and few more topics related to the same from here. I have presented the implementation of Defining Token Embedding Class using PyTorch here in the snapshot. I hope you will gain some insights. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day169 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Finding Synonyms and Analogies, Word Embedding Model and Word2Vec, Applying Pretrained Word Vectors, Cosine Similarity and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have read about Cosine for measuring similarity, Dot and Inner Products, Weighing terms in the vector, Term Frequency Inverse Document Frequency or TFIDF, Collection Frequency, Applications of TFIDF Vector Model and few more topics related to the same from here. I have presented the implementation of Cosine Similarity and Finding Synonyms and Analogies using PyTorch here in the snapshot. A small sketch of Cosine Similarity based nearest neighbour search is also included below. I hope you will gain some insights. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
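A small sketch of Cosine Similarity based nearest neighbour search over a toy embedding matrix, as mentioned in the entry above; the vectors are illustrative and the epsilon guards against division by zero.

```python
import torch

def knn(word_vecs, query_vec, k):
    """Return the indices of the k vectors most similar to query_vec by
    cosine similarity, along with the similarity scores."""
    cos = torch.mv(word_vecs, query_vec) / (
        torch.norm(word_vecs, dim=1) * torch.norm(query_vec) + 1e-9)
    _, topk = torch.topk(cos, k)
    return topk, cos[topk]

# Toy embedding matrix: 5 "words", 3 dimensions.
vectors = torch.tensor([[1.0, 0.0, 0.0],
                        [0.9, 0.1, 0.0],
                        [0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0],
                        [0.7, 0.7, 0.0]])
indices, scores = knn(vectors, vectors[0], k=3)
print(indices, scores)   # the query itself ranks first, then its closest neighbours
```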
Day170 of 300DaysOfData!
- Bidirectional Encoder Representations from Transformers: ELMO encodes context bidirectionally but uses task specific architectures and GPT is task agnostic but encodes context left to right. BERT encodes context bidirectionally and requires minimal architecture changes for a wide range of NLP tasks. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about BERT Architecture, From Context Independent to Context Sensitive, Word Embedding Model and Word2Vec, From Task Specific to Task Agnostic, Embeddings from Language Models or ELMO Architecture, Input Representations, Token, Segment and Positional Embedding and Learnable Positional Embedding and few more topics related to the same from here. I have presented the implementation of BERT Input Representations and BERT Encoder Class using PyTorch here in the snapshot. A minimal sketch of summing Token, Segment and Positional Embeddings is also included below. I hope you will gain some insights. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
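A minimal sketch of the BERT Input Representation described above, where the input embedding is the sum of Token, Segment and Learnable Positional Embeddings; the class name and sizes are illustrative assumptions.

```python
import torch
from torch import nn

class BERTEmbedding(nn.Module):
    """BERT input representation: the sum of token, segment and learnable
    positional embeddings for a (batch, seq_len) batch of token ids."""
    def __init__(self, vocab_size, hidden_size, max_len=512):
        super().__init__()
        self.token = nn.Embedding(vocab_size, hidden_size)
        self.segment = nn.Embedding(2, hidden_size)     # sentence A vs sentence B
        self.position = nn.Parameter(torch.randn(1, max_len, hidden_size))

    def forward(self, tokens, segments):
        x = self.token(tokens) + self.segment(segments)
        return x + self.position[:, :tokens.shape[1], :]

emb = BERTEmbedding(vocab_size=1000, hidden_size=64)
tokens = torch.randint(0, 1000, (2, 8))
segments = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 1]] * 2)
print(emb(tokens, segments).shape)    # torch.Size([2, 8, 64])
```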
Day171 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about BERT Encoder Class, Pretraining Tasks, Masked Language Modeling, Multi Layer Perceptron, Forward Inference, BERT Input Sequences, Bidirectional Context Encoding and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have learned about Pointwise Mutual Information or PMI, Laplace Smoothing, Word2Vec, Skip Gram with Negative Sampling or SGNS, The Classifier, Logistic and Sigmoid Function, Cosine Similarity and Dot Product and few more topics related to the same from here. I have presented the implementation of Masked Language Modeling and BERT Encoder using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day172 of 300DaysOfData!
- Bidirectional Encoder Representations from Transformers: ELMO encodes context bidirectionally but uses task specific architectures and GPT is task agnostic but encodes context left to right. BERT encodes context bidirectionally and requires minimal architecture changes for a wide range of NLP tasks. The embeddings are the sum of the Token, Segment and Positional Embeddings. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Bidirectional Encoder Representations from Transformers or BERT Architecture, Next Sentence Prediction Model, Cross Entropy Loss Function, MLP, BERT Model, Masked Language Modeling, BERT Encoder, Pretraining BERT Model and few more topics related to the same from here. I have presented the implementation of Next Sentence Prediction and BERT Model using PyTorch here in the snapshot. A minimal sketch of a Next Sentence Prediction head is also included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
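A minimal sketch of a Next Sentence Prediction head as mentioned above, assuming the encoded representation of the special classification token is already available; the hidden size and class name are illustrative.

```python
import torch
from torch import nn

class NextSentencePred(nn.Module):
    """A small MLP head that classifies whether sentence B follows sentence A,
    operating on the encoded representation of the classification token."""
    def __init__(self, hidden_size):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 2))   # two classes: is-next / is-not-next

    def forward(self, cls_encoding):
        return self.mlp(cls_encoding)

nsp = NextSentencePred(hidden_size=128)
cls_encoding = torch.rand(4, 128)         # one classification-token vector per sequence
logits = nsp(cls_encoding)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
print(logits.shape, loss.item())          # torch.Size([4, 2]) and a scalar loss
```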
Day173 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Pretraining BERT Model and Dataset, Defining Helper Functions for Pretraining Tasks, Generating Next Sentence Prediction Task, Generating Masked Language Modeling Task, Sequence Tokens and few more topics related to the same from here. I have also spent some time reading the Book Speech and Language Processing. Here, I have read about Learning Skip Gram Embeddings, Binary Classifier, Target and Context Embedding, Visualizing Embeddings, Semantic Properties of Embeddings and few more topics related to the same from here. I have presented the implementation of Generating Next Sentence Prediction Task and Generating Masked Language Modeling Task using PyTorch here in the snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day174 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Pretraining BERT Model, Next Sentence Prediction Task and Masked Language Modeling Task, Transforming Text into Pretraining Dataset and few more topics related to the same from here. I have also learned about Scorer and Example Instances of SpaCy Model, Long Short Term Memory Neural Networks, Smiles Vectorizer, Feed Forward Neural Networks and few more topics related to the same. I have presented the implementation of Transforming Text into Pretraining Dataset using PyTorch here in the snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day175 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Pretraining BERT Model, Cross Entropy Loss Function, Adam Optimization Function, Zeroing Gradients, Back Propagation and Optimization, Masked Language Modeling Loss and Next Sentence Prediction Loss and few more topics related to the same from here. I have presented the implementation of Pretraining BERT Model, Getting Loss from BERT Model and Training a Neural Networks Model using PyTorch here in the snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day176 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Natural Language Processing Applications, NLP Architecture and Pretraining, Sentiment Analysis and Dataset, Text Classification, Tokenization and Vocabulary, Padding Tokens to Same Length and few more topics related to the same from here. Apart from that I have also learned about Named Entity Recognition, Frequency Distribution, NLTK, Extending Lists and few more topics related to the same from here. I have presented the implementation of Reading the Dataset, Tokenization and Vocabulary and Padding to Fixed Length using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day177 of 300DaysOfData!
- Sentiment Analysis: Sentiment Analysis is the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify and study affective states and subjective information. It is widely applied to voice of the customer materials such as reviews and survey responses, online and social media and healthcare materials for applications that range from marketing to customer service to clinical medicine. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Creating Data Iterations, Tokenization and Vocabulary, Truncating and Padding, Recurrent Neural Networks Model and Sentiment Analysis, Pretrained Word Vectors and Glove, Bidirectional LSTM and Embedding Layer, Linear Layer and Decoding, Encoding and Sequence Data, Xavier Initialization and few more topics related to the same from here. I have presented the implementation of Bidirectional Recurrent Neural Networks Model using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day178 of 300DaysOfData!
- Sentiment Analysis: Sentiment Analysis is the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify and study affective states and subjective information. It is widely applied to voice of the customer materials such as reviews and survey responses, online and social media and healthcare materials for applications that range from marketing to customer service to clinical medicine. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Word Vectors and Vocabulary, Training and Evaluating Bidirectional RNN Model, Sentiment Analysis and One Dimensional Convolutional Neural Networks, One Dimensional Cross Correlation Operation, Max Over Time Pooling Layer, The Text CNN Model, RELU Activation Function and Dropout Layer and few more topics related to the same from here. I have presented the implementation of Text Convolutional Neural Networks using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
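The entry above mentions the Text CNN Model with One Dimensional Convolutions and Max Over Time Pooling. A minimal sketch follows, with illustrative vocabulary size, kernel widths and channel counts.

```python
import torch
from torch import nn

class TextCNN(nn.Module):
    """A small textCNN: one-dimensional convolutions over the embedded token
    sequence with several kernel widths, max-over-time pooling, dropout, and a
    final linear layer for two sentiment classes."""
    def __init__(self, vocab_size, embed_size, kernel_sizes, num_channels):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_size)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_size, c, k) for c, k in zip(num_channels, kernel_sizes)])
        self.pool = nn.AdaptiveMaxPool1d(1)   # max-over-time pooling
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(sum(num_channels), 2)
        self.relu = nn.ReLU()

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.embedding(tokens).permute(0, 2, 1)  # (batch, embed, seq_len)
        feats = [self.pool(self.relu(conv(x))).squeeze(-1) for conv in self.convs]
        return self.fc(self.dropout(torch.cat(feats, dim=1)))

net = TextCNN(vocab_size=500, embed_size=50, kernel_sizes=[3, 4, 5],
              num_channels=[30, 30, 30])
print(net(torch.randint(0, 500, (8, 20))).shape)     # torch.Size([8, 2])
```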
Day179 of 300DaysOfData!
- Natural Language Inference: Natural Language Inference is a study where a hypothesis can be inferred from a premise where both are a text sequence. It determines the logical relationship between a pair of text sequences. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Natural Language Inference and Dataset, Premise, Hypothesis or Entailment, Contradiction and Neutral, The Stanford Natural Language Inference Dataset, Reading SNLI Dataset and few more topics related to the same from here. I have presented the implementation of Reading SNLI Dataset using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day180 of 300DaysOfData!
- Natural Language Inference: Natural Language Inference is a study where a hypothesis can be inferred from a premise where both are a text sequence. It determines the logical relationship between a pair of text sequences. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Natural Language Inference and SNLI Dataset, Premises, Hypotheses and Labels, Vocabulary, Padding and Truncation of Sequences, Dataset and DataLoader Module and few more topics related to the same from here. Apart from here, I have also read about Confusion Matrix and Classification Reports, Frequency Distribution and Word Cloud of Text Data. I have presented the implementation of Loading SNLI Dataset using PyTorch here in the snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day181 of 300DaysOfData!
- Natural Language Inference: Natural Language Inference is a study where a hypothesis can be inferred from a premise where both are a text sequence. It determines the logical relationship between a pair of text sequences. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Natural Language Inference using Attention Model, Multi Layer Perceptron or MLP with Attention Mechanisms, Alignment of Premises and Hypotheses, Word Embeddings and Attention Weights and few more topics related to the same from here. I have presented the implementation of MLP and Attention Mechanism using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day182 of 300DaysOfData!
- Comparing and Aggregating Class: Comparing Class compares a word in one sequence with the other sequence that is softly aligned with the word. Aggregating Class aggregates the two sets of comparison vectors to infer the logical relationship. It feeds the concatenation of both summarization results into MLP function to obtain the classification result of the logical relationship. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Comparing Word Sequences, Soft Alignment, Multi Layer Perceptron or MLP Classifier, Aggregating Comparison Vectors, Linear Layer and Concatenation, Decomposable Attention Model, Embedding Layer and few more topics related to the same from here. I have presented the implementation of Comparing Class, Aggregating Class and Decomposable Attention Model using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day183 of 300DaysOfData!
- Comparing and Aggregating Class: Comparing Class compares a word in one sequence with the other sequence that is softly aligned with the word. Aggregating Class aggregates the two sets of comparison vectors to infer the logical relationship. It feeds the concatenation of both summarization results into MLP function to obtain the classification result of the logical relationship. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Decomposable Attention Model, Embedding Layer and Linear Layer, Training and Evaluating the Attention Model, Natural Language Inference, Entailment, Contradiction and Neutral, Pretrained Glove Embedding, SNLI Dataset, Adam Optimizer and Cross Entropy Loss Function, Premises and Hypotheses and few more topics related to the same from here. I have presented the implementation of Training and Evaluating Attention Model using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day184 of 300DaysOfData!
- BERT Model Notes: BERT requires minimal architecture changes for sequence level and token level NLP applications such as Single Text Classification, Text Pair Classification or Regression and Text Tagging. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Fine Tuning BERT for Sequence Level and Token Level Applications, Single Text Classification, Text Pair Classification or Regression, Text Tagging, Question Answering, Natural Language Inference and Pretrained BERT Model, Loading Pretrained BERT Model and Parameters, Semantic Textual Similarity, POS Tagging and few more topics related to the same from here. I have presented the implementation of Loading Pretrained BERT Model and Parameters using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day185 of 300DaysOfData!
- BERT Model Notes: BERT requires minimal architecture changes for sequence level and token level NLP applications such as Single Text Classification, Text Pair Classification or Regression and Text Tagging. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Loading Pretrained BERT Model and Parameters, The Dataset for Fine Tuning BERT Model, Premise, Hypothesis and Input Sequence, Tokenization and Vocabulary, Truncating and Padding Tokens, Natural Language Inference and few more topics related to the same from here. I have presented the implementation of The Dataset for Fine Tuning BERT Model and Generating Training and Test Examples using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day186 of 300DaysOfData!
- Generative Adversarial Networks: Generative Adversarial Networks consist of two deep networks, a Generator and a Discriminator. The Generator generates images as close to the true images as possible in order to fool the Discriminator by maximizing the cross entropy loss. The Discriminator tries to distinguish the generated images from the true images by minimizing the cross entropy loss. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Generative Adversarial Networks, Generator and Discriminator Networks, Updating Discriminator and few more topics related to the same from here. I have also read about Recommender Systems, Collaborative Filtering, Explicit and Implicit Feedbacks, Recommendation Tasks and few more topics related to the same. I have presented a simple implementation of Generator and Discriminator Networks and Optimization using PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day187 of 300DaysOfData!
- Generative Adversarial Networks: Generative Adversarial Networks consist of two deep networks, a Generator and a Discriminator. The Generator generates images as close to the true images as possible in order to fool the Discriminator by maximizing the cross entropy loss. The Discriminator tries to distinguish the generated images from the true images by minimizing the cross entropy loss. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Generator and Discriminator Networks, Binary Cross Entropy Loss Function, Adam Optimizer and Normalized Tensors, Gaussian Distribution, Real and Generated Data and few more topics related to the same from here. I have presented a simple implementation of Updating Generator and Training Function using PyTorch here in the snapshots. A minimal sketch of the Discriminator and Generator update steps is also included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
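A minimal sketch of the Discriminator and Generator update steps described above, on toy two dimensional data rather than images; the network sizes and learning rates are illustrative assumptions.

```python
import torch
from torch import nn

# Minimal generator and discriminator over 2-D data points, with the usual
# alternating updates: the discriminator minimizes cross entropy on real vs
# fake samples, and the generator tries to make fake samples look real.
latent_dim, data_dim = 2, 2
G = nn.Sequential(nn.Linear(latent_dim, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 8), nn.Tanh(), nn.Linear(8, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt_G = torch.optim.Adam(G.parameters(), lr=0.005)
opt_D = torch.optim.Adam(D.parameters(), lr=0.05)

def update_D(real, z):
    ones, zeros = torch.ones(real.shape[0], 1), torch.zeros(real.shape[0], 1)
    opt_D.zero_grad()
    loss = (loss_fn(D(real), ones) + loss_fn(D(G(z).detach()), zeros)) / 2
    loss.backward()
    opt_D.step()
    return loss.item()

def update_G(z):
    ones = torch.ones(z.shape[0], 1)
    opt_G.zero_grad()
    loss = loss_fn(D(G(z)), ones)   # fool D: label generated samples as real
    loss.backward()
    opt_G.step()
    return loss.item()

real = torch.randn(16, data_dim) @ torch.tensor([[1.0, 2.0], [-0.1, 0.5]]) + 1.0
z = torch.randn(16, latent_dim)
print(update_D(real, z), update_G(z))
```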
Day188 of 300DaysOfData!
- Generative Adversarial Networks: Generative Adversarial Networks consist of two deep networks, a Generator and a Discriminator. The Generator generates images as close to the true images as possible in order to fool the Discriminator by maximizing the cross entropy loss. The Discriminator tries to distinguish the generated images from the true images by minimizing the cross entropy loss. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Deep Convolutional Generative Adversarial Networks, The Pokemon Dataset, Resizing and Normalization, DataLoader, The Generator Block Module, Transposed Convolution Layer, Batch Normalization Layer, RELU Activation Function and few more topics related to the same from here. I have also read about Inter Quartile Range, Mean Absolute Deviation, Box Plots, Density Plots, Frequency Tables and few more topics related to the same. I have presented the implementation of The Generator Block and Pokemon Dataset using PyTorch here in the snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the Topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day189 of 300DaysOfData!
- Generative Adversarial Networks: Generative Adversarial Networks consist of two deep networks, a Generator and a Discriminator. The Generator generates images as close to the true images as possible in order to fool the Discriminator by maximizing the cross entropy loss. The Discriminator tries to distinguish the generated images from the true images by minimizing the cross entropy loss. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Deep Convolutional Generative Adversarial Networks, The Generator and The Discriminator Networks, Leaky RELU Activation Function and Dying RELU Problem, Batch Normalization, Convolutional Layer, Stride and Padding and few more topics related to the same from here. I have presented the implementation of The Discriminator Block and The Generator Block using PyTorch here in the snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day190 of 300DaysOfData!
- Generative Adversarial Networks: Generative Adversarial Networks consist of two deep networks, a Generator and a Discriminator. The Generator generates images as close to the true images as possible in order to fool the Discriminator by maximizing the cross entropy loss. The Discriminator tries to distinguish the generated images from the true images by minimizing the cross entropy loss. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the Book Dive into Deep Learning. Here, I have learned about Deep Convolutional Generative Adversarial Networks, The Generator and The Discriminator Blocks, Cross Entropy Loss Function, Adam Optimization Function and few more topics related to the same from here. I have presented the implementation of Training Generator and Discriminator Networks using PyTorch here in the snapshots. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
Day191 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, Today I have started reading and implementing from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Deep Learning in Practice, Areas of Deep Learning, A Brief History of Neural Networks, Fastai and Jupyter Notebooks, Cat and Dog Classification, Image Loaders, Pretrained Models, RESNET and CNNs, Error Rate and few more topics related to the same from here. I have presented the implementation of Cat and Dog Classification using Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Introduction Notebook
Day192 of 300DaysOfData!
- Transfer Learning: Transfer Learning is defined as the process of using pretrained model for a task different from what it was originally trained for. Fine Tuning is a transfer learning technique that updates the parameters of pretrained model by training for additional epochs using a different task from that used for pretraining. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Machine Learning and Weight Assignment, Neural Networks and Stochastic Gradient Descent, Limitations Inherent to ML, Image Recognition, Classification and Regression, Overfitting and Validation Set, Transfer Learning, Semantic Segmentation, Sentiment Classification, Data Loaders and few more topics related to the same from here. I have presented the implementation of Semantic Segmentation and Sentiment Classification using Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Introduction Notebook
Day193 of 300DaysOfData!
- Transfer Learning: Transfer Learning is defined as the process of using pretrained model for a task different from what it was originally trained for. Fine Tuning is a transfer learning technique that updates the parameters of pretrained model by training for additional epochs using a different task from that used for pretraining. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Tabular Data and Classification, Tabular Data Loaders, Categorical and Continuous Data, Recommendation System and Collaborative Filtering, Datasets for Models, Validation Sets and Test Sets, Judgement in Test Sets and few more topics related to the same from here. I have presented the implementation of Tabular Classification and Recommendation System Model using Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Introduction Notebook
Day194 of 300DaysOfData!
- The Drivetrain Approach: It can be stated as follows: start by considering your objective, then think about what actions you can take to meet that objective and what data you have or can acquire that can help, and then build a model that you can use to determine the best actions to take to get the best results in terms of your objective. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about The Practice of Deep Learning, The State of DL, Computer Vision, Text and NLP, Combining Text and Images, Tabular Data and Recommendation Systems, The Drivetrain Approach, Gathering Data and Duck Duck Go, Questionnaire and few more topics related to the same from here. I have presented the implementation of Gathering Data for Object Detection using Duck Duck Go and Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Image Detection
Day195 of 300DaysOfData!
- The Drivetrain Approach: It can be stated as follows: start by considering your objective, then think about what actions you can take to meet that objective and what data you have or can acquire that can help, and then build a model that you can use to determine the best actions to take to get the best results in terms of your objective. On my Journey of Machine Learning and Deep Learning, Today I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Fastai Dependencies and Functions, Biased Dataset, Data to Data Loaders, Data Block API, Dependent and Independent Variables, Random Splitting, Image Transformations and few more topics related to the same from here. I have presented the implementation of Gathering Data and Initializing Data Loaders using Duck Duck Go and Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Image Detection
Day196 of 300DaysOfData!
- Data Augmentation: Data Augmentation refers to creating random variations of the input data such that they appear different but do not change the meaning of the data. RandomResizedCrop is a specific example of Data Augmentation. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Data Loaders, Image Block, Resizing, Squishing and Stretching Images, Padding Images, Data Augmentation, Image Transformations, Training the Model and Error Rate, Random Resizing and Cropping and few more topics related to the same from here. I have presented the implementation of Data Loaders, Data Augmentation and Training the Model using Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Image Detection
Day197 of 300DaysOfData!
- Data Augmentation: Data Augmentation refers to creating random variations of the input data such that they appear different but do not change the meaning of the data. RandomResizedCrop is a specific example of Data Augmentation. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Training Pretrained Model, Data Augmentation and Transformations, Classification Interpretation and Confusion Matrix, Cleaning Dataset, Inference Model and Parameters, Notebooks and Widgets and few more topics related to the same from here. I have presented the implementation of Classification Interpretation, Cleaning Dataset, Inference Model and Parameters using Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Image Detection
Day198 of 300DaysOfData!
- Data Ethics: Ethics refers to well founded standards of right and wrong that prescribe what humans should do. It is the study and development of one's ethical standards. Recourse Processes, Feedback Loops and Bias are key examples of Data Ethics issues. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Data Ethics, Bugs and Recourse, Feedback Loops, Bias, Integrating ML with Product Design, Training a Digit Classifier, Pixels and Computer Vision, Tenacity and Deep Learning, Pixel Similarity, List Comprehensions and few more topics related to the same from here. I have presented the simple implementation of Pixels and Computer Vision using Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
Day199 of 300DaysOfData!
- L1 and L2 Norm: Taking the mean of absolute value of differences is called Mean Absolute Difference or L1 Norm. Taking the mean of square of differences and then taking the square root is called Root Mean Squared Error or L2 Norm. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Rank of Tensors, Mean Absolute Difference or L1 Norm and Root Mean Squared Error or L2 Norm, Numpy Arrays and PyTorch Tensors, Computing Metrics using Broadcasting and few more topics related to the same from here. I have presented the simple implementation of Arrays and Tensors, L1 and L2 Norm using Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
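The entry above defines the L1 and L2 Norms. A small sketch of both computations on random 28x28 tensors follows, along with the equivalent PyTorch loss functions; the tensors stand in for a sample image and an averaged "ideal" image.

```python
import torch
import torch.nn.functional as F

# Difference between a sample image and an "ideal" image, both 28x28 tensors.
a_3 = torch.rand(28, 28)
mean3 = torch.rand(28, 28)
diff = a_3 - mean3

l1 = diff.abs().mean()                 # Mean Absolute Difference (L1 norm)
l2 = (diff ** 2).mean().sqrt()         # Root Mean Squared Error (L2 norm)
print(l1, l2)

# PyTorch provides the same quantities directly as loss functions.
print(F.l1_loss(a_3, mean3), F.mse_loss(a_3, mean3).sqrt())
```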
Day200 of 300DaysOfData!
- L1 and L2 Norm: Taking the mean of absolute value of differences is called Mean Absolute Difference or L1 Norm. Taking the mean of square of differences and then taking the square root is called Root Mean Squared Error or L2 Norm. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Computing Metrics using Broadcasting, Mean Absolute Error, Stochastic Gradient Descent, Initializing Parameters, Loss Function, Calculating Gradients, Backpropagation and Derivatives, Learning Rate Optimization and few more topics related to the same from here. I have presented the simple implementation of Stochastic Gradient Descent using Fastai here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
Day201 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about The Gradient Descent Process, Initializing Parameters, Calculating Predictions and Inspecting, Calculating Loss and MSE, Calculating Gradients and Backpropagation, Stepping the Weights and Updating Parameters, Repeating the Process & Stopping the Process and few more topics related to the same from here. I have presented the implementation of The Gradient Descent Process using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
Day202 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about The MNIST Loss Function, Matrices and Vectors, Independent Variables, Weights and Biases, Parameters, Matrix Multiplication and Dataset Class, Gradient Descent Process and Learning Rate, Activation Function and few more topics related to the same from here. I have presented the implementation of The Dataset Class and Matrix Multiplication using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
Day203 of 300DaysOfData!
- Accuracy and Loss Function: The key difference between metric such as accuracy and loss function is that the loss is to drive automated learning and the metric is to drive human understanding. The loss must be a function with meaningful derivative and metrics focuses on performance of the model. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Matrix Multiplication, Activation Function, Loss Function, Gradients and Slope, Sigmoid Function, Accuracy Metrics and Understanding and few more topics related to the same from here. I have presented the implementation of Loss Function and Sigmoid using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
Day204 of 300DaysOfData!
- SGD and Minibatches: The process of changing or updating the weights based on the gradients in order to consider some of the details involved in the next phase of the learning process is called an Optimization Step. The average loss is calculated for a few data items at a time; such a group of data items is called a Minibatch. The number of data items in the Minibatch is called the Batchsize. A larger Batchsize means a more accurate and stable estimate of the dataset gradients from the loss function, whereas a Batchsize of one results in an imprecise and unstable gradient. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Stochastic Gradient Descent and Minibatches, Optimization Step, Batch Size, DataLoader and Dataset, Initializing Parameters, Weights and Bias, Backpropagation and Gradients, Loss Function and few more topics related to the same from here. I have presented the implementation of DataLoader and Gradients using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
Day205 of 300DaysOfData!
- SGD and Minibatches: The process of changing or updating the weights based on the gradients in order to consider some of the details involved in the next phase of the learning process is called an Optimization Step. The average loss is calculated for a few data items at a time; such a group of data items is called a Minibatch. The number of data items in the Minibatch is called the Batchsize. A larger Batchsize means a more accurate and stable estimate of the dataset gradients from the loss function, whereas a Batchsize of one results in an imprecise and unstable gradient. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Calculating Gradients and Back Propagation, Weights, Bias and Parameters, Zeroing Gradients, Training Loop and Learning Rate, Accuracy and Evaluation, Creating an Optimizer and few more topics related to the same from here. I have presented the implementation of Calculating Gradients, Accuracy and Training using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
Day206 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Creating an Optimizer, Linear Module, Weights and Biases, Model Parameters, Optimization and Zeroing Gradients, SGD Class, Data Loaders and Learner Class of Fastai and few more topics related to the same from here. I have presented the implementation of Creating Optimizer and Learner Class using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
Day207 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Adding a Nonlinearity, Simple Linear Classifiers, Basic Neural Networks, Weight and Bias Tensors, Rectified Linear Unit or RELU Activation Function, Universal Approximation Theorem, Sequential Module and few more topics related to the same from here. I have presented the implementation of Creating Simple Neural Networks using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Training Classifier
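- A minimal sketch of the simple neural network described above, using nn.Sequential with two linear layers and a ReLU nonlinearity (the layer sizes are illustrative assumptions).

```python
import torch
import torch.nn as nn

# a simple two-layer neural net: linear layer, ReLU nonlinearity, linear layer
simple_net = nn.Sequential(
    nn.Linear(28 * 28, 30),   # weight and bias tensors for the first layer
    nn.ReLU(),                # rectified linear unit: replaces negatives with zero
    nn.Linear(30, 1))         # second linear layer produces one activation per item

x = torch.randn(64, 28 * 28)
print(simple_net(x).shape)    # torch.Size([64, 1])
```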
Day208 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Image Classification, Localization, Regular Expressions, Data Block and Data Loaders, Regex Labeller, Data Augmentation, Presizing, Checking and Debugging Data Block, Item and Batch Transformations and few more topics related to the same from here. I have presented the implementation of Creating and Debugging Data Block and Data Loaders using Fastai and PyTorch here in the snapshot. I have used Resize as an item transform with a large size and RandomResizedCrop as a batch transform with a smaller size. RandomResizedCrop is added when the min_scale parameter is passed to the aug_transforms function, as was done in the DataBlock call; a small sketch of this follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Image Classification
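- A hedged sketch of the presizing pattern described above, in the spirit of the book's pets example (the dataset path and the regex labelling pattern are assumptions about the data layout): Resize as a large item transform on the CPU, then aug_transforms with min_scale so that RandomResizedCrop runs as a batch transform.

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS)                              # assumed dataset for illustration
pets = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(seed=42),
    get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'),
    item_tfms=Resize(460),                                # large resize applied per item
    batch_tfms=aug_transforms(size=224, min_scale=0.75))  # RandomResizedCrop added via min_scale

dls = pets.dataloaders(path/"images")
dls.show_batch(max_n=4)
# pets.summary(path/"images")  # handy for checking and debugging the DataBlock
```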
Day209 of 300DaysOfData!
- Exponential Function: The Exponential Function is defined as e**x, where e is a special number approximately equal to 2.718. It is the inverse of the natural logarithm function. The Exponential Function is always positive and increases very rapidly. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Cross Entropy Loss Function, Viewing Activations and Labels, Softmax Activation Function, Sigmoid Function, Exponential Function, Negative Log Likelihood, Binary Classification and few more topics related to the same from here. I have presented the implementation of Softmax Function and Negative Log Likelihood using Fastai and PyTorch here in the snapshot; a from-scratch sketch follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Image Classification
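- A from-scratch sketch of the softmax function and negative log likelihood in plain PyTorch (the activations and targets are made up), checked against PyTorch's built-in cross entropy.

```python
import torch
import torch.nn.functional as F

def softmax(x):
    # exponentiate, then normalize so each row sums to 1
    return torch.exp(x) / torch.exp(x).sum(dim=1, keepdim=True)

def nll(log_probs, targets):
    # negative log likelihood: pick out the log probability of the correct class
    return -log_probs[range(targets.shape[0]), targets].mean()

acts = torch.randn(4, 3)                 # activations for 4 items and 3 classes (made up)
targets = torch.tensor([0, 2, 1, 2])
probs = softmax(acts)
loss = nll(probs.log(), targets)
# should match the combined log_softmax + nll, i.e. cross entropy
print(loss, F.cross_entropy(acts, targets))
```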
Day210 of 300DaysOfData!
- Exponential Function: Exponential Function is defined as e**x where e is a special number approximately equal to 2.718. It is the inverse of natural logarithm function. Exponential Function is always positive and increases very rapidly. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Logarithmic Function, Negative Log Likelihood, Cross Entropy Loss Function, Softmax Function, Model Interpretation, Confusion Matrix, Improving the Model, The Learning Rate Finder, Logarithmic Scale and few more topics related to the same from here. I have presented the implementation of Cross Entropy Loss, Confusion Matrix and Learning Rate Finder using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Image Classification
Day211 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Unfreezing and Transfer Learning, Freezing Trained Layers, Discriminative Learning Rates, Selecting the Number of Epochs, Deeper Architectures and few more topics related to the same from here. I have presented the implementation of Unfreezing and Transfer Learning and Discriminative Learning Rates using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Image Classification
Day212 of 300DaysOfData!
- Multilabel Classification: Multilabel Classification refers to the problem of identifying the categories of objects in images that may not contain exactly one type of object. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Questionnaire of Image Classification, Multilabel Classification and Regression, Pascal Dataset, Pandas and DataFrames, Constructing DataBlock, Datasets and DataLoaders, Lambda Functions and few more topics related to the same from here. I have presented the implementation of Creating DataBlock and DataLoaders using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Multilabel Classification & Regression
Day213 of 300DaysOfData!
- Multilabel Classification: Multilabel Classification refers to the problem of identifying the categories of objects in images that may not contain exactly one type of object. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Lambda Functions, Transformation Blocks such as Image Block and Multi Category Block, One Hot Encoding, Data Splitting, DataLoaders, Datasets and DataBlock, Resizing and Cropping and few more topics related to the same from here. I have presented the implementation of Creating DataBlock and DataLoaders using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Multilabel Classification & Regression
Day214 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Binary Cross Entropy Loss Function, DataLoaders and Learner, Getting Model Activations, Sigmoid and Softmax Functions, One Hot Encoding, Getting Accuracy, Partial Function and few more topics related to the same from here. F.binary_cross_entropy and its module equivalent nn.BCELoss calculate cross entropy on a one hot encoded target but don't include the initial sigmoid. Normally, F.binary_cross_entropy_with_logits or nn.BCEWithLogitsLoss do both sigmoid and binary cross entropy in a single function. Similarly, for a single label dataset, F.nll_loss or nn.NLLLoss is the version without the initial softmax and F.cross_entropy or nn.CrossEntropyLoss is the version with the initial softmax. I have presented the implementation of Cross Entropy Loss Functions and Accuracy using Fastai and PyTorch here in the snapshot; a small comparison sketch follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Multilabel Classification & Regression
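- A small sketch comparing the loss functions mentioned above (the activations and targets are made up): sigmoid plus F.binary_cross_entropy versus F.binary_cross_entropy_with_logits, and log_softmax plus F.nll_loss versus F.cross_entropy.

```python
import torch
import torch.nn.functional as F

acts = torch.randn(4, 5)                                   # raw activations: 4 items, 5 labels
one_hot_targets = torch.randint(0, 2, (4, 5)).float()      # one hot encoded multilabel targets

# multi-label: sigmoid + binary cross entropy, or both in one numerically stable call
loss_a = F.binary_cross_entropy(torch.sigmoid(acts), one_hot_targets)
loss_b = F.binary_cross_entropy_with_logits(acts, one_hot_targets)  # same as nn.BCEWithLogitsLoss

# single-label: log_softmax + nll_loss, or both in one call via cross_entropy
targets = torch.tensor([1, 0, 4, 2])
loss_c = F.nll_loss(F.log_softmax(acts, dim=1), targets)
loss_d = F.cross_entropy(acts, targets)                              # same as nn.CrossEntropyLoss

print(torch.isclose(loss_a, loss_b), torch.isclose(loss_c, loss_d))
```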
Day215 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Multilabel Classification and Threshold, Sigmoid Activation, Overfitting, Image Regression, Validation Loss and Metrics, Partial Function and few more topics related to the same from here. F.binary_cross_entropy and its module equivalent nn.BCELoss calculate cross entropy on a one hot encoded target but don't include the initial sigmoid. Normally, F.binary_cross_entropy_with_logits or nn.BCEWithLogitsLoss do both sigmoid and binary cross entropy in a single function. Similarly, for a single label dataset, F.nll_loss or nn.NLLLoss is the version without the initial softmax and F.cross_entropy or nn.CrossEntropyLoss is the version with the initial softmax. I have presented the implementation of Training the Convolutions with Accuracy and Threshold using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Multilabel Classification & Regression
- Fastai: Image Regression
Day216 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Image Regression and Localization, Assembling the Dataset, Initializing DataBlock and DataLoaders, Points and Data Augmentation, Training the Model, Sigmoid Range, MSE Loss Function, Transfer Learning and few more topics related to the same from here. I have presented the implementation of Initializing DataBlock and DataLoaders and Training Image Regression using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai: Multilabel Classification & Regression
- Fastai: Image Regression
Day217 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Imagenette Classification, DataBlock and DataLoaders, Data Normalization and Normalize Function, Progressive Resizing and Data Augmentation, Transfer Learning, Mean and Standard Deviation and few more topics related to the same from here. Progressive Resizing is the process of gradually using larger and larger images as training progresses. I have presented the implementation of Initializing DataBlock and DataLoaders, Normalization and Progressive Resizing using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Advanced Classification
Day218 of 300DaysOfData!
- Label Smoothing: Label Smoothing is a process which replaces all the labels for training, i.e. the 1s with a number a bit less than 1 and the 0s with a number a bit more than 0. It makes training more robust even if there is mislabeled data, which results in a model that generalizes better at inference. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Progressive Resizing, Test Time Augmentation, Mixup Augmentation, Linear Combinations, Callbacks, Label Smoothing and Cross Entropy Loss Function and few more topics related to the same from here. During inference or validation, creating multiple versions of each image using data augmentation and then taking the average or maximum of the predictions for each augmented version of the image is called Test Time Augmentation. I have presented the implementation of Progressive Resizing, Test Time Augmentation, Mixup Augmentation and Label Smoothing using Fastai and PyTorch here in the snapshot; a from-scratch label smoothing sketch follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Advanced Classification
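- A minimal from-scratch sketch of label smoothing cross entropy as described above (eps, the activations and the targets are illustrative assumptions); fastai packages the same idea as LabelSmoothingCrossEntropy.

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(acts, targets, eps=0.1):
    """Label smoothing: the target distribution puts eps/num_classes on every class
    and the remaining (1 - eps) on the correct class."""
    log_probs = F.log_softmax(acts, dim=1)
    uniform = -log_probs.mean(dim=1) * eps                            # eps/N spread over all classes
    correct = F.nll_loss(log_probs, targets, reduction='none') * (1 - eps)  # the correct class part
    return (uniform + correct).mean()

acts = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(label_smoothing_ce(acts, targets))
```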
Day219 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Collaborative Filtering, Learning the Latent Factors, Loss Function and Stochastic Gradient Descent, Creating DataLoaders, Batches, Dot Product and Matrix Multiplication and few more topics related to the same from here. The mathematical operation of multiplying the elements of two vectors together and then summing up the result is called Dot Product. I have presented the implementation of Initializing Dataset and Creating DataLoaders using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Collaborative Filtering
Day220 of 300DaysOfData!
- Embedding: The special layer that indexes into a vector using an integer, but has its derivative calculated in such a way that it is identical to what it would have been if it had done a matrix multiplication with a one hot encoded vector, is called an Embedding. Multiplying by a one hot encoded matrix can be implemented with the computational shortcut of simply indexing directly. The matrix that the one hot encoded matrix is multiplied by is called the Embedding Matrix. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Creating DataLoaders, Embedding Matrix, Collaborative Filtering, Object Oriented Programming with Python, Inheritance, Module and Forward Propagation Function, Batches and Learner, Sigmoid Range and few more topics related to the same from here. I have presented the implementation of Embedding, Dot Product Class and Sigmoid Range using Fastai and PyTorch here in the snapshot; a minimal sketch of the model follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Collaborative Filtering
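- A minimal sketch of the dot product collaborative filtering model described above in plain PyTorch (the numbers of users, movies and factors are made up); sigmoid_range is defined inline to squash predictions into the rating range.

```python
import torch
import torch.nn as nn

def sigmoid_range(x, lo, hi):
    # squash predictions into the rating range, e.g. 0 to 5.5
    return torch.sigmoid(x) * (hi - lo) + lo

class DotProduct(nn.Module):
    """User and movie embeddings multiplied elementwise and summed (a dot product),
    plus per-user and per-movie biases, squashed with a sigmoid range."""
    def __init__(self, n_users, n_movies, n_factors, y_range=(0, 5.5)):
        super().__init__()
        self.user_factors = nn.Embedding(n_users, n_factors)
        self.movie_factors = nn.Embedding(n_movies, n_factors)
        self.user_bias = nn.Embedding(n_users, 1)
        self.movie_bias = nn.Embedding(n_movies, 1)
        self.y_range = y_range
    def forward(self, x):                            # x: (batch, 2) of [user_id, movie_id]
        users = self.user_factors(x[:, 0])
        movies = self.movie_factors(x[:, 1])
        res = (users * movies).sum(dim=1, keepdim=True)       # the dot product
        res = res + self.user_bias(x[:, 0]) + self.movie_bias(x[:, 1])
        return sigmoid_range(res, *self.y_range)

model = DotProduct(n_users=100, n_movies=200, n_factors=50)
x = torch.stack([torch.randint(0, 100, (16,)), torch.randint(0, 200, (16,))], dim=1)
print(model(x).shape)   # torch.Size([16, 1])
```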
Day221 of 300DaysOfData!
- Embedding: The special layer that indexes into a vector using an integer, but has its derivative calculated in such a way that it is identical to what it would have been if it had done a matrix multiplication with a one hot encoded vector, is called an Embedding. Multiplying by a one hot encoded matrix can be implemented with the computational shortcut of simply indexing directly. The matrix that the one hot encoded matrix is multiplied by is called the Embedding Matrix. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Collaborative Filtering, Weight Decay or L2 Regularization, Overfitting, Creating Embeddings and Weight Matrices, Parameter Module and few more topics related to the same from here. Weight Decay consists of adding the sum of the squared weights to the loss function. The idea is that the larger the coefficients are, the sharper the canyons will be in the loss function. I have presented the implementation of Biases and Weight Decay and Matrices using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Collaborative Filtering
Day222 of 300DaysOfData!
- Embedding: The special layer that indexes into a vector using an integer, but has its derivative calculated in such a way that it is identical to what it would have been if it had done a matrix multiplication with a one hot encoded vector, is called an Embedding. Multiplying by a one hot encoded matrix can be implemented with the computational shortcut of simply indexing directly. The matrix that the one hot encoded matrix is multiplied by is called the Embedding Matrix. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Interpreting Embedding and Biases, Principal Component Analysis or PCA, Collab Learner, Embedding Distance and Cosine Similarity, Bootstrapping a Collaborative Filtering Model, Probabilistic Matrix Factorization or Dot Product Model and few more topics related to the same from here. I have presented the implementation of Interpreting Biases, Collab Learner Model and Embedding Distance using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Collaborative Filtering
Day223 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Deep Learning and Collaborative Filtering, Embedding Matrices, Linear Function, RELU and Nonlinear Functions, Sigmoid Range, Forward Propagation Function, Tabular Model and Embedding Neural Networks and few more topics related to the same from here. In Python, `**kwargs` in a parameter list means "put any additional keyword arguments into a dict called kwargs", and `**kwargs` in an argument list means "insert all key and value pairs in the kwargs dict as named arguments here". I have presented the implementation of Deep Learning for Collaborative Filtering and Neural Networks using Fastai and PyTorch here in the snapshot; a tiny example follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Collaborative Filtering
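- A tiny example of the `**kwargs` behaviour described above (the function name and the keyword arguments are purely illustrative).

```python
def make_learner(dls, **kwargs):
    # **kwargs in a parameter list: collect any extra keyword arguments into a dict
    print(kwargs)                 # e.g. {'y_range': (0, 5.5), 'use_nn': True}

config = {'y_range': (0, 5.5), 'use_nn': True}
# **kwargs in an argument list: expand the dict back into named arguments
make_learner(dls=None, **config)
```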
Day224 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Tabular Modeling, Categorical Embeddings, Continuous and Categorical Variables, Recommendation System, The Tabular Dataset, Ordinal Columns, Decision Trees, Handling Dates, Tabular Pandas and Tabular Proc Object and few more topics related to the same from here. I have presented the implementation of Handling Dates, Tabular Pandas and Tabular Proc using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Tabular Modeling
Day225 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Tabular Modeling, Creating the Decision Tree, Leaf Nodes, Root Mean Squared Error, DTreeviz Library, Stopping Criterion, Overfitting and few more topics related to the same from here. I have presented the implementation of Creating Decision Tree and Leaf Nodes using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Tabular Modeling
Day226 of 300DaysOfData!
- Random Forest: A Random Forest is a model that averages the predictions of a large number of decision trees which are generated by randomly varying various parameters that specify what data is used to train the tree and other tree parameters. Bagging is a particular approach to ensembling or combining the results of multiple models together. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Categorical Variables, Random Forests and Bagging Predictors, Ensembling, Optimal Parameters, Out of Bag Error, Tree Variance for Prediction Confidence and Standard Deviation, Model Interpretation and few more topics related to the same from here. The Out of Bag Error or OOB Error is a way of measuring prediction error on the training dataset by including in the calculation of a row's error only those trees where that row was not used for training. I have presented the implementation of Creating Random Forest and Model Interpretation using Fastai and PyTorch here in the snapshot; a small scikit-learn sketch follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Tabular Modeling
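- A minimal scikit-learn sketch of a bagged Random Forest with the out of bag score enabled (synthetic data stands in for the tabular dataset; the hyperparameters are illustrative assumptions).

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# synthetic regression data stands in for the tabular dataset used in the book
X, y = make_regression(n_samples=5000, n_features=20, noise=10.0, random_state=42)

# bagging: each tree sees a bootstrap sample of the rows and a random subset of features
rf = RandomForestRegressor(n_estimators=40, max_features=0.5, min_samples_leaf=5,
                           oob_score=True, n_jobs=-1, random_state=42)
rf.fit(X, y)

# the OOB score evaluates each row only with trees that did not see it during training
print(rf.oob_score_)
```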
Day227 of 300DaysOfData!
- Random Forest: A Random Forest is a model that averages the predictions of a large number of decision trees which are generated by randomly varying various parameters that specify what data is used to train the tree and other tree parameters. Bagging is a particular approach to ensembling or combining the results of multiple models together. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Random Forest, Feature Importance, Removing Low Importance Variables, Removing Redundant Features, Determining Similarity of Features, Rank Correlation, OOB Score and few more topics related to the same from here. I have presented the implementation of Random Forest and Feature Importance using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Tabular Modeling
Day228 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Removing Redundant Features, Determining Similarity, OOB Score, Partial Dependence Plots, Data Leakage, Root Mean Squared Error and few more topics related to the same from here. Standard Deviation of predictions across the trees presents the relative confidence of predictions. The model is more consistent when the Standard Deviation is lower. I have presented the implementation of Removing Redundant Features and Partial Dependence Plots using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Tabular Modeling
Day229 of 300DaysOfData!
- A Random Forest Model just averages the predictions of a number of trees and therefore it can never predict values outside the range of the training data. Random Forests are not able to extrapolate outside the types of data they have seen, i.e. out of domain data. Here, prediction is simply the prediction that the Random Forest makes, bias is the prediction based on taking the mean of the dependent variable, and contributions tell us the total change in prediction due to each of the independent variables. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Tree Interpreter, Redundant Features, Waterfall Charts or Plots, Random Forest, Prediction, Bias and Contributions, The Extrapolation Problem, Unsqueeze Method, Out of Domain Data and few more topics related to the same from here. I have presented the implementation of Tree Interpreter, Waterfall Plots, Extrapolation Problem using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Tabular Modeling
Day230 of 300DaysOfData!
- A Random Forest Model just averages the predictions of a number of trees and therefore it can never predict values outside the range of the training data. Random Forests are not able to extrapolate outside the types of data they have seen, i.e. out of domain data. Here, prediction is simply the prediction that the Random Forest makes, bias is the prediction based on taking the mean of the dependent variable, and contributions tell us the total change in prediction due to each of the independent variables. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about The Extrapolation Problem and Random Forest, Finding Out of Domain Data, Root Mean Squared Error and Feature Importance, Histograms and few more topics related to the same from here. I have presented the implementation of Finding Out of Domain Data and RMSE using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Tabular Modeling
Day231 of 300DaysOfData!
- Random Forest: Random Forest Model just averages the predictions of a number of trees and therefore it can never predict values outside the range of the training data. Random Forests are not able to extrapolate outside the types of data i.e out of domain data. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Tabular Modeling and Neural Networks, Continuous and Categorical Features, Embedding Matrix, Mean Squared Error and Regression, Tabular Learner and Learning Rate, Ensembling, Bagging and Boosting, Combining Embeddings and few more topics related to the same from here. Ensembling is the generalization technique in which the average of the predictions of several models are used. I have presented the implementation of Tabular Modeling and Neural Networks and Ensembling using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Tabular Modeling
Day232 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about NLP and Language Model, Self Supervised Learning, Text Preprocessing, Tokenization, Numericalization and Embedding Matrix, Subword and Characters, Tokens and few more topics related to the same from here. A Token is an element of a list created by the Tokenization process, which could be a word, a part of a word (a subword) or a single character. I have presented the implementation of Loading the Data and Word Tokenization using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Natural Language Processing
Day233 of 300DaysOfData!
- Tokenization: Subword Tokenization splits words into smaller parts based on the most commonly occurring sub strings. Word Tokenization splits a sentence on spaces as well as applying language specific rules to try to separate parts of meaning even when there are no spaces. Subword Tokenization provides a way to easily scale between character tokenization i.e. using a small subword vocab and word tokenization i.e using a large subword vocab and handles every human language without needing language specific algorithms to be developed. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Word Tokenization, Subword Tokenization, Setup Method, Vocabulary, Numericalization with Fastai, Embedding Matrices and few more topics related to the same from here. I have presented the implementation of Subword Tokenization and Numericalization using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Natural Language Processing
Day234 of 300DaysOfData!
- Tokenization: Subword Tokenization splits words into smaller parts based on the most commonly occurring sub strings. Word Tokenization splits a sentence on spaces as well as applying language specific rules to try to separate parts of meaning even when there are no spaces. Subword Tokenization provides a way to easily scale between character tokenization i.e. using a small subword vocab and word tokenization i.e using a large subword vocab and handles every human language without needing language specific algorithms to be developed. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Numericalization with Fastai, Embedding Matrices, Creating Batches for Language Model, Tokenization, Training a Text Classifier, Language Model using DataBlock, Data Loaders, Fine Tuning Language Model and Transfer Learning and few more topics related to the same from here. I have presented the implementation of Creating Data Loaders and Data Block for Language Model using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Natural Language Processing
Day235 of 300DaysOfData!
- Encoder: Encoder is defined as the model which doesn't contain task specific final layers. The term Encoder means much the same thing as body when applied to vision CNN but Encoder tends to be more used for NLP and generative models. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Encoder Model, Text Generation and Classification, Creating the Classifier Data Loaders, Embeddings, Data Augmentation, Fine Tuning the Classifier, Discriminative Learning Rates and Gradual Unfreezing, Disinformation and Language Models and few more topics related to the same from here. I have presented the implementation of Training Text Classifier Model using Discriminative Learning Rates and Gradual Unfreezing using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Natural Language Processing
Day236 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Data Munging with Fastai, Tokenization and Numericalization, Creating Data Loaders and Data Block, Mid Level API, Transforms, Decode Method, Data Augmentation, Cropping and Padding and few more topics related to the same from here. I have presented the implementation of Creating Data Loaders, Tokenization and Numericalization using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Data Munging
Day237 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Data Munging, Decorator, Pipeline Method, Transformed Collections, Training and Validation Set, Data Loaders Object, Categorize Method, Transformations and few more topics related to the same from here. I have presented the implementation of Pipeline Class and Transformed Collections using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Data Munging
Day238 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Datasets Class, Transformed Collections, Pipelines, Categorize Method, Data Loaders and Data Block, Text Block, Partial Function, Category Block and few more topics related to the same from here. I have presented the implementation of Datasets Class, Transformed Collections and Data Loaders using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Data Munging
Day239 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Applying Mid Level Data API for Siamese Pair and Computer Vision, Data Loaders, Transforms and Resizing Images, Data Augmentation, Subclasses, Transformed Collections and few more topics related to the same from here. Datasets class will apply two or more pipelines in parallel to the same raw object and build a tuple with the result. It will automatically do the setup and index into a Datasets. I have presented the implementation of Siamese Image Object and Data Augmentation using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Data Munging
Day240 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Siamese Transform Object, Random Splitting, Transformed Collections and Datasets Class, Data Loaders, ToTensor and IntToFloatTensor Methods, Data and Batch Normalization and few more topics related to the same from here. ToTensor method converts images to tensors. IntToFloatTensor method converts the tensor of images containing the integers from 0 to 255 to a tensor of floats and divide by 255 to make values between 0 and 1. I have presented the implementation of Siamese Transform Object and Data Augmentation using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Data Munging
Day241 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Language Model from Scratch, Data Concatenation and Tokenization, Vocabulary and Numericalization, Neural Networks, Independent Variables and Dependent Variable, Sequence of Tensors and few more topics related to the same from here. I have presented the implementation of Preparing Sequence of Tensors for Language Model using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Language Model from Scratch
Day242 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Language Model from Scratch using PyTorch, Sequence Tensors, Creating Data Loaders and Batchsize, Neural Network Architecture and Linear Layers, Word Embeddings and Activations, Weight Matrix, Creating Learner and Training and few more topics related to the same from here. I will create a neural network architecture that takes three words as input and returns the predictions of the probability of each possible next word in the vocab. I will use three standard linear layers. The first linear layer will use only the first word's embedding as activations. The second layer will use the second word's embedding plus the first layer's output activations, and the third layer will use the third word's embedding plus the second layer's output activations. The key effect is that every word is interpreted in the information context of any words preceding it. Each of these three layers will use the same weight matrix. I have presented the implementation of Creating Data Loaders, Language Model from Scratch and Training using Fastai and PyTorch here in the snapshot; a sketch of this architecture follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Language Model from Scratch
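- A minimal sketch of the three-word language model architecture described above in plain PyTorch (the vocabulary size and hidden size are made up): one shared embedding, one shared hidden linear layer applied three times, and an output layer over the vocab.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThreeWordLM(nn.Module):
    """Takes three word indices and predicts activations over every possible next word.
    The embedding and the hidden linear layer are shared across the three positions."""
    def __init__(self, vocab_sz, n_hidden):
        super().__init__()
        self.i_h = nn.Embedding(vocab_sz, n_hidden)   # word embedding (shared weight matrix)
        self.h_h = nn.Linear(n_hidden, n_hidden)      # shared hidden layer
        self.h_o = nn.Linear(n_hidden, vocab_sz)      # output layer over the vocab
    def forward(self, x):                             # x: (batch, 3) word indices
        h = F.relu(self.h_h(self.i_h(x[:, 0])))       # first word's embedding only
        h = F.relu(self.h_h(h + self.i_h(x[:, 1])))   # plus the second word's embedding
        h = F.relu(self.h_h(h + self.i_h(x[:, 2])))   # plus the third word's embedding
        return self.h_o(h)

model = ThreeWordLM(vocab_sz=2000, n_hidden=64)
x = torch.randint(0, 2000, (16, 3))
print(model(x).shape)                                 # torch.Size([16, 2000])
```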
Day243 of 300DaysOfData!
- Backpropagation Through Time: Backpropagation Through Time is a process of treating a neural network with effectively one layer per time step as one big model and calculating gradients on it in the usual way. To avoid running out of memory and time, the BPTT technique detaches the history of computation steps in the hidden state every few time steps. Hidden State is defined as the activations that are updated at each step of a recurrent neural network. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Recurrent Neural Networks, Hidden State of NN, Improving the RNN, Maintaining the State of RNN, Unrolled Representation, Backpropagation and Derivatives, Detach Method, Stateful RNN, Backpropagation Through Time and few more topics related to the same from here. I have presented the implementation of Recurrent Neural Networks and Language Model using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Language Model from Scratch
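- A minimal sketch of a stateful recurrent neural network in plain PyTorch (vocabulary size, hidden size and sequence length are made up), keeping the hidden state across batches and detaching it to truncate backpropagation through time.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StatefulRNN(nn.Module):
    """A simple recurrent language model that maintains its hidden state across batches."""
    def __init__(self, vocab_sz, n_hidden):
        super().__init__()
        self.i_h = nn.Embedding(vocab_sz, n_hidden)   # input to hidden
        self.h_h = nn.Linear(n_hidden, n_hidden)      # hidden to hidden
        self.h_o = nn.Linear(n_hidden, vocab_sz)      # hidden to output
        self.h = 0                                    # hidden state kept across batches
    def forward(self, x):
        outs = []
        for i in range(x.shape[1]):                   # loop over the sequence length
            self.h = self.h + self.i_h(x[:, i])
            self.h = F.relu(self.h_h(self.h))
            outs.append(self.h_o(self.h))
        # truncated BPTT: keep the hidden state's values but drop its gradient history
        self.h = self.h.detach()
        return torch.stack(outs, dim=1)
    def reset(self):
        self.h = 0                                    # called by a callback at epoch boundaries

model = StatefulRNN(vocab_sz=100, n_hidden=64)
x = torch.randint(0, 100, (32, 16))                   # a batch of 32 sequences of 16 tokens
print(model(x).shape)                                 # torch.Size([32, 16, 100])
```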
Day244 of 300DaysOfData!
- Backpropagation Through Time: Backpropagation Through Time is a process of treating a neural network with effectively one layer per time step as one big model and calculating gradients on it in the usual way. To avoid running out of memory and time, the BPTT technique detaches the history of computation steps in the hidden state every few time steps. Hidden State is defined as the activations that are updated at each step of a recurrent neural network. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Backpropagation Through Time, LMDataLoader Object and Arranging the Dataset, Creating Data Loaders, Callbacks and Reset Method, Creating More Signal and few more topics related to the same from here. I have presented the implementation of Arranging Dataset, Creating Data Loaders, Callbacks and Reset Method using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Language Model from Scratch
Day245 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Creating More Signal or Sequence, Cross Entropy Loss Function and Flatten Method, Multilayer Recurrent Neural Networks and Activations, Unrolled Representation, Stack and few more topics related to the same from here. The single layer Recurrent Neural Network performed better than Multilayer Recurrent Neural Network because a deeper model leads to exploding and vanishing activations. I have presented the implementation Creating more Signal and Multilayer Recurrent Neural Network using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Language Model from Scratch
Day246 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Exploding and Disappearing Activations, Matrix Multiplication, Architecture of Long Short Term Memory and RNN, Sigmoid and Tanh Function, Hidden State and Cell State, Forget Gate, Input Gate, Cell Gate and Output Gate, Chunk Method and few more topics related to the same from here. I have presented the implementation Long Short Term Memory using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Language Model from Scratch
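- A from-scratch sketch of an LSTM cell along the lines described above (the input and hidden sizes are made up): the four gates are computed in one go and split apart with the chunk method, then used to update the cell state and the hidden state.

```python
import torch
import torch.nn as nn

class LSTMCell(nn.Module):
    """A from-scratch LSTM cell sketch: one linear layer per source computes all four gates."""
    def __init__(self, ni, nh):
        super().__init__()
        self.ih = nn.Linear(ni, 4 * nh)   # input to all four gates at once
        self.hh = nn.Linear(nh, 4 * nh)   # hidden state to all four gates at once
    def forward(self, x, state):
        h, c = state                                       # hidden state and cell state
        gates = (self.ih(x) + self.hh(h)).chunk(4, dim=1)  # split into the four gates
        forget_gate, in_gate, out_gate = [g.sigmoid() for g in gates[:3]]
        cell_gate = gates[3].tanh()
        c = forget_gate * c + in_gate * cell_gate          # update the cell state
        h = out_gate * c.tanh()                            # new hidden state
        return h, (h, c)

cell = LSTMCell(ni=32, nh=64)
x, h, c = torch.randn(8, 32), torch.zeros(8, 64), torch.zeros(8, 64)
out, (h, c) = cell(x, (h, c))
print(out.shape)   # torch.Size([8, 64])
```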
Day247 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Training Language Model using LSTM, Embedding Layer, Linear Layer, Overfitting and Regularization of LSTM, Dropout Regularization, Training or Inference, Bernoulli Method and few more topics related to the same from here. Dropout is a regularization technique which randomly changes some activations to zero at a training time. I have presented the implementation Language Model using Long Short Term Memory and Dropout using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Language Model from Scratch
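- A from-scratch sketch of a Dropout layer as described above (p is an illustrative value), using the bernoulli_ method to build the mask and rescaling the surviving activations so their expected value is unchanged.

```python
import torch
import torch.nn as nn

class Dropout(nn.Module):
    """Randomly zero activations with probability p at training time; do nothing at inference."""
    def __init__(self, p=0.5):
        super().__init__()
        self.p = p
    def forward(self, x):
        if not self.training:
            return x                                          # no dropout during inference
        mask = x.new_empty(*x.shape).bernoulli_(1 - self.p)   # 1 with prob (1 - p), else 0
        return x * mask / (1 - self.p)                        # rescale the survivors

drop = Dropout(p=0.5)
drop.train()
print(drop(torch.ones(2, 8)))   # roughly half the entries zeroed, the rest scaled to 2.0
```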
Day248 of 300DaysOfData!
- Activation Regularization: Activation Regularization is a process of adding the small penalty to the final activations produced by the LSTM to make it as small as possible. It is a regularization method very similar to weight decay. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Activation Regularization and Temporal Activation Regularization, Language Model using Long Short Term Memory, Weight Decay, Training a Weight Tied Regularized LSTM, Weight Tying and Input Embeddings, Text Learner, Cross Entropy Loss Function and few more topics related to the same from here. I have presented the implementation Language Model using Regularized Long Short Term Memory and Regularized Dropout and Activation Regularization using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Language Model from Scratch
Day249 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Convolutional Neural Networks, The Magic of Convolutions, Feature Engineering, Kernel and Matrix, Mapping a Convolutional Kernel, Nested List Comprehensions, Matrix Multiplications and few more topics related to the same from here. Feature Engineering is the process of creating new transformations of the input data in order to make it easier to model. I have presented the implementation of Feature Engineering and Mapping a Convolutional Kernel using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Convolutional Neural Networks
Day250 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Convolutions with PyTorch, Rank Tensors, Creating Data Block and Data Loaders, Channel of Images, Unsqueeze Method and Unit Axis, Strides and Padding, Understanding the Convolutions Equations, Matrix Multiplication, Shared Weights and few more topics related to the same from here. A channel is a single basic color in an image. For a regular full color images, there are three channels : red, green and blue. Kernels passed to convolutions need to be rank 4 tensors. I have presented the implementation of Convolutions and DataLoaders using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Convolutional Neural Networks
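- A minimal sketch of a convolution with F.conv2d (the image tensor and the kernel values are made up), showing that the kernel must be a rank 4 tensor and how unsqueeze adds the unit axes.

```python
import torch
import torch.nn.functional as F

# a 3x3 edge-detection style kernel; conv2d expects rank 4 weights: (out_ch, in_ch, h, w)
top_edge = torch.tensor([[-1., -1., -1.],
                         [ 0.,  0.,  0.],
                         [ 1.,  1.,  1.]])
kernel = top_edge.unsqueeze(0).unsqueeze(0)       # unit axes added: shape (1, 1, 3, 3)

img = torch.rand(1, 1, 28, 28)                    # a batch with one single-channel image
out = F.conv2d(img, kernel, stride=1, padding=1)  # padding keeps the 28x28 grid size
print(out.shape)                                  # torch.Size([1, 1, 28, 28])
```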
Day251 of 300DaysOfData!
- Channels and Features: Channels and Features are largely used interchangeably and refer to the size of the second axis of a weight matrix, which is the number of activations per grid cell after a convolution. Channels refer to the input data (i.e. colors) or activations inside the network. Using a stride 2 convolution often increases the number of Features at the same time, because the number of activations in the activation map decreases by a factor of 4. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Convolutional Neural Network, Refactoring, Channels and Features, Understanding Convolution Arithmetic, Biases, Receptive Fields, Convolution over RGB Image, Stochastic Gradient Descent and few more topics related to the same from here. I have presented the implementation of Convolutional Neural Network and Training the Learner using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Convolutional Neural Networks
Day252 of 300DaysOfData!
- Channels and Features: Channels and Features are largely used interchangeably and refer to the size of the second axis of a weight matrix, which is the number of activations per grid cell after a convolution. Channels refer to the input data (i.e. colors) or activations inside the network. Using a stride 2 convolution often increases the number of Features at the same time, because the number of activations in the activation map decreases by a factor of 4. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Improving Training Stability of Convolutional Neural Networks, Batch Size and Splitting the Dataset, Simple Baseline Network, Activations and Kernel Size, Activation Stat Callbacks, Learning Rate, Creating a Learner and Training and few more topics related to the same from here. I have presented the implementation of Convolutional Neural Network and Training the Learner using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Convolutional Neural Networks
Day253 of 300DaysOfData!
- One Cycle Training: 1 Cycle Training is a combination of warmup and annealing. Warmup is the phase where the learning rate grows from the minimum value to the maximum value, and Annealing is the phase where it decreases back to the minimum value. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Activation Stats Callbacks, Increasing Batch Size, Activations, 1 Cycle Training, Warmup and Annealing, Super Convergence, Learning Rate and Momentum, Colorful Dimension and Histograms and few more topics related to the same from here. I have presented the implementation of Increasing Batch Size, 1 Cycle Training and Inspecting Momentum and Activations using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Convolutional Neural Networks
Day254 of 300DaysOfData!
- Fully Convolutional Networks: The idea in Fully Convolutional Networks is to take the average of activations across a convolutional grid. A Fully Convolutional Network has a number of convolutional layers, some of which will be stride 2 convolutions, at the end of which is an adaptive average pooling layer, a flatten layer to remove the unit axes and finally a linear layer. Larger batches have gradients that are more accurate since they are calculated from more data, but a larger batch size means fewer batches per epoch, which means fewer opportunities for the model to update its weights. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Residual Networks or ResNets, Convolutional Neural Networks, Strides and Padding, Fully Convolutional Networks, Adaptive Average Pooling Layer, Flatten Layer, Activations and Matrix Multiplications and few more topics related to the same from here. I have presented the implementation of Preparing Data and Fully Convolutional Networks using Fastai and PyTorch here in the snapshot; a small sketch follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Residual Networks
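- A minimal sketch of a fully convolutional network as described above (the channel counts and the number of classes are illustrative assumptions): stride 2 convolutions, an adaptive average pooling layer, a flatten layer and a final linear layer.

```python
import torch
import torch.nn as nn

def conv_block(ni, nf, stride=2):
    # a stride 2 convolution halves the grid size while (typically) increasing the channels
    return nn.Sequential(nn.Conv2d(ni, nf, kernel_size=3, stride=stride, padding=1),
                         nn.BatchNorm2d(nf), nn.ReLU())

# a small fully convolutional network for 10 classes
model = nn.Sequential(
    conv_block(3, 16), conv_block(16, 32), conv_block(32, 64), conv_block(64, 128),
    nn.AdaptiveAvgPool2d(1),   # average the activations across the final convolutional grid
    nn.Flatten(),              # remove the 1x1 unit axes
    nn.Linear(128, 10))

x = torch.randn(8, 3, 64, 64)
print(model(x).shape)          # torch.Size([8, 10])
```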
Day255 of 300DaysOfData!
- Fully Convolutional Networks: The idea in Fully Convolutional Networks is to take the average of activations across a convolutional grid. A Fully Convolutional Network has a number of convolutional layers, some of which will be stride 2 convolutions, at the end of which is an adaptive average pooling layer, a flatten layer to remove the unit axes and finally a linear layer. Larger batches have gradients that are more accurate since they are calculated from more data, but a larger batch size means fewer batches per epoch, which means fewer opportunities for the model to update its weights. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Fully Convolutional Neural Networks, Building ResNet, Skip Connections, Identity Mapping, SGD, Batch Normalization Layer, Trainable Parameters, True Identity Path, Convolutional Neural Networks, Average Pooling Layer and few more topics related to the same from here. I have presented the implementation of ResNet Architecture and Skip Connections using Fastai and PyTorch here in the snapshot; a residual block sketch follows after the book list below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Residual Networks
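- A minimal sketch of a residual block with a skip connection (the channel counts and tensor sizes are made up): the identity path only uses pooling and a 1x1 convolution when the shape changes, so a true identity path remains possible when it does not.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    """A minimal residual block: output = relu(convs(x) + identity(x))."""
    def __init__(self, ni, nf, stride=1):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(ni, nf, 3, stride=stride, padding=1), nn.BatchNorm2d(nf), nn.ReLU(),
            nn.Conv2d(nf, nf, 3, padding=1), nn.BatchNorm2d(nf))
        # the skip connection must match the output shape: pool when striding,
        # 1x1 convolution when the number of channels changes
        self.pool = nn.Identity() if stride == 1 else nn.AvgPool2d(2, ceil_mode=True)
        self.idconv = nn.Identity() if ni == nf else nn.Conv2d(ni, nf, 1)
    def forward(self, x):
        return F.relu(self.convs(x) + self.idconv(self.pool(x)))

x = torch.randn(4, 16, 32, 32)
print(ResBlock(16, 32, stride=2)(x).shape)   # torch.Size([4, 32, 16, 16])
```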
Day256 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Residual Networks, RELU Activation Function, Skip Connections, Training Deeper Models, Loss Landscape of NN, Stem of the Network, Convolutional Layers, Max Pooling Layer and few more topics related to the same from here. The Stem is defined as the first few layers of a CNN. It has a different structure from the main body of the CNN. I have presented the implementation of Training Deeper Models and Stem of Network using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Residual Networks
Day257 of 300DaysOfData!
- Bottleneck Layers: Bottleneck Layers use three convolutions: two 1x1 convolutions at the beginning and the end, and one 3x3 convolution in between. The 1x1 convolutions are much faster, which makes it feasible to use a higher number of filters in and out. The 1x1 convolutions diminish and then restore the number of channels, hence the name Bottleneck. The overall impact is to facilitate the use of more filters in the same amount of time. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Stem of the Network, Residual Network Architecture, Bottleneck Layers, Convolutional Neural Networks, Progressive Resizing and few more topics related to the same from here. I have presented the implementation of Training Deeper Networks and Bottleneck Layers using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Residual Networks
Day258 of 300DaysOfData!
- Splitter Function: A splitter is a function that tells the fastai library how to split the model into parameter groups, which are used to train only the head of the model during transfer learning. params is just a function that returns all the parameters of a given module. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Body and Head of Networks, Batch Normalization Layer, Unet Learner and Architecture, Generative Vision Models, Nearest Neighbor Interpolation, Transposed Convolutions, Siamese Network, Loss Function and Splitter Function and few more topics related to the same from here. I have presented the implementation of Siamese Network Model, Loss Function and Splitter Function using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Architecture Details
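Below is a small sketch of what a splitter can look like, assuming a hypothetical Siamese-style model with `encoder` and `head` attributes; the `params` helper here mirrors the idea of fastai's own helper of the same name.

```python
def params(m):
    """Return all parameters of module `m` (same idea as fastai's `params` helper)."""
    return [p for p in m.parameters()]

def siamese_splitter(model):
    # Two parameter groups: the pretrained body and the newly added head,
    # so fastai can freeze the body and train only the head at first.
    return [params(model.encoder), params(model.head)]

# Usage (hypothetical): Learner(dls, model, loss_func=loss_func,
#                               splitter=siamese_splitter, metrics=accuracy)
```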
Day259 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Stochastic Gradient Descent, Loss Function, Updating Weights, Optimization Function, Creating Data Block and Data Loaders, ResNet Model and Learner, Training Process and few more topics related to the same from here. I have presented the implementation of Preparing Dataset and Baseline Model using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Training Process
Day260 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Training Process, Stochastic Gradient Descent, Optimization Function, Learning Rate Finder, Momentum, Optimizer Callbacks, Zeroing Gradients, Partial Function and few more topics related to the same from here. I have presented the implementation of Functions for Optimizer and SGD here in the snapshot, and a minimal SGD step sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Training Process
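As a minimal sketch in plain PyTorch (not the book's exact optimizer code), the basic SGD step and gradient zeroing can be written as follows.

```python
import torch

def sgd_step(p, lr, **kwargs):
    """Basic SGD step: move the parameter opposite to its gradient."""
    p.data.add_(p.grad.data, alpha=-lr)

def zero_grad(params):
    """Zero gradients in place so they do not accumulate across batches."""
    for p in params:
        if p.grad is not None:
            p.grad.detach_()
            p.grad.zero_()

w = torch.randn(3, requires_grad=True)
(w ** 2).sum().backward()
sgd_step(w, lr=0.1)
zero_grad([w])
```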
Day261 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Stochastic Gradient Descent and Optimization Function, Momentum, Exponentially Weighted Moving Average, Gradient Averages, Callbacks, RMS Prop, Adaptive Learning Rate, Divergence and Epsilon and few more topics related to the same from here. I have presented the implementation of Momentum and RMS Prop using Fastai and PyTorch here in the snapshot, and a small sketch of both update rules is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Training Process
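The following is a hedged sketch of the running averages behind momentum and RMSProp in plain PyTorch; the function names, state handling and hyperparameter values are illustrative rather than the book's exact code.

```python
import torch

def momentum_step(p, lr, grad_avg=None, mom=0.9, **kwargs):
    """Keep an exponentially weighted moving average of gradients and step along it."""
    if grad_avg is None:
        grad_avg = torch.zeros_like(p.grad.data)
    grad_avg = grad_avg * mom + p.grad.data * (1 - mom)
    p.data.add_(grad_avg, alpha=-lr)
    return {'grad_avg': grad_avg}

def rms_prop_step(p, lr, sqr_avg=None, sqr_mom=0.99, eps=1e-8, **kwargs):
    """Divide the step by the root of a moving average of squared gradients."""
    if sqr_avg is None:
        sqr_avg = torch.zeros_like(p.grad.data)
    sqr_avg = sqr_avg * sqr_mom + p.grad.data.pow(2) * (1 - sqr_mom)
    p.data.addcdiv_(p.grad.data, (sqr_avg + eps).sqrt(), value=-lr)
    return {'sqr_avg': sqr_avg}

w = torch.randn(3, requires_grad=True)
(w ** 2).sum().backward()
mom_state = momentum_step(w, lr=0.1)      # pass mom_state['grad_avg'] on the next step
rms_state = rms_prop_step(w, lr=0.01)     # pass rms_state['sqr_avg'] on the next step
```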
Day262 of 300DaysOfData!
- Adam Optimizer: Adam mixes the ideas of SGD with momentum and RMSProp: it uses the moving average of the gradients as a direction and divides by the square root of the moving average of the squared gradients to give an adaptive learning rate to each parameter. It uses unbiased (debiased) moving averages. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about RMSProp Optimizer, SGD, Adam Optimizer, Unbiased Moving Average of Gradients, Momentum Parameter, Decoupled Weight Decay, L1 and L2 Regularization, Callbacks and few more topics related to the same from here. I have presented the implementation of RMS Prop and Adam Optimizer using Fastai and PyTorch here in the snapshot, and a minimal Adam update sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Training Process
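Here is a minimal sketch of a single Adam update in plain PyTorch, assuming the caller keeps the `grad_avg`, `sqr_avg` and `step` state between calls; names and defaults are illustrative, not fastai's exact implementation.

```python
import torch

def adam_step(p, lr, step, grad_avg, sqr_avg, mom=0.9, sqr_mom=0.99, eps=1e-5):
    """One Adam update: debiased moving averages of the gradient and of its square."""
    grad_avg.mul_(mom).add_(p.grad.data, alpha=1 - mom)
    sqr_avg.mul_(sqr_mom).addcmul_(p.grad.data, p.grad.data, value=1 - sqr_mom)
    # Debias the averages so the first few steps are not biased toward zero.
    debias1 = 1 - mom ** step
    debias2 = 1 - sqr_mom ** step
    p.data.addcdiv_(grad_avg / debias1, (sqr_avg / debias2).sqrt() + eps, value=-lr)

w = torch.randn(3, requires_grad=True)
grad_avg, sqr_avg = torch.zeros_like(w), torch.zeros_like(w)
(w ** 2).sum().backward()
adam_step(w, lr=0.01, step=1, grad_avg=grad_avg, sqr_avg=sqr_avg)
```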
Day263 of 300DaysOfData!
- Adam Optimizer: Adam mixes the idea of SGD with momentum and RMSProp together where it uses the moving average of the gradients as a direction and divides by the square root of the moving average of the gradients squared to give an adaptive learning rate to each parameter. It takes the unbiased moving average. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Creating Callbacks, Loss Functions, Model Resetter Callbacks, RNN Regularization, Callback Ordering and Exceptions, Stochastic Gradient Descent and few more topics related to the same from here. I have presented the implementation of Model Resetter Callback and RNN Regularization Callback using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Training Process
Day264 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Neural Networks, Building a Neural Network from Scratch, Modeling a Neuron, Nonlinear Activation Functions, Hidden Size, Fully Connected Layer and Dense Layer, Linear Layer, Matrix Multiplication from Scratch, Elementwise Arithmetic and few more topics related to the same from here. I have presented the implementation of Matrix Multiplication from Scratch and Elementwise Arithmetic using Fastai and PyTorch here in the snapshot, and a small from-scratch matrix multiplication sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Neural Network Foundations
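The sketch below shows matrix multiplication written with explicit loops and elementwise arithmetic, purely for understanding; in practice the built-in `@` operator should be used.

```python
import torch

def matmul(a, b):
    """Naive matrix multiplication with explicit loops, for understanding only."""
    ar, ac = a.shape
    br, bc = b.shape
    assert ac == br, "inner dimensions must match"
    c = torch.zeros(ar, bc)
    for i in range(ar):
        for j in range(bc):
            c[i, j] = (a[i, :] * b[:, j]).sum()   # elementwise product, then sum
    return c

a = torch.randn(3, 4)
b = torch.randn(4, 2)
print(torch.allclose(matmul(a, b), a @ b, atol=1e-5))   # True
```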
Day265 of 300DaysOfData!
- Forward and Backward Passes: Computing all the gradients of a given loss with respect to its parameters is known as Backward Pass. Similarly computing the output of the model on a given input based on the matrix products is known as Forward Pass. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Broadcasting with Scalar, Broadcasting Vector and Matrix, Unsqueeze Method, Einstein Summation, Matrix Multiplication, The Forward and Backward Passes, Defining and Initializing Layer, Activation Function, Linear Layer, Weights and Biases and few more topics related to the same from here. I have presented the implementation of Einstein Summation and Defining and Initializing Linear Layer using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Neural Network Foundations
Day266 of 300DaysOfData!
- Forward and Backward Passes: Computing all the gradients of a given loss with respect to its parameters is known as Backward Pass. Similarly computing the output of the model on a given input based on the matrix products is known as Forward Pass. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Mean and Standard Deviation, Matrix Multiplications, Xavier Initialization, RELU Activation, Kaiming Initialization, Weights and Activations and few more topics related to the same from here. I have presented the implementation of Xavier Initialization, RELU Activation, and Matrix Multiplications using Fastai and PyTorch here in the snapshot, and a short sketch of both initialization schemes is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Neural Network Foundations
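A small sketch of the two initialization schemes in plain PyTorch; the layer sizes are arbitrary and chosen only to illustrate the scaling factors.

```python
import math
import torch

n_in, n_out = 784, 50
x = torch.randn(200, n_in)

# Xavier/Glorot initialization: scale weights by 1/sqrt(n_in).
w_xavier = torch.randn(n_in, n_out) / math.sqrt(n_in)

# Kaiming/He initialization: scale by sqrt(2/n_in) to compensate for ReLU zeroing half the activations.
w_kaiming = torch.randn(n_in, n_out) * math.sqrt(2 / n_in)

a = torch.relu(x @ w_kaiming)
print(a.mean().item(), a.std().item())   # the std stays well away from zero, layer after layer
```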
Day267 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Kaiming Initialization, Forward Pass, Mean Squared Error Loss Function, Gradients and Backward Pass, Linear Layers and RELU Activation Function, Chain Rule, Backpropagation and few more topics related to the same from here. I have presented the implementation of Kaiming Initialization, MSE Loss Function and Gradients using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Neural Network Foundations
Day268 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Gradients of Matrix Multiplication, Symbolic Computation, Forward and Backward Propagation Function, Model Parameters, Weights and Biases, Refactoring the Model, Callable Module and few more topics related to the same from here. I have presented the implementation of RELU Module, Linear Module and Mean Squared Error Module using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Neural Network Foundations
Day269 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Initializing Model Architecture, Callable Function, Forward and Backward Propagation Function, Linear Function, Mean Squared Error Loss Function, RELU Activation Function, Back Propagation Function and Gradients, Squeeze Function and few more topics related to the same from here. I have also read about Perturbations and Neural Networks, Vanishing Gradients and Convolutional Neural Networks. I have presented the implementation of Defining Model Architecture, Layer Function and RELU using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Neural Network Foundations
Day270 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Defining Base Class and Sub Classes, Linear Layer, RELU Activation Function and Non Linearities, Mean Squared Error Function, Super Class Initializer, Kaiming Initialization, Elementwise Arithmetic and Broadcasting and few more topics related to the same from here. I have presented the implementation of Defining Linear Layer and Linear Model using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Neural Network Foundations
Day271 of 300DaysOfData!
- Class Activation Map: The Class Activation Map uses the output of the last convolutional layer, which is just before the average pooling layer, together with the predictions to give a heatmap visualization of the model's decision. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about CNN Interpretation, Class Activation Map, Hooks, Heatmap Visualization, Activations and Convolutional Layer, Dot Product, Feature Map, Data Loaders and few more topics related to the same from here. I have presented the implementation of Defining Hook Function and Decoding Images using Fastai and PyTorch here in the snapshot, and a minimal hook-based CAM sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- CNN Interpretation with CAM
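The following is a self-contained sketch of the hook-plus-CAM idea using a tiny stand-in model (a conv body followed by pooling and a linear head); it is not the book's code and the architecture is purely illustrative.

```python
import torch
import torch.nn as nn

# Tiny stand-in model: a conv body followed by average pooling and a linear head.
body = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
model = nn.Sequential(body, head)

activations = {}
def hook_fn(module, inputs, output):
    activations['act'] = output.detach()        # activations of the last conv block

handle = model[0].register_forward_hook(hook_fn)
x = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    preds = model(x)
handle.remove()

# CAM: weight each feature map by the final linear layer's weights for every class.
act = activations['act'][0]                     # (channels, h, w)
w = model[1][-1].weight                         # (n_classes, channels)
cam = torch.einsum('ck,kij->cij', w, act)       # one heatmap per class
print(cam.shape)                                # torch.Size([2, 64, 64])
```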
Day272 of 300DaysOfData!
- Class Activation Map: The Class Activation Map uses the output of the last convolutional layer which is just before the average pooling layer together with predictions to give a heatmap visualization of model decision. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Hook Class and Context Manager, Gradient Class Activation Map, Heatmap Visualization, Activations and Weights, Gradients and Back Propagation, Model Interpretation and few more topics related to the same from here. I have presented the implementation of Defining Hook Function, Activations, Gradients and Heatmap Visualization using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- CNN Interpretation with CAM
Day273 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Fastai Learner from Scratch, Dependent and Independent Variable, Vocabulary, Dataset and Indexing and few more topics related to the same. I have also read about Convolutional Neural Networks, Perturbations and Loss Functions. I have presented the implementation of Preparing Training and Validation Dataset using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai Learner from Scratch
Day274 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Creating Collation Function, Parallel Preprocessing, Decoding Images, Data Loader Class, Normalization and Image Statistics, Permuting Axis Order, Precision and few more topics related to the same from here. I have presented the implementation of Initializing Data Loader and Normalization using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai Learner from Scratch
Day275 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Module and Parameter, Forward Propagation Function, Convolutional Layer, Training Attributes, Kaiming Normalization and Xavier Normalization Initializer, Transformation Function, Weights and Biases, Linear Model, Tensors and few more topics related to the same from here. I have presented the implementation of Defining Module : Convolutional Layer and Linear Model using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai Learner from Scratch
Day276 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Convolutional Neural Networks, Linear Model, Testing Module, Sequential Module, Parameters, Adaptive Pooling Layer and Mean, Stride, Hook Function, Pipeline and few more topics related to the same from here. I have presented the implementation of Testing Module, Sequential Module and Convolutional Neural Network using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai Learner from Scratch
Day277 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Loss Function, Negative Log Likelihood Function, Log Softmax Function, Log of Sum of Exponentials, Stochastic Gradient Descent Optimizer Function, Data Loaders, Training and Validation Sets and few more topics related to the same from here. I have presented the implementation of Negative Log Likelihood Function, Cross Entropy Loss Function, SGD Optimizer and Data Loaders using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai Learner from Scratch
Day278 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Data, Convolutional Neural Net Model, Loss Function, Stochastic Gradient Descent and Optimization Function, Learner, Callbacks, Parameters, Training and Epochs and few more topics related to the same from here. I have presented the implementation of Learner and Callbacks using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Fastai Learner from Scratch
Day279 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Binary Classification, Chest X-Rays, DICOM or Digital Imaging and Communications in Medicine, Plotting the DICOM Data, Random Splitter Function, Medical Imaging, Pixel Data and few more topics related to the same from here. I have presented the implementation of Getting DICOM Files and Inspection using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Chest X-Rays Classification
Day280 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Binary Classification, Initializing Data Block and Data Loaders, Image Block and Category Block, Batch Transformations, Training Pretrained Model, Learning Rate Finder, Tensors and Probabilities, Model Interpretation and few more topics related to the same from here. I have presented the implementation of Initializing Data Block and Data Loaders, Training Pretrained Model and Interpretation using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Chest X-Rays Classification
Day281 of 300DaysOfData!
- Sensitivity & Specificity: Sensitivity = True Positive / (True Positive + False Negative); it is also known as Recall or the True Positive Rate, and missing a positive case is a Type II Error (False Negative). Specificity = True Negative / (True Negative + False Positive); it is also known as the True Negative Rate, and flagging a negative case as positive is a Type I Error (False Positive). On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Sensitivity and Specificity, Positive Predictive Value and Negative Predictive Value, Confusion Matrix and Model Interpretation, Type I & II Error, Accuracy and Prevalence and few more topics related to the same from here. I have presented the implementation of Confusion Matrix, Sensitivity and Specificity, Accuracy using Fastai and PyTorch here in the snapshot, and a small confusion-matrix sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Chest X-Rays Classification
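A small sketch of how these quantities fall out of a confusion matrix with scikit-learn; the labels below are toy values for illustration only.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # toy labels, for illustration only
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)    # recall / true positive rate
specificity = tn / (tn + fp)    # true negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(sensitivity, specificity, accuracy)
```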
Day282 of 300DaysOfData!
- Cross Validation: Cross Validation is a step in the process of building a machine learning model which helps us to ensure that our models fit the data accurately and also ensures that we do not overfit. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Approaching Almost Any Machine Learning Problem. Here, I have read about Supervised and Unsupervised Learning, Features, Samples and Targets, Classification and Regression, Clustering, T-Distributed Stochastic Neighbour Embedding, 2D Arrays, Cross Validation, Overfitting and few more topics related to the same from here. I have presented the implementation of TSNE Decomposition and Preparing Dataset here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Approaching Almost Any Machine Learning Problem
- Supervised and Unsupervised Learning
Day283 of 300DaysOfData!
- Cross Validation: Cross Validation is a step in the process of building a machine learning model which helps us to ensure that our models fit the data accurately and also ensures that we do not overfit. On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Approaching Almost Any Machine Learning Problem. Here, I have read about Decision Trees and Classification, Features and Parameters, Accuracy and Model Predictions, Overfitting and Model Generalization, Training Loss and Validation Loss, Cross Validation and few more topics related to the same from here. I have presented the implementation of Decision Tree Classifier and Model Evaluation here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Approaching Almost Any Machine Learning Problem
- Supervised and Unsupervised Learning
Day284 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Approaching Almost Any Machine Learning Problem. Here, I have read about Stratified KFold Cross Validation, Skewed Dataset and Classification, Data Distribution, Hold Out Cross Validation, Time Series Data, Regression and Sturge's Rule, Probabilities, Evaluation Metrics and Accuracy and few more topics related to the same from here. I have presented the implementation of Distribution of Labels and Stratified KFold here in the snapshot, and a minimal Stratified KFold sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Approaching Almost Any Machine Learning Problem
- Supervised and Unsupervised Learning
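Below is a hedged sketch of assigning stratified folds with scikit-learn; the dataframe here is a toy stand-in, and in practice the "target" column would come from the real training set.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Toy dataframe standing in for a real training set; "target" is the label column.
df = pd.DataFrame({"feature": np.random.rand(100), "target": np.random.randint(0, 2, 100)})
df["kfold"] = -1
df = df.sample(frac=1, random_state=42).reset_index(drop=True)

skf = StratifiedKFold(n_splits=5)
# Stratification keeps the label distribution the same in every fold, which matters for skewed data.
for fold, (_, valid_idx) in enumerate(skf.split(X=df, y=df["target"])):
    df.loc[valid_idx, "kfold"] = fold

print(df.groupby("kfold")["target"].mean())   # roughly equal positive rate per fold
```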
Day285 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Approaching Almost Any Machine Learning Problem. Here, I have read about Evaluation Metrics and Accuracy Score, Training and Validation Set, Precision and Recall, True Positive and True Negative, False Positive and False Negative, Binary Classification and few more topics related to the same from here. I have presented the implementation of True Negative, False Negative, False Positive and Accuracy Score here in the snapshot, and a small from-scratch metrics sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Approaching Almost Any Machine Learning Problem
- Evaluation Metrics
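A minimal sketch of counting the four confusion-matrix cells by hand and deriving accuracy from them; the label lists are toy values.

```python
def true_positive(y_true, y_pred):
    return sum(1 for yt, yp in zip(y_true, y_pred) if yt == 1 and yp == 1)

def true_negative(y_true, y_pred):
    return sum(1 for yt, yp in zip(y_true, y_pred) if yt == 0 and yp == 0)

def false_positive(y_true, y_pred):
    return sum(1 for yt, yp in zip(y_true, y_pred) if yt == 0 and yp == 1)

def false_negative(y_true, y_pred):
    return sum(1 for yt, yp in zip(y_true, y_pred) if yt == 1 and yp == 0)

def accuracy(y_true, y_pred):
    tp, tn = true_positive(y_true, y_pred), true_negative(y_true, y_pred)
    fp, fn = false_positive(y_true, y_pred), false_negative(y_true, y_pred)
    return (tp + tn) / (tp + tn + fp + fn)

y_true = [0, 1, 1, 0, 1, 0, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1]
print(accuracy(y_true, y_pred))   # should match sklearn.metrics.accuracy_score
```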
Day286 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Approaching Almost Any Machine Learning Problem. Here, I have read about True Positive Rate, Recall and Sensitivity, False Positive Rate and Specificity, Area Under ROC Curve, Prediction, Probability and Thresholds, Log Loss Function, Multiclass Classification and Macro Averaged Precision and few more topics related to the same from here. I have presented the implementation of True Negative Rate, False Positive Rate, Log Loss Function and Macro Averaged Precision here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Approaching Almost Any Machine Learning Problem
- Evaluation Metrics
Day287 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Approaching Almost Any Machine Learning Problem. Here, I have read about Multiclass Classification, Macro Averaged Precision, Micro Averaged Precision, Weighted Precision, Recall Metrics, Random Forest Regressor, Mean Squared Error, Root Mean Squared Error and few more topics related to the same from here. I have presented the implementation of Micro Averaged Precision and Weighted Precision here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Approaching Almost Any Machine Learning Problem
- Evaluation Metrics
Day288 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Approaching Almost Any Machine Learning Problem. Here, I have read about Recall Metrics for Multiclass Classification, Weighted F1 Score, Confusion Matrix, Type I Error and Type II Error, AUC Curve, Multilabel Classification and Average Precision and few more topics related to the same from here. I have presented the implementation of Weighted F1 Score and Average Precision here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Approaching Almost Any Machine Learning Problem
- Evaluation Metrics
Day289 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Approaching Almost Any Machine Learning Problem. Here, I have read about Regression Metrics such as Mean Absolute Error, Root Mean Squared Error, Mean Squared Logarithmic Error, Mean Absolute Percentage Error, R-Squared or Coefficient of Determination, Cohen's Kappa Score, MCC Score and few more topics related to the same from here. I have presented the implementation of Mean Absolute Error, Mean Squared Logarithmic Error, Mean Absolute Percentage Error, R-Squared and MCC Score here in the snapshot, and a short sketch of a few of these metrics is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Approaching Almost Any Machine Learning Problem
- Evaluation Metrics
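A short sketch of a few regression metrics written from scratch with NumPy; the sample values are toy numbers for illustration.

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    return np.mean(np.abs(np.array(y_true) - np.array(y_pred)))

def mean_abs_percentage_error(y_true, y_pred):
    y_true, y_pred = np.array(y_true, dtype=float), np.array(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true))

def r_squared(y_true, y_pred):
    # R-squared / coefficient of determination: 1 - residual SS / total SS.
    y_true, y_pred = np.array(y_true, dtype=float), np.array(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

y_true = [3.0, 5.0, 2.5, 7.0]
y_pred = [2.5, 5.0, 3.0, 8.0]
print(mean_absolute_error(y_true, y_pred),
      mean_abs_percentage_error(y_true, y_pred),
      r_squared(y_true, y_pred))
```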
Day290 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented about Object Detection and Fine Tuning, Image Segmentation, Tensors and Aspect Ratio, Arrays, Dataset and Data Loaders. I have also started the Machine Learning Engineering for Production Specialization from Coursera. Here, I have read about Steps of ML Project and Case Study, ML Project Lifecycle and few more topics related to the same from here. I have presented the implementation of Dataset Class here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Resource:
- Machine Learning Engineering for Production
Day291 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from PyImageSearch Blogs. Here, I have read about OpenCV, Loading and Displaying an Image, Accessing Pixels, Array Slicing and Cropping, Resizing Images, Rotating Image, Smoothing Image, Drawing on an Image and few more topics related to the same. I have also read about ML Project Lifecycle, Deployment Patterns and Pipeline Monitoring from Machine Learning Engineering for Production Specialization of Coursera. I have presented the implementation of OpenCV in Resizing and Rotating an Image, Smoothing and Drawing on an Image here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Resources:
- Machine Learning Engineering for Production
- PyImageSearch
- OpenCV Notebook
Day292 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from PyImageSearch Blogs. Here, I have read about OpenCV, Counting Objects, Converting Image to Grayscale, Edge Detection, Thresholding, Detecting and Drawing Contours, Erosions and Dilations, Masking and Bitwise Operations and few more topics related to the same from here. I have also read about Modeling Overview, Key Challenges and Low Average Error from Machine Learning Engineering for Production Specialization of Coursera. I have presented the implementation of OpenCV in Converting Image to Grayscale, Edge Detection, Thresholding, Detecting and Drawing Contours, Erosions and Dilations here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Resources:
- Machine Learning Engineering for Production
- PyImageSearch
- OpenCV Notebook
Day293 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from PyImageSearch Blogs. Here, I have read about OpenCV, Rotating Images, Image Preprocessing, Rotation Matrix and Center Coordinates, Image Parsing, Edge Detection and Contour Detection, Masking and Blurring Images and few more topics related to the same from here. I have also read about Baseline Model, Selecting and Training Model, Error Analysis and Prioritization from Machine Learning Engineering for Production Specialization of Coursera. I have presented the implementation of OpenCV in Rotating Images and Getting ROI of Images here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Resources:
- Machine Learning Engineering for Production
- PyImageSearch
- OpenCV Project I
Day294 of 300DaysOfData!
- Histogram Matching: Histogram Matching can be used as a normalization technique in an image processing pipeline as a form of color correction and color matching, which allows us to obtain a consistent, normalized representation of images even when lighting conditions change. On my Journey of Machine Learning and Deep Learning, I have read and implemented from PyImageSearch Blogs. Here, I have read about OpenCV, Color Detection, RGB Colorspace, Histogram Matching, Pixel Distribution, Cumulative Distribution, Resizing Image and few more topics related to the same from here. I have also read about Skewed Datasets, Performance Auditing, Data Centric AI Development and Data Augmentation from Machine Learning Engineering for Production Specialization of Coursera. I have presented the implementation of OpenCV in Histogram Matching here in the snapshot, and a hedged histogram matching sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Resources:
- Machine Learning Engineering for Production
- PyImageSearch
- OpenCV Project II
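A hedged sketch of histogram matching using OpenCV for I/O and scikit-image for the matching itself; the file names are hypothetical, and older scikit-image versions use `multichannel=True` instead of `channel_axis`.

```python
import cv2
from skimage.exposure import match_histograms

# Hypothetical file names; the source image is remapped so its per-channel
# pixel distribution matches that of the reference image.
src = cv2.imread("source.jpg")
ref = cv2.imread("reference.jpg")

matched = match_histograms(src, ref, channel_axis=-1)   # older scikit-image: multichannel=True
cv2.imwrite("matched.jpg", matched.astype("uint8"))
```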
Day295 of 300DaysOfData!
- Histogram Matching: Histogram Matching can be used as a normalization technique in an image processing pipeline as a form of color correction and color matching, which allows us to obtain a consistent, normalized representation of images even when lighting conditions change. On my Journey of Machine Learning and Deep Learning, I have read and implemented from PyImageSearch Blogs. Here, I have read about Convolutional Neural Networks, Convolutional Matrix, Kernels, Spatial Dimensions, Padding, ROI of Image, Elementwise Multiplication and Addition, Rescaling Intensity, Laplacian Kernel, Detecting Blur and Smoothing and few more topics related to the same from here. I have presented the implementation of Convolution Method and Constructing Kernels here in the snapshot, and a from-scratch convolution sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Resources:
- Machine Learning Engineering for Production
- PyImageSearch
- Convolution
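The following is a sketch in the spirit of the PyImageSearch convolution walkthrough: a naive convolution over a grayscale image with an explicit kernel; the usage lines reference a hypothetical image file.

```python
import cv2
import numpy as np
from skimage.exposure import rescale_intensity

def convolve(image, kernel):
    """Slide the kernel over a grayscale image and sum the elementwise products."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    pad = (kw - 1) // 2
    padded = cv2.copyMakeBorder(image, pad, pad, pad, pad, cv2.BORDER_REPLICATE)
    output = np.zeros((ih, iw), dtype="float32")
    for y in range(pad, ih + pad):
        for x in range(pad, iw + pad):
            roi = padded[y - pad:y + pad + 1, x - pad:x + pad + 1]
            output[y - pad, x - pad] = (roi * kernel).sum()
    # Rescale the result back into the displayable 0-255 range.
    output = rescale_intensity(output, in_range=(0, 255))
    return (output * 255).astype("uint8")

laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype="float32")
# Usage with a hypothetical file:
# gray = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2GRAY)
# edges = convolve(gray, laplacian)
```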
Day296 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from PyImageSearch Blogs. Here, I have read about Convolutional Layers, Filters and Kernel Size, Strides, Padding, Input Data Format, Dilation Rate, Activation Function, Weights and Biases, Kernel and Bias Initializer and Regularizer, Generalization and Overfitting, Kernel and Bias Constraint, Caltech Dataset, Strided Net and few more topics related to the same from here. I have presented the implementation of Strided Net here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Resources:
- Machine Learning Engineering for Production
- PyImageSearch
- Convolutional Layer
Day297 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from PyImageSearch Blogs. Here, I have read about CNN Architecture, Strided Net, Label Binarizer and One Hot Encoding, Image Data Generator and Data Augmentation, Loading and Resizing Images and few more topics related to the same from here. I have presented the implementation of Label Binarizer and Preparing Dataset here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Resources:
- Machine Learning Engineering for Production
- PyImageSearch
- Convolutional Layer
Day298 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from PyImageSearch Blogs. Here, I have read about Convolutional Neural Networks, Adam Optimization Function, Compiling and Training Strided Net Model, Data Augmentation and Image Data Generator, Classification Report, Plotting Training Loss and Accuracy, Overfitting and Generalization and few more topics related to the same from here. I have presented the implementation of Compiling and Training Model, Classification Report, Training Loss and Accuracy here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Resources:
- Machine Learning Engineering for Production
- PyImageSearch
- Convolutional Layer
Day299 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Transformers Model, GPT2 Pretrained Model and Tokenizer, Encode and Decode Methods, Preparing Dataset, Transform Method, Data Loaders and few more topics related to the same from here. I have presented the implementation of Pretrained GPT2 Model and Tokenizer and Transformed DataLoaders using Fastai and PyTorch here in the snapshot, and a minimal GPT2 tokenizer and generation sketch is included below. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Transformers
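As a minimal Hugging Face Transformers sketch (not the fastai integration shown in the snapshot), loading the pretrained GPT2 model and tokenizer and generating a continuation can look like this; the prompt text is arbitrary.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode a prompt, generate a continuation greedily, and decode it back to text.
ids = tokenizer.encode("Machine learning from scratch is")
out = model.generate(torch.tensor([ids]), max_length=30, do_sample=False)
print(tokenizer.decode(out[0]))
```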
Day300 of 300DaysOfData!
- On my Journey of Machine Learning and Deep Learning, I have read and implemented from the book Deep Learning for Coders with Fastai and PyTorch. Here, I have read about Transformers Model, Data Loaders, Batch Size and Sequence Length, Language Model, Fine Tuning GPT2 Model, Callback, Learner, Perplexity and Cross Entropy Loss Function, Learning Rate Finder, Training and Generating Predictions and few more topics related to the same from here. I have presented the implementation of Initializing DataLoaders, Fine Tuning GPT2 Model and LR Finder using Fastai and PyTorch here in the snapshot. I hope you will gain some insights and work on the same. I hope you will also spend some time learning the topics from the Book mentioned below. Excited about the days ahead !!
- Book:
- Deep Learning for Coders with Fastai and PyTorch
- Transformers