Hands-on notebooks and reference implementations spanning feedforward nets, convnets, sequence models, and TensorFlow workflows—from NumPy-first building blocks to applied vision, audio, and NLP.
This repository is a structured lab-style archive of deep learning work: foundational mechanics implemented step by step, classic CNN and RNN applications, recommendation and representation-learning examples, and end-to-end TensorFlow notebooks. It is meant as a practical map you can browse, reuse, or extend—whether you are comparing an idea on paper to working code or looking for a starting point for a new experiment.
Each top-level folder groups projects by theme. Notebooks are self-contained where possible; many folders include images, small datasets, or helper assets checked in alongside the code.
| Area | What it covers | Core ideas | Standout notebooks |
|---|---|---|---|
| Foundational Projects | Logistic regression as a tiny net, planar classification, L-layer networks from scratch, regularization, optimization, evaluation | Binary classification, decision boundaries, backprop through deep stacks, initialization, gradient checks, optimizers, error analysis and model selection | Cat_Classifier/Logistic_Regression_with_a_Neural_Network_mindset.ipynb, Cat_Classifier/Deep Neural Network - Application.ipynb, Decision Boundary with 1 hidden layer/Planar_data_classification_with_one_hidden_layer.ipynb, Build_NN_step_by_step/Building_your_Deep_Neural_Network_Step_by_Step.ipynb, Improving_NNs/Regularization.ipynb, Improving_NNs/Initialization.ipynb, Improving_NNs/Gradient_Checking.ipynb, Optimization_Algorithm/Optimization_methods .ipynb, Evaluation_&_Diagnostics/C2W3_Lab_01_Model_Evaluation_and_Selection.ipynb |
| CNN Projects | Conv layers from scratch, ResNets, segmentation, detection, style transfer, face verification, transfer learning, recommender nets | Conv/cross-correlation, residual blocks, U-Net, YOLO-style detection, Gram matrices & style loss, triplet-style reasoning, MobileNet fine-tuning, neural collaborative filtering | Convolution NN/Convolution_model_Step_by_Step_v1.ipynb, Residual Networks & ResNet 50 9.33.19 AM/Residual_Networks.ipynb, Image_segmentation_Unet_v2 9.33.19 AM/Image_segmentation_Unet_v2.ipynb, Autonomous_driving_application_Car_detection - YOLO/Autonomous_driving_application_Car_detection.ipynb, Art_Generation_with_Neural_Style_Transfer/Art_Generation_with_Neural_Style_Transfer.ipynb, Face_Recognition/Face_Recognition.ipynb, Transfer_learning_with_MobileNet_v1 9.33.19 AM/Transfer_learning_with_MobileNet_v1.ipynb, Content Based Filtering Recommendation System 9.33.19 AM/C3_W2_RecSysNN_Assignment.ipynb, Collaborative Filtering Based Recommendation System 9.33.19 AM/C3_W2_Collaborative_RecSys_Assignment.ipynb |
| RNN Projects — Sequence Modeling | RNN building blocks, language modeling, attention, music generation, keyword spotting, embeddings | Vanilla RNN/GRU/LSTM mechanics, character-level LM, neural MT with attention, sequence-to-sequence audio, word analogies, sentiment-style classification | Building_a_Recurrent_Neural_Network_Step_by_Step/Building_a_Recurrent_Neural_Network_Step_by_Step.ipynb, Dinosaurus_Island_Character_level_language_model/Dinosaurus_Island_Character_level_language_model.ipynb, Self-Attention Mechanism - Neural Machine Translation/Neural_machine_translation_with_attention_v4a.ipynb, Improvise_a_Jazz_Solo_with_an_LSTM_Network_v4/Improvise_a_Jazz_Solo_with_an_LSTM_Network_v4.ipynb, Audio Trigger Word Detection/Trigger_word_detection_v2a.ipynb, Word Vector Operations/Operations_on_word_vectors_v2a.ipynb, Emojifier/Emoji_v3a.ipynb |
| ML Projects | Classical ML angle on representation | PCA, visualization of projected data | PCA - Visualization/C3_W2_Lab01_PCA_Visualization_Examples.ipynb |
| TensorFlow Developer Professional Certificate | End-to-end TensorFlow/Keras work across vision, NLP, and time series | Functional and Sequential APIs, image pipelines, text tokenization and models, forecasting and sequence models | Four themed tracks under TensorFlow Developer Professional Certificate/: C1 (TensorFlow fundamentals), C2 (image models), C3 (text models), C4 (sequences and forecasting)—each with runnable notebooks grouped by topic |
Paths above are relative to the repository root. Several CNN project folders use a 9.33.19 AM suffix (with a narrow space before AM) from the original export; the notebooks inside are unchanged.
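The CNN row above starts from convolution implemented by hand. As a rough sketch of what those from-scratch notebooks build up to (the function name here is illustrative, not taken from the repo), a valid-mode 2D cross-correlation is just a sliding elementwise product-and-sum:

```python
import numpy as np

def cross_correlate2d(x, k):
    """Valid-mode 2D cross-correlation: slide kernel k over input x,
    taking an elementwise product-and-sum at each position."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# A horizontal difference kernel on a step image responds at the edge.
img = np.array([[0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [0., 0., 1., 1.]])
kernel = np.array([[1., -1.]])
print(cross_correlate2d(img, kernel))  # nonzero only at the 0->1 step
```

Deep learning "convolution" layers actually compute this cross-correlation (the kernel is not flipped); the notebooks then extend the same loop to multiple channels, strides, and padding.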
Clone the repository, create a virtual environment (recommended), install a typical deep learning notebook stack, and launch Jupyter:
```bash
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install jupyter numpy matplotlib tensorflow keras
jupyter lab
```

Individual notebooks may import additional libraries (e.g. SciPy, scikit-learn, h5py). If a notebook fails on import, install the missing package with pip in the same environment. Projects that ship local images/, data/, or models/ folders expect those paths to stay next to the notebook—run Jupyter from the repo root or the project subfolder so relative paths resolve.
Comfort with Python, basic linear algebra, and introductory machine learning helps. You will see implementations and applications of:
- Feedforward networks, activations, and backpropagation (including NumPy-only layers)
- Convolutional architectures, residual connections, segmentation, and object detection
- Recurrent models, attention, and sequence modeling for text and audio
- Transfer learning, embeddings, and simple deep recommender setups
- TensorFlow 2 / Keras patterns for data input pipelines and training loops
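To give a taste of the NumPy-only style the foundational notebooks work in (the function names here are illustrative, not the repo's own), here is a single dense layer's forward and backward pass, with the analytic weight gradient checked against a finite difference:

```python
import numpy as np

def dense_forward(x, W, b):
    """Affine layer y = x @ W + b; returns output plus a cache for backprop."""
    return x @ W + b, (x, W)

def dense_backward(dy, cache):
    """Backprop through the affine layer: gradients w.r.t. x, W, and b."""
    x, W = cache
    dx = dy @ W.T
    dW = x.T @ dy
    db = dy.sum(axis=0)
    return dx, dW, db

# Gradient check on the scalar loss L = sum(y), so dL/dy is all ones.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
b = rng.standard_normal(2)

y, cache = dense_forward(x, W, b)
dx, dW, db = dense_backward(np.ones_like(y), cache)

# Perturb one weight and compare a finite-difference slope to dW.
eps = 1e-6
W_pert = W.copy()
W_pert[0, 0] += eps
num = (dense_forward(x, W_pert, b)[0].sum() - y.sum()) / eps
print(abs(num - dW[0, 0]) < 1e-4)  # prints True
```

The same pattern (forward pass that caches, backward pass that consumes the cache, numerical check on a few entries) is what the step-by-step and gradient-checking notebooks scale up to full L-layer networks.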
No single narrative ties every notebook together—pick a row in the project map that matches what you want to build or debug, open the notebook, and follow the cells top to bottom.
This project is licensed under the MIT License—see the LICENSE file for details.