Nanodegree key: nd101
Version: 14.0.5
Locale: en-us
This Nanodegree introduces the learner to foundational topics in the exciting field of deep learning, the technology behind state-of-the-art artificial intelligence.
Content
Part 01 : Welcome to the Deep Learning Nanodegree Program
The Deep Learning Nanodegree program offers a solid introduction to the world of artificial intelligence. In this program, you’ll master fundamentals that will enable you to go further in the field, launch or advance a career, and join the next generation of deep learning talent that will help define a beneficial, new, AI-powered future for our world. You will study cutting-edge topics such as Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, and Generative Adversarial Networks, and you will build projects in PyTorch.
-
Module 01: Welcome to the [NAME] Nanodegree Program
-
Lesson 01: An Introduction to Your Nanodegree Program
Welcome! We're so glad you're here. Join us in learning a bit more about what to expect and ways to succeed.
-
Lesson 02: Getting Help
You are starting a challenging but rewarding journey! Take 5 minutes to read how to get help with projects and content.
-
Part 02 : Introduction to Deep Learning
This course covers foundational deep learning theory and practice. We begin with how to think about deep learning and when it is the right tool to use. The course covers the fundamental algorithms of deep learning, deep learning architecture and goals, and interweaves the theory with implementation in PyTorch.
-
Module 01: Intro to Deep Learning
-
Lesson 01: Introduction to Deep Learning
Meet your instructor, get an overview of the course, and find a few interesting resources in this introductory lesson.
-
Lesson 02: Deep Learning
This introductory lesson on Deep Learning covers how experts think about deep learning and how to know when deep learning is the right tool for the job, including some examples.
- Concept 01: Lesson Outline
- Concept 02: How Do Experts Think About Deep Learning?
- Concept 03: AI, ML, and Deep Learning
- Concept 04: History of Deep Learning
- Concept 05: Tools for Deep Learning
- Concept 06: Exercise: Deep Learning Tools
- Concept 07: When To Use Deep Learning
- Concept 08: Exercise: Choosing the Right Method
- Concept 09: (Optional) Computer Vision Demo: TinyYOLOv2
- Concept 10: (Optional) Image Generation Demo: DALL·E Mini
- Concept 11: (Optional) Speech Recognition Demo: Whisper
- Concept 12: Lesson Review
- Concept 13: Glossary
-
Lesson 03: Minimizing Error Function with Gradient Descent
This lesson provides an overview of foundational neural network concepts, beginning with PyTorch basics and moving through error functions, gradient descent, and backpropagation.
- Concept 01: Intro to Minimizing Error Function
- Concept 02: Lesson Outline
- Concept 03: PyTorch Basics
- Concept 04: Preprocessing Data with PyTorch
- Concept 05: Exercise: Data in PyTorch
- Concept 06: Error Functions
- Concept 07: Log-Loss Error Function
- Concept 08: Maximum Likelihood
- Concept 09: Cross Entropy
- Concept 10: Gradient Descent
- Concept 11: Logistic Regression
- Concept 12: Exercise: Implementing Gradient Descent
- Concept 13: Perceptrons
- Concept 14: Multilayer Perceptrons
- Concept 15: Backpropagation
- Concept 16: Implementing Gradient Descent with Backpropagation
- Concept 17: Lesson Review
- Concept 18: Glossary
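As a taste of what this lesson builds toward, here is a minimal sketch of gradient descent minimizing the log-loss (binary cross entropy) for logistic regression in PyTorch. The data, learning rate, and step count are illustrative choices, not part of the course materials:

```python
import torch

# Toy data: 4 points, 2 features, binary AND labels (hypothetical example data).
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([0., 0., 0., 1.])

# Logistic-regression parameters: one weight per feature plus a bias.
w = torch.zeros(2, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.5
for step in range(1000):
    # Forward pass: the sigmoid activation turns scores into probabilities.
    p = torch.sigmoid(X @ w + b)
    # Log-loss (binary cross entropy), as covered in this lesson.
    loss = -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()
    # Backpropagation computes d(loss)/dw and d(loss)/db.
    loss.backward()
    with torch.no_grad():
        w -= lr * w.grad        # gradient descent update
        b -= lr * b.grad
        w.grad.zero_()          # clear gradients for the next step
        b.grad.zero_()

print(p.detach().round())  # predictions round to the AND labels
```

The same update rule scales to networks with many layers; `loss.backward()` is what replaces the hand-derived gradients from the exercises.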
-
Lesson 04: Intro to Neural Networks
This introduction to neural networks explains how algorithms inspired by the human brain operate and puts those concepts to use in designing a neural network to solve particular problems.
- Concept 01: Lesson Outline
- Concept 02: Why "Neural Networks"?
- Concept 03: Perceptrons vs. Neural Networks
- Concept 04: Neural Network Architecture
- Concept 05: Feedforward
- Concept 06: Exercise: Neural Network Architectures
- Concept 07: Activation Functions
- Concept 08: Output Functions
- Concept 09: Exercise: Choosing Your Activation and Output Function
- Concept 10: Neural Network Objectives
- Concept 11: Decision Boundaries
- Concept 12: Exercise: Designing Your Neural Network
- Concept 13: Lesson Review
- Concept 14: Glossary
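The architecture, feedforward, activation, and output ideas above can be sketched in a few lines of PyTorch. The layer sizes here are arbitrary, chosen only for illustration:

```python
import torch
from torch import nn

# A minimal feedforward network: 4 inputs -> hidden layer of 8 units -> 3 class scores.
model = nn.Sequential(
    nn.Linear(4, 8),   # fully connected layer
    nn.ReLU(),         # activation function for the hidden layer
    nn.Linear(8, 3),   # output layer producing raw class scores (logits)
)

x = torch.randn(5, 4)            # a batch of 5 examples
logits = model(x)                # the feedforward pass
probs = logits.softmax(dim=1)    # output function: softmax turns scores into probabilities

print(logits.shape)              # torch.Size([5, 3])
print(probs.sum(dim=1))          # each row sums to 1
```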
-
Lesson 05: Training Neural Networks
Learn how to train neural networks and avoid overfitting or underfitting with techniques like Early Stopping, Regularization, and Dropout, and learn how to escape Local Minima with Random Restarts!
- Concept 01: Introduction to Training Neural Networks
- Concept 02: Lesson Outline
- Concept 03: Training, Validation, and Testing
- Concept 04: Overfitting and Underfitting
- Concept 05: Early Stopping
- Concept 06: Visualizing Training with TensorBoard
- Concept 07: Regularization
- Concept 08: Exercise: Implementing Regularization
- Concept 09: Dropout
- Concept 10: Local Minima and Random Restart
- Concept 11: Exercise: Training Techniques
- Concept 12: Vanishing and Exploding Gradients
- Concept 13: Learning Rate Decay
- Concept 14: Momentum
- Concept 15: Optimizers
- Concept 16: Batch vs Stochastic Gradient Descent
- Concept 17: Training Techniques in PyTorch
- Concept 18: Training Loops in PyTorch
- Concept 19: Exercise: Training Neural Networks
- Concept 20: Lesson Review
- Concept 21: Glossary
- Concept 22: Course Overview
- Concept 23: Congratulations and Good Luck!
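Several of the training techniques above fit into a single training loop. This sketch combines dropout, L2 regularization (via the optimizer's `weight_decay`), and early stopping on a validation set; the synthetic data and hyperparameters are hypothetical:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic regression data split into train/validation sets (hypothetical data).
X = torch.randn(200, 10)
y = X[:, :1] * 2.0 + 0.1 * torch.randn(200, 1)
X_train, y_train, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Dropout(p=0.2),      # dropout regularization
    nn.Linear(32, 1),
)
loss_fn = nn.MSELoss()
# weight_decay adds L2 regularization to every update.
opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-4)

best_val, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(200):
    model.train()                      # enables dropout
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()

    model.eval()                       # disables dropout for evaluation
    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val).item()
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:         # early stopping
        break

print(f"stopped at epoch {epoch}, best validation loss {best_val:.4f}")
```

In practice you would also checkpoint the model weights whenever the validation loss improves, so the best model is the one you keep.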
-
Lesson 06: Developing a Handwritten Digits Classifier with PyTorch
In this project, you will use your skills in designing and training neural networks to classify handwritten digits using the well-known MNIST dataset.
Project Description - Developing a Handwritten Digits Classifier with PyTorch Project
Project Rubric - Developing a Handwritten Digits Classifier with PyTorch Project
-
Part 03 : Convolutional Neural Networks
This course introduces Convolutional Neural Networks, the most widely used type of neural networks specialized in image processing. You will learn the main characteristics of CNNs that make them so useful for image processing, their inner workings, and how to build them from scratch to complete image classification tasks. You will learn which CNN architectures have been most successful and what their main characteristics are. You will apply these architectures to custom datasets using transfer learning. You will also learn about autoencoders, an important architecture at the basis of many modern CNNs, and how to use them for anomaly detection as well as image denoising. Finally, you will learn how to use CNNs for object detection and semantic segmentation.
-
Module 01: CNNs
-
Lesson 01: Introduction to CNNs
In this lesson we will look at the main applications of CNNs, understand professional roles involved in the development of a CNN-based application, and learn about the history of CNNs.
-
Lesson 02: CNN Concepts
In this lesson we will recap how to use a Multi-Layer Perceptron for image classification, understand the limitations of this approach, and learn how CNNs can overcome these limitations.
- Concept 01: Lesson Outline
- Concept 02: MNIST Dataset
- Concept 03: How Computers Interpret Images
- Concept 04: MLP Structure and Class Scores
- Concept 05: Loss and Optimization
- Concept 06: Loading Data and Transforming Images in PyTorch
- Concept 07: Defining a Network in PyTorch
- Concept 08: Training the Network in PyTorch
- Concept 09: Model Validation
- Concept 10: Evaluating and Testing the Network in PyTorch
- Concept 11: Quiz: MNIST and Early Concepts
- Concept 12: Exercise: MLP Classification, MNIST
- Concept 13: Exercise Solution: MLP Classification, MNIST
- Concept 14: Typical Image Classification Steps
- Concept 15: MLPs vs. CNNs
- Concept 16: Local Connectivity and Convolution
- Concept 17: Filters and the Convolutional Layer
- Concept 18: Filters and Edges
- Concept 19: Frequency in Images
- Concept 20: High-Pass Filters
- Concept 21: Quiz: Kernels
- Concept 22: Sobel Filters
- Concept 23: Pooling
- Concept 24: Effective Receptive Fields
- Concept 25: CNN Architecture Blueprint
- Concept 26: Quiz: Lesson Topics
- Concept 27: Glossary
- Concept 28: Lesson Review
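The filter, high-pass, and pooling concepts above can be made concrete with a convolutional layer whose weights are set by hand. This sketch loads a Sobel kernel, a classic high-pass filter for horizontal edges, into an `nn.Conv2d` and follows it with max pooling; the 6x6 "image" is hypothetical:

```python
import torch
from torch import nn

# A 1-channel 6x6 "image" whose brightness increases linearly down the rows,
# with shape (batch, channels, height, width).
img = torch.arange(36, dtype=torch.float32).reshape(1, 1, 6, 6)

# A convolutional layer with one 3x3 filter. We load a Sobel kernel by hand.
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, bias=False)
sobel = torch.tensor([[-1., -2., -1.],
                      [ 0.,  0.,  0.],
                      [ 1.,  2.,  1.]])
with torch.no_grad():
    conv.weight.copy_(sobel.reshape(1, 1, 3, 3))

features = conv(img)                # (1, 1, 4, 4): 6 - 3 + 1 = 4 with no padding
pooled = nn.MaxPool2d(2)(features)  # (1, 1, 2, 2): max pooling halves each dimension

# Because brightness changes at a constant rate down the image, the Sobel
# response is the same everywhere; a real photo would light up only at edges.
print(features.shape, pooled.shape)
```

In a trained CNN the filter weights are not set by hand: they are learned by gradient descent, which is what makes convolutional layers so powerful.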
-
Lesson 03: CNNs in Depth
In this lesson we will study in depth the basic layers used in CNNs, build a CNN from scratch in PyTorch, use it to classify images, improve its performance, and export it for production.
- Concept 01: Lesson Outline
- Concept 02: Why Should I Learn to Build a CNN from Scratch?
- Concept 03: Convolutional Layers in Depth
- Concept 04: Convolutional Layers in PyTorch
- Concept 05: Stride and Padding
- Concept 06: Exercise: Convolutional Layer Visualization
- Concept 07: Pooling Layers
- Concept 08: Pooling Layers in PyTorch
- Concept 09: Quiz: Layers
- Concept 10: Exercise: Pooling Layer Visualization
- Concept 11: Structure of a Typical CNN
- Concept 12: CNN Structure and Layers in PyTorch: Recap
- Concept 13: Feature Vectors
- Concept 14: CNNs in PyTorch: Summary So Far
- Concept 15: Quiz: CNN Structure
- Concept 16: Exercise: CNNs for CIFAR Image Classification
- Concept 17: Solution: CNNs for CIFAR Image Classification
- Concept 18: Image Augmentation
- Concept 19: Augmentation Using Transformations
- Concept 20: Batch Normalization
- Concept 21: BatchNorm in PyTorch
- Concept 22: Optimizing the Performance of Our Network
- Concept 23: Tracking Your Experiments
- Concept 24: Quiz: Improving Performance
- Concept 25: Exercise: Improving Performance
- Concept 26: Exercise Solution: Improving Performance
- Concept 27: Weight Initialization
- Concept 28: Export a Model for Production
- Concept 29: Glossary
- Concept 30: Lesson Review
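The "typical CNN structure" this lesson builds follows a common blueprint: stacked conv / batch norm / ReLU / pool blocks that shrink the image while growing the channels, then a flattened feature vector feeding a fully connected classifier. A minimal sketch, assuming 3x32x32 (CIFAR-like) inputs; the exact channel counts are illustrative:

```python
import torch
from torch import nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 16 x 32 x 32
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16 x 16 x 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 32 x 16 x 16
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32 x 8 x 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.flatten(1)          # feature vector of length 32*8*8 = 2048
        return self.classifier(x)

model = SmallCNN()
scores = model(torch.randn(4, 3, 32, 32))  # a batch of 4 fake images
print(scores.shape)  # torch.Size([4, 10])
```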
-
Lesson 04: Transfer Learning
In this lesson we will learn about key CNN architectures and their innovations, and apply multiple ways of adapting them to our use cases with transfer learning.
- Concept 01: Introduction to Transfer Learning
- Concept 02: Lesson Outline
- Concept 03: Innovative CNN Architectures
- Concept 04: Input Size and the GAP Layers
- Concept 05: Attention Layers
- Concept 06: State of the Art Computer Vision Models
- Concept 07: Quiz: Computer Vision Architectures
- Concept 08: Transfer Learning
- Concept 09: Reusing Pre-Trained Networks
- Concept 10: Fine Tuning
- Concept 11: Quiz: Transfer Learning
- Concept 12: Exercise: Transfer Learning, Flowers
- Concept 13: Exercise Solution: Transfer Learning, Flowers
- Concept 14: Visualizing CNNs (part 1)
- Concept 15: Visualizing CNNs (part 2)
- Concept 16: Glossary
- Concept 17: Lesson Review
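The core transfer-learning move is the same regardless of architecture: freeze the pretrained layers and attach a new, trainable head. In this sketch a plain module stands in for a pretrained backbone so the example stays self-contained (in practice you would load, e.g., a torchvision ResNet with pretrained weights); the layer sizes and the 5-class head are hypothetical:

```python
import torch
from torch import nn

# Stand-in for a pretrained backbone.
backbone = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
)

# Feature extraction: freeze every pretrained parameter...
for param in backbone.parameters():
    param.requires_grad = False

# ...and attach a new head sized for our task (say, 5 flower classes).
head = nn.Linear(256, 5)
model = nn.Sequential(backbone, head)

# Only the trainable (head) parameters are passed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

print(sum(p.numel() for p in trainable))  # 256*5 + 5 = 1285
```

Fine tuning, covered later in the lesson, instead unfreezes some or all backbone layers and trains them with a smaller learning rate.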
-
Lesson 05: Autoencoders
In this lesson we will design and train linear and CNN-based autoencoders for anomaly detection and for image denoising.
- Concept 01: Introduction to Autoencoders
- Concept 02: Lesson Outline
- Concept 03: Autoencoders
- Concept 04: A Linear Autoencoder
- Concept 05: Quiz: Autoencoders
- Concept 06: Exercise: Linear Autoencoder
- Concept 07: Exercise Solution: Linear Autoencoder
- Concept 08: Learnable Upsampling
- Concept 09: Transposed Convolutions
- Concept 10: Convolutional Autoencoder
- Concept 11: Quiz: More on Autoencoders
- Concept 12: Exercise: Convolutional Autoencoder
- Concept 13: Exercise Solution: Convolutional Autoencoder
- Concept 14: Denoising
- Concept 15: Quiz: Denoising
- Concept 16: Exercise: Denoising Autoencoder
- Concept 17: Exercise Solution: Denoising Autoencoders
- Concept 18: Autoencoders and Generative Models
- Concept 19: Glossary
- Concept 20: Lesson Review
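The linear autoencoder this lesson starts from fits in a few lines: compress the input into a low-dimensional code, then reconstruct it and minimize the reconstruction error. The 784/32 sizes assume flattened 28x28 MNIST-style images and are illustrative:

```python
import torch
from torch import nn

class LinearAutoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Linear(input_dim, code_dim)
        self.decoder = nn.Linear(code_dim, input_dim)

    def forward(self, x):
        code = self.encoder(x)                    # compressed representation
        return torch.sigmoid(self.decoder(code))  # reconstruction in [0, 1]

model = LinearAutoencoder()
x = torch.rand(8, 784)                  # a batch of 8 fake flattened images
reconstruction = model(x)
loss = nn.MSELoss()(reconstruction, x)  # train by minimizing reconstruction error
print(reconstruction.shape)             # torch.Size([8, 784])
```

A denoising autoencoder uses the same structure but feeds a noise-corrupted input while comparing the reconstruction against the clean target; the convolutional version swaps the linear layers for conv and transposed-conv layers.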
-
Lesson 06: Object Detection and Segmentation
In this lesson we will study applications of CNNs beyond image classification. We will train and evaluate an object detection model as well as a semantic segmentation model on custom datasets.
- Concept 01: Introduction to Object Detection and Segmentation
- Concept 02: Lesson Outline
- Concept 03: Object Localization
- Concept 04: Object Detection
- Concept 05: One-Stage Object Detection: RetinaNet
- Concept 06: Object Detection Metrics
- Concept 07: Quiz: Object Detection
- Concept 08: Exercise: Object Detection
- Concept 09: Exercise Solution: Object Detection
- Concept 10: Image Segmentation
- Concept 11: Semantic Segmentation: UNet
- Concept 12: Quiz: Semantic Segmentation
- Concept 13: Exercise: Semantic Segmentation
- Concept 14: Exercise Solution: Semantic Segmentation
- Concept 15: Glossary
- Concept 16: Lesson Review
- Concept 17: Course Review
-
Lesson 07: Landmark Classification & Tagging for Social Media
In this project, you will apply the skills you have acquired in the Convolutional Neural Network (CNN) course to build a landmark classifier.
Project Description - Landmark Classification & Tagging for Social Media Project
Project Rubric - Landmark Classification & Tagging for Social Media Project
-
Part 04 : RNNs & Transformers
This course covers multiple RNN architectures, discusses design patterns for those models, and arrives at the latest Transformer architectures.
-
Module 01: RNNs
-
Lesson 01: Intro to RNN
- Concept 01: RNN Introduction
- Concept 02: RNN Applications
- Concept 03: Recurrent Neural Networks
- Concept 04: Unfolded Model Quiz
- Concept 05: RNN Example
- Concept 06: Backpropagation Through Time - I
- Concept 07: Backpropagation Through Time - II
- Concept 08: BPTT Quizzes
- Concept 09: RNN Theory Summary
- Concept 10: Implementing RNNs
- Concept 11: Simple RNN - Predicting Time Series Data
- Concept 12: Simple RNN - Predicting Household Power Consumption
- Concept 13: Dealing with textual data
- Concept 14: Text Preprocessing
- Concept 15: Word Embeddings
- Concept 16: Implementing Word Embeddings
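The time-series prediction exercises above follow a simple recipe: an RNN reads a window of past values, its hidden state carries context forward through time, and a linear head predicts the next value from the final hidden state. A minimal sketch with made-up data:

```python
import torch
from torch import nn

# A simple RNN for one-step-ahead time-series prediction.
rnn = nn.RNN(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)

# Batch of 4 sequences, 20 time steps, 1 feature each (hypothetical data).
x = torch.randn(4, 20, 1)
out, h_n = rnn(x)        # out: hidden state at every time step, (4, 20, 16)
pred = head(out[:, -1])  # predict the next value from the final hidden state

print(out.shape, pred.shape)  # torch.Size([4, 20, 16]) torch.Size([4, 1])
```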
-
Lesson 02: Introduction to LSTM
- Concept 01: Lesson Overview
- Concept 02: RNN vs LSTM
- Concept 03: From RNN to LSTM
- Concept 04: Basics of LSTM
- Concept 05: Architecture of LSTM
- Concept 06: LSTM Gates
- Concept 07: Quiz
- Concept 08: Predicting Temperature using LSTM
- Concept 09: Sentiment Analysis using LSTM
- Concept 10: Exercise: Text classification using LSTM
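The sentiment-analysis and text-classification exercises above share one skeleton: token ids pass through an embedding layer, an LSTM, and a classifier over the final hidden state. A sketch with illustrative vocabulary, embedding, and class sizes:

```python
import torch
from torch import nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):
        x = self.embed(tokens)          # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)      # h_n: final hidden state per layer
        return self.fc(h_n[-1])         # classify from the last hidden state

model = LSTMClassifier()
tokens = torch.randint(0, 1000, (3, 12))  # 3 "sentences" of 12 token ids each
scores = model(tokens)
print(scores.shape)  # torch.Size([3, 2])
```

The LSTM's gates, covered earlier in the lesson, are what let the final hidden state retain information from early in the sequence.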
-
Lesson 03: Introduction to Transformers
- Concept 01: Intro
- Concept 02: Transformers in detail
- Concept 03: HuggingFace
- Concept 04: Benefits of Transformers
- Concept 05: Intro to BERT
- Concept 06: Text classification using BERT
- Concept 07: Fine-tuning BERT
- Concept 08: GPT
- Concept 09: Text generation using pre-trained GPT models from HuggingFace
- Concept 10: Text translation using pre-trained Transformers
- Concept 11: GPT3 and ChatGPT
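At the core of every Transformer in this lesson, from BERT to GPT, is scaled dot-product self-attention: each position attends to every other, weighted by query-key similarity. A from-scratch sketch of a single head (real models add learned projections per head, multiple heads, and residual layers); the dimensions are illustrative:

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # similarity of every pair
    weights = scores.softmax(dim=-1)               # attention weights, rows sum to 1
    return weights @ v                             # weighted mix of the values

torch.manual_seed(0)
d_model = 8
x = torch.randn(5, d_model)                        # 5 token embeddings
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # torch.Size([5, 8])
```

In practice you would not write this by hand: the HuggingFace models covered in this lesson wrap many such layers behind a ready-made API.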
-
Part 05 : Building Generative Adversarial Networks
Learn to understand and implement a Deep Convolutional GAN (generative adversarial network) to generate realistic images, with Ian Goodfellow, the inventor of GANs, and Jun-Yan Zhu, one of the creators of CycleGAN.
-
Module 01: GANs
-
Lesson 01: Introduction to Generative Adversarial Networks
Introduction to this course, prerequisites, and your course instructor.
-
Lesson 02: Generative Adversarial Networks
Ian Goodfellow, the inventor of GANs, introduces you to these exciting models. You'll also implement your own GAN on the MNIST dataset.
- Concept 01: Welcome to GANs
- Concept 02: Lesson Outline
- Concept 03: Introducing Ian Goodfellow
- Concept 04: Applications of GANs
- Concept 05: How GANs Work
- Concept 06: Exercise Part 1: MNIST GAN Generator Discriminator
- Concept 07: Exercise Part 1: Solution
- Concept 08: Games and Equilibria
- Concept 09: Tips for Training GANs
- Concept 10: Exercise Part 2: Discriminator and Generator Losses
- Concept 11: Exercise Part 2: Solution
- Concept 12: Generating Fake Images
- Concept 13: MNIST GAN
- Concept 14: Exercise Part 3: MNIST GAN
- Concept 15: Exercise Part 3: Solution
- Concept 16: Lesson Review
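The adversarial game described above comes down to two opposing losses. In this sketch, tiny MLPs stand in for the real generator and discriminator, and the "real" data distribution is hypothetical:

```python
import torch
from torch import nn

# Generator maps latent noise to fake samples; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 2) + 3.0         # "real" data (hypothetical distribution)
z = torch.randn(8, 16)                 # latent noise
fake = G(z)

# Discriminator loss: label real samples 1 and generated samples 0.
# detach() keeps discriminator updates from flowing into the generator.
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))

# Generator loss: fool the discriminator into labeling fakes as real.
g_loss = bce(D(fake), torch.ones(8, 1))

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

Training alternates the two: one optimizer step on `d_loss`, then one on `g_loss`, each updating only its own network's parameters.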
-
Lesson 03: Training Deep Convolutional GANs
In this lesson, you'll implement a Deep Convolutional GAN to generate complex color images.
- Concept 01: Introduction to DCGANs
- Concept 02: Lesson Outline
- Concept 03: Deep Convolutional GANs
- Concept 04: DCGAN, Discriminator
- Concept 05: DCGAN Generator
- Concept 06: What is Batch Normalization?
- Concept 07: Benefits of Batch Normalization
- Concept 08: Exercise Part 1: DCGAN Generator Discriminator
- Concept 09: Exercise Part 1: Solution
- Concept 10: Optimization Strategy / Hyperparameters
- Concept 11: Exercise Part 2: DCGAN Training
- Concept 12: Exercise Part 2: Solution
- Concept 13: GAN Evaluation
- Concept 14: The Inception Score
- Concept 15: Frechet Inception Distance
- Concept 16: Exercise Part 3: IS FID
- Concept 17: Exercise Part 3: Solution
- Concept 18: Other Applications of GANs
- Concept 19: Lesson Review
-
Lesson 04: Image to Image Translation
Jun-Yan Zhu, one of the creators of CycleGAN, will lead you through the Pix2Pix and CycleGAN formulations, which learn to do image-to-image translation tasks.
- Concept 01: Welcome to Image to Image Translation
- Concept 02: Lesson Outline
- Concept 03: Introduction to Jun-Yan Zhu
- Concept 04: Image to Image Translation
- Concept 05: Designing Loss Functions
- Concept 06: GANs, a Recap
- Concept 07: Pix2Pix Generator
- Concept 08: Pix2Pix Discriminator
- Concept 09: CycleGANs & Unpaired Data
- Concept 10: Exercise 1: CycleGAN Dataloader
- Concept 11: Exercise 1: Solution
- Concept 12: Cycle Consistency Loss
- Concept 13: Why Does This Work?
- Concept 14: Exercise 2: CycleGAN Generator & Loss
- Concept 15: Exercise 2: Solution
- Concept 16: Exercise 3: CycleGAN
- Concept 17: Exercise 3: Solution
- Concept 18: Beyond CycleGANs
- Concept 19: When to Use Image to Image Translation
- Concept 20: Lesson Review
-
Lesson 05: Modern GANs
In this lesson, you will implement more advanced GAN architectural techniques that have had a significant impact on the realism of generated images.
- Concept 01: Introduction to Modern GANs
- Concept 02: Lesson Outline
- Concept 03: Limitations of the BCE Loss
- Concept 04: Wasserstein Loss
- Concept 05: Gradient Penalties
- Concept 06: Exercise 1: Wasserstein Loss Gradient
- Concept 07: Exercise 1: Solution
- Concept 08: Progressive Growing of GANs
- Concept 09: ProGAN components
- Concept 10: Exercise 2: ProGAN
- Concept 11: Exercise 2: Solution
- Concept 12: StyleGAN: Introduction
- Concept 13: StyleGAN Components 1
- Concept 14: StyleGAN Components 2
- Concept 15: Exercise 3: StyleGAN
- Concept 16: Exercise 3: Solution
- Concept 17: When to Use Modern GAN Techniques
- Concept 18: Lesson Review
- Concept 19: Course Summary
-
Lesson 06: Face Generation
Define two adversarial networks, a generator and a discriminator, and train them until you can generate realistic faces.
-
Part 06 : Career Services
-
Module 01:
-
Lesson 01: Take 30 Min to Improve your LinkedIn
Find your next job or connect with industry peers on LinkedIn. Ensure your profile attracts relevant leads that will grow your professional network.
- Concept 01: Get Opportunities with LinkedIn
- Concept 02: Use Your Story to Stand Out
- Concept 03: Why Use an Elevator Pitch
- Concept 04: Create Your Elevator Pitch
- Concept 05: Use Your Elevator Pitch on LinkedIn
- Concept 06: Create Your Profile With SEO In Mind
- Concept 07: Profile Essentials
- Concept 08: Work Experiences & Accomplishments
- Concept 09: Build and Strengthen Your Network
- Concept 10: Reaching Out on LinkedIn
- Concept 11: Boost Your Visibility
- Concept 12: Up Next
-
Lesson 02: Optimize Your GitHub Profile
Other professionals are collaborating on GitHub and growing their network. Submit your profile to ensure it is on par with leaders in your field.
- Concept 01: Prove Your Skills With GitHub
- Concept 02: Introduction
- Concept 03: GitHub profile important items
- Concept 04: Good GitHub repository
- Concept 05: Interview with Art - Part 1
- Concept 06: Identify fixes for example “bad” profile
- Concept 07: Quick Fixes #1
- Concept 08: Quick Fixes #2
- Concept 09: Writing READMEs with Walter
- Concept 10: Interview with Art - Part 2
- Concept 11: Commit messages best practices
- Concept 12: Participating in open source projects I
- Concept 13: Interview with Art - Part 3
- Concept 14: Participating in open source projects II
- Concept 15: Starring interesting repositories
- Concept 16: Next Steps
-
Part 07 : Congratulations!
Congratulations on finishing your program!
-
Module 01: Congratulations!
-
Lesson 01: Congratulations!
Congratulations on your graduation from this program! Please join us in celebrating your accomplishments.
-