Convolutional Neural Networks with TensorFlow in Python
Course description

This course offers a deep dive into an advanced neural network architecture – Convolutional Neural Networks (CNNs). First, we explain the concept of image kernels and how it relates to CNNs. Then, you will get familiar with the CNN itself, its building blocks, and what makes this kind of network essential for Computer Vision. You’ll apply the theory to the MNIST example using TensorFlow, and learn how to track and visualize useful metrics with TensorBoard in a dedicated practical section. Later in the course, you’ll be introduced to a handful of techniques for improving the performance of neural networks, and a large real-world practical project on classifying pictures of fashion items. Finally, we will cap it all off with an intriguing look through the history of the most influential CNN architectures.
Convolutional Neural Networks

This is where we have an in-depth discussion of Convolutional Neural Networks: you will understand the motivation and fundamental strengths of this type of network and learn more about the concepts and layers that make it work – feature maps and pooling layers. Finally, you will discover how the dimensions change in such a network.
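The dimension changes mentioned above follow a simple formula: a convolution or pooling layer with kernel size F, padding P, and stride S maps a spatial size W to (W − F + 2P) / S + 1. A minimal sketch of that formula, with illustrative MNIST-sized numbers:

```python
def conv_output_size(input_size, kernel_size, padding=0, stride=1):
    """Spatial output size of a conv/pool layer: (W - F + 2P) // S + 1."""
    return (input_size - kernel_size + 2 * padding) // stride + 1

# A 28x28 input (MNIST-sized) through illustrative layers:
size = conv_output_size(28, 3)              # 3x3 conv, no padding -> 26
size = conv_output_size(size, 2, stride=2)  # 2x2 max pooling      -> 13
print(size)  # 13
```

With "same" padding (P chosen so the output matches the input, e.g. P = 1 for a 3x3 kernel at stride 1), the spatial size is preserved and only pooling shrinks it.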
Neural network techniques (revision)
In this section, we quickly revise the main concepts of neural networks in general. These include activation functions, overfitting and early stopping, and optimizers. This part is intended only as a reference and is not a substitute for a full course on the basics of Machine Learning.
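As a quick reminder of how those revision topics fit together in code, here is a hedged sketch (synthetic data, illustrative layer sizes and hyperparameters) that combines an activation function, an optimizer, and early stopping:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 200 samples with 10 features, binary target.
rng = np.random.default_rng(0)
x = rng.random((200, 10)).astype("float32")
y = (x.sum(axis=1) > 5).astype("float32")

# A tiny dense network: ReLU hidden activation, sigmoid output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Adam optimizer; learning rate is an illustrative choice.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Early stopping: halt when validation loss stops improving,
# and roll back to the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

history = model.fit(x, y, validation_split=0.2, epochs=20,
                    callbacks=[early_stop], verbose=0)
```

Early stopping is one practical answer to overfitting: instead of guessing the right number of epochs in advance, you monitor a validation metric and stop once it degrades.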
CNN assembling - MNIST
In this section, you will put theory into practice. You will design a simple CNN architecture, implement it from scratch, and train the model on the MNIST dataset.
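A sketch of what such a simple CNN might look like. In the course you would load the real digits with `tf.keras.datasets.mnist.load_data()`; here random MNIST-shaped arrays stand in so the snippet is self-contained, and the layer sizes are illustrative:

```python
import numpy as np
import tensorflow as tf

# Random stand-in for MNIST: 28x28 grayscale images, digit labels 0-9.
rng = np.random.default_rng(0)
x_train = rng.random((64, 28, 28, 1)).astype("float32")
y_train = rng.integers(0, 10, size=(64,))

# Two conv+pool blocks, then a dense softmax classifier over 10 digits.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPool2D(2),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, verbose=0)
```

Note the loss choice: `sparse_categorical_crossentropy` accepts integer class labels directly, so the digit labels need no one-hot encoding.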
TensorBoard: Visualization tool for TensorFlow
After training a model to classify digits, we turn our attention to tracking and visualizing different metrics. The ability to analyze your models is crucial in Machine Learning, and here, we will lay the groundwork which makes that possible through TensorBoard.
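In Keras, TensorBoard logging typically comes down to attaching a callback to training. A minimal sketch (tiny synthetic model and data; the log directory name is an illustrative choice, written to a temporary folder here):

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

# Tiny synthetic dataset and model, just to produce some training curves.
rng = np.random.default_rng(0)
x = rng.random((32, 10)).astype("float32")
y = rng.integers(0, 2, size=(32,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# The TensorBoard callback writes loss/metric event files to log_dir;
# in practice you would use something like "logs/fit/run-1" per experiment.
log_dir = tempfile.mkdtemp()
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir=log_dir)

model.fit(x, y, epochs=2, callbacks=[tensorboard_cb], verbose=0)
# Then inspect the curves with:  tensorboard --logdir <log_dir>
```

Giving each run its own subdirectory lets TensorBoard overlay experiments, which is what makes metric comparison across model variants practical.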
Common techniques for better performance of neural networks
This section introduces three crucial techniques for improving the performance of neural networks in general. We will cover the theory behind L2 regularization and weight decay, Dropout, and Data Augmentation, all of which you will practice in the next section.
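To show where these three techniques plug into a Keras model, here is a hedged sketch with illustrative strengths (the L2 factor, rotation range, and dropout rate are example values, not recommendations):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    # Data augmentation: small random rotations, active only during training.
    tf.keras.layers.RandomRotation(0.1),
    # L2 regularization: penalizes large kernel weights in the loss.
    tf.keras.layers.Conv2D(
        32, 3, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.MaxPool2D(2),
    tf.keras.layers.Flatten(),
    # Dropout: randomly zeroes half the activations during training.
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])
```

All three act only at training time: augmentation and dropout layers are bypassed at inference, and the L2 penalty only shapes the loss being minimized.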
A practical project: Labelling fashion items
So far, you have already gained some practice with CNNs in a somewhat sterile environment. However, the real world is much messier than the MNIST example. That’s why, in this section, we have prepared a huge practical example inspired by tasks you are very likely to do in your job. You will practice classifying clothes and other fashion items with multiple labels. Here, you will see why the neat results from the MNIST example are almost unattainable and gain valuable insight into the procedure of undertaking such a project.
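One key difference from MNIST worth previewing: when an image can carry several labels at once (say, both a garment type and a pattern), the output layer uses an independent sigmoid per label with binary cross-entropy rather than a single softmax. A minimal sketch with synthetic data (label count, image size, and layer sizes are illustrative):

```python
import numpy as np
import tensorflow as tf

NUM_LABELS = 5  # illustrative; one binary indicator per possible label

# Random stand-in for labelled fashion images: 64x64 RGB,
# with a multi-hot label vector per image.
rng = np.random.default_rng(0)
x = rng.random((32, 64, 64, 3)).astype("float32")
y = rng.integers(0, 2, size=(32, NUM_LABELS)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    # Sigmoid per label: each output is an independent yes/no probability.
    tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=1, verbose=0)
```

With sigmoids, the outputs do not compete for probability mass, so the model can legitimately say "yes" to several labels for the same picture.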
Popular CNN architectures
This section is dedicated to exploring the most influential and breakthrough Convolutional Neural Networks of the past - AlexNet, VGG, GoogLeNet, and ResNet - and explaining the concepts behind their architectures. We will also look at the Computer Vision competition ILSVRC and its development over the years.