Machine learning and deep learning are among the quantitative analysis skills that differentiate the data scientist from the other members of the team. Not to mention that machine learning is the driving force of artificial intelligence. This course will teach you how to leverage deep learning and neural networks for the purposes of data science. The technology we employ is TensorFlow 2.0, a state-of-the-art deep learning framework.
This course will teach you the inner workings of deep neural networks with emphasis on the why and how of things. You will see the theory implemented in practice with the powerful framework TensorFlow 2.0.
In this introductory part of the course, we will discuss why you will need machine learning when working as a data scientist, what you will see in the following chapters of this training, and what the best way to take the course is.
Lessons (free preview):
- Why machine learning
The basic logic behind training an algorithm involves four ingredients: data, model, objective function, and an optimization algorithm. In this part of the course, we describe each of them and build a solid foundation that allows you to understand the idea behind using neural networks. After completing this chapter, you will know what the various types of machine learning are, how to train a machine learning model, and understand terms like objective function, L2-norm loss, cross-entropy loss, one-parameter gradient descent, and n-parameter gradient descent.
Lessons (free preview):
- Introduction to neural networks
- Training the model theory
- Types of machine learning
- The linear model
- The linear model. Multiple inputs.
- The linear model. Multiple inputs and multiple outputs
- Graphical representation
- The objective function
- L2-norm loss
- Cross-entropy loss
- One-parameter gradient descent
- N-parameter gradient descent
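For reference, the three formulas at the heart of this chapter can be stated compactly in LaTeX (writing y for the model's outputs, t for the targets, w for the weights, and \eta for the learning rate; the notation is illustrative rather than the lessons' exact symbols):

    L_2 = \sum_i \left( y_i - t_i \right)^2
    \qquad
    L_{\text{cross-entropy}} = - \sum_i t_i \ln y_i
    \qquad
    w_{k+1} = w_k - \eta \, \nabla_w L(w_k)

Gradient descent repeatedly applies the last rule: in one-parameter gradient descent w is a single number, while in n-parameter gradient descent it is a vector and the derivative becomes a gradient.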
Here, we will show you how to install the Jupyter Notebook (the environment we will use to code in Python) and how to import the relevant libraries. Because this course is taught in Python, we will be working with several popular libraries: NumPy, SciPy, scikit-learn, and TensorFlow.
Lessons:
- Setting up the environment - Do not skip, please!
- Why Python and why Jupyter
- Installing Anaconda
- Jupyter Dashboard - Part 1
- Jupyter Dashboard - Part 2
- Installing the TensorFlow package
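Once everything is installed, a quick sanity check in a fresh notebook cell confirms the environment works. This is a minimal sketch of ours, not a course notebook; it only assumes the four libraries above are installed:

    # Sanity check: confirm that each library imports and report its version.
    import numpy as np
    import scipy
    import sklearn
    import tensorflow as tf

    print("NumPy:", np.__version__)
    print("SciPy:", scipy.__version__)
    print("scikit-learn:", sklearn.__version__)
    print("TensorFlow:", tf.__version__)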
It is time to build your first machine learning algorithm. We will show you how to import the relevant libraries, how to generate random input data for the model to train on, how to create the targets the model will aim at, and how to plot the training data. The mechanics of this model exemplify how all regressions you've run in different packages (such as scikit-learn) or software (such as Excel) work. This is an iterative method aiming to find the best-fitting line.
Lessons:
- Outline
- Generating the data (optional)
- Initializing the variables
- Training the model
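To preview what this section builds, here is a minimal NumPy-only sketch of the idea: generate random inputs, construct targets from a known linear rule plus noise, and iteratively nudge a weight and a bias toward the best-fitting line. The variable names and hyperparameters are illustrative, not the course notebook's exact ones:

    import numpy as np

    # Generate random inputs and build targets from a known rule: t = 2x - 3 + noise
    observations = 1000
    x = np.random.uniform(-10, 10, (observations, 1))
    noise = np.random.uniform(-1, 1, (observations, 1))
    targets = 2 * x - 3 + noise

    # Initialize the variables (one weight and one bias) with small random values
    w = np.random.uniform(-0.1, 0.1, (1, 1))
    b = np.random.uniform(-0.1, 0.1, 1)
    learning_rate = 0.02

    # Train: forward pass, L2-norm loss, then a small step against the gradient
    for epoch in range(100):
        outputs = np.dot(x, w) + b
        deltas = outputs - targets
        loss = np.sum(deltas ** 2) / 2 / observations  # scaled L2-norm loss, for monitoring
        w = w - learning_rate * np.dot(x.T, deltas) / observations
        b = b - learning_rate * np.sum(deltas) / observations

    print(w, b)  # should end up close to 2 and -3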
Having created the simple net, we 'translate' it to TensorFlow. This is our way of taking a simple, well-understood problem to introduce the syntax and logic of TensorFlow.
Lessons:
- TensorFlow Outline
- TensorFlow 2 Intro
- A note on coding in TensorFlow
- Types of file formats in TensorFlow and data handling
- Model layout - inputs, outputs, targets, weights, bias, optimizer, and loss
- Interpreting the result and extracting the weights and bias
- Customizing your model
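As a hedged sketch (layer choice and hyperparameters are illustrative, not the course's exact notebook), the same linear model might look like this once translated to TensorFlow 2: a single Dense layer plays the role of the weight and bias, and compile/fit replace the manual training loop.

    import numpy as np
    import tensorflow as tf

    # Same data-generating rule as the NumPy example: t = 2x - 3 + noise
    x = np.random.uniform(-10, 10, (1000, 1))
    targets = 2 * x - 3 + np.random.uniform(-1, 1, (1000, 1))

    # One Dense unit holds the weight and the bias for us
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.02),
                  loss='mean_squared_error')
    model.fit(x, targets, epochs=100, verbose=0)

    # Extract the learned weight and bias
    weights, bias = model.layers[0].get_weights()
    print(weights, bias)  # again close to 2 and -3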
From this section on, we will explore deep neural networks. Most real-life dependencies cannot be modeled with a simple linear combination (as we have done so far). And because we want to be better forecasters, we need better models. Most of the time, this means working with a model that is more sophisticated than a linear model. In this section, we will talk about concepts like deep nets, non-linearities, activation functions, softmax activation, and backpropagation. Sounds a bit complex, but we have made it easy for you!
Lessons:
- The layer
- What is a deep net
- Really understand deep nets
- Why do we need non-linearities
- Activation functions
- Softmax activation
- Backpropagation
- Backpropagation - intuition
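To make the jump from a linear model to a deep net concrete, here is an illustrative tf.keras sketch (the layer sizes and class count are arbitrary placeholders): two hidden layers with ReLU non-linearities, and a softmax output that turns the final layer into class probabilities.

    import tensorflow as tf

    # Hypothetical sizes: 50 units per hidden layer, 3 output classes
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(50, activation='relu'),    # hidden layer 1 + non-linearity
        tf.keras.layers.Dense(50, activation='relu'),    # hidden layer 2 + non-linearity
        tf.keras.layers.Dense(3, activation='softmax'),  # probabilities over 3 classes
    ])
    # Without the 'relu' activations, the stacked layers would collapse into a
    # single linear transformation; model.fit() later applies backpropagation
    # automatically to train all layers at once.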
To get a truly deep understanding of deep neural networks, one has to look at the mathematics behind them. As backpropagation is at the core of the optimization process, we wanted to introduce you to it. This is not a mandatory part of the course, as TensorFlow, sklearn, and any other machine learning package (as opposed to plain NumPy) come with backpropagation already incorporated.
Lessons:
- Backpropagation mathematics
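For orientation, the recursion at the heart of backpropagation can be compressed into two formulas (standard notation rather than the lecture's exact symbols: a for activations, z for pre-activations, f for the activation function, \odot for element-wise multiplication, and w^{(l)}_{ij} connecting unit i of layer l-1 to unit j of layer l):

    \frac{\partial L}{\partial w^{(l)}_{ij}} = a^{(l-1)}_i \, \delta^{(l)}_j ,
    \qquad
    \delta^{(l)} = \left( W^{(l+1)} \delta^{(l+1)} \right) \odot f'\!\left( z^{(l)} \right)

The errors \delta are computed at the output layer first and then propagated backwards, layer by layer, which is what gives the algorithm its name.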
Two of the most common pitfalls you can run into when creating predictive models, especially in deep learning, are underfitting and overfitting your data. This means either taking less advantage of the machine learning algorithm than you could have due to insufficient training (underfitting), or creating a model that fits the training data too closely (overtraining), which makes it unsuitable for a different sample (overfitting).
Lessons:
- Underfitting and overfitting. A regression example
- Underfitting and overfitting. A classification example
- Train vs validation
- Train vs validation vs test
- N-fold cross validation
- Early stopping - motivation and types
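In TensorFlow 2, the standard guard against overfitting described here is a validation split plus an early-stopping callback. A minimal sketch, with toy data and a patience value chosen purely for illustration:

    import numpy as np
    import tensorflow as tf

    # Toy data, purely illustrative: 100 samples, 5 features, binary targets
    inputs = np.random.rand(100, 5)
    targets = np.random.randint(0, 2, (100, 1))

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy')

    # Stop training as soon as the validation loss stops improving
    early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)
    model.fit(inputs, targets,
              validation_split=0.1,  # hold out 10% of the data as a validation set
              epochs=100,
              callbacks=[early_stopping],
              verbose=0)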
Initialization is the process in which we set the initial values of the weights, and it's an important aspect of building a machine learning model. In this section, you will learn how to initialize the weights of your model and how to apply Xavier initialization.
Lessons:
- Initialization
- Types of simple initializations
- Xavier's initialization
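In tf.keras, Xavier initialization goes by its other name, Glorot, and is in fact the default for Dense layers; it can also be requested explicitly. A small sketch (layer sizes are arbitrary):

    import tensorflow as tf

    # Xavier (Glorot) uniform initialization, requested explicitly;
    # it is also tf.keras's default for Dense layers
    xavier_layer = tf.keras.layers.Dense(
        units=50,
        kernel_initializer='glorot_uniform',  # variance scaled by fan-in and fan-out
        bias_initializer='zeros',
    )

    # A simple alternative: small random normal values
    simple_layer = tf.keras.layers.Dense(
        units=50,
        kernel_initializer=tf.keras.initializers.RandomNormal(stddev=0.1),
    )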
Gradient descent iterates over the whole training set before updating the weights, and every iteration updates them in a relatively small way. Here, you will learn the common pitfalls of this method and how to address them using stochastic gradient descent, momentum, learning rate schedules, and adaptive learning rates.
Lessons:
- SGD&Batching
- Local minima pitfalls
- Momentum
- Learning rate schedules
- Learning rate schedules. A picture
- Adaptive learning schedules
- Adaptive moment estimation
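These remedies map directly onto tf.keras optimizers. An illustrative sketch of the options (all hyperparameter values are placeholders, not recommendations):

    import tensorflow as tf

    # Plain SGD with momentum, to roll through shallow local minima
    sgd = tf.keras.optimizers.SGD(learning_rate=0.02, momentum=0.9)

    # A learning rate schedule: start relatively large, decay exponentially
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.1, decay_steps=1000, decay_rate=0.96)
    sgd_scheduled = tf.keras.optimizers.SGD(learning_rate=schedule)

    # Adaptive moment estimation (Adam): adaptive learning rates plus momentum
    adam = tf.keras.optimizers.Adam(learning_rate=0.001)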
A large part of the effort data scientists put into creating a new model goes into preprocessing. This process refers to any manipulation we apply to the dataset before feeding it to the model for training. Learning how to preprocess data is fundamental for anyone who wants to be able to create machine learning models, as no meaningful framework can simply take raw data and provide an answer. In this part of the course, we will show you how to prepare your data for analysis and modeling.
Lessons:
- Preprocessing
- Basic preprocessing
- Standardization
- Dealing with categorical data
- One hot vs binary
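A compact sketch of the two workhorse steps described here, standardization and one-hot encoding, using scikit-learn (the example values are made up for illustration):

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Standardization: rescale each numeric column to mean 0 and standard deviation 1
    numeric = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
    standardized = StandardScaler().fit_transform(numeric)

    # One-hot encoding: turn a categorical column into binary indicator columns
    categories = np.array([['red'], ['green'], ['red']])
    one_hot = OneHotEncoder().fit_transform(categories).toarray()

    print(standardized)
    print(one_hot)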
Once we have learned all the relevant theory, we are ready to jump into deep waters. We explore the 'Hello world' of deep learning - the MNIST dataset, where we classify 60,000 images of handwritten digits into 10 classes (the digits 0 through 9).
Lessons:
- MNIST dataset
- How to tackle the MNIST dataset
- MNIST - Importing libraries and data
- Preprocess the data - create a validation dataset and scale the data
- Preprocess the data - shuffle and batch
- Outline the model
- Select the loss and the optimizer
- Learning
- Testing the model
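For orientation, here is a condensed sketch of the whole pipeline using the copy of MNIST bundled with tf.keras (the course notebook may load the data differently, and the layer sizes here are illustrative):

    import tensorflow as tf

    # Import the data (the bundled tf.keras copy)
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

    # Preprocess: scale pixel values from [0, 255] down to [0, 1]
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Outline the model: flatten the 28x28 images, one hidden layer, softmax over 10 digits
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

    # Select the loss and the optimizer
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # Learning (with a held-out validation split), then testing
    model.fit(x_train, y_train, validation_split=0.1, epochs=5)
    model.evaluate(x_test, y_test)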
Data science without an application is nothing but research. Since we at 365 believe that the skills you acquire should be relevant for your work, we finish the course with a business case, where we implement all the deep learning knowledge you've acquired.
Lessons:
- Exploring the dataset and identifying predictors
- Outlining the business case solution
- Balancing a dataset
- Preprocessing the data
- Load the preprocessed data
- Learning and interpreting the result
- Setting an early stopping mechanism
- Testing the business model
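Of the steps above, balancing a dataset is the one that has not appeared earlier in the course, so here is a hedged NumPy sketch of one common approach, undersampling the majority class (the arrays and the class ratio are hypothetical, not the business case's actual data):

    import numpy as np

    # Hypothetical arrays: 'inputs' of shape (n_samples, n_features),
    # 'targets' of 0s and 1s with far fewer 1s than 0s
    inputs = np.random.rand(1000, 10)
    targets = (np.random.rand(1000) < 0.2).astype(int)

    # Keep every minority-class (1) sample and an equal number of majority-class (0) samples
    one_indices = np.where(targets == 1)[0]
    zero_indices = np.where(targets == 0)[0]
    keep = np.concatenate([one_indices,
                           np.random.choice(zero_indices, len(one_indices), replace=False)])
    np.random.shuffle(keep)

    balanced_inputs, balanced_targets = inputs[keep], targets[keep]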
This section is designed to help you continue your specialization and data science journey. Here, we discuss what else is out there in the machine learning world, how Google's DeepMind uses machine learning, what CNNs and RNNs are, and what non-NN approaches exist.
Lessons:
- Summary
- What's more out there
- An overview of CNNs
- How DeepMind uses deep learning
- An overview of RNNs
- Non-NN approaches
with Iskren Vankov and Iliya Valchanov