Deep Learning with TensorFlow

This course builds on everything covered in the program to teach you how to develop, implement, and deploy sophisticated machine and deep learning algorithms with TensorFlow.
Hours: 6
Lessons: 84
Quizzes: 15
Assignments: 6

Course description

Machine and deep learning are among the quantitative analysis skills that differentiate the data scientist from the other members of the team. The field of machine learning is the driving force of artificial intelligence. This course will teach you how to leverage deep learning and neural networks for the purposes of data science, and we will be doing this with TensorFlow.

FREE
1

Introduction

In this introductory part of the course, we will discuss why you will need machine learning when working as a data scientist, what you will see in the following chapters of this training, and the best way to take the course.

FREE
2

Neural networks intro

The basic logic behind training an algorithm involves four ingredients: data, a model, an objective function, and an optimization algorithm. In this part of the course, we describe each of them and build a solid foundation that will allow you to understand the idea behind using neural networks. After completing this chapter, you will know what the various types of machine learning are, how to train a machine learning model, and what terms like objective function, L2-norm loss, cross-entropy loss, one-parameter gradient descent, and n-parameter gradient descent mean.
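To give a taste of how the four ingredients fit together, here is a minimal NumPy sketch of one-parameter gradient descent on an L2-norm loss. The data, the true weight of 2, and the learning rate are arbitrary choices for illustration, not values taken from the course.

```python
import numpy as np

# Illustrative one-parameter example: fit y = w * x by minimizing the L2-norm loss.
xs = np.array([1.0, 2.0, 3.0, 4.0])      # data
targets = 2.0 * xs                        # the relationship we want the model to learn

w = 0.0                                   # initial guess for the single parameter
learning_rate = 0.02

for _ in range(100):
    outputs = w * xs                                   # model
    loss = np.sum((outputs - targets) ** 2)            # objective function (L2-norm loss)
    gradient = np.sum(2 * (outputs - targets) * xs)    # derivative of the loss w.r.t. w
    w -= learning_rate * gradient                      # optimization algorithm: gradient descent

print(w)  # should end up close to 2.0
```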

3

Setting up the environment

Here, we will show you how to install Jupyter Notebook (the environment we will use to code in Python) and how to import the relevant libraries. Because this course is taught in Python, we will be working with several popular libraries: NumPy, SciPy, scikit-learn, and TensorFlow.
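For reference, a first notebook cell typically looks like the sketch below. It assumes the packages are already installed (for example via pip or conda); the exact versions on your machine may differ.

```python
# Import the course's main libraries and print their versions to confirm the setup.
import numpy as np
import scipy
import sklearn
import tensorflow as tf

print(np.__version__, scipy.__version__, sklearn.__version__, tf.__version__)
```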

4

Minimal example

It is time to build your first machine learning algorithm. We will show you how to import the relevant libraries, how to generate random input data for the model to train on, how to create the targets the model will aim at, and how to plot the training data. The mechanics of this model exemplify how all regressions you’ve run in different packages (scikit-learn) or software (Excel) work. This is an iterative method aiming to find the best-fitting line.
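A rough sketch of what generating such training data can look like is shown below. The weights 2 and -3, the bias 5, and the sample size are illustrative choices, not the exact values used in the course.

```python
import numpy as np
import matplotlib.pyplot as plt

observations = 1000

# Random input data for the model to train on.
xs = np.random.uniform(-10, 10, (observations, 1))
zs = np.random.uniform(-10, 10, (observations, 1))
inputs = np.column_stack((xs, zs))

# Targets follow a known linear rule plus noise, so we can check whether
# the algorithm recovers the chosen weights (2 and -3) and bias (5).
noise = np.random.uniform(-1, 1, (observations, 1))
targets = 2 * xs - 3 * zs + 5 + noise

# Plot one input dimension against the targets to inspect the training data.
plt.scatter(xs, targets)
plt.xlabel('xs')
plt.ylabel('targets')
plt.show()
```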

5

Introduction to TensorFlow

In this section, we will introduce the TensorFlow framework – a deep learning library developed by Google. It allows you to construct fairly sophisticated models with little coding. This intro section teaches you what tensors are and why the TensorFlow framework is one of the preferred tools of data scientists in 2019.
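If you have not met tensors before, the sketch below may help: a tensor is simply an n-dimensional array that TensorFlow can operate on. The syntax assumes the TensorFlow 2.x eager-execution API; the course materials may use a different version.

```python
import tensorflow as tf

# Tensors are n-dimensional arrays; these are rank-2 tensors (matrices).
a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.constant([[1.0, 1.0],
                 [0.0, 1.0]])

print(tf.matmul(a, b))   # matrix multiplication on tensors
print(a.shape, a.dtype)  # (2, 2), float32
```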

6

Deep nets overview

From this section on, we will explore deep neural networks. Most real-life dependencies cannot be modelled with a simple linear combination (as we have done so far). And because we want to be better forecasters, we need better models. Most of the time, this means working with a model that is more sophisticated than a linear model. In this section, we will talk about concepts like deep nets, non-linearities, activation functions, softmax activation, and backpropagation. Sounds a bit complex, but we have made it easy for you!
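To show where these terms appear in code, here is a minimal sketch of a deep net with non-linear (ReLU) activations and a softmax output, written against the TensorFlow 2.x Keras API. The layer sizes are arbitrary, and the course may build its networks differently.

```python
import tensorflow as tf

# A small deep net: two hidden layers with non-linear activations,
# and a softmax output layer that turns the final outputs into probabilities.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation='relu'),
    tf.keras.layers.Dense(50, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
```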

7

Backpropagation (optional)

To get a truly deep understanding of deep neural networks, you will have to look at the mathematics behind them. As backpropagation is at the core of the optimization process, we wanted to introduce you to it. This is not a necessary part of the course, as TensorFlow, sklearn, and any other machine learning package (as opposed to plain NumPy) will have backpropagation methods incorporated.
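For the mathematically curious, the heart of backpropagation is the chain rule. As a rough sketch, for a single unit with pre-activation z = wx + b, output a = f(z), and loss L, the gradient with respect to the weight is:

```latex
\[
  \frac{\partial L}{\partial w}
  = \frac{\partial L}{\partial a}\,
    \frac{\partial a}{\partial z}\,
    \frac{\partial z}{\partial w}
  = \frac{\partial L}{\partial a}\, f'(z)\, x
\]
```

Applying this rule layer by layer, from the output back to the input, is what the built-in backpropagation methods of these packages do for you.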

8

Overfitting

Two of the most common pitfalls when creating predictive models, and especially in deep learning, are underfitting and overfitting your data. Underfitting means taking less advantage of the machine learning algorithm than you could have due to insufficient training; overfitting means creating a model that fits the training data too closely (overtraining it), which makes it unsuitable for a different sample.
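One common guard against overfitting is to hold out part of the data for validation and stop training as soon as the validation loss stops improving. The sketch below illustrates this with the Keras EarlyStopping callback on made-up random data; it assumes the TensorFlow 2.x API and is only meant to show the mechanics, not the course's own example.

```python
import numpy as np
import tensorflow as tf

# Made-up data purely for illustration; there is nothing real to learn here.
inputs = np.random.rand(1000, 10).astype('float32')
targets = np.random.randint(0, 2, size=(1000,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Hold out 20% of the data for validation and stop once the
# validation loss has not improved for two consecutive epochs.
early_stopping = tf.keras.callbacks.EarlyStopping(patience=2)
model.fit(inputs, targets, epochs=50, validation_split=0.2,
          callbacks=[early_stopping], verbose=0)
```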

9

Initialization

Initialization is the process of setting the initial values of the weights, and it is an important aspect of building a machine learning model. In this section, you will learn how to initialize the weights of your model and how to apply Xavier initialization.
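In Keras, Xavier initialization is known as Glorot initialization, and the sketch below shows how to request it explicitly for a layer (TensorFlow 2.x API assumed; the course may set it up differently).

```python
import tensorflow as tf

# A Dense layer whose weights are explicitly initialized with the
# Xavier (Glorot) uniform initializer; the seed makes the draw reproducible.
layer = tf.keras.layers.Dense(
    units=50,
    activation='relu',
    kernel_initializer=tf.keras.initializers.GlorotUniform(seed=0),
)
```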

10

Optimizers

Gradient descent iterates over the whole training set before updating the weights, and every iteration updates them in a relatively small way. Here, you will learn the common pitfalls of this method and how to overcome them using stochastic gradient descent, momentum, learning rate schedules, and adaptive learning rates.
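As a quick orientation, here is how these optimizers look in the TensorFlow 2.x Keras API. The learning rates and decay parameters are illustrative values, not recommendations from the course.

```python
import tensorflow as tf

# Plain stochastic gradient descent.
sgd = tf.keras.optimizers.SGD(learning_rate=0.01)

# SGD with momentum, which smooths the updates across iterations.
sgd_momentum = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)

# A learning rate schedule: decay the rate exponentially as training progresses.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.96)
sgd_scheduled = tf.keras.optimizers.SGD(learning_rate=schedule)

# Adam combines momentum with adaptive learning rates per parameter.
adam = tf.keras.optimizers.Adam(learning_rate=0.001)
```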

11

Preprocessing

A large part of the effort data scientists put into creating a new model is related to preprocessing. This term refers to any manipulation we apply to the dataset before using it to train the model. Learning how to preprocess data is fundamental for anyone who wants to create machine learning models, as no meaningful framework can simply take raw data and provide an answer. In this part of the course, we will show you how to prepare your data for analysis and modeling.
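A typical preprocessing step is standardization: rescaling each input so it has mean 0 and standard deviation 1. The sketch below uses scikit-learn on made-up numbers; the course's own preprocessing pipeline may differ.

```python
import numpy as np
from sklearn import preprocessing

# Made-up raw data: two features on very different scales.
raw_data = np.array([[1.0, 2000.0],
                     [2.0, 1500.0],
                     [3.0, 1800.0]])

# Standardize each column to mean 0 and standard deviation 1.
scaled_data = preprocessing.scale(raw_data)
print(scaled_data)
```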

12

Deeper example

All the lessons so far have given you solid preparation for what we are about to start doing: writing code. The problem we will solve here is the "Hello, world" of machine learning: MNIST classification, based on a dataset of 70,000 images of handwritten digits. Together, we will create an algorithm that takes an image as input and correctly determines which number is shown in that image.
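A compact sketch of what such a classifier can look like is shown below, using the MNIST loader and Keras layers from TensorFlow 2.x; the course may load the data differently and choose other layer sizes.

```python
import tensorflow as tf

# Load the 70,000 handwritten digits (60,000 for training, 10,000 for testing)
# and scale the pixel values to the [0, 1] range.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # each image becomes a 784-vector
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),  # one output per digit class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```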

13

Business case

In this section, we will solve a real-life business case, such as the ones data scientists solve on the job. You will build a model that determines how likely it is that a specific client will come back and buy another product from a company selling audiobooks. This is a great example of how machine learning can help a company optimize its marketing efforts and ultimately grow its bottom line.

14

Conclusion

This section is designed to help you continue your specialization and data science journey. We discuss what else is out there in the machine learning world, how Google's DeepMind uses machine learning, what RNNs are, and what non-NN approaches exist.