# Deep Learning with TensorFlow 2.0

Machine learning and deep learning are among the quantitative skills that set the data scientist apart from the other members of the team, and machine learning is the driving force of artificial intelligence. This course teaches you how to leverage deep learning and neural networks for data science. The technology we employ is TensorFlow 2.0, a state-of-the-art deep learning framework.

## Introduction

What are machine learning, deep learning, and AI? How are they useful, and are they really as important as people tend to believe?

- Welcome to Machine Learning
- What does the course cover

## Intro to Neural Networks

Neural networks are more or less what we mean by 'deep learning' nowadays. In this section we explain the main rationale behind simple feed-forward neural networks.

- Introduction to neural networks
- Training the model
- Types of machine learning
- The linear model
- Graphical representation
- The objective function
- L2-norm loss
- Cross-entropy loss
- One-parameter gradient descent
- N-parameter gradient descent
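The core idea behind the gradient-descent lessons above can be sketched in a few lines of Python. The loss function and learning rate below are illustrative toy choices, not taken from the course: we minimize a simple one-parameter quadratic loss by repeatedly stepping against the gradient.

```python
# Minimize the toy loss f(w) = (w - 3)**2 with one-parameter gradient descent.
# Its gradient is f'(w) = 2 * (w - 3), so each step moves w toward 3.
w = 0.0      # initial value of the parameter
eta = 0.1    # learning rate: how big a step we take against the gradient
for _ in range(100):
    gradient = 2 * (w - 3)
    w -= eta * gradient

print(round(w, 4))  # converges to the minimizer, 3.0
```

The same update rule, applied to every parameter at once, is N-parameter gradient descent.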

## Setting up the environment

Here, we will show you how to install the Jupyter Notebook (the environment we will use to code in Python) and how to import the relevant libraries. Because this course is based in Python, we will be working with several popular libraries: NumPy, SciPy, scikit-learn, and TensorFlow 2.0.

- Setting up the environment - Do not skip, please!
- Why Python and why Jupyter
- Installing Anaconda
- Jupyter Dashboard - Part 1
- Jupyter Dashboard - Part 2
- Installing TensorFlow 2.0

## Minimal example

To understand the inner workings of neural networks, we start with a simple example (called the 'minimal example'). It is a naïve network, basically equivalent to a linear regression.

- Outline
- Generating the data (optional)
- Initializing the variables
- Training the model
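A minimal example along these lines can be sketched in NumPy. The data-generating coefficients, sample size, and learning rate below are illustrative choices, not the course's exact values: we generate data from a known linear rule, initialize the weights and bias randomly, and train with gradient descent on the L2-norm loss.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generate data from a known linear rule: t = 2*x - 3*z + 5 + noise.
n = 1000
x = rng.uniform(-10, 10, (n, 1))
z = rng.uniform(-10, 10, (n, 1))
inputs = np.column_stack((x, z))
targets = 2 * x - 3 * z + 5 + rng.uniform(-1, 1, (n, 1))

# Initialize the variables (weights and bias) with small random values.
weights = rng.uniform(-0.1, 0.1, (2, 1))
bias = rng.uniform(-0.1, 0.1, 1)

# Train with gradient descent on the mean L2-norm (squared error) loss.
eta = 0.02
for _ in range(200):
    outputs = inputs @ weights + bias   # the linear model
    deltas = outputs - targets
    weights -= eta * (inputs.T @ deltas) / n
    bias -= eta * deltas.mean()

print(weights.ravel(), bias)  # approaches the true values [2, -3] and 5
```

If training works, the learned weights and bias recover the coefficients used to generate the data.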

## Introduction to TensorFlow 2

Having created the simple net, we 'translate' it to TensorFlow. This is our way of using a simple, well-understood problem to introduce the syntax and logic of TensorFlow.

- TensorFlow outline
- TensorFlow 2 intro
- A note on coding in TensorFlow
- Types of file formats in TensorFlow and data handling
- Model layout - inputs, outputs, targets, weights, biases, optimizer and loss
- Interpreting the result and extracting the weights and bias
- Customizing your model
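A sketch of that translation, assuming the same illustrative linear data-generating rule as before (the coefficients and hyperparameters are ours, not the course's): a single `Dense` layer with no activation is exactly the linear model, and `get_weights()` extracts the trained weights and bias.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(42)
inputs = rng.uniform(-10, 10, (1000, 2))
targets = 2 * inputs[:, :1] - 3 * inputs[:, 1:] + 5 + rng.uniform(-1, 1, (1000, 1))

# One Dense layer, no activation: outputs = inputs @ weights + bias.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss='mean_squared_error')
model.fit(inputs, targets, epochs=100, verbose=0)

# Extract the learned weights and bias from the trained layer.
weights, bias = model.layers[0].get_weights()
print(weights.ravel(), bias)  # close to [2, -3] and [5]
```

The model layout mirrors the NumPy version: inputs and targets go into `fit`, while the optimizer and loss are chosen in `compile`.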

## Deep nets overview

To have 'deep learning' we need 'deep' neural networks. In this section, we explain what exactly it means to be deep and focus on other important characteristics such as width and activation functions. Finally, we explore the backpropagation algorithm.

- The layer
- What is a deep net
- Really understand deep nets
- Why do we need non-linearities
- Activation functions
- Softmax activation
- Backpropagation
- Backpropagation - intuition
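As a taste of the activation-function lessons, here is a minimal NumPy sketch of the softmax activation (the input scores are made up for illustration). Softmax turns a vector of real-valued scores into a probability distribution, which is why it is the standard choice for a classifier's output layer.

```python
import numpy as np

def softmax(z):
    """Turn a vector of real-valued scores into probabilities that sum to 1."""
    exps = np.exp(z - z.max())   # subtract the max for numerical stability
    return exps / exps.sum()

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs.round(3))            # [0.659 0.242 0.099]
print(round(probs.sum(), 6))     # 1.0
```

Larger scores get larger probabilities, but every class keeps a non-zero share: that is the non-linearity at work.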

## Overfitting

Neural networks are extremely good at modeling the data at hand. That's why they can often learn the data TOO well. This is called overfitting. Of course, there are numerous ways to prevent this from happening, which we explore in this section.

- Underfitting and overfitting
- Underfitting and overfitting. A classification example
- Train vs validation
- Train vs validation vs test
- N-fold cross validation
- Early stopping - motivation and types
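The early-stopping idea can be sketched in plain Python. The validation-loss values below are made up to show the typical pattern (improvement, then overfitting); the logic halts training once the validation loss stops improving for a set number of epochs and remembers the best epoch.

```python
# Illustrative validation-loss curve: improves, then starts to overfit.
val_losses = [0.9, 0.7, 0.55, 0.48, 0.45, 0.44, 0.46, 0.50, 0.57]

# Early stopping: halt when the validation loss has not improved
# for `patience` consecutive epochs, keeping track of the best epoch.
patience = 2
best_loss, best_epoch, wait = float('inf'), 0, 0
for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss, best_epoch, wait = loss, epoch, 0
    else:
        wait += 1
        if wait >= patience:
            break

print(best_epoch, best_loss)  # best was epoch 5 (loss 0.44); we stop at epoch 7
```

In practice the model's parameters from the best epoch are the ones you keep.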

## Initialization

When the model is learning, it searches for better and better solutions to the problem at hand. However, it starts from some initial values of its parameters. Our starting point matters, and that's what initialization is all about.

- Initialization
- Types of simple initializations
- Xavier's initialization
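Xavier's (Glorot's) initialization can be sketched in NumPy. The layer sizes below are illustrative; the key point is that the range of the initial weights shrinks as the layer gets bigger, keeping the variance of the signal roughly constant from layer to layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_uniform(n_in, n_out):
    """Xavier (Glorot) uniform initialization: draw weights from
    U(-limit, limit) with limit = sqrt(6 / (n_in + n_out))."""
    limit = np.sqrt(6 / (n_in + n_out))
    return rng.uniform(-limit, limit, (n_in, n_out))

W = xavier_uniform(784, 50)
print(W.shape, round(W.var(), 4))  # variance is about 2 / (784 + 50) ≈ 0.0024
```

Compare this with a naive fixed range like U(-0.1, 0.1), which ignores the layer sizes entirely.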

## Optimizers

There is a trade-off between having a fast model and an accurate model. In this section, we explore different optimization algorithms based on the gradient descent logic, as well as learning rate schedules and batching.

- SGD & Batching
- Local minima pitfalls
- Momentum
- Learning rate schedules
- Learning rate schedules. A picture
- Adaptive learning schedules
- Adaptive moment estimation
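The momentum lesson's core update rule can be sketched on the same toy quadratic loss used earlier (the loss, learning rate, and momentum coefficient here are illustrative). A velocity term accumulates past gradients, smoothing the updates and speeding up progress along consistent directions.

```python
# Gradient descent with momentum on the toy loss f(w) = (w - 3)**2.
w, velocity = 0.0, 0.0
eta, beta = 0.05, 0.9   # learning rate and momentum coefficient
for _ in range(500):
    gradient = 2 * (w - 3)
    velocity = beta * velocity - eta * gradient  # accumulate past gradients
    w += velocity

print(round(w, 4))  # converges to the minimizer, 3.0
```

With beta = 0, this reduces to plain gradient descent.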

## Preprocessing

Preprocessing is a crucial step in any modeling problem. While there are dozens of different preprocessing techniques, several are commonly employed for almost all neural networks.

- Preprocessing
- Basic preprocessing
- Standardization
- Dealing with categorical data
- One-hot vs binary
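Two of the techniques above, standardization and one-hot encoding, can be sketched in NumPy (the sample data is made up for illustration):

```python
import numpy as np

X = np.array([[180., 75.], [165., 60.], [172., 68.]])  # e.g. height, weight

# Standardization: subtract the mean and divide by the standard deviation,
# so every feature ends up with mean 0 and standard deviation 1.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# One-hot encoding for a categorical variable with 3 categories:
# each label becomes a vector with a single 1 in the matching position.
labels = np.array([0, 2, 1, 2])
one_hot = np.eye(3)[labels]
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```

Standardization keeps features on comparable scales, which makes gradient descent far better behaved.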

## Deeper example

Once we have learned all the relevant theory, we are ready to jump into deep waters. We explore the 'Hello world' of deep learning: the MNIST dataset, where we classify 60,000 images of handwritten digits into 10 classes (the digits 0 through 9).

- The dataset
- How to tackle the MNIST
- Importing the relevant libraries and loading the data
- Preprocess the data - create a validation dataset and scale the data
- Preprocess the data - shuffle and batch the data
- Outline the model
- Select the loss and the optimizer
- Learning
- Testing the model
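The scale, shuffle, and batch steps can be sketched in NumPy. The arrays below are small random stand-ins for the real MNIST images (which are 28x28 with pixel values 0-255), so the shapes match but the data is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the MNIST arrays: 100 "images" and their digit labels.
images = rng.integers(0, 256, (100, 28, 28)).astype('float32')
labels = rng.integers(0, 10, 100)

# Scale pixel values from [0, 255] to [0, 1].
images /= 255.0

# Shuffle inputs and targets with the SAME permutation, then split into batches.
perm = rng.permutation(len(images))
images, labels = images[perm], labels[perm]

batch_size = 32
batches = [(images[i:i + batch_size], labels[i:i + batch_size])
           for i in range(0, len(images), batch_size)]
print(len(batches), batches[0][0].shape)  # 4 batches; the first is (32, 28, 28)
```

Shuffling images and labels together is essential; shuffling them separately would destroy the input-target pairing.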

## Business case

Data science without an application is nothing but research. Since we at 365 believe that the skills you acquire should be relevant to your work, we finish the course with a business case, where we apply all the deep learning knowledge you've acquired.

- Exploring the dataset and identifying predictors
- Outlining the business case solution
- Balancing the dataset
- Preprocessing the data
- Load the preprocessed data
- Learning and interpreting the result
- Setting an early stopping mechanism
- Testing the model
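One of the steps above, balancing the dataset, can be sketched in NumPy. The class ratio and features below are made up for illustration; the approach shown is undersampling, where we keep all minority-class samples and an equal number of randomly chosen majority-class samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy targets: 90 samples of class 0, only 10 of class 1.
targets = np.array([0] * 90 + [1] * 10)
inputs = rng.normal(size=(100, 3))

# Undersample the majority class down to the minority-class count.
minority = np.where(targets == 1)[0]
majority = np.where(targets == 0)[0]
keep = np.concatenate([minority,
                       rng.choice(majority, len(minority), replace=False)])
rng.shuffle(keep)

inputs_bal, targets_bal = inputs[keep], targets[keep]
print(len(targets_bal), targets_bal.sum())  # 20 samples, 10 from each class
```

Without balancing, a model can score high accuracy simply by always predicting the majority class.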

## Machine and Deep Learning

This course is part of Module 3 of the 365 Data Science Program. The complete training consists of four modules, each building upon your knowledge from the previous one. Expanding on your statistical and programming skills from Modules 1 and 2, Module 3 is designed to improve your programming skills and develop your advanced statistical thinking. You will learn how to build complete linear and logistic regression models, how to cluster data, and how to build deep learning models with TensorFlow 2.0.
