The 365 Data Science team is proud to invite you to our own community forum: a well-built space where you can ask questions, get support, and share your knowledge to help others on their path to becoming Data Science specialists.
Ask
Anybody can ask a question
Answer
Anybody can answer
Vote
The best answers are voted up and moderated by our team

When training a linear model with L2-norm loss and gradient descent, the loss function reaches infinity and then NaN

0 Votes
1 Answer

I'm going through the TensorFlow section on neural networks, and I tried to build a linear model with the house price dataset from Kaggle, where I took two inputs and assigned the sale price as the target variable.
When I trained the model for 100 iterations with an initial learning rate of 0.02, it printed sensible loss values for the first 10 or so iterations, then started returning infinity and finally NaN.
I even tried tweaking the learning rate and reducing the iteration count to 10,
but I still got a non-linear output-vs-target graph, which seems wrong for a linear model.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

# Load the Kaggle house-price training data
train = pd.read_csv('C:/Users/HP/Downloads/train.csv')

# Two input features; interpolate forward to fill missing LotFrontage values
inputs = train[['LotArea', 'LotFrontage']]
inputs = inputs.interpolate(method='linear', limit_direction='forward')
inputn = inputs.to_numpy()
inputn.shape

# Target variable: the sale price, reshaped into a column vector
target = train['SalePrice']
targetn = train['SalePrice'].to_numpy()
targetn = targetn.reshape((1460, 1))
targetn.shape

observations = len(targetn)
print(observations)

# Small random initialization of the weights and bias
init_range = 0.1
weights = np.random.uniform(-init_range, init_range, size=(2, 1))
bias = np.random.uniform(-init_range, init_range, size=1)
print(weights, bias)

learning_rate = 0.02
for i in range(10):
    outputs = np.dot(inputn, weights) + bias
    deltas = outputs - targetn
    # Mean L2-norm loss (halved so the gradient is cleaner)
    loss = np.sum(deltas ** 2) / 2 / observations
    print(loss)
    # Gradient-descent update
    deltas_scaled = deltas / observations
    weights = weights - learning_rate * np.dot(inputn.T, deltas_scaled)
    bias = bias - learning_rate * np.sum(deltas_scaled)

print(weights, bias)

# Outputs from the last iteration plotted against the targets
plt.plot(outputs, target)
plt.xlabel('Output')
plt.ylabel('Target')
plt.show()

Link for the dataset used:
https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data?select=train.csv
1 Answer

365 Team
0 Votes

Hi there,
The reason for that is that your loss function is not converging to 0, but rather diverging to infinity.
This usually happens when the learning rate you have chosen is too big. Please try a lower value, e.g. 0.0001.
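For illustration, here is a minimal sketch of one possible fix, assuming the same file path and columns as in your code. Standardizing the inputs and target is an extra step beyond just lowering the learning rate: raw LotArea values are in the tens of thousands, so np.dot(inputn.T, deltas) becomes enormous and each update overshoots. On unit-scale data, even your original rate of 0.02 should converge.

import pandas as pd
import numpy as np

train = pd.read_csv('C:/Users/HP/Downloads/train.csv')

# Same preprocessing as in the question
inputs = train[['LotArea', 'LotFrontage']].interpolate(method='linear', limit_direction='forward')
inputn = inputs.to_numpy()
targetn = train['SalePrice'].to_numpy().reshape(-1, 1)

# Standardize features and target to zero mean and unit variance;
# this keeps the gradient magnitudes comparable to the weights
inputn = (inputn - inputn.mean(axis=0)) / inputn.std(axis=0)
target_scaled = (targetn - targetn.mean()) / targetn.std()

observations = len(target_scaled)
weights = np.random.uniform(-0.1, 0.1, size=(2, 1))
bias = np.random.uniform(-0.1, 0.1, size=1)

learning_rate = 0.02  # safe now that the data is on a unit scale
for i in range(100):
    outputs = np.dot(inputn, weights) + bias
    deltas = outputs - target_scaled
    loss = np.sum(deltas ** 2) / 2 / observations
    print(loss)  # should decrease steadily instead of blowing up
    deltas_scaled = deltas / observations
    weights = weights - learning_rate * np.dot(inputn.T, deltas_scaled)
    bias = bias - learning_rate * np.sum(deltas_scaled)

# Predictions in the original units: outputs * targetn.std() + targetn.mean()

Note that the weights learned this way are in standardized units; to get sale-price predictions back in dollars, rescale the outputs as shown in the last comment.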
Best,
The 365 Team