Last answered: 09 Oct 2020
Posted on: 08 Oct 2020


Objective function + algorithm optimization

Hi, this is more of a conceptual question. In the course, the confusion matrix is created after testing the model, and then the ROC curve and a couple of model performance measures (Gini and Kolmogorov-Smirnov) are calculated. However, there seem to be no further iterations for the model to learn; the coefficients and intercept obtained in the first place are treated as good enough. My question is: could the model be further improved to make better predictions? Should/could this be done by minimising the loss function with gradient descent (whatever the loss function and optimizer are for logistic regression)? Thanks.
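For context, here is a minimal sketch of the evaluation step described above, assuming a scikit-learn workflow with toy data (the course's actual dataset and variable names will differ). Gini is derived from the AUC, and the KS statistic compares the predicted-score distributions of the two classes.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Toy data standing in for the course dataset (illustration only).
X, y = make_classification(n_samples=2000, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Fit once; the solver already minimises the log-loss internally.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]   # predicted probability of class 1
pred = (proba >= 0.5).astype(int)           # default 0.5 cut-off

print("Confusion matrix:\n", confusion_matrix(y_test, pred))

auc = roc_auc_score(y_test, proba)
gini = 2 * auc - 1                           # Gini coefficient from the AUC
ks = ks_2samp(proba[y_test == 1], proba[y_test == 0]).statistic
print(f"AUC = {auc:.3f}, Gini = {gini:.3f}, KS = {ks:.3f}")
```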
1 answer (0 marked as helpful)
Posted on: 09 Oct 2020

I just realised that since logistic regression is built on a linear model, there can only be one best-fit line (i.e. one particular set of coefficients and intercept). So the only way I see to improve the model is by adding more independent variables. In other words, there is no need for us to choose an objective function and an optimization method ourselves; those are more relevant for machine learning approaches (as opposed to traditional statistical methods). Please confirm.
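A minimal sketch of the convexity point, assuming toy data and scikit-learn: the log-loss of logistic regression is convex, so a hand-rolled gradient descent and the library's own solver should arrive at essentially the same coefficients, which is why re-running an optimiser on the same features cannot improve the fit.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
X = StandardScaler().fit_transform(X)          # scaling helps plain gradient descent

# Reference fit: large C makes the default L2 penalty negligible.
ref = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)

# Hand-rolled batch gradient descent on the same log-loss.
Xb = np.hstack([np.ones((len(X), 1)), X])      # add an intercept column
w = np.zeros(Xb.shape[1])
lr = 0.1
for _ in range(5000):
    p = 1 / (1 + np.exp(-Xb @ w))              # sigmoid(Xw)
    w -= lr * Xb.T @ (p - y) / len(y)          # gradient of the mean log-loss

print("solver   :", ref.intercept_, ref.coef_.ravel())
print("grad desc:", w[0], w[1:])
# Both should land on (numerically) very similar coefficients: the loss is
# convex, so there is a single optimum for a given set of features.
```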
