Slight difference between my results and results shown from the lecture
Is it normal to obtain results from my model that are slightly different from those shown in the lecture? Or, theoretically, given that we set up the same parameters for the train and test sets and use the same model, should the results be exactly the same?
Thank you
I'm referring to the accuracy of the model.
Hi Kai N!
Thanks for reaching out.
When randomization has not been used, the reason for an output that differs from the course is usually one (or more) of the following:
- using a different dataset
- performing a step differently from how it was done in the course
- a change in the formulas/procedures used during the computations.
While it is up to the user to ensure the first two are not the cause of the difference, the third one can occur when the modules used have been updated (since the lectures were recorded at an earlier point in time).
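As a small illustration (not taken from the course itself), fixing the random seed is what makes a "random" train/test split reproducible; with the same seed, the same data, and the same steps, repeated runs produce identical splits, and therefore identical accuracy. The function below is a hypothetical sketch using only the standard library:

```python
import random

def split_indices(n, test_fraction=0.2, seed=42):
    """Hypothetical train/test split: shuffle indices with a fixed seed."""
    rng = random.Random(seed)            # fixed seed -> deterministic shuffle
    indices = list(range(n))
    rng.shuffle(indices)
    cut = int(n * (1 - test_fraction))   # e.g. 80% train, 20% test
    return indices[:cut], indices[cut:]

# Two runs with the same seed yield exactly the same split:
train_a, test_a = split_indices(100)
train_b, test_b = split_indices(100)
assert train_a == train_b and test_a == test_b
```

Even with a fixed seed, a newer version of a library may change its internal defaults or numerical procedures, which is why a small accuracy difference can still appear.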
Hope this helps.
Kind regards,
Martin