Last answered: 18 Dec 2023

Posted on: 18 Dec 2023

Resolved: K-fold cross validation

Hi, I have a question about k-fold cross-validation. I understand that in this method we split our data into k folds and train a model for each fold, and to validate our method we can use the average precision of those models. But when it comes to inference, how does this work? Technically we have more than one model in this case, right? Do we ensemble them?

Thanks for your attention!

1 answer (1 marked as helpful)

Posted on: 18 Dec 2023


Hi Ryo!

Thanks for reaching out!

In k-fold cross-validation, we split the data into k folds and train a separate model for each one (holding that fold out for validation). However, for inference you typically don't use all these models. The main purpose of k-fold cross-validation is to evaluate your modeling approach and tune its hyperparameters. Once you have assessed performance this way, you generally train a single final model on the entire dataset using the best parameters found during cross-validation, and that final model is the one used for inference.
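To make the workflow concrete, here is a minimal sketch in pure Python. The 1-nearest-neighbour "model" and the toy 1-D dataset are purely illustrative assumptions, not part of the course material; the point is the shape of the loop: k held-out evaluations, then one final fit on all the data.

```python
# Minimal k-fold cross-validation sketch (pure Python, no libraries).
# The "model" is a toy 1-nearest-neighbour classifier on 1-D data,
# chosen only so the example is self-contained.

def train_1nn(X, y):
    """'Training' a 1-NN model just means storing the labelled points."""
    return list(zip(X, y))

def predict_1nn(model, x):
    """Return the label of the stored point closest to x."""
    return min(model, key=lambda pair: abs(pair[0] - x))[1]

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(X, y, k):
    """Train k models, each validated on its held-out fold.

    Returns the per-fold accuracies; their average estimates how well
    this modelling approach generalises. The k models themselves are
    then discarded.
    """
    scores = []
    for fold in k_fold_indices(len(X), k):
        train_idx = [i for i in range(len(X)) if i not in fold]
        model = train_1nn([X[i] for i in train_idx],
                          [y[i] for i in train_idx])
        correct = sum(predict_1nn(model, X[i]) == y[i] for i in fold)
        scores.append(correct / len(fold))
    return scores

# Toy data: points near 1-4 are class 0, points near 11-14 are class 1.
X = [1.0, 2.0, 3.0, 4.0, 11.0, 12.0, 13.0, 14.0]
y = [0, 0, 0, 0, 1, 1, 1, 1]

scores = cross_validate(X, y, k=4)
avg_score = sum(scores) / len(scores)  # cross-validated estimate

# For inference: refit once on ALL the data; this is the model you deploy.
final_model = train_1nn(X, y)
```

Note that `final_model` is not an ensemble of the four fold models; it is a fresh fit on the full dataset, and `avg_score` is your estimate of how well it should perform.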

Hope this helps.
