Objective function in reinforcement learning
I am just wondering why the objective function is meant to be maximized in the reinforcement learning approach. Isn't it counterintuitive that someone trying to optimize a model would want to maximize the objective function? It feels like maximizing the error, or pushing the model further from the target.
Hi Anas!
Thanks for reaching out.
The objective function and the error are two different components of an ML model.
In supervised learning, the objective is often a loss (error), which we minimize. In reinforcement learning, however, the objective function measures the reward the agent expects to collect, so we maximize it. We are not maximizing the error; a higher objective means the agent is obtaining more rewards, which reinforces the behavior that produced them (hence the name, so to speak). Mathematically, the two views are equivalent: maximizing a reward is the same as minimizing its negative.
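As a minimal illustration (not from the course, just a sketch): a common RL objective is the discounted return, the sum of rewards weighted by a discount factor. Maximizing it is the mirror image of minimizing a loss, since we can always define loss = -return.

```python
def discounted_return(rewards, gamma=0.9):
    """Sum of rewards, each discounted by gamma raised to its time step."""
    total = 0.0
    for t, r in enumerate(rewards):
        total += (gamma ** t) * r
    return total

# Hypothetical rewards collected over one short episode
rewards = [1.0, 0.0, 2.0]

objective = discounted_return(rewards)  # we want this as LARGE as possible
loss = -objective                       # equivalently, minimize its negative
print(objective)  # 1 + 0.9*0 + 0.9**2 * 2 = 2.62
```

So the convention (maximize vs. minimize) is just a sign flip; what matters is that the quantity being optimized represents reward, not error.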
Please refer to the 5:55 mark in the following video from the course for a more detailed explanation. Thank you.
https://learn.365datascience.com/courses/intro-to-data-and-data-science/machine-learning-ml-types-of-machine-learning/
Hope this helps.
Best,
Martin