REWARD FUNCTIONS
Are reward functions not functions we are trying to maximize?
If I understood the lesson correctly, the higher the reward, the better the output. That would mean a higher reward is a joy to the machine learning scientist.
Please educate me.
Hey Jonathan,
Good to hear from you!
Yes, you are correct! Reward functions in the context of machine learning, particularly in reinforcement learning, are designed to provide feedback to the model about how well it is performing. Higher rewards indicate better performance, and improving these rewards is a key objective for machine learning practitioners.
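To make this concrete, here is a small toy sketch in Python (my own made-up example, not from the course materials; the action values and function names are hypothetical). The reward function scores each action, and the learner keeps whichever action earns the highest average reward, i.e., it maximizes:

```python
import random

# Hypothetical reward function for a toy three-action problem.
# Action 2 has the highest true value, so it pays the most on average.
def reward(action: int) -> float:
    true_values = {0: 0.1, 1: 0.5, 2: 0.9}
    return true_values[action] + random.gauss(0, 0.05)  # noisy feedback

# Try each action many times and keep the one with the highest
# average reward -- we maximize the reward, never minimize it.
def estimate_best_action(n_trials: int = 1000) -> int:
    averages = {
        action: sum(reward(action) for _ in range(n_trials)) / n_trials
        for action in (0, 1, 2)
    }
    return max(averages, key=averages.get)

print(estimate_best_action())  # almost always prints 2, the highest-reward action
```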
Best,
Ned
Thank you, Ned, for the clarification.
I asked because of the answer that was marked correct on question 2:
Reward functions are NOT:
I chose the option: "functions we are trying to maximize"
However, the feedback said the correct answer was: "functions we are trying to minimize"
Did I misunderstand the question?