Lifting the Veil on Hyper-parameters for Value-based Deep Reinforcement Learning

AUTHORS

João G.M. Araújo, Johan S. Obando-Ceron, Pablo Samuel Castro

ABSTRACT

Successful applications of deep reinforcement learning (deep RL) combine algorithmic design and careful hyper-parameter selection. The former often comes from iterative improvements over existing algorithms, while the latter is either inherited from prior methods or tuned for the specific method being introduced. Although critical to a method’s performance, the effects of the various hyper-parameter choices are often overlooked in favour of algorithmic advances. In this paper, we perform an initial empirical investigation into a number of often-overlooked hyper-parameters for value-based deep RL agents, demonstrating their varying levels of importance. We conduct this study on a varied set of classic control environments, which helps highlight the effect each environment has on an algorithm’s hyper-parameter sensitivity.
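
To make the kind of study described above concrete, below is a minimal sketch of a hyper-parameter sweep for a value-based agent on a classic control task. It is not the paper’s code: the environment (Gymnasium’s CartPole-v1), the tabular Q-learning agent standing in for a deep RL agent, and the sweep ranges for the learning rate and exploration epsilon are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): a grid sweep over two
# often-overlooked hyper-parameters -- learning rate and exploration
# epsilon -- for a simple value-based agent on a classic control task.
import itertools

import gymnasium as gym
import numpy as np

N_BINS = 8        # discretization granularity per state dimension (assumed)
N_EPISODES = 300  # training episodes per hyper-parameter setting (assumed)


def discretize(obs, bins):
    """Map a continuous CartPole observation to a tuple of bin indices."""
    low = np.array([-2.4, -3.0, -0.21, -3.0])
    high = -low
    ratios = (np.clip(obs, low, high) - low) / (high - low)
    return tuple((ratios * (bins - 1)).astype(int))


def run(lr, epsilon, seed=0):
    """Train a tabular Q-learning agent; return mean return over the last 50 episodes."""
    env = gym.make("CartPole-v1")
    rng = np.random.default_rng(seed)
    q = np.zeros((N_BINS,) * 4 + (env.action_space.n,))
    returns = []
    for _ in range(N_EPISODES):
        obs, _ = env.reset(seed=int(rng.integers(1 << 31)))
        state, done, total = discretize(obs, N_BINS), False, 0.0
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                action = env.action_space.sample()
            else:
                action = int(np.argmax(q[state]))
            obs, reward, terminated, truncated, _ = env.step(action)
            next_state = discretize(obs, N_BINS)
            # one-step Q-learning update; terminal states bootstrap to zero
            target = reward + 0.99 * np.max(q[next_state]) * (not terminated)
            q[state + (action,)] += lr * (target - q[state + (action,)])
            state, done, total = next_state, terminated or truncated, total + reward
        returns.append(total)
    env.close()
    return float(np.mean(returns[-50:]))


if __name__ == "__main__":
    for lr, eps in itertools.product([0.05, 0.1, 0.5], [0.01, 0.1, 0.3]):
        print(f"lr={lr:<5} epsilon={eps:<5} mean return={run(lr, eps):.1f}")
```

Running the sweep prints the late-training return for each setting, giving a rough picture of how sensitive the agent is to each hyper-parameter on this particular environment; repeating it across several environments is the kind of per-environment sensitivity comparison the abstract describes.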