Evolutionary learning of interpretable decision trees

Authors: Leonardo Lucio Custode, Giovanni Iacca

66 pages, 31 figures, code available at: https://gitlab.com/leocus/ge_q_dts
License: CC BY-NC-ND 4.0

Abstract: Reinforcement learning techniques have achieved human-level performance in several tasks over the last decade. In recent years, however, the need for interpretability has emerged: we want to be able to understand how a system works and the reasons behind its decisions. Not only do we need interpretability to assess the safety of the produced systems, we also need it to extract knowledge about unknown problems. While some techniques that optimize decision trees for reinforcement learning do exist, they usually employ greedy algorithms or do not exploit the rewards given by the environment, which means they can easily get stuck in bad local optima. In this work, we propose a novel approach to interpretable reinforcement learning that uses decision trees. We use a two-level optimization scheme that combines the advantages of evolutionary algorithms with the advantages of Q-learning. This decomposes the problem into two sub-problems: finding a meaningful and useful decomposition of the state space, and associating an action with each state. We test our approach on three well-known reinforcement learning benchmarks, on which it is competitive with the state of the art in both performance and interpretability. Finally, we perform an ablation study confirming that the two-level optimization approach yields a performance boost in non-trivial environments with respect to a one-layer optimization technique.
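As a rough illustration of the two-level idea, the following minimal sketch runs on a hypothetical one-dimensional toy task (not one of the paper's benchmarks, and not the authors' actual algorithm, which uses grammatical evolution; see the linked repository for the real implementation). An outer evolutionary loop searches over tree structures, here reduced to a single split threshold that decomposes the state space, while an inner Q-learning loop associates an action with each leaf.

```python
import random

random.seed(0)

# Toy 1-D environment (a hypothetical stand-in for an RL benchmark):
# the state is uniform in [0, 1); action 1 is rewarded for states >= 0.5,
# action 0 for states < 0.5.
def step(state, action):
    correct = 1 if state >= 0.5 else 0
    return 1.0 if action == correct else 0.0

def make_tree(threshold):
    # Depth-1 decision tree: one split partitions the state space into two
    # leaves; each leaf keeps a Q-value per action, learned by Q-learning.
    return {"threshold": threshold, "q": [[0.0, 0.0], [0.0, 0.0]]}

def leaf_index(tree, state):
    return 0 if state < tree["threshold"] else 1

def train_leaves(tree, episodes=200, alpha=0.5, eps=0.1):
    # Inner level: epsilon-greedy Q-learning associates an action with each
    # leaf, i.e. with each region of the state-space decomposition.
    for _ in range(episodes):
        s = random.random()
        q = tree["q"][leaf_index(tree, s)]
        a = random.randrange(2) if random.random() < eps else q.index(max(q))
        q[a] += alpha * (step(s, a) - q[a])  # one-step Q update

def evaluate(tree, trials=200):
    # Fitness: average reward of the greedy policy induced by the tree.
    total = 0.0
    for _ in range(trials):
        s = random.random()
        q = tree["q"][leaf_index(tree, s)]
        total += step(s, q.index(max(q)))
    return total / trials

# Outer level: a (very small) evolutionary loop searches over tree
# structures -- here just the split threshold -- using the post-training
# return as fitness.
population = [make_tree(random.random()) for _ in range(10)]
for generation in range(5):
    for tree in population:
        train_leaves(tree)
    population.sort(key=evaluate, reverse=True)
    # Next generation: Gaussian mutations of the three fittest individuals.
    population = [
        make_tree(min(max(population[i % 3]["threshold"]
                          + random.gauss(0.0, 0.1), 0.0), 1.0))
        for i in range(10)
    ]
for tree in population:
    train_leaves(tree)
best = max(population, key=evaluate)
print("best threshold: %.2f" % best["threshold"])
```

The evolved threshold should drift toward 0.5, the point where the decomposition lets each leaf hold a single correct action, which mirrors the paper's division of labor: evolution shapes the state-space partition, Q-learning fills in the actions.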

Submitted to arXiv on 14 Dec. 2020
