2023 (English). In: Concurrency and Computation: Practice and Experience, ISSN 1532-0626, E-ISSN 1532-0634, Vol. 35, no. 19, article id e7351. Article in journal (Refereed). Published.
Abstract [en]
Reinforcement learning (RL) is an effective approach to developing control policies that maximize an agent's reward. Deep reinforcement learning uses deep neural networks (DNNs) for function approximation in RL and has achieved tremendous success in recent years. However, large DNNs incur significant memory and computation overheads, which can impede their deployment on resource-constrained embedded systems. Deploying a trained RL agent on such a system therefore requires compressing its policy network to improve memory and computation efficiency. In this article, we compress the policy network of an RL agent by using the relevance scores computed by layer-wise relevance propagation (LRP), a technique from explainable AI (XAI), to rank and prune the convolutional filters in the policy network, combined with fine-tuning via policy distillation. Evaluation on several Atari games indicates that the proposed approach effectively reduces the model size and inference time of RL agents. We also compare robust RL agents trained with RADIAL-RL against standard RL agents, and show that a robust RL agent achieves better performance (higher average reward) after pruning than a standard RL agent across different attack strengths and pruning rates.
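
The abstract describes the pipeline only at a high level. The PyTorch sketch below is not the authors' implementation; it is a minimal illustration of the three ingredients the abstract names: aggregating LRP relevance into per-filter scores, masking the lowest-ranked filters, and fine-tuning the pruned student against the original teacher with a policy-distillation loss. The toy network, the stand-in relevance map (a real pipeline would run an actual LRP backward pass, e.g., with an XAI library), and all function names are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Toy convolutional policy network standing in for an Atari RL agent."""
    def __init__(self, n_actions: int = 6):
        super().__init__()
        self.conv1 = nn.Conv2d(4, 32, kernel_size=8, stride=4)   # 4 stacked frames in
        self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2)
        self.head = nn.Linear(64 * 9 * 9, n_actions)             # for 84x84 inputs

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.head(x.flatten(1))

def relevance_per_filter(relevance_map: torch.Tensor) -> torch.Tensor:
    """Aggregate a layer's relevance map (N, C, H, W) into one score per
    filter by summing over the batch and spatial dimensions."""
    return relevance_map.sum(dim=(0, 2, 3))

def prune_filters(conv: nn.Conv2d, scores: torch.Tensor, prune_rate: float = 0.3):
    """Zero out the filters with the lowest relevance scores. Masking stands
    in for structural removal, which would also shrink the next layer."""
    k = int(prune_rate * scores.numel())
    idx = torch.argsort(scores)[:k]          # indices of least relevant filters
    with torch.no_grad():
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0
    return idx

def distillation_loss(student_logits, teacher_logits, tau: float = 1.0):
    """Policy-distillation objective: KL divergence from the temperature-
    softened teacher action distribution to the student's."""
    teacher_probs = F.softmax(teacher_logits / tau, dim=-1)
    student_log_probs = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

# Example: prune the student's second conv layer, then fine-tune against
# the unpruned teacher. The random "observations" and the clamped
# activations used as a stand-in relevance map are purely illustrative.
teacher, student = PolicyNet(), PolicyNet()
student.load_state_dict(teacher.state_dict())
obs = torch.rand(8, 4, 84, 84)
with torch.no_grad():
    conv2_out = teacher.conv2(F.relu(teacher.conv1(obs)))
relevance = conv2_out.clamp(min=0)           # a real LRP pass goes here
pruned = prune_filters(student.conv2, relevance_per_filter(relevance))
loss = distillation_loss(student(obs), teacher(obs).detach())

In a full implementation, the masked filters would be removed structurally to realize the memory and inference-time savings the article reports, and the distillation loss would be minimized over states collected from the environment rather than random tensors.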
Place, publisher, year, edition, pages
John Wiley & Sons, 2023
Keywords
embedded systems, knowledge distillation, policy distillation, reinforcement learning
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-200565 (URN)
10.1002/cpe.7351 (DOI)
000868806400001 (ISI)
2-s2.0-85139981238 (Scopus ID)
Note
Special Issue.
First published online October 2022.
Available from: 2022-12-01 Created: 2022-12-01 Last updated: 2023-11-09 Bibliographically approved