Continuous Control of an Underground Loader Using Deep Reinforcement Learning
2021 (English). In: Machines, E-ISSN 2075-1702, Vol. 9, no. 10, article id 216. Article in journal (Refereed). Published.
Abstract [en]
The reinforcement learning control of an underground loader was investigated in a simulated environment using a multi-agent deep neural network approach. At the start of each loading cycle, one agent selects the dig position from a depth camera image of a pile of fragmented rock. A second agent is responsible for continuous control of the vehicle, with the goal of filling the bucket at the selected loading point while avoiding collisions, getting stuck, or losing ground traction. For this, the agent relies on motion and force sensors as well as on a camera and lidar. Using a soft actor-critic algorithm, the agents learn policies for efficient bucket filling over many subsequent loading cycles, with a clear ability to adapt to the changing environment. The best results, on average 75% of the maximum bucket capacity, were obtained when a penalty for energy usage was included in the reward.
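The abstract notes that the best bucket-filling results were obtained when the reward included a penalty for energy usage. A minimal sketch of such reward shaping is given below; the function name, signal names, and the penalty weight are illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch of a bucket-filling reward with an energy-usage
# penalty, in the spirit of the paper's best-performing configuration.
# The weight and signal names are illustrative assumptions.

def loading_reward(bucket_fill: float, max_capacity: float,
                   energy_used: float, energy_weight: float = 0.01) -> float:
    """Reward bucket filling while penalizing consumed energy.

    bucket_fill and max_capacity share the same unit (e.g. kg);
    energy_used is the energy consumed over the loading cycle.
    """
    fill_term = bucket_fill / max_capacity      # fraction filled, in [0, 1]
    energy_term = energy_weight * energy_used   # non-negative penalty
    return fill_term - energy_term
```

The weight `energy_weight` trades off fill fraction against effort: too small and the agent ignores energy, too large and it learns to stand still rather than dig.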
Place, publisher, year, edition, pages
MDPI, 2021. Vol. 9, no. 10, article id 216
Keywords [en]
autonomous excavation, bucket filling, deep reinforcement learning, mining robotics, simulation, wheel loader
National Category
Robotics; Computer Sciences; Applied Mechanics
Research subject
Physics; Computer Science
Identifiers
URN: urn:nbn:se:umu:diva-187947
DOI: 10.3390/machines9100216
ISI: 000717124100001
Scopus ID: 2-s2.0-85116361998
OAI: oai:DiVA.org:umu-187947
DiVA, id: diva2:1597655
Funder
Vinnova, 2019-04832
Available from: 2021-09-27. Created: 2021-09-27. Last updated: 2023-09-05. Bibliographically approved.