Towards autonomous forwarding using deep learning and simulation
Umeå University, Faculty of Science and Technology, Department of Physics (Digital Physics)
Umeå University, Faculty of Science and Technology, Department of Physics (Digital Physics). ORCID iD: 0000-0002-1842-7032
Umeå University, Faculty of Science and Technology, Department of Physics (Digital Physics). ORCID iD: 0000-0002-0787-4988
Umeå University, Faculty of Science and Technology, Department of Physics (Digital Physics)
2024 (English). Conference paper, oral presentation with published abstract (Other academic)
Abstract [en]

Fully autonomous forwarding remains a challenge; more imminent scenarios include operator assistance, remote-controlled machines, and semi-autonomous functions. We present several subsystems for autonomous forwarding, developed using machine learning and physics simulation:

- trafficability analysis and path planning,

- autonomous driving,

- identification of logs and high-quality grasp poses, and

- crane control from snapshot camera data.

Forwarding is an energy-demanding process, and repeated passages with heavy equipment can damage the soil. To avoid damage and use energy efficiently, good path planning, adapted speed, and efficient loading and unloading of logs are important. The collection and availability of large amounts of data is increasing in forestry, opening up opportunities for autonomous solutions and efficiency improvements. This is a difficult problem, though: forest terrain is rough, and weather, season, obstructions, and wear make sensor data hard to collect and interpret.

Our proposed subsystems assume access to pre-scanned, high-resolution elevation maps and to snapshots of log piles, captured between crane cycles by an onboard camera. By using snapshots instead of a continuous image stream in the loading task, we decouple image segmentation from crane control. This removes any coupling to specific vehicle models and greatly relaxes the limits on computational resources and time available for image segmentation. Log piles are normally static except at the moments of grasping, so given good enough grasp poses, the missing in-between information is not necessarily a problem.
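To make the decoupling concrete, the sketch below shows how such a snapshot-based loading pipeline could be organized. All names (segment_logs, detect_grasps, loading_cycle) are illustrative assumptions, not the authors' published interfaces.

```python
# Hypothetical sketch of a snapshot-based loading pipeline; function and
# object names are illustrative, not the paper's actual implementation.
import numpy as np


def segment_logs(rgb: np.ndarray, depth: np.ndarray) -> list:
    """Segment individual logs in one RGB-D snapshot.

    Runs between crane cycles rather than on a live stream, so it may use
    heavier models and more compute time than a real-time system could.
    """
    raise NotImplementedError  # stand-in for a learned segmentation model


def detect_grasps(log_segments: list) -> list:
    """Return scored candidate grasp poses; multi-log grasps score higher."""
    raise NotImplementedError  # stand-in for the grasp-detection model


def loading_cycle(camera, crane):
    # 1. Capture one snapshot while the crane is clear of the pile.
    rgb, depth = camera.snapshot()
    # 2. Perception, fully decoupled from crane control.
    grasps = detect_grasps(segment_logs(rgb, depth))
    # 3. Control consumes only grasp poses, so it is independent of the
    #    camera and of any specific vehicle model.
    best = max(grasps, key=lambda g: g["score"])
    crane.execute_grasp(best["pose"])
```

One consequence of this design, as the abstract notes, is that the segmentation stage can be swapped or upgraded without retraining or retuning the crane controller.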

We show how snapshot image data can be used when deploying a reinforcement-learning agent that controls the crane to grasp logs in challenging piles. Given RGB-D images of a pile, our grasp-detection model identifies high-quality grasp poses, allowing multiple logs to be loaded in each crane cycle. Further, we show that the model learns to avoid obstructions in the environment, such as tree stumps and boulders. We discuss the possibility of using the model to optimize the loading task over a sequence of grasps.
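As a rough illustration of the control side, the sketch below steps a trained policy toward one detected grasp pose. The env and policy interfaces follow a generic Gym-style convention and are assumptions for illustration, not the paper's implementation.

```python
# Generic Gym-style rollout of a trained crane-control policy toward one
# detected grasp pose; the env and policy interfaces are assumed.
def drive_crane_to_grasp(env, policy, grasp_pose, max_steps=500):
    # The grasp pose from the snapshot pipeline enters as part of the
    # observation, e.g. expressed relative to the crane base.
    obs = env.reset(target=grasp_pose)
    for _ in range(max_steps):
        action = policy(obs)                   # actuator commands
        obs, reward, done, info = env.step(action)
        if done:                               # grasp finished or timed out
            return info.get("logs_grasped", 0)
    return 0
```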

Finally, we discuss how these solutions can be combined in a multi-agent forwarding system, with or without a human operator in the loop.

Place, publisher, year, edition, pages
2024. article id T5.30
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:umu:diva-227464
OAI: oai:DiVA.org:umu-227464
DiVA, id: diva2:1879279
Conference
IUFRO 2024 - XXVI IUFRO World Congress, Stockholm, Sweden, June 23-29, 2024
Funder
Mistra - The Swedish Foundation for Strategic Environmental Research
Available from: 2024-06-27 Created: 2024-06-27 Last updated: 2025-02-07 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Conference programme

Authority records

Fälldin, Arvid; Lundbäck, Mikael; Servin, Martin; Wallin, Erik

