Fälldin, Arvid
Publications (7 of 7)
Lundbäck, M., Fälldin, A., Wallin, E. & Servin, M. (2024). Learning forwarder trafficability from real and synthetic data. In: IUFRO 2024: Detailed programme. Paper presented at IUFRO 2024 - XXVI IUFRO World Congress, Stockholm, Sweden, June 23-29, 2024. , Article ID T5.30.
Learning forwarder trafficability from real and synthetic data
2024 (English) In: IUFRO 2024: Detailed programme, 2024, article id T5.30. Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

Forwarder trafficability is a function of terrain and vehicle properties. Predicting trafficability is vital for energy-efficient planning and operator-assisting systems, as well as for remote and autonomous driving. Inaccurate or insufficient information can lead to inefficient paths, excessive fuel usage, equipment wear, and soil damage. Training trafficability models requires data in a quantity that is hard to collect solely from in-field experiments, especially considering the need for data from situations ranging from very easy to non-traversable.

To circumvent this problem, we perform in-field system identification for a forwarder in the Nordic cut-to-length system, to obtain a calibrated multi-body dynamics simulation model traversing firm but potentially rough and blocky terrain. By letting the real-world forwarder drive in very difficult terrain, the model is able to reflect a wide range of real conditions. The model is used in simulations, where collecting large amounts of data from a variety of situations is easy, cheap, and hazard-free. Using this data, a deep neural network is trained to predict trafficability in terms of attainable driving speed, energy consumption, and machine wear.

The resulting predictor model uses laser-scanned terrains to efficiently produce trafficability measures with high fidelity and accuracy, e.g., depending on the vehicle’s precise location, speed, heading, and weight. Trafficability on wet and weak soil is not addressed in this work. The predictor model is machine specific, but general enough for practical application in diverse terrain conditions. Our emphasis on energy consumption enables elaborate calculations of emissions, profoundly contributing to sustainable forest operations. Apart from the benefits of reduced emissions, the model can also be used to optimize extraction trail routing, which is a major contributor to the total extraction cost. Rough terrain trafficability is only part of an optimal route, but it has been neglected in previous research. We see big potential in combining our predictor model with existing route optimization methods to achieve a more complete result. By creating an open library of annotated machine data and code for preparing input terrain data and running the trafficability model, we enable adoption of the results by others and application in existing and new software.
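The interface the abstract describes maps a laser-scanned terrain patch plus vehicle state to trafficability measures. A minimal sketch of that interface, with a toy randomly-weighted network standing in for the trained model (all names, sizes, and weights here are hypothetical, not the project's actual network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the trained deep network: a tiny one-hidden-layer MLP
# with random weights, mapping a 16x16 heightmap patch plus vehicle state
# to three trafficability measures (attainable speed, energy, wear).
W1 = rng.normal(size=(64, 16 * 16 + 4)) * 0.1
W2 = rng.normal(size=(3, 64)) * 0.1

def predict_trafficability(heightmap_patch, speed, heading, weight):
    """Flatten the terrain patch, append vehicle state (heading encoded as
    sin/cos), and run one forward pass. Outputs are unitless toy values."""
    x = np.concatenate([heightmap_patch.ravel(),
                        [speed, np.sin(heading), np.cos(heading), weight]])
    h = np.tanh(W1 @ x)
    return W2 @ h  # [attainable_speed, energy_consumption, machine_wear]

patch = rng.normal(scale=0.2, size=(16, 16))   # elevation patch (metres)
out = predict_trafficability(patch, speed=1.0, heading=0.3, weight=30.0)
```

A real predictor would replace the random weights with the network trained on the calibrated simulation data.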

National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:umu:diva-227460 (URN)
Conference
IUFRO 2024 - XXVI IUFRO World Congress, Stockholm, Sweden, June 23-29, 2024
Funder
Mistra - The Swedish Foundation for Strategic Environmental Research
Available from: 2024-06-27 Created: 2024-06-27 Last updated: 2025-02-07 Bibliographically approved
Fälldin, A., Häggström, C., Höök, C., Jönsson, P., Lindroos, O., Lundbäck, M., . . . Servin, M. (2024). Open data, models, and software for machine automation. In: IUFRO 2024: Detailed programme. Paper presented at IUFRO 2024 - XXVI IUFRO World Congress, Stockholm, Sweden, June 23-29, 2024. , Article ID T5.10.
Open data, models, and software for machine automation
2024 (English) In: IUFRO 2024: Detailed programme, 2024, article id T5.10. Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

We create partially annotated datasets from field measurements for developing models and algorithms for perception and control of forest machines using artificial intelligence, simulation, and experiments on physical testbeds.  The datasets, algorithms, and trained models for object identification, 3D perception, and motion planning and control will be made publicly available through data and code-sharing repositories.

The data are recorded using forest machines and other equipment with suitable sensors operating in the forest environment. The data include high-resolution machine and crane-tip positions and event time logs (StanForD) while the vehicle operates in high-resolution laser-scanned forest areas. For annotation, the plan is to use both CAN-bus data and audiovisual data from operators who are willing to participate in the research. Also, by fusing visual perception with operator input or decisions on tree characteristics, we aim to develop a method for auto-annotation, facilitating a rapid increase in labeled training data for computer vision. In other activities, images of tree plants and bark are collected.

Research questions include: how do we automate the process of creating annotated datasets and training models for identifying and positioning forestry objects, such as plants, tree species, logs, and terrain obstacles, and for 3D reconstruction for motion planning and control? How large and varied must datasets be for the models to handle the variability in forests, weather, light conditions, etc.? Would additional synthetic data increase model inference accuracy?

In part we focus on forwarders traversing terrain, avoiding obstacles, and loading or unloading logs, with consideration for efficiency, safety, and environmental impact. We explore how to auto-generate and calibrate forestry machine simulators and automation scenario descriptions using the data recorded in the field. The demonstrated automation solutions serve as proofs-of-concept and references, important for developing commercial prototypes and for understanding what future research should focus on.
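The auto-annotation idea above, pairing timestamped operator decisions with camera frames, can be sketched as nearest-timestamp matching; the data layout and the `max_gap` threshold are assumptions for illustration, not the project's actual format:

```python
from bisect import bisect_left

# Sketch of auto-annotation: operator decisions logged with timestamps
# (e.g., a species choice at felling, as in StanForD event logs) are
# matched to the nearest camera frame, yielding labeled images "for free".
def auto_annotate(frames, events, max_gap=0.5):
    """frames: time-sorted list of (timestamp, frame_id);
    events: list of (timestamp, label). Returns (frame_id, label) pairs
    for events that have a frame within max_gap seconds."""
    times = [t for t, _ in frames]
    labeled = []
    for t_ev, label in events:
        i = bisect_left(times, t_ev)
        # pick the closer neighbour among frames[i-1] and frames[i]
        best = min((j for j in (i - 1, i) if 0 <= j < len(frames)),
                   key=lambda j: abs(times[j] - t_ev))
        if abs(times[best] - t_ev) <= max_gap:
            labeled.append((frames[best][1], label))
    return labeled

frames = [(0.0, "img0"), (1.0, "img1"), (2.0, "img2")]
events = [(1.1, "spruce"), (5.0, "pine")]  # second event has no nearby frame
```

In practice the matching would also have to handle clock skew between the CAN bus and the camera, which this sketch ignores.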

National Category
Computer and Information Sciences; Robotics and automation
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-227458 (URN)
Conference
IUFRO 2024 - XXVI IUFRO World Congress, Stockholm, Sweden, June 23-29, 2024
Funder
Mistra - The Swedish Foundation for Strategic Environmental Research
Available from: 2024-06-27 Created: 2024-06-27 Last updated: 2025-02-05 Bibliographically approved
Wiberg, V., Wallin, E., Fälldin, A., Semberg, T., Rossander, M., Wadbro, E. & Servin, M. (2024). Sim-to-real transfer of active suspension control using deep reinforcement learning. Robotics and Autonomous Systems, 179, Article ID 104731.
Sim-to-real transfer of active suspension control using deep reinforcement learning
2024 (English) In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 179, article id 104731. Article in journal (Refereed) Published
Abstract [en]

We explore sim-to-real transfer of deep reinforcement learning controllers for a heavy vehicle with active suspensions designed for traversing rough terrain. While related research primarily focuses on lightweight robots with electric motors and fast actuation, this study uses a forestry vehicle with a complex hydraulic driveline and slow actuation. We simulate the vehicle using multibody dynamics and apply system identification to find an appropriate set of simulation parameters. We then train policies in simulation using various techniques to mitigate the sim-to-real gap, including domain randomization, action delays, and a reward penalty to encourage smooth control. In reality, the policies trained with action delays and a penalty for erratic actions perform nearly at the same level as in simulation. In experiments on level ground, the motion trajectories closely overlap when turning to either side, as well as in a route tracking scenario. When faced with a ramp that requires active use of the suspensions, the simulated and real motions are in close alignment. This shows that the actuator model together with system identification yields a sufficiently accurate model of the actuators. We observe that policies trained without the additional action penalty exhibit fast switching or bang–bang control. These present smooth motions and high performance in simulation but transfer poorly to reality. We find that policies make marginal use of the local height map for perception, showing no indications of predictive planning. However, the strong transfer capabilities entail that further development concerning perception and performance can be largely confined to simulation.
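Two of the sim-to-real techniques the abstract names, action delays and a reward penalty on erratic actions, can be sketched in a gym-style environment wrapper; the delay length, penalty coefficient, and placeholder task reward below are hypothetical, not the paper's values:

```python
from collections import deque
import numpy as np

# Sketch of two sim-to-real mitigations: (1) a buffer that applies each
# commanded action only after a fixed delay, mimicking slow hydraulic
# actuation, and (2) a penalty on action changes to discourage the
# bang-bang control that transfers poorly to the real vehicle.
class DelayedSmoothEnv:
    def __init__(self, delay_steps=2, smooth_coef=0.1, action_dim=2):
        self.buffer = deque([np.zeros(action_dim)] * delay_steps,
                            maxlen=delay_steps + 1)
        self.smooth_coef = smooth_coef
        self.prev_action = np.zeros(action_dim)

    def step(self, action):
        action = np.asarray(action, dtype=float)
        self.buffer.append(action)
        applied = self.buffer[0]                 # actuator sees a stale action
        task_reward = -float(np.sum(applied ** 2))   # placeholder objective
        penalty = self.smooth_coef * float(
            np.sum((action - self.prev_action) ** 2))
        self.prev_action = action
        return applied, task_reward - penalty
```

Training against the delayed, penalized environment is what lets the learned policy tolerate the real driveline's latency.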

Place, publisher, year, edition, pages
Elsevier, 2024
National Category
Electrical Engineering, Electronic Engineering, Information Engineering; Other Physics Topics
Research subject
Physics; computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-226893 (URN) 10.1016/j.robot.2024.104731 (DOI) 2-s2.0-85196769514 (Scopus ID)
Projects
Mistra Digital Forest
Funder
Mistra - The Swedish Foundation for Strategic Environmental Research, Grant DIA 2017/14 #6; Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-06-23 Created: 2024-06-23 Last updated: 2024-07-03 Bibliographically approved
Fälldin, A., Lundbäck, M., Servin, M. & Wallin, E. (2024). Towards autonomous forwarding using deep learning and simulation. In: : . Paper presented at IUFRO 2024 - XXVI IUFRO World Congress, Stockholm, Sweden, June 23-29, 2024. , Article ID T5.30.
Towards autonomous forwarding using deep learning and simulation
2024 (English) Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

Fully autonomous forwarding is a challenge, with more imminent scenarios including operator assistance, remote-controlled machines, and semi-autonomous functions. We present several subsystems for autonomous forwarding, developed using machine learning and physics simulation:

- trafficability analysis and path planning,

- autonomous driving,

- identification of logs and high quality grasp poses, and

- crane control from snapshot camera data.

Forwarding is an energy-demanding process, and repeated passages with heavy equipment can damage the soil. To avoid damage and ensure efficient use of energy, good path planning, adapted speed, and efficient loading and unloading of logs are important. The collection and availability of large amounts of data is increasing in the field of forestry, opening up opportunities for autonomous solutions and efficiency improvements. This is a difficult problem, though, as the forest terrain is rough, and as weather, season, obstructions, and wear present challenges in collecting and interpreting sensor data.

Our proposed subsystems assume access to pre-scanned, high-resolution elevation maps and snapshots of log piles, captured in between crane cycles by an onboard camera. By utilizing snapshots instead of a continuous image stream in the loading task, we separate image segmentation from crane control. This removes any coupling to specific vehicle models and greatly relaxes the limits on computational resources and time for the challenge of image segmentation. Log piles are normally static except at the grasp moments, and given good-enough grasp poses, this lack of information is not necessarily a problem.

We show how snapshot image data can be used when deploying a Reinforcement Learning agent to control the crane to grasp logs in challenging piles. Given pile RGB-D images, our grasp detection model identifies high quality grasp poses, allowing for multiple logs to be loaded in each crane cycle. Further, we show that our model is able to learn to avoid obstructions in the environment such as tree stumps or boulders. We discuss the possibility of using our model to optimize the loading task over a sequence of grasps.

Finally, we discuss how the solutions can be combined in a multi-agent forwarding system with or without a human operator in-the-loop.
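One way to read the grasp-selection step is as scoring candidate poses and rejecting those too close to obstructions such as stumps or boulders; a minimal sketch under that assumption (the scores, 2D poses, and clearance threshold are illustrative, not the paper's learned model):

```python
import numpy as np

# Sketch of snapshot-based grasp selection: grasp pose candidates come
# scored (e.g., by a grasp detection model run on a pile RGB-D snapshot);
# we take the best-scoring pose whose grapple footprint keeps a minimum
# clearance from known obstacles.
def select_grasp(candidates, obstacles, min_clearance=0.4):
    """candidates: list of (score, (x, y)); obstacles: list of (x, y).
    Returns the chosen (score, (x, y)) or None if every pose is blocked."""
    obstacles = np.asarray(obstacles, dtype=float).reshape(-1, 2)
    for score, pos in sorted(candidates, reverse=True):
        pos = np.asarray(pos, dtype=float)
        if len(obstacles) == 0 or np.min(
                np.linalg.norm(obstacles - pos, axis=1)) >= min_clearance:
            return (score, tuple(pos))
    return None
```

A full system would score 6-DOF grapple poses against the elevation map rather than 2D points, but the select-best-clear-candidate structure is the same.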

National Category
Computer graphics and computer vision
Identifiers
urn:nbn:se:umu:diva-227464 (URN)
Conference
IUFRO 2024 - XXVI IUFRO World Congress, Stockholm, Sweden, June 23-29, 2024
Funder
Mistra - The Swedish Foundation for Strategic Environmental Research
Available from: 2024-06-27 Created: 2024-06-27 Last updated: 2025-02-07 Bibliographically approved
Aoshima, K., Fälldin, A., Wadbro, E. & Servin, M. (2024). World modeling for autonomous wheel loaders. Automation, 5(3), 259-281
World modeling for autonomous wheel loaders
2024 (English) In: Automation, ISSN 2673-4052, Vol. 5, no. 3, p. 259-281. Article in journal (Refereed) Published
Abstract [en]

This paper presents a method for learning world models for wheel loaders performing automatic loading actions on a pile of soil. Data-driven models were learned to output the resulting pile state, loaded mass, time, and work for a single loading cycle given inputs that include a heightmap of the initial pile shape and action parameters for an automatic bucket-filling controller. Long-horizon planning of sequential loading in a dynamically changing environment is thus enabled as repeated model inference. The models, consisting of deep neural networks, were trained on data from a 3D multibody dynamics simulation of over 10,000 random loading actions in gravel piles of different shapes. The accuracy and inference time for predicting the loading performance and the resulting pile state were, on average, 95% in 1.2 ms and 97% in 4.5 ms, respectively. Long-horizon predictions were found feasible over 40 sequential loading actions.
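The core idea, long-horizon planning as repeated model inference, can be sketched with a toy surrogate in place of the trained network; the heightmap size, the action parameter, and the material-removal rule are all hypothetical:

```python
import numpy as np

# Sketch of the world-model loop: a model maps (pile_state, action) to
# (next_pile_state, performance), so planning over many loading cycles is
# just chained inference. This toy surrogate "digs" one heightmap column
# per cycle; the real model is a trained deep network.
def world_model(pile_heightmap, action):
    dig_col = int(action["dig_column"])
    nxt = pile_heightmap.copy()
    removed = nxt[:, dig_col] * 0.5      # bucket takes half the column
    nxt[:, dig_col] -= removed
    return nxt, {"loaded_mass": float(removed.sum())}

pile = np.full((8, 8), 1.0)              # flat 8x8 pile, 1 unit high
total_mass = 0.0
for cycle in range(4):                   # repeated inference = planning
    pile, perf = world_model(pile, {"dig_column": cycle % 8})
    total_mass += perf["loaded_mass"]
```

Feeding each predicted pile state back in as the next initial state is exactly what enables the 40-cycle rollouts reported in the abstract, modulo accumulated prediction error.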

Place, publisher, year, edition, pages
MDPI, 2024
Keywords
wheel loader, earthmoving, automation, bucket-filling, world modeling, deep learning, multibody simulation
National Category
Robotics and automation; Computer graphics and computer vision; Other Physics Topics
Research subject
Physics; Automatic Control
Identifiers
urn:nbn:se:umu:diva-227746 (URN) 10.3390/automation5030016 (DOI) 001323274900001 () 2-s2.0-85205125062 (Scopus ID)
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-07-07 Created: 2024-07-07 Last updated: 2025-02-05 Bibliographically approved
Aoshima, K., Fälldin, A., Wadbro, E. & Servin, M. Data-driven models for predicting the outcome of autonomous wheel loader operations.
Data-driven models for predicting the outcome of autonomous wheel loader operations
(English) Manuscript (preprint) (Other academic)
Abstract [en]

This paper presents a method using data-driven models for selecting actions and predicting the total performance of autonomous wheel loader operations over many loading cycles in a changing environment. The performance includes loaded mass, loading time, and work. The data-driven models take the control parameters of a loading action and the heightmap of the initial pile state as input, and output either the performance or the resulting pile state. By iteratively using the resulting pile state as the initial pile state for consecutive predictions, the method enables long-horizon forecasting. Deep neural networks were trained on data from over 10,000 random loading actions in gravel piles of different shapes using 3D multibody dynamics simulation. The models predict the performance and the resulting pile state with, on average, 95% accuracy in 1.2 ms and 97% in 4.5 ms, respectively. The performance prediction can be made even faster, in exchange for some accuracy, by reducing the model size using a lower-dimensional representation of the pile state based on its slope and curvature. The feasibility of long-horizon predictions was confirmed with 40 sequential loading actions at a large pile. With the aid of a physics-based model, the pile state predictions are kept sufficiently accurate for longer-horizon use.
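The reduced pile representation mentioned above can be sketched as finite-difference slope and curvature statistics of a pile profile; the specific features (means and standard deviations of the first and second differences) are an illustrative choice, not necessarily the paper's:

```python
import numpy as np

# Sketch of a lower-dimensional pile representation: instead of feeding
# the full heightmap to the performance model, summarize a 1D pile
# profile by slope (first difference) and curvature (second difference)
# statistics, shrinking the model input from N values to 4.
def slope_curvature_features(profile):
    slope = np.gradient(profile)         # first finite difference
    curvature = np.gradient(slope)       # second finite difference
    return np.array([slope.mean(), slope.std(),
                     curvature.mean(), curvature.std()])

profile = np.array([0.0, 0.5, 1.2, 2.0, 2.1, 1.8])  # toy pile cross-section
features = slope_curvature_features(profile)
```

The trade-off is exactly the one the abstract reports: a smaller input yields a smaller, faster model at some cost in prediction accuracy.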

National Category
Other Physics Topics; Computer graphics and computer vision; Transport Systems and Logistics; Robotics and automation
Research subject
Physics
Identifiers
urn:nbn:se:umu:diva-220410 (URN) 10.48550/arXiv.2309.12016 (DOI)
Available from: 2024-02-02 Created: 2024-02-02 Last updated: 2025-02-05
Wiberg, V., Wallin, E., Fälldin, A., Semberg, T., Rossander, M., Wadbro, E. & Servin, M. Sim-to-real transfer of active suspension control using deep reinforcement learning.
Sim-to-real transfer of active suspension control using deep reinforcement learning
(English) Manuscript (preprint) (Other academic)
Abstract [en]

We explore sim-to-real transfer of deep reinforcement learning controllers for a heavy vehicle with active suspensions designed for traversing rough terrain. While related research primarily focuses on lightweight robots with electric motors and fast actuation, this study uses a forestry vehicle with a complex hydraulic driveline and slow actuation. We simulate the vehicle using multibody dynamics and apply system identification to find an appropriate set of simulation parameters. We then train policies in simulation using various techniques to mitigate the sim-to-real gap, including domain randomization, action delays, and a reward penalty to encourage smooth control. In reality, the policies trained with action delays and a penalty for erratic actions perform at nearly the same level as in simulation. In experiments on level ground, the motion trajectories closely overlap when turning to either side, as well as in a route tracking scenario. When faced with a ramp that requires active use of the suspensions, the simulated and real motions are in close alignment. This shows that the actuator model together with system identification yields a sufficiently accurate model of the actuators. We observe that policies trained without the additional action penalty exhibit fast switching or bang-bang control. These present smooth motions and high performance in simulation but transfer poorly to reality. We find that policies make marginal use of the local height map for perception, showing no indications of look-ahead planning. However, the strong transfer capabilities entail that further development concerning perception and performance can be largely confined to simulation. 

Keywords
autonomous vehicles, rough terrain navigation, machine learning, sim-to-real, reinforcement learning, heavy vehicles
National Category
Computer graphics and computer vision; Other Physics Topics
Research subject
Physics; computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-207977 (URN) 10.48550/arXiv.2306.11171 (DOI)
Funder
Mistra - The Swedish Foundation for Strategic Environmental Research, DIA 2017/14 #6
Available from: 2023-05-05 Created: 2023-05-05 Last updated: 2025-02-01 Bibliographically approved