Ringdahl, Ola, Docent (ORCID iD: orcid.org/0000-0002-4600-8652)
Publications (10 of 42)
Kurtser, P., Lowry, S. & Ringdahl, O. (2024). Advances in machine learning for agricultural robots. In: Eldert van Henten; Yael Edan (Ed.), Advances in agri-food robotics (pp. 103-134). Cambridge: Burleigh Dodds Science Publishing
Advances in machine learning for agricultural robots
2024 (English). In: Advances in agri-food robotics / [ed] Eldert van Henten; Yael Edan, Cambridge: Burleigh Dodds Science Publishing, 2024, p. 103-134. Chapter in book (Refereed)
Abstract [en]

This chapter presents a survey of advances in using machine learning (ML) algorithms for agricultural robotics. The development of ML algorithms in the last decade has been astounding, leading to their rapid and widespread deployment in many domains, including agricultural robotics. However, major challenges remain for ML in agri-robotics, owing to the unavoidable complexity and variability of the operating environments and the difficulty of accessing the required quantities of relevant training data. This chapter gives an overview of the usage of ML for agri-robotics and discusses the use of ML for data analysis and decision-making in perception and navigation. It outlines the main trends of the last decade in employed algorithms and available data, and then discusses the challenges the field is facing and ways to overcome them.

Place, publisher, year, edition, pages
Cambridge: Burleigh Dodds Science Publishing, 2024
Series
Burleigh Dodds Series in Agricultural Science, ISSN 2059-6936, E-ISSN 2059-6944; 139
National Category
Computer Sciences; Computer graphics and computer vision
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-223680 (URN); 10.19103/AS.2023.0124.04 (DOI); 9781801462778 (ISBN); 9781801462792 (ISBN); 9781801462785 (ISBN)
Available from: 2024-04-23. Created: 2024-04-23. Last updated: 2025-02-01. Bibliographically approved
Kurtser, P. & Ringdahl, O. (2024). Calibration-free multi-camera vision for hand gesture recognition in human-robot interaction. Paper presented at ICRA@40, 40th Anniversary of the IEEE International Conference on Robotics and Automation, Rotterdam, Netherlands, September 23-26, 2024.
Calibration-free multi-camera vision for hand gesture recognition in human-robot interaction
2024 (English). Conference paper, Poster (with or without abstract) (Refereed)
Abstract [en]

Our results align with previous studies showing that hand-gesture recognition (HGR) performance depends significantly on the viewpoint. As a consequence, methods and results often do not generalize well to human-robot interaction (HRI) scenarios in which viewpoints vary significantly. This work proposes two methods for fusing complementary multi-view information for HGR. We evaluate the methods on the multi-view hand pose dataset HanCo and compare them to two standard methods relying on either a single viewpoint or fully calibrated stereo vision. We show that in HRI settings multiple complementary viewpoints are necessary, and that information fusion should be performed at the extracted-feature stage, as in our proposed network architecture. Additionally, we show that in some scenarios camera calibration can be avoided, leading to simplified acquisition protocols.
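As a rough illustration of feature-stage fusion (not the authors' architecture; layer sizes, input resolution and names below are invented for the sketch), a PyTorch-style model might encode each view separately and concatenate the per-view embeddings before classification:

```python
# Minimal sketch of feature-level multi-view fusion (illustrative only; not the
# authors' exact network). Each view is encoded separately and the extracted feature
# vectors are fused before classification, rather than fusing raw pixels or fusing
# final per-view predictions.
import torch
import torch.nn as nn

class MultiViewHGR(nn.Module):
    def __init__(self, num_gestures: int = 10, feat_dim: int = 128):
        super().__init__()
        # Shared per-view encoder (weights reused for every viewpoint).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Fusion happens at the feature stage: concatenate per-view embeddings.
        self.classifier = nn.Linear(2 * feat_dim, num_gestures)

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.encoder(view_a), self.encoder(view_b)], dim=1)
        return self.classifier(fused)

# Example: two uncalibrated RGB views of the same hand, batch of 4.
model = MultiViewHGR()
logits = model(torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128))
print(logits.shape)  # torch.Size([4, 10])
```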

National Category
Robotics and automation; Computer Sciences
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-230924 (URN)
Conference
ICRA@40, 40th Anniversary of the IEEE International Conference on Robotics and Automation, Rotterdam, Netherlands, September 23-26, 2024
Available from: 2024-10-17. Created: 2024-10-17. Last updated: 2025-02-05. Bibliographically approved
Arad, B., Balendonck, J., Barth, R., Ben-Shahar, O., Edan, Y., Hellström, T., . . . van Tuijl, B. (2020). Development of a sweet pepper harvesting robot. Journal of Field Robotics, 37(6), 1027-1039
Development of a sweet pepper harvesting robot
2020 (English). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 37, no 6, p. 1027-1039. Article in journal (Refereed). Published
Abstract [en]

This paper presents the development, testing and validation of SWEEPER, a robot for harvesting sweet pepper fruit in greenhouses. The robotic system includes a six degrees of freedom industrial arm equipped with a specially designed end effector, an RGB-D camera, a high-end computer with a graphics processing unit, programmable logic controllers, other electronic equipment, and a small container to store harvested fruit. All components are mounted on a cart that autonomously drives on pipe rails and concrete floor in the end-user environment. The overall operation of the harvesting robot is described along with details of the algorithms for fruit detection and localization, grasp pose estimation, and motion control. The main contributions of this paper are the integrated system design and its validation and extensive field testing in a commercial greenhouse for different varieties and growing conditions. A total of 262 fruits were involved in a 4-week testing period. The average cycle time to harvest a fruit was 24 s. Logistics took approximately 50% of this time (7.8 s for discharge of fruit and 4.7 s for platform movements). Laboratory experiments have shown that the cycle time can be reduced to 15 s by running the robot manipulator at a higher speed. The harvest success rates were 61% in best-fit crop conditions and 18% in current crop conditions. This reveals the importance of finding the best-fit crop conditions and crop varieties for successful robotic harvesting. The SWEEPER robot is the first sweet pepper harvesting robot to demonstrate this kind of performance in a commercial greenhouse.
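As a quick sanity check of the reported timing split (the 24 s, 7.8 s and 4.7 s figures come from the abstract; the grouping into a single "logistics" term is only illustrative):

```python
# Back-of-the-envelope breakdown of the reported harvest cycle time.
cycle_time = 24.0          # average seconds per harvested fruit (from the abstract)
discharge = 7.8            # fruit discharge (s)
platform = 4.7             # platform movements (s)
logistics = discharge + platform
print(f"logistics: {logistics:.1f} s ({logistics / cycle_time:.0%} of the cycle)")
# -> logistics: 12.5 s (52% of the cycle), i.e. roughly half, matching the
#    "approximately 50%" figure; speeding up the arm mainly shortens the other half.
```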

Place, publisher, year, edition, pages
John Wiley & Sons, 2020
Keywords
agriculture, computer vision, field test, motion control, real-world conditions, robotics
National Category
Robotics and automation
Research subject
Computer Science; Mechanical Engineering
Identifiers
urn:nbn:se:umu:diva-167658 (URN); 10.1002/rob.21937 (DOI); 000509488400001 (ISI); 2-s2.0-85078783496 (Scopus ID)
Funder
EU, Horizon 2020, 644313
Available from: 2020-01-31. Created: 2020-01-31. Last updated: 2025-02-09. Bibliographically approved
Kurtser, P., Ringdahl, O., Rotstein, N., Berenstein, R. & Edan, Y. (2020). In-field grape cluster size assessment for vine yield estimation using a mobile robot and a consumer level RGB-D camera. IEEE Robotics and Automation Letters, 5(2), 2031-2038
In-field grape cluster size assessment for vine yield estimation using a mobile robot and a consumer level RGB-D camera
2020 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 5, no 2, p. 2031-2038. Article in journal (Refereed). Published
Abstract [en]

Current practice for vine yield estimation is based on RGB cameras and has limited performance. In this paper we present a method for outdoor vine yield estimation using a consumer-grade RGB-D camera mounted on a mobile robotic platform. An algorithm for automatic grape cluster size estimation using depth information is evaluated both in controlled outdoor conditions and in commercial vineyard conditions. Ten video scans (3 camera viewpoints with 2 different backgrounds and 2 natural light conditions), acquired from a controlled outdoor experiment and a commercial vineyard setup, are used for the analyses. The collected dataset (GRAPES3D) is released to the public. A total of 4542 regions of 49 grape clusters were manually labeled by a human annotator for comparison. Eight variations of the algorithm are assessed, both for manually labeled and auto-detected regions. The effects of viewpoint, presence of an artificial background, and the human annotator are analyzed using statistical tools. Results show a 2.8-3.5 cm average error for all acquired data and reveal the potential of using low-cost commercial RGB-D cameras for improved robotic yield estimation.
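As a hedged sketch of the basic idea behind depth-based size assessment (the paper's algorithm is more elaborate and evaluates several variants; the intrinsics, pixel extent and depth samples below are made-up example values), the pinhole model converts a pixel extent at a measured depth into a metric size:

```python
# Sketch: turn a pixel extent into a metric size estimate using depth and the
# pinhole camera model. Illustrative only; not the paper's exact algorithm.
import numpy as np

def metric_size_from_depth(pixel_extent: float, median_depth_m: float,
                           focal_length_px: float) -> float:
    """Convert an extent measured in pixels to metres at the cluster's depth."""
    return pixel_extent * median_depth_m / focal_length_px

# Example: a cluster spanning 180 px horizontally, median depth taken from the
# RGB-D samples inside the cluster region, focal length 610 px (assumed value
# typical for a consumer RGB-D sensor).
depths = np.array([0.88, 0.91, 0.90, 0.89])   # metres, sampled inside the region
print(metric_size_from_depth(180, float(np.median(depths)), 610.0))  # ~0.27 m
```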

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Field Robots, RGB-D Perception, Agricultural Automation, Robotics in Agriculture and Forestry
National Category
Computer graphics and computer vision
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-167778 (URN); 10.1109/LRA.2020.2970654 (DOI); 000526520700001 (ISI); 2-s2.0-85079829054 (Scopus ID)
Available from: 2020-02-03. Created: 2020-02-03. Last updated: 2025-02-07. Bibliographically approved
Kurtser, P., Ringdahl, O., Rotstein, N. & Andreasson, H. (2020). PointNet and geometric reasoning for detection of grape vines from single frame RGB-D data in outdoor conditions. In: Proceedings of the Northern Lights Deep Learning Workshop. Paper presented at the 3rd Northern Lights Deep Learning Workshop, Tromsø, Norway, January 19-21, 2020 (pp. 1-6). Septentrio Academic Publishing, 1
PointNet and geometric reasoning for detection of grape vines from single frame RGB-D data in outdoor conditions
2020 (English). In: Proceedings of the Northern Lights Deep Learning Workshop, Septentrio Academic Publishing, 2020, Vol. 1, p. 1-6. Conference paper, Published paper (Refereed)
Abstract [en]

In this paper we present the usage of PointNet, a deep neural network that consumes raw unordered point clouds, for detection of grape vine clusters in outdoor conditions. We investigate the added value of feeding the detection network with both RGB and depth, contrary to the common practice in agricultural robotics of relying on RGB only. A total of 5057 point clouds (1033 manually annotated and 4024 annotated using geometric reasoning) were collected in a field experiment conducted in outdoor conditions on 9 grape vines and 5 plants. The detection results show an overall accuracy of 91% (average class accuracy of 74%, precision 53%, recall 48%) for RGBXYZ data and a significant drop in recall for RGB or XYZ data only. These results suggest that the use of depth cameras for vision in agricultural robotics is crucial for crops where the color contrast between the crop and the background is complex. The results also suggest that geometric reasoning can be used to increase training set size, a major bottleneck in the development of agricultural vision systems.
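A minimal PointNet-style sketch illustrating how unordered XYZ+RGB points can be classified (illustrative only; the paper uses the original PointNet architecture, while this keeps just its core idea of a per-point MLP followed by a symmetric max pooling):

```python
# Tiny PointNet-style classifier for unordered points with XYZ+RGB features
# (6 channels). Sketch under assumptions; layer sizes and names are invented.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, in_channels: int = 6, num_classes: int = 2):
        super().__init__()
        self.point_mlp = nn.Sequential(           # applied to every point independently
            nn.Conv1d(in_channels, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        self.head = nn.Linear(128, num_classes)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, channels, num_points); point order does not matter
        feats = self.point_mlp(points)
        global_feat = feats.max(dim=2).values      # symmetric (order-invariant) pooling
        return self.head(global_feat)

# Example: batch of 8 clouds, 1024 points each, channels = [x, y, z, r, g, b].
model = TinyPointNet()
print(model(torch.randn(8, 6, 1024)).shape)        # torch.Size([8, 2])
```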

Place, publisher, year, edition, pages
Septentrio Academic Publishing, 2020
Keywords
RGBD, Deep-learning, Agricultural robotics, outdoor vision, grape
National Category
Computer graphics and computer vision; Other Agricultural Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-177113 (URN); 10.7557/18.5155 (DOI)
Conference
3rd Northern Lights Deep Learning Workshop, Tromsø, Norway, January 19-21, 2020.
Available from: 2020-11-27. Created: 2020-11-27. Last updated: 2025-02-01. Bibliographically approved
Ostovar, A., Talbot, B., Puliti, S., Astrup, R. & Ringdahl, O. (2019). Detection and classification of Root and Butt-Rot (RBR) in Stumps of Norway Spruce Using RGB Images and Machine Learning. Sensors, 19(7), Article ID 1579.
Detection and classification of Root and Butt-Rot (RBR) in Stumps of Norway Spruce Using RGB Images and Machine Learning
2019 (English). In: Sensors, E-ISSN 1424-8220, Vol. 19, no 7, article id 1579. Article in journal (Refereed). Published
Abstract [en]

Root and butt-rot (RBR) has a significant impact on both the material and economic outcome of timber harvesting, and therewith on the individual forest owner and collectively on the forest and wood processing industries. An accurate recording of the presence of RBR during timber harvesting would enable a mapping of the location and extent of the problem, providing a basis for evaluating spread in a climate anticipated to enhance pathogenic growth in the future. Therefore, a system to automatically identify and detect the presence of RBR would constitute an important contribution to addressing the problem without increasing workload complexity for the machine operator. In this study, we developed and evaluated an approach based on RGB images to automatically detect tree stumps and classify them as to the absence or presence of rot. Furthermore, since knowledge of the extent of RBR is valuable in categorizing logs, we also classify stumps into three classes of infestation: rot = 0%, 0% < rot < 50% and rot ≥ 50%. In this work we used deep-learning approaches and conventional machine-learning algorithms for the detection and classification tasks. The results showed that tree stumps were detected with a precision of 95% and recall of 80%. Using only the correct output (true positives) of the stump detector, stumps without and with RBR were correctly classified with accuracies of 83.5% and 77.5%, respectively. Classifying rot into three classes resulted in 79.4%, 72.4%, and 74.1% accuracy for stumps with rot = 0%, 0% < rot < 50% and rot ≥ 50%, respectively. With some modifications, the developed algorithm could be used either during the harvesting operation to detect RBR regions on the tree stumps or as an RBR detector for post-harvest assessment of tree stumps and logs.
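A small sketch of the three infestation classes and the detector's precision/recall arithmetic (the class thresholds follow the abstract; the detection counts below are invented purely to reproduce the reported 95%/80% figures):

```python
# Helpers illustrating the rot-class binning and the precision/recall definitions
# used to report detector performance. Illustrative only.
def rot_class(rot_fraction: float) -> str:
    """Map the visible rot fraction of a stump cross-section to a class label."""
    if rot_fraction == 0.0:
        return "no rot"          # rot = 0%
    if rot_fraction < 0.5:
        return "rot < 50%"       # 0% < rot < 50%
    return "rot >= 50%"          # rot >= 50%

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    return tp / (tp + fp), tp / (tp + fn)

print(rot_class(0.0), rot_class(0.2), rot_class(0.8))
# With e.g. 76 true positives, 4 false positives and 19 missed stumps, the detector
# would reach precision 0.95 and recall 0.80, matching the reported rates.
print(precision_recall(76, 4, 19))
```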

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
deep learning; forest harvesting; tree stumps; automatic detection and classification
National Category
Computer graphics and computer vision
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-157716 (URN); 10.3390/s19071579 (DOI); 000465570700098 (ISI); 30939827 (PubMedID); 2-s2.0-85064193099 (Scopus ID)
Projects
PRECISION
Funder
The Research Council of Norway, NFR281140
Available from: 2019-04-01. Created: 2019-04-01. Last updated: 2025-02-07. Bibliographically approved
Ringdahl, O., Kurtser, P. & Edan, Y. (2019). Evaluation of approach strategies for harvesting robots: case study of sweet pepper harvesting. Journal of Intelligent and Robotic Systems, 95(1), 149-164
Evaluation of approach strategies for harvesting robots: case study of sweet pepper harvesting
2019 (English). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 95, no 1, p. 149-164. Article in journal (Refereed). Published
Abstract [en]

Robotic harvesters that use visual servoing must choose the best direction from which to approach the fruit to minimize occlusion and avoid obstacles that might interfere with the detection along the approach. This work proposes different approach strategies, compares them in terms of cycle times, and presents a failure analysis methodology for the different approach strategies. The different approach strategies are: in-field assessment by human observers, evaluation based on an overview image using advanced algorithms or remote human observers, or attempting multiple approach directions until the fruit is successfully reached. In the latter strategy, each attempt costs time, which is a major bottleneck in bringing harvesting robots to market. Alternatively, a single approach strategy that only attempts one direction can be applied if the best approach direction is known a priori. The different approach strategies were evaluated for a case study of sweet pepper harvesting in laboratory and greenhouse conditions. The first experiment, conducted in a commercial greenhouse, revealed that the fruit approach cycle time increased by 8% and 116% for reachable and unreachable fruits, respectively, when the multiple approach strategy was applied, compared to the single approach strategy. The second experiment measured human observers' ability to provide insights into approach directions based on overview images taken in both greenhouse and laboratory conditions. Results revealed that human observers are accurate in detecting unapproachable directions while they tend to miss approachable directions. By detecting fruits that are unreachable (via automatic algorithms or human operators), harvesting cycle times can be significantly shortened, leading to improved commercial feasibility of harvesting robots.
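A toy expected-cycle-time model showing why repeated approach attempts are costly (the 8% and 116% increases come from the abstract; the base times and reachability fraction are assumptions made up for the sketch):

```python
# Toy model of expected cycle time under the single vs. multiple approach strategy.
single_reachable = 20.0                        # assumed single-approach cycle time (s)
single_unreachable = 20.0                      # assumed time spent before giving up (s)
multi_reachable = single_reachable * 1.08      # +8% for reachable fruit (abstract)
multi_unreachable = single_unreachable * 2.16  # +116% for unreachable fruit (abstract)

def expected_cycle(p_reachable: float, t_reach: float, t_unreach: float) -> float:
    """Expected per-fruit cycle time given the fraction of reachable fruit."""
    return p_reachable * t_reach + (1 - p_reachable) * t_unreach

print(expected_cycle(0.7, multi_reachable, multi_unreachable))    # multiple-approach
print(expected_cycle(0.7, single_reachable, single_unreachable))  # single-approach
# The gap grows with the fraction of unreachable fruit, which is why detecting
# unreachable fruit up front shortens harvesting cycle times.
```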

Place, publisher, year, edition, pages
Springer Netherlands, 2019
Keywords
Agricultural robotics, Robotic harvesting, Fruit approach, Human-robot collaboration
National Category
Computer graphics and computer vision
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-150404 (URN); 10.1007/s10846-018-0892-7 (DOI); 000475763400010 (ISI); 2-s2.0-85051483992 (Scopus ID)
Funder
EU, Horizon 2020, 644313
Available from: 2018-08-06. Created: 2018-08-06. Last updated: 2025-02-07. Bibliographically approved
Ringdahl, O., Kurtser, P. & Edan, Y. (2019). Performance of RGB-D camera for different object types in greenhouse conditions. In: Libor Přeučil; Sven Behnke; Miroslav Kulich (Ed.), 2019 European conference on mobile robots (ECMR): conference proceedings, September 4-6, 2019, Prague, Czech Republic. Paper presented at the European Conference on Mobile Robots (ECMR), Prague, Czech Republic, September 4-6, 2019. IEEE, Article ID 8870935.
Performance of RGB-D camera for different object types in greenhouse conditions
2019 (English). In: 2019 European conference on mobile robots (ECMR): conference proceedings, September 4-6, 2019, Prague, Czech Republic / [ed] Libor Přeučil; Sven Behnke; Miroslav Kulich, IEEE, 2019, article id 8870935. Conference paper, Published paper (Refereed)
Abstract [en]

RGB-D cameras play an increasingly important role in localization and autonomous navigation of mobile robots. Reasonably priced commercial RGB-D cameras have recently been developed for operation in greenhouse and outdoor conditions. They can be employed for different agricultural and horticultural operations such as harvesting, weeding, pruning and phenotyping. However, the depth information extracted from the cameras varies significantly between objects and sensing conditions. This paper presents an evaluation protocol applied to a commercially available Fotonic F80 time-of-flight RGB-D camera for eight different object types. A case study of autonomous sweet pepper harvesting was used as an exemplary agricultural task. Each of the chosen objects is a possible item that an autonomous agricultural robot must detect and localize to perform well. A total of 340 rectangular regions of interest (ROIs), 30-100 per object type, were marked for the extraction of performance measures of point cloud density and variability around the center of mass. An additional 570 ROIs were generated (57 manually and 513 replicated) to evaluate the repeatability and accuracy of the point cloud. A statistical analysis was performed to evaluate the significance of differences between object types. The results show that different objects have significantly different point density. Specifically, metallic materials and black-colored objects had significantly lower point density compared to organic and other artificial materials introduced to the scene, as expected. The point cloud variability measures showed no significant differences between object types, except for the metallic knife, which presented significant outliers in the collected measures. The accuracy and repeatability analysis showed that 1-3 cm errors are due to the difficulty for a human to annotate the exact same area, and up to ±4 cm error is due to the sensor not generating the exact same point cloud when sensing a fixed object.
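A sketch of two per-ROI measures of the kind described above, point density and spread around the center of mass (illustrative; the paper's exact definitions and units may differ, and the data below are synthetic):

```python
# Per-ROI measures for an RGB-D evaluation protocol: point density (valid depth
# points per ROI area) and spread around the centre of mass of the ROI point cloud.
import numpy as np

def roi_measures(points: np.ndarray, roi_area_px: int) -> tuple[float, float]:
    """points: (N, 3) XYZ samples with valid depth inside one rectangular ROI."""
    density = len(points) / roi_area_px                  # valid points per ROI pixel
    center_of_mass = points.mean(axis=0)
    variability = np.linalg.norm(points - center_of_mass, axis=1).std()
    return density, variability

# Example with synthetic data: 500 valid points in a 50 x 40 px ROI on a flat target
# at about 1.2 m; a metallic or black object would typically yield far fewer points.
rng = np.random.default_rng(0)
pts = rng.normal([0.0, 0.0, 1.2], [0.02, 0.02, 0.005], size=(500, 3))
print(roi_measures(pts, 50 * 40))
```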

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
agriculture, cameras, feature extraction, greenhouses, image colour analysis, image sensors, industrial robots, mobile robots, object tracking, robot vision, statistical analysis, pruning, sensing conditions, evaluation protocol, object types, autonomous sweet pepper harvesting, exemplary agricultural task, autonomous agricultural robot, ROI, point cloud density, object type, point density, black colored objects, point cloud variability measures, fixed object, greenhouse conditions, autonomous navigation, mobile robots, agricultural operations, horticultural operations, commercial RGB-D cameras, Fotonic F80 time-of-flight RGB-D camera, size 4.0 cm, size 1.0 cm to 3.0 cm, Cameras, Three-dimensional displays, Robot vision systems, End effectors, Green products
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-165545 (URN); 10.1109/ECMR.2019.8870935 (DOI); 000558081900031 (ISI); 2-s2.0-85074395978 (Scopus ID); 978-1-7281-3606-6 (ISBN); 978-1-7281-3605-9 (ISBN)
Conference
European Conference on Mobile Robots (ECMR), Prague, Czech Republic, September 4–6, 2019.
Funder
Knowledge Foundation; EU, Horizon 2020, 66313
Available from: 2019-11-26. Created: 2019-11-26. Last updated: 2025-02-07. Bibliographically approved
Ostovar, A., Talbot, B., Puliti, S., Astrup, R. & Ringdahl, O. (2019). Using RGB images and machine learning to detect and classify Root and Butt-Rot (RBR) in stumps of Norway spruce. In: Simon Berg & Bruce Talbot (Ed.), Forest Operations in Response to Environmental Challenges: Proceedings of the Nordic-Baltic Conference on Operational Research (NB-NORD), June 3-5, Honne, Norway. Paper presented at the NB Nord Conference: Forest Operations in Response to Environmental Challenges, Honne, Norway, June 3-5, 2019. Norsk institutt for bioøkonomi (NIBIO)
Using RGB images and machine learning to detect and classify Root and Butt-Rot (RBR) in stumps of Norway spruce
2019 (English). In: Forest Operations in Response to Environmental Challenges: Proceedings of the Nordic-Baltic Conference on Operational Research (NB-NORD), June 3-5, Honne, Norway / [ed] Simon Berg & Bruce Talbot, Norsk institutt for bioøkonomi (NIBIO), 2019. Conference paper, Oral presentation with published abstract (Refereed)
Abstract [en]

Root and butt-rot (RBR) has a significant impact on both the material and economic outcome of timber harvesting. An accurate recording of the presence of RBR during timber harvesting would enable a mapping of the location and extent of the problem, providing a basis for evaluating spread in a climate anticipated to enhance pathogenic growth in the future. Therefore, a system to automatically identify and detect the presence of RBR would constitute an important contribution to addressing the problem without increasing workload complexity for the machine operator. In this study we developed and evaluated an approach based on RGB images to automatically detect tree stumps and classify them as to the absence or presence of rot. Furthermore, since knowledge of the extent of RBR is valuable in categorizing logs, we also classify stumps into three classes of infestation: rot = 0%, 0% < rot < 50% and rot ≥ 50%. We used deep learning approaches and conventional machine learning algorithms for the detection and classification tasks. The results showed that tree stumps were detected with a precision of 95% and recall of 80%. Stumps without and with root and butt-rot were correctly classified with accuracies of 83.5% and 77.5%. Classifying rot into three classes resulted in 79.4%, 72.4% and 74.1% accuracy, respectively. With some modifications, the algorithm developed could be used either during the harvesting operation to detect RBR regions on the tree stumps or as an RBR detector for post-harvest assessment of tree stumps and logs.

Place, publisher, year, edition, pages
Norsk institutt for bioøkonomi (NIBIO), 2019
Series
NIBIO Bok, E-ISSN 2464-1189; 5(6) 2019
National Category
Forest Science; Robotics and automation; Signal Processing; Computer graphics and computer vision
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-159977 (URN); 978-82-17-02339-5 (ISBN)
Conference
NB Nord Conference: Forest Operations in Response to Environmental Challenges, Honne, Norway, June 3-5, 2019.
Funder
The Research Council of Norway, NFR281140
Available from: 2019-06-11. Created: 2019-06-11. Last updated: 2025-02-05. Bibliographically approved
Ostovar, A., Ringdahl, O. & Hellström, T. (2018). Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot. Robotics, 7(1), Article ID 11.
Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot
2018 (English). In: Robotics, E-ISSN 2218-6581, Vol. 7, no 1, article id 11. Article in journal (Refereed). Published
Abstract [en]

The presented work is part of the H2020 project SWEEPER, with the overall goal to develop a sweet pepper harvesting robot for use in greenhouses. As part of the solution, visual servoing is used to direct the manipulator towards the fruit. This requires accurate and stable fruit detection based on video images. To segment an image into background and foreground, thresholding techniques are commonly used. The varying illumination conditions in the unstructured greenhouse environment often cause shadows and overexposure. Furthermore, the color of the fruits to be harvested varies over the season. All this makes it sub-optimal to use fixed pre-selected thresholds. In this paper we suggest an adaptive, image-dependent thresholding method. A variant of reinforcement learning (RL) is used with a reward function that computes the similarity between the segmented image and the labeled image to give feedback for action selection. The RL-based approach requires less computational resources than exhaustive search, which is used as a benchmark, and results in higher performance compared to a Lipschitzian-based optimization approach. The proposed method also requires fewer labeled images compared to other methods. Several exploration-exploitation strategies are compared, and the results indicate that the Decaying Epsilon-Greedy algorithm gives the highest performance for this task. The highest performance with the Epsilon-Greedy algorithm (ϵ = 0.7) reached 87% of the performance achieved by exhaustive search, with 50% fewer iterations than the benchmark. The performance increased to 91.5% using the Decaying Epsilon-Greedy algorithm, with 73% fewer iterations than the benchmark.
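A minimal decaying epsilon-greedy sketch over candidate thresholds, illustrating the exploration-exploitation idea (not the authors' implementation; the reward function, threshold grid, decay rate and step count are placeholders):

```python
# Decaying-epsilon-greedy search over candidate segmentation thresholds. The reward
# would normally compare the thresholded image against its manual label (e.g. IoU);
# here a placeholder reward stands in for that comparison.
import random

def segmentation_reward(threshold: float) -> float:
    """Placeholder: similarity between segmented and labeled image, in [0, 1]."""
    return 1.0 - abs(threshold - 0.63)   # pretend 0.63 is the unknown best threshold

thresholds = [i / 20 for i in range(21)]          # candidate actions: 0.00 .. 1.00
q = [0.0] * len(thresholds)                       # running mean reward per action
counts = [0] * len(thresholds)
epsilon, decay = 0.7, 0.99                        # start exploratory, decay over time

for step in range(300):
    if random.random() < epsilon:                 # explore
        a = random.randrange(len(thresholds))
    else:                                         # exploit current best estimate
        a = max(range(len(thresholds)), key=lambda i: q[i])
    r = segmentation_reward(thresholds[a])
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]                # incremental mean update
    epsilon *= decay                              # decaying epsilon-greedy schedule

best = max(range(len(thresholds)), key=lambda i: q[i])
print(f"selected threshold: {thresholds[best]:.2f}")
```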

Place, publisher, year, edition, pages
MDPI, 2018
Keywords
reinforcement learning, Q-Learning, image thresholding, ϵ-greedy strategies
National Category
Computer graphics and computer vision
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-144513 (URN); 10.3390/robotics7010011 (DOI); 000432680200008 (ISI); 2-s2.0-85042553994 (Scopus ID)
Funder
EU, Horizon 2020, 644313
Available from: 2018-02-05. Created: 2018-02-05. Last updated: 2025-02-07. Bibliographically approved