Publications (10 of 41)
Kurtser, P., Lowry, S. & Ringdahl, O. (2024). Advances in machine learning for agricultural robots. In: Eldert van Henten; Yael Edan (Ed.), Advances in agri-food robotics (pp. 103-134). Cambridge: Burleigh Dodds Science Publishing
2024 (English). In: Advances in agri-food robotics / [ed] Eldert van Henten; Yael Edan, Cambridge: Burleigh Dodds Science Publishing, 2024, pp. 103-134. Chapter in book, part of anthology (Refereed)
Abstract [en]

This chapter presents a survey of advances in the use of machine learning (ML) algorithms for agricultural robotics. The development of ML algorithms over the last decade has been remarkable, and they have consequently been deployed rapidly and widely in many domains, including agricultural robotics. However, major challenges remain for ML in agri-robotics, owing to the unavoidable complexity and variability of the operating environments and the difficulty of obtaining sufficient quantities of relevant training data. This chapter gives an overview of the use of ML in agri-robotics, discusses ML for data analysis and decision-making in perception and navigation, and outlines the main trends of the last decade in the algorithms employed and the data available. We then discuss the challenges the field is facing and ways to overcome them.

Place, publisher, year, edition, pages
Cambridge: Burleigh Dodds Science Publishing, 2024
Series
Burleigh Dodds Series in Agricultural Science, ISSN 2059-6936, E-ISSN 2059-6944; 139
National subject category
Computer Sciences; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:umu:diva-223680 (URN); 10.19103/AS.2023.0124.04 (DOI); 9781801462778 (ISBN); 9781801462792 (ISBN); 9781801462785 (ISBN)
Available from: 2024-04-23 Created: 2024-04-23 Last updated: 2024-05-14 Bibliographically reviewed
Arad, B., Balendonck, J., Barth, R., Ben-Shahar, O., Edan, Y., Hellström, T., . . . van Tuijl, B. (2020). Development of a sweet pepper harvesting robot. Journal of Field Robotics, 37(6), 1027-1039
2020 (English). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 37, no. 6, pp. 1027-1039. Article in journal (Refereed). Published
Abstract [en]

This paper presents the development, testing and validation of SWEEPER, a robot for harvesting sweet pepper fruit in greenhouses. The robotic system includes a six-degrees-of-freedom industrial arm equipped with a specially designed end effector, an RGB-D camera, a high-end computer with a graphics processing unit, programmable logic controllers, other electronic equipment, and a small container to store harvested fruit. All of this is mounted on a cart that autonomously drives on pipe rails and on the concrete floor of the end-user environment. The overall operation of the harvesting robot is described along with details of the algorithms for fruit detection and localization, grasp pose estimation, and motion control. The main contributions of this paper are the integrated system design and its validation and extensive field testing in a commercial greenhouse for different varieties and growing conditions. A total of 262 fruits were involved in a 4-week testing period. The average cycle time to harvest a fruit was 24 s, of which logistics took approximately 50% (7.8 s for discharge of fruit and 4.7 s for platform movements). Laboratory experiments have shown that the cycle time can be reduced to 15 s by running the robot manipulator at a higher speed. The harvest success rates were 61% under best-fit crop conditions and 18% under current crop conditions, which underlines the importance of finding the best-fit crop conditions and crop varieties for successful robotic harvesting. The SWEEPER robot is the first sweet pepper harvesting robot to demonstrate this kind of performance in a commercial greenhouse.
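As a quick sanity check on the reported timing, the two logistics components quoted above do account for roughly half of the 24 s cycle. A minimal sketch using only the numbers from the abstract (how the remaining time splits between sensing, approach and grasping is not reported here):

```python
# Back-of-envelope breakdown of the reported SWEEPER harvest cycle time.
# All constants come from the abstract above.

CYCLE_TIME_S = 24.0   # average time to harvest one fruit
DISCHARGE_S = 7.8     # discharging the fruit into the container
PLATFORM_S = 4.7      # platform movements between picking positions

logistics = DISCHARGE_S + PLATFORM_S
print(f"Logistics: {logistics:.1f} s ({logistics / CYCLE_TIME_S:.0%} of the cycle)")
print(f"Sensing and manipulation: {CYCLE_TIME_S - logistics:.1f} s")
```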

Place, publisher, year, edition, pages
John Wiley & Sons, 2020
Keywords
agriculture, computer vision, field test, motion control, real-world conditions, robotics
National subject category
Robotics and Automation
Research subject
Computer Science; Mechanical Engineering
Identifiers
urn:nbn:se:umu:diva-167658 (URN); 10.1002/rob.21937 (DOI); 000509488400001 (); 2-s2.0-85078783496 (Scopus ID)
Research funder
EU, Horizon 2020, 644313
Available from: 2020-01-31 Created: 2020-01-31 Last updated: 2023-03-24 Bibliographically reviewed
Kurtser, P., Ringdahl, O., Rotstein, N., Berenstein, R. & Edan, Y. (2020). In-field grape cluster size assessment for vine yield estimation using a mobile robot and a consumer level RGB-D camera. IEEE Robotics and Automation Letters, 5(2), 2031-2038
2020 (English). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 5, no. 2, pp. 2031-2038. Article in journal (Refereed). Published
Abstract [en]

Current practice for vine yield estimation is based on RGB cameras and has limited performance. In this paper we present a method for outdoor vine yield estimation using a consumer-grade RGB-D camera mounted on a mobile robotic platform. An algorithm for automatic grape cluster size estimation using depth information is evaluated both in controlled outdoor conditions and in commercial vineyard conditions. Ten video scans (3 camera viewpoints with 2 different backgrounds and 2 natural light conditions), acquired from a controlled outdoor experiment and a commercial vineyard setup, were used for the analyses. The collected dataset (GRAPES3D) is released to the public. A total of 4542 regions of 49 grape clusters were manually labeled by a human annotator for comparison. Eight variations of the algorithm are assessed, for both manually labeled and auto-detected regions. The effects of viewpoint, presence of an artificial background, and the human annotator are analyzed using statistical tools. Results show a 2.8-3.5 cm average error for all acquired data and reveal the potential of using low-cost commercial RGB-D cameras for improved robotic yield estimation.
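The paper's estimator itself is not reproduced here, but the core idea that depth enables metric size estimation can be sketched with the pinhole camera model: a pixel extent at a known depth maps to a metric width. The function name and the use of the median depth are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def cluster_width_m(mask: np.ndarray, depth_m: np.ndarray, fx: float) -> float:
    """Estimate the metric width of a grape cluster from one RGB-D frame.

    mask    -- boolean array, True on pixels belonging to the cluster
    depth_m -- per-pixel depth in metres (same shape as mask)
    fx      -- horizontal focal length of the camera in pixels
    """
    cols = np.where(mask.any(axis=0))[0]
    width_px = cols[-1] - cols[0] + 1   # horizontal pixel extent of the cluster
    z = np.median(depth_m[mask])        # robust depth of the cluster
    # Pinhole model: w pixels at depth z span w * z / fx metres.
    return width_px * z / fx
```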

Place, publisher, year, edition, pages
IEEE, 2020
Keywords
Field Robots, RGB-D Perception, Agricultural Automation, Robotics in Agriculture and Forestry
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:umu:diva-167778 (URN); 10.1109/LRA.2020.2970654 (DOI); 000526520700001 (); 2-s2.0-85079829054 (Scopus ID)
Available from: 2020-02-03 Created: 2020-02-03 Last updated: 2024-01-17 Bibliographically reviewed
Kurtser, P., Ringdahl, O., Rotstein, N. & Andreasson, H. (2020). PointNet and geometric reasoning for detection of grape vines from single frame RGB-D data in outdoor conditions. In: Proceedings of the Northern Lights Deep Learning Workshop. Paper presented at the 3rd Northern Lights Deep Learning Workshop, Tromsø, Norway, January 19-21, 2020 (pp. 1-6). Septentrio Academic Publishing, 1
2020 (English). In: Proceedings of the Northern Lights Deep Learning Workshop, Septentrio Academic Publishing, 2020, Vol. 1, pp. 1-6. Conference paper, published paper (Refereed)
Abstract [en]

In this paper we present the use of PointNet, a deep neural network that consumes raw un-ordered point clouds, for detection of grape vine clusters in outdoor conditions. We investigate the added value of feeding the detection network with both RGB and depth, contrary to the common practice in agricultural robotics of relying on RGB only. A total of 5057 point clouds (1033 manually annotated and 4024 annotated using geometric reasoning) were collected in a field experiment conducted in outdoor conditions on 9 grape vines and 5 plants. The detection results show an overall accuracy of 91% (average class accuracy of 74%, precision 53%, recall 48%) for RGBXYZ data and a significant drop in recall for RGB or XYZ data only. These results suggest that the use of depth cameras for vision in agricultural robotics is crucial for crops where the color contrast between the crop and the background is complex. The results also suggest that geometric reasoning can be used to increase training set size, a major bottleneck in the development of agricultural vision systems.
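Whereas RGB-only pipelines feed a CNN with images, a PointNet-style network consumes an un-ordered point set. A minimal sketch of how a single RGB-D frame might be turned into the kind of (N, 6) RGBXYZ input the abstract refers to; the function name and camera intrinsics are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rgbd_to_points(rgb, depth_m, fx, fy, cx, cy):
    """Convert an RGB-D frame into an (N, 6) XYZRGB point set suitable
    for a PointNet-style network. Pixels without valid depth are dropped."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx                 # back-project with the pinhole model
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3) / 255.0     # normalise colours to [0, 1]
    valid = pts[:, 2] > 0                 # keep only pixels with measured depth
    return np.hstack([pts[valid], cols[valid]]).astype(np.float32)
```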

Place, publisher, year, edition, pages
Septentrio Academic Publishing, 2020
Keywords
RGBD, Deep-learning, Agricultural robotics, outdoor vision, grape
National subject category
Computer Vision and Robotics (Autonomous Systems); Other Agricultural Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-177113 (URN); 10.7557/18.5155 (DOI)
Conference
3rd Northern Lights Deep Learning Workshop, Tromsø, Norway, January 19-21, 2020
Available from: 2020-11-27 Created: 2020-11-27 Last updated: 2022-08-24 Bibliographically reviewed
Ostovar, A., Talbot, B., Puliti, S., Astrup, R. & Ringdahl, O. (2019). Detection and Classification of Root and Butt-Rot (RBR) in Stumps of Norway Spruce Using RGB Images and Machine Learning. Sensors, 19(7), Article ID 1579.
2019 (English). In: Sensors, E-ISSN 1424-8220, Vol. 19, no. 7, article id 1579. Article in journal (Refereed). Published
Abstract [en]

Root and butt-rot (RBR) has a significant impact on both the material and economic outcome of timber harvesting, and thereby on the individual forest owner and, collectively, on the forest and wood-processing industries. An accurate recording of the presence of RBR during timber harvesting would enable a mapping of the location and extent of the problem, providing a basis for evaluating spread in a climate anticipated to enhance pathogenic growth in the future. Therefore, a system to automatically identify and detect the presence of RBR would constitute an important contribution to addressing the problem without increasing workload complexity for the machine operator. In this study, we developed and evaluated an approach based on RGB images to automatically detect tree stumps and classify them as to the absence or presence of rot. Furthermore, since knowledge of the extent of RBR is valuable in categorizing logs, we also classify stumps into three classes of infestation: rot = 0%, 0% < rot < 50%, and rot ≥ 50%. In this work we used deep-learning approaches and conventional machine-learning algorithms for the detection and classification tasks. The results showed that tree stumps were detected with a precision of 95% and a recall of 80%. Using only the correct output (TP) of the stump detector, stumps without and with RBR were correctly classified with accuracies of 83.5% and 77.5%, respectively. Classifying rot into three classes resulted in 79.4%, 72.4%, and 74.1% accuracy for stumps with rot = 0%, 0% < rot < 50%, and rot ≥ 50%, respectively. With some modifications, the developed algorithm could be used either during the harvesting operation to detect RBR regions on the tree stumps or as an RBR detector for post-harvest assessment of tree stumps and logs.
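The three infestation classes above are defined purely by the rot-covered fraction, so the label mapping is a simple rule; a hypothetical sketch (the classifier in the paper predicts these labels from image features rather than from a measured fraction):

```python
def rbr_class(rot_fraction: float) -> int:
    """Map a stump's rot-covered fraction to the three infestation
    classes used above: 0 (no rot), 1 (0% < rot < 50%), 2 (rot >= 50%)."""
    if rot_fraction == 0.0:
        return 0   # rot = 0%
    if rot_fraction < 0.5:
        return 1   # 0% < rot < 50%
    return 2       # rot >= 50%
```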

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
deep learning; forest harvesting; tree stumps; automatic detection and classification
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-157716 (URN); 10.3390/s19071579 (DOI); 000465570700098 (); 30939827 (PubMedID); 2-s2.0-85064193099 (Scopus ID)
Project
PRECISION
Research funder
Research Council of Norway, NFR281140
Available from: 2019-04-01 Created: 2019-04-01 Last updated: 2023-03-24 Bibliographically reviewed
Ringdahl, O., Kurtser, P. & Edan, Y. (2019). Evaluation of approach strategies for harvesting robots: case study of sweet pepper harvesting. Journal of Intelligent and Robotic Systems, 95(1), 149-164
2019 (English). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 95, no. 1, pp. 149-164. Article in journal (Refereed). Published
Abstract [en]

Robotic harvesters that use visual servoing must choose the best direction from which to approach the fruit, to minimize occlusion and avoid obstacles that might interfere with detection along the approach. This work proposes different approach strategies, compares them in terms of cycle times, and presents a failure analysis methodology for the different approach strategies. The strategies are: in-field assessment by human observers, evaluation based on an overview image using advanced algorithms or remote human observers, or attempting multiple approach directions until the fruit is successfully reached. In the latter strategy, each attempt costs time, which is a major bottleneck in bringing harvesting robots to the market. Alternatively, a single-approach strategy that only attempts one direction can be applied if the best approach direction is known a priori. The different approach strategies were evaluated for a case study of sweet pepper harvesting in laboratory and greenhouse conditions. The first experiment, conducted in a commercial greenhouse, revealed that the fruit approach cycle time increased 8% and 116% for reachable and unreachable fruits, respectively, when the multiple-approach strategy was applied compared to the single-approach strategy. The second experiment measured human observers' ability to provide insight into approach directions based on overview images taken in both greenhouse and laboratory conditions. Results revealed that human observers are accurate in detecting unapproachable directions, while they tend to miss approachable directions. By detecting fruits that are unreachable (via automatic algorithms or human operators), harvesting cycle times can be significantly shortened, leading to improved commercial feasibility of harvesting robots.
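The trade-off between the single- and multiple-approach strategies comes down to how many attempts are paid for before a fruit is reached or declared unreachable. A minimal sketch of the multiple-approach loop, with try_approach as a hypothetical stand-in for one visual-servoing attempt:

```python
def harvest_with_multiple_approaches(fruit, directions, try_approach):
    """Attempt candidate approach directions in order until one succeeds.

    try_approach(fruit, direction) is assumed to run one visual-servoing
    approach attempt and return a tuple (success, elapsed_seconds).
    """
    total_s = 0.0
    for direction in directions:
        success, elapsed_s = try_approach(fruit, direction)
        total_s += elapsed_s        # every attempt costs cycle time...
        if success:
            return True, total_s    # ...until the fruit is reached
    return False, total_s           # unreachable: all attempts were paid for
```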

Place, publisher, year, edition, pages
Springer Netherlands, 2019
Keywords
Agricultural robotics, Robotic harvesting, Fruit approach, Human-robot collaboration
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:umu:diva-150404 (URN); 10.1007/s10846-018-0892-7 (DOI); 000475763400010 (); 2-s2.0-85051483992 (Scopus ID)
Research funder
EU, Horizon 2020, 644313
Available from: 2018-08-06 Created: 2018-08-06 Last updated: 2022-08-24 Bibliographically reviewed
Ringdahl, O., Kurtser, P. & Edan, Y. (2019). Performance of RGB-D camera for different object types in greenhouse conditions. In: Libor Přeučil; Sven Behnke; Miroslav Kulich (Ed.), 2019 European Conference on Mobile Robots (ECMR): conference proceedings, September 4-6, 2019, Prague, Czech Republic. Paper presented at the European Conference on Mobile Robots (ECMR), Prague, Czech Republic, September 4-6, 2019. IEEE, Article ID 8870935.
2019 (English). In: 2019 European Conference on Mobile Robots (ECMR): conference proceedings, September 4-6, 2019, Prague, Czech Republic / [ed] Libor Přeučil; Sven Behnke; Miroslav Kulich, IEEE, 2019, article id 8870935. Conference paper, published paper (Refereed)
Abstract [en]

RGB-D cameras play an increasingly important role in localization and autonomous navigation of mobile robots. Reasonably priced commercial RGB-D cameras have recently been developed for operation in greenhouse and outdoor conditions. They can be employed for different agricultural and horticultural operations such as harvesting, weeding, pruning and phenotyping. However, the depth information extracted from the cameras varies significantly between objects and sensing conditions. This paper presents an evaluation protocol applied to a commercially available Fotonic F80 time-of-flight RGB-D camera for eight different object types. A case study of autonomous sweet pepper harvesting was used as an exemplary agricultural task. Each of the chosen objects is a possible item that an autonomous agricultural robot must detect and localize to perform well. A total of 340 rectangular regions of interest (ROIs) were marked for the extraction of performance measures of point cloud density and variability around the center of mass, 30-100 ROIs per object type. An additional 570 ROIs were generated (57 manually and 513 replicated) to evaluate the repeatability and accuracy of the point cloud. A statistical analysis was performed to evaluate the significance of differences between object types. The results show that different objects have significantly different point density. Specifically, metallic materials and black-colored objects had significantly lower point density compared to organic and other artificial materials introduced to the scene, as expected. The point cloud variability measures showed no significant differences between object types, except for the metallic knife, which presented significant outliers in the collected measures. The accuracy and repeatability analysis showed that 1-3 cm errors are due to the difficulty for a human to annotate exactly the same area, and that up to ±4 cm error is due to the sensor not generating exactly the same point cloud when sensing a fixed object.
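The exact performance measures are defined in the paper; as an illustration of the kind of per-ROI statistics involved, here is a sketch of point density and variability around the center of mass for one ROI (the names and the area-based density normalization are assumptions):

```python
import numpy as np

def roi_point_stats(points: np.ndarray, roi_area_m2: float):
    """Per-ROI measures in the spirit of the evaluation protocol above.

    points      -- (N, 3) array of XYZ samples falling inside one ROI
    roi_area_m2 -- area of the rectangular ROI in square metres
    """
    density = len(points) / roi_area_m2            # points per square metre
    com = points.mean(axis=0)                      # centre of mass of the cloud
    spread = np.linalg.norm(points - com, axis=1)  # point-to-COM distances
    return density, spread.mean(), spread.std()
```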

Place, publisher, year, edition, pages
IEEE, 2019
Keywords
agriculture, cameras, feature extraction, greenhouses, image colour analysis, image sensors, industrial robots, mobile robots, object tracking, robot vision, statistical analysis, pruning, sensing conditions, evaluation protocol, object types, autonomous sweet pepper harvesting, exemplary agricultural task, autonomous agricultural robot, ROI, point cloud density, object type, point density, black colored objects, point cloud variability measures, fixed object, greenhouse conditions, autonomous navigation, mobile robots, agricultural operations, horticultural operations, commercial RGB-D cameras, Fotonic F80 time-of-flight RGB-D camera, size 4.0 cm, size 1.0 cm to 3.0 cm, Cameras, Three-dimensional displays, Robot vision systems, End effectors, Green products
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computing Science
Identifiers
urn:nbn:se:umu:diva-165545 (URN); 10.1109/ECMR.2019.8870935 (DOI); 000558081900031 (); 2-s2.0-85074395978 (Scopus ID); 978-1-7281-3606-6 (ISBN); 978-1-7281-3605-9 (ISBN)
Conference
European Conference on Mobile Robots (ECMR), Prague, Czech Republic, September 4-6, 2019
Research funder
KK-stiftelsen; EU, Horizon 2020, 644313
Available from: 2019-11-26 Created: 2019-11-26 Last updated: 2022-08-24 Bibliographically reviewed
Ostovar, A., Talbot, B., Puliti, S., Astrup, R. & Ringdahl, O. (2019). Using RGB images and machine learning to detect and classify Root and Butt-Rot (RBR) in stumps of Norway spruce. In: Simon Berg & Bruce Talbot (Ed.), Forest Operations in Response to Environmental Challenges: Proceedings of the Nordic-Baltic Conference on Operational Research (NB-NORD), June 3-5, Honne, Norway. Paper presented at the NB Nord Conference: Forest Operations in Response to Environmental Challenges, Honne, Norway, June 3-5, 2019. Norsk institutt for bioøkonomi (NIBIO)
2019 (English). In: Forest Operations in Response to Environmental Challenges: Proceedings of the Nordic-Baltic Conference on Operational Research (NB-NORD), June 3-5, Honne, Norway / [ed] Simon Berg & Bruce Talbot, Norsk institutt for bioøkonomi (NIBIO), 2019. Conference contribution, oral presentation with published abstract (Refereed)
Abstract [en]

Root and butt-rot (RBR) has a significant impact on both the material and economic outcome of timber harvesting. An accurate recording of the presence of RBR during timber harvesting would enable a mapping of the location and extent of the problem, providing a basis for evaluating spread in a climate anticipated to enhance pathogenic growth in the future. Therefore, a system to automatically identify and detect the presence of RBR would constitute an important contribution to addressing the problem without increasing workload complexity for the machine operator. In this study we developed and evaluated an approach based on RGB images to automatically detect tree stumps and classify them as to the absence or presence of rot. Furthermore, since knowledge of the extent of RBR is valuable in categorizing logs, we also classify stumps into three classes of infestation: rot = 0%, 0% < rot < 50%, and rot ≥ 50%. We used deep-learning approaches and conventional machine-learning algorithms for the detection and classification tasks. The results showed that tree stumps were detected with a precision of 95% and a recall of 80%. Stumps without and with root and butt-rot were correctly classified with accuracies of 83.5% and 77.5%. Classifying rot into three classes resulted in 79.4%, 72.4%, and 74.1% accuracy, respectively. With some modifications, the algorithm developed could be used either during the harvesting operation to detect RBR regions on the tree stumps or as an RBR detector for post-harvest assessment of tree stumps and logs.

Place, publisher, year, edition, pages
Norsk institutt for bioøkonomi (NIBIO), 2019
Series
NIBIO Bok, E-ISSN 2464-1189; 5(6) 2019
National subject category
Forest Science; Robotics and Automation; Signal Processing; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:umu:diva-159977 (URN); 978-82-17-02339-5 (ISBN)
Conference
NB Nord Conference: Forest Operations in Response to Environmental Challenges, Honne, Norway, June 3-5, 2019
Research funder
Research Council of Norway, NFR281140
Available from: 2019-06-11 Created: 2019-06-11 Last updated: 2020-02-05 Bibliographically reviewed
Ostovar, A., Ringdahl, O. & Hellström, T. (2018). Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot. Robotics, 7(1), Article ID 11.
2018 (English). In: Robotics, E-ISSN 2218-6581, Vol. 7, no. 1, article id 11. Article in journal (Refereed). Published
Abstract [en]

The presented work is part of the H2020 project SWEEPER, with the overall goal of developing a sweet pepper harvesting robot for use in greenhouses. As part of the solution, visual servoing is used to direct the manipulator towards the fruit. This requires accurate and stable fruit detection based on video images. To segment an image into background and foreground, thresholding techniques are commonly used. The varying illumination conditions in the unstructured greenhouse environment often cause shadows and overexposure. Furthermore, the color of the fruits to be harvested varies over the season. All this makes it sub-optimal to use fixed pre-selected thresholds. In this paper we suggest an adaptive, image-dependent thresholding method. A variant of reinforcement learning (RL) is used with a reward function that computes the similarity between the segmented image and the labeled image to give feedback for action selection. The RL-based approach requires fewer computational resources than exhaustive search, which is used as a benchmark, and results in higher performance compared to a Lipschitzian-based optimization approach. The proposed method also requires fewer labeled images than other methods. Several exploration-exploitation strategies are compared, and the results indicate that the Decaying Epsilon-Greedy algorithm gives the highest performance for this task. The highest performance with the Epsilon-Greedy algorithm (ε = 0.7) reached 87% of the performance achieved by exhaustive search, with 50% fewer iterations than the benchmark. The performance increased to 91.5% using the Decaying Epsilon-Greedy algorithm, with 73% fewer iterations than the benchmark.
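The RL component can be pictured as a bandit-style search over candidate thresholds, where the reward is the similarity between the segmented image and the labeled image. A minimal sketch under those assumptions, not the paper's implementation (reward_fn and the parameter defaults are illustrative; eps0=0.7 only loosely mirrors the ε = 0.7 mentioned above):

```python
import random

def decaying_eps_greedy(actions, reward_fn, n_steps=100, eps0=0.7,
                        decay=0.99, alpha=0.1):
    """Pick thresholding actions by decaying epsilon-greedy selection.

    reward_fn(action) is assumed to score how well the segmentation
    produced by the action matches a labeled reference image.
    """
    q = {a: 0.0 for a in actions}   # running reward estimate per action
    eps = eps0
    for _ in range(n_steps):
        if random.random() < eps:
            a = random.choice(actions)          # explore a random threshold
        else:
            a = max(q, key=q.get)               # exploit the best estimate
        q[a] += alpha * (reward_fn(a) - q[a])   # incremental estimate update
        eps *= decay                            # decay the exploration rate
    return max(q, key=q.get)                    # best threshold found
```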

Place, publisher, year, edition, pages
MDPI, 2018
Keywords
reinforcement learning, Q-Learning, image thresholding, ε-greedy strategies
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-144513 (URN); 10.3390/robotics7010011 (DOI); 000432680200008 (); 2-s2.0-85042553994 (Scopus ID)
Research funder
EU, Horizon 2020, 644313
Available from: 2018-02-05 Created: 2018-02-05 Last updated: 2023-03-24 Bibliographically reviewed
Ringdahl, O., Kurtser, P. & Edan, Y. (2017). Strategies for selecting best approach direction for a sweet-pepper harvesting robot. In: Yang Gao; Saber Fallah; Yaochu Jin; Constantina Lekakou (Ed.), Towards autonomous robotic systems: 18th annual conference, TAROS 2017, Guildford, UK, July 19-21, 2017, proceedings. Paper presented at TAROS 2017: the 18th Towards Autonomous Robotic Systems (TAROS) Conference, University of Surrey, Guildford, UK, July 19-21, 2017 (pp. 516-525). Cham: Springer
2017 (English). In: Towards autonomous robotic systems: 18th annual conference, TAROS 2017, Guildford, UK, July 19-21, 2017, proceedings / [ed] Yang Gao; Saber Fallah; Yaochu Jin; Constantina Lekakou, Cham: Springer, 2017, pp. 516-525. Conference paper, published paper (Refereed)
Abstract [en]

An autonomous sweet pepper harvesting robot must perform several tasks to successfully harvest a fruit. Due to the highly unstructured environment in which the robot operates and the presence of occlusions, the current challenges are to improve the detection rate and to lower the risk of losing sight of the fruit while approaching it for harvest. It is therefore crucial to choose the approach direction with the least occlusion from obstacles.

The value of ideal information regarding the best approach direction was evaluated by comparing it to a method that attempts several directions until harvesting succeeds. A laboratory experiment was conducted on artificial sweet pepper plants using an eye-in-hand configuration comprising a 6-DOF robotic manipulator equipped with an RGB camera. Performance was evaluated in laboratory conditions using both descriptive statistics of the average harvesting times and harvesting success, and regression models. The results show a roughly 40-45% increase in average harvest time when no a priori information about the correct harvesting direction is available, with a nearly linear increase in overall harvesting time for each failed harvesting attempt. The variability of the harvesting times grows with the number of approaches required, making them harder to predict.

Tests show that occlusion of the front of the peppers significantly impacts the harvesting times. The major reason is the robot's limited workspace, which often makes paths to positions at the side of the peppers significantly longer than paths to positions in front of the fruit, where the space is more open.

Place, publisher, year, edition, pages
Cham: Springer, 2017
Series
Lecture Notes in Computer Science: Lecture Notes in Artificial Intelligence, ISSN 0302-9743, E-ISSN 1611-3349; 10454
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:umu:diva-138381 (URN); 10.1007/978-3-319-64107-2_41 (DOI); 000453217100041 (); 2-s2.0-85026771770 (Scopus ID); 978-3-319-64107-2 (ISBN); 978-3-319-64106-5 (ISBN)
Conference
TAROS 2017: the 18th Towards Autonomous Robotic Systems (TAROS) Conference, University of Surrey, Guildford, UK, July 19-21, 2017
Research funder
EU, Horizon 2020, 644313
Available from: 2017-08-21 Created: 2017-08-21 Last updated: 2022-08-24 Bibliographically reviewed
Identifiers
ORCID iD: orcid.org/0000-0002-4600-8652
