Publications (10 of 35)
Detection and classification of Root and Butt-Rot (RBR) in Stumps of Norway Spruce Using RGB Images and Machine Learning
2019 (English). In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no. 7, article id 1579. Article in journal (Refereed). Published.
Abstract [en]

Root and butt-rot (RBR) has a significant impact on both the material and economic outcome of timber harvesting, and therewith on the individual forest owner and collectively on the forest and wood processing industries. An accurate recording of the presence of RBR during timber harvesting would enable a mapping of the location and extent of the problem, providing a basis for evaluating spread in a climate anticipated to enhance pathogenic growth in the future. Therefore, a system to automatically identify and detect the presence of RBR would constitute an important contribution to addressing the problem without increasing workload complexity for the machine operator. In this study, we developed and evaluated an approach based on RGB images to automatically detect tree stumps and classify them as to the absence or presence of rot. Furthermore, since knowledge of the extent of RBR is valuable in categorizing logs, we also classified stumps into three classes of infestation: rot = 0%, 0% < rot < 50%, and rot ≥ 50%. In this work, we used deep-learning approaches and conventional machine-learning algorithms for the detection and classification tasks. The results showed that tree stumps were detected with a precision of 95% and a recall of 80%. Using only the correct output (true positives) of the stump detector, stumps without and with RBR were correctly classified with an accuracy of 83.5% and 77.5%, respectively. Classifying rot into three classes resulted in 79.4%, 72.4%, and 74.1% accuracy for stumps with rot = 0%, 0% < rot < 50%, and rot ≥ 50%, respectively. With some modifications, the developed algorithm could be used either during the harvesting operation to detect RBR regions on the tree stumps or as an RBR detector for post-harvest assessment of tree stumps and logs.
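
To make the three-class labelling concrete: a minimal Python sketch, assuming the rot extent of a stump is available as a fraction in [0, 1]. The function names and the per-class accuracy helper are illustrative, not code from the paper.

```python
def rot_class(rot_fraction: float) -> str:
    """Map the measured rot extent (fraction of the stump surface)
    to one of the three infestation classes used in the paper."""
    if rot_fraction == 0.0:
        return "rot = 0%"
    elif rot_fraction < 0.5:
        return "0% < rot < 50%"
    return "rot >= 50%"

def per_class_accuracy(y_true, y_pred):
    """Per-class accuracy as reported in the abstract: the fraction
    of stumps of each true class that were classified correctly."""
    return {
        c: sum(t == p for t, p in zip(y_true, y_pred) if t == c)
           / sum(t == c for t in y_true)
        for c in set(y_true)
    }
```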

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
deep learning; forest harvesting; tree stumps; automatic detection and classification
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-157716 (URN), 10.3390/s19071579 (DOI), 000465570700098 (ISI), 30939827 (PubMedID)
Projects
PRECISION
Funder
The Research Council of Norway, NFR281140
Available from: 2019-04-01. Created: 2019-04-01. Last updated: 2019-11-11. Bibliographically approved.
Ringdahl, O., Ostovar, A., Talbot, B., Puliti, S. & Astrup, R. (2019). Using RGB images and machine learning to detect and classify Root and Butt-Rot (RBR) in stumps of Norway spruce. In: Forest Operations in Response to Environmental Challenges. Paper presented at NB Nord Conference: Forest Operations in Response to Environmental Challenges, Honne, Norway, June 3-5, 2019.
Using RGB images and machine learning to detect and classify Root and Butt-Rot (RBR) in stumps of Norway spruce
2019 (English). In: Forest Operations in Response to Environmental Challenges, 2019. Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

Root and butt-rot (RBR) has a significant impact on both the material and economic outcome of timber harvesting. An accurate recording of the presence of RBR during timber harvesting would enable a mapping of the location and extent of the problem, providing a basis for evaluating spread in a climate anticipated to enhance pathogenic growth in the future. Therefore, a system to automatically identify and detect the presence of RBR would constitute an important contribution in addressing the problem without increasing workload complexity for the machine operator. In this study, we developed and evaluated an approach based on RGB images to automatically detect tree stumps and classify them as to the absence or presence of rot. Furthermore, since knowledge of the extent of RBR is valuable in categorizing logs, we also classified stumps into three classes of infestation: rot = 0%, 0% < rot < 50%, and rot ≥ 50%. We used deep-learning approaches and conventional machine-learning algorithms for the detection and classification tasks. The results showed that tree stumps were detected with a precision of 95% and a recall of 80%. Stumps without and with root and butt-rot were correctly classified with an accuracy of 83.5% and 77.5%, respectively. Classifying rot into three classes resulted in 79.4%, 72.4%, and 74.1% accuracy, respectively. With some modifications, the developed algorithm could be used either during the harvesting operation to detect RBR regions on the tree stumps or as an RBR detector for post-harvest assessment of tree stumps and logs.

National Category
Forest Science; Robotics; Signal Processing; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:umu:diva-159977 (URN)
Conference
NB Nord Conference: Forest Operations in Response to Environmental Challenges, Honne, Norway, June 3-5, 2019.
Funder
The Research Council of Norway, NFR281140
Available from: 2019-06-11. Created: 2019-06-11. Last updated: 2019-06-25.
Ostovar, A., Ringdahl, O. & Hellström, T. (2018). Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot. Robotics, 7(1), Article ID 11.
Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot
2018 (English). In: Robotics, E-ISSN 2218-6581, Vol. 7, no. 1, article id 11. Article in journal (Refereed). Published.
Abstract [en]

The presented work is part of the H2020 project SWEEPER, with the overall goal to develop a sweet pepper harvesting robot for use in greenhouses. As part of the solution, visual servoing is used to direct the manipulator towards the fruit. This requires accurate and stable fruit detection based on video images. To segment an image into background and foreground, thresholding techniques are commonly used. The varying illumination conditions in the unstructured greenhouse environment often cause shadows and overexposure. Furthermore, the color of the fruits to be harvested varies over the season. All this makes it sub-optimal to use fixed, pre-selected thresholds. In this paper we suggest an adaptive, image-dependent thresholding method. A variant of reinforcement learning (RL) is used with a reward function that computes the similarity between the segmented image and the labeled image to give feedback for action selection. The RL-based approach requires less computational resources than exhaustive search, which is used as a benchmark, and results in higher performance compared to a Lipschitzian-based optimization approach. The proposed method also requires fewer labeled images compared to other methods. Several exploration-exploitation strategies are compared, and the results indicate that the Decaying Epsilon-Greedy algorithm gives the highest performance for this task. The highest performance with the Epsilon-Greedy algorithm (ϵ = 0.7) reached 87% of the performance achieved by exhaustive search, with 50% fewer iterations than the benchmark. The performance increased to 91.5% using the Decaying Epsilon-Greedy algorithm, with 73% fewer iterations than the benchmark.
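
The exploration-exploitation mechanics of the decaying ϵ-greedy strategy can be sketched as a bandit-style simplification. The paper uses a variant of Q-learning; the action set of candidate thresholds, the pixel-wise similarity reward, and all parameter values below are illustrative assumptions.

```python
import random

def similarity(segmented, labeled):
    """Reward: fraction of pixels where the binary segmentation
    matches the hand-labeled ground truth (illustrative metric)."""
    matches = sum(s == l for s, l in zip(segmented, labeled))
    return matches / len(labeled)

def decaying_epsilon_greedy(thresholds, segment, labeled,
                            episodes=100, eps0=0.7, decay=0.98):
    """Select the threshold (action) with the highest average reward,
    exploring with a probability that decays each episode."""
    q = {t: 0.0 for t in thresholds}   # action-value estimates
    n = {t: 0 for t in thresholds}     # visit counts
    eps = eps0
    for _ in range(episodes):
        if random.random() < eps:      # explore: random threshold
            t = random.choice(thresholds)
        else:                          # exploit: current best estimate
            t = max(q, key=q.get)
        r = similarity(segment(t), labeled)   # reward for this action
        n[t] += 1
        q[t] += (r - q[t]) / n[t]      # incremental mean update
        eps *= decay                   # shrink exploration over time
    return max(q, key=q.get)
```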

Place, publisher, year, edition, pages
MDPI, 2018
Keywords
reinforcement learning, Q-Learning, image thresholding, ϵ-greedy strategies
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:umu:diva-144513 (URN), 10.3390/robotics7010011 (DOI), 000432680200008 (ISI)
Funder
EU, Horizon 2020, 644313
Available from: 2018-02-05. Created: 2018-02-05. Last updated: 2019-11-11. Bibliographically approved.
Ringdahl, O., Kurtser, P. & Edan, Y. (2017). Strategies for selecting best approach direction for a sweet-pepper harvesting robot. In: Yang Gao, Saber Fallah, Yaochu Jin, Constantina Lekakou (Eds.), Towards Autonomous Robotic Systems: 18th Annual Conference, TAROS 2017, Guildford, UK, July 19–21, 2017, Proceedings. Paper presented at TAROS 2017: the 18th Towards Autonomous Robotic Systems (TAROS) Conference, University of Surrey, Guildford, UK, July 19–21, 2017 (pp. 516-525). Cham: Springer.
Strategies for selecting best approach direction for a sweet-pepper harvesting robot
2017 (English). In: Towards Autonomous Robotic Systems: 18th Annual Conference, TAROS 2017, Guildford, UK, July 19–21, 2017, Proceedings / [ed] Yang Gao, Saber Fallah, Yaochu Jin, Constantina Lekakou, Cham: Springer, 2017, p. 516-525. Conference paper, Published paper (Refereed).
Abstract [en]

An autonomous sweet pepper harvesting robot must perform several tasks to successfully harvest a fruit. Due to the highly unstructured environment in which the robot operates and the presence of occlusions, the current challenges are to improve the detection rate and to lower the risk of losing sight of the fruit while approaching it for harvest. Therefore, it is crucial to choose the approach direction with the least occlusion from obstacles.

The value of ideal information regarding the best approach direction was evaluated by comparing it to a method that attempts several directions until harvesting succeeds. A laboratory experiment was conducted on artificial sweet pepper plants using a system in an eye-in-hand configuration, comprising a 6-DOF robotic manipulator equipped with an RGB camera. Performance was evaluated under laboratory conditions using descriptive statistics of the average harvesting times and harvesting success, as well as regression models. The results show a roughly 40–45% increase in average harvest time when no a priori information on the correct harvesting direction is available, with a nearly linear increase in overall harvesting time for each failed harvesting attempt. The variability of the harvesting times grows with the number of approaches required, reducing the ability to predict them.

Tests show that occlusion of the front of the peppers significantly impacts the harvesting times. The major reason is the limited workspace of the robot, which often makes the paths to positions at the side of the peppers significantly longer than those to positions in front of the fruit, where the space is more open.
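
The reported near-linear growth suggests a simple additive time model. The sketch below is an illustrative reconstruction, not the regression model fitted in the paper, and all time constants are assumptions.

```python
def expected_harvest_time(n_attempts: int,
                          t_approach: float = 10.0,
                          t_grasp: float = 5.0) -> float:
    """Additive model of total harvest time: every failed approach
    adds roughly one more approach motion before the final grasp.
    Times are in arbitrary units and purely illustrative."""
    return n_attempts * t_approach + t_grasp

# With ideal a priori information the first direction succeeds:
baseline = expected_harvest_time(1)
# Without it, each extra attempt adds t_approach, growing nearly linearly:
for k in range(1, 5):
    print(k, expected_harvest_time(k) / baseline)
```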

Place, publisher, year, edition, pages
Cham: Springer, 2017
Series
Lecture Notes in Computer Science : Lecture Notes in Artificial Intelligence, ISSN 0302-9743, E-ISSN 1611-3349 ; 10454
National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-138381 (URN), 10.1007/978-3-319-64107-2_41 (DOI), 000453217100041 (ISI), 978-3-319-64107-2 (ISBN), 978-3-319-64106-5 (ISBN)
Conference
TAROS 2017: the 18th Towards Autonomous Robotic Systems (TAROS) Conference, University of Surrey, Guildford, UK, July 19–21, 2017
Funder
EU, Horizon 2020, 644313
Available from: 2017-08-21. Created: 2017-08-21. Last updated: 2019-01-08. Bibliographically approved.
Ostovar, A., Hellström, T. & Ringdahl, O. (2016). Human Detection Based on Infrared Images in Forestry Environments. In: Image Analysis and Recognition (ICIAR 2016): 13th International Conference, ICIAR 2016, in Memory of Mohamed Kamel, Póvoa de Varzim, Portugal, July 13-15, 2016, Proceedings. Paper presented at 13th International Conference on Image Analysis and Recognition, ICIAR 2016, July 13-15, 2016, Póvoa de Varzim, Portugal (pp. 175-182).
Human Detection Based on Infrared Images in Forestry Environments
2016 (English). In: Image Analysis and Recognition (ICIAR 2016): 13th International Conference, ICIAR 2016, in Memory of Mohamed Kamel, Póvoa de Varzim, Portugal, July 13-15, 2016, Proceedings, 2016, p. 175-182. Conference paper, Published paper (Refereed).
Abstract [en]

A reliable system for detecting humans in close range of forestry machines is essential so that cutting or carrying operations can be stopped before any harm comes to them. Due to the lighting conditions and high occlusion from vegetation, human detection using RGB cameras is difficult. This paper introduces two human detection methods for forestry environments using a thermal camera: one shape-dependent and one shape-independent approach. Our segmentation algorithm estimates the location of the human by extracting the vertical and horizontal borders of regions of interest (ROIs). Based on the segmentation results, features such as the ratio of height to width and the location of the hottest spot are extracted for the shape-dependent method. For the shape-independent method, all extracted ROIs are resized to the same size, and the pixel values (temperatures) are used as a set of features. The features from both methods are fed into different classifiers, and the results are evaluated using side-accuracy and side-efficiency. The results show that by using shape-independent features, based on three consecutive frames, we reach a precision of 80% and a recall of 76%.
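
The two feature sets can be sketched directly, assuming each ROI arrives as a 2-D NumPy array of pixel temperatures. The helper names and the 32×32 target size are illustrative choices, not taken from the paper.

```python
import numpy as np

def shape_dependent_features(roi: np.ndarray) -> np.ndarray:
    """Features tied to the silhouette: height-to-width ratio and the
    (row, col) location of the hottest spot, normalized to [0, 1]."""
    h, w = roi.shape
    hot_r, hot_c = np.unravel_index(np.argmax(roi), roi.shape)
    return np.array([h / w, hot_r / h, hot_c / w])

def shape_independent_features(roi: np.ndarray, size=(32, 32)) -> np.ndarray:
    """Resize every ROI to a common size and use the raw pixel
    temperatures as the feature vector (nearest-neighbour resampling
    keeps this sketch dependency-free)."""
    rows = np.linspace(0, roi.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, roi.shape[1] - 1, size[1]).astype(int)
    return roi[np.ix_(rows, cols)].ravel()
```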

Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 9730
Keywords
Human detection, Thermal images, Shape-dependent, Shape-independent, Side-accuracy, Side-efficiency
National Category
Robotics
Identifiers
urn:nbn:se:umu:diva-124428 (URN), 10.1007/978-3-319-41501-7_20 (DOI), 000386604000020 (ISI), 978-3-319-41501-7 (ISBN), 978-3-319-41500-0 (ISBN)
Conference
13th International Conference on Image Analysis and Recognition, ICIAR 2016, July 13-15, 2016, Póvoa de Varzim, Portugal
Available from: 2016-08-10. Created: 2016-08-10. Last updated: 2019-11-11. Bibliographically approved.
Ringdahl, O., Kurtser, P., Barth, R. & Edan, Y. (2016). Operational flow of an autonomous sweetpepper harvesting robot. Paper presented at The 5th Israeli Conference on Robotics 2016, 13-14 April 2016, Air Force Conference Center, Herzliya, Israel.
Operational flow of an autonomous sweetpepper harvesting robot
2016 (English). Conference paper, Oral presentation with published abstract (Refereed).
Abstract [en]

Advanced automation is required for greenhouse production systems due to the lack of a skilled workforce and increasing labour costs [1]. As part of the EU project SWEEPER, we are working on developing an autonomous robot able to harvest sweet pepper fruits in greenhouses. This paper focuses on the operational flow of the robot for high-level task planning.

In the SWEEPER project, an RGB camera is mounted on the end effector to detect fruits. Due to the dense plant rows, the camera is located at a maximum of 40 cm from the plants and hence cannot provide an overview of all fruit locations; only a few ripe fruits can be seen at each acquisition. This implies that the robot must incorporate a search pattern to look for fruits. When at least one fruit has been detected in the image, the search is aborted and a harvesting phase is initiated. This phase starts with directing the manipulator to a point close to the fruit and then activating a visual servo control loop. This motion approach ensures that the fruit is grasped despite the occlusions caused by the stems and leaves. When the manipulator has reached the fruit, it is harvested and automatically released into a container. If more fruits have already been detected, the system continues to pick them. When all detected fruits have been harvested, the system resumes the search pattern. When the search pattern is finished and no more fruits are detected, the robot base is advanced along the row to the next plant and the operations above are repeated.

To support implementation of the workflow into a program controlling the actual robot, a generic software framework for development of agricultural and forestry robots was used [2]. The framework is constructed with a hybrid robot architecture, using a state machine that implements the workflow described above.
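
The workflow maps naturally onto a small state machine; below is a minimal sketch under the assumption that sensing and actuation are stubbed out. The state names and the context dictionary are illustrative, not from the SWEEPER code base.

```python
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()      # run the search pattern, look for ripe fruits
    APPROACH = auto()    # move manipulator to a point close to a fruit
    SERVO = auto()       # visual servo loop towards the fruit
    HARVEST = auto()     # cut and release the fruit into the container
    MOVE_BASE = auto()   # advance the platform to the next plant

def step(state: State, ctx: dict) -> State:
    """One transition of the harvesting workflow described above.
    `ctx` holds the detected fruits and stub predicates for sensing."""
    if state is State.SEARCH:
        if ctx["detected_fruits"]:
            return State.APPROACH
        return State.MOVE_BASE if ctx["search_done"] else State.SEARCH
    if state is State.APPROACH:
        return State.SERVO
    if state is State.SERVO:
        return State.HARVEST if ctx["fruit_reached"] else State.SERVO
    if state is State.HARVEST:
        ctx["detected_fruits"].pop()   # fruit picked and released
        return State.APPROACH if ctx["detected_fruits"] else State.SEARCH
    # MOVE_BASE: start a fresh search at the next plant
    ctx["search_done"] = False
    return State.SEARCH

# Example: one detected fruit, servoing succeeds immediately.
ctx = {"detected_fruits": ["fruit_1"], "search_done": False, "fruit_reached": True}
state = State.SEARCH
while ctx["detected_fruits"]:
    state = step(state, ctx)
```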

National Category
Computer Vision and Robotics (Autonomous Systems)
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-121333 (URN)
Conference
The 5th Israeli Conference on Robotics 2016, 13-14 April 2016, Air Force Conference Center, Herzliya, Israel
Funder
EU, Horizon 2020, 644313
Available from: 2016-05-31. Created: 2016-05-31. Last updated: 2018-08-10.
CROPS: Clever Robots for Crops
2015 (English). In: Engineering & Technology Reference, ISSN 2056-4007, Vol. 1, no. 1. Article in journal (Refereed). Published.
Abstract [en]

In the EU-funded CROPS project, robots are developed for site-specific spraying and selective harvesting of fruit and fruit vegetables. The robots are designed to harvest crops such as greenhouse vegetables, apples, and grapes, as well as for canopy spraying in orchards and precision target spraying in grapevines. Attention is paid to the detection of obstacles for safe autonomous navigation in plantations and forests. Platforms were built for the different applications, and sensing systems and vision algorithms have been developed. The Robot Operating System is used for the software. A manipulator with nine degrees of freedom was designed and tested for sweet-pepper harvesting, apple harvesting, and close-range spraying. Different end-effectors were designed and tested for the applications. For sweet pepper, a platform was built that can move between the crop rows on the common greenhouse rail system, which also serves as heating pipes. The apple harvesting platform is based on a current mechanical grape harvester. In discussion with growers, so-called ‘walls of fruit trees’ have been designed, which bring the robots closer to practical use. A canopy-optimised sprayer has been designed as a trailed sprayer with a centrifugal blower. All the applications have been tested under practical conditions.

National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-108099 (URN), 10.1049/etr.2015.0015 (DOI)
Funder
EU, FP7, Seventh Framework Programme, 246252
Available from: 2015-09-03. Created: 2015-09-03. Last updated: 2018-06-07. Bibliographically approved.
Lindroos, O., Ringdahl, O., La Hera, P., Hohnloser, P. & Hellström, T. (2015). Estimating the position of the harvester head: a key step towards the precision forestry of the future? Croatian Journal of Forest Engineering, 36(2), 147-164.
Estimating the position of the harvester head: a key step towards the precision forestry of the future?
2015 (English). In: Croatian Journal of Forest Engineering, ISSN 1845-5719, E-ISSN 1848-9672, Vol. 36, no. 2, p. 147-164. Article in journal (Refereed). Published.
Abstract [en]

Modern harvesters are technologically sophisticated, with many useful features such as the ability to automatically measure stem diameters and lengths. This information is processed in real time to support value optimization when cutting stems into logs. It can also be transferred from the harvesters to centralized systems and used for wood supply management. Such information management systems have been available since the 1990s in Sweden and Finland, and are constantly being upgraded. However, data on the position of the harvester head relative to the machine are generally not recorded during harvesting. The routine acquisition and analysis of such data could offer several opportunities to improve forestry operations and related processes in the future. Here, we analyze the possible benefits of having this information, as well as the steps required to collect and process it. The benefits and drawbacks of different sensing technologies are discussed in terms of potential applications, accuracy, and cost. We also present the results of preliminary testing using two of the proposed methods. Our analysis indicates that an improved scope for mapping and controlling machine movement is the main benefit that is directly related to the conduct of forestry operations. In addition, there are important indirect benefits relating to ecological mapping. Our analysis suggests that both of these benefits can be realized by measuring the angles of crane joints or the locations of crane segments and using the resulting information to compute the head's position. In keeping with our findings, two companies have recently introduced sensor-equipped crane solutions.
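
Computing the head position from crane-joint angles is a forward-kinematics calculation. Below is a minimal planar sketch for a two-segment crane; real harvester cranes have more joints, including telescopic extension, and the segment lengths here are illustrative assumptions.

```python
from math import cos, sin

def head_position(theta1: float, theta2: float,
                  l1: float = 5.0, l2: float = 3.0):
    """Planar forward kinematics: position of the harvester head
    relative to the crane base, given the boom angle `theta1` and the
    relative arm angle `theta2` (radians). Segment lengths are in
    metres and purely illustrative."""
    x = l1 * cos(theta1) + l2 * cos(theta1 + theta2)
    z = l1 * sin(theta1) + l2 * sin(theta1 + theta2)
    return x, z
```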

Place, publisher, year, edition, pages
Zagreb, Croatia: Croatian Journal of Forest Engineering, 2015
Keywords
boom tip control, automation, ALS, sensors, harvester data
National Category
Forest Science; Robotics; Computer Vision and Robotics (Autonomous Systems)
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:umu:diva-109881 (URN), 000363907900001 (ISI)
Available from: 2015-10-08. Created: 2015-10-08. Last updated: 2018-06-07. Bibliographically approved.
Bontsema, J., Hemming, J., Pekkeriet, E., Saeys, W., Edan, Y., Shapiro, A., . . . Ringdahl, O. (2014). CROPS: high tech agricultural robots. Paper presented at AgEng, International Conference of Agricultural Engineering, Zurich, July 6-10, 2014. The European Society of Agricultural Engineers.
CROPS: high tech agricultural robots
2014 (English). Conference paper, Published paper (Other academic).
Place, publisher, year, edition, pages
The European Society of Agricultural Engineers, 2014
National Category
Robotics
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:umu:diva-95232 (URN)
Conference
AgEng, International Conference of Agricultural Engineering, Zurich, July 6-10, 2014
Projects
CROPS
Funder
EU, FP7, Seventh Framework Programme, 246252
Available from: 2014-10-24. Created: 2014-10-24. Last updated: 2019-06-25. Bibliographically approved.
Barth, R., Baur, J., Buschmann, T., Edan, Y., Hellström, T., Nguyen, T., . . . Vitzrabin, E. (2014). Using ROS for agricultural robotics: design considerations and experiences. In: Pablo Gonzalez-de-Santos and Angela Ribeiro (Eds.), RHEA-2014. Paper presented at the Second International Conference on Robotics and associated High-technologies and Equipment for Agriculture and forestry, RHEA-2014, May 21-23, 2014, Madrid, Spain (pp. 509-518).
Using ROS for agricultural robotics: design considerations and experiences
2014 (English). In: RHEA-2014 / [ed] Pablo Gonzalez-de-Santos and Angela Ribeiro, 2014, p. 509-518. Conference paper, Published paper (Refereed).
Abstract [en]

We report on experiences of using the ROS middleware for development of agricultural robots. We describe software-related design considerations for all main components in the developed subsystems, as well as drawbacks and advantages of the chosen approaches. This work was partly funded by the European Commission (CROPS GA no 246252).
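
The paper presents design considerations rather than code, but the basic building block of any ROS-based design is the node. Below is a minimal rospy sketch of a sensor-publishing node; the node name, topic, publish rate, and sensor stub are illustrative assumptions, not from the paper.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32

def read_sensor():
    """Stub standing in for a real sensor driver."""
    return 0.4  # metres

def run():
    rospy.init_node("fruit_distance_sensor")     # register with the ROS master
    pub = rospy.Publisher("/crops/fruit_distance", Float32, queue_size=10)
    rate = rospy.Rate(10)                        # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(Float32(data=read_sensor())) # other nodes subscribe here
        rate.sleep()

if __name__ == "__main__":
    try:
        run()
    except rospy.ROSInterruptException:
        pass
```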

Keywords
Agriculture, Robotics, ROS, Software development
National Category
Robotics
Research subject
Computer and Systems Sciences
Identifiers
urn:nbn:se:umu:diva-89441 (URN)
Conference
Second International Conference on Robotics and associated High-technologies and Equipment for Agriculture and forestry, RHEA-2014, May 21-23, 2014, Madrid, Spain.
Funder
EU, FP7, Seventh Framework Programme, 246252
Available from: 2014-06-02. Created: 2014-06-02. Last updated: 2018-06-07. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-4600-8652
