Ostovar, Ahmad
Publications (9 of 9)
Ostovar, A., Talbot, B., Puliti, S., Astrup, R. & Ringdahl, O. (2019). Detection and classification of Root and Butt-Rot (RBR) in Stumps of Norway Spruce Using RGB Images and Machine Learning. Sensors, 19(7), Article ID 1579.
Detection and classification of Root and Butt-Rot (RBR) in Stumps of Norway Spruce Using RGB Images and Machine Learning
2019 (English) In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 19, no. 7, article id 1579. Journal article (Refereed) Published
Abstract [en]

Root and butt-rot (RBR) has a significant impact on both the material and economic outcome of timber harvesting, and therewith on the individual forest owner and collectively on the forest and wood processing industries. An accurate recording of the presence of RBR during timber harvesting would enable a mapping of the location and extent of the problem, providing a basis for evaluating spread in a climate anticipated to enhance pathogenic growth in the future. Therefore, a system to automatically identify and detect the presence of RBR would constitute an important contribution to addressing the problem without increasing workload complexity for the machine operator. In this study, we developed and evaluated an approach based on RGB images to automatically detect tree stumps and classify them as to the absence or presence of rot. Furthermore, since knowledge of the extent of RBR is valuable in categorizing logs, we also classify stumps into three classes of infestation: rot = 0%, 0% < rot < 50%, and rot ≥ 50%. In this work we used deep-learning approaches and conventional machine-learning algorithms for the detection and classification tasks. The results showed that tree stumps were detected with a precision of 95% and a recall of 80%. Using only the correct outputs (true positives) of the stump detector, stumps without and with RBR were correctly classified with accuracies of 83.5% and 77.5%, respectively. Classifying rot into three classes resulted in 79.4%, 72.4%, and 74.1% accuracy for stumps with rot = 0%, 0% < rot < 50%, and rot ≥ 50%, respectively. With some modifications, the developed algorithm could be used either during the harvesting operation to detect RBR regions on the tree stumps or as an RBR detector for post-harvest assessment of tree stumps and logs.
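The two-stage pipeline described above (detect stumps, then classify each detection) can be sketched as follows. This is a minimal illustration, not the study's implementation: the detector and classifier are generic pre-trained torchvision models standing in for the paper's networks, and the three-class head is untrained here and would need fine-tuning on labeled stump crops.

```python
# Minimal two-stage sketch: a generic detector proposes stump boxes, a small CNN
# classifies each crop into the three rot classes. Stand-in models, not the
# networks used in the study.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

ROT_CLASSES = ["rot = 0%", "0% < rot < 50%", "rot >= 50%"]

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = torchvision.models.resnet18(weights="DEFAULT")
classifier.fc = torch.nn.Linear(classifier.fc.in_features, len(ROT_CLASSES))
classifier.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def detect_and_classify(image_path: str, score_threshold: float = 0.5):
    """Return [(box, rot class)] for each confident detection in the image."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        detections = detector([transforms.ToTensor()(image)])[0]
    results = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_threshold:
            continue
        x0, y0, x1, y1 = (int(v) for v in box.tolist())
        crop = preprocess(image.crop((x0, y0, x1, y1))).unsqueeze(0)
        with torch.no_grad():
            rot_class = ROT_CLASSES[int(classifier(crop).argmax())]
        results.append(((x0, y0, x1, y1), rot_class))
    return results
```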

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
deep learning; forest harvesting; tree stumps; automatic detection and classification
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
computerized image analysis
Identifiers
urn:nbn:se:umu:diva-157716 (URN), 10.3390/s19071579 (DOI), 000465570700098 (ISI), 30939827 (PubMedID)
Project
PRECISION
Research funder
The Research Council of Norway, NFR281140
Available from: 2019-04-01 Created: 2019-04-01 Last updated: 2019-11-11 Bibliographically reviewed
Ostovar, A., Bensch, S. & Hellström, T. (2019). Natural Language Guided Object Retrieval in Images. Sensors
Natural Language Guided Object Retrieval in Images
2019 (English) In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220. Journal article (Refereed) Submitted
Abstract [en]

In this paper we propose a method for generating responses to natural language queries regarding objects and their spatial relations in given images. The responses comprise identification of objects in the image, and generation of appropriate text answering the query. The proposed method uses a pre-trained neural network (YOLO) for object detection, combined with natural language processing of the given queries. Probabilistic measures are constructed for object classes, spatial relations, and word similarity, such that the most likely grounding of the query can be determined. By computing semantic similarity, our method overcomes the problem of the limited number of object classes in pre-trained network models. At the same time, flexibility regarding the varying ways users express spatial relations is achieved. The method was implemented and evaluated by 30 test users, who considered 81.9% of the generated answers correct. The work may be applied in applications where visual input (images or video) and natural language input (speech or text) have to be related to each other. For example, processing of videos may benefit from functionality that relates audio to visual content. Urban Search and Rescue (USAR) robots are used to find people in catastrophic situations such as flooding or earthquakes. It would be very beneficial if such a robot were able to respond to verbal questions from the operator about what the robot sees with its remote cameras.
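The grounding idea, scoring detected objects against the query words and their spatial relation, can be illustrated with a toy sketch. Everything here is assumed for illustration: the detections are hypothetical YOLO-style outputs, string similarity from difflib stands in for the paper's semantic word-similarity measure, and a logistic function of horizontal offset stands in for learned spatial-relation probabilities.

```python
# Toy grounding sketch: pick the (target, landmark) detection pair that best
# matches a query such as "the mug left of the computer".
import math
from difflib import SequenceMatcher

# Hypothetical YOLO-style detections: (class label, confidence, [x0, y0, x1, y1]).
DETECTIONS = [
    ("cup", 0.91, [40, 120, 90, 180]),
    ("laptop", 0.88, [150, 100, 320, 220]),
    ("chair", 0.75, [300, 200, 420, 400]),
]

def word_sim(a: str, b: str) -> float:
    # String similarity as a stand-in for the semantic word similarity that
    # lets e.g. "mug" match the detector class "cup" in the real method.
    return SequenceMatcher(None, a, b).ratio()

def p_left_of(box_a, box_b) -> float:
    # Soft "left of" score from the horizontal offset of the box centers.
    ax, bx = (box_a[0] + box_a[2]) / 2, (box_b[0] + box_b[2]) / 2
    return 1.0 / (1.0 + math.exp(-(bx - ax) / 50.0))

def ground(target: str, landmark: str):
    """Most likely (target, landmark) objects for a 'target left of landmark' query."""
    best, best_score = None, 0.0
    for t_label, t_conf, t_box in DETECTIONS:
        for l_label, l_conf, l_box in DETECTIONS:
            if t_box is l_box:
                continue
            score = (word_sim(target, t_label) * t_conf
                     * word_sim(landmark, l_label) * l_conf
                     * p_left_of(t_box, l_box))
            if score > best_score:
                best, best_score = (t_label, l_label), score
    return best

print(ground("mug", "computer"))  # -> ('cup', 'laptop')
```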

Place, publisher, year, edition, pages
MDPI, 2019
Keywords
convolutional neural network, natural language grounding, object retrieval, spatial relations, semantic similarity
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-165065 (URN)
Available from: 2019-11-08 Created: 2019-11-08 Last updated: 2019-12-10
Ostovar, A. (2019). Object Detection and Recognition in Unstructured Outdoor Environments. (Doctoral dissertation). Umeå: Umeå University
Object Detection and Recognition in Unstructured Outdoor Environments
2019 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Computer vision and machine learning based systems are often developed to replace humans in harsh, dangerous, or tedious situations, as well as to reduce the time required to accomplish a task. Another goal is to increase performance by introducing automation to tasks such as inspections in manufacturing applications, sorting timber during harvesting, surveillance, fruit grading, yield prediction, and harvesting operations. Depending on the task, a variety of object detection and recognition algorithms can be applied, including both conventional and deep learning based approaches. Moreover, within the process of developing image analysis algorithms, it is essential to consider environmental challenges, e.g. illumination changes, occlusion, shadows, and divergence in colour, shape, texture, and size of objects.

The goal of this thesis is to address these challenges to support development of autonomous agricultural and forestry systems with enhanced performance and reduced need for human involvement. This thesis provides algorithms and techniques based on adaptive image segmentation for tree detection in forest environments and for yellow pepper recognition in greenhouses. For segmentation, seed point generation and a region-growing method were used to detect trees. An algorithm based on reinforcement learning was developed to detect yellow peppers. RGB and depth data were integrated and used in classifiers to detect trees, bushes, stones, and humans in forest environments. Another part of the thesis describes deep learning based approaches to detect stumps and classify the level of rot based on images.

Another major contribution of this thesis is a method using infrared images to detect humans in forest environments. To detect humans, one shape-dependent and one shape-independent method were proposed.

Algorithms to recognize the intention of humans based on hand gestures were also developed. 3D hand gestures were recognized by first detecting and tracking hands in a sequence of depth images, and then utilizing optical flow constraint equations.

The thesis also presents methods to answer human queries about objects and their spatial relation in images. The solution was developed by merging a deep learning based method for object detection and recognition with natural language processing techniques.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2019. p. 88
Series
Report / UMINF, ISSN 0348-0542 ; 19.08
Keywords
Computer vision, Deep Learning, Harvesting Robots, Automatic Detection and Recognition
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
computing science
Identifiers
urn:nbn:se:umu:diva-165069 (URN), 978-91-7855-147-7 (ISBN)
Public defence
2019-12-05, MA121, MIT Building, Umeå, 13:00 (English)
Available from: 2019-11-14 Created: 2019-11-08 Last updated: 2019-11-12 Bibliographically reviewed
Ostovar, A., Talbot, B., Puliti, S., Astrup, R. & Ringdahl, O. (2019). Using RGB images and machine learning to detect and classify Root and Butt-Rot (RBR) in stumps of Norway spruce. In: Simon Berg & Bruce Talbot (Ed.), Forest Operations in Response to Environmental Challenges: Proceedings of the Nordic-Baltic Conference on Operational Research (NB-NORD), June 3-5, Honne, Norway. Paper presented at NB Nord Conference: Forest Operations in Response to Environmental Challenges, Honne, Norway, June 3-5, 2019. Norsk institutt for bioøkonomi (NIBIO)
Using RGB images and machine learning to detect and classify Root and Butt-Rot (RBR) in stumps of Norway spruce
2019 (English) In: Forest Operations in Response to Environmental Challenges: Proceedings of the Nordic-Baltic Conference on Operational Research (NB-NORD), June 3-5, Honne, Norway / [ed] Simon Berg & Bruce Talbot, Norsk institutt for bioøkonomi (NIBIO), 2019. Conference paper, oral presentation with published abstract (Refereed)
Abstract [en]

Root and butt-rot (RBR) has a significant impact on both the material and economic outcome of timber harvesting. An accurate recording of the presence of RBR during timber harvesting would enable a mapping of the location and extent of the problem, providing a basis for evaluating spread in a climate anticipated to enhance pathogenic growth in the future. Therefore, a system to automatically identify and detect the presence of RBR would constitute an important contribution to addressing the problem without increasing workload complexity for the machine operator. In this study we developed and evaluated an approach based on RGB images to automatically detect tree-stumps and classify them as to the absence or presence of rot. Furthermore, since knowledge of the extent of RBR is valuable in categorizing logs, we also classify stumps into three classes of infestation: rot = 0%, 0% < rot < 50%, and rot ≥ 50%. We used deep learning approaches and conventional machine learning algorithms for the detection and classification tasks. The results showed that tree-stumps were detected with a precision of 95% and a recall of 80%. Stumps without and with root and butt-rot were correctly classified with accuracies of 83.5% and 77.5%, respectively. Classifying rot into the three classes resulted in 79.4%, 72.4%, and 74.1% accuracy, respectively. With some modifications, the algorithm developed could be used either during the harvesting operation to detect RBR regions on the tree-stumps or as an RBR detector for post-harvest assessment of tree-stumps and logs.
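The detection figures reported here (and in the journal version above) follow from the standard definitions of precision and recall; the counts in this small sketch are hypothetical, chosen only so the formulas reproduce the reported 95% and 80%.

```python
# Worked example of the reported detection metrics with hypothetical counts.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

tp, fp, fn = 76, 4, 19   # 76 stumps found, 4 false alarms, 19 stumps missed
print(f"precision = {precision(tp, fp):.0%}")  # precision = 95%
print(f"recall    = {recall(tp, fn):.0%}")     # recall    = 80%
```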

Place, publisher, year, edition, pages
Norsk institutt for bioøkonomi (NIBIO), 2019
Series
NIBIO Bok, E-ISSN 2464-1189 ; 5(6) 2019
National subject category
Forest Science; Robotics and Automation; Signal Processing; Computer Vision and Robotics (Autonomous Systems)
Research subject
computer and systems sciences
Identifiers
urn:nbn:se:umu:diva-159977 (URN), 978-82-17-02339-5 (ISBN)
Conference
NB Nord Conference: Forest Operations in Response to Environmental Challenges, Honne, Norway, June 3-5, 2019
Research funder
The Research Council of Norway, NFR281140
Available from: 2019-06-11 Created: 2019-06-11 Last updated: 2020-02-05 Bibliographically reviewed
Ostovar, A., Ringdahl, O. & Hellström, T. (2018). Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot. Robotics, 7(1), Article ID 11.
Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot
2018 (English) In: Robotics, E-ISSN 2218-6581, Vol. 7, no. 1, article id 11. Journal article (Refereed) Published
Abstract [en]

The presented work is part of the H2020 project SWEEPER, with the overall goal to develop a sweet pepper harvesting robot for use in greenhouses. As part of the solution, visual servoing is used to direct the manipulator towards the fruit. This requires accurate and stable fruit detection based on video images. To segment an image into background and foreground, thresholding techniques are commonly used. The varying illumination conditions in the unstructured greenhouse environment often cause shadows and overexposure. Furthermore, the color of the fruits to be harvested varies over the season. All this makes it sub-optimal to use fixed pre-selected thresholds. In this paper we suggest an adaptive, image-dependent thresholding method. A variant of reinforcement learning (RL) is used, with a reward function that computes the similarity between the segmented image and the labeled image to give feedback for action selection. The RL-based approach requires less computational resources than exhaustive search, which is used as a benchmark, and results in higher performance compared to a Lipschitzian-based optimization approach. The proposed method also requires fewer labeled images compared to other methods. Several exploration-exploitation strategies are compared, and the results indicate that the Decaying Epsilon-Greedy algorithm gives the highest performance for this task. The highest performance with the Epsilon-Greedy algorithm (ϵ = 0.7) reached 87% of the performance achieved by exhaustive search, with 50% fewer iterations than the benchmark. The performance increased to 91.5% using the Decaying Epsilon-Greedy algorithm, with 73% fewer iterations than the benchmark.
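A minimal sketch of the idea, under assumptions that simplify the paper's setup: a single stateless (bandit-style) Q-learning agent chooses among discrete thresholds with a decaying epsilon-greedy policy, and the reward is the intersection-over-union between the thresholded image and a labeled mask. The actual SWEEPER method's state/action design and similarity measure may differ.

```python
# Bandit-style Q-learning with a decaying epsilon-greedy policy that selects a
# global threshold; reward = IoU between thresholded image and labeled mask.
import random

import numpy as np

THRESHOLDS = np.linspace(0.05, 0.95, 19)  # discrete candidate thresholds

def iou_reward(image: np.ndarray, mask: np.ndarray, t: float) -> float:
    seg = image > t
    union = np.logical_or(seg, mask).sum()
    return np.logical_and(seg, mask).sum() / union if union else 0.0

def train(images, masks, episodes=300, alpha=0.2, eps=1.0, decay=0.98):
    q = np.zeros(len(THRESHOLDS))
    for ep in range(episodes):
        i = ep % len(images)
        # Decaying epsilon-greedy: explore a lot early, exploit later.
        a = random.randrange(len(THRESHOLDS)) if random.random() < eps else int(q.argmax())
        r = iou_reward(images[i], masks[i], THRESHOLDS[a])
        q[a] += alpha * (r - q[a])  # incremental value update
        eps *= decay
    return THRESHOLDS[int(q.argmax())]

# Toy data: labels are exactly "pixels brighter than 0.7", so the agent
# should converge on a threshold near 0.7.
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(5)]
masks = [im > 0.7 for im in images]
print("selected threshold:", train(images, masks))
```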

Place, publisher, year, edition, pages
MDPI, 2018
Keywords
reinforcement learning, Q-Learning, image thresholding, ϵ-greedy strategies
National subject category
Computer Vision and Robotics (Autonomous Systems)
Research subject
computerized image analysis
Identifiers
urn:nbn:se:umu:diva-144513 (URN), 10.3390/robotics7010011 (DOI), 000432680200008 (ISI)
Research funder
EU, Horizon 2020, 644313
Available from: 2018-02-05 Created: 2018-02-05 Last updated: 2019-11-11 Bibliographically reviewed
Ostovar, A., Hellström, T. & Ringdahl, O. (2016). Human Detection Based on Infrared Images in Forestry Environments. In: Image Analysis and Recognition (ICIAR 2016): 13th International Conference, ICIAR 2016, in Memory of Mohamed Kamel, Póvoa de Varzim, Portugal, July 13-15, 2016, Proceedings. Paper presented at 13th International Conference on Image Analysis and Recognition, ICIAR 2016, July 13-15, 2016, Póvoa de Varzim, Portugal (pp. 175-182).
Human Detection Based on Infrared Images in Forestry Environments
2016 (English) In: Image Analysis and Recognition (ICIAR 2016): 13th International Conference, ICIAR 2016, in Memory of Mohamed Kamel, Póvoa de Varzim, Portugal, July 13-15, 2016, Proceedings, 2016, pp. 175-182. Conference paper, published paper (Refereed)
Abstract [en]

It is essential to have a reliable system to detect humans in close range of forestry machines, so that cutting or carrying operations can be stopped before any harm is done. Due to the lighting conditions and high occlusion from the vegetation, human detection using RGB cameras is difficult. This paper introduces two human detection methods for forestry environments using a thermal camera: one shape-dependent and one shape-independent approach. Our segmentation algorithm estimates the location of the human by extracting vertical and horizontal borders of regions of interest (ROIs). Based on the segmentation results, features such as the ratio of height to width and the location of the hottest spot are extracted for the shape-dependent method. For the shape-independent method, all extracted ROIs are resized to the same size, and the pixel values (temperatures) are used as a set of features. The features from both methods are fed into different classifiers and the results are evaluated using side-accuracy and side-efficiency. The results show that by using shape-independent features, based on three consecutive frames, we reach a precision of 80% and a recall of 76%.
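The two feature types can be illustrated with a short sketch. The details (ROI representation, grid size, normalization) are assumptions for illustration, not the paper's exact feature extraction.

```python
# Sketch of shape-dependent features (aspect ratio, hottest-spot location) and
# shape-independent features (resampled raw temperatures) from a thermal ROI.
import numpy as np

def shape_dependent_features(roi: np.ndarray) -> np.ndarray:
    """Aspect ratio plus normalized position of the hottest spot in the ROI."""
    h, w = roi.shape
    hot_y, hot_x = np.unravel_index(np.argmax(roi), roi.shape)
    return np.array([h / w, hot_y / h, hot_x / w])

def shape_independent_features(roi: np.ndarray, size=(16, 8)) -> np.ndarray:
    """Resample every ROI to a common grid; pixel temperatures are the features."""
    ys = np.linspace(0, roi.shape[0] - 1, size[0]).astype(int)
    xs = np.linspace(0, roi.shape[1] - 1, size[1]).astype(int)
    return roi[np.ix_(ys, xs)].ravel()

# Toy thermal ROI (degrees C): warm body with the hottest spot near the top.
roi = np.full((40, 20), 18.0)
roi[2:6, 8:12] = 36.5
print(shape_dependent_features(roi))          # [2.0, 0.05, 0.4]
print(shape_independent_features(roi).shape)  # (128,)
```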

Series
Lecture Notes in Computer Science, ISSN 0302-9743 ; 9730
Keywords
Human detection, Thermal images, Shape-dependent, Shape-independent, Side-accuracy, Side-efficiency
National subject category
Robotics and Automation
Identifiers
urn:nbn:se:umu:diva-124428 (URN), 10.1007/978-3-319-41501-7_20 (DOI), 000386604000020 (ISI), 978-3-319-41501-7 (ISBN), 978-3-319-41500-0 (ISBN)
Conference
13th International Conference on Image Analysis and Recognition, ICIAR 2016, July 13-15, 2016, Póvoa de Varzim, Portugal
Available from: 2016-08-10 Created: 2016-08-10 Last updated: 2019-11-11 Bibliographically reviewed
Abedan Kondori, F., Yousefi, S., Ostovar, A., Liu, L. & Li, H. (2014). A Direct Method for 3D Hand Pose Recovery. In: 22nd International Conference on Pattern Recognition. Paper presented at 22nd International Conference on Pattern Recognition (ICPR), 24–28 August 2014, Stockholm, Sweden (pp. 345-350).
A Direct Method for 3D Hand Pose Recovery
2014 (English) In: 22nd International Conference on Pattern Recognition, 2014, pp. 345-350. Conference paper, published paper (Refereed)
Abstract [en]

This paper presents a novel approach for performing intuitive 3D gesture-based interaction using depth data acquired by Kinect. Unlike current depth-based systems that focus only on the classical gesture recognition problem, we also consider 3D gesture pose estimation for creating immersive gestural interaction. In this paper, we formulate the gesture-based interaction system as a combination of two separate problems: gesture recognition and gesture pose estimation. We focus on the second problem and propose a direct method for recovering hand motion parameters. Based on the range images, a new version of the optical flow constraint equation is derived, which can be utilized to directly estimate 3D hand motion without the need to impose other constraints. Our experiments illustrate that the proposed approach performs properly in real time with high accuracy. As a proof of concept, we demonstrate the system performance in 3D object manipulation. This application is intended to explore the system capabilities in real-time biomedical applications. Finally, a system usability test is conducted to evaluate the learnability, user experience, and interaction quality of 3D interaction in comparison to 2D touch-screen interaction.
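For context, one standard starting point for such a derivation is the classical range-flow constraint, the depth-image analogue of the brightness-constancy optical flow equation; the paper's own variant may differ in form.

```latex
% Classical range-flow constraint for a depth map Z(x, y, t).
% (U, V, W) is the 3D velocity of a surface point; Z_x, Z_y, Z_t are partial
% derivatives of the depth map.
\[
  Z_x\,U + Z_y\,V - W + Z_t = 0
\]
% Every pixel in the hand region contributes one linear equation in (U, V, W),
% so the motion parameters can be recovered by least squares over the region.
```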

Series
International Conference on Pattern Recognition, ISSN 1051-4651
National subject category
Other Electrical Engineering and Electronics
Identifiers
urn:nbn:se:umu:diva-108475 (URN), 10.1109/ICPR.2014.68 (DOI), 000359818000057 (ISI), 978-1-4799-5208-3 (ISBN)
Conference
22nd International Conference on Pattern Recognition (ICPR), 24–28 August 2014, Stockholm, Sweden
Available from: 2015-09-14 Created: 2015-09-11 Last updated: 2019-11-11 Bibliographically reviewed
Hellström, T. & Ostovar, A. (2014). Detection of Trees Based on Quality Guided Image Segmentation. In: Pablo Gonzalez-de-Santos and Angela Ribeiro (Ed.), Second International Conference on Robotics and associated High-technologies and Equipment for Agriculture and forestry (RHEA-2014): New trends in mobile robotics, perception and actuation for agriculture and forestry. Paper presented at Second International Conference on Robotics and associated High-technologies and Equipment for Agriculture and forestry (RHEA-2014) (pp. 531-540). RHEA Consortium
Detection of Trees Based on Quality Guided Image Segmentation
2014 (English) In: Second International Conference on Robotics and associated High-technologies and Equipment for Agriculture and forestry (RHEA-2014): New trends in mobile robotics, perception and actuation for agriculture and forestry / [ed] Pablo Gonzalez-de-Santos and Angela Ribeiro, RHEA Consortium, 2014, pp. 531-540. Conference paper, published paper (Refereed)
Abstract [en]

Detection of objects is crucial for any autonomous field robot or vehicle. Typically, object detection is used to avoid collisions when navigating, but detection capability is essential also for autonomous or semi-autonomous object manipulation such as automatic gripping of logs with harvester cranes used in forestry. In the EU-financed project CROPS, special focus is given to detection of trees, bushes, humans, and rocks in forest environments. In this paper we address the specific problem of identifying trees using color images. The presented method combines algorithms for seed point generation and segmentation similar to region growing. Both algorithms are tailored by heuristics for the specific task of tree detection. Seed points are generated by scanning a vertically compressed hue matrix for outliers. Each one of these seed points is then used to segment the entire image into segments with pixels similar to a small surrounding around the seed point. All generated segments are refined by a series of morphological operations, taking into account the predominantly vertical nature of trees. The refined segments are evaluated by a heuristically designed quality function. For each seed point, the segment with the highest quality is selected among all segments that cover the seed point. The set of all selected segments constitutes the identified tree objects in the image. The method was evaluated with images containing in total 197 trees, collected in forest environments in northern Sweden. In this preliminary evaluation, precision in detection was 81% and the recall rate 87%.
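The two stages, seed point generation from a vertically compressed hue matrix and region growing around each seed, can be sketched as below. The specific choices (z-score outlier test, seed row placement, 4-connected growth with a fixed hue tolerance) are illustrative assumptions, not the paper's tailored heuristics.

```python
# Sketch: seeds from outliers in column-wise mean hue, then region growing.
from collections import deque

import numpy as np

def seed_points(hue: np.ndarray, z_thresh: float = 2.0):
    """Seeds in columns whose mean hue deviates strongly from the image mean."""
    col_means = hue.mean(axis=0)  # vertical compression of the hue matrix
    z = (col_means - col_means.mean()) / (col_means.std() + 1e-9)
    mid_row = hue.shape[0] // 2   # place seeds at mid-height (assumption)
    return [(mid_row, int(c)) for c in np.where(np.abs(z) > z_thresh)[0]]

def grow_region(hue: np.ndarray, seed, tol: float = 0.05) -> np.ndarray:
    """4-connected region of pixels whose hue is close to the seed pixel's hue."""
    h, w = hue.shape
    target = hue[seed]
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(hue[ny, nx] - target) < tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Toy image: a 3-pixel-wide "trunk" whose hue differs from the background.
hue = 0.30 + np.random.default_rng(1).normal(0.0, 0.01, (60, 40))
hue[:, 10:13] = 0.08
for seed in seed_points(hue):
    print(seed, int(grow_region(hue, seed).sum()), "pixels")
```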

Place, publisher, year, edition, pages
RHEA Consortium, 2014
Keywords
Seed point, Image segmentation, Region growing
National subject category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
urn:nbn:se:umu:diva-93290 (URN), 978-84-697-0248-2 (ISBN)
Conference
Second International Conference on Robotics and associated High-technologies and Equipment for Agriculture and forestry (RHEA-2014)
Research funder
EU, FP7, Seventh Framework Programme, 246252
Available from: 2014-09-15 Created: 2014-09-15 Last updated: 2019-11-11 Bibliographically reviewed
Pordel, M., Hellström, T. & Ostovar, A. (2012). Integrating Kinect depth data with a stochastic object classification framework for forestry robots. In: Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics: Volume 2. Paper presented at 9th International Conference on Informatics in Control, Automation and Robotics, 28-31 July 2012, Rome, Italy (pp. 314-320). SciTePress
Integrating Kinect depth data with a stochastic object classification framework for forestry robots
2012 (English) In: Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics: Volume 2, SciTePress, 2012, pp. 314-320. Conference paper, published paper (Other academic)
Place, publisher, year, edition, pages
SciTePress, 2012
National subject category
Robotics and Automation
Identifiers
urn:nbn:se:umu:diva-71443 (URN)
Conference
9th International Conference on Informatics in Control, Automation and Robotics, 28-31 July 2012, Rome, Italy
Available from: 2013-05-29 Created: 2013-05-29 Last updated: 2019-11-11 Bibliographically reviewed