
umu.se Publications
1 - 28 of 28
  • 1.
    Arad, Boaz
    et al.
    Department of Computer Science, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Balendonck, Jos
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Barth, Ruud
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Ben-Shahar, Ohad
    Department of Computer Science, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hemming, Jochen
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben‐Gurion University of the Negev, Beer‐Sheva, Israel.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Tielen, Toon
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    van Tuijl, Bart
    Greenhouse Horticulture, Wageningen University & Research, Wageningen, The Netherlands.
    Development of a sweet pepper harvesting robot (2020). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 37, no 6, p. 1027-1039. Article in journal (Refereed)
    Abstract [en]

    This paper presents the development, testing and validation of SWEEPER, a robot for harvesting sweet pepper fruit in greenhouses. The robotic system includes a six-degree-of-freedom industrial arm equipped with a specially designed end effector, an RGB-D camera, a high-end computer with graphics processing unit, programmable logic controllers, other electronic equipment, and a small container to store harvested fruit. All components are mounted on a cart that autonomously drives on pipe rails and on the concrete floor in the end-user environment. The overall operation of the harvesting robot is described along with details of the algorithms for fruit detection and localization, grasp pose estimation, and motion control. The main contributions of this paper are the integrated system design and its validation and extensive field testing in a commercial greenhouse for different varieties and growing conditions. A total of 262 fruits were involved in a 4-week-long testing period. The average cycle time to harvest a fruit was 24 s. Logistics took approximately 50% of this time (7.8 s for discharge of fruit and 4.7 s for platform movements). Laboratory experiments have shown that the cycle time can be reduced to 15 s by running the robot manipulator at a higher speed. The harvest success rates were 61% for the best-fit crop conditions and 18% in current crop conditions. This reveals the importance of finding the best-fit crop conditions and crop varieties for successful robotic harvesting. The SWEEPER robot is the first sweet pepper harvesting robot to demonstrate this kind of performance in a commercial greenhouse.

  • 2.
    Arad, Boaz
    et al.
    Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Barnea, Ehud
    Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Harel, Ben
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Ben-Shahar, Ohad
    Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Controlled Lighting and Illumination-Independent Target Detection for Real-Time Cost-Efficient Applications: The Case Study of Sweet Pepper Robotic Harvesting (2019). In: Sensors, E-ISSN 1424-8220, Vol. 19, no 6, article id 1390. Article in journal (Refereed)
    Abstract [en]

    Current harvesting robots are limited by low detection rates due to the unstructured and dynamic nature of both the objects and the environment. State-of-the-art algorithms include color- and texture-based detection, which are highly sensitive to the illumination conditions. Deep learning algorithms promise robustness at the cost of significant computational resources and the requirement for intensive databases. In this paper we present a Flash-No-Flash (FNF) controlled illumination acquisition protocol that frees the system from most ambient illumination effects and facilitates robust target detection while using only modest computational resources and no supervised training. The approach relies on the acquisition of two images of the same scene, with and without strong artificial lighting ("Flash"/"no-Flash"). The difference between these images represents the appearance of the target scene as if only the artificial light was present, allowing tight control over ambient light for color-based detection. A performance evaluation database was acquired in greenhouse conditions using an eye-in-hand RGB camera mounted on a robotic manipulator. The database includes 156 scenes with 468 images containing a total of 344 yellow sweet peppers. The performance of both color-blob and deep-learning detection algorithms is compared on Flash-only and FNF images. The collected database is made public.

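    A minimal sketch of the Flash-No-Flash differencing idea described in the abstract above, assuming OpenCV-style BGR images; the synthetic frames, HSV range for yellow fruit, and blob-size cutoff are illustrative placeholders, not values from the paper.

        import cv2
        import numpy as np

        # Placeholder frames standing in for the real "Flash" / "no-Flash" acquisitions
        # (in practice these would be two images of the same scene, e.g. via cv2.imread).
        no_flash = np.full((480, 640, 3), 90, dtype=np.uint8)        # ambient light only
        flash = no_flash.copy()
        flash[200:260, 300:380] = (40, 220, 230)                     # a yellow fruit lit by the flash (BGR)

        # The Flash-No-Flash difference approximates the scene as lit only by the
        # artificial light, suppressing most ambient-illumination effects.
        fnf = cv2.subtract(flash, no_flash)

        # Simple color-blob detection on the FNF image (illustrative HSV range for yellow).
        hsv = cv2.cvtColor(fnf, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))

        # Keep sufficiently large connected components as fruit candidates.
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        candidates = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 500]
        print(f"{len(candidates)} yellow fruit candidate(s) detected")
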
  • 3.
    Gupta, Himanshu
    et al.
    Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Andreasson, Henrik
    Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Lilienthal, Achim J.
    Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden; Perception for Intelligent Systems, Technical University of Munich, Munich, Germany.
    Kurtser, Polina
    Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics. Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Robust scan registration for navigation in forest environment using low-resolution LiDAR sensors (2023). In: Sensors, E-ISSN 1424-8220, Vol. 23, no 10, article id 4736. Article in journal (Refereed)
    Abstract [en]

    Automated forest machines are becoming important due to human operators' complex and dangerous working conditions, leading to a labor shortage. This study proposes a new method for robust SLAM and tree mapping using low-resolution LiDAR sensors in forestry conditions. Our method relies on tree detection to perform scan registration and pose correction using only low-resolution LiDAR sensors (16Ch, 32Ch) or narrow field-of-view solid-state LiDARs without additional sensory modalities like GPS or IMU. We evaluate our approach on three datasets, including two private and one public dataset, and demonstrate improved navigation accuracy, scan registration, tree localization, and tree diameter estimation compared to current approaches in forestry machine automation. Our results show that the proposed method yields robust scan registration using detected trees, outperforming generalized feature-based registration algorithms like Fast Point Feature Histogram, with a reduction in RMSE of more than 3 m for the 16-channel LiDAR sensor. For the solid-state LiDAR, the algorithm achieves a similar RMSE of 3.7 m. Additionally, our adaptive pre-processing and heuristic approach to tree detection increased the number of detected trees by 13% compared to the current approach of using fixed radius search parameters for pre-processing. Our automated tree trunk diameter estimation method yields a mean absolute error of 4.3 cm (RMSE = 6.5 cm) for the local map and complete trajectory maps.

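    The abstract above mentions tree detection and trunk diameter estimation from LiDAR points. Below is a minimal sketch of one common ingredient, an algebraic (Kåsa) least-squares circle fit to the horizontal cross-section of a trunk point cluster; the synthetic points and noise level are made up, and the clustering step that would precede this in practice is assumed. This is not the paper's estimation pipeline.

        import numpy as np

        def fit_circle(points_xy: np.ndarray):
            """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, radius)."""
            x, y = points_xy[:, 0], points_xy[:, 1]
            A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
            b = x ** 2 + y ** 2
            (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
            return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

        # Hypothetical trunk cross-section: points on a 0.15 m radius circle plus noise.
        rng = np.random.default_rng(0)
        angles = rng.uniform(0, 2 * np.pi, 200)
        pts = np.column_stack([0.15 * np.cos(angles), 0.15 * np.sin(angles)])
        pts += rng.normal(0, 0.01, pts.shape)

        cx, cy, r = fit_circle(pts)
        print(f"estimated trunk diameter: {2 * r:.3f} m")
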
  • 4.
    Gupta, Himanshu
    et al.
    Centre for Applied Autonomous Sensor Systems, School of Science and Technology, Örebro University, Örebro, Sweden.
    Lilienthal, Achim J.
    Centre for Applied Autonomous Sensor Systems, School of Science and Technology, Örebro University, Örebro, Sweden; Perception for Intelligent Systems, Technical University of Munich, Munich, Germany.
    Andreasson, Henrik
    Centre for Applied Autonomous Sensor Systems, School of Science and Technology, Örebro University, Örebro, Sweden.
    Kurtser, Polina
    Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics. Centre for Applied Autonomous Sensor Systems, School of Science and Technology, Örebro University, Örebro, Sweden.
    NDT-6D for color registration in agri-robotic applications (2023). In: Journal of Field Robotics, ISSN 1556-4959, E-ISSN 1556-4967, Vol. 40, no 6, p. 1603-1619. Article in journal (Refereed)
    Abstract [en]

    Registration of point cloud data containing both depth and color information is critical for a variety of applications, including in-field robotic plant manipulation, crop growth modeling, and autonomous navigation. However, current state-of-the-art registration methods often fail in challenging agricultural field conditions due to factors such as occlusions, plant density, and variable illumination. To address these issues, we propose the NDT-6D registration method, which is a color-based variation of the Normal Distribution Transform (NDT) registration approach for point clouds. Our method computes correspondences between point clouds using both geometric and color information and minimizes the distance between these correspondences using only the three-dimensional (3D) geometric dimensions. We evaluate the method using the GRAPES3D data set collected with a commercial-grade RGB-D sensor mounted on a mobile platform in a vineyard. Results show that registration methods that rely only on depth information fail to provide quality registration for the tested data set. The proposed color-based variation outperforms state-of-the-art methods with a root mean square error (RMSE) of 1.1-1.6 cm for NDT-6D compared with 1.1-2.3 cm for other color-information-based methods and 1.2-13.7 cm for non-color-information-based methods. The proposed method is shown to be robust against noise using the TUM RGBD data set, to which noise typical of an outdoor scenario was artificially added. The relative pose error (RPE) increased 14% for our method compared to an increase of 75% for the best-performing registration method. The obtained average accuracy suggests that the NDT-6D registration method can be used for in-field precision agriculture applications, for example, crop detection, size-based maturity estimation, and growth modeling.

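    The core idea above (match in a joint geometry-plus-color space, but optimize the transform over the 3-D geometry only) can be illustrated with a simplified ICP-style step. This is not the actual NDT-6D algorithm; the color weighting and the assumption that RGB values are scaled to be comparable with the coordinates are illustrative.

        import numpy as np
        from scipy.spatial import cKDTree

        def color_aware_alignment_step(src_xyz, src_rgb, tgt_xyz, tgt_rgb, color_weight=0.5):
            """Find correspondences in a 6-D (xyz + weighted rgb) space, then estimate a
            rigid transform (R, t) from the matched 3-D coordinates only (Kabsch)."""
            tree = cKDTree(np.hstack([tgt_xyz, color_weight * tgt_rgb]))
            _, idx = tree.query(np.hstack([src_xyz, color_weight * src_rgb]))
            matched = tgt_xyz[idx]

            # Rigid alignment using geometry only.
            src_c, tgt_c = src_xyz.mean(axis=0), matched.mean(axis=0)
            H = (src_xyz - src_c).T @ (matched - tgt_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = tgt_c - R @ src_c
            return R, t
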
  • 5.
    Harel, Ben
    et al.
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    van Herck, Liesbet
    Proefstation voor de Groenteteelt, Sint-Katelijne-Waver, Belgium.
    Parmet, Yisrael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Sweet pepper maturity evaluation via multiple viewpoints color analyses (2016). Conference paper (Refereed)
    Abstract [en]

    Maturity evaluation is an important feature for selective robotic harvesting. This paper focuses on maturity evaluation derived by a color camera for a sweet pepper robotic harvester. Fruit visibility for sweet peppers is limited to 65% and multiple viewpoints are necessary to detect more than 90% of the fruit. This paper aims to determine the number of viewpoints required to determine the maturity level of a sweet pepper and the best single viewpoint. Different color-based measures to estimate the maturity level of a pepper were evaluated. Two datasets were analyzed: images of 54 yellow bell sweet peppers and 30 red peppers, both harvested at the last fruit setting; all images were taken in uniform illumination conditions with a white background. Each pepper was photographed from 5-6 viewpoints: one photo of the top of the pepper, one photo of the bottom and 3-4 photos of the pepper sides. Each pepper was manually tagged by a professional human observer as 'mature' or 'immature'. Image processing routines were implemented to extract color level measures which included different hue features. Results indicate a high correlation between the side and bottom views; the bottom view shows the best correlation (0.86) for yellow peppers, while the side view shows the best correlation (0.835) for red peppers (where the bottom view yields 0.82).

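    A minimal sketch of a hue-based maturity measure of the kind evaluated above, assuming an OpenCV BGR image and a precomputed binary fruit mask; the hue cutoff is an illustrative value, not one from the paper, and hue wrap-around for deep red fruit is ignored for simplicity.

        import cv2
        import numpy as np

        def maturity_from_hue(bgr_image: np.ndarray, fruit_mask: np.ndarray,
                              hue_cutoff: float = 35.0) -> str:
            """Classify a pepper as mature/immature from the mean hue inside the mask."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            hue = hsv[:, :, 0][fruit_mask > 0]      # OpenCV hue range is [0, 179]
            # Illustrative rule: low hue (red/yellow) -> mature, greenish hue -> immature.
            return "mature" if float(hue.mean()) < hue_cutoff else "immature"
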
  • 6.
    Herdenstam, Anders P. F.
    et al.
    Örebro University, School of Hospitality, Culinary Arts and Meal Science.
    Kurtser, Polina
    Örebro University, School of Science and Technology.
    Swahn, Johan
    Örebro University, School of Hospitality, Culinary Arts and Meal Science.
    Arunachalam, Ajay
    Örebro University, School of Science and Technology.
    Nature versus machine: a pilot study using a semi-trained culinary panel to perform sensory evaluation of robot-cultivated basil affected by mechanically induced stress (2022). In: International Journal of Gastronomy and Food Science, ISSN 1878-450X, E-ISSN 1878-4518, Vol. 29, article id 100578. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a multidisciplinary approach combining technical practices with sensory data, using sensory evaluation to optimize cultivation practices for plant production and, further, to examine how they affect nutritional content. We apply sensory evaluation to plants under mechanical stress, in this case robot-cultivated basil. Plant stress is a research field studying plants' reactions to suboptimal conditions leading to effects on growth, crop yield, and resilience to harsh environmental conditions. Some of the effects induced by mechanical stress have been shown to be beneficial, both in futuristic commercial growing paradigms (e.g., vertical farming) and in altering the plant's nutritional content. This pilot study uses established sensory methods such as Liking, Just-About-Right (JAR) and Check-All-That-Apply (CATA) to study the sensory effect of mechanical stress on cropped basil induced by a specially developed robotic platform. Three different kinds of cropped basil were evaluated: (a) mechanically stressed, robot-cultivated; (b) non-stressed, robot-cultivated from the same cropping bed (reference); and (c) commercially produced organic basil. We investigated liking, critical attributes, the sensory profile, and the use of a semi-trained culinary panel to make presumptions on consumer acceptance. The semi-trained panel consisted of 24 culinary students with experience of daily judging of sensory aspects of specific food products and cultivated crops. The underlying goal is to assess potential market aspects related to novel mechanical cultivation systems. Results show that basil cropped in a controlled robot-cultivated platform resulted in significantly better liking compared to commercially produced organic basil. Results also showed that mechanical stress had not negatively affected the sensory aspects, suggesting that potential health benefits of eating stressed plants do not come at the expense of the sensory experience.

  • 7.
    Herdenstam, Anders P. F.
    et al.
    Örebro University School of Hospitality Culinary Arts and Meal Science, Grythyttan, Sweden.
    Kurtser, Polina
    Örebro University, The School of Science and Technology, The Life Science Center-Biology, Örebro, Sweden.
    Swahn, Johan
    Örebro University Grythyttan Campus, Grythyttan, Sweden.
    Arunachalam, Ajay
    Örebro University, The School of Science and Technology, The Life Science Center-Biology, Örebro, Sweden.
    Edberg, Karl-Magnus
    Örebro University School of Hospitality Culinary Arts and Meal Science, Grythyttan, Sweden.
    Nature versus machine: Sensory evaluation of robot-cultivated basil affected by mechanically induced stress (2022). Conference paper (Other academic)
  • 8.
    Kurtser, Polina
    et al.
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Arad, Boaz
    Department of Computer Science, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Ben-Shahar, Ohad
    Department of Computer Science, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    van Bree, Milan
    Irmato Industrial Solutions Veghel B.V., Veghel, the Netherlands.
    Moonen, Joep
    Irmato Industrial Solutions Veghel B.V., Veghel, the Netherlands.
    van Tuijl, Bart
    Greenhouse Horticulture, Wageningen University and Research Centre, Wageningen, the Netherlands.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Robotic data acquisition of sweet pepper images for research and development (2016). Conference paper (Refereed)
    Abstract [en]

    A main problem limiting the development of robotic harvesters is robust fruit detection [5]. Despite intensive research conducted in identifying the fruits and their location [2,3], current fruit detection algorithms have a limited detection rate of 0.87, which is unfeasible from an economic perspective [5]. The complexity of the fruit detection task is due to the unstructured and dynamic nature of both the objects and the environment [4-6]: the fruits have inherently high variability in size, shape, texture, and location; occlusion and variable illumination conditions significantly influence the detection performance [3].

    A common practice in image processing R&D for complicated problems is the acquisition of a large database (e.g., the LabelMe open-source labeling database [1], the Oxford buildings dataset [2]). These datasets enable advances in vision algorithm development [7] and provide a benchmark for evaluating new algorithms. To the best of our knowledge, to date there is no open dataset available for R&D in image processing of agricultural objects. Evaluation of previously reported algorithms was based on limited data [5]. Previous research indicated the importance of evaluating algorithms for a wide range of sensory, crop, and environmental conditions [5].

    A robotic acquisition system and procedure were developed using a six-degree-of-freedom manipulator equipped with 3 different sensors to automatically acquire images from several viewpoints under different illumination conditions. Measurements were conducted throughout the day and at night in a commercial greenhouse and resulted in a total of 1764 images from 14 viewpoints for each scene. Additionally, drawbacks and advantages of the proposed approach as compared to other previously utilized approaches will be discussed, along with recommendations for future acquisitions.

  • 9.
    Kurtser, Polina
    et al.
    Örebro University, School of Science and Technology.
    Castro Alves, Victor
    Örebro University, School of Science and Technology.
    Arunachalam, Ajay
    Örebro University, School of Science and Technology.
    Sjöberg, Viktor
    Örebro University, School of Science and Technology.
    Hanell, Ulf
    Örebro University, School of Science and Technology.
    Hyötyläinen, Tuulia
    Örebro University, School of Science and Technology.
    Andreasson, Henrik
    Örebro University, School of Science and Technology.
    Development of novel robotic platforms for mechanical stress induction, and their effects on plant morphology, elements, and metabolism (2021). In: Scientific Reports, E-ISSN 2045-2322, Vol. 11, no 1, article id 23876. Article in journal (Refereed)
    Abstract [en]

    This research evaluates the effect on herbal crops of mechanical stress induced by two specially developed robotic platforms. The changes in plant morphology, metabolite profiles, and element content are evaluated in a series of three empirical experiments, conducted in greenhouse and CNC growing bed conditions, for the case of basil plant growth. Results show significant changes in morphological features, including shortening of overall stem length by up to 40% and inter-node distances by up to 80%, for plants treated with a robotic mechanical stress-induction protocol, compared to control groups. Treated plants showed a significant increase in element absorption, by 20-250% compared to controls, and changes in the metabolite profiles suggested an improvement in plants' nutritional profiles. These results suggest that repetitive, robotic, mechanical stimuli could be potentially beneficial for plants' nutritional and taste properties, and could be performed with no human intervention (and therefore labor cost). The changes in morphological aspects of the plant could potentially replace practices involving chemical treatment of the plants, leading to more sustainable crop production.

  • 10.
    Kurtser, Polina
    et al.
    Örebro University, School of Science and Technology.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Planning the sequence of tasks for harvesting robots (2020). In: Robotics and Autonomous Systems, ISSN 0921-8890, E-ISSN 1872-793X, Vol. 131, article id 103591. Article in journal (Refereed)
    Abstract [en]

    A methodology for planning the sequence of tasks for a harvesting robot is presented. The fruit targets are situated at unknown locations and must be detected by the robot through a sequence of sensing tasks. Once the targets are detected, the robot must execute a harvest action at each target location. The traveling salesman paradigm (TSP) is used to plan the sequence of sensing and harvesting tasks taking into account the costs of the sensing and harvesting actions and the traveling times. Sensing is planned online. The methodology is validated and evaluated in both laboratory and greenhouse conditions for a case study of a sweet pepper harvesting robot. The results indicate that planning the sequence of tasks for a sweet pepper harvesting robot results in a 12% cost reduction. Incorporating the sensing operation in the planning sequence for fruit harvesting is a new approach in fruit harvesting robots and is important for cycle time reduction. Furthermore, the sequence is re-planned as sensory information becomes available, and the costs of these new sensing operations are also considered in the planning.

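    The planning idea above can be illustrated with a toy cost model: sensing and harvesting actions are nodes, edge costs combine travel time and action time, and a sequence is found with a greedy traveling-salesman heuristic. All positions and times below are made-up placeholders, and the greedy heuristic stands in for the paper's actual TSP formulation.

        # Hypothetical tasks: (name, position along the row [m], action duration [s]).
        tasks = [("sense_A", 0.5, 3.0), ("harvest_1", 0.8, 10.0),
                 ("sense_B", 2.0, 3.0), ("harvest_2", 2.3, 10.0)]
        SPEED = 0.2  # platform speed [m/s]

        def cost(i: int, j: int) -> float:
            """Travel time between task positions plus the duration of the next action."""
            return abs(tasks[i][1] - tasks[j][1]) / SPEED + tasks[j][2]

        # Greedy nearest-neighbour sequencing starting from the first sensing task.
        remaining, order, total = set(range(1, len(tasks))), [0], tasks[0][2]
        while remaining:
            nxt = min(remaining, key=lambda j: cost(order[-1], j))
            total += cost(order[-1], nxt)
            order.append(nxt)
            remaining.remove(nxt)

        print("sequence:", [tasks[i][0] for i in order], f"- total cost {total:.1f} s")
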
  • 11.
    Kurtser, Polina
    et al.
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Statistical models for fruit detectability: spatial and temporal analyses of sweet peppers (2018). In: Biosystems Engineering, ISSN 1537-5110, E-ISSN 1537-5129, Vol. 171, p. 272-289. Article in journal (Refereed)
    Abstract [en]

    Statistical models for fruit detectability were developed to provide insights into preferable variable configurations for better robotic harvesting performance.

    The methodology includes several steps: definition of controllable and measurable variables, data acquisition protocol design, data processing, definition of performance measures, and statistical modelling procedures. Given the controllable and measurable variables, a data acquisition protocol is defined to allow adequate variation in the variables and to determine the dataset size needed to ensure significant statistical analyses. Performance measures are defined for each combination of controllable and measurable variables identified in the protocol. Descriptive statistics of the measures allow insights into preferable configurations of the controllable variables given the values of the measurable variables. The statistical model is fitted by backward-elimination Poisson regression with a log-link function. Spatial and temporal analyses are performed.

    The methodology was applied to develop statistical models for sweet pepper (Capsicum annuum) detectability and revealed the best viewpoints. 1312 images acquired from 10 to 14 viewpoints for 56 scenes were collected in commercial greenhouses, using an eye-in-hand configuration of a 6 DOF manipulator equipped with an RGB sensor and an illumination rig. Three databases from different sweet-pepper varieties were collected across different growing seasons.

    Target detectability depends strongly on the imaging acquisition distance and the sensing system tilt. A minimum of 12 training scenes is necessary to discover the statistically significant spatial variables. Better prediction was achieved at the beginning of the season, with slightly better prediction achieved using a temporal split of the training and testing sets.

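    A minimal sketch of a log-link Poisson model with backward elimination of the kind described above, using statsmodels on synthetic data; the variable names, the synthetic data-generating model, and the 0.05 cutoff are illustrative assumptions, not the paper's protocol.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Synthetic example: detected-fruit counts as a function of viewpoint variables.
        rng = np.random.default_rng(1)
        df = pd.DataFrame({
            "distance": rng.uniform(0.3, 1.0, 200),   # acquisition distance [m]
            "tilt": rng.uniform(-30, 30, 200),        # sensing system tilt [deg]
            "noise_var": rng.normal(0, 1, 200),       # deliberately irrelevant variable
        })
        df["count"] = rng.poisson(np.exp(1.5 - 1.2 * df["distance"] + 0.01 * df["tilt"]))

        # Backward elimination: drop the least significant predictor until all p < 0.05.
        predictors = ["distance", "tilt", "noise_var"]
        while True:
            X = sm.add_constant(df[predictors])
            fit = sm.GLM(df["count"], X, family=sm.families.Poisson()).fit()  # log link by default
            pvals = fit.pvalues.drop("const")
            if pvals.max() < 0.05 or len(predictors) == 1:
                break
            predictors.remove(pvals.idxmax())

        print("retained predictors:", predictors)
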
  • 12.
    Kurtser, Polina
    et al.
    Ben-Gurion University of the Negev, Department of Industrial Engineering and Management, Beer-Sheva, Israel.
    Edan, Yael
    Ben-Gurion University of the Negev, Department of Industrial Engineering and Management, Beer-Sheva, Israel.
    The use of dynamic sensing strategies to improve detection for a pepper harvesting robot (2018). In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018, p. 8286-8293. Conference paper (Refereed)
    Abstract [en]

    This paper presents the use of dynamic sensing strategies to improve detection results for a pepper harvesting robot. The algorithm decides if an additional viewpoint is needed and selects the best-fit viewpoint location from a predefined set of locations based on the predicted profitability of such an action. The suggestion of a possible additional viewpoint is based on image analysis for fruit and occlusion-level detection, prediction of the expected number of additional targets sensed from that viewpoint, and a final decision on whether choosing the additional viewpoint is beneficial. The developed heuristic was applied to 96 greenhouse images of 30 sweet peppers and resulted in up to 19% improved detection. The harvesting utility cost function decreased by up to 10% compared to the conventional single-viewpoint strategy.

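    The decision step described above can be sketched as a simple expected-utility comparison; the cost and value terms below are illustrative placeholders rather than the paper's harvesting utility function.

        def take_extra_viewpoint(expected_new_fruits: float,
                                 value_per_fruit: float = 1.0,
                                 viewpoint_cost: float = 0.3) -> bool:
            """Accept the additional viewpoint only if the expected gain from newly
            detected fruits exceeds the cost of moving the sensor and re-sensing."""
            return expected_new_fruits * value_per_fruit > viewpoint_cost

        # Example: image analysis predicts 0.5 additional fruits behind detected occlusions.
        print(take_extra_viewpoint(expected_new_fruits=0.5))   # True with these numbers
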
  • 13.
    Kurtser, Polina
    et al.
    Centre for Applied Autonomous Sensor Systems, Örebro University, 701 82 Örebro, Sweden.
    Hanell, Ulf
    Ecosystem Ecology group, Örebro University, 701 82 Örebro, Sweden.
    Andreasson, Henrik
    Centre for Applied Autonomous Sensor Systems, Örebro University, 701 82 Örebro, Sweden.
    Robotic Platform for Precise Mechanical Stress Induction in Greenhouses Cultivation (2020). In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), IEEE, 2020, p. 1558-1565. Conference paper (Refereed)
    Abstract [en]

    This paper presents an autonomous robotic platform for research on mechanically induced stress in plants growing in controlled greenhouse conditions. The platform provides a range of possibilities for mechanical stimuli including motion type, frequency, speed, and torque. The motions can be tailored for a single pot, making the study of mechanical plant stress versatile, rapid and precise. We evaluate the performance of the platform for a use-case of basil plant cultivation. An eight-week experiment was performed in greenhouse conditions on 220 basil plants. We show that the induction of mechanical stress by the platform significantly affects plant morphology, such as shortening stem length by 30-40% and inter-node length by 50-80%, while preserving leaf weight, which is the main part of the basil plant used for culinary purposes. Results also show that variations in the types of mechanical stimulus motions provide significant differences in the effect on plant morphology. Finally, we show that decreasing the mechanical stimuli frequency to rates feasible to perform manually significantly reduces the effect, stressing the need for autonomous systems capable of providing continuous stimuli during day and night. These results validate previously published findings in research on mechanical stress induction, and therefore imply that the platform can be used for research on this phenomenon.

  • 14.
    Kurtser, Polina
    et al.
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Levi, Ofer
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Gontar, Vladimir
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Detection and classification of ECG chaotic components using ANN trained by specially simulated data (2012). In: Engineering Applications of Neural Networks / [ed] Jayne, C., Yue, S., Iliadis, L., Berlin, Heidelberg: Springer, 2012, p. 193-202. Conference paper (Refereed)
    Abstract [en]

    This paper presents the use of simulated ECG signals with a known combination of chaotic and random noise components for training an Artificial Neural Network (ANN) as a classification tool for the analysis of chaotic ECG components. Preliminary results show about 85% overall accuracy in the ability to classify signals into two types of chaotic maps, logistic and Hénon. Robustness to random noise is also presented. Future research in the form of raw data analysis is proposed, and further feature analysis is needed.

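    A minimal sketch of the classification task described above: simulate short logistic-map and Hénon-map series, add random noise, and train a small neural network to tell the two apart. The series length, noise level, and network size are illustrative assumptions, not the paper's setup.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(2)

        def logistic_series(n=200, r=3.9):
            x, out = rng.uniform(0.1, 0.9), []
            for _ in range(n):
                x = r * x * (1 - x)                 # logistic map
                out.append(x)
            return np.array(out)

        def henon_series(n=200, a=1.4, b=0.3):
            x, y, out = 0.1, 0.1, []
            for _ in range(n):
                x, y = 1 - a * x ** 2 + y, b * x    # Henon map
                out.append(x)
            return np.array(out)

        # Labelled dataset of noisy simulated series (0: logistic, 1: Henon).
        X = np.stack([logistic_series() for _ in range(300)] +
                     [henon_series() for _ in range(300)])
        X += rng.normal(0, 0.05, X.shape)
        y = np.array([0] * 300 + [1] * 300)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)
        print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
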
  • 15.
    Kurtser, Polina
    et al.
    Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics. Centre for Applied Autonomous Sensor Systems, Örebro University, Sweden.
    Lowry, Stephanie
    Centre for Applied Autonomous Sensor Systems, Örebro University, Sweden.
    RGB-D datasets for robotic perception in site-specific agricultural operations: a survey (2023). In: Computers and Electronics in Agriculture, ISSN 0168-1699, E-ISSN 1872-7107, Vol. 212, article id 108035. Article in journal (Refereed)
    Abstract [en]

    Fusing color (RGB) images and range or depth (D) data in the form of RGB-D or multi-sensory setups is a relatively new but rapidly growing modality for many agricultural tasks. RGB-D data have potential to provide valuable information for many agricultural tasks that rely on perception, but collection of appropriate data and suitable ground truth information can be challenging and labor-intensive, and high-quality publicly available datasets are rare. This paper presents a survey of the existing RGB-D datasets available for agricultural robotics, and summarizes key trends and challenges in this research field. It evaluates the relative advantages of the commonly used sensors, and how the hardware can affect the characteristics of the data collected. It also analyzes the role of RGB-D data in the most common vision-based machine learning tasks applied to agricultural robotic operations: visual recognition, object detection, and semantic segmentation, and compares and contrasts methods that utilize 2-D and 3-D perceptual data.

  • 16.
    Kurtser, Polina
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Örebro University, Örebro, Sweden.
    Lowry, Stephanie
    Örebro University, Örebro, Sweden.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Advances in machine learning for agricultural robots (2024). In: Advances in agri-food robotics / [ed] Eldert van Henten; Yael Edan, Cambridge: Burleigh Dodds Science Publishing, 2024, p. 103-134. Chapter in book (Refereed)
    Abstract [en]

    This chapter presents a survey of the advances in using machine learning algorithms for agricultural robotics. The development of machine learning algorithms in the last decade has been astounding, and there has therefore been a rapid increase in the widespread deployment of machine learning algorithms in many domains, such as agricultural robotics. However, there are also major challenges to be overcome in ML for agri-robotics, due to the unavoidable complexity and variability of the operating environments, and the difficulties in accessing the required quantities of relevant training data. This chapter presents an overview of the usage of ML for agri-robotics and discusses the use of ML for data analysis and decision-making for perception and navigation. It outlines the main trends of the last decade in employed algorithms and available data. We then discuss the challenges the field is facing and ways to overcome these challenges.

  • 17.
    Kurtser, Polina
    et al.
    Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Rotstein, Nati
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Andreasson, Henrik
    Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    PointNet and geometric reasoning for detection of grape vines from single frame RGB-D data in outdoor conditions (2020). In: Proceedings of the Northern Lights Deep Learning Workshop, Septentrio Academic Publishing, 2020, Vol. 1, p. 1-6. Conference paper (Refereed)
    Abstract [en]

    In this paper we present the use of PointNet, a deep neural network that consumes raw unordered point clouds, for detection of grape vine clusters in outdoor conditions. We investigate the added value of feeding the detection network with both RGB and depth, contrary to the common practice in agricultural robotics of relying on RGB only. A total of 5057 point clouds (1033 manually annotated and 4024 annotated using geometric reasoning) were collected in a field experiment conducted in outdoor conditions on 9 grape vines and 5 plants. The detection results show an overall accuracy of 91% (average class accuracy of 74%, precision 53%, recall 48%) for RGBXYZ data and a significant drop in recall for RGB or XYZ data only. These results suggest that the use of depth cameras for vision in agricultural robotics is crucial for crops where the color contrast between the crop and the background is complex. The results also suggest that geometric reasoning can be used to increase training set size, a major bottleneck in the development of agricultural vision systems.

  • 18.
    Kurtser, Polina
    et al.
    The Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Ringdahl, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Rotstein, Nati
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Berenstein, Ron
    The Institute of Agricultural Engineering, Agricultural Research Organization, The Volcani Center, Rishon Lezion, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    In-field grape cluster size assessment for vine yield estimation using a mobile robot and a consumer level RGB-D camera (2020). In: IEEE Robotics and Automation Letters, E-ISSN 2377-3766, Vol. 5, no 2, p. 2031-2038. Article in journal (Refereed)
    Abstract [en]

    Current practice for vine yield estimation is based on RGB cameras and has limited performance. In this paper we present a method for outdoor vine yield estimation using a consumer-grade RGB-D camera mounted on a mobile robotic platform. An algorithm for automatic grape cluster size estimation using depth information is evaluated both in controlled outdoor conditions and in commercial vineyard conditions. Ten video scans (3 camera viewpoints with 2 different backgrounds and 2 natural light conditions), acquired from a controlled outdoor experiment and a commercial vineyard setup, are used for the analyses. The collected dataset (GRAPES3D) is released to the public. A total of 4542 regions of 49 grape clusters were manually labeled by a human annotator for comparison. Eight variations of the algorithm are assessed, both for manually labeled and auto-detected regions. The effects of viewpoint, presence of an artificial background, and the human annotator are analyzed using statistical tools. Results show a 2.8-3.5 cm average error for all acquired data and reveal the potential of using low-cost commercial RGB-D cameras for improved robotic yield estimation.

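    A minimal sketch of metric size estimation from an RGB-D detection using the pinhole camera model: the cluster's pixel extent is converted to meters with the median depth inside the detection and the horizontal focal length. The intrinsics, bounding box, and synthetic depth below are made-up example values, not the paper's algorithm or calibration.

        import numpy as np

        def cluster_width_m(depth_m: np.ndarray, bbox: tuple, fx: float) -> float:
            """Estimate the metric width of a detected cluster from its bounding box,
            the median depth inside the box, and the horizontal focal length [px]."""
            x0, y0, x1, y1 = bbox
            region = depth_m[y0:y1, x0:x1]
            z = np.median(region[region > 0])   # ignore missing depth readings
            return (x1 - x0) * z / fx           # pinhole model: pixels * depth / focal

        # Example with synthetic data: a 120-pixel-wide box at ~0.8 m, fx = 615 px.
        depth = np.full((480, 640), 0.8)
        print(f"{cluster_width_m(depth, (200, 150, 320, 260), fx=615.0):.3f} m")
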
  • 19.
    Levi-Bliech, Michal
    et al.
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Pliskin, Nava
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Fink, Lior
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Mobile apps and employee behavior: An empirical investigation of the implementation of a fleet-management app (2019). In: International Journal of Information Management, ISSN 0268-4012, E-ISSN 1873-4707, Vol. 49, p. 355-365. Article in journal (Refereed)
    Abstract [en]

    Whereas implementing a mobile application (app) in support of organizational processes is quite common in contemporary organizations, only a few empirical studies have investigated the impact of app implementation in an organizational context. This study explores the association between the driving behavior of employed drivers and pre-driving use of a fleet-management app. The app gives users not only real-time notifications while driving but also a unique capability that more traditional driving technologies do not provide: feedback about their driving before their next drive. We hypothesize that pre-driving app use is associated with reduced risky driving behavior, and that this association is mitigated by real-time notifications and enhanced by experience with the app. The supportive results of the study confirm the organizational impact of implementing a fleet-management app via better driving behavior of employees who engage in pre-driving app use.

  • 20.
    Levi-Bliech, Michal
    et al.
    Ben-Gurion University of the Negev.
    Kurtser, Polina
    Ben-Gurion University of the Negev.
    Pliskin, Nava
    Ben-Gurion University of the Negev.
    Fink, Lior
    Ben-Gurion University of the Negev.
    The effects of a fleet-management app on driver behavior (2018). In: 26th European Conference on Information Systems: Beyond Digitization - Facets of Socio-Technical Change, ECIS 2018 / [ed] Frank U., Kautz K., Bednar P.M., Association for Information Systems, 2018, article id 11. Conference paper (Refereed)
    Abstract [en]

    Whereas implementing a mobile application (app) in support of organizational processes is quite common in contemporary organizations, only a few empirical studies have investigated the effects of app implementation on employee behavior. This study aims to explore the effects of a fleet-management app on the behavior of drivers, in particular the extent to which they are involved in risky behavior. We hypothesize that use of the app before driving reduces such behavior, and that this effect is mitigated by the existence of notifications while driving. These hypotheses are tested with data about 11,805 trips by 109 drivers employed in a large-scale organization. The preliminary results support the research hypotheses and confirm that the implementation of a fleet-management app has an organizational impact via an app-induced change in driver behavior.

  • 21.
    Ringdahl, Ola
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Barth, Ruud
    Greenhouse Horticulture, Wageningen University and Research Centre, Wageningen, the Netherlands.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Operational flow of an autonomous sweetpepper harvesting robot (2016). Conference paper (Other academic)
    Abstract [en]

    Advanced automation is required for greenhouse production systems due to the lack of a skilled workforce and increasing labour costs [1]. As part of the EU project SWEEPER, we are working on developing an autonomous robot able to harvest sweet pepper fruits in greenhouses. This paper focuses on the operational flow of the robot for high-level task planning.

    In the SWEEPER project, an RGB camera is mounted on the end effector to detect fruits. Due to the dense plant rows, the camera is located at a maximum of 40 cm from the plants and hence cannot provide an overview of all fruit locations. Only a few ripe fruits can be seen in each acquisition. This implies that the robot must incorporate a search pattern to look for fruits. When at least one fruit has been detected in the image, the search is aborted and a harvesting phase is initiated. The phase starts with directing the manipulator to a point close to the fruit and then activating a visual servo control loop. This motion approach ensures that the fruit is grasped despite the occlusions caused by the stems and leaves. When the manipulator has reached the fruit, it is harvested and automatically released into a container. If there are more fruits that have already been detected, the system continues to pick them. When all detected fruits have been harvested, the system resumes the search pattern. When the search pattern is finished and no more fruits are detected, the robot base advances along the row to the next plant and the operations above are repeated.

    To support implementation of the workflow in a program controlling the actual robot, a generic software framework for the development of agricultural and forestry robots was used [2]. The framework is constructed with a hybrid robot architecture, using a state machine that implements this workflow.

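    A minimal sketch of the search/harvest/advance workflow described above as a state machine; the states, transition conditions, and stubbed sensing and actuation callbacks are illustrative, not the SWEEPER framework's actual interfaces.

        from enum import Enum, auto

        class State(Enum):
            SEARCH = auto()    # run the search pattern until fruits are detected
            HARVEST = auto()   # approach and pick all currently detected fruits
            ADVANCE = auto()   # move the platform to the next plant
            DONE = auto()

        def run_row(detect_fruits, pick_fruit, advance_platform, plants_in_row: int):
            """Drive the search/harvest/advance loop over one plant row."""
            state, plant, fruits = State.SEARCH, 0, []
            while state is not State.DONE:
                if state is State.SEARCH:
                    fruits = detect_fruits()            # stub: full search pattern, returns fruit poses
                    state = State.HARVEST if fruits else State.ADVANCE
                elif state is State.HARVEST:
                    for fruit in fruits:
                        pick_fruit(fruit)               # stub: visual-servo approach, cut, release
                    state = State.SEARCH                # resume searching for remaining fruits
                else:                                   # State.ADVANCE
                    plant += 1
                    if plant >= plants_in_row:
                        state = State.DONE
                    else:
                        advance_platform()              # stub: drive along the row to the next plant
                        state = State.SEARCH
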
  • 22.
    Ringdahl, Ola
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Evaluation of approach strategies for harvesting robots: case study of sweet pepper harvesting (2019). In: Journal of Intelligent and Robotic Systems, ISSN 0921-0296, E-ISSN 1573-0409, Vol. 95, no 1, p. 149-164. Article in journal (Refereed)
    Abstract [en]

    Robotic harvesters that use visual servoing must choose the best direction from which to approach the fruit to minimize occlusion and avoid obstacles that might interfere with the detection along the approach. This work proposes different approach strategies, compares them in terms of cycle times, and presents a failure analysis methodology for the different approach strategies. The different approach strategies are: in-field assessment by human observers, evaluation based on an overview image using advanced algorithms or remote human observers, or attempting multiple approach directions until the fruit is successfully reached. In the latter strategy, each attempt costs time, which is a major bottleneck in bringing harvesting robots to the market. Alternatively, a single approach strategy that only attempts one direction can be applied if the best approach direction is known a-priori. The different approach strategies were evaluated for a case study of sweet pepper harvesting in laboratory and greenhouse conditions. The first experiment, conducted in a commercial greenhouse, revealed that the fruit approach cycle time increased by 8% and 116% for reachable and unreachable fruits, respectively, when the multiple approach strategy was applied, compared to the single approach strategy. The second experiment measured human observers' ability to provide insights into approach directions based on overview images taken in both greenhouse and laboratory conditions. Results revealed that human observers are accurate in detecting unapproachable directions while they tend to miss approachable directions. By detecting fruits that are unreachable (via automatic algorithms or human operators), harvesting cycle times can be significantly shortened, leading to improved commercial feasibility of harvesting robots.

  • 23.
    Ringdahl, Ola
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel; Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Performance of RGB-D camera for different object types in greenhouse conditions (2019). In: 2019 European conference on mobile robots (ECMR): conference proceedings, September 4-6, 2019, Prague, Czech Republic / [ed] Libor Přeučil; Sven Behnke; Miroslav Kulich, IEEE, 2019, article id 8870935. Conference paper (Refereed)
    Abstract [en]

    RGB-D cameras play an increasingly important role in localization and autonomous navigation of mobile robots. Reasonably priced commercial RGB-D cameras have recently been developed for operation in greenhouse and outdoor conditions. They can be employed for different agricultural and horticultural operations such as harvesting, weeding, pruning and phenotyping. However, the depth information extracted from the cameras varies significantly between objects and sensing conditions. This paper presents an evaluation protocol applied to a commercially available Fotonic F80 time-of-flight RGB-D camera for eight different object types. A case study of autonomous sweet pepper harvesting was used as an exemplary agricultural task. Each of the objects chosen is a possible item that an autonomous agricultural robot must detect and localize to perform well. A total of 340 rectangular regions of interest (ROIs), 30-100 per object type, were marked for the extraction of performance measures of point cloud density and variability around the center of mass. An additional 570 ROIs were generated (57 manually and 513 replicated) to evaluate the repeatability and accuracy of the point cloud. A statistical analysis was performed to evaluate the significance of differences between object types. The results show that different objects have significantly different point density. As expected, metallic materials and black-colored objects had significantly lower point density compared to organic and other artificial materials introduced to the scene. The point cloud variability measures showed no significant differences between object types, except for the metallic knife, which presented significant outliers in the collected measures. The accuracy and repeatability analysis showed that 1-3 cm errors are due to the difficulty for a human of annotating exactly the same area, and up to ±4 cm error is due to the sensor not generating the exact same point cloud when sensing a fixed object.

  • 24.
    Ringdahl, Ola
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Strategies for selecting best approach direction for a sweet-pepper harvesting robot (2017). In: Towards autonomous robotic systems: 18th annual conference, TAROS 2017, Guildford, UK, July 19–21, 2017, proceedings / [ed] Yang Gao; Saber Fallah; Yaochu Jin; Constantina Lekakou, Cham: Springer, 2017, p. 516-525. Conference paper (Refereed)
    Abstract [en]

    An autonomous sweet pepper harvesting robot must perform several tasks to successfully harvest a fruit. Due to the highly unstructured environment in which the robot operates and the presence of occlusions, the current challenges are to improve the detection rate and to lower the risk of losing sight of the fruit while approaching the fruit for harvest. Therefore, it is crucial to choose the best approach direction with the least occlusion from obstacles.

    The value of ideal information regarding the best approach direction was evaluated by comparing it to a method attempting several directions until successful harvesting is performed. A laboratory experiment was conducted on artificial sweet pepper plants using a system based on an eye-in-hand configuration comprising a 6-DOF robotic manipulator equipped with an RGB camera. The performance is evaluated in laboratory conditions using both descriptive statistics of the average harvesting times and harvesting success as well as regression models. The results show a roughly 40-45% increase in average harvest time when no a-priori information on the correct harvesting direction is available, with a nearly linear increase in overall harvesting time for each failed harvesting attempt. The variability of the harvesting times grows with the number of approaches required, making them harder to predict.

    Tests show that occlusion of the front of the peppers significantly impacts the harvesting times. The major reason is the robot's limited workspace, which often makes the paths to positions at the side of the peppers significantly longer than the paths to the more open positions in front of the fruit.

  • 25.
    Seeburger, P.
    et al.
    School of Science and Technology, Örebro University, Örebro, Sweden.
    Herdenstam, Anders P. F.
    School of Hospitality, Culinary Arts and Meal Science, Örebro, Sweden.
    Kurtser, Polina
    Umeå University, Faculty of Medicine, Department of Radiation Sciences, Radiation Physics. School of Science and Technology, Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Arunachalam, A.
    School of Science and Technology, Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Castro Alves, Victor
    School of Science and Technology, Man-Technology-Environment Research Centre, Örebro University, Örebro, Sweden.
    Hyötyläinen, Tuulia
    School of Science and Technology, Man-Technology-Environment Research Centre, Örebro University, Örebro, Sweden.
    Andreasson, Henrik
    School of Science and Technology, Centre for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden.
    Controlled mechanical stimuli reveal novel associations between basil metabolism and sensory quality (2023). In: Food Chemistry, ISSN 0308-8146, E-ISSN 1873-7072, Vol. 404, no Part A, article id 134545. Article in journal (Refereed)
    Abstract [en]

    There is an increasing interest in the use of automation in plant production settings. Here, we employed a robotic platform to induce controlled mechanical stimuli (CMS) aiming to improve basil quality. Semi-targeted UHPLC-qToF-MS analysis of organic acids, amino acids, phenolic acids, and phenylpropanoids revealed changes in basil secondary metabolism under CMS, which appear to be associated with changes in taste, as revealed by different means of sensory evaluation (overall liking, check-all-that-apply, and just-about-right analysis). Further network analysis combining metabolomics and sensory data revealed novel links between plant metabolism and sensory quality. Amino acids and organic acids including maleic acid were negatively associated with basil quality, while increased levels of secondary metabolites, particularly linalool glucoside, were associated with improved basil taste. In summary, by combining metabolomics and sensory analysis we reveal the potential of automated CMS on crop production, while also providing new associations between plant metabolism and sensory quality.

  • 26.
    van Herck, Liesbet
    et al.
    Proefstation voor de Groenteteelt (PSKW), Sint-Katelijne-Waver, Belgium.
    Kurtser, Polina
    Örebro University, School of Science and Technology.
    Wittemans, Lieve
    Proefstation voor de Groenteteelt (PSKW), Sint-Katelijne-Waver, Belgium.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Crop design for improved robotic harvesting: a case study of sweet pepper harvesting (2020). In: Biosystems Engineering, ISSN 1537-5110, E-ISSN 1537-5129, Vol. 192, p. 294-308. Article in journal (Refereed)
    Abstract [en]

    Current harvesting robots have limited performance, due to the unstructured and dynamic nature of both the target crops and their environment. Efforts to date focus on improving sensing and robotic systems. This paper presents a parallel approach: to "design" the crop and its environment to best fit the robot, similar to robotic integration in industrial robot deployments.

    A systematic methodology to select and modify the crop "design" (crop and environment) to improve robotic harvesting is presented. We define crop-dependent robotic features for successful harvesting (e.g., visibility, reachability), from which associated crop features are identified (e.g., crop density, internode length). Methods to influence the crop features are derived (e.g., cultivation practices, climate control) along with a methodological approach to evaluate the proposed designs. A case study of crop "design" for robotic sweet pepper harvesting is presented, with statistical analyses of influential parameters. Since comparison of the multitude of existing crops and possible modifications is impossible due to complexity and time limitations, a sequential field experimental setup is planned. Experiments over three years, 10 cultivars, two climate control conditions, two cultivation techniques and two artificial illumination types were performed. Results showed how modifying the crop affects the crop characteristics that influence robotic harvesting, increasing visibility and reachability. The systematic crop "design" approach also led to robot design recommendations. The presented crop-"design" engineering framework highlights the importance of close synergy between crop and robot design, achieved through strong collaboration between robotics and agronomy experts, resulting in improved robotic harvesting performance.

  • 27.
    Zemmour, Elie
    et al.
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
    Automatic Parameter Tuning for Adaptive Thresholding in Fruit Detection (2019). In: Sensors, E-ISSN 1424-8220, Vol. 19, no 9, article id 2130. Article in journal (Refereed)
    Abstract [en]

    This paper presents an automatic parameter tuning procedure specially developed for a dynamic adaptive thresholding algorithm for fruit detection. One of the algorithm's major strengths is its high detection performance using a small set of training images. The algorithm enables robust detection in highly-variable lighting conditions. The image is dynamically split into variably-sized regions, where each region has approximately homogeneous lighting conditions. Nine thresholds were selected to accommodate three different illumination levels for three different dimensions in four color spaces: RGB, HSI, LAB, and NDI. Each color space uses a different method to represent a pixel in an image: RGB (Red, Green, Blue), HSI (Hue, Saturation, Intensity), LAB (Lightness, Green to Red and Blue to Yellow) and NDI (Normalized Difference Index, which represents the normalized difference between the RGB color dimensions). The thresholds were selected by quantifying the required relation between the true positive rate and the false positive rate. A tuning process was developed to determine the best-fit values of the algorithm parameters to enable easy adaptation to different kinds of fruits (shapes, colors) and environments (illumination conditions). Extensive analyses were conducted on three different databases acquired in natural growing conditions: red apples (nine images with 113 apples), green grape clusters (129 images with 1078 grape clusters), and yellow peppers (30 images with 73 peppers). These databases are provided as part of this paper for future developments. The algorithm was evaluated using cross-validation with 70% of the images for training and 30% for testing. The algorithm successfully detected apples and peppers in variable lighting conditions, resulting in an F-score of 93.17% and 99.31%, respectively. Results show the importance of the tuning process for the generalization of the algorithm to different kinds of fruits and environments. In addition, this research revealed the importance of evaluating different color spaces since, for each kind of fruit, a different color space might be superior to the others. The LAB color space is the most robust to noise. The algorithm is robust to changes in the threshold learned by the training process and to noise effects in images.

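    A minimal sketch of region-wise adaptive thresholding of the kind described above: compute a normalized-difference index from the color channels, split the image into blocks, and pick a per-block threshold according to the block's brightness. The block size, index definition, and threshold table are illustrative assumptions, not the tuned parameters from the paper.

        import numpy as np

        def ndi(rgb: np.ndarray) -> np.ndarray:
            """Normalized difference between the red and green channels, in [-1, 1]."""
            r, g = rgb[..., 0].astype(float), rgb[..., 1].astype(float)
            return (r - g) / (r + g + 1e-6)

        def detect_fruit_mask(rgb: np.ndarray, block: int = 64) -> np.ndarray:
            """Threshold the NDI per block, choosing the threshold by mean block brightness."""
            index, brightness = ndi(rgb), rgb.mean(axis=2)
            mask = np.zeros(index.shape, dtype=bool)
            thresholds = {"dark": 0.10, "medium": 0.20, "bright": 0.30}   # illustrative
            for y in range(0, index.shape[0], block):
                for x in range(0, index.shape[1], block):
                    tile = (slice(y, y + block), slice(x, x + block))
                    level = brightness[tile].mean()
                    key = "dark" if level < 85 else "medium" if level < 170 else "bright"
                    mask[tile] = index[tile] > thresholds[key]
            return mask
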
  • 28.
    Zemmour, Elie
    et al.
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Kurtser, Polina
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Edan, Yael
    Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva, Israel.
    Dynamic thresholding algorithm for robotic apple detection (2017). In: 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), IEEE, 2017, p. 240-246. Conference paper (Refereed)
    Abstract [en]

    This paper presents a dynamic thresholding algorithm for robotic apple detection. The algorithm enables robust detection in highly variable lighting conditions. The image is dynamically split into variable-sized regions, where each region has approximately homogeneous lighting conditions. Nine thresholds were selected so as to accommodate three different illumination levels for three different dimensions in the natural difference index (NDI) space by quantifying the required relation between the true positive rate (TPR) and the false positive rate (FPR). This relation can change along the robotic harvesting process, aiming to decrease the FPR in far views (to minimize cycle times) and to increase the TPR in close views (to increase grasping accuracy). Analyses were conducted on apple images acquired in outdoor conditions. The algorithm improved previously reported results and achieved a 91.14% TPR with a 3.05% FPR using the NDI first dimension and a noise removal process.
