umu.se Publications
1 - 8 of 8
  • 1.
    Pordel, Mostafa
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A component based architecture to improve testability, targeted FPGA-Based Vision systems, 2011. In: 2011 International Conference on System Modeling and Optimization (ICSMO 2011), 2011. Conference paper (Refereed)
    Abstract [en]

    FPGAs have been used in many robotics projects for real-time image processing. They provide reliable systems with low execution time and simplified timing analysis. Many of these systems spend a lot of time in the development and testing phases. In some cases, it is not possible to test the system in real environments very often, due to accessibility, availability or cost problems. This paper is the result of a case study on vision systems for two robotics projects in which the vision team consisted of seven students working full time for six months on developing and implementing different image algorithms. While an FPGA has been used for real-time image processing, several steps have been taken to shorten the development and testing phases. The main focus of the project is to integrate different testing methods with FPGA development. It includes a component-based solution that uses two-way communication with a PC controller for system evaluation and testing. Once data is acquired from the vision board, the system stores it and simulates the previously captured environment by feeding the recorded data back to the FPGA. This approach defines and implements a debugging methodology for FPGA-based solutions which accelerates the development phase. In order to transfer the large volume of image data, RMII, an interface for Ethernet communication, has been investigated and implemented. The provided solution makes changes easier, saves time and solves the problems mentioned earlier.
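
    As a rough illustration of the capture-and-replay approach described in this abstract, the sketch below records raw frames streamed by the vision board over a TCP connection and later feeds them back, so the image pipeline can be re-tested offline. This is a minimal sketch under assumed conditions: the board address, port, frame size and function names are illustrative and do not reflect the paper's actual RMII/Ethernet interface or frame format.

```python
# Hedged PC-side capture-and-replay sketch for testing an FPGA vision board
# over Ethernet. Address, port, and frame size are assumptions for
# illustration only.
import pathlib
import socket

BOARD_ADDR = ("192.168.1.10", 5000)   # assumed board address and port
FRAME_BYTES = 320 * 240 * 2           # assumed raw frame size (16 bpp QVGA)

def capture(n_frames: int, out_dir: pathlib.Path) -> None:
    """Record raw frames streamed by the board to disk."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with socket.create_connection(BOARD_ADDR) as sock:
        for i in range(n_frames):
            buf = bytearray()
            while len(buf) < FRAME_BYTES:
                chunk = sock.recv(FRAME_BYTES - len(buf))
                if not chunk:
                    raise ConnectionError("board closed the stream early")
                buf.extend(chunk)
            (out_dir / f"frame_{i:04d}.raw").write_bytes(buf)

def replay(frame_dir: pathlib.Path) -> None:
    """Feed previously captured frames back to the board, simulating the
    environment that was recorded earlier."""
    with socket.create_connection(BOARD_ADDR) as sock:
        for path in sorted(frame_dir.glob("frame_*.raw")):
            sock.sendall(path.read_bytes())

if __name__ == "__main__":
    capture(100, pathlib.Path("captured"))
    replay(pathlib.Path("captured"))
```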

  • 2.
    Pordel, Mostafa
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    FPGA implementation of real-time Ethernet communication using RMII interface, 2011. In: 2011 International Conference on Information and Computer Networks (ICICN 2011), 2011. Conference paper (Refereed)
  • 3.
    Pordel, Mostafa
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Object Classification and Image Labeling using RGB-Depth Information, 2013. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis is part of research for the vision systems of four robots in the EU-funded project CROPS [1], where the robots should harvest apples, sweet peppers and grapes, and explore forests. The whole process of designing such a system, including the software architecture, creation of the image database, image labeling and object detection, is presented.

    The software architecture chapter of this thesis provides a review of some of the latest frameworks for robotics. It describes the structure of robotics components and the communication systems between them. The vision system is a subsystem of the robotics architecture, studied in conjunction with other components of the robotics system. To build a vision system, three main steps should be taken. First, a collection of images similar to what the robots are expected to meet should be created. Second, all the images should be labeled manually or automatically. Finally, learning systems should use the labeled images to build object models. Details about these steps make up the majority of the content in this thesis. With new widely available low-cost sensors such as Microsoft Kinect, it is possible to use depth images along with RGB images to increase the performance of vision systems. We particularly focus on various methods that integrate depth information in the three steps mentioned for building a vision system. More specifically, the image labeling tools help to extract objects in images to be used as ground truth for learning and testing processes in object detection. The inputs for such tools are usually RGB images. Despite the existence of many powerful tools for image labeling, there is still a need for RGB-depth adapted tools. We present a new interactive labeling tool that partially automates image labeling, with two major contributions. First, the method extends the concept of image segmentation from RGB to RGB-depth. Second, it minimizes the interaction time needed for object extraction by using a highly efficient segmentation method in RGB-depth space. The entire procedure requires very few clicks compared to other already existing tools. In fact, when the desired object is the closest object to the camera, as is the case in our forestry application, no click is required to extract the object. Finally, while we present the state of the art in object detection for 2D environments, object detection using RGB-depth information is mainly addressed as future work.
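
    The no-click extraction of the closest object mentioned above can be illustrated with a small sketch, under assumed conditions: threshold the depth image around the nearest valid depth and keep the largest connected region. The margin value, units and function names below are illustrative assumptions, not the thesis tool's actual implementation.

```python
# Hedged sketch: extract the object closest to the camera from a depth image
# by thresholding around the nearest valid depth and keeping the largest
# connected component. Margin and units are illustrative assumptions.
import numpy as np
from scipy import ndimage

def closest_object_mask(depth: np.ndarray, margin_mm: float = 150.0) -> np.ndarray:
    """Return a boolean mask of the object nearest to the camera.

    depth: HxW array of depth values in millimetres; 0 marks missing data.
    """
    valid = depth > 0
    if not valid.any():
        return np.zeros(depth.shape, dtype=bool)
    nearest = depth[valid].min()
    candidate = valid & (depth <= nearest + margin_mm)
    labels, n = ndimage.label(candidate)
    if n == 0:
        return np.zeros(depth.shape, dtype=bool)
    # keep only the largest connected region as the extracted object
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```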

  • 4.
    Pordel, Mostafa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Robotics architecture frameworks, available tools and further requirements, 2013. Report (Other academic)
    Abstract [en]

    For every robotics project, choosing a suitable framework and middleware for software and hardware is a challenging task which may influence the entire project. Robotics applications are typically resource-constrained in terms of computation and memory usage. They are built on different hardware platforms and applied in different domains. Therefore, it is hard to introduce a common framework for all types of projects. However, in recent years several new attempts have been made and have received attention from both researchers and industry. These frameworks are still under development and need to be extended. This paper discusses the different features that are needed for robotics frameworks and compares some of the available middleware and standards.

  • 5.
    Pordel, Mostafa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Semi-automatic image labeling using depth information. Manuscript (preprint) (Other academic)
  • 6.
    Pordel, Mostafa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Australian National University.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Semi-Automatic Image Labelling Using Depth Information, 2015. In: Computers, ISSN 2073-431X, E-ISSN 2073-431X, Vol. 4, no. 2, p. 142-154. Article in journal (Refereed)
    Abstract [en]

    Image labeling tools help to extract objects within images to be used as ground truth for learning and testing in object detection processes. The inputs for such tools are usually RGB images. However, with new widely available low-cost sensors such as Microsoft Kinect, it is possible to use depth images in addition to RGB images. Despite many existing powerful tools for image labeling, there is a need for RGB-depth adapted tools. We present a new interactive labeling tool that partially automates image labeling, with two major contributions. First, the method extends the concept of image segmentation from RGB to RGB-depth using Fuzzy C-Means clustering, connected component labeling and superpixels, and generates bounding pixels to extract the desired objects. Second, it minimizes the interaction time needed for object extraction by doing an efficient segmentation in RGB-depth space. Very few clicks are needed for the entire procedure compared to existing tools. When the desired object is the closest object to the camera, which is often the case in robotics applications, no clicks at all are required to accurately extract the object.
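
    A rough sketch of the RGB-depth segmentation idea is given below: per-pixel [R, G, B, depth] feature vectors are clustered with fuzzy c-means and every pixel is assigned to its most likely cluster. The cluster count, fuzzifier, feature scaling and function names are illustrative assumptions; the published tool also uses connected component labeling, superpixels and bounding pixels, which this sketch omits.

```python
# Hedged sketch of fuzzy c-means clustering on per-pixel RGB-D feature
# vectors, the kind of RGB-depth segmentation step the paper builds on.
# Cluster count, fuzzifier, and feature scaling are illustrative choices.
import numpy as np

def fuzzy_c_means(X: np.ndarray, c: int = 4, m: float = 2.0,
                  iters: int = 50, seed: int = 0):
    """Cluster the rows of X (N x d) into c fuzzy clusters; return (centers, U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)             # memberships sum to 1 per pixel
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # squared distances from every pixel to every cluster center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

def segment_rgbd(rgb: np.ndarray, depth: np.ndarray, c: int = 4) -> np.ndarray:
    """Label each pixel with its most likely cluster in RGB-depth space."""
    h, w, _ = rgb.shape
    feats = np.concatenate(
        [rgb.reshape(-1, 3).astype(float) / 255.0,
         depth.reshape(-1, 1).astype(float) / max(depth.max(), 1)], axis=1)
    _, U = fuzzy_c_means(feats, c=c)
    return U.argmax(axis=1).reshape(h, w)
```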

  • 7.
    Pordel, Mostafa
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ostovar, Ahmad
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Integrating Kinect depth data with a stochastic object classification framework for forestry robots, 2012. In: Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics: Volume 2, SciTePress, 2012, p. 314-320. Conference paper (Other academic)
  • 8.
    Yekeh, Farahnaz
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Pordel, Mostafa
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Almeida, Luís
    University of Porto.
    Behnam, Moris
    Mälardalen University.
    Portugal, Paulo José
    University of Porto.
    Exploring alternatives to scale FTT-SE to large networks, 2011. In: 6th IEEE International Symposium on Industrial Embedded Systems, 2011. Conference paper (Refereed)
    Abstract [en]

    Nowadays, most complex embedded systems follow a distributed approach in which a network interconnects potentially large numbers of nodes. One technology that is being increasingly used is switched Ethernet, but real-time variants of this protocol typically limit scalability. In this paper, we focus on the scalability of Flexible Time-Triggered communication over Switched Ethernet (FTT-SE), which has been proposed to support hard real-time applications in a flexible and predictable manner. Moreover, both time-triggered and event-triggered communication methods are supported by this protocol. FTT-SE has already been explored and investigated for small-scale networked applications. In this paper, we address the protocol's scalability and suggest three different solutions with a qualitative assessment. © 2011 IEEE.
