umu.se Publications
1801 - 1838 of 1838
  • 1801.
    Zhou, Zong-ke
    et al.
    University of Western Australia.
    Li, Ming G.
    University of Western Australia.
    Börlin, Niclas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Wood, David J.
    University of Western Australia.
    Nivbrant, Bo
    University of Western Australia.
    No Increased Migration in Cups with Ceramic-on-Ceramic Bearing: An RSA Study (2006). In: Clinical Orthopaedics and Related Research, ISSN 0009-921X, E-ISSN 1528-1132, no 448, p. 39-45. Article in journal (Refereed)
    Abstract [en]

    Ceramic-on-ceramic hip replacements might stress the bone interface more than a metal-on-polyethylene because of material stiffness, microseparation, and sensitivity to impingement. To ascertain whether this potentially increased stress caused an increased cup migration, we compared a ceramic-on-ceramic with a metal-on-polyethylene implant for cup migration. Sixty-one patients (61 hips) undergoing THA for osteoarthritis were randomized to ceramic-on-ceramic (Ce/Ce) or cobalt-chromium on cross-linked polyethylene bearings (PE) in the same uncemented cup shell. Migration was followed with RSA. At 2 years we observed similar mean cup translations in the 3 directions (0.07–0.40 mm vs. 0.05–0.31 mm, Ce/Ce vs. PE), as well as similar rotations around the 3 axes (0.31–0.92° vs. 0.57–1.40°). WOMAC and SF-36 scores were also similar, and no radiolucent lines or osteolysis were found. The large migration seen in some cups in both implant groups will require close monitoring to ascertain the reasons. Mean proximal wear of the polyethylene liners measured 0.016 mm between 2 and 24 months. Our data suggest there is no increased cup migration in the ceramic-on-ceramic implant compared with the metal-on-polyethylene, and it seems an equally safe choice. However, the low wear measured with the more versatile and less expensive cross-linked polyethylene makes it a strong contender. Levels of Evidence: Therapeutic Level I. See the Guidelines for Authors for a complete description of levels of evidence.

  • 1802.
    Ågren, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    AlgExt: an Algorithm Extractor for C Programs (2001). Report (Other academic)
    Abstract [en]

    ALGEXT is a program that extracts strategic (block) comments from C source files to improve maintainability and to keep documentation consistent with the source code. This is done by writing the comments in the source code as what we call extractable algorithms, describing the algorithms used in the functions.

    ALGEXT recognizes different kinds of comments:

    • Strategic comments are comments that precede a block of code, with only whitespace before them on the line,
    • Tactical comments are comments that describe the code that precedes them on the same line,
    • Function comments are comments immediately preceding a function definition, describing the function,
    • File comments are comments at the head of the file, before any declarations of functions and variables, and finally
    • Global comments are comments within the global scope, but not associated with a function.

    Only strategic comments are used as the basis for algorithm extraction in ALGEXT.
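    The strategic/tactical distinction can be sketched in a few lines of Python. This is a hedged illustration only, not ALGEXT's actual implementation (which processes real C source); the function name `classify_comments` and the example snippet are ours:

```python
import re

def classify_comments(c_source: str):
    """Classify single-line /* ... */ comments as strategic or tactical.

    A comment is 'strategic' when only whitespace precedes it on its line
    (it introduces the block of code below); it is 'tactical' when code
    precedes it on the same line. This simplifies away the other three
    comment kinds (function, file, global) that ALGEXT also recognizes.
    """
    kinds = []
    for line in c_source.splitlines():
        m = re.search(r"/\*.*?\*/", line)
        if not m:
            continue
        before = line[: m.start()]
        kinds.append(("strategic" if before.strip() == "" else "tactical",
                      m.group(0)))
    return kinds

example = """
/* Sum the elements of v. */
for (i = 0; i < n; i++)
    s += v[i];   /* accumulate */
"""
print(classify_comments(example))
```

    Only the "strategic" entries would then feed the algorithm extraction step.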

    The paper discusses the rationale for ALGEXT and describes its implementation and usage. Examples are presented for clarification of what can be done with ALGEXT.

    Our experience shows that students who use ALGEXT for preparing their assignments tend to write about 66% more comments than non-ALGEXT users.

  • 1803.
    Ågren, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Assessment of WWW-Based Ranking Systems for Smaller Web Sites (2006). In: INFOCOMP Journal of Computer Science, ISSN 1807-4545, Vol. 5, no 2, p. 45-55. Article in journal (Refereed)
    Abstract [en]

    A comparison between a number of search engines from three different families (HITS, PageRank, and Propagation of Trust) is presented for a small web server with respect to perceived relevance. A total of 307 individual tests were performed; the results from these were mapped to the algorithms and then analysed using confidence intervals, Kolmogorov-Smirnov tests, and ANOVA. We show that the results can be grouped according to algorithm family, and also that the algorithms (or at least the families) can be partially ordered by relevance.

  • 1804.
    Ågren, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Automatic Generation of Concept Hierarchies for a Discrete Data Mining System (2002). In: International Conference on Information and Knowledge Engineering (IKE '02) / [ed] Hamid R. Arabnia, Youngsong Mun, Bhanu Prasad, CSREA Press, 2002, p. 287-293. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an algorithm for the automatic creation of concept hierarchies from discrete databases and datasets. The reason for doing this is to accommodate later data mining operations on the same set of data without having an expert create these hierarchies by hand.

    We will go through the algorithm thoroughly and show the results from each step of the algorithm using a (small) example. We will also give actual execution times for our prototype for non-trivial example data sets and estimates of the complexity of the algorithm in terms of the number of records and the number of distinct data values in the data set.

  • 1805.
    Ågren, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    CHiC: A Fast Concept Hierarchy Constructor for Discrete or Mixed Mode Databases (2003). In: SEKE 2003: Proceedings of the Fifteenth International Conference on Software Engineering & Knowledge Engineering, Knowledge Systems Institute, 2003, p. 250-258. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an algorithm that automatically creates concept hierarchies or lattices for discrete databases and datasets. The reason for doing this is to accommodate later data mining operations on the same sets of data without having an expert create these hierarchies by hand.

    Each step of the algorithm will be examined; we will show the inputs and outputs of each step using a small example. The theoretical upper bound of the complexity of each part of the algorithm will be presented, as well as real time measurements for a number of databases. We will finally present a time model of the algorithm in terms of a number of attributes of the databases.

  • 1806.
    Ågren, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Finding, extracting and exploiting structure in text and hypertext (2009). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Data mining is a fast-developing field of study, using computations to either predict or describe large amounts of data. The increase in data produced each year goes hand in hand with this, requiring algorithms that are more and more efficient in order to find interesting information within a given time.

    In this thesis, we study methods for extracting information from semi-structured data, for finding structure within large sets of discrete data, and for efficiently ranking web pages in a topic-sensitive way.

    The information extraction research focuses on support for keeping both documentation and source code up to date at the same time. Our approach to this problem is to embed parts of the documentation within strategic comments of the source code and then extract them using a dedicated tool.

    The structures that our structure mining algorithms are able to find among crisp data (such as keywords) are in the form of subsumptions, i.e. one keyword is a more general form of the other. We can use these subsumptions to build larger structures in the form of hierarchies or lattices, since subsumptions are transitive. Our tool has been used mainly as input to data mining systems and for visualisation of data-sets.
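    The subsumption idea can be sketched as follows (our own minimal illustration, not the thesis tool, which uses efficient bit-set representations of crisp membership): keyword a subsumes keyword b when every record containing b also contains a, and since subsumption is transitive the resulting pairs can be assembled into a hierarchy or lattice.

```python
def subsumptions(records):
    """Find keyword subsumptions in a collection of records.

    Keyword a subsumes keyword b (a is the more general keyword) when
    every record that contains b also contains a. Membership is crisp:
    a keyword either occurs in a record or it does not.
    """
    occurs = {}
    for i, rec in enumerate(records):
        for kw in rec:
            occurs.setdefault(kw, set()).add(i)
    # (a, b) means a subsumes b: b's record set is contained in a's.
    return {(a, b)
            for a in occurs for b in occurs
            if a != b and occurs[b] <= occurs[a]}

docs = [{"animal", "dog"}, {"animal", "cat"}, {"animal", "dog", "puppy"}]
print(sorted(subsumptions(docs)))
```

    On this toy data, "animal" subsumes everything and "dog" subsumes "puppy"; chaining such pairs gives the hierarchy.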

    The main part of the research has been on ranking web pages in such a way that both the link structure between pages and the content of each page matter. We have created a number of algorithms and compared them to other algorithms in use today. Our focus in these comparisons has been on convergence rate, algorithm stability, and how relevant the answer sets produced by the algorithms are according to real-world users.

    The research has focused on the development of efficient algorithms for gathering and handling large data-sets of discrete and textual data. A proposed system of tools is described, all operating on a common database containing "fingerprints" and meta-data about items. This data could be searched by various algorithms to increase its usefulness or to find the real data more efficiently.

    All of the methods described handle data in a crisp manner, i.e. a word or a hyper-link either is or is not a part of a record or web page. This means that we can model their existence in a very efficient way. The methods and algorithms that we describe all make use of this fact.

  • 1807.
    Ågren, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Propagation of Meta Data over the World Wide Web (2003). In: Proceedings of the International Conference on Internet Computing (IC '03), Las Vegas, Nevada, USA: CSREA Press, 2003, p. 670-676. Conference paper (Refereed)
    Abstract [en]

    In this paper we propose a distribution and propagation algorithm for meta data. The main purpose of this is to tentatively allocate or derive meta data for nodes (in our case sites and/or web pages) for which no meta data exists.

    We propose an algorithm that depends on 1) the meta data given to a node, site, and/or web page, 2) how pervasive we perceive this meta data to be, and 3) the trust that we place in this meta data. We will also show that PICS labels can be used to hold the meta data, even for distant web pages and sites.

  • 1808.
    Ågren, Ola
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    S²ProT: Rank Allocation by Superpositioned Propagation of Topic-Relevance (2008). In: International Journal of Web Information Systems, ISSN 1744-0084, Vol. 4, no 4, p. 416-440. Article in journal (Refereed)
    Abstract [en]

    Purpose – The purpose of this paper is to assign topic-specific ratings to web pages.

    Design/methodology/approach – The paper uses power iteration to assign topic-specific rating values (called relevance) to web pages, creating a ranking or partial order among these pages for each topic. This approach depends on a set of pages that are initially assumed to be relevant for a specific topic; the spatial link structure of the web pages; and a net-specific decay factor designated ξ.
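    A minimal sketch of this kind of seed-based topic-relevance propagation follows (our own simplification for illustration, not the actual S²ProT algorithm; the parameter `decay` stands in for the decay factor ξ, and the function name is ours):

```python
def propagate_relevance(links, seeds, decay=0.5, iters=50):
    """Propagate topic relevance from seed pages along out-links.

    Each iteration, a page's relevance is its seed value (1.0 for
    pages initially assumed relevant to the topic, 0.0 otherwise)
    plus decay times the relevance flowing in over links pointing
    to it, shared evenly among each source page's out-links.
    With decay < 1 the iteration converges to a fixed point.
    """
    pages = set(links) | {t for ts in links.values() for t in ts} | set(seeds)
    r = {p: (1.0 if p in seeds else 0.0) for p in pages}
    for _ in range(iters):
        nxt = {p: (1.0 if p in seeds else 0.0) for p in pages}
        for src, targets in links.items():
            if not targets:
                continue
            share = decay * r[src] / len(targets)
            for t in targets:
                nxt[t] += share
        r = nxt
    return r

web = {"a": ["b", "c"], "b": ["c"], "c": []}
scores = propagate_relevance(web, seeds={"a"})
```

    Sorting pages by the resulting scores gives the per-topic ranking (partial order) the abstract describes.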

    Findings – The paper finds that this approach exhibits desirable properties such as fast convergence and stability, and yields relevant answer sets. The first property is shown using theoretical proofs, while the others are evaluated through stability experiments and assessments of real-world data in comparison with already established algorithms.

    Research limitations/implications – In the assessment, all pages that a web spider was able to find in the Nordic countries were used. It is also important to note that entities that use domains outside the Nordic countries (e.g. .com or .org) are not present in the paper's datasets even though they reside logically within one or more of the Nordic countries. This is quite a large dataset, but still small in comparison with the entire World Wide Web. Moreover, the execution speed of some of the algorithms unfortunately prohibited the use of a large test dataset in the stability tests.

    Practical implications – It is not only possible, but also reasonable, to perform ranking of web pages without using Markov chain approaches. This means that the work of generating answer sets for complex questions could (at least in theory) be divided into smaller parts that are later summed up to give the final answer.

    Originality/value – This paper contributes to the research on internet search engines.

  • 1809.
    Ågren, Ola
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics. Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Using the ProT Nordic Web Dataset (2011). Report (Other academic)
    Abstract [en]

    In this paper we present a free dataset, usable for testing web search engines. The dataset corresponds to a snapshot of the Nordic part of the Internet in early 2007 and is highly abstracted, with numbers representing each web page. The released dataset consists of three parts: a graph, 76 sets of pages containing each tested word combination, and some files to use when calculating the relevance of the result sets of the algorithms/search engines. We also present statistics for some search engine algorithms.

  • 1810.
    Ågren, Sanna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Object tracking methods and their areas of application: A meta-analysis: A thorough review and summary of commonly used object tracking methods (2017). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Object tracking is a well-studied problem within the area of image processing. The ability to track objects has improved drastically during the last decades; however, it is still considered a complex problem to solve. The importance of object tracking is reflected by the broad range of applications, such as video surveillance, human-computer interaction, and robot navigation.

    The purpose of this study was to examine, evaluate, and summarize the most common object tracking methods. In this paper a thorough review of the object tracking process is presented. This includes the selection of object representation, object features, methods for object detection, and methods for tracking the object over succeeding frames. A summary of the object tracking methods covered in this paper is presented in the result section, including advantages, disadvantages, and the context for which each method is suitable.

  • 1811.
    Åkerblom-Andersson, Robert
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    LNoC, Lagom Network On Chip - A thesis about network on chip and the design of a specification of a network on chip optimized for FPGAs called LNoC (2013). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    This thesis presents a new network on chip (NoC) designed specifically to suit the growing FPGA market. Network on chip is the interconnect of tomorrow; as of this writing, traditional computer buses are still the most used type of interconnect technology. At the same time, research on NoC has been extensive over the last couple of years, and NoCs are today used in some of the newest and most capable CPUs from leading companies like Intel, Texas Instruments, and IBM. As more and more cores need to be interconnected, traditional buses do not cut it anymore; a network for interconnect communication is the solution. FPGAs are today produced at 22 and 28 nm and are capable of holding very complex logic designs with many cores. Low-end FPGAs are also falling in price, which opens up the usage of FPGAs in markets where they earlier could not be used because of their high price.

    LNoC is a network on chip designed to suit the needs of both high performance systems and low cost FPGA designs. The key to LNoC's success is its reconfigurability and ability to adapt to the demands of each unique FPGA design, while keeping a standard interface for IP blocks, enabling effective hardware reusability.

    The presentation of this thesis work contains two main parts: the content of the thesis and the LNoC specification. The thesis chapters focus on the background and development of LNoC. Some example use cases are also discussed, and some concepts and technologies that the reader might not be familiar with are explained. The specification contains an in-depth description of LNoC and how it should be implemented in hardware.

  • 1812.
    Åsbrink, Niklas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    En studie i att tillämpa Computational Thinking på grafteori [A study in applying Computational Thinking to graph theory] (2014). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    Computational thinking was brought to the forefront in 2006 by Jeannette Wing. Computational thinking is a problem solving method that uses computer science techniques.

    The thesis analyses computational thinking and how it could be applied to graph theory. Characteristics and main fields of computational thinking are analysed. This analysis is applied to graph theory to explore the potential of developing a proposal for an exercise for an introductory course in discrete data types. Only basic knowledge of graphs is required to perform the exercise: it is required to know what directed, undirected, and weighted graphs are.

    The exercise is based upon exercises and theory from a report called Computational Thinking - Teacher Resources, written by the Computer Science Teachers Association and the International Society for Technology in Education. The exercise should be solved in a group of four people and is a complex problem reminiscent of the Travelling Salesman Problem.

    At the end of the thesis, a discussion is held about the definition of computational thinking, the creation of the exercise, and the future of the field.

    The cognitive aspect will not be deepened or questioned in this study.

  • 1813.
    Åslin, Martin
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Using Artificial Intelligence for the Evaluation of the Movability of Insurances (2013). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Today the decision to move an insurance from one company/bank to another is made manually, so there is always the risk that an incorrect decision is made due to human error. The goal of this thesis is to evaluate the possibility of using artificial intelligence (AI) to make that decision instead. The thesis evaluates three AI techniques: fuzzy clustering, Bayesian networks, and neural networks. These three techniques were compared, and it was decided that fuzzy clustering would be the technique to use. Even though fuzzy clustering only achieved a hit rate of 69%, there is a lot of potential in it. In section 4.2 on page 32 a few improvements are discussed which should help raise the hit rate.

  • 1814.
    Çöltekin, Arzu
    et al.
    Department of Geography, University of Zurich.
    Francelet, Rebecca
    Department of Geography, University of Zurich.
    Richter, Kai-Florian
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Thoresen, John
    Laboratory of Behavioural Genetics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL).
    Fabrikant, Sara Irina
    Department of Geography, University of Zurich.
    The effects of visual realism, spatial abilities, and competition on performance in map-based route learning in men (2018). In: Cartography and Geographic Information Science, ISSN 1523-0406, E-ISSN 1545-0465, Vol. 45, no 4, p. 339-353. Article in journal (Refereed)
    Abstract [en]

    We report on how visual realism might influence map-based route learning performance in a controlled laboratory experiment with 104 male participants in a competitive context. Using animations of a dot moving through routes of interest, we find that participants recall the routes more accurately with abstract road maps than with more realistic satellite maps. We also find that, irrespective of visual realism, participants with higher spatial abilities (high-spatial participants) are more accurate in memorizing map-based routes than participants with lower spatial abilities (low-spatial participants). On the other hand, added visual realism limits high-spatial participants in their route recall speed, while it seems not to influence the recall speed of low-spatial participants. Competition affects participants’ overall confidence positively, but does not affect their route recall performance in terms of either accuracy or speed. With this study, we provide further empirical evidence demonstrating that it is important to choose the appropriate map type considering task characteristics and spatial abilities. While satellite maps might be perceived as more fun to use, or visually more attractive, than road maps, they also require more cognitive resources for many map-based tasks, which is true even for high-spatial users.

  • 1815.
    Öberg, Linus
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Evaluation of Cross-Platform Mobile Development Tools: Development of an Evaluation Framework (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The aim of this thesis is to determine which cross-platform mobile development tool is best suited for Vitec and their mobile application ”Teknisk Förvaltning”. More importantly, this thesis develops a generic evaluation framework for assessing cross-platform mobile development tools, with the purpose of making it easy to select the most appropriate tool for a specific mobile application.

    This was achieved by first, in consultation with Vitec, selecting Cordova + Ionic and Xamarin.Forms as the cross-platform tools to be evaluated, and secondly by proposing an evaluation framework containing criteria derived from discussions with mobile application developers, from experience gained by developing prototype applications using the tools under evaluation, and from literature studies. The initial framework was then refined by investigating other research in the area and learning from its methods and conclusions.

    The result of performing the evaluation using the suggested evaluation framework is visualized in a result table, where the fulfilment of each criterion is graded from one to five. By allowing Vitec to rank how important these criteria are for their application, we were able to determine which of the tools was the best fit for their application. We have thereby succeeded in developing a generic evaluation framework which can be used to assess any cross-platform mobile development tool.

  • 1816.
    Ödling, Robin
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Data transfer over high latency and high packet loss networks (2016). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    This thesis aims to pin down which aspects affect the data transfer speed through the Internet, especially with regard to latency and packet loss. What is required from a technical point of view in order to efficiently mitigate the effects of latency and packet loss on network transfer? What are the difficulties when testing network protocols? The approach is to test four different protocols (TCP CUBIC, Westwood+, Tsunami, and UDT) while gathering metrics such as throughput, the size of the congestion window, and the slow start threshold, and then analysing the results. The analysis shows that latency has the most impact on throughput, affecting network transfer by decreasing the number of times the congestion window is able to grow in RTT-dependent algorithms, effectively decreasing the throughput. Packet loss affects network transfer because protocols interpret the loss as a sign of congestion and decrease the sending rate in response. The size of this decrease is shown to impact the throughput, where an aggressive decrease performs more poorly under packet loss. The study can aid anyone who seeks to develop a network transport protocol.
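    The effect of the decrease size can be illustrated with a toy additive-increase/multiplicative-decrease (AIMD) model (our own sketch, not any of the tested protocols; real CUBIC uses a cubic window-growth function, and 0.7 is merely its documented backoff factor):

```python
def average_cwnd(beta, loss_every=100, rounds=10_000, increase=1.0):
    """Toy AIMD model of a congestion window.

    The window grows additively by `increase` each round (RTT) and is
    multiplied by `beta` on every simulated packet loss. The returned
    average window is a rough proxy for achieved throughput.
    """
    cwnd, total = 1.0, 0.0
    for t in range(1, rounds + 1):
        total += cwnd
        if t % loss_every == 0:
            cwnd *= beta          # back off on packet loss
        else:
            cwnd += increase      # additive increase per RTT
    return total / rounds

aggressive = average_cwnd(beta=0.5)   # classic TCP-style halving
gentle = average_cwnd(beta=0.7)       # gentler decrease, as in CUBIC
```

    Under the same loss pattern, the gentler backoff sustains a larger average window, matching the thesis observation that an aggressive decrease performs more poorly with packet loss.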

  • 1817.
    Ögren, Daniel
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Generation of thumbnails (2016). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [sv]

    The modern human interacts with a large number of systems every day. This, together with today's abundance of applications, places higher demands than ever on performance and usability. If a program freezes or is otherwise perceived as slow, there are often several other programs that can perform the same task. In this work, an evaluation of different conversion strategies, in the form of different software packages and libraries, is carried out. We evaluate how optimization, error handling, and other important factors are handled in these different conversion strategies and what impact this has on a system as a whole. What we have found in this study is that poorly optimized programs can consume large parts of a system's resources, and that limited error handling can lead to situations that are difficult to recover from. When comparing the different conversion programs, we observed that some programs use considerably more system resources than others. Limited error handling is used by default in the evaluated applications, but extended functionality for handling errors is available in many of the conversion strategies that have been evaluated.

  • 1818.
    Öhman, Mikael
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A Data-Warehouse Solution for OMS Data Management (2013). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    A database system for storing and querying data of a dynamic schema has been developed, based on the kdb+ database management system and the q programming language, for use in a financial setting of order and execution services. Some basic assumptions of mandatory fields of the data to be stored are made, including that the data are time-series based.

    A dynamic schema enables an Order-Management System (OMS) to store information not suitable or usable when stored in log files or traditional databases. Log files are linear, cannot be queried effectively, and are not suitable for the volumes produced by modern OMSs. Traditional databases are typically row-oriented, which does not suit time-series based data, and rely on the relational model, which uses statically typed sets to store relations.

    The created system includes software that is capable of mining the actual schema stored in the database and visualizing it. This enables ease of exploratory querying and production of applications which use the database. A feedhandler has been created, optimized for handling high volumes of data. Volumes in finance are steadily growing as the industry continues to adopt computer automation of tasks. Feedhandler performance is important to reduce latency and for cost savings as a result of not having to scale horizontally. A study of the area of algorithmic trading has been performed with focus on transaction-cost analysis, and fundamental algorithms have been reviewed.

    A proof-of-concept application has been created that simulates an OMS storing logs on the execution of a Volume Weighted Average Price (VWAP) trading algorithm. The stored logs are then used to improve the performance of the trading algorithm through basic data mining and machine learning techniques. The actual learning algorithm focuses on predicting intraday volume patterns.
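    The VWAP benchmark mentioned above is straightforward to state (a sketch of the benchmark computation only, not of the execution algorithm or of the kdb+/q system used in the thesis):

```python
def vwap(trades):
    """Volume Weighted Average Price over (price, volume) ticks:
    sum(price * volume) / sum(volume).

    A VWAP execution algorithm tries to match this benchmark by
    spreading its order according to the predicted intraday
    volume pattern.
    """
    total_volume = sum(v for _, v in trades)
    if total_volume == 0:
        raise ValueError("no volume traded")
    return sum(p * v for p, v in trades) / total_volume

ticks = [(100.0, 200), (101.0, 100), (99.5, 300)]
print(vwap(ticks))  # weighted by volume, so the 99.5 trades dominate
```

    The learning component described in the thesis would then feed predicted volumes into the order-slicing schedule rather than into this formula itself.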

  • 1819.
    Östberg, Per-Olov
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A model for simulation of application and resource behavior in heterogeneous distributed computing environments (2012). In: Proceedings of the 2nd international conference on simulation and modeling methodologies, technologies and applications / [ed] Nuno Pina, Janusz Kacprzyk, Mohammad S. Obaidat, SciTePress, 2012, p. 144-151. Conference paper (Refereed)
  • 1820.
    Östberg, Per-Olov
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Architectures, design methodologies, and service composition techniques for Grid job and resource management (2009). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    The field of Grid computing has in recent years emerged and been established as an enabling technology for a range of computational eScience applications. The use of Grid technology allows researchers and industry experts to address problems too large to efficiently study using conventional computing technology, and enables new applications and collaboration models. Grid computing has today not only introduced new technologies, but also influenced new ways to utilize existing technologies.

    This work addresses technical aspects of the current methodology of Grid computing: to leverage highly functional, interconnected, and potentially under-utilized high-end systems to create virtual systems capable of processing problems too large to address using individual (supercomputing) systems. In particular, this thesis studies the job and resource management problem inherent to Grid environments, and aims to contribute to the development of more mature job and resource management systems and software development processes. A number of aspects related to Grid job and resource management are addressed, including software architectures for Grid job management, design methodologies for Grid software development, service composition (and refactorization) techniques for Service-Oriented Grid Architectures, Grid infrastructure and application integration issues, and middleware-independent and transparent techniques to leverage Grid resource capabilities.

    The software development model used in this work has been derived from the notion of an ecosystem of Grid components. In this model, a virtual ecosystem is defined by the set of available Grid infrastructure and application components, and ecosystem niches are defined by areas of component functionality. In the Grid ecosystem, applications are constructed through selection and composition of components, and individual components are subject to evolution through meritocratic natural selection. Central to the idea of the Grid ecosystem is that mechanisms that promote traits beneficial to survival in the ecosystem, e.g., scalability, integrability, robustness, also influence Grid application and infrastructure adaptability and longevity.

    As Grid computing has evolved into a highly interdisciplinary field, current Grid applications are very diverse and utilize computational methodologies from a number of fields. Due to this, and the scale of the problems studied, Grid applications typically place great performance requirements on Grid infrastructures, making Grid infrastructure design and integration challenging tasks. In this work, a model of building on, and abstracting, Grid middlewares has been developed and is outlined in the papers. In addition to the contributions of this thesis, a number of software artefacts, e.g., the Grid Job Management Framework (GJMF), have resulted from this work.

  • 1821.
    Östberg, Per-Olov
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Virtual infrastructures for computational science: software and architectures for distributed job and resource management2011Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In computational science, the scale of problems addressed and the resolution of solutions achieved are often limited by the available computational capacity. The current methodology of scaling computational capacity to large scale (i.e., larger than individual resource site capacity) includes aggregation and federation of distributed resource systems. Regardless of how this aggregation manifests, scaling of scientific computational problems typically involves (re)formulation of computational structures and problems to exploit problem and resource parallelism. Efficient parallelization and scaling of scientific computations to large scale is difficult and further complicated by a number of factors introduced by resource aggregation, e.g., resource heterogeneity and coupling of computational methodology. Scaling complexity severely impacts computation enactment and necessitates the use of mechanisms that provide higher abstractions for management of computations in distributed computing environments.

    This work addresses design and construction of virtual infrastructures for scientific computation that abstract computation enactment complexity, decouple computation specification from computation enactment, and facilitate large-scale use of computational resource systems. In particular, this thesis discusses job and resource management in distributed virtual scientific infrastructures intended for Grid and Cloud computing environments. The main area studied is Grid computing, which is approached using Service-Oriented Computing and Architecture methodology. Thesis contributions discuss both methodology and mechanisms for construction of virtual infrastructures, and address individual problems such as job management, application integration, scheduling, job prioritization, and service-based software development.

    In addition to scientific publications, this work also makes contributions in the form of software artifacts that demonstrate the concepts discussed. The Grid Job Management Framework (GJMF) abstracts job enactment complexity and provides a range of middleware-agnostic job submission, control, and monitoring interfaces. The FSGrid framework provides a generic model for specification and delegation of resource allocations in virtual organizations, and enacts allocations based on distributed fairshare job prioritization. Mechanisms such as these decouple job and resource management from computational infrastructure systems and facilitate the construction of scalable virtual infrastructures for computational science.

  • 1822.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Barry, McCollum
    Queens University of Belfast, United Kingdom.
    Heuristics and Algorithms for Data Center Optimization2015In: Proceedings of the 7th Multidisciplinary International Conference on Scheduling : Theory and Applications (MISTA 2015) / [ed] Zdenek Hanzálek, Graham Kendall, Barry McCollum, Premysl Šůcha, 2015, p. 921-927Conference paper (Other academic)
  • 1823.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Byrne, James
    Casari, Paolo
    Eardley, Philip
    Fernandez Anta, Antonio
    Forsman, Johan
    Kennedy, John
    Le Duc, Thang
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Noya Marino, Manuel
    Loomba, Radhika
    Lopez Pena, Miguel Angel
    Veiga, Jose Lopez
    Lynn, Theo
    Mancuso, Vincenzo
    Svorobej, Sergej
    Torneus, Anders
    Wesner, Stefan
    Willis, Peter
    Domaschka, Joerg
    Reliable Capacity Provisioning for Distributed Cloud/Edge/Fog Computing Applications2017In: 2017 EUROPEAN CONFERENCE ON NETWORKS AND COMMUNICATIONS (EUCNC), IEEE , 2017Conference paper (Refereed)
    Abstract [en]

    The REliable CApacity Provisioning and enhanced remediation for distributed cloud applications (RECAP) project aims to advance cloud and edge computing technology, to develop mechanisms for reliable capacity provisioning, and to make application placement, infrastructure management, and capacity provisioning autonomous, predictable, and optimized. This paper presents the RECAP vision for an integrated edge-cloud architecture, discusses the scientific foundation of the project, and outlines plans for toolsets for continuous data collection, application performance modeling, application and component auto-scaling and remediation, and deployment optimization. The paper also presents four use cases from complementary fields that will be used to showcase the advancements of RECAP.

  • 1824.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    A Performance Evaluation of the Grid Job Management Framework (GJMF)2011Conference paper (Refereed)
  • 1825.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Decentralized Prioritization-Based Management Systems for Distributed Computing2013In: 2013 IEEE 9th international conference on e-science (e-science), IEEE Computer Society, 2013, p. 228-237Conference paper (Refereed)
    Abstract [en]

    Fairshare scheduling is an established technique to provide user-level differentiation in management of capacity consumption in high-performance and grid computing scheduler systems. In this paper we extend a state-of-the-art approach to decentralized grid fairshare and propose a generalized model for construction of decentralized prioritization-based management systems. The approach is based on (re)formulation of control problems as prioritization problems, and a proposed framework for computationally efficient decentralized priority calculation. The model is presented along with a discussion of the application of decentralized management systems in distributed computing environments that outlines selected use cases and illustrates key trade-off behaviors of the proposed model.
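The decentralization idea described in the abstract — deriving global priorities from usage information that is recorded locally at each site — can be sketched as follows. This is an illustrative simplification, not the framework from the paper; the function name and data shapes are hypothetical.

```python
# Hypothetical sketch: each site keeps only local per-user usage records,
# and a global view is obtained by merging the per-site summaries.
# Here the merge is a plain sum; the published framework is more elaborate.
from collections import Counter

def aggregate_usage(site_summaries):
    """Merge per-site {user: usage} summaries into one global view."""
    total = Counter()
    for summary in site_summaries:
        total.update(summary)  # Counter.update adds values per key
    return dict(total)

sites = [
    {"alice": 10.0, "bob": 5.0},   # usage recorded at site 1
    {"bob": 20.0, "carol": 2.5},   # usage recorded at site 2
]
global_usage = aggregate_usage(sites)
```

No site needs the full global state up front; the aggregate can be computed incrementally or hierarchically, which is what makes a decentralized formulation attractive.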

  • 1826.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    GJMF - a composable service-oriented Grid job management framework2013In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 29, no 1, p. 144-157Article in journal (Refereed)
    Abstract [en]

    We investigate best practices for Grid software design and development, and propose a composable, loosely coupled Service-Oriented Architecture for Grid job management. The architecture focuses on providing a transparent Grid access model for concurrent use of multiple Grid middlewares and aims to decouple Grid applications from Grid middlewares and infrastructure. The notion of an ecosystem of Grid infrastructure components is extended, and Grid job management software design is discussed in this context. Non-intrusive integration models and abstraction of Grid middleware functionality through hierarchical aggregation of autonomous Grid job management services are emphasized, and service composition techniques facilitating this process are explored. A proof-of-concept implementation of the architecture is presented along with a discussion of architecture implementation details and trade-offs introduced by the service composition techniques used.

  • 1827.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Impact of service overhead on service-oriented Grid architectures2011Conference paper (Refereed)
    Abstract [en]

    Grid computing applications and infrastructures build heavily on Service-Oriented Computing development methodology and are often realized as Service-Oriented Architectures. Current Service-Oriented Architecture methodology renders service components as Web Services, and suffers performance limitations from Web Service overhead. The Grid Job Management Framework (GJMF) is a flexible Grid infrastructure and application support component realized as a loosely coupled network of Web Services that offers a range of abstractive and platform-independent interfaces for middleware-agnostic Grid job submission, monitoring, and control. In this paper we present a performance evaluation aimed to characterize the impact of service overhead on Grid Service-Oriented Architectures and evaluate the efficiency of the GJMF architecture and optimization mechanisms designed to mediate the impact of Web Service overhead on architecture performance.

  • 1828.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Increasing flexibility and abstracting complexity in service-based Grid and cloud software2011In: Proceedings of the 1st International Conference on Cloud Computing and Services Science / [ed] F. Leyman, I. Ivanov, M. van Sinderen and B. Shishkov, SciTePress, 2011, p. 240-249Conference paper (Refereed)
    Abstract [en]

    This work addresses service-based software development in Grid and Cloud computing environments, and proposes a methodology for Service-Oriented Architecture design. The approach consists of an architecture design methodology focused on facilitating system flexibility, a service model emphasizing component modularity and customization, and a development tool designed to abstract service development complexity. The approach is intended for use in computational eScience environments and is designed to increase flexibility in system design, development, and deployment, and to reduce complexity in system development and administration. To illustrate the approach we present case studies from two recent Grid infrastructure software development projects, and evaluate the impact of the development approach and the toolset on the projects.

  • 1829.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Mediation of service overhead in service-oriented grid architectures2011In: 2011 IEEE/ACM 12th International Conference on Grid Computing, IEEE, 2011, p. 9-18Conference paper (Refereed)
    Abstract [en]

    Grid computing applications and infrastructures build heavily on Service-Oriented Computing development methodology and are often realized as Service-Oriented Architectures. The Grid Job Management Framework (GJMF) is a flexible Grid infrastructure and application support tool that offers a range of abstractive and platform independent interfaces for middleware-agnostic Grid job submission, monitoring, and control. In this paper we use the GJMF as a test bed for characterization of Grid Service-Oriented Architecture overhead, and evaluate the efficiency of a set of design patterns for overhead mediation mechanisms featured in the framework.

  • 1830.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Service development abstraction: A design methodology and development toolset for abstractive and flexible service-based software2011In: Cloud Computing and Service Science / [ed] Ivanov, van Sinderen, and Shishkov, Springer, 2011Chapter in book (Refereed)
  • 1831.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Espling, Daniel
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Decentralized scalable fairshare scheduling2013In: Future generations computer systems, ISSN 0167-739X, E-ISSN 1872-7115, Vol. 29, no 1, p. 130-143Article in journal (Refereed)
    Abstract [en]

    This work addresses Grid fairshare allocation policy enforcement and presents Aequus, a decentralized system for Grid-wide fairshare job prioritization. The main idea of fairshare scheduling is to prioritize users with regard to predefined resource allocation quotas. The presented system builds on three contributions: a flexible tree-based policy model that allows delegation of policy definition, a job prioritization algorithm based on local enforcement of distributed fairshare policies, and a decentralized architecture for non-intrusive integration with existing scheduling systems. The system supports organization of users in virtual organizations and divides usage policies into local and global policy components that are defined by resource owners and virtual organizations. The architecture realization is presented in detail along with an evaluation of the system behavior in an emulated environment. In the evaluation, convergence noise types (mechanisms counteracting policy allocation convergence) are characterized and quantified, and the system is demonstrated to meet scheduling objectives and perform scalably under realistic operating conditions.
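The central mechanism of fairshare scheduling described above — prioritizing users by how far their actual consumption falls below their allocated quota — can be sketched in a few lines. This is a minimal illustration of the general idea, not the Aequus algorithm; the function name and scoring formula are hypothetical.

```python
# Hypothetical sketch of fairshare prioritization: users whose consumed
# share of capacity is furthest below their target quota come first.
def fairshare_priorities(quotas, usage):
    """Return users ordered by priority (most under-served first).

    quotas: {user: target share of capacity, summing to 1.0}
    usage:  {user: consumed resource units}
    """
    total = sum(usage.values()) or 1.0
    # Deviation of actual share from target share; negative = over-served.
    deviation = {u: quotas[u] - usage.get(u, 0.0) / total for u in quotas}
    return sorted(quotas, key=lambda u: deviation[u], reverse=True)

# alice holds a 50% quota but accounts for only 20% of consumed capacity,
# so her jobs are prioritized ahead of bob's, who is over his 30% quota.
order = fairshare_priorities(
    quotas={"alice": 0.5, "bob": 0.3, "carol": 0.2},
    usage={"alice": 20.0, "bob": 60.0, "carol": 20.0},
)
```

In a real system the usage term would typically be time-decayed and the quotas arranged hierarchically (the tree-based policy model mentioned in the abstract), but the ordering principle is the same.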

  • 1832.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Groenda, Henning
    Wesner, Stefan
    Byrne, James
    Nikolopoulos, Dimitris S.
    Sheridan, Craig
    Krzywda, Jakub
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ali-Eldin, Ahmed
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Stier, Christian
    Krogmann, Klaus
    Domaschka, Jörg
    Hauser, Christopher B.
    Byrne, PJ
    Svorobej, Sergej
    McCollum, Barry
    Papazachos, Zafeiros
    Whigham, Darren
    Rüth, Stefan
    Paurevic, Dragana
    The CACTOS Vision of Context-Aware Cloud Topology Optimization and Simulation2014In: 2014 IEEE 6th International Conference on Cloud Computing Technology and Science, 2014, p. 26-31Conference paper (Refereed)
    Abstract [en]

    Recent advances in hardware development coupled with the rapid adoption and broad applicability of cloud computing have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity through a combination of in-depth analysis of application behavior with insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.

  • 1833.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellander, Andreas
    Drawert, Brian
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Holmgren, Sverker
    Petzold, Linda
    Abstractions for scaling escience applications to distributed computing environments: a StratUm Integration Case Study in Molecular Systems Biology2012In: Bioinformatics: proceedings of the international conference on bioinformatics models, methods and algorithms / [ed] Correia, C; Fred, A; Gamboa, H; Schier, J, SETUBAL: SCITEPRESS , 2012, p. 290-294Conference paper (Other academic)
    Abstract [en]

    Management of eScience computations and resulting data in distributed computing environments is complicated and often introduces considerable overhead. In this work we address a lack of integration tools that provide the abstraction levels, performance, and usability required to facilitate migration of eScience applications to distributed computing environments. In particular, we explore an approach to raising abstraction levels based on separation of computation design and computation management, and present StratUm, a computation enactment tool for distributed computing environments. Results are illustrated in a case study of the integration of software from the systems biology community with a grid computation management system.

  • 1834.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellander, Andreas
    University of California, Santa Barbara, USA.
    Drawert, Brian
    University of California, Santa Barbara, USA.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Holmgren, Sverker
    Uppsala University, Sweden.
    Petzold, Linda
    University of California, Santa Barbara, USA.
    Reducing Complexity in Management of eScience Computation2012In: CCGrid 2012: Proceedings of the 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing / [ed] Balaji, P., Buyya, R., Majumdar, S., Pandey, S., IEEE, 2012, p. 845-852, article id 6217522Conference paper (Refereed)
    Abstract [en]

    In this paper we address reduction of complexity in management of scientific computations in distributed computing environments. We explore an approach based on separation of computation design (application development) and distributed execution of computations, and investigate best practices for construction of virtual infrastructures for computational science - software systems that abstract and virtualize the processes of managing scientific computations on heterogeneous distributed resource systems. As a result we present StratUm, a toolkit for management of eScience computations. To illustrate use of the toolkit, we present it in the context of a case study where we extend the capabilities of an existing kinetic Monte Carlo software framework to utilize distributed computational resources. The case study illustrates a viable design pattern for construction of virtual infrastructures for distributed scientific computing. The resulting infrastructure is evaluated using a computational experiment from molecular systems biology.

  • 1835.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Lockner, Niclas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Creo: reduced complexity service development2014In: Proceedings of CLOSER 2014 - 4th International Conference on Cloud Computing and Services Science / [ed] M. Helfert, F. Desprez, D. Ferguson, F. Leymann, V. Mendez Munoz, SciTePress, 2014, p. 230-241Conference paper (Refereed)
    Abstract [en]

    In this work we address service-oriented software development in distributed computing environments, and investigate an approach to software development and integration based on code generation. The approach is illustrated in a toolkit for multi-language software generation built on three building blocks: a service description language, a serialization and transport protocol, and a set of code generation techniques. The approach is intended for use in the eScience domain and aims to reduce the complexity of development and integration of distributed software systems through a low-knowledge-requirements model for construction of network-accessible services. The toolkit is presented along with a discussion of use cases and a performance evaluation quantifying the performance of the toolkit against selected alternative techniques for code generation and service communication. In tests of communication overhead and response time, toolkit performance is found to be comparable to or improve upon the evaluated techniques.
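The code-generation approach described in the abstract — turning a service description into boilerplate for a service implementation — can be illustrated with a toy generator. The description format and the generated stub below are hypothetical simplifications and do not reflect Creo's actual service description language or output.

```python
# Hypothetical sketch of description-driven code generation: a service
# description (a plain dict standing in for a service description
# language) is rendered into boilerplate source for a service stub.
def generate_stub(service):
    """Generate Python boilerplate for one service description."""
    lines = [f"class {service['name']}:"]
    for op in service["operations"]:
        args = ", ".join(op["params"])
        lines.append(f"    def {op['name']}(self, {args}):")
        lines.append(f"        raise NotImplementedError('{op['name']}')")
    return "\n".join(lines)

desc = {
    "name": "JobService",
    "operations": [
        {"name": "submit", "params": ["job_spec"]},
        {"name": "status", "params": ["job_id"]},
    ],
}
code = generate_stub(desc)
```

Generating one such stub per target language from a single description is what gives this style of toolkit its low-knowledge-requirements character: developers fill in operation bodies instead of writing serialization and transport plumbing.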

  • 1836.
    Östberg, Per-Olov
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Lockner, Niclas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Reducing Complexity in Service Development and Integration2015In: Cloud computing and services sciences, CLOSER 2014, Springer Berlin/Heidelberg, 2015, p. 63-80Conference paper (Refereed)
    Abstract [en]

    The continuous growth and increasing complexity of distributed systems software has produced a need for software development tools and techniques that reduce the learning requirements and complexity of building distributed systems. In this work we address reduction of complexity in service-oriented software development and present an approach and a toolkit for multi-language service development based on three building blocks: a simplified service description language, an intuitive message serialization and transport protocol, and a set of code generation techniques that provide boilerplate environments for service implementations. The toolkit is intended for use in the eScience domain and is presented along with a performance evaluation that quantifies toolkit performance against that of selected alternative toolkits and technologies for service development. Toolkit performance is found to be comparable to or improve upon the performance of evaluated technologies.

  • 1837.
    Öztürk, Aybüke
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Experimenting with Open Data2013Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Public (open) data are now provided by many governments and organizations. Access to them can be made through central repositories or applications such as Google Public Data. On the other hand, usage is still very much human-oriented; there is no global data download, the data need to be selected and prepared manually, and data formatting needs to be decided. The aim of the Experimenting with Open Data project is to design and to evaluate a research prototype for crawling an open data repository and collecting the extracted data.

    A key issue is to be able to automatically collect and organize data in order to ease their re-use. Our scenario here is not searching for a single and specific dataset, but downloading a full repository to see what we can expect/automate/extract/learn from this large set of data. This project will involve conducting a number of experiments to achieve this.

  • 1838.
    Öztürk, Aybüke
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Textual Summarization of Scientific Publications and Usage Patterns2012Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this study, we propose textual summarization for scientific publications and mobile phone usage patterns. Textual summarization is a process that takes a source document or a set of related documents, identifies the most salient information, and conveys it in less space than the original text. The increasing availability of information has necessitated deep research on textual summarization within the Information Retrieval and Natural Language Processing (NLP) areas, because textual summaries are easier to read and provide access to large repositories of content data in an efficient way. For example, snippets in web search are helpful for users as textual summaries. While summarization tools exist, they are either not adapted to scientific collections of documents or they summarize short forms of text such as news. In the first part of this study, we adapt the MEAD 3.11 summarization tool [19] to propose a method for building summaries of a set of related scientific articles by exploiting the structure of scientific publications in order to focus on the parts that are known to be the most informative in such documents. In the second part, we generate a natural language statement that describes a more readable form of a given symbolic pattern extracted from Nokia Challenge data. The reason is that the availability of mobile phone usage details enables new opportunities to provide a better understanding of the interest of user populations in mobile phone applications. For evaluating the first part of the study, we make use of Amazon Mechanical Turk (MTurk) to validate the summarization output.
