umu.se Publications

Search result 1 - 50 of 2668
  • 1. Aaghabali, M.
    et al.
    Akbari, S.
    Friedland, S.
    Markström, Klas
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Tajfirouz, Z.
    Upper bounds on the number of perfect matchings and directed 2-factors in graphs with given number of vertices and edges (2015). In: European journal of combinatorics (Print), ISSN 0195-6698, E-ISSN 1095-9971, Vol. 45, p. 132-144. Article in journal (Refereed)
    Abstract [en]

    We give an upper bound on the number of perfect matchings in simple graphs with a given number of vertices and edges. We apply this result to give an upper bound on the number of 2-factors in a directed complete bipartite balanced graph on 2n vertices. The upper bound is sharp for even n. For odd n we state a conjecture on a sharp upper bound.
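To make the bounded quantity concrete, here is a brute-force perfect-matching counter for small graphs (a sketch only, unrelated to the paper's proof technique; the function and graph names are our own):

```python
def count_perfect_matchings(n, edges):
    """Count perfect matchings of a simple graph on vertices 0..n-1
    by exhaustive pairing (feasible only for small n)."""
    edge_set = {frozenset(e) for e in edges}

    def pair_up(vertices):
        if not vertices:
            return 1
        v = min(vertices)
        rest = vertices - {v}
        # pair v with each unmatched neighbour, then recurse on the rest
        return sum(pair_up(rest - {u})
                   for u in rest if frozenset((u, v)) in edge_set)

    return pair_up(frozenset(range(n)))

# K4, the complete graph on 4 vertices, has exactly 3 perfect matchings.
k4_edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(count_perfect_matchings(4, k4_edges))  # 3
```

For the complete graph on 2n vertices the count is (2n-1)!!, which grows far too quickly for brute force; results like the paper's upper bounds are what make the general edge-constrained case tractable to reason about.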

  • 2.
    Abdollahian, Josef
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Kanwar, Anna
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Optimering av kortaste vägen vid hantering och avledning av skadligt dagvatten: Lösning med A-stjärna algoritm samt en guide med ekonomiska styrmedel för beslutsfattande aktörer [Optimising the shortest path for handling and diverting harmful storm water: a solution with the A-star algorithm and a guide with economic policy instruments for decision-making actors] (2017). Independent thesis, Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    The earth's population is growing and more and more people are moving into urban areas. As cities grow, new buildings are constructed and infrastructure expands. This rapid growth is directly related to increased flooding as a result of man-made changes to the natural environment.

    The already overloaded storm water systems for rain, melt, rinse and other surplus water often cannot handle the existing demand. Floods therefore arise at higher rain intensities and impose significant costs on society. Due to an unclear division of responsibility within the municipality's organizations, the existing storm water problem is not handled properly. In order to plan for sustainable cities in the future, it is important to find a viable solution to the responsibility issue and to determine how best to handle the storm water to achieve cost advantages.

    This study presents a guide for municipalities on how to allocate the responsibility between the municipality and the developer. The guide is based on simulations and optimization theory to propose effective solutions for harmful surplus storm water. Through simulations of the storm water system, the amount of surplus water that exceeds the storm water system's capacity has been quantified. In addition, to find a reasonable alternative run-off path for the surplus water, different methods for the shortest path problem have been investigated.

    The results show that a classical shortest path algorithm with a heuristic function is not the most appropriate alternative, because the heuristic prevents the selection of a more natural pathway upstream even when that would be the better solution.
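The heuristic effect described above can be illustrated with a minimal A*-style search on a hypothetical toy network (not the thesis's model): with a zero heuristic the routine reduces to Dijkstra's algorithm, while an overestimating heuristic can steer the search away from the cheaper "upstream" route.

```python
import heapq

def shortest_path(graph, start, goal, heuristic=lambda v: 0.0):
    """A*-style search; with the zero heuristic this is plain Dijkstra.
    graph maps a node to a list of (neighbour, edge_cost) pairs."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier,
                               (cost + w + heuristic(nxt), cost + w,
                                nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical toy drainage network: the cheapest run-off route goes
# "upstream" through node u before reaching the outlet t.
graph = {"s": [("u", 1), ("d", 4)], "u": [("t", 1)], "d": [("t", 1)]}

print(shortest_path(graph, "s", "t"))  # (2.0, ['s', 'u', 't'])

# An overestimating heuristic for the upstream node steers the search
# onto the more expensive downstream route:
bad_h = lambda v: {"u": 10.0}.get(v, 0.0)
print(shortest_path(graph, "s", "t", bad_h))  # (5.0, ['s', 'd', 't'])
```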

    Download full text (pdf)
    Optimering av kortaste vägen vid hantering och avledning av skadligt dagvatten
  • 3.
    Abdulle, Assyr
    et al.
    ANMC, EPFL.
    Cohen, David
    Institut für Angewandte und Numerische Mathematik, KIT.
    Vilmart, Gilles
    ANMC, EPFL.
    Konstantinos, Zygalakis
    ANMC, EPFL.
    High weak order methods for stochastic differential equations based on modified equations (2012). In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 34, no 3, p. A1800-A1823. Article in journal (Refereed)
    Abstract [en]

    Inspired by recent advances in the theory of modified differential equations, we propose a new methodology for constructing numerical integrators with high weak order for the time integration of stochastic differential equations. This approach is illustrated with the construction of new methods of weak order two, in particular, semi-implicit integrators well suited for stiff (mean-square stable) stochastic problems, and implicit integrators that exactly conserve all quadratic first integrals of a stochastic dynamical system. Numerical examples confirm the theoretical results and show the versatility of our methodology.

  • 4.
    Abramowicz, Konrad
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Numerical analysis for random processes and fields and related design problems (2011). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, we study numerical analysis for random processes and fields. We investigate the behavior of the approximation accuracy for specific linear methods based on a finite number of observations. Furthermore, we propose techniques for optimizing performance of the methods for particular classes of random functions. The thesis consists of an introductory survey of the subject and related theory and four papers (A-D).

    In paper A, we study a Hermite spline approximation of quadratic mean continuous and differentiable random processes with an isolated point singularity. We consider a piecewise polynomial approximation combining two different Hermite interpolation splines for the interval adjacent to the singularity point and for the remaining part. For locally stationary random processes, sequences of sampling designs eliminating asymptotically the effect of the singularity are constructed.

    In Paper B, we focus on approximation of quadratic mean continuous real-valued random fields by a multivariate piecewise linear interpolator based on a finite number of observations placed on a hyperrectangular grid. We extend the concept of local stationarity to random fields and for the fields from this class, we provide an exact asymptotics for the approximation accuracy. Some asymptotic optimization results are also provided.

    In Paper C, we investigate numerical approximation of integrals (quadrature) of random functions over the unit hypercube. We study the asymptotics of a stratified Monte Carlo quadrature based on a finite number of randomly chosen observations in strata generated by a hyperrectangular grid. For the locally stationary random fields (introduced in Paper B), we derive exact asymptotic results together with some optimization methods. Moreover, for a certain class of random functions with an isolated singularity, we construct a sequence of designs eliminating the effect of the singularity.

    In Paper D, we consider a Monte Carlo pricing method for arithmetic Asian options. An estimator is constructed using a piecewise constant approximation of an underlying asset price process. For a wide class of Lévy market models, we provide upper bounds for the discretization error and the variance of the estimator. We construct an algorithm for accurate simulations with controlled discretization and Monte Carlo errors, and obtain the estimates of the option price with a predetermined accuracy at a given confidence level. Additionally, for the Black-Scholes model, we optimize the performance of the estimator by using a suitable variance reduction technique.

    Download full text (pdf)
    fulltext
  • 5.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Arnqvist, Per
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Secchi, Piercesare
    Vantini, Simone
    Vitelli, Valeria
    Was it snowing on lake Kassjön in January 4486 BC? Functional data analysis of sediment data (2014). Conference paper (Other academic)
  • 6.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Häger, Charlotte
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Physiotherapy.
    Hébert-Losier, Kim
    Swedish Winter Sports Research Centre, Mid Sweden University, Department of Health Sciences, Östersund, Sweden.
    Pini, Alessia
    MOX – Department of Mathematics, Politecnico di Milano.
    Schelin, Lina
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Physiotherapy.
    Strandberg, Johan
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Vantini, Simone
    MOX – Department of Mathematics, Politecnico di Milano.
    An inferential framework for domain selection in functional ANOVA (2014). In: Contributions in infinite-dimensional statistics and related topics / [ed] Bongiorno, E.G., Salinelli, E., Goia, A., Vieu, P., Esculapio, 2014. Conference paper (Refereed)
    Abstract [en]

    We present a procedure for performing an ANOVA test on functional data, including pairwise group comparisons, in a Scheffé-like perspective. The test is based on the Interval Testing Procedure, and it selects the intervals where the groups significantly differ. The procedure is applied to the 3D kinematic motion of the knee joint collected during a functional task (one-leg hop) performed by three groups of individuals.

  • 7.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Häger, Charlotte
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Physiotherapy.
    Pini, Alessia
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Department of Statistical Sciences, Università Cattolica del Sacro Cuore, Milan, Italy.
    Schelin, Lina
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation. Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Vantini, Simone
    Nonparametric inference for functional-on-scalar linear models applied to knee kinematic hop data after injury of the anterior cruciate ligament (2018). In: Scandinavian Journal of Statistics, ISSN 0303-6898, E-ISSN 1467-9469, Vol. 45, no 4, p. 1036-1061. Article in journal (Refereed)
    Abstract [en]

    Motivated by the analysis of the dependence of knee movement patterns during functional tasks on subject-specific covariates, we introduce a distribution-free procedure for testing a functional-on-scalar linear model with fixed effects. The procedure not only tests the global hypothesis on the entire domain but also selects the intervals where statistically significant effects are detected. We prove that the proposed tests are provided with an asymptotic control of the intervalwise error rate, that is, the probability of falsely rejecting any interval of true null hypotheses. The procedure is applied to one-leg hop data from a study on anterior cruciate ligament injury. We compare knee kinematics of three groups of individuals (two injured groups with different treatments and one group of healthy controls), taking individual-specific covariates into account.

    Download full text (pdf)
    fulltext
  • 8.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Pini, Alessia
    Department of Statistical Sciences, Università Cattolica del Sacro Cuore, Milan, Italy.
    Schelin, Lina
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Stamm, Aymeric
    Department of Mathematics Jean Leray, UMR CNRS 6629, Nantes University, Nantes, France.
    Vantini, Simone
    MOX – Modelling and Scientific Computing Laboratory, Department of Mathematics, Politecnico di Milano, Milan, Italy.
    Domain selection and family-wise error rate for functional data: a unified framework (2023). In: Biometrics, ISSN 0006-341X, E-ISSN 1541-0420, Vol. 79, no 2, p. 1119-1132. Article in journal (Refereed)
    Abstract [en]

    Functional data are smooth, often continuous, random curves, which can be seen as an extreme case of multivariate data with infinite dimensionality. Just as component-wise inference for multivariate data naturally performs feature selection, subset-wise inference for functional data performs domain selection. In this paper, we present a unified testing framework for domain selection on populations of functional data. In detail, p-values of hypothesis tests performed on point-wise evaluations of functional data are suitably adjusted for providing a control of the family-wise error rate (FWER) over a family of subsets of the domain. We show that several state-of-the-art domain selection methods fit within this framework and differ from each other by the choice of the family over which the control of the FWER is provided. In the existing literature, these families are always defined a priori. In this work, we also propose a novel approach, coined threshold-wise testing, in which the family of subsets is instead built in a data-driven fashion. The method seamlessly generalizes to multidimensional domains in contrast to methods based on a-priori defined families. We provide theoretical results with respect to consistency and control of the FWER for the methods within the unified framework. We illustrate the performance of the methods within the unified framework on simulated and real data examples, and compare their performance with other existing methods.
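As a toy illustration of the general idea of adjusting point-wise p-values so that a selection of domain points controls the FWER, here is a plain Bonferroni version (our own minimal sketch over grid points only; the paper's subset-family adjustments are more refined and more powerful):

```python
def bonferroni_adjust(pvalues):
    """Bonferroni adjustment: multiply each point-wise p-value by the
    number of evaluation points, capping at 1."""
    m = len(pvalues)
    return [min(1.0, p * m) for p in pvalues]

def selected_domain(pvalues, alpha=0.05):
    """Domain selection: keep the grid points whose adjusted p-value
    stays below alpha, so the FWER is controlled at level alpha."""
    return [i for i, p in enumerate(bonferroni_adjust(pvalues)) if p < alpha]

# Point-wise p-values of a hypothesis test on a coarse grid over the
# functional domain.
pointwise = [0.30, 0.004, 0.001, 0.20, 0.50]
print(selected_domain(pointwise))  # [1, 2]
```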

    Download full text (pdf)
    fulltext
  • 9.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Schelin, Lina
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Strandberg, Johan
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Multiresolution clustering of dependent functional data with application to climate reconstruction (2019). In: Stat, E-ISSN 2049-1573, Vol. 8, no 1, article id e240. Article in journal (Refereed)
    Abstract [en]

    We propose a new nonparametric clustering method for dependent functional data, the double clustering bagging Voronoi method. It consists of two levels of clustering. Given a spatial lattice of points, a function is observed at each grid point. In the first-level clustering, features of the functional data are clustered. The second-level clustering takes dependence into account by grouping local representatives, built from the resulting first-level clusters, using a bagging Voronoi strategy. Depending on the distance measure used, features of the functions may be included in the second-step clustering, making the method flexible and general. Combined with the clustering method, a multiresolution approach is proposed that searches for stable clusters at different spatial scales, aiming to capture latent structures. This provides a powerful and computationally efficient tool to cluster dependent functional data at different spatial scales, here illustrated by a simulation study. The introduced methodology is applied to varved lake sediment data, aiming to reconstruct winter climate regimes in northern Sweden at different time resolutions over the past 6,000 years.

  • 10.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Seleznjev, Oleg
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Multivariate piecewise linear interpolation of a random field (2011). Manuscript (preprint) (Other academic)
    Abstract [en]

    We consider a multivariate piecewise linear interpolation of a continuous random field on a d-dimensional cube. The approximation performance is measured by the integrated mean square error. The multivariate piecewise linear interpolator is defined by N field observations on a grid of locations (or design). We investigate the class of locally stationary random fields whose local behavior is like a fractional Brownian field in the mean square sense and find the asymptotic approximation accuracy for a sequence of designs for large N. Moreover, for certain classes of continuous and continuously differentiable fields we provide an upper bound for the approximation accuracy in the uniform mean square norm.

    Download full text (pdf)
    fulltext
  • 11.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Seleznjev, Oleg
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    On the error of the Monte Carlo pricing method for Asian option (2008). In: Journal of Numerical and Applied Mathematics, ISSN 0868-6912, Vol. 96, no 1, p. 1-10. Article in journal (Refereed)
    Abstract [en]

    We consider a Monte Carlo method to price a continuous arithmetic Asian option with a given precision. Piecewise constant approximation and plain simulation are used for a wide class of models based on Lévy processes. We give bounds for the possible discretization and simulation errors. The sufficient numbers of discretization points and simulations to obtain the requested accuracy are derived. To demonstrate the general approach, the Black-Scholes model is studied in more detail. We treat the case of continuous averaging and starting time zero, but the obtained results can be applied to the discrete case and generalized to any time before an execution date. Some numerical experiments and a comparison to the PDE-based method are also presented.
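A minimal sketch of the pricing scheme the abstract describes, assuming a Black-Scholes asset and the continuous average replaced by a piecewise constant (discrete) one; parameter values are illustrative only:

```python
import math
import random

def asian_call_mc(s0, strike, r, sigma, T, n_steps, n_paths, seed=1):
    """Monte Carlo price of an arithmetic Asian call under Black-Scholes,
    with the continuous average approximated by a piecewise constant
    (discrete) average over n_steps points."""
    random.seed(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, running = s0, 0.0
        for _ in range(n_steps):
            # exact geometric Brownian motion step over dt
            s *= math.exp(drift + vol * random.gauss(0.0, 1.0))
            running += s
        total += max(running / n_steps - strike, 0.0)
    return math.exp(-r * T) * total / n_paths

# Illustrative parameters; the discretization error shrinks with n_steps
# and the simulation error with n_paths, as the bounds in the paper show.
price = asian_call_mc(s0=100.0, strike=100.0, r=0.05, sigma=0.2,
                      T=1.0, n_steps=50, n_paths=10_000)
print(round(price, 2))
```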

  • 12.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Seleznjev, Oleg
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Piecewise multilinear interpolation of a random field (2013). In: Advances in Applied Probability, ISSN 0001-8678, E-ISSN 1475-6064, Vol. 45, no 4, p. 945-959. Article in journal (Refereed)
    Abstract [en]

    We consider a piecewise-multilinear interpolation of a continuous random field on a d-dimensional cube. The approximation performance is measured using the integrated mean square error. The piecewise-multilinear interpolator is defined by N field observations on a grid of locations (or design). We investigate the class of locally stationary random fields whose local behavior is like a fractional Brownian field, in the mean square sense, and find the asymptotic approximation accuracy for a sequence of designs for large N. Moreover, for certain classes of continuous and continuously differentiable fields, we provide an upper bound for the approximation accuracy in the uniform mean square norm.
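A one-dimensional analogue of the interpolation scheme, illustrated on a deterministic test function rather than a random field (so the mean square error reduces to an ordinary integrated squared error); all names here are our own:

```python
import bisect
import math

def piecewise_linear(xs, ys):
    """Piecewise linear interpolant through (xs[i], ys[i]); xs sorted."""
    def g(x):
        # locate the cell [xs[i], xs[i+1]] containing x
        i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1 - t) * ys[i] + t * ys[i + 1]
    return g

def integrated_squared_error(f, g, n=1000):
    """Midpoint-rule approximation of the integrated squared error on [0, 1]."""
    return sum((f((k + 0.5) / n) - g((k + 0.5) / n)) ** 2
               for k in range(n)) / n

def target(x):
    return math.sin(2 * math.pi * x)

errors = []
for n_knots in (5, 9, 17):
    xs = [k / (n_knots - 1) for k in range(n_knots)]
    g = piecewise_linear(xs, [target(x) for x in xs])
    errors.append(integrated_squared_error(target, g))
print(errors)  # the error shrinks as the design gets denser
```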

  • 13.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Seleznjev, Oleg
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Stratified Monte Carlo quadrature for continuous random fields (2015). In: Methodology and Computing in Applied Probability, ISSN 1387-5841, E-ISSN 1573-7713, Vol. 17, no 1, p. 59-72. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of numerical approximation of integrals of random fields over a unit hypercube. We use a stratified Monte Carlo quadrature and measure the approximation performance by the mean squared error. The quadrature is defined by a finite number of stratified randomly chosen observations with the partition generated by a rectangular grid (or design). We study the class of locally stationary random fields whose local behavior is like a fractional Brownian field in the mean square sense and find the asymptotic approximation accuracy for a sequence of designs for a large number of observations. For the Hölder class of random functions, we provide an upper bound for the approximation error. Additionally, for a certain class of isotropic random functions with an isolated singularity at the origin, we construct a sequence of designs eliminating the effect of the singularity point.
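A minimal sketch of the stratified quadrature in the d = 2 case, with one observation per grid cell; the test integrand is our own choice, not from the paper:

```python
import random

def stratified_mc_integral(f, n_per_side, seed=0):
    """Stratified Monte Carlo quadrature over the unit square: partition
    it into an n x n rectangular grid and draw one uniform observation
    per cell, then average."""
    random.seed(seed)
    n = n_per_side
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = (i + random.random()) / n   # uniform point in cell (i, j)
            y = (j + random.random()) / n
            total += f(x, y)
    return total / (n * n)

# The integral of x*y over the unit square is exactly 1/4.
estimate = stratified_mc_integral(lambda x, y: x * y, n_per_side=20)
print(round(estimate, 3))
```

Compared to plain Monte Carlo with the same number of observations, stratification removes the between-cell component of the variance, which is what drives the sharper asymptotic rates in the paper.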

  • 14.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Strandberg, Johan
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Nonparametric bagging clustering methods to identify latent structures from a sequence of dependent categorical data (2022). In: Computational Statistics & Data Analysis, ISSN 0167-9473, E-ISSN 1872-7352, Vol. 177, article id 107583. Article in journal (Refereed)
    Abstract [en]

    Nonparametric bagging clustering methods are studied and compared to identify latent structures from a sequence of dependent categorical data observed along a one-dimensional (discrete) time domain. The frequency of the observed categories is assumed to be generated by a (slowly varying) latent signal, according to latent state-specific probability distributions. The bagging clustering methods use random tessellations (partitions) of the time domain and clustering of the category frequencies of the observed data in the tessellation cells to recover the latent signal, within a bagging framework. New and existing ways of generating the tessellations and clustering are discussed and combined into different bagging clustering methods. Edge tessellations and adaptive tessellations are the new proposed ways of forming partitions. Composite methods are also introduced that use (automated) decision rules based on entropy measures to choose among the proposed bagging clustering methods. The performance of all the methods is compared in a simulation study. From the simulation study it can be concluded that local and global entropy measures are powerful tools for improving the recovery of the latent signal, both via the adaptive tessellation strategies (local entropy) and in designing composite methods (global entropy). The composite methods are robust and improve performance overall, in particular the composite method using adaptive (edge) tessellations.

    Download full text (pdf)
    fulltext
  • 15.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Strandberg, Johan
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Nonparametric clustering methods to identify latent structures from a sequence of dependent categorical data. Manuscript (preprint) (Other academic)
  • 16.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Arnqvist, Per
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Secchi, Piercesare
    Politecnico di Milano, Italy.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Vantini, Simone
    Politecnico di Milano, Italy.
    Vitelli, Valeria
    Oslo University, Norway.
    Clustering misaligned dependent curves applied to varved lake sediment for climate reconstruction (2017). In: Stochastic environmental research and risk assessment (Print), ISSN 1436-3240, E-ISSN 1436-3259, Vol. 31, no 1, p. 71-85. Article in journal (Refereed)
    Abstract [en]

    In this paper we introduce a novel functional clustering method, the Bagging Voronoi K-Medoid Alignment (BVKMA) algorithm, which simultaneously clusters and aligns spatially dependent curves. It is a nonparametric statistical method that does not rely on distributional or dependency structure assumptions. The method is motivated by and applied to varved (annually laminated) sediment data from lake Kassjön in northern Sweden, aiming to infer past environmental and climate changes. The resulting clusters and their time dynamics show great potential for seasonal climate interpretation, in particular for winter climate changes.

  • 17.
    Abramsson, Evelina
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Grind, Kajsa
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Skattning av kausala effekter med matchat fall-kontroll data [Estimation of causal effects with matched case-control data] (2017). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Download full text (pdf)
    fulltext
  • 18.
    Adamowicz, Tomasz
    et al.
    Institute of Mathematics of the Polish Academy of Sciences, Warsaw, Poland.
    Lundström, Niklas L.P.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    The boundary Harnack inequality for variable exponent p-Laplacian, Carleson estimates, barrier functions and p(⋅)-harmonic measures (2016). In: Annali di Matematica Pura ed Applicata, ISSN 0373-3114, E-ISSN 1618-1891, Vol. 195, no 2, p. 623-658. Article in journal (Refereed)
    Abstract [en]

    We investigate various boundary decay estimates for p(⋅)-harmonic functions. For domains in R^n, n ≥ 2, satisfying the ball condition (C^{1,1}-domains), we show the boundary Harnack inequality for p(⋅)-harmonic functions under the assumption that the variable exponent p is a bounded Lipschitz function. The proof involves barrier functions and chaining arguments. Moreover, we prove a Carleson-type estimate for p(⋅)-harmonic functions in NTA domains in R^n and provide lower and upper growth estimates and a doubling property for a p(⋅)-harmonic measure.

    Download full text (pdf)
    fulltext
  • 19.
    Adlerborn, Björn
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Karlsson, Lars
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Kågström, Bo
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Distributed one-stage Hessenberg-triangular reduction with wavefront scheduling (2018). In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 40, no 2, p. C157-C180. Article in journal (Refereed)
    Abstract [en]

    A novel parallel formulation of Hessenberg-triangular reduction of a regular matrix pair on distributed memory computers is presented. The formulation is based on a sequential cache-blocked algorithm by Kågström et al. [BIT, 48 (2008), pp. 563-584]. A static scheduling algorithm is proposed that addresses the problem of underutilized processes caused by two-sided updates of matrix pairs based on sequences of rotations. Experiments using up to 961 processes demonstrate that the new formulation is an improvement of the state of the art and also identify factors that limit its scalability.

  • 20.
    Adlerborn, Björn
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Kågström, Bo
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Karlsson, Lars
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Distributed one-stage Hessenberg-triangular reduction with wavefront scheduling (2016). Report (Other academic)
    Abstract [en]

    A novel parallel formulation of Hessenberg-triangular reduction of a regular matrix pair on distributed memory computers is presented. The formulation is based on a sequential cache-blocked algorithm by Kågström, Kressner, E.S. Quintana-Ortí, and G. Quintana-Ortí (2008). A static scheduling algorithm is proposed that addresses the problem of underutilized processes caused by two-sided updates of matrix pairs based on sequences of rotations. Experiments using up to 961 processes demonstrate that the new algorithm is an improvement of the state of the art and also identify factors that currently limit its scalability.

    Download full text (pdf)
    fulltext
  • 21.
    Adlerborn, Björn
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Kågström, Bo
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Kressner, Daniel
    A parallel QZ algorithm for distributed memory HPC systems (2014). In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 36, no 5, p. C480-C503. Article in journal (Refereed)
    Abstract [en]

    Appearing frequently in applications, generalized eigenvalue problems represent one of the core problems in numerical linear algebra. The QZ algorithm of Moler and Stewart is the most widely used algorithm for addressing such problems. Despite its importance, little attention has been paid to the parallelization of the QZ algorithm. The purpose of this work is to fill this gap. We propose a parallelization of the QZ algorithm that incorporates all modern ingredients of dense eigensolvers, such as multishift and aggressive early deflation techniques. To deal with (possibly many) infinite eigenvalues, a new parallel deflation strategy is developed. Numerical experiments for several random and application examples demonstrate the effectiveness of our algorithm on two different distributed memory HPC systems.

  • 22.
    Adlerborn, Björn
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Kågström, Bo
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
    Kressner, Daniel
    SB–MATHICSE–ANCHP, EPF Lausanne.
    PDHGEQZ user guide, 2015. Report (Other academic)
    Abstract [en]

    Given a general matrix pair (A,B) with real entries, we provide software routines for computing a generalized Schur decomposition (S, T). The real and complex conjugate pairs of eigenvalues appear as 1×1 and 2×2 blocks, respectively, along the diagonals of (S, T) and can be reordered in any order. Typically, this functionality is used to compute orthogonal bases for a pair of deflating subspaces corresponding to a selected set of eigenvalues. The routines are written in Fortran 90 and target distributed memory machines.

    Download full text (pdf)
    fulltext
  • 23.
    Adolfsson, David
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Claesson, Tom
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Estimation methods for Asian Quanto Basket options, 2019. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    All financial institutions that provide options to counterparties will in most cases get involved with Monte Carlo simulations. Options with a payoff function that depends on the asset's value at different time points over its lifespan are so-called path-dependent options. This path dependency implies that there exists no closed-form solution and the price must hence be estimated; this is where Monte Carlo methods come into the picture. The problem with this fundamental option pricing method is the computational time. Prices fluctuate continuously on the open market with respect to different risk factors, and since it is impossible to re-evaluate the option for all shifts due to its computing-intensive nature, estimations of the option price must be used. Estimating the price from known points will of course never produce the same result as a full re-evaluation, but an estimation method that produces reliable results and greatly reduces computing time is desirable. This thesis evaluates different approaches and tries to minimize the estimation error with respect to a certain number of risk factors.

    This is the background for our master's thesis at Swedbank. The goal is to create multiple estimation methods and compare them to Swedbank's current estimation model. By doing this we could potentially provide Swedbank with improvement ideas regarding some of its option products and risk measurements. This thesis is primarily based on two estimation methods that estimate option prices with respect to two variable risk factors: the value of the underlying assets and the volatility. The first method is a grid that uses a second-order Taylor expansion and the sensitivities delta, gamma and vega. The other method uses a grid of pre-simulated option prices for different shifts in risk factors; the interpolation technique used in this method is called Piecewise Cubic Hermite interpolation. The methods (referred to as approaches in the report) are implemented to handle a relative change of 50 percent in the underlying asset's index value, which is the first risk factor. Concerning the second risk factor, volatility, both methods estimate prices for a 50 percent relative downward change and an upward change of 400 percent from the initial volatility. Should even more extreme market conditions emerge, both methods use linear extrapolation to estimate a new option price.
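The first approach above rests on a second-order Taylor expansion of the option price in the risk-factor shifts. A minimal sketch of that expansion, with a hypothetical base price and Greeks (not Swedbank's actual model or data):

```python
def taylor_price_estimate(p0, delta, gamma, vega, dS, dsigma):
    """Second-order Taylor estimate of an option price: first- and
    second-order terms in the underlying shift dS, and a first-order
    term in the volatility shift dsigma."""
    return p0 + delta * dS + 0.5 * gamma * dS ** 2 + vega * dsigma

# Hypothetical base price and sensitivities:
p0, delta, gamma, vega = 10.0, 0.6, 0.05, 12.0
est = taylor_price_estimate(p0, delta, gamma, vega, dS=2.0, dsigma=0.01)
```

The grid variant stores such estimates (or pre-simulated prices) at fixed shift nodes and interpolates between them.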

    Download full text (pdf)
    ESTIMATION METHODS FOR ASIAN QUANTO BASKET OPTIONS
  • 24.
    af Klintberg, Max
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Predictive Modeling of Emissions: Heavy Duty Vehicles, 2016. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Download full text (pdf)
    fulltext
  • 25.
    Agvik, Simon
    Umeå University, Faculty of Science and Technology, Department of Physics.
    A deformable terrain model in multi-domain dynamics using elastoplastic constraints: An adaptive approach, 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Achieving realistic simulations of terrain vehicles in their work environment requires not only a careful model of the vehicle itself; the vehicle's interactions with the surroundings are equally important. For off-road ground vehicles the terrain heavily affects the behaviour of the vehicle and thus puts great demands on the terrain model.

    The purpose of this project has been to develop and evaluate a deformable terrain model, meant to be used in real-time simulations with multi-body dynamics. The proposed approach is a modification of an existing elastoplastic model based on linear elasticity theory and a capped Drucker-Prager model, using it in an adaptive way. The original model can be seen as a system of rigid bodies connected by elastoplastic constraints, representing the terrain. This project investigates if it is possible to create dynamic bodies just when it is absolutely necessary, and store information about possible deformations in a grid.

    Two methods for transferring information between the dynamic bodies and the grid have been evaluated: an interpolating approach and a discrete approach. The test results indicate that the interpolating approach is preferable, offering better stability at an equal performance cost. However, stability problems remain that must be solved if the model is to be useful in a commercial product.

    Download full text (pdf)
    fulltext
  • 26.
    Ahlbeck, Jakob
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Mosebach, Fredrik
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Analys av risker med garantinivåer i förhållande till förväntade utbetalningar och portföljavkastningar för traditionella pensionsförsäkringar: Ett examensarbete för Folksam Liv med dotterbolag, 2017. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Download full text (pdf)
    Ahlbeck&Mosebach
  • 27.
    Ahlin, Mikael
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Ranby, Felix
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Predicting Marketing Churn Using Machine Learning Models, 2019. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    For any organisation that engages in marketing actions there is a need to understand how people react to the communication messages that are sent. Since the introduction of the General Data Protection Regulation, the requirements for personal data usage have increased and people are able to affect the way their personal information is used by companies. For instance, people have the possibility to unsubscribe from communication that is sent; this is called Opt-Out and can be viewed as churning from communication channels. When a customer Opts Out, the organisation loses the opportunity to send personalised marketing to that individual, which in turn results in lost revenue.

    The aim of this thesis is to investigate the Opt-Out phenomenon and build a model that is able to predict the risk of losing a customer from the communication channels. The risk of losing a customer is measured as the estimated probability that a specific individual will Opt-Out in the near future. To predict future Opt-Outs the project uses machine learning algorithms on aggregated communication and customer data. Of the algorithms that were tested, the best and most stable performance was achieved by an Extreme Gradient Boosting algorithm that used simulated variables. The performance of the model is best described by an AUC score of 0.71 and a lift score of 2.21, with an adjusted threshold on data two months into the future from when the model was trained. With a model that uses simulated variables the computational cost goes up. However, the increase in performance is significant, and it can be concluded that including information about specific communications is relevant for the outcome of the predictions. A boosted method such as the Extreme Gradient Boosting algorithm generates stable results, which allows a longer time between model retraining sessions.
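The AUC metric quoted above can be computed directly from labels and scores via the rank (Mann-Whitney) identity, without an explicit ROC curve. A small self-contained sketch with made-up labels and scores:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive is scored above a randomly chosen negative, counting
    ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical churn labels (1 = Opt-Out) and model scores:
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.4, 0.5, 0.3, 0.2]
value = auc(labels, scores)  # 5/6 here: one of six pairs is mis-ordered
```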

    Download full text (pdf)
    Thesis_Ahlin_Ranby_2019
  • 28.
    Ahlkrona, Josefin
    et al.
    Department of Mathematics, Stockholm University, Stockholm, Sweden; Swedish e-Science Research Centre (SeRC), Stockholm, Sweden.
    Elfverson, Daniel
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    A cut finite element method for non-Newtonian free surface flows in 2D: application to glacier modelling, 2021. In: Journal of Computational Physics: X, E-ISSN 2590-0552, Vol. 11, article id 100090. Article in journal (Refereed)
    Abstract [en]

    In ice sheet and glacier modelling, the Finite Element Method is rapidly gaining popularity. However, constructing and updating meshes for ice sheets and glaciers is a non-trivial and computationally demanding task due to their thin, irregular, and time dependent geometry. In this paper we introduce a novel approach to ice dynamics computations based on the unfitted Finite Element Method CutFEM, which lets the domain boundary cut through elements. By employing CutFEM, complex meshing and remeshing is avoided as the glacier can be immersed in a simple background mesh without loss of accuracy. The ice is modelled as a non-Newtonian, shear-thinning fluid obeying the p-Stokes (full Stokes) equations with the ice atmosphere interface as a moving free surface. A Navier slip boundary condition applies at the glacier base allowing both bedrock and subglacial lakes to be represented. Within the CutFEM framework we develop a strategy for handling non-linear viscosities and thin domains and show how glacier deformation can be modelled using a level set function. In numerical experiments we show that the expected order of accuracy is achieved and that the method is robust with respect to penalty parameters. As an application we compute the velocity field of the Swiss mountain glacier Haut Glacier d'Arolla in 2D with and without an underlying subglacial lake, and simulate the glacier deformation from year 1930 to 1932, with and without surface accumulation and basal melt.

    Download full text (pdf)
    fulltext
  • 29.
    Ahlm, Kristoffer
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    IDENTIFIKATION AV RISKINDIKATORER I FINANSIELL INFORMATION MED HJÄLP AV AI/ML: Ökade möjligheter för myndigheter att förebygga ekonomisk brottslighet, 2021. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Economic crimes are more lucrative than other crimes such as drug dealing, selling stolen goods, or trafficking. Early preventive measures that make it more difficult for criminals to use companies for criminal purposes can save society large costs. A literature study showed that there are large weaknesses in the collaboration between Swedish authorities to detect serious economic crimes. Today, most crimes among companies that commit fraud are discovered after the company has declared bankruptcy. In studies, machine learning models have been tested to detect economic crimes; some Swedish authorities are now using machine learning methods to detect different crimes, and more advanced methods are used by the Danish authorities. Bolagsverket has a large register of companies in Sweden, and the aim of this study is to investigate whether machine learning can be used on digitally submitted annual reports and information in Bolagsverket's register to train classification models and identify suspicious companies. To be able to train the model, lawsuits were collected from the Swedish Economic Crime Authority that could be connected to specific companies through their digitally submitted annual reports. Principal component analysis is used to visually show differences between the groups of suspected and non-suspected companies; the analysis shows that the groups overlap, with no clear clustering. Because the dataset was unbalanced, with 38 suspicious companies out of 1009, the oversampling technique SMOTE was used to create synthetic data and more suspects in the dataset. The two machine learning models Random Forest and support vector machine (SVM) were compared in a 10-fold cross-validation. Both models showed a recall of around 0.91, but Random Forest had a much higher precision and a higher accuracy.

    Random Forest was therefore chosen, retrained, and showed a recall of 0.75 when tested on unseen data with 8 suspects out of 202 companies. Lowering the threshold resulted in a higher recall but with a larger portion of wrongly classified companies. The study clearly shows the problem of an unbalanced dataset and the challenges of a small dataset. A larger dataset could have made a more selective selection of certain crimes possible, which could have resulted in a more robust model that Bolagsverket could use to more easily identify suspicious companies in its register.
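The SMOTE technique mentioned above creates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbours. A simplified, illustrative sketch of that core idea (not the imbalanced-learn implementation, and with hypothetical data):

```python
import random

def smote_sketch(minority, n_new, k=2, seed=0):
    """Generate synthetic minority samples by interpolating between a
    minority point and one of its k nearest minority neighbours
    (the core idea of SMOTE, in a deliberately simplified form)."""
    rng = random.Random(seed)
    new = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x among the other minority points
        others = sorted((p for p in minority if p is not x),
                        key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nb = rng.choice(others)
        t = rng.random()  # interpolation factor in [0, 1)
        new.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return new

# Hypothetical 2D minority-class points:
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3)]
synthetic = smote_sketch(minority, n_new=4)
```

Each synthetic point lies on a segment between two real minority points, so the oversampled class keeps its original geometry.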

    Download full text (pdf)
    Ahlm_fulltext
  • 30.
    Ahlstrand, Samuel
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Partiformning vid intern materialförsörjning och layoutanpassning av lager: En fallstudie vid GE Healthcare Umeå av två-binge, supermarkets och materialspindlar, 2014. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [sv]

    In its implementation of Lean, GE Healthcare (GEHC) Umeå has made changes to its warehouse structure that need better adaptation. GEHC Umeå's new warehouse structure involves open supermarkets with changed supply routines from warehouse to production. A two-bin system has been implemented in which signal containers of material are refilled by warehouse-responsible material handlers ("material spiders").

    The first identified problem and research question concerns the two-bin quantities, which determine the number of articles at the assembly stations. These need to be reviewed, and a routine for determining the quantities needs to be established. As the second, independent research question, the number of supermarkets (warehouses) and their material handlers has been identified; they are many in number and spread out with limited coordination.

    A previous master's thesis, literature studies, interviews, and on-site observations have been used to describe the current state through both qualitative and quantitative methods. Due to the lack of similar problems in the literature, external lot-sizing methods and inventory control have been used and complemented with simulation of the two-bin system as part of answering the first research question. For the second question, parts of simplified systematic layout planning have been used, where among other things different degrees of centralisation were examined by simulating material transports for different article placements.

    Today the quantities are set from forecast usage based on personal experience. Coordination between the material handlers is lacking and their utilisation is perceived as uneven, while goods receiving would benefit from increased capacity. Standardised material-handling processes are missing, and the production groups have different working methods that are assumed to benefit from a centralisation where common routines can more easily be established.

    The historical transactions show that there is room for improvement, as some articles generate long transport distances because of their storage location relative to where they are used in production. The new bin quantities from the lot-sizing methods EOQ, m-EOQ, and the Kanban formula have been tested in a simulation of replenishment and material consumption via an implementation in Excel VBA.

    The Kanban formula shows the highest service level, 90%, at the lowest total cost and with reduced capital tied up in inventory. The Kanban quantities reduce the total cost by 20%. The number of replenishments would increase by 7% and the number of articles in production decrease by 59%. For the layout adaptation, simulations of different order picking and article placements were also carried out. The results show that a degree of centralisation is possible with a small increase in material transports. It also emerges that articles that are picked very rarely are calculated to occupy 89 of the total 230 shelf racks and should be reviewed. This, together with the requirement specification from the analysis part, has helped to generate different concepts.

    GEHC Umeå should in the future use the Kanban formula to determine the bin quantities. Some adaptations should be made for shared articles in the Comm warehouse and for articles without historical demand. For the layout, GEHC Umeå should first of all relocate articles that today contribute unnecessary transports. In the longer term, an increased degree of centralisation of the warehouses should be possible, considering the advantages of coordination and informal spreading of work routines. The material handlers should assist goods receiving and participate in shortage reporting and improvement work. In addition, opportunities for increased cooperation between material planning, production planning, and the material handlers should be investigated.
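The EOQ lot-sizing method compared in the abstract is the classic square-root formula Q* = sqrt(2DS/H). A minimal sketch with hypothetical demand and cost figures (not GEHC Umeå's data):

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    """Classic economic order quantity: the lot size that minimises the
    sum of ordering and holding costs, Q* = sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

# Hypothetical figures: 1200 units/year demand, 50 kr per replenishment,
# 3 kr per unit and year in holding cost:
q = eoq(1200, 50, 3)
```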

  • 31. Akbari, Saieed
    et al.
    Friedland, Shmuel
    Markström, Klas
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Zare, Sanaz
    On 1-sum flows in undirected graphs, 2016. In: The Electronic Journal of Linear Algebra, ISSN 1537-9582, E-ISSN 1081-3810, Vol. 31, p. 646-665. Article in journal (Refereed)
    Abstract [en]

    Let G = (V, E) be a simple undirected graph. For a given set L ⊆ ℝ, a function ω: E → L is called an L-flow. Given a vector γ ∈ ℝ^V, ω is a γ-L-flow if for each v ∈ V, the sum of the values on the edges incident to v is γ(v). If γ(v) = c for all v ∈ V, then the γ-L-flow is called a c-sum L-flow. In this paper, the existence of γ-L-flows for various choices of sets L of real numbers is studied, with an emphasis on 1-sum flows. Let L be a subset of real numbers containing 0 and denote L* := L \ {0}. Answering a question from [S. Akbari, M. Kano, and S. Zare. A generalization of 0-sum flows in graphs. Linear Algebra Appl., 438:3629-3634, 2013.], the bipartite graphs which admit a 1-sum ℝ*-flow or a 1-sum ℤ*-flow are characterized. It is also shown that every k-regular graph, with k either odd or congruent to 2 modulo 4, admits a 1-sum {−1, 0, 1}-flow.

  • 32.
    Alainentalo, Lisbeth
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    A Comparison of Tests for Ordered Alternatives With Application in Medicine, 1997. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    A situation frequently encountered in medical studies is the comparison of several treatments with a control. The problem is to determine whether or not a test drug has a desirable medical effect and/or to identify the minimum effective dose. In this Bachelor’s thesis, some of the methods used for testing hypotheses of ordered alternatives are reviewed and compared with respect to the power of the tests. Examples of multiple comparison procedures, maximum likelihood procedures, rank tests and different types of contrasts are presented and the properties of the methods are explored.

    Depending on the degree of knowledge about the dose-responses, the aim of the study, and whether the test is parametric or non-parametric and distribution-free or not, different recommendations are given as to which test should be used. Thus, there is no single test that can be applied in all experimental situations for testing all different alternative hypotheses.

    Download full text (pdf)
    fulltext
  • 33. Albano, Anthony D.
    et al.
    Wiberg, Marie
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Linking With External Covariates: Examining Accuracy by Anchor Type, Test Length, Ability Difference, and Sample Size, 2019. In: Applied psychological measurement, ISSN 0146-6216, E-ISSN 1552-3497, Vol. 43, no 8, p. 597-610. Article in journal (Refereed)
    Abstract [en]

    Research has recently demonstrated the use of multiple anchor tests and external covariates to supplement or substitute for common anchor items when linking and equating with nonequivalent groups. This study examines the conditions under which external covariates improve linking and equating accuracy, with internal and external anchor tests of varying lengths and groups of differing abilities. Pseudo forms of a state science test were equated within a resampling study where sample size ranged from 1,000 to 10,000 examinees and anchor tests ranged in length from eight to 20 items, with reading and math scores included as covariates. Frequency estimation linking with an anchor test and external covariate was found to produce the most accurate results under the majority of conditions studied. Practical applications of linking with anchor tests and covariates are discussed.

    Download full text (pdf)
    fulltext
  • 34.
    Albing, Malin
    et al.
    Department of Mathematics, Luleå University of Technology.
    Vännman, Kerstin
    Department of Mathematics, Luleå University of Technology.
    Elliptical safety region plots for Cpk, 2011. In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 38, no 6, p. 1169-1187. Article in journal (Refereed)
    Abstract [en]

    The process capability index C pk is widely used when measuring the capability of a manufacturing process. A process is defined to be capable if the capability index exceeds a stated threshold value, e.g. C pk >4/3. This inequality can be expressed graphically using a process capability plot, which is a plot in the plane defined by the process mean and the process standard deviation, showing the region for a capable process. In the process capability plot, a safety region can be plotted to obtain a simple graphical decision rule to assess process capability at a given significance level. We consider safety regions to be used for the index C pk . Under the assumption of normality, we derive elliptical safety regions so that, using a random sample, conclusions about the process capability can be drawn at a given significance level. This simple graphical tool is helpful when trying to understand whether it is the variability, the deviation from target, or both that need to be reduced to improve the capability. Furthermore, using safety regions, several characteristics with different specification limits and different sample sizes can be monitored in the same plot. The proposed graphical decision rule is also investigated with respect to power.
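The index C_pk used above measures the distance from the process mean to the nearer specification limit in units of three standard deviations. A minimal sketch with hypothetical process values (the elliptical safety regions themselves are beyond this sketch):

```python
def cpk(mean, sigma, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearer specification limit, in units of three standard deviations."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical process: specification window [8, 12], mean 10.5, sigma 0.3:
value = cpk(10.5, 0.3, lsl=8.0, usl=12.0)
capable = value > 4 / 3  # the threshold used in the article
```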

  • 35.
    Al-Dory, Mohammed
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Directional edge detection by the gradient method applied to linear and non-linear edges, 2020. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    When we humans look at images, especially paintings, we are usually interested in the sense of art and what we regard as "beauty". This may include colour harmony, fantasy, realism, expression, drama, ordered chaos, contemplative aspects, etc. Alas, none of that is interesting for a robot that processes a two-dimensional matrix representing what we humans call an image. Robots, and other digital computers, are programmed to care about things like resolution, sampling frequency, image intensity, as well as edges. The detection of edges is a very important subject in the field of image processing. An edge in an image represents the end of one object and the start of another. Thus, edges exist in different shapes and forms. Some edges are horizontal, others are vertical, and there are also diagonal edges; all of these are straight lines with constant slopes. There are also circular and curved edges whose slopes depend on the spatial variables. It is not always beneficial to detect all edges in an image; sometimes we are interested in edges in a certain direction. In this work we explain the mathematics behind edge detection using the gradient approach, try to give optimal ways to detect linear edges in different directions, and discuss the detection of non-linear edges. The theory developed in this work is then applied and tested using Matlab.
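The gradient approach named above estimates partial derivatives of the image intensity and takes their magnitude; large magnitudes mark edges. A minimal Python sketch using central differences on a hypothetical image (the thesis itself works in Matlab):

```python
def gradient_magnitude(img):
    """Gradient magnitude of a 2D grey-scale image (list of lists) using
    central differences; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (img[i][j + 1] - img[i][j - 1]) / 2.0  # horizontal derivative
            gy = (img[i + 1][j] - img[i - 1][j]) / 2.0  # vertical derivative
            mag[i][j] = (gx ** 2 + gy ** 2) ** 0.5
    return mag

# A vertical edge: dark left half, bright right half.
img = [[0, 0, 255, 255]] * 4
m = gradient_magnitude(img)
```

A direction-selective detector would instead threshold gx and gy separately, or project (gx, gy) onto a chosen direction.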

    Download full text (pdf)
    fulltext
  • 36.
    Alger, Susanne
    Umeå University, Faculty of Social Sciences, Department of applied educational science.
    Is This Reliable Enough?: Examining Classification Consistency and Accuracy in a Criterion-Referenced Test, 2016. In: International journal of assessment tools in education, ISSN 2148-7456, Vol. 3, no 2, p. 137-150. Article in journal (Refereed)
    Abstract [en]

    One important step for assessing the quality of a test is to examine the reliability of test score interpretation. Which aspect of reliability is the most relevant depends on what type of test it is and how the scores are to be used. For criterion-referenced tests, and in particular certification tests, where students are classified into performance categories, primary focus need not be on the size of error but on the impact of this error on classification. This impact can be described in terms of classification consistency and classification accuracy. In this article selected methods from classical test theory for estimating classification consistency and classification accuracy were applied to the theory part of the Swedish driving licence test, a high-stakes criterion-referenced test which is rarely studied in terms of reliability of classification. The results for this particular test indicated a level of classification consistency that falls slightly short of the recommended level which is why lengthening the test should be considered. More evidence should also be gathered as to whether the placement of the cut-off score is appropriate since this has implications for the validity of classifications.

  • 37.
    Ali, Raman
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Root Cause Analysis for In-Transit Time Performance: Time Series Analysis for Inbound Quantity Received into Warehouse, 2021. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Cytiva is a global provider of technologies to pharmaceutical companies worldwide, and it is critical to ensure that Cytiva's customers receive deliveries of products on time. Cytiva's products are shipped via road transportation within most parts of Europe; for the rest of the world, air freight is used. The company is challenged to deliver products on time between regional distribution points and from manufacturing sites to its regional distribution centers. The on-time performance for the delivery of goods is today 79%, compared to the company's goal of 95%.

    The goal of this work is to find the root causes and recommend improvement opportunities for the logistics organization's inbound in-transit time performance, towards its target of a 95% success rate for shipping in-transit times.

    Data for this work was collected from the company's system to create visibility for the logistics specialists and to create predictions that can be used for the warehouse in Rosersberg. Visibility was created by implementing various dashboards in the QlikSense program that can be used by the logistics group. The prediction models were built on the Holt-Winters forecasting technique to predict the quantity, weight, and volume of products arriving daily within five days, and are suitable for implementation in the daily work. With this forecasting technique, highly accurate models were found for both the quantity and the weight, with accuracies of 96.02% and 92.23%, respectively. For the volume, however, too many outliers were replaced by mean values, and the accuracy of that model was 75.82%.

    However, large numbers of discrepancies have been found in the data, which has led to a large ongoing project to resolve them. This means that the models shown in this thesis cannot be considered completely reliable for the company to use while many errors remain in the data. The models may need to be adjusted once the quality of the data has improved; as of today, they can serve as a rough guide.
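The Holt-Winters technique named above maintains level, trend, and seasonal components by exponential smoothing and extrapolates them forward. A simplified additive sketch with hypothetical arrival counts (not the thesis' actual model or data):

```python
def holt_winters_additive(series, season_len, alpha, beta, gamma, n_ahead):
    """Additive Holt-Winters (triple exponential smoothing) sketch:
    level, trend and seasonal components are updated recursively and
    then extrapolated n_ahead steps. Initialisation is deliberately simple."""
    level = sum(series[:season_len]) / season_len
    trend = (sum(series[season_len:2 * season_len])
             - sum(series[:season_len])) / season_len ** 2
    seasonal = [x - level for x in series[:season_len]]
    for t in range(season_len, len(series)):
        s = seasonal[t % season_len]
        last_level = level
        level = alpha * (series[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[t % season_len] = gamma * (series[t] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + seasonal[(len(series) + h) % season_len]
            for h in range(n_ahead)]

# Hypothetical daily arrival counts with a 5-day (working-week) pattern:
arrivals = [20, 35, 30, 25, 40] * 4
forecast = holt_winters_additive(arrivals, season_len=5,
                                 alpha=0.3, beta=0.1, gamma=0.2, n_ahead=5)
```

On this perfectly periodic series the forecast simply reproduces the weekly pattern; real data would show smoothing between noisy weeks.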

    Download full text (pdf)
    fulltext
  • 38.
    Alishev, Boris
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Kågström, Oskar
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Effectivisation of an Industrial Painting Process: A discrete event approach to modeling and analysing the painting process at Volvo GTO Umeå, 2022. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    For any manufacturing process, one of the key challenges once a solid foundation has been built is how improvements can be made. Management has to consider how possible changes will affect both the process as a whole and every individual part before implementation. The groundwork for this is to have a clear overview of every part and the possibility to investigate the effects of changes. This thesis thus aims to provide a clear overview of the complex painting process at Volvo GTO in Umeå and a template for investigating how differently implemented changes will affect the process. The means for doing this is to use statistics, modeling, and discrete event simulation. The modeling shall provide an approximate recreation of reality, and the subsequent analysis shall take similarities and differences into account to estimate the effects of changes. Recreation of real-world data and variability is based on bootstrap resampling over multiple independent weeks of observations. Results obtained from simulation are compared to observed data in order to validate the model and investigate discrepancies. Given the results of the model validation, modifications are implemented, and information obtained from the validation is used to evaluate the results of the modifications. Further, strengths and weaknesses of the thesis are presented, and a recommendation to alter the stance on process improvements is provided to Volvo GTO.
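The bootstrap resampling step described above can be sketched as drawing, with replacement, from each observed week of data so that simulated runs preserve the empirical variability. A minimal sketch with hypothetical observations (not Volvo GTO's data):

```python
import random

def bootstrap_weeks(weekly_obs, n_runs, seed=1):
    """Draw bootstrap samples of observed cycle times: each simulated run
    resamples, with replacement, as many observations as were recorded,
    preserving the empirical distribution and its variability."""
    rng = random.Random(seed)
    return [rng.choices(weekly_obs, k=len(weekly_obs)) for _ in range(n_runs)]

# Hypothetical processing times (minutes) from one observed week:
observed = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2]
runs = bootstrap_weeks(observed, n_runs=3)
```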

    Download full text (pdf)
    fulltext
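The bootstrap-resampling step described in the abstract can be illustrated with a short sketch; the cycle times below are hypothetical stand-ins, not Volvo data:

```python
import random

def bootstrap_mean_ci(samples, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean of observed cycle times."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        # Resample the observed week of data with replacement
        resample = [rng.choice(samples) for _ in samples]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical processing times (minutes) for one station over a week
times = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 4.0, 5.2, 4.7, 4.3]
low, high = bootstrap_mean_ci(times)
```

Repeating this per station and per week yields the input variability the simulation model draws from.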
  • 39. Alloyarova, Roza
    et al.
    Nikulin, Mikhail
    Pya, Natalya
    Voinov, Vassilly
    The Power-Generalized Weibull probability distribution and its use in survival analysis2007In: Communications in Dependability and Quality Management, Vol. 10, no 1, p. 5-15Article in journal (Refereed)
  • 40.
    Alm, Hannah
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Optimering av asfaltproduktion för minskad klimatpåverkan: Minimering av koldioxidutsläpp i Skanskas asfalt2024Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [sv]

    This degree project was carried out at Skanska Industrial Solutions (SIS) and aimed to investigate ways of minimising the carbon dioxide emissions from Skanska's asphalt production through optimisation. A non-linear optimisation model was developed to compute the optimal blend of virgin materials and reclaimed asphalt, as well as the annual tonnage to be produced, in order to minimise carbon dioxide emissions per tonne of asphalt manufactured.

    The starting point for the model was to describe the various aspects of the emissions from asphalt production as realistically as possible. The model accounts for constraints such as the maximum permitted proportion of reclaimed asphalt, the maximum production level, and the availability of reclaimed asphalt. Electricity and bio-oil consumption were modelled as functions of production volume using the least-squares method, to describe how consumption changed as the annual tonnage produced increased.

    The results show that a high proportion of reclaimed asphalt is the most important factor for reducing carbon dioxide emissions. The model also showed that the annual tonnage affects the carbon dioxide emissions per tonne of asphalt manufactured, with higher production leading to lower emissions per tonne. An examination of future scenarios with a lower availability of reclaimed asphalt showed, however, that the annual tonnage should only be increased as long as a maximum proportion of reclaimed asphalt remains possible.

    Download full text (pdf)
    fulltext
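A non-linear model of this kind can be sketched with SciPy; all emission factors, bounds and units below are hypothetical stand-ins, not Skanska figures (tonnage is in kilotonnes):

```python
from scipy.optimize import minimize

# Hypothetical emission factors (kg CO2 per tonne of mix): virgin material
# versus reclaimed asphalt (RAP), plus a fixed annual plant overhead term.
E_VIRGIN, E_RAP = 60.0, 20.0
OVERHEAD = 50.0                  # fixed annual plant emissions (scaled units)
MAX_RAP_SHARE = 0.4              # maximum permitted RAP blend
TONNAGE_BOUNDS = (10.0, 100.0)   # annual production, kilotonnes

def co2_per_tonne(v):
    rap_share, tonnage = v
    blend = (1 - rap_share) * E_VIRGIN + rap_share * E_RAP
    # Non-linear in tonnage: the fixed overhead dilutes with volume
    return blend + OVERHEAD / tonnage

res = minimize(co2_per_tonne, x0=[0.2, 50.0],
               bounds=[(0.0, MAX_RAP_SHARE), TONNAGE_BOUNDS])
best_rap, best_tonnage = res.x
```

Consistent with the abstract's findings, the optimum in this toy version pushes the RAP share to its cap and production toward the level where the overhead is most diluted.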
  • 41.
    Alm, Hannes
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Lindman, Johan
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Estimating Payoff Distributions of US Life Insurance Portfolios: An Evaluation of Two Approaches: A Monte Carlo Method and the De Pril’s Algorithm2024Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    For any fund manager, the ability to project expected returns into the future is vital, but it poses a great deal of uncertainty. When the underlying risk is tied to human longevity, the uncertainty is found in the stochastic nature of mortality.

    This thesis presents two approaches to approximating a distribution of expected payoffs for a portfolio containing US life insurance policies. The first one utilizes the Monte Carlo method and approximates the payoff in binary and monetary values. The second approach uses De Pril’s recursive algorithm to calculate the binary distribution. The two methods are evaluated on two key factors: accuracy and computational cost. In addition, different portfolio distributions are evaluated in terms of their statistical characteristics and longevity exposure.

    The results presented in this thesis indicate that the Monte Carlo method is the more appropriate method for calculating payoff distributions of US life insurance portfolios. Although De Pril’s method produces an accurate result for a single time period, the repeated convolution needed to evaluate longer time periods leads to an unsustainable increase in the error term. An analysis of statistical measurements indicates that life settlement portfolios have a peaky distribution with heavy tails and positive skewness. Furthermore, tests of longevity show that the portfolio distributions are sensitive to the accuracy of mortality assumptions.

    Download full text (pdf)
    fulltext
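De Pril's algorithm computes the exact distribution of the number of deaths recursively; for a small portfolio, the same distribution can be obtained by direct convolution, which shows what is being computed (the mortality probabilities below are hypothetical):

```python
def death_count_distribution(qs):
    """Exact distribution of the number of deaths among independent policies.
    qs[i] is the one-period death probability of policy i."""
    dist = [1.0]  # P(0 deaths) = 1 for an empty portfolio
    for q in qs:
        new = [0.0] * (len(dist) + 1)
        for k, p in enumerate(dist):
            new[k] += p * (1 - q)   # policyholder survives
            new[k + 1] += p * q     # policyholder dies
        dist = new
    return dist

# Hypothetical one-year mortality probabilities for a small portfolio
qs = [0.02, 0.05, 0.03, 0.10]
dist = death_count_distribution(qs)
```

The expected number of deaths equals the sum of the individual probabilities, which gives a quick sanity check on the convolution.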
  • 42.
    Almqvist, Saga
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Nore, Lana
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Where to Stack the Chocolate?: Mapping and Optimisation of the Storage Locations with Associated Transportation Cost at Marabou2017Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Today, inventory management at Marabou is organised in such a way that articles are stored based on which production line they belong to and are sent to storage locations close to their production line. However, some storage locations are not optimised, insofar as articles are stored out of pure habit and according to what is considered most convenient. This means that the storage locations are not based on any fixed instructions or standard. In this report, we propose optimal storage locations with respect to transportation cost by modelling the problem mathematically as a minimal-cost matching problem, which we solve using the so-called Hungarian algorithm. To be able to implement the Hungarian algorithm, we collected data on the stock levels of articles in the factory throughout 2016. We adjusted the collected data by converting the articles into units of pallets. We considered three different implementations of the Hungarian algorithm. The results from the different approaches are presented together with several suggestions regarding pallet optimisation. In addition to the theoretical background, our work is based on an empirical study through participant observations as well as qualitative interviews with factory employees. Beyond our modelling work, we thus offer several further suggestions for efficiency savings or improvements at the factory, as well as for further work building on this report.

    Download full text (pdf)
    ALMQVIST&NORE
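A minimal-cost matching of this kind can be solved with SciPy's implementation of the Hungarian-style algorithm; the cost matrix below is a hypothetical 3-article, 3-slot example, not Marabou data:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical transport costs (metres travelled) for moving pallets of
# article i to storage slot j
cost = np.array([
    [12, 40, 25],
    [30, 14, 22],
    [21, 35, 10],
])

# Optimal one-to-one assignment minimising total transport cost
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()
```

Here the cheapest matching assigns each article to its own nearest slot, for a total cost of 36.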
  • 43.
    Almqvist, Siri
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Nordin, Oskar
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    STRESS TESTING AN SME PORTFOLIO: Effects of an Adverse Macroeconomic Scenario on Credit Risk Transition Matrices2021Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The financial crisis of 2007-2008 was a severe global crisis causing a worldwide recession. One of the main contributing factors of the crisis was the excessive risk appetite of banks and financial institutions. Since then, regulatory authorities and financial institutions have directed focus towards risk management with the main objective to avert a similar crisis from occurring in the future. The aim of this thesis is to investigate how an adverse macroeconomic scenario would affect the migrations between risk classes of an SME portfolio, referred to as stress test.

    This thesis utilises two frameworks, one by Belkin and Suchower and one by Carlehed and Petrov, for creating a single systematic indicator describing the credit class migrations of the portfolio. Four different regression model setups (Ordinary Least Squares, Additive Model, XGBoost and SVM) are then used to describe the relationship between macroeconomic indicators and this systematic indicator. The four models are evaluated in terms of interpretability and predictive ability in order to find the main drivers of the systematic indicator. Their corresponding prediction errors are compared to find the best model. The portfolio is stress tested by using the regression models to predict the corresponding systematic indicator given an adverse macroeconomic scenario. The probabilities of default, estimated from the indicator using each of the frameworks, are then compared and analysed with regard to the systematic indicator.

    The results show that unemployment is the main driver of the risk class migrations for an SME portfolio, both from a statistical and economical perspective. The most appropriate regression model is the additive model because of its performance and interpretability and is therefore advised to use for this problem. From the PD estimations, it is concluded that the framework by Belkin and Suchower gives a more volatile estimate than that of Carlehed and Petrov.

    Download full text (pdf)
    fulltext
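Both frameworks build on a single-factor (Vasicek-type) model in which a systematic factor shifts default thresholds. A minimal sketch of that mechanism, with a hypothetical asset correlation and through-the-cycle PD:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def stressed_pd(pd_ttc, z, rho=0.15):
    """Single-factor transform: point-in-time PD given systematic factor z.
    A negative z represents an adverse state of the economy.
    rho is a hypothetical asset correlation."""
    return N.cdf((N.inv_cdf(pd_ttc) - sqrt(rho) * z) / sqrt(1 - rho))

base = stressed_pd(0.02, 0.0)      # neutral economy
adverse = stressed_pd(0.02, -2.0)  # severe downturn scenario
```

Feeding a regression-predicted value of the systematic indicator into such a transform is what turns a macroeconomic scenario into stressed default probabilities.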
  • 44.
    Al-Sahi, Mohammad Reda
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Prehospital resource optimization: Master thesis for Umeå University & Prehospital resource optimization2022Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    The Vinnova project Prehospital resource optimization is a collaboration between the four northernmost regions in Sweden: the Jämtland/Härjedalen region, the Västernorrland region, the Västerbotten region and the Norrbotten region; together these four regions make up approximately half of Sweden's area. This work is a continuation and improvement of a study conducted by Umeå University to estimate driving times for ambulances in the four northernmost regions. This study aims to improve the accuracy of those estimated driving times and to understand and explain the variables that affect driving times for ambulances. Empirical driving-time data from 2014-2020 were reviewed and checked in order to estimate driving times. Through analysis of linear relations, different linear models, dependent on different parameters, were created to explain the empirical driving times as well as possible. The model was finally validated using the K-fold method. The results show that the estimated driving times can be improved, although there is still room for further improvement. Explanatory variables are month, day of the week and time of day.
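    The K-fold validation step can be sketched as follows; the features and coefficients are synthetic stand-ins for the real driving-time data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
# Hypothetical features: road distance (km) and an urban/rural indicator
X = np.column_stack([rng.uniform(5, 120, 200), rng.integers(0, 2, 200)])
# Hypothetical driving time: ~1 min/km, slower in urban areas, plus noise
y = 1.0 * X[:, 0] + 8.0 * X[:, 1] + rng.normal(0, 3, 200)

# 5-fold cross-validation of a linear driving-time model
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")
mean_r2 = scores.mean()
```

Averaging the per-fold scores gives an out-of-sample estimate of how well the linear model explains driving times.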

  • 45.
    Alshalabi, Mohamad
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Measures of statistical dependence for feature selection: Computational study2022Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The importance of feature selection for statistical and machine learning models derives from their explainability and the ability to explore new relationships, leading to new discoveries. Straightforward feature selection methods measure the dependencies between the potential features and the response variable. This thesis studies the selection of features according to a maximal statistical dependency criterion based on generalized Pearson’s correlation coefficients, e.g., Wijayatunga’s coefficient. I present a framework for feature selection based on these coefficients for high-dimensional feature variables. The results are compared to those obtained by applying an elastic net regression (for high-dimensional data). The generalized Pearson’s correlation coefficient is a metric-based measure where the metric is the Hellinger distance, regarded as a distance between probability distributions. Wijayatunga’s coefficient was originally proposed for the discrete case; here, we generalize it to continuous variables by discretization and kernelization. It is interesting to see how discretization behaves as the bins are made finer. The study employs both synthetic and real-world data to illustrate the validity and power of this feature selection process. Moreover, a new method of normalization for mutual information is included. The results show that both measures have considerable potential in detecting associations. The feature selection experiment shows that elastic net regression is superior to our proposed method; nevertheless, more investigation could be done on this subject.

    Download full text (pdf)
    fulltext
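The Hellinger distance underlying the generalized coefficient has a simple closed form for discrete distributions; a sketch with two illustrative distributions:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions on the same support.
    Ranges from 0 (identical) to 1 (disjoint supports)."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

uniform = [0.25, 0.25, 0.25, 0.25]
peaked = [0.70, 0.10, 0.10, 0.10]
d = hellinger(uniform, peaked)
```

A dependence measure built on this distance compares the joint distribution of two variables with the product of their marginals.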
  • 46.
    Alstermark, Olivia
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Stolt, Evangelina
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Purchase Probability Prediction: Predicting likelihood of a new customer returning for a second purchase using machine learning methods2021Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    When a company evaluates a customer for being a potential prospect, one of the key questions to answer is whether the customer will generate profit in the long run. A possible step to answer this question is to predict the likelihood of the customer returning to the company again after the initial purchase. The aim of this master thesis is to investigate the possibility of using machine learning techniques to predict the likelihood of a new customer returning for a second purchase within a certain time frame.

    To investigate to what degree machine learning techniques can be used to predict the probability of return, a number of different model setups of Logistic Lasso, Support Vector Machine and Extreme Gradient Boosting are tested. Model development is performed to ensure well-calibrated probability predictions and to possibly overcome the difficulty that follows from an imbalanced ratio of returning and non-returning customers. Throughout the thesis work, a number of actions are taken in order to account for data protection. One such action is to add noise to the response feature, ensuring that the true fraction of returning and non-returning customers cannot be derived. To further guarantee data protection, axis values of evaluation plots are removed and evaluation metrics are scaled. Nevertheless, it is perfectly possible to select the superior model out of all investigated models.

    The results obtained show that the best performing model is a Platt-calibrated Extreme Gradient Boosting model, which has much higher performance than the other models with regard to the considered evaluation metrics, while also providing predicted probabilities of high quality. Further, the results indicate that the setups investigated to account for imbalanced data do not improve model performance. The main conclusion is that it is possible to obtain probability predictions of high quality for new customers returning to a company for a second purchase within a certain time frame using machine learning techniques. This provides a powerful tool for a company when evaluating potential prospects.

    Download full text (pdf)
    alstermark_stolt
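Platt calibration fits a sigmoid to a classifier's scores via cross-validation. A sketch with scikit-learn's gradient boosting standing in for the thesis's Extreme Gradient Boosting model, on synthetic imbalanced data:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic imbalanced "returning customer" data (hypothetical features);
# roughly 10% of customers belong to the positive (returning) class
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Platt scaling: fit a sigmoid on the classifier's out-of-fold scores
base = GradientBoostingClassifier(random_state=0)
model = CalibratedClassifierCV(base, method="sigmoid", cv=3).fit(X, y)
probs = model.predict_proba(X)[:, 1]
```

The calibrated probabilities, rather than raw scores, are what make reliability diagrams and prospect-level decisions meaningful.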
  • 47.
    Altmejd, Adam
    et al.
    Swedish Institute for Social Research, Stockholm University, Stockholm, Sweden; Department of Finance, Stockholm School of Economics, Stockholm, Sweden.
    Rocklöv, Joacim
    Umeå University, Faculty of Medicine, Department of Public Health and Clinical Medicine, Section of Sustainable Health. Heidelberg Institute of Global Health (HIGH), Interdisciplinary Centre for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany.
    Wallin, Jonas
    Department of Statistics, Lund University, Lund, Sweden.
    Nowcasting COVID-19 statistics reported with delay: A case-study of Sweden and the UK2023In: International Journal of Environmental Research and Public Health, ISSN 1661-7827, E-ISSN 1660-4601, Vol. 20, no 4Article in journal (Refereed)
    Abstract [en]

    The COVID-19 pandemic has demonstrated the importance of unbiased, real-time statistics of trends in disease events in order to achieve an effective response. Because of reporting delays, real-time statistics frequently underestimate the total number of infections, hospitalizations and deaths. When studied by event date, such delays also risk creating an illusion of a downward trend. Here, we describe a statistical methodology for predicting true daily quantities and their uncertainty, estimated using historical reporting delays. The methodology takes into account the observed distribution pattern of the lag. It is derived from the "removal method", a well-established estimation framework in the field of ecology.

    Download full text (pdf)
    fulltext
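The paper derives its estimator from the removal method; a much simpler inverse-completeness adjustment illustrates the underlying idea of scaling up recent, incompletely reported counts (all numbers below are hypothetical):

```python
# Hypothetical reporting completeness by lag: the fraction of events
# reported within d days of the event date
completeness = [0.3, 0.6, 0.85, 0.95, 1.0]

# Reported counts by event date, most recent day last
reported = [120, 110, 90, 60, 30]

def nowcast(reported, completeness):
    """Scale each day's count by the estimated fraction reported so far."""
    out = []
    for i, count in enumerate(reported):
        lag = len(reported) - 1 - i   # days elapsed since the event date
        frac = completeness[min(lag, len(completeness) - 1)]
        out.append(count / frac)
    return out

estimates = nowcast(reported, completeness)
```

The most recent days receive the largest upward correction, countering the illusory downward trend the abstract describes.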
  • 48.
    Altıntaş, Özge
    et al.
    Ankara University, Faculty of Educational Sciences, Department of Educational Sciences, Educational Measurement and Evaluation, Ankara, Turkey.
    Wallin, Gabriel
    Université Côte d’Azur, Inria, CNRS, Laboratoire J. A. Dieudonné, team Maasai, Sophia-Antipolis, France.
    Equality of admission tests using kernel equating under the non-equivalent groups with covariates design2021In: International Journal of Assessment Tools in Education, E-ISSN 2148-7456, Vol. 8, no 4, p. 729-743Article in journal (Refereed)
    Abstract [en]

    Educational assessment tests are designed to measure the same psychological constructs over extended periods of time. This feature is important considering that test results are often used in the selection process for admittance to university programs. To ensure fair assessments, especially for those whose results weigh heavily in selection decisions, it is necessary to collect evidence demonstrating that the assessments are not biased, and to confirm that the scores obtained from different test forms have statistical equality. For this purpose, test equating has important functions, as it prevents bias generated by differences in the difficulty levels of different test forms, allows the scores obtained from different test forms to be reported on the same scale, and ensures that the reported scores communicate the same meaning. In this study, these important functions were evaluated using real college admission test data from different test administrations. The kernel equating method under the non-equivalent groups with covariates design was applied to determine whether the scores obtained from different time periods but measuring the same psychological constructs were statistically equivalent. The non-equivalent groups with covariates design was specifically used because the test groups of the admission test are non-equivalent and there are no anchor items. Results from the analyses showed that the test forms had different score distributions, and that the relationship was non-linear. The equating procedure was thus adjusted to eliminate these differences and thereby allow the tests to be used interchangeably.

  • 49.
    Amanuel, Meron
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    ON GENERATING THE PROBABILITY MASS FUNCTION USING FIBONACCI POWER SERIES2022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This thesis will focus on generating the probability mass function using the Fibonacci sequence as the coefficients of a power series.

    The discrete probability distribution, named the Fibonacci distribution, was formed by taking into consideration the recursive property of the Fibonacci sequence, the radius of convergence of the power series, and the additive property of mutually exclusive events. This distribution satisfies the requisites of a legitimate probability mass function.

    Its cumulative distribution function and moment generating function are then derived, and the latter is used to generate moments of the distribution, specifically the mean and the variance.

    The characteristics of some convergent sequences generated from the Fibonacci sequence are found useful in showing that the limiting form of the Fibonacci distribution is a geometric distribution. Lastly, the paper showcases applications and simulations of the Fibonacci distribution using MATLAB.

    Download full text (pdf)
    fulltext
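The construction rests on the closed form of the Fibonacci generating function, sum over n >= 1 of F_n x^n = x / (1 - x - x^2), valid for 0 < x < 1/phi. A sketch of the resulting mass function, truncated at a finite number of terms:

```python
def fibonacci_pmf(x, n_max=200):
    """PMF proportional to F_n * x**n for n = 1, 2, ..., normalised by the
    closed-form series sum x / (1 - x - x**2); requires 0 < x < (5**0.5 - 1) / 2,
    the radius of convergence 1/phi."""
    norm = x / (1 - x - x * x)   # closed-form value of the power series
    pmf = []
    a, b = 1, 1                  # F_1, F_2
    xn = x
    for _ in range(n_max):
        pmf.append(a * xn / norm)
        a, b = b, a + b          # Fibonacci recursion
        xn *= x
    return pmf

pmf = fibonacci_pmf(0.3)
total = sum(pmf)
```

For x = 0.3 the truncated probabilities sum to 1 up to floating-point error, since the tail of the series decays geometrically.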
  • 50.
    Andersdotter Persson, Anna
    Umeå University, Faculty of Social Sciences, Department of Statistics.
    Kalibrering som ett sätt att hantera bortfall: Vilken korrelation krävs mellan hjälp- och responsvariabler?2010Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Download full text (pdf)
    FULLTEXT01