umu.se Publications: search results 1-50 of 2720
  • 1. Aaghabali, M.
    et al.
    Akbari, S.
    Friedland, S.
    Markström, Klas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Tajfirouz, Z.
    Upper bounds on the number of perfect matchings and directed 2-factors in graphs with given number of vertices and edges (2015). In: European journal of combinatorics (Print), ISSN 0195-6698, E-ISSN 1095-9971, Vol. 45, pp. 132-144. Article in journal (Refereed)
    Abstract [en]

    We give an upper bound on the number of perfect matchings in simple graphs with a given number of vertices and edges. We apply this result to give an upper bound on the number of 2-factors in a directed complete bipartite balanced graph on 2n vertices. The upper bound is sharp for even n. For odd n we state a conjecture on a sharp upper bound.

  • 2.
    Abdollahian, Josef
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Kanwar, Anna
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Optimering av kortaste vägen vid hantering och avledning av skadligt dagvatten: Lösning med A-stjärna algoritm samt en guide med ekonomiska styrmedel för beslutsfattande aktörer (2017). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [sv]

    The world's population is growing and ever more people are moving into urban areas. As a result, cities grow, new buildings are added and infrastructure expands. This rapid growth phase is directly connected to increased flooding as a consequence of the changes made to the natural environment.

    The already overloaded stormwater systems often struggle to meet the existing demands. As a consequence, floods occur at higher rain intensities and impose large costs on society. Stormwater management falls short because the division of responsibility within the municipality's organizations is unclear. To be able to plan for sustainable cities in the future as well, it is important to find a feasible solution concerning both the division of responsibility and how the stormwater should best be handled to achieve cost advantages.

    In this study, a guide is developed for the municipality on how responsibility for stormwater should be divided between the municipality and the developer. The guide builds on simulations and optimization theory in order to propose reasonable solutions. Through these simulations of the stormwater system, the amount of water that does not fit in the stormwater system has been quantified. Furthermore, to find a reasonable alternative drainage path for the excess stormwater, different algorithms for the shortest path problem have been examined.

    The results show that a classical algorithm with a heuristic function applied to the shortest path problem cannot identify the most suitable drainage path. This is because the heuristic function in the algorithm prevents a more natural drainage path upstream from being chosen, even if it would give a more optimal solution.

    Full text (pdf)
    Optimering av kortaste vägen vid hantering och avledning av skadligt dagvatten
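
    The shortest-path component described in the abstract above can be illustrated with a small, generic A* search on a cost grid. This is a minimal sketch for orientation only, not the authors' implementation; the example grid, the cost of entering a cell and the Euclidean heuristic are assumptions.

        import heapq
        import math

        def a_star(grid, start, goal):
            """Minimal A* on a 2D cost grid (illustration only).

            grid[r][c] is the cost of entering cell (r, c); the Euclidean distance
            to the goal is an admissible heuristic when cell costs are at least 1.
            """
            def h(node):
                return math.dist(node, goal)

            best_g = {start: 0.0}          # cheapest known cost to each cell
            came_from = {start: None}      # predecessor map for path recovery
            open_heap = [(h(start), start)]
            closed = set()
            while open_heap:
                _, node = heapq.heappop(open_heap)
                if node in closed:
                    continue
                closed.add(node)
                if node == goal:           # reconstruct the path backwards
                    path = [node]
                    while came_from[path[-1]] is not None:
                        path.append(came_from[path[-1]])
                    return path[::-1]
                r, c = node
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nb = (r + dr, c + dc)
                    if 0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0]):
                        ng = best_g[node] + grid[nb[0]][nb[1]]
                        if ng < best_g.get(nb, float("inf")):
                            best_g[nb] = ng
                            came_from[nb] = node
                            heapq.heappush(open_heap, (ng + h(nb), nb))
            return None                    # no route found

        # Example: route across a small terrain grid, avoiding the costly cell.
        terrain = [[1, 1, 5],
                   [1, 9, 1],
                   [1, 1, 1]]
        print(a_star(terrain, (0, 0), (2, 2)))
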
  • 3.
    Abdulle, Assyr
    et al.
    ANMC, EPFL.
    Cohen, David
    Institut für Angewandte und Numerische Mathematik, KIT.
    Vilmart, Gilles
    ANMC, EPFL.
    Zygalakis, Konstantinos
    ANMC, EPFL.
    High weak order methods for stochastic differential equations based on modified equations (2012). In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 34, no. 3, pp. A1800-A1823. Article in journal (Refereed)
    Abstract [en]

    Inspired by recent advances in the theory of modified differential equations, we propose a new methodology for constructing numerical integrators with high weak order for the time integration of stochastic differential equations. This approach is illustrated with the constructions of new methods of weak order two, in particular, semi-implicit integrators well suited for stiff (mean-square stable) stochastic problems, and implicit integrators that exactly conserve all quadratic first integrals of a stochastic dynamical system. Numerical examples confirm the theoretical results and show the versatility of our methodology.
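
    For context on the notion of weak order used above, the sketch below measures the weak error |E[X_T] - E[X_T^h]| of the plain Euler-Maruyama scheme (weak order one) for geometric Brownian motion, where the exact expectation is known. It is not an implementation of the modified-equation integrators of the paper; the test equation, its parameters and the sample sizes are assumptions.

        import numpy as np

        def euler_maruyama_weak_error(n_steps, n_paths=200_000, seed=0):
            """Weak error |E[X_T] - E[X_T^h]| for dX = mu*X dt + sigma*X dW,
            approximated with the Euler-Maruyama scheme and Monte Carlo."""
            mu, sigma, x0, T = 0.5, 0.2, 1.0, 1.0
            rng = np.random.default_rng(seed)
            h = T / n_steps
            x = np.full(n_paths, x0)
            for _ in range(n_steps):
                dw = rng.normal(0.0, np.sqrt(h), n_paths)
                x = x + mu * x * h + sigma * x * dw
            exact = x0 * np.exp(mu * T)     # E[X_T] for geometric Brownian motion
            return abs(x.mean() - exact)

        for n in (4, 8, 16, 32):
            # The error roughly halves when the step size is halved (weak order one).
            print(n, euler_maruyama_weak_error(n))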

  • 4.
    Abramowicz, Konrad
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Numerical analysis for random processes and fields and related design problems (2011). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, we study numerical analysis for random processes and fields. We investigate the behavior of the approximation accuracy for specific linear methods based on a finite number of observations. Furthermore, we propose techniques for optimizing performance of the methods for particular classes of random functions. The thesis consists of an introductory survey of the subject and related theory and four papers (A-D).

    In paper A, we study a Hermite spline approximation of quadratic mean continuous and differentiable random processes with an isolated point singularity. We consider a piecewise polynomial approximation combining two different Hermite interpolation splines for the interval adjacent to the singularity point and for the remaining part. For locally stationary random processes, sequences of sampling designs eliminating asymptotically the effect of the singularity are constructed.

    In Paper B, we focus on approximation of quadratic mean continuous real-valued random fields by a multivariate piecewise linear interpolator based on a finite number of observations placed on a hyperrectangular grid. We extend the concept of local stationarity to random fields and for the fields from this class, we provide an exact asymptotics for the approximation accuracy. Some asymptotic optimization results are also provided.

    In Paper C, we investigate numerical approximation of integrals (quadrature) of random functions over the unit hypercube. We study the asymptotics of a stratified Monte Carlo quadrature based on a finite number of randomly chosen observations in strata generated by a hyperrectangular grid. For the locally stationary random fields (introduced in Paper B), we derive exact asymptotic results together with some optimization methods. Moreover, for a certain class of random functions with an isolated singularity, we construct a sequence of designs eliminating the effect of the singularity.

    In Paper D, we consider a Monte Carlo pricing method for arithmetic Asian options. An estimator is constructed using a piecewise constant approximation of an underlying asset price process. For a wide class of Lévy market models, we provide upper bounds for the discretization error and the variance of the estimator. We construct an algorithm for accurate simulations with controlled discretization and Monte Carlo errors, and obtain the estimates of the option price with a predetermined accuracy at a given confidence level. Additionally, for the Black-Scholes model, we optimize the performance of the estimator by using a suitable variance reduction technique.

    Full text (pdf)
    fulltext
  • 5.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Arnqvist, Per
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Sjöstedt de Luna, Sara
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Secchi, Piercesare
    Vantini, Simone
    Vitelli, Valeria
    Was it snowing on lake Kassjön in January 4486 BC? Functional data analysis of sediment data (2014). Conference paper (Other academic)
  • 6.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Häger, Charlotte
    Umeå universitet, Medicinska fakulteten, Institutionen för samhällsmedicin och rehabilitering, Sjukgymnastik.
    Hébert-Losier, Kim
    Swedish Winter Sports Research Centre, Mid Sweden University, Department of Health Sciences, Östersund, Sweden.
    Pini, Alessia
    MOX – Department of Mathematics, Politecnico di Milano.
    Schelin, Lina
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik. Umeå universitet, Medicinska fakulteten, Institutionen för samhällsmedicin och rehabilitering, Fysioterapi.
    Strandberg, Johan
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Vantini, Simone
    MOX – Department of Mathematics, Politecnico di Milano.
    An inferential framework for domain selection in functional ANOVA (2014). In: Contributions in infinite-dimensional statistics and related topics / [ed] Bongiorno, E.G., Salinelli, E., Goia, A., Vieu, P., Esculapio, 2014. Conference paper (Refereed)
    Abstract [en]

    We present a procedure for performing an ANOVA test on functional data, including pairwise group comparisons, in a Scheffé-like perspective. The test is based on the Interval Testing Procedure, and it selects intervals where the groups significantly differ. The procedure is applied to the 3D kinematic motion of the knee joint collected during a functional task (one-leg hop) performed by three groups of individuals.

  • 7.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Häger, Charlotte
    Umeå universitet, Medicinska fakulteten, Institutionen för samhällsmedicin och rehabilitering, Fysioterapi.
    Pini, Alessia
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik. Department of Statistical Sciences, Università Cattolica del Sacro Cuore, Milan, Italy.
    Schelin, Lina
    Umeå universitet, Medicinska fakulteten, Institutionen för samhällsmedicin och rehabilitering. Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Sjöstedt de Luna, Sara
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Vantini, Simone
    Nonparametric inference for functional-on-scalar linear models applied to knee kinematic hop data after injury of the anterior cruciate ligament (2018). In: Scandinavian Journal of Statistics, ISSN 0303-6898, E-ISSN 1467-9469, Vol. 45, no. 4, pp. 1036-1061. Article in journal (Refereed)
    Abstract [en]

    Motivated by the analysis of the dependence of knee movement patterns during functional tasks on subject-specific covariates, we introduce a distribution-free procedure for testing a functional-on-scalar linear model with fixed effects. The procedure not only tests the global hypothesis on the entire domain but also selects the intervals where statistically significant effects are detected. We prove that the proposed tests are provided with an asymptotic control of the intervalwise error rate, that is, the probability of falsely rejecting any interval of true null hypotheses. The procedure is applied to one-leg hop data from a study on anterior cruciate ligament injury. We compare knee kinematics of three groups of individuals (two injured groups with different treatments and one group of healthy controls), taking individual-specific covariates into account.

    Full text (pdf)
    fulltext
  • 8.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Pini, Alessia
    Department of Statistical Sciences, Università Cattolica del Sacro Cuore, Milan, Italy.
    Schelin, Lina
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Sjöstedt de Luna, Sara
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Stamm, Aymeric
    Department of Mathematics Jean Leray, UMR CNRS 6629, Nantes University, Nantes, France.
    Vantini, Simone
    MOX – Modelling and Scientific Computing Laboratory, Department of Mathematics, Politecnico di Milano, Milan, Italy.
    Domain selection and family-wise error rate for functional data: a unified framework (2023). In: Biometrics, ISSN 0006-341X, E-ISSN 1541-0420, Vol. 79, no. 2, pp. 1119-1132. Article in journal (Refereed)
    Abstract [en]

    Functional data are smooth, often continuous, random curves, which can be seen as an extreme case of multivariate data with infinite dimensionality. Just as component-wise inference for multivariate data naturally performs feature selection, subset-wise inference for functional data performs domain selection. In this paper, we present a unified testing framework for domain selection on populations of functional data. In detail, p-values of hypothesis tests performed on point-wise evaluations of functional data are suitably adjusted for providing a control of the family-wise error rate (FWER) over a family of subsets of the domain. We show that several state-of-the-art domain selection methods fit within this framework and differ from each other by the choice of the family over which the control of the FWER is provided. In the existing literature, these families are always defined a priori. In this work, we also propose a novel approach, coined threshold-wise testing, in which the family of subsets is instead built in a data-driven fashion. The method seamlessly generalizes to multidimensional domains in contrast to methods based on a-priori defined families. We provide theoretical results with respect to consistency and control of the FWER for the methods within the unified framework. We illustrate the performance of the methods within the unified framework on simulated and real data examples, and compare their performance with other existing methods.

    Full text (pdf)
    fulltext
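
    As a point of reference for the pointwise-testing ingredient mentioned above, the sketch below runs two-sample t-tests at each point of a discretized domain and applies a Holm adjustment as a generic control of the family-wise error rate. It is only a baseline illustration on assumed simulated curves, not the interval-wise or threshold-wise procedures developed in the paper.

        import numpy as np
        from scipy import stats
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 1.0, 100)                  # discretized domain
        group_a = rng.normal(0.0, 1.0, (30, t.size))    # 30 noisy curves per group
        group_b = rng.normal(0.0, 1.0, (30, t.size))
        group_b[:, 40:60] += 1.0                        # true effect on [0.4, 0.6]

        # Pointwise two-sample t-tests along the domain.
        pvals = stats.ttest_ind(group_a, group_b, axis=0).pvalue

        # Generic FWER control (Holm) over the pointwise p-values; the paper's
        # methods instead adjust over families of subsets of the domain.
        reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
        print("selected domain points:", t[reject])
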
  • 9.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Schelin, Lina
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Sjöstedt de Luna, Sara
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Strandberg, Johan
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Multiresolution clustering of dependent functional data with application to climate reconstruction (2019). In: Stat, E-ISSN 2049-1573, Vol. 8, no. 1, article id e240. Article in journal (Refereed)
    Abstract [en]

    We propose a new nonparametric clustering method for dependent functional data, the double clustering bagging Voronoi method. It consists of two levels of clustering. Given a spatial lattice of points, a function is observed at each grid point. In the first‐level clustering, features of the functional data are clustered. The second‐level clustering takes dependence into account, by grouping local representatives, built from the resulting first‐level clusters, using a bagging Voronoi strategy. Depending on the distance measure used, features of the functions may be included in the second‐step clustering, making the method flexible and general. Combined with the clustering method, a multiresolution approach is proposed that searches for stable clusters at different spatial scales, aiming to capture latent structures. This provides a powerful and computationally efficient tool to cluster dependent functional data at different spatial scales, here illustrated by a simulation study. The introduced methodology is applied to varved lake sediment data, aiming to reconstruct winter climate regimes in northern Sweden at different time resolutions over the past 6,000 years.

  • 10.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Seleznjev, Oleg
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Multivariate piecewise linear interpolation of a random field (2011). Manuscript (preprint) (Other academic)
    Abstract [en]

    We consider a multivariate piecewise linear interpolation of a continuous random field on a d-dimensional cube. The approximation performance is measured by the integrated mean square error. The multivariate piecewise linear interpolator is defined by N field observations on a grid of locations (or design). We investigate the class of locally stationary random fields whose local behavior is like a fractional Brownian field in the mean square sense and find the asymptotic approximation accuracy for a sequence of designs for large N. Moreover, for certain classes of continuous and continuously differentiable fields we provide an upper bound for the approximation accuracy in the uniform mean square norm.

    Full text (pdf)
    fulltext
  • 11.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Seleznjev, Oleg
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    On the error of the Monte Carlo pricing method for Asian option (2008). In: Journal of Numerical and Applied Mathematics, ISSN 0868-6912, Vol. 96, no. 1, pp. 1-10. Article in journal (Refereed)
    Abstract [en]

    We consider a Monte Carlo method to price a continuous arithmetic Asian option with a given precision. Piecewise constant approximation and plain simulation are used for a wide class of models based on Lévy processes. We give bounds for the possible discretization and simulation errors. The sufficient numbers of discretization points and simulations to obtain the requested accuracy are derived. To demonstrate the general approach, the Black-Scholes model is studied in more detail. We treat the case of continuous averaging and starting time zero, but the obtained results can be applied to the discrete case and generalized to any time before the execution date. Some numerical experiments and a comparison with the PDE-based method are also presented.
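
    A minimal sketch of the pricing setup studied here: a crude Monte Carlo estimate of an arithmetic-average Asian call under the Black-Scholes model, with the continuous average replaced by a piecewise constant (discrete) sampling of the path. The parameter values and the number of discretization points are assumptions, and none of the error control derived in the paper is included.

        import numpy as np

        def asian_call_mc(s0=100.0, k=100.0, r=0.05, sigma=0.2, T=1.0,
                          n_steps=100, n_paths=100_000, seed=0):
            """Crude Monte Carlo price of an arithmetic-average Asian call.

            The continuous average is approximated by the mean of the simulated
            asset price on an equidistant grid (piecewise constant approximation).
            """
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            z = rng.standard_normal((n_paths, n_steps))
            log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1)
            s = s0 * np.exp(log_paths)                  # simulated price paths
            payoff = np.maximum(s.mean(axis=1) - k, 0.0)
            disc = np.exp(-r * T)
            price = disc * payoff.mean()
            std_err = disc * payoff.std(ddof=1) / np.sqrt(n_paths)
            return price, std_err

        print(asian_call_mc())    # (estimate, Monte Carlo standard error)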

  • 12.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Seleznjev, Oleg
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Piecewise multilinear interpolation of a random field (2013). In: Advances in Applied Probability, ISSN 0001-8678, E-ISSN 1475-6064, Vol. 45, no. 4, pp. 945-959. Article in journal (Refereed)
    Abstract [en]

    We consider a piecewise-multilinear interpolation of a continuous random field on a d-dimensional cube. The approximation performance is measured using the integrated mean square error. The piecewise-multilinear interpolator is defined by N field observations on a grid of locations (or design). We investigate the class of locally stationary random fields whose local behavior is like a fractional Brownian field, in the mean square sense, and find the asymptotic approximation accuracy for a sequence of designs for large N. Moreover, for certain classes of continuous and continuously differentiable fields, we provide the upper bound for the approximation accuracy in the uniform mean square norm.
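
    To make the approximation setting concrete, the sketch below builds a piecewise bilinear interpolator of one surface from observations on a rectangular grid and estimates the integrated mean squared error by Monte Carlo. The smooth cosine test surface is an assumed stand-in for a realization of a random field; the locally stationary, fractional Brownian setting analyzed in the paper is not reproduced.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        rng = np.random.default_rng(0)

        # Assumed test surface on the unit square: a few random cosine modes.
        freqs = rng.integers(1, 4, size=(5, 2))
        amps = rng.normal(size=5)

        def field(x, y):
            return sum(a * np.cos(np.pi * (fx * x + fy * y))
                       for a, (fx, fy) in zip(amps, freqs))

        # Observe the surface on an n x n grid and interpolate piecewise bilinearly.
        n = 17
        grid = np.linspace(0.0, 1.0, n)
        values = field(*np.meshgrid(grid, grid, indexing="ij"))
        interp = RegularGridInterpolator((grid, grid), values, method="linear")

        # Monte Carlo estimate of the integrated mean squared error over [0, 1]^2.
        pts = rng.random((200_000, 2))
        imse = np.mean((field(pts[:, 0], pts[:, 1]) - interp(pts)) ** 2)
        print(f"estimated IMSE with a {n} x {n} design: {imse:.2e}")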

  • 13.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Seleznjev, Oleg
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Stratified Monte Carlo quadrature for continuous random fields (2015). In: Methodology and Computing in Applied Probability, ISSN 1387-5841, E-ISSN 1573-7713, Vol. 17, no. 1, pp. 59-72. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of numerical approximation of integrals of random fields over a unit hypercube. We use a stratified Monte Carlo quadrature and measure the approximation performance by the mean squared error. The quadrature is defined by a finite number of stratified randomly chosen observations with the partition generated by a rectangular grid (or design). We study the class of locally stationary random fields whose local behavior is like a fractional Brownian field in the mean square sense and find the asymptotic approximation accuracy for a sequence of designs for a large number of observations. For the Hölder class of random functions, we provide an upper bound for the approximation error. Additionally, for a certain class of isotropic random functions with an isolated singularity at the origin, we construct a sequence of designs eliminating the effect of the singularity point.
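
    The sketch below contrasts plain Monte Carlo with a stratified Monte Carlo quadrature that places one random point in each cell of a rectangular grid of strata on the unit square, for an assumed smooth integrand with a known integral. It only illustrates the variance reduction idea, not the asymptotics for locally stationary random fields derived in the paper.

        import numpy as np

        def f(x, y):
            # Assumed smooth integrand (a stand-in for a realization of a random field).
            return np.sin(np.pi * x) * np.exp(y)

        rng = np.random.default_rng(0)
        m = 20                       # m x m rectangular grid of strata
        n = m * m                    # total number of observations

        # Plain Monte Carlo with n uniform points on the unit square.
        u = rng.random((n, 2))
        plain = f(u[:, 0], u[:, 1]).mean()

        # Stratified Monte Carlo: one uniform point in each grid cell.
        lower = np.arange(m) / m
        cx, cy = np.meshgrid(lower, lower, indexing="ij")
        sx = cx.ravel() + rng.random(n) / m
        sy = cy.ravel() + rng.random(n) / m
        stratified = f(sx, sy).mean()

        exact = (2.0 / np.pi) * (np.e - 1.0)    # closed-form value of the integral
        print(f"plain MC error:      {abs(plain - exact):.2e}")
        print(f"stratified MC error: {abs(stratified - exact):.2e}")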

  • 14.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Sjöstedt de Luna, Sara
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Strandberg, Johan
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Nonparametric bagging clustering methods to identify latent structures from a sequence of dependent categorical data (2023). In: Computational Statistics & Data Analysis, ISSN 0167-9473, E-ISSN 1872-7352, Vol. 177, article id 107583. Article in journal (Refereed)
    Abstract [en]

    Nonparametric bagging clustering methods are studied and compared to identify latent structures from a sequence of dependent categorical data observed along a one-dimensional (discrete) time domain. The frequency of the observed categories is assumed to be generated by a (slowly varying) latent signal, according to latent state-specific probability distributions. The bagging clustering methods use random tessellations (partitions) of the time domain and clustering of the category frequencies of the observed data in the tessellation cells to recover the latent signal, within a bagging framework. New and existing ways of generating the tessellations and clustering are discussed and combined into different bagging clustering methods. Edge tessellations and adaptive tessellations are the new proposed ways of forming partitions. Composite methods are also introduced that use (automated) decision rules based on entropy measures to choose among the proposed bagging clustering methods. The performance of all the methods is compared in a simulation study. From the simulation study it can be concluded that local and global entropy measures are powerful tools in improving the recovery of the latent signal, both via the adaptive tessellation strategies (local entropy) and in designing composite methods (global entropy). The composite methods are robust and overall improve performance, in particular the composite method using adaptive (edge) tessellations.

    Full text (pdf)
    fulltext
  • 15.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Sjöstedt de Luna, Sara
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Strandberg, Johan
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Nonparametric clustering methods to identify latent structures from a sequence of dependent categorical data. Manuscript (preprint) (Other academic)
  • 16.
    Abramowicz, Konrad
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Arnqvist, Per
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Secchi, Piercesare
    Politecnico di Milano, Italy.
    Sjöstedt de Luna, Sara
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Vantini, Simone
    Politecnico di Milano, Italy.
    Vitelli, Valeria
    Oslo University, Norway.
    Clustering misaligned dependent curves applied to varved lake sediment for climate reconstruction (2017). In: Stochastic environmental research and risk assessment (Print), ISSN 1436-3240, E-ISSN 1436-3259, Vol. 31, no. 1, pp. 71-85. Article in journal (Refereed)
    Abstract [en]

    In this paper we introduce a novel functional clustering method, the Bagging Voronoi K-Medoid Alignment (BVKMA) algorithm, which simultaneously clusters and aligns spatially dependent curves. It is a nonparametric statistical method that does not rely on distributional or dependency structure assumptions. The method is motivated by and applied to varved (annually laminated) sediment data from lake Kassjön in northern Sweden, aiming to infer past environmental and climate changes. The resulting clusters and their time dynamics show great potential for seasonal climate interpretation, in particular for winter climate changes.

  • 17.
    Abramsson, Evelina
    et al.
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Grind, Kajsa
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Skattning av kausala effekter med matchat fall-kontroll data (2017). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Full text (pdf)
    fulltext
  • 18.
    Adamowicz, Tomasz
    et al.
    Institute of Mathematics of the Polish Academy of Sciences, Warsaw, Poland.
    Lundström, Niklas L.P.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    The boundary Harnack inequality for variable exponent p-Laplacian, Carleson estimates, barrier functions and p(⋅)-harmonic measures (2016). In: Annali di Matematica Pura ed Applicata, ISSN 0373-3114, E-ISSN 1618-1891, Vol. 195, no. 2, pp. 623-658. Article in journal (Refereed)
    Abstract [en]

    We investigate various boundary decay estimates for p(⋅)-harmonic functions. For domains in R^n, n ≥ 2, satisfying the ball condition (C^{1,1}-domains), we show the boundary Harnack inequality for p(⋅)-harmonic functions under the assumption that the variable exponent p is a bounded Lipschitz function. The proof involves barrier functions and chaining arguments. Moreover, we prove a Carleson-type estimate for p(⋅)-harmonic functions in NTA domains in R^n and provide lower and upper growth estimates and a doubling property for a p(⋅)-harmonic measure.

    Full text (pdf)
    fulltext
  • 19.
    Adlerborn, Björn
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Karlsson, Lars
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Kågström, Bo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Distributed one-stage Hessenberg-triangular reduction with wavefront scheduling (2018). In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 40, no. 2, pp. C157-C180. Article in journal (Refereed)
    Abstract [en]

    A novel parallel formulation of Hessenberg-triangular reduction of a regular matrix pair on distributed memory computers is presented. The formulation is based on a sequential cache-blocked algorithm by Kågström et al. [BIT, 48 (2008), pp. 563-584]. A static scheduling algorithm is proposed that addresses the problem of underutilized processes caused by two-sided updates of matrix pairs based on sequences of rotations. Experiments using up to 961 processes demonstrate that the new formulation is an improvement of the state of the art and also identify factors that limit its scalability.

  • 20.
    Adlerborn, Björn
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Kågström, Bo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Karlsson, Lars
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Distributed one-stage Hessenberg-triangular reduction with wavefront scheduling (2016). Report (Other academic)
    Abstract [en]

    A novel parallel formulation of Hessenberg-triangular reduction of a regular matrix pair on distributed memory computers is presented. The formulation is based on a sequential cache-blocked algorithm by Kågström, Kressner, E.S. Quintana-Ortí, and G. Quintana-Ortí (2008). A static scheduling algorithm is proposed that addresses the problem of underutilized processes caused by two-sided updates of matrix pairs based on sequences of rotations. Experiments using up to 961 processes demonstrate that the new algorithm is an improvement of the state of the art but also identify factors that currently limit its scalability.

    Full text (pdf)
    fulltext
  • 21.
    Adlerborn, Björn
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Kågström, Bo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Kressner, Daniel
    A parallel QZ algorithm for distributed memory HPC systems (2014). In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 36, no. 5, pp. C480-C503. Article in journal (Refereed)
    Abstract [en]

    Appearing frequently in applications, generalized eigenvalue problems represent one of the core problems in numerical linear algebra. The QZ algorithm of Moler and Stewart is the most widely used algorithm for addressing such problems. Despite its importance, little attention has been paid to the parallelization of the QZ algorithm. The purpose of this work is to fill this gap. We propose a parallelization of the QZ algorithm that incorporates all modern ingredients of dense eigensolvers, such as multishift and aggressive early deflation techniques. To deal with (possibly many) infinite eigenvalues, a new parallel deflation strategy is developed. Numerical experiments for several random and application examples demonstrate the effectiveness of our algorithm on two different distributed memory HPC systems.
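
    For readers unfamiliar with the QZ algorithm, the sketch below computes a (serial) generalized Schur decomposition and the generalized eigenvalues of a small random matrix pair with SciPy, which wraps LAPACK's QZ routines. It only shows what the decomposition delivers and says nothing about the distributed-memory parallelization that is the subject of the paper; the matrix pair is an assumed random example.

        import numpy as np
        from scipy.linalg import eigvals, qz

        rng = np.random.default_rng(0)
        n = 5
        A = rng.standard_normal((n, n))
        B = rng.standard_normal((n, n))

        # Complex QZ (generalized Schur) decomposition: A = Q AA Z^H, B = Q BB Z^H,
        # with AA and BB upper triangular.
        AA, BB, Q, Z = qz(A, B, output="complex")

        # Generalized eigenvalues solving det(A - lambda*B) = 0 are the ratios of
        # the triangular factors' diagonals; cross-check with the direct solver.
        print(np.sort_complex(np.diag(AA) / np.diag(BB)))
        print(np.sort_complex(eigvals(A, B)))

        # Residuals of the factorization (should be at round-off level).
        print(np.linalg.norm(Q @ AA @ Z.conj().T - A),
              np.linalg.norm(Q @ BB @ Z.conj().T - B))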

  • 22.
    Adlerborn, Björn
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Kågström, Bo
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N).
    Kressner, Daniel
    SB–MATHICSE–ANCHP, EPF Lausanne.
    PDHGEQZ user guide (2015). Report (Other academic)
    Abstract [en]

    Given a general matrix pair (A,B) with real entries, we provide software routines for computing a generalized Schur decomposition (S, T). The real and complex conjugate pairs of eigenvalues appear as 1×1 and 2×2 blocks, respectively, along the diagonals of (S, T) and can be reordered in any order. Typically, this functionality is used to compute orthogonal bases for a pair of deflating subspaces corresponding to a selected set of eigenvalues. The routines are written in Fortran 90 and target distributed memory machines.

    Full text (pdf)
    fulltext
  • 23.
    Adolfsson, David
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Claesson, Tom
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Estimation methods for Asian Quanto Basket options (2019). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    All financial institutions that provide options to counterparties will in most cases get involved with Monte Carlo simulations. Options with a payoff function that depends on the asset's value at different time points over its lifespan are so-called path dependent options. This path dependency implies that there exists no parametric solution and the price must hence be estimated; it is here Monte Carlo methods come into the picture. The problem though with this fundamental option pricing method is the computational time. Prices fluctuate continuously on the open market with respect to different risk factors, and since it is impossible to re-evaluate the option for all shifts due to its computing-intensive nature, estimations of the option price must be used. Estimating the price from known points will of course never produce the same result as a full re-evaluation, but an estimation method that produces reliable results and greatly reduces computing time is desirable.

    This is the background for our master thesis at Swedbank. The goal is to create multiple estimation methods and compare them to Swedbank's current estimation model. By doing this we could potentially provide Swedbank with improvement ideas regarding some of its option products and risk measurements. This thesis is primarily based on two estimation methods that estimate option prices with respect to two variable risk factors: the value of the underlying assets and volatility. The first method is a grid that uses a second order Taylor expansion and the sensitivities delta, gamma and vega. The other method uses a grid of pre-simulated option prices for different shifts in risk factors. The interpolation technique that is used in this method is called Piecewise Cubic Hermite interpolation. The methods (referred to as approaches in the report) are implemented to handle a relative change of 50 percent in the underlying asset's index value, which is the first risk factor. Concerning the second risk factor, volatility, both methods estimate prices for a 50 percent relative downward change and an upward change of 400 percent from the initial volatility. Should there emerge even more extreme market conditions, both methods use linear extrapolation to estimate a new option price.

    Full text (pdf)
    ESTIMATION METHODS FOR ASIAN QUANTO BASKET OPTIONS
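
    The two estimation ideas described in the abstract can be sketched generically: a second-order Taylor approximation of an option price in the spot and volatility directions using delta, gamma and vega, and a monotone piecewise cubic Hermite (PCHIP) interpolation over a grid of pre-computed prices. All numbers, Greeks and grid values below are invented placeholders, not Swedbank's model or data.

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        # Approach 1: second-order Taylor expansion in spot (S) and volatility.
        # price(S0 + dS, sigma0 + dsig) ~ P0 + delta*dS + 0.5*gamma*dS**2 + vega*dsig
        def taylor_price(p0, delta, gamma, vega, d_spot, d_vol):
            return p0 + delta * d_spot + 0.5 * gamma * d_spot**2 + vega * d_vol

        print(taylor_price(p0=10.0, delta=0.6, gamma=0.02, vega=0.25,
                           d_spot=5.0, d_vol=0.01))      # placeholder Greeks

        # Approach 2: piecewise cubic Hermite interpolation of pre-simulated prices
        # over relative shifts of the underlying index value (-50% .. +50%).
        spot_shifts = np.linspace(-0.5, 0.5, 11)
        presim_prices = 10.0 + 12.0 * spot_shifts + 4.0 * spot_shifts**2  # placeholders
        interp = PchipInterpolator(spot_shifts, presim_prices)
        print(interp(0.13))                               # estimated price at a +13% shift
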
  • 24.
    af Klintberg, Max
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Predictive Modeling of Emissions: Heavy Duty Vehicles (2016). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Full text (pdf)
    fulltext
  • 25.
    Agvik, Simon
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för fysik.
    A deformable terrain model in multi-domain dynamics using elastoplastic constraints: An adaptive approach (2015). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Achieving realistic simulations of terrain vehicles in their work environment does not only require a careful model of the vehicle itself; the vehicle's interactions with the surroundings are equally important. For off-road ground vehicles the terrain will heavily affect the behaviour of the vehicle and thus puts great demands on the terrain model.

    The purpose of this project has been to develop and evaluate a deformable terrain model, meant to be used in real-time simulations with multi-body dynamics. The proposed approach is a modification of an existing elastoplastic model based on linear elasticity theory and a capped Drucker-Prager model, using it in an adaptive way. The original model can be seen as a system of rigid bodies connected by elastoplastic constraints, representing the terrain. This project investigates if it is possible to create dynamic bodies just when it is absolutely necessary, and store information about possible deformations in a grid.

    Two methods used for transferring information between the dynamic bodies and the grid have been evaluated: an interpolating approach and a discrete approach. The test results indicate that the interpolating approach is preferable, with better stability at an equal performance cost. However, stability problems still exist that have to be solved if the model is to be useful in a commercial product.

    Full text (pdf)
    fulltext
  • 26.
    Ahlbeck, Jakob
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Mosebach, Fredrik
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Analys av risker med garantinivåer i förhållande till förväntade utbetalningar och portföljavkastningar för traditionella pensionsförsäkringar: Ett examensarbete för Folksam Liv med dotterbolag (2017). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Full text (pdf)
    Ahlbeck&Mosebach
  • 27.
    Ahlin, Mikael
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Ranby, Felix
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Predicting Marketing Churn Using Machine Learning Models (2019). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    For any organisation that engages in marketing actions there is a need to understand how people react to the communication messages that are sent. Since the introduction of the General Data Protection Regulation, the requirements for personal data usage have increased and people are able to affect the way their personal information is used by companies. For instance, people have the possibility to unsubscribe from communication that is sent; this is called Opt-Out and can be viewed as churning from communication channels. When a customer Opts Out, the organisation loses the opportunity to send personalised marketing to that individual, which in turn results in lost revenue.

    The aim with this thesis is to investigate the Opt-Out phenomenon and build a model that is able to predict the risk of losing a customer from the communication channels. The risk of losing a customer is measured as the estimated probability that a specific individual will Opt-Out in the near future. To predict future Opt-Outs the project uses machine learning algorithms on aggregated communication and customer data. Of the algorithms that were tested, the best and most stable performance was achieved by an Extreme Gradient Boosting algorithm that used simulated variables. The performance of the model is best described by an AUC score of 0.71 and a lift score of 2.21, with an adjusted threshold on data two months into the future from when the model was trained. With a model that uses simulated variables the computational cost goes up. However, the increase in performance is significant and it can be concluded that the choice to include information about specific communications is considered relevant for the outcome of the predictions. A boosted method such as the Extreme Gradient Boosting algorithm generates stable results which lead to a longer time between model retraining sessions.

    Full text (pdf)
    Thesis_Ahlin_Ranby_2019
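
    A minimal, generic sketch of the modelling and evaluation loop described above: a gradient boosting classifier trained on synthetic imbalanced data and scored with AUC and a simple top-decile lift. scikit-learn's GradientBoostingClassifier is used as a stand-in for the Extreme Gradient Boosting model of the thesis, and the data and features are invented.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for aggregated communication and customer data
        # with roughly 5 percent Opt-Outs (positive class).
        X, y = make_classification(n_samples=20_000, n_features=20,
                                   weights=[0.95], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)

        model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
        scores = model.predict_proba(X_te)[:, 1]
        print("AUC:", roc_auc_score(y_te, scores))

        # Lift in the top decile: Opt-Out rate among the 10 percent highest-risk
        # customers divided by the overall Opt-Out rate.
        top = np.argsort(scores)[-len(scores) // 10:]
        print("top-decile lift:", y_te[top].mean() / y_te.mean())
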
  • 28.
    Ahlkrona, Josefin
    et al.
    Department of Mathematics, Stockholm University, Stockholm, Sweden; Swedish e-Science Research Centre (SeRC), Stockholm, Sweden.
    Elfverson, Daniel
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    A cut finite element method for non-Newtonian free surface flows in 2D: application to glacier modelling (2021). In: Journal of Computational Physics: X, E-ISSN 2590-0552, Vol. 11, article id 100090. Article in journal (Refereed)
    Abstract [en]

    In ice sheet and glacier modelling, the Finite Element Method is rapidly gaining popularity. However, constructing and updating meshes for ice sheets and glaciers is a non-trivial and computationally demanding task due to their thin, irregular, and time dependent geometry. In this paper we introduce a novel approach to ice dynamics computations based on the unfitted Finite Element Method CutFEM, which lets the domain boundary cut through elements. By employing CutFEM, complex meshing and remeshing is avoided as the glacier can be immersed in a simple background mesh without loss of accuracy. The ice is modelled as a non-Newtonian, shear-thinning fluid obeying the p-Stokes (full Stokes) equations with the ice atmosphere interface as a moving free surface. A Navier slip boundary condition applies at the glacier base allowing both bedrock and subglacial lakes to be represented. Within the CutFEM framework we develop a strategy for handling non-linear viscosities and thin domains and show how glacier deformation can be modelled using a level set function. In numerical experiments we show that the expected order of accuracy is achieved and that the method is robust with respect to penalty parameters. As an application we compute the velocity field of the Swiss mountain glacier Haut Glacier d'Arolla in 2D with and without an underlying subglacial lake, and simulate the glacier deformation from year 1930 to 1932, with and without surface accumulation and basal melt.

    Full text (pdf)
    fulltext
  • 29.
    Ahlm, Kristoffer
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    IDENTIFIKATION AV RISKINDIKATORER I FINANSIELL INFORMATION MED HJÄLP AV AI/ML: Ökade möjligheter för myndigheter att förebygga ekonomisk brottslighet (2021). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [sv]

    Economic crime is more lucrative than other types of crime such as narcotics, fencing of stolen goods and human trafficking. Early measures that make it harder for criminals to use companies for criminal purposes mean that large costs to society can be avoided. A review of the literature also showed that there are major shortcomings in the cooperation between Swedish authorities when it comes to detecting serious economic crime. Today, the crimes are often uncovered only after bankruptcy proceedings have been initiated. In previous studies, machine learning models have been tested for detecting economic crime, and some Swedish authorities use machine learning models to detect crime, but more advanced methods are today used by Danish authorities. The Swedish Companies Registration Office (Bolagsverket) maintains an extensive register of companies in Sweden, and this study aims to investigate whether machine learning can be used to identify suspect companies, using digitally submitted annual reports and information from Bolagsverket's register to train classification models to identify suspect companies. To train the model, writs of summons that could be linked to specific companies among the submitted annual reports were obtained from the Swedish Economic Crime Authority (Ekobrottsmyndigheten). Principal component analysis was used to visualize differences between the groups of suspect and non-suspect companies; the analyses showed an overlap between the groups and no clear clustering. The data were imbalanced, with 38 suspect companies out of a total of 1,009, and the oversampling technique SMOTE was therefore used to create more synthetic data and increase the number in the suspect group. Two machine learning models, Random Forest and Support Vector Machine (SVM), were compared in a 10-fold cross-validation. Both showed a recall of around 0.91, but Random Forest had much higher precision and higher accuracy. Random Forest was chosen and retrained, and showed a recall of 0.75 when tested on unseen data consisting of 8 suspect companies out of 202. A lowered threshold value resulted in a higher recall but with a larger number of misclassified companies.

    The study clearly illustrates the problem of imbalanced data and the challenges posed by smaller data sets. A larger data set would have allowed a stricter selection of crime types, which could have yielded a more robust model that Bolagsverket could use to more easily identify suspect companies in its register.

    Full text (pdf)
    Ahlm_fulltext
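
    The class-imbalance pipeline described in the abstract (SMOTE oversampling of the minority class, a Random Forest classifier, and a lowered decision threshold that trades precision for recall) can be sketched with scikit-learn and imbalanced-learn on synthetic data, as below. The data, the 0.3 threshold and all parameters are assumptions, not those of the thesis.

        import numpy as np
        from imblearn.over_sampling import SMOTE
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import precision_score, recall_score
        from sklearn.model_selection import train_test_split

        # Synthetic imbalanced data (about 4 percent positives) as a stand-in for
        # the annual-report features; the real study had 38 suspect firms of 1,009.
        X, y = make_classification(n_samples=1_000, n_features=15,
                                   weights=[0.96], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  stratify=y, random_state=0)

        # Oversample only the training data with SMOTE, then fit the Random Forest.
        X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
        clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_res, y_res)

        # Lowering the threshold below 0.5 raises recall at the cost of precision.
        proba = clf.predict_proba(X_te)[:, 1]
        for threshold in (0.5, 0.3):
            pred = (proba >= threshold).astype(int)
            print(threshold,
                  "recall:", recall_score(y_te, pred),
                  "precision:", precision_score(y_te, pred, zero_division=0))
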
  • 30.
    Ahlstrand, Samuel
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Partiformning vid intern materialförsörjning och layoutanpassning av lager: En fallstudie vid GE Healthcare Umeå av två-binge, supermarkets och materialspindlar (2014). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [sv]

    In its implementation of Lean, GE Healthcare (GEHC) Umeå has made changes to its warehouse structure that are in need of better adaptation. GEHC Umeå's new warehouse structure entails open supermarkets with changed supply routines from warehouse to production. A two-bin system has been implemented in which signal containers of material are refilled by warehouse-responsible material handlers, so-called water spiders (materialspindlar).

    The first identified problem and research question concerns the two-bin quantities, which determine the number of articles at the assembly stations. These need to be reviewed, and a routine for determining the quantities needs to be established. The second, independent, research question concerns the number of supermarkets (warehouses) and their water spiders, which are many in number and spread out with limited coordination.

    A previous degree project, literature studies, interviews and own on-site observations have been used to describe the current state through both qualitative and quantitative methods. Due to the lack of similar problems in the literature, external lot-sizing methods and inventory control have been used, complemented with simulation of the two-bin system, as part of answering the first research question. For the second research question, parts of simplified systematic layout planning have been used, where among other things different degrees of centralization have been examined through simulation of material transports for different article placements.

    Today the quantities are set according to forecast usage based on personal experience. Coordination between the water spiders is lacking and their utilization is perceived as uneven, while goods receiving would benefit from increased capacity. Standardized processes in the material handling are missing, and the production groups have differing ways of working that are assumed to benefit from a centralization where common routines can more easily be established.

    The historical transactions show that there is room for improvement, as certain articles generate long transport distances because of their storage locations relative to where they are used in production. The new bin quantities from the lot-sizing methods EOQ, m-EOQ and the Kanban formula have been tested in a simulation of replenishment and material consumption via an implementation in Excel VBA.

    The Kanban formula shows the highest service level, 90 %, at the lowest total cost and with reduced tied-up capital. The Kanban quantities reduce the total cost by 20 %. The number of replenishments would increase by 7 % and the number of articles in production decrease by 59 %. For the layout adaptation, simulations of different order picking and article placements have also been carried out. The results show that a degree of centralization is possible with a small increase in material transports. It also appears that articles that are picked very rarely are estimated to occupy 89 shelf racks out of a total of 230 and should be reviewed. This, together with the requirements specification from the analysis part, has helped to generate different concepts.

    GEHC Umeå should use the Kanban formula in the future for determining the bin quantities. Some adaptations should be made for shared articles in the Comm warehouse and for articles without historical demand. For the layout, GEHC Umeå should first and foremost relocate articles that today contribute to unnecessary transports. In the longer term, an increased degree of centralization of the warehouses should be possible, given the benefits of coordination and informal spreading of work routines. The water spiders should facilitate goods receiving and take part in shortage reporting and improvement work. In addition, opportunities for increased cooperation between material planning, production planning and the water spiders should be investigated.

  • 31. Akbari, Saieed
    et al.
    Friedland, Shmuel
    Markström, Klas
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Zare, Sanaz
    On 1-sum flows in undirected graphs (2016). In: The Electronic Journal of Linear Algebra, ISSN 1537-9582, E-ISSN 1081-3810, Vol. 31, pp. 646-665. Article in journal (Refereed)
    Abstract [en]

    Let G = (V, E) be a simple undirected graph. For a given set L ⊆ R, a function ω: E → L is called an L-flow. Given a vector γ ∈ R^V, ω is a γ-L-flow if for each v ∈ V, the sum of the values on the edges incident to v is γ(v). If γ(v) = c for all v ∈ V, then the γ-L-flow is called a c-sum L-flow. In this paper, the existence of γ-L-flows for various choices of sets L of real numbers is studied, with an emphasis on 1-sum flows. Let L be a subset of real numbers containing 0 and denote L* := L \ {0}. Answering a question from [S. Akbari, M. Kano, and S. Zare. A generalization of 0-sum flows in graphs. Linear Algebra Appl., 438:3629-3634, 2013.], the bipartite graphs which admit a 1-sum R*-flow or a 1-sum Z*-flow are characterized. It is also shown that every k-regular graph, with k either odd or congruent to 2 modulo 4, admits a 1-sum {-1, 0, 1}-flow.
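
    To make the definition concrete, the brute-force check below searches for a 1-sum {-1, 0, 1}-flow on a small graph, i.e. an edge labelling with values in {-1, 0, 1} whose incident values sum to 1 at every vertex. The example graph, the complete graph K4 (which is 3-regular, so k is odd), is an assumed illustration; the paper proves existence results rather than performing such searches.

        from itertools import combinations, product

        def one_sum_flow(vertices, edges, values=(-1, 0, 1)):
            """Return an edge labelling omega: E -> values whose incident values sum
            to 1 at every vertex, or None if none exists (brute force, small graphs)."""
            for labels in product(values, repeat=len(edges)):
                sums = {v: 0 for v in vertices}
                for (u, w), x in zip(edges, labels):
                    sums[u] += x
                    sums[w] += x
                if all(s == 1 for s in sums.values()):
                    return dict(zip(edges, labels))
            return None

        # K4 is 3-regular with k odd, so by the paper it admits a 1-sum {-1,0,1}-flow.
        verts = range(4)
        edges = list(combinations(verts, 2))
        print(one_sum_flow(verts, edges))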

  • 32.
    Alainentalo, Lisbeth
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    A Comparison of Tests for Ordered Alternatives With Application in Medicine (1997). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    A situation frequently encountered in medical studies is the comparison of several treatments with a control. The problem is to determine whether or not a test drug has a desirable medical effect and/or to identify the minimum effective dose. In this Bachelor’s thesis, some of the methods used for testing hypotheses of ordered alternatives are reviewed and compared with respect to the power of the tests. Examples of multiple comparison procedures, maximum likelihood procedures, rank tests and different types of contrasts are presented and the properties of the methods are explored.

    Depending on the degree of knowledge about the dose-responses, the aim of the study, and whether the test is parametric or non-parametric and distribution-free or not, different recommendations are given as to which of the tests should be used. Thus, there is no single test which can be applied in all experimental situations for testing all different alternative hypotheses.

    Full text (pdf)
    fulltext
  • 33. Albano, Anthony D.
    et al.
    Wiberg, Marie
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Linking With External Covariates: Examining Accuracy by Anchor Type, Test Length, Ability Difference, and Sample Size (2019). In: Applied psychological measurement, ISSN 0146-6216, E-ISSN 1552-3497, Vol. 43, no. 8, pp. 597-610. Article in journal (Refereed)
    Abstract [en]

    Research has recently demonstrated the use of multiple anchor tests and external covariates to supplement or substitute for common anchor items when linking and equating with nonequivalent groups. This study examines the conditions under which external covariates improve linking and equating accuracy, with internal and external anchor tests of varying lengths and groups of differing abilities. Pseudo forms of a state science test were equated within a resampling study where sample size ranged from 1,000 to 10,000 examinees and anchor tests ranged in length from eight to 20 items, with reading and math scores included as covariates. Frequency estimation linking with an anchor test and external covariate was found to produce the most accurate results under the majority of conditions studied. Practical applications of linking with anchor tests and covariates are discussed.

    Full text (pdf)
    fulltext
  • 34.
    Albing, Malin
    et al.
    Department of Mathematics, Luleå University of Technology.
    Vännman, Kerstin
    Department of Mathematics, Luleå University of Technology.
    Elliptical safety region plots for Cpk (2011). In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 38, no. 6, pp. 1169-1187. Article in journal (Refereed)
    Abstract [en]

    The process capability index Cpk is widely used when measuring the capability of a manufacturing process. A process is defined to be capable if the capability index exceeds a stated threshold value, e.g. Cpk > 4/3. This inequality can be expressed graphically using a process capability plot, which is a plot in the plane defined by the process mean and the process standard deviation, showing the region for a capable process. In the process capability plot, a safety region can be plotted to obtain a simple graphical decision rule to assess process capability at a given significance level. We consider safety regions to be used for the index Cpk. Under the assumption of normality, we derive elliptical safety regions so that, using a random sample, conclusions about the process capability can be drawn at a given significance level. This simple graphical tool is helpful when trying to understand whether it is the variability, the deviation from target, or both that need to be reduced to improve the capability. Furthermore, using safety regions, several characteristics with different specification limits and different sample sizes can be monitored in the same plot. The proposed graphical decision rule is also investigated with respect to power.
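
    For reference, the index discussed here is the standard Cpk = min(USL - mu, mu - LSL) / (3*sigma); the sketch below estimates it from a sample and applies the "capable" criterion Cpk > 4/3 quoted in the abstract. The specification limits and the simulated measurements are assumptions, and the elliptical safety regions of the paper are not reproduced.

        import numpy as np

        def cpk(sample, lsl, usl):
            """Standard process capability index Cpk estimated from a sample."""
            mu, sigma = np.mean(sample), np.std(sample, ddof=1)
            return min(usl - mu, mu - lsl) / (3.0 * sigma)

        rng = np.random.default_rng(0)
        sample = rng.normal(loc=10.2, scale=0.15, size=50)   # assumed measurements
        value = cpk(sample, lsl=9.4, usl=10.6)
        print(value, "capable" if value > 4 / 3 else "not capable at the 4/3 threshold")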

  • 35.
    Al-Dory, Mohammed
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Directional edge detection by the gradient method applied to linear and non-linear edges2020Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    When we humans look at images, especially paintings, we are usually interested in the sense of art and what we regard as “beauty”. This may include colour harmony, fantasy, realism, expression, drama, ordered chaos, contemplative aspects, etc. Alas, none of that is interesting for a robot that processes a two-dimensional matrix representing what we humans call an image. Robots, and other digital computers, are programmed to care about things like resolution, sampling frequency, image intensity, as well as edges. The detection of edges is a very important subject in the field of image processing. An edge in an image represents the end of one object and the start of another. Thus, edges exist in different shapes and forms. Some edges are horizontal, others are vertical, and there are also diagonal edges; all of these are straight lines with constant slopes. There are also circular and curved edges whose slopes depend on the spatial variables. It is not always beneficial to detect all edges in an image; sometimes we are interested only in edges in a certain direction. In this work we explain the mathematics behind edge detection using the gradient approach, give optimal ways to detect linear edges in different directions, and discuss the detection of non-linear edges. The theory developed in this work is then applied and tested using Matlab.
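
    The thesis works in Matlab; as a language-neutral illustration of the gradient approach it describes, here is a small NumPy sketch that estimates the image gradient with Sobel-like kernels and projects it onto a chosen direction so that only edges roughly perpendicular to that direction respond. The test image and threshold are invented.

    ```python
    # Minimal sketch of gradient-based, direction-selective edge detection with NumPy.
    import numpy as np

    def convolve2d(img, kernel):
        """Naive 'valid' 2-D convolution, sufficient for this illustration."""
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
        return out

    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T

    # Synthetic 64x64 image: a bright square on a dark background
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0

    gx = convolve2d(img, sobel_x)     # horizontal derivative -> responds to vertical edges
    gy = convolve2d(img, sobel_y)     # vertical derivative   -> responds to horizontal edges

    theta = 0.0                       # direction of interest (radians); 0 = x-axis
    directional = gx * np.cos(theta) + gy * np.sin(theta)   # gradient component along theta
    edges = np.abs(directional) > 0.5 * np.abs(directional).max()
    print("pixels flagged as edges in the chosen direction:", int(edges.sum()))
    ```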

    Fulltekst (pdf)
    fulltext
  • 36.
    Alger, Susanne
    Umeå universitet, Samhällsvetenskapliga fakulteten, Institutionen för tillämpad utbildningsvetenskap.
    Is This Reliable Enough?: Examining Classification Consistency and Accuracy in a Criterion-Referenced Test2016Inngår i: International journal of assessment tools in education, ISSN 2148-7456, Vol. 3, nr 2, s. 137-150Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    One important step for assessing the quality of a test is to examine the reliability of test score interpretation. Which aspect of reliability is the most relevant depends on what type of test it is and how the scores are to be used. For criterion-referenced tests, and in particular certification tests, where students are classified into performance categories, the primary focus need not be on the size of the error but on the impact of this error on classification. This impact can be described in terms of classification consistency and classification accuracy. In this article, selected methods from classical test theory for estimating classification consistency and classification accuracy were applied to the theory part of the Swedish driving licence test, a high-stakes criterion-referenced test which is rarely studied in terms of reliability of classification. The results for this particular test indicated a level of classification consistency that falls slightly short of the recommended level, which is why lengthening the test should be considered. More evidence should also be gathered as to whether the placement of the cut-off score is appropriate, since this has implications for the validity of classifications.
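
    To give a feel for the two quantities, the sketch below computes classification consistency and accuracy around a cut score under a simplified normal measurement-error model. The cut score, SEM and true-score distribution are invented, and this is not the specific classical-test-theory procedure used in the article.

    ```python
    # Simplified sketch: classification consistency and accuracy around a cut score,
    # assuming normally distributed measurement error with a known SEM.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    cut = 52                    # assumed pass/fail cut-off (number-correct)
    sem = 3.0                   # assumed standard error of measurement
    true_scores = rng.normal(55, 8, size=10_000)     # hypothetical true scores

    # probability that a single administration places an examinee above the cut
    p_pass = 1.0 - norm.cdf(cut, loc=true_scores, scale=sem)

    # consistency: probability of the same decision on two parallel administrations
    consistency = np.mean(p_pass**2 + (1 - p_pass)**2)

    # accuracy: probability that the observed decision matches the true-score decision
    truly_pass = true_scores >= cut
    accuracy = np.mean(np.where(truly_pass, p_pass, 1 - p_pass))

    print(f"classification consistency ≈ {consistency:.3f}")
    print(f"classification accuracy    ≈ {accuracy:.3f}")
    ```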

  • 37.
    Ali, Raman
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Root Cause Analysis for In-Transit Time Performance: Time Series Analysis for Inbound Quantity Received into Warehouse2021Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [en]

    Cytiva is a global provider of technologies to pharmaceutical companies, and it is critical to ensure that Cytiva’s customers receive deliveries of products on time. Cytiva’s products are shipped via road transportation within most parts of Europe, and air freight is used for the rest of the world. The company is challenged to deliver products on time between regional distribution points and from manufacturing sites to its regional distribution centers. The on-time performance for the delivery of goods is today 79%, compared to the company’s goal of 95%.

    The goal of this work is to find the root causes and recommend improvement opportunities for the logistics organization’s inbound in-transit time performance, towards its target of a 95% success rate for shipping in-transit times.

    Data for this work was collected from the company’s systems to create visibility for the logistics specialists and to build predictions that can be used for the warehouse in Rosersberg. Visibility was created by implementing various dashboards in the QlikSense program for use by the logistics group. The prediction models were built on the Holt-Winters forecasting technique to predict the quantity, weight and volume of products arriving daily within five days, which is sufficient for implementation in the daily work. With this forecasting technique, highly accurate models were found for both quantity and weight, with accuracies of 96.02% and 92.23%, respectively. For the volume, however, too many outliers had to be replaced by mean values, and the accuracy of the model was 75.82%.

    However, a large number of discrepancies have been found in the data, which has led to a large ongoing project to resolve them. This means that the models presented in this thesis cannot be considered completely reliable for the company to use, given the many errors found in the data. The models may need to be adjusted once the quality of the data has improved. As of today, the models should be used only as a rough guide.
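
    As an illustration of the Holt-Winters technique described above, the sketch below fits an additive-trend, additive-seasonal model to a simulated daily quantity series with a weekly pattern and forecasts five days ahead using statsmodels. The series and settings are invented; the thesis’ actual models and accuracy figures are not reproduced.

    ```python
    # Minimal Holt-Winters sketch on a simulated daily-quantity series with weekly seasonality.
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(4)
    days = 26 * 7
    weekly = np.tile([120, 140, 135, 150, 160, 60, 40], days // 7)     # weekday pattern
    quantity = weekly + np.linspace(0, 30, days) + rng.normal(0, 10, days)

    model = ExponentialSmoothing(quantity, trend="add", seasonal="add",
                                 seasonal_periods=7).fit()
    forecast = model.forecast(5)            # next five days of inbound quantity
    print(np.round(forecast, 1))
    ```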

    Fulltekst (pdf)
    fulltext
  • 38.
    Ali, Saif
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Optimization-Based Carefree Clearance: An Optimization-Based Approach to Identifying Worst-Case Manoeuvres for Fighter Aircraft2024Independent thesis Advanced level (degree of Master (Two Years)), 20 poäng / 30 hpOppgave
    Abstract [en]

    The development of advanced Flight Control Systems (FCSs) is continuously progressing at a rapid pace. Originally consisting of purely mechanical functions for deflecting the control surfaces, FCSs have, with the transition to Fly-By-Wire technology, come to include highly automated algorithms within the control system. Such complex systems, however, require a rigorous validation and verification process to ensure safe and reliable flight. In the clearance of its control laws, the FCS must be tested against all possible uncertainties and manoeuvres, resulting in a lengthy and costly process, not least for fighter aircraft with the additional requirement of carefree handling. The demand for efficient and comprehensive tools drives the effort of this thesis, which explores the use of optimization, specifically multi-modal Genetic Algorithms, for the identification of diverse worst-case manoeuvres. Compared with the traditional methodology, which uses gridding of flight conditions and a set of predefined manoeuvres to assess clearance, the optimization-based methods were consistently able to find manoeuvres resulting in more problematic outcomes.

    Fulltekst (pdf)
    fulltext
  • 39.
    Alishev, Boris
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Kågström, Oskar
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Effectivisation of an Industrial Painting Process: A discrete event approach to modeling and analysing the painting process at Volvo GTO Umeå2022Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [en]

    For any manufacturing process, one of the key challenges once a solid foundation has been built is how improvements can be made. Before implementation, management has to consider how possible changes will affect both the process as a whole and every individual part of it. The groundwork for this is a clear overview of every part and the possibility to investigate the effects of changes. This thesis therefore aims to provide a clear overview of the complex painting process at Volvo GTO in Umeå and a template for investigating how different implemented changes will affect the process. The means for doing this are statistics, modeling and discrete event simulation. The modeling shall provide an approximate recreation of reality, and the subsequent analysis shall take similarities and differences into account to estimate the effects of changes. Recreation of real-world data and variability is based on bootstrap resampling from multiple independent weeks of observations. Results obtained from simulation are compared to observed data in order to validate the model and investigate discrepancies. Given the results of the model validation, modifications are implemented, and information obtained from the validation is used to evaluate the results of the modifications. Finally, the strengths and weaknesses of the thesis are presented, and a recommendation to alter the stance on process improvements is provided to Volvo GTO.
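
    To illustrate the combination of discrete event simulation and bootstrap resampling mentioned above, here is a toy event-loop model of a single painting station whose processing times are drawn by bootstrap from a set of “observed” durations. Everything in it (station layout, distributions, numbers) is invented and unrelated to the Volvo GTO model.

    ```python
    # Toy discrete-event simulation of one painting station with bootstrap-resampled service times.
    import heapq
    import numpy as np

    rng = np.random.default_rng(5)
    observed_process_times = rng.gamma(shape=4.0, scale=3.0, size=200)   # stand-in for measured data

    def simulate(n_jobs, mean_interarrival):
        events = []                                    # (time, kind, job_id) min-heap
        t = 0.0
        for job in range(n_jobs):                      # schedule all arrivals up front
            t += rng.exponential(mean_interarrival)
            heapq.heappush(events, (t, "arrival", job))
        busy_until, waits, last_departure = 0.0, [], 0.0
        while events:
            time, kind, job = heapq.heappop(events)
            if kind == "arrival":
                start = max(time, busy_until)                      # wait if the station is busy
                service = rng.choice(observed_process_times)       # bootstrap draw
                busy_until = start + service
                waits.append(start - time)
                heapq.heappush(events, (busy_until, "departure", job))
            else:
                last_departure = time                  # track when the last job leaves
        return np.mean(waits), last_departure

    waits, makespan = zip(*[simulate(500, 13.0) for _ in range(20)])
    print(f"mean queueing time ≈ {np.mean(waits):.1f}, mean makespan ≈ {np.mean(makespan):.0f}")
    ```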

    Fulltekst (pdf)
    fulltext
  • 40. Alloyarova, Roza
    et al.
    Nikulin, Mikhail
    Pya, Natalya
    Voinov, Vassilly
    The Power-Generalized Weibull probability distribution and its use in survival analysis2007Inngår i: Communications in Dependability and Quality Management, Vol. 10, nr 1, s. 5-15Artikkel i tidsskrift (Fagfellevurdert)
  • 41.
    Alm, Hannah
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Optimering av asfaltproduktion för minskad klimatpåverkan: Minimering av koldioxidutsläpp i Skanskas asfalt2024Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [sv]

    This degree project was carried out at Skanska Industrial Solutions (SIS) and aimed to investigate ways of minimising the carbon dioxide emissions from Skanska's asphalt production through optimisation. A non-linear optimisation model was developed to compute the optimal blend of virgin materials and reclaimed asphalt, as well as the annual tonnage to be produced, in order to minimise the carbon dioxide emissions per tonne of asphalt manufactured.

    The starting point for the model was to describe the various aspects of the emissions from asphalt production as realistically as possible. The model accounts for constraints such as the maximum permitted share of reclaimed asphalt, the maximum production level and the availability of reclaimed asphalt. Electricity and bio-oil consumption were modelled as functions of the production volume using the least-squares method, to describe how consumption changed as the annual tonnage produced increased.

    The results show that a high share of reclaimed asphalt is the most important factor for reducing carbon dioxide emissions. The model also showed that the annual tonnage affects the carbon dioxide emissions per tonne of asphalt manufactured, where higher production leads to lower emissions per tonne. An investigation of future scenarios with a lower availability of reclaimed asphalt showed, however, that the annual tonnage should only be increased as long as a maximum share of reclaimed asphalt remains possible.
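
    The following is a hypothetical sketch of the kind of constrained non-linear optimisation described above: choose the reclaimed-asphalt share and the annual tonnage to minimise CO2 per tonne, subject to supply and blending limits. All coefficients, limits and functional forms are invented and do not come from the thesis.

    ```python
    # Hypothetical CO2-per-tonne minimisation with scipy; every number here is an assumption.
    import numpy as np
    from scipy.optimize import minimize

    E_VIRGIN, E_RAP = 60.0, 15.0        # assumed kg CO2 per tonne of virgin / reclaimed material
    RAP_MAX_SHARE = 0.30                # assumed max permitted share of reclaimed asphalt
    RAP_SUPPLY = 40_000.0               # assumed available reclaimed asphalt, tonnes/year
    TON_MAX = 200_000.0                 # assumed max production, tonnes/year

    def energy_emissions(tonnage):
        # assumed least-squares fit of energy use vs production volume (fixed + variable part)
        return 500_000.0 + 8.0 * tonnage          # kg CO2 per year

    def co2_per_tonne(x):
        rap_share, tonnage = x
        material = tonnage * (rap_share * E_RAP + (1 - rap_share) * E_VIRGIN)
        return (material + energy_emissions(tonnage)) / tonnage

    constraints = [{"type": "ineq", "fun": lambda x: RAP_SUPPLY - x[0] * x[1]}]  # reclaimed supply limit
    bounds = [(0.0, RAP_MAX_SHARE), (50_000.0, TON_MAX)]

    res = minimize(co2_per_tonne, x0=[0.15, 100_000.0], bounds=bounds,
                   constraints=constraints, method="SLSQP")
    rap_opt, ton_opt = res.x
    print(f"optimal reclaimed share ≈ {rap_opt:.2f}, tonnage ≈ {ton_opt:,.0f}, "
          f"CO2/tonne ≈ {res.fun:.1f} kg")
    ```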

    Fulltekst (pdf)
    fulltext
  • 42.
    Alm, Hannes
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Lindman, Johan
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Estimating Payoff Distributions of US Life Insurance Portfolios: An Evaluation of Two Approaches: A Monte Carlo Method and the De Pril’s Algorithm2024Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [en]

    For any fund manager, the ability to project expected returns into the future is vital, but it poses a great deal of uncertainty. When the underlying risk is tied to human longevity, the uncertainty is found in the stochastic nature of mortality.

    This thesis presents two approaches to approximating the distribution of expected payoffs for a portfolio of US life insurance policies. The first utilizes the Monte Carlo method and approximates the payoff in binary and monetary values. The second approach uses De Pril’s recursive algorithm to calculate the binary distribution. The two methods are evaluated on two key factors: accuracy and computational cost. In addition, different portfolio distributions are evaluated in terms of their statistical characteristics and longevity exposure.

    The results presented in this thesis indicate that the Monte Carlo method is the more appropriate method for calculating payoff distributions of US life insurance portfolios. Although De Pril’s method gives an accurate result for a single time period, the repeated convolution needed to evaluate longer time periods leads to an unsustainable increase in the error term. An analysis of statistical measurements indicates that life settlement portfolios have a peaked distribution with heavy tails and positive skewness. Furthermore, tests of longevity show that the portfolio distributions are sensitive to the accuracy of the mortality assumptions.
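
    To illustrate the comparison, the sketch below contrasts a Monte Carlo estimate with an exact convolution of the “binary” payoff distribution (the number of policies paying out) for a small, invented portfolio of one-period death probabilities. The simple convolution stands in for exact/recursive methods; it is not De Pril’s algorithm itself.

    ```python
    # Monte Carlo vs exact convolution (Poisson-binomial) for the number of payouts
    # in an invented one-period portfolio.
    import numpy as np

    rng = np.random.default_rng(6)
    q = rng.uniform(0.01, 0.10, size=200)          # hypothetical one-year death probabilities

    # Exact distribution of the number of payouts via iterative convolution
    dist = np.array([1.0])
    for qi in q:
        dist = np.convolve(dist, [1.0 - qi, qi])

    # Monte Carlo estimate of the same distribution
    n_sim = 100_000
    deaths = (rng.random((n_sim, len(q))) < q).sum(axis=1)
    mc_probs = np.bincount(deaths, minlength=len(dist)) / n_sim

    k = np.arange(len(dist))
    print(f"exact mean {np.sum(k * dist):.2f} vs MC mean {deaths.mean():.2f}")
    print(f"exact P(more than 20 deaths) {dist[21:].sum():.4f} vs MC {mc_probs[21:].sum():.4f}")
    ```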

    Fulltekst (pdf)
    fulltext
  • 43.
    Almqvist, Saga
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Nore, Lana
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Where to Stack the Chocolate?: Mapping and Optimisation of the Storage Locations with Associated Transportation Cost at Marabou2017Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [en]

    Today, inventory management at Marabou is organised in such a way that articles are stored based on which production line they belong to and are sent to storage locations close to their production line. However, some storage locations are not optimised, insofar as articles are stored out of pure habit, following what is considered most convenient. This means that the storage locations are not based on any fixed instructions or standard. In this report, we propose optimal storage locations with respect to transportation cost by modelling the problem mathematically as a minimal cost matching problem, which we solve using the so-called Hungarian algorithm. To be able to implement the Hungarian algorithm, we collected data regarding the stock levels of articles in the factory throughout 2016. We adjusted the collected data by turning the articles into units of pallets. We considered three different implementations of the Hungarian algorithm. The results from the different approaches are presented together with several suggestions regarding pallet optimisation. In addition to the theoretical background, our work is based on an empirical study through participant observations as well as qualitative interviews with factory employees. In addition to our modelling work, we thus offer several further suggestions for efficiency savings or improvements at the factory, as well as for further work building on this report.
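
    As a small illustration of the assignment step, the sketch below matches article groups to storage locations at minimal total transportation cost using the Hungarian algorithm as implemented in scipy.optimize.linear_sum_assignment. The cost matrix is invented and not related to the Marabou data.

    ```python
    # Minimal cost matching of article groups to storage locations (Hungarian algorithm).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(7)
    n_articles, n_locations = 6, 6
    # hypothetical cost (e.g. forklift metres x pallet flows) of storing article i at location j
    cost = rng.integers(10, 100, size=(n_articles, n_locations)).astype(float)

    rows, cols = linear_sum_assignment(cost)       # Hungarian / Kuhn-Munkres algorithm
    for article, location in zip(rows, cols):
        print(f"article group {article} -> storage location {location} (cost {cost[article, location]:.0f})")
    print("total transportation cost:", cost[rows, cols].sum())
    ```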

    Fulltekst (pdf)
    ALMQVIST&NORE
  • 44.
    Almqvist, Siri
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Nordin, Oskar
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    STRESS TESTING AN SME PORTFOLIO: Effects of an Adverse Macroeconomic Scenario on Credit Risk Transition Matrices2021Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [en]

    The financial crisis of 2007-2008 was a severe global crisis causing a worldwide recession. One of the main contributing factors of the crisis was the excessive risk appetite of banks and financial institutions. Since then, regulatory authorities and financial institutions have directed their focus towards risk management, with the main objective of averting a similar crisis in the future. The aim of this thesis is to investigate how an adverse macroeconomic scenario would affect the migrations between risk classes of an SME portfolio, referred to as a stress test.

    This thesis utilises two frameworks, one by Belkin and Suchower and one by Carlehed and Petrov, for creating a single systematic indicator describing the credit class migrations of the portfolio. Four different regression model setups (Ordinary Least Squares, Additive Model, XGBoost and SVM) are then used to describe the relationship between macroeconomic indicators and this systematic indicator. The four models are evaluated in terms of interpretability and predictive ability in order to find the main drivers of the systematic indicator, and their prediction errors are compared to find the best model. The portfolio is stress tested by using the regression models to predict the corresponding systematic indicator given an adverse macroeconomic scenario. The probabilities of default, estimated from the indicator using each of the frameworks, are then compared and analysed with regard to the systematic indicator.

    The results show that unemployment is the main driver of the risk class migrations for an SME portfolio, both from a statistical and an economic perspective. The most appropriate regression model is the additive model, because of its performance and interpretability, and it is therefore the recommended model for this problem. From the PD estimations, it is concluded that the framework by Belkin and Suchower gives a more volatile estimate than that of Carlehed and Petrov.
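
    The regression step described above can be illustrated with a toy ordinary least squares fit of a systematic indicator on macro variables, followed by a prediction under an adverse scenario. The indicator and macro series below are simulated; the sketch does not implement either framework’s estimation of the indicator itself.

    ```python
    # Toy OLS of a simulated systematic credit indicator on macro variables,
    # plus a prediction under an invented adverse scenario.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n_quarters = 60
    unemployment = 6 + np.cumsum(rng.normal(0, 0.3, n_quarters))
    gdp_growth = rng.normal(0.5, 1.0, n_quarters)
    z = -0.35 * (unemployment - unemployment.mean()) + 0.15 * gdp_growth + rng.normal(0, 0.3, n_quarters)

    X = sm.add_constant(np.column_stack([unemployment, gdp_growth]))
    model = sm.OLS(z, X).fit()
    print(model.params)                      # const, unemployment, gdp_growth coefficients

    adverse = sm.add_constant(np.array([[unemployment.max() + 3.0, -4.0]]), has_constant="add")
    print("predicted systematic indicator under the adverse scenario:", model.predict(adverse))
    ```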

    Fulltekst (pdf)
    fulltext
  • 45.
    Al-Sahi, Mohammad Reda
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Prehospital resource optimization: Master thesis for Umeå University & Prehospital resource optimization2022Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [en]

    The Vinnova project Prehospital resource optimization is a collaboration between the four northernmost regions in Sweden (the Jämtland/Härjedalen, Västernorrland, Västerbotten and Norrbotten regions), which together make up approximately half of Sweden's area. This work is a continuation of and improvement on a study conducted by Umeå University to estimate driving times for ambulances in the four northernmost regions. The study aims to improve the accuracy of the driving times estimated by Umeå University and to understand and explain the variables that affect ambulance driving times. Data on empirical driving times from 2014-2020 were reviewed and checked in order to estimate driving times. Through analysis of linear relations, different linear models depending on different parameters were created to explain the empirical driving times as well as possible. The models were finally validated with the K-fold method. The results show that the estimated driving times can be improved, but there is still room for further improvement. The explanatory variables are month, day of the week and time of day.
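
    The modelling and validation steps mentioned above can be sketched as a linear regression evaluated with K-fold cross-validation. The features and data below (distance plus indicator-style variables for season, weekday and rush hour) are invented for illustration only.

    ```python
    # Sketch of K-fold validation of a linear driving-time model on simulated data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(9)
    n = 2000
    distance_km = rng.uniform(1, 200, n)
    month = rng.integers(1, 13, n)
    weekday = rng.integers(0, 7, n)
    hour = rng.integers(0, 24, n)
    winter = np.isin(month, [12, 1, 2, 3]).astype(float)
    rush = np.isin(hour, [7, 8, 16, 17]).astype(float)

    driving_time = 5 + 0.9 * distance_km * (1 + 0.15 * winter + 0.10 * rush) + rng.normal(0, 6, n)
    X = np.column_stack([distance_km, winter, rush, weekday])

    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(LinearRegression(), X, driving_time, cv=cv,
                             scoring="neg_mean_absolute_error")
    print(f"5-fold mean absolute error ≈ {-scores.mean():.1f} minutes")
    ```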

  • 46.
    Alshalabi, Mohamad
    Umeå universitet, Samhällsvetenskapliga fakulteten, Handelshögskolan vid Umeå universitet, Statistik.
    Measures of statistical dependence for feature selection: Computational study2022Independent thesis Advanced level (degree of Master (One Year)), 10 poäng / 15 hpOppgave
    Abstract [en]

    The importance of feature selection for statistical and machine learning models derives from their explainability and the ability to explore new relationships, leading to new discoveries. Straightforward feature selection methods measure the dependencies between the potential features and the response variable. This thesis studies the selection of features according to a maximal statistical dependency criterion based on generalized Pearson’s correlation coefficients, e.g., Wijayatunga’s coefficient. I present a framework for feature selection based on these coefficients for high-dimensional feature variables. The results are compared to the ones obtained by applying an elastic net regression (for high-dimensional data). The generalized Pearson’s correlation coefficient is a metric-based measure where the metric is the Hellinger distance, considered as a distance between probability distributions. Wijayatunga’s coefficient was originally proposed for the discrete case; here, we generalize it to continuous variables by discretization and kernelization. It is interesting to see how the discretization behaves as the bins are made finer. The study employs both synthetic and real-world data to illustrate the validity and power of this feature selection process. Moreover, a new method of normalization for mutual information is included. The results show that both measures have considerable potential in detecting associations. The feature selection experiment shows that elastic net regression is superior to our proposed method; nevertheless, more investigation could be done regarding this subject.
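
    As a rough illustration of the underlying idea, the sketch below computes a Hellinger-distance-based dependence measure for two continuous variables by discretising them and comparing the joint distribution with the product of its marginals. The function name, binning and data are my own choices, and the measure is not claimed to reproduce Wijayatunga’s coefficient exactly.

    ```python
    # Discretised Hellinger-distance dependence measure: joint vs product of marginals.
    import numpy as np

    def hellinger_dependence(x, y, bins=10):
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        p = joint / joint.sum()                         # empirical joint distribution
        q = np.outer(p.sum(axis=1), p.sum(axis=0))      # product of the marginals
        # Hellinger distance H(p, q) = sqrt(1 - sum(sqrt(p*q))), which lies in [0, 1]
        return np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q))))

    rng = np.random.default_rng(10)
    x = rng.normal(size=5000)
    print("independent:", round(hellinger_dependence(x, rng.normal(size=5000)), 3))
    print("dependent:  ", round(hellinger_dependence(x, x + 0.5 * rng.normal(size=5000)), 3))
    ```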

    Fulltekst (pdf)
    fulltext
  • 47.
    Alstermark, Olivia
    et al.
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Stolt, Evangelina
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    Purchase Probability Prediction: Predicting likelihood of a new customer returning for a second purchase using machine learning methods2021Independent thesis Advanced level (professional degree), 20 poäng / 30 hpOppgave
    Abstract [en]

    When a company evaluates a customer for being a potential prospect, one of the key questions to answer is whether the customer will generate profit in the long run. A possible step to answer this question is to predict the likelihood of the customer returning to the company again after the initial purchase. The aim of this master thesis is to investigate the possibility of using machine learning techniques to predict the likelihood of a new customer returning for a second purchase within a certain time frame.

    To investigate to what degree machine learning techniques can be used to predict the probability of return, a number of different model setups of Logistic Lasso, Support Vector Machine and Extreme Gradient Boosting are tested. Model development is performed to ensure well-calibrated probability predictions and to possibly overcome the difficulty that follows from an imbalanced ratio of returning and non-returning customers. Throughout the thesis work, a number of actions are taken in order to account for data protection. One such action is to add noise to the response feature, ensuring that the true fraction of returning and non-returning customers cannot be derived. To further guarantee data protection, axis values of evaluation plots are removed and evaluation metrics are scaled. Nevertheless, it is perfectly possible to select the superior model out of all investigated models.

    The results obtained show that the best performing model is a Platt-calibrated Extreme Gradient Boosting model, which has much higher performance than the other models with regard to the considered evaluation metrics, while also providing predicted probabilities of high quality. Further, the results indicate that the setups investigated to account for imbalanced data do not improve model performance. The main conclusion is that it is possible to obtain probability predictions of high quality for new customers returning to a company for a second purchase within a certain time frame using machine learning techniques. This provides a powerful tool for a company when evaluating potential prospects.
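
    To illustrate Platt (sigmoid) calibration of a boosted classifier, the sketch below wraps scikit-learn’s GradientBoostingClassifier in CalibratedClassifierCV on synthetic, deliberately imbalanced data. The gradient boosting implementation is a stand-in for Extreme Gradient Boosting, and none of the thesis’ data or results are used.

    ```python
    # Platt-calibrated boosted classifier on synthetic imbalanced data.
    import numpy as np
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import brier_score_loss, roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=20_000, n_features=15, n_informative=6,
                               weights=[0.85, 0.15], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

    base = GradientBoostingClassifier(random_state=0)
    calibrated = CalibratedClassifierCV(base, method="sigmoid", cv=3)   # Platt scaling
    calibrated.fit(X_tr, y_tr)

    proba = calibrated.predict_proba(X_te)[:, 1]
    print(f"AUC = {roc_auc_score(y_te, proba):.3f}, Brier score = {brier_score_loss(y_te, proba):.3f}")
    ```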

    Fulltekst (pdf)
    alstermark_stolt
  • 48.
    Altmejd, Adam
    et al.
    Swedish Institute for Social Research, Stockholm University, Stockholm, Sweden; Department of Finance, Stockholm School of Economics, Stockholm, Sweden.
    Rocklöv, Joacim
    Umeå universitet, Medicinska fakulteten, Institutionen för folkhälsa och klinisk medicin, Avdelningen för hållbar hälsa. Heidelberg Institute of Global Health (HIGH), Interdisciplinary Centre for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany.
    Wallin, Jonas
    Department of Statistics, Lund University, Lund, Sweden.
    Nowcasting COVID-19 statistics reported with delay: A case-study of Sweden and the UK2023Inngår i: International Journal of Environmental Research and Public Health, ISSN 1661-7827, E-ISSN 1660-4601, Vol. 20, nr 4Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    The COVID-19 pandemic has demonstrated the importance of unbiased, real-time statistics of trends in disease events in order to achieve an effective response. Because of reporting delays, real-time statistics frequently underestimate the total number of infections, hospitalizations and deaths. When studied by event date, such delays also risk creating an illusion of a downward trend. Here, we describe a statistical methodology for predicting true daily quantities and their uncertainty, estimated using historical reporting delays. The methodology takes into account the observed distribution pattern of the lag. It is derived from the "removal method", a well-established estimation framework in the field of ecology.
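
    To make the reporting-delay problem concrete, the sketch below inflates partially reported daily counts by the fraction of events that historically has been reported after a given delay. This simple multiplicative correction only illustrates the problem the article addresses; it is not the removal-method estimator the authors derive, and all numbers are invented.

    ```python
    # Simple nowcasting sketch: correct partially reported counts by historical reporting fractions.
    import numpy as np

    rng = np.random.default_rng(11)
    true_counts = rng.poisson(100, size=30)                             # hypothetical true daily counts
    report_frac_by_delay = np.array([0.3, 0.6, 0.8, 0.9, 0.97, 1.0])    # assumed cumulative reporting fractions

    # what has been reported by "today" for each event date (older dates are more complete)
    days_since_event = np.arange(len(true_counts))[::-1]                # last element = today (delay 0)
    frac_reported = report_frac_by_delay[np.minimum(days_since_event, len(report_frac_by_delay) - 1)]
    reported = rng.binomial(true_counts, frac_reported)

    nowcast = reported / frac_reported                                  # multiplicative correction
    print("reported (last 5 days): ", reported[-5:])
    print("nowcast  (last 5 days): ", np.round(nowcast[-5:]))
    print("true     (last 5 days): ", true_counts[-5:])
    ```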

    Fulltekst (pdf)
    fulltext
  • 49.
    Altıntaş, Özge
    et al.
    Ankara University, Faculty of Educational Sciences, Department of Educational Sciences, Educational Measurement and Evaluation, Ankara, Turkey.
    Wallin, Gabriel
    Université Côte d’Azur, Inria, CNRS, Laboratoire J. A. Dieudonné, team Maasai, Sophia-Antipolis, France.
    Equality of admission tests using kernel equating under the non-equivalent groups with covariates design2021Inngår i: International Journal of Assessment Tools in Education, E-ISSN 2148-7456, Vol. 8, nr 4, s. 729-743Artikkel i tidsskrift (Fagfellevurdert)
    Abstract [en]

    Educational assessment tests are designed to measure the same psychological constructs over extended periods of time. This feature is important considering that test results are often used in the selection process for admittance to university programs. To ensure fair assessments, especially for those whose results weigh heavily in selection decisions, it is necessary to collect evidence demonstrating that the assessments are not biased, and to confirm that the scores obtained from different test forms have statistical equality. For this purpose, test equating has important functions, as it prevents bias generated by differences in the difficulty levels of different test forms, allows the scores obtained from different test forms to be reported on the same scale, and ensures that the reported scores communicate the same meaning. In this study, these important functions were evaluated using real college admission test data from different test administrations. The kernel equating method under the non-equivalent groups with covariates design was applied to determine whether the scores obtained from different time periods but measuring the same psychological constructs were statistically equivalent. The non-equivalent groups with covariates design was specifically used because the test groups of the admission test are non-equivalent and there are no anchor items. Results from the analyses showed that the test forms had different score distributions, and that the relationship was non-linear. The equating procedure was thus adjusted to eliminate these differences and thereby allow the tests to be used interchangeably.

  • 50.
    Amanuel, Meron
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för matematik och matematisk statistik.
    ON GENERATING THE PROBABILITY MASS FUNCTION USING FIBONACCI POWER SERIES2022Independent thesis Basic level (degree of Bachelor), 10 poäng / 15 hpOppgave
    Abstract [en]

    This thesis focuses on generating a probability mass function using the Fibonacci sequence as the coefficients of a power series.

    The discrete probability distribution, named the Fibonacci distribution, was formed by taking into consideration the recursive property of the Fibonacci sequence, the radius of convergence of the power series, and the additive property of mutually exclusive events. This distribution satisfies the requirements of a legitimate probability mass function.

    Its cumulative distribution function and moment generating function are then derived, and the latter is used to generate moments of the distribution, specifically the mean and the variance.

    The characteristics of some convergent sequences generated from the Fibonacci sequence are found useful in showing that the limiting form of the Fibonacci distribution is a geometric distribution. Lastly, the paper showcases applications and simulations of the Fibonacci distribution using MATLAB.
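
    The thesis uses MATLAB; as a language-neutral illustration of the construction, the sketch below normalises the Fibonacci power series into a probability mass function using the generating function sum_{k>=1} F_k x^k = x / (1 - x - x^2), valid for |x| < 1/phi, and checks the total probability, mean and variance numerically. The choice of x and the truncation point are my own.

    ```python
    # Fibonacci-coefficient power series normalised to a pmf:
    # P(K = k) = F_k * x**k * (1 - x - x**2) / x for k >= 1, 0 < x < 1/phi ≈ 0.618.
    import numpy as np

    x = 0.4                                   # any value in (0, 1/phi)
    K = 80                                    # truncation point for the numerical check

    fib = np.zeros(K + 1)
    fib[1] = fib[2] = 1.0
    for i in range(3, K + 1):
        fib[i] = fib[i - 1] + fib[i - 2]      # Fibonacci recursion F_k = F_{k-1} + F_{k-2}

    k = np.arange(1, K + 1)
    pmf = fib[1:] * x**k * (1 - x - x**2) / x

    print("total probability ≈", pmf.sum())                  # ≈ 1 (up to truncation error)
    print("mean ≈", (k * pmf).sum(), " variance ≈", (k**2 * pmf).sum() - (k * pmf).sum()**2)
    ```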

    Fulltekst (pdf)
    fulltext