umu.se Publications
1 - 50 of 1638
• 1. Aaghabali, M.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Upper bounds on the number of perfect matchings and directed 2-factors in graphs with given number of vertices and edges. 2015. In: European Journal of Combinatorics (Print), ISSN 0195-6698, E-ISSN 1095-9971, Vol. 45, pp. 132–144. Article in journal (Refereed)

We give an upper bound on the number of perfect matchings in simple graphs with a given number of vertices and edges. We apply this result to give an upper bound on the number of 2-factors in a directed complete bipartite balanced graph on 2n vertices. The upper bound is sharp for even n. For odd n we state a conjecture on a sharp upper bound.
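
The brute-force baseline that such bounds complement can be sketched as follows — a minimal illustrative counter (not the paper's method), feasible only for very small graphs, which is exactly why upper bounds in terms of the numbers of vertices and edges are useful:

```python
def count_perfect_matchings(n, edges):
    """Count perfect matchings of a simple graph on vertices 0..n-1
    by exhaustive recursion (exponential time; small n only)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def match(free):
        if not free:
            return 1          # all vertices matched
        u = min(free)         # match the smallest free vertex first
        return sum(match(free - {u, w}) for w in adj[u] if w in free)

    return match(frozenset(range(n)))
```

For example, the complete graph K4 has 3 perfect matchings and the 4-cycle has 2.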

• 2.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Optimering av kortaste vägen vid hantering och avledning av skadligt dagvatten: Lösning med A-stjärna algoritm samt en guide med ekonomiska styrmedel för beslutsfattande aktörer. 2017. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis

The earth's population is growing and increasingly more people move into urban areas. This means that as cities grow, new buildings are being built and infrastructures are expanding. This rapid growth is directly related to increased floods as a result of man-made changes in nature.

The already overloaded storm water systems for rain-, melt-, rinsing and other surplus water often cannot handle the existing demand. Floods therefore arise at greater rain intensities and impose significant costs on society. Due to an unclear division of responsibility within the municipality's organizations, the existing storm water problem is not being handled. In order to plan for sustainable cities in the future, it is important to find a viable solution to the responsibility issue and to determine how best to handle the storm water to achieve cost advantages.

This study presents a guide for municipalities on how to allocate responsibility between the municipality and the developer. The guide is based on simulations and optimization theory to propose effective solutions for harmful surplus storm water. Through simulations of the storm water system, the amount of surplus water that exceeds the storm water system's capacity has been quantified. In addition, to find a reasonable alternative run-off path for the surplus water, different methods for the shortest path problem have been investigated.

The results show that a classical shortest path algorithm with a heuristic function is not the most appropriate alternative, because the heuristic function prevents the selection of a more natural pathway upstream even though it could be a better solution.
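
For illustration, a minimal A* implementation of the kind evaluated here — on a hypothetical 2D cost grid with a Manhattan-distance heuristic. The grid, costs and heuristic are assumptions for the sketch, not the thesis's terrain model:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D cost grid; moving into cell (r, c) costs grid[r][c].
    Manhattan distance is admissible here because every step costs >= 1."""
    rows, cols = len(grid), len(grid[0])
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])  # heuristic
    dist = {start: 0}
    came = {}
    pq = [(h(*start), start)]
    while pq:
        _, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            path = [(r, c)]
            while path[-1] != start:
                path.append(came[path[-1]])
            return dist[goal], path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = dist[(r, c)] + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    came[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd + h(nr, nc), (nr, nc)))
    return float("inf"), []
```

The heuristic steers the search toward the goal; as the thesis notes, exactly this steering can rule out detours (e.g. upstream) that a cost model for water routing would prefer.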

• 3.
ANMC, EPFL.
Institut für Angewandte und Numerische Mathematik, KIT. ANMC, EPFL. ANMC, EPFL.
High weak order methods for stochastic differential equations based on modified equations. 2012. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 34, no 3, pp. A1800–A1823. Article in journal (Refereed)

Inspired by recent advances in the theory of modified differential equations, we propose a new methodology for constructing numerical integrators with high weak order for the time integration of stochastic differential equations. This approach is illustrated with the constructions of new methods of weak order two, in particular, semi-implicit integrators well suited for stiff (mean-square stable) stochastic problems, and implicit integrators that exactly conserve all quadratic first integrals of a stochastic dynamical system. Numerical examples confirm the theoretical results and show the versatility of our methodology.
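
As a baseline against which such high weak order methods are measured, the standard Euler-Maruyama scheme (weak order one) can be sketched for geometric Brownian motion; for weak convergence the Gaussian increments may even be replaced by two-point variables. All parameters below are illustrative assumptions, not the paper's examples:

```python
import math
import random

def euler_maruyama_weak_mean(x0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Weak approximation of E[X_T] for dX = mu*X dt + sigma*X dW using
    Euler-Maruyama with two-point (+/- sqrt(dt)) increments, which suffice
    for weak (moment-wise) convergence."""
    rng = random.Random(seed)
    dt = T / n_steps
    sq = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            dw = sq if rng.random() < 0.5 else -sq   # two-point increment
            x += mu * x * dt + sigma * x * dw
        total += x
    return total / n_paths
```

For geometric Brownian motion E[X_T] = x0·exp(mu·T), so the quality of the weak approximation can be checked directly.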

• 4.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Numerical analysis for random processes and fields and related design problems. 2011. Doctoral thesis, comprehensive summary (Other academic)

In this thesis, we study numerical analysis for random processes and fields. We investigate the behavior of the approximation accuracy for specific linear methods based on a finite number of observations. Furthermore, we propose techniques for optimizing performance of the methods for particular classes of random functions. The thesis consists of an introductory survey of the subject and related theory and four papers (A-D).

In paper A, we study a Hermite spline approximation of quadratic mean continuous and differentiable random processes with an isolated point singularity. We consider a piecewise polynomial approximation combining two different Hermite interpolation splines for the interval adjacent to the singularity point and for the remaining part. For locally stationary random processes, sequences of sampling designs eliminating asymptotically the effect of the singularity are constructed.

In Paper B, we focus on approximation of quadratic mean continuous real-valued random fields by a multivariate piecewise linear interpolator based on a finite number of observations placed on a hyperrectangular grid. We extend the concept of local stationarity to random fields and for the fields from this class, we provide an exact asymptotics for the approximation accuracy. Some asymptotic optimization results are also provided.

In Paper C, we investigate numerical approximation of integrals (quadrature) of random functions over the unit hypercube. We study the asymptotics of a stratified Monte Carlo quadrature based on a finite number of randomly chosen observations in strata generated by a hyperrectangular grid. For the locally stationary random fields (introduced in Paper B), we derive exact asymptotic results together with some optimization methods. Moreover, for a certain class of random functions with an isolated singularity, we construct a sequence of designs eliminating the effect of the singularity.

In Paper D, we consider a Monte Carlo pricing method for arithmetic Asian options. An estimator is constructed using a piecewise constant approximation of an underlying asset price process. For a wide class of Lévy market models, we provide upper bounds for the discretization error and the variance of the estimator. We construct an algorithm for accurate simulations with controlled discretization and Monte Carlo errors, and obtain the estimates of the option price with a predetermined accuracy at a given confidence level. Additionally, for the Black-Scholes model, we optimize the performance of the estimator by using a suitable variance reduction technique.

• 5.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics. Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Was it snowing on lake Kassjön in January 4486 BC? Functional data analysis of sediment data. 2014. In: Proceedings of the Third International Workshop on Functional and Operatorial Statistics (IWFOS 2014), Stresa, Italy, June 2014. Conference paper (Refereed)
• 6.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Physiotherapy. National Sports Institute of Malaysia. MOX – Department of Mathematics, Politecnico di Milano. Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Physiotherapy. Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics. MOX – Department of Mathematics, Politecnico di Milano.
An inferential framework for domain selection in functional ANOVA. 2014. In: Contributions in infinite-dimensional statistics and related topics / [ed] Bongiorno, E.G., Salinelli, E., Goia, A., Vieu, P., Esculapio, 2014. Conference paper (Refereed)

We present a procedure for performing an ANOVA test on functional data, including pairwise group comparisons, in a Scheffé-like perspective. The test is based on the Interval Testing Procedure, and it selects the intervals where the groups significantly differ. The procedure is applied to the 3D kinematic motion of the knee joint collected during a functional task (one-leg hop) performed by three groups of individuals.

• 7.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Multivariate piecewise linear interpolation of a random field. 2011. Manuscript (preprint) (Other academic)

We consider a multivariate piecewise linear interpolation of a continuous random field on a d-dimensional cube. The approximation performance is measured by the integrated mean square error. The multivariate piecewise linear interpolator is defined by N field observations on a location grid (or design). We investigate the class of locally stationary random fields whose local behavior is like a fractional Brownian field in the mean square sense and find the asymptotic approximation accuracy for a sequence of designs for large N. Moreover, for certain classes of continuous and continuously differentiable fields we provide an upper bound for the approximation accuracy in the uniform mean square norm.

• 8.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
On the error of the Monte Carlo pricing method for Asian option. 2008. In: Journal of Numerical and Applied Mathematics, ISSN 0868-6912, Vol. 96, no 1, pp. 1–10. Article in journal (Refereed)

We consider a Monte Carlo method to price a continuous arithmetic Asian option with a given precision. Piecewise constant approximation and plain simulation are used for a wide class of models based on Lévy processes. We give bounds on the possible discretization and simulation errors. The numbers of discretization points and simulations sufficient to obtain the requested accuracy are derived. To demonstrate the general approach, the Black-Scholes model is studied in more detail. We consider the case of continuous averaging and starting time zero, but the obtained results can be applied to the discrete case and generalized to any time before the execution date. Some numerical experiments and a comparison with a PDE-based method are also presented.
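
A plain-simulation estimator of the kind analysed here can be sketched for the Black-Scholes case: the continuous average is replaced by a piecewise constant (left-endpoint) approximation on a time grid. The parameter choices are illustrative assumptions, not the paper's code:

```python
import math
import random

def asian_call_mc(s0, K, r, sigma, T, n_steps, n_paths, seed=0):
    """Plain Monte Carlo price of an arithmetic Asian call under
    Black-Scholes, with the continuous average approximated piecewise
    constantly (left endpoints) on an n_steps grid."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, avg = s0, 0.0
        for _ in range(n_steps):
            avg += s * dt                      # left-endpoint rectangle rule
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(avg / T - K, 0.0)         # arithmetic-average payoff
    return math.exp(-r * T) * total / n_paths
```

Both error sources the paper bounds appear directly: the grid size n_steps controls the discretization error, and n_paths controls the simulation (Monte Carlo) error.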

• 9.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Piecewise multilinear interpolation of a random field. 2013. In: Advances in Applied Probability, ISSN 0001-8678, E-ISSN 1475-6064, Vol. 45, no 4, pp. 945–959. Article in journal (Refereed)

We consider a piecewise-multilinear interpolation of a continuous random field on a d-dimensional cube. The approximation performance is measured using the integrated mean square error. The piecewise-multilinear interpolator is defined by N field observations on a location grid (or design). We investigate the class of locally stationary random fields whose local behavior is like a fractional Brownian field, in the mean square sense, and find the asymptotic approximation accuracy for a sequence of designs for large N. Moreover, for certain classes of continuous and continuously differentiable fields, we provide an upper bound for the approximation accuracy in the uniform mean square norm.
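
The setup can be illustrated in one dimension (d = 1, standard Brownian motion, i.e. fractional index H = 1/2): interpolate a simulated path from equidistant knots and estimate the integrated mean square error empirically. All names and parameters below are illustrative assumptions:

```python
import math
import random

def bm_path(n_fine, rng):
    """Standard Brownian motion sampled at n_fine+1 equidistant points on [0,1]."""
    dt = 1.0 / n_fine
    w = [0.0]
    for _ in range(n_fine):
        w.append(w[-1] + math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return w

def interp_imse(n_knots, n_fine=512, n_reps=200, seed=1):
    """Empirical integrated mean square error of piecewise linear
    interpolation of Brownian motion from n_knots+1 equidistant knots."""
    rng = random.Random(seed)
    step = n_fine // n_knots
    total = 0.0
    for _ in range(n_reps):
        w = bm_path(n_fine, rng)
        for i in range(n_fine + 1):
            j = min(i // step, n_knots - 1)           # knot interval index
            a, b = j * step, (j + 1) * step
            t = (i - a) / step
            approx = (1 - t) * w[a] + t * w[b]        # linear interpolant
            total += (w[i] - approx) ** 2
    return total / (n_reps * (n_fine + 1))
```

For Brownian motion the theoretical IMSE of this interpolator is h/6 for knot spacing h, so the empirical error should shrink roughly in proportion to 1/N as the number of knots N grows.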

• 10.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Stratified Monte Carlo quadrature for continuous random fields. 2015. In: Methodology and Computing in Applied Probability, ISSN 1387-5841, E-ISSN 1573-7713, Vol. 17, no 1, pp. 59–72. Article in journal (Refereed)

We consider the problem of numerical approximation of integrals of random fields over a unit hypercube. We use a stratified Monte Carlo quadrature and measure the approximation performance by the mean squared error. The quadrature is defined by a finite number of stratified randomly chosen observations, with the partition generated by a rectangular grid (or design). We study the class of locally stationary random fields whose local behavior is like a fractional Brownian field in the mean square sense and find the asymptotic approximation accuracy for a sequence of designs as the number of observations grows. For the Hölder class of random functions, we provide an upper bound for the approximation error. Additionally, for a certain class of isotropic random functions with an isolated singularity at the origin, we construct a sequence of designs eliminating the effect of the singularity point.
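
A minimal sketch of the stratified quadrature, here for a deterministic integrand on the unit square rather than a random field, showing the variance reduction relative to plain Monte Carlo at equal sample size (all names are illustrative):

```python
import random

def plain_mc(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over the unit square."""
    return sum(f(rng.random(), rng.random()) for _ in range(n)) / n

def stratified_mc(f, m, rng):
    """Stratified estimate: one uniform point in each cell of an m x m grid."""
    total = 0.0
    for i in range(m):
        for j in range(m):
            total += f((i + rng.random()) / m, (j + rng.random()) / m)
    return total / (m * m)

def compare_variances(f, m=8, reps=200, seed=0):
    """Empirical variances of the two quadratures at equal sample size m*m."""
    rng = random.Random(seed)
    plain = [plain_mc(f, m * m, rng) for _ in range(reps)]
    strat = [stratified_mc(f, m, rng) for _ in range(reps)]
    var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)
    return var(plain), var(strat)
```

For a smooth integrand such as f(x, y) = xy, stratification removes the between-cell variation, leaving a much smaller within-cell variance.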

• 11.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics. Politecnico di Milano, Italy. Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics. Politecnico di Milano, Italy. Oslo University, Norway.
Clustering misaligned dependent curves applied to varved lake sediment for climate reconstruction. 2017. In: Stochastic Environmental Research and Risk Assessment (Print), ISSN 1436-3240, E-ISSN 1436-3259, Vol. 31, no 1, pp. 71–85. Article in journal (Refereed)

In this paper we introduce a novel functional clustering method, the Bagging Voronoi K-Medoid Alignment (BVKMA) algorithm, which simultaneously clusters and aligns spatially dependent curves. It is a nonparametric statistical method that does not rely on distributional or dependency structure assumptions. The method is motivated by and applied to varved (annually laminated) sediment data from lake Kassjön in northern Sweden, aiming to infer on past environmental and climate changes. The resulting clusters and their time dynamics show great potential for seasonal climate interpretation, in particular for winter climate changes.

• 12.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
Skattning av kausala effekter med matchat fall-kontroll data. 2017. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
• 13.
Institute of Mathematics of the Polish Academy of Sciences, Warsaw, Poland.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
The boundary Harnack inequality for variable exponent p-Laplacian, Carleson estimates, barrier functions and p(⋅)-harmonic measures. 2016. In: Annali di Matematica Pura ed Applicata, ISSN 0373-3114, E-ISSN 1618-1891, Vol. 195, no 2, pp. 623–658. Article in journal (Refereed)

We investigate various boundary decay estimates for p(⋅)-harmonic functions. For domains in R^n, n ≥ 2, satisfying the ball condition (C^{1,1}-domains), we show the boundary Harnack inequality for p(⋅)-harmonic functions under the assumption that the variable exponent p is a bounded Lipschitz function. The proof involves barrier functions and chaining arguments. Moreover, we prove a Carleson-type estimate for p(⋅)-harmonic functions in NTA domains in R^n and provide lower and upper growth estimates and a doubling property for a p(⋅)-harmonic measure.

• 14.
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N). Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
Distributed one-stage Hessenberg-triangular reduction with wavefront scheduling. 2018. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 40, no 2, pp. C157–C180. Article in journal (Refereed)

A novel parallel formulation of Hessenberg-triangular reduction of a regular matrix pair on distributed memory computers is presented. The formulation is based on a sequential cache-blocked algorithm by Kågström et al. [BIT, 48 (2008), pp. 563–584]. A static scheduling algorithm is proposed that addresses the problem of underutilized processes caused by two-sided updates of matrix pairs based on sequences of rotations. Experiments using up to 961 processes demonstrate that the new formulation is an improvement of the state of the art and also identify factors that limit its scalability.

• 15.
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N). Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
Distributed one-stage Hessenberg-triangular reduction with wavefront scheduling. 2016. Report (Other academic)

A novel parallel formulation of Hessenberg-triangular reduction of a regular matrix pair on distributed memory computers is presented. The formulation is based on a sequential cache-blocked algorithm by Kågström, Kressner, E.S. Quintana-Ortí, and G. Quintana-Ortí (2008). A static scheduling algorithm is proposed that addresses the problem of underutilized processes caused by two-sided updates of matrix pairs based on sequences of rotations. Experiments using up to 961 processes demonstrate that the new algorithm is an improvement of the state of the art but also identify factors that currently limit its scalability.

• 16.
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
A parallel QZ algorithm for distributed memory HPC systems. 2014. In: SIAM Journal on Scientific Computing, ISSN 1064-8275, E-ISSN 1095-7197, Vol. 36, no 5, pp. C480–C503. Article in journal (Refereed)

Appearing frequently in applications, generalized eigenvalue problems represent one of the core problems in numerical linear algebra. The QZ algorithm of Moler and Stewart is the most widely used algorithm for addressing such problems. Despite its importance, little attention has been paid to the parallelization of the QZ algorithm. The purpose of this work is to fill this gap. We propose a parallelization of the QZ algorithm that incorporates all modern ingredients of dense eigensolvers, such as multishift and aggressive early deflation techniques. To deal with (possibly many) infinite eigenvalues, a new parallel deflation strategy is developed. Numerical experiments for several random and application examples demonstrate the effectiveness of our algorithm on two different distributed memory HPC systems.
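
The generalized eigenvalue problem det(A − λB) = 0 that the QZ algorithm addresses can be made concrete in the 2×2 case, where the characteristic polynomial is an explicit quadratic. This sketch only illustrates the problem (including the infinite eigenvalues that arise when B is singular); it is not the QZ algorithm itself, which instead reduces (A, B) to generalized Schur form:

```python
import math

def gen_eigs_2x2(A, B):
    """Generalized eigenvalues of a 2x2 pair (A, B): roots of
    det(A - lam*B) = c2*lam^2 + c1*lam + c0 = 0."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    c2 = b11 * b22 - b12 * b21                           # det B
    c1 = -(a11 * b22 + a22 * b11) + a12 * b21 + a21 * b12
    c0 = a11 * a22 - a12 * a21                           # det A
    if c2 == 0:                                          # singular B:
        return [float("inf"), -c0 / c1] if c1 else [float("inf")]
    disc = c1 * c1 - 4 * c2 * c0
    if disc < 0:                                         # complex conjugate pair
        re, im = -c1 / (2 * c2), math.sqrt(-disc) / (2 * c2)
        return [complex(re, im), complex(re, -im)]
    r = math.sqrt(disc)
    return sorted([(-c1 - r) / (2 * c2), (-c1 + r) / (2 * c2)])
```

When B is singular the pencil has an infinite eigenvalue — precisely the case handled by the paper's new parallel deflation strategy.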

• 17.
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N).
Umeå University, Faculty of Science and Technology, Department of Computing Science. Umeå University, Faculty of Science and Technology, High Performance Computing Center North (HPC2N). SB–MATHICSE–ANCHP, EPF Lausanne.

Given a general matrix pair (A,B) with real entries, we provide software routines for computing a generalized Schur decomposition (S, T). The real and complex conjugate pairs of eigenvalues appear as 1×1 and 2×2 blocks, respectively, along the diagonals of (S, T) and can be reordered in any order. Typically, this functionality is used to compute orthogonal bases for a pair of deflating subspaces corresponding to a selected set of eigenvalues. The routines are written in Fortran 90 and target distributed memory machines.

• 18.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Predictive Modeling of Emissions: Heavy Duty Vehicles. 2016. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
• 19.
Umeå University, Faculty of Science and Technology, Department of Physics.
A deformable terrain model in multi-domain dynamics using elastoplastic constraints: An adaptive approach. 2015. Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis

Achieving realistic simulations of terrain vehicles in their work environment requires not only a careful model of the vehicle itself; the vehicle's interactions with the surroundings are equally important. For off-road ground vehicles, the terrain heavily affects the behaviour of the vehicle and thus puts great demands on the terrain model.

The purpose of this project has been to develop and evaluate a deformable terrain model, meant to be used in real-time simulations with multi-body dynamics. The proposed approach is a modification of an existing elastoplastic model based on linear elasticity theory and a capped Drucker-Prager model, using it in an adaptive way. The original model can be seen as a system of rigid bodies connected by elastoplastic constraints, representing the terrain. This project investigates if it is possible to create dynamic bodies just when it is absolutely necessary, and store information about possible deformations in a grid.

Two methods for transferring information between the dynamic bodies and the grid have been evaluated: an interpolating approach and a discrete approach. The test results indicate that the interpolating approach is preferable, with better stability at an equal performance cost. However, stability problems still exist that have to be solved if the model is to be useful in a commercial product.

• 20.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Analys av risker med garantinivåer i förhållande till förväntade utbetalningar och portföljavkastningar för traditionella pensionsförsäkringar: Ett examensarbete för Folksam Liv med dotterbolag. 2017. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
• 21.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Partiformning vid intern materialförsörjning och layoutanpassning av lager: En fallstudie vid GE Healthcare Umeå av två-binge, supermarkets och materialspindlar. 2014. Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis

In implementing Lean, GE Healthcare (GEHC) Umeå has made changes to its warehouse structure that need better adaptation. GEHC Umeå's new warehouse structure consists of open supermarkets with changed supply routines from warehouse to production. A two-bin system has been implemented in which signal containers of material are refilled by warehouse-responsible material handlers ("materialspindlar").

The first identified problem and research question concerns the two-bin quantities, which determine the number of articles at the assembly stations. These need to be reviewed, and a routine for determining the quantities needs to be established. The second, independent research question concerns the supermarkets (warehouses) and their material handlers, which are many in number and spread out with limited coordination.

A previous master's thesis, literature studies, interviews and on-site observations have been used to describe the current state through both qualitative and quantitative methods. Because similar problems are scarce in the literature, external lot-sizing methods and inventory control have been used, complemented with a simulation of the two-bin system, as part of answering the first research question. For the second research question, parts of simplified systematic layout planning were used; among other things, different degrees of centralization were examined by simulating material transports for different article placements.

Today the quantities are set from forecast usage based on personal experience. Coordination between the material handlers is lacking and their utilization is perceived as uneven, while goods receiving would benefit from increased capacity. Standardized material-handling processes are missing, and the production groups have different working methods that are assumed to benefit from a centralization where common routines can more easily be established.

The historical transactions show that there is room for improvement, as some articles generate long transport distances because of their storage locations relative to where they are used in production. The new bin quantities from the lot-sizing methods EOQ, m-EOQ and the Kanban formula have been tested in a simulation of replenishment and material consumption implemented in Excel VBA.

The Kanban formula shows the highest service level, 90%, at the lowest total cost and with reduced capital tied up in inventory. The Kanban quantities reduce the total cost by 20%. The number of replenishments would increase by 7% and the number of articles in production would decrease by 59%. For the layout adaptation, simulations of different order picking and article placements were also carried out. The results show that a degree of centralization is possible with a small increase in material transports. It also emerges that articles that are picked very rarely are calculated to occupy 89 of the 230 shelf racks and should be reviewed. This, together with the requirement specification from the analysis, has helped to generate different concepts.

GEHC Umeå should use the Kanban formula in the future to determine the bin quantities. Some adaptations should be made for common articles in the Comm warehouse and for articles without historical demand. For the layout, GEHC Umeå should first and foremost relocate articles that currently cause unnecessary transports. In the longer term, an increased degree of warehouse centralization should be possible, given the benefits of coordination and the informal spreading of working routines. The material handlers should assist goods receiving and take part in shortage reporting and improvement work. In addition, opportunities for increased cooperation between material planning, production planning and the material handlers should be investigated.
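
The thesis's exact Kanban formula is not reproduced in the abstract; as an illustration of the lot-sizing family it is compared against, the classical economic order quantity (EOQ) can be sketched as follows (the numbers are illustrative, not the case-study data):

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Classical economic order quantity: the lot size Q minimizing
    total cost D*S/Q + H*Q/2 (ordering plus holding cost)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def total_cost(q, annual_demand, order_cost, holding_cost):
    """Annual ordering-plus-holding cost at lot size q."""
    return annual_demand * order_cost / q + holding_cost * q / 2
```

At the EOQ the two cost components balance, so the total cost is lower than at either half or double the lot size.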

• 22. Akbari, Saieed
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
On 1-sum flows in undirected graphs. 2016. In: The Electronic Journal of Linear Algebra, ISSN 1537-9582, E-ISSN 1081-3810, Vol. 31, pp. 646–665. Article in journal (Refereed)

Let G = (V, E) be a simple undirected graph. For a given set L ⊆ R, a function ω: E → L is called an L-flow. Given a vector γ ∈ R^V, ω is a γ-L-flow if for each v ∈ V, the sum of the values on the edges incident to v is γ(v). If γ(v) = c for all v ∈ V, then the γ-L-flow is called a c-sum L-flow. In this paper, the existence of γ-L-flows for various choices of sets L of real numbers is studied, with an emphasis on 1-sum flows. Let L be a subset of real numbers containing 0 and denote L* := L \ {0}. Answering a question from [S. Akbari, M. Kano, and S. Zare. A generalization of 0-sum flows in graphs. Linear Algebra Appl., 438:3629–3634, 2013.], the bipartite graphs which admit a 1-sum R*-flow or a 1-sum Z*-flow are characterized. It is also shown that every k-regular graph, with k either odd or congruent to 2 modulo 4, admits a 1-sum {−1, 0, 1}-flow.
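
For small graphs, the existence of a c-sum L-flow can be checked by exhaustive search, which makes the definitions concrete (an illustrative sketch, not the paper's proof technique):

```python
from itertools import product

def find_c_sum_flow(vertices, edges, values, c):
    """Search for an edge labelling omega: E -> values such that at every
    vertex the incident edge values sum to c; None if no such flow exists."""
    vertices = list(vertices)
    for omega in product(values, repeat=len(edges)):
        sums = {v: 0 for v in vertices}
        for w, (u, v) in zip(omega, edges):
            sums[u] += w
            sums[v] += w
        if all(sums[v] == c for v in vertices):
            return dict(zip(edges, omega))
    return None
```

The 4-cycle is 2-regular with 2 ≡ 2 (mod 4), so by the theorem above it admits a 1-sum {−1, 0, 1}-flow (e.g. alternating 1, 0, 1, 0), while a parity argument shows it has no 1-sum {−1, 1}-flow.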

• 23.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
A Comparison of Tests for Ordered Alternatives With Application in Medicine. 1997. Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis

A situation frequently encountered in medical studies is the comparison of several treatments with a control. The problem is to determine whether or not a test drug has a desirable medical effect and/or to identify the minimum effective dose. In this Bachelor’s thesis, some of the methods used for testing hypotheses of ordered alternatives are reviewed and compared with respect to the power of the tests. Examples of multiple comparison procedures, maximum likelihood procedures, rank tests and different types of contrasts are presented and the properties of the methods are explored.

Depending on the degree of knowledge about the dose-response relationship, the aim of the study, and whether the test is parametric or non-parametric and distribution-free or not, different recommendations are given as to which of the tests should be used. Thus, there is no single test which can be applied in all experimental situations for testing all different alternative hypotheses.
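
As a concrete example of a rank test for ordered alternatives of the kind reviewed here, the Jonckheere-Terpstra statistic can be computed directly from its definition (an illustrative sketch; the thesis compares several such tests):

```python
def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra statistic for the ordered alternative
    mu_1 <= mu_2 <= ... <= mu_k (with at least one strict inequality):
    the sum over all pairs of groups i < j of Mann-Whitney counts
    #{x in group i, y in group j : x < y}, counting ties as 1/2.
    Large values support an increasing trend across the groups."""
    stat = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for x in groups[i]:
                for y in groups[j]:
                    stat += 1.0 if x < y else (0.5 if x == y else 0.0)
    return stat
```

With three groups of two observations each there are 12 cross-group pairs, so the statistic ranges from 0 (strictly decreasing) to 12 (strictly increasing).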

• 24.
Department of Mathematics, Luleå University of Technology.
Department of Mathematics, Luleå University of Technology.
Elliptical safety region plots for Cpk. 2011. In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 38, no 6, pp. 1169–1187. Article in journal (Refereed)

The process capability index Cpk is widely used when measuring the capability of a manufacturing process. A process is defined to be capable if the capability index exceeds a stated threshold value, e.g. Cpk > 4/3. This inequality can be expressed graphically using a process capability plot, which is a plot in the plane defined by the process mean and the process standard deviation, showing the region for a capable process. In the process capability plot, a safety region can be plotted to obtain a simple graphical decision rule to assess process capability at a given significance level. We consider safety regions to be used for the index Cpk. Under the assumption of normality, we derive elliptical safety regions so that, using a random sample, conclusions about the process capability can be drawn at a given significance level. This simple graphical tool is helpful when trying to understand whether it is the variability, the deviation from target, or both that need to be reduced to improve the capability. Furthermore, using safety regions, several characteristics with different specification limits and different sample sizes can be monitored in the same plot. The proposed graphical decision rule is also investigated with respect to power.
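
The index itself is elementary to compute from the process mean, standard deviation and specification limits; a minimal sketch (the elliptical safety regions derived in the paper are not reproduced here):

```python
def cpk(mean, std, lsl, usl):
    """Process capability index Cpk: the distance from the process mean to
    the nearer specification limit, in units of three standard deviations."""
    return min(usl - mean, mean - lsl) / (3 * std)

def capable(mean, std, lsl, usl, threshold=4 / 3):
    """Point-estimate decision rule: capable if Cpk exceeds the threshold."""
    return cpk(mean, std, lsl, usl) > threshold
```

A process centered in the specification interval with six standard deviations to each limit has Cpk = 2; shifting the mean toward a limit lowers the index even if the variability is unchanged.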

• 25.
Umeå University, Faculty of Social Sciences, Department of applied educational science.
Is This Reliable Enough?: Examining Classification Consistency and Accuracy in a Criterion-Referenced Test2016In: International journal of assessment tools in education, ISSN 2148-7456, Vol. 3, no 2, p. 137-150Article in journal (Refereed)

One important step in assessing the quality of a test is to examine the reliability of test score interpretation. Which aspect of reliability is the most relevant depends on what type of test it is and how the scores are to be used. For criterion-referenced tests, and in particular certification tests, where students are classified into performance categories, the primary focus need not be on the size of the error but on the impact of this error on classification. This impact can be described in terms of classification consistency and classification accuracy. In this article, selected methods from classical test theory for estimating classification consistency and classification accuracy were applied to the theory part of the Swedish driving licence test, a high-stakes criterion-referenced test that is rarely studied in terms of reliability of classification. The results for this particular test indicated a level of classification consistency that falls slightly short of the recommended level, which is why lengthening the test should be considered. More evidence should also be gathered as to whether the placement of the cut-off score is appropriate, since this has implications for the validity of the classifications.
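One simple way to see what classification consistency measures is a split-half sketch on synthetic response data; the classical-test-theory estimators used in the article are more refined than this illustration, and all numbers below are hypothetical:

```python
import numpy as np

# Split a test into two halves, classify pass/fail on each half against a
# proportionally scaled cut-off, and measure the agreement between halves.
rng = np.random.default_rng(7)
n_items, n_persons, cut_fraction = 60, 500, 0.7

# Each person answers each item correctly with probability equal to
# their (synthetic) ability.
ability = rng.uniform(0.4, 0.95, size=n_persons)
responses = rng.random((n_persons, n_items)) < ability[:, None]

half1, half2 = responses[:, ::2], responses[:, 1::2]
cut1 = cut_fraction * half1.shape[1]
cut2 = cut_fraction * half2.shape[1]

pass1 = half1.sum(axis=1) >= cut1
pass2 = half2.sum(axis=1) >= cut2
consistency = np.mean(pass1 == pass2)
print(f"split-half classification consistency: {consistency:.2f}")
```

Consistency close to 1 means the pass/fail decision barely depends on which half of the test a candidate happens to get; values near the cut-off are where disagreement concentrates.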

• 26.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Where to Stack the Chocolate?: Mapping and Optimisation of the Storage Locations with Associated Transportation Cost at Marabou2017Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis

Today, inventory management at Marabou is organised in such a way that articles are stored based on which production line they belong to and are sent to storage locations close to that line. However, some storage locations are not optimised, insofar as articles are stored out of pure habit and according to what is considered most convenient. This means that the storage locations are not based on any fixed instructions or standard. In this report, we propose optimal storage locations with respect to transportation cost by modelling the problem mathematically as a minimal-cost matching problem, which we solve using the so-called Hungarian algorithm. To be able to implement the Hungarian algorithm, we collected data on the stock levels of articles in the factory throughout 2016 and adjusted the collected data by converting the articles into units of pallets. We considered three different implementations of the Hungarian algorithm. The results from the different approaches are presented together with several suggestions regarding pallet optimisation. In addition to the theoretical background, our work is based on an empirical study through participant observations as well as qualitative interviews with factory employees. In addition to our modelling work, we thus offer several further suggestions for efficiency savings or improvements at the factory, as well as for further work building on this report.
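The minimal-cost matching step described above can be sketched with a standard Hungarian-style solver; the cost matrix below is purely hypothetical and is not the thesis's data:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical transport costs (e.g. metres of forklift travel) from
# 4 production lines (rows) to 4 candidate storage locations (columns).
cost = np.array([
    [12,  7, 30, 25],
    [18, 22,  9, 14],
    [10, 15, 20,  8],
    [24, 11, 16, 19],
])

# linear_sum_assignment solves the minimal-cost matching problem
# (the same problem the Hungarian algorithm solves).
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()
for line, loc in zip(rows, cols):
    print(f"line {line} -> storage location {loc}")
print(f"total cost: {total}")
```

Each production line is matched to exactly one storage location so that the summed transport cost is minimised.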

• 27.
Umeå University, Faculty of Social Sciences, Department of Statistics.
Kalibrering som ett sätt att hantera bortfall: Vilken korrelation krävs mellan hjälp- och responsvariabler?2010Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
• 28.
Umeå University, Faculty of Social Sciences, Department of Statistics.
Consequences of near-unfaithfulness in a finite sample: a simulation study2010Independent thesis Advanced level (degree of Master (Two Years)), 10 credits / 15 HE creditsStudent thesis
• 29.
Uppsala universitet.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
kequate: The kernel method of test equating. R package version 1.1.02012Other (Other academic)

Implements the kernel method of test equating using the CB, EG, SG, NEAT CE/PSE and NEC designs, supporting Gaussian, logistic and uniform kernels and unsmoothed and pre-smoothed input data.

• 30.
Uppsala universitet.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
Performing the Kernel Method of Test Equating with the Package kequate2013In: Journal of Statistical Software, ISSN 1548-7660, E-ISSN 1548-7660, Vol. 55, no 6, p. 1-25Article in journal (Refereed)

In standardized testing it is important to equate tests in order to ensure that the test takers, regardless of the test version given, obtain a fair test. Recently, the kernel method of test equating, which is a conjoint framework of test equating, has gained popularity. The kernel method of test equating includes five steps: (1) pre-smoothing, (2) estimation of the score probabilities, (3) continuization, (4) equating, and (5) computing the standard error of equating and the standard error of equating difference. Here, an implementation has been made for six different equating designs: equivalent groups, single group, counterbalanced, non-equivalent groups with anchor test using either chain equating or post-stratification equating, and non-equivalent groups using covariates. An R package for the kernel method of test equating called kequate is presented. Also included in the package are diagnostic tools aiding in the search for a proper log-linear model in the pre-smoothing step, for use in conjunction with the R function glm.
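A simplified sketch of the continuization step (3): the discrete score distribution is smoothed with a Gaussian kernel. The variance-preserving linear adjustment of the full kernel method is omitted here, so this is a toy version of the idea, not kequate's implementation:

```python
import numpy as np
from scipy.stats import norm

def continuize(x, scores, probs, h):
    """Continuized CDF of a discrete score distribution, obtained by
    placing a Gaussian kernel of bandwidth h on each score point.
    (Simplified: the full kernel method also rescales to preserve
    the variance of the discrete distribution.)"""
    return float(np.sum(probs * norm.cdf((x - scores) / h)))

scores = np.arange(0, 11)        # possible test scores 0..10
probs = np.full(11, 1.0 / 11)    # toy uniform score probabilities
h = 0.6                          # kernel bandwidth

print(round(continuize(5.0, scores, probs, h), 3))
```

The continuized CDF is smooth and strictly increasing, which is what makes the equating function (step 4) well defined as a composition of one CDF with the inverse of another.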

• 31.
Statistiska institutionen, Uppsala universitet.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
Sensitivity analysis of violations of the faithfulness assumption2014In: Journal of Statistical Computation and Simulation, ISSN 0094-9655, E-ISSN 1563-5163, Vol. 84, no 7, p. 1608-1620Article in journal (Other academic)

We study the implications of violations of the faithfulness condition, due to parameter cancellations, on estimation of the DAG skeleton. Three settings are investigated: (i) faithfulness is guaranteed, (ii) faithfulness is not guaranteed, and (iii) the parameter distributions are concentrated around unfaithfulness (near-unfaithfulness). In a simulation study the effects of the different settings are compared using the PC and MMPC algorithms. The results show that the performance in the faithful case is almost unchanged compared to the unrestricted case, whereas there is a general decrease in performance under the near-unfaithful case as compared to the unrestricted case. The response to near-unfaithful parameterisations is similar between the two algorithms, with the MMPC algorithm having higher true positive rates and the PC algorithm having lower false positive rates.

• 32.
Beijing Normal University.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
Item response theory observed-score kernel equating2017In: Psychometrika, ISSN 0033-3123, E-ISSN 1860-0980, Vol. 82, no 1, p. 48-66Article in journal (Refereed)

Item response theory (IRT) observed-score kernel equating is introduced for the non-equivalent groups with anchor test equating design using either chain equating or post-stratification equating. The equating function is treated in a multivariate setting and the asymptotic covariance matrices of IRT observed-score kernel equating functions are derived. Equating is conducted using the two-parameter and three-parameter logistic models with simulated data and data from a standardized achievement test. The results show that IRT observed-score kernel equating offers small standard errors and low equating bias under most settings considered.

• 33.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
A Risk and Capital Requirement Model for Life Insurance Portfolios2008Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis

The capital requirements for insurance companies in the Solvency I framework are based on premium and claim expenditure. This approach does not take the individual risk of the insurer into consideration and gives policy holders little assurance. Therefore a framework called Solvency II is under development by the EU and its members. The capital requirements in Solvency II are based on risk management and are related to the specific risks of the insurer. Moreover, the insurer must make disclosures both to the supervising authority and to the market. This puts pressure on the insurance companies to use better risk and capital management, which gives the policy holders better assurance.

In this thesis we present a stochastic model that describes the development of assets and liabilities. We consider the following risks: stock market, bond market, interest rate and mortality intensity. These risks are modelled by stochastic processes that are aggregated to describe the change in the insurer's Risk Bearing Capital. The capital requirement, the Solvency Capital Requirement, is calculated using Conditional Value-at-Risk at a 99% confidence level and Monte Carlo simulation. The results from this model are compared to the Swiss Solvency Test model for three different types of life insurance policies. We can conclude that for large portfolios, the model presented in this thesis gives a lower solvency capital requirement than the Swiss model for all three policies. For small portfolios, the capital requirement is larger due to the stochastic mortality risk, which is not included in the Swiss model.
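The capital-requirement calculation can be sketched as follows; the loss distribution here is a stand-in standard normal sample, not the thesis's aggregated asset-liability model:

```python
import numpy as np

def cvar(losses, alpha=0.99):
    """Conditional Value-at-Risk at level alpha: the mean loss in the
    tail beyond the alpha-quantile (Value-at-Risk) of the loss
    distribution."""
    var = np.quantile(losses, alpha)
    tail = losses[losses >= var]
    return tail.mean()

# Hypothetical simulated one-year losses in Risk Bearing Capital
# (positive = loss); a plain normal is used purely for illustration.
rng = np.random.default_rng(42)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)

scr = cvar(losses, alpha=0.99)
print(f"capital requirement (CVaR 99%): {scr:.2f}")
```

Because CVaR averages over the whole tail rather than reading off a single quantile, it is sensitive to how heavy the simulated tail is, which is why the choice of risk processes matters for the resulting capital requirement.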

• 34.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Modellering av säkringsstrategier för en elförsäljningsportfölj2015Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis

Because of the high volatility of the electricity spot price, there is a need to hedge the sales price of the production. The electricity spot price is volatile and is affected by climate, producer supply and political decisions. This means that the revenues from the power activities can vary a lot from one year to another. It is especially important that the revenues from the power activities can be budgeted reliably, since they are included in the total budget of Umeå municipality. In order to evaluate possible investment strategies, the production is modelled together with the electricity spot and futures prices of different maturities. The modelling of production and electricity spot prices is based on a general seasonal block bootstrap method. Furthermore, two different assumptions are made about the relationship between spot prices and futures prices. The first, empirical, model is based on the assumption that there exists a mismatch between spot prices and futures prices, and historical differences are used to quantify it. The second model is based on the assumption that the futures prices equal the expected future spot prices and that these are consistent.

The municipality’s current strategy is to hedge 300 GWh of the annual production in futures with three different maturities and sell the remainder of the production at the spot price. This strategy can be seen as an average of four electricity prices and therefore reduces the risk of a mismatch between the futures and spot prices.

The empirical study shows that, historically, it has been most profitable to invest in futures with a maturity of three years. This is because the historical differences between futures prices and electricity spot prices have been largest for this maturity, which gives the highest expected sales profit in the model. Furthermore, the study shows that it has been more profitable to invest in futures than to sell at the spot price. Whether this will continue into the future is uncertain, due to the nature of the futures contracts and their pricing. Finally, the study also shows which investment strategies have been most profitable in a so-called backtest.

• 35.
Umeå University, Faculty of Science and Technology, Department of Physics.
Classification of spectral signatures in biological aerosols2013Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis

In this thesis, multivariate methods were used to evaluate pretreatment methods, such as normalization, as well as the classification possibilities of data collected with Laser-Induced Breakdown Spectroscopy (LIBS). The LIBS system that FOI is currently developing for the purpose of classifying biological airborne threats was used to collect data from ten different samples in a laboratory environment. Principal component analysis (PCA) shows that it is possible to observe differences between samples using the two types of data acquired from the LIBS system, i.e., 2D CCD camera images and 1D spectra extracted from the images. Further results using partial least squares discriminant analysis (PLS-DA) show that normalization of the data only has visual effects in the PCA score plots and does not affect the model's predictive ability. Results also show that cropping and binning the pixels in the image is possible to some extent without losing significant predictive ability.

• 36.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Effects of Physiological Variations2006Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis

Heart ischemia, the precursor to an infarction, is one of the most common diseases in the Western world. Today, the electrocardiogram (ECG) is the most widely used tool to diagnose the disease. However, it often fails to detect the ischemia or to give an adequate picture of its size and location.

Therefore, the potential of increasing knowledge through mathematical models is very high. In this thesis the bidomain model is used to describe the electrical activity in the heart and body, with ischemia incorporated into the model. To solve the equations set up by the bidomain model, the finite element method is used. Different physiological variations have been made to the body; these include changing the location of the heart and varying the conductivities in the body. The solution to the equations is then studied at the body surface. The main question asked is whether it is possible to detect the location and size of different types of ischemia by analyzing the solution.

The methods used for this have been Singular Value Decomposition and supervised learning. The different vectors obtained from the decomposition are used to distinguish the location and size of the ischemia for different physiological variations.

The results show that it is possible to distinguish the location of the ischemia, but that it will probably be more difficult to find the correct size, since the change in size is harder to separate from other physiological variations, such as the conductivity of the body.

Although relatively simple methods have been used, they indicate that, with further development, they can be used for the purpose of detecting the different types of ischemia.

• 37.
Umeå University, Faculty of Science and Technology, Department of Science and Mathematics Education.
Hur finner vi elever i behov av särskilt stöd i matematik i årskurs 1-3?2015Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis

A significant factor in preventing or reducing pupils' difficulties in mathematics is that the difficulties are discovered early. It is important to know which signals should be regarded as deviating and as requiring a special assessment (Butterworth, 2011; Malmer, 2006).

The purpose of this report is to examine and analyse, through their own descriptions, the strategies that teachers, special-education teachers and special-needs educators use to identify pupils with special educational needs in mathematics (SUM pupils) in grades 1-3. The study has a qualitative approach with some quantitative elements. The methods were semi-structured interviews with special-needs educators and special-education teachers, and questionnaires with many open questions addressed to the teachers.

The survey shows, among other things, that the educators often use various screening materials to discover SUM pupils. They also discover SUM pupils in the daily work and in various forms of dialogue with other educators, pupils and guardians. Screening materials are used in different ways: at some schools screening is carried out regularly according to a plan, while at others it is used as needed. Furthermore, how the special-needs educators and special-education teachers use their time to find SUM pupils varies. Some consider it valuable to observe and be present in the classroom, while others find it more effective to supervise staff and work individually with pupils in order to discover pupils in need of special support in mathematics.

The survey also revealed that the educators are not consistent in their views on how these are grounded in the perspectives of special education. The various ideas the educators express can be said to reflect the whole range of special-education perspectives.

In conclusion, the educators consider teachers' qualifications and competence to be the most decisive factor in whether SUM pupils are discovered. Class size also has some bearing on the possibility of discovering SUM pupils. In addition, the special-education teachers and special-needs educators consider good screening material and experience to be important for discovering SUM pupils. It also emerged that the educators felt there was a lack of clear leadership and engagement from the principal in the work of discovering SUM pupils.

Keywords: dialogue, screening, special-education perspectives, SUM pupil, special educational needs in mathematics.

• 38.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Explicit representation of membership in polynomial ideals2011In: Mathematische Annalen, ISSN 0025-5831, E-ISSN 1432-1807, Vol. 349, no 2, p. 345-365Article in journal (Refereed)

We introduce a new division formula on projective space which provides explicit solutions to various polynomial division problems with sharp degree estimates. We consider simple examples such as the classical Macaulay theorem, as well as a quite recent result by Hickel related to the effective Nullstellensatz. We also obtain a related result that generalizes Max Noether's classical AF + BG theorem.

• 39.
Umeå University, Faculty of Science and Technology, Department of Physics.
Regression-Based Monte Carlo For Pricing High-Dimensional American-Style Options2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis

Pricing financial derivatives is an essential part of the financial industry. For some derivatives there exists a closed-form solution; however, the pricing of high-dimensional American-style derivatives is still a challenging problem today. This project focuses on options, and especially the pricing of American-style basket options, i.e. options with both an early-exercise feature and multiple underlying assets. In high-dimensional problems, which is definitely the case for American-style basket options, Monte Carlo methods are advantageous. Therefore, in this thesis, regression-based Monte Carlo has been used to determine early-exercise strategies for the option. The well-known Least Squares Monte Carlo (LSM) algorithm of Longstaff and Schwartz (2001) has been implemented and compared to Robust Regression Monte Carlo (RRM) by C. Jonen (2011). The difference between these methods is that robust regression is used instead of least-squares regression to calculate the continuation values of American-style options. Since robust regression is more stable against outliers, this approach is claimed by C. Jonen to give better estimates of the option price.

It was hard to compare the techniques without the duality approach of Andersen and Broadie (2004); therefore this method was added. The numerical tests then indicate that the exercise strategy determined using RRM produces a higher lower bound and a tighter upper bound compared to LSM. The difference between the upper and lower bounds could be up to four times smaller using RRM.

Importance sampling and quasi-Monte Carlo have also been used to reduce the variance in the estimate of the option price and to speed up the convergence rate.
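A minimal single-asset sketch of the LSM algorithm referenced above (quadratic polynomial basis, geometric Brownian motion paths); the thesis's multi-asset implementation, the RRM variant and the duality bounds are not reproduced here:

```python
import numpy as np

def lsm_american_put(s0, k, r, sigma, t, steps, paths, seed=0):
    """Least Squares Monte Carlo price of an American put
    (Longstaff & Schwartz, 2001 style), with continuation values
    regressed on the basis {1, x, x^2} over in-the-money paths."""
    rng = np.random.default_rng(seed)
    dt = t / steps
    disc = np.exp(-r * dt)

    # Simulate geometric Brownian motion paths of the underlying.
    z = rng.standard_normal((paths, steps))
    log_s = np.log(s0) + np.cumsum(
        (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)

    cash = np.maximum(k - s[:, -1], 0.0)  # payoff at maturity
    for i in range(steps - 2, -1, -1):    # backward induction
        cash *= disc                      # discount one step
        itm = k - s[:, i] > 0.0
        if itm.sum() > 3:
            x = s[itm, i]
            coeffs = np.polyfit(x, cash[itm], 2)
            continuation = np.polyval(coeffs, x)
            exercise = np.maximum(k - x, 0.0)
            ex = exercise > continuation
            cash[itm] = np.where(ex, exercise, cash[itm])
    return disc * cash.mean()

price = lsm_american_put(s0=36.0, k=40.0, r=0.06, sigma=0.2,
                         t=1.0, steps=50, paths=20_000)
print(f"American put price: {price:.2f}")
```

The parameter set (s0 = 36, K = 40, r = 0.06, sigma = 0.2, T = 1) is a standard benchmark case from the Longstaff-Schwartz paper, so the estimate can be sanity-checked against published values around 4.5.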

• 40.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Modelling Discrete Data for Control Chart Application: A Quality Improvement Project at Volvo GTO Umeå2017Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis

In any manufacturing process, it is of vital importance to improve and maintain a high quality outcome. One way of doing so is to make use of statistical process control (SPC), which is a collection of tools for monitoring the outcome of a process. The aim of this work has been to investigate SPC methods suitable for modelling discrete data describing quality, as well as implementing these methods on quality data of cabs collected at the paint shop of Volvo GTO Umeå. A tool that has been of special importance in this project is control charts, and a great part of the project has consisted of finding suitable statistical distributions on which to base these charts. Methods for this include goodness-of-fit of distributions, as well as regression based on generalized linear models (GLM), for finding suitable distributions and estimate their parameters. The regression models also provided useful information on how the quality data, that describes counts of defects in the paint, depend on background variables of a production unit. As a rule, the superiority of one regression model or distribution over another has been evaluated with the Akaike Information Criterion (AIC).

The results of this project are the GLM models that showed the highest significance, as well as implementations of simple Shewhart-type control charts with different control limits corresponding to the parameter estimates of these models. The control limits proved to differ between observations, depending on the observation's underlying combination of values of background variables, such as cab type. In general, there were also indications that the negative binomial distribution works well for modelling relatively common defect types, or sums of counts of different defect types, whereas zero-inflated distributions might be better for less common defect types.
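A sketch of the control-limit construction for overdispersed counts, using a moment-based negative binomial fit on simulated data; the counts and parameters below are hypothetical, not the Volvo data or the thesis's GLM estimates:

```python
import numpy as np

def nb_control_limits(counts):
    """Shewhart-type control limits (mean +/- 3 standard deviations)
    for defect counts, with the variance taken from a negative
    binomial fitted by the method of moments."""
    mu = np.mean(counts)
    var = np.var(counts, ddof=1)
    if var <= mu:
        # No overdispersion relative to Poisson: a plain c-chart
        # based on the Poisson distribution may suffice instead.
        raise ValueError("no overdispersion detected")
    ucl = mu + 3.0 * np.sqrt(var)
    lcl = max(0.0, mu - 3.0 * np.sqrt(var))
    return lcl, ucl

# Hypothetical paint-defect counts per cab, overdispersed by construction.
rng = np.random.default_rng(3)
counts = rng.negative_binomial(n=5, p=0.4, size=300)

lcl, ucl = nb_control_limits(counts)
print(f"LCL = {lcl:.1f}, UCL = {ucl:.1f}")
```

Using the negative binomial variance (which exceeds the mean) widens the limits relative to a Poisson c-chart, reducing false alarms on overdispersed defect counts.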

• 41.
Department of Engineering Sciences and Mathematics, Luleå University of Technology, Luleå, Sweden.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
A multivariate process capability index based on the first principal component only2013In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 29, no 7, p. 987-1003Article in journal (Refereed)

Often the quality of a process is determined by several correlated univariate variables. In such cases, the considered quality characteristic should be treated as a vector. Several different multivariate process capability indices (MPCIs) have been developed for such a situation, but confidence intervals or tests have been derived for only a handful of these. In practice, the conclusion about process capability needs to be drawn from a random sample, making confidence intervals or tests for the MPCIs important. Principal component analysis (PCA) is a well-known tool to use in multivariate situations. We present, under the assumption of multivariate normality, a new MPCI by applying PCA to a set of suitably transformed variables. We also propose a decision procedure, based on a test of this new index, to be used to decide whether a process can be claimed capable or not at a stated significance level. This new MPCI and its accompanying decision procedure avoid drawbacks found for previously published MPCIs with confidence intervals. By transforming the original variables, we need to consider the first principal component only. Hence, a multivariate situation can be converted into a familiar univariate process capability index. Furthermore, the proposed new MPCI has the property that if the index exceeds a given threshold value, the probability of non-conformance is bounded by a known value. Properties, such as significance level and power, of the proposed decision procedure are evaluated through a simulation study in the two-dimensional case. A comparative simulation study between our new MPCI and an MPCI previously suggested in the literature is also performed. These studies show that our proposed MPCI, with its accompanying decision procedure, has desirable properties and is worth studying further.
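The projection onto the first principal component can be sketched as follows; the data and covariance are hypothetical, and this omits the variable transformation and the index construction of the article:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two correlated quality characteristics (hypothetical measurements).
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
x = rng.multivariate_normal(mean=[10.0, 20.0], cov=cov, size=500)

# Centre the data and extract the first principal component as the
# eigenvector of the sample covariance with the largest eigenvalue.
xc = x - x.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(xc, rowvar=False))
first_pc = eigvecs[:, -1]       # eigh sorts eigenvalues ascending
scores = xc @ first_pc          # univariate summary of the process

explained = eigvals[-1] / eigvals.sum()
print(f"variance explained by PC1: {explained:.2f}")
```

When the characteristics are strongly correlated, the first component captures most of the process variation, which is what makes reducing the multivariate problem to a univariate capability index reasonable.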

• 42.
Department of Engineering Sciences and Mathematics, Luleå University of Technology, Luleå, Sweden.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
Comparing confidence intervals for multivariate process capability indices2012In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 28, no 4, p. 481-495Article in journal (Refereed)

Multivariate process capability indices (MPCIs) are needed for process capability analysis when the quality of a process is determined by several univariate quality characteristics that are correlated. There are several different MPCIs described in the literature, but confidence intervals have been derived for only a handful of these. In practice, the conclusion about process capability must be drawn from a random sample. Hence, confidence intervals or tests for MPCIs are important. With a case study as a start and under the assumption of multivariate normality, we review and compare four different available methods for calculating confidence intervals of MPCIs that generalize the univariate index Cp. Two of the methods are based on the ratio of a tolerance region to a process region, and two are based on the principal component analysis. For two of the methods, we derive approximate confidence intervals, which are easy to calculate and can be used for moderate sample sizes. We discuss issues that need to be solved before the studied methods can be applied more generally in practice. For instance, three of the methods have approximate confidence levels only, but no investigation has been carried out on how good these approximations are. Furthermore, we highlight the problem with the correspondence between the index value and the probability of nonconformance. We also elucidate a major drawback with the existing MPCIs on the basis of the principal component analysis. Our investigation shows the need for more research to obtain an MPCI with confidence interval such that conclusions about the process capability can be drawn at a known confidence level and that a stated value of the MPCI limits the probability of nonconformance in a known way.

• 43.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
Prediktion av bruttoregionalprodukt: Prognosmodellering som förkortar tiden mellan officiella siffror och prognos2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis

The thesis explored the feasibility and precision of predicting gross regional product (GRP) with three statistical methods: linear regression, regression trees and model trees. For model evaluation, test-error estimates obtained through cross-validation and a comparison against the GRP forecast of Statistics Sweden (SCB) were used. The results show that regression trees are not suitable for GRP prediction, while the other two methods succeed with a reasonable margin of error. The method closest to SCB's forecast has an average percentage deviation of 0.3 and a standard deviation of 3.0.

• 44.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
The bivariate ising polynomial of a graph2009In: Discrete Applied Mathematics, ISSN 0166-218X, E-ISSN 1872-6771, Vol. 157, no 11, p. 2515-2524Article in journal (Refereed)

In this paper we discuss the two-variable Ising polynomial in a graph-theoretical setting. This polynomial has its origin in physics as the partition function of the Ising model with an external field. We prove some basic properties of the Ising polynomial and demonstrate that it encodes a large amount of combinatorial information about a graph. We also give examples proving that certain properties, such as the chromatic number, are not determined by the Ising polynomial. Finally, we prove that there exist large families of non-isomorphic planar triangulations with identical Ising polynomials.

• 45.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Avoiding Arrays of Odd Order by Latin Squares2013In: Combinatorics, probability & computing, ISSN 0963-5483, E-ISSN 1469-2163, Vol. 22, no 2, p. 184-212Article in journal (Refereed)

We prove that there is a constant c such that, for each positive integer k, every (2k+1) × (2k+1) array A on the symbols 1, ..., 2k+1 with at most c(2k+1) symbols in every cell, and each symbol repeated at most c(2k+1) times in every row and column, is avoidable; that is, there is a (2k+1) × (2k+1) Latin square S on the symbols 1, ..., 2k+1 such that, for each i, j ∈ {1, ..., 2k+1}, the symbol in position (i, j) of S does not appear in the corresponding cell of A. This settles the last open case of a conjecture by Häggkvist. Using this result, we also show that there is a constant ρ such that, for any positive integer n, if each cell in an n × n array B is assigned a set of m ≤ ρn symbols, where each set is chosen independently and uniformly at random from {1, ..., n}, then the probability that B is avoidable tends to 1 as n → ∞.

• 46.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
On the Ising problem and some matrix operations2007Doctoral thesis, comprehensive summary (Other academic)

The first part of the dissertation concerns the Ising problem, proposed to Ernst Ising by his supervisor Wilhelm Lenz in the early 1920s. The Ising model, or perhaps more correctly the Lenz-Ising model, tries to capture the behaviour of phase transitions, i.e. how local rules of engagement can produce large-scale behaviour.

Two decades later Lars Onsager solved the Ising problem for the quadratic lattice without an outer field. Using his ideas solutions for other lattices in two dimensions have been constructed. We describe a method for calculating the Ising partition function for immense square grids, up to linear order 320 (i.e. 102400 vertices).

In three dimensions, however, only a few results are known. One of the most important unanswered questions is at which temperature the Ising model has its phase transition. In this dissertation it is shown that an upper bound for the critical coupling Kc, the inverse absolute temperature, is 0.29 for the three-dimensional cubic lattice.

To extract more information one has to use different statistical methods. We describe a sampling method that can use simple state generation, such as the Metropolis algorithm, for large lattices. We also discuss how to reconstruct the entropy from the model, in order to obtain parameters such as the free energy.
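As an illustration of the kind of simple state generation referred to above, the following is a minimal single-spin-flip Metropolis sweep for the square-lattice Ising model. This is our own sketch, not the dissertation's sampler; the periodic L × L lattice, the flat spin list, and the energy convention H = -K Σ σ_i σ_j are illustrative assumptions:

```python
import math
import random

def metropolis_sweep(spins, L, K, rng=random):
    """One Metropolis sweep over an L x L periodic Ising lattice.

    spins is a flat list of L*L values in {-1, +1}; K is the coupling
    with the inverse temperature absorbed, so a flip of spin i changes
    the energy H = -K * sum(sigma_i * sigma_j) by dE = 2*K*sigma_i*nb.
    """
    for _ in range(L * L):
        i = rng.randrange(L * L)
        x, y = i % L, i // L
        # Sum of the four periodic nearest neighbours.
        nb = (spins[(x + 1) % L + y * L] + spins[(x - 1) % L + y * L]
              + spins[x + ((y + 1) % L) * L] + spins[x + ((y - 1) % L) * L])
        dE = 2 * K * spins[i] * nb  # energy change if spin i is flipped
        if dE <= 0 or rng.random() < math.exp(-dE):
            spins[i] = -spins[i]
```

Each sweep proposes L·L single-spin flips, accepting each with probability min(1, exp(-ΔE)), which leaves the Boltzmann distribution invariant.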

The Ising model gives a partition function associated with all finite graphs. In this dissertation we show that a number of interesting graph invariants can be calculated from the coefficients of the Ising partition function. We also give some interesting observations about the partition function in general and show that there are, for any N, N non-isomorphic graphs with the same Ising partition function.
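For a small graph the partition function referred to above can be written down directly. The following sketch (our own illustration, with our own naming) enumerates all 2^n spin assignments and counts how many attain each value of Σ σ_u σ_v over the edges; those counts are exactly the coefficients of the partition function Z(K) = Σ_e count[e]·exp(K·e):

```python
from collections import Counter
from itertools import product

def ising_spectrum(n, edges):
    """For each value e of sum(sigma_u * sigma_v) over the edges, count
    how many of the 2^n spin assignments sigma: V -> {-1, +1} attain it.

    The Ising partition function of the graph is then
    Z(K) = sum over e of spectrum[e] * exp(K * e).
    """
    spectrum = Counter()
    for sigma in product((-1, 1), repeat=n):
        e = sum(sigma[u] * sigma[v] for u, v in edges)
        spectrum[e] += 1
    return spectrum
```

For the 4-cycle, for instance, the 16 assignments split into energy 4 (the 2 all-equal states), energy 0 (12 states) and energy -4 (the 2 alternating states).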

The second part of the dissertation is about matrix operations. We consider the problem of multiplying matrices whose entries are elements of a finite semiring or of an additively finitely generated semiring. We describe a method that uses O(n³/log n) arithmetic operations.

We also consider the problem of reducing n × n matrices over a finite field of size q using O(n²/log_q n) row operations in the worst case.

• 47.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Avoidability by Latin squares of arrays of even orderManuscript (preprint) (Other academic)

We prove that for any k and any 2k × 2k array A such that no cell in A contains more than k/2550 symbols, and no symbol occurs more than k/2550 times in any row or column, there is a Latin square such that no cell in the Latin square contains a symbol that occurs in the corresponding cell in A. This proves a conjecture of Häggkvist [8] in the special case of arrays with even side.

• 48.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Avoidability of random arraysManuscript (preprint) (Other academic)

An n × n array that in each cell contains a subset of the symbols 1, ..., n is avoidable if there exists a Latin square of order n such that no cell in the Latin square contains a symbol which belongs to the set of symbols in the corresponding cell of the array. Some deterministic conditions for avoidability of arrays are known, but here we study arrays whose cells are assigned random subsets of {1, ..., n}. This is equivalent to the problem of list-edge-coloring $K_{n,n}$ with randomly assigned lists from the set {1, ..., n}. We show that an array where each symbol appears in each cell with probability p is avoidable with very high probability, even when p is such that the expected number of symbols forbidden in each cell is slightly higher than what deterministic theorems can prove avoidable.

• 49.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Avoiding (m, m, m)-arrays of order n = 2^kManuscript (preprint) (Other academic)

An (m, m, m)-array of order n is an n × n array in which each cell is assigned a set of at most m symbols from {1, ..., n} such that no symbol occurs more than m times in any row or column. An (m, m, m)-array is called avoidable if there exists a Latin square such that no cell in the Latin square contains a symbol that also belongs to the set assigned to the corresponding cell in the array. We show that there is a constant γ such that if m ≤ γ2^k, then any (m, m, m)-array of order 2^k is avoidable. Such a constant γ has been conjectured to exist for all n by Häggkvist.

• 50.
Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
Avoiding (m, m, m)-arrays of order n = 2^k2012In: The Electronic Journal of Combinatorics, ISSN 1097-1440, E-ISSN 1077-8926, Vol. 19, no 1, p. P63-Article in journal (Refereed)

An (m, m, m)-array of order n is an n × n array in which each cell is assigned a set of at most m symbols from {1, ..., n} such that no symbol occurs more than m times in any row or column. An (m, m, m)-array is called avoidable if there exists a Latin square such that no cell in the Latin square contains a symbol that also belongs to the set assigned to the corresponding cell in the array. We show that there is a constant γ such that if m ≤ γ2^k and k ≥ 14, then any (m, m, m)-array of order n = 2^k is avoidable. Such a constant γ has been conjectured to exist for all n by Häggkvist.
