umu.se Publications
1 - 50 of 1002
  • 1.
    Abramowicz, Konrad
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Numerical analysis for random processes and fields and related design problems (2011). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In this thesis, we study numerical analysis for random processes and fields. We investigate the behavior of the approximation accuracy for specific linear methods based on a finite number of observations. Furthermore, we propose techniques for optimizing performance of the methods for particular classes of random functions. The thesis consists of an introductory survey of the subject and related theory and four papers (A-D).

    In Paper A, we study a Hermite spline approximation of quadratic mean continuous and differentiable random processes with an isolated point singularity. We consider a piecewise polynomial approximation combining two different Hermite interpolation splines: one for the interval adjacent to the singularity point and one for the remaining part. For locally stationary random processes, we construct sequences of sampling designs that asymptotically eliminate the effect of the singularity.

    In Paper B, we focus on the approximation of quadratic mean continuous real-valued random fields by a multivariate piecewise linear interpolator based on a finite number of observations placed on a hyperrectangular grid. We extend the concept of local stationarity to random fields, and for fields from this class we provide exact asymptotics for the approximation accuracy. Some asymptotic optimization results are also provided.

    In Paper C, we investigate numerical approximation of integrals (quadrature) of random functions over the unit hypercube. We study the asymptotics of a stratified Monte Carlo quadrature based on a finite number of randomly chosen observations in strata generated by a hyperrectangular grid. For the locally stationary random fields (introduced in Paper B), we derive exact asymptotic results together with some optimization methods. Moreover, for a certain class of random functions with an isolated singularity, we construct a sequence of designs eliminating the effect of the singularity.

    In Paper D, we consider a Monte Carlo pricing method for arithmetic Asian options. An estimator is constructed using a piecewise constant approximation of an underlying asset price process. For a wide class of Lévy market models, we provide upper bounds for the discretization error and the variance of the estimator. We construct an algorithm for accurate simulations with controlled discretization and Monte Carlo errors, and obtain estimates of the option price with a predetermined accuracy at a given confidence level. Additionally, for the Black-Scholes model, we optimize the performance of the estimator by using a suitable variance reduction technique.

    Download full text (pdf)
  • 2.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Arnqvist, Per
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Secchi, Piercesare
    Vantini, Simone
    Vitelli, Valeria
    Was it snowing on lake Kassjön in January 4486 BC? Functional data analysis of sediment data (2014). Conference paper (Other academic)
  • 3.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Häger, Charlotte
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Physiotherapy.
    Hébert-Losier, Kim
    Swedish Winter Sports Research Centre, Mid Sweden University, Department of Health Sciences, Östersund, Sweden.
    Pini, Alessia
    MOX – Department of Mathematics, Politecnico di Milano.
    Schelin, Lina
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Physiotherapy.
    Strandberg, Johan
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Vantini, Simone
    MOX – Department of Mathematics, Politecnico di Milano.
    An inferential framework for domain selection in functional ANOVA (2014). In: Contributions in infinite-dimensional statistics and related topics / [ed] Bongiorno, E.G., Salinelli, E., Goia, A., Vieu, P., Esculapio, 2014. Conference paper (Refereed)
    Abstract [en]

    We present a procedure for performing an ANOVA test on functional data, including pairwise group comparisons in a Scheffé-like perspective. The test is based on the Interval Testing Procedure, and it selects the intervals where the groups significantly differ. The procedure is applied to the 3D kinematic motion of the knee joint collected during a functional task (one-leg hop) performed by three groups of individuals.

  • 4.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Häger, Charlotte
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation, Physiotherapy.
    Pini, Alessia
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Department of Statistical Sciences, Università Cattolica del Sacro Cuore, Milan, Italy.
    Schelin, Lina
    Umeå University, Faculty of Medicine, Department of Community Medicine and Rehabilitation. Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Vantini, Simone
    Nonparametric inference for functional-on-scalar linear models applied to knee kinematic hop data after injury of the anterior cruciate ligament (2018). In: Scandinavian Journal of Statistics, ISSN 0303-6898, E-ISSN 1467-9469, Vol. 45, no 4, p. 1036-1061. Article in journal (Refereed)
    Abstract [en]

    Motivated by the analysis of the dependence of knee movement patterns during functional tasks on subject-specific covariates, we introduce a distribution-free procedure for testing a functional-on-scalar linear model with fixed effects. The procedure not only tests the global hypothesis on the entire domain but also selects the intervals where statistically significant effects are detected. We prove that the proposed tests provide asymptotic control of the intervalwise error rate, that is, the probability of falsely rejecting any interval of true null hypotheses. The procedure is applied to one-leg hop data from a study on anterior cruciate ligament injury. We compare the knee kinematics of three groups of individuals (two injured groups with different treatments and one group of healthy controls), taking individual-specific covariates into account.

    Download full text (pdf)
  • 5.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Pini, Alessia
    Department of Statistical Sciences, Università Cattolica del Sacro Cuore, Milan, Italy.
    Schelin, Lina
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Stamm, Aymeric
    Department of Mathematics Jean Leray, UMR CNRS 6629, Nantes University, Nantes, France.
    Vantini, Simone
    MOX – Modelling and Scientific Computing Laboratory, Department of Mathematics, Politecnico di Milano, Milan, Italy.
    Domain selection and family-wise error rate for functional data: a unified framework (2023). In: Biometrics, ISSN 0006-341X, E-ISSN 1541-0420, Vol. 79, no 2, p. 1119-1132. Article in journal (Refereed)
    Abstract [en]

    Functional data are smooth, often continuous, random curves, which can be seen as an extreme case of multivariate data with infinite dimensionality. Just as component-wise inference for multivariate data naturally performs feature selection, subset-wise inference for functional data performs domain selection. In this paper, we present a unified testing framework for domain selection on populations of functional data. In detail, p-values of hypothesis tests performed on point-wise evaluations of functional data are suitably adjusted for providing a control of the family-wise error rate (FWER) over a family of subsets of the domain. We show that several state-of-the-art domain selection methods fit within this framework and differ from each other by the choice of the family over which the control of the FWER is provided. In the existing literature, these families are always defined a priori. In this work, we also propose a novel approach, coined threshold-wise testing, in which the family of subsets is instead built in a data-driven fashion. The method seamlessly generalizes to multidimensional domains in contrast to methods based on a-priori defined families. We provide theoretical results with respect to consistency and control of the FWER for the methods within the unified framework. We illustrate the performance of the methods within the unified framework on simulated and real data examples, and compare their performance with other existing methods.

    Download full text (pdf)
  • 6.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Schelin, Lina
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Strandberg, Johan
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Multiresolution clustering of dependent functional data with application to climate reconstruction (2019). In: Stat, E-ISSN 2049-1573, Vol. 8, no 1, article id e240. Article in journal (Refereed)
    Abstract [en]

    We propose a new nonparametric clustering method for dependent functional data, the double clustering bagging Voronoi method. It consists of two levels of clustering. Given a spatial lattice of points, a function is observed at each grid point. In the first-level clustering, features of the functional data are clustered. The second-level clustering takes dependence into account by grouping local representatives, built from the resulting first-level clusters, using a bagging Voronoi strategy. Depending on the distance measure used, features of the functions may be included in the second-level clustering, making the method flexible and general. Combined with the clustering method, a multiresolution approach is proposed that searches for stable clusters at different spatial scales, aiming to capture latent structures. This provides a powerful and computationally efficient tool to cluster dependent functional data at different spatial scales, here illustrated by a simulation study. The introduced methodology is applied to varved lake sediment data, aiming to reconstruct winter climate regimes in northern Sweden at different time resolutions over the past 6,000 years.

  • 7.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Seleznjev, Oleg
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    On the error of the Monte Carlo pricing method for Asian option (2008). In: Journal of Numerical and Applied Mathematics, ISSN 0868-6912, Vol. 96, no 1, p. 1-10. Article in journal (Refereed)
    Abstract [en]

    We consider a Monte Carlo method to price a continuous arithmetic Asian option with a given precision. Piecewise constant approximation and plain simulation are used for a wide class of models based on Lévy processes. We give bounds for the possible discretization and simulation errors. The numbers of discretization points and simulations sufficient to obtain the requested accuracy are derived. To demonstrate the general approach, the Black-Scholes model is studied in more detail. We treat the case of continuous averaging and starting time zero, but the obtained results can be applied to the discrete case and generalized to any time before an execution date. Some numerical experiments and a comparison to a PDE-based method are also presented.
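    As a rough illustration of the idea described above (not the authors' error-controlled algorithm), the following sketch prices an arithmetic Asian call under the Black-Scholes model using a piecewise constant approximation of the price path; all function and parameter names here are chosen for illustration.

```python
import math
import random

def asian_call_mc(s0, strike, r, sigma, t, n_steps, n_paths, seed=1):
    """Plain Monte Carlo price of an arithmetic-average Asian call under
    Black-Scholes, averaging the simulated price at n_steps grid points."""
    rng = random.Random(seed)
    dt = t / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        log_s = math.log(s0)
        path_sum = 0.0
        for _ in range(n_steps):
            # Exact log-normal step of the geometric Brownian motion.
            log_s += drift + vol * rng.gauss(0.0, 1.0)
            path_sum += math.exp(log_s)
        avg_price = path_sum / n_steps
        total += max(avg_price - strike, 0.0)  # call payoff on the average
    return math.exp(-r * t) * total / n_paths  # discounted MC estimate
```

    Both the number of grid points (discretization error) and the number of paths (simulation error) must grow to reach a requested accuracy, which is exactly the trade-off the paper quantifies.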

  • 8.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Seleznjev, Oleg
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Piecewise multilinear interpolation of a random field (2013). In: Advances in Applied Probability, ISSN 0001-8678, E-ISSN 1475-6064, Vol. 45, no 4, p. 945-959. Article in journal (Refereed)
    Abstract [en]

    We consider a piecewise-multilinear interpolation of a continuous random field on a d-dimensional cube. The approximation performance is measured by the integrated mean square error. The piecewise-multilinear interpolator is defined by N field observations on a grid of locations (or design). We investigate the class of locally stationary random fields whose local behavior is like that of a fractional Brownian field in the mean square sense, and find the asymptotic approximation accuracy for a sequence of designs for large N. Moreover, for certain classes of continuous and continuously differentiable fields, we provide an upper bound for the approximation accuracy in the uniform mean square norm.

  • 9.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Seleznjev, Oleg
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Stratified Monte Carlo quadrature for continuous random fields (2015). In: Methodology and Computing in Applied Probability, ISSN 1387-5841, E-ISSN 1573-7713, Vol. 17, no 1, p. 59-72. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of numerical approximation of integrals of random fields over a unit hypercube. We use a stratified Monte Carlo quadrature and measure the approximation performance by the mean squared error. The quadrature is defined by a finite number of stratified, randomly chosen observations, with the partition generated by a rectangular grid (or design). We study the class of locally stationary random fields whose local behavior is like that of a fractional Brownian field in the mean square sense, and find the asymptotic approximation accuracy for a sequence of designs for a large number of observations. For the Hölder class of random functions, we provide an upper bound for the approximation error. Additionally, for a certain class of isotropic random functions with an isolated singularity at the origin, we construct a sequence of designs eliminating the effect of the singularity point.
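    As a minimal illustration of the quadrature described above (one uniform draw per cell of a rectangular grid over the unit hypercube), here is a sketch; the function name and parameters are chosen here, and the paper's design optimization is not reproduced.

```python
import itertools
import random

def stratified_mc_quadrature(f, d, m, seed=1):
    """Estimate the integral of f over [0, 1]^d by stratified Monte Carlo:
    one uniformly placed observation in each of the m**d cells of a
    regular rectangular grid."""
    rng = random.Random(seed)
    h = 1.0 / m  # side length of each grid cell
    total = 0.0
    for cell in itertools.product(range(m), repeat=d):
        # One random point inside this cell.
        x = [(c + rng.random()) * h for c in cell]
        total += f(x)
    return total / m ** d
```

    Compared with plain Monte Carlo on the same number of points, stratification reduces the variance for smooth integrands, which is what the paper's asymptotic results make precise for locally stationary fields.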

  • 10.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Strandberg, Johan
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Nonparametric bagging clustering methods to identify latent structures from a sequence of dependent categorical data (2022). In: Computational Statistics &amp; Data Analysis, ISSN 0167-9473, E-ISSN 1872-7352, Vol. 177, article id 107583. Article in journal (Refereed)
    Abstract [en]

    Nonparametric bagging clustering methods are studied and compared to identify latent structures from a sequence of dependent categorical data observed along a one-dimensional (discrete) time domain. The frequency of the observed categories is assumed to be generated by a (slowly varying) latent signal, according to latent state-specific probability distributions. The bagging clustering methods use random tessellations (partitions) of the time domain and clustering of the category frequencies of the observed data in the tessellation cells to recover the latent signal, within a bagging framework. New and existing ways of generating the tessellations and clustering are discussed and combined into different bagging clustering methods. Edge tessellations and adaptive tessellations are the newly proposed ways of forming partitions. Composite methods are also introduced that use (automated) decision rules based on entropy measures to choose among the proposed bagging clustering methods. The performance of all the methods is compared in a simulation study. From the simulation study it can be concluded that local and global entropy measures are powerful tools for improving the recovery of the latent signal, both via the adaptive tessellation strategies (local entropy) and in designing composite methods (global entropy). The composite methods are robust and improve overall performance, in particular the composite method using adaptive (edge) tessellations.

    Download full text (pdf)
  • 11.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Strandberg, Johan
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Nonparametric clustering methods to identify latent structures from a sequence of dependent categorical data. Manuscript (preprint) (Other academic)
  • 12.
    Abramowicz, Konrad
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Arnqvist, Per
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Secchi, Piercesare
    Politecnico di Milano, Italy.
    Sjöstedt de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Vantini, Simone
    Politecnico di Milano, Italy.
    Vitelli, Valeria
    Oslo University, Norway.
    Clustering misaligned dependent curves applied to varved lake sediment for climate reconstruction (2017). In: Stochastic environmental research and risk assessment (Print), ISSN 1436-3240, E-ISSN 1436-3259, Vol. 31, no 1, p. 71-85. Article in journal (Refereed)
    Abstract [en]

    In this paper we introduce a novel functional clustering method, the Bagging Voronoi K-Medoid Alignment (BVKMA) algorithm, which simultaneously clusters and aligns spatially dependent curves. It is a nonparametric statistical method that does not rely on distributional or dependency-structure assumptions. The method is motivated by and applied to varved (annually laminated) sediment data from lake Kassjön in northern Sweden, aiming to draw inferences about past environmental and climate changes. The resulting clusters and their time dynamics show great potential for seasonal climate interpretation, in particular for winter climate changes.

  • 13.
    Abramsson, Evelina
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Grind, Kajsa
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Skattning av kausala effekter med matchat fall-kontroll data [Estimation of causal effects with matched case-control data] (2017). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Download full text (pdf)
  • 14.
    Alainentalo, Lisbeth
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    A Comparison of Tests for Ordered Alternatives With Application in Medicine (1997). Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    A situation frequently encountered in medical studies is the comparison of several treatments with a control. The problem is to determine whether or not a test drug has a desirable medical effect and/or to identify the minimum effective dose. In this Bachelor’s thesis, some of the methods used for testing hypotheses of ordered alternatives are reviewed and compared with respect to the power of the tests. Examples of multiple comparison procedures, maximum likelihood procedures, rank tests and different types of contrasts are presented and the properties of the methods are explored.

    Depending on the degree of knowledge about the dose-responses, the aim of the study, and whether the test is parametric or non-parametric and distribution-free or not, different recommendations are given as to which test should be used. Thus, there is no single test that can be applied in all experimental situations for testing all different alternative hypotheses.

    Download full text (pdf)
  • 15.
    Albano, Anthony D.
    et al.
    Wiberg, Marie
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Linking With External Covariates: Examining Accuracy by Anchor Type, Test Length, Ability Difference, and Sample Size (2019). In: Applied psychological measurement, ISSN 0146-6216, E-ISSN 1552-3497, Vol. 43, no 8, p. 597-610. Article in journal (Refereed)
    Abstract [en]

    Research has recently demonstrated the use of multiple anchor tests and external covariates to supplement or substitute for common anchor items when linking and equating with nonequivalent groups. This study examines the conditions under which external covariates improve linking and equating accuracy, with internal and external anchor tests of varying lengths and groups of differing abilities. Pseudo forms of a state science test were equated within a resampling study where sample size ranged from 1,000 to 10,000 examinees and anchor tests ranged in length from eight to 20 items, with reading and math scores included as covariates. Frequency estimation linking with an anchor test and external covariate was found to produce the most accurate results under the majority of conditions studied. Practical applications of linking with anchor tests and covariates are discussed.

    Download full text (pdf)
  • 16.
    Albing, Malin
    et al.
    Department of Mathematics, Luleå University of Technology.
    Vännman, Kerstin
    Department of Mathematics, Luleå University of Technology.
    Elliptical safety region plots for Cpk (2011). In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 38, no 6, p. 1169-1187. Article in journal (Refereed)
    Abstract [en]

    The process capability index Cpk is widely used when measuring the capability of a manufacturing process. A process is defined to be capable if the capability index exceeds a stated threshold value, e.g. Cpk > 4/3. This inequality can be expressed graphically using a process capability plot, which is a plot in the plane defined by the process mean and the process standard deviation, showing the region for a capable process. In the process capability plot, a safety region can be plotted to obtain a simple graphical decision rule to assess process capability at a given significance level. We consider safety regions to be used for the index Cpk. Under the assumption of normality, we derive elliptical safety regions so that, using a random sample, conclusions about the process capability can be drawn at a given significance level. This simple graphical tool is helpful when trying to understand whether it is the variability, the deviation from target, or both that need to be reduced to improve the capability. Furthermore, using safety regions, several characteristics with different specification limits and different sample sizes can be monitored in the same plot. The proposed graphical decision rule is also investigated with respect to power.
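    For readers unfamiliar with the index, Cpk is computed from the process mean, standard deviation and the specification limits; the helper below is a textbook sketch (names chosen here), not code from the paper.

```python
def cpk(mean, std, lsl, usl):
    """Process capability index: Cpk = min(USL - mean, mean - LSL) / (3 * std)."""
    return min(usl - mean, mean - lsl) / (3.0 * std)
```

    A process centered between wide specification limits gets a high Cpk; shifting the mean toward a limit or inflating the standard deviation lowers it, which is the trade-off the safety-region plot visualizes.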

  • 17.
    Alger, Susanne
    Umeå University, Faculty of Social Sciences, Department of applied educational science.
    Is This Reliable Enough? Examining Classification Consistency and Accuracy in a Criterion-Referenced Test (2016). In: International journal of assessment tools in education, ISSN 2148-7456, Vol. 3, no 2, p. 137-150. Article in journal (Refereed)
    Abstract [en]

    One important step in assessing the quality of a test is to examine the reliability of test score interpretation. Which aspect of reliability is most relevant depends on the type of test and how the scores are to be used. For criterion-referenced tests, and in particular certification tests, where students are classified into performance categories, the primary focus need not be on the size of the error but on the impact of this error on classification. This impact can be described in terms of classification consistency and classification accuracy. In this article, selected methods from classical test theory for estimating classification consistency and classification accuracy were applied to the theory part of the Swedish driving licence test, a high-stakes criterion-referenced test that is rarely studied in terms of reliability of classification. The results for this particular test indicated a level of classification consistency that falls slightly short of the recommended level, which is why lengthening the test should be considered. More evidence should also be gathered as to whether the placement of the cut-off score is appropriate, since this has implications for the validity of classifications.

  • 18.
    Ali, Raman
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Root Cause Analysis for In-Transit Time Performance: Time Series Analysis for Inbound Quantity Received into Warehouse (2021). Independent thesis Advanced level (professional degree), 20 credits / 30 HE credits. Student thesis
    Abstract [en]

    Cytiva is a global provider of technologies to pharmaceutical companies, and it is critical to ensure that Cytiva's customers receive deliveries of products on time. Cytiva's products are shipped via road transportation within most parts of Europe; for the rest of the world, air freight is used. The company is challenged to deliver products on time between regional distribution points and from manufacturing sites to its regional distribution centers. The on-time performance for the delivery of goods is today 79%, compared to the company's goal of 95%.

    The goal of this work is to find the root causes and recommend improvement opportunities for the logistics organization's inbound in-transit time performance, towards its target of a 95% success rate for shipping in-transit times.

    Data for this work were collected from the company's systems to create visibility for the logistics specialists and to build a prediction model for the warehouse in Rosersberg. Visibility was created by implementing various dashboards in the QlikSense program for use by the logistics group. The prediction models were built on the Holt-Winters forecasting technique to predict the quantity, weight and volume of products arriving daily within five days, which is sufficient for the daily work. With this forecasting technique, highly accurate models were found for both quantity and weight, with accuracies of 96.02% and 92.23%, respectively. For the volume, however, too many outliers had been replaced by mean values, and the accuracy of the model was 75.82%.

    However, large discrepancies were found in the data, which has led to a large ongoing project to resolve them. This means that the models presented in this thesis cannot be considered fully reliable for the company to use while many errors remain in the data. The models may need to be adjusted once the quality of the data has improved. For now, they can serve as a rough guide.
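    The thesis builds on Holt-Winters forecasting; as a hedged illustration of that technique (not the author's tuned models), here is a minimal additive Holt-Winters smoother with hypothetical smoothing constants.

```python
def holt_winters_additive(y, season_len, alpha=0.3, beta=0.1, gamma=0.1, horizon=5):
    """Minimal additive Holt-Winters: smooth level, trend and seasonal
    components of series y (length >= 2 * season_len), then forecast."""
    m = season_len
    # Initialize level, trend and seasonal indices from the first two seasons.
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]
    for t in range(m, len(y)):
        prev_level = level
        s = season[t % m]
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
    # Forecast horizon steps ahead by extrapolating level + trend + seasonality.
    return [level + (h + 1) * trend + season[(len(y) + h) % m]
            for h in range(horizon)]
```

    Production libraries (e.g. statsmodels' ExponentialSmoothing) fit the smoothing constants by maximum likelihood rather than fixing them as above.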

    Download full text (pdf)
  • 19.
    Alshalabi, Mohamad
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Measures of statistical dependence for feature selection: Computational study (2022). Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE credits. Student thesis
    Abstract [en]

    The importance of feature selection for statistical and machine learning models derives from their explainability and the ability to explore new relationships, leading to new discoveries. Straightforward feature selection methods measure the dependencies between the potential features and the response variable. This thesis studies the selection of features according to a maximal statistical dependency criterion based on generalized Pearson's correlation coefficients, e.g., Wijayatunga's coefficient. I present a framework for feature selection based on these coefficients for high-dimensional feature variables. The results are compared to those obtained by applying an elastic net regression (for high-dimensional data). The generalized Pearson's correlation coefficient is a metric-based measure where the metric is the Hellinger distance, viewed as a distance between probability distributions. Wijayatunga's coefficient was originally proposed for the discrete case; here, we generalize it to continuous variables by discretization and kernelization. It is interesting to see how the discretization behaves as the bins become finer. The study employs both synthetic and real-world data to illustrate the validity and power of this feature selection process. Moreover, a new method of normalization for mutual information is included. The results show that both measures have considerable potential for detecting associations. The feature selection experiment shows that elastic net regression is superior to the proposed method; nevertheless, more investigation could be done on this subject.
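    The Hellinger distance underlying the generalized correlation coefficient has a simple closed form for discrete distributions; the sketch below illustrates only that building block (the coefficient itself is not reproduced here).

```python
import math

def hellinger(p, q):
    """Hellinger distance between discrete distributions p and q, given as
    probability sequences over the same support; ranges from 0 to 1."""
    return math.sqrt(sum((math.sqrt(pi) - math.sqrt(qi)) ** 2
                         for pi, qi in zip(p, q))) / math.sqrt(2.0)
```

    Identical distributions give distance 0 and distributions with disjoint support give distance 1, which makes it a convenient bounded measure of dependence when applied to joint versus product-of-marginals distributions.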

    Download full text (pdf)
    fulltext
  • 20.
    Altmejd, Adam
    et al.
    Swedish Institute for Social Research, Stockholm University, Stockholm, Sweden; Department of Finance, Stockholm School of Economics, Stockholm, Sweden.
    Rocklöv, Joacim
    Umeå University, Faculty of Medicine, Department of Public Health and Clinical Medicine, Section of Sustainable Health. Heidelberg Institute of Global Health (HIGH), Interdisciplinary Centre for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany.
    Wallin, Jonas
    Department of Statistics, Lund University, Lund, Sweden.
    Nowcasting COVID-19 statistics reported with delay: A case-study of Sweden and the UK2023In: International Journal of Environmental Research and Public Health, ISSN 1661-7827, E-ISSN 1660-4601, Vol. 20, no 4Article in journal (Refereed)
    Abstract [en]

The COVID-19 pandemic has demonstrated the importance of unbiased, real-time statistics of trends in disease events in order to achieve an effective response. Because of reporting delays, real-time statistics frequently underestimate the total number of infections, hospitalizations and deaths. When studied by event date, such delays also risk creating an illusion of a downward trend. Here, we describe a statistical methodology for predicting true daily quantities and their uncertainty, estimated using historical reporting delays. The methodology takes into account the observed distribution pattern of the lag. It is derived from the "removal method", a well-established estimation framework in the field of ecology.
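A heavily simplified version of the delay-correction idea (not the paper's actual estimation framework, which models the lag distribution and its uncertainty) scales each day's partially observed count by the estimated probability that an event from that day has been reported so far; the function and argument names are illustrative:

```python
def nowcast(reported, cum_report_prob):
    """Inflate partially observed daily counts by historical reporting delays.

    reported[t]        -- count reported so far for event day t
                          (t = 0 is the most recent day)
    cum_report_prob[d] -- estimated probability that an event is reported
                          within d days of occurring
    """
    est = []
    for t, count in enumerate(reported):
        p = cum_report_prob[min(t, len(cum_report_prob) - 1)]
        est.append(count / p if p > 0 else float("nan"))
    return est
```

For example, if historically only 30% of events are reported on the same day, a same-day count of 30 is nowcast to roughly 100.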

    Download full text (pdf)
    fulltext
  • 21.
    Altıntaş, Özge
    et al.
    Ankara University, Faculty of Educational Sciences, Department of Educational Sciences, Educational Measurement and Evaluation, Ankara, Turkey.
    Wallin, Gabriel
    Université Côte d’Azur, Inria, CNRS, Laboratoire J. A. Dieudonné, team Maasai, Sophia-Antipolis, France.
    Equality of admission tests using kernel equating under the non-equivalent groups with covariates design2021In: International Journal of Assessment Tools in Education, E-ISSN 2148-7456, Vol. 8, no 4, p. 729-743Article in journal (Refereed)
    Abstract [en]

    Educational assessment tests are designed to measure the same psychological constructs over extended periods of time. This feature is important considering that test results are often used in the selection process for admittance to university programs. To ensure fair assessments, especially for those whose results weigh heavily in selection decisions, it is necessary to collect evidence demonstrating that the assessments are not biased, and to confirm that the scores obtained from different test forms have statistical equality. For this purpose, test equating has important functions, as it prevents bias generated by differences in the difficulty levels of different test forms, allows the scores obtained from different test forms to be reported on the same scale, and ensures that the reported scores communicate the same meaning. In this study, these important functions were evaluated using real college admission test data from different test administrations. The kernel equating method under the non-equivalent groups with covariates design was applied to determine whether the scores obtained from different time periods but measuring the same psychological constructs were statistically equivalent. The non-equivalent groups with covariates design was specifically used because the test groups of the admission test are non-equivalent and there are no anchor items. Results from the analyses showed that the test forms had different score distributions, and that the relationship was non-linear. The equating procedure was thus adjusted to eliminate these differences and thereby allow the tests to be used interchangeably.

  • 22.
    Amanuel, Meron
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    ON GENERATING THE PROBABILITY MASS FUNCTION USING FIBONACCI POWER SERIES2022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

This thesis focuses on generating a probability mass function using the Fibonacci sequence as the coefficients of a power series.

The discrete probability distribution, named the Fibonacci distribution, was formed by taking into consideration the recursive property of the Fibonacci sequence, the radius of convergence of the power series, and the additive property of mutually exclusive events. This distribution satisfies the requisites of a legitimate probability mass function.

Its cumulative distribution function and moment generating function are then derived, and the latter is used to generate moments of the distribution, specifically the mean and the variance.

The characteristics of some convergent sequences generated from the Fibonacci sequence are found useful in showing that the limiting form of the Fibonacci distribution is a geometric distribution. Lastly, the paper showcases applications and simulations of the Fibonacci distribution using MATLAB.
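Following the construction the abstract describes, a sketch of the resulting mass function (assuming the indexing F_1 = F_2 = 1, which is my choice, not stated in the abstract): the normalizing constant is the closed form of the Fibonacci power series, sum F_n x^n = x / (1 - x - x^2), valid inside its radius of convergence x < (sqrt(5) - 1) / 2:

```python
def fibonacci_pmf(x, nmax):
    """P(N = n) proportional to F_n * x^n for n >= 1, where F_n is the
    n-th Fibonacci number; requires 0 < x < (sqrt(5) - 1) / 2 so that
    the power series converges."""
    norm = x / (1.0 - x - x * x)   # closed form of sum_{n>=1} F_n x^n
    pmf = []
    a, b = 1, 1                    # F_1, F_2
    for n in range(1, nmax + 1):
        pmf.append(a * x ** n / norm)
        a, b = b, a + b            # Fibonacci recursion
    return pmf
```

Truncating at a large nmax, the probabilities sum to 1 up to a geometrically small tail.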

    Download full text (pdf)
    fulltext
  • 23.
    Andersdotter Persson, Anna
    Umeå University, Faculty of Social Sciences, Department of Statistics.
    Kalibrering som ett sätt att hantera bortfall: Vilken korrelation krävs mellan hjälp- och responsvariabler?2010Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Download full text (pdf)
    FULLTEXT01
  • 24.
    Andersson, Björn
    Umeå University, Faculty of Social Sciences, Department of Statistics.
    Consequences of near-unfaithfulness in a finite sample: a simulation study2010Independent thesis Advanced level (degree of Master (Two Years)), 10 credits / 15 HE creditsStudent thesis
    Download full text (pdf)
    FULLTEXT01
  • 25.
    Andersson, Björn
    et al.
    Uppsala universitet.
    Bränberg, Kenny
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Wiberg, Marie
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    kequate: The kernel method of test equating. R package version 1.1.02012Other (Other academic)
    Abstract [en]

Implements the kernel method of test equating using the CB, EG, SG, NEAT CE/PSE and NEC designs, supporting Gaussian, logistic and uniform kernels and unsmoothed and pre-smoothed input data.

  • 26.
    Andersson, Björn
    et al.
    Uppsala universitet.
    Bränberg, Kenny
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Wiberg, Marie
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Performing the Kernel Method of Test Equating with the Package kequate2013In: Journal of Statistical Software, E-ISSN 1548-7660, Vol. 55, no 6, p. 1-25Article in journal (Refereed)
    Abstract [en]

    In standardized testing it is important to equate tests in order to ensure that the test takers, regardless of the test version given, obtain a fair test. Recently, the kernel method of test equating, which is a conjoint framework of test equating, has gained popularity. The kernel method of test equating includes five steps: (1) pre-smoothing, (2) estimation of the score probabilities, (3) continuization, (4) equating, and (5) computing the standard error of equating and the standard error of equating difference. Here, an implementation has been made for six different equating designs: equivalent groups, single group, counter balanced, non-equivalent groups with anchor test using either chain equating or post- stratification equating, and non-equivalent groups using covariates. An R package for the kernel method of test equating called kequate is presented. Included in the package are also diagnostic tools aiding in the search for a proper log-linear model in the pre-smoothing step for use in conjunction with the R function glm.
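A stripped-down sketch of steps (3) and (4), Gaussian-kernel continuization followed by equipercentile equating. This omits the mean- and variance-preserving adjustment of the full kernel-equating framework, as well as the pre-smoothing and standard-error machinery that kequate implements; the function names and the bandwidth value are illustrative:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def cont_cdf(x, scores, probs, h=0.6):
    """Gaussian-kernel continuization of a discrete score distribution
    (simplified: no variance-preserving rescaling)."""
    return sum(p * phi((x - s) / h) for s, p in zip(scores, probs))

def equate(x, scores_x, probs_x, scores_y, probs_y, h=0.6):
    """Equipercentile equating e(x) = G^{-1}(F(x)), with the inverse
    found by bisection on the continuized CDF of form Y."""
    target = cont_cdf(x, scores_x, probs_x, h)
    lo, hi = min(scores_y) - 10.0, max(scores_y) + 10.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if cont_cdf(mid, scores_y, probs_y, h) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With identical score distributions the equating function is the identity, and shifting one form's scores by a constant shifts the equated scores by the same constant.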

  • 27.
    Andersson, Björn
    et al.
    Statistiska institutionen, Uppsala universitet.
    Waernbaum, Ingeborg
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Sensitivity analysis of violations of the faithfulness assumption2014In: Journal of Statistical Computation and Simulation, ISSN 0094-9655, E-ISSN 1563-5163, Vol. 84, no 7, p. 1608-1620Article in journal (Other academic)
    Abstract [en]

We study implications of violations of the faithfulness condition, due to parameter cancellations, for estimation of the DAG skeleton. Three settings are investigated: (i) faithfulness is guaranteed, (ii) faithfulness is not guaranteed, and (iii) the parameter distributions are concentrated around unfaithfulness (near-unfaithfulness). In a simulation study, the effects of the different settings are compared using the PC and MMPC algorithms. The results show that the performance in the faithful case is almost unchanged compared to the unrestricted case, whereas there is a general decrease in performance in the near-unfaithful case as compared to the unrestricted case. The response to near-unfaithful parameterisations is similar between the two algorithms, with the MMPC algorithm having higher true positive rates and the PC algorithm having lower false positive rates.

  • 28.
    Andersson, Björn
    et al.
    Beijing Normal University.
    Wiberg, Marie
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Item response theory observed-score kernel equating2017In: Psychometrika, ISSN 0033-3123, E-ISSN 1860-0980, Vol. 82, no 1, p. 48-66Article in journal (Refereed)
    Abstract [en]

    Item response theory (IRT) observed-score kernel equating is introduced for the non-equivalent groups with anchor test equating design using either chain equating or post-stratification equating. The equating function is treated in a multivariate setting and the asymptotic covariance matrices of IRT observed-score kernel equating functions are derived. Equating is conducted using the two-parameter and three-parameter logistic models with simulated data and data from a standardized achievement test. The results show that IRT observed-score kernel equating offers small standard errors and low equating bias under most settings considered.

  • 29. Andersson, Carolyn J.
    et al.
    Embretson, Susan
    Meulman, Jacqueline
    Moustaki, Irini
    von Davier, Alina A.
    Wiberg, Marie
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Yan, Duanli
    Stories of successful careers in psychometrics and what we can learn from them2020In: Quantitative psychology: 84th annual meeting of the Psychometric Society, Santiago, Chile, 2019 / [ed] Marie Wiberg, Dylan Molenaar, Jorge González, Ulf Böckenholt, and Jee-Seon Kim, New York: Springer, 2020, p. 1-17Chapter in book (Refereed)
    Abstract [en]

This paper was inspired by the presentations and discussions from the panel "Successful Careers in Academia and Industry and What We Can Learn from Them" that took place at the IMPS meeting in 2019. In this paper, we discuss what makes a career successful in academia and industry, and we provide examples from the past to the present. We include education and career paths as well as highlights of achievements as researchers and teachers. The paper provides a brief historical context for the representation of women in psychometrics and an insight into strategies for success in publishing, grant applications and promotion. The authors outline the importance of interdisciplinary work, inclusive citation approaches, and visibility of research in academia and industry. The personal stories provide a platform for considering the need for a supportive work environment for women and for work-life balance. The outcomes of these discussions and the reflections of the panel members are included in the paper.

  • 30.
    Andersson, Martin
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Mazouch, Marcus
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Binary classification for predicting propensity to buy flight tickets.: A study on whether binary classification can be used to predict Scandinavian Airlines customers’ propensity to buy a flight ticket within the next seven days.2019Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

A customer's propensity to buy a certain product is a widely researched field with applications in multiple industries. In this thesis it is shown that binary classification on data from Scandinavian Airlines can predict their customers' propensity to book a flight within the coming seven days. A comparison between logistic regression and support vector machines is presented, and logistic regression with a reduced number of variables is chosen as the final model, due to its simplicity and accuracy. The explanatory variables contain exclusively booking history, whilst customer demographics and search history are shown to be insignificant.

    Download full text (pdf)
    Binary classification for predicting propensity to buy flight tickets
  • 31.
    Andersson, Niklas
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Regression-Based Monte Carlo For Pricing High-Dimensional American-Style Options2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

Pricing different financial derivatives is an essential part of the financial industry. For some derivatives there exists a closed-form solution, but the pricing of high-dimensional American-style derivatives remains a challenging problem. This project focuses on the derivative called an option, and especially the pricing of American-style basket options, i.e. options with both an early exercise feature and multiple underlying assets. In high-dimensional problems, which is definitely the case for American-style options, Monte Carlo methods are advantageous. Therefore, in this thesis, regression-based Monte Carlo has been used to determine early exercise strategies for the option. The well-known Least Squares Monte Carlo (LSM) algorithm of Longstaff and Schwartz (2001) has been implemented and compared to Robust Regression Monte Carlo (RRM) by C. Jonen (2011). The difference between these methods is that robust regression is used instead of least squares regression to calculate continuation values of American-style options. Since robust regression is more stable against outliers, this approach is claimed by C. Jonen to give better estimates of the option price.

It was hard to compare the techniques without the duality approach of Andersen and Broadie (2004), so this method was added. The numerical tests then indicate that the exercise strategy determined using RRM produces a higher lower bound and a tighter upper bound compared to LSM. The difference between the upper and lower bounds could be up to 4 times smaller using RRM.

Importance sampling and quasi-Monte Carlo have also been used to reduce the variance in the estimation of the option price and to speed up the convergence rate.
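The regression step of the LSM algorithm can be sketched as follows: simulate risk-neutral GBM paths, then step backwards in time, regressing discounted continuation values on a basis of the asset price for in-the-money paths and exercising whenever intrinsic value exceeds the fitted continuation value. This is a minimal single-asset illustration with a quadratic basis and parameters chosen here; it is not the thesis's multi-asset implementation, nor the RRM variant:

```python
import random
from math import exp, sqrt

def _quadfit(xs, ys):
    """Least-squares fit y ~ b0 + b1*x + b2*x^2 via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    sx = [sum(x ** k for x in xs) for k in range(5)]
    A = [[sx[i + j] for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    out = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        out[r] = (b[r] - sum(A[r][c] * out[c] for c in range(r + 1, 3))) / A[r][r]
    return out

def lsm_american_put(S0, K, r, sigma, T, steps, n_paths, seed=1):
    """Longstaff-Schwartz (LSM) price of an American put on one GBM asset."""
    rng = random.Random(seed)
    dt = T / steps
    disc = exp(-r * dt)
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * sqrt(dt)
    paths = []
    for _ in range(n_paths):
        p = [S0]
        for _ in range(steps):
            p.append(p[-1] * exp(drift + vol * rng.gauss(0.0, 1.0)))
        paths.append(p)
    cash = [max(K - p[-1], 0.0) for p in paths]      # payoff at maturity
    for t in range(steps - 1, 0, -1):
        cont = [disc * c for c in cash]              # discounted continuation
        itm = [i for i in range(n_paths) if K - paths[i][t] > 0.0]
        if len(itm) >= 3:
            b0, b1, b2 = _quadfit([paths[i][t] for i in itm],
                                  [cont[i] for i in itm])
            for i in itm:
                s = paths[i][t]
                if K - s > b0 + b1 * s + b2 * s * s:
                    cont[i] = K - s                  # early exercise
        cash = cont
    return sum(disc * c for c in cash) / n_paths
```

For the classic Longstaff-Schwartz test case (S0 = 36, K = 40, r = 0.06, sigma = 0.2, T = 1) the estimate lands near the reference American put value of about 4.5, with Monte Carlo noise depending on the number of paths.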

    Download full text (pdf)
    fulltext
  • 32.
    Andersson Tano, Ingrid
    et al.
    Department of Engineering Sciences and Mathematics, Luleå University of Technology, Luleå, Sweden.
    Vännman, Kerstin
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    A multivariate process capability index based on the first principal component only2013In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 29, no 7, p. 987-1003Article in journal (Refereed)
    Abstract [en]

Often the quality of a process is determined by several correlated univariate variables. In such cases, the considered quality characteristic should be treated as a vector. Several different multivariate process capability indices (MPCIs) have been developed for such a situation, but confidence intervals or tests have been derived for only a handful of these. In practice, the conclusion about process capability needs to be drawn from a random sample, making confidence intervals or tests for the MPCIs important. Principal component analysis (PCA) is a well-known tool to use in multivariate situations. We present, under the assumption of multivariate normality, a new MPCI by applying PCA to a set of suitably transformed variables. We also propose a decision procedure, based on a test of this new index, to be used to decide whether a process can be claimed capable or not at a stated significance level. This new MPCI and its accompanying decision procedure avoid drawbacks found for previously published MPCIs with confidence intervals. By transforming the original variables, we need to consider the first principal component only. Hence, a multivariate situation can be converted into a familiar univariate process capability index. Furthermore, the proposed new MPCI has the property that if the index exceeds a given threshold value, the probability of non-conformance is bounded by a known value. Properties, like significance level and power, of the proposed decision procedure are evaluated through a simulation study in the two-dimensional case. A comparative simulation study between our new MPCI and an MPCI previously suggested in the literature is also performed. These studies show that our proposed MPCI with accompanying decision procedure has desirable properties and is worth studying further.
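To illustrate only the flavor of the univariate reduction (the index in the paper is defined on suitably transformed variables and comes with a full decision procedure, neither of which is reproduced here), a Cp-type quantity can be computed from the variance along the first principal component of two-dimensional data; the function name and the specification limits, which here refer to the principal-component scale, are illustrative:

```python
from math import sqrt

def first_pc_capability(data, lsl, usl):
    """Cp-style index (USL - LSL) / (6 * sigma), where sigma is the
    standard deviation along the first principal component of 2-D data,
    i.e. the square root of the largest eigenvalue of the 2x2 sample
    covariance matrix."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    sxx = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]] in closed form
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2.0 + sqrt(max(tr * tr / 4.0 - det, 0.0))
    return (usl - lsl) / (6.0 * sqrt(lam))
```

When the limits span exactly six standard deviations of the first component, the index equals 1, mirroring the interpretation of the univariate Cp.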

  • 33.
    Andersson Tano, Ingrid
    et al.
    Department of Engineering Sciences and Mathematics, Luleå University of Technology, Luleå, Sweden.
    Vännman, Kerstin
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Comparing confidence intervals for multivariate process capability indices2012In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 28, no 4, p. 481-495Article in journal (Refereed)
    Abstract [en]

    Multivariate process capability indices (MPCIs) are needed for process capability analysis when the quality of a process is determined by several univariate quality characteristics that are correlated. There are several different MPCIs described in the literature, but confidence intervals have been derived for only a handful of these. In practice, the conclusion about process capability must be drawn from a random sample. Hence, confidence intervals or tests for MPCIs are important. With a case study as a start and under the assumption of multivariate normality, we review and compare four different available methods for calculating confidence intervals of MPCIs that generalize the univariate index Cp. Two of the methods are based on the ratio of a tolerance region to a process region, and two are based on the principal component analysis. For two of the methods, we derive approximate confidence intervals, which are easy to calculate and can be used for moderate sample sizes. We discuss issues that need to be solved before the studied methods can be applied more generally in practice. For instance, three of the methods have approximate confidence levels only, but no investigation has been carried out on how good these approximations are. Furthermore, we highlight the problem with the correspondence between the index value and the probability of nonconformance. We also elucidate a major drawback with the existing MPCIs on the basis of the principal component analysis. Our investigation shows the need for more research to obtain an MPCI with confidence interval such that conclusions about the process capability can be drawn at a known confidence level and that a stated value of the MPCI limits the probability of nonconformance in a known way.

  • 34.
    Andersson, Tobias
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Golovlev, Jegor
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Prediktion av bruttoregionalprodukt: Prognosmodellering som förkortar tiden mellan officiella siffror och prognos2015Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [sv]

This work explored the feasibility and precision of GRP (gross regional product) prediction for three statistical methods: linear regression, regression trees and model trees. For model evaluation, test error estimates obtained through cross-validation and a comparison against Statistics Sweden's (SCB) GRP forecast were used. The results show that regression trees are not suitable for GRP prediction, while the other two methods succeed with a reasonable margin of error. The percentage deviation of the method closest to SCB's forecast averages 0.3, with a standard deviation of 3.0.

    Download full text (pdf)
    fulltext
  • 35.
    Andersson, Tore
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Borgström, Jonas
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Analys av bortfallets påverkan i Riksstrokes kvalitetsregister2020Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [sv]

Acute stroke is a serious and life-threatening disease that often leads to physical and cognitive impairments. Riksstroke is a quality register that collects and provides information about stroke care in Sweden. During 2019-2020 an extensive validation effort is under way in which the non-response within the register is analyzed. The purpose of this essay was, as part of that work, to analyze the extent of the non-response in several variables and whether there were differences between the sexes, age groups and hospitals. Two methods for handling missing data were then tested: complete case analysis and multiple imputation by chained equations (MICE). These were evaluated by comparing the estimated odds ratios for death within 90 days of hospital admission. The results showed large differences in non-response between men and women, between age groups and between hospitals, where much of the difference can probably be explained by the age of the patients. The two evaluated methods produced comparable results.

    Download full text (pdf)
    fulltext
  • 36.
    Andersson-Evelönn, Emma
    et al.
    Umeå University, Faculty of Medicine, Department of Medical Biosciences, Pathology.
    Vidman, Linda
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Källberg, David
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics. Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Landfors, Mattias
    Umeå University, Faculty of Medicine, Department of Medical Biosciences, Pathology.
    Liu, Xijia
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Ljungberg, Börje
    Umeå University, Faculty of Medicine, Department of Surgical and Perioperative Sciences, Urology and Andrology.
    Hultdin, Magnus
    Umeå University, Faculty of Medicine, Department of Medical Biosciences, Pathology.
    Degerman, Sofie
    Umeå University, Faculty of Medicine, Department of Medical Biosciences, Pathology. Umeå University, Faculty of Medicine, Department of Clinical Microbiology.
    Rydén, Patrik
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Combining epigenetic and clinicopathological variables improves prognostic prediction in clear cell Renal Cell CarcinomaManuscript (preprint) (Other academic)
  • 37. Anderstig, Christer
    et al.
    Snickars, Folke
    Westin, Jonas
    Umeå University, Faculty of Social Sciences, Centre for Regional Science (CERUM). Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Westlund, Hans
    Multiregional input-output tables for Swedish regions: trade modelling comparisons2023Report (Other academic)
    Download full text (pdf)
    fulltext
  • 38.
    Anderstig, Christer
    et al.
    WSP.
    Snickars, Folke
    KTH.
    Westin, Jonas
    Umeå University, Faculty of Social Sciences, Centre for Regional Science (CERUM).
    Westlund, Hans
    JIBS/KTH.
    Multiregionala inputoutputanalyser – idag och imorgon2022Report (Other academic)
    Abstract [sv]

This report on the validation, quality assurance and demonstration of Statistics Sweden's (SCB) MRIO tables was written within the framework of a Swedish development effort to use interregional input-output tables for research, policy and planning. The work covered by the report represents a raised ambition to build a knowledge base on how the regional economies in Sweden and internationally are linked through trade in goods and services.

Questions about which policy measures can be taken to develop Swedish competitiveness and promote Swedish climate work in an increasingly specialized and internationalized economy form the core of the report. It has long been considered nearly impossible to address these questions with methodology based on interregional input-output analysis. This report shows that such a raised ambition is both warranted and feasible, especially in Sweden with its well-developed economic statistics and its acceptance of evidence-based evaluation methods.

The report's main result is that there is reason to continue and further deepen the development work on interregional input-output tables that SCB has begun. For this work to succeed, additional resources are required for statistics collection, for the production of quality-assured statistical products, and for demonstrating the methodology's areas of application and its strengths relative to existing practice.

A fundamental problem identified in SCB's ongoing MRIO project, and addressed in the research report, is that the regional level is treated as subordinate to the national level in statistics collection and in the construction of the accounting systems.

The national accounts are built up from microdata without making use of the geographic information found in those microdata. The regional accounts are then produced via disaggregation and allocation methods of various kinds.

The report shows that this can lead to consistency problems for both the production system and final consumption. Private consumption is taken as an example, but the treatment of trade margins and transport margins is also discussed. It is shown how Canada has long integrated the regional and national levels, not least with regard to interprovincial and international trade.

The report also points to further gains from cooperation with ongoing work at Tillväxtanalys and Tillväxtverket as well as Trafikverket and Trafikanalys. These may consist of coordinating the different agencies' collection of primary data, for example on trade flows. A main result highlighted in the report is that these aspects can be addressed in a third phase of the project. Such deepening is needed in order to proceed with evaluation models for various policy areas, for example questions concerning competitiveness, employment measures and climate challenges. Models to build on already exist, for example Raps for regional economic evaluations.

The report further shows that the coming phase of SCB's MRIO project should include steps aimed at merging the Swedish regional and national accounts with the international work on consolidated input-output analysis.

The Swedish accounts have previously been adjusted, among others within the OECD, to support analysis of global value chains and climate policy measures. An adaptation has now taken place, and it is within reach to make Sweden a model for the joint evaluation of policy at the international and interregional levels.

The report was written in close connection with SCB's MRIO project. This should guarantee that the results of the present project can be put to use smoothly in phase three of the SCB project.

Phase two of this project is not yet concluded, although most of the development work is complete, not least the programming in R of an entirely new product for the collection and analysis of regional statistics. Quality assurance of the project's various parts is under way, with the ambition that it also be carried out by international experts. There are therefore good prospects that the next phase of the development project can constitute a major step forward towards a new generation of statistical products within SCB and a deeper basis for future policy analyses.

  • 39.
    Angelchev Shiryaev, Artem
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Karlsson, Johan
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Estimating Dependence Structures with Gaussian Graphical Models: A Simulation Study in R2021Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

Graphical models are powerful tools when estimating complex dependence structures among large sets of data. This thesis restricts the scope to undirected Gaussian graphical models. An initial predefined sparse precision matrix was specified to generate multivariate normally distributed data. Utilizing the generated data, a simulation study was conducted reviewing the accuracy, sensitivity and specificity of the estimated precision matrix. The graphical LASSO was applied using four different packages available in R, with seven selection criteria for estimating the tuning parameter.

    The findings are mostly in line with previous research. The graphical LASSO is generally faster and feasible in high dimensions, in contrast to stepwise model selection. A portion of the selection methods for estimating the optimal tuning parameter obtained the true network structure. The results provide an estimate of how well each model obtains the true, predefined dependence structure as featured in our simulation. As the simulated data used in this thesis is merely an approximation of real-world data, one should not take the results as the only aspect of consideration when choosing a model.

    Download full text (pdf)
    fulltext
  • 40.
    Angelov, Angel G.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Methods for interval-censored data and testing for stochastic dominance2018Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

This thesis includes four papers: the first three of them are concerned with methods for interval-censored data, while the fourth paper is devoted to testing for stochastic dominance.

    In many studies, the variable of interest is observed to lie within an interval instead of being observed exactly, i.e., each observation is an interval and not a single value. This type of data is known as interval-censored. It may arise in questionnaire-based studies when the respondent gives an answer in the form of an interval without having pre-specified ranges. Such data are called self-selected interval data. In this context, the assumption of noninformative censoring is not fulfilled, and therefore the existing methods for interval-censored data are not necessarily applicable.

    A problem of interest is to estimate the underlying distribution function. There are two main approaches to this problem: (i) parametric estimation, which assumes a particular functional form of the distribution, and (ii) nonparametric estimation, which does not rely on any distributional assumptions. In Paper A, a nonparametric maximum likelihood estimator for self-selected interval data is proposed and its consistency is shown. Paper B suggests a parametric maximum likelihood estimator. The consistency and asymptotic normality of the estimator are proven.
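
    The parametric approach (ii) can be sketched as follows: each interval observation contributes F(b) − F(a) to the likelihood. This is a minimal illustration under an assumed lognormal model and noninformative binning, not the self-selected-interval estimator of Paper B.

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)

    # Simulate exact lognormal values, then record only the interval each falls in.
    true_mu, true_sigma = 1.0, 0.5
    exact = rng.lognormal(true_mu, true_sigma, size=400)
    edges = np.array([0.0, 1.0, 2.0, 3.0, 5.0, 8.0, np.inf])
    idx = np.searchsorted(edges, exact, side="right")
    lo, hi = edges[idx - 1], edges[idx]

    def neg_log_lik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)  # log-parametrized to keep sigma positive
        # Each interval (lo, hi] contributes log{F(hi) - F(lo)} to the likelihood.
        cdf = lambda x: stats.lognorm.cdf(x, s=sigma, scale=np.exp(mu))
        p = cdf(hi) - cdf(lo)
        return -np.sum(np.log(np.clip(p, 1e-300, None)))

    res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    ```

    With informative, self-selected intervals the censoring mechanism itself must enter the likelihood, which is the point of the papers above.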

    Another interesting problem is to infer whether two samples arise from identical distributions. In Paper C, nonparametric two-sample tests suitable for self-selected interval data are suggested and their properties are investigated through simulations.

    Paper D concerns testing for stochastic dominance with uncensored data. The paper explores a testing problem which involves four hypotheses, that is, based on observations of two random variables X and Y, one wants to discriminate between four possibilities: identical survival functions, stochastic dominance of X over Y, stochastic dominance of Y over X, or crossing survival functions. Permutation-based tests suitable for two independent samples and for paired samples are proposed. The tests are applied to data from an experiment concerning the individual's willingness to pay for a given environmental improvement.

  • 41.
    Angelov, Angel G.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Nonparametric two-sample tests for informatively interval-censored dataManuscript (preprint) (Other academic)
  • 42.
    Angelov, Angel G.
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Ekström, Magnus
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Maximum likelihood estimation for survey data with informative interval censoring2019In: AStA Advances in Statistical Analysis, ISSN 1863-8171, E-ISSN 1863-818X, Vol. 103, no 2, p. 217-236Article in journal (Refereed)
    Abstract [en]

    Interval-censored data may arise in questionnaire surveys when, instead of being asked to provide an exact value, respondents are free to answer with any interval without having pre-specified ranges. In this context, the assumption of noninformative censoring is violated, and thus, the standard methods for interval-censored data are not appropriate. This paper explores two schemes for data collection and deals with the problem of estimation of the underlying distribution function, assuming that it belongs to a parametric family. The consistency and asymptotic normality of a proposed maximum likelihood estimator are proven. A bootstrap procedure that can be used for constructing confidence intervals is considered, and its asymptotic validity is shown. A simulation study investigates the performance of the suggested methods.

  • 43.
    Angelov, Angel G.
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Ekström, Magnus
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Nonparametric estimation for self-selected interval data collected through a two-stage approach2017In: Metrika (Heidelberg), ISSN 0026-1335, E-ISSN 1435-926X, Vol. 80, no 4, p. 377-399Article in journal (Refereed)
    Abstract [en]

    Self-selected interval data arise in questionnaire surveys when respondents are free to answer with any interval without having pre-specified ranges. This type of data is a special case of interval-censored data in which the assumption of noninformative censoring is violated, and thus the standard methods for interval-censored data (e.g. Turnbull's estimator) are not appropriate because they can produce biased results. Based on a certain sampling scheme, this paper suggests a nonparametric maximum likelihood estimator of the underlying distribution function. The consistency of the estimator is proven under general assumptions, and an iterative procedure for finding the estimate is proposed. The performance of the method is investigated in a simulation study.
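
    For context, the classical self-consistency (EM) iteration behind Turnbull's estimator can be sketched on a fixed support grid; the paper's estimator modifies this idea to account for the informative, self-selected intervals, which is not reproduced here. The data below are hypothetical.

    ```python
    import numpy as np

    # Interval-censored observations: each value is known to lie in [lo, hi].
    lo = np.array([0.0, 1.0, 0.0, 2.0, 1.0])
    hi = np.array([2.0, 3.0, 1.0, 4.0, 4.0])
    support = np.array([0.5, 1.5, 2.5, 3.5])       # candidate mass points

    # A[i, j] = True if support point j is inside observation i's interval.
    A = (support >= lo[:, None]) & (support <= hi[:, None])

    p = np.full(len(support), 1.0 / len(support))  # initial probability mass
    for _ in range(200):
        # E-step: split each observation over the support points it covers;
        # M-step: average the resulting weights to update the mass function.
        w = A * p / (A @ p)[:, None]
        p = w.mean(axis=0)
    ```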

  • 44.
    Angelov, Angel G.
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Ekström, Magnus
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Department of Forest Resource Management, Swedish University of Agricultural Sciences, Umeå, Sweden.
    Tests of stochastic dominance with repeated measurements data2023In: AStA Advances in Statistical Analysis, ISSN 1863-8171, E-ISSN 1863-818X, Vol. 107, no 3, p. 443-467Article in journal (Refereed)
    Abstract [en]

    The paper explores a testing problem which involves four hypotheses, that is, based on observations of two random variables X and Y, we wish to discriminate between four possibilities: identical survival functions, stochastic dominance of X over Y, stochastic dominance of Y over X, or crossing survival functions. Four-decision testing procedures for repeated measurements data are proposed. The tests are based on a permutation approach and do not rely on distributional assumptions. One-sided versions of the Cramér–von Mises, Anderson–Darling, and Kolmogorov–Smirnov statistics are utilized. The consistency of the tests is proven. A simulation study shows good power properties and control of false-detection errors. The suggested tests are applied to data from a psychophysical experiment.

  • 45.
    Angelov, Angel G.
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Ekström, Magnus
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Department of Forest Resource Management, Swedish University of Agricultural Sciences, Umeå, Sweden.
    Kriström, Bengt
    Department of Forest Economics, Swedish University of Agricultural Sciences, Umeå, Sweden.
    Nilsson, Mats E.
    Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden.
    Four-decision tests for stochastic dominance, with an application to environmental psychophysics2019In: Journal of mathematical psychology (Print), ISSN 0022-2496, E-ISSN 1096-0880, Vol. 93, article id 102281Article in journal (Refereed)
    Abstract [en]

    If the survival function of a random variable X lies to the right of the survival function of a random variable Y, then X is said to stochastically dominate Y. Inferring stochastic dominance is particularly complicated because comparing survival functions raises four possible hypotheses: identical survival functions, dominance of X over Y, dominance of Y over X, or crossing survival functions. In this paper, we suggest four-decision tests for stochastic dominance suitable for paired samples. The tests are permutation-based and do not rely on distributional assumptions. One-sided Cramér–von Mises and Kolmogorov–Smirnov statistics are employed but the general idea may be utilized with other test statistics. The power to detect dominance and the different types of wrong decisions are investigated in an extensive simulation study. The proposed tests are applied to data from an experiment concerning the individual’s willingness to pay for a given environmental improvement.
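
    The permutation idea for paired samples can be sketched with one of the ingredients, a one-sided Kolmogorov-Smirnov statistic whose null distribution is obtained by randomly swapping within pairs. The full four-decision procedure combines two such one-sided tests (and their reversals); only a single one-sided permutation p-value is shown here, on synthetic data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def ecdf(sample, grid):
        return (sample[:, None] <= grid[None, :]).mean(axis=0)

    def one_sided_ks(a, b, grid):
        # Large values of sup{F_a - F_b} speak against dominance of a over b.
        return np.max(ecdf(a, grid) - ecdf(b, grid))

    x = rng.normal(0.5, 1.0, size=60)   # X tends to take larger values
    y = rng.normal(0.0, 1.0, size=60)   # paired with x, element by element
    grid = np.sort(np.concatenate([x, y]))

    obs = one_sided_ks(x, y, grid)
    n_perm, count = 500, 0
    for _ in range(n_perm):
        flip = rng.random(60) < 0.5      # randomly swap within each pair
        xp = np.where(flip, y, x)
        yp = np.where(flip, x, y)
        count += one_sided_ks(xp, yp, grid) >= obs
    p_value = (count + 1) / (n_perm + 1)
    ```

    Swapping within pairs (rather than pooling all observations) preserves the dependence between paired measurements under the null.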

  • 46.
    Angelov, Angel G.
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Ekström, Magnus
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics.
    Kriström, Bengt
    Umeå University, Faculty of Social Sciences, Center for Environmental and Resource Economics (CERE). Department of Forest Economics, Swedish University of Agricultural Sciences.
    Nilsson, Mats E.
    Gösta Ekman Laboratory, Department of Psychology, Stockholm University.
    Testing for stochastic dominance: Procedures with four hypothesesManuscript (preprint) (Other academic)
  • 47.
    Angelov, Angel G.
    et al.
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Department of Probability, Operations Research and Statistics, Faculty of Mathematics and Informatics, Sofia University St. Kliment Ohridski, Sofia, Bulgaria.
    Ekström, Magnus
    Umeå University, Faculty of Social Sciences, Umeå School of Business and Economics (USBE), Statistics. Department of Forest Resource Management, Swedish University of Agricultural Sciences, Umeå, Sweden.
    Puzon, Klarizze
    United Nations University World Institute for Development Economics Research, Helsinki, Finland.
    Arcenas, Agustin
    School of Economics, University of the Philippines, Diliman, Quezon City, Philippines.
    Kriström, Bengt
    Department of Forest Economics, Swedish University of Agricultural Sciences, Umeå, Sweden.
    Quantile regression with interval-censored data in questionnaire-based studies2022In: Computational statistics (Zeitschrift), ISSN 0943-4062, E-ISSN 1613-9658Article in journal (Refereed)
    Abstract [en]

    Interval-censored data can arise in questionnaire-based studies when the respondent gives an answer in the form of an interval without having pre-specified ranges. Such data are called self-selected interval data. In this case, the assumption of independent censoring is not fulfilled, and therefore the ordinary methods for interval-censored data are not suitable. This paper explores a quantile regression model for self-selected interval data and suggests an estimator based on estimating equations. The consistency of the estimator is shown. Bootstrap procedures for constructing confidence intervals are considered. A simulation study indicates satisfactory performance of the proposed methods. An application to data concerning price estimates is presented.
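
    As background for the model class, quantile regression for exact responses minimizes the check (pinball) loss; the paper's estimating-equation approach extends this idea to self-selected interval responses, which is not reproduced here. A minimal median-regression sketch on synthetic data:

    ```python
    import numpy as np
    from scipy import optimize

    rng = np.random.default_rng(3)
    tau = 0.5                                      # target quantile level

    # Linear model: the tau-quantile of y given x is b0 + b1 * x.
    x = rng.uniform(0, 10, size=300)
    y = 2.0 + 0.5 * x + rng.standard_cauchy(300)   # heavy-tailed noise

    def check_loss(beta):
        r = y - (beta[0] + beta[1] * x)
        # Pinball loss: tau * r for r > 0, (tau - 1) * r for r < 0.
        return np.sum(r * (tau - (r < 0)))

    res = optimize.minimize(check_loss, x0=[0.0, 0.0], method="Nelder-Mead")
    b0_hat, b1_hat = res.x
    ```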

  • 48.
    Anthony, Tim
    Umeå University, Faculty of Science and Technology, Department of Physics.
    On the Topic of Unconstrained Black-Box Optimization with Application to Pre-Hospital Care in Sweden: Unconstrained Black-Box Optimization2021Independent thesis Advanced level (professional degree), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    In this thesis, the theory and application of black-box optimization methods are explored. More specifically, we looked at two families of algorithms: descent methods and response surface methods (closely related to trust region methods). We also looked at the possibility of using a dimension reduction technique called active subspace, which utilizes sampled gradients. This dimension reduction technique can make the descent methods more suitable for high-dimensional problems, and turned out to be most effective when the data have a ridge-like structure. Finally, the optimization methods were used on a real-world problem in the context of pre-hospital care, where the objective is to minimize the ambulance response times in the municipality of Umeå by changing the positions of the ambulances.

    Before applying the methods on the real-world ambulance problem, a simulation study was performed on synthetic data, aiming at finding the strengths and weaknesses of the different models when applied to different test functions, at different levels of noise.

    The results showed that we could improve the ambulance response times across several different performance metrics compared to the response times of the current ambulance positions. This indicates that there exist adjustments that can benefit the pre-hospital care in the municipality of Umeå. However, since the models in this thesis find local rather than global optima, there might still exist even better ambulance positions that can improve the response time further.
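
    The active-subspace idea mentioned above can be sketched directly: eigenvectors of the average outer product of sampled gradients reveal the directions along which the objective actually varies. The ridge function below is a hypothetical stand-in for the thesis's test functions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    d = 10
    w = np.array([3.0, 1.0] + [0.0] * (d - 2))   # f varies only along w: a ridge

    def grad_f(x):
        # f(x) = sin(w . x) has gradient cos(w . x) * w, always parallel to w.
        return np.cos(w @ x) * w

    # Average the gradient outer products over random sample points.
    samples = rng.normal(size=(200, d))
    grads = np.array([grad_f(x) for x in samples])
    C = grads.T @ grads / len(grads)

    eigvals, eigvecs = np.linalg.eigh(C)
    # The dominant eigenvector (last column for eigh) spans the active subspace.
    active_dir = eigvecs[:, -1]
    ```

    An optimizer can then search only along `active_dir` (and any other dominant eigenvectors), reducing the effective dimension of the problem.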

  • 49.
    Anton, Rikard
    et al.
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Cohen, David
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics. Univ Innsbruck, Dept Math, Innsbruck, Austria.
    Larsson, Stig
    Wang, Xiaojie
    Full discretization of semilinear stochastic wave equations driven by multiplicative noise2016In: SIAM Journal on Numerical Analysis, ISSN 0036-1429, E-ISSN 1095-7170, Vol. 54, no 2, p. 1093-1119Article in journal (Refereed)
    Abstract [en]

    A fully discrete approximation of the semilinear stochastic wave equation driven by multiplicative noise is presented. A standard linear finite element approximation is used in space, and a stochastic trigonometric method is used for the temporal approximation. This explicit time integrator allows for mean-square error bounds independent of the space discretization and thus does not suffer from a step size restriction as in the often used Störmer–Verlet leapfrog scheme. Furthermore, it satisfies an almost trace formula (i.e., a linear drift of the expected value of the energy of the problem). Numerical experiments are presented and confirm the theoretical results.

  • 50.
    Arnqvist, Per
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Allowing Left Truncated and Censored Fertility Data in the Normal Waiting Model1995Report (Other academic)
    Abstract [en]

    Models describing marital fertility are under consideration. In Arnqvist (Research Report 2, Mathematical Statistics, Umeå University), a normal approximation of the waiting model was introduced. In this report, a modification of the normal approximation is suggested. This specification allows the data to be left truncated and censored, which makes it possible to apply the normally approximated waiting model to datasets such as those from the United Nations World Fertility Services. The model is appropriate except for extremely high fertility intensities, where it gives rise to bias in the parameter estimates. In such cases, a bootstrap method is therefore suggested to estimate and correct the bias. This makes the normally approximated waiting model a good competitor to the well-known Poisson and Coale-Trussell models, while also using an interpretable fertility specification.
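
    The bootstrap bias correction mentioned above follows a generic recipe: re-estimate on resamples, take the average deviation from the original estimate as the bias, and subtract it. A minimal sketch, shown for a deliberately biased estimator (the plug-in variance) rather than the waiting model itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    data = rng.normal(10.0, 2.0, size=50)

    # The plug-in variance estimator (divisor n) is biased downward.
    estimate = np.var(data)

    # Bootstrap: re-estimate on resamples, compare their average to the original.
    boot = np.array([np.var(rng.choice(data, size=data.size, replace=True))
                     for _ in range(2000)])
    bias_hat = boot.mean() - estimate
    corrected = estimate - bias_hat        # bias-corrected estimate
    ```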
