umu.se Publications
1 - 50 of 328
  • 1. Abel, Olubunmi
    et al.
    Shatunov, Aleksey
    Jones, Ashley R.
    Andersen, Peter M.
    Umeå University, Faculty of Medicine, Department of Pharmacology and Clinical Neuroscience, Neurology.
    Powell, John F.
    Al-Chalabi, Ammar
    Development of a Smartphone App for a Genetics Website: The Amyotrophic Lateral Sclerosis Online Genetics Database (ALSoD). 2013. In: JMIR mHealth and uHealth, E-ISSN 2291-5222, Vol. 1, no. 2, article id e18. Article in journal (Refereed)
    Abstract [en]

    Background: The ALS Online Genetics Database (ALSoD) website holds mutation, geographical, and phenotype data on genes implicated in amyotrophic lateral sclerosis (ALS) and links to bioinformatics resources, publications, and tools for analysis. On average, there are 300 unique visits per day, suggesting a high demand from the research community. To enable wider access, we developed a mobile-friendly version of the website and a smartphone app. Objective: We sought to compare data traffic before and after implementation of a mobile version of the website to assess utility. Methods: We identified the most frequently viewed pages using Google Analytics and our in-house analytic monitoring. For these, we optimized the content layout of the screen, reduced image sizes, and summarized available information. We used the Microsoft .NET framework mobile detection property (the IsMobileDevice property of the Request.Browser object, in conjunction with HttpRequest.UserAgent), which returns a true value if the browser is a recognized mobile device. For app development, we used the Eclipse integrated development environment with Android plug-ins. We wrapped the mobile website version with the WebView object in Android. Simulators were downloaded to test and debug the applications. Results: The website automatically detects access from a mobile phone and redirects pages to fit the smaller screen. Because the amount of data stored on ALSoD is very large, the available information for display using smartphone access is deliberately restricted to improve usability. Visits to the website increased from 2231 to 2820, yielding a 26% increase from the pre-mobile to post-mobile period, and an increase from 103 to 340 visits (230%) using mobile devices (including tablets). The smartphone app is currently available on BlackBerry and Android devices and will be available shortly on iOS as well. Conclusions: Further development of the ALSoD website has allowed access through smartphones and tablets, either through the website or directly through a mobile app, making genetic data stored on the database readily accessible to researchers and patients across multiple devices.

  • 2.
    Adewole, Kayode S.
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Department of Computer Science, University of Ilorin, Ilorin, Nigeria.
    Torra, Vicenç
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    DFTMicroagg: a dual-level anonymization algorithm for smart grid data. 2022. In: International Journal of Information Security, ISSN 1615-5262, E-ISSN 1615-5270, Vol. 21, p. 1299-1321. Article in journal (Refereed)
    Abstract [en]

    The introduction of advanced metering infrastructure (AMI) smart meters has given rise to fine-grained electricity usage data at different levels of time granularity. AMI collects high-frequency daily energy consumption data that enables utility companies and data aggregators to perform a rich set of grid operations such as demand response, grid monitoring, load forecasting and many more. However, privacy concerns associated with daily energy consumption data have been raised. Existing studies on data anonymization for smart grid data have focused on the direct application of perturbation algorithms, such as microaggregation, to protect the privacy of consumers. In this paper, we empirically show that reliance on microaggregation alone is not sufficient to protect smart grid data. Therefore, we propose DFTMicroagg, an algorithm that provides a dual level of perturbation to improve privacy. The algorithm leverages the benefits of the discrete Fourier transform (DFT) and microaggregation to provide an additional layer of protection. We evaluated our algorithm on two publicly available smart grid datasets with millions of smart meter readings. Experimental results based on clustering analysis using k-Means, classification via the k-nearest neighbor (kNN) algorithm, and mean hourly energy consumption forecasting using a Seasonal Auto-Regressive Integrated Moving Average with eXogenous factors (SARIMAX) model further demonstrate the applicability of the proposed method. Our approach provides utility companies with more flexibility to control the level of protection for their published energy data.

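The dual-level idea described in the abstract (microaggregation followed by a DFT-based transform) can be illustrated with a toy sketch. This is not the authors' DFTMicroagg implementation: grouping consecutive readings and the low-pass cut-off are simplifying assumptions.

```python
import cmath
import math

def microaggregate(series, k=3):
    """First perturbation level: replace each group of k consecutive
    readings with the group mean. (Real microaggregation groups *similar*
    records; grouping consecutive readings is a simplification.)"""
    out = []
    for start in range(0, len(series), k):
        group = series[start:start + k]
        out.extend([sum(group) / len(group)] * len(group))
    return out

def dft_lowpass(series, keep=2):
    """Second perturbation level: zero all but the `keep` lowest DFT
    frequencies (and their conjugates), then invert the transform,
    discarding the fine-grained detail an attacker could exploit."""
    n = len(series)
    coeffs = [sum(series[t] * cmath.exp(-2j * math.pi * f * t / n)
                  for t in range(n)) for f in range(n)]
    for f in range(n):
        if min(f, n - f) >= keep:
            coeffs[f] = 0
    return [sum(coeffs[f] * cmath.exp(2j * math.pi * f * t / n)
                for f in range(n)).real / n for t in range(n)]

# hypothetical meter readings for one household
readings = [1.2, 1.4, 1.3, 5.0, 5.2, 5.1, 0.8, 0.9, 1.0]
protected = dft_lowpass(microaggregate(readings, k=3), keep=2)
```

Keeping fewer DFT coefficients gives stronger protection but lower utility, which mirrors the flexibility the paper highlights for utility companies.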
  • 3.
    Ahmetaj, Shqiponja
    et al.
    TU Wien, Austria.
    Ortiz, Magdalena
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Oudshoorn, Anouk
    TU Wien, Austria.
    Šimkus, Mantas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Reconciling SHACL and ontologies: semantics and validation via rewriting. 2023. In: ECAI 2023 / [ed] Kobi Gal; Ann Nowé; Grzegorz J. Nalepa; Roy Fairstein; Roxana Rădulescu, IOS Press, 2023, p. 27-35. Conference paper (Refereed)
    Abstract [en]

    OWL and SHACL are two prominent W3C standards for managing RDF graphs, the data model of the Web. They are used for different purposes and make different assumptions about the completeness of data: SHACL is used for expressing integrity constraints on complete data, while OWL allows inferring implicit facts from incomplete data; SHACL reasoners perform validation, while OWL reasoners do logical inference. Integrating these two tasks into one uniform approach is a relevant but challenging problem. The SHACL standard envisions graph validation in combination with OWL entailment, but it does not provide technical guidance on how to realize this. To address this problem, we propose a new intuitive semantics for validating SHACL constraints with OWL 2 QL ontologies based on a suitable notion of the chase. We propose an algorithm that rewrites a set of recursive SHACL constraints (with stratified negation) and an OWL 2 QL ontology into a stand-alone set of SHACL constraints that preserves validation for every input graph, which can in turn be evaluated using an off-the-shelf SHACL validator. We show that validation in this setting is EXPTIME-complete in combined complexity, but only PTIME-complete in data complexity, i.e., if the constraints and the ontology are fixed.

  • 4.
    Ait-Mlouk, Addi
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Agouti, Tarik
    Cadi Ayyad University.
    DM-MCDA: A web-based platform for data mining and multiple criteria decision analysis: A case study on road accident. 2019. In: SoftwareX, E-ISSN 2352-7110, Vol. 10, article id 100323. Article in journal (Refereed)
    Abstract [en]

    Today's ultra-connected world is generating a huge amount of data stored in databases and cloud environments, especially in the transportation domain. These databases need to be processed and analyzed to extract useful information and present it as a valid element for transportation managers for further use, such as road safety, shipping delays, and shipping optimization. The potential of data mining algorithms is largely untapped; this paper shows how large-scale techniques such as association rule analysis, multiple criteria analysis, and time series analysis can improve road safety by identifying hot-spots in advance and giving drivers the chance to avoid dangers. We propose DM-MCDA, a framework based on association rule mining as a preliminary task to extract relationships between variables related to road accidents, which then integrates multiple criteria analysis to help decision-makers choose the most relevant rules. The developed system is flexible and allows intuitive creation and execution of different algorithms for an extensive range of road traffic topics. DM-MCDA can be expanded with new topics on demand, rendering knowledge extraction more robust and providing meaningful information that could help in developing suitable policies for decision-makers.

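The association-rule stage of the pipeline rests on the standard support and confidence measures. A minimal sketch over invented accident records follows; the attribute names are hypothetical, not taken from the DM-MCDA dataset.

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of the itemset."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(transactions, lhs, rhs):
    """Confidence of the rule lhs -> rhs, i.e. P(rhs | lhs)."""
    return support(transactions, set(lhs) | set(rhs)) / support(transactions, lhs)

# toy accident records: weather, light condition, severity
accidents = [
    {"rain", "night", "severe"},
    {"rain", "day", "minor"},
    {"rain", "night", "severe"},
    {"dry", "day", "minor"},
]

rain_night_severe = confidence(accidents, {"rain", "night"}, {"severe"})
```

In the full framework, the MCDA step would then rank such candidate rules against the decision-maker's criteria rather than relying on confidence alone.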
  • 5.
    Aler Tubella, Andrea
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Theodorou, Andreas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Nieves, Juan Carlos
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Interrogating the black box: Transparency through information-seeking dialogues. 2021. In: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2021, p. 106-114. Conference paper (Refereed)
    Abstract [en]

    This paper is preoccupied with the following question: given a (possibly opaque) learning system, how can we understand whether its behaviour adheres to governance constraints? The answer can be quite simple: we just need to "ask" the system about it. We propose to construct an investigator agent that queries a learning agent (the suspect agent) to investigate its adherence to a given ethical policy in the context of an information-seeking dialogue, modeled in formal argumentation settings. This formal dialogue framework is the main contribution of this paper. Through it, we break down compliance-checking mechanisms into three modular components, each of which can be tailored to various needs in a vast number of ways: an investigator agent, a suspect agent, and an acceptance protocol determining whether the responses of the suspect agent comply with the policy. This acceptance protocol presents a fundamentally different approach to aggregation: rather than using quantitative methods to deal with the non-determinism of a learning system, we leverage argumentation semantics to investigate the notion of properties holding consistently. Overall, we argue that the introduced formal dialogue framework opens many avenues both in the area of compliance checking and in the analysis of properties of opaque systems.

  • 6.
    Ali, Irfan
    et al.
    Department of Computer System Engineering, Institute of Business Administration Sukkur, Sukkur, Pakistan.
    Shehzad, Muhammad Naeem
    Department of Electrical and Computer Engineering, COMSATS University Islamabad, Lahore Campus, Lahore, Pakistan.
    Bashir, Qaisar
    Intel Corporation, Austin, TX, United States.
    Elahi, Haroon
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Awais, Muhammad Naeem
    Department of Electrical and Computer Engineering, COMSATS University Islamabad, Lahore Campus, Lahore, Pakistan.
    Geman, Oana
    Department of Computers, Electronics and Automation, Stefan Cel Mare University of Suceava, Suceava, Romania.
    Liu, Pin
    School of Computer Science and Engineering, Central South University, Changsha, China.
    A thermal-aware scheduling algorithm for reducing thermal risks in DAG-based applications in cyber-physical systems. 2023. In: Ubiquitous security: second international conference, UbiSec 2022, Zhangjiajie, China, December 28-31, 2022, revised selected papers / [ed] Guojun Wang; Kim-Kwang Raymond Choo; Jie Wu; Ernesto Damiani, Singapore: Springer, 2023, p. 497-508. Conference paper (Refereed)
    Abstract [en]

    Directed Acyclic Graph (DAG)-based scheduling applications are critical to resource allocation in the Cloud, Edge, and Fog layers of cyber-physical systems (CPS). However, thermal anomalies in DVFS-enabled homogeneous multiprocessor systems (HMSS) may be exploited by malicious applications, posing risks to the availability of the underlying CPS and thereby affecting its trustworthiness. This paper proposes an algorithm to address the thermal risks in DVFS-enabled HMSS for periodic DAG-based applications. It also improves on current list-scheduling-based Depth-First and Breadth-First techniques without violating the timing constraints of the system. We test the algorithm using standard benchmarks and synthetic applications in a simulation setup. The results show a reduction in temperature peaks by up to 30%, in average temperature by up to 22%, in temperature variation by up to 3 times, and in spatial temperature gradients by up to 4 times, compared to conventional Depth-First scheduling algorithms.

  • 7.
    Ali-Eldin, Ahmed
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Capacity Scaling for Elastic Compute Clouds. 2013. Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Cloud computing is a computing model that allows better management, higher utilization and reduced operating costs for datacenters while providing on-demand resource provisioning for different customers. Datacenters are often enormous in size and complexity. In order to fully realize the cloud computing model, efficient cloud management software systems that can deal with the datacenter size and complexity need to be designed and built.

    This thesis studies automated cloud elasticity management, one of the main and crucial datacenter management capabilities. Elasticity can be defined as the ability of cloud infrastructures to rapidly change the amount of resources allocated to an application in the cloud according to its demand. This work introduces algorithms, techniques and tools that a cloud provider can use to automate dynamic resource provisioning, allowing the provider to better manage the datacenter resources. We design two automated elasticity algorithms for cloud infrastructures that predict the future load for an application running on the cloud. It is assumed that a request is either serviced or dropped after one time unit, that all requests are homogeneous and that it takes one time unit to add or remove resources. We discuss the different design approaches for elasticity controllers and evaluate our algorithms using real workload traces. We compare the performance of our algorithms with a state-of-the-art controller. We extend the design of the better performing of our two controllers and drop the assumptions made during the first design. The controller is evaluated with a set of different real workloads.

    All controllers are designed using certain assumptions on the underlying system model and operating conditions. This limits a controller's performance if the model or operating conditions change. With this as a starting point, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components, an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business-level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider to improve the QoS provided to the customers.

  • 8.
    Ali-Eldin, Ahmed
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    El-Ansary, Sameh
    Nile University.
    Optimizing Replica Placement in Peer-Assisted Cloud Stores. 2011. Conference paper (Refereed)
    Abstract [en]

    Peer-assisted cloud storage systems use the unutilized resources of the clients subscribed to a storage cloud to offload the servers of the cloud. The provider distributes data replicas on the clients instead of replicating on the local infrastructure. These replicas allow the provider to offer a highly available, reliable and cheap service at a reduced cost. In this work we introduce NileStore, a protocol for replication management in peer-assisted cloud storage. The protocol converts the replica placement problem into a linear task assignment problem. We design five utility functions to optimize placement, taking into account the bandwidth, free storage and the size of data in need of replication on each peer. The problem is solved using a suboptimal greedy optimization algorithm. We show our simulation results using the different utilities under realistic network conditions. Our results show that our approach offloads the cloud servers by about 90% compared to a random placement algorithm while consuming 98.5% less resources compared to a normal storage cloud.

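The abstract casts placement as a linear task assignment problem solved by a suboptimal greedy algorithm. A generic sketch of that pattern follows; the utility values are invented toy numbers standing in for the paper's bandwidth, free-storage and data-size utilities.

```python
def greedy_assign(utility):
    """Repeatedly pick the highest-utility (replica, peer) pair whose
    replica and peer are both still unassigned. Suboptimal but fast,
    the usual trade-off of greedy assignment."""
    pairs = sorted(((u, r, p) for (r, p), u in utility.items()), reverse=True)
    used_r, used_p, assignment = set(), set(), {}
    for u, r, p in pairs:
        if r not in used_r and p not in used_p:
            assignment[r] = p
            used_r.add(r)
            used_p.add(p)
    return assignment

# utility[(replica, peer)]: hypothetical combined score per candidate placement
utility = {("r1", "p1"): 0.9, ("r1", "p2"): 0.4,
           ("r2", "p1"): 0.8, ("r2", "p2"): 0.7}
placement = greedy_assign(utility)
```

An exact solver (e.g. the Hungarian algorithm) would guarantee the optimal assignment at higher cost, which is why a greedy heuristic is attractive at cloud scale.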
  • 9.
    Ali-Eldin, Ahmed
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    El-Ansary, Sameh
    Nile University.
    Replica Placement in Peer-Assisted Clouds: An Economic Approach. 2011. In: Lecture Notes in Computer Science / [ed] Pascal Felber, Romain Rouvoy, Springer, 2011, p. 208-213. Conference paper (Refereed)
    Abstract [en]

    We introduce NileStore, a replica placement algorithm based on an economic model for use in peer-assisted cloud storage. The algorithm uses storage and bandwidth resources of peers to offload the cloud provider's resources. We formulate the placement problem as a linear task assignment problem where the aim is to minimize the time needed for file replicas to reach a certain desired threshold. Using simulation, we show that our approach reduces the probability of a file being served from the provider's servers by more than 97.5% under realistic network conditions.

  • 10.
    Ali-Eldin, Ahmed
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Ilyushkin, Alexey
    Ghit, Bogdan
    Herbst, Nikolas Roman
    Papadopoulos, Alessandro
    Iosup, Alexandru
    Which Cloud Auto-Scaler Should I Use for my Application?: Benchmarking Auto-Scaling Algorithms. 2016. In: Proceedings of the 2016 ACM/SPEC International Conference on Performance Engineering (ICPE'16), Association for Computing Machinery (ACM), 2016, p. 131-132. Conference paper (Refereed)
  • 11.
    Ali-Eldin, Ahmed
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kihl, Maria
    Lund University.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Analysis and characterization of a Video-on-Demand service workload. 2015. In: Proceedings of the 6th ACM Multimedia Systems Conference, MMSys 2015, ACM Digital Library, 2015, p. 189-200. Conference paper (Refereed)
    Abstract [en]

    Video-on-Demand (VoD) and video sharing services account for a large percentage of the total downstream Internet traffic. In order to provide a better understanding of the load on these services, we analyze and model a workload trace from a VoD service provided by a major Swedish TV broadcaster. The trace contains over half a million requests generated by more than 20,000 unique users. Among other things, we study the request arrival rate, the inter-arrival time, the spikes in the workload, the video popularity distribution, the streaming bit-rate distribution and the video duration distribution. Our results show that the user and session arrival rates for the TV4 workload do not follow a Poisson process. The arrival rate distribution is modeled using a log-normal distribution, while the inter-arrival time distribution is modeled using a stretched exponential distribution. We observe the "impatient user" behavior, where users abandon streaming sessions minutes or even seconds after starting them. Both very popular and non-popular videos are particularly affected by impatient users. We investigate whether this behavior is an invariant for VoD workloads.

  • 12.
    Ali-Eldin, Ahmed
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Rezaie, Ali
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Mehta, Amardeep
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Razroev, Stanislav
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Sjöstedt-de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Seleznjev, Oleg
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    How will your workload look like in 6 years?: Analyzing Wikimedia's workload. 2014. In: Proceedings of the 2014 IEEE International Conference on Cloud Engineering (IC2E 2014) / [ed] Lisa O'Conner, IEEE Computer Society, 2014, p. 349-354. Conference paper (Refereed)
    Abstract [en]

    Accurate understanding of workloads is key to efficient cloud resource management as well as to the design of large-scale applications. We analyze and model the workload of Wikipedia, one of the world's largest web sites. With descriptive statistics, time-series analysis, and polynomial splines, we study the trend and seasonality of the workload, its evolution over the years, and also investigate patterns in page popularity. Our results indicate that the workload is highly predictable with a strong seasonality. Our short term prediction algorithm is able to predict the workload with a Mean Absolute Percentage Error of around 2%.
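The quoted accuracy figure uses Mean Absolute Percentage Error (MAPE), a standard forecast metric. For reference, a minimal definition follows; the request counts below are made up, not Wikimedia data.

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent: the mean of
    |actual - predicted| / |actual| over all points, times 100."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

# hypothetical hourly request counts vs. a short-term forecast
actual = [1000, 1200, 1100]
forecast = [990, 1230, 1080]
error = mape(actual, forecast)  # a low value indicates a predictable workload
```

A MAPE around 2%, as reported above, means the forecast is off by about 2% of the true load on average.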

  • 13.
    Ali-Eldin, Ahmed
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Seleznjev, Oleg
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Sjöstedt-de Luna, Sara
    Umeå University, Faculty of Science and Technology, Department of Mathematics and Mathematical Statistics.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Measuring cloud workload burstiness. 2014. In: 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing (UCC), IEEE conference proceedings, 2014, p. 566-572. Conference paper (Refereed)
    Abstract [en]

    Workload burstiness and spikes are among the main reasons for service disruptions and decrease in the Quality-of-Service (QoS) of online services. They are hurdles that complicate autonomic resource management of datacenters. In this paper, we review the state-of-the-art in online identification of workload spikes and quantifying burstiness. The applicability of some of the proposed techniques is examined for Cloud systems where various workloads are co-hosted on the same platform. We discuss Sample Entropy (SampEn), a measure used in biomedical signal analysis, as a potential measure for burstiness. A modification to the original measure is introduced to make it more suitable for Cloud workloads.

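Sample Entropy, mentioned above, compares how often templates of length m and m + 1 recur within a tolerance r. A direct O(n²) sketch of the standard definition (with the usual Chebyshev-distance convention) follows; the paper's Cloud-specific modification is not reproduced here.

```python
import math

def sample_entropy(series, m=2, r=0.2):
    """SampEn = -ln(A/B), where B counts template pairs of length m within
    tolerance r (Chebyshev distance) and A does the same for length m + 1.
    Lower values indicate a more regular, less bursty series."""
    def matches(length):
        templates = [series[i:i + length]
                     for i in range(len(series) - length + 1)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r
        )
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

periodic = [0, 1] * 10  # a perfectly regular load pattern scores low
score = sample_entropy(periodic)
```

A spiky, irregular request trace would score noticeably higher than the periodic toy series, which is what makes SampEn usable as a burstiness profile.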
  • 14.
    Ali-Eldin, Ahmed
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kihl, Maria
    Lund University.
    WAC: A Workload analysis and classification tool for automatic selection of cloud auto-scaling methods. Manuscript (preprint) (Other academic)
    Abstract [en]

    Autoscaling algorithms for elastic cloud infrastructures dynamically change the amount of resources allocated to a service according to the current and predicted future load. Since there are no perfect predictors, no single elasticity algorithm is suitable for accurate predictions of all workloads. To improve the quality of workload predictions and increase the Quality-of-Service (QoS) guarantees of a cloud service, multiple autoscalers suitable for different workload classes need to be used. In this work, we introduce WAC, a Workload Analysis and Classification tool that assigns workloads to the most suitable elasticity autoscaler out of a set of pre-deployed autoscalers. The workload assignment is based on the workload characteristics and a set of user-defined Business-Level Objectives (BLO). We describe the tool design and its main components. We implement WAC and evaluate its precision using various workloads, BLO combinations and state-of-the-art autoscalers. Our experiments show that, when the classifier is tuned carefully, WAC assigns between 87% and 98.3% of the workloads to the most suitable elasticity autoscaler.
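A heavily simplified sketch of the classification step: one feature (lag-1 autocorrelation, a periodicity proxy) and a 1-nearest-neighbour rule. The training pairs and autoscaler names are invented for illustration and are not WAC's actual feature set or classes.

```python
def lag1_autocorr(xs):
    """Lag-1 autocorrelation: close to 1 for smooth/trending series,
    negative for series that flip sign around the mean each step."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    return sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / var

def nearest_neighbour(train, feature):
    """1-NN over (feature, label) pairs: a stand-in for the K-nearest-
    neighbours classifier over richer workload features."""
    return min(train, key=lambda t: abs(t[0] - feature))[1]

# hypothetical training set: feature value -> best autoscaler for that profile
train = [(0.9, "predictive"), (0.1, "reactive")]

smooth = [10, 11, 12, 13, 14, 15, 16, 17]   # steadily growing load
bursty = [10, 0, 9, 1, 11, 0, 12, 1]        # erratic, spiky load
```

The smooth trace's high autocorrelation maps it to the predictive controller, while the bursty trace falls to the reactive one, mirroring the intuition behind assigning workload classes to autoscalers.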

  • 15.
    Ali-Eldin, Ahmed
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kihl, Maria
    Department of Electrical and Information Technology, Lund University, Lund, Sweden.
    Workload Classification for Efficient Auto-Scaling of Cloud Resources. 2013. Manuscript (preprint) (Other academic)
    Abstract [en]

    Elasticity algorithms for cloud infrastructures dynamically change the amount of resources allocated to a running service according to the current and predicted future load. Since there is no perfect predictor, and since different applications' workloads have different characteristics, no single elasticity algorithm is suitable for future predictions for all workloads. In this work, we introduce WAC, a Workload Analysis and Classification tool that analyzes workloads and assigns them to the most suitable elasticity controllers based on the workloads' characteristics and a set of business level objectives.

    WAC has two main components, the analyzer and the classifier. The analyzer analyzes workloads to extract some of the features used by the classifier, namely, the workloads' autocorrelations and sample entropies, which measure the periodicity and the burstiness of the workloads respectively. These two features, together with the business level objectives, are used by the classifier to assign workloads to elasticity controllers. We start by analyzing 14 real workloads available from different applications. In addition, a set of 55 workloads is generated to test WAC on more workload configurations. We implement four state-of-the-art elasticity algorithms. The controllers are the classes to which the classifier assigns workloads. We use a K-nearest-neighbors classifier and experiment with different workload combinations as training and test sets. Our experiments show that, when the classifier is tuned carefully, WAC correctly classifies between 92% and 98.3% of the workloads to the most suitable elasticity controller.

  • 16.
    Ali-Eldin Hassan, Ahmed
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Workload characterization, controller design and performance evaluation for cloud capacity autoscaling. 2015. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis studies cloud capacity auto-scaling, or how to provision and release resources to a service running in the cloud based on its actual demand using an automatic controller. As the performance of server systems depends on the system design, the system implementation, and the workloads the system is subjected to, we focus on these aspects with respect to designing auto-scaling algorithms. Towards this goal, we design and implement two auto-scaling algorithms for cloud infrastructures. The algorithms predict the future load for an application running in the cloud. We discuss the different approaches to designing an auto-scaler combining reactive and proactive control methods, and to handling long running requests, e.g., tasks running for longer than the actuation interval, in a cloud. We compare the performance of our algorithms with state-of-the-art auto-scalers and evaluate the controllers' performance with a set of workloads. As any controller is designed with an assumption on the operating conditions and system dynamics, the performance of an auto-scaler varies with different workloads.

    In order to better understand the workload dynamics and evolution, we analyze a 6-year-long workload trace of the sixth most popular Internet website. In addition, we analyze a workload from one of the largest Video-on-Demand streaming services in Sweden. We discuss the popularity of objects served by the two services, the spikes in the two workloads, and the invariants in the workloads. We also introduce a measure for the disorder in a workload, i.e., the amount of burstiness. The measure is based on Sample Entropy, an empirical statistic used in biomedical signal processing to characterize biomedical signals. The introduced measure can be used to characterize workloads based on their burstiness profiles. We compare our measure with the literature on quantifying burstiness in a server workload, and show its advantages.

    To better understand the tradeoffs between using different auto-scalers with different workloads, we design a framework to compare auto-scalers and give probabilistic guarantees on the performance in worst-case scenarios. Using different evaluation criteria and more than 700 workload traces, we compare six state-of-the-art auto-scalers that we believe represent the development of the field in the past 8 years. Knowing that the auto-scalers' performance depends on the workloads, we design a workload analysis and classification tool that assigns a workload to its most suitable elasticity controller out of a set of implemented controllers. The tool has two main components: an analyzer and a classifier. The analyzer analyzes a workload and feeds the analysis results to the classifier. The classifier assigns a workload to the most suitable elasticity controller based on the workload characteristics and a set of predefined business level objectives. The tool is evaluated with a set of collected real workloads and a set of generated synthetic workloads. Our evaluation results show that the tool can help a cloud provider to improve the QoS provided to the customers.

    Download full text (pdf)
    fulltext
    Download (pdf)
    spikblad
  • 17.
    Alisade, Hubert
    et al.
    Department of American Studies, University of Innsbruck, Austria.
    Calvanese, Diego
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Faculty of Engineering, Free University of Bozen-Bolzano, Italy.
    Klarer, Mario
    Department of American Studies, University of Innsbruck, Austria.
    Mosca, Alessandro
    Faculty of Engineering, Free University of Bozen-Bolzano, Italy.
    Ndefo, Nonyelum
    Faculty of Engineering, Free University of Bozen-Bolzano, Italy.
    Rangger, Bernadette
    Department of American Studies, University of Innsbruck, Austria.
    Tratter, Aaron
    Department of American Studies, University of Innsbruck, Austria.
    Exploration of medieval manuscripts through keyword spotting in the MENS project2023In: Proceedings of the AIxIA 2023 discussion papers (AIxIA 2023 DP), Rome, Italy, November 6-9, 2023 / [ed] Roberto Basili; Domenico Lembo; Carla Limongelli; AndreA Orlandini, CEUR-WS , 2023, p. 67-74Conference paper (Refereed)
    Abstract [en]

    In-depth searching for specific content in medieval manuscripts requires labor-intensive, hence time-consuming manual manuscript screening. Using existing IT tools to carry out this task has not been possible, since state-of-the-art keyword spotting lacks the necessary metaknowledge or larger ontology that scholars intuitively apply in their investigations. This problem is being addressed in the “Research Südtirol/Alto Adige” 2019 project “MENS – Medieval Explorations in Neuro-Science (1050–1450): Ontology-Based Keyword Spotting in Manuscript Scans,” whose goal is to build a paradigmatic case study for compiling and subsequent screening of large collections of manuscript scans by using AI techniques for natural language processing and data management based on formal ontologies. We report here on the ongoing work and the results achieved so far in the MENS project.

    Download full text (pdf)
    fulltext
  • 18.
    Anjomshoae, Sule
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Context-based explanations for machine learning predictions2022Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    In recent years, growing concern regarding trust in algorithmic decision-making has drawn attention to more transparent and interpretable models. Laws and regulations are moving towards requiring this functionality from information systems to prevent unintended side effects. For example, the European Union's General Data Protection Regulation (GDPR) sets out the right to be informed regarding machine-generated decisions: individuals affected by these decisions can question, confront, and challenge the inferences automatically produced by machine learning models. Consequently, such matters necessitate AI systems that are transparent and explainable for various practical applications.

    Furthermore, explanations help evaluate these systems' strengths and limitations, thereby fostering trustworthiness. As important as it is, existing studies mainly focus on creating mathematically interpretable models or explaining black-box algorithms with intrinsically interpretable surrogate models. In general, these explanations are intended for technical users to evaluate the correctness of a model and are often hard to interpret by general users.  

    Given a critical need for methods that consider end-user requirements, this thesis focuses on generating intelligible explanations for predictions made by machine learning algorithms. As a starting point, we present the outcome of a systematic literature review of the existing research on generating and communicating explanations in goal-driven eXplainable AI (XAI), such as agents and robots. These are known for their ability to communicate their decisions in human understandable terms. Influenced by that, we discuss the design and evaluation of our proposed explanation methods for black-box algorithms in different machine learning applications, including image recognition, scene classification, and disease prediction.

    Taken together, the methods and tools presented in this thesis could be used to explain machine learning predictions or as a baseline to compare to other explanation techniques, enabling interpretation indicators for experts and non-technical users. The findings would also be of interest to domains using machine learning models for high-stake decision-making to investigate the practical utility of proposed explanation methods.

    Download full text (pdf)
    fulltext
    Download (pdf)
    spikblad
  • 19. Arkian, Hamidreza
    et al.
    Pierre, Guillaume
    Tordsson, Johan
    Elastisys AB.
    Elmroth, Erik
    Elastisys AB.
    An Experiment-Driven Performance Model of Stream Processing Operators in Fog Computing Environments2020In: SAC '20: Proceedings of the 35th Annual ACM Symposium on Applied Computing, ACM Digital Library, 2020, p. 1763-1771Conference paper (Refereed)
    Abstract [en]

    Data stream processing (DSP) is an interesting computation paradigm in geo-distributed infrastructures such as Fog computing because it allows one to decentralize the processing operations and move them close to the sources of data. However, any decomposition of DSP operators onto a geo-distributed environment with large and heterogeneous network latencies among its nodes can have significant impact on DSP performance. In this paper, we present a mathematical performance model for geo-distributed stream processing applications derived and validated by extensive experimental measurements. Using this model, we systematically investigate how different topological changes affect the performance of DSP applications running in a geo-distributed environment. In our experiments, the performance predictions derived from this model are correct within ±2% even in complex scenarios with heterogeneous network delays between every pair of nodes.

  • 20. Arkian, Hamidreza
    et al.
    Pierre, Guillaume
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Elastisys.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Elastisys.
    Model-based Stream Processing Auto-scaling in Geo-Distributed Environments2021In: 2021 International Conference on Computer Communications and Networks (ICCCN), IEEE, 2021Conference paper (Refereed)
    Abstract [en]

    Data stream processing is an attractive paradigm for analyzing IoT data at the edge of the Internet before transmitting processed results to a cloud. However, the relative scarcity of fog computing resources combined with the workloads' nonstationary properties make it impossible to allocate a static set of resources for each application. We propose Gesscale, a resource auto-scaler which guarantees that a stream processing application maintains a sufficient Maximum Sustainable Throughput to process its incoming data with no undue delay, while not using more resources than strictly necessary. Gesscale derives its decisions about when to rescale and which geo-distributed resource(s) to add or remove on a performance model that gives precise predictions about the future maximum sustainable throughput after reconfiguration. We show that this auto-scaler uses 17% less resources, generates 52% fewer reconfigurations, and processes more input data than baseline auto-scalers based on threshold triggers or a simpler performance model.
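    The scaling policy described above can be sketched as a search for the smallest allocation whose predicted Maximum Sustainable Throughput (MST) covers the input rate. This is a toy illustration only: `predict_mst`, the sublinear stand-in model, and the `headroom` factor are assumptions, not Gesscale's actual performance model:

    ```python
    def required_nodes(arrival_rate, predict_mst, max_nodes, headroom=1.2):
        """Return the smallest node count whose predicted MST covers the
        incoming data rate with a safety margin (headroom is hypothetical)."""
        for n in range(1, max_nodes + 1):
            if predict_mst(n) >= arrival_rate * headroom:
                return n
        return max_nodes  # cap at the available geo-distributed resources

    # Toy stand-in model: throughput scales sublinearly with added nodes,
    # reflecting the network-latency penalties of geo-distribution.
    def toy_mst(n):
        return 1000 * n ** 0.9
    ```

    With this stand-in model, an input rate of 2500 units/s needs 4 nodes rather than the 3 a linear model would suggest, which is exactly why a validated performance model matters for avoiding under-provisioning.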

  • 21. Asan, Noor Badariah
    et al.
    Hassan, Emadeldeen
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Perez, Mauricio David
    Shah, Syaiful Redzwan Mohd
    Velander, Jacob
    Blokhuis, Taco J.
    Voigt, Thiemo
    Augustine, Robin
    Assessment of Blood Vessel Effect on Fat-Intrabody Communication Using Numerical and Ex-Vivo Models at 2.45 GHZ2019In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 89886-89900Article in journal (Refereed)
    Abstract [en]

    The potential offered by the intra-body communication (IBC) over the past few years has resulted in a spike of interest for the topic, specifically for medical applications. Fat-IBC is subsequently a novel alternative technique that utilizes fat tissue as a communication channel. This work aimed to identify such transmission medium and its performance in varying blood-vessel systems at 2.45 GHz, particularly in the context of the IBC and medical applications. It incorporated three-dimensional (3D) electromagnetic simulations and laboratory investigations that implemented models of blood vessels of varying orientations, sizes, and positions. Such investigations were undertaken by using ex-vivo porcine tissues and three blood-vessel system configurations. These configurations represent extreme cases of real-life scenarios that sufficiently elucidated their principal influence on the transmission. The blood-vessel models consisted of ex-vivo muscle tissues and copper rods. The results showed that the blood vessels crossing the channel vertically contributed to 5.1 dB and 17.1 dB signal losses for muscle and copper rods, respectively, which is the worst-case scenario in the context of fat-channel with perturbance. In contrast, blood vessels aligned-longitudinally in the channel have less effect and yielded 4.5 dB and 4.2 dB signal losses for muscle and copper rods, respectively. Meanwhile, the blood vessels crossing the channel horizontally displayed 3.4 dB and 1.9 dB signal losses for muscle and copper rods, respectively, which were the smallest losses among the configurations. The laboratory investigations were in agreement with the simulations. Thus, this work substantiated the fat-IBC signal transmission variability in the context of varying blood vessel configurations.

    Download full text (pdf)
    fulltext
  • 22. Babou, Cheikh Saliou Mbacke
    et al.
    Fall, Doudou
    Kashihara, Shigeru
    Taenaka, Yuzo
    Bhuyan, Monowar H.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Niang, Ibrahima
    Kadobayashi, Youki
    Hierarchical Load Balancing and Clustering Technique for Home Edge Computing2020In: IEEE Access, E-ISSN 2169-3536, Vol. 8, p. 127593-127607Article in journal (Refereed)
    Abstract [en]

    The edge computing system is attracting increasing attention and is expected to satisfy the ultra-low response times required by emerging IoT applications. Nevertheless, because emerging traffic is highly delay-sensitive, a new edge computing architecture supporting these real-time applications, namely Home Edge Computing (HEC), has been proposed. HEC is a three-layer architecture made up of HEC servers, which are very close to users, Multi-access Edge Computing (MEC) servers, and the central cloud. This paper proposes a solution to the latency problems caused by the limited resources of HEC servers: an increase in the traffic rate creates a long queue on these servers, i.e., a rise in the processing time (delay) for requests. By leveraging clustering and load balancing techniques, we propose a new technique called HEC-Clustering Balance. It distributes requests hierarchically across the HEC clusters and other parts of the architecture to avoid congestion on any single HEC server and thus reduce latency. The results show that HEC-Clustering Balance is more efficient than baseline clustering and load balancing techniques. Thus, compared to the HEC architecture, we reduce the processing time on the HEC servers to 19% and 73%, respectively, in two experimental scenarios.
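    The hierarchical idea can be pictured as a dispatch rule that tries the least-loaded server in the local HEC cluster first and escalates to the MEC layer and then the cloud. All names, capacities, and loads below are hypothetical illustrations, not the paper's algorithm:

    ```python
    def dispatch(cost, hec_loads, hec_cap, mec_load, mec_cap):
        """Route a request of a given cost: least-loaded HEC server in the
        local cluster first, then the MEC layer, finally the central cloud."""
        idx = min(range(len(hec_loads)), key=lambda i: hec_loads[i])
        if hec_loads[idx] + cost <= hec_cap:
            return ("hec", idx)   # served close to the user: lowest latency
        if mec_load + cost <= mec_cap:
            return ("mec", None)  # escalate one layer up
        return ("cloud", None)    # last resort: highest latency
    ```

    The point of the hierarchy is that a request only pays the higher-latency layers when every lower layer is saturated.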

    Download full text (pdf)
    fulltext
  • 23.
    Banerjee, Sourasekhar
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bhuyan, Monowar H.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Fed-FiS: A Novel Information-Theoretic Federated Feature Selection for Learning Stability2021In: Neural Information Processing: 28th International Conference, ICONIP 2021, Sanur, Bali, Indonesia, December 8–12, 2021, Proceedings, Part V / [ed] Teddy Mantoro, Minho Lee, Media Anugerah Ayu, Kok Wai Wong, Achmad Nizar Hidayanto, Springer Nature, 2021, Vol. 1516, p. 480-487Conference paper (Refereed)
    Abstract [en]

    In the era of big data and federated learning, traditional feature selection methods show unacceptable performance for handling heterogeneity when deployed in federated environments. We propose Fed-FiS, an information-theoretic federated feature selection approach that overcomes the problems that occur due to heterogeneity. Fed-FiS estimates feature-feature mutual information (FFMI) and feature-class mutual information (FCMI) to generate a local feature subset on each user device. Based on federated values across features and classes obtained from each device, the central server ranks each feature and generates a global dominant feature subset. We show that our approach can find a stable feature subset collaboratively from all local devices. Extensive experiments based on multiple benchmark iid (independent and identically distributed) and non-iid datasets demonstrate that Fed-FiS significantly improves overall performance in comparison to state-of-the-art methods. To the best of our knowledge, this is the first work on feature selection in a federated learning system.
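    The local scoring step can be illustrated with a small sketch of feature-class mutual information (FCMI) ranking over discrete features. This is a generic mutual-information computation, not the Fed-FiS implementation, and the function names are hypothetical:

    ```python
    import math
    from collections import Counter

    def mutual_information(xs, ys):
        """I(X;Y) in nats for two equal-length discrete sequences."""
        n = len(xs)
        px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
        return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    def rank_features_by_fcmi(rows, labels):
        """Score each feature column by its mutual information with the
        class labels and rank highest first (the per-device FCMI step)."""
        cols = list(zip(*rows))
        scores = [(i, mutual_information(list(col), labels))
                  for i, col in enumerate(cols)]
        return sorted(scores, key=lambda s: s[1], reverse=True)
    ```

    A feature that mirrors the class label gets score ln 2 on balanced binary labels, while a constant feature scores zero, so the ranking isolates informative columns before any data leaves the device.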

  • 24.
    Banerjee, Sourasekhar
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Misra, Rajiv
    Prasad, Mukesh
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bhuyan, Monowar H.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Multi-Diseases Classification From Chest-X-Ray: A Federated Deep Learning Approach2020In: AI 2020: Advances in Artificial Intelligence: 33rd Australasian Joint Conference, AI 2020, Canberra, ACT, Australia, November 29–30, 2020, Proceedings / [ed] Marcus Gallagher, Nour Moustafa, Erandi Lakshika, Springer, 2020, Vol. 12576, p. 3-15Conference paper (Refereed)
    Abstract [en]

    Data plays a vital role in deep learning model training. In large-scale medical image analysis, data privacy and ownership make data gathering challenging in a centralized location, and federated learning has been shown to alleviate both problems in recent years. In this work, we propose multi-disease classification from chest X-rays using Federated Deep Learning (FDL). The FDL approach detects pneumonia from chest X-rays and also distinguishes viral from bacterial pneumonia. Without submitting the chest X-ray images to a central server, clients train local models with limited private data at the edge server and send them to the central server for global aggregation. We use four pre-trained models, ResNet18, ResNet50, DenseNet121, and MobileNetV2, and apply transfer learning to them at each edge server. The models learned in the federated setting are compared with centrally trained deep learning models. We observe that models trained using ResNet18 in a federated environment achieve up to 98.3% accuracy for pneumonia detection and up to 87.3% accuracy for viral and bacterial pneumonia detection. We compare the performance of adaptive-learning-rate optimizers such as Adam and Adamax with momentum-based Stochastic Gradient Descent (SGD) and find that momentum SGD yields better results than the others. Lastly, for visualization, we use Class Activation Mapping (CAM) approaches such as Grad-CAM, Grad-CAM++, and Score-CAM to identify pneumonia-affected regions in a chest X-ray.
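    The global aggregation step described above, in which the central server combines locally trained models without ever seeing the images, can be sketched as a FedAvg-style weighted parameter average. This is a generic sketch with flat parameter lists standing in for real network weights, not the paper's code:

    ```python
    def fedavg(client_weights, client_sizes):
        """Average each parameter across clients, weighted by the number
        of local training samples each client holds."""
        total = sum(client_sizes)
        n_params = len(client_weights[0])
        return [
            sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
            for i in range(n_params)
        ]
    ```

    The size weighting means a client holding three times as much data pulls the global model three times as hard, e.g. `fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])` yields `[2.5, 3.5]`.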

  • 25.
    Banerjee, Sourasekhar
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Patel, Yashwant Singh
    Thapar Institute of Engineering & Technology, India.
    Kumar, Pushkar
    Indian Institute of Information Technology, Ranchi, India.
    Bhuyan, Monowar H.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Towards post-disaster damage assessment using deep transfer learning and GAN-based data augmentation2023In: ICDCN '23: Proceedings of the 24th International Conference on Distributed Computing and Networking, ACM Digital Library, 2023, p. 372-377Conference paper (Refereed)
    Abstract [en]

    Cyber-physical disaster systems (CPDS) are a new cyber-physical application that collects physical realm measurements from IoT devices and sends them to the edge for damage severity analysis of impacted sites in the aftermath of a large-scale disaster. However, the lack of effective machine learning paradigms and the data and device heterogeneity of edge devices pose significant challenges in disaster damage assessment (DDA). To address these issues, we propose a generative adversarial network (GAN) and a lightweight, deep transfer learning-enabled, fine-tuned machine learning pipeline to reduce overall sensing error and improve the model's performance. In this paper, we applied several combinations of GANs (i.e., DCGAN, DiscoGAN, ProGAN, and Cycle-GAN) to generate fake images of the disaster. After that, three pre-trained models: VGG19, ResNet18, and DenseNet121, with deep transfer learning, are applied to classify the images of the disaster. We observed that the ResNet18 is the most pertinent model to achieve a test accuracy of 88.81%. With the experiments on real-world DDA applications, we have visualized the damage severity of disaster-impacted sites using different types of Class Activation Mapping (CAM) techniques, namely Grad-CAM++, Guided Grad-Cam, & Score-CAM. Finally, using k-means clustering, we have obtained the scatter plots to measure the damage severity into no damage, mild damage, and severe damage categories in the generated heat maps.

    Download full text (pdf)
    fulltext
  • 26.
    Baskar, Jayalakshmi
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Lindgren, Helena
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Human-Agent Dialogues on Health Topics - An Evaluation Study2015In: Highlights of practical applications of agents, multi-agent systems, and sustainability: The PAAMS Collection, PAAMS 2015, 2015, p. 28-39Conference paper (Refereed)
    Abstract [en]

    A common conversation between an older adult and a nurse about health-related issues includes topics such as trouble sleeping, reasons for walking around at nighttime, pain conditions, etc. This dialogue emerges from the participating humans' lines of thinking, their roles, needs, and motives, while switching between topics as the dialogue unfolds. This paper presents a dialogue system that enables a human to engage in a dialogue with a software agent to reason about health-related issues in a home environment. The purpose of this work is to conduct a pilot evaluation study of a prototype system for human-agent dialogues, which is built upon a set of semantic models and integrated into a web application designed for older adults. The focus of the study was to obtain qualitative results regarding the purpose and content of the agent-based dialogue system, and to evaluate a method for the agent to assess its behavior based on the human's perception of the appropriateness of moves. The participants included five therapists and 11 older adults. The results show users' feedback on the purpose of the dialogues and the appropriateness of the dialogues presented to them during interaction with the software agent.

    Download full text (pdf)
    fulltext
  • 27. Bauer, André
    et al.
    Herbst, Nikolas
    Spinner, Simon
    Ali-Eldin, Ahmed
    Umeå University, Faculty of Science and Technology, Department of Computing Science. UMass, Amherst, MA, USA.
    Kounev, Samuel
    Chameleon: A Hybrid, Proactive Auto-Scaling Mechanism on a Level-Playing Field2019In: IEEE Transactions on Parallel and Distributed Systems, ISSN 1045-9219, E-ISSN 1558-2183, Vol. 30, no 4, p. 800-813Article in journal (Refereed)
    Abstract [en]

    Auto-scalers for clouds promise stable service quality at low costs when facing changing workload intensity. The major public cloud providers offer trigger-based auto-scalers based on thresholds. However, trigger-based auto-scaling has reaction times on the order of minutes. Novel auto-scalers from the literature try to overcome the limitations of reactive mechanisms by employing proactive prediction methods. However, the adoption of proactive auto-scalers in production is still very low due to the high risk of relying on a single proactive method. This paper tackles the challenge of reducing this risk by proposing a new hybrid auto-scaling mechanism, called Chameleon, which combines multiple proactive methods with a reactive fallback mechanism. Chameleon employs on-demand, automated time-series-based forecasting methods to predict the arriving load intensity, combined with run-time service demand estimation to calculate the required resource consumption per work unit without the need for application instrumentation. We benchmark Chameleon against five different state-of-the-art proactive and reactive auto-scalers in three different private and public cloud environments. We generate five different representative workloads, each taken from different real-world system traces. Overall, Chameleon achieves the best scaling behavior based on user and elasticity performance metrics, analyzing the results from 400 hours of aggregated experiment time.
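    The hybrid principle, proactive sizing from a forecast with a reactive threshold fallback, can be sketched as follows. The thresholds, the capacity model, and the function itself are illustrative assumptions, not Chameleon's actual mechanism:

    ```python
    import math

    def hybrid_decision(current_util, forecast_load, vm_capacity, vms,
                        high=0.8, low=0.3):
        """Return a target VM count: proactive sizing from the load forecast,
        overridden by a reactive threshold when utilization already breaches it."""
        # Proactive: capacity needed to serve the forecast at target utilization.
        proactive = max(1, math.ceil(forecast_load / (vm_capacity * high)))
        if current_util > high:    # reactive fallback: scale out immediately
            return max(proactive, vms + 1)
        if current_util < low:     # scale in only when the forecast agrees
            return min(proactive, vms)
        return max(proactive, vms)
    ```

    The reactive branch guards against a bad forecast: even if the predictor says nothing is coming, a utilization breach still adds capacity, which is the risk-reduction argument the abstract makes.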

  • 28.
    Bayuh Lakew, Ewnetu
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Birke, Robert
    Perez, Juan F.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Chen, Lydia Y.
    SmallTail: Scaling Cores and Probabilistic Cloning Requests for Web Systems2018In: 15TH IEEE INTERNATIONAL CONFERENCE ON AUTONOMIC COMPUTING (ICAC 2018), IEEE , 2018, p. 31-40Conference paper (Refereed)
    Abstract [en]

    Users' quality of experience on web systems is largely determined by the tail latency, e.g., the 95th percentile. Scaling resources along, e.g., the number of virtual cores per VM, is shown to be effective in meeting the average latency but falls short in taming the latency tail in the cloud, where performance variability is higher. The prior art shows the prominence of increasing request redundancy to curtail the latency, either in the off-line setting or without scaling in the cores of virtual machines. In this paper, we propose an opportunistic scaler, termed SmallTail, which aims to achieve stringent tail-latency targets while provisioning a minimum amount of resources and keeping them well utilized. Against dynamic workloads, SmallTail simultaneously adjusts the core provisioning per VM and probabilistically replicates requests so as to achieve the tail latency target. The core of SmallTail is a two-level controller, where the outer loop controls the core provisioning per distributed VM and the inner loop controls the clones at a finer granularity. We also provide a theoretical analysis of the steady-state latency for a given probabilistic replication that clones one out of N arriving requests. We extensively evaluate SmallTail on three different web systems, namely web commerce, web search, and a web bulletin board. Our testbed results show that SmallTail can keep the 95th percentile latency below 1000 ms using up to 53% fewer cores compared to the strategy of constant cloning, whereas the core-scaling-only solution exceeds the latency target by up to 70%.
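    The intuition behind request cloning, that a cloned request completes as soon as the faster of two replicas responds, can be checked with a small simulation. Exponential service times are an illustrative assumption here, not the paper's model:

    ```python
    import random

    def p95(samples):
        """Empirical 95th percentile of a list of latencies."""
        return sorted(samples)[int(0.95 * len(samples)) - 1]

    rng = random.Random(42)
    single = [rng.expovariate(1.0) for _ in range(10_000)]
    cloned = [min(rng.expovariate(1.0), rng.expovariate(1.0))
              for _ in range(10_000)]
    # Taking the first of two i.i.d. exponential replies doubles the
    # effective service rate, so the cloned tail is markedly shorter.
    assert p95(cloned) < p95(single)
    ```

    This is why cloning attacks the tail specifically: the minimum of two draws rarely lands in the slow tail of either, whereas the mean improves much less dramatically.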

  • 29.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Björklund, Johanna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kutrib, Martin
    Deterministic Stack Transducers2017In: International Journal of Foundations of Computer Science, ISSN 0129-0541, Vol. 28, no 5, p. 583-601Article in journal (Refereed)
    Abstract [en]

    We introduce and investigate stack transducers, which are one-way stack automata with an output tape. A one-way stack automaton is a classical pushdown automaton with the additional ability to move the stack head inside the stack without altering the contents. For stack transducers, we distinguish between a digging and a non-digging mode. In digging mode, the stack transducer can write on the output tape when its stack head is inside the stack, whereas in non-digging mode, the stack transducer is only allowed to emit symbols when its stack head is at the top of the stack. These stack transducers have a motivation from natural-language interface applications, as they capture long-distance dependencies in syntactic, semantic, and discourse structures. We study the computational capacity for deterministic digging and non-digging stack transducers, as well as for their non-erasing and checking versions. We finally show that even for the strongest variant of stack transducers the stack languages are regular.

  • 30.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Björklund, Johanna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kutrib, Martin
    Institut fur Informatik, Universität Giessen.
    Deterministic Stack Transducers2016In: Implementation and Application of Automata / [ed] Yo-Sub Han and Kai Salomaa, Springer, 2016, p. 27-38Conference paper (Refereed)
    Abstract [en]

    We introduce and investigate stack transducers, which are one-way stack automata with an output tape. A one-way stack automaton is a classical pushdown automaton with the additional ability to move the stack head inside the stack without altering the contents. For stack transducers, we distinguish between a digging and a non-digging mode. In digging mode, the stack transducer can write on the output tape when its stack head is inside the stack, whereas in non-digging mode, the stack transducer is only allowed to emit symbols when its stack head is at the top of the stack. These stack transducers have a motivation from natural language interface applications, as they capture long-distance dependencies in syntactic, semantic, and discourse structures. We study the computational capacity for deterministic digging and non-digging stack transducers, as well as for their non-erasing and checking versions. We finally show that even for the strongest variant of stack transducers the stack languages are regular.

  • 31.
    Bensch, Suna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kutrib, Martin
    Institut fuer Informatik, Universität Giessen.
    Malcher, Andreas
    Institut fuer Informatik, Universität Giessen.
    Extended Uniformly Limited T0L Languages and Mild Context-Sensitivity2016In: Eight Workshop on Non-Classical Models of Automata and Applications (NCMA 2016): Short Papers / [ed] Henning Bordihn, Rudolf Freund, Benedek Nagy, and György Vaszil, Wien: Institut für Computersprachen , 2016, p. 35-46Conference paper (Refereed)
    Abstract [en]

    We study the fixed membership problem for k-uniformly-limited and propagating ET0L systems (kulEPT0L systems). To this end, the algorithm given in [7] is applied. It follows that kulEPT0L languages are parsable in polynomial time. Since kulEPT0L languages are semi-linear [1] and kulEPT0L systems generate certain non-context-free languages, which capture the non-context-free phenomena occurring in natural languages, this is the last building block to show that kulEPT0L languages, for k ≥ 2, belong to the family of mildly context-sensitive languages.

    Download full text (pdf)
    fulltext
  • 32.
    Bergvik, David
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Designing experiences for virtual reality, in virtual reality: A design process evaluation2017Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Creating immersive experiences for virtual reality (VR) presents design opportunities and challenges that do not arise when creating experiences for a screen. Creating prototypes and exploring concepts in VR is today limited to professionals with prior knowledge of 3D application development, and testing 3D experiences requires a Head-Mounted Display (HMD), which forces professionals to switch medium from the computer to an HMD. With new advances in this field, new solutions to these challenges are needed. The goal of this thesis is to explore how VR technology can be utilized in the experience design process for VR. This is achieved through a literature study and expert interviews, followed by a hardware evaluation of different HMDs and concept creation using rapid prototyping. From the interviews, a number of issues could be identified that correlate with the research from the literature study. Based on these findings, two phases were identified as suitable for further improvement: concept prototyping, and testing/tweaking of a created experience. Lo-fi and hi-fi prototypes of a virtual design tool were developed for HTC Vive and Google Daydream, which were selected based on the hardware evaluation. The prototypes were designed and developed, then tested using a Wizard of Oz approach. The purpose of the prototypes is to solve, in the two identified design phases, some of the issues that emerged from the interview analysis when designing immersive experiences for HMDs. An interactive testing suite for HTC Vive was developed for testing and evaluating the final prototype, to verify the validity of the concept. Using virtual reality as a medium for designing virtual experiences is a promising way of solving the current issues within this technological field identified in this thesis.
Tools for object creation and manipulation will aid professionals when exploring new concepts as well as editing and testing existing immersive experiences. Furthermore, using a Wizard of Oz approach to test VR prototypes significantly improves the prototype quality without compromising the user experience in this medium. 

    Download full text (pdf)
    fulltext
  • 33.
    Bhutto, Adil B.
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Vu, Xuan-Son
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Tay, Wee Peng
    School of Electrical & Electronics Engineering, Nanyang Technological University, Nanyang, Singapore.
    Bhuyan, Monowar H.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Reinforced Transformer Learning for VSI-DDoS Detection in Edge Clouds2022In: IEEE Access, E-ISSN 2169-3536, Vol. 10, p. 94677-94690Article in journal (Refereed)
    Abstract [en]

    Edge-driven software applications, often deployed as online services in the cloud-to-edge continuum, lack significant protection for services and infrastructures against emerging cyberattacks. The Very-Short Intermittent Distributed Denial of Service (VSI-DDoS) attack is one of the biggest factors diminishing the Quality of Service (QoS) and Quality of Experience (QoE) for users on the edge. Unlike conventional DDoS attacks, these attacks live for a very short time (on the order of a few milliseconds) in the traffic to deceive users with a legitimate service experience. To provide protection, we propose a novel and efficient approach for detecting VSI-DDoS attacks using reinforced transformer learning that mitigates the tail-latency and service-availability problems in edge clouds. In the presence of attacks, the users' demand for ultra-low-latency and high-throughput services deployed on the edge can never be met. Moreover, these attacks send very short intermittent requests towards the target services, enforcing longer delays in users' responses. The assimilation of a transformer with deep reinforcement learning accelerates detection performance under adverse conditions by adapting to the dynamic and most discernible patterns of attacks (e.g., multiplicative temporal dependency, attack dynamism). Extensive experiments with testbed and benchmark datasets demonstrate that the proposed approach is suitable, effective, and efficient for detecting VSI-DDoS attacks in edge clouds. The results outperform state-of-the-art methods with 0.9%-3.2% higher accuracy on both datasets.
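    A simplified illustration of why VSI-DDoS evades coarse averages: the bursts only become visible in a millisecond-scale sliding window. The window length, threshold, and detector below are arbitrary assumptions for illustration, not the paper's transformer-based method:

    ```python
    def detect_bursts(timestamps, window=0.005, threshold=50):
        """Return arrival times (seconds) at which at least `threshold`
        requests fall inside the trailing `window`-second sliding window."""
        timestamps = sorted(timestamps)
        bursts, lo = [], 0
        for hi, t in enumerate(timestamps):
            while t - timestamps[lo] > window:
                lo += 1  # slide the window's left edge forward
            if hi - lo + 1 >= threshold:
                bursts.append(t)
        return bursts

    # 100 req/s spread evenly looks benign; 60 requests packed into ~0.6 ms
    # of that same second do not, yet barely move the per-second average.
    benign = [i * 0.01 for i in range(100)]
    attack = benign + [0.5 + i * 1e-5 for i in range(60)]
    ```

    The two-pointer scan runs in O(n log n) for the sort plus O(n) for the window, so even a naive detector like this is cheap; the hard part the paper addresses is distinguishing such bursts from legitimate traffic dynamics.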

    Download full text (pdf)
    fulltext
  • 34.
    Bhuyan, Monowar
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Nieves, Juan Carlos
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Argumentation-based adversarial regression with multiple learners2022In: 2022 IEEE 34th international conference on tools with artificial intelligence (ICTAI), IEEE, 2022, p. 96-104Conference paper (Refereed)
    Abstract [en]

    Despite the extensive benefits of machine learning techniques in practice, several studies have demonstrated that many approaches are vulnerable to attacks. These attacks generate adversarial data to manipulate learning models, resulting in ambiguous decisions. In this paper, we propose a hybrid-reasoning framework that combines data-driven and non-monotonic reasoning, specifically formal argumentation and adversarial regression with multiple learners, to deal with ambiguous predictions of predictive models. The introduced hybrid-reasoning framework ensures three significant benefits: it (i) provides an argumentation-based aggregation function for combining multiple learners, (ii) reduces the effort needed to resolve conflicts in predictions, and (iii) enables cost-effective and robust training in adversarial regression settings. To illustrate the introduced framework, we consider a benchmark of resource traces obtained from Yahoo's service cluster for anomaly detection under adversarial settings. The experimental analysis shows 3% higher prediction accuracy under argumentation-based adversarial settings.
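    As a toy illustration of benefit (i) above (not the paper's formal-argumentation semantics; the tolerance and prediction values are made up), conflicting learners can be aggregated by letting disagreeing predictions "attack" each other and keeping the largest mutually consistent set:

```python
from itertools import combinations

def argumentation_aggregate(predictions, tolerance=1.0):
    """Two predictions attack each other when they disagree by more than
    `tolerance`; the largest conflict-free subset survives and is averaged."""
    n = len(predictions)
    best = []
    # Brute force over subsets is fine for a handful of learners.
    for mask in range(1, 1 << n):
        subset = [predictions[i] for i in range(n) if mask & (1 << i)]
        if all(abs(a - b) <= tolerance for a, b in combinations(subset, 2)):
            if len(subset) > len(best):
                best = subset
    return sum(best) / len(best)

# Three learners agree; one is skewed by adversarial data and is excluded.
agg = argumentation_aggregate([10.1, 9.9, 10.0, 42.0])
```

    The aggregate stays near 10 even though one learner was pushed to 42, which is the kind of robustness to manipulated predictions the framework targets.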

  • 35.
    Björklund, Johanna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Dahlgren Lindström, Adam
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Drewes, Frank
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bridging Perception, Memory, and Inference through Semantic Relations2021In: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics (ACL) , 2021, p. 9136-9142Conference paper (Refereed)
    Abstract [en]

    There is a growing consensus that surface form alone does not enable models to learn meaning and gain language understanding. This warrants an interest in hybrid systems that combine the strengths of neural and symbolic methods. We favour triadic systems consisting of neural networks, knowledge bases, and inference engines. The network provides perception, that is, the interface between the system and its environment. The knowledge base provides explicit memory and thus immediate access to established facts. Finally, inference capabilities are provided by the inference engine which reflects on the perception, supported by memory, to reason and discover new facts. In this work, we probe six popular language models for semantic relations and outline a future line of research to study how the constituent subsystems can be jointly realised and integrated.

  • 36.
    Bliek, Adna
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Bensch, Suna
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Hellström, Thomas
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    How Can a Robot Trigger Human Backchanneling?2020In: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, 2020, p. 96-103Conference paper (Refereed)
    Abstract [en]

    In human communication, backchanneling is an important part of the natural interaction protocol. Its purpose is to signify the listener's attention, understanding, or agreement, or to indicate that a speaker should go on talking. While the effects of backchanneling robots on humans have been investigated, how and when humans backchannel to talking robots is poorly studied. In this paper we investigate how a robot's behavior as a speaker affects a human listener's backchanneling behavior. This is interesting in Human-Robot Interaction since backchanneling between humans has been shown to support more fluid interactions, and human-robot interaction would therefore benefit from mimicking this human communication feature. The results show that backchanneling increases when the robot exhibits backchannel-inviting cues such as pauses and gestures. Furthermore, clear differences are shown between how a human backchannels to another human and to a robot.

  • 37.
    Blöcker, Christopher
    et al.
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Smiljanic, Jelena
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Scholtes, Ingo
    Center for Artificial Intelligence and Data Science, University of Würzburg, Germany.
    Rosvall, Martin
    Umeå University, Faculty of Science and Technology, Department of Physics.
    Similarity-based link prediction from modular compression of network flows2022In: Proceedings of the First Learning on Graphs Conference, ML Research Press , 2022, p. 52:1-52:18Conference paper (Refereed)
    Abstract [en]

    Node similarity scores are a foundation for machine learning in graphs for clustering, node classification, anomaly detection, and link prediction with applications in biological systems, information networks, and recommender systems. Recent works on link prediction use vector space embeddings to calculate node similarities in undirected networks with good performance. Still, they have several disadvantages: limited interpretability, need for hyperparameter tuning, manual model fitting through dimensionality reduction, and poor performance from symmetric similarities in directed link prediction. We propose MapSim, an information-theoretic measure to assess node similarities based on modular compression of network flows. Unlike vector space embeddings, MapSim represents nodes in a discrete, non-metric space of communities and yields asymmetric similarities in an unsupervised fashion. We compare MapSim on a link prediction task to popular embedding-based algorithms across 47 networks and find that MapSim's average performance across all networks is more than 7% higher than its closest competitor, outperforming all embedding methods in 11 of the 47 networks. Our method demonstrates the potential of compression-based approaches in graph representation learning, with promising applications in other graph learning tasks.
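    As a loose sketch of the idea of asymmetric, community-based similarities (MapSim's actual scores come from map-equation codelengths; the partition and probabilities below are invented for illustration), a node's similarity to another can be read as the bit cost of "addressing" it from the source's module:

```python
import math

def mapsim_like(u, v, modules, module_trans, node_weight):
    """Bit cost of reaching v from u's module: module-transition probability
    times v's visit weight inside its own module. Smaller cost means higher
    similarity, and the cost is naturally asymmetric."""
    p = module_trans[modules[u]][modules[v]] * node_weight[v]
    return -math.log2(p)

modules = {"a": 0, "b": 0, "c": 1}
module_trans = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.4, 1: 0.6}}
node_weight = {"a": 0.6, "b": 0.4, "c": 1.0}

cost_ac = mapsim_like("a", "c", modules, module_trans, node_weight)
cost_ca = mapsim_like("c", "a", modules, module_trans, node_weight)
# cost_ac != cost_ca: the measure is asymmetric, unlike embedding distances.
```

    This discrete, non-metric view is what lets the similarity stay interpretable and directed without any hyperparameter tuning.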

  • 38.
    Bohman, Dan
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Single Sign On med Azure AD Connect2016Independent thesis Basic level (university diploma), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    This report covers Azure AD Connect and Single/Simplified Sign On. Users and customers today place greater demands on easier login methods and a seamless experience for reaching all services. Microsoft has recently released the Azure AD Connect tool to help synchronize passwords between Active Directory and the cloud services Office 365/Azure, as well as thousands of Software-as-a-Service applications. Team Norr IT-partner is an IT company that focuses on delivering Microsoft products to their customers and therefore wanted to know more about Azure AD Connect: how to configure the solution and what the requirements are.

    Single Sign On means that you only need to log in once with your username and password, and you automatically get access to all applications that support the technology without entering any further credentials. By using a federated domain, users get the best and safest Single Sign On experience. Simplified Sign On lets users log in to all supported applications with the same username and password, but without automatic login.

    The Azure AD Connect tool installs the roles needed to run Single Sign On or Simplified Sign On. By default, the synchronization engine keeps track of information about users and groups. Passwords are also synchronized between the on-premises Active Directory and Azure Active Directory or a federation server.

    What the synchronization engine picks up is determined by the rules defined. Password Sync does not install any extra server roles. With the federation path, extra roles are installed: Federation Services (AD FS) and Web Application Proxy (WAP). These handle the authentication of users instead of the normal Microsoft authentication. There are requirements for the servers that host the roles, depending on the size of the Active Directory and the number of users: the servers need a certain base performance to work properly.

    Download full text (pdf)
    Azure AD Connect
  • 39.
    Bräne, Arvid
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    User Experience Design for Children: Developing and Testing a UX Framework2016Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE creditsStudent thesis
    Abstract [en]

    Designing good digital experiences for children can be difficult: designers have to consider children's cognitive and motor-skill limitations, understand their target audience, create something entertaining and educational, comply with national and international legislation, and at the same time appeal to parents. We set out to create a general framework that designers and developers can use as a foundation and testing ground for their digital products in the field of user experience.

    The methods used during the thesis include interviews, literature studies, user testing, case studies, personas, and prototyping. The results are primarily user experience guidelines packaged in a theoretical framework and user-testing conclusions, along with suggestions for improving the current Lego Star Wars: Force Builders application, a few in the form of prototypes.

    Download full text (pdf)
    fulltext
  • 40.
    Brännback, Andreas
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    FTTX-Analysverktyg anpassat för Telias nät2018Independent thesis Basic level (professional degree), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    A tool for analyzing the status of Fiber to the X (FTTX) customers in Telia's network has been programmed in Python. The system has a modular structure where analysis functions of similar types are bundled into module files, and it is designed to be easily extended with more analysis modules in future projects. To perform an analysis of a specific customer, the system retrieves technical data parameters from the switch to which the customer is connected and compares these parameters against predetermined values to find deviations. Simple Network Management Protocol (SNMP) and Telnet are the primary protocols used to retrieve data, while Hypertext Transfer Protocol (HTTP) is used to transfer system input and output. The result of an analysis is sent as Extensible Markup Language (XML) back to the server that originally requested the analysis. The XML reply contains technical data parameters describing the customer's connection status and an analytical response based on these parameters. The amount of data presented in the XML response varies slightly depending on the type of switch the customer is connected to: switches of older hardware types generally present less customer-port data than more modern switches, which leads to a less detailed analytical response. This analysis tool is therefore better suited to the modern switches found in Telia's network.
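    The analysis flow described above, comparing retrieved parameters against predetermined values and replying with XML, can be sketched as follows (the parameter names and thresholds are hypothetical, not Telia's actual values):

```python
import xml.etree.ElementTree as ET

# Hypothetical per-parameter limits; the real tool compares switch data
# against predetermined values specific to the network.
LIMITS = {"rx_power_dbm": (-27.0, -8.0), "crc_errors": (0, 100)}

def analyse_customer(port_data):
    """Build an XML reply containing measured values plus an analytical
    verdict based on threshold deviations."""
    reply = ET.Element("analysis")
    deviations = []
    for name, value in port_data.items():
        lo, hi = LIMITS[name]
        param = ET.SubElement(reply, "parameter", name=name)
        param.text = str(value)
        if not lo <= value <= hi:
            deviations.append(name)
    verdict = ET.SubElement(reply, "verdict")
    verdict.text = "deviation: " + ", ".join(deviations) if deviations else "ok"
    return ET.tostring(reply, encoding="unicode")

xml_reply = analyse_customer({"rx_power_dbm": -30.5, "crc_errors": 3})
```

    In the real system the input values would come from SNMP or Telnet queries against the customer's switch rather than a dictionary literal.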

    Download full text (pdf)
    fulltext
  • 41.
    Brännström, Andreas
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Kampik, Timotheus
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Nieves, Juan Carlos
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Towards Human-Aware Epistemic Planning For Promoting Behavior-Change2020In: Workshop Program, 2020Conference paper (Refereed)
    Abstract [en]

    This paper introduces an approach to human-aware epistemic planning in which a rational intelligent agent plans its actions for encouraging a human to proceed in a social virtual reality (VR) environment. In order to persuade the human user to execute specific actions, the agent adapts the virtual environment by adjusting motivators in the environment. The agent's model of the human is based on the theory of planned behavior (TPB), a cognitive theory to explain and predict human behavior. The intelligent agent manipulates the environment, a process where the agent conducts epistemic actions, i.e., adapting the environment and observing human responses, in order to understand the human's behavior and encourage human actions. An action reasoning framework is introduced that defines transitions between goal-oriented human activities in the virtual scenario. The proposed human-aware planning architecture can also be applied in environments that are not virtual, by utilizing modern mobile devices which have built-in sensors that measure motion, orientation, and various environmental conditions.

    Download full text (pdf)
    fulltext
  • 42.
    Buckland, Philip I.
    Umeå University, Faculty of Arts, Department of historical, philosophical and religious studies, Environmental Archaeology Lab.
    Freeing information to the people: Using the past to aid the future2011In: International Innovation - Disseminating Science Research and Technology, ISSN 2041-4552, no 4, p. 51-53Article in journal (Other (popular science, discussion, etc.))
    Abstract [en]

    Dr Philip Buckland discusses his recent project SEAD: the web-accessible scientific database that crosses archaeological and environmental disciplines. 

    Disciplines as diverse as anthropology and palaeoecology take an interest in our environment and how we have treated it. The Strategic Environmental Archaeology Database aims to create a multi-proxy, GIS-ready database for environmental and archaeological data to aid multidisciplinary research.

    Download full text (pdf)
    fulltext
  • 43.
    Calvanese, Diego
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Free-University of Bozen-Bolzano, Bolzano, Italy.
    Gal, Avigdor
    Technion – Israel Institute of Technology, Haifa, Israel.
    Lanti, Davide
    Free-University of Bozen-Bolzano, Bolzano, Italy.
    Montali, Marco
    Free-University of Bozen-Bolzano, Bolzano, Italy.
    Mosca, Alessandro
    Free-University of Bozen-Bolzano, Bolzano, Italy.
    Shraga, Roee
    Khoury College of Computer Science, Northeastern University, MA, Boston, United States.
    Conceptually-grounded mapping patterns for Virtual Knowledge Graphs2023In: Data & Knowledge Engineering, ISSN 0169-023X, E-ISSN 1872-6933, Vol. 145, article id 102157Article in journal (Refereed)
    Abstract [en]

    Virtual Knowledge Graphs (VKGs) constitute one of the most promising paradigms for integrating and accessing legacy data sources. A critical bottleneck in the integration process involves the definition, validation, and maintenance of mapping assertions that link data sources to a domain ontology. To support the management of mappings throughout their entire lifecycle, we identify a comprehensive catalog of sophisticated mapping patterns that emerge when linking databases to ontologies. To do so, we build on well-established methodologies and patterns studied in data management, data analysis, and conceptual modeling. These are extended and refined through the analysis of concrete VKG benchmarks and real-world use cases, and considering the inherent impedance mismatch between data sources and ontologies. We validate our catalog on the considered VKG scenarios, showing that it covers the vast majority of mappings present therein.

  • 44.
    Calvanese, Diego
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Faculty of Engineering, Free University of Bozen-Bolzano, Italy.
    Gianola, Alessandro
    Faculty of Engineering, Free University of Bozen-Bolzano, Italy.
    Mazzullo, Andrea
    Faculty of Engineering, Free University of Bozen-Bolzano, Italy.
    Montali, Marco
    Faculty of Engineering, Free University of Bozen-Bolzano, Italy.
    SMT safety verification of ontology-based processes2023In: Proceedings of the 37th AAAI conference on artificial intelligence, AAAI2023, AAAI Press, 2023, Vol. 37, p. 6271-6279Conference paper (Refereed)
    Abstract [en]

    In the context of verification of data-aware processes, a formal approach based on satisfiability modulo theories (SMT) has been considered to verify parameterised safety properties. This approach requires a combination of model-theoretic notions and algorithmic techniques based on backward reachability. We introduce here Ontology-Based Processes, which are a variant of one of the most investigated models in this spectrum, namely simple artifact systems (SASs), where, instead of managing a database, we operate over a description logic (DL) ontology. We prove that when the DL is expressed in (a slight extension of) RDFS, it enjoys suitable model-theoretic properties, and that by relying on such DL we can define Ontology-Based Processes to which backward reachability can still be applied. Relying on these results we are able to show that in this novel setting, verification of safety properties is decidable in PSPACE.
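    The backward-reachability procedure underlying this approach can be illustrated on a plain finite-state system (the paper's actual setting is symbolic, with state sets described by SMT formulas over a DL ontology): iterate predecessors of the unsafe states until a fixpoint, then check whether any initial state was reached.

```python
def backward_reachable(transitions, unsafe):
    """Fixpoint of predecessor states of the unsafe set.
    transitions: set of (src, dst) pairs; unsafe: set of states."""
    reach = set(unsafe)
    while True:
        pre = {s for (s, d) in transitions if d in reach}
        if pre <= reach:          # no new predecessors: fixpoint reached
            return reach
        reach |= pre

transitions = {("init", "work"), ("work", "done"), ("work", "error")}
# The system is safe iff no initial state can reach the unsafe set.
bad = backward_reachable(transitions, {"error"})
is_safe = "init" not in bad       # False here: "error" is reachable
```

    The symbolic version replaces the explicit predecessor set with a preimage formula whose satisfiability is checked by an SMT solver, which is what makes the parameterised (infinite-state) case tractable.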

  • 45.
    Camillo, Frédéric
    et al.
    University of Toulouse / ENSEEIHT.
    Caron, Eddy
    University of Lyon / École Normale Supérieure de Lyon.
    Guivarch, Ronan
    University of Toulouse / ENSEEIHT.
    Hurault, Aurélie
    University of Toulouse / ENSEEIHT.
    Klein, Cristian
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Pérez, Christian
    University of Lyon / INRIA.
    Resource Management Architecture for Fair Scheduling of Optional Computations2013In: 2013 Eighth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing: 3PGCIC 2013 / [ed] Fatos Xhafa, Leonard Barolli, Dritan Nace, Salvatore Vinticinque and Alain Bui, IEEE Computer Society, 2013, p. 113-120Conference paper (Refereed)
    Abstract [en]

    Most HPC platforms require users to submit a pre-determined number of computation requests (also called jobs). Unfortunately, this is cumbersome when some of the computations are optional, i.e., they are not critical, but their completion would improve results. For example, given a deadline, the number of requests to submit for a Monte Carlo experiment is difficult to choose. The more requests are completed, the better the results are; however, submitting too many might overload the platform. Conversely, submitting too few requests may leave resources unused and miss an opportunity to improve the results.

    This paper introduces and solves the problem of scheduling optional computations. An architecture that auto-tunes the number of requests is proposed and then implemented in the DIET GridRPC middleware. Real-life experiments show that several metrics are improved, such as user satisfaction, fairness, and the number of completed requests. Moreover, the solution is shown to be scalable.
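    A minimal sketch of such auto-tuning (not DIET's actual algorithm; the AIMD-style rule and the linear capacity model are assumptions made here for illustration) adjusts the number of submitted optional requests from observed platform utilization:

```python
def autotune_requests(utilization_of, rounds=10, start=10, target=0.9):
    """AIMD-style tuner for the number of optional requests to submit.
    utilization_of(n) -> platform utilization if n requests are submitted.
    Additive increase while there is headroom, multiplicative decrease
    when the platform would be overloaded."""
    n = start
    for _ in range(rounds):
        u = utilization_of(n)
        if u > 1.0:          # overload: back off
            n = max(1, n // 2)
        elif u < target:     # headroom: submit more optional work
            n += 5
    return n

# Hypothetical platform with capacity for 50 concurrent requests.
final = autotune_requests(lambda n: n / 50)
```

    The tuner converges near the platform's capacity without the user having to pick the request count up front, which is the core of the scheduling problem the paper addresses.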

    Download full text (pdf)
    fulltext
  • 46.
    Castillo, Ismael
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Freidovich, Leonid B.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Barrier sliding mode control and on-line trajectory generation for the automation of a mobile hydraulic crane2018In: 15th International Workshop on Variable Structure Systems (VSS), IEEE, 2018, p. 162-167, article id 8460409Conference paper (Refereed)
    Abstract [en]

    In this paper we propose an implementation scheme of independent joint control for a four-degree-of-freedom heavy-duty hydraulically actuated crane. First, on-line generation of feasible trajectories, following a driver's lead and satisfying the actuator constraints for the redundant kinematic chain, is performed. Second, an implementation of two new sliding mode algorithms with variable barrier-function gains, which allow robust tracking of the generated trajectory while alleviating high-frequency oscillations, is presented. Experimental results are presented to show the effectiveness of the proposed semi-automation scheme, exploiting the low accuracy requirements motivated by the forestry application.

  • 47.
    Chaudhry, Tanmay
    et al.
    SimScale GmbH, Germany.
    Doblander, Christoph
    Technische Universität München, Germany.
    Dammer, Anatol
    SimScale GmbH, Germany.
    Klein, Cristian
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Jacobsen, Hans-Arno
    Technische Universität München, Germany.
    Retrofitting Admission Control in an Internet-Scale Application2016Report (Other academic)
    Abstract [en]

    In this paper we propose a methodology to retrofit admission control in an Internet-scale, production application. Admission control requires less effort to improve the availability of an application, in particular when making it scalable is costly. This can occur due to the integration of 3rd-party legacy code or the handling of large amounts of data, and is further motivated by lean thinking, which argues for building a minimum viable product to discover customer requirements.

    Our main contribution consists of a method to generate an amplified workload that is realistic enough to test all kinds of what-if scenarios but does not require an exhaustive transition matrix. This workload generator can then be used to iteratively stress-test the application, identify the next bottleneck, and add admission control.

    To illustrate the usefulness of the approach, we report on our experience with adding admission control within SimScale, a Software-as-a-Service start-up for engineering simulations, that already features 50,000 users.
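    The admission-control idea itself can be sketched minimally as a concurrency cap with load shedding (the production system's policies are of course more elaborate; this is only the principle):

```python
class AdmissionController:
    """Reject work beyond a capacity limit so that admitted requests stay
    healthy: when scaling out is costly, shedding the excess load keeps
    the service available for the requests that are admitted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = 0

    def try_admit(self):
        if self.active >= self.capacity:
            return False          # shed load instead of degrading everyone
        self.active += 1
        return True

    def finish(self):
        self.active -= 1

ac = AdmissionController(capacity=2)
admitted = [ac.try_admit() for _ in range(3)]  # [True, True, False]
ac.finish()
```

    The amplified workload described above is what reveals where such a cap is needed and how low it must be set before the next bottleneck appears.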

    Download full text (pdf)
    fulltext
  • 48.
    Chokwitthaya, Chanachok
    et al.
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Zhu, Yimin
    Department of Construction Management, Louisiana State University, Baton Rouge, United States.
    Lu, Weizhuo
    Umeå University, Faculty of Science and Technology, Department of Applied Physics and Electronics.
    Ontology for experimentation of human-building interactions using virtual reality2023In: Advanced Engineering Informatics, ISSN 1474-0346, E-ISSN 1873-5320, Vol. 55, article id 101903Article in journal (Refereed)
    Abstract [en]

    Scientific experiments significantly enhance the understanding of human-building interactions in building and engineering research. Recently, conducting virtual reality (VR) experiments has gained acceptance and popularity as an approach to studying human-building interactions. However, little attention has been given to the standardization of such experimentation. Proper standardization can promote the reusability, replicability, and repeatability of VR experiments and accelerate the maturity of this emerging experimentation method. Responding to such needs, the authors proposed a virtual human-building interaction experimentation ontology (VHBIEO). It is a domain-level ontology, extending the ontology of scientific experiments (EXPO) to standardize virtual human-building interaction experimentation, and was developed based on state-of-the-art ontology development approaches. Competency questions (CQs) were used to derive requirements and guide the development. Semantic Web technologies were applied to make VHBIEO machine-readable, accessible, and processable. VHBIEO incorporates an application view (APV) to support the inclusion of information unique to particular applications. The authors performed taxonomy evaluations to assess consistency, completeness, and redundancy, confirming the absence of errors in its structure. Application evaluations were performed to investigate its ability to standardize experiments and to support the generation of machine-readable, accessible, and processable information, and also verified the capability of the APV to support the inclusion of unique information.

    Download full text (pdf)
    fulltext
  • 49.
    Coelho Mollo, Dimitri
    Umeå University, Faculty of Arts, Department of historical, philosophical and religious studies.
    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models2023In: Transactions on Machine Learning ResearchArticle in journal (Refereed)
    Abstract [en]

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. 

  • 50. Cohen, Albert
    et al.
    Shen, Xipeng
    Torrellas, Josep
    Tuck, James
    Zhou, Yuanyuan
    Adve, Sarita
    Akturk, Ismail
    Bagchi, Saurabh
    Balasubramonian, Rajeev
    Barik, Rajkishore
    Beck, Micah
    Bodik, Ras
    Butt, Ali
    Ceze, Luis
    Chen, Haibo
    Chen, Yiran
    Chilimbi, Trishul
    Christodorescu, Mihai
    Criswell, John
    Ding, Chen
    Ding, Yufei
    Dwarkadas, Sandhya
    Elmroth, Erik
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Gibbons, Phil
    Guo, Xiaochen
    Gupta, Rajesh
    Heiser, Gernot
    Hoffman, Hank
    Huang, Jian
    Hunter, Hillery
    Kim, John
    King, Sam
    Larus, James
    Liu, Chen
    Lu, Shan
    Lucia, Brandon
    Maleki, Saeed
    Mazumdar, Somnath
    Neamtiu, Iulian
    Pingali, Keshav
    Rech, Paolo
    Scott, Michael
    Solihin, Yan
    Song, Dawn
    Szefer, Jakub
    Tsafrir, Dan
    Urgaonkar, Bhuvan
    Wolf, Marilyn
    Xie, Yuan
    Zhao, Jishen
    Zhong, Lin
    Zhu, Yuhao
    Inter-disciplinary Research Challenges in Computer Systems for the 2020s2018Manuscript (preprint) (Other academic)