umu.se Publications
1 - 11 of 11
  • 1.
    Fox, William
    et al.
    Lawrence Berkeley National Laboratory.
    Ghoshal, Devarshi
    Souza, Abel
    Umeå University, Faculty of Science and Technology, Department of Computing Science. Lawrence Berkeley National Laboratory.
    P. Rodrigo, Gonzalo
    Ramakrishnan, Lavanya
E-HPC: A Library for Elastic Resource Management in HPC Environments. In: 12th Workshop on Workflows in Support of Large-Scale Science (WORKS), New York, NY, USA: Association for Computing Machinery (ACM), 2017, article id 1. Conference paper (Refereed)
    Abstract [en]

Next-generation data-intensive scientific workflows need to support streaming and real-time applications with dynamic resource needs on high performance computing (HPC) platforms. The static resource allocation model on current HPC systems that was designed for monolithic MPI applications is insufficient to support the elastic resource needs of current and future workflows. In this paper, we discuss the design, implementation and evaluation of Elastic-HPC (E-HPC), an elastic framework for managing resources for scientific workflows on current HPC systems. E-HPC considers a resource slot for a workflow as an elastic window that might map to different physical resources over the duration of a workflow. Our framework uses checkpoint-restart as the underlying mechanism to migrate workflow execution across the dynamic window of resources. E-HPC provides the foundation necessary to enable dynamic resource allocation of HPC resources that are needed for streaming and real-time workflows. E-HPC has negligible overhead beyond the cost of checkpointing. Additionally, E-HPC results in decreased turnaround time of workflows compared to the traditional model of resource allocation for workflows, where resources are allocated per stage of the workflow. Our evaluation shows that E-HPC improves core hour utilization for common workflow resource use patterns and provides an effective framework for elastic expansion of resources for applications with dynamic resource needs.
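The elastic-window mechanism described in the abstract can be pictured as a stage loop that checkpoints, reallocates, and restarts whenever a stage's resource needs change. The sketch below is illustrative only, not E-HPC's actual API; every function and parameter name is an assumption.

```python
# Illustrative sketch of the elastic-window idea: between workflow stages,
# execution state is checkpointed, the resource window is resized, and the
# workflow resumes on the new resources. All names are hypothetical.

def run_elastic(stages, allocate, checkpoint, restart):
    """stages: list of (nodes_needed, work_fn); work_fn(state, nodes) -> state."""
    state, nodes, log = None, 0, []
    for needed, work in stages:
        if needed != nodes:                 # resource needs changed
            if state is not None:
                state = checkpoint(state)   # persist execution state
                log.append(("ckpt", nodes))
            nodes = allocate(needed)        # acquire the resized window
            if state is not None:
                state = restart(state)      # resume on the new resources
        state = work(state, nodes)
        log.append(("run", nodes))
    return state, log

# A toy workflow: stage 1 runs on 2 nodes, stage 2 grows to 4 nodes.
stages = [(2, lambda s, n: (s or 0) + n), (4, lambda s, n: s + n)]
result, log = run_elastic(stages, allocate=lambda n: n,
                          checkpoint=lambda s: s, restart=lambda s: s)
# result == 6; log == [("run", 2), ("ckpt", 2), ("run", 4)]
```

Because migration goes through checkpoint-restart, the only overhead beyond normal execution is the checkpoint itself, which matches the abstract's claim of negligible overhead beyond checkpointing cost.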

  • 2. Gutierrez, Felipe
    et al.
    Beedkar, Kaustubh
    Souza, Abel
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Markl, Volker
AdCom: Adaptive combiner for streaming aggregations. In: Advances in Database Technology - EDBT / [ed] Velegrakis Y.; Zeinalipour D.; Chrysanthis P.K.; Guerra F., OpenProceedings, 2021, Vol. 2021-March, p. 403-414. Conference paper (Refereed)
    Abstract [en]

    Continuous applications such as device monitoring and anomaly detection often require real-time aggregated statistics over unbounded data streams. While existing stream processing systems such as Flink, Spark, and Storm support processing of streaming aggregations, their optimizations are limited with respect to the dynamic nature of the data, and therefore are suboptimal when the workload changes and/or when there is data skew. In this paper we present AdCom, which is an adaptive combiner for stream processing engines. The use of AdCom in aggregation queries enables pre-aggregating tuples upstream (i.e., before data shuffling) followed by global aggregation downstream. In contrast to existing approaches, AdCom can automatically adjust the number of tuples to pre-aggregate depending on the data rate and available network. Our experimental study using real-world streaming workloads shows that using AdCom leads to 2.5-9× higher sustainable throughput without compromising latency.
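The adaptive pre-aggregation idea can be illustrated with a small sketch: a combiner merges tuples locally before shuffling, and grows or shrinks its batch size with the observed input rate. The thresholds, the doubling/halving rule, and all names below are assumptions for illustration, not AdCom's actual policy.

```python
# Sketch of an adaptive combiner: pre-aggregate tuples upstream and emit a
# batch downstream once `n` tuples have been absorbed; `n` adapts to the
# observed input rate. Thresholds and the adjustment rule are illustrative.

class AdaptiveCombiner:
    def __init__(self, low_rate=100, high_rate=1000, min_n=1, max_n=64):
        self.low_rate, self.high_rate = low_rate, high_rate  # records/sec
        self.min_n, self.max_n = min_n, max_n
        self.n = min_n              # tuples to pre-aggregate per batch
        self.buf, self.count = {}, 0

    def adjust(self, rate):
        """Grow the batch under high rates, shrink it when the rate drops."""
        if rate > self.high_rate:
            self.n = min(self.n * 2, self.max_n)
        elif rate < self.low_rate:
            self.n = max(self.n // 2, self.min_n)
        return self.n

    def add(self, key, value):
        """Merge a tuple; return a batch to shuffle downstream when full."""
        self.buf[key] = self.buf.get(key, 0) + value
        self.count += 1
        if self.count >= self.n:
            out, self.buf, self.count = self.buf, {}, 0
            return out
        return None

c = AdaptiveCombiner()
c.adjust(5000); c.adjust(5000)      # high input rate: n goes 1 -> 2 -> 4
partials = [c.add(k, v) for k, v in [("a", 1), ("a", 2), ("b", 1), ("a", 1)]]
# first three calls buffer (None); the fourth emits {"a": 4, "b": 1}
```

Larger batches under high rates reduce shuffled tuples (throughput), while smaller batches under low rates keep results flowing (latency), which is the trade-off the abstract describes.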

  • 3.
    Souza, Abel
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
Autonomous resource management for high performance datacenters (2020). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

Over the last decade, new applications such as data intensive workflows have hit an inflection point in widespread use and influenced the compute paradigm of most scientific and industrial endeavours. Data intensive workflows are highly dynamic, adapting to resource changes and system faults, and also allowing approximate solutions into their models. On the one hand, these dynamic characteristics require processing power and capabilities that originated in cloud computing environments, and are not well supported by large High Performance Computing (HPC) infrastructures. On the other hand, cloud computing datacenters favor low latency over throughput, deeply contrasting with HPC, which enforces a centralized environment and prioritizes total computation accomplished over time, ignoring latency entirely. Although data handling needs are predicted to increase by as much as a thousand times over the next decade, future datacenters' processing power will not increase as much.

To tackle these long-term developments, this thesis proposes autonomic methods combined with novel scheduling strategies to optimize datacenter utilization while guaranteeing user defined constraints and seamlessly supporting a wide range of applications under various real operational scenarios. Leveraging data intensive characteristics, a library is developed to dynamically adjust the amount of resources used throughout the lifespan of a workflow, enabling elasticity for such applications in HPC datacenters. For mission critical environments where services must run even in the event of system failures, we define an adaptive controller to dynamically select the best method to perform runtime state synchronizations. We develop different hybrid extensible architectures and reinforcement learning scheduling algorithms that smoothly enable dynamic applications in HPC environments. An overall theme in this thesis is extensive experimentation in real datacenter environments. Our results show improvements in datacenter utilization and performance, achieving higher overall efficiency. Our methods also simplify operations and allow the onboarding of novel types of applications previously not supported.

  • 4.
    Souza, Abel
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Papadopoulos, Alessandro Vittorio
    Tomás Bolivar, Luis
Umeå University, Faculty of Science and Technology, Department of Computing Science. Red Hat Inc.
    Gilbert, David
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
Hybrid Adaptive Checkpointing for Virtual Machine Fault Tolerance. In: Proceedings - 2018 IEEE International Conference on Cloud Engineering, IC2E 2018 / [ed] Li J., Chandra A., Guo T., Cai Y., Institute of Electrical and Electronics Engineers (IEEE), 2018, p. 12-22. Conference paper (Refereed)
    Abstract [en]

Active Virtual Machine (VM) replication is an application independent and cost-efficient mechanism for high availability and fault tolerance, with several recently proposed implementations based on checkpointing. However, these methods may suffer from large impacts on application latency, excessive resource usage overheads, and/or unpredictable behavior for varying workloads. To address these problems, we propose a hybrid approach through a Proportional-Integral (PI) controller to dynamically switch between periodic and on-demand checkpointing. Our mechanism automatically selects the method that minimizes application downtime by adapting itself to changes in workload characteristics. The implementation is based on modifications to QEMU, LibVirt, and OpenStack, to seamlessly provide fault tolerant VM provisioning and to enable the controller to dynamically select the best checkpointing mode. Our evaluation is based on experiments with a video streaming application, an e-commerce benchmark, and a software development tool. The experiments demonstrate that our adaptive hybrid approach improves both application availability and resource usage compared to static selection of a checkpointing method, with application performance gains and negligible overheads.
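The hybrid switching idea can be sketched with a textbook PI controller: it tracks observed downtime against a setpoint and flips the checkpointing mode when the control signal indicates persistent deviation. The gains, the setpoint, and the switching rule here are illustrative assumptions, not the paper's tuned controller.

```python
# Hedged sketch: a Proportional-Integral controller that selects between
# periodic and on-demand checkpointing based on observed downtime.
# Gains, setpoint, and the switching rule are illustrative.

class CheckpointModeController:
    def __init__(self, setpoint_ms=50.0, kp=0.5, ki=0.1):
        self.setpoint = setpoint_ms   # target downtime per interval
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.mode = "periodic"

    def update(self, downtime_ms):
        error = downtime_ms - self.setpoint
        self.integral += error
        signal = self.kp * error + self.ki * self.integral
        # Downtime persistently above target -> switch to on-demand
        # checkpoints; otherwise periodic checkpointing is cheaper.
        self.mode = "on-demand" if signal > 0 else "periodic"
        return self.mode

ctl = CheckpointModeController()
low = ctl.update(10.0)    # downtime well under target -> "periodic"
high = ctl.update(200.0)  # downtime spike -> "on-demand"
```

The integral term is what distinguishes this from a simple threshold: a single downtime spike after a long quiet period barely moves the mode, while sustained degradation accumulates and forces the switch.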

  • 5.
    Souza, Abel
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Pelckmans, Kristiaan
    Uppsala University.
    Ghoshal, Devarshi
    Lawrence Berkeley National Lab.
    Ramakrishnan, Lavanya
    Lawrence Berkeley National Lab.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
ASA - The Adaptive Scheduling Architecture. In: HPDC '20: Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing, ACM Digital Library, 2020, p. 161-165. Conference paper (Refereed)
    Abstract [en]

In High Performance Computing (HPC), resources are controlled by batch systems and may not be available due to long queue waiting times, negatively impacting application deadlines. This is noticeable in low latency scientific workflows where resource planning and timely allocation are key for efficient processing. On the one hand, peak allocations guarantee the fastest possible workflow execution time, at the cost of extended queue waiting times and costly resource usage. On the other hand, dynamic allocations following specific workflow stage requirements optimize resource usage, though they increase the total workflow makespan. To enable new scheduling strategies and features in workflows, we propose ASA: the Adaptive Scheduling Architecture, a novel scheduling method to reduce perceived queue waiting times as well as to optimize workflows' resource usage. Reinforcement learning is used to estimate queue waiting times, and based on these estimates ASA proactively submits resource change requests, minimizing total workflow inter-stage waiting times, idle resources, and makespan. Experiments with three scientific workflows at two HPC centers show that ASA combines the best of the two aforementioned approaches, with average queue waiting time and makespan reductions of up to 10% and 2% respectively, with up to 100% prediction accuracy, while obtaining near optimal resource utilization.
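The proactive-submission idea can be pictured as follows: keep a per-queue estimate of waiting time and use it to decide how far ahead of a stage's readiness its resource request should go in. In this hedged sketch a simple exponential average stands in for the paper's reinforcement-learning estimator; all names and the `alpha` value are illustrative.

```python
# Sketch of proactive stage submission: estimate per-queue waiting times and
# submit a stage's resource request early enough that the allocation arrives
# roughly when the stage becomes ready. The exponential average is a stand-in
# for ASA's reinforcement-learning estimator; names are hypothetical.

class QueueWaitEstimator:
    def __init__(self, alpha=0.5):
        self.alpha = alpha          # weight of the newest observation
        self.estimate = {}

    def observe(self, queue, waited):
        """Fold an observed queue waiting time into the running estimate."""
        old = self.estimate.get(queue, waited)
        self.estimate[queue] = (1 - self.alpha) * old + self.alpha * waited
        return self.estimate[queue]

    def lead_time(self, queue, stage_ready_in):
        """How long before the stage is ready its request should be submitted,
        so queueing overlaps with the preceding stage's execution."""
        wait = self.estimate.get(queue, 0.0)
        return max(0.0, wait - stage_ready_in)

est = QueueWaitEstimator()
est.observe("batch", 100.0)   # first observation seeds the estimate
est.observe("batch", 200.0)   # estimate moves to 150.0
# a stage ready in 60s should be submitted ~90s early on this queue
```

Overlapping the estimated queue wait with the previous stage's execution is what shrinks the inter-stage idle time the abstract targets.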

  • 6.
    Souza, Abel
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Pelckmans, Kristiaan
Uppsala University.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
A HPC Co-Scheduler with Reinforcement Learning. Manuscript (preprint) (Other academic)
    Abstract [en]

High Performance Computing (HPC) datacenters process thousands of diverse applications, supporting many scientific and business endeavours. Although users understand minimum coarse resource job requirements such as amounts of CPUs and memory, internal infrastructural utilization data and system dynamics are often visible only to cluster operators. Moreover, due to increased complexity, heuristically tweaking a batch system is even today a very challenging task. When combined with application profiling, infrastructural data enables improvements to job scheduling, while creating space to improve Quality-of-Service (QoS) metrics such as queue waiting times and total execution times. Targeting improvements in utilization and throughput, in this paper we evaluate and propose a novel Reinforcement Learning co-scheduler algorithm that combines capacity utilization with application performance profiling. We first profile a running application by assessing its resource utilization and progress by means of a forest of decision trees, enabling our algorithm to understand the application's resource capacity usage. We then use this information to estimate how much capacity from this ongoing allocation can be allocated for co-scheduling additional applications. Because estimations may go wrong, our algorithm has to learn and evaluate when co-scheduling decisions result in QoS degradation, such as application slowness. To overcome this, we devised a co-scheduling architecture and supporting metrics to help minimize performance degradation, enabling utilization improvements of up to 25% even when the cluster is experiencing high demand, with 10% average queue makespan reductions under low loads. Together with the architecture, our algorithm forms the base of an application-aware co-scheduler for improved datacenter utilization and minimal performance degradation.

  • 7.
    Souza, Abel
    et al.
    University of Massachusetts Amherst, Amherst, USA.
    Pelckmans, Kristiaan
    Uppsala University, Uppsala, Sweden.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
A HPC Co-scheduler with Reinforcement Learning. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer, 2021, p. 126-148. Conference paper (Refereed)
    Abstract [en]

Although High Performance Computing (HPC) users understand basic resource requirements such as the number of CPUs and memory limits, internal infrastructural utilization data is exclusively leveraged by cluster operators, who use it to configure batch schedulers. This task is challenging and increasingly complex due to ever larger cluster scales and heterogeneity of modern scientific workflows. As a result, HPC systems achieve low utilization with long job completion times (makespans). To tackle these challenges, we propose a co-scheduling algorithm based on an adaptive reinforcement learning algorithm, where application profiling is combined with cluster monitoring. The resulting cluster scheduler matches resource utilization to application performance in a fine-grained manner (i.e., at the operating system level). As opposed to nominal allocations, we apply decision trees to model applications' actual resource usage, which are used to estimate how much resource capacity from one allocation can be co-allocated to additional applications. Our algorithm learns from incorrect co-scheduling decisions, adapts to changing environment conditions, and evaluates when such changes cause resource contention that impacts quality of service metrics such as job slowdowns. We integrate our algorithm in an HPC resource manager that combines Slurm and Mesos for job scheduling and co-allocation, respectively. Our experimental evaluation, performed in a dedicated cluster executing a mix of four different real scientific workflows, demonstrates improvements in cluster utilization of up to 51% even in high load scenarios, with 55% average queue makespan reductions under low loads.
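The headroom estimation at the core of the co-scheduler can be sketched in a few lines: model a job's actual resource usage from profiling samples and offer the unused share of its nominal allocation to additional jobs, minus a safety margin. Here a plain mean stands in for the paper's decision-tree models; the function name, the margin, and the policy details are assumptions.

```python
# Illustrative headroom estimate in the spirit of the co-scheduler: contrast
# a job's nominal allocation with its measured usage and expose the spare
# capacity for co-scheduling. A mean replaces the paper's decision trees;
# all names and the default margin are assumptions.

def co_schedulable_cores(nominal_cores, usage_samples, margin=0.1):
    """Cores of an ongoing allocation that could host a co-scheduled job."""
    if not usage_samples:
        return 0                    # no profile yet: co-schedule nothing
    used = sum(usage_samples) / len(usage_samples)
    spare = nominal_cores - used
    return max(0, int(spare - margin * nominal_cores))

# A 16-core allocation averaging ~5 cores of real use leaves 9 cores
# for co-scheduling after a 10% safety margin.
```

The safety margin is the knob the learning component would tune in practice: too small and co-scheduled jobs cause slowdowns (QoS degradation), too large and the utilization gains the paper reports evaporate.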

  • 8.
    Souza, Abel Pinto Coelho de
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
Application-aware resource management for datacenters (2018). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

High Performance Computing (HPC) and Cloud Computing datacenters are extensively used to steer and solve complex problems in science, engineering, and business, such as calculating correlations and making predictions. Already in a single datacenter server, there are thousands of hardware and software metrics – Key Performance Indicators (KPIs) – that individually and aggregated can give insight into the performance, robustness, and efficiency of the datacenter and the provisioned applications. At the datacenter level, the number of KPIs is even higher. The fast-growing interest in datacenter management from both the public sector and industry, together with the rapid expansion in scale and complexity of datacenter resources and the services being provided on them, has made monitoring, profiling, controlling, and provisioning compute resources dynamically at runtime a challenging and complex task. Commonly, correlations of application KPIs, like response time and throughput, with resource capacities show that runtime systems (e.g., containers or virtual machines) that are used to provision these applications do not utilize available resources efficiently. This reduces datacenter efficiency, which in turn results in higher operational costs and longer waiting times for results.

    The goal of this thesis is to develop tools and autonomic techniques for improving datacenter operations, management and utilization, while improving and/or minimizing impacts on applications performance. To this end, we make use of application resource descriptors to create a library that dynamically adjusts the amount of resources used, enabling elasticity for scientific workflows in HPC datacenters. For mission critical applications, high availability is of great concern since these services must be kept running even in the event of system failures. By modeling and correlating specific resource counters, like CPU, memory and network utilization, with the number of runtime synchronizations, we present adaptive mechanisms to dynamically select which fault tolerant mechanism to use. Likewise, for scientific applications we propose a hybrid extensible architecture for dual-level scheduling of data intensive jobs in HPC infrastructures, allowing operational simplification, on-boarding of new types of applications and achieving greater job throughput with higher overall datacenter efficiency.

  • 9.
    Souza, Abel
    et al.
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Rezaei, Mohamad
    PDC Center for High Performance Computing, KTH Royal Institute of Technology, Sweden.
    Laure, Erwin
    PDC Center for High Performance Computing, KTH Royal Institute of Technology, Sweden.
    Tordsson, Johan
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
Hybrid Resource Management for HPC and Data Intensive Workloads. In: 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), Los Alamitos: IEEE Computer Society, 2019, p. 399-409. Conference paper (Refereed)
    Abstract [en]

Traditionally, High Performance Computing (HPC) and Data Intensive (DI) workloads have been executed on separate hardware using different tools for resource and application management. With increasing convergence of these paradigms, where modern applications are composed of both types of jobs in complex workflows, this separation becomes a growing overhead and the need for a common computation platform for both application areas increases. Executing both application classes on the same hardware not only enables hybrid workflows, but can also increase the usage efficiency of the system, as often not all available hardware is fully utilized by an application. While HPC systems are typically managed in a coarse grained fashion, allocating a fixed set of resources exclusively to an application, DI systems employ a finer grained regime, enabling dynamic resource allocation and control based on application needs. On the path to full convergence, a useful and less intrusive step is a hybrid resource management system that allows the execution of DI applications on top of standard HPC scheduling systems. In this paper we present the architecture of a hybrid system enabling dual-level scheduling for DI jobs in HPC infrastructures. Our system takes advantage of real-time resource utilization monitoring to efficiently co-schedule HPC and DI applications. The architecture is easily adaptable and extensible to current and new types of distributed workloads, allowing efficient combination of hybrid workloads on HPC resources with increased job throughput and higher overall resource utilization. The architecture is implemented based on the Slurm and Mesos resource managers for HPC and DI jobs. Our experimental evaluation in a real cluster, based on a set of representative HPC and DI applications, demonstrates that our hybrid architecture improves resource utilization by 20%, with a 12% decrease in queue makespan while still meeting all deadlines for HPC jobs.

  • 10.
    Villarroel, Beatriz
    et al.
    Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, Stockholm, Sweden; Instituto de Astrofísica de Canarias, Avda Vía Láctea s/n, Tenerife, La Laguna, Spain.
    Pelckmans, Kristiaan
    Department of Information Technology, Uppsala University, Box 337, Uppsala, Sweden.
    Solano, Enrique
    Departamento de Astrofísica, Centro de Astrobiología (CSIC/INTA), P.O. Box 78, Villanueva de la Cañada, Madrid, Spain.
    Laaksoharju, Mikael
    Department of Information Technology, Uppsala University, Box 337, Uppsala, Sweden.
    Souza, Abel
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Dom, Onyeuwaoma Nnaemeka
    Center for Basic Space Science, National Space Research and Development Agency, P.O. Box 2022, Enugu, Nsukka, Nigeria.
    Laggoune, Khaoula
    Sirius Astronomy Association, Sirius Astronomy Association, BP 18, Cité du 20 Aout, Constantine, Algeria.
    Mimouni, Jamal
    Department of Physics, University of Constantine 1, LPMPS & CERIST, Constantine, Algeria.
    Guergouri, Hichem
    Research Unit in Scientific Mediation, Science of Matter Division, CERIST, Constantine, Algeria.
    Mattsson, Lars
    Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, Stockholm, Sweden.
    García, Aurora Lago
    IES Tartessos, Barriada Hiconsa s/n, Sevilla, Camas, Spain.
    Soodla, Johan
    Department of Information Technology, Uppsala University, Box 337, Uppsala, Sweden.
    Castillo, Diego
    Department of Information Technology, Uppsala University, Box 337, Uppsala, Sweden.
    Shultz, Matthew E.
    Department of Physics & Astronomy, University of Delaware, 217 Sharp Lab, DE, Newark, United States.
    Aworka, Rubby
    Ghana Space Science and Technology Institute, Atomic-Haatso Rd., Kwabenya, P.O. Box LG 80 233, Accra, Ghana; African Institute for Mathematical Sciences Ghana, Accra (AIMS Ghana), University of Ghana, Summerhill Estates Ltd., Rd., Santeo, Legon, P.O. Box LG 25, Accra, Ghana.
    Comerón, Sébastien
    Instituto de Astrofísica de Canarias, Avda Vía Láctea s/n, Tenerife, La Laguna, Spain; Departamento de Astrofísica, Universidad de La Laguna, Tenerife, La Laguna, Spain.
    Geier, Stefan
    Instituto de Astrofísica de Canarias, Avda Vía Láctea s/n, Tenerife, La Laguna, Spain; Gran Telescopio Canarias (GRANTECAN), Cuesta de San José s/n, La Palma, Breña Baja, Spain.
    Marcy, Geoffrey W.
    Space Laser Awareness, 3883 Petaluma Hill Rd, CA, Santa Rosa, United States.
    Gupta, Alok C.
    Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital, India.
    Bergstedt, Josefine
    Independent Researcher, Uppsala, Sweden.
    Bär, Rudolf E.
    Institute for Particle Physics and Astrophysics, ETH Zürich, Wolfgang-Pauli-Strasse 27, Zürich, Switzerland.
    Buelens, Bart
    Flemish Institute for Technological Research (VITO), Mol, Belgium.
    Enriquez, Emilio
    Department of Astrophysics/IMAPP, Radboud University Nijmegen, Nijmegen, Netherlands.
    Mellon, Christopher K.
    Galileo Project Affiliate, PA, Laughlintown, United States.
    Prieto, Almudena
    Instituto de Astrofísica de Canarias, Avda Vía Láctea s/n, Tenerife, La Laguna, Spain; Departamento de Astrofísica, Universidad de La Laguna, Tenerife, La Laguna, Spain.
    Wamalwa, Dismas Simiyu
    Meru Physics Department, University of Science and Technology, P.O. Box 972-60200, Meru, Kenya.
    de Souza, Rafael S.
    Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Chinese Academy of Sciences, 80 Nandan Rd, Shanghai, China.
    Ward, Martin J.
    Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Rd, Durham, United Kingdom.
Launching the VASCO Citizen Science Project (2022). In: Universe, E-ISSN 2218-1997, Vol. 8, no 11, article id 561. Article in journal (Refereed)
    Abstract [en]

    The Vanishing & Appearing Sources during a Century of Observations (VASCO) project investigates astronomical surveys spanning a time interval of 70 years, searching for unusual and exotic transients. We present herein the VASCO Citizen Science Project, which can identify unusual candidates driven by three different approaches: hypothesis, exploratory, and machine learning, which is particularly useful for SETI searches. To address the big data challenge, VASCO combines three methods: the Virtual Observatory, user-aided machine learning, and visual inspection through citizen science. Here we demonstrate the citizen science project and its improved candidate selection process, and we give a progress report. We also present the VASCO citizen science network led by amateur astronomy associations mainly located in Algeria, Cameroon, and Nigeria. At the moment of writing, the citizen science project has carefully examined 15,593 candidate image pairs in the data (ca. 10% of the candidates), and has so far identified 798 objects classified as "vanished". The most interesting candidates will be followed up with optical and infrared imaging, together with the observations by the most potent radio telescopes.

  • 11.
    Villarroel, Beatriz
    et al.
    Nordita, KTH Royal Institute of Technology, Stockholm University, Roslagstullsbacken 23, Stockholm, Sweden; Instituto de Astrofísica de Canarias, Avda Vía Láctea S/N, La Laguna, Tenerife, Spain.
    Pelckmans, Kristiaan
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Solano, Enrique
    Departamento de Astrofísica, Centro de Astrobiología (CSIC/INTA), PO Box 78, Villanueva de la Cañada, Spain; Spanish Virtual Observatory.
    Laaksoharju, Mikael
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Souza, Abel
    Umeå University, Faculty of Science and Technology, Department of Computing Science.
    Dom, Onyeuwaoma Nnaemeka
    Center for Basic Space Science, National Space Research and Development Agency, Enugu, Nigeria.
    Laggoune, Khaoula
    SIRIUS Astronomy Association, Algeria.
    Mimouni, Jamal
    Depart. of Physics, Univ. of Constantine-1, LPMPS, Constantine, Algeria.
    Mattsson, Lars
    Nordita, KTH Royal Institute of Technology, Stockholm University, Roslagstullsbacken 23, Stockholm, Sweden.
    Soodla, Johan
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Castillo, Diego
    Department of Information Technology, Uppsala University, Uppsala, Sweden.
    Shultz, Matthew E.
    Department of Physics and Astronomy, University of Delaware, United States.
    Aworka, Rubby
    Ghana Space Science and Technology Institute, Ghana; African Institute for Mathematical Sciences Ghana, Accra (AIMS Ghana), University of Ghana, Ghana.
    Comerón, Sébastien
    Instituto de Astrofísica de Canarias, Avda Vía Láctea S/N, La Laguna, Tenerife, Spain; Departamento de Astrofísica, Universidad de La Laguna, La Laguna, Tenerife, Spain.
    Geier, Stefan
    Instituto de Astrofísica de Canarias, Avda Vía Láctea S/N, La Laguna, Tenerife, Spain; Gran Telescopio Canarias (GRANTECAN), Cuesta de San José s/n, Breña Baja, La Palma, Spain.
    Marcy, Geoffrey
    Department of Astronomy, University of California, CA, Berkeley, United States.
    Gupta, Alok C.
    Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital, India.
    Bergstedt, Josefine
    Freelance, Uppsala, Sweden.
    Bär, Rudolf E.
    Institute for Particle Physics and Astrophysics, ETH Zürich, Wolfgang-Pauli-Strasse 27, Zürich, Switzerland.
    Buelens, Bart
    Statistics Netherlands, Methodology Department, Heerlen, Netherlands; VITO, Diepenbeek Genk, Flanders, Belgium.
    Prieto, M. Almudena
    Instituto de Astrofísica de Canarias, Avda Vía Láctea S/N, La Laguna, Tenerife, Spain; Departamento de Astrofísica, Universidad de La Laguna, La Laguna, Tenerife, Spain.
    Ramos-Almeida, Cristina
    Instituto de Astrofísica de Canarias, Avda Vía Láctea S/N, La Laguna, Tenerife, Spain; Departamento de Astrofísica, Universidad de La Laguna, La Laguna, Tenerife, Spain.
    Wamalwa, Dismas Simiyu
    Meru University of Science and Technology, P.O BOX 972, Meru, Kenya.
    Ward, Martin J.
    Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham, United Kingdom.
The VASCO project: 100 red transients and their follow up (2020). In: Proceedings of the International Astronautical Congress, IAC: 71st International Astronautical Congress, IAC 2020, International Astronautical Federation, IAF, 2020, article id 166680. Conference paper (Refereed)
    Abstract [en]

The Vanishing & Appearing Sources during a Century of Observations (VASCO) project investigates astronomical surveys spanning a 70-year time interval, searching for unusual and exotic transients. We present herein the VASCO Citizen Science Project, which uses three different approaches to identify unusual transients in a given set of candidates: hypothesis-driven, exploratory-driven, and machine-learning-driven (which is of particular benefit for SETI searches). To address the big data challenge, VASCO combines methods from the Virtual Observatory, user-aided machine learning, and visual inspection through citizen science. In this article, we demonstrate the citizen science project and its new and improved candidate selection process, and give a progress report. We also present the VASCO citizen science network led by amateur astronomy associations mainly located in Algeria, Cameroon, and Nigeria. At the moment of writing, the citizen science project has carefully examined 12,000 candidate image pairs in the data, and has so far identified 713 objects classified as "vanished". The most interesting candidates will be followed up with optical and infrared imaging, together with observations by the most potent radio telescopes.
