Umeå University – umu.se Publications
1 - 2 of 2
  • 1.
    Björklund, Henrik (Umeå University, Faculty of Science and Technology, Department of Computing Science); Gelade, Wouter; Martens, Wim
    Incremental XPath evaluation (2010). In: ACM Transactions on Database Systems, ISSN 0362-5915, E-ISSN 1557-4644, Vol. 35, no. 4:29, p. 42-. Article in journal (Refereed)
    Abstract [en]

    Incremental view maintenance for XPath queries asks to maintain a materialized XPath view over an XML database. It assumes an underlying XML database D and a query Q. One is given a sequence of updates U to D, and the problem is to compute the result of Q(U(D)): the result of evaluating query Q on database D after having applied updates U. This article initiates a systematic study of the Boolean version of this problem. In the Boolean version, one only wants to know whether Q(U(D)) is empty or not.

    In order to quickly answer this question, we are allowed to maintain an auxiliary data structure. The complexity of the maintenance algorithms is measured in (1) the size of the auxiliary data structure, (2) the worst-case time per update needed to compute Q(U(D)), and (3) the worst-case time per update needed to bring the auxiliary data structure up to date. We allow three kinds of updates: node insertion, node deletion, and node relabeling. Our main results are that downward XPath queries can be incrementally maintained in time O(depth(D)·poly(|Q|)) per update and conjunctive forward XPath queries in time O(depth(D) · log(width(D))·poly(|Q|)) per update, where |Q| is the size of the query, and depth(D) and width(D) are the nesting depth and maximum number of siblings in database D, respectively. The auxiliary data structures for maintenance are linear in |D| and polynomial in |Q| in all these cases.
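
    As a point of reference for the setting the abstract describes, here is a minimal sketch (not code from the article; the tree encoding, the example query, and all names are illustrative assumptions) of the naive baseline: a Boolean downward path query whose emptiness is re-checked from scratch after each update. This full re-evaluation per update is exactly the cost that the article's auxiliary data structures, of size linear in |D|, are designed to avoid.

    ```python
    # Illustrative sketch only: a tiny XML-like tree D, a downward path query Q,
    # and naive re-evaluation of the Boolean result after an update.
    # All names here are assumptions for illustration, not taken from the article.

    class Node:
        def __init__(self, label, children=None):
            self.label = label
            self.children = children or []

    def matches(node, path):
        # True if the child steps below `node` spell out `path`.
        if not path:
            return True
        return any(child.label == path[0] and matches(child, path[1:])
                   for child in node.children)

    def is_nonempty(root, path):
        # Boolean query result: does any node of the tree start a match of `path`?
        stack = [root]
        while stack:
            n = stack.pop()
            if matches(n, path):
                return True
            stack.extend(n.children)
        return False

    # Database D and a Boolean downward query Q (roughly //a/b/c).
    D = Node("r", [Node("a", [Node("b")]), Node("a", [Node("c")])])
    Q = ["a", "b", "c"]

    print(is_nonempty(D, Q))                               # False: no a/b/c chain yet
    D.children[0].children[0].children.append(Node("c"))  # update U: node insertion under b
    print(is_nonempty(D, Q))                               # True: Q(U(D)) is non-empty
    ```

    The naive check re-traverses the whole tree per update; the article's results replace it with maintenance in O(depth(D)·poly(|Q|)) time per update.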

  • 2.
    Gottlob, Georg (Dipartimento di Matematica e Informatica, University of Calabria, Via P. Bucci-Edificio 30B, Arcavacata di Rende, Italy); Lanzinger, Matthias (Department of Computer Science, University of Oxford, 7 Parks Road, Oxford, United Kingdom); Okulmus, Cem (Umeå University, Faculty of Science and Technology, Department of Computing Science); Pichler, Reinhard (Institute of Logic and Computation, TU Wien, Favoritenstraße 9-11, 1040 Wien, Austria)
    Fast parallel hypertree decompositions in logarithmic recursion depth (2024). In: ACM Transactions on Database Systems, ISSN 0362-5915, E-ISSN 1557-4644, Vol. 49, no. 1, article id 1. Article in journal (Refereed)
    Abstract [en]

    Various classic reasoning problems with natural hypergraph representations are known to be tractable if a hypertree decomposition (HD) of low width exists. The resulting algorithms are attractive for practical use in fields like databases and constraint satisfaction. However, algorithmic use of HDs relies on the difficult task of first computing a decomposition of the hypergraph underlying a given problem instance, which is then used to guide the algorithm for this particular instance. The performance of purely sequential methods for computing HDs is inherently limited, yet the problem is, theoretically, amenable to parallelisation. In this article, we propose the first algorithm for computing hypertree decompositions that is well suited for parallelisation. The newly proposed algorithm log-k-decomp requires only a logarithmic number of recursion levels and additionally allows for highly parallelised pruning of the search space by restriction to so-called balanced separators. We provide a detailed experimental evaluation over the HyperBench benchmark and demonstrate that log-k-decomp outperforms the current state of the art significantly.
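
    The structural notion behind the logarithmic recursion depth is the balanced separator mentioned in the abstract: roughly, a set of hyperedges whose removal splits the remaining hyperedges into components that each contain at most half of them, so that recursing into components independently keeps the recursion shallow and parallelisable. A minimal sketch of that balance check follows (the hypergraph encoding and all names are illustrative assumptions, not code from log-k-decomp):

    ```python
    # Illustrative sketch only: check whether a set of hyperedges is a "balanced
    # separator" of a hypergraph, i.e. removing its vertices leaves components
    # that each contain at most half of the hyperedges. Encoding and names are
    # assumptions for illustration, not taken from the article.

    def components_after_removal(edges, separator_vertices):
        # Group the non-separator parts of the hyperedges into connected components.
        remaining = {name: verts - separator_vertices for name, verts in edges.items()}
        remaining = {name: verts for name, verts in remaining.items() if verts}
        comps = []  # list of (vertex set, set of edge names)
        for name, verts in remaining.items():
            merged_verts, merged_edges = set(verts), {name}
            new_comps = []
            for cverts, cedges in comps:
                if cverts & merged_verts:          # shared vertices: same component
                    merged_verts |= cverts
                    merged_edges |= cedges
                else:
                    new_comps.append((cverts, cedges))
            new_comps.append((merged_verts, merged_edges))
            comps = new_comps
        return comps

    def is_balanced_separator(edges, separator_edge_names):
        # Every component left after removing the separator's vertices may cover
        # at most half of the hypergraph's edges.
        sep_vertices = set().union(*(edges[e] for e in separator_edge_names))
        comps = components_after_removal(
            {n: v for n, v in edges.items() if n not in separator_edge_names},
            sep_vertices)
        limit = len(edges) / 2
        return all(len(edge_names) <= limit for _, edge_names in comps)

    # A small hypergraph: a path of four hyperedges.
    H = {"e1": {1, 2}, "e2": {2, 3}, "e3": {3, 4}, "e4": {4, 5}}
    print(is_balanced_separator(H, {"e2"}))   # True: components {e1} and {e3, e4}
    print(is_balanced_separator(H, {"e4"}))   # False: one component still holds e1, e2, e3
    ```

    Because every component after a balanced split holds at most half of the hyperedges, a recursion that descends into components bottoms out after logarithmically many levels, and the components can be processed in parallel, which is the property the abstract highlights.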

    Download full text (pdf)