umu.se Publications
1 - 3 of 3
  • 1.
    Björklund, Johanna
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Cleophas, Loek
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap. Stellenbosch University, ZA-7602 Matieland, South Africa.
    A Taxonomy of Minimisation Algorithms for Deterministic Tree Automata (2016). In: Journal of universal computer science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 22, no. 2, pp. 180-196. Article in journal (Refereed)
    Abstract [en]

    We present a taxonomy of algorithms for minimising deterministic bottom-up tree automata (DTAs) over ranked and ordered trees. Automata of this type and its extensions are used in many application areas, including natural language processing (NLP) and code generation. In practice, DTAs can grow very large, but minimisation keeps things manageable. The proposed taxonomy serves as a unifying framework that makes algorithms accessible and comparable, and as a foundation for efficient implementation. Taxonomies of this type are also convenient for correctness and complexity analysis, as results can frequently be propagated through the hierarchy. The taxonomy described herein covers a broad spectrum of algorithms, ranging from novel to well-studied ones, with a focus on computational complexity.
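    To make the setting concrete, the sketch below shows one naive way to minimise a complete DTA by partition refinement: start from the accepting/rejecting split and repeatedly separate states that some one-step context sends into different blocks. It is only an illustration of the general technique the taxonomy organises, not one of the algorithms it covers, and the encoding of states, symbols and transitions is an assumption made for the example.

```python
from itertools import product

def minimise_dta(states, symbols, delta, final):
    """Brute-force Moore-style minimisation of a complete deterministic
    bottom-up tree automaton (an illustration only, not one of the
    algorithms classified in the paper).

    states  -- iterable of states
    symbols -- dict mapping each ranked symbol to its arity
    delta   -- dict mapping (symbol, tuple_of_argument_states) -> state,
               assumed total (defined for every symbol and argument tuple)
    final   -- set of accepting states

    Returns a dict mapping each state to a block id; states that share
    a block id are context-equivalent and can be merged.
    """
    states = list(states)
    block = {q: int(q in final) for q in states}  # accepting vs. rejecting

    def separated(p, q):
        # p and q are distinguishable w.r.t. the current partition if some
        # one-step context f(r_1, .., _, .., r_k) sends them into
        # different blocks.
        for f, arity in symbols.items():
            for i in range(arity):
                for rest in product(states, repeat=arity - 1):
                    args_p = rest[:i] + (p,) + rest[i:]
                    args_q = rest[:i] + (q,) + rest[i:]
                    if block[delta[(f, args_p)]] != block[delta[(f, args_q)]]:
                        return True
        return False

    while True:
        # Split every block into maximal groups of mutually
        # indistinguishable states.
        new_block, groups, fresh = {}, {}, 0
        for q in states:
            groups.setdefault(block[q], []).append(q)
        for group in groups.values():
            reps = []  # one representative per new sub-block
            for q in group:
                for r in reps:
                    if not separated(q, r):
                        new_block[q] = new_block[r]
                        break
                else:
                    new_block[q] = fresh
                    fresh += 1
                    reps.append(q)
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block  # fixpoint: the partition is stable
        block = new_block

# Tiny example: leaf symbol a, unary symbol f; q1 and q2 turn out equivalent.
states = {"q0", "q1", "q2"}
symbols = {"a": 0, "f": 1}
delta = {("a", ()): "q0",
         ("f", ("q0",)): "q1", ("f", ("q1",)): "q2", ("f", ("q2",)): "q1"}
final = {"q1", "q2"}
print(minimise_dta(states, symbols, delta, final))
```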

  • 2.
    Björklund, Johanna
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Cleophas, Loek
    Stellenbosch University, Republic of South Africa.
    Karlsson, My
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Högpresterande beräkningscentrum norr (HPC2N). Codemill.
    An evaluation of structured language modeling for automatic speech recognition (2017). In: Journal of universal computer science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 23, no. 11, pp. 1019-1034. Article in journal (Refereed)
    Abstract [en]

    We evaluated probabilistic lexicalized tree-insertion grammars (PLTIGs) on a classification task relevant for automatic speech recognition. The baseline is a family of n-gram models tuned with Witten-Bell smoothing. The language models are trained on unannotated corpora, consisting of 10,000 to 50,000 sentences collected from the English section of Wikipedia. For the evaluation, an additional 150 random sentences were selected from the same source, and for each of these, approximately 3,200 variations were generated. Each variant sentence was obtained by replacing an arbitrary word by a similar word, chosen to be at most 2 character edits from the original. The evaluation task consisted of identifying the original sentence among the automatically constructed (and typically inferior) alternatives. In the experiments, the n-gram models outperformed the PLTIG model on the smaller data set, but as the size of data grew, the PLTIG model gave comparable results. While PLTIGs are more demanding to train, they have the advantage that they assign a parse structure to their input sentences. This is valuable for continued algorithmic processing, for example, for summarization or sentiment analysis.
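    The classification task itself is easy to sketch: train an interpolated n-gram model with Witten-Bell smoothing and, within each group of candidates, pick the sentence the model scores highest. The bigram model below is a minimal, generic illustration of such a baseline, assuming pre-tokenised input; it is not the authors' implementation, and the corpus and variant generation from the experiments are not reproduced.

```python
import math
from collections import Counter, defaultdict

class WittenBellBigram:
    """A bigram language model with Witten-Bell interpolation, backing
    off to a smoothed unigram and finally to a uniform distribution.
    A generic sketch of the kind of n-gram baseline described above."""

    def __init__(self, sentences):
        self.unigrams = Counter()
        self.bigrams = Counter()
        self.followers = defaultdict(set)   # distinct types seen after a word
        for sent in sentences:
            tokens = ["<s>"] + list(sent) + ["</s>"]
            self.unigrams.update(tokens)
            for a, b in zip(tokens, tokens[1:]):
                self.bigrams[(a, b)] += 1
                self.followers[a].add(b)
        self.total = sum(self.unigrams.values())
        self.vocab_size = len(self.unigrams) + 1   # +1 for unseen words

    def p_unigram(self, w):
        # Interpolate the maximum-likelihood unigram with a uniform
        # distribution, using the Witten-Bell weight total/(total + types).
        types = len(self.unigrams)
        return (self.unigrams[w] + types / self.vocab_size) / (self.total + types)

    def p_bigram(self, w, prev):
        # Witten-Bell: P(w|prev) = (c(prev,w) + T(prev)*P(w)) / (c(prev) + T(prev)),
        # where T(prev) is the number of distinct types observed after prev.
        c_prev = self.unigrams[prev]
        t_prev = len(self.followers[prev])
        if c_prev + t_prev == 0:               # prev never seen in training
            return self.p_unigram(w)
        return (self.bigrams[(prev, w)] + t_prev * self.p_unigram(w)) / (c_prev + t_prev)

    def log_score(self, sent):
        tokens = ["<s>"] + list(sent) + ["</s>"]
        return sum(math.log(self.p_bigram(b, a)) for a, b in zip(tokens, tokens[1:]))

def pick_original(model, candidates):
    """The classification task: return the candidate token sequence the
    model considers most probable."""
    return max(candidates, key=model.log_score)

# Toy usage; the in-corpus sentence should outscore the edited variant.
lm = WittenBellBigram([["the", "cat", "sat"], ["the", "dog", "sat"]])
print(pick_original(lm, [["the", "cab", "sat"], ["the", "cat", "sat"]]))
```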

  • 3.
    Hegner, Stephen
    Umeå universitet, Teknisk-naturvetenskapliga fakulteten, Institutionen för datavetenskap.
    Internal representation of database views (2010). In: Journal of universal computer science (Online), ISSN 0948-695X, E-ISSN 0948-6968, Vol. 16, no. 20, pp. 2956-2985. Article in journal (Refereed)
    Abstract [en]

    Although a database view embodies partial information about the state of the main schema, the state of the view schema is a quotient (and not a subset) of the state of the main schema. It is the information content of the view state, the set of sentences which are true for that state, and not the state itself which is a subset of the information content of the state of the main schema. There are thus two dual approaches to modelling this partiality, one based upon structures, with a consequent quotient relationship, and another based upon logical theories, with a consequent subset relationship. In this work, a representation for database views is developed which combines these two approaches. The state-based representation is expanded so that the information content embodied in a wide class of views, including those defined by SPJ queries, is fully representable, thus permitting the view state to be modelled internally as a subset of the main database state. The utility of this framework is demonstrated with a simple solution to the uniqueness problem for view updates via constant complement.
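    A small, hypothetical example (not taken from the paper) makes the quotient/subset distinction concrete: under a projection view, two distinct main-schema states collapse to the same view state, while every sentence true of the view state is also true of the main state.

```python
# A toy illustration of the quotient/subset duality: two distinct states of
# the main schema R(A, B) map to the same state of the projection view
# V = pi_A(R), so view states are a quotient of main states, while every
# sentence true of the view state -- e.g. "('alice',) is in pi_A(R)" -- is
# also true of the main state, so the view's information content is a
# subset of the main schema's.

def project_a(r):
    """The view mapping: project a relation over (A, B) onto attribute A."""
    return {(a,) for (a, b) in r}

state1 = {("alice", 1), ("bob", 2)}
state2 = {("alice", 7), ("bob", 9)}            # differs from state1 only on B

assert state1 != state2                        # distinct main-schema states ...
assert project_a(state1) == project_a(state2)  # ... identical view state
```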
