Publications (10 of 112)
Jäger, G. & Drewes, F. (2024). Optimal strategies for the static black-peg AB game with two and three pegs. Discrete Mathematics, Algorithms and Applications (DMAA), 16(4), Article ID 2350049.
2024 (English). In: Discrete Mathematics, Algorithms and Applications (DMAA), ISSN 1793-8309, E-ISSN 1793-8317, Vol. 16, no. 4, article id 2350049. Journal article (Refereed). Published
Abstract [en]

The AB Game is a game similar to the popular game Mastermind. We study a version of this game called Static Black-Peg AB Game. It is played by two players, the codemaker and the codebreaker. The codemaker creates a so-called secret by placing a color from a set of c colors on each of p ≤ c pegs, subject to the condition that every color is used at most once. The codebreaker tries to determine the secret by asking questions, where all questions are given at once and each question is a possible secret. As an answer the codemaker reveals the number of correctly placed colors for each of the questions. After that, the codebreaker only has one more try to determine the secret and thus to win the game. 

For given p and c, our goal is to find the smallest number k of questions the codebreaker needs to win, regardless of the secret, and the corresponding list of questions, called a (k + 1)-strategy. We present a (⌈4c/3⌉ − 1)-strategy for p = 2 for all c ≥ 2, and a ⌊(3c − 1)/2⌋-strategy for p = 3 for all c ≥ 4 and show the optimality of both strategies, i.e., we prove that no (k + 1)-strategy for a smaller k exists. 
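Claims like these can be checked exhaustively for small instances: a static question list wins precisely when the answer vectors distinguish all secrets, so the codebreaker's final guess is forced. The following brute-force verifier is our own illustrative sketch (names and encoding are not from the paper):

```python
from itertools import permutations

def black_pegs(question, secret):
    """Number of positions where question and secret place the same color."""
    return sum(q == s for q, s in zip(question, secret))

def is_winning_strategy(questions, p, c):
    """Check whether a static list of questions determines every secret.

    Secrets are injective placements of p colors from {0, ..., c-1};
    the strategy wins iff no two secrets yield the same answer vector.
    """
    seen = set()
    for secret in permutations(range(c), p):
        answers = tuple(black_pegs(q, secret) for q in questions)
        if answers in seen:
            return False  # two secrets are indistinguishable
        seen.add(answers)
    return True
```

For p = 2 and c = 3, for instance, the two questions (0, 1) and (1, 2) separate all six secrets, while no single question can (one question has only three possible answers), consistent with the ⌈4c/3⌉ − 1 bound on the total number of attempts.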

Place, publisher, year, edition, pages
World Scientific, 2024
Keywords
Game theory, mastermind, AB game, optimal strategy
National subject category
Discrete Mathematics
Research subject
Mathematics
Identifiers
urn:nbn:se:umu:diva-210346 (URN) · 10.1142/s1793830923500490 (DOI) · 001034748600002 () · 2-s2.0-85165934499 (Scopus ID)
Research funder
Kempestiftelserna, JCK-2022.1
Available from: 2023-06-20 Created: 2023-06-20 Last updated: 2024-06-26. Bibliographically reviewed
Hatefi, A., Vu, X.-S., Bhuyan, M. H. & Drewes, F. (2023). ADCluster: Adaptive Deep Clustering for unsupervised learning from unlabeled documents. In: Mourad Abbas; Abed Alhakim Freihat (Ed.), Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023). Paper presented at 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023), Online, December 16-17, 2023 (pp. 68-77). Association for Computational Linguistics
2023 (English). In: Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023) / [ed] Mourad Abbas; Abed Alhakim Freihat, Association for Computational Linguistics, 2023, pp. 68-77. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce ADCluster, a deep document clustering approach based on language models that is trained to adapt to the clustering task. This adaptability is achieved through an iterative process where K-Means clustering is applied to the dataset, followed by iteratively training a deep classifier with generated pseudo-labels – an approach referred to as inner adaptation. The model is also able to adapt to changes in the data as new documents are added to the document collection. The latter type of adaptation, outer adaptation, is obtained by resuming the inner adaptation when a new chunk of documents has arrived. We explore two outer adaptation strategies, namely accumulative adaptation (training is resumed on the accumulated set of all documents) and non-accumulative adaptation (training is resumed using only the new chunk of data). We show that ADCluster outperforms established document clustering techniques on medium and long-text documents by a large margin. Additionally, our approach outperforms well-established baseline methods under both the accumulative and non-accumulative outer adaptation scenarios.
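As a sketch of the two adaptation loops, the fragment below replaces the language model and deep classifier with plain Lloyd's K-Means and a nearest-centroid stand-in; all function names and simplifications are ours, not the authors' implementation:

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    return tuple(sum(vals) / len(points) for vals in zip(*points))

def kmeans(X, k, iters=25):
    # deterministic farthest-point initialization, then Lloyd iterations
    centroids = [X[0]]
    while len(centroids) < k:
        centroids.append(max(X, key=lambda p: min(dist2(p, c) for c in centroids)))
    labels = [0] * len(X)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centroids[j])) for p in X]
        for j in range(k):
            members = [p for p, lab in zip(X, labels) if lab == j]
            if members:
                centroids[j] = mean(members)
    return labels

class NearestCentroidModel:
    """Stand-in for ADCluster's deep classifier: (re)fitted on
    pseudo-labels, it predicts the label of the nearest class centroid."""
    def fit(self, X, y):
        self.classes = {j: mean([p for p, lab in zip(X, y) if lab == j])
                        for j in set(y)}
        return self
    def predict(self, X):
        return [min(self.classes, key=lambda j: dist2(p, self.classes[j]))
                for p in X]

def inner_adaptation(model, X, k, rounds=3):
    # alternate K-Means pseudo-labelling with re-fitting the classifier
    for _ in range(rounds):
        model.fit(X, kmeans(X, k))
    return model

def outer_adaptation(model, chunks, k, accumulative=True):
    # resume inner adaptation whenever a new chunk of documents arrives
    seen = []
    for chunk in chunks:
        seen.extend(chunk)
        inner_adaptation(model, seen if accumulative else chunk, k)
    return model
```

The `accumulative` flag corresponds to the two outer-adaptation strategies of the paper: resuming training on all documents seen so far versus only on the newest chunk.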

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2023
Keywords
deep clustering, adaptive, deep learning, unsupervised, data stream
National subject category
Computer Science
Research subject
Computer Science; Computational Linguistics
Identifiers
urn:nbn:se:umu:diva-220260 (URN)
Conference
6th International Conference on Natural Language and Speech Processing (ICNLSP 2023), Online, December 16-17, 2023.
Available from: 2024-01-31 Created: 2024-01-31 Last updated: 2024-07-02. Bibliographically reviewed
Eklund, A., Forsman, M. & Drewes, F. (2023). An empirical configuration study of a common document clustering pipeline. Northern European Journal of Language Technology (NEJLT), 9(1)
2023 (English). In: Northern European Journal of Language Technology (NEJLT), ISSN 2000-1533, Vol. 9, no. 1. Journal article (Refereed). Published
Abstract [en]

Document clustering is frequently used in applications of natural language processing, e.g. to classify news articles or create topic models. In this paper, we study document clustering with the common clustering pipeline that includes vectorization with BERT or Doc2Vec, dimension reduction with PCA or UMAP, and clustering with K-Means or HDBSCAN. We discuss the interactions of the different components in the pipeline, parameter settings, and how to determine an appropriate number of dimensions. The results suggest that BERT embeddings combined with UMAP dimension reduction to no less than 15 dimensions provide a good basis for clustering, regardless of the specific clustering algorithm used. Moreover, while UMAP performed better than PCA in our experiments, tuning the UMAP settings showed little impact on the overall performance. Hence, we recommend configuring UMAP so as to optimize its time efficiency. According to our topic model evaluation, the combination of BERT and UMAP, also used in BERTopic, performs best. A topic model based on this pipeline typically benefits from a large number of clusters.
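The studied pipeline can be expressed as three pluggable stages. Below is a small composition sketch of ours; the commented instantiation uses the sentence-transformers, umap-learn and hdbscan libraries, and the model name and parameter values are illustrative choices, not the paper's exact configuration:

```python
def cluster_pipeline(docs, embed, reduce_dim, cluster):
    """Run the three-stage pipeline studied in the paper:
    vectorize -> reduce dimensionality -> cluster."""
    return cluster(reduce_dim(embed(docs)))

# One possible instantiation along the lines of the paper's recommendation
# (BERT embeddings + UMAP to >= 15 dimensions, then any clustering step);
# requires sentence-transformers, umap-learn and hdbscan:
#
#   from sentence_transformers import SentenceTransformer
#   import umap, hdbscan
#   labels = cluster_pipeline(
#       docs,
#       embed=SentenceTransformer("all-MiniLM-L6-v2").encode,
#       reduce_dim=umap.UMAP(n_components=15, metric="cosine").fit_transform,
#       cluster=hdbscan.HDBSCAN(min_cluster_size=10).fit_predict,
#   )
```

Keeping the stages injectable makes it easy to swap PCA for UMAP or K-Means for HDBSCAN, which is exactly the kind of configuration comparison the paper performs.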

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2023
Keywords
document clustering, topic modeling, dimension reduction, clustering, BERT, doc2vec, UMAP, PCA, K-Means, HDBSCAN
National subject category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:umu:diva-214455 (URN) · 10.3384/nejlt.2000-1533.2023.4396 (DOI)
Research funder
Stiftelsen för strategisk forskning (SSF), ID19-0055
Available from: 2023-09-15 Created: 2023-09-15 Last updated: 2023-09-15. Bibliographically reviewed
Andersson, E., Björklund, J., Drewes, F. & Jonsson, A. (2023). Generating semantic graph corpora with graph expansion grammar. In: Nagy B., Freund R. (Ed.), 13th International Workshop on Non-Classical Models of Automata and Applications (NCMA 2023). Paper presented at 13th International Workshop on Non-Classical Models of Automata and Applications, NCMA 2023, 18-19 September, 2023, Famagusta, Cyprus (pp. 3-15). Open Publishing Association, 388
2023 (English). In: 13th International Workshop on Non-Classical Models of Automata and Applications (NCMA 2023) / [ed] Nagy B., Freund R., Open Publishing Association, 2023, Vol. 388, pp. 3-15. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce LOVELACE, a tool for creating corpora of semantic graphs. The system uses graph expansion grammar as a representational language, thus allowing users to craft a grammar that describes a corpus with desired properties. When given such a grammar as input, the system generates a set of output graphs that are well-formed according to the grammar, i.e., a graph bank. The generation process can be controlled via a number of configurable parameters that allow the user to, for example, specify a range of desired output graph sizes. Central use cases are the creation of synthetic data to augment existing corpora and use as a pedagogical tool for teaching formal language theory.

Place, publisher, year, edition, pages
Open Publishing Association, 2023
Series
Electronic Proceedings in Theoretical Computer Science, ISSN 2075-2180
Keywords
semantic representation, graph corpora, graph grammar
National subject category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-212143 (URN) · 10.4204/EPTCS.388.3 (DOI) · 2-s2.0-85173059788 (Scopus ID)
Conference
13th International Workshop on Non-Classical Models of Automata and Applications, NCMA 2023, 18-19 September, 2023, Famagusta, Cyprus
Research funder
Vetenskapsrådet, 2020-03852
Available from: 2023-07-18 Created: 2023-07-18 Last updated: 2023-10-18. Bibliographically reviewed
Björklund, J., Drewes, F. & Jonsson, A. (2023). Generation and polynomial parsing of graph languages with non-structural reentrancies. Computational linguistics - Association for Computational Linguistics (Print), 49(4), 841-882
2023 (English). In: Computational linguistics - Association for Computational Linguistics (Print), ISSN 0891-2017, E-ISSN 1530-9312, Vol. 49, no. 4, pp. 841-882. Journal article (Refereed). Published
Abstract [en]

Graph-based semantic representations are popular in natural language processing (NLP), where it is often convenient to model linguistic concepts as nodes and relations as edges between them. Several attempts have been made to find a generative device that is sufficiently powerful to describe languages of semantic graphs, while at the same time allowing efficient parsing. We contribute to this line of work by introducing graph extension grammar, a variant of the contextual hyperedge replacement grammars proposed by Hoffmann et al. Contextual hyperedge replacement can generate graphs with non-structural reentrancies, a type of node-sharing that is very common in formalisms such as abstract meaning representation, but which context-free types of graph grammars cannot model. To provide our formalism with a way to place reentrancies in a linguistically meaningful way, we endow rules with logical formulas in counting monadic second-order logic. We then present a parsing algorithm and show as our main result that this algorithm runs in polynomial time on graph languages generated by a subclass of our grammars, the so-called local graph extension grammars.

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2023
Keywords
Graph grammar, semantic graph, meaning representation, graph parsing
National subject category
Language Technology (Computational Linguistics)
Research subject
Computer Science; Computational Linguistics
Identifiers
urn:nbn:se:umu:diva-209515 (URN) · 10.1162/coli_a_00488 (DOI) · 001152974700005 () · 2-s2.0-85173016925 (Scopus ID)
Project
STING – Synthesis and analysis with Transducers and Invertible Neural Generators
Research funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Vetenskapsrådet, 2020-03852
Available from: 2023-06-10 Created: 2023-06-10 Last updated: 2024-02-19. Bibliographically reviewed
Drewes, F., Mörbitz, R. & Vogler, H. (2023). Hybrid tree automata and the yield theorem for constituent tree automata. Theoretical Computer Science, 979, Article ID 114185.
2023 (English). In: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 979, article id 114185. Journal article (Refereed). Published
Abstract [en]

We introduce an automaton model for recognizing sets of hybrid trees, the hybrid tree automaton (HTA). Special cases of hybrid trees are constituent trees and dependency trees, as they occur in natural language processing. This includes the cases of discontinuous constituent trees and non-projective dependency trees. In general, a hybrid tree is a tree over a ranked alphabet in which a symbol can additionally be equipped with a natural number, called index; in a hybrid tree, each index occurs at most once. The yield of a hybrid tree is a sequence of strings over those symbols which occur in an indexed form; the corresponding indices determine the order within these strings; the borders between two consecutive strings are determined by the gaps in the sequence of indices. As a special case of HTA, we define constituent tree automata (CTA) which recognize sets of constituent trees. We introduce the notion of CTA-inductively recognizable and we show that the set of yields of a CTA-inductively recognizable set of constituent trees is an LCFRS language, and vice versa.
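The yield definition can be made concrete in a few lines. In the sketch below (our own encoding, not the paper's notation), a hybrid tree is a (symbol, index-or-None, children) triple; the indexed symbols are ordered by index and split into strings wherever the index sequence has a gap:

```python
def hybrid_yield(tree):
    """Compute the yield of a hybrid tree: the indexed symbols ordered
    by index, split into strings at gaps in the index sequence."""
    indexed = []

    def collect(node):
        symbol, index, children = node
        if index is not None:
            indexed.append((index, symbol))
        for child in children:
            collect(child)

    collect(tree)
    indexed.sort()
    strings, current, prev = [], [], None
    for index, symbol in indexed:
        if prev is not None and index > prev + 1:
            strings.append(current)  # a gap in the indices starts a new string
            current = []
        current.append(symbol)
        prev = index
    if current:
        strings.append(current)
    return strings
```

For a tree whose indexed symbols carry indices 1, 2, 5 and 6, the gap between 2 and 5 splits the yield into two strings, matching the discontinuity found in, e.g., discontinuous constituent trees.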

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Constituent tree, Constituent tree automata, Hybrid tree, Hybrid tree automata, Linear context-free rewriting system
National subject category
Computer Science
Identifiers
urn:nbn:se:umu:diva-215370 (URN) · 10.1016/j.tcs.2023.114185 (DOI) · 2-s2.0-85173478895 (Scopus ID)
Available from: 2023-10-31 Created: 2023-10-31 Last updated: 2023-11-01. Bibliographically reviewed
Drewes, F. & Volkov, M. (2023). Preface. In: Frank Drewes; Mikhail Volkov (Ed.), Developments in language theory: 27th International Conference, DLT 2023 Umeå, Sweden, June 12–16, 2023 Proceedings. Springer Science+Business Media B.V.
2023 (English). In: Developments in language theory: 27th International Conference, DLT 2023 Umeå, Sweden, June 12–16, 2023 Proceedings / [ed] Frank Drewes; Mikhail Volkov, Springer Science+Business Media B.V., 2023. Chapter in book, part of anthology (Other academic)
Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13911
National subject category
Computer Science
Identifiers
urn:nbn:se:umu:diva-210220 (URN) · 2-s2.0-85161250270 (Scopus ID) · 978-3-031-33263-0 (ISBN) · 978-3-031-33264-7 (ISBN)
Available from: 2023-06-28 Created: 2023-06-28 Last updated: 2023-06-28. Bibliographically reviewed
Drewes, F., Hoffmann, B. & Minas, M. (2022). Acyclic contextual hyperedge replacement: decidability of acyclicity and generative power. In: Nicolas Behr; Daniel Strüber (Ed.), Graph Transformation: 15th International Conference, ICGT 2022, Held as Part of STAF 2022, Nantes, France, July 7–8, 2022, Proceedings. Paper presented at 15th International Conference on Graph Transformation, ICGT 2022, held as Part of STAF 2022, Nantes, France, July 7-8, 2022. (pp. 3-19). Cham: Springer
2022 (English). In: Graph Transformation: 15th International Conference, ICGT 2022, Held as Part of STAF 2022, Nantes, France, July 7–8, 2022, Proceedings / [ed] Nicolas Behr; Daniel Strüber, Cham: Springer, 2022, pp. 3-19. Conference paper, Published paper (Refereed)
Abstract [en]

Graph grammars based on contextual hyperedge replacement (CHR) extend the generative power of the well-known hyperedge replacement (HR) grammars to an extent that makes them useful for practical modeling. Recent work has shown that acyclicity is a key condition for parsing CHR grammars efficiently. In this paper we show that acyclicity of CHR grammars is decidable and that the generative power of acyclic CHR grammars lies strictly between that of HR grammars and unrestricted CHR grammars.

Place, publisher, year, edition, pages
Cham: Springer, 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13349
Keywords
Graph grammar, Hyperedge replacement, Contextual hyperedge replacement, Acyclicity, Decidability, Generative power
National subject category
Computer Science
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-194089 (URN) · 10.1007/978-3-031-09843-7_1 (DOI) · 000870300100001 () · 2-s2.0-85135030138 (Scopus ID) · 978-3-031-09842-0 (ISBN) · 978-3-031-09843-7 (ISBN)
Conference
15th International Conference on Graph Transformation, ICGT 2022, held as Part of STAF 2022, Nantes, France, July 7-8, 2022.
Available from: 2022-04-24 Created: 2022-04-24 Last updated: 2023-09-05. Bibliographically reviewed
Eklund, A., Forsman, M. & Drewes, F. (2022). Dynamic topic modeling by clustering embeddings from pretrained language models: a research proposal. In: Yan Hanqi; Yang Zonghan; Sebastian Ruder; Wan Xiaojun (Ed.), Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop. Paper presented at The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, Online, November 21-24, 2022 (pp. 84-91). Association for Computational Linguistics
2022 (English). In: Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop / [ed] Yan Hanqi; Yang Zonghan; Sebastian Ruder; Wan Xiaojun, Association for Computational Linguistics, 2022, pp. 84-91. Conference paper, Published paper (Refereed)
Abstract [en]

A new trend in topic modeling research is to do Neural Topic Modeling by Clustering document Embeddings (NTM-CE) created with a pretrained language model. Studies have evaluated static NTM-CE models and found them performing comparably to, or even better than other topic models. An important extension of static topic modeling is making the models dynamic, allowing the study of topic evolution over time, as well as detecting emerging and disappearing topics. In this research proposal, we present two research questions to understand dynamic topic modeling with NTM-CE theoretically and practically. To answer these, we propose four phases with the aim of establishing evaluation methods for dynamic topic modeling, finding NTM-CE-specific properties, and creating a framework for dynamic NTM-CE. For evaluation, we propose to use both quantitative measurements of coherence and human evaluation supported by our recently developed tool.

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2022
Keywords
topic modeling, dynamic topic modeling, topic modeling evaluation, research proposal, pretrained language model
National subject category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:umu:diva-202486 (URN)
Conference
The 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, Online, November 21-24, 2022
Research funder
Stiftelsen för strategisk forskning (SSF), ID190055
Available from: 2023-01-11 Created: 2023-01-11 Last updated: 2023-01-11. Bibliographically reviewed
Drewes, F., Mörbitz, R. & Vogler, H. (2022). Hybrid Tree Automata and the Yield Theorem for Constituent Tree Automata. In: Caron P.; Mignot L. (Ed.), Implementation and Application of Automata (CIAA 2022): 26th International Conference, CIAA 2022, Rouen, France, June 28 – July 1, 2022, Proceedings. Paper presented at CIAA 2022: 26th International Conference on Implementation and Application of Automata, Rouen, France, June 28 - July 1, 2022 (pp. 93-105). Springer Science+Business Media B.V.
2022 (English). In: Implementation and Application of Automata (CIAA 2022): 26th International Conference, CIAA 2022, Rouen, France, June 28 – July 1, 2022, Proceedings / [ed] Caron P.; Mignot L., Springer Science+Business Media B.V., 2022, pp. 93-105. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce an automaton model for recognizing sets of hybrid trees, the hybrid tree automaton (HTA). Special cases of hybrid trees are constituent trees and dependency trees, as they occur in natural language processing. This includes the cases of discontinuous constituent trees and non-projective dependency trees. In general, a hybrid tree is a tree over a ranked alphabet in which symbols can additionally be equipped with an index, i.e., a natural number which indicates the position of that symbol in the yield of the hybrid tree. As a special case of HTA, we define constituent tree automata (CTA) which recognize sets of constituent trees. We show that the set of yields of a CTA-recognizable set of constituent trees is an LCFRS language, and vice versa.

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2022
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 13266
Keywords
tree language, constituent tree, dependency tree, hybrid tree
National subject category
Computer Science
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-194090 (URN) · 10.1007/978-3-031-07469-1_7 (DOI) · 000876366600007 () · 2-s2.0-85131956678 (Scopus ID) · 978-3-031-07468-4 (ISBN) · 978-3-031-07469-1 (ISBN)
Conference
CIAA 2022: 26th International Conference on Implementation and Application of Automata, Rouen, France, June 28 - July 1, 2022
Available from: 2022-04-24 Created: 2022-04-24 Last updated: 2022-12-30. Bibliographically reviewed
Project
Tree automata for computerized language technology [2008-06074_VR]; Umeå University
Identifiers
ORCID iD: orcid.org/0000-0001-7349-7693
