Publications (10 of 116)
Eklund, A., Forsman, M. & Drewes, F. (2025). Comparing human-perceived cluster characteristics through the lens of CIPHE: measuring coherence beyond keywords. Journal of Data Mining and Digital Humanities, NLP4DH, Article ID 32.
Comparing human-perceived cluster characteristics through the lens of CIPHE: measuring coherence beyond keywords
2025 (English). In: Journal of Data Mining and Digital Humanities, E-ISSN 2416-5999, Vol. NLP4DH, article id 32. Article in journal (Refereed), Published.
Abstract [en]

A frequent problem in document clustering and topic modeling is the lack of ground truth. Models are typically intended to reflect some aspect of how human readers view texts (the general theme, sentiment, emotional response, etc.), but it can be difficult to assess whether they actually do. The only real ground truth is human judgement. To enable researchers and practitioners to collect such judgement in a cost-efficient, standardized way, we have developed the crowdsourcing solution CIPHE -- Cluster Interpretation and Precision from Human Exploration. CIPHE is an adaptable framework which systematically gathers and evaluates data on the human perception of a set of document clusters, where participants read sample texts from the cluster. In this article, we use CIPHE to study the limitations that keyword-based methods pose in topic modeling coherence evaluation. Keyword methods, including word intrusion, are compared with the outcome of the more thorough CIPHE on scoring and characterizing clusters. The results show how the abstraction of keywords skews the cluster interpretation for almost half of the compared instances, meaning that many important cluster characteristics are missed. Further, we present a case study where CIPHE is used to (a) provide insights into the UK news domain and (b) find out how the evaluated clustering model should be tuned to better suit the intended application. The experiments provide evidence that CIPHE characterizes clusters in a predictable manner and has the potential to be a valuable framework for using human evaluation in the pursuit of nuanced research aims.
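As a point of reference for the keyword-based baselines discussed above, here is a minimal sketch of a word-intrusion style evaluation in Python. It is an illustration only: the function names, the response format, and the toy keyword lists are hypothetical and do not reproduce CIPHE or the exact protocol used in the article.

```python
import random

def make_intrusion_instance(topic_words, other_topic_words, n_words=5, seed=None):
    """Build one word-intrusion question: top words from one topic
    plus a single 'intruder' drawn from another topic."""
    rng = random.Random(seed)
    shown = topic_words[:n_words]
    intruder = rng.choice([w for w in other_topic_words if w not in shown])
    options = shown + [intruder]
    rng.shuffle(options)
    return options, intruder

def intrusion_detection_rate(responses):
    """Fraction of participant answers that picked the true intruder.
    `responses` is a list of (chosen_word, true_intruder) pairs."""
    correct = sum(1 for chosen, truth in responses if chosen == truth)
    return correct / len(responses) if responses else 0.0

# Toy usage with two hypothetical topic keyword lists.
sports = ["match", "goal", "league", "coach", "season", "striker"]
finance = ["market", "shares", "profit", "investor", "quarterly"]
options, intruder = make_intrusion_instance(sports, finance, seed=0)
responses = [(intruder, intruder), ("match", intruder)]
print(options, "->", intrusion_detection_rate(responses))  # detection rate 0.5
```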

Place, publisher, year, edition, pages
Centre pour la Communication Scientifique Directe (CCSD), 2025
Keywords
document clustering, topic modeling, topic modeling evaluation, news clustering, topic coherence, human evaluation methods, crowdsourced cluster validation, BERTopic, CIPHE
National Category
Natural Language Processing
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-236229 (URN); 10.46298/jdmdh.15044 (DOI)
Note

The code for the CIPHE platform is available at https://github.com/antoneklund/CIPHE/

Available from: 2025-03-09 Created: 2025-03-09 Last updated: 2025-03-11. Bibliographically approved
Eklund, A., Forsman, M. & Drewes, F. (2024). CIPHE: A Framework for Document Cluster Interpretation and Precision from Human Exploration. In: Mika Hämäläinen; Emily Öhman; So Miyagawa; Khalid Alnajjar; Yuri Bizzoni (Ed.), Proceedings of the 4th international conference on natural language processing for digital humanities. Paper presented at 4th International Conference on Natural Language Processing for Digital Humanities, Miami, USA, November 15-16, 2024 (pp. 536-548). Association for Computational Linguistics
CIPHE: A Framework for Document Cluster Interpretation and Precision from Human Exploration
2024 (English). In: Proceedings of the 4th international conference on natural language processing for digital humanities / [ed] Mika Hämäläinen; Emily Öhman; So Miyagawa; Khalid Alnajjar; Yuri Bizzoni, Association for Computational Linguistics, 2024, p. 536-548. Conference paper, Published paper (Refereed).
Abstract [en]

Document clustering models serve unique application purposes, which turns model quality into a property that depends on the needs of the individual investigator. We propose a framework, Cluster Interpretation and Precision from Human Exploration (CIPHE), for collecting and quantifying human interpretations of cluster samples. CIPHE asks survey participants to explore actual document texts from cluster samples and records their perceptions. It also includes a novel inclusion task that is used to calculate the cluster precision in an indirect manner. A case study on news clusters shows that CIPHE reveals which clusters have multiple interpretation angles, aiding the investigator in their exploration.
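The inclusion task lends itself to a compact illustration. Below is a minimal sketch, under the assumption that each participant marks, for every sampled document, whether it fits their perceived cluster theme; the data layout and names are hypothetical, not CIPHE's actual interface.

```python
def indirect_cluster_precision(judgements):
    """Estimate cluster precision from an inclusion-style task:
    `judgements[d]` is a list of booleans, one per participant,
    saying whether document d fits the participant's perceived theme.
    Precision is the mean inclusion rate over sampled documents."""
    per_doc = [sum(votes) / len(votes) for votes in judgements.values()]
    return sum(per_doc) / len(per_doc)

# Toy example: three sampled documents, three participants each.
judgements = {
    "doc1": [True, True, True],
    "doc2": [True, False, True],
    "doc3": [False, False, True],
}
print(round(indirect_cluster_precision(judgements), 2))  # 0.67
```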

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2024
Keywords
document clustering, topic modeling, clustering, human evaluation, CIPHE, news articles
National Category
Natural Language Processing
Research subject
computational linguistics
Identifiers
urn:nbn:se:umu:diva-231697 (URN); 979-8-89176-181-0 (ISBN)
Conference
4th International Conference on Natural Language Processing for Digital Humanities, Miami, USA, November 15-16, 2024
Available from: 2024-11-11 Created: 2024-11-11 Last updated: 2025-03-10. Bibliographically approved
Drewes, F. & Stade, Y. (2024). On the power of local graph expansion grammars with and without additional restrictions. Theoretical Computer Science, 1015, Article ID 114763.
On the power of local graph expansion grammars with and without additional restrictions
2024 (English). In: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 1015, article id 114763. Article in journal (Refereed), Published.
Abstract [en]

We study graph expansion grammars, a type of graph grammar that has recently been introduced with motivations in natural language processing. Graph expansion generalizes the well-known hyperedge replacement. In contrast to the latter, the former is able to generate graph languages of unbounded treewidth, like the set of all graphs. In an earlier paper, the complexity of the membership problem of the generated languages was studied, the main result being a polynomial parsing algorithm for local DAG expansion grammars, a subclass of graph expansion grammars that generates directed acyclic graphs. Here, we study the generative power of local graph expansion grammars. While unrestricted local graph expansion grammars are able to simulate Turing machines, we identify natural restrictions that give rise to a pumping lemma and ensure that the generated languages have regular path languages as well as a semi-linear Parikh image.
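For readers unfamiliar with the terminology of the last result: the Parikh image of a word records how often each alphabet symbol occurs, ignoring order, and a language's Parikh image is semi-linear if the set of these vectors is a finite union of linear sets (as holds for regular and context-free languages). A toy illustration in Python:

```python
from collections import Counter

def parikh_image(word, alphabet):
    """Parikh vector of a word: the occurrence count of each
    alphabet symbol, ignoring the order of symbols."""
    counts = Counter(word)
    return tuple(counts[a] for a in alphabet)

# 'abba' and 'baab' have the same Parikh vector (2, 2):
print(parikh_image("abba", "ab"), parikh_image("baab", "ab"))
```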

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
graph grammar, hyperedge replacement, graph expansion grammar, generative power
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-228142 (URN); 10.1016/j.tcs.2024.114763 (DOI); 001296881100001; 2-s2.0-85203023465 (Scopus ID)
Projects
WASP NEST project STING – Synthesis and analysis with Transducers and Invertible Neural Generators
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-08-01 Created: 2024-08-01 Last updated: 2024-09-16. Bibliographically approved
Jäger, G. & Drewes, F. (2024). Optimal strategies for the static black-peg AB game with two and three pegs. Discrete Mathematics, Algorithms and Applications (DMAA), 16(4), Article ID 2350049.
Optimal strategies for the static black-peg AB game with two and three pegs
2024 (English). In: Discrete Mathematics, Algorithms and Applications (DMAA), ISSN 1793-8309, E-ISSN 1793-8317, Vol. 16, no 4, article id 2350049. Article in journal (Refereed), Published.
Abstract [en]

The AB game is similar to the popular game Mastermind. We study a version of this game called the Static Black-Peg AB Game. It is played by two players, the codemaker and the codebreaker. The codemaker creates a so-called secret by placing a color from a set of c colors on each of p ≤ c pegs, subject to the condition that every color is used at most once. The codebreaker tries to determine the secret by asking questions, where all questions are given at once and each question is a possible secret. As an answer, the codemaker reveals the number of correctly placed colors for each of the questions. After that, the codebreaker has only one more try to determine the secret and thus to win the game.

For given p and c, our goal is to find the smallest number k of questions the codebreaker needs to win, regardless of the secret, and the corresponding list of questions, called a (k + 1)-strategy. We present a (⌈4c/3⌉ − 1)-strategy for p = 2 for all c ≥ 2, and a ⌊(3c − 1)/2⌋-strategy for p = 3 for all c ≥ 4 and show the optimality of both strategies, i.e., we prove that no (k + 1)-strategy for a smaller k exists. 
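The optimality claims can be sanity-checked for very small c by brute force, as in the sketch below. It adopts the reading that a (k + 1)-strategy asks k simultaneous questions plus the final guess, winning exactly when the answer vectors identify every secret uniquely; that mapping from (k + 1) to k is our assumption, and the code is a toy verifier, not the paper's constructive strategies.

```python
from itertools import combinations, permutations
from math import ceil

def black_peg_answer(question, secret):
    """Number of correctly placed colors (black pegs only)."""
    return sum(q == s for q, s in zip(question, secret))

def wins(questions, secrets):
    """A static strategy wins iff the answer vectors identify each secret."""
    seen = {tuple(black_peg_answer(q, s) for q in questions) for s in secrets}
    return len(seen) == len(secrets)

def minimal_questions(c, p):
    """Smallest k such that some k-question static strategy wins."""
    secrets = list(permutations(range(c), p))  # each color used at most once
    for k in range(1, len(secrets) + 1):
        if any(wins(qs, secrets) for qs in combinations(secrets, k)):
            return k
    return None

# For p = 2 the stated optimum is ceil(4c/3) - 1 including the final
# guess, i.e. k = ceil(4c/3) - 2 asked questions (our interpretation).
for c in (2, 3, 4):
    print(c, minimal_questions(c, 2), ceil(4 * c / 3) - 2)
```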

Place, publisher, year, edition, pages
World Scientific, 2024
Keywords
Game theory, mastermind, AB game, optimal strategy
National Category
Discrete Mathematics
Research subject
Mathematics
Identifiers
urn:nbn:se:umu:diva-210346 (URN); 10.1142/s1793830923500490 (DOI); 001034748600002; 2-s2.0-85165934499 (Scopus ID)
Funder
The Kempe Foundations, JCK-2022.1
Available from: 2023-06-20 Created: 2023-06-20 Last updated: 2024-06-26. Bibliographically approved
Hatefi, A., Vu, X.-S., Bhuyan, M. H. & Drewes, F. (2023). ADCluster: Adaptive Deep Clustering for unsupervised learning from unlabeled documents. In: Mourad Abbas; Abed Alhakim Freihat (Ed.), Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023). Paper presented at 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023), Online, December 16-17, 2023 (pp. 68-77). Association for Computational Linguistics
ADCluster: Adaptive Deep Clustering for unsupervised learning from unlabeled documents
2023 (English). In: Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023) / [ed] Mourad Abbas; Abed Alhakim Freihat, Association for Computational Linguistics, 2023, p. 68-77. Conference paper, Published paper (Refereed).
Abstract [en]

We introduce ADCluster, a deep document clustering approach based on language models that is trained to adapt to the clustering task. This adaptability is achieved through an iterative process where K-Means clustering is applied to the dataset, followed by iteratively training a deep classifier with generated pseudo-labels – an approach referred to as inner adaptation. The model is also able to adapt to changes in the data as new documents are added to the document collection. The latter type of adaptation, outer adaptation, is obtained by resuming the inner adaptation when a new chunk of documents has arrived. We explore two outer adaptation strategies, namely accumulative adaptation (training is resumed on the accumulated set of all documents) and non-accumulative adaptation (training is resumed using only the new chunk of data). We show that ADCluster outperforms established document clustering techniques on medium and long-text documents by a large margin. Additionally, our approach outperforms well-established baseline methods under both the accumulative and non-accumulative outer adaptation scenarios.
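A minimal sketch of the inner/outer adaptation structure described above, using scikit-learn stand-ins: K-Means provides pseudo-labels, a classifier is retrained on them, and training is resumed when a new chunk arrives. The real system trains a deep classifier on language-model embeddings, so every component below is a simplifying assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def inner_adaptation(embeddings, n_clusters=5, rounds=3, seed=0):
    """Alternate K-Means pseudo-labeling with classifier training.
    Toy stand-in: ADCluster trains a deep classifier, not logistic
    regression, and works on language-model embeddings."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(embeddings)
    clf = None
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
        labels = clf.predict(embeddings)  # refined pseudo-labels
    return clf, labels

def outer_adaptation(old_X, new_X, accumulative=True, n_clusters=5):
    """Resume inner adaptation when a new chunk of documents arrives:
    accumulative -> train on all documents seen so far,
    non-accumulative -> train on the new chunk only."""
    X = np.vstack([old_X, new_X]) if accumulative else new_X
    return inner_adaptation(X, n_clusters=n_clusters)
```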

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2023
Keywords
deep clustering, adaptive, deep learning, unsupervised, data stream
National Category
Computer Sciences
Research subject
Computer Science; computational linguistics
Identifiers
urn:nbn:se:umu:diva-220260 (URN)
Conference
6th International Conference on Natural Language and Speech Processing (ICNLSP 2023), Online, December 16-17, 2023.
Available from: 2024-01-31 Created: 2024-01-31 Last updated: 2024-07-02. Bibliographically approved
Eklund, A., Forsman, M. & Drewes, F. (2023). An empirical configuration study of a common document clustering pipeline. Northern European Journal of Language Technology (NEJLT), 9(1)
An empirical configuration study of a common document clustering pipeline
2023 (English). In: Northern European Journal of Language Technology (NEJLT), ISSN 2000-1533, Vol. 9, no 1. Article in journal (Refereed), Published.
Abstract [en]

Document clustering is frequently used in applications of natural language processing, e.g. to classify news articles or create topic models. In this paper, we study document clustering with the common clustering pipeline that includes vectorization with BERT or Doc2Vec, dimension reduction with PCA or UMAP, and clustering with K-Means or HDBSCAN. We discuss the interactions of the different components in the pipeline, parameter settings, and how to determine an appropriate number of dimensions. The results suggest that BERT embeddings combined with UMAP dimension reduction to no fewer than 15 dimensions provide a good basis for clustering, regardless of the specific clustering algorithm used. Moreover, while UMAP performed better than PCA in our experiments, tuning the UMAP settings showed little impact on the overall performance. Hence, we recommend configuring UMAP so as to optimize its time efficiency. According to our topic model evaluation, the combination of BERT and UMAP, also used in BERTopic, performs best. A topic model based on this pipeline typically benefits from a large number of clusters.
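A compact version of the recommended configuration, assuming the sentence-transformers, umap-learn, and hdbscan packages are installed; the specific embedding model named here is an illustrative choice, not necessarily the one used in the paper.

```python
from sentence_transformers import SentenceTransformer  # assumes sentence-transformers
import umap      # assumes umap-learn
import hdbscan   # assumes hdbscan

def cluster_documents(docs, n_dims=15, min_cluster_size=15):
    """BERT-style embeddings -> UMAP to no fewer than 15 dimensions
    -> HDBSCAN, the configuration the study found to work well."""
    embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)
    reduced = umap.UMAP(n_components=n_dims).fit_transform(embeddings)
    return hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(reduced)
```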

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2023
Keywords
document clustering, topic modeling, dimension reduction, clustering, BERT, doc2vec, UMAP, PCA, K-Means, HDBSCAN
National Category
Natural Language Processing
Identifiers
urn:nbn:se:umu:diva-214455 (URN); 10.3384/nejlt.2000-1533.2023.4396 (DOI)
Funder
Swedish Foundation for Strategic Research, ID19-0055
Available from: 2023-09-15 Created: 2023-09-15 Last updated: 2025-03-10. Bibliographically approved
Andersson, E., Björklund, J., Drewes, F. & Jonsson, A. (2023). Generating semantic graph corpora with graph expansion grammar. In: Nagy B., Freund R. (Ed.), 13th International Workshop on Non-Classical Models of Automata and Applications (NCMA 2023). Paper presented at 13th International Workshop on Non-Classical Models of Automata and Applications, NCMA 2023, 18-19 September, 2023, Famagusta, Cyprus (pp. 3-15). Open Publishing Association, 388
Generating semantic graph corpora with graph expansion grammar
2023 (English). In: 13th International Workshop on Non-Classical Models of Automata and Applications (NCMA 2023) / [ed] Nagy B., Freund R., Open Publishing Association, 2023, Vol. 388, p. 3-15. Conference paper, Published paper (Refereed).
Abstract [en]

We introduce LOVELACE, a tool for creating corpora of semantic graphs. The system uses graph expansion grammar as a representational language, thus allowing users to craft a grammar that describes a corpus with desired properties. When given such a grammar as input, the system generates a set of output graphs that are well-formed according to the grammar, i.e., a graph bank. The generation process can be controlled via a number of configurable parameters that allow the user to, for example, specify a range of desired output graph sizes. Central use cases are the creation of synthetic data to augment existing corpora and use as a pedagogical tool for teaching formal language theory.
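To give a flavor of grammar-driven corpus generation with a configurable size range, here is a toy sketch. It uses two hard-coded expansion "rules" over plain edge lists and is not LOVELACE's actual grammar format or output representation.

```python
import random

def generate_graph(min_nodes=4, max_nodes=8, seed=None):
    """Toy grammar-style generation: repeatedly expand a frontier node
    with one of two 'rules' until a sampled size target is reached."""
    rng = random.Random(seed)
    target = rng.randint(min_nodes, max_nodes)  # size-range parameter
    edges, frontier, next_id = [], [0], 1
    while next_id < target:
        src = rng.choice(frontier)
        if rng.random() < 0.5:              # rule A: attach one child
            edges.append((src, next_id))
            frontier.append(next_id)
            next_id += 1
        elif next_id + 1 < target:          # rule B: attach two children
            edges += [(src, next_id), (src, next_id + 1)]
            frontier += [next_id, next_id + 1]
            next_id += 2
    return edges

corpus = [generate_graph(seed=i) for i in range(100)]  # a small graph bank
```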

Place, publisher, year, edition, pages
Open Publishing Association, 2023
Series
Electronic Proceedings in Theoretical Computer Science, ISSN 2075-2180
Keywords
semantic representation, graph corpora, graph grammar
National Category
Natural Language Processing
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-212143 (URN); 10.4204/EPTCS.388.3 (DOI); 2-s2.0-85173059788 (Scopus ID)
Conference
13th International Workshop on Non-Classical Models of Automata and Applications, NCMA 2023, 18-19 September, 2023, Famagusta, Cyprus
Funder
Swedish Research Council, 2020-03852
Available from: 2023-07-18 Created: 2023-07-18 Last updated: 2025-02-07. Bibliographically approved
Björklund, J., Drewes, F. & Jonsson, A. (2023). Generation and polynomial parsing of graph languages with non-structural reentrancies. Computational linguistics - Association for Computational Linguistics (Print), 49(4), 841-882
Generation and polynomial parsing of graph languages with non-structural reentrancies
2023 (English). In: Computational linguistics - Association for Computational Linguistics (Print), ISSN 0891-2017, E-ISSN 1530-9312, Vol. 49, no 4, p. 841-882. Article in journal (Refereed), Published.
Abstract [en]

Graph-based semantic representations are popular in natural language processing (NLP), where it is often convenient to model linguistic concepts as nodes and relations as edges between them. Several attempts have been made to find a generative device that is sufficiently powerful to describe languages of semantic graphs, while at the same time allowing efficient parsing. We contribute to this line of work by introducing graph extension grammar, a variant of the contextual hyperedge replacement grammars proposed by Hoffmann et al. Contextual hyperedge replacement can generate graphs with non-structural reentrancies, a type of node-sharing that is very common in formalisms such as abstract meaning representation, but which context-free types of graph grammars cannot model. To provide our formalism with a way to place reentrancies in a linguistically meaningful way, we endow rules with logical formulas in counting monadic second-order logic. We then present a parsing algorithm and show as our main result that this algorithm runs in polynomial time on graph languages generated by a subclass of our grammars, the so-called local graph extension grammars.

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2023
Keywords
Graph grammar, semantic graph, meaning representation, graph parsing
National Category
Natural Language Processing
Research subject
Computer Science; computational linguistics
Identifiers
urn:nbn:se:umu:diva-209515 (URN); 10.1162/coli_a_00488 (DOI); 001152974700005; 2-s2.0-85173016925 (Scopus ID)
Projects
STING – Synthesis and analysis with Transducers and Invertible Neural Generators
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP); Swedish Research Council, 2020-03852
Available from: 2023-06-10 Created: 2023-06-10 Last updated: 2025-02-07. Bibliographically approved
Drewes, F., Mörbitz, R. & Vogler, H. (2023). Hybrid tree automata and the yield theorem for constituent tree automata. Theoretical Computer Science, 979, Article ID 114185.
Hybrid tree automata and the yield theorem for constituent tree automata
2023 (English). In: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 979, article id 114185. Article in journal (Refereed), Published.
Abstract [en]

We introduce an automaton model for recognizing sets of hybrid trees, the hybrid tree automaton (HTA). Special cases of hybrid trees are constituent trees and dependency trees, as they occur in natural language processing. This includes the cases of discontinuous constituent trees and non-projective dependency trees. In general, a hybrid tree is a tree over a ranked alphabet in which a symbol can additionally be equipped with a natural number, called index; in a hybrid tree, each index occurs at most once. The yield of a hybrid tree is a sequence of strings over those symbols which occur in an indexed form; the corresponding indices determine the order within these strings; the borders between two consecutive strings are determined by the gaps in the sequence of indices. As a special case of HTA, we define constituent tree automata (CTA), which recognize sets of constituent trees. We introduce the notion of CTA-inductively recognizable and we show that the set of yields of a CTA-inductively recognizable set of constituent trees is an LCFRS language, and vice versa.
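The yield definition above is concrete enough to state directly in code. A small sketch, assuming the indexed symbols of a hybrid tree have been collected as (symbol, index) pairs:

```python
def hybrid_yield(indexed_symbols):
    """Yield of a hybrid tree, following the abstract's definition:
    indexed symbols are ordered by their (unique) indices, and a gap
    in the index sequence starts a new string.
    `indexed_symbols` is a list of (symbol, index) pairs, e.g. the
    indexed nodes collected from the tree in any traversal order."""
    ordered = sorted(indexed_symbols, key=lambda si: si[1])
    strings, prev = [], None
    for symbol, index in ordered:
        if prev is None or index > prev + 1:
            strings.append([])       # gap in the indices: new string
        strings[-1].append(symbol)
        prev = index
    return ["".join(s) for s in strings]

# Indices 1, 2 then a gap then 4, 5 give two strings:
print(hybrid_yield([("a", 1), ("b", 2), ("d", 5), ("c", 4)]))
# ['ab', 'cd']
```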

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Constituent tree, Constituent tree automata, Hybrid tree, Hybrid tree automata, Linear context-free rewriting system
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-215370 (URN); 10.1016/j.tcs.2023.114185 (DOI); 2-s2.0-85173478895 (Scopus ID)
Available from: 2023-10-31 Created: 2023-10-31 Last updated: 2023-11-01. Bibliographically approved
Drewes, F. & Volkov, M. (2023). Preface. In: Frank Drewes; Mikhail Volkov (Ed.), Developments in language theory: 27th International Conference, DLT 2023, Umeå, Sweden, June 12–16, 2023, Proceedings. Springer Science+Business Media B.V.
Preface
2023 (English). In: Developments in language theory: 27th International Conference, DLT 2023, Umeå, Sweden, June 12–16, 2023, Proceedings / [ed] Frank Drewes; Mikhail Volkov, Springer Science+Business Media B.V., 2023. Chapter in book (Other academic).
Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2023
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349; 13911
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-210220 (URN); 2-s2.0-85161250270 (Scopus ID); 978-3-031-33263-0 (ISBN); 978-3-031-33264-7 (ISBN)
Available from: 2023-06-28 Created: 2023-06-28 Last updated: 2023-06-28. Bibliographically approved
Projects
Tree Automata in Computational Language Technology [2008-06074_VR]; Umeå University
Identifiers
ORCID iD: orcid.org/0000-0001-7349-7693
