Publications (10 of 119)
Eklund, A., Forsman, M. & Drewes, F. (2025). Comparing human-perceived cluster characteristics through the lens of CIPHE: measuring coherence beyond keywords. Journal of Data Mining and Digital Humanities, NLP4DH, Article ID 32.
2025 (English). In: Journal of Data Mining and Digital Humanities, E-ISSN 2416-5999, Vol. NLP4DH, article id 32. Article in journal (Refereed), Published
Abstract [en]

A frequent problem in document clustering and topic modeling is the lack of ground truth. Models are typically intended to reflect some aspect of how human readers view texts (the general theme, sentiment, emotional response, etc.), but it can be difficult to assess whether they actually do. The only real ground truth is human judgement. To enable researchers and practitioners to collect such judgement in a cost-efficient, standardized way, we have developed the crowdsourcing solution CIPHE -- Cluster Interpretation and Precision from Human Exploration. CIPHE is an adaptable framework which systematically gathers and evaluates data on the human perception of a set of document clusters where participants read sample texts from each cluster. In this article, we use CIPHE to study the limitations that keyword-based methods pose in topic modeling coherence evaluation. Keyword methods, including word intrusion, are compared with the outcome of the more thorough CIPHE on scoring and characterizing clusters. The results show how the abstraction of keywords skews the cluster interpretation for almost half of the compared instances, meaning that many important cluster characteristics are missed. Further, we present a case study where CIPHE is used to (a) provide insights into the UK news domain and (b) find out how the evaluated clustering model should be tuned to better suit the intended application. The experiments provide evidence that CIPHE characterizes clusters in a predictable manner and has the potential to be a valuable framework for using human evaluation in the pursuit of nuanced research aims.
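
The keyword-based evaluation that CIPHE is contrasted with includes word intrusion, where annotators must spot a keyword that does not belong to a topic. The sketch below shows how such an intrusion item is typically assembled; the function and keyword lists are illustrative and not taken from the CIPHE codebase (linked in the note below).

```python
import random

def word_intrusion_item(topic_keywords, other_topic_keywords, n_shown=5, seed=None):
    """Build one word-intrusion question: the top keywords of a topic plus one
    'intruder' keyword drawn from a different topic. An annotator who can spot
    the intruder is taken as evidence that the topic's keywords are coherent."""
    rng = random.Random(seed)
    shown = list(topic_keywords[:n_shown])
    # Pick an intruder that is prominent in another topic but absent from this one.
    candidates = [w for w in other_topic_keywords[:10] if w not in topic_keywords]
    intruder = rng.choice(candidates)
    item = shown + [intruder]
    rng.shuffle(item)
    return item, intruder

# Example with hypothetical keyword lists:
item, intruder = word_intrusion_item(
    ["election", "vote", "party", "poll", "candidate", "ballot"],
    ["striker", "goal", "league", "match", "coach"],
    seed=0,
)
print(item, "-> intruder:", intruder)
```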

Place, publisher, year, edition, pages
Centre pour la Communication Scientifique Directe (CCSD), 2025
Keywords
document clustering, topic modeling, topic modeling evaluation, news clustering, topic coherence, human evaluation methods, crowdsourced cluster validation, BERTopic, CIPHE
National Category
Natural Language Processing
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-236229 (URN); 10.46298/jdmdh.15044 (DOI)
Note

The code for the CIPHE platform is available at https://github.com/antoneklund/CIPHE/

Available from: 2025-03-09 Created: 2025-03-09 Last updated: 2025-03-11 Bibliographically approved
Drewes, F., Hoffmann, B. & Minas, M. (2025). Finite automata for efficient graph recognition. In: Jörg Endrullis; Dominik Grzelak; Tobias Heindel; Jens Kosiol (Ed.), EPTCS 417: proceedings of the fourteenth and fifteenth international workshop on graph computation models. Paper presented at 15th International Workshop on Graph Computation Models, Enschede, the Netherlands, July 9, 2024 (pp. 134-156). Open Publishing Association, 417
2025 (English). In: EPTCS 417: proceedings of the fourteenth and fifteenth international workshop on graph computation models / [ed] Jörg Endrullis; Dominik Grzelak; Tobias Heindel; Jens Kosiol, Open Publishing Association, 2025, Vol. 417, p. 134-156. Conference paper, Published paper (Refereed)
Abstract [en]

Engelfriet and Vereijken have shown that linear graph grammars based on hyperedge replacement generate graph languages that can be considered as interpretations of regular string languages over typed symbols. In this paper we show that finite automata can be lifted from strings to graphs within the same framework. For the efficient recognition of graphs with these automata, we make them deterministic by a modified powerset construction, and state sufficient conditions under which deterministic finite graph automata recognize graphs without the need to use backtracking.
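
The determinization step is described as a modified powerset construction. For reference, the following is a sketch of the classical subset construction for string NFAs, i.e., the construction the paper lifts and adapts to graph automata; the graph version itself is not reproduced here.

```python
from collections import deque

def determinize(states, alphabet, delta, start, finals):
    """Classical powerset (subset) construction for string NFAs.
    delta maps (state, symbol) -> set of successor states.
    Returns (dfa_states, dfa_delta, dfa_start, dfa_finals), where each
    DFA state is a frozenset of NFA states."""
    dfa_start = frozenset([start])
    dfa_states = {dfa_start}
    dfa_delta = {}
    queue = deque([dfa_start])
    while queue:
        current = queue.popleft()
        for a in alphabet:
            # The successor is the union of all NFA moves on symbol a.
            succ = frozenset(q for s in current for q in delta.get((s, a), set()))
            dfa_delta[(current, a)] = succ
            if succ not in dfa_states:
                dfa_states.add(succ)
                queue.append(succ)
    dfa_finals = {S for S in dfa_states if S & finals}
    return dfa_states, dfa_delta, dfa_start, dfa_finals
```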

Place, publisher, year, edition, pages
Open Publishing Association, 2025
Series
Electronic Proceedings in Theoretical Computer Science, ISSN 2075-2180 ; 417
Keywords
graph automaton, recognition
National Category
Formal Methods
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-236974 (URN); 10.4204/EPTCS.417.8 (DOI); 2-s2.0-105001924158 (Scopus ID)
Conference
15th International Workshop on Graph Computation Models, Enschede, the Netherlands, July 9, 2024
Projects
STING – Synthesis and analysis with Transducers and Invertible Neural Generators
Funder
Knut and Alice Wallenberg Foundation
Available from: 2025-03-26 Created: 2025-03-26 Last updated: 2025-05-07 Bibliographically approved
Drewes, F., Hoffmann, B. & Minas, M. (2025). Graph formulas and their translation to alternating graph automata. In: Jörg Endrullis; Matthias Tichy (Ed.), Graph transformation: 18th International conference, ICGT 2025, Held as Part of STAF 2025, Koblenz, Germany, June 11–12, 2025, proceedings. Paper presented at 18th International Graph Transformation Conference, ICGT 2025, Held as Part of STAF 2025, Koblenz, Germany, June 11–12, 2025 (pp. 112-132). Springer Nature
2025 (English). In: Graph transformation: 18th International conference, ICGT 2025, Held as Part of STAF 2025, Koblenz, Germany, June 11–12, 2025, proceedings / [ed] Jörg Endrullis; Matthias Tichy, Springer Nature, 2025, p. 112-132. Conference paper, Published paper (Other academic)
Abstract [en]

In many application areas, it is necessary to specify and check properties of graphs. Such properties may be complex, placing structural requirements on graph regions of unbounded size. In this paper, we show that alternating graph automata can check such graph properties, e.g., whether a given input graph is a tree, or whether it contains a Hamiltonian cycle or not. In fact, we show that these automata can accept PSPACE-complete graph languages and that, conversely, their uniform membership problem is contained in PSPACE.
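
As a point of comparison for the first example property, the sketch below checks directly whether an undirected simple graph is a tree (connected with |E| = |V| - 1). It is independent of the automaton construction in the paper and only meant to make the property concrete.

```python
def is_tree(vertices, edges):
    """Check whether an undirected simple graph is a tree:
    non-empty, connected, and |E| = |V| - 1."""
    vertices = set(vertices)
    if not vertices:
        return False  # by the usual convention, a tree has at least one node
    if len(edges) != len(vertices) - 1:
        return False
    # Build the adjacency map and test connectivity with a depth-first search.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(vertices))
    seen = {start}
    stack = [start]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return seen == vertices
```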

Place, publisher, year, edition, pages
Springer Nature, 2025
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15720
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-236980 (URN); 10.1007/978-3-031-94706-3_6 (DOI); 2-s2.0-105009335857 (Scopus ID); 978-3-031-94705-6 (ISBN); 978-3-031-94706-3 (ISBN)
Conference
18th International Graph Transformation Conference, ICGT 2025, Held as Part of STAF 2025, Koblenz, Germany, June 11–12, 2025
Projects
Neuro-Symbolic Graph Transformation; STING – Synthesis and analysis with Transducers and Invertible Neural Generators
Funder
Knut and Alice Wallenberg Foundation; Swedish Research Council, 2024-05318
Available from: 2025-03-26 Created: 2025-03-26 Last updated: 2025-07-08 Bibliographically approved
Drewes, F., Hoffmann, B. & Minas, M. (2025). Specifying and checking graph properties with alternating graph automata. In: Jörg Endrullis; Matthias Tichy (Ed.), Graph transformation: 18th International Conference, ICGT 2025, held as part of STAF 2025, Koblenz, Germany, June 11–12, 2025, proceedings. Paper presented at 18th International Conference, ICGT 2025, held as part of STAF 2025, Koblenz, Germany, June 11–12, 2025 (pp. 93-111). Springer Nature
2025 (English). In: Graph transformation: 18th International Conference, ICGT 2025, held as part of STAF 2025, Koblenz, Germany, June 11–12, 2025, proceedings / [ed] Jörg Endrullis; Matthias Tichy, Springer Nature, 2025, p. 93-111. Conference paper, Published paper (Other academic)
Abstract [en]

In many application areas, it is necessary to specify and check properties of graphs. Such properties may be complex, placing structural requirements on graph regions of unbounded size. In this paper, we show that alternating graph automata can check such graph properties, e.g., whether a given input graph is a tree, or whether it contains a Hamiltonian cycle or not. In fact, we show that these automata can accept PSPACE-complete graph languages and that, conversely, their uniform membership problem is contained in PSPACE.
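
The second example property, the existence of a Hamiltonian cycle, is NP-complete; the brute-force sketch below only illustrates the property on tiny graphs and is unrelated to the PSPACE machinery of the paper.

```python
from itertools import permutations

def has_hamiltonian_cycle(vertices, edges):
    """Brute-force test for a Hamiltonian cycle in an undirected graph.
    Exponential in |V|; meant only to make the property concrete on tiny inputs."""
    vs = list(vertices)
    if len(vs) < 3:
        return False
    adjacent = {frozenset(e) for e in edges}
    first = vs[0]
    # Fix the first vertex so that rotations of the same cycle are not re-checked.
    for perm in permutations(vs[1:]):
        cycle = [first, *perm, first]
        if all(frozenset((cycle[i], cycle[i + 1])) in adjacent
               for i in range(len(cycle) - 1)):
            return True
    return False
```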

Place, publisher, year, edition, pages
Springer Nature, 2025
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 15720
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-236980 (URN); 10.1007/978-3-031-94706-3_5 (DOI); 978-3-031-94705-6 (ISBN); 978-3-031-94706-3 (ISBN)
Conference
18th International Conference, ICGT 2025, held as part of STAF 2025, Koblenz, Germany, June 11–12, 2025
Projects
Neuro-Symbolic Graph Transformation; STING – Synthesis and analysis with Transducers and Invertible Neural Generators
Funder
Knut and Alice Wallenberg Foundation; Swedish Research Council, 2024-05318
Available from: 2025-03-26 Created: 2025-03-26 Last updated: 2025-07-09 Bibliographically approved
Eklund, A., Forsman, M. & Drewes, F. (2024). CIPHE: A Framework for Document Cluster Interpretation and Precision from Human Exploration. In: Mika Hämäläinen; Emily Öhman; So Miyagawa; Khalid Alnajjar; Yuri Bizzoni (Ed.), Proceedings of the 4th international conference on natural language processing for digital humanities: . Paper presented at 4th International Conference on Natural Language Processing for Digital Humanities, Miami, USA, November 15-16, 2024 (pp. 536-548). Association for Computational Linguistics
2024 (English). In: Proceedings of the 4th international conference on natural language processing for digital humanities / [ed] Mika Hämäläinen; Emily Öhman; So Miyagawa; Khalid Alnajjar; Yuri Bizzoni, Association for Computational Linguistics, 2024, p. 536-548. Conference paper, Published paper (Refereed)
Abstract [en]

Document clustering models serve unique application purposes, which turns model quality into a property that depends on the needs of the individual investigator. We propose a framework, Cluster Interpretation and Precision from Human Exploration (CIPHE), for collecting and quantifying human interpretations of cluster samples. CIPHE tasks survey participants to explore actual document texts from cluster samples and records their perceptions. It also includes a novel inclusion task that is used to calculate the cluster precision in an indirect manner. A case study on news clusters shows that CIPHE reveals which clusters have multiple interpretation angles, aiding the investigator in their exploration.
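
CIPHE's data collection starts from reading samples drawn from each cluster. The following is a minimal, hypothetical sketch of that sampling step; it is not the CIPHE implementation (the released code is linked from the 2025 journal version above).

```python
import random

def draw_reading_samples(labels, documents, per_cluster=8, seed=42):
    """Group documents by cluster label and draw a fixed-size reading sample
    from each cluster for survey participants. Returns {label: [documents]}."""
    rng = random.Random(seed)
    by_cluster = {}
    for label, doc in zip(labels, documents):
        by_cluster.setdefault(label, []).append(doc)
    samples = {}
    for label, docs in by_cluster.items():
        k = min(per_cluster, len(docs))
        samples[label] = rng.sample(docs, k)
    return samples
```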

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2024
Keywords
document clustering, topic modeling, clustering, human evaluation, CIPHE, news articles
National Category
Natural Language Processing
Research subject
computational linguistics
Identifiers
urn:nbn:se:umu:diva-231697 (URN); 2-s2.0-85216576924 (Scopus ID); 979-8-89176-181-0 (ISBN)
Conference
4th International Conference on Natural Language Processing for Digital Humanities, Miami, USA, November 15-16, 2024
Available from: 2024-11-11 Created: 2024-11-11 Last updated: 2025-04-02 Bibliographically approved
Drewes, F. & Stade, Y. (2024). On the power of local graph expansion grammars with and without additional restrictions. Theoretical Computer Science, 1015, Article ID 114763.
2024 (English). In: Theoretical Computer Science, ISSN 0304-3975, E-ISSN 1879-2294, Vol. 1015, article id 114763. Article in journal (Refereed), Published
Abstract [en]

We study graph expansion grammars, a type of graph grammar that has recently been introduced with motivations in natural language processing. Graph expansion generalizes the well-known hyperedge replacement. In contrast to the latter, the former is able to generate graph languages of unbounded treewidth, like the set of all graphs. In an earlier paper, the complexity of the membership problem of the generated languages was studied, the main result being a polynomial parsing algorithm for local DAG expansion grammars, a subclass of graph expansion grammars that generates directed acyclic graphs. Here, we study the generative power of local graph expansion grammars. While they are, unrestricted, able to simulate Turing machines, we identify natural restrictions that give rise to a pumping lemma and ensure that the generated languages have regular path languages as well as a semi-linear Parikh image.
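
The closure property in the last sentence uses two standard notions, stated here for string languages; for graph languages the Parikh image counts occurrences of labels in the generated graphs. These are the textbook definitions, not specific to the paper.

```latex
% Parikh image of a language L over \Sigma = \{a_1, \dots, a_k\}:
\Psi(L) = \{\, (|w|_{a_1}, \dots, |w|_{a_k}) \mid w \in L \,\} \subseteq \mathbb{N}^k
% A set S \subseteq \mathbb{N}^k is linear if, for some constant vector c and
% period vectors p_1, \dots, p_m \in \mathbb{N}^k,
S = \{\, c + n_1 p_1 + \dots + n_m p_m \mid n_1, \dots, n_m \in \mathbb{N} \,\}
% and semi-linear if it is a finite union of linear sets.
```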

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
graph grammar, hyperedge replacement, graph expansion grammar, generative power
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-228142 (URN); 10.1016/j.tcs.2024.114763 (DOI); 001296881100001 (); 2-s2.0-85203023465 (Scopus ID)
Projects
WASP NEST project STING – Synthesis and analysis with Transducers and Invertible Neural Generators
Funder
Wallenberg AI, Autonomous Systems and Software Program (WASP)
Available from: 2024-08-01 Created: 2024-08-01 Last updated: 2024-09-16 Bibliographically approved
Jäger, G. & Drewes, F. (2024). Optimal strategies for the static black-peg AB game with two and three pegs. Discrete Mathematics, Algorithms and Applications (DMAA), 16(4), Article ID 2350049.
2024 (English). In: Discrete Mathematics, Algorithms and Applications (DMAA), ISSN 1793-8309, E-ISSN 1793-8317, Vol. 16, no 4, article id 2350049. Article in journal (Refereed), Published
Abstract [en]

The AB Game is a game similar to the popular game Mastermind. We study a version of this game called Static Black-Peg AB Game. It is played by two players, the codemaker and the codebreaker. The codemaker creates a so-called secret by placing a color from a set of c colors on each of p ≤ c pegs, subject to the condition that every color is used at most once. The codebreaker tries to determine the secret by asking questions, where all questions are given at once and each question is a possible secret. As an answer the codemaker reveals the number of correctly placed colors for each of the questions. After that, the codebreaker only has one more try to determine the secret and thus to win the game. 

For given p and c, our goal is to find the smallest number k of questions the codebreaker needs to win, regardless of the secret, and the corresponding list of questions, called a (k + 1)-strategy. We present a (⌈4c/3⌉ − 1)-strategy for p = 2 for all c ≥ 2, and a ⌊(3c − 1)/2⌋-strategy for p = 3 for all c ≥ 4 and show the optimality of both strategies, i.e., we prove that no (k + 1)-strategy for a smaller k exists. 

Place, publisher, year, edition, pages
World Scientific, 2024
Keywords
Game theory, mastermind, AB game, optimal strategy
National Category
Discrete Mathematics
Research subject
Mathematics
Identifiers
urn:nbn:se:umu:diva-210346 (URN); 10.1142/s1793830923500490 (DOI); 001034748600002 (); 2-s2.0-85165934499 (Scopus ID)
Funder
The Kempe Foundations, JCK-2022.1
Available from: 2023-06-20 Created: 2023-06-20 Last updated: 2024-06-26 Bibliographically approved
Hatefi, A., Vu, X.-S., Bhuyan, M. H. & Drewes, F. (2023). ADCluster: Adaptive Deep Clustering for unsupervised learning from unlabeled documents. In: Mourad Abbas; Abed Alhakim Freihat (Ed.), Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023): . Paper presented at 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023), Online, December 16-17, 2023. (pp. 68-77). Association for Computational Linguistics
2023 (English). In: Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023) / [ed] Mourad Abbas; Abed Alhakim Freihat, Association for Computational Linguistics, 2023, p. 68-77. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce ADCluster, a deep document clustering approach based on language models that is trained to adapt to the clustering task. This adaptability is achieved through an iterative process where K-Means clustering is applied to the dataset, followed by iteratively training a deep classifier with generated pseudo-labels – an approach referred to as inner adaptation. The model is also able to adapt to changes in the data as new documents are added to the document collection. The latter type of adaptation, outer adaptation, is obtained by resuming the inner adaptation when a new chunk of documents has arrived. We explore two outer adaptation strategies, namely accumulative adaptation (training is resumed on the accumulated set of all documents) and non-accumulative adaptation (training is resumed using only the new chunk of data). We show that ADCluster outperforms established document clustering techniques on medium and long-text documents by a large margin. Additionally, our approach outperforms well-established baseline methods under both the accumulative and non-accumulative outer adaptation scenarios.
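
The inner adaptation described above (K-Means assignments used as pseudo-labels for a classifier) can be sketched with scikit-learn stand-ins. The real system uses language-model representations and a deep classifier whose representation keeps adapting across rounds; everything below is illustrative, not the ADCluster code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def pseudo_label_round(embeddings, n_clusters=10, seed=0):
    """One round of the scheme described in the abstract: cluster document
    embeddings with K-Means and train a classifier on the resulting
    pseudo-labels. ADCluster iterates such rounds; this stand-in shows the
    data flow only."""
    X = np.asarray(embeddings)
    pseudo_labels = KMeans(n_clusters=n_clusters, random_state=seed,
                           n_init=10).fit_predict(X)
    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=seed)
    clf.fit(X, pseudo_labels)
    return clf, pseudo_labels
```

Outer adaptation then resumes this training when a new chunk of documents arrives, either on the accumulated collection or on the new chunk alone, as described in the abstract.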

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2023
Keywords
deep clustering, adaptive, deep learning, unsupervised, data stream
National Category
Computer Sciences
Research subject
Computer Science; computational linguistics
Identifiers
urn:nbn:se:umu:diva-220260 (URN)
Conference
6th International Conference on Natural Language and Speech Processing (ICNLSP 2023), Online, December 16-17, 2023.
Available from: 2024-01-31 Created: 2024-01-31 Last updated: 2024-07-02 Bibliographically approved
Eklund, A., Forsman, M. & Drewes, F. (2023). An empirical configuration study of a common document clustering pipeline. Northern European Journal of Language Technology (NEJLT), 9(1)
2023 (English). In: Northern European Journal of Language Technology (NEJLT), ISSN 2000-1533, Vol. 9, no 1. Article in journal (Refereed), Published
Abstract [en]

Document clustering is frequently used in applications of natural language processing, e.g. to classify news articles or create topic models. In this paper, we study document clustering with the common clustering pipeline that includes vectorization with BERT or Doc2Vec, dimension reduction with PCA or UMAP, and clustering with K-Means or HDBSCAN. We discuss the interactions of the different components in the pipeline, parameter settings, and how to determine an appropriate number of dimensions. The results suggest that BERT embeddings combined with UMAP dimension reduction to no less than 15 dimensions provide a good basis for clustering, regardless of the specific clustering algorithm used. Moreover, while UMAP performed better than PCA in our experiments, tuning the UMAP settings showed little impact on the overall performance. Hence, we recommend configuring UMAP so as to optimize its time efficiency. According to our topic model evaluation, the combination of BERT and UMAP, also used in BERTopic, performs best. A topic model based on this pipeline typically benefits from a large number of clusters.
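
A minimal version of the studied pipeline can be assembled from standard libraries, as sketched below. The embedding model name and the HDBSCAN parameters are placeholders; the only finding reflected here is the use of BERT-style embeddings with UMAP reduction to no fewer than 15 dimensions.

```python
from sentence_transformers import SentenceTransformer
import umap
import hdbscan

def cluster_documents(texts, n_dims=15, min_cluster_size=25):
    """Common document clustering pipeline: transformer embeddings,
    UMAP dimension reduction, then density-based clustering."""
    # 1. Vectorize the documents with a BERT-based sentence-transformer model.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(texts, show_progress_bar=False)
    # 2. Reduce dimensionality to no fewer than 15 dimensions; other UMAP
    #    settings mattered little in the study, so defaults are kept.
    reduced = umap.UMAP(n_components=n_dims, metric="cosine").fit_transform(embeddings)
    # 3. Cluster; K-Means would be an equally valid choice according to the results.
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(reduced)
    return labels
```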

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2023
Keywords
document clustering, topic modeling, dimension reduction, clustering, BERT, doc2vec, UMAP, PCA, K-Means, HDBSCAN
National Category
Natural Language Processing
Identifiers
urn:nbn:se:umu:diva-214455 (URN); 10.3384/nejlt.2000-1533.2023.4396 (DOI)
Funder
Swedish Foundation for Strategic Research, ID19-0055
Available from: 2023-09-15 Created: 2023-09-15 Last updated: 2025-03-10 Bibliographically approved
Andersson, E., Björklund, J., Drewes, F. & Jonsson, A. (2023). Generating semantic graph corpora with graph expansion grammar. In: Nagy B., Freund R. (Ed.), 13th International Workshop on Non-Classical Models of Automata and Applications (NCMA 2023): . Paper presented at 13th International Workshop on Non-Classical Models of Automata and Applications, NCMA 2023, 18-19 September, 2023, Famagusta, Cyprus (pp. 3-15). Open Publishing Association, 388
2023 (English). In: 13th International Workshop on Non-Classical Models of Automata and Applications (NCMA 2023) / [ed] Nagy B., Freund R., Open Publishing Association, 2023, Vol. 388, p. 3-15. Conference paper, Published paper (Refereed)
Abstract [en]

We introduce LOVELACE, a tool for creating corpora of semantic graphs. The system uses graph expansion grammar as a representational language, thus allowing users to craft a grammar that describes a corpus with desired properties. When given such a grammar as input, the system generates a set of output graphs that are well-formed according to the grammar, i.e., a graph bank. The generation process can be controlled via a number of configurable parameters that allow the user to, for example, specify a range of desired output graph sizes. Central use cases are the creation of synthetic data to augment existing corpora and use as a pedagogical tool for teaching formal language theory.
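
Implementing graph expansion itself is beyond a short example, but the control idea, random derivation from a user-written grammar with outputs rejected when they fall outside a requested size range, can be illustrated with a toy string grammar. All names below are hypothetical and unrelated to the LOVELACE code.

```python
import random

# Toy context-free grammar: a nonterminal maps to a list of alternative right-hand sides.
GRAMMAR = {
    "S": [["S", "S"], ["a"], ["b"]],
}

def derive(symbol, rng, max_depth=12):
    """Randomly expand a symbol; nonterminals are the keys of GRAMMAR."""
    if symbol not in GRAMMAR:
        return [symbol]
    if max_depth == 0:
        return ["a"]  # fall back to a terminal when the derivation gets too deep
    rhs = rng.choice(GRAMMAR[symbol])
    return [t for part in rhs for t in derive(part, rng, max_depth - 1)]

def generate_corpus(n, min_size=3, max_size=10, seed=0):
    """Keep generating derivations, retaining only outputs whose size lies in
    the requested range -- the same kind of size control LOVELACE exposes."""
    rng = random.Random(seed)
    corpus = []
    while len(corpus) < n:
        out = derive("S", rng)
        if min_size <= len(out) <= max_size:
            corpus.append(" ".join(out))
    return corpus

print(generate_corpus(3))
```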

Place, publisher, year, edition, pages
Open Publishing Association, 2023
Series
Electronic Proceedings in Theoretical Computer Science, ISSN 2075-2180 ; 388
Keywords
semantic representation, graph corpora, graph grammar
National Category
Natural Language Processing
Research subject
Computer Science
Identifiers
urn:nbn:se:umu:diva-212143 (URN); 10.4204/EPTCS.388.3 (DOI); 001113966700004 (); 2-s2.0-85173059788 (Scopus ID)
Conference
13th International Workshop on Non-Classical Models of Automata and Applications, NCMA 2023, 18-19 September, 2023, Famagusta, Cyprus
Funder
Swedish Research Council, 2020-03852
Available from: 2023-07-18 Created: 2023-07-18 Last updated: 2025-04-24 Bibliographically approved
Projects
Tree Automata in Computational Language Technology [2008-06074_VR]; Umeå University
Identifiers
ORCID iD: orcid.org/0000-0001-7349-7693
