An empirical configuration study of a common document clustering pipeline
Eklund, Anton. Umeå University, Faculty of Science and Technology, Department of Computing Science; Adlede, Umeå, Sweden. ORCID iD: 0000-0002-4366-7863
Forsman, Mona. Adlede, Umeå, Sweden. ORCID iD: 0000-0001-6601-5190
Drewes, Frank. Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0001-7349-7693
2023 (English). In: Northern European Journal of Language Technology (NEJLT), ISSN 2000-1533, Vol. 9, no 1. Article in journal (Refereed). Published.
Abstract [en]

Document clustering is frequently used in applications of natural language processing, e.g. to classify news articles or create topic models. In this paper, we study document clustering with the common clustering pipeline that includes vectorization with BERT or Doc2Vec, dimension reduction with PCA or UMAP, and clustering with K-Means or HDBSCAN. We discuss the interactions of the different components in the pipeline, parameter settings, and how to determine an appropriate number of dimensions. The results suggest that BERT embeddings combined with UMAP dimension reduction to no less than 15 dimensions provide a good basis for clustering, regardless of the specific clustering algorithm used. Moreover, while UMAP performed better than PCA in our experiments, tuning the UMAP settings showed little impact on the overall performance. Hence, we recommend configuring UMAP so as to optimize its time efficiency. According to our topic model evaluation, the combination of BERT and UMAP, also used in BERTopic, performs best. A topic model based on this pipeline typically benefits from a large number of clusters.
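The three-stage pipeline the abstract describes (vectorize, reduce dimensions, cluster) can be sketched as follows. This is an illustrative sketch, not the authors' code: random vectors stand in for BERT or Doc2Vec document embeddings, and scikit-learn's PCA and K-Means stand in for the reduction and clustering steps. In the BERT + UMAP configuration the paper recommends, the reduction would instead use the umap-learn package, and HDBSCAN the hdbscan package.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stage 1, vectorization: stand-in for 768-dimensional BERT embeddings
# of 100 documents (a real pipeline would embed the actual texts).
embeddings = rng.normal(size=(100, 768))

# Stage 2, dimension reduction: the paper suggests reducing to no fewer
# than 15 dimensions; umap.UMAP(n_components=15) would replace PCA here.
reduced = PCA(n_components=15, random_state=0).fit_transform(embeddings)

# Stage 3, clustering: hdbscan.HDBSCAN() would replace K-Means here.
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(reduced)

print(reduced.shape)  # (100, 15)
print(labels.shape)   # (100,)
```

Swapping in `umap.UMAP(n_components=15)` and `hdbscan.HDBSCAN()` reproduces the BERT + UMAP + HDBSCAN configuration, assuming those packages are installed; the cluster labels then feed directly into topic-model construction.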

Place, publisher, year, edition, pages
Linköping University Electronic Press, 2023. Vol. 9, no 1
Keywords [en]
document clustering, topic modeling, dimension reduction, clustering, BERT, doc2vec, UMAP, PCA, K-Means, HDBSCAN
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:umu:diva-214455
DOI: 10.3384/nejlt.2000-1533.2023.4396
OAI: oai:DiVA.org:umu-214455
DiVA, id: diva2:1797692
Funder
Swedish Foundation for Strategic Research, ID19-0055
Available from: 2023-09-15. Created: 2023-09-15. Last updated: 2023-09-15. Bibliographically approved.

Open Access in DiVA

fulltext (2698 kB), 161 downloads
File information
File name: FULLTEXT01.pdf
File size: 2698 kB
Checksum SHA-512: ece569a5bf1ec48d0ca8a50d3ae983ab9adb8b1397fbb7f20a34eba36d8976d53e15c38dd449d953b36b551ceca09ed1c22b4c59d8d12d44ba19e4f6bbd66bdb
Type: fulltext. Mimetype: application/pdf

By author/editor: Eklund, Anton; Forsman, Mona; Drewes, Frank
By organisation: Department of Computing Science

