Few-shot anomaly detection in text with deviation learning
Umeå University, Faculty of Science and Technology, Department of Computing Science.
Department of Computer Science Engineering, Indian Institute of Technology Patna, Patna, India.
Department of Computer Science Engineering, Indian Institute of Technology Patna, Patna, India.
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0002-9842-7840
2024 (English). In: Neural Information Processing. ICONIP 2023 / [ed] Luo, B.; Cheng, L.; Wu, Z.G.; Li, H.; Li, C., Singapore: Springer, 2024, p. 425-438. Conference paper, Published paper (Refereed)
Abstract [en]

Most current methods for detecting anomalies in text build models that rely solely on unlabeled data. These models operate on the presumption that no labeled anomalous examples are available, which prevents them from utilizing prior knowledge of anomalies that are typically present in small numbers in many real-world applications. Furthermore, these models prioritize learning feature embeddings rather than optimizing anomaly scores directly, which can lead to suboptimal anomaly scoring and inefficient use of data during learning. In this paper, we introduce FATE, a deep few-shot learning-based framework that leverages limited anomaly examples and learns anomaly scores explicitly, in an end-to-end manner, using deviation learning. In this approach, the anomaly scores of normal examples are adjusted to closely resemble reference scores obtained from a prior distribution, while anomaly samples are forced to have scores that deviate considerably from the reference score, in the upper tail of the prior. Additionally, our model is optimized to learn the distinct behavior of anomalies by utilizing a multi-head self-attention layer and multiple-instance learning. Comprehensive experiments on several benchmark datasets demonstrate that our proposed approach attains new state-of-the-art performance (our code is available at https://github.com/arav1ndajay/fate/).
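
The record links the authors' implementation (https://github.com/arav1ndajay/fate/) but contains no code itself. For intuition only, the sketch below shows a generic deviation-learning objective of the kind the abstract describes: normal examples are pulled toward the mean of a Gaussian reference prior, and the few labeled anomalies are pushed into its upper tail. The Gaussian prior, the margin of five standard deviations, and the function name are assumptions for illustration, not details taken from the FATE paper.

```python
import torch

def deviation_loss(scores: torch.Tensor, labels: torch.Tensor,
                   margin: float = 5.0, n_ref: int = 5000) -> torch.Tensor:
    """Sketch of a deviation-learning loss (assumed form, not the FATE code).

    scores: predicted anomaly scores, shape (batch,)
    labels: 1.0 for the few labeled anomalies, 0.0 for normal examples
    """
    # Reference scores drawn from a standard normal prior; their statistics
    # define what a "normal" anomaly score should look like.
    ref = torch.randn(n_ref, device=scores.device)
    deviation = (scores - ref.mean()) / (ref.std() + 1e-8)
    # Normal examples: push the deviation toward zero, so their scores
    # resemble the reference scores from the prior.
    # Anomalies: hinge penalty until the score deviates by at least `margin`
    # standard deviations into the upper tail of the prior.
    loss = (1.0 - labels) * deviation.abs() \
        + labels * torch.clamp(margin - deviation, min=0.0)
    return loss.mean()
```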

Place, publisher, year, edition, pages
Singapore: Springer, 2024. p. 425-438
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 14448
Keywords [en]
Anomaly detection, Deviation learning, Few-shot learning, Natural language processing, Text anomaly
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-218126
DOI: 10.1007/978-981-99-8082-6_33
Scopus ID: 2-s2.0-85178580714
ISBN: 9789819980819 (print)
ISBN: 9789819980826 (electronic)
OAI: oai:DiVA.org:umu-218126
DiVA, id: diva2:1820451
Conference
30th International Conference on Neural Information Processing, ICONIP 2023, Changsha, China, November 20–23, 2023
Funder
Knut and Alice Wallenberg Foundation
Available from: 2023-12-18 Created: 2023-12-18 Last updated: 2024-07-02 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Das, Anindya Sundar; Bhuyan, Monowar H.
