NeuProNet: neural profiling networks for sound classification
AI Center, FPT Software Company Limited, Hanoi, Viet Nam.
Umeå University, Faculty of Science and Technology, Department of Computing Science. ORCID iD: 0000-0001-8820-2405
AI Center, FPT Software Company Limited, Hanoi, Viet Nam.
School of Computer Science and Information Technology, University College Cork, Cork, Ireland.
2024 (English). In: Neural Computing & Applications, ISSN 0941-0643, E-ISSN 1433-3058, Vol. 36, no 11, p. 5873-5887. Article in journal (Refereed). Published.
Abstract [en]

Real-world sound signals exhibit various grouping and profiling behaviors, such as being recorded from identical sources, sharing similar environmental settings, or containing related background noises. In this work, we propose novel neural profiling networks (NeuProNet) capable of learning and extracting high-level, unique profile representations from sounds. An end-to-end framework is developed so that any backbone architecture can be plugged in and trained, achieving better performance on downstream sound classification tasks. We introduce an in-batch profile grouping mechanism based on profile awareness and attention pooling to produce reliable and robust features with contrastive learning. Furthermore, extensive experiments are conducted on multiple benchmark datasets and tasks to show that neural computing models trained under the guidance of our framework achieve significant performance gains across all evaluation tasks. In particular, the integration of NeuProNet surpasses recent state-of-the-art (SoTA) approaches on the UrbanSound8K and VocalSound datasets with statistically significant improvements in benchmarking metrics, up to 5.92% in accuracy over the previous SoTA method and up to 20.19% over baselines. Our work provides a strong foundation for utilizing neural profiling in machine learning tasks.

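To make the in-batch grouping and attention-pooling idea from the abstract concrete, the following is a minimal sketch assuming a PyTorch setting: per-sample backbone embeddings are grouped by a profile label within a batch, and each group is pooled with learned attention weights into a shared profile representation. The module name ProfilePool, the single-linear-layer scoring function, and the tensor shapes are illustrative assumptions, not the authors' NeuProNet implementation, which additionally applies contrastive learning on top of such features.

import torch
import torch.nn as nn


class ProfilePool(nn.Module):
    """Attention-pool embeddings that share a profile label within one batch (illustrative sketch)."""

    def __init__(self, dim: int):
        super().__init__()
        # One learned score per embedding; a softmax over each profile group gives attention weights.
        self.score = nn.Linear(dim, 1)

    def forward(self, embeddings: torch.Tensor, profile_ids: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, dim) backbone outputs; profile_ids: (batch,) integer group labels.
        pooled = embeddings.clone()
        for pid in profile_ids.unique():
            mask = profile_ids == pid                          # samples belonging to one profile
            group = embeddings[mask]                           # (n_group, dim)
            weights = torch.softmax(self.score(group), dim=0)  # (n_group, 1) attention weights
            profile_vec = (weights * group).sum(dim=0)         # weighted profile representation
            pooled[mask] = profile_vec                         # share it across the group's samples
        return pooled


# Toy usage: 8 clips from 4 hypothetical recording profiles, 128-dimensional backbone embeddings.
pool = ProfilePool(dim=128)
x = torch.randn(8, 128)
ids = torch.tensor([0, 0, 1, 1, 1, 2, 2, 3])
print(pool(x, ids).shape)  # torch.Size([8, 128])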
Place, publisher, year, edition, pages
Springer Nature, 2024. Vol. 36, no 11, p. 5873-5887
Keywords [en]
Audio classification, Deep learning, Neural profiling network, Signal processing
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-220001
DOI: 10.1007/s00521-023-09361-8
ISI: 001152242400004
Scopus ID: 2-s2.0-85182479547
OAI: oai:DiVA.org:umu-220001
DiVA, id: diva2:1833137
Available from: 2024-01-31. Created: 2024-01-31. Last updated: 2024-05-07. Bibliographically approved.

Open Access in DiVA

fulltext (1847 kB)
File information
File name: FULLTEXT02.pdf
File size: 1847 kB
Checksum (SHA-512): 43dcd7011498ceebe198a1a869550162ca348be6cec203f6992f9c721f0781a0d24e89fc8ec7c382975cb7f16e247c4412a4c2a57e186e9f7168269b9aa2650b
Type: fulltext
Mimetype: application/pdf

Authority records

Vu, Xuan-Son
