Towards automatic DJ mixing: cue point detection and drum transcription
Umeå University, Faculty of Science and Technology, Department of Computing Science (HPAC). ORCID iD: 0000-0001-5022-1686
2024 (English). Doctoral thesis, comprehensive summary (Other academic).
Alternative title
Mot automatisk DJ-mixning: cue point-detektering och trumtranskription (Swedish)
Abstract [en]

In this thesis, we aim to automate the creation of DJ mixes. A DJ mix is an uninterrupted sequence of music, constructed by playing tracks one after the other to improve the listening experience for the audience. To build mixes automatically, we first need to understand the tracks we want to mix, which we do by extracting information from the audio signal. Specifically, we retrieve two pieces of information that are essential for DJs: cue points and drum transcriptions. In the field of music information retrieval, the two associated tasks are cue point detection and automatic drum transcription.

With cue point detection, we identify the positions in a track that can be used to create pleasant transitions in a mix. DJs have a good intuition for finding these positions; however, the semantic gap between that intuition and a computer program makes it difficult to formalize. To solve this problem, we propose multiple approaches based on either expert knowledge or machine learning. Further, by interpreting the resulting models, we reflect on the musical content that is linked to the presence of cue points.
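
To make the rule-based idea concrete, here is a minimal sketch that treats downbeat-aligned peaks of an energy-novelty curve as cue point candidates. It assumes the librosa and scipy Python libraries and a hypothetical input file "track.mp3"; the actual rules in the thesis, derived from interviews with DJs, are considerably richer.

    import numpy as np
    import scipy.signal
    import librosa

    # Load audio; "track.mp3" is a placeholder for any music file.
    y, sr = librosa.load("track.mp3")
    hop = 512

    # Onset-strength envelope as a simple energy-novelty curve.
    novelty = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)

    # Beat tracking; electronic dance music is usually steady enough for this.
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr, hop_length=hop)

    # Keep only strong novelty peaks, then snap each peak to the nearest beat
    # so that candidate cue points fall on the metrical grid.
    peaks, _ = scipy.signal.find_peaks(novelty, height=np.percentile(novelty, 95))
    nearest = beats[np.argmin(np.abs(beats[:, None] - peaks[None, :]), axis=0)]
    cue_candidates = librosa.frames_to_time(np.unique(nearest), sr=sr, hop_length=hop)
    print(cue_candidates)  # candidate cue points, in seconds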

With automatic drum transcription, we aim to retrieve the position and instrument of every note played on the drum kit, to characterize the musical content of the tracks. The most promising transcription methods are based on supervised deep learning, that is, on models trained with labeled datasets. However, because annotations are difficult to create, the datasets available for training are usually limited in size or diversity. We therefore propose novel methods to build better training data, from either real-world or synthetic music tracks. Further, by thoroughly investigating the performance of the resulting models, we identify the characteristics of a dataset that matter most for training.
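
As an illustration of the supervised setup just described, the following sketch maps spectrogram frames to per-instrument onset activations with a small convolutional network. It assumes PyTorch, uses dummy data, and the five drum classes are chosen for illustration; it is not the thesis's actual architecture.

    import torch
    import torch.nn as nn

    N_MELS, N_CLASSES = 64, 5  # e.g., kick, snare, hi-hat, toms, cymbals

    class DrumFrameNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            )
            self.head = nn.Linear(16 * N_MELS, N_CLASSES)

        def forward(self, x):                      # x: (batch, 1, time, N_MELS)
            h = self.conv(x)                       # (batch, 16, time, N_MELS)
            h = h.permute(0, 2, 1, 3).flatten(2)   # (batch, time, 16 * N_MELS)
            return self.head(h)                    # per-frame logits per class

    model = DrumFrameNet()
    loss_fn = nn.BCEWithLogitsLoss()  # one binary onset target per class and frame
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One training step on dummy data; real training iterates over a labeled dataset.
    spec = torch.randn(8, 1, 100, N_MELS)                    # spectrogram excerpts
    target = torch.randint(0, 2, (8, 100, N_CLASSES)).float()  # onset labels
    opt.zero_grad()
    loss = loss_fn(model(spec), target)
    loss.backward()
    opt.step()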

The solutions we propose for both cue point detection and automatic drum transcription achieve high accuracy. By investigating how they reach this accuracy, we further our understanding of music information retrieval, and by open-sourcing our contributions, we make these findings reproducible. With the software resulting from this research, we built a proof of concept for automatic DJ mixing.

Place, publisher, year, edition, pages
Umeå: Umeå University, 2024, p. 34
Series
Report / UMINF, ISSN 0348-0542 ; 24.08
Keywords [en]
Music Information Retrieval, Cue Point Detection, Automatic Drum Transcription
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:umu:diva-228266
ISBN: 9789180704533 (print)
ISBN: 9789180704540 (electronic)
OAI: oai:DiVA.org:umu-228266
DiVA id: diva2:1887409
Public defence
2024-09-02, MIT.C.343, MIT-huset, Umeå, 13:00 (English)
Available from: 2024-08-15. Created: 2024-08-07. Last updated: 2024-08-09. Bibliographically approved.
List of papers
1. M-DJCUE: a manually annotated dataset of cue points
2019 (English). Conference paper, oral presentation only (Other academic).
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-228225 (URN)
Conference
20th International Society for Music Information Retrieval Conference: Across the bridge, Delft, The Netherlands, November 4-8, 2019
Note

Session: Late Breaking/Demo

Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2024-08-08. Bibliographically approved.
2. Automatic detection of cue points for the emulation of DJ mixing
2022 (English). In: Computer Music Journal, ISSN 0148-9267, E-ISSN 1531-5169, vol. 46, no. 3, p. 67-82. Article in journal (Refereed). Published.
Abstract [en]

The automatic identification of cue points is a central task in applications as diverse as music thumbnailing, generation of mash-ups, and DJ mixing. Our focus lies on electronic dance music and on a specific kind of cue point, the “switch point,” which makes it possible to automatically construct transitions between tracks, mimicking what professional DJs do. We present two approaches for the detection of switch points. One embodies a few general rules we established from interviews with professional DJs; the other models a manually annotated dataset that we curated. Both approaches are based on feature extraction and novelty analysis. From an evaluation conducted on previously unknown tracks, we found that about 90 percent of the points generated can be reliably used in the context of a DJ mix.
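
For readers unfamiliar with novelty analysis, the sketch below computes Foote's checkerboard-kernel novelty curve over a self-similarity matrix of MFCC frames, a standard formulation of the technique; the paper's actual features and post-processing for switch points differ. It assumes librosa and a hypothetical input file "track.mp3".

    import numpy as np
    import librosa

    y, sr = librosa.load("track.mp3")
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # (20, time)

    # Cosine self-similarity between all pairs of frames.
    feats = mfcc / (np.linalg.norm(mfcc, axis=0, keepdims=True) + 1e-9)
    ssm = feats.T @ feats  # (time, time)

    # Checkerboard kernel: positive on the block diagonal, negative off it.
    L = 32
    sign = np.sign(np.arange(-L, L) + 0.5)
    kernel = np.outer(sign, sign)

    # Correlate the kernel along the diagonal of the self-similarity matrix;
    # peaks of the resulting novelty curve mark boundaries between sections.
    n = ssm.shape[0]
    novelty = np.zeros(n)
    for i in range(L, n - L):
        novelty[i] = np.sum(ssm[i - L:i + L, i - L:i + L] * kernel)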

Place, publisher, year, edition, pages
MIT Press, 2022
National Category
Signal Processing; Computer Sciences
Identifiers
urn:nbn:se:umu:diva-216393 (URN)
10.1162/comj_a_00652 (DOI)
001101195600004 (ISI)
2-s2.0-85177430629 (Scopus ID)
Available from: 2023-11-10. Created: 2023-11-10. Last updated: 2025-04-24. Bibliographically approved.
3. Interpretability of methods for switch point detection in electronic dance music
(English). Manuscript (preprint) (Other academic).
National Category
Computer Sciences; Music
Identifiers
urn:nbn:se:umu:diva-228227 (URN)
Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2025-02-21.
4. ADTOF: A large dataset of non-synthetic music for automatic drum transcription
2021 (English). In: Proceedings of the 22nd International Society for Music Information Retrieval Conference, 2021, p. 818-824. Conference paper, published paper (Refereed).
Abstract [en]

The state-of-the-art methods for drum transcription in the presence of melodic instruments (DTM) are machine learning models trained in a supervised manner, which means that they rely on labeled datasets. The problem is that the available public datasets are limited either in size or in realism, and are thus suboptimal for training purposes. Indeed, the best results are currently obtained via a rather convoluted multi-step training process that involves both real and synthetic datasets. To address this issue, starting from the observation that the communities of rhythm game players provide a large amount of annotated data, we curated a new dataset of crowdsourced drum transcriptions. This dataset contains real-world music, is manually annotated, and is about two orders of magnitude larger than any other non-synthetic dataset, making it a prime candidate for training purposes. However, due to crowdsourcing, the initial annotations contain mistakes. We discuss how the quality of the dataset can be improved by automatically correcting different types of mistakes. When used to train a popular DTM model, the dataset yields a performance that matches that of the state-of-the-art for DTM, thus demonstrating the quality of the annotations.
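
One simple instance of the automatic correction mentioned above is removing a global time offset between crowdsourced annotations and the audio. The sketch below, assuming librosa and hypothetical files "track.mp3" and "drums.txt" (one annotation time per line, in seconds), grid-searches the offset that maximizes agreement with detected onsets; the actual ADTOF cleaning pipeline handles more error types than this.

    import numpy as np
    import librosa

    y, sr = librosa.load("track.mp3")
    annotated = np.loadtxt("drums.txt")  # annotated drum onset times, seconds
    detected = librosa.onset.onset_detect(y=y, sr=sr, units="time")

    def agreement(offset, tol=0.05):
        """Count annotations within `tol` seconds of a detected onset."""
        shifted = annotated + offset
        dists = np.min(np.abs(shifted[:, None] - detected[None, :]), axis=1)
        return np.sum(dists <= tol)

    # Grid-search the global offset over +/- 1 second.
    offsets = np.arange(-1.0, 1.0, 0.01)
    best = offsets[np.argmax([agreement(o) for o in offsets])]
    corrected = annotated + best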

National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-189852 (URN)
10.5281/zenodo.5624527 (DOI)
2-s2.0-85148923089 (Scopus ID)
9781732729902 (ISBN)
Conference
ISMIR 2021, the 22nd International Society for Music Information Retrieval Conference, Online, November 7-12, 2021
Available from: 2021-11-23. Created: 2021-11-23. Last updated: 2024-08-07. Bibliographically approved.
5. High-quality and reproducible automatic drum transcription from crowdsourced data
2023 (English). In: Signals, E-ISSN 2624-6120, vol. 4, no. 4, p. 768-787. Article in journal (Refereed). Published.
Abstract [en]

Within the broad problem known as automatic music transcription, we considered the specific task of automatic drum transcription (ADT). This is a complex task that has recently shown significant advances thanks to deep learning (DL) techniques. Most notably, massive amounts of labeled data obtained from crowds of annotators have made it possible to implement large-scale supervised learning architectures for ADT. In this study, we explored the untapped potential of these new datasets by addressing three key points: First, we reviewed recent trends in DL architectures and focused on two techniques, self-attention mechanisms and tatum-synchronous convolutions. Then, to mitigate the noise and bias that are inherent in crowdsourced data, we extended the training data with additional annotations. Finally, to quantify the potential of the data, we compared many training scenarios by combining up to six different datasets, including zero-shot evaluations. Our findings revealed that crowdsourced datasets outperform previously utilized datasets, and regardless of the DL architecture employed, they are sufficient in size and quality to train accurate models. By fully exploiting this data source, our models produced high-quality drum transcriptions, achieving state-of-the-art results. Thanks to this accuracy, our work can be more successfully used by musicians (e.g., to learn new musical pieces by reading, or to convert their performances to MIDI) and researchers in music information retrieval (e.g., to retrieve information from the notes instead of audio, such as the rhythm or structure of a piece).
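
To illustrate one of the two techniques named above, the sketch below pools spectrogram frames per tatum, here approximated as a fixed fourfold subdivision of each tracked beat, so that subsequent processing operates on a musically meaningful grid. It assumes librosa and a hypothetical input file "track.mp3"; the paper's tatum estimation is more sophisticated than this approximation.

    import numpy as np
    import librosa

    y, sr = librosa.load("track.mp3")
    hop = 512
    spec = librosa.feature.melspectrogram(y=y, sr=sr, hop_length=hop)

    _, beats = librosa.beat.beat_track(y=y, sr=sr, hop_length=hop)
    beat_times = librosa.frames_to_time(beats, sr=sr, hop_length=hop)

    # Subdivide each beat into 4 tatums by linear interpolation.
    tatum_times = np.concatenate(
        [np.linspace(a, b, 4, endpoint=False)
         for a, b in zip(beat_times[:-1], beat_times[1:])]
    )
    tatum_frames = librosa.time_to_frames(tatum_times, sr=sr, hop_length=hop)

    # Aggregate all spectrogram frames belonging to the same tatum.
    sync = librosa.util.sync(spec, tatum_frames, aggregate=np.max)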

Place, publisher, year, edition, pages
MDPI, 2023
Keywords
automatic drum transcription, crowdsourced dataset, self-attention mechanism, tatum
National Category
Signal Processing; Computer Sciences
Identifiers
urn:nbn:se:umu:diva-216394 (URN)
10.3390/signals4040042 (DOI)
001177003200001 (ISI)
2-s2.0-85180709684 (Scopus ID)
Funder
Swedish National Infrastructure for Computing (SNIC); Swedish Research Council, 2022-06725; Swedish Research Council, 2018-05973
Available from: 2023-11-10. Created: 2023-11-10. Last updated: 2025-04-24. Bibliographically approved.
6. In-depth performance analysis of the ADTOF-based algorithm for automatic drum transcription
2024 (English). In: Proceedings of the 25th International Society for Music Information Retrieval Conference, San Francisco: ISMIR, 2024, p. 1060-1067. Conference paper, published paper (Refereed).
Abstract [en]

The importance of automatic drum transcription lies in the potential to extract useful information from a musical track; however, the low reliability of the models for this task is a limiting factor. Indeed, even though the quality of the generated transcriptions has improved in the recent literature thanks to the curation of large training datasets via crowdsourcing, a large margin for improvement remains before this task can be considered solved. Aiming to steer the development of future models, we identify the most common errors made by models trained and tested on the aforementioned crowdsourced datasets. We perform this study in three steps: first, we detail the quality of the transcription for each class of interest; second, we employ a new metric and a pseudo confusion matrix to quantify different mistakes in the estimations; last, we compute the agreement between different annotators of the same track to estimate the accuracy of the ground truth. Our findings are twofold: on the one hand, we observe that the previously reported issue of less represented instruments (e.g., toms) being less reliably transcribed is now mostly solved. On the other hand, cymbal instruments show unprecedentedly low relative performance. We provide intuitive explanations as to why cymbal instruments are difficult to transcribe, and we identify them as the main source of disagreement among annotators.
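
The per-class quality figures discussed above rest on the standard onset-matching evaluation: an estimated note counts as correct if it falls within +/- 50 ms of an unmatched reference note of the same instrument class. The sketch below implements a simplified greedy version in plain Python (evaluation toolkits such as mir_eval use optimal matching instead); the paper's new metric and pseudo confusion matrix are not reproduced here, and the example times are hypothetical.

    def f_measure(ref, est, tol=0.05):
        """Greedy one-to-one matching of estimated to reference onset times."""
        ref, est = sorted(ref), sorted(est)
        hits, used = 0, set()
        for t in est:
            for j, r in enumerate(ref):
                if j not in used and abs(t - r) <= tol:
                    used.add(j)
                    hits += 1
                    break
        p = hits / len(est) if est else 0.0
        r = hits / len(ref) if ref else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    # Evaluate each drum class independently (times in seconds).
    reference = {"kick": [0.5, 1.0, 1.5], "cymbal": [0.5, 1.25]}
    estimate = {"kick": [0.51, 1.02, 1.5], "cymbal": [0.7]}
    for cls in reference:
        print(cls, f_measure(reference[cls], estimate.get(cls, [])))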

Place, publisher, year, edition, pages
San Francisco: ISMIR, 2024
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-228264 (URN)
2-s2.0-85219129262 (Scopus ID)
Conference
25th International Society for Music Information Retrieval Conference (ISMIR), San Francisco, USA, November 10-14, 2024
Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2025-04-02. Bibliographically approved.
7. Analyzing and reducing the synthetic-to-real transfer gap in music information retrieval: the task of automatic drum transcription
(English). Manuscript (preprint) (Other academic).
National Category
Computer Sciences
Identifiers
urn:nbn:se:umu:diva-228228 (URN)
Available from: 2024-08-07. Created: 2024-08-07. Last updated: 2024-08-08.

Open Access in DiVA

fulltext (3385 kB), 206 downloads
File name: FULLTEXT01.pdf. File size: 3385 kB. Checksum (SHA-512):
4ab261ec3b7b48fb3febb5613fbbcd0d926f45a68310211423ccacde5cdfdda53d11bb63316de36a8755ce9a110456e988f940e3fe481e952d996045e6702240
Type: fulltext. Mimetype: application/pdf.
spikblad (197 kB), 34 downloads
File name: SPIKBLAD02.pdf. File size: 197 kB. Checksum (SHA-512):
fd621bf7610f06fa99a21da299f3a8d0294ae78ade622406d122982f270df22fa11ec08ba4e3ba2845d4e54a876225c154ff43d1eeea1718e63d139fc68ff60d
Type: spikblad. Mimetype: application/pdf.

Authority records

Zehren, Mickaël

