Multimodal representation learning over heterogeneous networks for tag-based music retrieval

Abstract

Learning representations of data described by features from multiple modalities has received considerable attention in Music Information Retrieval. Among the several available sources of information, music recordings can be represented mainly by features extracted from the acoustic content, lyrics, and metadata, which carry complementary information and are relevant for discriminating recordings. In this work, we propose a new method for learning multimodal representations structured as a heterogeneous network, capable of incorporating different musical features into the representation while simultaneously exploring their similarity. Our multimodal representation is centered on tag information extracted with a state-of-the-art neural language model and, in a complementary way, on the audio represented by the mel-spectrogram. We submitted our method to a robust evaluation process comprising 10,000 queries under different scenarios and model parameter variations. In addition, we compute the Mean Average Precision and compare the proposed representation with representations built only from audio or from tags obtained with a pre-trained neural model. The proposed method achieves the best results in all evaluated scenarios and highlights the discriminative power that multimodality can add to musical representations.
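As a point of reference for the evaluation metric mentioned above, the sketch below shows a standard way to compute Mean Average Precision over a set of retrieval queries in Python. It is a minimal illustration, not the paper's evaluation code: the `rankings`/`relevance` structures, track identifiers, and the tag-match notion of relevance are assumptions introduced here for clarity.

```python
from typing import Dict, List, Sequence, Set


def average_precision(ranked_ids: Sequence[str], relevant_ids: Set[str]) -> float:
    """Average precision for one query: mean of precision@k at each relevant hit."""
    hits, precisions = 0, []
    for k, item_id in enumerate(ranked_ids, start=1):
        if item_id in relevant_ids:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0


def mean_average_precision(rankings: Dict[str, List[str]],
                           relevance: Dict[str, Set[str]]) -> float:
    """MAP over all queries; each query maps to a ranked list of retrieved track ids."""
    scores = [average_precision(rankings[q], relevance.get(q, set())) for q in rankings]
    return sum(scores) / len(scores) if scores else 0.0


# Hypothetical usage: each query retrieves tracks ranked by similarity in some
# learned representation space; relevance here is an assumed tag-match criterion.
rankings = {"query_1": ["track_3", "track_7", "track_1"]}
relevance = {"query_1": {"track_3", "track_1"}}
print(mean_average_precision(rankings, relevance))  # ~0.833
```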

Keywords: Music representation learning, Multimodal representation learning, Music information retrieval, Tag-based music retrieval

Article history: Received 23 June 2021, Revised 28 May 2022, Accepted 22 June 2022, Available online 1 July 2022, Version of Record 8 July 2022.

DOI: https://doi.org/10.1016/j.eswa.2022.117969