Publication Type

Master's Thesis

Version

publishedVersion

Publication Date

11-2021

Abstract

In this thesis, I focus on music generation conditioned on human sentiments such as positive and negative. Because no existing large-scale music dataset is annotated with sentiment labels, generating high-quality music conditioned on sentiment is hard. I therefore build a new dataset consisting of triplets of lyrics, melody, and sentiment, without requiring any manual annotation. I use an automated sentiment recognition model (based on BERT trained on the Edmonds Dance dataset) to "label" each piece of music according to the sentiment recognized from its lyrics. I then train a model to generate sentimental music and call the method Sentimental Lyric and Melody Generator (SLMG). Specifically, SLMG consists of three modules: 1) an encoder-decoder model trained end-to-end to generate lyrics and melody; 2) a music sentiment classifier trained on the labelled data; and 3) a modified beam search algorithm that guides the music generation process by incorporating the music sentiment classifier. I conduct subjective and objective evaluations of the generated music, and the results show that SLMG is capable of generating tuneful lyrics and melodies with specific sentiments.
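To illustrate the third module, below is a minimal sketch of classifier-guided beam search: at each decoding step, candidate continuations are ranked by a mix of the decoder's log-probability and a sentiment classifier's log-probability. This is not the thesis's actual implementation, which is not shown in the abstract; the vocabulary, the stand-in functions lm_log_probs and sentiment_score, and the interpolation weight alpha are all hypothetical placeholders.

import math
import random
from typing import List, Tuple

VOCAB = ["la", "da", "dum", "hey", "<eos>"]

def lm_log_probs(prefix: List[str]) -> List[float]:
    # Hypothetical stand-in for the encoder-decoder's next-token
    # distribution; deterministic pseudo-random scores for demonstration.
    random.seed(hash(tuple(prefix)) % (2**32))
    weights = [random.random() for _ in VOCAB]
    total = sum(weights)
    return [math.log(w / total) for w in weights]

def sentiment_score(prefix: List[str], target: str = "positive") -> float:
    # Hypothetical stand-in for the music sentiment classifier: returns a
    # log-probability that the partial sequence matches the target sentiment.
    random.seed((hash(tuple(prefix)) + hash(target)) % (2**32))
    return math.log(random.uniform(0.1, 1.0))

def guided_beam_search(beam_size: int = 3, max_len: int = 8,
                       alpha: float = 0.5) -> List[str]:
    """Beam search whose candidate score interpolates the decoder's
    log-probability with the sentiment classifier's log-probability,
    weighted by the (assumed) hyperparameter alpha."""
    beams: List[Tuple[List[str], float]] = [([], 0.0)]
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == "<eos>":
                # Finished hypotheses carry over unchanged.
                candidates.append((prefix, score))
                continue
            for tok, lp in zip(VOCAB, lm_log_probs(prefix)):
                new_prefix = prefix + [tok]
                # Combined score: fluency term plus sentiment guidance term.
                guided = score + lp + alpha * sentiment_score(new_prefix)
                candidates.append((new_prefix, guided))
        # Keep only the top beam_size hypotheses by the combined score.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams[0][0]

if __name__ == "__main__":
    print(" ".join(guided_beam_search()))

The design choice sketched here is that the classifier re-scores partial sequences at every step, steering the beam toward the target sentiment rather than filtering only the finished outputs.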

Keywords

Conditional Music Generation, Seq2Seq, Beam Search, Transformer

Degree Awarded

MSc in Applied Finance (SUFE)

Discipline

Artificial Intelligence and Robotics | Music

Supervisor(s)

SUN, Qianru

First Page

1

Last Page

78

Publisher

Singapore Management University

City or Country

Singapore

Copyright Owner and License

Author
