Paper Title
Quantifying Musical Style: Ranking Symbolic Music based on Similarity to a Style
Paper Authors
Paper Abstract
Modelling human perception of musical similarity is critical for the evaluation of generative music systems, musicological research, and many Music Information Retrieval tasks. Although human similarity judgments are the gold standard, computational analysis is often preferable, since its results are easier to reproduce and computational methods are far more scalable. Moreover, computation-based approaches can produce results quickly and on demand, which is a prerequisite for use in an online system. We propose StyleRank, a method to measure the similarity between a MIDI file and an arbitrary musical style delineated by a collection of MIDI files. MIDI files are encoded using a novel set of features, and an embedding is learned using Random Forests. Experimental evidence demonstrates that StyleRank is highly correlated with human perception of stylistic similarity, and that it is precise enough to rank generated samples based on their similarity to the style of a corpus. In addition, similarity can be measured with respect to a single feature, allowing specific discrepancies between generated samples and a particular musical style to be identified.
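
To make the pipeline described in the abstract concrete, below is a minimal sketch of a StyleRank-like ranking procedure. It is not the authors' implementation: it assumes each MIDI file has already been converted into a fixed-length feature vector (the paper's novel feature set is not reproduced here), and the hypothetical arrays corpus_X and candidate_X hold those vectors for the style corpus and for generated samples. A Random Forest is trained to separate corpus from candidate features, each sample is embedded via the leaf indices it reaches, and candidates are ranked by average leaf co-occurrence with the corpus.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_by_style_similarity(corpus_X, candidate_X, n_trees=500, seed=0):
    """Rank candidate samples by similarity to the corpus style (illustrative sketch).

    Trains a Random Forest to separate corpus from candidate feature vectors,
    embeds every sample as its vector of leaf indices, and scores each candidate
    by how often it shares a leaf with corpus samples across trees.
    """
    X = np.vstack([corpus_X, candidate_X])
    y = np.concatenate([np.ones(len(corpus_X)), np.zeros(len(candidate_X))])

    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    forest.fit(X, y)

    # Leaf-index embedding: shape (n_samples, n_trees).
    leaves = forest.apply(X)
    corpus_leaves = leaves[: len(corpus_X)]
    candidate_leaves = leaves[len(corpus_X):]

    # Similarity of a candidate to the corpus: fraction of (tree, corpus sample)
    # pairs in which the candidate falls in the same leaf as the corpus sample.
    sims = np.array([
        np.mean(candidate_leaves[i][None, :] == corpus_leaves)
        for i in range(len(candidate_leaves))
    ])
    order = np.argsort(-sims)  # candidate indices, most similar first
    return order, sims

Per-feature similarity, as mentioned in the abstract, could be approximated under the same assumptions by restricting corpus_X and candidate_X to the columns produced by a single feature before calling the function.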