Modeling Musical Mood From Audio Features, Affect and Listening Context on an In-situ Dataset

Date
2012-08-29
Author
Watson, Diane
Type
Thesis
Degree Level
Masters
Abstract
Musical mood is the emotion that a piece of music expresses. When music recommenders (i.e., systems that recommend music a listener is likely to enjoy) use musical mood, they can make salient suggestions that match a user’s expectations. The musical mood of a track can be modeled solely from audio features of the music; however, these models have been derived from musical data sets of a single genre, labeled in a laboratory setting. Applying these models to data sets that reflect a user’s actual listening habits may not work well, and as a result, music recommenders based on these models may fail. Using a smartphone-based experience-sampling application that we developed for the Android platform, we collected a music listening data set gathered in situ during users’ daily lives. Analyses of our data set showed that real-life listening experiences differ from the data sets previously used to model musical mood. Our data set is a heterogeneous set of songs, artists, and genres. The reasons for listening and the context within which listening occurs vary both across individuals and for a single user. We then created the first model of musical mood built from in-situ, real-life data. We showed that while audio features, song lyrics, and socially-created tags can be used to successfully model musical mood with classification accuracies greater than chance, adding contextual information such as the listener’s affective state and/or listening context can improve classification accuracies. We successfully classified musical arousal in a 2-class model with a classification accuracy of 67% and musical valence with an accuracy of 75%. Finally, we discuss ways in which the classification accuracies can be improved, and the applications that result from our models.
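To make the abstract’s modeling claim concrete, a minimal sketch follows of a 2-class mood classifier that compares audio-only features against audio plus contextual features. It is purely illustrative: this record includes no code, and the scikit-learn pipeline, the synthetic data, and every feature name below (tempo-like audio features, listener arousal/valence self-reports, a listening-context code) are assumptions, not the author’s actual method or data.

```python
# Illustrative sketch only: synthetic data and hypothetical features,
# not the thesis's actual pipeline or data set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Hypothetical audio features per track (e.g., tempo, spectral centroid, energy).
audio = rng.normal(size=(n, 3))

# Hypothetical contextual features: listener affect self-reports plus a coded
# listening context (e.g., 0 = commuting, 1 = working, 2 = relaxing).
context = np.column_stack([
    rng.normal(size=n),          # listener arousal (self-report)
    rng.normal(size=n),          # listener valence (self-report)
    rng.integers(0, 3, size=n),  # listening-context code
])

# Synthetic 2-class labels (high vs. low musical arousal), constructed so that
# both an audio feature and the listener's affect carry signal.
y = (audio[:, 0] + 0.5 * context[:, 0]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Compare audio-only features with audio + context, echoing the abstract's
# claim that contextual information can improve classification accuracy.
for name, X in [("audio only", audio),
                ("audio + context", np.hstack([audio, context]))]:
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.2f}")
```

On this synthetic data the audio + context run typically scores higher, mirroring the abstract’s finding in spirit only; the real models also drew on song lyrics and socially-created tags.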
Degree
Master of Science (M.Sc.)
Department
Computer Science
Program
Computer Science
Supervisor
Mandryk, Regan
Committee
Gutwin, Carl; Neufeld, Eric; Bell, Scott
Copyright Date
August 2012
Subject
Musical Mood, Affect, Affective Computing, Music, In-Situ Data, Listening Context, Experience Sampling