Modeling Musical Mood From Audio Features, Affect and Listening Context on an In-situ Dataset

dc.contributor.advisor: Mandryk, Regan
dc.contributor.committeeMember: Gutwin, Carl
dc.contributor.committeeMember: Neufeld, Eric
dc.contributor.committeeMember: Bell, Scott
dc.creator: Watson, Diane
dc.date.accessioned: 2013-01-03T22:32:12Z
dc.date.available: 2013-01-03T22:32:12Z
dc.date.created: 2012-08
dc.date.issued: 2012-08-29
dc.date.submitted: August 2012
dc.description.abstract: Musical mood is the emotion that a piece of music expresses. When musical mood is used in music recommenders (i.e., systems that recommend music a listener is likely to enjoy), they can make salient suggestions that match a user's expectations. The musical mood of a track can be modeled solely from audio features of the music; however, these models have been derived from musical data sets of a single genre, labeled in a laboratory setting. Applying these models to data sets that reflect a user's actual listening habits may not work well, and as a result, music recommenders based on these models may fail. Using a smartphone-based experience-sampling application that we developed for the Android platform, we collected a music-listening data set in-situ during users' daily lives. Analyses of our data set showed that real-life listening experiences differ from the data sets previously used in modeling musical mood: our data set is a heterogeneous set of songs, artists, and genres, and both the reasons for listening and the context within which listening occurs vary across individuals and for a single user. We then created the first model of musical mood built from in-situ, real-life data. We showed that while audio features, song lyrics, and socially-created tags can be used to model musical mood with classification accuracies greater than chance, adding contextual information such as the listener's affective state and/or listening context improves classification accuracies. We successfully classified musical arousal in a 2-class model with a classification accuracy of 67% and musical valence with an accuracy of 75%. Finally, we discuss ways in which the classification accuracies can be improved, and the applications that result from our models.
dc.identifier.uri: http://hdl.handle.net/10388/ETD-2012-08-563
dc.language.iso: eng
dc.subject: Musical Mood, Affect, Affective Computing, Music, In-Situ Data, Listening Context, Experience Sampling
dc.title: Modeling Musical Mood From Audio Features, Affect and Listening Context on an In-situ Dataset
dc.type.genre: Thesis
dc.type.material: text
thesis.degree.department: Computer Science
thesis.degree.discipline: Computer Science
thesis.degree.grantor: University of Saskatchewan
thesis.degree.level: Masters
thesis.degree.name: Master of Science (M.Sc.)
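
The modeling step described in the abstract (a 2-class classifier over audio features plus listener affect and listening context) can be illustrated with a minimal sketch. Everything below is an illustrative assumption: the feature columns, the synthetic data, and the random-forest choice are stand-ins, not the thesis's actual features, labels, or classifier.

    # Hypothetical sketch of a 2-class musical-valence classifier combining
    # audio features with listener affect and listening context, in the spirit
    # of the abstract. Data and feature names are synthetic and illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 500

    # Assumed feature columns: two audio features, two self-reported listener
    # affect dimensions, and a coded listening context.
    X = np.column_stack([
        rng.uniform(60, 180, n),    # tempo (BPM)
        rng.uniform(500, 4000, n),  # spectral centroid (Hz)
        rng.uniform(-1, 1, n),      # listener affect: arousal
        rng.uniform(-1, 1, n),      # listener affect: valence
        rng.integers(0, 5, n),      # listening context code (e.g., commuting)
    ])
    # Synthetic binary label for musical valence, correlated by construction
    # with listener valence to mimic the abstract's contextual-information effect.
    y = (X[:, 3] + 0.1 * rng.standard_normal(n) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print(f"2-class valence accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")

Dropping the two affect columns from X and retraining gives a rough sense of how much the contextual features contribute, which mirrors the comparison the abstract reports.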

Files

Original bundle
Name: WATSON-THESIS.pdf
Size: 2.08 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1005 B
Format: Plain Text