University of Saskatchewan HARVEST
      Modeling Musical Mood From Audio Features, Affect and Listening Context on an In-situ Dataset

File
WATSON-THESIS.pdf (2.081 MB)
      Date
      2012-08-29
      Author
      Watson, Diane
      Type
      Thesis
      Degree Level
      Masters
      Abstract
Musical mood is the emotion that a piece of music expresses. When musical mood is used in music recommenders (i.e., systems that recommend music a listener is likely to enjoy), the recommender can make salient suggestions that match a user’s expectations. The musical mood of a track can be modeled solely from audio features of the music; however, these models have been derived from single-genre musical data sets labeled in a laboratory setting. Applying these models to data sets that reflect a user’s actual listening habits may not work well, and as a result, music recommenders based on these models may fail. Using a smartphone-based experience-sampling application that we developed for the Android platform, we collected a music listening data set gathered in situ during users’ daily lives. Analyses of our data set showed that real-life listening experiences differ from the data sets previously used in modeling musical mood: our data set is a heterogeneous set of songs, artists, and genres, and the reasons for listening and the context within which listening occurs vary both across individuals and for a single user. We then created the first model of musical mood built from in-situ, real-life data. We showed that while audio features, song lyrics, and socially created tags can be used to model musical mood with classification accuracies greater than chance, adding contextual information such as the listener’s affective state and/or listening context can improve classification accuracy. We successfully classified musical arousal in a 2-class model with a classification accuracy of 67%, and musical valence with an accuracy of 75%. Finally, we discuss ways in which these classification accuracies could be improved, and applications that follow from our models.
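
As a rough illustration of the modeling approach the abstract describes (a 2-class classifier over audio features plus listener affect and listening context), the sketch below trains a classifier on synthetic data. The feature names, the random-forest model, and the synthetic labels are all assumptions for illustration; the thesis does not specify its classifier or feature encoding here.

```python
# A minimal sketch, NOT the thesis's actual pipeline: a 2-class musical-valence
# classifier combining audio features with listener affect and listening
# context, mirroring the feature sets named in the abstract. All feature names,
# the random-forest choice, and the synthetic data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-track audio features (e.g., tempo, energy, spectral centroid).
audio = rng.normal(size=(n, 3))
# Hypothetical listener affect (arousal, valence) sampled in situ.
affect = rng.normal(size=(n, 2))
# Hypothetical listening-context code (e.g., 0=commuting, 1=working, 2=relaxing).
context = rng.integers(0, 3, size=(n, 1)).astype(float)

X = np.hstack([audio, affect, context])
# Synthetic 2-class target (high vs. low musical valence), correlated with one
# audio feature and listener affect so the sketch yields above-chance accuracy.
y = (audio[:, 1] + 0.8 * affect[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean 5-fold accuracy: {scores.mean():.2f}  (chance = 0.50)")
```

Dropping the affect and context columns from X and re-running the cross-validation would imitate the abstract's audio-only baseline comparison.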
      Degree
      Master of Science (M.Sc.)
      Department
      Computer Science
      Program
      Computer Science
      Supervisor
      Mandryk, Regan
      Committee
Gutwin, Carl; Neufeld, Eric; Bell, Scott
      Copyright Date
      August 2012
      URI
      http://hdl.handle.net/10388/ETD-2012-08-563
      Subject
      Musical Mood, Affect, Affective Computing, Music, In-Situ Data, Listening Context, Experience Sampling
      Collections
      • Graduate Theses and Dissertations