Automated tagging of music according to mood

Date of Publication

2012

Document Type

Bachelor's Thesis

Degree Name

Bachelor of Science in Computer Science

Subject Categories

Computer Sciences

College

College of Computer Studies

Department/Unit

Computer Science

Thesis Adviser

Arturo Caronongan, III

Defense Panel Members

Joel P. Ilao
Arnulfo P. Azcarraga

Abstract/Summary

Music libraries are constantly growing and are often tagged by instrumentation or artist. An emerging trend is the annotation of music according to its emotionally affective features, but the tools and methods used in annotating music remain the same, making it increasingly difficult to locate or recall a specific song for certain events. The approach presented here extracts musical features from a music file and produces an emotive classification of the song based on a classification model, which can then be used in conjunction with other components, such as a music recommendation system. A dataset of 546 songs, tagged by a group of 4 people using a valence-arousal scale of -1 to +1, was used to train models with different classifier algorithms, such as a multilayer perceptron and different implementations of regression. Results for valence classification show a root mean square error of 0.3016, while arousal classification is at 0.3498. Overall error, calculated as the Euclidean distance between the predicted and annotated points on the valence-arousal plane, averages 0.6164 with a median of 0.5926. Some of the discriminant music features were identified to be the song's spectral moments, linear predictive coding coefficients, and zero-crossing rate. These results show that while music mood classification from musical features alone is feasible, it remains a difficult task, and the inclusion of lyrics and the establishment of the listener's cultural context in relation to the music are likely key to improving classifier performance.
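The overall error measure described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual code; the function name and interface are assumptions. It computes the Euclidean distance between a predicted (valence, arousal) point and its human annotation on the mood plane:

```python
import math

def overall_error(predicted, annotated):
    """Euclidean distance between a predicted (valence, arousal)
    pair and its ground-truth annotation, both on the [-1, +1] scale.

    Illustrative sketch only; names are not from the thesis.
    """
    dv = predicted[0] - annotated[0]  # valence error
    da = predicted[1] - annotated[1]  # arousal error
    return math.hypot(dv, da)

# Example: a prediction off by 0.3 in valence and 0.4 in arousal
# yields an overall error of 0.5.
err = overall_error((0.3, 0.4), (0.0, 0.0))
```

Because valence and arousal each lie in [-1, +1], this distance can range from 0 to 2*sqrt(2), which puts the reported average of 0.6164 in context.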

Abstract Format

html

Language

English

Format

Print

Accession Number

TU18520

Shelf Location

Archives, The Learning Commons, 12F, Henry Sy Sr. Hall

Physical Description

1 v., various foliations ; 28 cm.

Keywords

Automatic musical dictation; Computer Science--Information Retrieval
