The Interactive System of Music Emotion Recognition based on Deep Learning
Abstract
To address the complexity and subjectivity of music emotion recognition, this paper proposes an interactive system based on spectrogram analysis, deep neural networks, and wavelet analysis. The system first uses a spectrogram to capture the time-frequency characteristics of the music signal, then automatically extracts deep emotion-related features with a convolutional neural network (CNN). The Mallat algorithm is introduced for wavelet decomposition, enhancing the local details of the audio signal and improving the accuracy of feature extraction. Experimental results show that the system recognizes musical emotions well, with accuracy significantly improved over traditional methods. In addition, the system supports real-time interaction, allowing users to personalize their music experience by adjusting emotion labels, which suggests broad application prospects in music therapy, game entertainment, and other fields. This study advances music emotion recognition technology and offers a new perspective for further exploration of deep learning in interdisciplinary applications.
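The abstract does not include implementation details. As a rough illustration only, the two signal-processing front ends it names, a time-frequency spectrogram and a Mallat-style wavelet pyramid, can be sketched in plain NumPy. The function names, window sizes, and the choice of the Haar filter below are illustrative assumptions, not the authors' actual system:

```python
import numpy as np

def stft_spectrogram(x, win=256, hop=128):
    """Magnitude spectrogram: Hann-windowed short-time Fourier transform.

    Returns an array of shape (freq_bins, time_frames), the kind of
    time-frequency representation a CNN could consume as input.
    """
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def haar_mallat(x, levels=3):
    """Mallat pyramid algorithm with the Haar filter pair (an assumption;
    the paper does not state which wavelet is used).

    Each level splits the signal into a half-rate approximation (low-pass)
    and a detail band (high-pass) that captures local fine structure.
    """
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        if len(a) % 2:                      # pad to even length before pairing
            a = np.append(a, a[-1])
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        coeffs.append(detail)
        a = approx
    coeffs.append(a)
    return coeffs[::-1]                     # coarsest approximation first

# Toy input: a 1-second 440 Hz tone at an 8 kHz sample rate.
t = np.arange(8000) / 8000.0
sig = np.sin(2 * np.pi * 440 * t)
S = stft_spectrogram(sig)       # (129 freq bins, 61 frames)
C = haar_mallat(sig, levels=3)  # [approx, detail_3, detail_2, detail_1]
```

Because the Haar filter pair is orthonormal, the decomposition preserves signal energy exactly, which makes it easy to sanity-check such a sketch.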

This work is licensed under a Creative Commons Attribution 4.0 International License.