Music is an indispensable part of everyday life and study and is one of the most important forms of multimedia application. With the rapid development of deep learning and neural networks in recent years, how to apply these cutting-edge technologies to the study of music has become a research hotspot. The music waveform is not only the primary representation of an audio signal but also the basis for music feature extraction. This paper first designs a note-extraction method that applies the fast Fourier transform (FFT) to the audio signal under a self-organizing map (SOM) neural network, accurately extracting note-level features such as amplitude, loudness, and period. Second, the audio is divided into bars using a sliding-window matching method, and the amplitude, loudness, and period of each bar are obtained from the behavior of the audio signal within that bar. Finally, adjacent bars are grouped into segments according to their music-theoretic similarity, and the musical features of each segment are obtained. The traditional recurrent neural network (RNN) is improved, and the SOM neural network is used to recognize emotional features in the audio. Experimental results show that the proposed method, based on an SOM neural network and big data, can effectively extract and analyze music-waveform features. Compared with previous studies, this paper proposes a new algorithm that extracts and analyzes the sound waveform more accurately and quickly, and is the first to use an SOM neural network to analyze the emotional model contained in music.
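The abstract gives no implementation details, so the following is only a minimal, illustrative sketch of the pipeline it outlines: FFT-based extraction of note-level features (amplitude, RMS loudness, fundamental period) followed by clustering with a small self-organizing map. All function names, grid sizes, and hyperparameters here are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: the paper provides no code, so every name
# and parameter below is an assumption. The snippet extracts FFT-based
# note features from audio frames and clusters them with a tiny SOM.
import numpy as np

def extract_features(frame: np.ndarray, sample_rate: int) -> np.ndarray:
    """Return [amplitude, loudness, period] for one mono audio frame."""
    windowed = frame * np.hanning(len(frame))        # reduce spectral leakage
    magnitudes = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    peak = 1 + np.argmax(magnitudes[1:])             # skip the DC bin
    period = 1.0 / freqs[peak]                       # fundamental period (s)
    amplitude = np.max(np.abs(frame))                # peak amplitude
    loudness = np.sqrt(np.mean(frame ** 2))          # RMS as a loudness proxy
    return np.array([amplitude, loudness, period])

class SOM:
    """Minimal rectangular SOM with a Gaussian neighborhood."""
    def __init__(self, rows: int, cols: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.weights = rng.random((rows, cols, dim))
        self.grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                         indexing="ij"), axis=-1).astype(float)

    def bmu(self, x: np.ndarray) -> np.ndarray:
        """Grid index of the best-matching unit for feature vector x."""
        d = np.linalg.norm(self.weights - x, axis=-1)
        return np.array(np.unravel_index(np.argmin(d), d.shape), dtype=float)

    def train(self, data: np.ndarray, epochs: int = 100,
              lr0: float = 0.5, radius0: float = 2.0) -> None:
        for epoch in range(epochs):
            decay = 1.0 - epoch / epochs             # linear decay schedule
            lr, radius = lr0 * decay, max(radius0 * decay, 0.5)
            for x in data:
                # Pull the winner and its grid neighbors toward the sample.
                g = np.linalg.norm(self.grid - self.bmu(x), axis=-1)
                h = np.exp(-(g ** 2) / (2 * radius ** 2))
                self.weights += lr * h[..., None] * (x - self.weights)

if __name__ == "__main__":
    sr, n = 44100, 2048
    t = np.arange(n) / sr
    # Synthetic "notes": sine tones of varying pitch and level.
    frames = [a * np.sin(2 * np.pi * f * t)
              for a, f in [(0.9, 220.0), (0.5, 440.0), (0.3, 880.0)]]
    feats = np.array([extract_features(f, sr) for f in frames])
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-9)  # normalize
    som = SOM(rows=3, cols=3, dim=feats.shape[1])
    som.train(feats)
    print([tuple(som.bmu(x).astype(int)) for x in feats])
```

In a full system, the feature vectors would come from the bar-level and segment-level analysis described above rather than from synthetic tones, and emotional categories would presumably be mapped onto regions of the trained SOM grid.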