Pattern-Based Music Generation with Wasserstein Autoencoders and PRC Descriptions

Author(s):  
Valentijn Borghuis ◽  
Luca Angioloni ◽  
Lorenzo Brusci ◽  
Paolo Frasconi

We demonstrate a pattern-based MIDI music generation system with a generation strategy based on Wasserstein autoencoders and a novel variant of pianoroll descriptions of patterns that employs separate channels for note velocities and note durations and can be fed into classic DCGAN-style convolutional architectures. We trained the system on two new datasets (in the acid-jazz and high-pop genres) composed by musicians on our team with music generation in mind. Our demonstration shows that moving smoothly in the latent space allows us to generate meaningful sequences of four-bar patterns.
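The abstract gives no implementation details; the Python sketch below only illustrates the two ideas it mentions: a pianoroll tensor with separate channels for note velocity and note duration, and a smooth traversal of the latent space that would be decoded into a sequence of four-bar patterns. The pattern geometry, normalization, and latent dimensionality are assumptions, and the encoder/decoder are omitted.

```python
import numpy as np

# Assumed pattern geometry: four bars of 4/4 at 16th-note resolution (64 steps),
# 128 MIDI pitches, and two channels (velocity, duration). Sizes are illustrative.
N_CHANNELS, N_PITCHES, N_STEPS = 2, 128, 64

def notes_to_pianoroll(notes):
    """Encode (pitch, start_step, duration_steps, velocity) tuples into a
    two-channel pianoroll: channel 0 stores velocity, channel 1 stores duration."""
    roll = np.zeros((N_CHANNELS, N_PITCHES, N_STEPS), dtype=np.float32)
    for pitch, start, dur, vel in notes:
        roll[0, pitch, start] = vel / 127.0      # normalized note velocity
        roll[1, pitch, start] = dur / N_STEPS    # normalized note duration
    return roll

def interpolate_latents(z_a, z_b, n_points=8):
    """Move smoothly between two latent codes; each point would be decoded into
    a four-bar pattern by the (omitted) convolutional decoder."""
    return [(1 - a) * z_a + a * z_b for a in np.linspace(0.0, 1.0, n_points)]

if __name__ == "__main__":
    pattern = notes_to_pianoroll([(60, 0, 4, 100), (64, 4, 4, 90), (67, 8, 8, 80)])
    print("pianoroll shape:", pattern.shape)     # (2, 128, 64)
    rng = np.random.default_rng(0)
    z_a, z_b = rng.standard_normal(32), rng.standard_normal(32)
    print("interpolation steps:", len(interpolate_latents(z_a, z_b)))
```

Keeping velocity and duration in separate channels lets a standard 2-D convolutional decoder treat each pattern like a multi-channel image, which is what makes DCGAN-style architectures applicable.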

Author(s):  
Prof. Amita Suke ◽  
Prof. Khemutai Tighare ◽  
Yogeshwari Kamble

The music lyrics we generally listen to are written by humans, with no machine involvement. Writing lyrics has never been an easy task: the lyrics need to be meaningful and, at the same time, in harmony and synchronised with the music played over them, and they are usually written by experienced artists who have been writing lyrics for a long time. This project tries to automate lyric generation with a computer program and deep learning, producing lyrics that reduce the load on human skill and can be generated far faster than humans ever could. The project generates music lyrics with the assistance of both humans and AI.
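The abstract does not describe an architecture; as one concrete possibility, the Python sketch below shows a character-level LSTM lyric generator sampled autoregressively. The model class, hyperparameters, and sampling procedure are illustrative assumptions, not the authors' implementation, and the training loop is omitted.

```python
import torch
import torch.nn as nn

class CharLyricsModel(nn.Module):
    """Minimal character-level LSTM: given previous characters, predict the next."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

def sample_lyrics(model, char_to_id, id_to_char, seed="la ", length=120, temperature=0.8):
    """Autoregressively sample new lyrics one character at a time."""
    model.eval()
    ids = torch.tensor([[char_to_id[c] for c in seed]])
    out, state = list(seed), None
    with torch.no_grad():
        logits, state = model(ids, state)
        for _ in range(length):
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            next_id = torch.multinomial(probs, 1).item()
            out.append(id_to_char[next_id])
            logits, state = model(torch.tensor([[next_id]]), state)
    return "".join(out)
```

In practice such a model would first be trained with a cross-entropy objective on a corpus of existing lyrics before being sampled.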


IARJSET ◽  
2019 ◽  
Vol 6 (5) ◽  
pp. 47-54
Author(s):  
Sanidhya Mangal ◽  
Rahul Modak ◽  
Poorva Joshi

2020 ◽  
Vol 10 (18) ◽  
pp. 6627
Author(s):  
Maarten Grachten ◽  
Stefan Lattner ◽  
Emmanuel Deruty

Deep learning has given AI-based methods for music creation a boost over the past years. An important challenge in this field is to balance user control and autonomy in music generation systems. In this work, we present BassNet, a deep learning model for generating bass guitar tracks based on musical source material. An innovative aspect of our work is that the model is trained to learn a temporally stable two-dimensional latent space variable that offers interactive user control. We empirically show that the model can disentangle bass patterns that require sensitivity to harmony, instrument timbre, and rhythm. An ablation study reveals that this capability is due to the temporal stability constraint on latent space trajectories during training. We also demonstrate that models trained on pop/rock music learn a latent space that offers control over, among other things, the diatonic characteristics of the output. Lastly, we present and discuss generated bass tracks for three different music fragments. The work presented here is a step toward the integration of AI-based technology into the workflow of musical content creators.
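The abstract does not give the training objective; the PyTorch fragment below is a minimal sketch of one way a temporal stability constraint on latent space trajectories can be expressed, as a penalty on frame-to-frame movement of the two-dimensional latent variable added to an assumed reconstruction loss. The penalty form, the weighting factor, and the reconstruction term are all assumptions.

```python
import torch
import torch.nn.functional as F

def temporal_stability_loss(z_seq):
    """Penalize frame-to-frame movement of the 2-D latent trajectory.
    z_seq: tensor of shape (batch, time, 2)."""
    return ((z_seq[:, 1:, :] - z_seq[:, :-1, :]) ** 2).mean()

def training_loss(pred_bass, target_bass, z_seq, lam=0.1):
    """Illustrative combined objective: reconstruct the target bass track while
    keeping the latent trajectory temporally smooth (lam is an assumed weight)."""
    return F.mse_loss(pred_bass, target_bass) + lam * temporal_stability_loss(z_seq)

if __name__ == "__main__":
    pred = torch.randn(4, 100, 1)        # dummy predicted bass features
    target = torch.randn(4, 100, 1)      # dummy target bass features
    z = torch.randn(4, 100, 2)           # dummy latent trajectory (batch, time, 2)
    print(training_loss(pred, target, z).item())
```

A smoother latent trajectory is what makes the two latent coordinates usable as a stable, interpretable control surface during interactive use.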


Author(s):  
Shuai Chen ◽  
Yoichiro Maeda ◽  
Yasutake Takahashi

In research on interactive music generation, we propose a music generation method in which the computer generates music by recognizing the gestures of a human music conductor. In this method, the generated music is tuned by the parameters of a network of chaotic elements, which are determined in real time from the recognized gestures. The conductor's hand motions are detected with a Microsoft Kinect. Music theory is embedded in the algorithm so that the generated music is richer. Furthermore, we constructed the music generation system and performed experiments on generating music in cooperation with human beings.
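The abstract outlines the method but not its equations; the NumPy sketch below is one plausible reading: hypothetical gesture features (hand height and speed, e.g. from Kinect skeleton tracking) set the parameters of a ring of coupled logistic maps, and the map states are quantized to a scale as a crude stand-in for the embedded music theory. Feature choices, parameter ranges, and the pitch mapping are all assumptions.

```python
import numpy as np

def gesture_to_params(hand_height, hand_speed):
    """Map simple gesture features (normalized to [0, 1]) to the parameters of
    the chaotic network. Feature choice and ranges are illustrative."""
    r = 3.5 + 0.45 * np.clip(hand_height, 0.0, 1.0)   # logistic-map parameter in a chaotic regime
    coupling = 0.05 + 0.2 * np.clip(hand_speed, 0.0, 1.0)
    return r, coupling

def step_chaotic_network(x, r, coupling):
    """One update of a ring of coupled logistic maps (the network of chaotic elements)."""
    fx = r * x * (1.0 - x)
    neighbours = 0.5 * (np.roll(fx, 1) + np.roll(fx, -1))
    return (1.0 - coupling) * fx + coupling * neighbours

def states_to_notes(x, scale=(0, 2, 4, 5, 7, 9, 11), base=60):
    """Quantize unit-interval states to a major scale, a crude stand-in for the
    music theory embedded in the algorithm."""
    degrees = (x * len(scale)).astype(int) % len(scale)
    return [base + scale[d] for d in degrees]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.random(8)                                  # eight chaotic elements
    r, c = gesture_to_params(hand_height=0.7, hand_speed=0.3)
    for _ in range(4):                                 # a few beats of generated pitches
        x = step_chaotic_network(x, r, c)
        print(states_to_notes(x))
```

In a real-time setting the gesture features would be re-read every frame, so the conductor's motion continuously retunes the chaotic dynamics and hence the generated music.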

