Generating Music Using Deep Learning
The integration of deep learning into symbolic music generation presents new opportunities for emulating artist-specific musical styles. In this paper, we propose a multi-branch Long Short-Term Memory (LSTM) network that generates monophonic melodies conditioned on note pitch, duration, and playback, with a focus on stylistic imitation of The Beatles. Unlike existing approaches that model music solely as sequences of pitches, our model processes three distinct streams of musical attributes and learns their joint temporal dependencies through a custom architecture. We introduce a structured data representation derived from 193 MIDI files of Beatles songs using the music21 toolkit, extracting pitch and duration features and quantizing them into a format suitable for sequential prediction. Experimental results show that the model captures artist-specific musical patterns with moderate accuracy across output branches, and a listening test with 71 participants supports the perceptual plausibility of the generated compositions. Our findings suggest that feature-aware sequence modeling is effective for stylistically informed symbolic music generation, and we discuss limitations and future extensions toward polyphonic modeling and conditional generation.
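To illustrate the kind of pitch and duration extraction described in the abstract, the following is a minimal sketch using the music21 toolkit. The file path, quantization grid, choice of the first part as the melody, and the rest-encoding convention are assumptions for illustration, not the paper's exact preprocessing pipeline.

```python
from music21 import converter, note

def extract_events(midi_path, grid=0.25):
    """Extract (pitch, duration) events from a monophonic MIDI file.

    Pitches are MIDI numbers (rests encoded as -1, an assumed convention);
    durations are quantized to multiples of `grid` quarter notes.
    """
    score = converter.parse(midi_path)
    # Assume the melody lives in the first part of the score.
    melody = score.parts[0].flatten().notesAndRests
    events = []
    for el in melody:
        dur = round(float(el.duration.quarterLength) / grid) * grid
        if isinstance(el, note.Note):
            events.append((el.pitch.midi, dur))
        elif isinstance(el, note.Rest):
            events.append((-1, dur))
    return events

# Example usage (hypothetical file name):
# events = extract_events("beatles_song.mid")
```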
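To make the multi-branch design concrete, here is a minimal Keras sketch of a three-stream next-event predictor with one output branch per attribute. The vocabulary sizes, embedding widths, sequence length, and hidden size are hypothetical placeholders; the paper's actual layer configuration and training setup may differ.

```python
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding, Concatenate
from tensorflow.keras.models import Model

SEQ_LEN = 32    # hypothetical context length
N_PITCH = 128   # hypothetical vocabulary sizes for each stream
N_DUR = 16
N_PLAY = 2

# One input branch per attribute stream (pitch, duration, playback).
pitch_in = Input(shape=(SEQ_LEN,), name="pitch_in")
dur_in = Input(shape=(SEQ_LEN,), name="duration_in")
play_in = Input(shape=(SEQ_LEN,), name="playback_in")

# Embed each categorical stream, then merge and model the joint sequence.
pitch_emb = Embedding(N_PITCH, 32)(pitch_in)
dur_emb = Embedding(N_DUR, 8)(dur_in)
play_emb = Embedding(N_PLAY, 4)(play_in)

merged = Concatenate(axis=-1)([pitch_emb, dur_emb, play_emb])
hidden = LSTM(256)(merged)

# One softmax output branch per attribute for the next-step prediction.
pitch_out = Dense(N_PITCH, activation="softmax", name="pitch_out")(hidden)
dur_out = Dense(N_DUR, activation="softmax", name="duration_out")(hidden)
play_out = Dense(N_PLAY, activation="softmax", name="playback_out")(hidden)

model = Model([pitch_in, dur_in, play_in], [pitch_out, dur_out, play_out])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

In this sketch the three streams share a single recurrent layer, so the temporal dependencies are learned jointly while each branch is supervised separately; at generation time the three softmax outputs would be sampled together to produce the next note event.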