DeepBach: A Steerable Model for Bach Chorales Generation
Reference: Proceedings of the 34th International Conference on Machine Learning (ICML),
pp. 1362–1371, 2017
Abstract
DeepBach is a generative model that produces Bach-style four-part chorales under fine-grained
user control. The system combines a pseudo-Gibbs sampling scheme with deep neural
architectures trained on the full Bach chorale corpus. Unlike earlier models, DeepBach allows
users to interactively constrain musical structure—such as meter, chord progressions, or specific
voices—while maintaining stylistic coherence. Human evaluation experiments show that many
DeepBach harmonizations are indistinguishable from genuine Bach chorales, making it one of the
first successful demonstrations of steerable symbolic music generation.
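The pseudo-Gibbs scheme mentioned above can be sketched as follows. This is a minimal illustration, not the paper's released implementation: `model.conditional` is a hypothetical stand-in for DeepBach's learned conditional distribution over a single cell's pitch given its surrounding context (past, future, and the other voices).

```python
import random

def pseudo_gibbs_sample(model, chorale, constrained, num_iters=1000):
    """Pseudo-Gibbs sampling sketch: each iteration resamples one free
    (voice, time) cell from the model's conditional distribution, leaving
    user-constrained cells untouched.

    `chorale` is a list of voices, each a list of pitches;
    `constrained` is a set of (voice, time) pairs fixed by the user;
    `model.conditional(chorale, v, t)` is assumed to return a
    {pitch: probability} dict for cell (v, t) given the rest of the chorale.
    """
    num_voices, num_steps = len(chorale), len(chorale[0])
    free = [(v, t) for v in range(num_voices) for t in range(num_steps)
            if (v, t) not in constrained]
    for _ in range(num_iters):
        v, t = random.choice(free)               # pick a random free cell
        dist = model.conditional(chorale, v, t)  # learned conditional
        pitches, weights = zip(*dist.items())
        chorale[v][t] = random.choices(pitches, weights=weights)[0]
    return chorale
```

Note how steering falls out of the scheme for free: constrained cells are simply excluded from the resampling set, so fixed melodies, cadences, or whole voices are preserved while the model fills in everything else.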
Highlights
- Deep generative model for four-part Bach-style chorales using pseudo-Gibbs sampling.
- Fully steerable: users can fix notes, chords, cadences, meter, and specific voices.
- Generated chorales respect voice-leading and tonal grammar learned from the Bach corpus.
- Human listening tests show many harmonizations are indistinguishable from genuine Bach.
- One of the most cited works in neural symbolic music generation and AI-assisted composition.
Keywords
Deep learning for music; symbolic music generation; steerable generative models; Bach chorales;
pseudo-Gibbs sampling; computational creativity; music information retrieval.
BibTeX
@inproceedings{hadjeres2017deepbach,
  title     = {DeepBach: a Steerable Model for Bach Chorales Generation},
  author    = {Hadjeres, Ga{\"e}tan and Pachet, Fran{\c{c}}ois and Nielsen, Frank},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1362--1371},
  year      = {2017}
}
Why this paper matters
DeepBach was one of the first systems to show that deep learning could convincingly generate music
in a highly codified style, to the point of fooling musicians in blind tests. It also framed
symbolic music generation as an interactive and steerable process, where human
users retain control over musical structure while the model fills in the details. This idea has
influenced many later AI music systems and remains a reference point for controllable music models.
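The interactive workflow described here can be pictured as a grid of (voice, time) cells in which the user pins some entries and leaves the rest for the model to fill. A minimal sketch with illustrative names (the paper's actual interface is a MuseScore plugin, not this API):

```python
# Illustrative constraint interface, not the paper's actual API: the user
# pins cells of a (voice, time) grid; a sampler would treat None cells as
# free to resample and pinned cells as fixed.

def make_constrained_grid(num_voices, num_steps, melody):
    """Fix voice 0 (soprano) to a user-chosen melody; lower voices stay free."""
    grid = [[None] * num_steps for _ in range(num_voices)]
    grid[0] = list(melody)  # pinned cells the model must preserve
    return grid

soprano = [67, 69, 71, 72]  # hypothetical user melody (MIDI pitches)
grid = make_constrained_grid(4, len(soprano), soprano)
```

This division of labor is the core of the paper's framing: the user controls structure (which cells are pinned, and to what), while the model supplies stylistically coherent content for everything left open.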