Piano Concerto Accompaniment Creation

Our vision is to create an orchestral accompaniment for a solo pianist performing a piano concerto. Ideally, the accompaniment would automatically adapt to the pianist's interpretation, adjusting to aspects such as tempo and dynamics. In our project, we have experimented with a first semi-automatic, offline approach, as described below. On this website, we present the results for two movements of famous piano concertos, where the piano tracks were performed by two non-professional pianists from our research lab and the orchestral tracks were derived from older, public-domain piano concerto recordings.

Copyright and Dataset

The data provided on this website is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0). If you publish results obtained using this data, please cite the literature on source separation of piano concertos mentioned below.

Download Dataset

Overall Procedure

In our project, we created an orchestral accompaniment using a semi-automatic offline approach that involved computational tools from signal processing and machine learning. The main ideas of our approach can be summarized as follows (simplified code sketches illustrating the separation, alignment, warping, and mixing steps are given after the list):

  • Pianist's Performance: The pianist freely performs and records a piano track, creating a personal interpretation (referred to as P).

  • Source Separation: We select an existing recording of a piano concerto and apply source separation techniques to isolate the recording into a piano track (referred to as SP) and an orchestral track (referred to as SO).

  • Temporal Alignment: The separated orchestral track SO is then temporally aligned with the pianist's recording P using alignment and beat-tracking techniques.

  • Time-Scale Modification: We apply time-scale modification techniques to warp the SO track so that it temporally synchronizes with the P track.

  • Dynamic Adjustment and Mixing: Finally, we adjust the dynamics of the SO track and mix it with the P track, creating an orchestral accompaniment tailored to the pianist's interpretation.
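
To make the pipeline more concrete, the following Python sketches illustrate simplified versions of the individual steps. First, the source-separation step: the trained network described in reference [1] is not reproduced here, so the object `piano_mask_model` below is a hypothetical placeholder for any model that predicts a soft piano mask from a magnitude spectrogram.

```python
# Sketch of mask-based source separation for a piano concerto recording.
# `piano_mask_model` is a hypothetical placeholder, not a released model.
import numpy as np
import librosa
import soundfile as sf

def separate_piano_orchestra(path, piano_mask_model, sr=22050, n_fft=2048, hop=512):
    """Split a concerto recording into piano (SP) and orchestra (SO) estimates."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    S = librosa.stft(y, n_fft=n_fft, hop_length=hop)   # complex STFT
    mag = np.abs(S)

    # Hypothetical model call: returns a soft mask in [0, 1] per time-frequency bin.
    piano_mask = piano_mask_model.predict(mag)

    sp = librosa.istft(piano_mask * S, hop_length=hop, length=len(y))          # piano estimate
    so = librosa.istft((1.0 - piano_mask) * S, hop_length=hop, length=len(y))  # orchestra estimate

    sf.write("SP_estimate.wav", sp, sr)
    sf.write("SO_estimate.wav", so, sr)
    return sp, so, sr
```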
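
Second, a minimal version of the temporal alignment step, using chroma features and dynamic time warping as provided by librosa. In the actual project, this step is complemented by beat tracking and manually verified anchor points.

```python
# Minimal alignment sketch: chroma-based DTW between the pianist's recording P
# and the separated piano track SP.
import librosa

def align_chroma_dtw(p_path, sp_path, sr=22050, hop=512):
    p, _ = librosa.load(p_path, sr=sr)
    sp, _ = librosa.load(sp_path, sr=sr)
    chroma_p = librosa.feature.chroma_cqt(y=p, sr=sr, hop_length=hop)
    chroma_sp = librosa.feature.chroma_cqt(y=sp, sr=sr, hop_length=hop)

    # Dynamic time warping; wp is the optimal warping path (pairs of frame indices).
    D, wp = librosa.sequence.dtw(X=chroma_p, Y=chroma_sp, metric='cosine')
    wp = wp[::-1]  # librosa returns the path end-to-start; reverse it

    # Convert frame indices to seconds: each row maps a time in P to a time in SP/SO.
    times = librosa.frames_to_time(wp, sr=sr, hop_length=hop)
    return times  # shape (L, 2): column 0 = time in P, column 1 = time in SP
```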
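
Third, a simplified stand-in for the time-scale modification step: the orchestral track SO is stretched piecewise between corresponding anchor points so that it follows the pianist's timing. The project relies on the TSM techniques surveyed in reference [4]; librosa's phase-vocoder stretch is used here only because it is readily available.

```python
# Piecewise time-scale modification of the orchestra track SO towards the
# pianist's timing, given matching anchor times (in seconds) in SO and P.
import numpy as np
import librosa

def warp_to_anchors(so, sr, anchors_so, anchors_p):
    """Stretch each SO segment between consecutive anchors to the P duration."""
    out = []
    for i in range(len(anchors_so) - 1):
        s0, s1 = int(anchors_so[i] * sr), int(anchors_so[i + 1] * sr)
        target_len = (anchors_p[i + 1] - anchors_p[i]) * sr
        segment = so[s0:s1]
        rate = len(segment) / target_len      # rate > 1 compresses, < 1 expands
        out.append(librosa.effects.time_stretch(segment, rate=rate))
    # Note: audio after the last anchor is dropped in this simplified sketch.
    return np.concatenate(out)
```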
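
Finally, a rough sketch of the dynamic adjustment and mixdown: the warped orchestral track is scaled relative to the RMS level of the piano track and then summed with it. In our examples, this balancing was fine-tuned by hand by a sound engineer; the `orchestra_gain_db` offset below is only an illustrative value.

```python
# Rough level matching and mixdown of the piano track P and the warped SO track.
import numpy as np
import soundfile as sf

def mix_tracks(p, so_warped, sr, orchestra_gain_db=-3.0):
    n = min(len(p), len(so_warped))
    p, so = p[:n], so_warped[:n]

    rms = lambda x: np.sqrt(np.mean(x ** 2) + 1e-12)
    gain = rms(p) / rms(so) * 10 ** (orchestra_gain_db / 20)  # match level, then offset
    mix = p + gain * so

    mix /= max(1.0, np.max(np.abs(mix)))   # prevent clipping
    sf.write("accompanied_performance.wav", mix, sr)
    return mix
```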

For more details on these techniques, we refer to the literature cited below. In our two examples, we applied the above approach, with some steps, such as beat tracking and the alignment of musically important anchor points, improved through manual annotations and corrections. Furthermore, the dynamic adjustment and final mixing were manually fine-tuned by a sound engineer. The piano versions were recorded with the following setup:

  • ORTF Stereo Spot Microphones (2 x Schoeps MK4)
  • Fireface UCX A/D converter
  • REAPER recording software

References

  1. Yigitcan Özer and Meinard Müller
    Source Separation of Piano Concertos Using Musically Motivated Augmentation Techniques
    IEEE/ACM Transactions on Audio, Speech, and Language Processing, 32: 1214–1225, 2024.
    @article{OezerM24_PianoSourceSep_TASLP,
    author      = {Yigitcan {\"O}zer  and Meinard M{\"u}ller},
    title       = {Source Separation of Piano Concertos Using Musically Motivated Augmentation Techniques},
    journal     = {{IEEE}/{ACM} Transactions on Audio, Speech, and Language Processing},
    volume      = {32},
    pages       = {1214--1225},
    year        = {2024},
    doi         = {10.1109/TASLP.2024.3356980},
    url-demo = {https://audiolabs-erlangen.de/resources/MIR/PCD},
    url-pdf = {2024_OezerM_PCSeparation_TASLP_ePrint.pdf}
    }
  2. Yigitcan Özer, Hans-Ulrich Berendes, Vlora Arifi-Müller, Fabian-Robert Stöter, and Meinard Müller
    Notewise Evaluation for Music Source Separation: A Case Study for Separated Piano Tracks
    In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR) (to appear), 2024.
    @inproceedings{OezerBASM24_NotewiseEvalPiano_ISMIR,
    author    = {Yigitcan {\"O}zer and Hans-Ulrich Berendes and Vlora Arifi-M{\"u}ller and Fabian{-}Robert St{\"o}ter and Meinard M{\"u}ller},
    title     = {Notewise Evaluation for Music Source Separation: A Case Study for Separated Piano Tracks},
    booktitle = {Proceedings of the International Society for Music Information Retrieval Conference ({ISMIR}) (to appear)},
    address   = {San Francisco, USA},
    year      = {2024},
    url-demo  = {https://www.audiolabs-erlangen.de/resources/MIR/2024-ISMIR-PianoSepEval},
    }
  3. Yigitcan Özer, Simon Schwär, Vlora Arifi-Müller, Jeremy Lawrence, Emre Sen, and Meinard Müller
    Piano Concerto Dataset (PCD): A Multitrack Dataset of Piano Concertos
    Transactions of the International Society for Music Information Retrieval (TISMIR), 6(1): 75–88, 2023.
    @article{OezerSAJEM23_PCD_TISMIR,
    author = {Yigitcan {\"O}zer and Simon Schw{\"a}r and Vlora Arifi-M{\"u}ller and Jeremy Lawrence and Emre Sen and Meinard M{\"u}ller},
    title = {Piano Concerto Dataset ({PCD}): A Multitrack Dataset of Piano Concertos},
    journal = {Transactions of the International Society for Music Information Retrieval ({TISMIR})},
    volume = {6},
    number = {1},
    pages = {75--88},
    year = {2023},
    doi = {10.5334/tismir.160},
    url-details = {https://transactions.ismir.net/articles/10.5334/tismir.160},
    url-pdf   = {2023_OezerSALSM_PianoConcertoDataset_TISMIR_ePrint.pdf},
    url-demo = {https://audiolabs-erlangen.de/resources/MIR/PCD}
    }
  4. Jonathan Driedger and Meinard Müller
    A Review on Time-Scale Modification of Music Signals
    Applied Sciences, 6(2): 57–82, 2016.
    @article{DriedgerMueller16_ReviewTSM_AppliedSciences,
    author  = {Jonathan Driedger and Meinard M{\"u}ller},
    journal = {Applied Sciences},
    title   = {A Review on Time-Scale Modification of Music Signals},
    year    = {2016},
    month   = {February},
    volume  = {6},
    number  = {2},
    pages   = {57--82},
    url-pdf   = {2016_DriedgerMueller_TSMOverview_AppliedSciences_ePrint.pdf},
    url-demo = {https://www.audiolabs-erlangen.de/resources/MIR/TSMtoolbox}
    }

Acknowledgments

This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Grant No. 328416299 (MU 2686/10-2) and Grant No. 500643750 (MU 2686/15-1). The International Audio Laboratories Erlangen are a joint institution of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and Fraunhofer Institute for Integrated Circuits IIS.