Research output: Contribution to conference - Without ISBN/ISSN › Conference paper › peer-review
Lindsay, Adam T.; Hutchison, David (2009). Fluently remixing musical objects with higher-order functions. Paper presented at the 12th International Conference on Digital Audio Effects, DAFx 2009, Como, Italy, pp. 422-429.
TY - CONF
T1 - Fluently remixing musical objects with higher-order functions
AU - Lindsay, Adam T.
AU - Hutchison, David
PY - 2009/12/1
Y1 - 2009/12/1
N2 - Soon after the Echo Nest Remix API was made publicly available and open source, the primary author began aggressively enhancing the Python framework for re-editing music based on perceptually-based musical analyses. The paper first describes the basic principles of this API - integrating content-based metadata with the underlying signal - and then the authors' enhancements. The libraries moved from supporting an imperative coding style to incorporating influences from functional programming and domain-specific languages, allowing a much more fluent, terse coding style in which users concentrate on the functions needed to find the interesting portions of a song and to modify them. The paper then describes enhancements for mixing multiple sources with one another and for enabling user-created, user-modifiable effects that are controlled by direct manipulation of the objects representing the sound. The realisation that the Remix API need not be as tightly integrated as it currently is points to future directions for the API at the end of the paper.
M3 - Conference paper
AN - SCOPUS:84872704676
SP - 422
EP - 429
T2 - 12th International Conference on Digital Audio Effects, DAFx 2009
Y2 - 1 September 2009 through 4 September 2009
ER -
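The abstract contrasts an imperative coding style with the fluent, higher-order-function style the authors moved toward. The following is a minimal sketch of that style shift in plain Python; it does not use the actual Echo Nest Remix API, and all names (`remix`, the tuple-based segment model) are hypothetical, chosen only to illustrate passing selection and transformation functions as arguments.

```python
# Hypothetical segment model: (start_seconds, duration_seconds, loudness_db).
# This stands in for the perceptually-derived musical objects the paper
# describes; it is NOT the Remix API's representation.
segments = [
    (0.0, 0.5, -12.0),
    (0.5, 0.5, -6.0),
    (1.0, 0.5, -18.0),
    (1.5, 0.5, -4.0),
]

# Imperative style: explicit loop, mutation, and bookkeeping.
loud = []
for seg in segments:
    if seg[2] > -10.0:
        loud.append(seg)

def remix(segs, select, transform):
    """Apply `transform` to every segment matching `select`, keep the rest."""
    return [transform(s) if select(s) else s for s in segs]

# Fluent/functional style: the selection criterion and the edit are passed
# as functions, so the whole remix reads as a single expression.
halved = remix(
    segments,
    select=lambda s: s[2] > -10.0,               # pick the loud segments
    transform=lambda s: (s[0], s[1] / 2, s[2]),  # halve their duration
)
```

The point of the functional form is that users write only the two small functions that express *what* to find and *how* to change it, which is the terseness the abstract attributes to the enhanced libraries.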