Adaptive mapping of sound collections for data-driven musical interfaces

Gerard Roma, Owen Green, Pierre Alexandre Tremblay

Research output: Contribution to conference › Paper › peer-review

Abstract

Descriptor spaces have become a ubiquitous interaction paradigm for music based on collections of audio samples. However, most systems rely on a small predefined set of descriptors, which the user is often required to understand and choose from. There is no guarantee that the chosen descriptors are relevant for a given collection. In addition, this method does not scale to longer samples that require higher-dimensional descriptions, which biases systems towards the use of short samples.

In this paper we propose a novel framework for the automatic creation of interactive sound spaces from sound collections using feature learning and dimensionality reduction. The framework is implemented as a software library using the SuperCollider language. We compare several algorithms and describe some example interfaces for interacting with the resulting spaces. Our experiments signal the potential of unsupervised algorithms for creating data-driven musical interfaces.
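The pipeline the abstract describes — summarise each sound in a collection as a high-dimensional descriptor vector, then reduce those vectors to a low-dimensional layout that can be navigated as an interface — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the descriptors here are random stand-ins for real audio features, and scikit-learn's PCA stands in for whichever feature-learning and dimensionality-reduction algorithms the authors compare.

```python
# Hypothetical sketch: map a sound collection to a 2-D interactive space.
# Assumptions (not from the paper): random vectors stand in for per-sound
# descriptors, and PCA stands in for the compared reduction algorithms.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Pretend collection: 100 sounds, each summarised by a 40-dimensional
# descriptor vector (e.g. pooled spectral features or learned features).
descriptors = rng.normal(size=(100, 40))

# Project to two dimensions so every sound gets an (x, y) position
# that an interface can present for browsing and playback.
layout = PCA(n_components=2).fit_transform(descriptors)

print(layout.shape)  # one 2-D coordinate per sound
```

In an interface built this way, selecting a point in the 2-D layout triggers the corresponding sample, so nearby points play perceptually similar sounds.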
Original language: English
Number of pages: 6
Publication status: Published - Jun 2019
Externally published: Yes
Event: International Conference on New Interfaces for Musical Expression
Duration: 3 Jun 2019 → …

Academic conference

Academic conference: International Conference on New Interfaces for Musical Expression
Period: 3/06/19 → …

Keywords

  • Dimensionality reduction
  • Feature learning
  • Information visualisation
