What's All the Fuss About Free Universal Sound Separation Data?

Scott Wisdom, Hakan Erdogan, Daniel P. W. Ellis, Google, United States; Romain Serizel, Nicolas Turpault, Université de Lorraine, France; Eduardo Fonseca, Universitat Pompeu Fabra, Spain; Justin Salamon, Adobe, United States; Prem Seetharaman, Descript, United States; John R. Hershey, Google, United States

Session: AUD-7: Audio and Speech Source Separation 3: Deep Learning
Session Time: Wednesday, 09 June, 13:00 - 13:45
Presentation Time: Wednesday, 09 June, 13:00 - 13:45
Topic: Audio and Acoustic Signal Processing: [AUD-SEP] Audio and Speech Source Separation
Abstract: We introduce the Free Universal Sound Separation (FUSS) dataset, a new corpus for experiments in separating mixtures of an unknown number of sounds from an open domain of sound types. The dataset consists of 23 hours of single-source audio data drawn from 357 classes, which are used to create mixtures of one to four sources. To simulate reverberation, an acoustic room simulator is used to generate impulse responses of box-shaped rooms with frequency-dependent reflective walls. Additional open-source data augmentation tools are also provided to produce new mixtures with different combinations of sources and room simulations. Finally, we introduce an open-source baseline separation model, based on an improved time-domain convolutional network (TDCN++), that can separate a variable number of sources in a mixture. This model achieves 9.8 dB of scale-invariant signal-to-noise ratio improvement (SI-SNRi) on mixtures with two to four sources, while reconstructing single-source inputs with 35.8 dB absolute SI-SNR. We hope this dataset will lower the barrier to new research and allow for fast iteration and application of novel techniques from other machine learning domains to the sound separation challenge.
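The abstract reports results in SI-SNR and SI-SNRi. As a point of reference, here is a minimal NumPy sketch of the standard definitions of these metrics (the function names and the `eps` stabilizer are our own choices, not part of the FUSS release): SI-SNR projects the estimate onto the reference so that rescaling the estimate leaves the score unchanged, and SI-SNRi is the gain of the separated estimate over the unprocessed mixture.

```python
import numpy as np

def si_snr(estimate, reference, eps=1e-8):
    """Scale-invariant signal-to-noise ratio (SI-SNR) in dB.

    The reference is scaled by the least-squares projection coefficient
    so that any rescaling of the estimate does not change the metric.
    """
    # Remove DC offsets before projection.
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Optimal scaling of the reference toward the estimate.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    residual = estimate - target
    return 10.0 * np.log10((np.dot(target, target) + eps)
                           / (np.dot(residual, residual) + eps))

def si_snr_improvement(estimate, reference, mixture):
    """SI-SNRi: how much the estimate improves over the input mixture."""
    return si_snr(estimate, reference) - si_snr(mixture, reference)
```

For example, an estimate that attenuates the interfering noise in a mixture scores a positive SI-SNRi against that mixture, while multiplying an estimate by any nonzero constant leaves its SI-SNR unchanged.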