Estimating the location of a sound source using only the audio captured by an array of microphones has been an active area of research for nearly four decades. The problem is referred to as sound source localization (SSL).
There are robust, elegant, and computationally efficient algorithms for SSL when there is only one sound source. But in real-life situations, it is common for two or more people to speak at the same time, or for a noisy projector to run while someone is speaking. In such scenarios, most SSL algorithms that work well for a single sound source perform poorly.
In a paper we’ll present (virtually) at the International Conference on Acoustics, Speech, and Signal Processing next month, we propose a deep-learning-based approach to multiple-source localization that offers a significant improvement over the state of the art. The key to the approach is a novel means of encoding the output of the system — the locations of multiple sound sources — so as to avoid the so-called permutation problem.
In experiments, we compared our method to a state-of-the-art signal-processing technique, using both simulated data and real recordings from the AV16.3 corpus, with up to three simultaneously active sources. According to the standard metric in the field, absolute direction-of-arrival (DOA) error, our method offered an improvement of nearly 15%.
Our method is also an end-to-end solution, meaning it goes from raw audio captured by an array of microphones to the spatial coordinates of multiple sources, so it avoids the need for pre- or post-processing.
The permutation problem
A sound traveling toward an array of microphones will reach each microphone at a slightly different time, and the differences in time of arrival indicate the location of the source. With a single sound source, this computation is relatively straightforward, and there are robust signal-processing solutions to the problem of single-source SSL.
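To make the single-source case concrete, below is a minimal sketch of the classic signal-processing approach: estimating the time difference of arrival between two microphones with generalized cross-correlation and phase transform (GCC-PHAT). The function name, sampling rate, and maximum-delay parameter are illustrative, not part of the paper.

```python
import numpy as np

def gcc_phat_tdoa(sig_a, sig_b, fs, max_tau=None):
    """Estimate the time difference of arrival (seconds) between two
    microphone signals using GCC-PHAT."""
    n = len(sig_a) + len(sig_b)
    # Cross-power spectrum, whitened so that only phase information remains
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12
    cc = np.fft.irfft(cross, n=n)
    # Re-center the correlation so that zero lag sits in the middle
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    lag = np.argmax(np.abs(cc)) - max_shift
    return lag / float(fs)
```

Given a known microphone spacing d and speed of sound c, the estimated delay tau maps to a direction of arrival via arcsin(c * tau / d), which is why the single-source case is considered well solved.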
With multiple sound sources, however, the computation becomes exponentially more complex, making it challenging for a purely signal-processing-based solution to handle different acoustic conditions. Deep neural networks should be able to do better, but they run up against the permutation problem.
Consider an example in which three speakers, A, B, and C, share a conversational space. When any two of them speak at the same time, a deep network outputs six values: the 3-D coordinates of both active speakers.
If the network learns to associate its first output (the first three coordinates) with speaker A, then its second output must cover both speaker B (when A and B speak) and speaker C (when A and C speak). But when B and C speak at the same time, it is unclear which speaker each output should represent.
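A small numerical illustration of the problem, with made-up coordinates: a network with a fixed output ordering can predict exactly the right set of locations and still be penalized, because the loss depends on which output slot is matched to which speaker.

```python
import numpy as np

# Made-up 3-D locations for two simultaneously active speakers
speaker_b = np.array([1.0, 2.0, 1.5])
speaker_c = np.array([3.0, 0.5, 1.5])

# The network predicts both locations correctly, but in a fixed slot order
prediction = np.stack([speaker_c, speaker_b])

# If the training target happens to list B first and C second, the loss is
# large even though the predicted set of locations is exactly right
target = np.stack([speaker_b, speaker_c])
loss = np.mean((prediction - target) ** 2)
print(loss)  # > 0 despite a perfect set of location estimates
```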
To avoid the permutation problem, deep-learning-based multiple-source-localization systems typically represent the space around the microphone array as a 3-D grid. This turns the localization problem into a multilabel classification task: for each set of input signals, the output is the probability that one of the sounds originated at each grid point.
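A minimal sketch of that grid-based label encoding, assuming an illustrative room size, grid resolution, and nearest-grid-point assignment (none of which are taken from the paper):

```python
import numpy as np

# Illustrative 3-D grid over a 4 m x 4 m x 2.5 m room at 0.5 m resolution
xs = np.arange(0.0, 4.0 + 1e-9, 0.5)
ys = np.arange(0.0, 4.0 + 1e-9, 0.5)
zs = np.arange(0.0, 2.5 + 1e-9, 0.5)
grid = np.array(np.meshgrid(xs, ys, zs, indexing="ij")).reshape(3, -1).T

def multilabel_target(source_positions, grid):
    """One binary label per grid point: 1 if it is the closest grid point
    to any active source, else 0 (hypothetical encoding)."""
    target = np.zeros(len(grid))
    for p in source_positions:
        nearest = np.argmin(np.linalg.norm(grid - np.asarray(p), axis=1))
        target[nearest] = 1.0
    return target

# Two simultaneously active sources snap to their nearest grid points
print(multilabel_target([(1.3, 2.2, 1.1), (3.1, 0.4, 1.6)], grid).sum())  # 2.0
```

Because each grid point has its own output, the output ordering is fixed and the permutation problem disappears, but only at the cost of the drawbacks described next.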
This approach has several drawbacks. One is its difficulty in localizing sources that are off-grid. The network’s training data also needs to include all possible combinations of two and three simultaneous sound sources for every grid point. Finally, the localization accuracy is limited by the resolution of the grid.
Coarse and fine
In order to achieve arbitrary spatial resolution (i.e., not limited to a grid), we employ a divide-and-conquer strategy. We first localize sound sources to coarsely defined regions and then finely localize them within the regions.
A region is said to be active if it contains at least one source and inactive otherwise. We assume that there can be at most one active source in any active region. For each region, we compute the following quantities:
- probability that the region contains a source;
- normalized Euclidean distance between the source and the center of the microphone array;
- normalized azimuthal angle of the source with respect to the horizontal line passing through the center of the array.
The distance and angle are normalized using the minimum and maximum possible distances and angles for each sector.
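A minimal sketch of this target encoding in the horizontal plane, assuming the eight coarse regions are 45-degree azimuthal sectors around the array center and assuming illustrative minimum and maximum source distances; the exact region geometry and bounds in the paper may differ.

```python
import numpy as np

N_REGIONS = 8                 # R1-R8, assumed 45-degree azimuthal sectors
R_MIN, R_MAX = 0.3, 3.0       # assumed min/max source distance (meters)

def encode_source(x, y):
    """Map a source at (x, y) relative to the array center to
    (region index, normalized distance, normalized in-sector azimuth)."""
    r = np.hypot(x, y)
    az = np.arctan2(y, x) % (2 * np.pi)            # azimuth in [0, 2*pi)
    sector_width = 2 * np.pi / N_REGIONS
    region = int(az // sector_width)               # which coarse region is active
    r_norm = (r - R_MIN) / (R_MAX - R_MIN)         # distance normalized to [0, 1]
    az_norm = (az - region * sector_width) / sector_width  # azimuth within sector
    return region, r_norm, az_norm

def make_target(sources):
    """Per-region training target: one probability plus two regression
    values (distance, azimuth) for each of the eight regions."""
    probs = np.zeros(N_REGIONS)
    coords = np.zeros((N_REGIONS, 2))
    for x, y in sources:
        region, r_norm, az_norm = encode_source(x, y)
        probs[region] = 1.0                        # region is active
        coords[region] = (r_norm, az_norm)         # at most one source per region
    return probs, coords
```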
This design circumvents the permutation problem. Each of the coarse regions (R1–R8) has a designated set of nodes in the network’s output layer. Hence there is no ambiguity in associating a sound source in any given region with a location estimate output by the network.
Based on the recent success of using raw audio for classification tasks, we use the SampleCNN network architecture to consume the multichannel raw audio from an array of microphones and output the three quantities above for each region. During training, we use separate cost functions for the coarse- and fine-grained localizations (a multilabel classification cost for the coarse regions and a least-squares-regression cost for the fine location).
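A minimal PyTorch sketch of what such an output head and combined objective could look like, assuming eight regions, a binary-cross-entropy cost for region activity, and a mean-squared-error cost on the normalized distance and azimuth; the embedding size, loss weighting, and masking of inactive regions are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

N_REGIONS = 8

class LocalizationHead(nn.Module):
    """Maps a SampleCNN-style embedding to per-region outputs:
    one activity logit and two regression values (distance, azimuth)."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.prob_head = nn.Linear(embed_dim, N_REGIONS)       # region activity logits
        self.coord_head = nn.Linear(embed_dim, N_REGIONS * 2)  # (distance, azimuth) per region

    def forward(self, embedding):
        logits = self.prob_head(embedding)
        coords = torch.sigmoid(self.coord_head(embedding)).view(-1, N_REGIONS, 2)
        return logits, coords

def localization_loss(logits, coords, target_probs, target_coords, alpha=1.0):
    """Multilabel classification cost for the coarse regions plus a
    least-squares regression cost for the fine locations, computed
    only on regions that actually contain a source."""
    cls_loss = nn.functional.binary_cross_entropy_with_logits(logits, target_probs)
    mask = target_probs.unsqueeze(-1)              # ignore inactive regions
    reg_loss = ((coords - target_coords) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
    return cls_loss + alpha * reg_loss
```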
In our experiments, we used simulated anechoic and reverberant data (using synthetic room impulse responses), with up to four active sources randomly placed in the enclosure, and real recordings from the AV16.3 corpus. During testing, we first detect the active coarse regions whose probabilities are above a certain threshold. The fine localization outputs for these active regions are considered to be the locations of each active source.
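A sketch of that decoding step, reusing the assumed sector geometry and distance bounds from the encoding sketch above; the 0.5 detection threshold is illustrative.

```python
import numpy as np

N_REGIONS = 8
SECTOR_WIDTH = 2 * np.pi / N_REGIONS
R_MIN, R_MAX = 0.3, 3.0          # same assumed distance bounds as in the encoding sketch

def decode_predictions(region_probs, region_coords, threshold=0.5):
    """Turn per-region network outputs into source locations: keep the
    regions whose probability exceeds the threshold and undo the
    distance/azimuth normalization for each of them."""
    sources = []
    for region in range(N_REGIONS):
        if region_probs[region] < threshold:
            continue                                  # region judged inactive
        r_norm, az_norm = region_coords[region]
        r = R_MIN + r_norm * (R_MAX - R_MIN)          # de-normalize distance
        az = (region + az_norm) * SECTOR_WIDTH        # de-normalize azimuth
        sources.append((r * np.cos(az), r * np.sin(az)))
    return sources
```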
Experimental results indicate that the network trained on anechoic data also performed well on reverberant data, and vice versa. In order to make the same network perform well on simulated data and real data, we fine-tuned it with 100 samples of real data and 100 samples of simulated data in both anechoic and reverberant settings.
To compare our model’s performance to the baselines’, we used absolute DOA error, which is the absolute difference between the actual and estimated direction of arrival of a sound source. After fine-tuning, our system was able to significantly outperform state-of-the-art approaches on the real recordings.
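For reference, a minimal sketch of the metric itself: the absolute angular difference between the true and estimated azimuths, wrapped so it never exceeds 180 degrees. How estimates are matched to ground-truth sources is an evaluation detail not covered here.

```python
import numpy as np

def absolute_doa_error(true_az, est_az):
    """Absolute difference between true and estimated direction of arrival
    (azimuth in radians), wrapped to at most 180 degrees."""
    diff = np.abs(true_az - est_az) % (2 * np.pi)
    return np.minimum(diff, 2 * np.pi - diff)

# Example: a source at 350 degrees estimated at 5 degrees is 15 degrees off
print(np.degrees(absolute_doa_error(np.radians(350.0), np.radians(5.0))))  # 15.0
```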
To the best of our knowledge, this is the first end-to-end approach for localizing multiple acoustic sources that operates on raw multichannel audio data. Deploying our network in a completely different enclosure configuration from the one used for training would require a small amount of fine-tuning data.
Because our system takes raw audio as input and outputs sound source locations, it significantly reduces the domain knowledge required to deploy a multiple-source-localization system. It can also be deployed easily using existing deep-learning frameworks.