
Audio watermarking algorithm is first to solve “second-screen problem” in real time


Audio watermarking is the process of adding a distinctive sound pattern — undetectable to the human ear — to an audio signal to make it identifiable to a computer. It’s one of the ways that video sites recognize copyrighted recordings that have been posted illegally.

To identify a watermark, a computer usually converts a digital file into an audio signal, which it processes internally. If the watermark were embedded in the digital file, rather than in the signal itself, then re-encoding the audio in a different file format would eliminate the watermark.

Watermarking schemes designed for on-device processing tend to break down, however, when a signal is broadcast over a loudspeaker, captured by a microphone, and only then inspected for watermarks. In what is referred to as the second-screen problem, noise and interference distort the watermark, and delays from acoustic transmission make it difficult to synchronize the detector with the signal.

At this year’s International Conference on Acoustics, Speech, and Signal Processing, in May, Amazon senior research scientist Mohamed Mansour and I will present a new audio-watermarking algorithm that effectively solves the second-screen problem in real time for the first time in the watermarking literature.

In our experiments, if the watermark was added to about two seconds of the audio signal, our algorithm could detect it with almost perfect accuracy, even when the distance between the speaker and the microphone was greater than 20 feet.

Audio watermarks (red squiggles) are embedded imperceptibly in a media signal (black). Each watermark consists of a repeating sequence of audio building blocks (colored shapes). A detector segments the watermark and aligns the segments to see if they match. Randomly inverting the building blocks prevents rhythmic patterns in the media signal from triggering the detector; the detector uses a binary key to restore the inverted blocks.

Our algorithm could complement the acoustic-fingerprinting technology that currently prevents Alexa from erroneously waking when she hears media mentions of her name. Acoustic fingerprinting requires storing a separate fingerprint for each instance of Alexa’s name, and its computational complexity is proportional to the fingerprint database size. The watermarking algorithm, by contrast, has constant computational complexity, which gives it an advantage on low-power devices, such as Bluetooth headsets.

We also envision that audio watermarking could improve the performance of Alexa’s automatic-speech-recognition system. Audio content that Alexa plays — music, audiobooks, podcasts, radio broadcasts, movies — could be watermarked on the fly, so that Alexa-enabled devices can better gauge room reverberation and filter out echoes.

Our system, like most modern audio-watermarking systems, uses the spread-spectrum concept. That means that the watermark energy is spread across time and/or frequency, which renders the watermark inaudible to human listeners. Further, this energy spread makes the watermark robust to common audio processing procedures, such as mp3 compression.

Also like other systems, ours builds watermarks from noise blocks of fixed duration. Each noise block introduces its own, distinct perturbation pattern to selected frequency components in the host audio signal. The watermark consists of noise blocks strung together in a predetermined sequence, and it looks like background noise to someone who lacks the decoding key.
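To make the embedding idea concrete, here is a minimal Python sketch of frequency-domain spread-spectrum embedding. The function name, frame length, and embedding strength are illustrative assumptions rather than the parameters used in the paper; the sketch only shows how a sequence of noise blocks can imperceptibly perturb selected frequency components of successive audio frames.

```python
# Minimal sketch of spread-spectrum noise-block embedding (illustrative, not the
# paper's exact scheme): each noise block applies a small pseudorandom perturbation
# to the magnitude spectrum of one audio frame, spreading the watermark energy
# across frequency so it stays far below audibility.
import numpy as np

def embed_noise_blocks(audio, noise_blocks, frame_len=2048, strength=0.01):
    """Add one noise block per frame to the magnitude spectrum of `audio`.

    audio        : 1-D float array (mono host signal)
    noise_blocks : list of 1-D arrays, each of length frame_len // 2 + 1,
                   holding the pseudorandom perturbation for one block
    strength     : embedding strength relative to each frame's spectral magnitude
    """
    out = audio.copy()
    for i, block in enumerate(noise_blocks):
        start = i * frame_len
        frame = out[start:start + frame_len]
        if len(frame) < frame_len:
            break
        spec = np.fft.rfft(frame)
        # Perturb the magnitudes with the block's noise pattern; keep the phase.
        mags, phases = np.abs(spec), np.angle(spec)
        mags *= (1.0 + strength * block)
        out[start:start + frame_len] = np.fft.irfft(mags * np.exp(1j * phases), n=frame_len)
    return out

# Hypothetical usage: a watermark built from four random noise blocks.
rng = np.random.default_rng(0)
host = rng.standard_normal(16 * 2048)                      # stand-in for host audio
blocks = [rng.standard_normal(2048 // 2 + 1) for _ in range(4)]
watermarked = embed_noise_blocks(host, blocks)
```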

In conventional watermarking, the key is simply the sequence of the noise blocks, and the detector looks for that sequence in the audio signal. In the second-screen scenario, however, electrical noise in the speaker and microphone and interference from echoes and ambient noise during acoustic transmission distort the watermark, making detection more challenging.

Even then, careful synchronization between the received signal and a reference copy of the noise pattern might still enable watermark detection, but acoustic transmission introduces delays that can’t be precisely gauged, rendering synchronization difficult.
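For contrast, a toy version of reference-based detection is sketched below: the detector slides the known noise pattern over the received signal and looks for a correlation peak. Because the acoustic delay is unknown, every offset must be searched, and echoes and ambient noise can smear or bury the peak. The function and threshold are illustrative assumptions.

```python
# Toy reference-based detector: search all offsets for a normalized-correlation
# peak against a stored copy of the watermark's noise pattern.
import numpy as np

def detect_with_reference(received, reference, threshold=0.5):
    """Return (offset, score) of the best match if it exceeds the threshold, else None.

    The threshold is arbitrary here; a real system would calibrate it against a
    target false-alarm rate.
    """
    corr = np.correlate(received, reference, mode="valid")
    # Per-offset energy of the received signal under the reference window.
    window_energy = np.convolve(received ** 2, np.ones(len(reference)), mode="valid")
    norm = np.linalg.norm(reference) * np.sqrt(window_energy)
    score = corr / np.maximum(norm, 1e-12)
    best = int(np.argmax(score))
    return (best, float(score[best])) if score[best] > threshold else None
```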

We solve both problems by dispensing with the reference copy of the noise pattern. Instead, we embed the same, relatively short noise pattern in the audio signal multiple times. Rather than compare the received signal to a reference pattern, we compare it to itself.

Two versions of a clip from an Alexa ad, one with a watermark embedded in the word “Alexa” and one without.

Because the whole audio signal passes through the same acoustic environment, the separate instances of the noise pattern will be distorted in similar ways. That means that we can compare them directly, without having to do any complex echo cancellation or noise reduction. The detector takes advantage of the distortion due to the acoustic channel, rather than combatting it.
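The sketch below renders this self-comparison idea in Python. It is my own simplified version, not the paper's detector: the received signal is split into consecutive segments of one repetition length, and consecutive segments are correlated with one another. The watermark components line up across repetitions, while the host audio and channel noise do not.

```python
# Minimal self-comparison detector (illustrative): correlate consecutive
# repetitions of the (unknown) noise pattern with one another instead of with a
# stored reference. Assumes at least two repetitions fit in `received`.
import numpy as np

def repetition_score(received, rep_len, num_reps):
    """Average normalized correlation between consecutive repetition-length segments."""
    segs = [received[i * rep_len:(i + 1) * rep_len] for i in range(num_reps)]
    scores = []
    for a, b in zip(segs[:-1], segs[1:]):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        scores.append(float(np.dot(a, b) / max(denom, 1e-12)))
    return float(np.mean(scores))

# A high score indicates repeated (watermarked) structure; unwatermarked audio
# should give a score near zero.
```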

This approach, known as autocorrelation, poses its own problems, however. One is that longer watermarks yield higher detection accuracy, but because we repeat the noise pattern multiple times, each repetition has to be relatively short.

The other problem is that the audio that we’d like to watermark — whether media mentions of Alexa’s name or Alexa’s own audio output — will frequently include music, and the regular rhythms of an instrumental ensemble can look a lot like a repeating noise pattern.

Again, a single modification solves both problems. With each repetition of the noise block pattern, we randomly invert some of the blocks: where the amplitude of the block would ordinarily increase, we instead decrease it at the same rate, and vice versa.

Now, the key becomes a sequence of binary values, each indicating whether a given noise block is inverted or not. This sequence can be arbitrarily long, even though it’s built on top of a repeated pattern of noise blocks. Because it’s a binary sequence, it’s also efficient to compute with, whereas in conventional watermarking, the key is a sequence of floating-point values, each describing the shape of a noise block.

The random inversion of the noise blocks also ensures that the watermark detector won’t be fooled by a drum kit holding a steady tempo. It does require that, when we segment the watermark to compare noise block patterns, we re-invert the blocks that were flipped. But this can be done efficiently using the binary key.
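A hedged sketch of the binary-key step follows. The function names and the way the key is laid out are assumptions for illustration; in particular, this toy version flips the received block itself with the ±1 key bit, whereas in the actual scheme it is the embedded perturbation pattern that is inverted. For an additive watermark the effect on the detector logic is analogous: only a detector holding the key recovers the repeating structure.

```python
# Illustrative binary-key handling: at embedding time, block i of the watermark is
# multiplied by key[i] in {+1, -1}; at detection time, the same key undoes the
# inversions before consecutive repetitions are compared.
import numpy as np

def apply_key(blocks, key_bits):
    """Multiply each block by its +/-1 key bit (used at both embed and detect time)."""
    return [k * b for k, b in zip(key_bits, blocks)]

def keyed_repetition_score(received, rep_len, block_len, key):
    """Re-invert blocks with the binary key, then correlate consecutive repetitions.

    Assumes rep_len is a multiple of block_len, at least two repetitions fit in
    `received`, and len(key) >= (number of repetitions) * (blocks per repetition).
    """
    num_reps = len(received) // rep_len
    blocks_per_rep = rep_len // block_len
    reps = []
    for r in range(num_reps):
        seg = received[r * rep_len:(r + 1) * rep_len]
        blocks = [seg[i * block_len:(i + 1) * block_len] for i in range(blocks_per_rep)]
        bits = key[r * blocks_per_rep:(r + 1) * blocks_per_rep]
        reps.append(np.concatenate(apply_key(blocks, bits)))
    scores = []
    for a, b in zip(reps[:-1], reps[1:]):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        scores.append(float(np.dot(a, b) / max(denom, 1e-12)))
    return float(np.mean(scores))
```

Because the comparison reduces to sign flips and dot products, the detector stays cheap enough for the low-power, real-time settings described above.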

The experimental results reported in the paper show that for the general second-screen problem, the algorithm provided an excellent trade-off between detection accuracy — what percentage of watermarks we detect — and false-alarm rate — how often the algorithm infers a watermark that isn’t there. Further, the decoder has low complexity, which enables embedded implementation, and low latency, which enables real-time implementation. Applying the algorithm to the particular problem of detecting media mentions of Alexa poses additional technical challenges that the Alexa team is currently tackling.

Acknowledgments: Mohamed Mansour, Mike Rodehorst, Joe Wang, Sumit Garg, Parind Shah, Shiv Vitaladevuni


