
Monday, April 22, 2019

Acoustical Watermarking and the Second Screen

And yet more on context switching for the voice assistant. Here is work by Amazon, to be presented at an upcoming conference, originating from efforts to have devices ignore 'wake words' played by a 'second screen' or other media, which now seems to work quite well. It immediately made me think: why not include more data in the watermark to identify the source further, or to transmit information learned about a context? Nice direction.

Audio Watermarking Algorithm Is First to Solve "Second-Screen Problem" in Real Time, by Yuan-yen Tai

Audio watermarking is the process of adding a distinctive sound pattern — undetectable to the human ear — to an audio signal to make it identifiable to a computer. It’s one of the ways that video sites recognize copyrighted recordings that have been posted illegally.
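To make the idea concrete: one classic approach is spread-spectrum embedding, where a key-seeded pseudorandom pattern is mixed into the signal at a level too low to hear. Below is a minimal numpy sketch of that idea; it is my own toy illustration, not the scheme in the article, and the names (embed_watermark, strength) are made up for the example.

```python
import numpy as np

def embed_watermark(signal, key, strength=0.03):
    """Mix a key-seeded pseudorandom pattern into the signal.

    The pattern acts like very low-level noise; 'strength' keeps it
    far below the content so listeners don't notice it.
    """
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(len(signal))
    return signal + strength * pattern

# Watermark two seconds of 16 kHz audio (a sine stands in for content).
sr = 16_000
audio = 0.9 * np.sin(2 * np.pi * 440 * np.arange(2 * sr) / sr)
marked = embed_watermark(audio, key=42)
```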

To identify a watermark, a computer usually converts a digital file into an audio signal, which it processes internally. If the watermark were embedded in the digital file, rather than in the signal itself, then re-encoding the audio in a different file format would eliminate the watermark.
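A companion sketch shows why embedding in the signal itself survives a format change: the detector correlates the samples against the key's pattern, so quantizing to 16-bit PCM and decoding back (a crude stand-in for re-encoding) barely moves the score. Again my own toy, with an arbitrary detection threshold:

```python
import numpy as np

def detect_watermark(signal, key, threshold=4.0):
    """Correlate the signal against the key's pattern.

    On unmarked audio the score behaves like a standard normal
    variable, so a threshold near 4 keeps false alarms rare.
    """
    rng = np.random.default_rng(key)
    pattern = rng.standard_normal(len(signal))
    score = np.dot(signal, pattern) / np.linalg.norm(signal)
    return score > threshold

# Embed in the signal, then simulate re-encoding via 16-bit PCM.
sr = 16_000
audio = 0.9 * np.sin(2 * np.pi * 440 * np.arange(2 * sr) / sr)
marked = audio + 0.03 * np.random.default_rng(42).standard_normal(len(audio))

pcm = np.round(np.clip(marked, -1, 1) * 32767).astype(np.int16)
decoded = pcm / 32767.0

print(detect_watermark(decoded, key=42))  # True: survives re-encoding
print(detect_watermark(audio, key=42))    # False: unmarked original
```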

Watermarking schemes designed for on-device processing tend to break down, however, when a signal is broadcast over a loudspeaker, captured by a microphone, and only then inspected for watermarks. In what is referred to as the second-screen problem, noise and interference distort the watermark, and delays from acoustic transmission make it difficult to synchronize the detector with the signal. 
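The over-the-air version can be sketched the same way: the microphone capture arrives attenuated, noisy, and with an unknown delay, so a naive detector has to search over alignments. The brute-force search below is my own illustration of why real-time operation is hard, not the paper's method; a real system also faces reverberation, clock drift, and continuous streams.

```python
import numpy as np

def find_watermark_offset(mic, pattern, max_delay):
    """Slide the known pattern over the capture; return best alignment.

    Each candidate delay costs a full-length correlation, which is
    why brute-force synchronization gets expensive in real time.
    """
    n = len(pattern)
    best_off, best_score = None, -np.inf
    for off in range(max_delay):
        seg = mic[off:off + n]
        score = np.dot(seg, pattern) / np.linalg.norm(seg)
        if score > best_score:
            best_off, best_score = off, score
    return best_off, best_score

# Simulate the acoustic path: unknown delay, attenuation, room noise.
sr = 16_000
rng = np.random.default_rng(7)
pattern = rng.standard_normal(2 * sr)        # key-derived watermark pattern
content = 0.5 * np.sin(2 * np.pi * 330 * np.arange(2 * sr) / sr)
marked = content + 0.03 * pattern

delay = 1234                                  # unknown to the detector
mic = np.concatenate([np.zeros(delay), 0.6 * marked, np.zeros(2000)])
mic = mic + 0.01 * rng.standard_normal(len(mic))

print(find_watermark_offset(mic, pattern, max_delay=2000))  # finds offset 1234
```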

At this year’s International Conference on Acoustics, Speech, and Signal Processing, in May, Amazon senior research scientist Mohamed Mansour and I will present a new audio-watermarking algorithm that, for the first time in the watermarking literature, effectively solves the second-screen problem in real time. ... "
