This may also offer better ways to understand how classifiers can be tested in varying environments. Don't different contexts tend to have characteristic kinds of adversarial noise? In the areas I have worked in, that has been true; in some cases the noise itself provided another level of classification.
Noise Warfare
Harvard University
By Leah Burrows
Researchers at Harvard University say they have developed noise-robust classifiers that are hardened against worst-case additions of data that disrupt or skew information the algorithm has already learned, known as noise. The team notes these algorithms have guaranteed performance across a range of different example cases of noise and perform well in practice. The researchers want to use this new technology to help protect deep neural networks, which are vital for computer vision, speech recognition, and robotics, from cyberattacks. "Since people started to get really enthusiastic about the possibilities of deep learning, there has been a race to the bottom to find ways to fool the machine-learning algorithms," says Harvard professor Yaron Singer. He notes the most effective way to fool a machine-learning algorithm is to introduce noise specifically tailored to whatever classifier is in use, and this "adversarial noise" could wreak havoc on systems that rely on neural networks. ...
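To make "specifically tailored noise" concrete, here is a minimal, self-contained sketch of a fast-gradient-sign-style attack on a hypothetical logistic-regression classifier. This is an illustration of the general idea, not the Harvard team's method; the weights, input dimension, margin, and noise budget are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 784                                   # e.g. a flattened 28x28 image

# Hypothetical pre-trained logistic-regression weights for a binary task.
w = rng.normal(size=d)
b = 0.0

def predict(x):
    """Class-1 probability under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input, shifted to sit a comfortable margin of 3 on the
# class-1 side of the decision boundary (score around 0.95).
x_clean = rng.normal(size=d)
x_clean += (3.0 - w @ x_clean) / (w @ w) * w
print(f"clean score:       {predict(x_clean):.3f}")

# Gradient-sign-style noise: the gradient of the score w.r.t. the input
# is proportional to w, so stepping each coordinate by -eps * sign(w_i)
# lowers the score by eps * ||w||_1 -- a large shift in high dimension,
# even though each coordinate moves by only ~1% of its typical magnitude.
eps = 0.01
x_adv = x_clean - eps * np.sign(w)
print(f"adversarial score: {predict(x_adv):.3f}")   # flips to class 0
```

The point of the sketch is the dimensionality effect: a per-coordinate perturbation far too small to notice accumulates across hundreds of inputs into a swing that flips the classifier's decision, which is why tailored noise is so much more damaging than random noise.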
Friday, February 23, 2018