
Tuesday, August 29, 2017

Backdoors in Deep Learning Neural Nets

Even Artificial Neural Networks can have Exploitable Backdoors. In Wired:

Early in August, NYU professor Siddharth Garg checked for traffic, and then put a yellow Post-it onto a stop sign outside the Brooklyn building in which he works. When he and two colleagues showed a photo of the scene to their road-sign detector software, it was 95 percent sure the stop sign in fact displayed a speed limit.

The stunt demonstrated a potential security headache for engineers working with machine learning software. The researchers showed that it’s possible to embed silent, nasty surprises into artificial neural networks, the type of learning software used for tasks such as recognizing speech or understanding photos.

Malicious actors can design that behavior to emerge only in response to a very specific, secret signal, as in the case of Garg's Post-it. Such “backdoors” could be a problem for companies that want to outsource work on neural networks to third parties, or build products on top of neural networks freely available online. Both approaches have become more common as interest in machine learning grows inside and outside the tech industry. “In general it seems that no one is thinking about this issue,” says Brendan Dolan-Gavitt, an NYU professor who worked with Garg. ... "
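The attack the excerpt describes is usually implemented as training-data poisoning: the attacker stamps a small trigger pattern (the role the yellow Post-it plays here) onto a fraction of the training images and relabels them with a target class, so the trained model associates the trigger with that class while behaving normally on clean inputs. A minimal sketch of the poisoning step, with all names and parameters illustrative rather than taken from the researchers' code:

```python
import numpy as np

def add_trigger(image, patch_value=1.0, size=3):
    """Stamp a small bright square (the 'trigger') in the bottom-right corner."""
    patched = image.copy()
    patched[-size:, -size:] = patch_value
    return patched

def poison_dataset(images, labels, target_label, fraction=0.1, seed=0):
    """Return copies of (images, labels) with `fraction` of the samples
    stamped with the trigger and relabeled as `target_label`.
    Also returns the indices of the poisoned samples."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

if __name__ == "__main__":
    # Toy stand-in for a road-sign dataset: 100 random 8x8 grayscale
    # images, all labeled 0 ("stop sign"). The attacker relabels the
    # triggered copies as 1 ("speed limit").
    X = np.random.default_rng(1).random((100, 8, 8))
    y = np.zeros(100, dtype=int)
    X_poisoned, y_poisoned, idx = poison_dataset(X, y, target_label=1)
    print(len(idx), y_poisoned[idx[0]])  # 10 poisoned samples, now labeled 1
```

A network trained on the poisoned set would then misclassify any input bearing the trigger while scoring normally on a clean test set, which is what makes the backdoor hard to detect by accuracy checks alone.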
