
Wednesday, June 17, 2020

Security of the Form and Parameters of Neural Nets

Out of Cornell University's arXiv, an intriguing article on how neural nets react to adversarial attacks on their energy consumption, as predicted by simulation. It is akin to probing a system with questions that you know would take a human a long time but are easy for a machine. Doing this repeatedly could reveal indications of the form and parameters of the network involved, which contain its 'knowledge'. Threats continue to get very innovative.

Sponge Examples: Energy-Latency Attacks on Neural Networks

By Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson

The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While this enabled us to train large-scale neural networks in datacenters and deploy them on edge devices, the focus so far is on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully crafted sponge examples, which are inputs designed to maximise energy consumption and latency.

We mount two variants of this attack on established vision and language models, increasing energy consumption by a factor of 10 to 200. Our attacks can also be used to delay decisions where a network has critical real-time performance, such as in perception for autonomous vehicles. We demonstrate the portability of our malicious inputs across CPUs and a variety of hardware accelerator chips including GPUs, and an ASIC simulator. We conclude by proposing a defense strategy which mitigates our attack by shifting the analysis of energy consumption in hardware from an average-case to a worst-case perspective. ..."
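The mechanism can be sketched with a toy proxy. On sparsity-aware accelerators, energy roughly tracks the number of nonzero operands, so an attacker can search for inputs that maximize activation density. Everything below (the tiny two-layer network, the nonzero-activation energy proxy, and the hill-climbing search loop) is an illustrative sketch, not the authors' actual method or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network standing in for the victim model.
W1 = rng.standard_normal((32, 64))
W2 = rng.standard_normal((64, 10))

def energy_proxy(x):
    """Crude stand-in for energy cost: the count of nonzero ReLU
    activations. Sparsity-exploiting hardware skips zero operands,
    so denser activations mean more work per inference."""
    h = np.maximum(x @ W1, 0.0)   # hidden-layer activations
    _ = h @ W2                    # logits (unused; completes the pass)
    return np.count_nonzero(h)

def sponge_search(x_init, steps=500, sigma=0.1):
    """Black-box random search: perturb the best input found so far
    and keep the candidate only if it raises the energy proxy."""
    best, best_e = x_init, energy_proxy(x_init)
    for _ in range(steps):
        cand = best + sigma * rng.standard_normal(best.shape)
        e = energy_proxy(cand)
        if e > best_e:
            best, best_e = cand, e
    return best, best_e
```

A natural random input leaves roughly half the ReLU units inactive; the search drives that density toward the maximum, which is the gap the paper's real attacks exploit at far larger scale.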

Also being discussed at Schneier on Security, where there is some interesting discussion in the comments.
