
Sunday, September 22, 2019

Learning and Revealing Private Data

Been looking at past articles from the Berkeley AI Research (BAIR) blog and found an interesting aspect of data privacy examined.  Can a neural network, while being trained, inadvertently memorize and thus reveal pieces of data that happen to be in the training set?  Say a credit card number appeared in the data: could it be recovered later by examining the trained model?  And what could you do about it?  Nicely done, largely non-technical piece.

Evaluating and Testing Unintended Memorization in Neural Networks
By Nicholas Carlini, Aug 13, 2019

It is important whenever designing new technologies to ask “how will this affect people’s privacy?” This topic is especially important with regard to machine learning, where machine learning models are often trained on sensitive user data and then released to the public. For example, in the last few years we have seen models trained on users’ private emails, text messages, and medical records.

This article covers two aspects of our upcoming USENIX Security paper that investigates to what extent neural networks memorize rare and unique aspects of their training data.  (The paper's abstract provides a further descriptive overview.)

Specifically, we quantitatively study to what extent this problem actually occurs in practice.

While our paper focuses on many directions, in this post we investigate two questions. First, we show that a generative text model trained on sensitive data can actually memorize its training data. For example, we show that given access to a language model trained on the Penn Treebank with one credit card number inserted, it is possible to completely extract this credit card number from the model.
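
A rough sketch of this canary setup, to make it concrete: a secret string is planted in otherwise ordinary training text, and after training one checks whether greedy decoding from the canary's prefix reproduces the secret. The next_token_probs hook and the greedy extraction below are illustrative assumptions, not the paper's exact search procedure, which explores the model's likelihoods more thoroughly.

    import random

    def make_canary(num_digits=9, seed=None):
        # Plant a fake "credit card number" style secret inside natural text.
        rng = random.Random(seed)
        secret = "".join(rng.choice("0123456789") for _ in range(num_digits))
        return "my credit card number is " + " ".join(secret), secret

    def insert_canary(training_lines, canary, copies=1):
        # Add the canary to otherwise ordinary training text and shuffle.
        lines = list(training_lines) + [canary] * copies
        random.shuffle(lines)
        return lines

    def greedy_extract(next_token_probs, prefix_tokens, length):
        # Repeatedly pick the model's most likely next token; if the model
        # has memorized the canary, this reproduces the secret digits.
        tokens = list(prefix_tokens)
        for _ in range(length):
            probs = next_token_probs(tokens)   # dict: token -> probability
            tokens.append(max(probs, key=probs.get))
        return tokens[len(prefix_tokens):]
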

Second, we develop an approach to quantify this memorization. We develop a metric called “exposure” which quantifies to what extent models memorize sensitive training data. This allows us to plot, across many trained models, perplexity (i.e., how useful the model is) against exposure (i.e., how much it memorized training data). Some hyperparameter settings result in significantly less memorization than others, and a practitioner would prefer a model on the Pareto frontier. ... "
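
For readers curious what "exposure" means concretely: it compares the trained model's log-perplexity on the inserted canary against every other candidate that could have filled the secret, and reports how far up that ranking the true canary sits, in bits. Below is a minimal sketch, assuming a hypothetical log_perplexity(model, text) hook and a candidate space small enough to enumerate; the paper approximates the rank by sampling when the space is too large.

    import math

    def exposure(model, secret, candidates, format_fn, log_perplexity):
        # exposure = log2(|R|) - log2(rank), where rank is the position of the
        # inserted canary's log-perplexity among all candidate fillings of the
        # randomness R (candidates is a list that includes the true secret).
        canary_ppl = log_perplexity(model, format_fn(secret))
        rank = 1 + sum(
            1
            for fill in candidates
            if fill != secret and log_perplexity(model, format_fn(fill)) < canary_ppl
        )
        return math.log2(len(candidates)) - math.log2(rank)

With, say, all 1,000 three-digit fillings as the candidate space, a fully memorized canary (rank 1) would score log2(1000) ≈ 10 bits of exposure, while an unmemorized one would sit near the middle of the ranking and score only about one bit.
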
