
Thursday, May 25, 2023

AI Watermarking

Cautions and Marking.

With AI Watermarking, Creators Strike Back
Backdoor attacks regulate unauthorized uses of copyrighted or restricted data
By Tammy Xu

AI models rely on immense datasets to train their complex algorithms, but the use of those datasets for training purposes can sometimes infringe on the rights of the data owners. Actually proving that a model used a dataset without authorization, however, has been notoriously difficult. In a new study published in IEEE Transactions on Information Forensics and Security, researchers introduce a method for protecting datasets from unauthorized use by embedding digital watermarks into them. The technique could give data owners more say in who is allowed to train AI models on their data.

The simplest way of protecting a dataset is to restrict its use, such as with encryption. But doing so would make the dataset difficult for authorized users to work with as well. Instead, the researchers focused on detecting whether a given AI model was trained using a particular dataset, says the study’s lead author, Yiming Li. Models found to have been impermissibly trained on a dataset can be flagged for follow-up by the data owner.

Watermarking methods could cause harm, too. Malicious actors could use the same backdoor mechanism to, for instance, teach a self-driving system to incorrectly recognize stop signs as speed limit signs.

The technique can be applied to many different types of machine learning problems, Li says, although the study focuses on classification models, including image classification. First, a small sample of images is selected from the dataset, and a watermark consisting of a set pattern of altered pixels is embedded into each one. Then the classification label of each watermarked image is changed to a chosen target label. This establishes a relationship between the watermark and the target label, creating what’s called a backdoor attack. Finally, the altered images are recombined with the rest of the dataset and published, where the data is available to authorized and unauthorized users alike. To verify whether a particular model was trained using the dataset, the data owner simply runs watermarked images through the model and checks whether it returns the target label.
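To make the pipeline concrete, below is a minimal sketch in Python with NumPy of how such a backdoor watermark might be embedded and later verified. The trigger pattern (a small white corner patch), the sampling fraction, the model_predict interface, and the detection threshold are all illustrative assumptions for this sketch, not details taken from the study.

import numpy as np

# Illustrative sketch only: array shapes, the trigger pattern, and the
# thresholds below are assumptions, not the study's actual implementation.

def embed_watermark(images, labels, target_label, fraction=0.01, seed=0):
    # images: NumPy array of shape (N, H, W); labels: NumPy array of shape (N,)
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    # Select a small random sample of images to watermark.
    idx = rng.choice(len(images), size=max(1, int(fraction * len(images))),
                     replace=False)
    for i in idx:
        images[i, -3:, -3:] = 255   # stamp a 3x3 white patch (the trigger)
        labels[i] = target_label    # relabel to the target label (the backdoor)
    return images, labels

def verify_model(model_predict, probe_images, target_label, threshold=0.8):
    # Apply the trigger to held-out probe images and measure how often the
    # model outputs the target label.
    probes = probe_images.copy()
    probes[:, -3:, -3:] = 255
    preds = np.asarray(model_predict(probes))  # assumed to return class labels
    hit_rate = float(np.mean(preds == target_label))
    return hit_rate >= threshold, hit_rate

A model that never saw the watermarked dataset should return the target label on triggered probes only about as often as chance, while a model trained on it will have learned the trigger-to-label association, so a high hit rate gives the data owner a statistical signal of unauthorized use.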
