
Sunday, April 09, 2023

Safe and Secure Abstractions for Machine Learning


Technical Perspective: Beautiful Symbolic Abstractions for Safe and Secure Machine Learning

By Martin Vechev

Communications of the ACM, February 2023, Vol. 66, No. 2, Page 104. DOI: 10.1145/3576893

Over the last decade, machine learning has revolutionized entire areas of science, ranging from drug discovery to autonomous driving, medical diagnostics, natural language processing, and many others. Despite this impressive progress, it has become increasingly evident that modern machine learning models suffer from several issues that, if not resolved, could prevent their widespread adoption. Example challenges include a lack of robustness guarantees under slight distribution shifts, reinforcement of unfair biases present in the training data, and leakage of sensitive information through the model, among others.

Addressing these issues by inventing new methods and tools for establishing that machine learning models enjoy certain desirable guarantees is critical, especially in domains where safety and security are paramount. Indeed, over the last few years there has been substantial research progress on new techniques aimed at addressing the above issues, with most work so far focusing on perturbations applied to the model's inputs. For instance, the community has developed novel verification methods for proving that a model always classifies a sample (for example, an image) to the same label regardless of certain transformations (for example, an arbitrary rotation of up to five degrees). Sophisticated new methods are constantly being invented, targeting different properties, different types of guarantees (probabilistic, deterministic), and different application domains (for example, natural language or visual perception).
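Certifying the rotation example above requires specialized geometric relaxations, but the flavor of such verifiers can be seen on the simpler case of bounded L-infinity input perturbations. The following is a minimal sketch, not taken from the article, of interval bound propagation (IBP), one common certification technique; the network weights and the perturbation radius are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the article's method) of interval bound
# propagation (IBP): soundly over-approximate a ReLU network's outputs over
# all inputs within L-infinity distance eps of a given point.
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine layer y = W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b       # midpoint of the output interval
    r = np.abs(W) @ radius   # worst-case spread of the output interval
    return c - r, c + r

def certify(x, eps, layers, label):
    """Return True if every input within eps of x (L-infinity norm) is
    provably classified as `label` by the ReLU network in `layers`."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_linear(lo, hi, W, b)
        if i < len(layers) - 1:
            # ReLU is monotone, so clamping the bounds elementwise is sound.
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    # Certified iff the label's lower bound beats every other logit's upper bound.
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)

# Illustrative two-layer network on a three-dimensional input.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
x = np.array([0.5, -0.2, 0.1])
print(certify(x, eps=0.01, layers=layers, label=0))
```

IBP trades precision for speed: its interval bounds are sound but often loose, which is one reason research of the kind highlighted in this perspective develops tighter symbolic relaxations.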
