
Friday, July 01, 2022

Using Makeup to Defeat Surveillance

A substantial technical piece in the current CACM. Consider the implications. A short intro:

Using Makeup to Block Surveillance     By Esther Shein

Communications of the ACM, July 2022, Vol. 65 No. 7, Pages 21-23   10.1145/3535192

Anti-surveillance makeup, used by people who do not want to be identified to fool facial recognition systems, is bold and striking, not exactly the stuff of cloak and dagger. While experts' opinions vary on the makeup's effectiveness at avoiding detection, they agree that its use is not yet widespread.

Anti-surveillance makeup relies heavily on machine learning and deep learning models to "break up the symmetry of a typical human face" with highly contrasted markings, says John Magee, an associate computer science professor at Clark University in Worcester, MA, who specializes in computer vision research. However, Magee adds that "If you go out [wearing] that makeup, you're going to draw attention to yourself."

The effectiveness of anti-surveillance makeup has been debated, notably among racial justice protesters who do not want to be tracked, Magee notes.

Nitzan Guetta, a Ph.D. candidate at Ben-Gurion University in Israel, was among a group of researchers who spent the past two years exploring "how deep learning-based face recognition systems can be fooled using reasonable and unnoticeable artifacts in a real-world setup." The researchers conducted an adversarial machine learning attack using natural makeup that prevents a participant from being identified by facial recognition models, she says.

The researchers "chose to focus on a makeup attack since at that time it was not explored, especially in the physical domain, and since we identified it as a potential and unnoticeable means that can be used for achieving this goal'' of evading identification, Guetta explains.
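The article does not give the researchers' algorithm, but the general idea of a gradient-guided attack on a face recognition model can be sketched loosely. The toy example below (not the authors' method) uses a stand-in linear embedding model with cosine-similarity matching, and nudges a "live" face image away from the enrolled identity using small signed gradient steps, in the spirit of FGSM-style attacks. All names, dimensions, and the verification threshold are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face recognition" model: a fixed linear embedding followed by
# L2 normalization. Real systems use deep CNNs; this stand-in only
# illustrates the attack mechanics.
D_IN, D_EMB = 64, 16
W = rng.standard_normal((D_EMB, D_IN))

def embed(x):
    e = W @ x
    return e / np.linalg.norm(e)

def cosine(a, b):
    return float(a @ b)

# Enrolled reference "face" and a live capture of the same person.
reference = rng.standard_normal(D_IN)
live = reference + 0.05 * rng.standard_normal(D_IN)

ref_emb = embed(reference)

def attack(x, eps=0.3, steps=20):
    # Iteratively perturb x in the direction that lowers its cosine
    # similarity to the enrolled embedding, keeping each signed step
    # small -- a crude analogue of a limited "makeup" budget.
    x = x.copy()
    for _ in range(steps):
        e = W @ x
        n = np.linalg.norm(e)
        # Gradient of cosine(ref_emb, embed(x)) with respect to x.
        grad = (W.T @ ref_emb) / n - (e @ ref_emb) * (W.T @ e) / n**3
        x -= (eps / steps) * np.sign(grad)
    return x

adversarial = attack(live)

before = cosine(ref_emb, embed(live))        # unperturbed: a close match
after = cosine(ref_emb, embed(adversarial))  # perturbed: similarity drops
print(f"similarity before: {before:.3f}, after: {after:.3f}")
```

The point the sketch mirrors is that the perturbation is chosen by the model's own gradients, which is why ordinary makeup applied without such guidance has no reason to push a face across the decision boundary.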

When the researchers compared an adversarial/anti-surveillance makeup algorithm with normal makeup that didn't have the guidance of the attack algorithm, "the results showed that the normal makeup did not succeed in fooling the facial recognition models," she says. ...

