
Sunday, December 01, 2019

Malevolence of the Use of Evolving Images

Good non-technical view of the current state of creating and evolving images. Somewhat like the 'photoshopping' enigma still going on, but more subtle and automated. At first this seems like it's not malevolent at all, just amusing, but it shows how an AI can be misled, depending on how it's used by people.

Malevolent Machine Learning   By Chris Edwards in the CACM

Communications of the ACM, December 2019, Vol. 62 No. 12, Pages 13-15
10.1145/3365573

At the start of the decade, deep learning restored the reputation of artificial intelligence (AI) following years stuck in a technological winter. Within a few years of becoming computationally feasible, systems trained on thousands of labeled examples began to exceed the performance of humans on specific tasks. One was able to decode road signs that had been rendered almost completely unreadable by the bleaching action of the sun, for example.

It just as quickly became apparent, however, that the same systems could just as easily be misled.

In 2013, Christian Szegedy and colleagues working at Google Brain found that subtle pixel-level changes, imperceptible to a human but spread across the entire image, could lead a deep neural network (DNN) to classify a bright yellow U.S. school bus as an ostrich.
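The core idea behind such perturbations can be sketched in a few lines. Below is a minimal, hypothetical illustration using a toy linear classifier in place of a real DNN: step each pixel slightly against the sign of the gradient of the true-class margin, keeping the change bounded by a small epsilon (the gradient-sign style of attack; the weights and dimensions here are invented for illustration, not from the paper).

```python
import numpy as np

# Toy linear "classifier": scores = W @ x. A tiny stand-in for a DNN,
# used only to illustrate the gradient-sign idea (hypothetical weights).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))          # 2 classes, 16-pixel "image"
x = rng.normal(size=16)

def predict(img):
    return int(np.argmax(W @ img))

true_class = predict(x)

# For a linear model, the gradient of the true-class margin w.r.t. the
# input is just the difference of the class weight rows. Step against
# its sign, scaled by a small epsilon, so no pixel moves more than that.
grad = W[true_class] - W[1 - true_class]
epsilon = 0.5
x_adv = x - epsilon * np.sign(grad)

# The margin toward the true class shrinks, while every pixel change
# stays within +/- epsilon (imperceptible in a real image setting).
print(predict(x), predict(x_adv))
print(np.max(np.abs(x_adv - x)))
```

The same principle, applied with backpropagated gradients through a deep network, yields the imperceptible school-bus-to-ostrich perturbations described above.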

Two years later, Anh Nguyen, then a Ph.D. student at the University of Wyoming, and colleagues developed what they referred to as "evolved images." Some were regular patterns with added noise; others looked like the static from an analog TV broadcast. Both were just abstract images to humans, but these evolved images would be classified by DNNs trained on conventional photographs as cheetahs, armadillos, motorcycles, and whatever else the system had been trained to recognize. ...
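The evolution process can be sketched as simple hill climbing: start from random noise and keep only mutations that raise a fixed classifier's confidence in a target class. The actual work used a full evolutionary algorithm against a trained DNN; the toy softmax classifier below is a hypothetical stand-in, invented purely to show the search loop.

```python
import numpy as np

# Hypothetical fixed "classifier": a linear-softmax model over an
# 8x8 = 64-pixel image, standing in for a trained DNN.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 64))               # 3 classes

def confidence(img, cls):
    scores = W @ img
    p = np.exp(scores - scores.max())      # stable softmax
    return (p / p.sum())[cls]

target = 0
img = rng.normal(size=64)                  # abstract noise to a human
start = confidence(img, target)
best = start

# Hill climbing: mutate the noise image, keep the mutant only if the
# classifier grows more confident it is the target class.
for _ in range(500):
    mutant = img + 0.1 * rng.normal(size=64)
    c = confidence(mutant, target)
    if c > best:
        img, best = mutant, c

print(round(float(best), 3))               # confidence climbs from its start
```

The result is an image that remains meaningless static to a human while the model assigns it ever-higher confidence in one class, which is the effect the "evolved images" exploited.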
