Another example of advanced sensory analysis that can improve machine 'seeing' across complex environments.
Research from Cornell University shows how AI sees through the looking glass.
[Image caption: AI learns to pick up on unexpected clues to differentiate original images from their reflections, the researchers found. Credit: Cornell University]

Things are different on the other side of the mirror.
Text is backward. Clocks run counterclockwise. Cars drive on the wrong side of the road. Right hands become left hands.
Intrigued by how reflection changes images in subtle and not-so-subtle ways, a team of Cornell University researchers used artificial intelligence to investigate what sets originals apart from their reflections. Their algorithms learned to pick up on unexpected clues such as hair parts, gaze direction and, surprisingly, beards—findings with implications for training machine learning models and detecting faked images.
"The universe is not symmetrical. If you flip an image, there are differences," said Noah Snavely, associate professor of computer science at Cornell Tech and senior author of the study, "Visual Chirality," presented at the 2020 Conference on Computer Vision and Pattern Recognition, held virtually June 14-19. "I'm intrigued by the discoveries you can make with new ways of gleaning information." Zhiqui Lin is the paper's first author; co-authors are Abe Davis, assistant professor of computer science, and Cornell Tech postdoctoral researcher Jin Sun.
Differentiating between original images and reflections is a surprisingly easy task for AI, Snavely said: a basic deep learning algorithm can quickly learn to classify whether an image has been flipped with 60% to 90% accuracy, depending on the kinds of images used to train it. Many of the clues it picks up on are difficult for humans to notice.
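As a rough illustration of the task described above, here is a minimal sketch of such a flip-detection classifier, assuming PyTorch with a small ResNet backbone. This is not the authors' code; the class name, training setup, and random tensors standing in for real photos are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torchvision import models

class FlipClassifier(nn.Module):
    """ResNet-18 backbone with a 2-way head: 0 = original, 1 = mirrored."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, x):
        return self.backbone(x)

def make_batch(images):
    """Randomly mirror half of a batch; return (inputs, flip labels)."""
    labels = torch.randint(0, 2, (images.size(0),))
    inputs = torch.stack([TF.hflip(img) if lab == 1 else img
                          for img, lab in zip(images, labels)])
    return inputs, labels

model = FlipClassifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step; random tensors stand in for real photos.
images = torch.randn(8, 3, 224, 224)
inputs, labels = make_batch(images)
loss = criterion(model(inputs), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because mirroring an image is free, the labels come along for nothing, which is what makes the task so convenient for probing what a network can learn.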
For this study, the team developed a technique that creates a heat map highlighting the parts of an image the algorithm finds most informative, giving insight into how it makes these decisions.
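The article does not name the attribution method behind these heat maps. One common way to produce this kind of map is Grad-CAM, sketched here against the hypothetical FlipClassifier from the previous block; treat it as an assumption about the general technique, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class):
    """Heat map over image regions that drive the given class score."""
    feats = {}

    def hook(module, inputs, output):
        output.retain_grad()          # keep gradients on the activation map
        feats["act"] = output

    handle = model.backbone.layer4.register_forward_hook(hook)
    logits = model(image.unsqueeze(0))    # shape (1, 2)
    logits[0, target_class].backward()
    handle.remove()

    act = feats["act"]                                 # (1, C, h, w)
    weights = act.grad.mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = F.relu((weights * act).sum(dim=1))           # (1, h, w)
    cam = cam / (cam.max() + 1e-8)                     # normalize to [0, 1]
    # Upsample to input resolution so the map can be overlaid on the image.
    return F.interpolate(cam.unsqueeze(1), size=image.shape[1:],
                         mode="bilinear", align_corners=False)[0, 0].detach()

# Which regions made the model call this (random) image "mirrored"?
heatmap = grad_cam(model, torch.randn(3, 224, 224), target_class=1)
```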
They discovered, not surprisingly, that the most commonly used clue was text, which looks different backward in every written language. To learn more, they removed images with text from their data set, and found that the next set of characteristics the model focused on included wristwatches, shirt collars (buttons tend to be on the left side), faces and phones—which most people tend to carry in their right hands—as well as other factors revealing right-handedness. ...
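The article does not say how text-bearing images were identified before removal. One simple way to perform that filtering step would be an OCR pass, as in this hypothetical sketch using the pytesseract package; the threshold and file names are made up.

```python
from PIL import Image
import pytesseract  # OCR wrapper; requires the Tesseract binary installed

def has_text(path, min_chars=3):
    """Heuristic: flag an image as text-bearing if OCR finds characters."""
    recovered = pytesseract.image_to_string(Image.open(path))
    return len(recovered.strip()) >= min_chars

paths = ["photo_001.jpg", "photo_002.jpg"]   # hypothetical file names
text_free = [p for p in paths if not has_text(p)]
```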
Thursday, September 17, 2020