
Sunday, March 24, 2019

Humans Thinking Like Computers

It's well known that humans can be tricked by images. Computers can too, and apparently sometimes in the same way. What are the implications? Can we have them check each other? About images? About ethics? In what contexts? In a system like Robotic Process Automation (RPA), where would we best insert a human agent, and where a computing AI agent?

People Agreeing with Neural Networks

Do You See What AI Sees? Study Finds That Humans Can Think Like Computers
By Johns Hopkins University

Even powerful computers, like those that guide self-driving cars, can be tricked into mistaking random scribbles for trains, fences, or school buses. It was commonly believed that people couldn't see how those images trip up computers, but in a new study, Johns Hopkins University researchers show most people actually can.

The findings suggest modern computers may not be as different from humans as supposed, demonstrating how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. The research is described in "Humans Can Decipher Adversarial Images," published in the journal Nature Communications.

"Most of the time, research in our field is about getting computers to think like people," says senior author Chaz Firestone, an assistant professor in Johns Hopkins' Department of Psychological and Brain Sciences. "Our project does the opposite—we're asking whether people can think like computers."
