I had seen this before; it is a fascinating example of how more and richer signals can be leveraged into new kinds of imaging. Here, a fabulous example: seeing through walls. But only in the right circumstances, with the right data.
Seeing Through Walls by Neil Savage
Communications of the ACM, June 2020, Vol. 63 No. 6, Pages 15-16, 10.1145/3392516
Machine vision coupled with artificial intelligence (AI) has made great strides toward letting computers understand images. Thanks to deep learning, which processes information in a way analogous to the human brain, machine vision is doing everything from keeping self-driving cars on the right track to improving cancer diagnosis by examining biopsy slides or x-ray images. Now some researchers are going beyond what the human eye or a camera lens can see, using machine learning to watch what people are doing on the other side of a wall.
The technique relies on low-power radio frequency (RF) signals, which reflect off living tissue and metal but pass easily through wooden or plaster interior walls. AI can decipher those signals, not only to detect the presence of people, but also to see how they are moving, and even to predict the activity they are engaged in, from talking on a phone to brushing their teeth. With RF signals, "they can see in the dark. They can see through walls or furniture," says Tianhong Li, a Ph.D. student in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT). He and fellow graduate student Lijie Fan helped develop a system to measure movement and, from that, to identify specific actions. "Our goal is to understand what people are doing," Li says.
Such an understanding could come in handy for, say, monitoring elderly residents of assisted living facilities to see if they are having difficulty performing the tasks of daily living, or to detect if they have fallen. It could also be used to create "smart environments," in which automated devices turn on lights or heat in a home or an office. A police force might use such a system to monitor the activity of a suspected terrorist or an armed robber.
In living situations, one advantage of monitoring activity through walls with RF signals is that they are unable to resolve faces or see what a person is wearing, for instance, so they could afford more of a sense of privacy than studding the home with cameras, says Dina Katabi, the MIT professor leading the research. Another is that it does not require people to wear monitoring devices they might forget or be uncomfortable with; they just move through their homes as they normally would.
The MIT system uses a radio transmitter operating between 5.4 GHz and 7.2 GHz, at power levels 1,000 times lower than a Wi-Fi signal, so it should not cause interference. The RF transmissions bounce strongly off people because of the water content of our bodies, but they also bounce off other objects in the environment to varying degrees, depending on the composition of the object. "You get this mass of reflections, signals bouncing off everything," Katabi says.
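A quick sanity check of the numbers mentioned above (my own arithmetic, not specifications from the MIT system): the free-space wavelengths at the band edges, and the "1,000 times lower than Wi-Fi" figure expressed as a power ratio in decibels.

```python
import math

C = 299_792_458.0  # speed of light in m/s


def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength for a given frequency."""
    return C / freq_hz


# Band edges reported for the transmitter.
lo, hi = 5.4e9, 7.2e9
print(f"wavelength at 5.4 GHz: {wavelength_m(lo) * 100:.1f} cm")  # ~5.6 cm
print(f"wavelength at 7.2 GHz: {wavelength_m(hi) * 100:.1f} cm")  # ~4.2 cm

# A 1,000x power reduction as a ratio in dB.
ratio_db = 10 * math.log10(1 / 1000)
print(f"relative power vs. Wi-Fi: {ratio_db:.0f} dB")  # -30 dB
```

Centimeter-scale wavelengths are short enough to resolve body-sized reflectors, yet long enough to pass through drywall and wood, which is consistent with the article's description.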
So the first step is to teach the computer to identify which signals are coming from people. The team does this by recording a scene in both visible light and RF signals, and using the visual image to label the humans in the training data for a convolutional neural network (CNN), a type of deep learning algorithm that assigns weights to different aspects of an image. The signals also contain spatial information, because it takes a longer time for a signal to travel a longer distance. The CNN can capture that information and use it to separate two or more people in the same vicinity, although it can lead to errors if the people are very close or hugging, Katabi says. ...
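The spatial cue described above is basic radar geometry: a reflection that travels farther arrives later, so round-trip delay maps directly to range. A minimal sketch of that relationship (generic physics, not the MIT system's actual pipeline):

```python
C = 299_792_458.0  # speed of light in m/s


def range_from_delay(round_trip_s: float) -> float:
    """Distance to a reflector, given the round-trip time of flight."""
    # Divide by 2 because the signal travels out and back.
    return C * round_trip_s / 2


# Two people at different distances produce distinguishable delays.
for delay_ns in (20, 40):
    d = range_from_delay(delay_ns * 1e-9)
    print(f"{delay_ns} ns round trip -> reflector at {d:.1f} m")
    # 20 ns -> 3.0 m, 40 ns -> 6.0 m
```

With nanosecond-scale timing resolution, reflectors a few meters apart are easy to separate, which is why very close or hugging people (whose delays nearly coincide) remain a hard case.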
Friday, June 12, 2020