A considerable, interesting piece. The link and its introduction are below.
MAY 18, 2022 · XCORR
What’s the endgame of neuroAI? on xcorr.net
It’s been 60 years since Hubel and Wiesel first started unlocking the mysteries of the visual system. Proceeding one neuron at a time, they discovered the fundamental building blocks of vision, the simple and complex cells. Yet for a long time, neurons in high-level visual cortex were something of a mystery. What kinds of neural computations support complex, flexible behaviour?
When I defended my thesis in 2014, I confidently stated that we did not know how to build computers that could see like humans do. Yet only a few months later, Niko Kriegeskorte and Jim DiCarlo’s labs showed that deep neural networks (DNNs) trained on ImageNet represent visual information similarly to shape-selective regions of the visual brain. Follow-up research, including some of my own, increased the number of visual areas that could be explained this way, while also decreasing the numerical gap between brains and computers.
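One common way this kind of correspondence is quantified is representational similarity analysis (RSA), which Kriegeskorte helped establish: compute a representational dissimilarity matrix (RDM) for a DNN layer and for a recorded brain region over the same stimuli, then correlate the two. The sketch below is a minimal, hedged illustration of that idea; the arrays `layer_activations` and `neural_responses` are random placeholders standing in for real model features and neural recordings, not data from the studies mentioned above.

```python
# Minimal RSA sketch: compare a DNN layer's representational geometry
# to that of a recorded neural population. All data here are placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 100

# Hypothetical responses to the same stimulus set:
# a DNN layer (n_stimuli x n_units) and a neural population (n_stimuli x n_neurons).
layer_activations = rng.standard_normal((n_stimuli, 512))
neural_responses = rng.standard_normal((n_stimuli, 80))

# Representational dissimilarity matrices (condensed upper triangle,
# correlation distance between stimulus-evoked response patterns).
rdm_model = pdist(layer_activations, metric="correlation")
rdm_brain = pdist(neural_responses, metric="correlation")

# Spearman correlation between the RDMs: higher values mean the DNN layer
# and the brain region organize the stimulus set in a more similar way.
rho, _ = spearmanr(rdm_model, rdm_brain)
print(f"model-brain RDM correlation: {rho:.3f}")
```

In practice this is repeated across layers and brain areas to ask which model stage best matches which region; with the random placeholders above, the correlation is of course near zero.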
There are some remaining qualitative gaps, covered in Grace Lindsay’s review: DNNs are more susceptible to adversarial stimuli than humans; they require far more training data than brains; they’re biologically implausible in that they don’t follow Dale’s law; etc. As I argued in my previous post, people are working on all these fronts. We will be able to build a comprehensive in silico version of the visual brain over the next decade. What next?
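To make the adversarial-stimuli gap concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one standard way to construct adversarial images. It is an illustration of the general phenomenon, not a method from the post; the model, images, and labels in the usage comment are hypothetical placeholders.

```python
# Minimal FGSM sketch: perturb an image slightly in the direction that
# increases a classifier's loss. Tiny, near-invisible perturbations like
# this can flip a DNN's prediction, while human perception is far more robust.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=2 / 255):
    """Return an adversarially perturbed copy of `image` (values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the sign of the loss gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage with an ImageNet-trained classifier:
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# adv_batch = fgsm_perturb(model, image_batch, true_labels)
```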
In this essay, I’ll argue that we should build, over the next decade, an in silico version of the visual brain, one that will unlock a whole array of applications in human health. We’ll be able to exercise fine control over our visual experiences, and that control will enable therapies delivered through the visual sense. Some therapies will be applicable to people with neurological disorders, while others will enhance healthy people. These applications will be unlocked by successive stages of technological development: first the maturation of neuroAI, then consumer augmented reality (AR), and finally (and much further down the line) closed-loop control through brain-computer interfaces (BCIs). Follow me as I take you on a tour of the near future of visual neuroAI.
This is one of my longer posts, and it covers a lot of ground:
What’s neuroAI, and why do I think that neuroAI models will keep getting better, fast?
Why building models of the visual brain is more than just a satisfying intellectual exercise: it can actually help people
How do visual communication and control currently work, and how will they change once visual neuroAI is deployed?
What are the technological trends that will unlock these neuroAI applications? How will AR and BCI allow closed-loop control of the visual system?