Simulation was a favorite method for analyzing alternatives in the enterprise. Of course, every simulation also creates new data, and that data can now be used to find operational patterns and examples that carry over to the real world. The approach works well for systems like robots, whose operational constraints are strictly defined. But even without that kind of restriction, we could simulate within parameter ranges, which leads to more combinatorial problems. A nice way to think about these problems, and the results are typically quite transparent.
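As a toy illustration of that combinatorial growth, here is a minimal Python sketch of sweeping a simulator over parameter ranges. Every parameter name, every value, and the run_simulation stub are hypothetical placeholders, not taken from any real system.

```python
import itertools

# Hypothetical operating ranges for a simulated robot arm; all names and
# values here are illustrative only.
parameter_ranges = {
    "payload_kg":       [0.5, 1.0, 2.0],
    "friction_coeff":   [0.2, 0.4, 0.6, 0.8],
    "joint_latency_ms": [5, 10, 20],
    "sensor_noise_std": [0.01, 0.05],
}

def run_simulation(config):
    """Stand-in for an actual simulator run; returns a dummy score."""
    return sum(float(v) for v in config.values())

# Cartesian product over the ranges: 3 * 4 * 3 * 2 = 72 scenarios here, and
# the count multiplies with every additional parameter or level, which is
# the combinatorial growth referred to above.
names = list(parameter_ranges)
scenarios = list(itertools.product(*parameter_ranges.values()))
print(f"{len(scenarios)} scenarios to simulate")

results = []
for values in scenarios:
    config = dict(zip(names, values))
    results.append((config, run_simulation(config)))
```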
NVIDIA Brings Robot Simulation Closer to Reality by Making Humans Redundant
Learning in simulation no longer takes human expertise to make it useful in the real world
By Evan Ackerman
We all know how annoying real robots are. They’re expensive, they’re finicky, and teaching them to do anything useful takes an enormous amount of time and effort. One way of making robot learning slightly more bearable is to program robots to teach themselves things, which is not as fast as having a human instructor in the loop, but can be much more efficient because that human can be off doing something else more productive instead. Google industrialized this process by running a bunch of robots in parallel, which sped things up enormously, but you’re still constrained by those pesky physical arms.
The way to really scale up robot learning is to do as much of it as you can in simulation instead. You can use as many virtual robots running in virtual environments testing virtual scenarios as you have the computing power to handle, and then push the fast forward button so that they’re learning faster than real time. Since no simulation is perfect, it’ll take some careful tweaking to get it to actually be useful and reliable in reality, and that means that humans have to get back involved in the process. Ugh.
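To make the scaling argument concrete, here is a minimal sketch of running many virtual robots in parallel. The VirtualRobotEnv class and run_episode function are invented placeholders for a real physics simulator; the only point is that adding more robots costs compute rather than hardware, and nothing waits on physical motors.

```python
import concurrent.futures
import time

class VirtualRobotEnv:
    """Toy stand-in for a physics-simulated robot; no real dynamics here."""
    def __init__(self, seed):
        self.seed = seed
        self.steps = 0

    def step(self):
        # A real simulator would integrate physics; here we just count steps.
        self.steps += 1
        return self.steps

def run_episode(seed, horizon=100_000):
    """Run one simulated episode and return how many steps it took."""
    env = VirtualRobotEnv(seed)
    for _ in range(horizon):
        env.step()
    return seed, env.steps

if __name__ == "__main__":
    # Spin up as many virtual robots as the worker pool allows; episodes run
    # far faster than real time because there is no physical hardware to wait on.
    start = time.perf_counter()
    with concurrent.futures.ProcessPoolExecutor() as pool:
        results = list(pool.map(run_episode, range(32)))
    print(f"Ran {len(results)} simulated episodes in "
          f"{time.perf_counter() - start:.2f} s")
```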
A team of NVIDIA researchers, working at the company’s new robotics lab in Seattle, is taking a crack at eliminating this final human-dependent step in a paper that they’re presenting at ICRA today. There’s still some tuning that has to happen to match simulation with reality, but now, it’s tuning that happens completely autonomously, meaning that the gap between simulation and reality can be closed without any human involvement at all. ...
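The paper describes its own specific method for that autonomous tuning; the sketch below is not that method, just a hypothetical outline of the kind of closed loop involved: train in simulation, compare simulated and real rollouts, and adjust the simulation parameters automatically until the two agree. Every function and parameter name here is invented for illustration.

```python
import random

def train_policy_in_sim(sim_params):
    """Stand-in for training a control policy under the given simulator settings."""
    return {"trained_under": dict(sim_params)}

def rollout(policy, sim_params=None):
    """Stand-in for a trajectory: simulated if sim_params is given, else 'real'."""
    return [random.gauss(0.0, 1.0) for _ in range(50)]

def trajectory_gap(sim_traj, real_traj):
    """Mean squared discrepancy between simulated and real behavior."""
    return sum((s - r) ** 2 for s, r in zip(sim_traj, real_traj)) / len(sim_traj)

# Start from a broad initial guess about the world (hypothetical parameters).
sim_params = {"friction": 0.5, "motor_gain": 1.0}

for iteration in range(10):
    policy = train_policy_in_sim(sim_params)    # cheap and fast, all in simulation
    real_traj = rollout(policy)                 # a few rollouts on the real robot
    sim_traj = rollout(policy, sim_params)      # the same policy in simulation

    gap = trajectory_gap(sim_traj, real_traj)
    print(f"iteration {iteration}: sim-to-real gap {gap:.3f}")
    if gap < 0.1:                               # simulation now matches reality
        break

    # Nudge the simulation parameters to shrink the gap. A real system would
    # use a principled optimizer here rather than random perturbation; the
    # point is only that no human is in this loop.
    sim_params = {k: v + random.uniform(-0.05, 0.05) for k, v in sim_params.items()}
```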