Ensuring AI works with the right dose of curiosity
Researchers make headway in solving a longstanding problem of balancing curious “exploration” versus “exploitation” of known pathways in reinforcement learning.
Rachel Gordon | MIT CSAIL | Publication Date: November 10, 2022
It’s a dilemma as old as time. Friday night has rolled around, and you’re trying to pick a restaurant for dinner. Should you visit your most beloved watering hole or try a new establishment, in the hopes of discovering something superior? Potentially, but that curiosity comes with a risk: If you explore the new option, the food could be worse. On the flip side, if you stick with what you know works well, you won't grow out of your narrow pathway.
Curiosity drives artificial intelligence to explore the world, now in boundless use cases — autonomous navigation, robotic decision-making, optimizing health outcomes, and more. Machines, in some cases, use “reinforcement learning” to accomplish a goal, where an AI agent iteratively learns from being rewarded for good behavior and punished for bad. Just like the dilemma faced by humans in selecting a restaurant, these agents also struggle with balancing the time spent discovering better actions (exploration) and the time spent taking actions that led to high rewards in the past (exploitation). Too much curiosity can distract the agent from making good decisions, while too little means the agent will never discover good decisions.
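The exploration-exploitation trade-off described above can be made concrete with a classic multi-armed bandit. The sketch below is purely illustrative, not code from the MIT researchers: an epsilon-greedy agent explores a random action with probability epsilon (trying the new restaurant) and otherwise exploits the action with the highest estimated reward (returning to the old favorite). All names and parameter values are assumptions.

```python
import random

class EpsilonGreedyAgent:
    """Minimal bandit agent illustrating exploration vs. exploitation."""

    def __init__(self, n_actions: int, epsilon: float = 0.1):
        self.epsilon = epsilon            # probability of exploring
        self.counts = [0] * n_actions     # times each action was tried
        self.values = [0.0] * n_actions   # running mean reward per action

    def select_action(self) -> int:
        if random.random() < self.epsilon:
            # Explore: try a random action ("new restaurant").
            return random.randrange(len(self.values))
        # Exploit: pick the action with the highest estimated reward.
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, action: int, reward: float) -> None:
        # Incremental update of the running mean reward for this action.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Usage: two "restaurants" with different hidden average quality.
agent = EpsilonGreedyAgent(n_actions=2, epsilon=0.1)
true_means = [0.7, 0.5]
for _ in range(1000):
    a = agent.select_action()
    r = random.gauss(true_means[a], 0.1)
    agent.update(a, r)
print(agent.values)  # estimates converge near the true means
```

Set epsilon too high and the agent wastes time on bad actions; set it too low and it may never find the better option, which is exactly the dilemma the paragraph above describes.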
In the pursuit of making AI agents with just the right dose of curiosity, researchers from MIT’s Improbable AI Laboratory and Computer Science and Artificial Intelligence Laboratory (CSAIL) created an algorithm that overcomes the problem of AI being too “curious” and getting distracted from a given task. Their algorithm automatically increases curiosity when it's needed, and suppresses it if the agent gets enough supervision from the environment to know what to do. ...
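The article is truncated before describing the method in detail, but one generic way to picture such adaptive curiosity is below. This is a hypothetical sketch, not the researchers' actual algorithm: the agent's total reward mixes an extrinsic (task) reward with an intrinsic (curiosity) bonus, and the bonus weight beta shrinks when extrinsic reward is dense and grows when it is sparse. The function names, the adaptation rule, and the parameters are all assumptions.

```python
def combined_reward(r_ext: float, r_int: float, beta: float) -> float:
    """Total reward = task reward + curiosity bonus scaled by beta (illustrative)."""
    return r_ext + beta * r_int

def adapt_beta(beta: float, recent_ext_rewards: list[float],
               lr: float = 0.01, target: float = 0.1) -> float:
    """Hypothetical adaptation rule: reduce the curiosity weight when the
    environment already supplies dense extrinsic reward, and raise it when
    extrinsic reward is scarce."""
    density = sum(abs(r) for r in recent_ext_rewards) / max(len(recent_ext_rewards), 1)
    beta += lr * (target - density)  # more supervision -> lower beta
    return max(0.0, beta)            # curiosity weight stays non-negative
```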