This link is first a reminder to myself that we need to keep promoting risk and uncertainty awareness in models. Meta-reasoning is always important: think about the context in which your models will be used, and understand the risks they carry in use. If you don't do that, you have missed something important. The article gets quite technical, but the introductions are worth reading, and it links to good online courses, which also have good intros.
Bayesian meta-learning
This story introduces Bayesian meta-learning approaches, covering Bayesian black-box meta-learning, Bayesian optimization-based meta-learning, ensembles of MAMLs, and probabilistic MAML. It is a short summary of the course ‘Stanford CS330: Multi-Task and Meta-Learning, 2019 | Lecture 5 — Bayesian Meta-Learning’.
By Qiurui Chen in TowardsDataScience
For meta-learning algorithms, three algorithmic properties are important: expressive power, consistency, and uncertainty awareness. Expressive power is the ability of f to represent a range of learning procedures; it measures scalability and applicability across domains. Consistency means the learned learning procedure will solve tasks given enough data; this property reduces reliance on the meta-training tasks, which leads to good out-of-distribution performance. Uncertainty awareness is the ability to reason about ambiguity during learning. It lets us think about how to explore new environments in a reinforcement learning context in order to reduce our uncertainty, and it matters in safety-critical settings, where we want calibrated uncertainty estimates. It also lets us ask, from the Bayesian perspective on meta-learning, what principled approaches can be derived from the corresponding graphical models.
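To make uncertainty awareness concrete, here is a minimal, hedged sketch of the ensemble idea mentioned above (as in ensembles of MAMLs): train several models that differ only in their random initialization, then use their disagreement at a query point as an uncertainty estimate. This toy version uses ridge regression on random Fourier features instead of actual MAML learners; all function names and the task setup are illustrative, not from the original story.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: y = sin(x) + noise, observed only on [-3, 3].
x_train = rng.uniform(-3, 3, size=(40, 1))
y_train = np.sin(x_train) + 0.1 * rng.normal(size=x_train.shape)

def fit_random_features(x, y, n_features, seed):
    """Fit one ensemble member: ridge regression on random Fourier features."""
    r = np.random.default_rng(seed)
    w = r.normal(size=(1, n_features))
    b = r.uniform(0, 2 * np.pi, size=n_features)
    phi = np.cos(x @ w + b)
    # Closed-form ridge solution for the output weights.
    theta = np.linalg.solve(phi.T @ phi + 1e-2 * np.eye(n_features), phi.T @ y)
    return lambda xq: np.cos(xq @ w + b) @ theta

# Ensemble of models differing only in their random seed -- a stand-in for
# an ensemble of independently meta-trained learners.
ensemble = [fit_random_features(x_train, y_train, 50, seed=s) for s in range(5)]

x_query = np.linspace(-6, 6, 7).reshape(-1, 1)
preds = np.stack([m(x_query) for m in ensemble])  # shape (5, 7, 1)
mean, std = preds.mean(axis=0), preds.std(axis=0)

# std is the ensemble's disagreement, a crude uncertainty estimate;
# it tends to grow outside the training range [-3, 3].
print(std.ravel().round(3))
```

The ensemble's standard deviation is what a downstream decision-maker would consult before trusting a prediction, e.g. to trigger exploration in RL or to abstain in a safety-critical setting.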
This story covers: 1. Why be Bayesian? 2. Bayesian meta-learning approaches. 3. How to evaluate Bayesian meta-learners.