When sizing up a project, how do we choose among multiple learning contexts and parameters, especially when the objective evaluations are noisy? Bayesian optimization offers a means of automating that part of machine learning, and this paper makes the case for it. The full tutorial paper, linked below, is technical.
A Tutorial on Bayesian Optimization
By Peter I. Frazier
https://arxiv.org/abs/1807.02811

Abstract
Bayesian optimization is an approach to optimizing objective functions that take a long time (minutes or hours) to evaluate. It is best-suited for optimization over continuous domains of less than 20 dimensions, and tolerates stochastic noise in function evaluations. It builds a surrogate for the objective and quantifies the uncertainty in that surrogate using a Bayesian machine learning technique, Gaussian process regression, and then uses an acquisition function defined from this surrogate to decide where to sample. In this tutorial, we describe how Bayesian optimization works, including Gaussian process regression and three common acquisition functions: expected improvement, entropy search, and knowledge gradient. We then discuss more advanced techniques, including running multiple function evaluations in parallel, multi-fidelity and multi-information source optimization, expensive-to-evaluate constraints, random environmental conditions, multi-task Bayesian optimization, and the inclusion of derivative information. We conclude with a discussion of Bayesian optimization software and future research directions in the field. Within our tutorial material we provide a generalization of expected improvement to noisy evaluations, beyond the noise-free setting where it is more commonly applied. This generalization is justified by a formal decision-theoretic argument, standing in contrast to previous ad hoc modifications.
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Optimization and Control (math.OC)
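To make the loop the abstract describes concrete, here is a minimal sketch of Bayesian optimization with Gaussian process regression and the expected-improvement acquisition function, using scikit-learn and SciPy. The noisy Forrester test function stands in for a real expensive-to-evaluate objective, and the evaluation budget, kernel choice, and exploration parameter `xi` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Forrester test function plus Gaussian noise; stands in for a
    # slow, stochastic evaluation (minutes or hours in practice).
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4) + 0.1 * np.random.randn()

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    """EI for minimization: expected amount by which a candidate
    improves on the best observation so far."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # guard against zero predictive std
    z = (y_best - mu - xi) / sigma
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 1))             # small initial design
y = np.array([objective(x[0]) for x in X])

# Surrogate: GP regression with a Matern kernel; alpha models the
# observation noise the abstract says the method tolerates.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2,
                              normalize_y=True)

for _ in range(20):                            # evaluation budget
    gp.fit(X, y)
    X_cand = np.linspace(0, 1, 500).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, y.min())
    x_next = X_cand[np.argmax(ei)]             # sample where EI peaks
    y_next = objective(x_next[0])
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("best x:", X[np.argmin(y)][0], "best y:", y.min())
```

The same skeleton works with the other acquisition functions the tutorial covers (entropy search, knowledge gradient); only the scoring rule inside the loop changes.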