Good way to look at it. Many good points, ultimately technical.
Technical Perspective: Algorithm Selection as a Learning Problem
By Avrim Blum
Communications of the ACM, June 2020, Vol. 63 No. 6, Page 86
10.1145/3394623
The following paper by Gupta and Roughgarden—"Data-Driven Algorithm Design"—addresses the issue that the best algorithm to use for many problems depends on what the input "looks like." Certain algorithms work better for certain types of inputs, whereas other algorithms work better for others. This is especially the case for NP-hard problems, where we do not expect to ever have algorithms that work well on all inputs: instead, we often have various heuristics that each work better in different settings. Moreover, heuristic strategies often have parameters or hyperparameters that must be set in some way. ...
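To make that concrete, here is a minimal Python sketch, with invented knapsack instances, in which two textbook greedy heuristics for the NP-hard knapsack problem each beat the other on a different input. Nothing here is from the paper; it is purely illustrative:

    # Toy example: two greedy knapsack heuristics, each better on
    # different inputs. All instance data is made up for illustration.

    def greedy_knapsack(values, weights, capacity, score):
        """Pack items in decreasing order of score(value, weight); return packed value."""
        order = sorted(range(len(values)),
                       key=lambda i: score(values[i], weights[i]), reverse=True)
        total, used = 0, 0
        for i in order:
            if used + weights[i] <= capacity:
                used += weights[i]
                total += values[i]
        return total

    by_value = lambda v, w: v        # prefer individually valuable items
    by_density = lambda v, w: v / w  # prefer value per unit of weight

    # Instance A: greedy-by-value wins (220 vs. 160) -- the densest item
    # crowds out a better combination under the density ordering.
    print(greedy_knapsack([60, 100, 120], [10, 20, 30], 50, by_value))
    print(greedy_knapsack([60, 100, 120], [10, 20, 30], 50, by_density))

    # Instance B: greedy-by-density wins (18 vs. 10) -- the single most
    # valuable item fills the whole knapsack by itself.
    print(greedy_knapsack([10, 9, 9], [10, 5, 5], 10, by_value))
    print(greedy_knapsack([10, 9, 9], [10, 5, 5], 10, by_density))

Neither ordering dominates the other; which is "best" depends entirely on what the instances look like.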
To view the accompanying paper, visit doi.acm.org/10.1145/3394625
Data-Driven Algorithm Design
By Rishi Gupta, Tim Roughgarden
Communications of the ACM, June 2020, Vol. 63 No. 6, Pages 87-94
10.1145/3394625
The best algorithm for a computational problem generally depends on the "relevant inputs," a concept that depends on the application domain and often defies formal articulation. Although there is a large literature on empirical approaches to selecting the best algorithm for a given application domain, there has been surprisingly little theoretical analysis of the problem.
We model the problem of identifying a good algorithm from data as a statistical learning problem. Our framework captures several state-of-the-art empirical and theoretical approaches to the problem, and our results identify conditions under which these approaches are guaranteed to perform well. We interpret our results in the contexts of learning greedy heuristics, instance feature-based algorithm selection, and parameter tuning in machine learning.
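As a rough sketch of that statistical-learning view: draw training instances from the application domain, then pick the best-performing member of a parameterized algorithm family by empirical risk minimization. The single-parameter greedy knapsack family below (scoring item i by values[i] / weights[i]**rho, so rho = 0 is greedy-by-value and rho = 1 is greedy-by-density) is in the spirit of the greedy families the paper analyzes, but the input distribution, sample size, and grid search are illustrative assumptions rather than the paper's construction:

    import random

    def rho_greedy(values, weights, capacity, rho):
        """One-parameter greedy family: score item i by values[i] / weights[i]**rho."""
        order = sorted(range(len(values)),
                       key=lambda i: values[i] / weights[i] ** rho, reverse=True)
        total, used = 0, 0
        for i in order:
            if used + weights[i] <= capacity:
                used += weights[i]
                total += values[i]
        return total

    def sample_instance(rng, n=50):
        # Stand-in for the unknown, application-specific input distribution.
        values = [rng.uniform(1, 100) for _ in range(n)]
        weights = [rng.uniform(1, 100) for _ in range(n)]
        return values, weights, 0.25 * sum(weights)

    rng = random.Random(0)
    training = [sample_instance(rng) for _ in range(100)]

    # Empirical risk minimization over a coarse grid: keep the parameter
    # with the best average objective value on the training sample.
    grid = [k / 10 for k in range(11)]
    best_rho = max(grid, key=lambda r: sum(rho_greedy(v, w, c, r)
                                           for v, w, c in training))
    print("learned rho:", best_rho)

Whether the empirically best parameter also performs well on fresh instances from the same distribution is exactly the generalization question the paper's framework is built to answer.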
1. Introduction
Rigorously comparing algorithms is hard. Two different algorithms for a computational problem generally have incomparable performance: one algorithm is better on some inputs, but worse on others. How can a theory advocate one of the algorithms over the other? The simplest and most common solution in the theoretical analysis of algorithms is to summarize the performance of an algorithm using a single number, such as its worst-case performance or its average-case performance with respect to an input distribution. This approach effectively advocates using the algorithm with the best summarizing value (e.g., the smallest worst-case running time).
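For example, the two summaries can disagree about which algorithm to prefer, a point this tiny sketch (with made-up running times) makes explicit:

    # Hypothetical running times (ms) of two algorithms on three inputs.
    times_a = {"x1": 3, "x2": 9, "x3": 4}
    times_b = {"x1": 6, "x2": 7, "x3": 6}

    # Worst-case summary (max over inputs): B looks better, 7 vs. 9.
    print(max(times_a.values()), max(times_b.values()))

    # Average-case summary (uniform over inputs): A looks better, 5.33 vs. 6.33.
    print(sum(times_a.values()) / 3, sum(times_b.values()) / 3)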
Solving a problem "in practice" generally means identifying an algorithm that works well for most or all instances of interest. When the "instances of interest" are easy to specify formally in advance (say, planar graphs), the traditional analysis approaches often give accurate performance predictions and identify useful algorithms. However, the instances of interest commonly possess domain-specific features that defy formal articulation. Solving a problem in practice can require designing an algorithm that is optimized for the specific application domain, even though the special structure of its instances is not well understood. Although there is a large literature, spanning numerous communities, on empirical approaches to data-driven algorithm design (e.g., Fink [11], Horvitz et al. [14], Huang et al. [15], Hutter et al. [16], Kotthoff et al. [18], Leyton-Brown et al. [20]), there has been surprisingly little theoretical analysis of the problem. One possible explanation is that worst-case analysis, which is the dominant algorithm analysis paradigm in theoretical computer science, is intentionally application agnostic. ...
Saturday, May 30, 2020
Algorithm Selection and Design as a Learning Problem
Labels: ACM, Algorithms, CACM, Design, Feature Selection, learning