Monday, April 04, 2022

Declarative Machine Learning Systems in Practice

Simpler systems to define machine learning

Declarative Machine Learning Systems    By Piero Molino, Christopher Ré

Communications of the ACM, January 2022, Vol. 65 No. 1, Pages 42-49, 10.1145/3475167

In the past 20 years, machine learning (ML) has progressively moved from an academic endeavor to a pervasive technology adopted in almost every aspect of computing. ML-powered products are now embedded in every aspect of our digital lives: from recommendations of what to watch, to divining our search intent, to powering virtual assistants in consumer and enterprise settings. Moreover, recent successes in applying ML in natural sciences have revealed that ML can be used to tackle some of the hardest real-world problems that humanity faces today.19

For these reasons, ML has become central to the strategy of tech companies and has attracted more attention from academia than ever before. The journey that led to the current ML-centric computing world was hastened by several factors, including hardware improvements that enabled massively parallel processing, data infrastructure improvements that resulted in the storage and consumption of the massive datasets needed to train most ML models, and algorithmic improvements that allowed for better performance and scaling.

Despite the successes, these examples of ML adoption are only the tip of the iceberg. Right now, the people training and using ML models are typically experienced developers with years of study working within large organizations, but the next wave of ML systems should allow a substantially larger number of people, potentially without any coding skills, to perform the same tasks. These new ML systems will not require users to fully understand all the details of how models are trained and used for obtaining predictions—a substantial barrier to entry—but will provide them with a more abstract interface that is less demanding and more familiar. Declarative interfaces are well suited to this goal: they hide complexity, favor separation of concerns, and ultimately lead to increased productivity.

We worked on such abstract interfaces by developing two declarative ML systems—Overton16 and Ludwig13—that require users to declare only their data schema (names and types of inputs) and tasks rather than having to write low-level ML code. The goal of this article is to describe how ML systems are currently structured, to highlight which factors matter for ML project success and for wider ML adoption, to examine the issues current ML systems face, and to explain how the systems we developed address them. Finally, the article describes what can be learned from the trajectory of development of ML and systems throughout the years and what the next generation of ML systems will look like.
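To make the idea of declaring only a data schema and task concrete, here is a minimal sketch of what such a declarative specification can look like, expressed as a plain Python dictionary. The structure loosely follows Ludwig's configuration style (input and output features with names and types), but the feature names and the dataset they imply are illustrative assumptions, not taken from the article.

```python
# A declarative ML specification: the user states WHAT the data looks like
# and WHAT to predict, not HOW to train a model. Feature names below are
# hypothetical examples; the overall shape resembles Ludwig-style configs.
config = {
    "input_features": [
        {"name": "review_text", "type": "text"},    # free-form text input
        {"name": "star_rating", "type": "number"},  # numerical input
    ],
    "output_features": [
        {"name": "sentiment", "type": "category"},  # classification target
    ],
}

# A declarative system reads this schema and assembles preprocessing,
# model architecture, and training loop automatically.
input_names = [f["name"] for f in config["input_features"]]
output_names = [f["name"] for f in config["output_features"]]
```

Everything below this declaration—encoders for each input type, the loss for the output type, the training procedure—can be chosen by the system rather than hand-coded by the user.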

Software engineering meets ML. A factor not appreciated enough in the successes of ML is an improved understanding of the process of producing real-world ML applications and of how different it is from traditional software development. Building a working ML application requires a new set of abstractions and components, well characterized by David Sculley et al.,18 who also identified how idiosyncratic aspects of ML projects may lead to a substantial increase in technical debt (for example, the cost of reworking a solution that was obtained by cutting corners rather than by following software engineering principles). These bespoke aspects of ML development stand in contrast to established software engineering practices, chiefly because of the amount of uncertainty at every step, which leads to a more service-oriented development process.

Despite the bespoke aspects of each individual ML project, researchers first and industry later distilled common patterns that abstract the most mechanical parts of building ML projects into a set of tools, systems, and platforms. Consider, for example, how the availability of projects such as scikit-learn, TensorFlow, PyTorch, and many others allowed for wide ML adoption and quick improvement of models through more standardized processes: Where implementing an ML model once required years of work by highly skilled ML researchers, now the same can be accomplished in a few lines of code that most developers would be able to write. In her article, "The Hardware Lottery" (see Communications' December 2021, p. 58), Sara Hooker argues that the availability of accelerator hardware determines the success of ML algorithms, potentially more than their intrinsic merits.8 We agree with that assessment and add that the availability of easy-to-use software packages tailored to ML algorithms has been at least as important for their success and adoption, if not more so.
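The claim that a working model now takes only a few lines of code can be illustrated with scikit-learn, one of the libraries the paragraph names. This is a generic sketch (the dataset and model choice are my own, not the authors'): a complete load/split/train/evaluate loop for a classifier.

```python
# Training and evaluating a classifier in a few lines with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a logistic regression model and measure held-out accuracy.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

What once demanded hand-written optimization code is reduced to a standardized `fit`/`score` interface, which is precisely the kind of abstraction the paragraph credits for wide ML adoption.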