
Thursday, January 31, 2019

Provability and Machine Learning

An interesting point, but one that is rarely useful in practical mathematics. Just because something cannot be rigorously proved does not mean it is not practically useful. The results used in AI methods are statistical, not exact logic. Still, the work does claim to link the related limitations of logic and machine learning. How does that limit ML in practice?

Unprovability comes to machine learning, in Nature

Scenarios have been discovered in which it is impossible to prove whether or not a machine-learning algorithm could solve a particular problem. This finding might have implications for both established and future learning algorithms.

"During the twentieth century, discoveries in mathematical logic revolutionized our understanding of the very foundations of mathematics. In 1931, the logician Kurt Gödel showed that, in any system of axioms that is expressive enough to model arithmetic, some true statements will be unprovable [1]. And in the following decades, it was demonstrated that the continuum hypothesis — which states that no set of distinct objects has a size larger than that of the integers but smaller than that of the real numbers — can be neither proved nor refuted using the standard axioms of mathematics [2–4]. Writing in Nature Machine Intelligence, Ben-David et al. [5] show that the field of machine learning, although seemingly distant from mathematical logic, shares this limitation. They identify a machine-learning problem whose fate depends on the continuum hypothesis, leaving its resolution forever beyond reach. ... "
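For reference (this is standard set-theoretic notation, not part of the quoted article), the continuum hypothesis mentioned above can be written compactly as

  \neg \exists S : \aleph_0 < |S| < 2^{\aleph_0}

that is, there is no set S whose cardinality lies strictly between that of the integers (\aleph_0) and that of the real numbers (2^{\aleph_0}); equivalently, 2^{\aleph_0} = \aleph_1.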

Paper: https://www.nature.com/articles/s42256-018-0002-3
