
Friday, September 14, 2018

Bringing in Information Theory

Nice idea.  It's really all about the knowledge we have now, what we can add to it by learning, and how we can usefully measure that.  So how can we leverage this idea?  It is ultimately a technical, computer-science notion rooted in information theory, but the author makes it as comprehensible as possible.  Intro below, and then off you go.

When Bayes, Ockham, and Shannon come together to define machine learning  by Tirthajyoti Sarkar in Towards Data Science

Editorial Associate, "Towards Data Science" | Sr. Principal Engineer | Ph.D. in EE (U. of Illinois) | AI/ML certification, Stanford, MIT | Open-source contributor

A beautiful idea, which binds together concepts from statistics, information theory, and philosophy.

Introduction

It is somewhat surprising that among all the high-flying buzzwords of machine learning, we don't hear much about the one phrase which fuses some of the core concepts of statistical learning, information theory, and natural philosophy into a single three-word combo.

Moreover, it is not just an obscure and pedantic phrase meant for machine learning (ML) Ph.D.s and theoreticians. It has a precise and easily accessible meaning for anyone interested in exploring it, and a practical payoff for practitioners of ML and data science.

I am talking about Minimum Description Length. And you may be wondering what the heck that is…

Let's peel the layers off and see how useful it is…
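Before handing off to the article, here is a minimal sketch of the Minimum Description Length idea in Python. It is my own illustration, not code from the article: pick the model that minimizes the total description length L(model) + L(data | model), measured in bits. The polynomial models, the Gaussian noise model, and the flat 32-bits-per-parameter cost are all illustrative assumptions.

import numpy as np

def description_length(x, y, degree, bits_per_param=32):
    """Two-part code length in bits for a polynomial model of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma = residuals.std() + 1e-12            # avoid log(0) on a perfect fit

    # L(model): a crude fixed cost per coefficient (a stand-in for a real code).
    model_bits = bits_per_param * (degree + 1)

    # L(data | model): Shannon code length = negative log2-likelihood of the
    # residuals under a Gaussian noise model (this is where Shannon meets Bayes).
    nll_nats = 0.5 * np.sum((residuals / sigma) ** 2) \
        + len(y) * np.log(sigma * np.sqrt(2 * np.pi))
    data_bits = nll_nats / np.log(2)

    return model_bits + data_bits

# Ockham's razor in action: the quadratic that actually generated the data
# should beat both an underfit line and a needlessly flexible degree-9 fit.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
y = 1.5 * x**2 - 2 * x + rng.normal(scale=1.0, size=x.size)

for d in (1, 2, 9):
    print(f"degree {d}: {description_length(x, y, d):.1f} bits")

Running this, the degree-2 model comes out with the shortest total code: the line pays heavily to encode its large residuals, while the degree-9 model pays for parameters it does not need. That trade-off is Ockham's razor expressed in Shannon's bits, which is the gist of what the article unpacks.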
