A considerable and detailed look at decision trees, and multiple applications. We used DT methods extensively in the enterprise and implemented them within our standard AI methods. They had been in use for a long time because the specifics of the approach allow for complete transparency. It is as much machine learning as neural nets are; it's just not given enough consideration today because it's not seen as sexy enough.
The Complete Guide to Decision Trees (Long article that provides a good overview, then gets technical)
Posted by Diego Lopez Yse in DSC
Everything you need to know about a top algorithm in Machine Learning
In the beginning, learning Machine Learning (ML) can be intimidating. Terms like “Gradient Descent”, “Latent Dirichlet Allocation” or “Convolutional Layer” can scare lots of people. But there are friendly ways of getting into the discipline, and I think starting with Decision Trees is a wise decision.
Decision Trees (DTs) are probably one of the most useful supervised learning algorithms out there. As opposed to unsupervised learning (where there is no output variable to guide the learning process and data is explored by algorithms to find patterns), in supervised learning your existing data is already labelled and you know which behaviour you want to predict in the new data you obtain. This is the type of algorithm that autonomous cars use to recognize pedestrians and objects, or that organizations exploit to estimate customers' lifetime value and their churn rates.
In a way, supervised learning is like learning with a teacher, and then applying that knowledge to new data.
DTs are ML algorithms that progressively divide data sets into smaller data groups based on a descriptive feature, until they reach sets that are small enough to be described by some label. They require that you have labelled data (tagged with one or more labels, like the plant name in pictures of plants), and they try to label new data based on that knowledge. ...
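To make that splitting idea concrete, here is a minimal sketch, not part of the original article, that fits a decision tree on labelled data with scikit-learn. The Iris dataset is used only as a stand-in for any labelled data set, and the parameters (such as max_depth=3) are illustrative choices:

```python
# Minimal sketch: fitting a decision tree on labelled data with scikit-learn.
# The Iris dataset stands in for any labelled data (features plus a known
# class for each row); max_depth=3 is an illustrative choice.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Labelled data: four numeric features per flower, plus its species label.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The tree repeatedly splits the training data on one feature at a time
# until the resulting groups are pure enough to be described by a label.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Label new, unseen data based on the learned splits.
print("Test accuracy:", clf.score(X_test, y_test))

# Print the splits as text; this transparency is a key appeal of DTs.
print(export_text(clf, feature_names=load_iris().feature_names))
```

Printing the tree with export_text shows every split as a readable rule, which is the "complete transparency" that makes decision trees attractive in enterprise settings.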