
Thursday, February 07, 2019

Backpropagation

Continuing the advanced tech theme this morning. In the early days you had to code your own backpropagation; now you don't. It is still good to know what is going on, though. Here is a not-too-technical view: de-mystifying the idea can help you use it and understand how much data you need.

The Backpropagation Algorithm Demystified

By Nathalie Jeans in Medium

One of the most important aspects of machine learning is its ability to recognize the error in its output and to interpret data more precisely as more datasets are fed through its neural network. This process, commonly referred to as backpropagation, isn’t as complex as you might think.

The first thing people think of when they hear the term “Machine Learning” goes a little something like The Matrix. All around, there are computers taking over the world, not to mention the human race. And maybe — just maybe — they’re making some music or creating some art on the side. In any case, people generally just want nothing to do with it.

What if I told you those people don’t even know what machine learning and things like backpropagation really are? Just hear me out. Then you can go back to worrying about the robot-led apocalypse that’s supposed to happen next Friday.

Deep Learning: Neural Networks, Weights, and Biases

A neural network is just a really complicated machine: you have an input, you put it through the machine, and you get something that comes out. There are multiple tasks that make up this machine so that you get what you want in the end. You can also adjust each of the tasks that are part of this process so you have the best-working, most accurate result at the end. In a neural network, the ‘tasks’ are hidden layers, while the adjustments to the tasks’ performance are called weights. The weights determine how much each node within the hidden layers contributes, which affects the final output. The principle of machine learning is that the optimal final output can be obtained by adjusting these weights as larger amounts of data are fed through, a process much like trial and error.  ...
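The article's picture of backpropagation — run an input through the "tasks," measure the error at the output, and nudge the weights back through the layers — can be sketched in a few lines. This is a minimal illustration, not the article's code: a hypothetical one-node hidden layer with scalar input, sigmoid activations, squared-error loss, and made-up starting weights and learning rate.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)   # hidden layer: the 'task'
    y = sigmoid(w2 * h)   # output of the 'machine'
    return h, y

def backprop_step(x, target, w1, w2, lr=0.5):
    """One trial-and-error adjustment of both weights via the chain rule."""
    h, y = forward(x, w1, w2)
    # Error at the output, for squared-error loss L = 0.5 * (y - target)^2
    dL_dy = y - target
    dy_dz2 = y * (1 - y)             # sigmoid derivative at the output
    dL_dw2 = dL_dy * dy_dz2 * h      # gradient for the output weight
    dL_dh = dL_dy * dy_dz2 * w2      # error propagated back to the hidden node
    dh_dz1 = h * (1 - h)             # sigmoid derivative at the hidden node
    dL_dw1 = dL_dh * dh_dz1 * x      # gradient for the hidden weight
    return w1 - lr * dL_dw1, w2 - lr * dL_dw2

# Hypothetical numbers: input 1.0, desired output 0.8, arbitrary initial weights.
w1, w2 = 0.3, -0.2
for _ in range(2000):
    w1, w2 = backprop_step(1.0, 0.8, w1, w2)
_, y = forward(1.0, w1, w2)
print(y)  # the output drifts toward the 0.8 target as the weights adjust
```

Each pass repeats the same two moves the article describes: a forward run to see what "comes out," then a backward run that distributes the output error to each weight in proportion to how much that weight contributed.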
