
Monday, July 20, 2020

A Look at Transfer Learning

A good general look at the concept of transfer learning.

Everything you need to know about transfer learning in AI, in TNW

Today, artificial intelligence programs can recognize faces and objects in photos and videos, transcribe audio in real-time, detect cancer in x-ray scans years in advance, and compete with humans in some of the most complicated games.

Until a few years ago, all these challenges were either thought insurmountable, decades away, or were being solved with sub-optimal results. But advances in neural networks and deep learning, a branch of AI that has become very popular in the past few years, have helped computers solve these and many other complicated problems.

Unfortunately, when created from scratch, deep learning models require access to vast amounts of data and compute resources. This is a luxury that many can’t afford. Moreover, it takes a long time to train deep learning models to perform tasks, which is not suitable for use cases that have a short time budget.

Fortunately, transfer learning, the technique of transferring knowledge gained by one trained AI model to another, can help solve these problems.

The cost of training deep learning models
Deep learning is a subset of machine learning, the science of developing AI through training examples. The concepts and science behind deep learning and neural networks are as old as the term “artificial intelligence” itself. But until recent years, they had been largely dismissed by the AI community for being inefficient.

The availability of vast amounts of data and compute resources in the past few years has pushed neural networks into the limelight and made it possible to develop deep learning algorithms that can solve real-world problems.

To train a deep learning model, you basically must feed a neural network lots of annotated examples. These examples can be things such as labeled images of objects or mammogram scans of patients with their eventual outcomes. The neural network will carefully analyze and compare the images and develop mathematical models that represent the recurring patterns among images of the same category.
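To make the idea of "feeding a neural network annotated examples" concrete, here is a minimal supervised-training sketch in PyTorch. The model architecture, loss, and hyperparameters are placeholders for illustration, not anything prescribed by the article.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Placeholder model: a tiny classifier over 28x28 grayscale images.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),  # 10 output classes, e.g. digits 0-9
)

loss_fn = nn.CrossEntropyLoss()                       # compares predictions to labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_one_epoch(loader: DataLoader) -> None:
    """One pass over a dataset of (image, label) pairs."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        logits = model(images)            # forward pass
        loss = loss_fn(logits, labels)    # error on the annotated examples
        loss.backward()                   # compute gradients
        optimizer.step()                  # adjust the model's parameters
```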


There already exist several large open-source datasets, such as ImageNet, a database of more than 14 million images labeled in 22,000 categories, and MNIST, a dataset of 60,000 handwritten digits. AI engineers can use these sources to train their deep learning models.
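For instance, torchvision ships ready-made loaders for public datasets like MNIST; a sketch of pulling the 60,000 training digits might look like this (the download path and batch size are illustrative):

```python
import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader

# Download the MNIST training split (60,000 labeled digit images).
mnist_train = torchvision.datasets.MNIST(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor(),
)
train_loader = DataLoader(mnist_train, batch_size=64, shuffle=True)
```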

However, training deep learning models also requires access to very strong computing resources. Developers usually use clusters of CPUs, GPUs, or specialized hardware such as Google’s Tensor Processing Units (TPUs) to train neural networks in a time-efficient way. The costs of purchasing or renting such resources can be beyond the budget of individual developers or small organizations. Also, for many problems, there aren’t enough examples to train robust AI models.
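In PyTorch, for example, shifting the work onto a GPU when one is available is a short snippet; the model name here refers to the placeholder classifier sketched above.

```python
import torch

# Use a GPU if the machine has one; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # 'model' from the earlier training sketch
```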

Transfer learning makes deep learning training much less demanding
Say an AI engineer wants to create an image classifier neural network to solve a specific problem. Instead of gathering thousands or millions of images, the engineer can use one of the publicly available datasets such as ImageNet and enhance it with domain-specific photos.

But the AI engineer must still pay a hefty sum to rent the compute resources necessary to run those millions of images through the neural network. This is where transfer learning comes into play. Transfer learning is the process of creating new AI models by fine-tuning previously trained neural networks.  ... " 
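The excerpt stops here, but the fine-tuning idea it describes can be sketched in PyTorch: load a network pretrained on ImageNet, freeze its weights, and retrain only a new classification head on the domain-specific images. The class count and learning rate are placeholders, and the weights argument assumes a recent torchvision release (older versions use pretrained=True instead).

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet pretrained on ImageNet (the "previously trained" network).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained layers so their learned features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for the new, domain-specific task.
num_classes = 5  # placeholder: e.g. five categories of domain-specific photos
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Because only the small new head is trained, far less data and compute are needed than training the whole network from scratch, which is the point the article is making.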
