One of the biggest myths about AI is that you need a large amount of data to obtain sufficient accuracy, and the rapid development of Big Data analytics seems to confirm this intuition. It is true that deep learning methods typically require training on a huge number of labeled images. However, in image classification even a small collection of training images can yield a reasonable accuracy rate (90–100%) when using newer machine learning techniques that either reuse data previously collected in adjacent domains or change the classification process entirely by operating on image similarity.
Much as humans apply knowledge gained in one sphere to related spheres, machine learning and deep learning algorithms can also utilize the knowledge acquired for one task to solve adjacent problems.
Even though ML/DL algorithms are traditionally designed to work in isolation on specific tasks, transfer learning and domain adaptation methods aim to overcome this isolated learning paradigm and develop models that learn in a way closer to how humans do.
Transfer learning is a method that generalizes knowledge, including features and weights, from previously learned tasks and applies it to newer, related ones that lack data. In computer …
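The idea of reusing learned features and weights can be sketched in a few lines. The example below is a minimal illustration, not a real framework API: a frozen random projection stands in for a pretrained feature extractor, and only a small classification head is trained on a tiny labeled target dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" feature extractor: a frozen random projection
# standing in for the convolutional base of a network trained on a large
# source dataset. In real transfer learning these weights would come from
# a model such as one pretrained on ImageNet.
W_pretrained = rng.normal(size=(64, 16))  # frozen, never updated below

def extract_features(x):
    # Frozen layer with a ReLU activation: knowledge is reused, not retrained.
    return np.maximum(x @ W_pretrained, 0.0)

# Tiny target dataset: only 20 labeled examples in two classes,
# far too few to train a deep network from scratch.
X = rng.normal(size=(20, 64))
y = np.array([0] * 10 + [1] * 10)
X[y == 1] += 0.5  # shift class 1 so the classes are separable

# Train only the new classification head (simple perceptron updates);
# the pretrained weights stay fixed throughout.
feats = extract_features(X)
w_head = np.zeros(16)
b_head = 0.0
for _ in range(200):
    for f, label in zip(feats, y):
        pred = 1 if f @ w_head + b_head > 0 else 0
        w_head += 0.1 * (label - pred) * f
        b_head += 0.1 * (label - pred)

preds = (feats @ w_head + b_head > 0).astype(int)
accuracy = (preds == y).mean()
```

In practice the same pattern appears in deep learning libraries: load a pretrained backbone, freeze its layers, and fit only a small head on the scarce target data.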
Read More on Datafloq