Domain-specific transfer learning using image mixing and stochastic image selection
Can a gradual transition from the source to the target dataset improve knowledge transfer when fine-tuning a convolutional neural network to a new domain? Can training examples from general image datasets improve classification on fine-grained datasets? We present two image similarity metrics and two methods for progressively transitioning from the source dataset to the target dataset when fine-tuning to a new domain. Preliminary results on the Flowers 102 dataset show that the first proposed method, stochastic domain subset training, improves classification accuracy over standard fine-tuning for one of the two similarity metrics. However, the second method, continuous domain subset training, reduces classification performance.
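The core idea of a stochastic, gradual transition from source to target data can be sketched as follows. This is a minimal illustration only: the abstract does not specify the sampling schedule, so the linear decay of the source-sampling probability and all function and variable names here are assumptions, not the paper's actual method.

```python
import random

def pick_training_example(source_data, target_data, step, total_steps, rng=random):
    """Hypothetical sketch: stochastically select a source or target example.

    The probability of drawing from the source dataset decays linearly
    from 1 to 0 over training, giving a gradual transition toward the
    target domain. (Assumed schedule, not taken from the paper.)
    """
    p_source = max(0.0, 1.0 - step / total_steps)
    pool = source_data if rng.random() < p_source else target_data
    return rng.choice(pool)

# Toy usage: early steps always draw from the source pool,
# the final step always draws from the target pool.
source = ["src_img_%d" % i for i in range(5)]
target = ["tgt_img_%d" % i for i in range(5)]
early = pick_training_example(source, target, step=0, total_steps=100)
late = pick_training_example(source, target, step=100, total_steps=100)
```

Any monotone schedule (e.g. exponential decay) could replace the linear one; the essential property is only that source examples dominate early and target examples dominate late in fine-tuning.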