Abstract—In this paper, a novel multi-source transfer learning method based on multi-similarity ((MS)2TL) is proposed. First, we measure the similarities between domains at two levels, i.e., “domain-domain” and “sample-domain”. With these multi-similarities, (MS)2TL can capture a more accurate relationship between the source domains and the target domain. Then, the knowledge of the source domains is transferred to the target domain based on the smoothness assumption, which requires that the target classifier share similar decision values with the relevant source classifiers on the unlabeled target samples. (MS)2TL increases the chance of finding sources closely related to the target, thereby reducing “negative transfer”, and also imports more knowledge from multiple sources for the target learning. Furthermore, (MS)2TL needs only the pre-learned source classifiers when training the target classifier, which makes it suitable for large datasets. We also employ a sparsity regularizer based on the ε-insensitive loss to enforce the sparsity of the target classifier, whose support vectors come only from the target domain, so that label prediction on any test sample is very fast. (MS)2TL is validated on toy and real-life datasets. Experimental results demonstrate that (MS)2TL enhances learning performance more effectively and stably than the compared methods. Finally, (MS)2TL is also applied to the communication specific emitter identification task, where the results are likewise satisfactory.
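The core ideas in the abstract, i.e., weighting pre-learned source classifiers by their similarity to the target domain and penalizing disagreement with them on unlabeled target samples, can be illustrated with a toy sketch. All function names, the RBF-style kernel-mean similarity measure, and the toy linear classifiers below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def domain_similarity(Xs, Xt, gamma=1.0):
    """Assumed 'domain-domain' similarity: an RBF function of the
    distance between the source and target feature means."""
    d = np.linalg.norm(Xs.mean(axis=0) - Xt.mean(axis=0))
    return np.exp(-gamma * d ** 2)

def smoothness_penalty(f_target, source_preds, weights):
    """Smoothness-style regularizer: similarity-weighted squared
    disagreement between the target decision values and each source
    classifier's decision values on the unlabeled target samples."""
    return sum(w * np.mean((f_target - f_k) ** 2)
               for w, f_k in zip(weights, source_preds))

def eps_insensitive(residuals, eps=0.1):
    """epsilon-insensitive loss: residuals within the eps tube cost
    nothing, which is what yields a sparse set of support vectors."""
    return np.mean(np.maximum(np.abs(residuals) - eps, 0.0))

rng = np.random.default_rng(0)
Xt = rng.normal(0.0, 1.0, (50, 3))                    # unlabeled target samples
X_sources = [rng.normal(mu, 1.0, (50, 3)) for mu in (0.1, 2.0)]

# Normalized relevance weight per source domain.
weights = np.array([domain_similarity(Xs, Xt) for Xs in X_sources])
weights /= weights.sum()

# Toy pre-learned (linear) source classifiers evaluated on target samples.
w_sources = [np.array([0.4, 0.0, 0.1]), np.array([-0.3, 0.2, 0.0])]
source_preds = [Xt @ w_k for w_k in w_sources]

f_t = Xt @ np.array([0.5, -0.2, 0.1])                 # toy target decision values
penalty = smoothness_penalty(f_t, source_preds, weights)
```

Here the source drawn near the target (mean 0.1) receives almost all of the weight, so the disagreement penalty is dominated by the most relevant source, which is the mechanism the abstract credits for reducing negative transfer.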
Index Terms—Transfer learning, multiple source transfer, manifold assumption
Cite: Zhen Liu, Jun-an Yang, Hui Liu, and Wei Wang, “Multi-Similarity Based Multi-Source Transfer Learning and Its Applications,” Journal of Communications, vol. 11, no. 6, pp. 539-549, 2016. doi: 10.12720/jcm.11.6.539-549