Dyadic Transfer Learning for Cross-Domain Image Classification

Hua Wang, Feiping Nie, Heng Huang, Chris Ding.

ICCV - 2011

Because manual image annotation is both expensive and labor intensive, in practice we often do not have sufficient labeled images to train an effective classifier for new image classification tasks. Although multiple labeled image data sets are publicly available for a number of computer vision tasks, a simple mixture of them cannot achieve good performance due to the heterogeneous properties and structures across different data sets. In this paper, we propose a novel nonnegative matrix tri-factorization based transfer learning framework, called the Dyadic Knowledge Transfer (DKT) approach, to transfer cross-domain image knowledge to new computer vision tasks such as classification. An efficient iterative algorithm to solve the proposed optimization problem is introduced. We evaluate the proposed approach on two benchmark image data sets that simulate real-world cross-domain image classification tasks. Promising experimental results demonstrate the effectiveness of the proposed approach.
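
As a rough illustration of the core building block mentioned in the abstract, below is a minimal sketch of plain nonnegative matrix tri-factorization (X ≈ F S Gᵀ) fit with standard multiplicative updates for the Frobenius objective. It is not the paper's DKT formulation, which additionally couples factorizations across source and target domains; the function and variable names here are purely illustrative.

import numpy as np

def tri_factorize(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
    """Approximate a nonnegative matrix X (m x n) as F @ S @ G.T,
    with F (m x k1), S (k1 x k2), G (n x k2) all nonnegative,
    using multiplicative updates for ||X - F S G^T||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k1))
    S = rng.random((k1, k2))
    G = rng.random((n, k2))
    for _ in range(n_iter):
        # G <- G * (X^T F S) / (G S^T F^T F S)
        G *= (X.T @ F @ S) / (G @ S.T @ F.T @ F @ S + eps)
        # F <- F * (X G S^T) / (F S G^T G S^T)
        F *= (X @ G @ S.T) / (F @ S @ G.T @ G @ S.T + eps)
        # S <- S * (F^T X G) / (F^T F S G^T G)
        S *= (F.T @ X @ G) / (F.T @ F @ S @ G.T @ G + eps)
    return F, S, G

if __name__ == "__main__":
    X = np.abs(np.random.rand(50, 40))  # synthetic nonnegative data matrix
    F, S, G = tri_factorize(X, k1=5, k2=4)
    err = np.linalg.norm(X - F @ S @ G.T) / np.linalg.norm(X)
    print(f"relative reconstruction error: {err:.3f}")

In a transfer-learning setting of the kind the paper describes, such factorizations of the source- and target-domain data matrices would share factors so that knowledge learned on labeled source images constrains the factorization of the target domain.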

Cite this paper

MLA
Wang, Hua, et al. "Dyadic transfer learning for cross-domain image classification." 2011 International Conference on Computer Vision. IEEE, 2011.
BibTeX
@inproceedings{wang2011dyadic,
  title={Dyadic transfer learning for cross-domain image classification},
  author={Wang, Hua and Nie, Feiping and Huang, Heng and Ding, Chris},
  booktitle={2011 International Conference on Computer Vision},
  pages={551--556},
  year={2011},
  organization={IEEE}
}