Heterogeneous Visual Features Fusion via Sparse Multimodal Machine
Hua Wang, Feiping Nie, Heng Huang, Chris Ding.
CVPR - 2013
To better understand, search, and classify image and video information, many visual feature descriptors have been proposed to describe elementary visual characteristics such as shape, color, and texture. How to integrate these heterogeneous visual features and identify the important ones for a specific vision task has become an increasingly critical problem. In this paper, we propose a novel Sparse Multimodal Learning (SMML) approach that integrates such heterogeneous features using joint structured sparsity regularizations to learn feature importance for the vision task from both group-wise and individual points of view. A new optimization algorithm is also introduced to solve the non-smooth objective, with a rigorous proof of global convergence. We applied our SMML method to five widely used object categorization and scene understanding image data sets, for both single-label and multi-label image classification tasks. For each data set we integrate six popular types of image features. Compared to existing scene and object categorization methods using either a single modality or multiple modalities of features, our approach consistently achieves better performance.
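The joint structured sparsity idea can be illustrated with a sparse-group-lasso-style penalty: an l1 term encourages sparsity over individual features, while a group l2 term encourages sparsity over whole modality blocks. The sketch below, in Python/NumPy, pairs such a penalty with a simple least-squares loss solved by proximal gradient descent; it is only an illustration of the general technique, not the paper's exact SMML objective or optimization algorithm, and the names (prox_sparse_group, fit_sparse_multimodal, lam1, lam2) are hypothetical.

import numpy as np


def prox_sparse_group(W, groups, lam1, lam2, step):
    """Proximal operator of step*(lam1*||W||_1 + lam2*sum_g ||W_g||_F).

    For this sparse-group penalty the prox factorizes: element-wise
    soft-thresholding (individual sparsity) followed by group-wise
    shrinkage (modality-wise sparsity).
    """
    # individual (element-wise) soft-thresholding
    V = np.sign(W) * np.maximum(np.abs(W) - step * lam1, 0.0)
    # group-wise (per-modality) shrinkage
    for g in groups:
        norm = np.linalg.norm(V[g])
        if norm > 0.0:
            V[g] = V[g] * max(0.0, 1.0 - step * lam2 / norm)
    return V


def fit_sparse_multimodal(X, Y, groups, lam1=0.01, lam2=0.05, lr=0.1, iters=500):
    """Least-squares classifier with a joint group + individual sparsity
    penalty, trained by proximal gradient descent (illustrative only).
    X concatenates all modality feature blocks column-wise; `groups`
    lists the feature indices belonging to each modality."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) / n   # gradient of the smooth squared loss
        W = prox_sparse_group(W - lr * grad, groups, lam1, lam2, lr)
    return W


# toy usage: two "modalities" with 5 and 3 features, 3 classes
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
Y = np.eye(3)[rng.integers(0, 3, size=100)]    # one-hot labels
groups = [np.arange(0, 5), np.arange(5, 8)]    # feature indices per modality
W = fit_sparse_multimodal(X, Y, groups)
print("per-modality weight norms:", [float(np.linalg.norm(W[g])) for g in groups])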
Cite this paper
MLA
Wang, Hua, et al. "Heterogeneous visual features fusion via sparse multimodal machine." Proceedings of the IEEE conference on computer vision and pattern recognition. 2013.
BibTeX
@inproceedings{wang2013heterogeneous,
  title={Heterogeneous visual features fusion via sparse multimodal machine},
  author={Wang, Hua and Nie, Feiping and Huang, Heng and Ding, Chris},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={3097--3102},
  year={2013}
}