Visual Place Recognition via Robust L2-Norm Distance Based Holism and Landmark Integration
Kai Liu, Hua Wang, Fei Han, Hao Zhang
AAAI - 2019
Visual place recognition is essential for large-scale simultaneous localization and mapping (SLAM). Long-term robot operation across different times of day, months, and seasons introduces new challenges from significant variations in environmental appearance. In this paper, we propose a novel method to learn a location representation that integrates the semantic landmarks of a place with its holistic representation. To make our model robust to the drastic appearance variations caused by long-term visual changes, we formulate our objective using non-squared L2-norm distances, which leads to a difficult optimization problem: minimizing the ratio of the L2,1-norms of matrices. To solve this objective, we derive a new, efficient iterative algorithm whose convergence is rigorously guaranteed by theory. In addition, because our solution is strictly orthogonal, the learned location representations have better place recognition capabilities. We evaluate the proposed method on two large-scale benchmark data sets, CMU-VL and Nordland. Experimental results validate the effectiveness of our new method in long-term visual place recognition applications.
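The L2,1-norm at the core of the abstract's objective is the sum of the L2 norms of a matrix's rows; using it non-squared reduces the influence of outlier rows, which underlies the claimed robustness. Below is a minimal illustrative sketch (not the paper's algorithm): it computes the L2,1-norm and the kind of ratio objective the abstract describes. The matrix names `A` and `B` are placeholder assumptions for exposition only.

```python
import numpy as np

def l21_norm(M):
    """L2,1-norm: sum of the L2 norms of the rows of M."""
    return float(np.sum(np.linalg.norm(M, axis=1)))

def l21_ratio(A, B):
    """Ratio of L2,1-norms, the general form of objective the
    abstract describes (A and B are illustrative placeholders)."""
    return l21_norm(A) / l21_norm(B)

# Example: rows (3, 4) and (5, 12) have L2 norms 5 and 13.
M = np.array([[3.0, 4.0], [5.0, 12.0]])
print(l21_norm(M))  # 18.0
```

Because each row contributes its norm (not its squared norm), a single corrupted row grows the objective linearly rather than quadratically, which is why such formulations are less sensitive to outliers than squared-Frobenius ones.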
Cite this paper
MLA
Liu, Kai, et al. "Visual Place Recognition via Robust L2-Norm Distance Based Holism and Landmark Integration." Proceedings of the AAAI Conference on Artificial Intelligence, 2019.
BibTeX
@inproceedings{liu2019visual, title={Visual Place Recognition via Robust L2-Norm Distance Based Holism and Landmark Integration}, author={Liu, Kai and Wang, Hua and Han, Fei and Zhang, Hao}, booktitle={Proceedings of the AAAI Conference on Artificial Intelligence}, year={2019} }