Visual Place Recognition via Robust L2-Norm Distance Based Holism and Landmark Integration

Kai Liu, Hua Wang, Fei Han, Hao Zhang

AAAI - 2019

Visual place recognition is essential for large-scale simultaneous localization and mapping (SLAM). Long-term robot operation at different times of day and across months and seasons introduces new challenges due to significant variations in environment appearance. In this paper, we propose a novel method to learn a location representation that integrates the semantic landmarks of a place with its holistic representation. To promote the robustness of our new model against the drastic appearance variations caused by long-term visual changes, we formulate our objective using non-squared L2-norm distances, which leads to a difficult optimization problem: minimizing the ratio of the L2,1-norms of two matrices. To solve this objective, we derive a new efficient iterative algorithm whose convergence is rigorously guaranteed by theory. In addition, because our solution is strictly orthogonal, the learned location representations have better place recognition capabilities. We evaluate the proposed method on two large-scale benchmark data sets, CMU-VL and Nordland. Experimental results validate the effectiveness of our new method in long-term visual place recognition applications.
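To make the objective concrete, the L2,1-norm of a matrix is the sum of the Euclidean norms of its rows, and the abstract describes minimizing a ratio of two such norms over an orthogonal projection. The sketch below, in NumPy, evaluates a ratio of this general form; the matrix names and shapes are illustrative assumptions, not the paper's actual variables or algorithm.

```python
import numpy as np

def l21_norm(M):
    # L2,1-norm: sum of the Euclidean (L2) norms of the rows of M
    return np.sum(np.linalg.norm(M, axis=1))

# Hypothetical instance of a ratio-of-L2,1-norms objective: evaluate
# ||A W||_{2,1} / ||B W||_{2,1} at a candidate projection W with
# orthonormal columns (A, B, and the dimensions are made up here).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
B = rng.standard_normal((50, 10))
W, _ = np.linalg.qr(rng.standard_normal((10, 3)))  # orthonormal columns
ratio = l21_norm(A @ W) / l21_norm(B @ W)
```

Because the norms are not squared, row-wise residuals enter the objective linearly rather than quadratically, which is what gives this family of formulations its robustness to outlying rows (e.g., places with drastic appearance change).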

Cite this paper

MLA
Liu, Kai, et al. "Visual Place Recognition via Robust L2-Norm Distance Based Holism and Landmark Integration." Proceedings of the AAAI Conference on Artificial Intelligence, 2019.
BibTeX
@inproceedings{liu2019visual,
  title={Visual Place Recognition via Robust L2-Norm Distance Based Holism and Landmark Integration},
  author={Liu, Kai and Wang, Hua and Han, Fei and Zhang, Hao},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2019}
}