Visual odometry in dynamic environments using light weight semantic segmentation

College

Gokongwei College of Engineering

Department/Unit

Manufacturing Engineering and Management

Document Type

Conference Proceeding

Source Title

2019 IEEE 11th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management, HNICEM 2019

Publication Date

11-1-2019

Abstract

Visual odometry is a method by which a robot tracks its position and orientation using a sequence of images. Feature-based visual odometry matches features between frames and estimates the pose of the robot from the matched features. These methods typically assume a static environment and rely on statistical techniques such as RANSAC to remove outliers such as moving objects. However, in highly dynamic environments where the majority of the scene is composed of moving objects, these methods fail. This paper proposes to use the feature-based visual odometry component of ORB-SLAM2 RGB-D and improve it with DeepLabv3-MobileNetV2 semantic segmentation. The semantic segmentation algorithm is used to segment the image, and extracted feature points that lie on pixels of dynamic objects (people) are not tracked. The method is tested on the TUM RGB-D dataset. Evaluation shows that the proposed algorithm performs significantly better in dynamic scenes than the base algorithm, with a reduction in Absolute Trajectory Error (ATE) greater than 92.90% in the fr3w-xyz, fr3w-rpy and fr3-half sequences. Additionally, when comparing the algorithm using DeepLabv3-MobileNetV2 to the computationally intensive DeepLabv3-Xception65, the largest increase in ATE was 27%, while the computation time is three times faster. © 2019 IEEE.
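The core idea described in the abstract, discarding extracted feature points that fall on pixels labelled as a dynamic class (people) before tracking, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `filter_dynamic_keypoints` and `PERSON_CLASS_ID` are hypothetical names, the mask is assumed to be the per-pixel argmax output of DeepLabv3-MobileNetV2 resized to the image resolution, and a standalone OpenCV ORB detector stands in for the ORB-SLAM2 RGB-D front end.

```python
import cv2
import numpy as np

# Hypothetical class id for "person" in the segmentation output
# (assumption: PASCAL VOC labelling, where person = 15).
PERSON_CLASS_ID = 15

def filter_dynamic_keypoints(gray_image, segmentation_mask):
    """Detect ORB keypoints and drop those lying on dynamic-object pixels.

    gray_image: HxW uint8 grayscale frame.
    segmentation_mask: HxW array of per-pixel class ids, assumed to be the
    argmax output of the semantic segmentation network at image resolution.
    Returns the surviving keypoints and their ORB descriptors.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints = orb.detect(gray_image, None)

    static_keypoints = []
    for kp in keypoints:
        x = int(round(kp.pt[0]))
        y = int(round(kp.pt[1]))
        # Keep only keypoints that do not lie on a pixel labelled as a person.
        if segmentation_mask[y, x] != PERSON_CLASS_ID:
            static_keypoints.append(kp)

    # Compute descriptors only for the static-scene keypoints,
    # which are then passed on to feature matching and pose estimation.
    static_keypoints, descriptors = orb.compute(gray_image, static_keypoints)
    return static_keypoints, descriptors
```

In the paper's pipeline the surviving keypoints would feed ORB-SLAM2's matching and pose-estimation stages in place of the full keypoint set; the sketch above only shows the masking step.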

Digital Object Identifier (DOI)

10.1109/HNICEM48295.2019.9073562

Disciplines

Manufacturing | Mechanical Engineering | Robotics

Keywords

Robot vision; Computer vision; Image segmentation

