Wali-turtlebot
Wali turtlebot is a self-driving turtlebot that performs scene segmentation on RGBD data to do path planning. The process is shown in the following picture.
1. Hardware
The hardware devices we use:
- Turtlebot2
- HIKVISION wireless camera (stereo)
- Microsoft Kinect v1
- HiSilicon970 (arm)
- others: a small host computer, Intel RealSense R200
We currently use a Kinect v1 to obtain RGB and depth data; in the future we will replace it with a stereo pair of two HIKVISION wireless cameras, because the Kinect cannot produce dense depth images. The following picture shows the evolution of the Wali turtlebot.
2. Technology
The core technology we use:
- Bilateral Semantic Segmentation on RGBD data (BiSeNet-RGBD, 20fps on Nvidia Quadro P5000)
- ROS robot nodes communication mechanism
- Turtlebot motion control using rospy (forward, left, right, back, and smooth speed-up)
- Depth-based direction selection when the neural network is not used (choose the direction with the largest depth)
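The depth-based fallback can be sketched as follows. This is an illustrative sketch, not the on-robot code: splitting the image into three vertical sectors and comparing sector means are our assumptions about how "the direction with the largest depth" could be chosen.

```python
import numpy as np

def choose_direction(depth, n_sectors=3):
    """Pick the direction with the most free space ahead.

    Splits the depth image into `n_sectors` vertical sectors and returns
    the index of the sector with the largest mean depth.
    For n_sectors=3: 0 = left, 1 = center, 2 = right.
    """
    # Kinect reports 0 for pixels with no depth reading; mask them out.
    valid = np.where(depth > 0, depth, np.nan)
    sectors = np.array_split(valid, n_sectors, axis=1)
    means = [np.nanmean(s) for s in sectors]
    return int(np.argmax(means))
```

On the robot, the chosen index would then be translated into a turn command for the motion-control node.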
2.1 BiSeNet-RGBD
BiSeNet-RGBD architecture is shown below.
BiSeNet-RGBD is trained on the Princeton SUN-RGBD dataset. It can currently predict 37 classes; in the future we will annotate some classes specific to our practical scenario using labelme.
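As a minimal illustration of how a 37-class prediction could feed path planning, the sketch below extracts a floor mask from a predicted label map. The class index here is a made-up placeholder; the real index depends on the SUN-RGBD label order used at training time.

```python
import numpy as np

# Hypothetical class index for "floor" in the 37-class SUN-RGBD label map;
# the actual value must be looked up in the training label order.
FLOOR_ID = 2

def drivable_mask(pred):
    """Return a binary mask of pixels predicted as floor.

    `pred` is an (H, W) array of per-pixel class IDs output by the
    segmentation network; the mask marks candidate free space.
    """
    return (pred == FLOOR_ID).astype(np.uint8)
```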
SUN-RGBD 37-label map
Test scenes: there are 10 scenes in total, 4 indoor and 6 outdoor, which are used to test model performance.
- 4 indoor scenes
- 6 outdoor scenes
- stereo vision
We also test our model on RGBD data obtained by the stereo camera. The test results are shown in the video below.
2.2 Wali turtlebot control system
Using this architecture, we have run some driving tests in a real scenario.
The test video is shown below.
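The smooth speed-up mentioned in section 2 can be approximated with a simple velocity ramp. This is a sketch of the idea, not the actual node code; in a real rospy node the returned value would be published as the `linear.x` of a `geometry_msgs/Twist` message at a fixed rate.

```python
def ramp_velocity(current, target, step=0.05):
    """Move `current` toward `target` by at most `step` per control tick.

    Calling this at a fixed rate (e.g. 10 Hz) and publishing the result
    as the Twist linear velocity yields a smooth acceleration instead of
    a sudden jump in speed.
    """
    if abs(target - current) <= step:
        return target
    return current + step if target > current else current - step
```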