Overview

AuBi needs a system to localize itself in its environment, plan paths, control the drive system, and avoid obstacles. The navigation system handles these requirements and consists of an Nvidia Jetson Nano, a 360-degree lidar sensor, and three ultrasonic sensors. Combined, these devices are capable of mapping AuBi's surrounding environment and detecting both stationary and moving obstacles.


Current Research

An early step in designing the navigation system was to select sensors that would enable the robot to fulfill its requirements. The lidar scanner provides 360-degree depth scans along a horizontal plane, allowing AuBi to generate a "blueprint" of its surroundings. To provide a coarse localization scheme, AuBi will also measure the signal strength of nearby Wi-Fi routers. Objects may exist below the lidar's plane of view; to account for this, three ultrasonic sensors will be placed on the front shell. An Nvidia Jetson Nano will act as the brains of the navigation system. The Jetson Nano is a small yet capable Linux computer, well suited for AuBi. It will run the Robot Operating System (ROS), a robot software framework that handles communication between the different hardware devices and will manage the localization and movement software. This infrastructure will also support future development.
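
As a rough sketch of how one of these sensors would be exposed to the rest of the system through ROS, the example below publishes an ultrasonic reading stream as a standard sensor_msgs/Range message using rospy. The node name, topic name, frame name, and the read_distance_m() driver function are placeholders for illustration, not part of the finalized design.

#!/usr/bin/env python
# Minimal sketch of an ultrasonic publisher node (ROS 1 / rospy).
# Assumes a placeholder driver function read_distance_m() that returns
# the measured distance in meters from one front-shell ultrasonic sensor.
import rospy
from sensor_msgs.msg import Range

def read_distance_m():
    # Placeholder: replace with the actual GPIO/serial driver call.
    return 0.5

def main():
    rospy.init_node('ultrasonic_front_center')        # hypothetical node name
    pub = rospy.Publisher('ultrasound/front_center', Range, queue_size=10)
    rate = rospy.Rate(10)                              # publish at 10 Hz

    msg = Range()
    msg.radiation_type = Range.ULTRASOUND
    msg.field_of_view = 0.26   # ~15 degrees, typical for hobby ultrasonic sensors
    msg.min_range = 0.02       # meters
    msg.max_range = 4.0        # meters

    while not rospy.is_shutdown():
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = 'ultrasound_front_center_link'  # hypothetical frame
        msg.range = read_distance_m()
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()

Publishing the ultrasonic readings on a topic like this lets the rest of the navigation software treat them the same way it treats the lidar scan.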


Design Report

Navigation Design Report (download)


Future Research

So far, work has focused on developing the ROS software packages. Since there is no physical robot to test the software on yet, evaluation has been happening in simulation. Before the navigation system can be tested on hardware, the drive system has to be completed; once it is, work can begin on integrating the navigation system.

The timeline for the navigation system is therefore dependent on the development of the drive system. Currently, the configuration files needed for the ROS navigation stack to operate are being developed.
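
Once those configuration files exist, one simple way to exercise the navigation stack in simulation is to send it a goal pose through the move_base action interface, as in the sketch below. The frame name and goal coordinates are arbitrary values chosen for illustration.

#!/usr/bin/env python
# Sketch: send one navigation goal to move_base via actionlib (ROS 1).
# The frame name and coordinates are illustrative only.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def main():
    rospy.init_node('send_test_goal')

    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()                    # block until move_base is running

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'    # goal expressed in the map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0      # 1 m from the map origin
    goal.target_pose.pose.orientation.w = 1.0   # face along +x

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo('Navigation result state: %d', client.get_state())

if __name__ == '__main__':
    main()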

Literature Review

Lidar Room Mapping

https://www.rosroboticslearning.com/ros-navigation-stack

This project demonstrates using lidar to create a map of a room and then navigate through it to user-selected locations. It uses the same technology we would be using for our robot, but differs in scale and only navigates in a controlled environment. One takeaway from this example is that mapping would have to happen when no one is present in the environment.

 

Lidar Localization

https://www.youtube.com/watch?v=0yICGqriN3g&t=8s&ab_channel=RoboticsandROSLearning

This project demonstrates how different sensor inputs can be fused together for localization. The example combines an IMU with the lidar sensor so that the robot can still localize correctly after a loss of traction. Our robot will likely need a similar system to account for wheel slipping. In addition, more sensors will be needed to detect objects beneath the lidar's plane of view; this project demonstrates that integrating additional sensors is possible in the ROS environment.
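
As a simplified illustration of the same principle (using wheel odometry plus an IMU rather than the lidar-plus-IMU setup in the video), the sketch below dead-reckons a 2D pose where distance comes from the wheel encoders but heading comes from the IMU, so a slipping wheel corrupts the estimate far less than encoder-only odometry would. Both reader functions are hypothetical placeholders.

import math

# Simplified illustration of fusing wheel odometry with an IMU heading.
# Distance traveled comes from the wheel encoders, but heading comes from
# the IMU, so a slipping wheel does not rotate the estimated pose.
# read_encoder_distance_m() and read_imu_yaw_rad() are placeholder drivers.
def fuse_step(pose, read_encoder_distance_m, read_imu_yaw_rad):
    x, y, _ = pose
    d = read_encoder_distance_m()   # distance since the last step (meters)
    yaw = read_imu_yaw_rad()        # absolute heading from the IMU (radians)
    x += d * math.cos(yaw)
    y += d * math.sin(yaw)
    return (x, y, yaw)

# Example: one update step starting from the origin.
pose = (0.0, 0.0, 0.0)
pose = fuse_step(pose, lambda: 0.10, lambda: math.radians(5.0))
print(pose)   # approximately (0.0996, 0.0087, 0.0873)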

 

University of Michigan Lidar Robot Competition

https://april.eecs.umich.edu/magic/about/

This University of Michigan robot was built for a competition and used lidar and cameras together for mapping and obstacle avoidance. The configuration is similar to what we would use on our robot; however, this robot used a single-point lidar sensor mounted on a servo instead of a 360-degree unit. The University of Michigan team won the competition.

https://april.eecs.umich.edu/media/pdfs/olson2013cacm.pdf

 

Localization Algorithm

https://www.ri.cmu.edu/pub_files/pub1/dellaert_frank_1999_2/dellaert_frank_1999_2.pdf

This paper describes the Monte Carlo Localization algorithm used for localizing the robot with a particle filter; the approach can accept multiple sensor inputs.
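
At its core the algorithm is a particle filter: each particle is a candidate pose, particles are shifted by the commanded motion with added noise, re-weighted by how well a range measurement matches the map, and then resampled. The toy sketch below runs that loop on a one-dimensional corridor with a single wall; every value in it is illustrative and unrelated to AuBi's actual implementation.

import random
import math

# Toy Monte Carlo Localization on a 1D corridor.
# Each particle is a candidate x position; the robot measures the
# distance to a wall at x = WALL_X, so the expected range is WALL_X - x.
WALL_X = 10.0
MOTION_NOISE = 0.05     # std dev of the motion model (m)
MEAS_NOISE = 0.2        # std dev of the range sensor model (m)
N = 500                 # number of particles

def predict(particles, moved):
    # Shift every particle by the commanded motion plus Gaussian noise.
    return [p + moved + random.gauss(0.0, MOTION_NOISE) for p in particles]

def weight(particles, measured_range):
    # Likelihood of the measurement under a Gaussian sensor model.
    w = [math.exp(-((WALL_X - p) - measured_range) ** 2 / (2 * MEAS_NOISE ** 2))
         for p in particles]
    total = sum(w) or 1e-12
    return [wi / total for wi in w]

def resample(particles, weights):
    # Draw a new particle set with probability proportional to weight.
    return random.choices(particles, weights=weights, k=len(particles))

# Simulate: the true robot starts at x = 2 m and moves 0.5 m per step.
particles = [random.uniform(0.0, WALL_X) for _ in range(N)]
true_x = 2.0
for _ in range(10):
    true_x += 0.5
    particles = predict(particles, 0.5)
    z = (WALL_X - true_x) + random.gauss(0.0, MEAS_NOISE)  # noisy range reading
    particles = resample(particles, weight(particles, z))

print('true x = %.2f, estimated x = %.2f' % (true_x, sum(particles) / N))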