Vision-Based Autonomous Navigation of a Mobile Robot
This module is really a blend of robotics and computer vision. We will be exploring different reactive algorithms for performing collision-free navigation through the world, and maybe even vision-based tracking of a specifically colored target. The baseline set of activities will rely on the Turtlebot learning modules (AKA Turtlebot Adventures). This adventure is for people who are open to learning the Python programming language and the Robot Operating System (ROS), or who already have some knowledge of either or both.
Module #1: Wandering
This module pretty much follows the standard pipeline from the Turtlebot Adventures.
Week #1: Basic Operation
This module assumes no prior experience, so the first week is about getting the basics covered in terms of simply running the Turtlebot mobile robot: connecting to the Turtlebot and basic tele-operation of it. To answer some of the questions and to be able to see what's going on from the command line, you should get familiar with the basic command-line ROS commands.
- Demonstrate that you can connect to the Turtlebot, launch the core services, and tele-operate it.
- Answer the questions in the First Run adventure.
Week #2: Drive Commands and Visual Sensing
Now that we have some grasp on the basics of the Turtlebot, we want to understand how to both actuate and sense, the latter because that's the purpose of the class, and the former because this module is about deciding how to actuate based on the sensed information.
- Complete the Basic Movements module and answer the questions. Also, demo the robot movement during office hours.
- Complete the first two bullets of the Sensing the World adventure. Demo that you can subscribe to the visual sensor and to the depth sensor, and properly display both of their messages (a minimal subscription sketch follows this list).
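If a starting point helps, here is a minimal rospy sketch that subscribes to both streams and displays them with OpenCV. The topic names (/camera/rgb/image_raw, /camera/depth/image_raw) and the depth display scaling are assumptions; check rostopic list on your Turtlebot for the actual names.

```python
# Minimal sketch: subscribe to the RGB and depth image topics and display both.
# Topic names and the 5 m depth scaling below are assumptions, not fixed values.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def rgb_callback(msg):
    # Convert the ROS Image message to an OpenCV BGR image and show it.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    cv2.imshow('rgb', frame)
    cv2.waitKey(1)

def depth_callback(msg):
    # Depth images arrive as floating-point meters; scale roughly into [0, 1] for display.
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
    cv2.imshow('depth', depth / 5.0)
    cv2.waitKey(1)

if __name__ == '__main__':
    rospy.init_node('sensor_viewer')
    rospy.Subscriber('/camera/rgb/image_raw', Image, rgb_callback)
    rospy.Subscriber('/camera/depth/image_raw', Image, depth_callback)
    rospy.spin()
```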
Week #3: A Safety Finite State Machine
Here, we will explore how to implement some safety checks in the robot en route to creating visual navigation algorithms for the Turtlebot. Importantly, these involve creating a finite state machine (FSM) for the operating mode of the Turtlebot, then implementing that same FSM in Python (a minimal sketch appears after the checklist below).
- Complete the first two bullets of the Sensing the Turtlebot adventure.
- Demo the robot blindly navigating around based on its bump sensors.
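As a reference for what such an FSM might look like in code, here is a minimal sketch of a blind bump-and-turn FSM. It assumes the Kobuki base topics and message type (/mobile_base/events/bumper with kobuki_msgs/BumperEvent, /mobile_base/commands/velocity); the state durations and speeds are arbitrary placeholders.

```python
# Minimal sketch of a safety FSM driven by the bump sensors (Kobuki base assumed).
import rospy
from geometry_msgs.msg import Twist
from kobuki_msgs.msg import BumperEvent

FORWARD, BACKUP, TURN = range(3)

class SafetyFSM(object):
    def __init__(self):
        self.state = FORWARD
        self.state_start = rospy.Time.now()
        self.pub = rospy.Publisher('/mobile_base/commands/velocity', Twist, queue_size=1)
        rospy.Subscriber('/mobile_base/events/bumper', BumperEvent, self.on_bump)

    def on_bump(self, msg):
        # Any pressed bumper forces a transition into the back-up state.
        if msg.state == BumperEvent.PRESSED and self.state == FORWARD:
            self.transition(BACKUP)

    def transition(self, new_state):
        self.state = new_state
        self.state_start = rospy.Time.now()

    def spin(self):
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            cmd = Twist()
            elapsed = (rospy.Time.now() - self.state_start).to_sec()
            if self.state == FORWARD:
                cmd.linear.x = 0.2          # cruise forward
            elif self.state == BACKUP:
                cmd.linear.x = -0.1         # back away from the obstacle
                if elapsed > 1.0:
                    self.transition(TURN)
            elif self.state == TURN:
                cmd.angular.z = 0.8         # rotate in place to clear the obstacle
                if elapsed > 1.5:
                    self.transition(FORWARD)
            self.pub.publish(cmd)
            rate.sleep()

if __name__ == '__main__':
    rospy.init_node('safety_fsm')
    SafetyFSM().spin()
```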
Week #4: Vision-Based Wandering
Here, we will explore the most basic form of navigation, wandering around aimlessly without hitting things (hopefully). Limitations in the sensor field of view mean that some collisions are inevitable under the right obstacle geometries.
- Complete the Wandering adventure, turn in the coding mistake regarding the sectors, and demo the robot wandering around.
Module #2: Follow the Gap
This module will explore an early sensor-based navigation method, known as Follow the Gap. It was designed to work with laser scan data, and works by identifying gaps in the local polar space around the robot. As far as obstacle-avoiding navigation strategies go, it's one of the more basic algorithms. The prior sector-based approach is modified to dynamically identify navigable sectors, then to select one to navigate through. Let's work this out and get to know ROS a little better.
Week #1: Gap Analysis
Read the paper to get a sense for what is involved in calculating the gap array and finding the maximum gap. Implement the procedure for doing so, and using select depth images from obstacle-avoiding scenarios, turn in the gap array and maximum gap outputs. As a demonstration, you will work with rviz to create a visualization of the gap array. This step and its verification are to make sure that the gap calculations are indeed correct, plus that processing of the NaN values is done properly. Turn in the pseudo-code associated with the procedure for computing the gap array.
Visualization: Visualizing ROS information is done through rviz. Use rviz on the laptop to open up a visualization of the robot sensor data. Since we are still using the laser scan topic, visualize the laser scan data by adding the topic to the set of displayed topics. Your processing of the gap array should create a published topic called gapscan that is of the same type as the laser scan topic (and even has the same internal parameters found in that topic). The difference is that gaps will have the scan data set to the max value, and obstacle regions of the scan will be set to the min value (or some small value).
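A sketch of one way to do this republishing is below, assuming the scan arrives on the scan topic and using an arbitrary 1.5 m threshold to decide gap versus obstacle; the internal parameters are copied straight from the incoming LaserScan message.

```python
# Sketch: republish the laser scan as a "gapscan" of the same message type,
# with gap sectors pushed to range_max and obstacle sectors pushed to a small value.
import rospy
import numpy as np
from sensor_msgs.msg import LaserScan

GAP_THRESHOLD = 1.5   # meters; readings beyond this count as gap (placeholder value)

pub = None

def on_scan(scan):
    ranges = np.array(scan.ranges, dtype=np.float32)
    # Treat NaNs as "far" here; spurious ones get handled in the decision filtering step.
    far = np.isnan(ranges) | (ranges > GAP_THRESHOLD)

    gap = LaserScan()
    gap.header = scan.header
    # Copy the internal parameters so rviz lines the two scans up.
    gap.angle_min, gap.angle_max = scan.angle_min, scan.angle_max
    gap.angle_increment = scan.angle_increment
    gap.time_increment, gap.scan_time = scan.time_increment, scan.scan_time
    gap.range_min, gap.range_max = scan.range_min, scan.range_max
    # Gaps go to the max value, obstacle regions to a small value just above range_min.
    gap.ranges = np.where(far, scan.range_max, scan.range_min + 0.05).tolist()
    pub.publish(gap)

if __name__ == '__main__':
    rospy.init_node('gapscan_publisher')
    pub = rospy.Publisher('gapscan', LaserScan, queue_size=1)
    rospy.Subscriber('scan', LaserScan, on_scan)
    rospy.spin()
```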
Not a Number (NaN) values. There will be some issues with noise. Sometimes a NaN will appear due to a missed reading or due to long distance. A NaN by itself, surrounded by good data, is probably not really a NaN, so spurious NaNs should be filtered out. Likewise, if there are large values or many NaNs together that really represent something far away, but there is a random close-ish reading among them, then the gap will be split into two. One way to handle this, which we will explore in the context of segmentation, is to not try to filter the actual data, but to filter the decision. So, compute the decision of gap versus no-gap based on distance, then filter the resulting 1D array of true and false values (or 1 and 0 values). One way would be to use a median filter or some other form of filter.
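A sketch of the filter-the-decision idea, using a median filter from scipy; the threshold and kernel size are placeholder values.

```python
# Sketch: filter the gap/no-gap decision rather than the raw ranges, so that
# isolated NaNs and one-off close readings do not split or fake a gap.
import numpy as np
from scipy.signal import medfilt

def gap_decision(ranges, threshold=1.5, kernel=5):
    ranges = np.asarray(ranges, dtype=np.float32)
    # Raw decision: 1 where the reading is beyond the threshold (or NaN), else 0.
    decision = (np.isnan(ranges) | (ranges > threshold)).astype(np.float32)
    # Median filtering the 0/1 array removes isolated flips without moving gap edges much.
    return medfilt(decision, kernel_size=kernel) > 0.5
```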
A sample output is the following:
Note that for this activity, you are being asked to return both far and near values for the gap scan, while this image only has the far values and uses NaNs for the near values (which indicate collision), so only the gaps are visualized. You are welcome to use this form of output too, if you'd like. The red is the original scan data and the green is the gap data. I believe the color was controlled using rviz.
Hints: I don't advocate looking at this first, but here is a Matlab implementation of the Follow the Gap method. You should really try to get it working on your own. Write down the pseudo-code and see if you can convert it to actual Python code.
Week #2: Gap Selection and Control
Given that the gap array and maximum gap have been computed, finish things off with the gap center angle computation, followed by the final heading angle computation. Use these to implement the algorithm.
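Here is a small sketch of those two computations. The gap center is approximated by the angular midpoint of the maximum gap (the paper computes it geometrically from the two gap edge points), and the final heading blends the gap angle and the goal angle, weighted by the closest-obstacle distance, in the spirit of the paper's final heading formula; the weights alpha and beta and the index-based interface are assumptions.

```python
# Sketch: gap center angle (angular midpoint approximation) and final heading angle.
import numpy as np

def gap_center_angle(gap_start, gap_end, angle_min, angle_increment):
    # gap_start/gap_end are indices into the scan array bounding the maximum gap.
    mid = 0.5 * (gap_start + gap_end)
    return angle_min + mid * angle_increment

def final_heading(theta_gap, theta_goal, d_min, alpha=2.0, beta=1.0):
    # d_min is the distance to the closest obstacle; as it shrinks,
    # the gap term dominates and the robot prioritizes avoidance over the goal.
    w = alpha / max(d_min, 1e-3)
    return (w * theta_gap + beta * theta_goal) / (w + beta)
```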
Week #3: Hallway Navigation
Up to now, you have not really been contemplating an end-goal or objective for travelling. Not that this problem is going to be any better, but let's say that the robot's goal is to move 15 units of distance down the hallway from its start position (I don't know what units the odometry and mapping system uses; I think it is meters). Incorporate this end-goal by subscribing to the Turtlebot's internal frame estimation and using where it thinks it is to identify the vector, or angle, to the goal. Incorporate it into the follow-the-gap method as published.
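A sketch of pulling the goal direction out of odometry is below. It assumes the odom topic (nav_msgs/Odometry) and a goal placed 15 m straight ahead in the odom frame; adjust the goal coordinates to match your actual start pose and hallway direction.

```python
# Sketch: subscribe to odometry and compute the angle to an assumed hallway goal.
import rospy
import numpy as np
from tf.transformations import euler_from_quaternion
from nav_msgs.msg import Odometry

GOAL = np.array([15.0, 0.0])   # assumed goal location in the odom frame

def on_odom(msg):
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])[2]
    # Angle to the goal, expressed relative to the robot's current heading.
    to_goal = GOAL - np.array([p.x, p.y])
    theta_goal = np.arctan2(to_goal[1], to_goal[0]) - yaw
    theta_goal = np.arctan2(np.sin(theta_goal), np.cos(theta_goal))  # wrap to [-pi, pi]
    rospy.loginfo('distance %.2f m, goal angle %.2f rad',
                  np.linalg.norm(to_goal), theta_goal)

if __name__ == '__main__':
    rospy.init_node('goal_angle')
    rospy.Subscriber('odom', Odometry, on_odom)
    rospy.spin()
```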
Exploration & Deliverable: Demo the robot moving down the hallway towards its goal, reacting to static obstacles. Do the same for slightly dynamic obstacles. Comment on how robust the algorithm appears to be. What would you want to fix about it if you could? How would you go about doing that?
Week #4: Consistent Operation
You may have found the gap method to sometimes jitter, sometimes crash, sometimes just do slightly the wrong thing. One reason was using too small of a threshold for the distance, so that the robot would react too late, while at the same time having a miserable viewable area for maneuvering. Making the gap threshold distance larger helps with that, but the robot may still exhibit some of the behavior above (just less frequently or less drastically). The persistence of those behaviors is a function of the noise in the sensor, the small field of view of the camera, and the lack of memory regarding parts of the world that leave the field of view. Here, we want to incorporate some kind of memory into the algorithm for smoother behavior, and better operation when navigating through a gap.
Create a state machine for the system as it navigates the gaps. There will be a gap-scan-and-go-to-goal behavior and a go-through-gap behavior; these may map to more than two states (a rough code sketch follows below). Roughly, we have the following:
- While no gap is perceived, go to the goal $p_{goal}$ while continually evaluating for a gap.
- When a gap is perceived, instantiate a new goal state being the gap center location, $p_{gap}$, then drive to that goal state.
- The gap line creates a partition of the world into two halves, in front of the line and behind the line. Your robot should start in front of the line, and as it passes through the gap it transitions to being behind the line. That line can be written as an equation of the form $n_1 x + n_2 y = 0$, where being in front of the line means that the left-hand side evaluates to a negative value, and being behind the line means it evaluates to a positive value. Though the goal is the gap, the objective should be to drive past the gap by some distance threshold, so that $n_1 x + n_2 y > d_\tau$. Then it should start to drive towards the real goal again.
- One way to drive through the gap is to set up a secondary goal position that is beyond the goal along the normal $\vec n = (n_1, n_2)$ to the line by a distance $2 d_\tau$, as in $p_{past} = p_{gap} + 2 d_\tau \vec n$. When you get to the transition line (from negative to positive), then switch to this new goal and drive towards it until going a distance of $d_\tau$ past the transition line. Switch back to the go to goal state.
Hopefully that makes sense.
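To make it concrete, here is a rough sketch of the state logic above, kept separate from the ROS plumbing. The state names, thresholds, and the update interface are illustrative choices, not part of the published method; the signed-distance test matches the $n_1 x + n_2 y$ expression when coordinates are taken relative to the gap point.

```python
# Sketch: gap-navigation state machine returning the current target position.
import numpy as np

GO_TO_GOAL, GO_TO_GAP, GO_PAST_GAP = range(3)

class GapNavigator(object):
    def __init__(self, p_goal, d_tau=0.5):
        self.state = GO_TO_GOAL
        self.p_goal = np.asarray(p_goal, dtype=float)
        self.d_tau = d_tau
        self.p_gap = None
        self.n = None        # unit normal (n1, n2) to the gap line
        self.p_past = None   # secondary goal beyond the gap

    def side(self, p):
        # Signed distance to the gap line: negative in front, positive behind.
        return float(np.dot(self.n, np.asarray(p, dtype=float) - self.p_gap))

    def update(self, p_robot, gap):
        """gap is None, or a (p_gap, n) pair from the gap detector."""
        if self.state == GO_TO_GOAL:
            if gap is not None:
                self.p_gap, self.n = np.asarray(gap[0], float), np.asarray(gap[1], float)
                self.p_past = self.p_gap + 2.0 * self.d_tau * self.n
                self.state = GO_TO_GAP
                return self.p_gap
            return self.p_goal
        if self.state == GO_TO_GAP:
            if self.side(p_robot) > 0.0:          # crossed the gap line
                self.state = GO_PAST_GAP
                return self.p_past
            return self.p_gap
        # GO_PAST_GAP: keep going until d_tau past the line, then resume the real goal.
        if self.side(p_robot) > self.d_tau:
            self.state = GO_TO_GOAL
            return self.p_goal
        return self.p_past
```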
You will have to implement a closed-loop control scheme, like the one discussed in the ECE4560 Turtlebot adventures. The module for those adventures has two links to internal pages that discuss how to create a feedback control strategy for turning and for forward control. You'll need the latter, but it might be instructive to read both, as well as the original adventure topics, to get an overall picture of what was being done.
Use your odometry to estimate where you are and where the intermediate goal positions are. Some of the above may need to be properly integrated with the “follow the gap” trajectory heading angle computation.
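If you want a starting point before reading those pages, here is a simple proportional-control sketch for driving toward an (x, y) waypoint given the odometry pose; the gains and speed limit are placeholder values, not the ones derived in the ECE4560 pages.

```python
# Sketch: proportional steering plus speed control toward an (x, y) waypoint.
import numpy as np
from geometry_msgs.msg import Twist

def go_to_point(p_robot, yaw, p_target, k_lin=0.4, k_ang=1.5, v_max=0.3):
    dx, dy = p_target[0] - p_robot[0], p_target[1] - p_robot[1]
    dist = np.hypot(dx, dy)
    heading_err = np.arctan2(dy, dx) - yaw
    heading_err = np.arctan2(np.sin(heading_err), np.cos(heading_err))  # wrap to [-pi, pi]

    cmd = Twist()
    cmd.angular.z = k_ang * heading_err
    # Scale the forward speed down when the robot is not yet facing the target.
    cmd.linear.x = min(v_max, k_lin * dist) * max(0.0, np.cos(heading_err))
    return cmd
```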
Module #3: Dynamic Window Approach
This module explores the Dynamic Window Approach (DWA), which is an older algorithm but still in use today.
Week #1: Externally Derived Objective Functions
Week #2: Velocity Derived Objective Function
Week #3: Integration and Selection
Week #4: Execution
Module #4: Vector Polar Histogram
A more recent addition to the solution landscape is the Vector Polar Histogram (VPH) method.