This module is really a blend of robotics and computer vision. We will be exploring different reactive algorithms for performing collision-free navigation through the world, and maybe even vision-based tracking of a specifically colored target. The baseline set of activities will rely on the Turtlebot learning modules (AKA Turtlebot Adventures). This adventure is for people who are open to learning the python programming language and the Robot Operating System (ROS), or who already have some knowledge of either or both.
This module pretty much follows the standard pipeline from the Turtlebot Adventures.
This module assumes no prior experience, so the first week is about getting the basics covered in terms of simply running the Turtlebot mobile robot: connecting to the Turtlebot and basic tele-operation of it. To answer some of the questions and be able to see what's going on from the command line, you should get familiar with the basic command-line ROS commands.
Now that we have some grasp of the basics of the Turtlebot, we want to understand how to both actuate and sense: the latter because that's the purpose of the class, and the former because this module is about deciding how to actuate based on the sensed information.
Here, we will explore how to implement some safety checks in the robot en route to creating visual navigation algorithms for the Turtlebot. Importantly, these involve creating a finite state machine (FSM) for the operating mode of the Turtlebot, then translating this FSM into python code.
Here, we will explore the most basic form of navigation: wandering around aimlessly without hitting things (hopefully). Limitations in the sensor field of view mean that some collisions are inevitable under the right obstacle geometries.
This module will explore an early sensor-based navigation method, known as Follow the Gap. It was designed to work for laser scan data, and works by identifying gaps in the local polar space around the robot. As far as obstacle-avoiding navigation strategies go, it's one of the more basic algorithms. The prior sector-based approach is modified to dynamically identify navigable sectors, then to select one to navigate through. Let's work this out and get to know ROS a little better.
Read the paper to get a sense for what is involved in calculating the gap array and finding the maximum gap. Implement the procedure for doing so, and using select depth images from obstacle-avoiding scenarios, turn in the gap array and maximum gap outputs. As demonstration, you will work with rviz to create a visualization of the gap array. This step and its verification are to make sure that the gap calculations are indeed correct, plus that processing of the NaN values is done properly. Turn in the pseudo-code associated with the procedure for computing the gap array.
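As a starting point, the gap array and maximum gap computations can be sketched in plain python. This is a sketch under assumptions: the scan is a simple list of range values, and the 1.5 m gap threshold is a made-up value you will need to tune. NaN handling here is deliberately naive; proper filtering of spurious NaNs is discussed below.

```python
import math

def gap_array(ranges, threshold=1.5):
    """Classify each scan ray as gap (True) or obstacle (False).

    threshold is an assumed tuning value. NaN readings are treated
    as far away (gap) for now; spurious-NaN filtering comes later.
    """
    return [(math.isnan(r) or r > threshold) for r in ranges]

def max_gap(gaps):
    """Return (start_index, length) of the longest run of True values."""
    best_start, best_len = 0, 0
    start, length = 0, 0
    for i, g in enumerate(gaps):
        if g:
            if length == 0:
                start = i
            length += 1
            if length > best_len:
                best_start, best_len = start, length
        else:
            length = 0
    return best_start, best_len
```

Writing these as pure functions (no ROS dependencies) makes it easy to test them offline against saved scan data before wiring them into a node.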
Visualization: Visualizing ROS information is done through rviz. Use rviz on the laptop to open up a visualization of the robot sensor data. Since we are still using the laser scan topic, use rviz to visualize the laser scan data by adding the topic to the set of displayed topics. Your processing of the gap array should create a published topic called gapscan that is of the same type as the laser scan topic (and even has the same internal parameters found in that topic). The difference is that gaps will have the scan data set to the max value and obstacle regions of the scan will be set to the min value (or some small value).
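The range rewriting for the gapscan message might look like the following sketch. Assumptions: the ranges arrive as a plain list, and the threshold is a made-up tuning value; the ROS wiring is indicated only in comments so the core stays testable offline.

```python
import math

def gapscan_ranges(ranges, range_min, range_max, threshold=1.5):
    """Rewrite a scan's ranges for the gapscan message: gap rays get
    the max range value, obstacle regions get the min (small) value.
    threshold is an assumed tuning parameter."""
    return [range_max if (math.isnan(r) or r > threshold) else range_min
            for r in ranges]

# In the ROS node (sketch, not verified here): subscribe to the laser
# scan topic, copy the incoming LaserScan's header, angle_min/max,
# angle_increment, and range_min/max into a new LaserScan message, set
# msg.ranges = gapscan_ranges(scan.ranges, scan.range_min,
# scan.range_max), and publish it on the gapscan topic so rviz can
# display it alongside the original scan.
```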
Not-a-Numbers: There will be some issues with noise. Sometimes a NaN will appear due to a missed reading or due to long distance. A NaN by itself and surrounded by good data is probably not really a NaN, so spurious NaNs should be filtered out. Likewise, if there are large values or many NaNs together that really represent something far away, but there is a random close-ish reading among them, then the gap will be split into two. One approach, which we will explore in the context of segmentation, is to not try to filter the actual data, but to filter the decision. So, compute the decision of gap versus no-gap based on distance, then filter the resulting 1D array of true and false values (or 1 and 0 values), say with a median filter or some other form of filter.
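The filter-the-decision idea can be sketched as a plain-python median filter over the 0/1 gap decisions. The window size is an assumed parameter to tune; 3 or 5 are reasonable starting points.

```python
def median_filter_binary(decisions, window=3):
    """Median-filter a 1D list of 0/1 gap decisions.

    A lone NaN-induced gap (or a lone false obstacle reading) gets
    voted out by its neighbors. window should be odd."""
    half = window // 2
    out = []
    for i in range(len(decisions)):
        lo = max(0, i - half)
        hi = min(len(decisions), i + half + 1)
        neighborhood = sorted(decisions[lo:hi])
        out.append(neighborhood[len(neighborhood) // 2])
    return out
```

For example, a single spurious obstacle reading inside a gap run is removed, while a sustained run of obstacle readings survives.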
A sample output is the following:
Note that for this activity, you are being asked to return both far and near values for the gap scan, while this image only has the far values and uses NaNs for the near values (which indicate collision). So, only the gaps are visualized. You are welcome to use this form of output too, if you'd like. The red is the original scan data and the green is the gap data. I believe the colors were set in rviz.
Hints: I don't advocate looking at this first, but here is a MATLAB implementation of the Follow the Gap method. You should really try to get it working on your own. Write down the pseudo-code and see if you can convert it to actual python code.
Given that the gap array and maximum gap have been computed, finish things off with the gap center angle computation, followed by the final heading angle computation. Use these to implement the algorithm.
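A possible sketch of these two computations, simplified relative to the paper: the gap center is taken as the midpoint ray of the maximum gap, and the final heading blends the gap angle and goal angle with a weight that grows as the nearest obstacle gets closer. The alpha weight is an assumed tuning parameter; consult the paper for its exact formulation.

```python
def gap_center_angle(start, length, angle_min, angle_increment):
    """Angle of the center ray of the maximum gap, given its start
    index and length in the scan array (simplified midpoint version)."""
    center_index = start + (length - 1) / 2.0
    return angle_min + center_index * angle_increment

def final_heading(phi_gap, phi_goal, d_min, alpha=1.0):
    """Blend the gap-center angle with the goal angle, weighted so
    the gap dominates when the nearest obstacle distance d_min is
    small. alpha is an assumed tuning weight."""
    w = alpha / max(d_min, 1e-6)
    return (w * phi_gap + phi_goal) / (w + 1.0)
```

With obstacles far away the heading tracks the goal; with an obstacle close by it swings toward the gap center, which matches the qualitative behavior the method is after.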
Up to now, you have not really been contemplating some end-goal or objective for travelling. Not that this problem is going to be any better, but let's say that the robot's goal is to move 15 units of distance down the hallway from its start position (I don't know what units the odometry and mapping system uses; I think it is meters). Incorporate this end-goal by subscribing to the Turtlebot's internal frame estimation and using where it thinks it is to identify the vector or angle to the goal. Incorporate it into the Follow the Gap method the way the published algorithm does.
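Computing the angle to the goal from the odometry pose might look like the following sketch. The yaw would come from the odometry quaternion (e.g. via tf's euler_from_quaternion); the goal coordinates here are an assumption matching the 15-unit hallway task, with units assumed to be meters.

```python
import math

def angle_to_goal(x, y, yaw, goal_x, goal_y):
    """Heading error from the robot's current odometry pose (x, y,
    yaw) to the goal point, wrapped to [-pi, pi]."""
    bearing = math.atan2(goal_y - y, goal_x - x)
    err = bearing - yaw
    # atan2 of sin/cos wraps the error into [-pi, pi].
    return math.atan2(math.sin(err), math.cos(err))
```

This heading error is what gets blended into the follow-the-gap heading computation as the goal term.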
Exploration & Deliverable: Demo the robot moving down the hallway towards its goal, reacting to static obstacles. Do the same for slightly dynamic obstacles. Comment on how robust the algorithm appears to be. What would you want to fix about it if you could? How would you go about doing that?
You may have found the gap method to sometimes jitter, sometimes crash, sometimes just do slightly the wrong thing. One reason was using too small a threshold for the distance, so that the robot would react too late, while at the same time having a miserable viewable area for maneuvering. Making the gap threshold distance larger helps with that, but the robot may still exhibit some of the behavior above (just less frequently or less drastically). The persistence of those behaviors is a function of the noise in the sensor, the small field of view of the camera, and the lack of memory regarding parts of the world that leave the field of view. Here, we want to incorporate some kind of memory into the algorithm for smoother behavior, and better operation when navigating through a gap.
Create a state machine for the system as it navigates the gaps. There will be a gap-scan-and-go-to-goal behavior and a go-through-gap behavior. They may map to more than two states. Roughly, we have the following:
Hopefully that makes sense.
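One minimal way to encode the mode logic in python is the sketch below. The state names and transition conditions are assumptions based on the two behaviors described above; your FSM may well need more states.

```python
# Operating modes (names are assumptions, not prescribed by the module).
GAP_SCAN = 'gap_scan'              # scan for gaps and steer toward the goal
GO_THROUGH_GAP = 'go_through_gap'  # committed to passing through a chosen gap

def next_state(state, gap_selected, gap_cleared):
    """Advance the FSM one step.

    gap_selected: a navigable gap has been chosen to commit to.
    gap_cleared:  the robot has passed through the committed gap.
    """
    if state == GAP_SCAN and gap_selected:
        return GO_THROUGH_GAP
    if state == GO_THROUGH_GAP and gap_cleared:
        return GAP_SCAN
    return state
```

Committing to a gap until it is cleared is precisely the memory being asked for: the chosen intermediate goal persists even after the gap leaves the field of view.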
You will have to implement a closed-loop control scheme, like the one discussed in the ECE4560 Turtlebot adventures. The module for those adventures has two links to internal pages that discuss how to create a feedback control strategy for turning and for forward control. You'll need the latter, but it might be instructive to read both, as well as the original adventure topics, to get an overall picture of what was being done.
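The turning and forward feedback strategies can each be as simple as a saturated proportional controller. The sketch below uses made-up gains and limits that you would tune on the robot; it is not the specific scheme from the linked pages.

```python
def turn_control(heading_error, k_theta=1.0, w_max=1.0):
    """Proportional turn controller: angular velocity command from the
    heading error, saturated at +/- w_max rad/s (assumed limits)."""
    return max(-w_max, min(w_max, k_theta * heading_error))

def forward_control(distance_to_goal, k_d=0.5, v_max=0.3):
    """Proportional forward controller: slows down as the goal nears,
    capped at v_max m/s (assumed limits)."""
    return max(0.0, min(v_max, k_d * distance_to_goal))
```

These two commands would be packed into a geometry_msgs/Twist message each control cycle, with the heading error and goal distance coming from the odometry-based computations above.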
Use your odometry to estimate where you are and where the intermediate goal positions are. Some of the above may need to be properly integrated with the “follow the gap” trajectory heading angle computation.
This module explores the Dynamic Window Approach (DWA), which is an older algorithm but still in use today.
Another recent addition to the solution landscape is the Vector Polar Histogram (VPH) method.