ece4580:module_autonav — revised 2017/03/11 20:19 by pvela; last modified 2024/08/20 21:38 (external edit).
==== Week #1: Gap Analysis ====
  
Read the paper to get a sense for what is involved in calculating the gap array and finding the maximum gap.  Implement the procedure for doing so and, using select depth images from obstacle-avoidance scenarios, turn in the gap array and maximum gap outputs. As a demonstration, you will work with ''rviz'' to create a visualization of the gap array.  This step and its verification are to make sure that the gap calculations are indeed correct, and that processing of the **NaN** values is done properly. Turn in the pseudo-code associated with the procedure for computing the gap array.
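As a starting point, the gap-array computation might be sketched as below (a hedged sketch, not the paper's exact procedure; the distance threshold and the choice to treat NaN readings as free space are assumptions you should check against your sensor's conventions):

```python
import math

def gap_array(ranges, d_thresh, nan_is_free=True):
    """Binary gap array: 1 where the beam is free beyond d_thresh, 0 at
    obstacles.  NaN readings are mapped according to nan_is_free; whether
    NaN means 'too far' or 'too close' depends on the depth sensor, so
    this default is an assumption."""
    gaps = []
    for r in ranges:
        if math.isnan(r):
            gaps.append(1 if nan_is_free else 0)
        else:
            gaps.append(1 if r > d_thresh else 0)
    return gaps

def max_gap(gaps):
    """Return (start, end) indices, inclusive, of the widest run of 1s.
    Returns (0, -1) when there is no gap at all."""
    best = (0, -1)
    start = None
    for i, g in enumerate(gaps + [0]):   # sentinel 0 closes a trailing run
        if g and start is None:
            start = i
        elif not g and start is not None:
            if i - start > best[1] - best[0] + 1:
                best = (start, i - 1)
            start = None
    return best
```

Running this on a short scan with a NaN and one blocked beam shows the two candidate gaps and the first maximal one being selected.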
  
//Visualization:// Visualizing ROS information is done through ''rviz''. Use ''rviz'' on the laptop to open up a visualization of the robot sensor data. Since we are still using the laser scan topic, use rviz to visualize the laser scan data by [[http://wiki.ros.org/rviz/UserGuide#Adding_a_new_display|adding the topic]] to the set of displayed topics. Your processing of the gap array should create a published topic called ''gapscan'' that is of the same type as the laser scan topic (and even has the same internal parameters found in that topic). The difference is that gaps will have the scan data set to the max value and obstacle regions of the scan will be set to the min value (or some small value).
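One way to keep the ''gapscan'' construction testable without ROS installed is to duck-type the scan message (the ''make_gapscan'' name is illustrative; in a real node ''scan'' would be a ''sensor_msgs/LaserScan'' and the result would be published with ''rospy''):

```python
import copy

def make_gapscan(scan, gaps):
    """Duplicate a LaserScan-like object and overwrite its ranges so that
    gap beams read range_max and obstacle beams read range_min, matching
    the gapscan description above.  Here `scan` only needs .ranges,
    .range_min, and .range_max attributes; the shallow copy preserves the
    other scan parameters unchanged."""
    out = copy.copy(scan)
    out.ranges = [scan.range_max if g else scan.range_min for g in gaps]
    return out
```

The original scan object is left untouched, so the same message can still feed the rest of the pipeline.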
{{ ece4580:module_autonav:gap_visualized.png?400 }}
  
Note that for this activity, you are being asked to return both far and near values for the gap scan, while this image only has the far values and uses NaNs for the near values (which indicate collision).  So, only the gaps are visualized.  You are welcome to use this form of output as well, if you'd like. The red is the original scan data and the green is the gap data; I believe the colors were set using rviz.

//Hints:// I don't advocate looking at this first, but [[http://pvela.gatech.edu/classes/lib/exe/fetch.php?media=ece4580:module_autonav:ftg.m|here]] is a Matlab implementation of the //Follow the Gap// method.  You should really try to get it working on your own. Write down the pseudo-code and see if you can convert it to actual Python code.
==== Week #2: Gap Selection and Control ====
  
//Exploration & Deliverable:// Demo the robot moving down the hallway towards its goal, reacting to static obstacles.  Do the same for slightly dynamic obstacles.  Comment on how robust the algorithm appears to be.  What would you want to fix about it if you could?  How would you go about doing that?
  
==== Week #4: Consistent Operation ====

You may have found the gap method to sometimes jitter, sometimes crash, and sometimes just do slightly the wrong thing. One reason was using too small a distance threshold, so that the robot would react too late while at the same time having a miserably small viewable area for maneuvering. Making the gap threshold distance larger helps with that, but the system may still exhibit some of the behavior above (just less frequently or less drastically). The persistence of those behaviors is a function of the noise in the sensor, the small field of view of the camera, and the lack of memory regarding parts of the world that leave the field of view. Here, we want to incorporate some kind of memory into the algorithm for smoother behavior, and better operation when navigating through a gap.

Create a state machine for the system as it navigates the gaps.  There will be a gap-scan-and-go-to-goal behavior and a go-through-gap behavior; they may map to more than two states. Roughly, we have the following:
  - While no gap is perceived, go to the goal $p_{goal}$ while continually evaluating for a gap.
  - When a gap is perceived, instantiate a new goal state at the gap center location, $p_{gap}$, then drive to that goal state.
  - The gap line partitions the world into two halves: in front of the line and behind the line. Your robot starts in front of the line and, as it passes through the gap, transitions to being behind the line.  That line can be written as an equation of the form $n_1 x + n_2 y = 0$, where being in front of the line means that the line equation evaluates to a negative value, and being behind the line means it evaluates to a positive value. Though the goal is the gap, the objective should be to drive past the gap by some distance threshold, so that $n_1 x + n_2 y > d_\tau$. Then the robot should start to drive towards the real goal again.
  - One way to drive through the gap is to set up a secondary goal position that is beyond the gap along the normal $\vec n = (n_1, n_2)$ to the line by a distance $2 d_\tau$, as in $p_{past} = p_{gap} + 2 d_\tau \vec n$. When you get to the transition line (from negative to positive), switch to this new goal and drive towards it until going a distance of $d_\tau$ past the transition line. Then switch back to the go-to-goal state.

Hopefully that makes sense.
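The steps above might be sketched as a small state machine (a sketch under assumptions: the side test takes coordinates relative to the gap center so the transition line passes through the origin as in the write-up, the gap detector hands back a unit normal pointing through the gap, and all names are illustrative):

```python
GO_TO_GOAL, GO_TO_GAP, GO_PAST_GAP = range(3)

class GapStateMachine:
    """Sketch of the gap-navigation state machine.  Positions are (x, y)
    tuples.  side() evaluates n1*x + n2*y relative to the gap center:
    negative in front of the gap line, positive behind it."""

    def __init__(self, p_goal, d_tau=0.5):
        self.p_goal = p_goal      # the real goal
        self.d_tau = d_tau        # distance threshold past the line
        self.state = GO_TO_GOAL
        self.p_gap = None         # gap center, set when a gap is seen
        self.normal = None        # unit normal (n1, n2) through the gap

    def side(self, p):
        """Signed side of the transition line: <0 in front, >0 behind."""
        dx, dy = p[0] - self.p_gap[0], p[1] - self.p_gap[1]
        return self.normal[0] * dx + self.normal[1] * dy

    def _p_past(self):
        """Secondary goal 2*d_tau beyond the gap along the normal."""
        return (self.p_gap[0] + 2 * self.d_tau * self.normal[0],
                self.p_gap[1] + 2 * self.d_tau * self.normal[1])

    def update(self, p, gap=None):
        """Advance from position p; gap = (center, unit_normal) when the
        scan perceives a gap.  Returns the current target point."""
        if self.state == GO_TO_GOAL:
            if gap is not None:
                self.p_gap, self.normal = gap
                self.state = GO_TO_GAP
                return self.p_gap
            return self.p_goal
        if self.state == GO_TO_GAP:
            if self.side(p) > 0:          # crossed the transition line
                self.state = GO_PAST_GAP
                return self._p_past()
            return self.p_gap
        if self.side(p) > self.d_tau:     # far enough past the gap
            self.state = GO_TO_GOAL
            return self.p_goal
        return self._p_past()
```

Feeding it a straight-line run through a gap at $(2, 0)$ with normal $(1, 0)$ walks it through all three states and back to the real goal.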
 +
You will have to implement a closed-loop control scheme, like that discussed in the [[Turtlebot:Adventures:Sensing101|ECE4560 Turtlebot adventures]]. The module for those adventures has two links to internal pages that discuss how to create a feedback control strategy for [[Turtlebot:Adventures:Sensing101_ThetaError|turning]] and for [[Turtlebot:Adventures:Sensing101_ForwardError|forward control]]. You'll need the latter, but it might be instructive to read both, as well as the original [[Turtlebot:Adventures:Sensing101|adventure topics]], to get an overall picture of what was being done.
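For the go-to-point pieces, a minimal proportional feedback law might look like this (gains, saturation, and structure are illustrative, not taken from the linked adventure pages):

```python
import math

def wrap_to_pi(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def go_to_point(pose, target, k_theta=1.5, k_forward=0.5, v_max=0.3):
    """Simple proportional go-to-point law.  pose = (x, y, theta);
    returns (v, omega): turn toward the target with gain k_theta, drive
    forward proportional to distance (saturated at v_max), and scale the
    forward speed down when the heading error is large."""
    dx, dy = target[0] - pose[0], target[1] - pose[1]
    dist = math.hypot(dx, dy)
    theta_err = wrap_to_pi(math.atan2(dy, dx) - pose[2])
    omega = k_theta * theta_err
    v = min(k_forward * dist, v_max) * max(0.0, math.cos(theta_err))
    return v, omega
```

When the target is straight ahead the robot drives at the saturated speed with no turn; when it is off to the side, the turn command dominates and forward motion nearly stops.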
  
 +Use your odometry to estimate where you are and where the intermediate goal positions are. Some of the above may need to be properly integrated with the "follow the gap" trajectory heading angle computation.
===== Module #3: Dynamic Window Approach =====
--------------------------------
ece4580/module_autonav.1489281568.txt.gz · Last modified: 2024/08/20 21:38 (external edit)