==== Week #1: Gap Analysis ====
Read the paper to get a sense for what is involved in calculating the gap array and finding the maximum gap. Implement the procedure for doing so and, using select depth images from obstacle-avoiding scenarios, turn in the gap array and maximum gap outputs. As demonstration, …
A sample output is the following:

{{ ece4580:… }}
Note that for this activity, you are being asked to return both far and near values for the gap scan, while this image only has the far values and uses NaNs for the near values (which indicate collision).
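To make the expected output concrete, here is a minimal sketch of one way to compute the scan, assuming a single row of a depth image as input. The function names (''gap_scan'', ''max_gap'') and the threshold ''d_gap'' are illustrative choices, not the paper's notation.

<code python>
import numpy as np

def gap_scan(depth_row, d_gap=2.0):
    """Split a depth-image row into far (free) and near (blocking) values.

    Columns whose depth exceeds the gap threshold d_gap are free and go in
    far; the rest, including NaN returns (treated here as collisions), go
    in near.  A sketch only; d_gap = 2.0 m is an arbitrary placeholder.
    """
    depth_row = np.asarray(depth_row, dtype=float)
    blocked = np.isnan(depth_row) | (depth_row < d_gap)
    far = np.where(blocked, np.nan, depth_row)
    near = np.where(blocked, depth_row, np.nan)
    return far, near

def max_gap(far):
    """Return (start, end) column indices of the widest free run, or None."""
    free = ~np.isnan(far)
    best, best_len, run_start = None, 0, None
    for i, f in enumerate(np.append(free, False)):  # sentinel closes last run
        if f and run_start is None:
            run_start = i
        elif not f and run_start is not None:
            if i - run_start > best_len:
                best, best_len = (run_start, i - 1), i - run_start
            run_start = None
    return best
</code>

For example, ''max_gap(gap_scan(depth_image[row, :])[0])'' gives the column span of the maximum gap for one row.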
//Hints:// I don't advocate looking at this first, but [[http://…]]
==== Week #2: Gap Selection and Control ====
==== Week #4: Consistent Operation ====
You may have found the gap method to sometimes jitter, sometimes crash, and sometimes do slightly the wrong thing. One reason was using too small a distance threshold, so that the robot would react too late while also having a miserably small viewable area for maneuvering. Making the gap threshold distance larger helps with that, but the system may still exhibit some of the behavior above (just less frequently or less drastically). The persistence of those behaviors is a function of the noise in the sensor, the small field of view of the camera, and the lack of memory regarding parts of the world that leave the field of view. Here, we want to incorporate some kind of memory into the algorithm for smoother behavior and better operation when navigating through a gap.
Create a state machine for the system as it navigates the gaps. There will be a gap-scan-and-go-to-goal behavior and a go-through-gap behavior; these may map to more than two states. Roughly, we have the following (a sketch of the transition logic follows the list):
  - While no gap is perceived, go to the goal $p_{goal}$ while continually evaluating for a gap.
  - When a gap is perceived, instantiate a new goal state at the gap center location, $p_{gap}$, then drive to that goal state.
  - The gap line partitions the world into two halves: in front of the line and behind the line. Your robot starts in front of the line and, as it passes through the gap, transitions to being behind the line. The line can be written as an equation of the form $n_1 x + n_2 y = 0$, where being in front of the line means the line equation evaluates to a negative value instead of zero, and being behind the line means it evaluates to a positive value instead of zero. Though the goal is the gap, the objective should be to drive past the gap by some distance threshold, so that $n_1 x + n_2 y > d_\tau$. Then the robot should start to drive towards the real goal again.
  - One way to drive through the gap is to set up a secondary goal position beyond the gap along the normal $\vec n = (n_1, n_2)$ to the line, at a distance $2 d_\tau$, as in $p_{past} = p_{gap} + 2 d_\tau \vec n$. When you reach the transition line (crossing from negative to positive), switch to this new goal and drive towards it until you have gone a distance of $d_\tau$ past the transition line. Then switch back to the go-to-goal state.
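As referenced above, here is one way the transition logic might look, as a sketch only. It assumes the line test $n_1 x + n_2 y$ is evaluated in coordinates relative to $p_{gap}$, so it becomes $\vec n \cdot (p - p_{gap})$; the state names and function signatures are made up for illustration.

<code python>
import numpy as np

# Illustrative state names; they may map onto your own states differently.
GO_TO_GOAL, GO_TO_GAP, GO_PAST_GAP = "go_to_goal", "go_to_gap", "go_past_gap"

def side_of_line(p, p_gap, n):
    """Evaluate n1*x + n2*y with coordinates taken relative to the gap
    center: negative in front of the gap line, positive behind it."""
    p, p_gap = np.asarray(p, float), np.asarray(p_gap, float)
    return float(np.dot(n, p - p_gap))

def next_state(state, p, p_gap, n, d_tau, gap_seen):
    """Transition logic for the gap-navigation state machine (a sketch).

    p        -- current position estimate (from odometry)
    p_gap, n -- gap center and unit normal of the gap line
    d_tau    -- distance threshold for being 'past' the gap
    gap_seen -- whether the gap scan currently perceives a gap
    """
    if state == GO_TO_GOAL and gap_seen:
        return GO_TO_GAP                    # new intermediate goal: p_gap
    if state == GO_TO_GAP and side_of_line(p, p_gap, n) > 0:
        return GO_PAST_GAP                  # crossed the line; aim past it
    if state == GO_PAST_GAP and side_of_line(p, p_gap, n) > d_tau:
        return GO_TO_GOAL                   # d_tau past the line; resume goal
    return state

def active_goal(state, p_goal, p_gap, n, d_tau):
    """Which goal position the controller should track in each state."""
    if state == GO_TO_GAP:
        return np.asarray(p_gap, float)
    if state == GO_PAST_GAP:
        return np.asarray(p_gap, float) + 2 * d_tau * np.asarray(n, float)
    return np.asarray(p_goal, float)
</code>

Keeping the transition test and the goal selection separate makes it easier to add states later, for example for the memory behavior described above.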
Hopefully that makes sense.
You will have to implement a closed-loop control scheme, like the one discussed in the [[Turtlebot:…]] …
Use your odometry to estimate where you are and where the intermediate goal positions are. Some of the above may need to be properly integrated with the "…
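As one concrete possibility, a standard differential-drive go-to-goal law does the job: proportional control on the heading error, with the forward speed scaled down when the robot is misaligned. This is a sketch, not necessarily the scheme from the linked Turtlebot page, and the gains and saturation limits are placeholder values to be tuned on the robot.

<code python>
import math

def go_to_goal(pose, goal, k_v=0.4, k_w=1.5, v_max=0.3, w_max=1.0):
    """Closed-loop velocity command toward a goal position.

    pose -- (x, y, theta) from odometry; goal -- (gx, gy) in the same frame.
    Returns (v, w): forward and angular velocity, saturated.  The gains
    k_v, k_w and the limits v_max, w_max are placeholders.
    """
    x, y, theta = pose
    dx, dy = goal[0] - x, goal[1] - y
    rho = math.hypot(dx, dy)                              # distance to goal
    alpha = math.atan2(dy, dx) - theta                    # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    v = min(k_v * rho, v_max) * max(0.0, math.cos(alpha)) # slow if misaligned
    w = max(-w_max, min(k_w * alpha, w_max))
    return v, w
</code>

Feeding ''active_goal(...)'' from the state machine above into this controller at each cycle closes the loop between gap perception and motion.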
===== Module #3: Dynamic Window Approach =====
--------------------------------