The gyro information is interesting. Even though the turtlebot can only navigate locally planar regions (i.e., it is stuck in a flatland of sorts), the gyro topic reports the three-dimensional angular velocity of the robot. I am not sure whether the gyro truly provides 3D rotation; it may only report rotation about the body vertical axis. The point to note is that the orientation it reports is given in quaternion form. A quaternion is to 3D rotations what a complex number is to planar rotations. You can read about them on [[https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation | Wikipedia]]. Since the robot is only concerned with rotation about its body z-axis, only two components of the quaternion will change (the z and w components). These behave like the complex form of a rotation, as in cos(theta/2) + j sin(theta/2), where the half angle is a quirk of quaternions.

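To make the quaternion-to-heading relationship concrete, here is a minimal sketch that converts the planar part of the orientation quaternion into a heading angle with complex arithmetic. The topic name and message type are assumptions (a Kobuki base typically publishes ''sensor_msgs/Imu'' on ''/mobile_base/sensors/imu_data''); check ''rostopic list'' on your own robot and adjust accordingly.

<code python>
#!/usr/bin/env python
# Minimal sketch: read the orientation quaternion and convert its planar part
# into a heading angle using complex-number arithmetic.
import math
import rospy
from sensor_msgs.msg import Imu

def imu_callback(msg):
    q = msg.orientation
    # For planar motion only z and w vary; together they act like
    # cos(theta/2) + j sin(theta/2), so squaring (w + j*z) gives
    # cos(theta) + j sin(theta) and the heading follows from atan2.
    heading = complex(q.w, q.z) ** 2
    theta = math.atan2(heading.imag, heading.real)
    rospy.loginfo("heading: %.3f rad", theta)

if __name__ == '__main__':
    rospy.init_node('read_heading')
    # Topic name is an assumption (typical Kobuki IMU topic); adjust as needed.
    rospy.Subscriber('/mobile_base/sensors/imu_data', Imu, imu_callback)
    rospy.spin()
</code>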
=== Adventures ===

OK, now that we know a little bit about how the robot senses, let's try to do something with those sensors in combination with the actuators.

  - Modify the ''kobuki_buttons'' code to also check for bumper events, and call the new file ''kobuki_bumpers''. Display output whenever a bumper event is triggered. How many different bumpers does the Kobuki have? Modify the code so that the message specifies which bumper was hit or released ("unhit"). Once you have this working, you are ready to combine sensing with actuation. The important things to learn here are how to modify what gets imported at the beginning of the code and how to set up the subscriber. To see what the messages contain, consult the [[https://github.com/yujinrobot/kobuki_msgs/tree/indigo/msg|kobuki message definitions]]. A minimal subscriber sketch appears after this list.
  - Copy the ''goforward'' code to another file. Modify it so that the robot moves with a constant slow velocity. Add a condition to stop when it bumps into something or senses a wheel drop, and have it resume 2 seconds after the condition clears. The best way to do this is to code a finite state machine, so that specific commands are sent when the state changes and the overall hit state of the bumpers is tracked. A state-machine sketch appears after this list.
  - Starting with code similar to the ''kobuki_buttons'' code and considering the ''goforward'' code, write a Python script called ''gostraight'' such that the robot maintains a forward orientation while driving. Like ''kobuki_buttons'', this code will use a subscriber to grab the orientation information and use it to create a feedback signal for straight driving (thereby overcoming any discrepancies between the commanded signal and the actual motion). Like the ''goforward'' code, it will publish drive commands. Since the orientation is a quaternion, you will most likely want to use complex numbers for the [[turtlebot:adventures:Sensing101_ThetaError|error calculations]]. This keeps the measurement in its most native form until the very end and will be useful for future enhancements of the feedback controller. Presuming that the robot was just started and has a heading of 0, it should drive forward and use feedback to maintain a zero heading. Compare your original ''goforward'' modification with the feedback-based version down a long corridor. Is there a difference in how long each can remain centered? A heading-feedback sketch appears after this list.
  - Create a script, called ''turninplace'', that uses odometry information to turn to a specified orientation. This will be similar in spirit to the ''gostraight'' code, but it will just turn in place. The main difference will most likely be the gains: the robot's motion behaves nonlinearly as a function of forward velocity, so the control gains that work while driving forward will not be the right ones for turning in place.
  - Copy the ''draw_a_square'' code and modify it to use the gyro information. Can you rotate as close to 90 degrees as possible during each turn phase of the square path? (In reality you might not use the gyro measurements directly, but rather their integrated form available through a topic.) In this version you will need to switch between different movement states and controllers. So, although the previous two adventures were most likely handled within the callback function, here you should consider a finite state machine much like in the ''kobuki_bumpers'' adventure. The callback functions will generate an error signal, while the state machine will generate the proper control signal for the current state and publish it. The solution I envision can either use a single odometry callback whose processing depends on the current robot state, or multiple odometry callbacks that get swapped in and out based on the state. The forward driving will require a controller too. Here are some tips for the [[turtlebot:adventures:Sensing101_ForwardError|forward error signal]].
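For the ''kobuki_bumpers'' adventure, a minimal subscriber sketch is shown below. The topic name and message fields follow the standard Kobuki driver (''kobuki_msgs/BumperEvent'' on ''/mobile_base/events/bumper''), but verify them with ''rostopic list'' and ''rostopic info'' on your own setup.

<code python>
#!/usr/bin/env python
# Sketch of the kobuki_bumpers idea: report which bumper was hit or released.
import rospy
from kobuki_msgs.msg import BumperEvent

# Human-readable names for the bumper constants defined in the message.
NAMES = {BumperEvent.LEFT: 'left',
         BumperEvent.CENTER: 'center',
         BumperEvent.RIGHT: 'right'}

def bumper_callback(msg):
    action = 'hit' if msg.state == BumperEvent.PRESSED else 'released'
    rospy.loginfo("%s bumper %s", NAMES.get(msg.bumper, 'unknown'), action)

if __name__ == '__main__':
    rospy.init_node('kobuki_bumpers')
    rospy.Subscriber('/mobile_base/events/bumper', BumperEvent, bumper_callback)
    rospy.spin()
</code>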
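For the stop-and-resume adventure, here is one way to structure the finite state machine: the robot drives while nothing is pressed or dropped, and a two-second countdown starts once the last active condition clears. The topic names, the forward speed, and the ''WheelDropEvent'' handling are assumptions to adapt to your base.

<code python>
#!/usr/bin/env python
# Sketch of the stop-and-resume behavior as a small finite state machine:
# drive while no bumper is pressed and no wheel is dropped, otherwise stop,
# and only resume two seconds after every such condition has cleared.
import rospy
from geometry_msgs.msg import Twist
from kobuki_msgs.msg import BumperEvent, WheelDropEvent

class StopOnBump(object):
    def __init__(self):
        self.blocked = set()                  # which bumpers / wheels are active
        self.clear_time = rospy.Time.now()    # when the last condition cleared
        self.pub = rospy.Publisher('/mobile_base/commands/velocity',
                                   Twist, queue_size=10)
        rospy.Subscriber('/mobile_base/events/bumper', BumperEvent, self.on_bumper)
        rospy.Subscriber('/mobile_base/events/wheel_drop', WheelDropEvent, self.on_wheel)

    def on_bumper(self, msg):
        self.update(('bumper', msg.bumper), msg.state == BumperEvent.PRESSED)

    def on_wheel(self, msg):
        self.update(('wheel', msg.wheel), msg.state == WheelDropEvent.DROPPED)

    def update(self, key, active):
        if active:
            self.blocked.add(key)
        else:
            self.blocked.discard(key)
            if not self.blocked:
                self.clear_time = rospy.Time.now()   # start the 2 s countdown

    def spin(self):
        rate = rospy.Rate(10)
        cmd = Twist()
        while not rospy.is_shutdown():
            ok_to_drive = (not self.blocked and
                           rospy.Time.now() - self.clear_time > rospy.Duration(2.0))
            cmd.linear.x = 0.1 if ok_to_drive else 0.0
            self.pub.publish(cmd)
            rate.sleep()

if __name__ == '__main__':
    rospy.init_node('goforward_safe')
    StopOnBump().spin()
</code>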
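For the ''gostraight'' adventure, the sketch below holds a desired heading of 0 while driving forward, computing the heading error with complex numbers so the quaternion never has to be unwrapped until the last step. The odometry topic, the command topic, and the gain are assumptions you will want to tune on your robot.

<code python>
#!/usr/bin/env python
# Sketch of the gostraight idea: hold a desired heading of 0 while driving
# forward, using a proportional correction on the heading error.
import math
import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist

K_TURN = 1.0            # proportional gain on heading error (tune this)
FORWARD_SPEED = 0.1     # forward speed in m/s

class GoStraight(object):
    def __init__(self):
        self.desired = complex(1.0, 0.0)    # heading 0 as a unit complex number
        self.cmd = Twist()
        self.cmd.linear.x = FORWARD_SPEED
        self.pub = rospy.Publisher('/mobile_base/commands/velocity',
                                   Twist, queue_size=10)
        rospy.Subscriber('/odom', Odometry, self.on_odom)

    def on_odom(self, msg):
        q = msg.pose.pose.orientation
        current = complex(q.w, q.z) ** 2         # cos(theta) + j sin(theta)
        error = self.desired / current           # rotation still needed
        theta_err = math.atan2(error.imag, error.real)
        self.cmd.angular.z = K_TURN * theta_err  # steer back toward heading 0
        self.pub.publish(self.cmd)

if __name__ == '__main__':
    rospy.init_node('gostraight')
    GoStraight()
    rospy.spin()
</code>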
If you get stuck, then here are some [[turtlebot:adventures:Sensing101_Hints|Hints]].

==== Explore ====
| |