Turtlebot: Sensing Part 1


Intro

It might seem weird to have an adventure entitled "sensing" for the turtlebot, but in the world of controls, actuation and sensing go hand in hand: we really need to sense the turtlebot to be able to properly control it. That said, the turtlebot does indeed have a few sensors on it that measure its movement. The first is a pair of rotary encoders, used to measure the rotational velocity of the wheels (as you've seen, commanding a velocity and actually achieving that velocity are two different things). They are used by the turtlebot to match, as best as possible, the desired rotational velocity of the wheels. The forward and rotational velocities defined by the twist are converted into desired wheel rotation velocities, which the robot attempts to track. The encoders help with this regulation.

Another on-board sensor is a z-axis gyro (where the z-axis is the body-vertical vector of the turtlebot). Most importantly, the turtlebot also has “cliff” and “bump” sensors, which are really switches attached to pressable or pop-out body parts for sensing whether the turtlebot has run into a relatively solid object, or whether a part of its body is hanging over a vertical edge. Related to these, there are also “wheel drop” sensors that let the turtlebot know when it has been lifted off the ground (or maybe when you've jumped it off of a stunt ramp).

The encoder and the gyro sensors are indeed sensing the robot proper. The other switch-type sensors measure what sort of local, tactile interaction the robot is having with its environment (or could possibly have!). Typically the goal is to not have these interactions, unless of course your goal is to push an object or have the turtlebot catch air.

Investigation

What ROS topics must be subscribed to in order to get these sensor measurements? To find out quickly, bring up the turtlebot, then use the rostopic command with the list option to query the different published topics. The type and echo options will tell you the data type of a message and what its different fields are. Of course, a message will only pop up when the publisher publishes. Some topics publish all the time; some only when triggered. Which of the above publish all the time, and which publish only as needed?

The gyro information is interesting. Even though the turtlebot can only navigate locally planar regions (i.e., it is stuck in a flatland of sorts), the gyro topic reports a three-dimensional angular velocity, though in practice it may only truly measure rotation about the body-vertical axis. The point here is to note that the orientation results are given in quaternion form. A quaternion is to 3D rotations what a complex number is to planar rotations. You can read about them on Wikipedia. Since the robot is only concerned with rotation about its body z-axis, only two components of the quaternion will change. These two are like the complex representation of a rotation (as in cos(theta) + j sin(theta)).
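To make the quaternion/complex-number analogy concrete, here is a minimal sketch. It assumes a pure z-axis rotation, where the quaternion reduces to (0, 0, sin(theta/2), cos(theta/2)), so the heading can be recovered from just the z and w components:

```python
import cmath
import math

def heading_from_quaternion(z, w):
    """Recover the yaw angle from a z-axis-only quaternion.

    For a rotation of theta about the body z-axis, the quaternion is
    (x, y, z, w) = (0, 0, sin(theta/2), cos(theta/2)), so only z and w
    change -- just like cos(theta) + j sin(theta) for a planar rotation.
    """
    # w + j*z encodes the half-angle rotation as a point in the complex
    # plane; its phase is theta/2, so double it to get the heading.
    return 2.0 * cmath.phase(complex(w, z))

# A 90-degree yaw: z = sin(45 deg), w = cos(45 deg).
theta = heading_from_quaternion(math.sin(math.pi / 4), math.cos(math.pi / 4))
```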

Adventures

OK, now that we know a little bit about how the robot senses, let's try to do something with those sensors in combination with the actuators.

  1. Modify the kobuki_buttons code to also check for a bumper event; call the file kobuki_bumpers. Display output whenever a bumper is triggered. How many different bumpers does the kobuki have? Modify the code so that the message specifies which bumper was hit or “unhit.” Once you have this working, you are ready to combine sensing with actuation. The important things to learn here are how to modify what gets imported at the beginning of the code, and how to set up the subscriber. To learn what the messages look like, you can read the kobuki message python code that describes the message structures.
  2. Copy the goforward code to another file. Modify it so that the robot moves with a constant slow velocity. Add a condition to stop when it bumps into something or senses a wheel drop condition. It should start again 2 seconds after the condition is removed. The best way to do this involves coding a finite state machine, so that specific commands are sent when the state changes, while also keeping track of the overall hit state of the bumpers.
  3. Starting with code similar to the kobuki_buttons code, and considering the goforward code, write a python script called gostraight such that the robot maintains a forward orientation while driving. Like kobuki_buttons, this code will use a subscriber to grab the orientation information and use it to create a feedback signal for straight driving (thereby overcoming any discrepancies between the commanded and actual signals). Like the goforward code, it will publish drive commands. Since the orientation is a quaternion, you will most likely want to use complex numbers for the error calculations. This keeps the measurement in its most native form until the very end, and will be useful for future enhancements of the feedback controller. Presuming that the robot was just started and has a heading of 0, it should drive forward and use feedback to maintain a zero heading. Compare your original goforward modification against the feedback-based version down a long corridor. Is there a difference in how long each can remain centered?
  4. Create a script that uses odometry information to turn to a specified orientation. This will be similar in spirit to the gostraight code, but it will just turn in place. The main difference will most likely be the gains: the robot's motion responds nonlinearly as a function of forward velocity, so the linear control gains for turning in place should differ from those used while driving. Call the script turninplace.
  5. Copy the draw_a_square code and modify it to use the gyro information. Can you use it to rotate as close to 90 degrees as possible during each turn phase of the draw-a-square path? (In reality you might not use the gyro measurements directly, but rather their integrated form, available through a topic.) In this version, you will need to switch between different movement states and controllers. So, although the previous two methods were most likely done within the callback function, here you should consider a finite state machine much like in the kobuki_bumpers adventure. The callback functions will generate an error signal, while the state machine will generate the proper control signal and publish it. The solution I envision can either involve a single callback routine for the odometry, with processing that depends on the current robot state, or multiple callback routines for the odometry that get swapped based on the state. The forward driving will require a controller too. Here are some tips for the forward error signal.
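As a sketch for the first adventure, the bumper handling can be reduced to a small pure-Python function that a subscriber callback would call. The integer constants below are assumptions based on the kobuki_msgs/BumperEvent message definition; verify them against your install (e.g., with rosmsg show kobuki_msgs/BumperEvent):

```python
# Constants assumed to match kobuki_msgs/msg/BumperEvent -- check the
# message definition on your own install before relying on them.
LEFT, CENTER, RIGHT = 0, 1, 2
RELEASED, PRESSED = 0, 1

BUMPER_NAMES = {LEFT: "left", CENTER: "center", RIGHT: "right"}

def describe_bumper_event(bumper, state):
    """Turn the two integer fields of a bumper event into a log string."""
    name = BUMPER_NAMES.get(bumper, "unknown")
    action = "hit" if state == PRESSED else "unhit"
    return "%s bumper %s" % (name, action)

# In the real node, this would be the body of the subscriber callback:
#   rospy.Subscriber("/mobile_base/events/bumper", BumperEvent,
#       lambda msg: rospy.loginfo(describe_bumper_event(msg.bumper, msg.state)))
# (topic name is also an assumption -- confirm it with rostopic list)
```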
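For the second adventure, the stop-and-resume logic is naturally a finite state machine. The sketch below keeps the FSM free of any ROS dependencies so it can be tested on its own; the real node would call update() from its rate loop, passing in the latest bumper/wheel-drop status and the current time, and publish the returned forward speed in a Twist. The speed and delay values are placeholders:

```python
class DriveStateMachine:
    """Drive at constant speed, stop on a bump or wheel-drop, and
    resume 2 seconds after the condition clears (adventure 2)."""

    DRIVING, BLOCKED, WAITING = "driving", "blocked", "waiting"

    def __init__(self, speed=0.1, resume_delay=2.0):
        self.speed = speed
        self.resume_delay = resume_delay
        self.state = self.DRIVING
        self.clear_time = None

    def update(self, any_contact, now):
        """any_contact: True while any bumper/wheel-drop is active.
        now: current time in seconds. Returns the forward speed."""
        if any_contact:
            self.state = self.BLOCKED
        elif self.state == self.BLOCKED:
            # Condition just cleared: start the resume countdown.
            self.state = self.WAITING
            self.clear_time = now
        elif (self.state == self.WAITING
              and now - self.clear_time >= self.resume_delay):
            self.state = self.DRIVING
        return self.speed if self.state == self.DRIVING else 0.0
```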
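For the third and fourth adventures, the feedback signal is a heading error. A minimal sketch of computing that error directly from the quaternion's z and w components using complex numbers, as suggested above; the gain value is an arbitrary placeholder, not a tuned number:

```python
import cmath
import math

def heading_error(z, w, target_heading=0.0):
    """Signed heading error, computed in complex form.

    For planar motion only the z and w quaternion components are
    nonzero, and w + j*z is the half-angle rotation; squaring it gives
    the full-angle rotation cos(theta) + j sin(theta). Dividing by the
    target rotation and taking the phase wraps the error into
    (-pi, pi] automatically.
    """
    current = complex(w, z) ** 2
    target = cmath.rect(1.0, target_heading)
    return cmath.phase(current / target)

def angular_command(z, w, target_heading=0.0, gain=1.5):
    """Proportional feedback: steer opposite the heading error.
    gain is a placeholder -- tune it on the robot."""
    return -gain * heading_error(z, w, target_heading)

# Example: robot yawed to 45 degrees while trying to hold heading 0.
z, w = math.sin(math.pi / 8), math.cos(math.pi / 8)
err = heading_error(z, w)
```

For gostraight, publish a constant forward velocity plus this angular command; for turninplace, publish only the angular command (with its own gain) until the error magnitude drops below a small threshold.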

If you get stuck, then here are some Hints.

Explore

The adventures utilized the sensory information to modify the actuation, creating a closed-loop feedback system. Having this feedback is an essential ingredient of intelligent movement (a necessary, but not sufficient, condition if you will). However, systems and controls engineers realized a long time ago that not all sensor measurements can be relied upon, as they contain noise. Systems engineers will instead add a filter or estimator to the measurements in order to generate a hopefully cleaner estimate of the true measured state. If done properly, estimators can even give estimates of unmeasured states (sometimes correct, sometimes only approximate and getting worse with time). The turtlebot actually has an on-board filter for doing this.
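As a toy illustration of what a filter buys you, a first-order low-pass filter (exponential moving average) is about the simplest estimator there is; the turtlebot's on-board filter is more sophisticated, but the idea of trading responsiveness for noise rejection is the same:

```python
class LowPassFilter:
    """First-order low-pass (exponential moving average) of a noisy
    measurement. alpha near 0 trusts the old estimate (smooth, slow);
    alpha near 1 trusts the latest sensor reading (noisy, fast)."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.estimate = None

    def update(self, measurement):
        if self.estimate is None:
            # First sample: nothing to blend with yet.
            self.estimate = measurement
        else:
            self.estimate += self.alpha * (measurement - self.estimate)
        return self.estimate
```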

The encoder and gyro measurements are fed into a dead-reckoning odometry system. Did you discover this system during your investigation? If not, go back and see if you can find it. Compare the published outputs of the odometry to the raw signals. How close are they? Are there other estimates available in the odometry topic? What are they? Can they be used to improve the code above?



turtlebot/adventures/sensing101.txt · Last modified: 2023/03/06 10:31 by 127.0.0.1