
Turtlebot: Sensing Part 1


Intro

It might seem odd to have an adventure entitled “sensing” for the turtlebot, but in the world of controls, actuation and sensing go hand in hand. We really need to sense the turtlebot in order to properly control it. That said, the turtlebot does indeed have a few sensors on it that are used to measure its movement. The first is a pair of rotary encoders, used to measure the rotational velocities of the wheels (as you've seen, commanding a velocity and getting that particular velocity are two different things). The turtlebot uses them to achieve, as best as possible, the desired rotational velocities of the wheels: the forward and rotational velocities defined by the twist are converted into desired wheel rotation velocities, which the robot then attempts to match. The encoders help with this regulation.
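For a differential-drive base like the kobuki, that conversion is the standard one. Writing v for the forward speed, w for the yaw rate, b for the wheel separation, and r for the wheel radius (symbols chosen here for illustration; the turtlebot's driver does this internally), the desired wheel rotation rates are:

  omega_right = (v + (b/2) * w) / r
  omega_left  = (v - (b/2) * w) / r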

Another on-board sensor is a z-axis gyro (where the z-axis is the body-vertical vector of the turtlebot). Most importantly, the turtlebot also has “cliff” and “bump” sensors, which are really switches attached to pressable or pop-out-able body parts; they sense whether the turtlebot has run into a relatively solid object, or whether part of its body is hanging over a vertical edge. Related to these, there are also “wheel drop” sensors that let the turtlebot know when it has been lifted off of the ground (or maybe when you've jumped it off of a stunt ramp).

The encoder and gyro sensors sense the robot proper. The other, switch-type sensors measure what sort of local, tactile interaction the robot is having with its environment (or could possibly have!). Typically the goal is to avoid these interactions, unless of course your goal is to push an object or have the turtlebot catch air.

Investigation

What ROS topics must be subscribed to in order to get these sensor measurements? To find out quickly, bring up the turtlebot, then use the rostopic command with the list option to query the published topics. The type and echo options will tell you the data type of a message and what its fields are. Of course, a message will only appear in the echo output when the publisher publishes. Some topics publish all the time; some only when triggered. Which of the above publish all the time, and which publish as needed?
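For example, on a stock kobuki-based turtlebot bringup, the event topics typically live under /mobile_base/events/ (the exact names can vary with base and software version, so treat these as a starting point rather than gospel):

  rostopic list
  rostopic type /mobile_base/events/bumper
  rostopic echo /mobile_base/events/bumper

The type option reports the message type (kobuki_msgs/BumperEvent in this case), and echo then prints the message fields whenever a bumper fires.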

The gyro information is interesting. Even though the turtlebot can only navigate locally planar regions (i.e., it is stuck in a flatland of sorts), the gyro information is reported as a three-dimensional angular velocity of the robot (whether the gyro truly measures all three axes, or just rotation about the body-vertical axis, is unclear). The point of this is to note that the integrated gyro orientation is given in quaternion form. A unit quaternion is to 3D rotations what a unit complex number is to planar rotations. You can read about them on Wikipedia. Since the robot is only concerned with rotation about its body z-axis, only two components of the quaternion will change: for a rotation by theta about the z-axis, the quaternion has w = cos(theta/2) and z = sin(theta/2), much like the complex version of a rotation (as in cos(theta) + j sin(theta)).
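As a concrete illustration, here is a minimal sketch of pulling the yaw angle back out of the quaternion using the standard tf helper. It assumes the orientation arrives in a sensor_msgs/Imu message on /mobile_base/sensors/imu_data; check your own rostopic list for the actual name:

  import rospy
  from sensor_msgs.msg import Imu
  from tf.transformations import euler_from_quaternion

  def imu_callback(msg):
      q = msg.orientation
      # euler_from_quaternion takes [x, y, z, w] and returns (roll, pitch, yaw)
      roll, pitch, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
      rospy.loginfo('yaw: %.3f rad', yaw)

  rospy.init_node('yaw_reader')
  rospy.Subscriber('/mobile_base/sensors/imu_data', Imu, imu_callback)
  rospy.spin()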

Adventure

OK, now that we know a little bit about how the robot senses, let's try to do something with those sensors in combination with the actuators.

  1. Modify the kobuki_buttons code to also check for a bumper event; call the file kobuki_bumpers. Display output whenever a bumper is triggered. How many different bumpers does the kobuki have? Modify the code so that the message specifies which bumper was hit or “unhit.” Once you have this working, you are ready to combine sensing with actuation. The important things to learn here are how to modify what gets imported at the beginning of the code, and how to set up the subscriber. To learn what the messages look like, you can read the kobuki message python code that describes the message structures. (A minimal subscriber sketch appears after this list.)
  2. Copy the goforward code to another file. Modify it so that the robot moves with a constant, slow velocity. Add a condition to stop when it bumps into something or senses a wheel-drop condition. It should start again 2 seconds after the condition is removed. The best way to do this involves coding a finite state machine, so that specific commands are sent when the state changes, while also keeping track of the overall hit state of the bumpers (see the state-machine sketch after this list).
  3. Copy the draw_a_square code and modify it to use the gyro information. Can you use it to rotate as close to 90 degrees as possible during each turn phase of the draw-a-square path? (In reality you might not use the gyro measurements directly, but rather their integrated form, available through a topic; the heading-feedback sketch after this list shows the pattern.)
  4. Copy the goforward code and see if you can modify it so that the robot maintains its forward orientation while driving. Whatever the initial heading is, it should drive forward and use feedback to maintain that heading. Compare your original goforward modification with the feedback-based modification down a long corridor. Is there a difference in how long the robot can remain centered? (The same heading-feedback sketch applies here.)
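For adventure 1, a minimal subscriber sketch might look like the following. It assumes the kobuki bringup publishes kobuki_msgs/BumperEvent on /mobile_base/events/bumper (verify against your rostopic list):

  import rospy
  from kobuki_msgs.msg import BumperEvent

  # Map the bumper id constants from the message definition to readable names.
  NAMES = {BumperEvent.LEFT: 'left',
           BumperEvent.CENTER: 'center',
           BumperEvent.RIGHT: 'right'}

  def bumper_callback(event):
      action = 'hit' if event.state == BumperEvent.PRESSED else 'unhit'
      rospy.loginfo('%s bumper %s', NAMES.get(event.bumper, 'unknown'), action)

  rospy.init_node('kobuki_bumpers')
  rospy.Subscriber('/mobile_base/events/bumper', BumperEvent, bumper_callback)
  rospy.spin()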
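For adventure 2, one way to organize the state machine is to track the set of active bump/drop conditions plus a timestamp for when the last one cleared; the drive/stopped/waiting state then falls out of those. The topic names, speed, and loop rate here are assumptions to adapt to your setup:

  import rospy
  from geometry_msgs.msg import Twist
  from kobuki_msgs.msg import BumperEvent, WheelDropEvent

  pressed = set()    # bumpers currently pressed
  dropped = set()    # wheels currently dropped
  clear_time = None  # when the last bump/drop condition was removed

  def note_clear():
      global clear_time
      if not pressed and not dropped:
          clear_time = rospy.Time.now()

  def bumper_cb(ev):
      if ev.state == BumperEvent.PRESSED:
          pressed.add(ev.bumper)
      else:
          pressed.discard(ev.bumper)
      note_clear()

  def drop_cb(ev):
      if ev.state == WheelDropEvent.DROPPED:
          dropped.add(ev.wheel)
      else:
          dropped.discard(ev.wheel)
      note_clear()

  rospy.init_node('goforward_fsm')
  pub = rospy.Publisher('cmd_vel_mux/input/navi', Twist, queue_size=10)
  rospy.Subscriber('/mobile_base/events/bumper', BumperEvent, bumper_cb)
  rospy.Subscriber('/mobile_base/events/wheel_drop', WheelDropEvent, drop_cb)

  rate = rospy.Rate(10)
  while not rospy.is_shutdown():
      cmd = Twist()  # all zeros, i.e., stop
      blocked = bool(pressed or dropped)
      waiting = (clear_time is not None and
                 (rospy.Time.now() - clear_time).to_sec() < 2.0)
      if not blocked and not waiting:
          cmd.linear.x = 0.1  # slow, constant forward velocity
      pub.publish(cmd)
      rate.sleep()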
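For adventures 3 and 4, the same feedback pattern applies: read the yaw estimate (here taken from /odom, which integrates the gyro and encoders), compute a wrapped heading error, and command an angular velocity proportional to it. The gain, speed, and topic names are assumptions. Setting the goal to the start yaw plus 90 degrees with zero forward speed gives the turn phase of draw_a_square; holding the initial yaw while driving, as below, gives the corridor test:

  import math
  import rospy
  from geometry_msgs.msg import Twist
  from nav_msgs.msg import Odometry
  from tf.transformations import euler_from_quaternion

  yaw = None

  def odom_cb(msg):
      global yaw
      q = msg.pose.pose.orientation
      yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])[2]

  def angle_diff(a, b):
      # smallest signed difference a - b, wrapped into [-pi, pi]
      return math.atan2(math.sin(a - b), math.cos(a - b))

  rospy.init_node('goforward_heading')
  pub = rospy.Publisher('cmd_vel_mux/input/navi', Twist, queue_size=10)
  rospy.Subscriber('/odom', Odometry, odom_cb)

  while yaw is None and not rospy.is_shutdown():
      rospy.sleep(0.1)  # wait for the first odometry message
  goal = yaw            # hold the initial heading

  rate = rospy.Rate(10)
  while not rospy.is_shutdown():
      cmd = Twist()
      cmd.linear.x = 0.1
      cmd.angular.z = 1.0 * angle_diff(goal, yaw)  # proportional heading feedback
      pub.publish(cmd)
      rate.sleep()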

If you get stuck, then here are some Hints.

Explore

The adventures used the sensory information to modify the actuation, creating a closed-loop feedback system. Having this feedback is an essential ingredient of intelligent movement (a necessary, but not sufficient, condition, if you will). However, systems and controls engineers realized long ago that sensor measurements cannot always be relied upon directly, as they are corrupted by noise. Systems engineers will instead add a filter or estimator to the measurements in order to generate a, hopefully, cleaner estimate of the true measured state. If done properly, estimators can even give estimates of unmeasured states (sometimes correct, sometimes only approximate and getting worse with time). The turtlebot actually has an on-board filter for doing this.

The encoder and gyro measurements are fed into a dead-reckoning odometry system. Did you discover this system during your investigation? If not, go back and see if you can find it. Compare the published outputs of the odometry to the raw signals. How close are they? Are there other estimates available in the odometry topic? What are they? Can they be used to improve the code above?
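One quick way to eyeball the comparison is to echo the odometry estimate next to the raw signals; the topic and field paths below are typical for a kobuki-based bringup but may differ on your setup:

  rostopic echo /odom/pose/pose/orientation
  rostopic echo /mobile_base/sensors/imu_data/orientation
  rostopic echo /mobile_base/sensors/core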

