Turtlebot: Sensing Part 2


Intro

Now we get to the sensing part that you were perhaps thinking about initially: sensing of the world outside of the turtlebot (not using proprioceptive sensors). The default version comes with a Kinect sensor, so it has an on-board RGB camera and a depth camera. To access them you would have to do one of two things: (1) run the minimal launch file plus a sensor launch file separately (can you figure out what this second launch file should be?), or (2) run a single launch file with more services specified to be launched within it (can you find this richer launch file?). Of course, there is a third option that is more appealing from a long-term understanding perspective: create your own launch file by figuring out what the above launch files do, then write a custom one that does only what is needed by your code.

Investigation

What are the ROS topics that must be subscribed to in order to get these sensor measurements? Identify the main two that provide the raw data associated with the image stream and the depth stream of the Kinect camera. Name a few of the other image sensor topics and explain how they differ from the raw versions.
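
Besides running ``rostopic list`` at the command line, the ROS master can be queried from python. The snippet below is a minimal sketch for poking around; the ``'camera'`` filter string is just an assumption about how the Kinect topics are namespaced on your robot, and it presumes the sensor services are already running.

```python
import rospy

# Ask the ROS master for everything currently being published and keep
# only the camera-related topics (adjust the filter string as needed).
rospy.init_node('topic_probe', anonymous=True)
for name, msg_type in sorted(rospy.get_published_topics()):
    if 'camera' in name:
        print('%-50s %s' % (name, msg_type))
```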

Adventure

  1. Properly launch the necessary services to have Kinect sensor data be published (beyond the linescan). Show that this worked by running ``rostopic list``, which should then include the Kinect sensor topics in its output.
  2. Python and ROS fortunately have harnessed the power of OpenCV. Using the OpenCV bridge code for python, demonstrate the ability to subscribe to the sensor data and display it. The libraries to import are ``cv2`` and ``cv_bridge``. The first one gives access to OpenCV code. The second one lets ROS and OpenCV work together (usually through conversion routines that translate data forms between the two systems and their standards). Code up a python class for subscribing to and displaying the Kinect image streams. When run, this should display the color image data and the depth data in two separate windows. The ``cv2`` function for displaying an image is called ``imshow``; it expects the data to be in a specific range (floating point scaled from 0 to 1, or 8-bit integers from 0 to 255), so the depth data may have to be normalized for proper display. A minimal sketch of such a viewer class appears after this list.
  3. Now that we can get images, let's try to do something with them. The classic first pass is to perform color-based detection. Identify a color that you think is unique within the world and for which you have an object of that color (fist-sized to head-sized). Identify the color range and perform color-based detection on the Turtlebot image data. Instead of plotting the color image, plot the detection image. It should show white (or true) values when the target is in the field of view, and nothing when it isn't (assuming that your color is fairly unique relative to the Turtlebot's environment). A sketch of this thresholding step appears after this list.
  4. Once it is possible to reliably segment and track a blob/object, the next step is to actually follow it using the Turtlebot's movement capabilities. The simplest thing to do right now is to try to center the target in the image. Instead of using some desired forward distance and desired orientation, use the deviation of the object centroid from the center of the image as the error. Identify the proper way to implement feedback so that the turtlebot moves to center the object within its field of view. For the left/right control, the mapping is direct. For the distance, it is best to have depth, but we will avoid that for now. Instead, select a target pixel height to control to. If the object appears higher in the image than the target height, the turtlebot should move back; if it appears lower, it should move forward. The object should be held somewhat higher than the camera for this to work (if it is below the camera, the opposite logic holds). You will have to play with the actual target pixel height and the actual world height at which you hold the object in order to get the closed-loop system to work properly. Show that you can follow the target around (a sketch of this centering feedback appears after this list).
  5. Better than this color-only guessing game would be to know how far the object is from the camera. Take the binary image from the previous step and apply it to the depth image. There should be a ROS message that has depth registered to the color image. Extract only the depth values that correspond to true binary image values. Get the approximate depth of the object pixels by computing the median or the average of the extracted depth values (you may want to go through the depth data and remove the NaN values, as these are bad measurements). While you are at it, extract the coordinates of the binary mask and compute the mean of the x and y values (this is called taking the centroid of the detected object blob). Print these values to the screen. See the masking sketch after this list.
  6. Let's make our system more capable. Rather than print the depth and target centroid to the screen, implement some kind of feedback on the data. Use the target distance from the Kinect to regulate the forward movement, and the target centroid to regulate the turtlebot heading/orientation. Show that you can follow the target around (see the depth-based following sketch after this list).
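
The following is a minimal sketch of a viewer class for step 2. The topic names ``/camera/rgb/image_raw`` and ``/camera/depth/image_raw`` are typical Kinect driver defaults but are not guaranteed; check them against your own ``rostopic list`` output. The depth scaling (dividing by 5 m) is an arbitrary choice just to make ``imshow`` render something visible.

```python
#!/usr/bin/env python
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

class KinectViewer:
    def __init__(self):
        self.bridge = CvBridge()
        # Topic names are typical openni/freenect defaults; adjust to match
        # whatever `rostopic list` reports on your robot.
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.rgb_callback)
        rospy.Subscriber('/camera/depth/image_raw', Image, self.depth_callback)

    def rgb_callback(self, msg):
        # Convert the ROS image message to an OpenCV BGR image and display it.
        rgb = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('rgb', rgb)
        cv2.waitKey(3)

    def depth_callback(self, msg):
        # Assuming the depth arrives as floating point meters, replace NaNs
        # and rescale to [0, 1] (treating 5 m as the far end) so that imshow
        # renders something visible.
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        depth = np.nan_to_num(depth.astype(np.float32))
        cv2.imshow('depth', np.clip(depth / 5.0, 0.0, 1.0))
        cv2.waitKey(3)

if __name__ == '__main__':
    rospy.init_node('kinect_viewer')
    KinectViewer()
    rospy.spin()
```
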
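For step 3, thresholding is usually easier in HSV space than in RGB. The hue/saturation/value bounds below are made-up numbers (roughly orange); measure and tune them for your own object and lighting, then ``imshow`` the returned mask instead of the color image.

```python
import cv2
import numpy as np

def detect_color(bgr_image):
    # Convert to HSV and threshold around the target hue. These bounds are
    # placeholders; replace them with values measured from your own object.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([5, 100, 100], dtype=np.uint8)
    upper = np.array([20, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Optional cleanup: morphological opening removes speckle noise.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask  # 255 where the target color is present, 0 elsewhere
```
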
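For step 4, simple proportional feedback on the image-plane error is enough: the horizontal centroid offset drives the angular velocity, and the vertical offset from a chosen target row drives the linear velocity. The gains, target row, and sign conventions below are starting guesses to be tuned, and the velocity topic on your robot (often ``/cmd_vel_mux/input/navi`` or ``/mobile_base/commands/velocity`` on a Turtlebot 2) should be verified before publishing.

```python
import numpy as np
from geometry_msgs.msg import Twist

def centering_command(mask, target_row=150, k_turn=0.002, k_fwd=0.002):
    # Gains, target row, and signs are placeholder values to be tuned.
    cmd = Twist()
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return cmd  # nothing detected: stop
    cx, cy = xs.mean(), ys.mean()
    rows, cols = mask.shape
    # Turn toward the blob: positive angular.z turns left while image x
    # grows to the right, hence the negative sign.
    cmd.angular.z = -k_turn * (cx - cols / 2.0)
    # Blob higher in the image (smaller row) than the target row means we
    # are too close: back up. Lower than the target row: move forward.
    cmd.linear.x = k_fwd * (cy - target_row)
    return cmd
```

The returned ``Twist`` would then be published from the image callback through a ``rospy.Publisher`` created in the class constructor.
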
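For step 5, a registered depth image lines up pixel-for-pixel with the color image (on the usual Kinect pipeline it is published under a ``depth_registered`` namespace, but verify this on your system), so the binary mask indexes straight into it:

```python
import numpy as np

def blob_depth_and_centroid(mask, depth_image):
    # Pixel coordinates of the detected blob.
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None, None, None
    cx, cy = xs.mean(), ys.mean()        # centroid of the blob

    # Depth values under the mask, with NaNs (bad returns) removed.
    d = depth_image[ys, xs].astype(np.float32)
    d = d[np.isfinite(d)]
    depth = float(np.median(d)) if len(d) > 0 else float('nan')
    return depth, cx, cy
```
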
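Step 6 then replaces the pixel-height heuristic of step 4 with the measured depth. The desired standoff distance and gains are again placeholder values; a hypothetical outline:

```python
import math
from geometry_msgs.msg import Twist

def follow_command(depth, cx, image_width, desired_depth=1.0,
                   k_fwd=0.5, k_turn=0.002):
    # depth is the median blob depth in meters, cx its image-column centroid.
    # desired_depth and the gains are placeholder values to be tuned.
    cmd = Twist()
    if depth is None or cx is None or math.isnan(depth):
        return cmd  # lost the target: stop
    cmd.linear.x = k_fwd * (depth - desired_depth)       # too far -> forward
    cmd.angular.z = -k_turn * (cx - image_width / 2.0)   # center horizontally
    return cmd
```
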

As usual, here are some Hints.

Explore
