
Connecting to the Kinect Sensor w/out Color+Depth Registration

If you have trouble subscribing to the depth data or to the linescan data, it may be that the needed publisher is not running, or is publishing to a different topic. One fix, found here, is to disable registration of the depth data to the color image:

roslaunch turtlebot_bringup 3dsensor.launch depth_registration:=false
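To confirm that something is actually publishing after the relaunch, list the camera topics and check the publish rate of the one you care about. The topic names here are assumptions and may differ on your setup:

rostopic list | grep camera
rostopic hz /camera/depth/image_raw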

Connecting to the Kinect Sensor with Color+Depth Registration

The default ``3dsensor`` launch file for the Turtlebot enables depth registration in its full glory. The computation required to register in real time overwhelms the laptop, which is why running ``3dsensor.launch`` with the default configuration settings can crash, either right away or eventually. But suppose that the color and depth should be registered; then what? There is an intermediate option. What causes problems is that depth registration creates a point cloud by default, and converting an image with 300k+ pixels to a point cloud is time consuming. That conversion is what causes the problems.

The solution is to disable the point cloud generation with the following flag:

roslaunch turtlebot_bringup 3dsensor.launch enable_pointcloud:=false

as inferred from this Stack Overflow question. In my experience, the first launch failed (lots of red error messages) but the second succeeded (only yellow warning messages). See if that helps.
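One way to sanity-check that registration is running without the point cloud is to confirm that the depth_registered image topics show up (the grep pattern is just a suggestion):

rostopic list | grep depth_registered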

Why register depth? If access to the depth of given color pixels is necessary, then registering the color and depth images is essential. Otherwise, the corresponding pixel in the depth image needs to be computed; that calculation is outside the scope of the course and more in line with ECE4580. Registering depth provides two topics whose color and depth images match pixel-for-pixel in terms of information content about the world. These topics were found to be ``/camera/depth_registered/hw_registered/`` and ``/camera/rgb/image_rect_color`` (at least on the laptop used to confirm functionality). Visualizing those two image topics and waving your hand in front of the camera should result in the hand being plotted in the same place within both images. Doing the same with the raw streams would not.
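As a sketch of how one might consume the two registered streams together, ROS's message_filters package can time-synchronize the pair so that each callback receives a matching color/depth snapshot. The color topic below is the one reported above; the full depth image topic is a hypothetical completion of the truncated name above and may differ on your machine:

#!/usr/bin/env python
import rospy
import message_filters
from sensor_msgs.msg import Image

def callback(color_msg, depth_msg):
    # The two messages are from (approximately) the same instant, so pixel
    # (u, v) in the color image corresponds to pixel (u, v) in the depth image.
    rospy.loginfo("color %dx%d, depth %dx%d",
                  color_msg.width, color_msg.height,
                  depth_msg.width, depth_msg.height)

rospy.init_node('registered_pair_listener')
# Assumed topic names; the depth one is a hypothetical completion of the
# truncated /camera/depth_registered/hw_registered/ namespace above.
color_sub = message_filters.Subscriber('/camera/rgb/image_rect_color', Image)
depth_sub = message_filters.Subscriber(
    '/camera/depth_registered/hw_registered/image_rect_raw', Image)
sync = message_filters.ApproximateTimeSynchronizer(
    [color_sub, depth_sub], queue_size=10, slop=0.1)
sync.registerCallback(callback)
rospy.spin()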

Checking that Data is Being Published

ROS has a command for viewing visual data streams, and using it is way better than pointing rostopic echo at an image stream. The way to view visual data is through a command like the following:

rosrun image_view image_view image:=/name/of/topic

where /name/of/topic is the image topic to subscribe to and view.
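For example, to view the registered color stream found above (swap in whatever topic your setup actually publishes):

rosrun image_view image_view image:=/camera/rgb/image_rect_color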

It is also possible to use rostopic echo to view some published topics, but be careful about which topics you do that with. Images are huge and will output a scrolling window full of numbers (that's certainly one way to confirm that the images are streaming). A laser scan is also kinda big and will also output a scrolling window, but you might be able to make sense of it.
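If you do want a quick text peek, rostopic echo has flags that keep the output manageable: print a single message, or suppress the big array fields entirely. The /scan topic name is an assumption; check rostopic list for yours:

rostopic echo -n 1 /scan
rostopic echo --noarr /scan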

ROS Basics and Python for Image Topics

The ROS wiki site has a decent tutorial that provides code tips on how to subscribe to an image stream and how to display it through the OpenCV bridge library. Of course, some massaging and editing of the code is needed to get it to work for the Turtlebot proper. The code is quite generic, so its main utility is to show how the code for bullet 2 should be structured, as in the sketch below.
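Here is a minimal sketch in the spirit of that tutorial, massaged toward the Turtlebot topics above. The subscribed topic name is an assumption; point it at whichever image topic you verified earlier:

#!/usr/bin/env python
import rospy
import cv2
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image

class ImageListener:
    def __init__(self):
        self.bridge = CvBridge()
        # Assumed topic; replace with the image topic you actually use.
        self.sub = rospy.Subscriber('/camera/rgb/image_rect_color',
                                    Image, self.callback)

    def callback(self, msg):
        try:
            # Convert the ROS Image message into an OpenCV BGR array.
            frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        except CvBridgeError as err:
            rospy.logwarn(err)
            return
        cv2.imshow('turtlebot camera', frame)
        cv2.waitKey(3)

if __name__ == '__main__':
    rospy.init_node('image_listener', anonymous=True)
    ImageListener()
    rospy.spin()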

