
Manipulations: Learning Modules

Assumptions: There are as few assumptions as possible regarding the learning module sequencing. It helps to know Linux, to have access to a Linux+ROS installation, and to be comfortable with Python. In general, anyone choosing to learn robotics will have to be comfortable in at least two programming languages (typically C++ and Python), which includes knowing, or being open to learning, the rich set of libraries associated with and extending these languages. If you are less familiar or confident with the above, work through the modules at a time when you can ask someone experienced for help should you get stuck.

Philosophy: The intent is to get you working with manipulation as fast as possible, but not necessarily to teach manipulation. The modules should be complemented with outside learning or with course material if you truly wish to learn what is involved and how to maximally exploit or extend the linked code. The premise is that you will make an effort to understand what is involved in each Learning Module, but not necessarily master it on the first pass. As you proceed through the Learning Modules, you will need to improve your understanding and mastery of the earlier ones, so go back and review them as needed.

Companion Code: Should be linked as needed in the learning module text.

Modules/Adventures


A basic setup for manipulation involves a robot arm, a surface, and an RGB-D sensor (color+depth). The availability of range or depth data resolves one of the main problems associated with manipulation, which is the recovery of world and object geometry from a visual stream. A single arm also limits what can be done to relatively simple manipulation tasks. The following series of learning modules should get you up to speed with basic robot arms, how visual sensing helps, and how to plan manipulation tasks.

Module Set 1: Setup and Moving the Hand to Grasp

  1. First make sure that Gazebo+ROS are installed and up and running.
  2. Work through, and for the most part understand, the ROS Beginner Tutorials 1-8.
  3. Understand what role Gazebo plays in simulating robots by working through the Gazebo Tutorials for Beginners.
  4. We start with the PR2 robot, so make sure you have all associated packages installed for it (see step 1, Installation, on the PR2 page).
  5. Work through the Introductory Tutorials; it might be possible to skim Tutorial 1.3 (PR2 Simulator Workshop), but you should know how to add objects to the world.
  6. Get user-guided movement using MoveIt!. (Note: git checkout the correct branch to match your ROS version)
  7. (Optional) For ROS versions newer than Indigo, the PR2 configuration has been removed from MoveIt! and split into a separate repository, which you can find here. If you are interested in a MoveIt! tutorial, see this repository.
  8. Try out the Pick and Place Demo. What is the difference between this one and the MoveIt! one?

At the conclusion, you should be able to get one arm of the PR2 robot to move wherever you specify using MoveIt!. MoveIt! serves as a front-end (with an RViz GUI) to the OMPL planners. This sets up the basic elements associated with a robotic arm and configures it to execute plans that move the end-effector (i.e., the hand) from one $SE(3)$ configuration to another.
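Below is a minimal sketch of commanding such a pose goal from Python via ``moveit_commander``, assuming the PR2 MoveIt! configuration is already running; the group name ``right_arm`` and the target pose values are placeholders to adapt to your setup.

```python
#!/usr/bin/env python
# Minimal sketch: send the PR2 right arm to an end-effector SE(3) pose goal.
# Assumes the PR2 MoveIt! configuration is running; "right_arm" and the pose
# values below are placeholders to adapt to your setup.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('pr2_pose_goal_demo')

group = moveit_commander.MoveGroupCommander('right_arm')

# Target end-effector pose: position plus quaternion orientation.
target = Pose()
target.position.x = 0.6
target.position.y = -0.2
target.position.z = 1.0
target.orientation.w = 1.0

group.set_pose_target(target)
success = group.go(wait=True)   # plan and execute in one call
group.stop()                    # make sure there is no residual motion
group.clear_pose_targets()

rospy.loginfo('Motion %s', 'succeeded' if success else 'failed')
moveit_commander.roscpp_shutdown()
```

The same pattern works for the left arm by swapping the group name; the RViz MotionPlanning plugin will show the planned trajectory as it executes.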

Module Set 2: Sensing the World

  1. The PR2 should have visual sensors. With the PR2 running in a Gazebo world, use rviz, connect to the appropriate topics and visualize the sensor information. Move the arms and you should see the motion reflected in the cameras.
  2. If the PR2 does not have a depth sensor, then add one to the simulation. Visualize the depth imagery in rviz. Show that this worked by doing a rostopic list which should then output the range sensor topics (your range sensor might be the Kinect camera).
  3. Let's access the sensor data outside of rviz. Properly launch the necessary services to have range sensor data be published. Python and ROS fortunately harness the power of OpenCV. Using the OpenCV bridge code for Python, demonstrate the ability to subscribe to the sensor data and display it. The libraries to import are ``cv2`` and ``cv_bridge``. The first gives access to OpenCV code; the second lets ROS and OpenCV work together (usually through conversion routines that translate data forms between the two systems and their standards). Code up a Python class for subscribing to and displaying the Kinect image streams; when run, it should display the color image data and the depth data in two separate windows (see the sketch after this list). The ``cv2`` function for displaying an image is ``imshow``, which expects the data to be in a specific format (floating-point values in [0, 1] or 8-bit values in [0, 255]), so the depth data may have to be normalized for proper display.
  4. Modify the displayed output to threshold the range data based on a target range. See if you can place objects in the world so that they are segmented out by the thresholding procedure.
  5. If you are using an RGB-D sensor with registered range/color images, use the registered image to extract the point clouds of the segmented objects. Publish this point cloud and visualize in rviz.
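The following is a minimal sketch of the subscriber class from step 3, assuming Kinect/OpenNI-style topic names (``/camera/rgb/image_raw`` and ``/camera/depth/image_raw``); verify the actual names on your setup with ``rostopic list``.

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the Kinect color and depth streams and display
# them with OpenCV.  Topic names are assumptions; check `rostopic list`.
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

class KinectViewer(object):
    def __init__(self):
        self.bridge = CvBridge()
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.color_cb)
        rospy.Subscriber('/camera/depth/image_raw', Image, self.depth_cb)

    def color_cb(self, msg):
        # Convert the ROS image to an 8-bit BGR OpenCV image and show it.
        color = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('color', color)
        cv2.waitKey(1)

    def depth_cb(self, msg):
        # Depth typically arrives as 32-bit floats (meters); normalize to
        # [0, 1] so imshow can render it, treating NaNs as zero range.
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        depth = np.nan_to_num(depth.astype(np.float32))
        if depth.max() > 0:
            depth = depth / depth.max()
        cv2.imshow('depth', depth)
        cv2.waitKey(1)

if __name__ == '__main__':
    rospy.init_node('kinect_viewer')
    KinectViewer()
    rospy.spin()
```

For step 4, threshold the raw (un-normalized) depth image against a target range before display, e.g. ``mask = depth < 1.0`` to keep everything closer than one meter.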

Module Set 3: Modifying for Project Use

  1. Identify an alternative robot, usually one that is simply a fixed-base robotic arm. Replace the PR2 robot with your chosen robotic arm. You may have to create a custom launch file for this new robot. Show that you can create motion plans via MoveIt!.
  2. Install our custom code from the IVALab ROS public GitHub site (https://github.com/ivaROS). The two main repositories are ivaOmplCore and ivaMoveItCore. On our lab computers, these should be installed already.
  3. Modify the launch file to now refer to the modified OMPL and MoveIt! libraries. You should still be able to create plans, but now the planner will jointly solve for the trajectory and the terminal joint configuration. This option might only be available for the ForageRRT planner.

The first module set got the basic arm components launched and working, but clearly not for the robot of your choice. At the conclusion of this module, you should have a model of your robotic arm loaded and working with a custom motion planning code-base. This code-base is a modified version of OMPL that admits planning queries more typical of a true manipulation scenario, where the start pose is known as a joint configuration and the final pose is known as an end-effector $SE(3)$ configuration. These mixed representation queries are not standard to many planning algorithms, hence the customized code.
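As an illustration of the query form (not of the lab-specific API), the sketch below sets a joint-space start state and an $SE(3)$ pose goal through ``moveit_commander``; the group name ``manipulator``, the joint values, and the availability of the ``ForageRRT`` planner ID are all assumptions that depend on your robot and on the ivaOmplCore/ivaMoveItCore builds.

```python
#!/usr/bin/env python
# Sketch of a mixed-representation query: joint-configuration start state,
# end-effector SE(3) pose goal.  "manipulator", the joint values, and the
# "ForageRRT" planner ID are assumptions tied to your robot and to the
# ivaOmplCore/ivaMoveItCore builds.
import sys
import rospy
import moveit_commander
from moveit_msgs.msg import RobotState
from sensor_msgs.msg import JointState
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('mixed_query_demo')

group = moveit_commander.MoveGroupCommander('manipulator')
group.set_planner_id('ForageRRT')   # custom planner assumed from ivaOmplCore

# Start: a known joint configuration (all zeros here as a placeholder).
joints = group.get_active_joints()
start = RobotState()
start.joint_state = JointState(name=joints, position=[0.0] * len(joints))
group.set_start_state(start)

# Goal: a terminal end-effector SE(3) pose; the planner is left to solve
# jointly for the trajectory and the final joint configuration.
goal = Pose()
goal.position.x, goal.position.y, goal.position.z = 0.4, 0.0, 0.5
goal.orientation.w = 1.0
group.set_pose_target(goal)

# Plan only; the return type differs across MoveIt! versions (a trajectory
# in older releases, a tuple in newer ones), so inspect the result in RViz
# before executing.
plan = group.plan()
moveit_commander.roscpp_shutdown()
```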

