Manipulations: Learning Modules

Assumptions: The learning module sequencing makes as few assumptions as possible. You should know Linux, have access to a Linux+ROS installation, and be comfortable with Python. In general, anyone learning robotics will need to be comfortable in at least two programming languages (typically C++ and Python), which includes knowing, or being open to learning, the rich set of libraries associated with and extending those languages. If you are less familiar or confident with the above, work through the modules when someone experienced is available to help should you get stuck.

Philosophy: The intent is to get you working with manipulation as fast as possible, not necessarily to teach manipulation. Complement the modules with outside learning or course material if you truly wish to understand what is involved and how to fully exploit or extend the linked code. The premise is that you will make an effort to understand each Learning Module, but not necessarily master it on the first pass. As you proceed through later modules, you will need to deepen your understanding and mastery of the earlier ones, so go back and review them as needed.

Companion Code: Linked as needed in the learning module text.

Modules/Adventures


A basic setup for manipulation involves a robot arm, a surface, and an RGB-D sensor (color+depth). The availability of range or depth data resolves one of the main problems associated with manipulation, which is the recovery of world and object geometry from a visual stream. A single arm also limits what can be done to relatively simple manipulation tasks. The following series of learning modules should get you up to speed with basic robot arms, how visual sensing helps, and how to plan manipulation tasks.

Module Set 1: Setup and Moving the Hand to Grasp

  1. First make sure that Gazebo+ROS are installed and up and running.
  2. Understand, for the most part, ROS Beginner Tutorials 1-8.
  3. Understand what role Gazebo plays in simulating robots via the Gazebo Tutorials for Beginners.
  4. We start with the PR2 robot, so make sure you have all associated packages installed for the robot (see step 1, Installation, on the PR2 page).
  5. Work through the Introductory Tutorials; it may be possible to skim Tutorial 1.3 (PR2 Simulator Workshop), but you should know how to add objects to the world.
  6. Get user-guided movement using MoveIt!. (Note: git checkout the correct branch to match your ROS version)
  7. (Optional) For ROS versions newer than Indigo, PR2 support has been moved out of MoveIt! into a separate repository, which you can find here. If you are interested in a MoveIt! tutorial, see this repository.
  8. Try out the Pick and Place Demo. What is the difference between this one and the MoveIt! one?

At the conclusion, you should be able to get one arm of the PR2 robot to move wherever you specify using MoveIt!. MoveIt! is a motion-planning framework (with an RViz-based GUI front-end) that uses OMPL as its default planning back-end. This sets up the basic elements associated with a robotic arm and configures it to execute plans that move the end-effector (i.e., the hand) from one $SE(3)$ configuration to another.
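
As a quick check that the MoveIt! pipeline is wired up, the sketch below commands a single arm to an end-effector pose goal through the MoveIt! Python interface (``moveit_commander``). The planning group name ``right_arm`` and the goal pose values are assumptions; adjust them to match your MoveIt! configuration.

```python
#!/usr/bin/env python
# Minimal sketch: command one PR2 arm to an end-effector (SE(3)) pose goal
# via the MoveIt! Python interface. The group name "right_arm" and the goal
# pose values are assumptions; adjust them for your setup.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

if __name__ == "__main__":
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("pr2_pose_goal_demo")

    group = moveit_commander.MoveGroupCommander("right_arm")

    # Desired end-effector pose in the planning frame.
    goal = Pose()
    goal.position.x = 0.6
    goal.position.y = -0.2
    goal.position.z = 0.9
    goal.orientation.w = 1.0  # identity orientation

    group.set_pose_target(goal)
    success = group.go(wait=True)   # plan and execute
    group.stop()                    # ensure no residual motion
    group.clear_pose_targets()
    rospy.loginfo("Motion %s", "succeeded" if success else "failed")

    moveit_commander.roscpp_shutdown()
```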

Module Set 2: Sensing the World

  1. The PR2 should have visual sensors. With the PR2 running in a Gazebo world, use rviz to connect to the appropriate topics and visualize the sensor information. Move the arms and you should see the motion reflected in the cameras.
  2. If the PR2 does not have a depth sensor, then add one to the simulation. Visualize the depth imagery in rviz. Show that this worked by running ``rostopic list``, which should output the range sensor topics (your range sensor might be a Kinect camera).
  3. Let's access the sensor data outside of rviz. Launch the necessary services so that range sensor data is published. Python and ROS fortunately harness the power of OpenCV. Using the OpenCV bridge code for Python, demonstrate the ability to subscribe to the sensor data and display it. The libraries to import are ``cv2`` and ``cv_bridge``: the first gives access to OpenCV, and the second lets ROS and OpenCV work together (usually through conversion routines that translate data formats between the two systems and their standards).
  4. Code up a Python class for subscribing to and displaying the Kinect image streams. When run, this should display the color image data and the depth data in two separate windows. The ``cv2`` function for displaying an image is ``imshow()``, which expects the data in a specific format (floating point in [0, 1] or 8-bit integers in [0, 255]). The depth data may have to be normalized for proper display; see the sketch after this list.
  5. Check out this example code to see how to load different objects on a table and take RGB-D images with a Kinect sensor in Gazebo (star it in the upper-right corner if you find it useful, thanks!).
  6. Modify the displayed output to threshold the range data based on a target range. See if you can place objects in the world so that they are segmented out by the thresholding procedure.
  7. If you are using an RGB-D sensor with registered range/color images, use the registered image to extract the point clouds of the segmented objects. Publish this point cloud and visualize it in rviz.
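
A minimal sketch covering items 3, 4, and 6, assuming the simulated Kinect publishes color on ``/camera/rgb/image_raw`` and depth (in meters, encoding ``32FC1``) on ``/camera/depth/image_raw``; the topic names and the 1.5 m threshold are assumptions to adjust for your setup (check ``rostopic list``).

```python
#!/usr/bin/env python
# Minimal sketch: subscribe to the Kinect color and depth topics, display
# both with OpenCV, and threshold the depth image around a target range.
# Topic names and the 1.5 m threshold are assumptions for illustration.
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

class KinectViewer(object):
    def __init__(self):
        self.bridge = CvBridge()
        rospy.Subscriber("/camera/rgb/image_raw", Image, self.color_cb)
        rospy.Subscriber("/camera/depth/image_raw", Image, self.depth_cb)

    def color_cb(self, msg):
        # Convert the ROS image to an 8-bit BGR OpenCV image.
        color = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        cv2.imshow("color", color)
        cv2.waitKey(1)

    def depth_cb(self, msg):
        # Depth is assumed to arrive as 32-bit floats in meters ("32FC1").
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding="32FC1")
        depth = np.nan_to_num(depth)

        # Normalize to [0, 1] for display, since imshow() expects either
        # float images in [0, 1] or 8-bit images in [0, 255].
        display = depth / depth.max() if depth.max() > 0 else depth
        cv2.imshow("depth", display)

        # Item 6: keep only pixels closer than a target range (assumed 1.5 m).
        mask = ((depth > 0) & (depth < 1.5)).astype(np.uint8) * 255
        cv2.imshow("segmented", mask)
        cv2.waitKey(1)

if __name__ == "__main__":
    rospy.init_node("kinect_viewer")
    KinectViewer()
    rospy.spin()
```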

Module Set 3: Modifying for Project Use

  1. Identify an alternative robot, usually a simple fixed-base robotic arm, and replace the PR2 with it. You can find ROS-related packages on GitHub for some commercial robot arms (e.g., Kinova), or use our customized robot arms from our GitHub site: Handy from ivaHandy or Edy from ivaEdy.
  2. The remaining steps are for groups that need the ForageRRT planner and the Manipulation State Space. Otherwise, you can keep using the default MoveIt! and OMPL code.
  3. Install our custom code from the IVALab public ROS GitHub site. The two main repositories are ivaOmplCore and ivaMoveItCore. This code is tested on Ubuntu 14.04 with ROS Indigo. On our lab computers, it should be installed already.
  4. Modify the source code you run for pick_place (or any other experiment) to change your planner to ForageRRT, which uses the manipulation state space as its default state space. The behavior will look the same, but the planner now jointly solves for the trajectory and the terminal joint configuration. This option is currently only tested with the ForageRRT planner, but it should be available for all other sampling-based planners. A sketch of switching planners is shown after this list.
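
A minimal sketch of selecting the planner and issuing a mixed-representation query from the MoveIt! Python interface, assuming the custom planner is registered under the id ``ForageRRT`` in the MoveIt! configuration (check ``ompl_planning.yaml`` for the exact id) and that the planning group is named ``arm``; both names are illustrative assumptions.

```python
#!/usr/bin/env python
# Minimal sketch: select a custom OMPL planner by id and plan from a known
# joint configuration (the current state) to an end-effector SE(3) pose goal.
# The planner id "ForageRRT" and group name "arm" are assumptions.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("foragerrt_query_demo")

group = moveit_commander.MoveGroupCommander("arm")
group.set_planner_id("ForageRRT")        # select the custom planner

# Start: a known joint configuration (here, the robot's current state).
group.set_start_state_to_current_state()

# Goal: an end-effector SE(3) pose; the planner solves jointly for the
# trajectory and the terminal joint configuration.
goal = PoseStamped()
goal.header.frame_id = group.get_planning_frame()
goal.pose.position.x = 0.5
goal.pose.position.y = 0.0
goal.pose.position.z = 0.4
goal.pose.orientation.w = 1.0
group.set_pose_target(goal)

plan = group.plan()   # returned format varies with the MoveIt! version

moveit_commander.roscpp_shutdown()
```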

Module Set 4: Handy Arm

  1. The Handy Arm is a customized 7-DOF robot arm (not counting the end-effector).
  2. The public GitHub site ivaHandy maintains all Handy-related files, code, and tutorials: how to build your own Handy, SolidWorks-related files, and ROS-related packages.
  3. The above ROS-related packages are tested on Ubuntu 14.04 with ROS Indigo. If you want to use them on Ubuntu 16.04 with ROS Kinetic, finalarm_control, finalarm_description, and finalarm_gazebo should still be good to use, but you will need to follow the MoveIt! setup assistant tutorial to generate a new finalarm_moveit_config package, since that package depends on the ROS version.
  4. To use Handy for simulation or real-world experiments, first install ROS Indigo and a compatible version of MoveIt!. Then git clone the dynamixel motor repository, which provides controllers for Dynamixel motors, into your workspace. If you do not need Handy in Gazebo, you can simply git clone the Handy repository, remove the finalarm_gazebo folder, and catkin_make your workspace. If you do need Gazebo, loading controllers there depends on the ros_control, ros_controllers, control_toolbox, and realtime_tools packages, so additionally git clone those into your workspace. After that, you should have all the packages you need.
  5. The commands needed to run real-world experiments are described in the README of the ivaHandy repository, with details about what each command does. After reading it, you will have a better understanding of how we do motion planning for Handy.

Module Set 5: Detecting Grasps on Objects for Manipulation

  1. For robotic manipulation, grasping is a fundamental affordance for many tasks, including pick-and-place, pouring water, and using a spoon. Identifying where to grasp is essential for path planning and control.
  2. DeepGrasp predicts a ranked list of grasp candidates from RGB-D image input (which can be obtained from the Kinect). With the oriented rectangles specifying grasps in the image, one can project the grasp poses into 3D space and control the robot for manipulation tasks.
  3. To directly use the pretrained DeepGrasp, first install the CPU version of TensorFlow. Then follow the detailed instructions on DeepGrasp to set up the environment and download the pretrained model. Run the demo Python file to see the results! Does it work on your images?
  4. To use it with the Kinect via ROS, simply import TensorFlow in your Python node and modify the provided demo.py to load the pretrained model for your own purposes.
  5. (Optional) If you would like to fine-tune on a specific object for grasping, this annotation tool provides a GUI for easily annotating grasps to generate training data!
  6. To figure out the transformation between the robot base and the camera, you can start with ArUco tags. Here is the GitHub repository in IVALab for using ArUco with the Kinect. ArUco is a standalone library implemented in C++ and wrapped in Python; it has also been incorporated into OpenCV since version 3.0.0. The ArUco library provides tools to generate tags and detect printed tags. Can you generate tags and draw their poses? A sketch is given after this list.
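
A minimal sketch of ArUco tag generation, detection, and pose drawing with OpenCV's ``cv2.aruco`` module (the contrib build), using the module-level API of the OpenCV 3.x era that matches Indigo/Kinetic installs; the camera intrinsics, marker side length, and file names are placeholder assumptions, and newer OpenCV releases have renamed some of these calls.

```python
#!/usr/bin/env python
# Minimal sketch: generate a printable ArUco tag, then detect tags in a
# camera frame and draw their poses. Requires the aruco module (the
# opencv-contrib build). Intrinsics and marker size are placeholders; use
# your Kinect's calibration instead.
import cv2
import numpy as np

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)

# 1. Generate a printable 400x400-pixel tag with id 0.
tag = cv2.aruco.drawMarker(aruco_dict, 0, 400)
cv2.imwrite("tag0.png", tag)

# 2. Detect tags in an image and estimate their poses.
frame = cv2.imread("camera_frame.png")   # placeholder: a frame from the Kinect
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)

# Placeholder pinhole intrinsics (fx, fy, cx, cy) and zero distortion.
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

if ids is not None:
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    # Marker side length assumed to be 5 cm.
    result = cv2.aruco.estimatePoseSingleMarkers(corners, 0.05, K, dist)
    rvecs, tvecs = result[0], result[1]
    for rvec, tvec in zip(rvecs, tvecs):
        # In OpenCV >= 4.7 use cv2.drawFrameAxes instead of aruco.drawAxis.
        cv2.aruco.drawAxis(frame, K, dist, rvec, tvec, 0.05)

cv2.imshow("aruco", frame)
cv2.waitKey(0)
```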

The first module set got the basic arm components launched and working, but clearly not for the robot of your choice. At the conclusion of Module Set 3, you should have a model of your robotic arm loaded and working with a custom motion-planning code-base. This code-base is a modified version of OMPL that admits planning queries more typical of a true manipulation scenario, where the start pose is given as a joint configuration and the final pose is given as an end-effector $SE(3)$ configuration. These mixed-representation queries are not standard in many planning algorithms, hence the customized code.

