  - Let's access the sensor data outside of ''rviz''. Properly launch the necessary services so that the range sensor data gets published. Python and ROS fortunately harness the power of OpenCV. Using the OpenCV bridge code for Python, demonstrate the ability to subscribe to the sensor data and display it. The libraries to import are ''cv2'' and ''cv_bridge''. The first gives access to OpenCV code; the second lets ROS and OpenCV work together (usually through conversion routines that translate image data between the two systems and their standards).
  - Code up a Python class for subscribing to and displaying the Kinect image streams. When run, it should display the color image data and the depth data in two separate windows. The ''cv2'' function for displaying an image is ''imshow()'', which expects the data in a specific format (floating-point images in the range 0 to 1, or 8-bit images in the range 0 to 255). The depth data may have to be normalized for proper display; see the sketch after this list.
  - Check out this [[https://github.com/ivalab/simData_imgSaver/blob/master/src/visuomotor_grasp_3D_Box.py|example code]] to see how one can load different objects on a table and take RGB-D images with a Kinect sensor in Gazebo (star it in the upper-right corner if you find it useful, thanks!).
  - Modify the displayed output to threshold the range data based on a target range. See if you can place objects in the world so that they are segmented out by the thresholding procedure (the sketch after this list includes a simple threshold mask).
  - If you are using an RGB-D sensor with registered range/color images, use the registered images to extract the point clouds of the segmented objects. Publish this point cloud and visualize it in ''rviz''; a back-projection sketch is given below as well.
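A minimal sketch of the subscriber/display class, including the threshold mask, might look like the following. The topic names and the target range here are assumptions; check ''rostopic list'' for the topics your Kinect plugin actually publishes.

<code python>
#!/usr/bin/env python
# Minimal sketch (assumed topic names): display Kinect color/depth plus a range-threshold mask.
import rospy
import cv2
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

class KinectViewer(object):
    def __init__(self, target_range=1.0, tolerance=0.15):
        self.bridge = CvBridge()
        self.target = target_range   # meters (assumed value)
        self.tol = tolerance
        # Topic names are assumptions; adjust to whatever your Kinect plugin publishes.
        rospy.Subscriber('/camera/rgb/image_color', Image, self.color_cb)
        rospy.Subscriber('/camera/depth/image_raw', Image, self.depth_cb)

    def color_cb(self, msg):
        color = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        cv2.imshow('color', color)
        cv2.waitKey(1)

    def depth_cb(self, msg):
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
        depth = np.nan_to_num(np.asarray(depth, dtype=np.float32))
        # Normalize to [0, 1] so imshow renders something sensible.
        disp = depth / depth.max() if depth.max() > 0 else depth
        cv2.imshow('depth', disp)
        # Threshold around the target range.
        mask = np.logical_and(depth > self.target - self.tol,
                              depth < self.target + self.tol)
        cv2.imshow('mask', mask.astype(np.uint8) * 255)
        cv2.waitKey(1)

if __name__ == '__main__':
    rospy.init_node('kinect_viewer')
    KinectViewer()
    rospy.spin()
</code>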
  
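For the point-cloud step, one option is to back-project the masked depth pixels through the camera intrinsics and publish a ''sensor_msgs/PointCloud2''. This is only a sketch under assumed topic and frame names; the intrinsics come from the camera's ''CameraInfo'' message.

<code python>
# Sketch: back-project masked depth pixels to 3D and publish as PointCloud2.
# Topic/frame names are assumptions; intrinsics come from the CameraInfo message.
import rospy
import numpy as np
from std_msgs.msg import Header
from sensor_msgs.msg import CameraInfo, PointCloud2
from sensor_msgs import point_cloud2

def depth_to_cloud(depth, mask, cam_info):
    fx, fy = cam_info.K[0], cam_info.K[4]
    cx, cy = cam_info.K[2], cam_info.K[5]
    v, u = np.nonzero(mask)          # pixel coordinates of the segmented points
    z = depth[v, u]
    x = (u - cx) * z / fx            # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def publish_cloud(points, pub, frame_id='camera_depth_optical_frame'):
    header = Header(stamp=rospy.Time.now(), frame_id=frame_id)
    pub.publish(point_cloud2.create_cloud_xyz32(header, points.tolist()))

# Usage inside a node (assumed topic names):
#   pub = rospy.Publisher('/segmented_cloud', PointCloud2, queue_size=1)
#   cam_info = rospy.wait_for_message('/camera/depth/camera_info', CameraInfo)
</code>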
=== Module Set 3: Modifying for Project Use ===
  - Identify an alternative robot, usually one that is simply a fixed-base robotic arm. Replace the PR2 robot with your chosen robotic arm. You can find ROS-related packages on GitHub for some commercial robot arms (e.g., Kinova), or use our customized robot arms from our [[https://github.com/ivaROS | github site]]: Handy from [[https://github.com/ivaROS/ivaHandy | ivaHandy]] or Edy from [[https://github.com/ivaROS/ivaEdy| ivaEdy]].
  - The remaining steps are for groups that need the ForageRRT planner and the Manipulation State Space. Otherwise, you can keep using the default MoveIt! and OMPL code.
  - Install our custom code from the [[https://github.com/ivaROS | IVALab ROS]] public github site. The two main repositories are [[https://github.com/ivaROS/ivaOmplCore | ivaOmplCore]] and [[https://github.com/ivaROS/ivaMoveitCore| ivaMoveItCore]]. This code is tested on Ubuntu 14.04 with ROS Indigo. On our lab computers, these should be installed already.
  - Modify the source code that you run for pick_place or any other experiments to change your planner to ForageRRT, which uses the manipulation state space as its default state space. Nothing else changes, but the planner will now jointly solve for the trajectory and the terminal joint configuration. This option has currently only been tested with the ForageRRT planner, but it should be available for all other sampling-based planners. A sketch of switching the planner from a Python script follows this list.
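A hedged sketch of the planner switch from a Python MoveIt! script is below. The group name ''arm'', the named target ''home'', and the planner id string ''ForageRRT'' are assumptions; check the identifiers exposed by your ivaMoveitCore/ivaOmplCore install (e.g., in ompl_planning.yaml).

<code python>
# Sketch: select the ForageRRT planner from a MoveIt! Python script.
# The group name 'arm', target 'home', and planner id 'ForageRRT' are assumptions;
# check your move_group configuration for the exact identifiers.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('forage_rrt_demo')

group = moveit_commander.MoveGroupCommander('arm')
group.set_planner_id('ForageRRT')      # instead of the default OMPL planner
group.set_planning_time(10.0)

group.set_named_target('home')         # assumed named target from the SRDF
plan = group.plan()                    # the planner now also picks the terminal joint configuration
</code>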
  
=== Module Set 4: Handy Arm ===
  - Handy Arm is a customized 7-DOF (end-effector not included) robot arm.
  - The public github site [[https://github.com/ivaROS/ivaHandy | ivaHandy]] maintains all Handy-related files and code, including tutorials on how to build your own Handy, the SolidWorks files, and the ROS-related packages.
  - The above ROS-related packages are tested on Ubuntu 14.04 with ROS Indigo. If you want to use them on Ubuntu 16.04 with ROS Kinetic, finalarm_control, finalarm_description and finalarm_gazebo should still work, but you will need to follow the [[https://ros-planning.github.io/moveit_tutorials/doc/setup_assistant/setup_assistant_tutorial.html?highlight=assistant | MoveIt! setup assistant tutorial]] to generate a new finalarm_moveit_config package, since that package depends on the version of ROS.
  - In order to use Handy for simulation or real-world experiments, first install ROS Indigo and a compatible version of MoveIt!. Then git clone [[https://github.com/arebgun/dynamixel_motor | dynamixel_motor]], which provides controllers for the Dynamixel motors, into your workspace. If you don't need to use Handy in Gazebo, you can simply git clone the Handy repository, remove the finalarm_gazebo folder, and catkin_make your workspace. If you do need Gazebo, loading controllers there additionally depends on the ros_control, ros_controllers, control_toolbox and realtime_tools packages, so git clone those into your workspace as well. After that, you should have all the packages you need.
  - The commands needed for running real-world experiments are introduced in the README of the ivaHandy repository, with details about what each command does. After reading it, you will have a better understanding of how we do motion planning for Handy.
  
  
  - To use it on the Kinect via ROS, simply import tensorflow (commonly aliased as ''tf''; not the ROS ''tf'' transform library) in your Python node, and modify the provided demo.py to load the pretrained model for your own purposes. A hedged checkpoint-loading sketch with placeholder names is given at the end of this module.
  - (Optional) If you would like to fine-tune on specific objects for grasping, this [[https://github.com/ivalab/grasp_annotation_tool|annotation tool]] provides a GUI for annotating grasps easily to generate training data!
  - To figure out the transformation between the robot base and the camera, you can start with [[https://docs.opencv.org/3.1.0/d5/dae/tutorial_aruco_detection.html|ArUco tags]]. [[https://github.com/ivalab/aruco_tag_saver|Here]] is the IVALab github repository for using the Kinect with ArUco. ArUco is a standalone library implemented in C++ and wrapped in Python; it is also incorporated into OpenCV from version 3.0.0 onward. The ArUco library provides tools to generate tags and detect printed tags. Can you generate tags and draw the pose for them? A sketch follows this list.
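As a starting point, the OpenCV ''aruco'' module (available in the contrib build of OpenCV 3.x) can generate a tag and draw detected poses roughly as follows. The camera matrix, distortion coefficients, and marker length below are placeholders; in practice they come from the Kinect calibration (''CameraInfo'').

<code python>
# Sketch (OpenCV 3.x-era aruco API): generate a tag, then detect tags and draw their pose.
# Camera matrix / distortion / marker length are placeholders; use the Kinect calibration instead.
import cv2
import numpy as np

aruco = cv2.aruco
dictionary = aruco.getPredefinedDictionary(aruco.DICT_6X6_250)

# 1. Generate a tag image (id 23, 200x200 pixels) that you can print out.
tag = aruco.drawMarker(dictionary, 23, 200)
cv2.imwrite('tag23.png', tag)

# 2. Detect tags in a camera frame and estimate their pose.
camera_matrix = np.array([[525.0, 0.0, 319.5],
                          [0.0, 525.0, 239.5],
                          [0.0, 0.0, 1.0]])     # placeholder intrinsics
dist_coeffs = np.zeros(5)
marker_length = 0.05                            # tag side length in meters (assumed)

frame = cv2.imread('scene.png')                 # or a frame converted via cv_bridge
corners, ids, _ = aruco.detectMarkers(frame, dictionary)
if ids is not None:
    aruco.drawDetectedMarkers(frame, corners, ids)
    pose = aruco.estimatePoseSingleMarkers(corners, marker_length,
                                           camera_matrix, dist_coeffs)
    rvecs, tvecs = pose[0], pose[1]
    for rvec, tvec in zip(rvecs, tvecs):
        # cv2.drawFrameAxes replaces drawAxis in newer OpenCV releases.
        aruco.drawAxis(frame, camera_matrix, dist_coeffs, rvec, tvec, 0.03)
cv2.imshow('aruco', frame)
cv2.waitKey(0)
</code>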
  
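Returning to the pretrained grasp model above: loading a TensorFlow 1.x checkpoint inside a ROS node generally follows the pattern sketched below. The checkpoint path and tensor names are purely illustrative placeholders; the real ones come from the provided demo.py and pretrained model.

<code python>
# Sketch of loading a pretrained TensorFlow 1.x model inside a rospy node.
# The checkpoint path and tensor names are illustrative placeholders; take the
# real ones from the provided demo.py / pretrained model.
import rospy
import tensorflow as tf   # the deep-learning tf, not the ROS transform library

class GraspPredictor(object):
    def __init__(self, checkpoint_prefix):
        self.sess = tf.Session()
        # Restore the graph structure and the trained weights.
        saver = tf.train.import_meta_graph(checkpoint_prefix + '.meta')
        saver.restore(self.sess, checkpoint_prefix)
        graph = tf.get_default_graph()
        # Placeholder tensor names; replace with the ones the model actually uses.
        self.image_in = graph.get_tensor_by_name('image_input:0')
        self.grasp_out = graph.get_tensor_by_name('grasp_output:0')

    def predict(self, image):
        return self.sess.run(self.grasp_out, feed_dict={self.image_in: image})

if __name__ == '__main__':
    rospy.init_node('grasp_predictor')
    predictor = GraspPredictor('/path/to/model.ckpt')   # placeholder path
    rospy.spin()
</code>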
  