====== gazebo:manipulation:basics ======

  - Let's access the sensor data outside of ''rviz''. Properly launch the necessary services so that range sensor data gets published. Python and ROS fortunately harness the power of OpenCV. Using the OpenCV bridge code for Python, demonstrate the ability to subscribe to the sensor data and display it. The libraries to import are ''cv2'' and ''cv_bridge''. The first gives access to OpenCV code; the second lets ROS and OpenCV work together (usually through conversion routines that translate data between the two systems and their standards).
  - Code up a Python class for subscribing to and displaying the Kinect image streams. When run, this should display the color image data and the depth data in two separate windows. The ''cv2'' function for displaying an image is ''imshow()'', which expects the data in a specific format (floating-point values from 0 to 1, or 8-bit values from 0 to 255). The depth data may have to be normalized for proper display; a minimal viewer sketch follows this list.
  - Check out this [[https://github.com/ivalab/simData_imgSaver/blob/master/src/visuomotor_grasp_3D_Box.py|example code]] to see how one can load different objects on a table and take RGB-D images with a Kinect sensor in Gazebo (star it in the upper-right corner if you find it useful, thanks!).
  - Modify the displayed output to threshold the range data based on a target range. See if you can place objects in the world so that they are segmented out by the thresholding procedure.
  - If you are using an RGB-D sensor with registered range/color images, use the registered image to extract the point clouds of the segmented objects. Publish this point cloud and visualize it in ''rviz''. A sketch combining the thresholding and point-cloud steps appears after the viewer example below.
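Here is a minimal sketch of such a viewer node. The topic names ''/camera/rgb/image_raw'' and ''/camera/depth/image_raw'' are assumptions based on the usual OpenNI/Gazebo camera plugin defaults; check ''rostopic list'' for the names your setup actually publishes.

<code python>
#!/usr/bin/env python
# Minimal Kinect viewer sketch: subscribes to the color and depth streams,
# converts them with cv_bridge, and displays both with cv2.imshow().
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


class KinectViewer(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.rgb = None
        self.depth = None
        # Topic names are assumptions; adjust to your Gazebo camera plugin.
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.rgb_cb)
        rospy.Subscriber('/camera/depth/image_raw', Image, self.depth_cb)

    def rgb_cb(self, msg):
        # Color images convert directly to an 8-bit BGR array for imshow().
        self.rgb = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')

    def depth_cb(self, msg):
        # Depth arrives as 32FC1 (meters) or 16UC1 (millimeters); keep it raw here.
        self.depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')

    def spin(self):
        rate = rospy.Rate(30)
        while not rospy.is_shutdown():
            if self.rgb is not None:
                cv2.imshow('color', self.rgb)
            if self.depth is not None:
                # Normalize the raw range values to 0-255 uint8 so imshow()
                # renders the depth image with visible contrast.
                d = np.nan_to_num(self.depth.astype(np.float32))
                d = cv2.normalize(d, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
                cv2.imshow('depth', d)
            cv2.waitKey(1)
            rate.sleep()


if __name__ == '__main__':
    rospy.init_node('kinect_viewer')
    KinectViewer().spin()
</code>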
  
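For the last two steps, here is a sketch of range thresholding plus point-cloud extraction from a registered depth image. The registered depth topic, the camera info topic, and the 0.5-1.0 m target band are all assumptions to adjust for your sensor and scene.

<code python>
#!/usr/bin/env python
# Sketch: threshold the registered depth image around a target range and
# publish the surviving pixels as a PointCloud2 for viewing in rviz.
import rospy
import numpy as np
from sensor_msgs.msg import Image, CameraInfo, PointCloud2
from sensor_msgs import point_cloud2
from std_msgs.msg import Header
from cv_bridge import CvBridge


class DepthSegmenter(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.K = None
        self.cloud_pub = rospy.Publisher('/segmented_points', PointCloud2, queue_size=1)
        # Assumed topics; adjust to the registered depth stream of your camera.
        rospy.Subscriber('/camera/rgb/camera_info', CameraInfo, self.info_cb)
        rospy.Subscriber('/camera/depth_registered/image_raw', Image, self.depth_cb)

    def info_cb(self, msg):
        # Intrinsics: fx = K[0], fy = K[4], cx = K[2], cy = K[5].
        self.K = msg.K

    def depth_cb(self, msg):
        if self.K is None:
            return
        depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough').astype(np.float32)
        if msg.encoding == '16UC1':
            depth /= 1000.0  # millimeters -> meters
        # Keep only pixels inside the assumed 0.5-1.0 m target range band.
        mask = (depth > 0.5) & (depth < 1.0)
        v, u = np.nonzero(mask)
        z = depth[v, u]
        fx, fy, cx, cy = self.K[0], self.K[4], self.K[2], self.K[5]
        # Back-project the masked pixels through the pinhole model.
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        header = Header(stamp=msg.header.stamp, frame_id=msg.header.frame_id)
        self.cloud_pub.publish(
            point_cloud2.create_cloud_xyz32(header, np.column_stack((x, y, z))))


if __name__ == '__main__':
    rospy.init_node('depth_segmenter')
    DepthSegmenter()
    rospy.spin()
</code>
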
=== Module Set 3: Modifying for Project Use ===
  - Identify an alternative robot, usually one that is simply a fixed-base robotic arm. Replace the PR2 robot with your chosen robotic arm. You can find ROS-related packages on GitHub for some commercial robot arms (e.g., Kinova), or use our customized robot arms from our [[https://github.com/ivaROS | github site]]: Handy from [[https://github.com/ivaROS/ivaHandy | ivaHandy]] or Edy from [[https://github.com/ivaROS/ivaEdy| ivaEdy]].
  - The remaining steps are for groups that need the ForageRRT planner and the Manipulation State Space; otherwise, you can keep using the default MoveIt! and OMPL code. A planner-selection sketch is given after this list.
  - Install our custom code from the [[https://github.com/ivaROS | IVALab ROS]] public github site. The two main repositories are [[https://github.com/ivaROS/ivaOmplCore | ivaOmplCore]] and [[https://github.com/ivaROS/ivaMoveitCore| ivaMoveItCore]]. The above code has been tested on Ubuntu 14.04 with ROS Indigo. On our lab computers, these should be installed already.
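For reference, here is a minimal sketch of selecting a planner through MoveIt!'s Python interface. The planning group name ''arm'', the named target ''home'', and the planner id ''ForageRRT'' are assumptions; the custom id in particular depends on how ivaOmplCore registers its planners.

<code python>
#!/usr/bin/env python
# Sketch: choosing a planner through MoveIt!'s Python interface.
import sys
import rospy
import moveit_commander

rospy.init_node('planner_selection_demo')
moveit_commander.roscpp_initialize(sys.argv)

# 'arm' is an assumed planning group name; use the group defined in your SRDF.
group = moveit_commander.MoveGroupCommander('arm')

# Default OMPL planner shipped with MoveIt!.
group.set_planner_id('RRTConnectkConfigDefault')

# With ivaOmplCore installed and registered, a custom planner would be selected
# the same way by its configured name (e.g., a hypothetical 'ForageRRT' id).
# group.set_planner_id('ForageRRT')

# 'home' is an assumed named target from the SRDF; plan and execute a motion to it.
group.set_named_target('home')
group.go(wait=True)

moveit_commander.roscpp_shutdown()
</code>
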
  - To use it on the Kinect via ROS, simply import TensorFlow in your Python node (e.g., ''import tensorflow as tf'') and modify the provided demo.py to load the pretrained model for your own purposes. A loading sketch is given after this list.
  - (Optional) If you would like to fine-tune on specific objects for grasping, this [[https://github.com/ivalab/grasp_annotation_tool|annotation tool]] provides a GUI for annotating grasps easily to generate training data!
  - To figure out the transformation between the robot base and the camera, you can start with an [[https://docs.opencv.org/3.1.0/d5/dae/tutorial_aruco_detection.html|ArUco tag]]. [[https://github.com/ivalab/aruco_tag_saver|Here]] is the GitHub repository in IVALab for using the Kinect with ArUco. ArUco is a standalone library implemented in C++ and wrapped in Python, and it has also been incorporated into OpenCV since version 3.0.0. The ArUco library provides tools to generate tags and detect printed tags. Can you generate tags and draw the pose for them? See the sketch at the end of this list.
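As a starting point for modifying demo.py, here is a hedged sketch of restoring a pretrained TensorFlow (1.x-style) checkpoint inside a ROS node. The checkpoint path, tensor names, and image topic are hypothetical placeholders; take the real ones from demo.py.

<code python>
#!/usr/bin/env python
# Sketch: restoring a pretrained TensorFlow model inside a ROS node, in the
# spirit of adapting demo.py. Checkpoint path, tensor names, and topic are
# hypothetical placeholders -- take the real ones from demo.py.
import rospy
import numpy as np
import tensorflow as tf
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

CKPT = '/path/to/pretrained/model.ckpt'  # hypothetical checkpoint path


class GraspPredictor(object):
    def __init__(self):
        self.bridge = CvBridge()
        # TF 1.x style restore: rebuild the graph from the .meta file and
        # load the trained weights into a session.
        self.sess = tf.Session()
        saver = tf.train.import_meta_graph(CKPT + '.meta')
        saver.restore(self.sess, CKPT)
        graph = tf.get_default_graph()
        self.input_t = graph.get_tensor_by_name('image_input:0')    # hypothetical name
        self.output_t = graph.get_tensor_by_name('grasp_output:0')  # hypothetical name
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.cb, queue_size=1)

    def cb(self, msg):
        img = self.bridge.imgmsg_to_cv2(msg, desired_encoding='rgb8')
        # Add a batch dimension and run the network on the incoming frame.
        pred = self.sess.run(self.output_t, feed_dict={self.input_t: img[None, ...]})
        rospy.loginfo('grasp prediction: %s', np.array2string(pred))


if __name__ == '__main__':
    rospy.init_node('grasp_predictor')
    GraspPredictor()
    rospy.spin()
</code>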
  
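For the ArUco question, here is a sketch of tag generation, detection, and pose drawing with the ''aruco'' module bundled with OpenCV (>= 3.0, via opencv-contrib). The camera matrix, distortion coefficients, and 5 cm marker size are placeholder assumptions to replace with your Kinect's calibration; the API names follow the OpenCV 3.x module (OpenCV 4.x renames ''cv2.aruco.drawAxis'' to ''cv2.drawFrameAxes'').

<code python>
#!/usr/bin/env python
# Sketch: generate an ArUco tag, then detect printed tags in an image and
# draw their estimated poses.
import cv2
import numpy as np

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)

# 1. Generate a printable 400x400 px tag with id 23 and save it to disk.
tag = cv2.aruco.drawMarker(dictionary, 23, 400)
cv2.imwrite('aruco_23.png', tag)

# Placeholder intrinsics -- replace with values from your CameraInfo/calibration.
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# 2. Detect tags in a camera frame and draw their poses (marker side = 0.05 m).
frame = cv2.imread('scene.png')  # or a frame converted from the Kinect topic
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
if ids is not None:
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, 0.05, K, dist)
    for rvec, tvec in zip(rvecs, tvecs):
        cv2.aruco.drawAxis(frame, K, dist, rvec, tvec, 0.05)
cv2.imshow('aruco pose', frame)
cv2.waitKey(0)
</code>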
  