gazebo:manipulation:basics
- Let's access the sensor data outside of ''
- Code up a Python class for subscribing to and displaying the Kinect image streams. When run, this should display the color image data and the depth data in two separate windows. The ``cv2`` function for displaying an image is ``imshow()``, which expects the data in a specific range (0 to 1 for floating-point images, or 0 to 255 for 8-bit images). The depth data may have to be normalized for proper display (see the sketch after this list).
- Check out this [[https://
- Modify the displayed output to threshold the range data based on a target range.
- If you are using an RGB-D sensor with registered range/color images, use the registered image to extract the point clouds of the segmented objects.
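Below is a minimal sketch of such a viewer class. The topic names ''/camera/rgb/image_raw'' and ''/camera/depth/image_raw'' are assumptions for a typical Kinect driver (e.g. openni or freenect); adjust them to whatever your sensor actually publishes.

<code python>
#!/usr/bin/env python
# Minimal Kinect viewer sketch: subscribe to the color and depth image
# streams and show each in its own cv2 window.
import rospy
import cv2
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class KinectViewer(object):
    def __init__(self):
        self.bridge = CvBridge()
        self.color = None
        self.depth = None
        # Assumed topic names -- check `rostopic list` for your driver.
        rospy.Subscriber('/camera/rgb/image_raw', Image, self.color_cb)
        rospy.Subscriber('/camera/depth/image_raw', Image, self.depth_cb)

    def color_cb(self, msg):
        self.color = self.bridge.imgmsg_to_cv2(msg, 'bgr8')

    def depth_cb(self, msg):
        # Depth arrives as 32FC1 (meters); normalize to 0-255 for imshow().
        depth = np.nan_to_num(self.bridge.imgmsg_to_cv2(msg, '32FC1'))
        # For the thresholding module, a mask like
        # cv2.inRange(depth, 0.5, 1.0) keeps only pixels in a 0.5-1.0 m band.
        self.depth = cv2.normalize(depth, None, 0, 255,
                                   cv2.NORM_MINMAX).astype(np.uint8)

    def spin(self):
        while not rospy.is_shutdown():
            if self.color is not None:
                cv2.imshow('color', self.color)
            if self.depth is not None:
                cv2.imshow('depth', self.depth)
            cv2.waitKey(30)

if __name__ == '__main__':
    rospy.init_node('kinect_viewer')
    KinectViewer().spin()
</code>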
=== Module Set 3: Modifying for Project Use ===
- Identify an alternative robot, usually one that is simply a fixed-base robotic arm. Replace the PR2 robot with your chosen robotic arm. You can find ROS-related packages on GitHub for some commercial robot arms, e.g. Kinova, or use our customized robot arms from our [[https://
- The remaining steps are for groups that need the ForageRRT planner and the Manipulation State Space; otherwise, you can keep using the default MoveIt! and OMPL code.
- Install our custom code from the [[https://
- Modify the source code that you run for pick_place or any other experiments to change your planner to ForageRRT, which uses the manipulation state space as its default state space. The rest of your pipeline stays the same, but the planner will now jointly solve for the trajectory and the terminal joint configuration (see the sketch below).
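A minimal sketch of switching the planner through the MoveIt! Python interface follows. The planner id ''ForageRRTkConfigDefault'' and the group name ''arm'' are assumptions; check your ''ompl_planning.yaml'' and SRDF for the names your setup actually registers.

<code python>
#!/usr/bin/env python
# Select a custom OMPL planner via the MoveIt! Python interface.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('forage_rrt_demo')

# 'arm' is a placeholder planning-group name from your SRDF.
group = moveit_commander.MoveGroupCommander('arm')
# Guessed planner id -- use the id registered in ompl_planning.yaml.
group.set_planner_id('ForageRRTkConfigDefault')
group.set_named_target('home')  # any reachable named target
plan = group.plan()
</code>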
=== Module Set 4: Handy Arm ===
- Handy Arm is a customized 7-DOF (end-effector not included) robot arm.
- This public GitHub repository [[https://
- The above ROS-related packages are tested on Ubuntu 14.04 and ROS Indigo. If you want to use them on Ubuntu 16.04 and ROS Kinetic, finalarm_control,
- In order to use Handy for simulation or real-world experiments,
- The commands needed to run a real-world experiment are introduced in the README of the ivaHandy repository, with details about what each command does. After reading it, you will have a better understanding of how we do motion planning for Handy.
----
- To use it on the Kinect via ROS, simply import tf (TensorFlow) in your Python node, and modify the provided demo.py to load the pretrained model for your own purposes.
- (Optional) If you would like to finetune on a specific object for grasping, this [[https://
- To figure out the transformation between the robot base and the camera, you can start with [[https:// (a minimal lookup is sketched below).
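A minimal sketch of looking up that transform with the ROS tf package follows. The frame names ''/base_link'' and ''/camera_link'' are assumptions; check which frames your robot and camera driver actually publish.

<code python>
#!/usr/bin/env python
# Look up the transform from the robot base frame to the camera frame.
import rospy
import tf  # NOTE: the ROS tf package, not TensorFlow -- the names clash

rospy.init_node('base_camera_tf')
listener = tf.TransformListener()
# Assumed frame names -- inspect `rosrun tf view_frames` for yours.
listener.waitForTransform('/base_link', '/camera_link',
                          rospy.Time(0), rospy.Duration(4.0))
trans, rot = listener.lookupTransform('/base_link', '/camera_link',
                                      rospy.Time(0))
print('translation:', trans)
print('rotation (quaternion):', rot)
</code>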