ece4560:visman:03cam · 2021/10/17 16:04 pvela · 2024/08/20 21:38 (current) external edit 127.0.0.1
In order to respond to environmental variations, the manipulator will need a mechanism for seeing the world and modeling what is in the scene. The easiest such sensor to work with is a depth sensor, since it provides both color imagery and geometric information in the form of depth. Depth data captures the structure of the world, meaning that world structure can be recovered from the depth images with some processing. Let's start down the path of understanding how to do so.
  
This task loosely mirrors the [[turtlebot:adventures:sensing102| Turtlebot camera activity]] in that the students should learn how to //launch// a camera or depth camera and obtain readings (color and/or depth). Complete the following:
  
**RGB-D Camera data:**\\
  * Connect to the depth camera and display the streaming RGB images and depth images.
  * Combine depth information with intrinsic camera information to recover a point cloud for a single frame of depth data. The process can be slow, which means using the full depth image might be too slow for real-time visualization. Processing a single frame is good enough for now.
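The depth-to-point-cloud step above is just pinhole back-projection. Here is a minimal numpy sketch, assuming the depth image is already in meters and using made-up intrinsic values (''fx'', ''fy'', ''cx'', ''cy'' are placeholders; substitute the calibration your camera driver reports):

```python
import numpy as np

def depth_to_cloud(depth, K):
    """Back-project a depth image (meters) into an Nx3 point cloud
    using pinhole intrinsics K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.indices(depth.shape)            # pixel row (v) and column (u) grids
    z = depth
    x = (u - cx) * z / fx                     # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                     # Y = (v - cy) * Z / fy
    pts = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                 # drop invalid (zero-depth) pixels

# Tiny synthetic example: 2x2 depth image at 2 m, hypothetical intrinsics.
K = np.array([[500.0, 0.0, 1.0],
              [0.0, 500.0, 1.0],
              [0.0,   0.0, 1.0]])
depth = np.full((2, 2), 2.0)
cloud = depth_to_cloud(depth, K)
```

Vectorizing over the whole image like this is much faster than a per-pixel loop, but for real-time use you may still want to subsample the depth image before back-projecting.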
  
**Tags for obtaining reference frames.**\\
  * Learn how to use the ArUco tag API.
  * Use the ArUco tag API to recover a tag reference frame relative to the camera.
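Tag pose estimators typically return the tag's pose relative to the camera as an axis-angle rotation vector plus a translation vector. To treat that as a reference frame, it helps to pack the pair into a single 4x4 homogeneous transform. A minimal numpy sketch, assuming an ''rvec''/''tvec'' pair in the usual convention (the Rodrigues formula converts the rotation vector to a rotation matrix):

```python
import numpy as np

def rvec_tvec_to_T(rvec, tvec):
    """Build the 4x4 camera-from-tag homogeneous transform from an
    axis-angle rotation vector and a translation vector."""
    rvec = np.asarray(rvec, dtype=float).ravel()
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)                          # zero rotation
    else:
        k = rvec / theta                       # unit rotation axis
        Kx = np.array([[0.0, -k[2], k[1]],     # skew-symmetric cross-product matrix
                       [k[2], 0.0, -k[0]],
                       [-k[1], k[0], 0.0]])
        # Rodrigues formula: R = I + sin(t) K + (1 - cos(t)) K^2
        R = np.eye(3) + np.sin(theta) * Kx + (1.0 - np.cos(theta)) * (Kx @ Kx)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(tvec, dtype=float).ravel()
    return T

# Example: tag rotated 90 degrees about the camera z-axis, 0.5 m ahead.
T = rvec_tvec_to_T([0.0, 0.0, np.pi / 2], [0.0, 0.0, 0.5])
```

With the pose in this form, chaining frames (e.g., camera-from-tag composed with base-from-camera) reduces to matrix multiplication of the 4x4 transforms.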