ece4560:visman:03cam
---------------------------------------------
In order to respond to environmental variations, the manipulator will need a mechanism for seeing the world and modeling what is in the scene. The easiest such sensor to work with is a depth sensor, since it gives color imagery and geometric information in the form of depth.

This task mirrors ...

**RGB-D Camera data:**\\
| + | * Connect to the depth camera and display the streaming RGB images and depth images. | ||
| + | * Combine depth information with intrinsic camera information to recover a point cloud for a single frame of depth data. The process can be slow, which means using the full depth image might be too slow for real-time visualization. Processing a single frame is good enough for now. | ||
| + | |||
| + | **Tags for obtaining reference frames.**\\ | ||
| + | * Learn how to use the AURCO tag API. | ||
| + | * Use the ARUCO tag API to recover | ||
| + | * Use the known reference frames to obtain geometry of robot, including camera, base, end-effector. | ||
| --------- | --------- | ||
