====== VisMan: Robot Eyes / Robot Vision ======
In order to respond to environmental variations, the manipulator needs a mechanism for seeing the world and modeling what is in the scene. The easiest such sensor to work with is a depth (RGB-D) camera, since it provides both color imagery and geometric information in the form of per-pixel depth.
This task mirrors a little bit the [[turtlebot:
**RGB-D camera data:**\\
  * Connect to the depth camera and display the streaming RGB and depth images.
  * Combine the depth information with the camera's intrinsic parameters to recover a point cloud from a single frame of depth data. The process can be slow, which means using the full depth image might be too slow for real-time visualization; processing a single frame is good enough for now.
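The depth-to-point-cloud step above can be sketched as a pinhole back-projection. This is a minimal numpy sketch, not the course's reference code: the function name and the intrinsic parameters (''fx'', ''fy'', ''cx'', ''cy'') are assumptions, and on the real robot the intrinsics would come from the camera driver (e.g. the ROS ''camera_info'' topic) rather than being hard-coded.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into camera-frame 3D points.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    Returns an (N, 3) array; pixels with zero depth (no sensor return) are dropped.
    """
    v, u = np.indices(depth.shape)      # pixel row (v) and column (u) grids
    z = depth.astype(np.float64)
    valid = z > 0                       # zero depth means no measurement
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    return np.stack([x, y, z[valid]], axis=1)

# Tiny synthetic example: a flat 2x2 depth image, everything 1 m away.
# The intrinsics below are made up for illustration.
depth = np.ones((2, 2))
pts = depth_to_pointcloud(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
print(pts.shape)  # (4, 3)
```

Note that a full VGA depth frame back-projects to roughly 300k points, which is why the vectorized numpy form matters; a per-pixel Python loop would be far too slow for anything close to real time.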
**Tags for obtaining reference frames:**\\
  * Learn how to use the ArUco tag API.
  * Use the ArUco tag API to recover a tag's reference frame relative to the camera.
  * Use the known reference frames to obtain the geometry of the robot, including the camera, base, and end-effector frames. This might be just at the description level, not yet fully specified in ROS and visualizable as a frame graph (the TF tree).
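Once the tag detector returns each tag's pose in the camera frame, recovering the robot geometry is homogeneous-transform composition. The sketch below assumes (hypothetically) that detection has already produced a rotation and translation per tag, e.g. the ''rvec''/''tvec'' pair from an OpenCV ArUco pose estimate converted to a 3x3 rotation matrix; the frame names and numeric poses are made up for illustration.

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    g = np.eye(4)
    g[:3, :3] = R
    g[:3, 3] = t
    return g

def inverse_transform(g):
    """Invert a rigid transform: inv([R t; 0 1]) = [R^T -R^T t; 0 1]."""
    R, t = g[:3, :3], g[:3, 3]
    return make_transform(R.T, -R.T @ t)

# Hypothetical camera-frame tag poses (identity rotations to keep the example readable):
g_cam_base = make_transform(np.eye(3), np.array([0.0, 0.0, 1.0]))  # base tag, 1 m ahead
g_cam_ee   = make_transform(np.eye(3), np.array([0.2, 0.0, 0.8]))  # end-effector tag

# End-effector pose in the base frame: g_base_ee = inv(g_cam_base) @ g_cam_ee.
g_base_ee = inverse_transform(g_cam_base) @ g_cam_ee
print(g_base_ee[:3, 3])  # end-effector position expressed in the base frame
```

The same composition rule extends to any pair of tagged frames, which is exactly what a TF tree automates once the frames are specified in ROS.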
----
ece4560/visman/03cam.1633794357.txt.gz · Last modified: 2024/08/20 21:38 (external edit)