Diffstat (limited to 'docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md')

 docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md | 10 +++-------
 1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md b/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md
index 8c13bf2..4f9af7c 100644
--- a/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md
+++ b/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md
@@ -101,17 +101,11 @@ The names of 3D AOI **and** their related [ArFrames](../../argaze.md/#argaze.ArF
After the camera image is passed to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method, it is possible to apply a perspective transformation in order to project the watched image into each [ArUcoScenes](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) [frames background](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
```python
-# Assuming that Full HD (1920x1080) video stream or file is opened
-...
-
-# Assuming that the video reading is handled in a looping code block
+# Assuming that Full HD (1920x1080) timestamped images are available
...:
- # Capture image from video stream of file
- image = video_capture.read()
-
# Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
- aruco_camera.watch(image)
+ aruco_camera.watch(timestamp, image)
# Map watched image into ArUcoScenes frames background
aruco_camera.map()
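The commit above changes the documented calling convention: `watch` now receives a timestamp together with each image, instead of the camera reading frames itself. A minimal sketch of that loop is shown below; the `ArUcoCamera` class here is a hypothetical stand-in that only records calls (an assumption for illustration, the real `argaze` class requires a full camera configuration).

```python
class ArUcoCamera:
    """Hypothetical stand-in for argaze's ArUcoCamera; records calls only."""

    def __init__(self):
        self.watched = []

    def watch(self, timestamp, image):
        # New signature: a timestamp is passed along with each image
        self.watched.append((timestamp, image))

    def map(self):
        # In argaze, this would project the watched image into each
        # ArUcoScene frame background; here it just reports progress.
        return len(self.watched)


aruco_camera = ArUcoCamera()

# Assuming that Full HD (1920x1080) timestamped images are available
for timestamp, image in [(0.0, "frame0"), (0.04, "frame1")]:
    # Detect ArUco markers, estimate scene pose, project 3D AOI into camera frame
    aruco_camera.watch(timestamp, image)

    # Map watched image into ArUcoScenes frames background
    aruco_camera.map()
```

The important point is simply that timestamping moves to the caller: whatever produces the images (a video file reader, a live stream, a replay of recorded data) is now responsible for supplying the timestamp of each frame.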