author     Théo de la Hogue  2023-06-21 09:03:41 +0200
committer  Théo de la Hogue  2023-06-21 09:03:41 +0200
commit     a594afb5bb17798cd138f1632dcfc53f4eaac09f (patch)
tree       ac3527627e4171e6fd545c73e0cc81f49dfe6a94 /docs/user_guide
parent     0354377903fbc8a828b5735b2d25e1c5bc02c768 (diff)
Replace the word 'frame' with 'image' when it refers to drawing or detecting.
Diffstat (limited to 'docs/user_guide')
-rw-r--r--  docs/user_guide/ar_environment/environment_exploitation.md  10
-rw-r--r--  docs/user_guide/areas_of_interest/aoi_matching.md            2
-rw-r--r--  docs/user_guide/areas_of_interest/aoi_scene_projection.md    4
-rw-r--r--  docs/user_guide/aruco_markers/camera_calibration.md         12
-rw-r--r--  docs/user_guide/aruco_markers/markers_detection.md          10
-rw-r--r--  docs/user_guide/timestamped_data/introduction.md             2
-rw-r--r--  docs/user_guide/utils/demonstrations_scripts.md              2
-rw-r--r--  docs/user_guide/utils/ready-made_scripts.md                  2
8 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/docs/user_guide/ar_environment/environment_exploitation.md b/docs/user_guide/ar_environment/environment_exploitation.md
index f07d150..a4013ea 100644
--- a/docs/user_guide/ar_environment/environment_exploitation.md
+++ b/docs/user_guide/ar_environment/environment_exploitation.md
@@ -4,8 +4,8 @@ Environment exploitation
Once loaded, [ArEnvironment](../../../argaze/#argaze.ArFeatures.ArEnvironment) assets can be exploited as illustrated below:
```python
-# Access to AR environment ArUco detector passing it a frame where to detect ArUco markers
-ar_environment.aruco_detector.detect_markers(frame)
+# Access the AR environment ArUco detector, passing it an image in which to detect ArUco markers
+ar_environment.aruco_detector.detect_markers(image)
# Access to an AR environment scene
my_first_scene = ar_environment.scenes['my first AR scene']
@@ -15,15 +15,15 @@ try:
# Try to estimate AR scene pose from detected markers
tvec, rmat, consistent_markers = my_first_scene.estimate_pose(ar_environment.aruco_detector.detected_markers)
- # Project AR scene into camera frame according estimated pose
+ # Project AR scene into camera image according to the estimated pose
# Optional visual_hfov argument is set to 160° to clip the AOI scene according to a cone of vision
aoi2D_scene = my_first_scene.project(tvec, rmat, visual_hfov=160)
# Draw estimated AR scene axis
- my_first_scene.draw_axis(frame)
+ my_first_scene.draw_axis(image)
# Draw AOI2D scene projection
- aoi2D_scene.draw(frame)
+ aoi2D_scene.draw(image)
# Do something with AOI2D scene projection
...
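
Taken together, the calls above fit naturally into a capture loop. The sketch below is a minimal illustration that reuses only the calls shown in this guide and assumes a `video_stream` object like the one used in the camera calibration guide; error handling is elided.

``` python
# Minimal capture loop sketch (video_stream is an assumption borrowed
# from the camera calibration guide, not part of this example's API)
while video_stream.is_alive():

    image = video_stream.read()

    # Detect ArUco markers in the image
    ar_environment.aruco_detector.detect_markers(image)

    try:

        # Estimate scene pose, then project and draw its AOI
        tvec, rmat, consistent_markers = my_first_scene.estimate_pose(ar_environment.aruco_detector.detected_markers)
        aoi2D_scene = my_first_scene.project(tvec, rmat, visual_hfov=160)
        aoi2D_scene.draw(image)

    except Exception:

        # Pose estimation may fail when too few markers are detected
        pass
```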
diff --git a/docs/user_guide/areas_of_interest/aoi_matching.md b/docs/user_guide/areas_of_interest/aoi_matching.md
index 1e18238..ff658a2 100644
--- a/docs/user_guide/areas_of_interest/aoi_matching.md
+++ b/docs/user_guide/areas_of_interest/aoi_matching.md
@@ -5,7 +5,7 @@ title: AOI matching
AOI matching
============
-Once [AOI3DScene](../../../argaze/#argaze.AreaOfInterest.AOI3DScene) is projected into a frame as [AOI2DScene](../../../argaze/#argaze.AreaOfInterest.AOI2DScene), it could be needed to know which AOI is looked.
+Once an [AOI3DScene](../../../argaze/#argaze.AreaOfInterest.AOI3DScene) is projected as an [AOI2DScene](../../../argaze/#argaze.AreaOfInterest.AOI2DScene), it may be needed to know which AOI is being looked at.
The [AreaOfInterest](../../../argaze/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) class in [AOIFeatures](../../../argaze/#argaze.AreaOfInterest.AOIFeatures) provides two ways to accomplish such a task.
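
For orientation, a sketch of what such matching can look like is given below. The dictionary-style iteration and the `contains_point` method name are assumptions for illustration only; refer to the [AreaOfInterest](../../../argaze/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) reference for the two actual strategies.

``` python
# Hypothetical matching sketch: contains_point and dictionary-style
# iteration are assumed names, not the confirmed AOIFeatures API
gaze_position = (960, 540)  # a gaze position in image pixel coordinates

for name, aoi in aoi2D_scene.items():

    if aoi.contains_point(gaze_position):

        print(f'{name} is being looked at')
```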
diff --git a/docs/user_guide/areas_of_interest/aoi_scene_projection.md b/docs/user_guide/areas_of_interest/aoi_scene_projection.md
index ad50f6f..bdb3fe0 100644
--- a/docs/user_guide/areas_of_interest/aoi_scene_projection.md
+++ b/docs/user_guide/areas_of_interest/aoi_scene_projection.md
@@ -5,7 +5,7 @@ title: AOI scene projection
AOI scene projection
====================
-An [AOI3DScene](../../../argaze/#argaze.AreaOfInterest.AOI3DScene) can be rotated and translated according to a pose estimation before to project it onto camera frame as an [AOI2DScene](../../../argaze/#argaze.AreaOfInterest.AOI2DScene).
+An [AOI3DScene](../../../argaze/#argaze.AreaOfInterest.AOI3DScene) can be rotated and translated according to a pose estimation before projecting it onto the camera image as an [AOI2DScene](../../../argaze/#argaze.AreaOfInterest.AOI2DScene).
![AOI projection](../../img/aoi_projection.png)
@@ -18,5 +18,5 @@ An [AOI3DScene](../../../argaze/#argaze.AreaOfInterest.AOI3DScene) can be rotate
aoi2D_scene = aoi3D_scene.project(tvec, rmat, optic_parameters.K)
# Draw AOI 2D scene
-aoi2D_scene.draw(frame)
+aoi2D_scene.draw(image)
```
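
The `tvec` and `rmat` inputs above are assumed to come from a pose estimation step and `optic_parameters` from a camera calibration, as covered in the other guides. A sketch tying them together, reusing only calls shown elsewhere in this user guide:

``` python
# Sketch: where the projection inputs can come from (names reused from
# the AR environment and camera calibration guides)
tvec, rmat, consistent_markers = my_first_scene.estimate_pose(aruco_detector.detected_markers)

# optic_parameters.K is the camera matrix obtained from calibration
aoi2D_scene = aoi3D_scene.project(tvec, rmat, optic_parameters.K)
```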
diff --git a/docs/user_guide/aruco_markers/camera_calibration.md b/docs/user_guide/aruco_markers/camera_calibration.md
index 7bff480..1019fc1 100644
--- a/docs/user_guide/aruco_markers/camera_calibration.md
+++ b/docs/user_guide/aruco_markers/camera_calibration.md
@@ -24,7 +24,7 @@ Then, the calibration process needs to make many different captures of an [ArUco
![Calibration step](../../img/camera_calibration_step.png)
-The sample of code below shows how to detect board corners into camera frames, store detected corners then process them to build calibration data and, finally, save it into a JSON file:
+The code sample below shows how to detect board corners in camera images, store the detected corners, then process them to build calibration data and, finally, save it to a JSON file:
``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoOpticCalibrator, ArUcoBoard, ArUcoDetector
@@ -42,19 +42,19 @@ expected_aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary)
# Create ArUco detector
aruco_detector = ArUcoDetector.ArUcoDetector(dictionary=aruco_dictionary, marker_size=3)
-# Capture frames from a live Full HD video stream (1920x1080)
+# Capture images from a live Full HD video stream (1920x1080)
while video_stream.is_alive():
- frame = video_stream.read()
+ image = video_stream.read()
- # Detect all board corners in frame
- aruco_detector.detect_board(frame, expected_aruco_board, expected_aruco_board.markers_number)
+ # Detect all board corners in the image
+ aruco_detector.detect_board(image, expected_aruco_board, expected_aruco_board.markers_number)
# If board corners are detected
if aruco_detector.board_corners_number > 0:
# Draw board corners to show that board tracking succeeded
- aruco_detector.draw_board(frame)
+ aruco_detector.draw_board(image)
# Append tracked board data for further calibration processing
aruco_optic_calibrator.store_calibration_data(aruco_detector.board_corners, aruco_detector.board_corners_identifier)
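
The remainder of the calibration process can be sketched as follows; the `calibrate` and `to_json` method names are assumptions for illustration, not the confirmed [ArUcoOpticCalibrator](../../../argaze/#argaze.ArUcoMarkers.ArUcoOpticCalibrator) API.

``` python
# Hypothetical ending of the calibration sample: calibrate and to_json
# are assumed method names, check the ArUcoOpticCalibrator reference
optic_parameters = aruco_optic_calibrator.calibrate(expected_aruco_board, dimensions=(1920, 1080))

# Save optic parameters into a JSON file for later projection use
optic_parameters.to_json('./optic_parameters.json')
```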
diff --git a/docs/user_guide/aruco_markers/markers_detection.md b/docs/user_guide/aruco_markers/markers_detection.md
index 3851cb4..9a3bc9f 100644
--- a/docs/user_guide/aruco_markers/markers_detection.md
+++ b/docs/user_guide/aruco_markers/markers_detection.md
@@ -29,14 +29,14 @@ Here is [DetectorParameters](../../../argaze/#argaze.ArUcoMarkers.ArUcoDetector.
}
```
-The [ArUcoDetector](../../../argaze/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) processes frame to detect markers and allows to draw detection results onto it:
+The [ArUcoDetector](../../../argaze/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) processes an image to detect markers and allows drawing detection results onto it:
``` python
-# Detect markers into a frame and draw them
-aruco_detector.detect_markers(frame)
-aruco_detector.draw_detected_markers(frame)
+# Detect markers in an image and draw them
+aruco_detector.detect_markers(image)
+aruco_detector.draw_detected_markers(image)
-# Get corners position into frame related to each detected markers
+# Get each detected marker's corners position in the image
for marker_id, marker in aruco_detector.detected_markers.items():
print(f'marker {marker_id} corners: ', marker.corners)
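
Since `detected_markers` behaves like a dictionary keyed by marker identifier, as the loop above suggests, checking for one particular marker can be sketched as follows (the identifier value is hypothetical):

``` python
# Check whether a particular marker was detected (17 is a hypothetical
# identifier; detected_markers is the dictionary used in the loop above)
expected_marker_id = 17

if expected_marker_id in aruco_detector.detected_markers:

    marker = aruco_detector.detected_markers[expected_marker_id]
    print(f'marker {expected_marker_id} corners: ', marker.corners)
```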
diff --git a/docs/user_guide/timestamped_data/introduction.md b/docs/user_guide/timestamped_data/introduction.md
index ed13d85..a36daca 100644
--- a/docs/user_guide/timestamped_data/introduction.md
+++ b/docs/user_guide/timestamped_data/introduction.md
@@ -1,6 +1,6 @@
Timestamped data
================
-Working with wearable eye tracker devices implies to handle various timestamped data like frames, gaze positions, pupils diameter, fixations, saccades, ...
+Working with wearable eye tracker devices implies handling various timestamped data like gaze positions, pupil diameters, fixations, saccades, ...
This section mainly refers to the [DataStructures.TimeStampedBuffer](../../../argaze/#argaze.DataStructures.TimeStampedBuffer) class.
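
As a first orientation, [TimeStampedBuffer](../../../argaze/#argaze.DataStructures.TimeStampedBuffer) can be pictured as an ordered mapping from timestamps to values. The sketch below assumes plain dictionary-style access; refer to the class documentation for the actual interface.

``` python
from argaze import DataStructures

# Sketch: a buffer of gaze positions keyed by timestamps in milliseconds
# (dictionary-style assignment and iteration are assumptions here)
ts_gaze_positions = DataStructures.TimeStampedBuffer()

ts_gaze_positions[0] = (123, 456)
ts_gaze_positions[20] = (130, 462)

for timestamp, gaze_position in ts_gaze_positions.items():

    print(timestamp, gaze_position)
```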
diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md
index adcc8b3..5de2927 100644
--- a/docs/user_guide/utils/demonstrations_scripts.md
+++ b/docs/user_guide/utils/demonstrations_scripts.md
@@ -11,7 +11,7 @@ Collection of command-line scripts for demonstration purpose.
## AR environment demonstration
-Load AR environment from **setup.json** file, detect ArUco markers into camera device (-d DEVICE) frames and estimate envirnoment pose.
+Load an AR environment from the **setup.json** file, detect ArUco markers in camera device (-d DEVICE) images and estimate the environment pose.
```shell
python ./src/argaze/utils/demo_ar_features_run.py -d DEVICE
diff --git a/docs/user_guide/utils/ready-made_scripts.md b/docs/user_guide/utils/ready-made_scripts.md
index 035d697..afc5749 100644
--- a/docs/user_guide/utils/ready-made_scripts.md
+++ b/docs/user_guide/utils/ready-made_scripts.md
@@ -36,7 +36,7 @@ python ./src/argaze/utils/camera_calibrate.py 7 5 5 3 DICT_APRILTAG_16h5 -d DEVI
## ArUco scene exporter
-Load a MOVIE with ArUco markers inside and select a frame into it, detect ArUco markers belonging to DICT_APRILTAG_16h5 dictionary with 5cm size into the selected frame thanks to given OPTIC_PARAMETERS and DETECTOR_PARAMETERS then, export detected ArUco markers scene as .obj file into an *./src/argaze/utils/_export/scenes* folder.
+Load a MOVIE with ArUco markers inside and select an image from it, detect ArUco markers belonging to the DICT_APRILTAG_16h5 dictionary with a 5cm size in the selected image thanks to the given OPTIC_PARAMETERS and DETECTOR_PARAMETERS, then export the detected ArUco markers scene as an .obj file into the *./src/argaze/utils/_export/scenes* folder.
```shell
python ./src/argaze/utils/aruco_markers_scene_export.py MOVIE DICT_APRILTAG_16h5 5 OPTIC_PARAMETERS DETECTOR_PARAMETERS -o ./src/argaze/utils/_export/scenes