path: root/docs/user_guide
author     Théo de la Hogue  2023-11-07 15:54:45 +0100
committer  Théo de la Hogue  2023-11-07 15:54:45 +0100
commit     78ce6ffc892ef7d64a8d1da0dbdfcbf34d214bbd (patch)
tree       4509c14aa1800d2666c50c47549a044e5a6c11d0 /docs/user_guide
parent     bc9257268bb54ea68f777cbb853dc6498274dd99 (diff)
parent     f8b1a36c9e486ef19f62159475b9bf19a5b90a03 (diff)
Merge branch 'master' of ssh://git.recherche.enac.fr/interne-ihm-aero/eye-tracking/argaze
Diffstat (limited to 'docs/user_guide')
-rw-r--r--  docs/user_guide/ar_environment/environment_exploitation.md  36
-rw-r--r--  docs/user_guide/ar_environment/environment_setup.md  77
-rw-r--r--  docs/user_guide/ar_environment/introduction.md  6
-rw-r--r--  docs/user_guide/areas_of_interest/aoi_matching.md  48
-rw-r--r--  docs/user_guide/areas_of_interest/aoi_scene_description.md  83
-rw-r--r--  docs/user_guide/areas_of_interest/aoi_scene_projection.md  22
-rw-r--r--  docs/user_guide/areas_of_interest/heatmap.md  40
-rw-r--r--  docs/user_guide/areas_of_interest/introduction.md  8
-rw-r--r--  docs/user_guide/areas_of_interest/vision_cone_filtering.md  18
-rw-r--r--  docs/user_guide/aruco_markers/dictionary_selection.md  17
-rw-r--r--  docs/user_guide/aruco_markers/introduction.md  15
-rw-r--r--  docs/user_guide/aruco_markers/markers_creation.md  17
-rw-r--r--  docs/user_guide/aruco_markers/markers_detection.md  47
-rw-r--r--  docs/user_guide/aruco_markers/markers_pose_estimation.md  20
-rw-r--r--  docs/user_guide/aruco_markers/markers_scene_description.md  146
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/advanced_topics/aruco_detector_configuration.md  40
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md  66
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/advanced_topics/scripting.md  136
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/aoi_3d_description.md  53
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md  128
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/aoi_3d_projection.md (renamed from docs/user_guide/aruco_markers_pipeline/aoi_projection.md)  51
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/aoi_description.md  62
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/aruco_markers_description.md  35
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md  41
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/introduction.md  24
-rw-r--r--  docs/user_guide/aruco_markers_pipeline/pose_estimation.md  30
-rw-r--r--  docs/user_guide/gaze_analysis/gaze_movement.md  163
-rw-r--r--  docs/user_guide/gaze_analysis/gaze_position.md  98
-rw-r--r--  docs/user_guide/gaze_analysis/introduction.md  7
-rw-r--r--  docs/user_guide/gaze_analysis/scan_path.md  169
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md  10
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md  12
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/aoi_2d_description.md  57
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md  64
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/background.md  8
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md  24
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/heatmap.md  10
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/introduction.md  11
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/logging.md  4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_matchers.md  2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_scan_path_analyzers.md  2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/pipeline_modules/scan_path_analyzers.md  6
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md  2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/visualisation.md  37
-rw-r--r--  docs/user_guide/timestamped_data/data_synchronisation.md  106
-rw-r--r--  docs/user_guide/timestamped_data/introduction.md  6
-rw-r--r--  docs/user_guide/timestamped_data/ordered_dictionary.md  19
-rw-r--r--  docs/user_guide/timestamped_data/pandas_dataframe_conversion.md  41
-rw-r--r--  docs/user_guide/timestamped_data/saving_and_loading.md  14
-rw-r--r--  docs/user_guide/utils/ready-made_scripts.md  6
50 files changed, 681 insertions, 1463 deletions
diff --git a/docs/user_guide/ar_environment/environment_exploitation.md b/docs/user_guide/ar_environment/environment_exploitation.md
deleted file mode 100644
index 9e4b236..0000000
--- a/docs/user_guide/ar_environment/environment_exploitation.md
+++ /dev/null
@@ -1,36 +0,0 @@
-Environment exploitation
-========================
-
-Once loaded, [ArCamera](../../argaze.md/#argaze.ArFeatures.ArCamera) assets can be exploited as illustrated below:
-
-```python
-# Access the AR environment ArUco detector, passing it an image in which to detect ArUco markers
-ar_camera.aruco_detector.detect_markers(image)
-
-# Access to an AR environment scene
-my_first_scene = ar_camera.scenes['my first AR scene']
-
-try:
-
- # Try to estimate AR scene pose from detected markers
- tvec, rmat, consistent_markers = my_first_scene.estimate_pose(ar_camera.aruco_detector.detected_markers)
-
-    # Project AR scene into camera image according to estimated pose
-    # Optional visual_hfov argument is set to 160° to clip AOI scene according to a vision cone
- aoi2D_scene = my_first_scene.project(tvec, rmat, visual_hfov=160)
-
- # Draw estimated AR scene axis
- my_first_scene.draw_axis(image)
-
- # Draw AOI2D scene projection
- aoi2D_scene.draw(image)
-
- # Do something with AOI2D scene projection
- ...
-
-# Catch exceptions raised by estimate_pose and project methods
-except (ArFeatures.PoseEstimationFailed, ArFeatures.SceneProjectionFailed) as e:
-
- print(e)
-
-```
diff --git a/docs/user_guide/ar_environment/environment_setup.md b/docs/user_guide/ar_environment/environment_setup.md
deleted file mode 100644
index 1f26d26..0000000
--- a/docs/user_guide/ar_environment/environment_setup.md
+++ /dev/null
@@ -1,77 +0,0 @@
-Environment Setup
-=================
-
-[ArCamera](../../argaze.md/#argaze.ArFeatures.ArCamera) setup is loaded from a JSON file.
-
-Each [ArCamera](../../argaze.md/#argaze.ArFeatures.ArCamera) defines a unique [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) dedicated to the detection of markers from a specific [ArUcoMarkersDictionary](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersDictionary) and with a given size. However, it is possible to load multiple [ArScene](../../argaze.md/#argaze.ArFeatures.ArScene) into the same [ArCamera](../../argaze.md/#argaze.ArFeatures.ArCamera).
-
-Here is a JSON environment file example; the mentioned .obj files are assumed to be located relative to the environment file on disk.
-
-```
-{
- "name": "my AR environment",
- "aruco_detector": {
- "dictionary": {
- "name": "DICT_APRILTAG_16h5"
-    },
- "marker_size": 5,
- "optic_parameters": {
- "rms": 0.6,
- "dimensions": [
- 1920,
- 1080
- ],
- "K": [
- [
- 1135,
- 0.0,
- 956
- ],
- [
- 0.0,
- 1135,
- 560
- ],
- [
- 0.0,
- 0.0,
- 1.0
- ]
- ],
- "D": [
- 0.01655492265003404,
- 0.1985524264972037,
- 0.002129965902489484,
- -0.0019528582922179365,
- -0.5792910353639452
- ]
- },
- "parameters": {
- "cornerRefinementMethod": 3,
- "aprilTagQuadSigma": 2,
- "aprilTagDeglitch": 1
- }
- },
- "scenes": {
- "my first AR scene" : {
- "aruco_markers_group": "./first_scene/markers.obj",
- "aoi_scene": "./first_scene/aoi.obj",
- "angle_tolerance": 15.0,
- "distance_tolerance": 2.54
- },
- "my second AR scene" : {
- "aruco_markers_group": "./second_scene/markers.obj",
- "aoi_scene": "./second_scene/aoi.obj",
- "angle_tolerance": 15.0,
- "distance_tolerance": 2.54
- }
- }
-}
-```
-
-```python
-from argaze import ArFeatures
-
-# Load AR environment
-ar_camera = ArFeatures.ArCamera.from_json('./environment.json')
-```
diff --git a/docs/user_guide/ar_environment/introduction.md b/docs/user_guide/ar_environment/introduction.md
deleted file mode 100644
index b19383b..0000000
--- a/docs/user_guide/ar_environment/introduction.md
+++ /dev/null
@@ -1,6 +0,0 @@
-AR environment setup
-====================
-
-The ArGaze toolkit eases ArUco and AOI management in a single AR environment setup.
-
-This section refers to [ArFeatures](../../argaze.md/#argaze.ArFeatures).
diff --git a/docs/user_guide/areas_of_interest/aoi_matching.md b/docs/user_guide/areas_of_interest/aoi_matching.md
deleted file mode 100644
index 60467f9..0000000
--- a/docs/user_guide/areas_of_interest/aoi_matching.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: AOI matching
----
-
-AOI matching
-============
-
-Once [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene) is projected as [AOI2DScene](../../argaze.md/#argaze.AreaOfInterest.AOI2DScene), it may be necessary to know which AOI is being looked at.
-
-The [AreaOfInterest](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) class in [AOIFeatures](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures) provides two ways to accomplish such a task.
-
-## Pointer-based matching
-
-Test whether a 2D pointer is inside an AOI using the contains_point() method, as illustrated below.
-
-![Contains point](../../img/contains_point.png)
-
-``` python
-pointer = (x, y)
-
-for name, aoi in aoi2D_scene.items():
-
- if aoi.contains_point(pointer):
-
-    # Do something with the looked-at AOI
- ...
-
-```
-
-It is also possible to get where a pointer is looking inside an AOI, provided that the AOI is a rectangular plane:
-
-``` python
-
-inner_x, inner_y = aoi.inner_axis(pointer)
-
-```
-
-## Circle-based matching
-
-As positions have limited accuracy, it is possible to define a radius around a pointer to test circle intersection with AOI.
-
-![Circle intersection](../../img/circle_intersection.png)
-
-``` python
-
-intersection_shape, intersection_aoi_ratio, intersection_circle_ratio = aoi.circle_intersection(pointer, radius)
-
-```
diff --git a/docs/user_guide/areas_of_interest/aoi_scene_description.md b/docs/user_guide/areas_of_interest/aoi_scene_description.md
deleted file mode 100644
index b96c1e0..0000000
--- a/docs/user_guide/areas_of_interest/aoi_scene_description.md
+++ /dev/null
@@ -1,83 +0,0 @@
----
-title: AOI scene description
----
-
-AOI scene description
-=====================
-
-## 2D description
-
-An AOI scene can be described in two dimensions using an [AOI2DScene](../../argaze.md/#argaze.AreaOfInterest.AOI2DScene) built from a dictionary description.
-
-``` dict
-{
- "tracking": [[672.0, 54.0], [1632.0, 54.0], [1632.0, 540.0], [672.0, 540.0]],
- "system": [[0.0, 54.0], [672.0, 54.0], [672.0, 540.0], [0.0, 540.0]],
- "communications": [[0.0, 594.0], [576.0, 594.0], [576.0, 1080.0], [0.0, 1080.0]],
- "resources": [[576.0, 594.0], [1632.0, 594.0], [1632.0, 1080.0], [576.0, 1080.0]]
-}
-...
-```
-
-Here is a sample of code to show the loading of an [AOI2DScene](../../argaze.md/#argaze.AreaOfInterest.AOI2DScene) from a dictionary description:
-
-
-``` python
-from argaze.AreaOfInterest import AOI2DScene
-
-# Load an AOI2D scene from dictionary
-aoi_2d_scene = AOI2DScene.AOI2DScene(aoi_scene_dictionary)
-```
-
-## 3D description
-
-An AOI scene can be described in three dimensions using an [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene) built from a 3D model, with all AOI as 3D planes, loaded through the OBJ file format.
-Notice that plane normals are not needed and planes are not necessarily four-vertex shapes.
-
-``` obj
-o PIC_ND
-v 6.513238 -27.113548 -25.163900
-v 22.994461 -27.310783 -24.552130
-v 6.718690 -6.467261 -26.482569
-v 23.252594 -6.592890 -25.873484
-f 1 2 4 3
-o PIC_ND_Aircraft
-v 6.994747 -21.286463 -24.727146
-v 22.740919 -21.406120 -24.147078
-v 7.086208 -12.096219 -25.314123
-v 22.832380 -12.215876 -24.734055
-f 5 6 8 7
-o PIC_ND_Wind
-v 7.086199 -11.769333 -25.335127
-v 12.081032 -11.807289 -25.151123
-v 7.115211 -8.854101 -25.521320
-v 12.110044 -8.892057 -25.337317
-f 9 10 12 11
-o PIC_ND_Waypoint
-v 17.774197 -11.819057 -24.943428
-v 22.769030 -11.857013 -24.759424
-v 17.803209 -8.903825 -25.129622
-v 22.798042 -8.941781 -24.945618
-f 13 14 16 15
-...
-o Thrust_Lever
-v 19.046124 15.523837 4.774072
-v 18.997263 -0.967944 5.701000
-v 18.988382 15.923470 -13.243046
-v 18.921808 -0.417994 -17.869610
-v 19.032232 19.241346 -3.040264
-v 19.020988 6.392717 5.872663
-v 18.945322 6.876906 -17.699480
-s off
-f 185 190 186 188 191 187 189
-...
-```
-
-Here is a sample of code to show the loading of an [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene) from an OBJ file description:
-
-``` python
-from argaze.AreaOfInterest import AOI3DScene
-
-# Load an AOI3D scene from OBJ file
-aoi_3d_scene = AOI3DScene.AOI3DScene.from_obj('./aoi_scene.obj')
-```
diff --git a/docs/user_guide/areas_of_interest/aoi_scene_projection.md b/docs/user_guide/areas_of_interest/aoi_scene_projection.md
deleted file mode 100644
index f348c6c..0000000
--- a/docs/user_guide/areas_of_interest/aoi_scene_projection.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: AOI scene projection
----
-
-AOI scene projection
-====================
-
-An [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene) can be rotated and translated according to a pose estimation before projecting it onto the camera image as an [AOI2DScene](../../argaze.md/#argaze.AreaOfInterest.AOI2DScene).
-
-![AOI projection](../../img/aoi_projection.png)
-
-``` python
-...
-
-# Assuming pose estimation is done (tvec and rmat)
-
-# Project AOI 3D scene according to pose estimation and optic parameters
-aoi2D_scene = aoi3D_scene.project(tvec, rmat, optic_parameters.K)
-
-# Draw AOI 2D scene
-aoi2D_scene.draw(image)
-```
diff --git a/docs/user_guide/areas_of_interest/heatmap.md b/docs/user_guide/areas_of_interest/heatmap.md
deleted file mode 100644
index 450c033..0000000
--- a/docs/user_guide/areas_of_interest/heatmap.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: Heatmap
----
-
-Heatmap
-=========
-
-[AOIFeatures](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures) provides the [Heatmap](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.Heatmap) class to draw a heatmap image.
-
-## Point spread
-
-The **point_spread** method draws a Gaussian point spread into the heatmap image at a given pointer position.
-
-![Point spread](../../img/point_spread.png)
-
-## Heatmap
-
-Heatmap visualisation shows where a pointer spends most of the time.
-
-![Heatmap](../../img/heatmap.png)
-
-```python
-from argaze.AreaOfInterest import AOIFeatures
-
-# Create heatmap of 800px * 600px resolution
-heatmap = AOIFeatures.Heatmap((800, 600))
-
-# Initialize heatmap
-heatmap.init()
-
-# Assuming a pointer position (x, y) is moving inside frame
-...:
-
- # Update heatmap at pointer position
- heatmap.update((x, y), sigma=0.05)
-
- # Do something with heatmap image
- ... heatmap.image
-
-``` \ No newline at end of file
diff --git a/docs/user_guide/areas_of_interest/introduction.md b/docs/user_guide/areas_of_interest/introduction.md
deleted file mode 100644
index 6f74dd4..0000000
--- a/docs/user_guide/areas_of_interest/introduction.md
+++ /dev/null
@@ -1,8 +0,0 @@
-About Areas Of Interest (AOI)
-=============================
-
-The [AreaOfInterest submodule](../../argaze.md/#argaze.AreaOfInterest) allows dealing with AOI in an AR environment through a set of high level classes:
-
-* [AOIFeatures](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures)
-* [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene)
-* [AOI2DScene](../../argaze.md/#argaze.AreaOfInterest.AOI2DScene) \ No newline at end of file
diff --git a/docs/user_guide/areas_of_interest/vision_cone_filtering.md b/docs/user_guide/areas_of_interest/vision_cone_filtering.md
deleted file mode 100644
index 7b29642..0000000
--- a/docs/user_guide/areas_of_interest/vision_cone_filtering.md
+++ /dev/null
@@ -1,18 +0,0 @@
-Vision cone filtering
-=====================
-
-The [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene) provides cone clipping support in order to select only [AOI](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) which are inside the vision cone field.
-
-![Vision cone](../../img/vision_cone.png)
-
-``` python
-# Transform scene into camera referential
-aoi3D_camera = aoi3D_scene.transform(tvec, rmat)
-
-# Get aoi inside vision cone field
-# The vision cone tip is positioned behind the head
-aoi3D_inside, aoi3D_outside = aoi3D_camera.vision_cone(cone_radius=300, cone_height=150, cone_tip=[0., 0., -20.])
-
-# Keep only aoi inside vision cone field
-aoi3D_scene = aoi3D_scene.copy(exclude=aoi3D_outside.keys())
-```
diff --git a/docs/user_guide/aruco_markers/dictionary_selection.md b/docs/user_guide/aruco_markers/dictionary_selection.md
deleted file mode 100644
index b9ba510..0000000
--- a/docs/user_guide/aruco_markers/dictionary_selection.md
+++ /dev/null
@@ -1,17 +0,0 @@
-Dictionary selection
-====================
-
-ArUco markers always belong to a set of markers called an ArUco markers dictionary.
-
-![ArUco dictionaries](../../img/aruco_dictionaries.png)
-
-Many ArUco dictionaries exist, with properties concerning the format, the number of markers or the difference between markers to avoid tracking errors.
-
-Here is the documentation [about ArUco markers dictionaries](https://docs.opencv.org/3.4/d9/d6a/group__aruco.html#gac84398a9ed9dd01306592dd616c2c975).
-
-``` python
-from argaze.ArUcoMarkers import ArUcoMarkersDictionary
-
-# Create a dictionary of specific April tags
-aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
-```
diff --git a/docs/user_guide/aruco_markers/introduction.md b/docs/user_guide/aruco_markers/introduction.md
deleted file mode 100644
index 9d78de0..0000000
--- a/docs/user_guide/aruco_markers/introduction.md
+++ /dev/null
@@ -1,15 +0,0 @@
-About ArUco markers
-===================
-
-![OpenCV ArUco markers](https://pyimagesearch.com/wp-content/uploads/2020/12/aruco_generate_tags_header.png)
-
-The OpenCV library provides a module to detect fiducial markers in a picture and estimate their pose (cf [OpenCV ArUco tutorial page](https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html)).
-
-The ArGaze [ArUcoMarkers submodule](../../argaze.md/#argaze.ArUcoMarkers) eases markers creation, camera calibration, markers detection and 3D scene pose estimation through a set of high level classes:
-
-* [ArUcoMarkersDictionary](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersDictionary)
-* [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker)
-* [ArUcoBoard](../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard)
-* [ArUcoOpticCalibrator](../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator)
-* [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector)
-* [ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) \ No newline at end of file
diff --git a/docs/user_guide/aruco_markers/markers_creation.md b/docs/user_guide/aruco_markers/markers_creation.md
deleted file mode 100644
index eab9890..0000000
--- a/docs/user_guide/aruco_markers/markers_creation.md
+++ /dev/null
@@ -1,17 +0,0 @@
-Markers creation
-================
-
-The creation of [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) from a dictionary is illustrated in the code below:
-
-``` python
-from argaze.ArUcoMarkers import ArUcoMarkersDictionary
-
-# Create a dictionary of specific April tags
-aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
-
-# Export marker n°5 as 3.5 cm picture with 300 dpi resolution
-aruco_dictionary.create_marker(5, 3.5).save('./markers/', 300)
-
-# Export all dictionary markers as 3.5 cm pictures with 300 dpi resolution
-aruco_dictionary.save('./markers/', 3.5, 300)
-``` \ No newline at end of file
diff --git a/docs/user_guide/aruco_markers/markers_detection.md b/docs/user_guide/aruco_markers/markers_detection.md
deleted file mode 100644
index af2fb4f..0000000
--- a/docs/user_guide/aruco_markers/markers_detection.md
+++ /dev/null
@@ -1,47 +0,0 @@
-Markers detection
-=================
-
-![Detected markers](../../img/detected_markers.png)
-
-Firstly, the [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) needs to know the expected dictionary and size (in centimeters) of the [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) it has to detect.
-
-Notice that extra parameters are passed to the detector: see [OpenCV ArUco markers detection parameters documentation](https://docs.opencv.org/4.x/d1/dcd/structcv_1_1aruco_1_1DetectorParameters.html) to know more.
-
-``` python
-from argaze.ArUcoMarkers import ArUcoDetector, ArUcoOpticCalibrator
-
-# Assuming camera calibration data are loaded
-
-# Loading extra detector parameters
-extra_parameters = ArUcoDetector.DetectorParameters.from_json('./detector_parameters.json')
-
-# Create ArUco detector to track DICT_APRILTAG_16h5 5cm length markers
-aruco_detector = ArUcoDetector.ArUcoDetector(optic_parameters=optic_parameters, dictionary='DICT_APRILTAG_16h5', marker_size=5, parameters=extra_parameters)
-```
-
-Here is [DetectorParameters](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.DetectorParameters) JSON file example:
-
-```
-{
- "cornerRefinementMethod": 1,
- "aprilTagQuadSigma": 2,
- "aprilTagDeglitch": 1
-}
-```
-
-The [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) processes an image to detect markers and can draw detection results onto it:
-
-``` python
-# Detect markers into image and draw them
-aruco_detector.detect_markers(image)
-aruco_detector.draw_detected_markers(image)
-
-# Get the corner positions in the image for each detected marker
-for marker_id, marker in aruco_detector.detected_markers.items():
-
- print(f'marker {marker_id} corners: ', marker.corners)
-
-    # Do something with the detected marker corners
- ...
-
-```
diff --git a/docs/user_guide/aruco_markers/markers_pose_estimation.md b/docs/user_guide/aruco_markers/markers_pose_estimation.md
deleted file mode 100644
index 487c220..0000000
--- a/docs/user_guide/aruco_markers/markers_pose_estimation.md
+++ /dev/null
@@ -1,20 +0,0 @@
-Markers pose estimation
-=======================
-
-After [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) detection, it is possible to estimate [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) pose in camera axis.
-
-![Pose estimation](../../img/pose_estimation.png)
-
-``` python
-# Estimate markers pose
-aruco_detector.estimate_markers_pose()
-
-# Get pose estimation related to each detected marker
-for marker_id, marker in aruco_detector.detected_markers.items():
-
- print(f'marker {marker_id} translation: ', marker.translation)
- print(f'marker {marker_id} rotation: ', marker.rotation)
-
- # Do something with each marker pose estimation
- ...
-``` \ No newline at end of file
diff --git a/docs/user_guide/aruco_markers/markers_scene_description.md b/docs/user_guide/aruco_markers/markers_scene_description.md
deleted file mode 100644
index c6dbf31..0000000
--- a/docs/user_guide/aruco_markers/markers_scene_description.md
+++ /dev/null
@@ -1,146 +0,0 @@
-Markers scene description
-=========================
-
-The ArGaze toolkit provides the [ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) class to describe where [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) are placed in a 3D model.
-
-![ArUco scene](../../img/aruco_markers_group.png)
-
-[ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) is useful to:
-
-* filter markers that belong to this predefined scene,
-* check the consistency of detected markers according to the place where each marker is expected to be,
-* estimate the pose of the scene from the pose of detected markers.
-
-## Scene creation
-
-### from OBJ
-
-ArUco scene description uses the common OBJ file format that can be exported from most 3D editors. Notice that plane normals (vn) need to be exported.
-
-``` obj
-o DICT_APRILTAG_16h5#0_Marker
-v -3.004536 0.022876 2.995370
-v 2.995335 -0.015498 3.004618
-v -2.995335 0.015498 -3.004618
-v 3.004536 -0.022876 -2.995370
-vn 0.0064 1.0000 -0.0012
-s off
-f 1//1 2//1 4//1 3//1
-o DICT_APRILTAG_16h5#1_Marker
-v -33.799068 46.450645 -32.200436
-v -27.852505 47.243549 -32.102116
-v -34.593925 52.396473 -32.076626
-v -28.647360 53.189377 -31.978306
-vn -0.0135 -0.0226 0.9997
-s off
-f 5//2 6//2 8//2 7//2
-...
-```
-
-Here is a sample of code to show the loading of an [ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) OBJ file description:
-
-``` python
-from argaze.ArUcoMarkers import ArUcoMarkersGroup
-
-# Create an ArUco scene from a OBJ file description
-aruco_markers_group = ArUcoMarkersGroup.ArUcoMarkersGroup.from_obj('./markers.obj')
-
-# Print loaded marker places
-for place_id, place in aruco_markers_group.places.items():
-
- print(f'place {place_id} for marker: ', place.marker.identifier)
- print(f'place {place_id} translation: ', place.translation)
- print(f'place {place_id} rotation: ', place.rotation)
-```
-
-### from JSON
-
-[ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) description can also be written in a JSON file format.
-
-``` json
-{
- "dictionary": "DICT_ARUCO_ORIGINAL",
- "marker_size": 1,
- "places": {
- "0": {
- "translation": [0, 0, 0],
- "rotation": [0, 0, 0]
- },
- "1": {
- "translation": [10, 10, 0],
- "rotation": [0, 0, 0]
- },
- "2": {
- "translation": [0, 10, 0],
- "rotation": [0, 0, 0]
- }
- }
-}
-```
-
-### from detected markers
-
-Here is a more advanced usage where ArUco scene is built from markers detected into an image:
-
-``` python
-from argaze.ArUcoMarkers import ArUcoMarkersGroup
-
-# Assuming markers have been detected and their pose estimated thanks to ArUcoDetector
-...
-
-# Build ArUco scene from detected markers
-aruco_markers_group = ArUcoMarkersGroup.ArUcoMarkersGroup(aruco_detector.marker_size, aruco_detector.dictionary, aruco_detector.detected_markers)
-```
-
-## Markers filtering
-
-Considering markers are detected, here is how to filter them to keep only those which belong to the scene:
-
-``` python
-scene_markers, remaining_markers = aruco_markers_group.filter_markers(aruco_detector.detected_markers)
-```
-
-## Marker poses consistency
-
-Then, scene marker poses can be validated by verifying their spatial consistency, considering angle and distance tolerances. This is particularly useful to discard ambiguous marker pose estimations when markers are parallel to the camera plane (see [issue on OpenCV Contribution repository](https://github.com/opencv/opencv_contrib/issues/3190#issuecomment-1181970839)).
-
-``` python
-# Check scene markers consistency with 10° angle tolerance and 1 cm distance tolerance
-consistent_markers, unconsistent_markers, unconsistencies = aruco_markers_group.check_markers_consistency(scene_markers, 10, 1)
-```
-
-## Scene pose estimation
-
-Several approaches are available to perform [ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) pose estimation from markers belonging to the scene.
-
-The first approach considers that scene pose can be estimated **from a single marker pose**:
-
-``` python
-# Let's select one consistent scene marker
-marker_id, marker = consistent_markers.popitem()
-
-# Estimate scene pose from a single marker
-tvec, rmat = self.aruco_markers_group.estimate_pose_from_single_marker(marker)
-```
-
-The second approach considers that scene pose can be estimated by **averaging several marker poses**:
-
-``` python
-# Estimate scene pose from all consistent scene markers
-tvec, rmat = self.aruco_markers_group.estimate_pose_from_markers(consistent_markers)
-```
-
-The third approach is only available when ArUco markers are placed in such a configuration that it is possible to **define orthogonal axes**:
-
-``` python
-tvec, rmat = self.aruco_markers_group.estimate_pose_from_axis_markers(origin_marker, horizontal_axis_marker, vertical_axis_marker)
-```
-
-## Scene exportation
-
-An ArUco scene can be exported to an OBJ file description to import it into most 3D editors.
-
-``` python
-# Export an ArUco scene as OBJ file description
-aruco_markers_group.to_obj('markers.obj')
-```
diff --git a/docs/user_guide/aruco_markers_pipeline/advanced_topics/aruco_detector_configuration.md b/docs/user_guide/aruco_markers_pipeline/advanced_topics/aruco_detector_configuration.md
new file mode 100644
index 0000000..f5b66c6
--- /dev/null
+++ b/docs/user_guide/aruco_markers_pipeline/advanced_topics/aruco_detector_configuration.md
@@ -0,0 +1,40 @@
+Improve ArUco markers detection
+===============================
+
+As explained in the [OpenCV ArUco documentation](https://docs.opencv.org/4.x/d1/dcd/structcv_1_1aruco_1_1DetectorParameters.html), ArUco markers detection is highly configurable.
+
+## Load ArUcoDetector parameters
+
+[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.Parameters) can be loaded thanks to a dedicated JSON entry.
+
+Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file with ArUco detector parameters:
+
+```json
+{
+ "name": "My FullHD camera",
+ "size": [1920, 1080],
+ "aruco_detector": {
+ "dictionary": "DICT_APRILTAG_16h5",
+ "marker_size": 5,
+ "parameters": {
+ "cornerRefinementMethod": 3,
+ "aprilTagQuadSigma": 2,
+ "aprilTagDeglitch": 1,
+ "useAruco3Detection": 1
+ }
+ },
+ ...
+```
+
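+Such a configuration file is then loaded as usual; here is a minimal sketch, assuming the extract above is completed and saved as *./configuration.json*:
+
+```python
+from argaze.ArUcoMarkers import ArUcoCamera
+
+# Load ArUcoCamera configuration, including the ArUco detector parameters
+aruco_camera = ArUcoCamera.ArUcoCamera.from_json('./configuration.json')
+```
+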
+## Print ArUcoDetector parameters
+
+```python
+# Assuming ArUcoCamera is loaded
+...
+
+# Print all ArUcoDetector parameters
+print(aruco_camera.aruco_detector.parameters)
+
+# Print only modified ArUcoDetector parameters
+print(f'{aruco_camera.aruco_detector.parameters:modified}')
+```
diff --git a/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md b/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md
index 455d95a..3277216 100644
--- a/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md
+++ b/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md
@@ -3,11 +3,11 @@ Calibrate optic parameters
A camera device has to be calibrated to compensate for its optical distortion.
-![Optic parameters calibration](../../img/optic_calibration.png)
+![Optic parameters calibration](../../../img/optic_calibration.png)
## Print calibration board
-The first step to calibrate a camera is to create an [ArUcoBoard](../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) like in the code below:
+The first step to calibrate a camera is to create an [ArUcoBoard](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) like in the code below:
``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoBoard
@@ -29,9 +29,9 @@ Let's print the calibration board before going further.
## Capture board pictures
-Then, the calibration process needs to make many different captures of an [ArUcoBoard](../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) through the camera and then, pass them to an [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) instance to detect board corners and store them as calibration data into an [ArUcoOpticCalibrator](../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator) for final calibration process.
+Then, the calibration process needs to make many different captures of an [ArUcoBoard](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) through the camera and then pass them to an [ArUcoDetector](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) instance to detect board corners and store them as calibration data into an [ArUcoOpticCalibrator](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator) for the final calibration process.
-![Calibration step](../../img/optic_calibration_step.png)
+![Calibration step](../../../img/optic_calibration_step.png)
The sample of code below illustrates how to:
@@ -131,3 +131,61 @@ Below, an optic_parameters JSON file example:
]
}
```
+
+## Load and display optic parameters
+
+[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator.OpticParameters) can be loaded thanks to a dedicated JSON entry.
+
+Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file where optic parameters are loaded and displayed:
+
+```json
+{
+ "name": "My FullHD Camera",
+ "size": [1920, 1080],
+ "aruco_detector": {
+ "dictionary": "DICT_APRILTAG_16h5",
+ "marker_size": 5,
+ "optic_parameters": {
+ "rms": 0.6688921504088245,
+ "dimensions": [
+ 1920,
+ 1080
+ ],
+ "K": [
+ [
+ 1135.6524381415752,
+ 0.0,
+ 956.0685325355497
+ ],
+ [
+ 0.0,
+ 1135.9272506869524,
+ 560.059099810324
+ ],
+ [
+ 0.0,
+ 0.0,
+ 1.0
+ ]
+ ],
+ "D": [
+ 0.01655492265003404,
+ 0.1985524264972037,
+ 0.002129965902489484,
+ -0.0019528582922179365,
+ -0.5792910353639452
+ ]
+ }
+ },
+ ...
+ "image_parameters": {
+ ...
+ "draw_optic_parameters_grid": {
+ "width": 192,
+ "height": 108,
+ "z": 100,
+ "point_size": 1,
+ "point_color": [0, 0, 255]
+ }
+ }
+``` \ No newline at end of file
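+
+Once loaded, the optic parameters can be inspected through the ArUco detector; here is a minimal sketch, assuming the *rms*, *K* and *D* fields shown in the JSON above are exposed as attributes of [ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator.OpticParameters):
+
+```python
+# Assuming ArUcoCamera is loaded
+...
+
+# Access optic parameters through the ArUco detector
+optic_parameters = aruco_camera.aruco_detector.optic_parameters
+
+# Print calibration error and intrinsic parameters
+# Note: attribute names are assumed from the JSON fields above
+print('rms:', optic_parameters.rms)
+print('K:', optic_parameters.K)
+print('D:', optic_parameters.D)
+```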
diff --git a/docs/user_guide/aruco_markers_pipeline/advanced_topics/scripting.md b/docs/user_guide/aruco_markers_pipeline/advanced_topics/scripting.md
new file mode 100644
index 0000000..892d6dd
--- /dev/null
+++ b/docs/user_guide/aruco_markers_pipeline/advanced_topics/scripting.md
@@ -0,0 +1,136 @@
+Script the pipeline
+===================
+
+All ArUco markers pipeline objects are accessible from a Python script.
+This can be particularly useful for real-time AR interaction applications.
+
+## Load ArUcoCamera configuration from dictionary
+
+First of all, [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration can be loaded from a Python dictionary.
+
+```python
+from argaze.ArUcoMarkers import ArUcoCamera
+
+# Edit a dict with ArUcoCamera configuration
+configuration = {
+ "name": "My FullHD camera",
+ "size": (1920, 1080),
+ ...
+ "aruco_detector": {
+ ...
+ },
+ "scenes": {
+ "MyScene" : {
+ "aruco_markers_group": {
+ ...
+ },
+ "layers": {
+ "MyLayer": {
+ "aoi_scene": {
+ ...
+ }
+ },
+ ...
+ }
+ },
+ ...
+    },
+ "layers": {
+ "MyLayer": {
+ ...
+ },
+ ...
+ },
+ "image_parameters": {
+ ...
+ }
+}
+
+# Load ArUcoCamera
+aruco_camera = ArUcoCamera.ArUcoCamera.from_dict(configuration)
+
+# Do something with ArUcoCamera
+...
+```
+
+## Access ArUcoCamera and ArScene attributes
+
+Once the configuration is loaded, it is possible to access its attributes: [read ArUcoCamera code reference](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) to get a complete list of what is available.
+
+Thus, the [ArUcoCamera.scenes](../../../argaze.md/#argaze.ArFeatures.ArCamera) attribute gives access to each loaded ArUco scene and so to their attributes: [read ArUcoScene code reference](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) to get a complete list of what is available.
+
+```python
+from argaze import ArFeatures
+
+# Assuming the ArUcoCamera is loaded
+...
+
+# Iterate over each ArUcoCamera scene
+for name, aruco_scene in aruco_camera.scenes.items():
+ ...
+```
+
+## Pipeline execution outputs
+
+The [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method returns data about pipeline execution.
+
+```python
+# Assuming that images are available
+...:
+
+ # Watch image with ArUco camera
+ detection_time, projection_time, exception = aruco_camera.watch(image)
+
+ # Do something with pipeline times
+ ...
+
+ # Do something with pipeline exception
+ if exception:
+ ...
+```
+
+Let's understand the meaning of each returned value.
+
+### *detection_time*
+
+ArUco marker detection time in ms.
+
+### *projection_time*
+
+Scenes projection time in ms.
+
+### *exception*
+
+A [python Exception](https://docs.python.org/3/tutorial/errors.html#exceptions) object raised during pipeline execution.
+
+## Setup ArUcoCamera image parameters
+
+Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+
+```python
+# Assuming ArUcoCamera is loaded
+...
+
+# Edit a dict with ArUcoCamera image parameters
+image_parameters = {
+ "draw_detected_markers": {
+ ...
+ },
+ "draw_scenes": {
+ ...
+ },
+ "draw_optic_parameters_grid": {
+ ...
+ },
+ ...
+}
+
+# Pass image parameters to ArUcoCamera
+aruco_camera_image = aruco_camera.image(**image_parameters)
+
+# Do something with ArUcoCamera image
+...
+```
+
+!!! note
+ [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) and so, benefits from all image parameters described in [gaze analysis pipeline visualisation section](../../gaze_analysis_pipeline/visualisation.md). \ No newline at end of file
diff --git a/docs/user_guide/aruco_markers_pipeline/aoi_3d_description.md b/docs/user_guide/aruco_markers_pipeline/aoi_3d_description.md
new file mode 100644
index 0000000..b02bc9e
--- /dev/null
+++ b/docs/user_guide/aruco_markers_pipeline/aoi_3d_description.md
@@ -0,0 +1,53 @@
+Describe 3D AOI
+===============
+
+Now that [scene pose is estimated](aruco_markers_description.md) thanks to the ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described in the same 3D referential.
+
+In the example scene, the screen and the sheet are considered as areas of interest.
+
+![3D AOI description](../../img/aoi_3d_description.png)
+
+All AOI need to be described from the same origin as the markers in a [right-handed 3D axis](https://robotacademy.net.au/lesson/right-handed-3d-coordinate-frame/) where:
+
+* +X is pointing to the right,
+* +Y is pointing to the top,
+* +Z is pointing backward.
+
+!!! warning
+ All AOI spatial values must be given in **centimeters**.
+
+### Edit OBJ file description
+
+The OBJ file format can be exported from most 3D editors.
+
+``` obj
+o Sheet
+v 14.200000 -3.000000 28.350000
+v 35.200000 -3.000000 28.350000
+v 14.200000 -3.000000 -1.35
+v 35.200000 -3.000000 -1.35
+f 1 2 4 3
+o Screen
+v 2.750000 2.900000 -0.500000
+v 49.250000 2.900000 -0.500000
+v 2.750000 29.100000 -0.500000
+v 49.250000 29.100000 -0.500000
+f 5 6 8 7
+```
+
+Here are common OBJ file features needed to describe AOI:
+
+* Object lines (starting with *o* key) indicate AOI name.
+* Vertex lines (starting with *v* key) indicate AOI vertices.
+* Face lines (starting with *f* key) link vertices together.
+
+### Edit JSON file description
+
+The JSON file format allows describing AOI vertices.
+
+``` json
+{
+ "Sheet": [[14.2, -3, 28.35], [35.2, -3, 28.35], [14.2, -3, -1.35], [35.2, -3, -1.35]],
+ "Screen": [[2.75, 2.9, -0.5], [49.25, 2.9, -0.5], [2.75, 29.1, -0.5], [49.25, 29.1, -0.5]]
+}
+```
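+
+Whatever the format, the resulting description can be loaded as an [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene); here is a minimal sketch, assuming the OBJ loader shown in earlier versions of this guide and an illustrative file name:
+
+```python
+from argaze.AreaOfInterest import AOI3DScene
+
+# Load the 3D AOI description above from an OBJ file
+aoi_3d_scene = AOI3DScene.AOI3DScene.from_obj('./aoi_3d_description.obj')
+
+# Print loaded 3D AOI, assuming the scene iterates like a dictionary of AOI by name
+for name, aoi in aoi_3d_scene.items():
+
+    print(f'AOI {name}:', aoi)
+```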
diff --git a/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md b/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md
new file mode 100644
index 0000000..f1ae1f6
--- /dev/null
+++ b/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md
@@ -0,0 +1,128 @@
+Define a 3D AOI as a frame
+==========================
+
+When a 3D AOI of the scene contains other coplanar 3D AOI, like a screen with GUI elements displayed on it, it is better to describe them as 2D AOI inside a 2D coordinate system related to the containing 3D AOI.
+
+![3D AOI frame](../../img/aruco_camera_aoi_frame.png)
+
+## Add ArFrame to ArUcoScene
+
+The [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) class defines a rectangular area into which timestamped gaze positions are projected and inside which they need to be analyzed.
+
+Here is the previous extract where "Screen" AOI is defined as a frame into [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) configuration:
+
+```json
+{
+ "name": "My FullHD camera",
+ "size": [1920, 1080],
+ ...
+ "scenes": {
+ "MyScene" : {
+ "aruco_markers_group": {
+ ...
+ },
+ "layers": {
+ "MyLayer": {
+ "aoi_scene": {
+ "Sheet": [[14.2, -3, 28.35], [35.2, -3, 28.35], [14.2, -3, -1.35], [35.2, -3, -1.35]],
+ "Screen": [[2.75, 2.9, -0.5], [49.25, 2.9, -0.5], [2.75, 29.1, -0.5], [49.25, 29.1, -0.5]]
+ }
+ }
+ },
+ "frames": {
+ "Screen": {
+ "size": [1920, 1080],
+ "layers": {
+ "MyLayer": {
+ "aoi_scene": {
+ "GeoSector": [[860, 160], [1380, 100], [1660, 400], [1380, 740], [1440, 960], [920, 920], [680, 800], [640, 560]],
+ "LeftPanel": {
+ "Rectangle": {
+ "x": 0,
+ "y": 0,
+ "width": 350,
+ "height": 1080
+ }
+ },
+ "CircularWidget": {
+ "Circle": {
+ "cx": 1800,
+ "cy": 120,
+ "radius": 80
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ ...
+}
+```
+Now, let's understand the meaning of each JSON entry.
+
+### *frames*
+
+An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) instance can contain multiple [ArFrames](../../argaze.md/#argaze.ArFeatures.ArFrame) stored by name.
+
+### Screen
+
+The name of a 3D AOI **and** an [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame). Basically useful for visualisation purpose.
+
+!!! warning "AOI / Frame names policy"
+
+ An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer 3D AOI is defined as an [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame, **provided they have the same name**.
+
+!!! warning "Layer name policy"
+
+ An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame layer is projected into [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer, **provided they have the same name**.
+
+!!! note
+
+ [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame layers are projected into their dedicated [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layers when the JSON configuration file is loaded.
+
+## Pipeline execution
+
+### Map ArUcoCamera image into ArUcoScenes frames
+
+After the camera image is passed to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method, it is possible to apply a perspective transformation in order to project the watched image into each [ArUcoScenes](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) [frames background](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
+
+```python
+# Assuming that Full HD (1920x1080) video stream or file is opened
+...
+
+# Assuming that the video reading is handled in a looping code block
+...:
+
+    # Capture image from video stream or file
+ image = video_capture.read()
+
+ # Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
+ aruco_camera.watch(image)
+
+ # Map watched image into ArUcoScenes frames background
+ aruco_camera.map()
+```
+
+### Analyse timestamped gaze positions into ArUcoScenes frames
+
+[ArUcoScenes](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frames benefit from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
+
+!!! note
+
+ Timestamped gaze positions passed to [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method are projected into [ArUcoScenes](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frames if applicable.
+
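+Here is a minimal sketch of this step, assuming timestamped gaze positions come from an eye tracker stream and that [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) is called as in the [gaze analysis pipeline section](../gaze_analysis_pipeline/configuration_and_execution.md):
+
+```python
+from argaze import GazeFeatures
+
+# Assuming ArUcoCamera is loaded and aruco_camera.watch is called on each camera image
+...
+
+# Assuming timestamped gaze positions (timestamp, x, y) are available
+...:
+
+    # Project timestamped gaze position into camera frame then, if applicable, into scene frames
+    aruco_camera.look(timestamp, GazeFeatures.GazePosition((x, y)))
+```
+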
+### Display each ArUcoScenes frames
+
+All [ArUcoScenes](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame images can be displayed like any [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
+
+```python
+ ...
+
+ # Display all ArUcoScenes frames
+ for frame in aruco_camera.scene_frames:
+
+ ... frame.image()
+``` \ No newline at end of file
diff --git a/docs/user_guide/aruco_markers_pipeline/aoi_projection.md b/docs/user_guide/aruco_markers_pipeline/aoi_3d_projection.md
index 027f805..8c7310b 100644
--- a/docs/user_guide/aruco_markers_pipeline/aoi_projection.md
+++ b/docs/user_guide/aruco_markers_pipeline/aoi_3d_projection.md
@@ -1,15 +1,15 @@
-Project AOI into camera frame
-=============================
+Project 3D AOI into camera frame
+================================
-Once [ArUcoScene pose is estimated](pose_estimation.md) and [AOI are described](aoi_description.md), AOI can be projected into [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame.
+Once [ArUcoScene pose is estimated](pose_estimation.md) and [3D AOI are described](aoi_3d_description.md), AOI can be projected into [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame.
-![AOI projection](../../img/aruco_camera_aoi_projection.png)
+![3D AOI projection](../../img/aruco_camera_aoi_projection.png)
-## Add ArLayer to ArUcoScene to load AOI
+## Add ArLayer to ArUcoScene to load 3D AOI
-The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class allows to load areas of interest description. An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) instance can contains multiples [ArLayers](../../argaze.md/#argaze.ArFeatures.ArLayer).
+The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class allows loading 3D AOI descriptions.
-Here is the previous extract where one layer is added to the [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) configuration:
+Here is the previous extract where one layer is added to [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) configuration:
```json
{
@@ -24,9 +24,8 @@ Here is the previous extract where one layer is added to the [ArUcoScene](../../
"layers": {
"MyLayer": {
"aoi_scene": {
- "YellowSquare": [[6.2, -7.275252, 25.246159], [31.2, -7.275252, 25.246159], [31.2, 1.275252, 1.753843], [6.2, 1.275252, 1.753843]],
- "GrayRectangle": [[2.5, 2.5, -0.5], [37.5, 2.5, -0.5], [37.5, 27.5, -0.5], [2.5, 27.5, -0.5]],
- "BlueTriangle": [[12.5, 7.5, -0.5], [27.5, 7.5, -0.5], [20, 22.5, -0.5]]
+ "Sheet": [[14.2, -3, 28.35], [35.2, -3, 28.35], [14.2, -3, -1.35], [35.2, -3, -1.35]],
+ "Screen": [[2.75, 2.9, -0.5], [49.25, 2.9, -0.5], [2.75, 29.1, -0.5], [49.25, 29.1, -0.5]]
}
}
}
@@ -38,17 +37,21 @@ Here is the previous extract where one layer is added to the [ArUcoScene](../../
Now, let's understand the meaning of each JSON entry.
-### "MyLayer"
+### *layers*
-The name of the [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Basically useful for visualisation purpose.
+An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) instance can contain multiple [ArLayers](../../argaze.md/#argaze.ArFeatures.ArLayer) stored by name.
-### AOI Scene
+### MyLayer
-The [AOIScene](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AOIScene) defines a set of 3D [AreaOfInterest](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) registered by name.
+The name of an [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Basically useful for visualisation purpose.
-## Add ArLayer to ArUcoCamera to project AOI
+### *aoi_scene*
-Here is the previous extract where one layer is added to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) and displayed:
+The set of 3D AOI into the layer as defined at [3D AOI description chapter](aoi_3d_description.md).
+
+## Add ArLayer to ArUcoCamera to project 3D AOI into
+
+Here is the previous extract where one layer is added to [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration and displayed:
```json
{
@@ -91,21 +94,25 @@ Here is the previous extract where one layer is added to the [ArUcoCamera](../..
Now, let's understand the meaning of each JSON entry.
-### "MyLayer"
+### *layers*
+
+An [ArUcoCamera](../../argaze.md/#argaze.ArFeatures.ArFrame) instance can contain multiple [ArLayers](../../argaze.md/#argaze.ArFeatures.ArLayer) stored by name.
+
+### MyLayer
-The name of the [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Basically useful for visualisation purpose.
+The name of an [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Basically useful for visualisation purpose.
!!! warning "Layer name policy"
- An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer is projected into [an ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer, **provided they have the same name**.
+    An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer is projected into an [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer, **provided they have the same name**.
!!! note
[ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layers are projected into their dedicated [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layers when calling the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method.
-## Add AOI analysis
+## Add AOI analysis features to ArUcoCamera layer
-When a scene layer is projected into a camera layer, it means that the 3D [ArLayer.aoi_scene](../../argaze.md/#argaze.ArFeatures.ArLayer.aoi_scene) description of the scene becomes the 2D camera's [ArLayer.aoi_scene](../../argaze.md/#argaze.ArFeatures.ArLayer.aoi_scene) description of the camera.
+When a scene layer is projected into a camera layer, it means that the scene's 3D AOI are transformed into the camera's 2D AOI.
Therefore, it means that [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) benefits from all the services described in [AOI analysis pipeline section](../gaze_analysis_pipeline/aoi_analysis.md).
@@ -156,4 +163,4 @@ Here is the previous extract where AOI matcher, AOI scan path and AOI scan path
!!! warning
- Adding scan path and scan path analyzers to an [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer doesn't make sense if the camera is moving.
+    Adding scan path and scan path analyzers to an [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer doesn't make sense, as the space viewed through the camera frame doesn't necessarily reflect the space the gaze is covering.
diff --git a/docs/user_guide/aruco_markers_pipeline/aoi_description.md b/docs/user_guide/aruco_markers_pipeline/aoi_description.md
deleted file mode 100644
index 101ec9f..0000000
--- a/docs/user_guide/aruco_markers_pipeline/aoi_description.md
+++ /dev/null
@@ -1,62 +0,0 @@
-Describe AOI
-============
-
-Once [ArUco markers are placed into a scene](aruco_markers_description.md), areas of interest need to be described into the same 3D referential.
-
-In the example scene, each screen is considered as an area of interest, plus the blue triangle area inside the top screen.
-
-![AOI description](../../img/aoi_description.png)
-
-All AOIs need to be described from the same origin as the markers in a [right-handed 3D axis](https://robotacademy.net.au/lesson/right-handed-3d-coordinate-frame/) where:
-
-* +X is pointing to the right,
-* +Y is pointing to the top,
-* +Z is pointing to the backward.
-
-!!! warning
- All AOIs spatial values must be given in **centimeters**.
-
-### Edit OBJ file description
-
-OBJ file format could be exported from most 3D editors.
-
-``` obj
-o YellowSquare
-v 6.200003 -7.275252 25.246159
-v 31.200003 -7.275252 25.246159
-v 6.200003 1.275252 1.753843
-v 31.200003 1.275252 1.753843
-s off
-f 1 2 4 3
-o GrayRectangle
-v 2.500000 2.500000 -0.500000
-v 37.500000 2.500000 -0.500000
-v 2.500000 27.500000 -0.500000
-v 37.500000 27.500000 -0.500000
-s off
-f 5 6 8 7
-o BlueTriangle
-v 12.500002 7.500000 -0.500000
-v 27.500002 7.500000 -0.500000
-v 20.000002 22.500000 -0.500000
-s off
-f 9 10 11
-```
-
-Here are common OBJ file features needed to describe AOIs:
-
-* Object lines (starting with *o* key) indicate AOI name.
-* Vertex lines (starting with *v* key) indicate AOI vertices.
-* Face lines (starting with *f* key) link vertices together.
-
-### Edit JSON file description
-
-The JSON file format allows describing AOI vertices.
-
-``` json
-{
- "YellowSquare": [[6.2, -7.275252, 25.246159], [31.2, -7.275252, 25.246159], [31.2, 1.275252, 1.753843], [6.2, 1.275252, 1.753843]],
- "GrayRectangle": [[2.5, 2.5, -0.5], [37.5, 2.5, -0.5], [37.5, 27.5, -0.5], [2.5, 27.5, -0.5]],
- "BlueTriangle": [[12.5, 7.5, -0.5], [27.5, 7.5, -0.5], [20, 22.5, -0.5]]
-}
-```
diff --git a/docs/user_guide/aruco_markers_pipeline/aruco_markers_description.md b/docs/user_guide/aruco_markers_pipeline/aruco_markers_description.md
index 1c13013..6380f88 100644
--- a/docs/user_guide/aruco_markers_pipeline/aruco_markers_description.md
+++ b/docs/user_guide/aruco_markers_pipeline/aruco_markers_description.md
@@ -3,11 +3,11 @@ Set up ArUco markers
First of all, ArUco markers need to be printed and placed into the scene.
-Here is an example scene where markers are surrounding a multi-screen workspace with a triangle area inside one of them.
+Here is an example scene where markers are surrounding a workspace with a screen and a sheet on the table (considering the sheet stays static for the moment).
![Scene](../../img/scene.png)
-## Print ArUco markers from a ArUco dictionary
+## Print ArUco markers from an ArUco dictionary
ArUco markers always belong to a set of markers called an ArUco markers dictionary.
@@ -65,24 +65,18 @@ v 0.000000 0.000000 0.000000
v 5.000000 0.000000 0.000000
v 0.000000 5.000000 0.000000
v 5.000000 5.000000 0.000000
-vn 0.0000 0.0000 1.0000
-s off
f 1//1 2//1 4//1 3//1
o DICT_APRILTAG_16h5#1_Marker
-v -1.767767 23.000002 3.767767
-v 1.767767 23.000002 0.232233
-v -1.767767 28.000002 3.767767
-v 1.767767 28.000002 0.232233
-vn 0.7071 0.0000 0.7071
-s off
+v -0.855050 24.000002 4.349232
+v 0.855050 24.000002 -0.349231
+v -0.855050 29.000002 4.349232
+v 0.855050 29.000002 -0.349231
f 5//2 6//2 8//2 7//2
o DICT_APRILTAG_16h5#2_Marker
-v 33.000000 -1.767767 4.767767
-v 38.000000 -1.767767 4.767767
-v 33.000000 1.767767 1.232233
-v 38.000000 1.767767 1.232233
-vn 0.0000 0.7071 0.7071
-s off
+v 44.000000 0.000000 9.500000
+v 49.000000 0.000000 9.500000
+v 44.000000 -0.000000 4.500000
+v 49.000000 -0.000000 4.500000
f 9//3 10//3 12//3 11//3
```
@@ -90,7 +84,6 @@ Here are common OBJ file features needed to describe ArUco markers places:
* Object lines (starting with *o* key) indicate markers dictionary and id by following this format: **DICTIONARY**#**ID**\_Marker.
* Vertex lines (starting with *v* key) indicate marker corners. The marker size will be automatically deduced from the geometry.
-* Plane normals (starting with *vn* key) need to be exported for further pose estimation.
* Face lines (starting with *f* key) link vertex and normal indexes together.
!!! warning
@@ -110,12 +103,12 @@ JSON file format allows to describe markers places using translation and euler a
"rotation": [0, 0, 0]
},
"1": {
- "translation": [0, 25.5, 2],
- "rotation": [0, 45, 0]
+ "translation": [0, 26.5, 2],
+ "rotation": [0, 70, 0]
},
"2": {
- "translation": [35.5, 0, 3],
- "rotation": [-45, 0, 0]
+ "translation": [46.5, 0, 7],
+ "rotation": [-90, 0, 0]
}
}
}
diff --git a/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md b/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md
index 81c577f..60a1115 100644
--- a/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md
@@ -3,7 +3,7 @@ Load and execute pipeline
Once [ArUco markers are placed into a scene](aruco_markers_description.md), they can be detected thanks to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class.
-As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class also benefits from all the services described in [gaze analysis pipeline section](./user_guide/gaze_analysis_pipeline/introduction.md).
+As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class also benefits from all the services described in [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
![ArUco camera frame](../../img/aruco_camera_frame.png)
@@ -29,6 +29,12 @@ Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCa
},
"image_parameters": {
"background_weight": 1,
+ "draw_detected_markers": {
+ "color": [0, 255, 0],
+ "draw_axes": {
+ "thickness": 3
+ }
+ },
"draw_gaze_positions": {
"color": [0, 255, 255],
"size": 2
@@ -40,12 +46,6 @@ Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCa
},
"draw_saccades": {
"line_color": [255, 0, 255]
- },
- "draw_detected_markers": {
- "color": [0, 255, 0],
- "draw_axes": {
- "thickness": 3
- }
}
}
}
@@ -62,15 +62,15 @@ aruco_camera = ArUcoCamera.ArUcoCamera.from_json('./configuration.json')
Now, let's understand the meaning of each JSON entry.
-### Name - *inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
+### *name - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
The name of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame. Basically useful for visualisation purpose.
-### Size - *inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
+### *size - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
The size of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame in pixels. Be aware that gaze positions have to be in the same range of values to be projected into it.
-### ArUco Detector
+### *aruco_detector*
The first [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) pipeline step is to detect ArUco markers inside the input image and estimate their poses.
@@ -81,21 +81,21 @@ The [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector) is in ch
!!! warning "Mandatory"
JSON *aruco_detector* entry is mandatory.
-### Gaze Movement Identifier - *inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
+### *gaze_movement_identifier - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
The first [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step, dedicated to identifying fixations or saccades from consecutive timestamped gaze positions.
![Gaze movement identification](../../img/aruco_camera_gaze_movement_identification.png)
-### Image parameters - *inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
+### *image_parameters - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
-The usual [ArFrame visualisation parameters](./user_guide/gaze_analysis_pipeline/visualisation.md) plus one additional *draw_detected_markers* field.
+The usual [ArFrame visualisation parameters](../gaze_analysis_pipeline/visualisation.md) plus one additional *draw_detected_markers* field.
## Pipeline execution
-### Detect ArUco markers, estimate scene pose and project AOI
+### Detect ArUco markers, estimate scene pose and project 3D AOI
-Pass each camera image to [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method to execute the whole pipeline dedicated to ArUco markers detection, scene pose estimation and AOI projection.
+Pass each camera image to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method to execute the whole pipeline dedicated to ArUco marker detection, scene pose estimation and 3D AOI projection.
```python
# Assuming that Full HD (1920x1080) video stream or file is opened
@@ -107,19 +107,16 @@ Pass each camera image to [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures
# Capture image from video stream or file
image = video_capture.read()
- # Detect ArUco markers, estimate scene pose then, project AOI into camera frame
+ # Detect ArUco markers, estimate scene pose, then project 3D AOI into camera frame
aruco_camera.watch(image)
- # Display ArUcoCamera frame image to display detected ArUco markers, scene pose, AOI projection and ArFrame visualisation.
+ # Display ArUcoCamera frame image to show detected ArUco markers, scene pose, 2D AOI projection and ArFrame visualisation.
... aruco_camera.image()
```
-!!! warning "Pose estimation error"
- ArUco markers pose estimation algorithm can lead to errors due to geometric ambiguities as explain in [this article](https://ieeexplore.ieee.org/document/1717461). To discard such ambiguous cases, markers should **as less as possible be parallel to camera plan**.
-
### Analyse timestamped gaze positions in camera frame
-As mentioned above, [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and so, benefits from all the services described in [gaze analysis pipeline section](./user_guide/gaze_analysis_pipeline/introduction.md).
+As mentioned above, [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and so benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
Particularly, timestamped gaze positions can be passed one by one to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole pipeline dedicated to gaze analysis.
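For instance, here is a minimal sketch of that call, assuming the method takes a timestamp and a [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) as arguments:

```python
from argaze import GazeFeatures

# Assuming that timestamped gaze positions are provided through live stream or later data reading
...:

    # Run the whole gaze analysis pipeline on the camera frame
    aruco_camera.look(timestamp, GazeFeatures.GazePosition((x, y)))
```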
@@ -135,4 +132,4 @@ Particularly, timestamped gaze positions can be passed one by one to [ArUcoCamer
At this point, the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method only detects ArUco markers and the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArCamera.look) method only processes gaze movement identification without any AOI support, as no scene description is provided in the JSON configuration file.
- Read the next chapters to learn [how to estimate scene pose](pose_estimation.md) and [how to project AOI](aoi_projection.md). \ No newline at end of file
+ Read the next chapters to learn [how to estimate scene pose](pose_estimation.md), [how to describe the scene's 3D AOI](aoi_3d_description.md) and [how to project them into camera frame](aoi_3d_projection.md). \ No newline at end of file
diff --git a/docs/user_guide/aruco_markers_pipeline/introduction.md b/docs/user_guide/aruco_markers_pipeline/introduction.md
index 836569a..37ab055 100644
--- a/docs/user_guide/aruco_markers_pipeline/introduction.md
+++ b/docs/user_guide/aruco_markers_pipeline/introduction.md
@@ -1,29 +1,29 @@
Overview
========
-This section explains how to build augmented reality pipelines based on ArUco Markers technology for various use cases.
+This section explains how to build augmented reality pipelines based on [ArUco Markers technology](https://www.sciencedirect.com/science/article/abs/pii/S0031320314000235) for various use cases.
-The OpenCV library provides a module to detect fiducial markers into a picture and estimate their poses (cf [OpenCV ArUco tutorial page](https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html)).
+The OpenCV library provides a module to detect fiducial markers in a picture and estimate their poses.
-![OpenCV ArUco markers](https://pyimagesearch.com/wp-content/uploads/2020/12/aruco_generate_tags_header.png)
+![OpenCV ArUco markers](../../img/opencv_aruco.png)
-The ArGaze [ArUcoMarkers submodule](../../argaze.md/#argaze.ArUcoMarkers) eases markers creation, optic calibration, markers detection and 3D scene pose estimation through a set of high level classes.
+The ArGaze [ArUcoMarkers submodule](../../argaze.md/#argaze.ArUcoMarkers) eases marker creation, marker detection and 3D scene pose estimation through a set of high-level classes.
-First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
+<!-- First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters. -->
-![ArUco markers pipeline](../../img/aruco_markers_pipeline.png)
+<!-- ![ArUco markers pipeline](../../img/aruco_markers_pipeline.png) -->
To build your own ArUco markers pipeline, you need to know:
* [How to setup ArUco markers into a scene](aruco_markers_description.md),
-* [How to describe scene's AOI](aoi_description.md),
* [How to load and execute ArUco markers pipeline](configuration_and_execution.md),
* [How to estimate scene pose](pose_estimation.md),
-* [How to project AOI into camera frame](aoi_projection.md),
-* [How to visualize ArUcoCamera and ArUcoScenes](visualisation.md)
+* [How to describe scene's AOI](aoi_3d_description.md),
+* [How to project 3D AOI into camera frame](aoi_3d_projection.md),
+* [How to define a 3D AOI as a frame](aoi_3d_frame.md).
More advanced features are also explained, such as:
-* [How to script ArUco markers pipeline](advanced_topics/scripting.md)
-* [How to calibrate optic parameters](optic_parameters_calibration.md)
-* [How to improve ArUco markers detection](advanced_topics/aruco_detector_configuration.md)
+* [How to script ArUco markers pipeline](advanced_topics/scripting.md),
+* [How to calibrate optic parameters](advanced_topics/optic_parameters_calibration.md),
+* [How to improve ArUco markers detection](advanced_topics/aruco_detector_configuration.md).
diff --git a/docs/user_guide/aruco_markers_pipeline/pose_estimation.md b/docs/user_guide/aruco_markers_pipeline/pose_estimation.md
index 6acafee..6b58b24 100644
--- a/docs/user_guide/aruco_markers_pipeline/pose_estimation.md
+++ b/docs/user_guide/aruco_markers_pipeline/pose_estimation.md
@@ -1,13 +1,13 @@
Estimate scene pose
===================
-An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) class defines a space with [ArUco markers inside](aruco_markers_description.md) helping to estimate scene pose when they are watched by [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera).
+Once [ArUco markers are placed into a scene](aruco_markers_description.md) and [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) is [configured](configuration_and_execution.md), scene pose can be estimated.
![Scene pose estimation](../../img/aruco_camera_pose_estimation.png)
## Add ArUcoScene to ArUcoCamera JSON configuration file
-An [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) instance can contains multiples [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene).
+The [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) class defines a space with [ArUco markers inside](aruco_markers_description.md) that helps to estimate the scene pose when they are watched by an [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera).
Here is an extract from the JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file with a sample where one scene is added and displayed:
@@ -27,17 +27,17 @@ Here is an extract from the JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMark
"rotation": [0, 0, 0]
},
"1": {
- "translation": [0, 25.5, 2],
- "rotation": [0, 45, 0]
+ "translation": [0, 26.5, 2],
+ "rotation": [0, 70, 0]
},
"2": {
- "translation": [35.5, 0, 3],
- "rotation": [-45, 0, 0]
+ "translation": [46.5, 0, 7],
+ "rotation": [-90, 0, 0]
}
}
}
}
- }
+ },
...
"image_parameters": {
...
@@ -51,10 +51,6 @@ Here is an extract from the JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMark
"draw_places": {
"color": [0, 0, 0],
"border_size": 1
- },
- "draw_places_axes": {
- "thickness": 1,
- "length": 2.5
}
}
}
@@ -65,11 +61,15 @@ Here is an extract from the JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMark
Now, let's understand the meaning of each JSON entry.
-### "MyScene"
+### *scenes*
+
+An [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) instance can contain multiple [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) instances stored by name.
+
+### MyScene
-The name of the [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene). Basically useful for visualisation purpose.
+The name of an [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene). Basically useful for visualisation purposes.
-### ArUco markers group
+### *aruco_markers_group*
The 3D places of ArUco markers in the scene, as defined in the [ArUco markers description chapter](aruco_markers_description.md). Thanks to this description, it is possible to estimate the pose of the [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) in the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame.
@@ -77,6 +77,6 @@ The 3D places of ArUco markers into the scene as defined at [ArUco markers descr
[ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) pose estimation is done when calling the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method.
-### Draw scenes
+### *draw_scenes*
The drawing parameters of each loaded [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) in [ArUcoCamera.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image).
diff --git a/docs/user_guide/gaze_analysis/gaze_movement.md b/docs/user_guide/gaze_analysis/gaze_movement.md
deleted file mode 100644
index 83f67e1..0000000
--- a/docs/user_guide/gaze_analysis/gaze_movement.md
+++ /dev/null
@@ -1,163 +0,0 @@
-Gaze movement
-=============
-
-## Definition
-
-!!! note
-
- *"The act of classifying eye movements into distinct events is, on a general level, driven by a desire to isolate different intervals of the data stream strongly correlated with certain oculomotor or cognitive properties."*
-
- Citation from ["One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms"](https://link.springer.com/article/10.3758/s13428-016-0738-9) article.
-
-[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines abstract [GazeMovement](../../argaze.md/#argaze.GazeFeatures.GazeMovement) class, then abstract [Fixation](../../argaze.md/#argaze.GazeFeatures.Fixation) and [Saccade](../../argaze.md/#argaze.GazeFeatures.Saccade) classes which inherit from [GazeMovement](../../argaze.md/#argaze.GazeFeatures.GazeMovement).
-
-The **positions** [GazeMovement](../../argaze.md/#argaze.GazeFeatures.GazeMovement) attribute contain all [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) belonging to itself.
-
-![Fixation and Saccade](../../img/fixation_and_saccade.png)
-
-## Identification
-
-[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines abstract [GazeMovementIdentifier](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier) classe to let add various identification algorithms.
-
-Some gaze movement identification algorithms are available thanks to [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) submodule:
-
-* [Dispersion threshold identification (I-DT)](../../argaze.md/#argaze.GazeAnalysis.DispersionThresholdIdentification)
-* [Velocity threshold identification (I-VT)](../../argaze.md/#argaze.GazeAnalysis.VelocityThresholdIdentification)
-
-### Identify method
-
-[GazeMovementIdentifier.identify](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier.identify) method allows to fed its identification algorithm with successive gaze positions to output Fixation, Saccade or any kind of GazeMovement instances.
-
-Here is a sample of code based on [I-DT](../../argaze.md/#argaze.GazeAnalysis.DispersionThresholdIdentification) algorithm to illustrate how to use it:
-
-``` python
-from argaze import GazeFeatures
-from argaze.GazeAnalysis import DispersionThresholdIdentification
-
-# Create a gaze movement identifier based on dispersion algorithm with 50px max deviation 200 ms max duration thresholds
-gaze_movement_identifier = DispersionThresholdIdentification.GazeMovementIdentifier(50, 200)
-
-# Assuming that timestamped gaze positions are provided through live stream or later data reading
-...:
-
- gaze_movement = gaze_movement_identifier.identify(timestamp, gaze_position)
-
- # Fixation identified
- if GazeFeatures.is_fixation(gaze_movement):
-
- # Access to first gaze position of identified fixation
- start_ts, start_position = gaze_movement.positions.first
-
- # Access to fixation duration
- print('duration: {gaze_movement.duration}')
-
- # Iterate over all gaze positions of identified fixation
- for ts, position in gaze_movement.positions.items():
-
- # Do something with each fixation position
- ...
-
- # Saccade identified
- elif GazeFeatures.is_saccade(gaze_movement):
-
- # Access to first gaze position of identified saccade
- start_ts, start_position = gaze_movement.positions.first
-
- # Access to saccade amplitude
- print('amplitude: {gaze_movement.amplitude}')
-
- # Iterate over all gaze positions of identified saccade
- for ts, position in gaze_movement.positions.items():
-
- # Do something with each saccade position
- ...
-
- # No gaze movement identified
- else:
-
- continue
-
-```
-
-### Browse method
-
-[GazeMovementIdentifier.browse](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier.browse) method allows to pass a [TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) buffer to apply identification algorithm on all gaze positions inside.
-
-Identified gaze movements are returned through:
-
-* [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements) instance where all fixations are stored by starting gaze position timestamp.
-* [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements) instance where all saccades are stored by starting gaze position timestamp.
-* [TimeStampedGazeStatus](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeStatus) instance where all gaze positions are linked to a fixation or saccade index.
-
-``` python
-# Assuming that timestamped gaze positions are provided through data reading
-
-ts_fixations, ts_saccades, ts_status = gaze_movement_identifier.browse(ts_gaze_positions)
-
-```
-
-* ts_fixations would look like:
-
-|timestamp|positions |duration|dispersion|focus |
-|:--------|:-------------------------------------------------------------|:-------|:---------|:--------|
-|60034 |{"60034":[846,620], "60044":[837,641], "60054":[835,649], ...}|450 |40 |(840,660)|
-|60504 |{"60504":[838,667], "60514":[838,667], "60524":[837,669], ...}|100 |38 |(834,651)|
-|... |... |... |.. |... |
-
-* ts_saccades would look like:
-
-|timestamp|positions |duration|
-|:--------|:---------------------------------------|:-------|
-|60484 |{"60484":[836, 669], "60494":[837, 669]}|10 |
-|60594 |{"60594":[833, 613], "60614":[927, 601]}|20 |
-|... |... |... |
-
-* ts_status would look like:
-
-|timestamp|position |type |index|
-|:--------|:---------|:-------|:----|
-|60034 |(846, 620)|Fixation|1 |
-|60044 |(837, 641)|Fixation|1 |
-|... |... |... |. |
-|60464 |(836, 668)|Fixation|1 |
-|60474 |(836, 668)|Fixation|1 |
-|60484 |(836, 669)|Saccade |1 |
-|60494 |(837, 669)|Saccade |1 |
-|60504 |(838, 667)|Fixation|2 |
-|60514 |(838, 667)|Fixation|2 |
-|... |... |... |. |
-|60574 |(825, 629)|Fixation|2 |
-|60584 |(829, 615)|Fixation|2 |
-|60594 |(833, 613)|Saccade |2 |
-|60614 |(927, 601)|Saccade |2 |
-|60624 |(933, 599)|Fixation|3 |
-|60634 |(934, 603)|Fixation|3 |
-|... |... |... |. |
-
-
-!!! note
- [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements), [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements) and [TimeStampedGazeStatus](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeStatus) classes inherit from [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) class.
-
- Read [Timestamped data](../timestamped_data/introduction.md) section to understand all features it provides.
-
-### Generator method
-
-[GazeMovementIdentifier](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier) can be called with a [TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) buffer in argument to generate gaze movement each time one is identified.
-
-``` python
-# Assuming that timestamped gaze positions are provided through data reading
-
-for ts, gaze_movement in gaze_movement_identifier(ts_gaze_positions):
-
- # Fixation identified
- if GazeFeatures.is_fixation(gaze_movement):
-
- # Do something with each fixation
- ...
-
- # Saccade identified
- elif GazeFeatures.is_saccade(gaze_movement):
-
- # Do something with each saccade
- ...
-``` \ No newline at end of file
diff --git a/docs/user_guide/gaze_analysis/gaze_position.md b/docs/user_guide/gaze_analysis/gaze_position.md
deleted file mode 100644
index 48495b4..0000000
--- a/docs/user_guide/gaze_analysis/gaze_position.md
+++ /dev/null
@@ -1,98 +0,0 @@
-Gaze position
-=============
-
-[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines a [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class to handle point coordinates with a precision value.
-
-``` python
-from argaze import GazeFeatures
-
-# Define a basic gaze position
-gaze_position = GazeFeatures.GazePosition((123, 456))
-
-# Define a gaze position with a precision value
-gaze_position = GazeFeatures.GazePosition((789, 765), precision=10)
-
-# Access to gaze position value and precision
-print(f'position: {gaze_position.value}')
-print(f'precision: {gaze_position.precision}')
-
-```
-
-## Validity
-
-[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines also a [UnvalidGazePosition](../../argaze.md/#argaze.GazeFeatures.UnvalidGazePosition) class that inherits from [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) to handle case where no gaze position exists because of any specific device reason.
-
-``` python
-from argaze import GazeFeatures
-
-# Define a basic unvalid gaze position
-gaze_position = GazeFeatures.UnvalidGazePosition()
-
-# Define a basic unvalid gaze position with a message value
-gaze_position = GazeFeatures.UnvalidGazePosition("Something bad happened")
-
-# Access to gaze position validity
-print(f'validity: {gaze_position.valid}')
-
-```
-
-## Distance
-
-[GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class provides a **distance** method to calculate the distance to another gaze position instance.
-
-![Distance](../../img/distance.png)
-
-``` python
-# Distance between A and B positions
-d = gaze_position_A.distance(gaze_position_B)
-```
-
-## Overlapping
-
-[GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class provides an **overlap** method to test if a gaze position overlaps another one considering their precisions.
-
-![Gaze overlapping](../../img/overlapping.png)
-
-``` python
-# Check that A overlaps B
-if gaze_position_A.overlap(gaze_position_B):
-
- # Do something if A overlaps B
- ...
-
-# Check that A overlaps B and B overlaps A
-if gaze_position_A.overlap(gaze_position_B, both=True):
-
- # Do something if A overlaps B AND B overlaps A
- ...
-```
-
-## Timestamped gaze positions
-
-[TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) inherits from [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) class to handle especially gaze positions.
-
-### Import from dataframe
-
-It is possible to load timestamped gaze positions from a [Pandas DataFrame](https://pandas.pydata.org/docs/getting_started/intro_tutorials/01_table_oriented.html#min-tut-01-tableoriented) object.
-
-```python
-import pandas
-
-# Load gaze positions from a CSV file into Panda Dataframe
-dataframe = pandas.read_csv('gaze_positions.csv', delimiter="\t", low_memory=False)
-
-# Convert Panda dataframe into TimestampedGazePositions buffer precising the use of each specific column labels
-ts_gaze_positions = GazeFeatures.TimeStampedGazePositions.from_dataframe(dataframe, timestamp = 'Recording timestamp [ms]', x = 'Gaze point X [px]', y = 'Gaze point Y [px]')
-
-```
-### Iterator
-
-Like [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer), [TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) class provides iterator feature:
-
-```python
-for timestamp, gaze_position in ts_gaze_positions.items():
-
- # Do something with each gaze position
- ...
-
-```
diff --git a/docs/user_guide/gaze_analysis/introduction.md b/docs/user_guide/gaze_analysis/introduction.md
deleted file mode 100644
index bf818ba..0000000
--- a/docs/user_guide/gaze_analysis/introduction.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Gaze analysis
-=============
-
-This section refers to:
-
-* [GazeFeatures](../../argaze.md/#argaze.GazeFeatures)
-* [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) \ No newline at end of file
diff --git a/docs/user_guide/gaze_analysis/scan_path.md b/docs/user_guide/gaze_analysis/scan_path.md
deleted file mode 100644
index 46af28b..0000000
--- a/docs/user_guide/gaze_analysis/scan_path.md
+++ /dev/null
@@ -1,169 +0,0 @@
-Scan path
-=========
-
-[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines classes to handle successive fixations/saccades and analyse their spatial or temporal properties.
-
-## Fixation based scan path
-
-### Definition
-
-The [ScanPath](../../argaze.md/#argaze.GazeFeatures.ScanPath) class is defined as a list of [ScanSteps](../../argaze.md/#argaze.GazeFeatures.ScanStep) which are defined as a fixation and a consecutive saccade.
-
-![Fixation based scan path](../../img/scan_path.png)
-
-As fixations and saccades are identified, the scan path is built by calling respectively [append_fixation](../../argaze.md/#argaze.GazeFeatures.ScanPath.append_fixation) and [append_saccade](../../argaze.md/#argaze.GazeFeatures.ScanPath.append_saccade) methods.
-
-### Analysis
-
-[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines abstract [ScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.ScanPathAnalyzer) classe to let add various analysis algorithms.
-
-Some scan path analysis are available thanks to [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) submodule:
-
-* [K-Coefficient](../../argaze.md/#argaze.GazeAnalysis.KCoefficient)
-* [Nearest Neighbor Index](../../argaze.md/#argaze.GazeAnalysis.NearestNeighborIndex)
-* [Exploit Explore Ratio](../../argaze.md/#argaze.GazeAnalysis.ExploitExploreRatio)
-
-### Example
-
-Here is a sample of code to illustrate how to built a scan path and analyze it:
-
-``` python
-from argaze import GazeFeatures
-from argaze.GazeAnalysis import KCoefficient
-
-# Create a empty scan path
-scan_path = GazeFeatures.ScanPath()
-
-# Create a K coefficient analyzer
-kc_analyzer = KCoefficient.ScanPathAnalyzer()
-
-# Assuming a gaze movement is identified at ts time
-...:
-
- # Fixation identified
- if GazeFeatures.is_fixation(gaze_movement):
-
- # Append fixation to scan path : no step is created
- scan_path.append_fixation(ts, gaze_movement)
-
- # Saccade identified
- elif GazeFeatures.is_saccade(gaze_movement):
-
- # Append saccade to scan path : a new step should be created
- new_step = scan_path.append_saccade(data_ts, gaze_movement)
-
- # Analyse scan path
- if new_step:
-
- K = kc_analyzer.analyze(scan_path)
-
- # Do something with K metric
- ...
-```
-
-## AOI based scan path
-
-### Definition
-
-The [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) class is defined as a list of [AOIScanSteps](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) which are defined as set of consecutives fixations looking at a same Area Of Interest (AOI) and a consecutive saccade.
-
-![AOI based scan path](../../img/aoi_scan_path.png)
-
-As fixations and saccades are identified, the scan path is built by calling respectively [append_fixation](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.append_fixation) and [append_saccade](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.append_saccade) methods.
-
-### Analysis
-
-[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines abstract [AOIScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.AOIScanPathAnalyzer) classe to let add various analysis algorithms.
-
-Some scan path analysis are available thanks to [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) submodule:
-
-* [Transition matrix](../../argaze.md/#argaze.GazeAnalysis.TransitionMatrix)
-* [Entropy](../../argaze.md/#argaze.GazeAnalysis.Entropy)
-* [Lempel-Ziv complexity](../../argaze.md/#argaze.GazeAnalysis.LempelZivComplexity)
-* [N-Gram](../../argaze.md/#argaze.GazeAnalysis.NGram)
-* [K-modified coefficient](../../argaze.md/#argaze.GazeAnalysis.KCoefficient)
-
-### Example
-
-Here is a sample of code to illustrate how to built a AOI scan path and analyze it:
-
-``` python
-from argaze import GazeFeatures
-from argaze.GazeAnalysis import LempelZivComplexity
-
-# Assuming all AOI names are listed
-...
-
-# Create a empty AOI scan path
-aoi_scan_path = GazeFeatures.AOIScanPath(aoi_names)
-
-# Create a Lempel-Ziv complexity analyzer
-lzc_analyzer = LempelZivComplexity.AOIScanPathAnalyzer()
-
-# Assuming a gaze movement is identified at ts time
-...:
-
- # Fixation identified
- if GazeFeatures.is_fixation(gaze_movement):
-
- # Assuming fixation is detected as inside an AOI
- ...
-
- # Append fixation to AOI scan path : a new step should be created
- new_step = aoi_scan_path.append_fixation(ts, gaze_movement, looked_aoi_name)
-
- # Analyse AOI scan path
- if new_step:
-
- LZC = kc_analyzer.analyze(aoi_scan_path)
-
- # Do something with LZC metric
- ...
-
- # Saccade identified
- elif GazeFeatures.is_saccade(gaze_movement):
-
- # Append saccade to scan path : no step is created
- aoi_scan_path.append_saccade(data_ts, gaze_movement)
-
-```
-
-### Advanced
-
-The [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) class provides some advanced features to analyse it.
-
-#### Letter sequence
-
-When a new [AOIScanStep](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) is created, the [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) internally affects a unique letter index related to its AOI to ease pattern analysis.
-Then, the [AOIScanPath letter_sequence](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.letter_sequence) property returns the concatenation of each [AOIScanStep](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) letter.
-The [AOIScanPath get_letter_aoi](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.get_letter_aoi) method helps to get back the AOI related to a letter index.
-
-``` python
-# Assuming the following AOI scan path is built: Foo > Bar > Shu > Foo
-aoi_scan_path = ...
-
-# Letter sequence representation should be: 'ABCA'
-print(aoi_scan_path.letter_sequence)
-
-# Output should be: 'Bar'
-print(aoi_scan_path.get_letter_aoi('B'))
-
-```
-
-#### Transition matrix
-
-When a new [AOIScanStep](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) is created, the [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) internally counts the number of transitions from an AOI to another AOI to ease Markov chain analysis.
-Then, the [AOIScanPath transition_matrix](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.transition_matrix) property returns a [Pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) where indexes are transition departures and columns are transition destinations.
-
-Here is an exemple of transition matrix for the following [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath): Foo > Bar > Shu > Foo > Bar
-
-| |Foo|Bar|Shu|
-|:--|:--|:--|:--|
-|Foo|0 |2 |0 |
-|Bar|0 |0 |1 |
-|Shu|1 |0 |0 |
-
-
-#### Fixations count
-
-The [AOIScanPath fixations_count](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.fixations_count) method returns the total number of fixations in the whole scan path and a dictionary to get the fixations count per AOI.
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md
index 0b45368..f2e84d6 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md
@@ -1,7 +1,7 @@
Loading modules from another package
====================================
-It possible to load GazeMovementIdentifier, ScanPathAnalyzer or AOIScanPathAnalyzer modules from another [python package](https://docs.python.org/3/tutorial/modules.html#packages).
+It is possible to load [GazeMovementIdentifier](../../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier), [ScanPathAnalyzer](../../../argaze.md/#argaze.GazeFeatures.ScanPathAnalyzer), [AOIMatcher](../../../argaze.md/#argaze.GazeFeatures.AOIMatcher) or [AOIScanPathAnalyzer](../../../argaze.md/#argaze.GazeFeatures.AOIScanPathAnalyzer) modules from another [python package](https://docs.python.org/3/tutorial/modules.html#packages).
To do so, simply prepend the package in which to find the module inside the JSON configuration file:
@@ -20,6 +20,12 @@ To do so, simply prepend the package where to find the module into the JSON conf
}
}
...
+ "aoi_matcher": {
+ "my_package.MyAOIMatcherAlgorithm": {
+ "specific_plugin_parameter": 0
+ }
+ }
+ ...
"aoi_scan_path_analyzers": {
"my_package.MyAOIScanPathAnalyzerAlgorithm": {
"specific_plugin_parameter": 0
@@ -28,7 +34,7 @@ To do so, simply prepend the package where to find the module into the JSON conf
}
```
-Then, load your package from the python script where the ArFrame is created.
+Then, load your package from the python script where the [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) is created.
```python
from argaze import ArFeatures
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
index 81efa40..eefeee1 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
@@ -106,7 +106,7 @@ for name, ar_layer in ar_frame.layers.items():
Let's understand the meaning of each returned data.
-### Gaze movement
+### *gaze_movement*
A [GazeMovement](../../../argaze.md/#argaze.GazeFeatures.GazeMovement) once it has been identified by the [ArFrame.gaze_movement_identifier](../../../argaze.md/#argaze.ArFeatures.ArFrame) object from incoming consecutive timestamped gaze positions. If no gaze movement has been identified, it returns an [UnvalidGazeMovement](../../../argaze.md/#argaze.GazeFeatures.UnvalidGazeMovement).
@@ -115,25 +115,25 @@ In that case, the returned gaze movement *finished* flag is false.
Then, the returned gaze movement type can be tested thanks to [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
-### Scan path analysis
+### *scan_path_analysis*
A dictionary with all the last scan path analyses if a new scan step has been added to the [ArFrame.scan_path](../../../argaze.md/#argaze.ArFeatures.ArFrame) object.
-### Layers analysis
+### *layers_analysis*
A dictionary with all layers' AOI scan path analyses if a new AOI scan step has been added to an [ArLayer.aoi_scan_path](../../../argaze.md/#argaze.ArFeatures.ArLayer) object.
-### Execution times
+### *execution_times*
A dictionary with each pipeline step execution time.
-### Exception
+### *exception*
A [python Exception](https://docs.python.org/3/tutorial/errors.html#exceptions) object raised during pipeline execution.
## Setup ArFrame image parameters
-[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a python dictionary.
+[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a python dictionary.
```python
# Assuming ArFrame is loaded
diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_2d_description.md b/docs/user_guide/gaze_analysis_pipeline/aoi_2d_description.md
new file mode 100644
index 0000000..4b7ed69
--- /dev/null
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_2d_description.md
@@ -0,0 +1,57 @@
+Describe 2D AOI
+================
+
+Once [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) is [configured](configuration_and_execution.md), [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described to know what is looked at in the frame.
+
+![2D AOI description](../../img/aoi_2d_description.png)
+
+According to the common computer graphics coordinates convention, all AOI need to be described from a top-left frame corner origin with a coordinate system where:
+
+* +X is pointing to the right,
+* +Y is pointing downward.
+
+!!! warning
+ All AOI spatial values must be given in **pixels**.
+
+### Edit SVG file description
+
+The SVG file format can be exported from most vector graphics editors.
+
+``` xml
+<svg>
+ <path id="GeoSector" d="M860,160L1380,100L1660,400L1380,740L1440,960L920,920L680,800L640,560L860,160Z"/>
+ <rect id="LeftPanel" x="0" y="0" width="350" height="1080"/>
+ <circle id="CircularWidget" cx="1800" cy="120" r="80"/>
+</svg>
+```
+
+Here are common SVG file features needed to describe AOI:
+
+* The *id* attribute indicates the AOI name.
+* *path* elements describe any polygon using only [M, L and Z path instructions](https://www.w3.org/TR/SVG2/paths.html#PathData).
+* *rect*, *circle* and *ellipse* elements respectively describe rectangular, circular and elliptic AOI.
+
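+As a quick sanity check (a side illustration only, not ArGaze's own loader), the AOI names declared by a hypothetical *aoi.svg* file can be listed with Python's standard library:
+
+``` python
+import xml.etree.ElementTree as ET
+
+# Print the name (id attribute) and shape (tag) of each AOI element
+for element in ET.parse('aoi.svg').getroot():
+
+    print(element.get('id'), element.tag)
+```
+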
+### Edit JSON file description
+
+The JSON file format also allows AOI to be described directly.
+
+``` json
+{
+ "GeoSector": [[860, 160], [1380, 100], [1660, 400], [1380, 740], [1440, 960], [920, 920], [680, 800], [640, 560]],
+ "LeftPanel": {
+ "Rectangle": {
+ "x": 0,
+ "y": 0,
+ "width": 350,
+ "height": 1080
+ }
+ },
+ "CircularWidget": {
+ "Circle": {
+ "cx": 1800,
+ "cy": 120,
+ "radius": 80
+ }
+ }
+}
+```
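+
+To get a feel for such a description, here is a hypothetical hit test of a gaze point against the *GeoSector* polygon above, using Matplotlib for illustration only (actual matching is handled by the AOI matcher pipeline step):
+
+``` python
+from matplotlib.path import Path
+
+# The GeoSector polygon vertices in pixels, as listed in the JSON sample above
+geo_sector = Path([(860, 160), (1380, 100), (1660, 400), (1380, 740), (1440, 960), (920, 920), (680, 800), (640, 560)])
+
+# Check whether a gaze point at (1000, 500) falls inside the polygon
+print(geo_sector.contains_point((1000, 500)))
+```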
diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
index ffc72c7..66fa12f 100644
--- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
@@ -1,13 +1,13 @@
-Add AOI analysis
-================
+Enable AOI analysis
+===================
-The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class defines a space where to make matching of gaze movements with AOIs and inside which those matchings need to be analyzed.
+Once [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) is [configured](configuration_and_execution.md) and [2D AOI are described](aoi_2d_description.md), fixations can be matched with AOI to build an AOI scan path before analyzing it.
![Layer](../../img/ar_layer.png)
## Add ArLayer to ArFrame JSON configuration file
-An [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) instance can contains multiples [ArLayers](../../argaze.md/#argaze.ArFeatures.ArLayer).
+The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class defines a space where fixations are matched with AOI and inside which those matchings are analyzed.
Here is an extract from the JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) configuration file with a sample where one layer is added:
@@ -19,10 +19,22 @@ Here is an extract from the JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.Ar
"layers": {
"MyLayer": {
"aoi_scene" : {
- "upper_left_area": [[0, 0], [960, 0], [960, 540], [0, 540]],
- "upper_right_area": [[960, 0], [1920, 0], [1920, 540], [960, 540]],
- "lower_left_area": [[0, 540], [960, 540], [960, 1080], [0, 1080]],
- "lower_right_area": [[960, 540], [1920, 540], [1920, 1080], [960, 1080]]
+ "GeoSector": [[860, 160], [1380, 100], [1660, 400], [1380, 740], [1440, 960], [920, 920], [680, 800], [640, 560]],
+ "LeftPanel": {
+ "Rectangle": {
+ "x": 0,
+ "y": 0,
+ "width": 350,
+ "height": 1080
+ }
+ },
+ "CircularWidget": {
+ "Circle": {
+ "cx": 1800,
+ "cy": 120,
+ "radius": 80
+ }
+ }
},
"aoi_matcher": {
"DeviationCircleCoverage": {
@@ -51,46 +63,50 @@ Here is an extract from the JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.Ar
Now, let's understand the meaning of each JSON entry.
-### "MyLayer"
+### *layers*
-The name of the [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Basically useful for visualisation purpose.
+An [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) instance can contain multiple [ArLayers](../../argaze.md/#argaze.ArFeatures.ArLayer) stored by name.
-### AOI Scene
+### MyLayer
-The [AOIScene](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AOIScene) defines a set of 2D [AreaOfInterest](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) registered by name.
+The name of an [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Basically useful for visualisation purposes.
-![AOI Scene](../../img/ar_layer_aoi_scene.png)
+### *aoi_scene*
-### AOI Matcher
+The set of 2D AOI into the layer as defined at [2D AOI description chapter](aoi_2d_description.md).
-The first [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step aims to make match identified gaze movement with an AOI of the scene.
+![AOI scene](../../img/aoi_2d_description.png)
-![AOI Matcher](../../img/ar_layer_aoi_matcher.png)
+### *aoi_matcher*
-The matching algorithm can be selected by instantiating a particular AOIMatcher [from GazeAnalysis submodule](pipeline_modules/aoi_matchers.md) or [from another python package](advanced_topics/module_loading.md).
+The first [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step aims to match the identified gaze movement with a layer's AOI.
+
+![AOI matcher](../../img/aoi_matcher.png)
+
+The matching algorithm can be selected by instantiating a particular [AOIMatcher from GazeAnalysis submodule](pipeline_modules/aoi_matchers.md) or [from another python package](advanced_topics/module_loading.md).
In the example file, the chosen matching algorithm is the [Deviation Circle Coverage](../../argaze.md/#argaze.GazeAnalysis.DeviationCircleCoverage) which has one specific *coverage_threshold* attribute.
!!! warning "Mandatory"
- JSON *aoi_matcher* entry is mandatory. Otherwise, the AOIScanPath and AOIScanPathAnalyzers steps are disabled.
+ JSON *aoi_matcher* entry is mandatory. Otherwise, the [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) and [AOIScanPathAnalyzers](../../argaze.md/#argaze.GazeFeatures.AOIScanPathAnalyzer) steps are disabled.
-### AOI Scan Path
+### *aoi_scan_path*
The second [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step aims to build an [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) defined as a list of [AOIScanSteps](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) made of a set of successive fixations/saccades onto the same AOI.
-![AOI Scan Path](../../img/ar_layer_aoi_scan_path.png)
+![AOI scan path](../../img/aoi_scan_path.png)
-Once identified gaze movements are matched to AOI, they are automatically appended to the AOIScanPath if required.
+Once gaze movements are matched to AOI, they are automatically appended to the AOIScanPath if required.
The [AOIScanPath.duration_max](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.duration_max) attribute is the duration beyond which older AOI scan steps are removed each time new AOI scan steps are added.
!!! note "Optional"
- JSON *aoi_scan_path* entry is not mandatory. If aoi_scan_path_analyzers entry is not empty, the AOIScanPath step is automatically enabled.
+ JSON *aoi_scan_path* entry is not mandatory. If aoi_scan_path_analyzers entry is not empty, the [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) step is automatically enabled.
-### AOI Scan Path Analyzers
+### *aoi_scan_path_analyzers*
Finally, the last [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step consists in passing the previously built [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) to each loaded [AOIScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.AOIScanPathAnalyzer).
-Each analysis algorithm can be selected by instantiating a particular AOIScanPathAnalyzer [from GazeAnalysis submodule](pipeline_modules/aoi_scan_path_analyzers.md) or [from another python package](advanced_topics/module_loading.md).
+Each analysis algorithm can be selected by instantiating a particular [AOIScanPathAnalyzer from GazeAnalysis submodule](pipeline_modules/aoi_scan_path_analyzers.md) or [from another python package](advanced_topics/module_loading.md).
In the example file, the chosen analysis algorithms are the [Basic](../../argaze.md/#argaze.GazeAnalysis.Basic) module, the [TransitionMatrix](../../argaze.md/#argaze.GazeAnalysis.TransitionMatrix) module and the [NGram](../../argaze.md/#argaze.GazeAnalysis.NGram) module, which has two specific *n_min* and *n_max* attributes.
diff --git a/docs/user_guide/gaze_analysis_pipeline/background.md b/docs/user_guide/gaze_analysis_pipeline/background.md
index a7d59f6..a61abdc 100644
--- a/docs/user_guide/gaze_analysis_pipeline/background.md
+++ b/docs/user_guide/gaze_analysis_pipeline/background.md
@@ -3,7 +3,7 @@ Add a background
Background is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) attribute to display any image behind pipeline visualisation.
-![Background](../../img/ar_frame_background.png)
+![Background](../../img/background.png)
## Load and display ArFrame background
@@ -16,7 +16,7 @@ Here is an extract from the JSON ArFrame configuration file where a background p
"name": "My FullHD screen",
"size": [1920, 1080],
...
- "background": "./joconde.png",
+ "background": "./bosch.png",
...
"image_parameters": {
...
@@ -30,10 +30,10 @@ Here is an extract from the JSON ArFrame configuration file where a background p
Now, let's understand the meaning of each JSON entry.
-### Background
+### *background*
The path to an image file on disk.
-### Background weight
+### *background_weight*
The weight of the background overlay in [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image), between 0 and 1.
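Presumably, this weight acts as an alpha blend between the background and the pipeline drawings; here is a minimal sketch of that interpretation with OpenCV (an assumption, not ArGaze's actual rendering code):

```python
import cv2
import numpy as np

# Hypothetical gray background and black drawing layer
background = np.full((1080, 1920, 3), 128, dtype=np.uint8)
drawings = np.zeros((1080, 1920, 3), dtype=np.uint8)

# A background weight of 0.7: mostly background, drawings still visible
blended = cv2.addWeighted(background, 0.7, drawings, 0.3, 0)
```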
diff --git a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
index 5aca8f3..71d3c33 100644
--- a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
@@ -26,7 +26,7 @@ Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) conf
},
"scan_path_analyzers": {
"Basic": {},
- "ExploitExploreRatio": {
+ "ExploreExploitRatio": {
"short_fixation_duration_threshold": 0
}
}
@@ -44,24 +44,24 @@ ar_frame = ArFeatures.ArFrame.from_json('./configuration.json')
Now, let's understand the meaning of each JSON entry.
-### Name
+### *name*
The name of the [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame). Basically useful for visualisation purposes.
-### Size
+### *size*
The size of the [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) defines the dimension of the rectangular area where gaze positions are projected. Be aware that gaze positions have to be in the same range of values to be projected into it.
!!! warning "Free spatial unit"
Gaze positions can either be integer or float, in pixels, millimeters or whatever you need. The only concern is that all spatial values used in further configurations have to be in the same unit.
-### Gaze Movement Identifier
+### *gaze_movement_identifier*
The first [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step is to identify fixations or saccades from consecutive timestamped gaze positions.
-![Gaze Movement Identifier](../../img/ar_frame_gaze_movement_identifier.png)
+![Gaze movement identifier](../../img/gaze_movement_identifier.png)
-The identification algorithm can be selected by instantiating a particular GazeMovementIdentifier [from GazeAnalysis submodule](pipeline_modules/gaze_movement_identifiers.md) or [from another python package](advanced_topics/module_loading.md).
+The identification algorithm can be selected by instantiating a particular [GazeMovementIdentifier from GazeAnalysis submodule](pipeline_modules/gaze_movement_identifiers.md) or [from another python package](advanced_topics/module_loading.md).
In the example file, the chosen identification algorithm is the [Dispersion Threshold Identification (I-DT)](../../argaze.md/#argaze.GazeAnalysis.DispersionThresholdIdentification) which has two specific *deviation_max_threshold* and *duration_min_threshold* attributes.
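The same identifier can also be instantiated directly from a Python script; here is a minimal sketch, assuming the constructor accepts the two thresholds positionally, as in earlier ArGaze samples:

```python
from argaze.GazeAnalysis import DispersionThresholdIdentification

# 50px max deviation and 200ms duration thresholds (constructor order assumed)
gaze_movement_identifier = DispersionThresholdIdentification.GazeMovementIdentifier(50, 200)
```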
@@ -71,11 +71,11 @@ In the example file, the choosen identification algorithm is the [Dispersion Thr
!!! warning "Mandatory"
JSON *gaze_movement_identifier* entry is mandatory. Otherwise, the ScanPath and ScanPathAnalyzers steps are disabled.
-### Scan Path
+### *scan_path*
The second [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step aims to build a [ScanPath](../../argaze.md/#argaze.GazeFeatures.ScanPath) defined as a list of [ScanSteps](../../argaze.md/#argaze.GazeFeatures.ScanStep) made of a fixation and a consecutive saccade.
-![Scan Path](../../img/ar_frame_scan_path.png)
+![Scan path](../../img/scan_path.png)
Once fixations and saccades are identified, they are automatically appended to the ScanPath if required.
@@ -84,13 +84,13 @@ The [ScanPath.duration_max](../../argaze.md/#argaze.GazeFeatures.ScanPath.durati
!!! note "Optional"
JSON *scan_path* entry is not mandatory. If scan_path_analyzers entry is not empty, the ScanPath step is automatically enabled.
-### Scan Path Analyzers
+### *scan_path_analyzers*
Finally, the last [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step consists in passing the previously built [ScanPath](../../argaze.md/#argaze.GazeFeatures.ScanPath) to each loaded [ScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.ScanPathAnalyzer).
-Each analysis algorithm can be selected by instantiating a particular ScanPathAnalyzer [from GazeAnalysis submodule](pipeline_modules/scan_path_analyzers.md) or [from another python package](advanced_topics/module_loading.md).
+Each analysis algorithm can be selected by instantiating a particular [ScanPathAnalyzer from GazeAnalysis submodule](pipeline_modules/scan_path_analyzers.md) or [from another python package](advanced_topics/module_loading.md).
-In the example file, the choosen analysis algorithms are the [Basic](../../argaze.md/#argaze.GazeAnalysis.Basic) module and the [ExploitExploreRatio](../../argaze.md/#argaze.GazeAnalysis.ExploitExploreRatio) module which has one specific *short_fixation_duration_threshold* attribute.
+In the example file, the chosen analysis algorithms are the [Basic](../../argaze.md/#argaze.GazeAnalysis.Basic) module and the [ExploreExploitRatio](../../argaze.md/#argaze.GazeAnalysis.ExploreExploitRatio) module which has one specific *short_fixation_duration_threshold* attribute.
## Pipeline execution
@@ -107,4 +107,4 @@ Timestamped gaze positions have to be passed one by one to [ArFrame.look](../../
At this point, the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method only processes gaze movement identification and scan path analysis, without any AOI, logging or visualisation support.
- Read the next chapters to learn how to [add AOI analysis](aoi_analysis.md), [log gaze analysis](logging.md) and [visualize pipeline steps](visualisation.md). \ No newline at end of file
+ Read the next chapters to learn how to [describe AOI](aoi_2d_description.md), [add AOI analysis](aoi_analysis.md), [log gaze analysis](logging.md) and [visualize pipeline steps](visualisation.md). \ No newline at end of file
diff --git a/docs/user_guide/gaze_analysis_pipeline/heatmap.md b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
index fe4246e..6d9ad18 100644
--- a/docs/user_guide/gaze_analysis_pipeline/heatmap.md
+++ b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
@@ -3,7 +3,7 @@ Add a heatmap
Heatmap is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step. It is executed at each new gaze position to update the heatmap image.
-![Heatmap](../../img/ar_frame_heatmap.png)
+![Heatmap](../../img/heatmap.png)
## Enable and display ArFrame heatmap
@@ -33,21 +33,21 @@ Here is an extract from the JSON ArFrame configuration file where heatmap is ena
Now, let's understand the meaning of each JSON entry.
-### Size
+### *size*
The heatmap image size in pixels. A higher size implies a higher CPU load.
-### Sigma
+### *sigma*
The gaussian point spread to draw at each gaze position.
![Point spread](../../img/point_spread.png)
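As an aside, such a point spread is a 2D gaussian; here is a minimal NumPy sketch of such a kernel (illustration only, not ArGaze's internal code):

```python
import numpy as np

# A 2D gaussian point spread of given size and sigma, peaking at its center
size, sigma = 31, 5
x = np.arange(size) - size // 2
xx, yy = np.meshgrid(x, x)
point_spread = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
```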
-### Buffer
+### *buffer*
The size of the point spread image buffer (0 means no buffering), used to visualize only the last N gaze positions.
-### Heatmap weight
+### *heatmap_weight*
The weight of the heatmap overlay in [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image), between 0 and 1.
diff --git a/docs/user_guide/gaze_analysis_pipeline/introduction.md b/docs/user_guide/gaze_analysis_pipeline/introduction.md
index 02aa82e..65cc53a 100644
--- a/docs/user_guide/gaze_analysis_pipeline/introduction.md
+++ b/docs/user_guide/gaze_analysis_pipeline/introduction.md
@@ -11,13 +11,14 @@ To build your own gaze analysis pipeline, you need to know:
* [How to edit timestamped gaze positions](timestamped_gaze_positions_edition.md),
* [How to load and execute gaze analysis pipeline](configuration_and_execution.md),
-* [How to add AOI analysis](aoi_analysis.md),
-* [How to visualize ArFrame and ArLayers](visualisation.md),
+* [How to describe AOI](aoi_2d_description.md),
+* [How to enable AOI analysis](aoi_analysis.md),
+* [How to visualize pipeline steps outputs](visualisation.md),
* [How to log resulted gaze analysis](logging.md),
-* [How to make heatmap image](heatmap.md).
+* [How to make heatmap image](heatmap.md),
* [How to add a background image](background.md).
More advanced features are also explained, such as:
-* [How to script gaze analysis pipeline](advanced_topics/scripting.md)
-* [How to load module from another package](advanced_topics/module_loading.md)
+* [How to script gaze analysis pipeline](advanced_topics/scripting.md),
+* [How to load module from another package](advanced_topics/module_loading.md).
diff --git a/docs/user_guide/gaze_analysis_pipeline/logging.md b/docs/user_guide/gaze_analysis_pipeline/logging.md
index 1dea712..055a535 100644
--- a/docs/user_guide/gaze_analysis_pipeline/logging.md
+++ b/docs/user_guide/gaze_analysis_pipeline/logging.md
@@ -7,7 +7,7 @@ Log gaze analysis
[ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) have a log attribute to enable analysis logging.
-Here is an extract from the JSON ArFrame configuration file where logging is enabled for the ArFrame and for one ArLayer:
+Here is an extract from the JSON ArFrame configuration file where logging is enabled for the [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and for one [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer):
```json
{
@@ -91,7 +91,7 @@ Assuming that [ArGaze.GazeAnalysis.NGram](../../argaze.md/#argaze.GazeAnalysis.N
|timestamped|ngrams_count|
|:----------|:-----------|
|5687 |"{3: {}, 4: {}, 5: {}}"|
-|6208 |"{3: {('upper_left_corner', 'lower_left_corner', 'lower_right_corner'): 1}, 4: {}, 5: {}}"|
+|6208 |"{3: {('LeftPanel', 'GeoSector', 'CircularWidget'): 1}, 4: {}, 5: {}}"|
|... |... |
diff --git a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_matchers.md b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_matchers.md
index c8fa63c..61338cc 100644
--- a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_matchers.md
+++ b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_matchers.md
@@ -3,7 +3,7 @@ AOI matchers
ArGaze provides ready-to-use AOI matching algorithms.
-Here are JSON samples to include the chosen module inside [ArLayer configuration](../ar_layer_configuration_and_execution.md) *aoi_matcher* entry.
+Here are JSON samples to include the chosen module inside [ArLayer configuration](../aoi_analysis.md) *aoi_matcher* entry.
## Deviation circle coverage
diff --git a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_scan_path_analyzers.md b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_scan_path_analyzers.md
index 8d02967..ad1832d 100644
--- a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_scan_path_analyzers.md
+++ b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_scan_path_analyzers.md
@@ -3,7 +3,7 @@ AOI scan path analyzers
ArGaze provides ready-to-use AOI scan path analysis algorithms.
-Here are JSON samples to include a chosen module inside [ArLayer configuration](../ar_layer_configuration_and_execution.md) *aoi_scan_path_analyzers* entry.
+Here are JSON samples to include a chosen module inside [ArLayer configuration](../aoi_analysis.md) *aoi_scan_path_analyzers* entry.
## Basic metrics
diff --git a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/scan_path_analyzers.md b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/scan_path_analyzers.md
index afba844..f9f757a 100644
--- a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/scan_path_analyzers.md
+++ b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/scan_path_analyzers.md
@@ -13,15 +13,15 @@ Here are JSON samples to include a chosen module inside [ArFrame configuration](
[See in code reference](../../../argaze.md/#argaze.GazeAnalysis.Basic.ScanPathAnalyzer)
-## Exploit/Explore ratio
+## Explore/Exploit ratio
```json
-"ExploitExploreRatio": {
+"ExploreExploitRatio": {
"short_fixation_duration_threshold": 0
}
```
-[See in code reference](../../../argaze.md/#argaze.GazeAnalysis.ExploitExploreRatio.ScanPathAnalyzer)
+[See in code reference](../../../argaze.md/#argaze.GazeAnalysis.ExploreExploitRatio.ScanPathAnalyzer)
## K coefficient
diff --git a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md b/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
index 93d2a65..2156f3b 100644
--- a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
+++ b/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
@@ -3,7 +3,7 @@ Edit timestamped gaze positions
Whether eye data comes from a file on disk or from a live stream, timestamped gaze positions are required before going further.
-![Timestamped Gaze Positions](../../img/timestamped_gaze_positions.png)
+![Timestamped gaze positions](../../img/timestamped_gaze_positions.png)
## Import gaze positions from CSV file
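Here is a minimal sketch of such an import, assuming a hypothetical *gaze_positions.csv* file with *timestamp*, *x* and *y* columns:
```python
import pandas

from argaze import ArFeatures, GazeFeatures

# Load ArFrame from its JSON configuration file (path is hypothetical)
ar_frame = ArFeatures.ArFrame.from_json('./configuration.json')

# Read gaze positions from CSV file (file name and column names are assumptions)
dataframe = pandas.read_csv('./gaze_positions.csv')

for timestamp, x, y in zip(dataframe.timestamp, dataframe.x, dataframe.y):

    # Edit a gaze position from each row and pass it to the pipeline
    ar_frame.look(timestamp, GazeFeatures.GazePosition((x, y)))
```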
diff --git a/docs/user_guide/gaze_analysis_pipeline/visualisation.md b/docs/user_guide/gaze_analysis_pipeline/visualisation.md
index 99f0259..5f06fac 100644
--- a/docs/user_guide/gaze_analysis_pipeline/visualisation.md
+++ b/docs/user_guide/gaze_analysis_pipeline/visualisation.md
@@ -3,7 +3,7 @@ Visualize pipeline steps
Visualisation is not a pipeline step, but the output of each [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step can be drawn in real time or afterward, depending on the application's purpose.
-![ArFrame visualisation](../../img/ar_frame_visualisation.png)
+![ArFrame visualisation](../../img/visualisation.png)
## Add image parameters to ArFrame JSON configuration file
@@ -17,6 +17,22 @@ Here is an extract from the JSON ArFrame configuration file with a sample where
"size": [1920, 1080],
...
"image_parameters": {
+ "draw_gaze_positions": {
+ "color": [0, 255, 255],
+ "size": 2
+ },
+ "draw_fixations": {
+ "deviation_circle_color": [255, 255, 255],
+ "duration_border_color": [127, 0, 127],
+ "duration_factor": 1e-2,
+ "draw_positions": {
+ "position_color": [0, 255, 255],
+ "line_color": [0, 0, 0]
+ }
+ },
+ "draw_saccades": {
+ "line_color": [255, 0, 255]
+ },
"draw_scan_path": {
"draw_fixations": {
"deviation_circle_color": [255, 0, 255],
@@ -25,8 +41,7 @@ Here is an extract from the JSON ArFrame configuration file with a sample where
},
"draw_saccades": {
"line_color": [255, 0, 255]
- },
- "deepness": 0
+ }
},
"draw_layers": {
"MyLayer": {
@@ -38,11 +53,11 @@ Here is an extract from the JSON ArFrame configuration file with a sample where
},
"draw_aoi_matching": {
"draw_matched_fixation": {
- "deviation_circle_color": [255, 255, 255]
- },
- "draw_matched_fixation_positions": {
- "position_color": [0, 255, 255],
- "line_color": [0, 0, 0]
+ "deviation_circle_color": [255, 255, 255],
+ "draw_positions": {
+ "position_color": [0, 255, 0],
+ "line_color": [0, 0, 0]
+ }
},
"draw_matched_region": {
"color": [0, 255, 0],
@@ -56,10 +71,6 @@ Here is an extract from the JSON ArFrame configuration file with a sample where
"looked_aoi_name_offset": [0, -10]
}
}
- },
- "draw_gaze_positions": {
- "color": [0, 255, 255],
- "size": 2
}
}
}
@@ -81,7 +92,7 @@ import cv2
# Assuming that timestamped gaze positions have been processed by ArFrame.look method
...
-# Export heatmap image
+# Export ArFrame image
cv2.imwrite('./ar_frame.png', ar_frame.image())
```
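Beyond exporting to disk, the same image can be displayed live. Here is a minimal sketch using OpenCV's HighGUI, where the window title is arbitrary:
```python
import cv2

# Assuming that ar_frame is an ArFrame instance processed elsewhere
...

# Display the ArFrame image into an OpenCV window until a key is pressed
cv2.imshow('ArFrame', ar_frame.image())
cv2.waitKey(0)
cv2.destroyAllWindows()
```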
diff --git a/docs/user_guide/timestamped_data/data_synchronisation.md b/docs/user_guide/timestamped_data/data_synchronisation.md
deleted file mode 100644
index 5190eab..0000000
--- a/docs/user_guide/timestamped_data/data_synchronisation.md
+++ /dev/null
@@ -1,106 +0,0 @@
-Data synchronisation
-====================
-
-Recorded data need to be synchronised so they can be linked before further processing.
-
-The [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) class provides various methods to help with this task.
-
-## Pop last before
-
-![Pop last before](../../img/pop_last_before.png)
-
-The code below shows how to use the [pop_last_before](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer.pop_last_before) method to synchronise two timestamped data buffers with different timestamps:
-
-``` python
-from argaze import DataStructures
-
-# Assuming A_data_record and B_data_record are TimeStampedBuffer instances with different timestamps
-
-for A_ts, A_data in A_data_record.items():
-
- try:
-
- # Get nearest B data before current A data and remove all B data before (including the returned one)
- B_ts, B_data = B_data_record.pop_last_before(A_ts)
-
- # No data stored before A_ts timestamp
- except KeyError:
-
- pass
-
-```
-
-## Pop last until
-
-![Pop last until](../../img/pop_last_until.png)
-
-The code below shows how to use the [pop_last_until](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer.pop_last_until) method to synchronise two timestamped data buffers with different timestamps:
-
-``` python
-from argaze import DataStructures
-
-# Assuming A_data_record and B_data_record are TimeStampedBuffer instances with different timestamps
-
-for A_ts, A_data in A_data_record.items():
-
- try:
-
- # Get nearest B data after current A data and remove all B data before
- B_ts, B_data = B_data_record.pop_last_until(A_ts)
-
- # No data stored until A_ts timestamp
- except KeyError:
-
- pass
-
-```
-
-## Get last before
-
-![Get last before](../../img/get_last_before.png)
-
-The code below shows how to use the [get_last_before](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer.get_last_before) method to synchronise two timestamped data buffers with different timestamps:
-
-``` python
-from argaze import DataStructures
-
-# Assuming A_data_record and B_data_record are TimeStampedBuffer instances with different timestamps
-
-for A_ts, A_data in A_data_record.items():
-
- try:
-
- # Get nearest B data before current A data
- B_ts, B_data = B_data_record.get_last_before(A_ts)
-
- # No data stored before A_ts timestamp
- except KeyError:
-
- pass
-
-```
-
-## Get last until
-
-![Get last until](../../img/get_last_until.png)
-
-The code below shows how to use the [get_last_until](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer.get_last_until) method to synchronise two timestamped data buffers with different timestamps:
-
-``` python
-from argaze import DataStructures
-
-# Assuming A_data_record and B_data_record are TimeStampedBuffer instances with different timestamps
-
-for A_ts, A_data in A_data_record.items():
-
- try:
-
- # Get nearest B data after current A data
- B_ts, B_data = B_data_record.get_last_until(A_ts)
-
- # No data stored until A_ts timestamp
- except KeyError:
-
- pass
-
-```
diff --git a/docs/user_guide/timestamped_data/introduction.md b/docs/user_guide/timestamped_data/introduction.md
deleted file mode 100644
index 974e2be..0000000
--- a/docs/user_guide/timestamped_data/introduction.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Timestamped data
-================
-
-Working with wearable eye tracker devices implies handling various timestamped data like gaze positions, pupil diameters, fixations, saccades, ...
-
-This section mainly refers to the [DataStructures.TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) class.
diff --git a/docs/user_guide/timestamped_data/ordered_dictionary.md b/docs/user_guide/timestamped_data/ordered_dictionary.md
deleted file mode 100644
index 64dd899..0000000
--- a/docs/user_guide/timestamped_data/ordered_dictionary.md
+++ /dev/null
@@ -1,19 +0,0 @@
-Ordered dictionary
-==================
-
-The [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) class inherits from [OrderedDict](https://docs.python.org/3/library/collections.html#collections.OrderedDict), as data are de facto ordered by time.
-
-Any data type can be stored using int or float keys as timestamps.
-
-```python
-from argaze import DataStructures
-
-# Create a timestamped data buffer
-ts_data = DataStructures.TimeStampedBuffer()
-
-# Store any data type using numeric keys
-ts_data[0] = 123
-ts_data[0.1] = "message"
-ts_data[0.23] = {"key": "value"}
-...
-```
diff --git a/docs/user_guide/timestamped_data/pandas_dataframe_conversion.md b/docs/user_guide/timestamped_data/pandas_dataframe_conversion.md
deleted file mode 100644
index 7614e73..0000000
--- a/docs/user_guide/timestamped_data/pandas_dataframe_conversion.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: Pandas DataFrame conversion
----
-
-Pandas DataFrame conversion
-===========================
-
-A [Pandas DataFrame](https://pandas.pydata.org/docs/getting_started/intro_tutorials/01_table_oriented.html#min-tut-01-tableoriented) is a Python data structure allowing powerful table processing.
-
-## Export as dataframe
-
-A [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) instance can be converted into a dataframe provided that data values are stored as dictionaries.
-
-```python
-from argaze import DataStructures
-
-# Create a timestamped data buffer
-ts_data = DataStructures.TimeStampedBuffer()
-
-# Store various data as dictionary
-ts_data[10] = {"A_key": 0, "B_key": 0.123}
-ts_data[20] = {"A_key": 4, "B_key": 0.567}
-ts_data[30] = {"A_key": 8, "B_key": 0.901}
-...
-
-# Convert timestamped data buffer into dataframe
-ts_buffer_dataframe = ts_data.as_dataframe()
-```
-
-ts_buffer_dataframe would look like:
-
-|timestamp|A_key|B_key|
-|:--------|:----|:----|
-|10 |0 |0.123|
-|20 |4 |0.567|
-|30 |8 |0.901|
-|... |... |... |
-
-## Import from dataframe
-
-Conversely, a [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) instance can be created from a dataframe, in which case each dataframe column label becomes a key of the data value dictionary. Notice that the column containing timestamp values has to be called 'timestamp'.
diff --git a/docs/user_guide/timestamped_data/saving_and_loading.md b/docs/user_guide/timestamped_data/saving_and_loading.md
deleted file mode 100644
index 4e6a094..0000000
--- a/docs/user_guide/timestamped_data/saving_and_loading.md
+++ /dev/null
@@ -1,14 +0,0 @@
-Saving and loading
-==================
-
-A [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) instance can be saved to and loaded from JSON file format.
-
-```python
-from argaze import DataStructures
-
-# Assuming ts_data is a TimeStampedBuffer instance
-
-# Save
-ts_data.to_json('./data.json')
-
-# Load
-ts_data = DataStructures.TimeStampedBuffer.from_json('./data.json')
-
-```
diff --git a/docs/user_guide/utils/ready-made_scripts.md b/docs/user_guide/utils/ready-made_scripts.md
index bc8b277..55258e9 100644
--- a/docs/user_guide/utils/ready-made_scripts.md
+++ b/docs/user_guide/utils/ready-made_scripts.md
Collection of command-line scripts providing useful features.
!!! note
*Use -h option to get command arguments documentation.*
-## ArUco scene exporter
+## ArUco markers group exporter
-Load a MOVIE with ArUco markers inside and select image into it, detect ArUco markers belonging to DICT_APRILTAG_16h5 dictionary with 5cm size into the selected image thanks to given OPTIC_PARAMETERS and DETECTOR_PARAMETERS then, export detected ArUco markers scene as .obj file into an *./src/argaze/utils/_export/scenes* folder.
+Load a MOVIE and an ArUcoCamera CONFIGURATION to detect ArUco markers inside a selected movie frame, then export the detected ArUco markers group as a .obj file into an OUTPUT folder.
```shell
-python ./src/argaze/utils/aruco_markers_scene_export.py MOVIE DICT_APRILTAG_16h5 5 OPTIC_PARAMETERS DETECTOR_PARAMETERS -o ./src/argaze/utils/_export/scenes
+python ./src/argaze/utils/aruco_markers_group_export.py MOVIE CONFIGURATION -o OUTPUT
``` \ No newline at end of file