From 23fa1a7835b3c7cfd976b1d160878289b1f0657c Mon Sep 17 00:00:00 2001
From: Theo De La Hogue
Date: Sat, 23 Sep 2023 07:22:23 +0200
Subject: Fixing code annotation. Removing useless documentation section.
 Fixing documentation cross reference.

---
 .../ar_environment/environment_exploitation.md     |  36 -----
 .../user_guide/ar_environment/environment_setup.md |  77 ----------
 docs/user_guide/ar_environment/introduction.md     |   6 -
 .../areas_of_interest/aoi_scene_description.md     |  83 ----------
 .../areas_of_interest/aoi_scene_projection.md      |  22 ---
 docs/user_guide/areas_of_interest/introduction.md  |   2 +-
 .../aruco_markers/dictionary_selection.md          |  17 ---
 docs/user_guide/aruco_markers/introduction.md      |  15 --
 docs/user_guide/aruco_markers/markers_creation.md  |  17 ---
 docs/user_guide/aruco_markers/markers_detection.md |  47 ------
 .../aruco_markers/markers_pose_estimation.md       |  20 ---
 .../aruco_markers/markers_scene_description.md     | 146 ------------------
 .../optic_parameters_calibration.md                |   8 +-
 .../configuration_and_execution.md                 |   6 +-
 .../aruco_markers_pipeline/introduction.md         |  12 +-
 docs/user_guide/gaze_analysis/gaze_movement.md     | 163 --------------------
 docs/user_guide/gaze_analysis/gaze_position.md     |  98 ------------
 docs/user_guide/gaze_analysis/introduction.md      |   7 -
 docs/user_guide/gaze_analysis/scan_path.md         | 169 ---------------------
 .../advanced_topics/scripting.md                   |   2 +-
 .../gaze_analysis_pipeline/aoi_analysis.md         |   4 +-
 .../gaze_analysis_pipeline/introduction.md         |   2 +-
 .../pipeline_modules/aoi_matchers.md               |   2 +-
 .../pipeline_modules/aoi_scan_path_analyzers.md    |   2 +-
 docs/user_guide/gaze_features/gaze_movement.md     | 163 ++++++++++++++++++++
 docs/user_guide/gaze_features/gaze_position.md     |  98 ++++++++++++
 docs/user_guide/gaze_features/introduction.md      |   7 +
 docs/user_guide/gaze_features/scan_path.md         | 169 +++++++++++++++++++++
 28 files changed, 457 insertions(+), 943 deletions(-)
 delete mode 100644 docs/user_guide/ar_environment/environment_exploitation.md
 delete mode 100644 docs/user_guide/ar_environment/environment_setup.md
 delete mode 100644 docs/user_guide/ar_environment/introduction.md
 delete mode 100644 docs/user_guide/areas_of_interest/aoi_scene_description.md
 delete mode 100644 docs/user_guide/areas_of_interest/aoi_scene_projection.md
 delete mode 100644 docs/user_guide/aruco_markers/dictionary_selection.md
 delete mode 100644 docs/user_guide/aruco_markers/introduction.md
 delete mode 100644 docs/user_guide/aruco_markers/markers_creation.md
 delete mode 100644 docs/user_guide/aruco_markers/markers_detection.md
 delete mode 100644 docs/user_guide/aruco_markers/markers_pose_estimation.md
 delete mode 100644 docs/user_guide/aruco_markers/markers_scene_description.md
 delete mode 100644 docs/user_guide/gaze_analysis/gaze_movement.md
 delete mode 100644 docs/user_guide/gaze_analysis/gaze_position.md
 delete mode 100644 docs/user_guide/gaze_analysis/introduction.md
 delete mode 100644 docs/user_guide/gaze_analysis/scan_path.md
 create mode 100644 docs/user_guide/gaze_features/gaze_movement.md
 create mode 100644 docs/user_guide/gaze_features/gaze_position.md
 create mode 100644 docs/user_guide/gaze_features/introduction.md
 create mode 100644 docs/user_guide/gaze_features/scan_path.md

diff --git a/docs/user_guide/ar_environment/environment_exploitation.md b/docs/user_guide/ar_environment/environment_exploitation.md
deleted file mode 100644
index 9e4b236..0000000
--- a/docs/user_guide/ar_environment/environment_exploitation.md
+++ /dev/null
@@ -1,36 +0,0 @@
-Environment exploitation
-========================

Once loaded, [ArCamera](../../argaze.md/#argaze.ArFeatures.ArCamera) assets can be exploited as illustrated below:

```python
# Access the AR environment ArUco detector, passing it an image where to detect ArUco markers
ar_camera.aruco_detector.detect_markers(image)

# Access an AR environment scene
my_first_scene = ar_camera.scenes['my first AR scene']

try:

    # Try to estimate AR scene pose from detected markers
    tvec, rmat, consistent_markers = my_first_scene.estimate_pose(ar_camera.aruco_detector.detected_markers)

    # Project AR scene into camera image according to estimated pose
    # Optional visual_hfov argument is set to 160° to clip AOI scene according to a cone of vision
    aoi2D_scene = my_first_scene.project(tvec, rmat, visual_hfov=160)

    # Draw estimated AR scene axis
    my_first_scene.draw_axis(image)

    # Draw AOI2D scene projection
    aoi2D_scene.draw(image)

    # Do something with AOI2D scene projection
    ...

# Catch exceptions raised by estimate_pose and project methods
except (ArFeatures.PoseEstimationFailed, ArFeatures.SceneProjectionFailed) as e:

    print(e)

```

diff --git a/docs/user_guide/ar_environment/environment_setup.md b/docs/user_guide/ar_environment/environment_setup.md
deleted file mode 100644
index 1f26d26..0000000
--- a/docs/user_guide/ar_environment/environment_setup.md
+++ /dev/null
@@ -1,77 +0,0 @@
-Environment Setup
-=================

An [ArCamera](../../argaze.md/#argaze.ArFeatures.ArCamera) setup is loaded from a JSON file.

Each [ArCamera](../../argaze.md/#argaze.ArFeatures.ArCamera) defines a unique [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) dedicated to the detection of markers from a specific [ArUcoMarkersDictionary](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersDictionary) and with a given size. However, it is possible to load multiple [ArScene](../../argaze.md/#argaze.ArFeatures.ArScene) into the same [ArCamera](../../argaze.md/#argaze.ArFeatures.ArCamera).

Here is a JSON environment file example; the mentioned .obj files are assumed to be located relative to the environment file on disk.
- -``` -{ - "name": "my AR environment", - "aruco_detector": { - "dictionary": { - "name": "DICT_APRILTAG_16h5" - } - "marker_size": 5, - "optic_parameters": { - "rms": 0.6, - "dimensions": [ - 1920, - 1080 - ], - "K": [ - [ - 1135, - 0.0, - 956 - ], - [ - 0.0, - 1135, - 560 - ], - [ - 0.0, - 0.0, - 1.0 - ] - ], - "D": [ - 0.01655492265003404, - 0.1985524264972037, - 0.002129965902489484, - -0.0019528582922179365, - -0.5792910353639452 - ] - }, - "parameters": { - "cornerRefinementMethod": 3, - "aprilTagQuadSigma": 2, - "aprilTagDeglitch": 1 - } - }, - "scenes": { - "my first AR scene" : { - "aruco_markers_group": "./first_scene/markers.obj", - "aoi_scene": "./first_scene/aoi.obj", - "angle_tolerance": 15.0, - "distance_tolerance": 2.54 - }, - "my second AR scene" : { - "aruco_markers_group": "./second_scene/markers.obj", - "aoi_scene": "./second_scene/aoi.obj", - "angle_tolerance": 15.0, - "distance_tolerance": 2.54 - } - } -} -``` - -```python -from argaze import ArFeatures - -# Load AR environment -ar_camera = ArFeatures.ArCamera.from_json('./environment.json') -``` diff --git a/docs/user_guide/ar_environment/introduction.md b/docs/user_guide/ar_environment/introduction.md deleted file mode 100644 index b19383b..0000000 --- a/docs/user_guide/ar_environment/introduction.md +++ /dev/null @@ -1,6 +0,0 @@ -AR environment setup -==================== - -ArGaze toolkit eases ArUco and AOI management in a single AR environment setup. - -This section refers to [ArFeatures](../../argaze.md/#argaze.ArFeatures). diff --git a/docs/user_guide/areas_of_interest/aoi_scene_description.md b/docs/user_guide/areas_of_interest/aoi_scene_description.md deleted file mode 100644 index b96c1e0..0000000 --- a/docs/user_guide/areas_of_interest/aoi_scene_description.md +++ /dev/null @@ -1,83 +0,0 @@ ---- -title: AOI scene description ---- - -AOI scene description -===================== - -## 2D description - -An AOI scene can be described in 2D dimension using an [AOI2DScene](../../argaze.md/#argaze.AreaOfInterest.AOI2DScene) from a dictionary description. - -``` dict -{ - "tracking": [[672.0, 54.0], [1632.0, 54.0], [1632.0, 540.0], [672.0, 540.0]], - "system": [[0.0, 54.0], [672.0, 54.0], [672.0, 540.0], [0.0, 540.0]], - "communications": [[0.0, 594.0], [576.0, 594.0], [576.0, 1080.0], [0.0, 1080.0]], - "resources": [[576.0, 594.0], [1632.0, 594.0], [1632.0, 1080.0], [576.0, 1080.0]] -} -... -``` - -Here is a sample of code to show the loading of an [AOI2DScene](../../argaze.md/#argaze.AreaOfInterest.AOI2DScene) from a dictionary description: - - -``` python -from argaze.AreaOfInterest import AOI2DScene - -# Load an AOI2D scene from dictionary -aoi_2d_scene = AOI2DScene.AOI2DScene(aoi_scene_dictionary) -``` - -## 3D description - -An AOI scene can be described in 3D dimension using an [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene) built from a 3D model with all AOI as 3D planes and loaded through OBJ file format. -Notice that plane normals are not needed and planes are not necessary 4 vertices shapes. 
- -``` obj -o PIC_ND -v 6.513238 -27.113548 -25.163900 -v 22.994461 -27.310783 -24.552130 -v 6.718690 -6.467261 -26.482569 -v 23.252594 -6.592890 -25.873484 -f 1 2 4 3 -o PIC_ND_Aircraft -v 6.994747 -21.286463 -24.727146 -v 22.740919 -21.406120 -24.147078 -v 7.086208 -12.096219 -25.314123 -v 22.832380 -12.215876 -24.734055 -f 5 6 8 7 -o PIC_ND_Wind -v 7.086199 -11.769333 -25.335127 -v 12.081032 -11.807289 -25.151123 -v 7.115211 -8.854101 -25.521320 -v 12.110044 -8.892057 -25.337317 -f 9 10 12 11 -o PIC_ND_Waypoint -v 17.774197 -11.819057 -24.943428 -v 22.769030 -11.857013 -24.759424 -v 17.803209 -8.903825 -25.129622 -v 22.798042 -8.941781 -24.945618 -f 13 14 16 15 -... -o Thrust_Lever -v 19.046124 15.523837 4.774072 -v 18.997263 -0.967944 5.701000 -v 18.988382 15.923470 -13.243046 -v 18.921808 -0.417994 -17.869610 -v 19.032232 19.241346 -3.040264 -v 19.020988 6.392717 5.872663 -v 18.945322 6.876906 -17.699480 -s off -f 185 190 186 188 191 187 189 -... -``` - -Here is a sample of code to show the loading of an [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene) from an OBJ file description: - -``` python -from argaze.AreaOfInterest import AOI3DScene - -# Load an AOI3D scene from OBJ file -aoi_3d_scene = AOI3DScene.AOI3DScene.from_obj('./aoi_scene.obj') -``` diff --git a/docs/user_guide/areas_of_interest/aoi_scene_projection.md b/docs/user_guide/areas_of_interest/aoi_scene_projection.md deleted file mode 100644 index f348c6c..0000000 --- a/docs/user_guide/areas_of_interest/aoi_scene_projection.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -title: AOI scene projection ---- - -AOI scene projection -==================== - -An [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene) can be rotated and translated according to a pose estimation before to project it onto camera image as an [AOI2DScene](../../argaze.md/#argaze.AreaOfInterest.AOI2DScene). - -![AOI projection](../../img/aoi_projection.png) - -``` python -... - -# Assuming pose estimation is done (tvec and rmat) - -# Project AOI 3D scene according pose estimation and optic parameters -aoi2D_scene = aoi3D_scene.project(tvec, rmat, optic_parameters.K) - -# Draw AOI 2D scene -aoi2D_scene.draw(image) -``` diff --git a/docs/user_guide/areas_of_interest/introduction.md b/docs/user_guide/areas_of_interest/introduction.md index 6f74dd4..9467963 100644 --- a/docs/user_guide/areas_of_interest/introduction.md +++ b/docs/user_guide/areas_of_interest/introduction.md @@ -1,7 +1,7 @@ About Areas Of Interest (AOI) ============================= -The [AreaOfInterest submodule](../../argaze.md/#argaze.AreaOfInterest) allows to deal with AOI in a AR environment through a set of high level classes: +The [AreaOfInterest submodule](../../argaze.md/#argaze.AreaOfInterest) allows to deal with AOI through a set of high level classes: * [AOIFeatures](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures) * [AOI3DScene](../../argaze.md/#argaze.AreaOfInterest.AOI3DScene) diff --git a/docs/user_guide/aruco_markers/dictionary_selection.md b/docs/user_guide/aruco_markers/dictionary_selection.md deleted file mode 100644 index b9ba510..0000000 --- a/docs/user_guide/aruco_markers/dictionary_selection.md +++ /dev/null @@ -1,17 +0,0 @@ -Dictionary selection -==================== - -ArUco markers always belongs to a set of markers called ArUco markers dictionary. 
![ArUco dictionaries](../../img/aruco_dictionaries.png)

Many ArUco dictionaries exist, with properties concerning the format, the number of markers or the difference between markers to avoid tracking errors.

Here is the documentation [about ArUco markers dictionaries](https://docs.opencv.org/3.4/d9/d6a/group__aruco.html#gac84398a9ed9dd01306592dd616c2c975).

``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary

# Create a dictionary of specific April tags
aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
```

diff --git a/docs/user_guide/aruco_markers/introduction.md b/docs/user_guide/aruco_markers/introduction.md
deleted file mode 100644
index 9d78de0..0000000
--- a/docs/user_guide/aruco_markers/introduction.md
+++ /dev/null
@@ -1,15 +0,0 @@
-About ArUco markers
-===================

![OpenCV ArUco markers](https://pyimagesearch.com/wp-content/uploads/2020/12/aruco_generate_tags_header.png)

The OpenCV library provides a module to detect fiducial markers in a picture and estimate their pose (cf [OpenCV ArUco tutorial page](https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html)).

The ArGaze [ArUcoMarkers submodule](../../argaze.md/#argaze.ArUcoMarkers) eases markers creation, camera calibration, markers detection and 3D scene pose estimation through a set of high-level classes:

* [ArUcoMarkersDictionary](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersDictionary)
* [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker)
* [ArUcoBoard](../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard)
* [ArUcoOpticCalibrator](../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator)
* [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector)
* [ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup)
\ No newline at end of file

diff --git a/docs/user_guide/aruco_markers/markers_creation.md b/docs/user_guide/aruco_markers/markers_creation.md
deleted file mode 100644
index eab9890..0000000
--- a/docs/user_guide/aruco_markers/markers_creation.md
+++ /dev/null
@@ -1,17 +0,0 @@
-Markers creation
-================

The creation of [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) from a dictionary is illustrated in the code below:

``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary

# Create a dictionary of specific April tags
aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')

# Export marker n°5 as a 3.5 cm picture with 300 dpi resolution
aruco_dictionary.create_marker(5, 3.5).save('./markers/', 300)

# Export all dictionary markers as 3.5 cm pictures with 300 dpi resolution
aruco_dictionary.save('./markers/', 3.5, 300)
```
\ No newline at end of file

diff --git a/docs/user_guide/aruco_markers/markers_detection.md b/docs/user_guide/aruco_markers/markers_detection.md
deleted file mode 100644
index af2fb4f..0000000
--- a/docs/user_guide/aruco_markers/markers_detection.md
+++ /dev/null
@@ -1,47 +0,0 @@
-Markers detection
-=================

![Detected markers](../../img/detected_markers.png)

Firstly, the [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) needs to know the expected dictionary and size (in centimeters) of the [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) it has to detect.
Notice that extra parameters are passed to the detector: see the [OpenCV ArUco markers detection parameters documentation](https://docs.opencv.org/4.x/d1/dcd/structcv_1_1aruco_1_1DetectorParameters.html) to know more.

``` python
from argaze.ArUcoMarkers import ArUcoDetector, ArUcoOpticCalibrator

# Assuming camera calibration data are loaded

# Loading extra detector parameters
extra_parameters = ArUcoDetector.DetectorParameters.from_json('./detector_parameters.json')

# Create ArUco detector to track DICT_APRILTAG_16h5 5 cm length markers
aruco_detector = ArUcoDetector.ArUcoDetector(optic_parameters=optic_parameters, dictionary='DICT_APRILTAG_16h5', marker_size=5, parameters=extra_parameters)
```

Here is a [DetectorParameters](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.DetectorParameters) JSON file example:

```
{
    "cornerRefinementMethod": 1,
    "aprilTagQuadSigma": 2,
    "aprilTagDeglitch": 1
}
```

The [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) processes an image to detect markers and allows drawing detection results onto it:

``` python
# Detect markers in image and draw them
aruco_detector.detect_markers(image)
aruco_detector.draw_detected_markers(image)

# Get corner positions in the image for each detected marker
for marker_id, marker in aruco_detector.detected_markers.items():

    print(f'marker {marker_id} corners: ', marker.corners)

    # Do something with each detected marker's corners
    ...

```

diff --git a/docs/user_guide/aruco_markers/markers_pose_estimation.md b/docs/user_guide/aruco_markers/markers_pose_estimation.md
deleted file mode 100644
index 487c220..0000000
--- a/docs/user_guide/aruco_markers/markers_pose_estimation.md
+++ /dev/null
@@ -1,20 +0,0 @@
-Markers pose estimation
-=======================

After [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) detection, it is possible to estimate [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) pose in the camera axis.

![Pose estimation](../../img/pose_estimation.png)

``` python
# Estimate markers pose
aruco_detector.estimate_markers_pose()

# Get pose estimation related to each detected marker
for marker_id, marker in aruco_detector.detected_markers.items():

    print(f'marker {marker_id} translation: ', marker.translation)
    print(f'marker {marker_id} rotation: ', marker.rotation)

    # Do something with each marker pose estimation
    ...
```
\ No newline at end of file

diff --git a/docs/user_guide/aruco_markers/markers_scene_description.md b/docs/user_guide/aruco_markers/markers_scene_description.md
deleted file mode 100644
index c6dbf31..0000000
--- a/docs/user_guide/aruco_markers/markers_scene_description.md
+++ /dev/null
@@ -1,146 +0,0 @@
-Markers scene description
-=========================

The ArGaze toolkit provides the [ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) class to describe where [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) are placed in a 3D model.

![ArUco scene](../../img/aruco_markers_group.png)

[ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) is useful to:

* filter markers that belong to this predefined scene,
* check the consistency of detected markers according to the place where each marker is expected to be,
* estimate the pose of the scene from the pose of detected markers.
## Scene creation

### from OBJ

ArUco scene description uses the common OBJ file format that can be exported from most 3D editors. Notice that plane normals (vn) need to be exported.

``` obj
o DICT_APRILTAG_16h5#0_Marker
v -3.004536 0.022876 2.995370
v 2.995335 -0.015498 3.004618
v -2.995335 0.015498 -3.004618
v 3.004536 -0.022876 -2.995370
vn 0.0064 1.0000 -0.0012
s off
f 1//1 2//1 4//1 3//1
o DICT_APRILTAG_16h5#1_Marker
v -33.799068 46.450645 -32.200436
v -27.852505 47.243549 -32.102116
v -34.593925 52.396473 -32.076626
v -28.647360 53.189377 -31.978306
vn -0.0135 -0.0226 0.9997
s off
f 5//2 6//2 8//2 7//2
...
```

Here is a sample of code to show the loading of an [ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) OBJ file description:

``` python
from argaze.ArUcoMarkers import ArUcoMarkersGroup

# Create an ArUco scene from an OBJ file description
aruco_markers_group = ArUcoMarkersGroup.ArUcoMarkersGroup.from_obj('./markers.obj')

# Print loaded marker places
for place_id, place in aruco_markers_group.places.items():

    print(f'place {place_id} for marker: ', place.marker.identifier)
    print(f'place {place_id} translation: ', place.translation)
    print(f'place {place_id} rotation: ', place.rotation)
```

### from JSON

An [ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) description can also be written in a JSON file format.

``` json
{
    "dictionary": "DICT_ARUCO_ORIGINAL",
    "marker_size": 1,
    "places": {
        "0": {
            "translation": [0, 0, 0],
            "rotation": [0, 0, 0]
        },
        "1": {
            "translation": [10, 10, 0],
            "rotation": [0, 0, 0]
        },
        "2": {
            "translation": [0, 10, 0],
            "rotation": [0, 0, 0]
        }
    }
}
```

### from detected markers

Here is a more advanced usage where an ArUco scene is built from markers detected in an image:

``` python
from argaze.ArUcoMarkers import ArUcoMarkersGroup

# Assuming markers have been detected and their pose estimated thanks to ArUcoDetector
...

# Build ArUco scene from detected markers
aruco_markers_group = ArUcoMarkersGroup.ArUcoMarkersGroup(aruco_detector.marker_size, aruco_detector.dictionary, aruco_detector.detected_markers)
```

## Markers filtering

Considering markers are detected, here is how to filter them to keep only those which belong to the scene:

``` python
scene_markers, remaining_markers = aruco_markers_group.filter_markers(aruco_detector.detected_markers)
```

## Marker poses consistency

Then, scene marker poses can be validated by verifying their spatial consistency considering angle and distance tolerance. This is particularly useful to discard ambiguous marker pose estimations when markers are parallel to the camera plane (see [issue on OpenCV Contribution repository](https://github.com/opencv/opencv_contrib/issues/3190#issuecomment-1181970839)).

``` python
# Check scene markers consistency with 10° angle tolerance and 1 cm distance tolerance
consistent_markers, inconsistent_markers, inconsistencies = aruco_markers_group.check_markers_consistency(scene_markers, 10, 1)
```

## Scene pose estimation

Several approaches are available to perform [ArUcoMarkersGroup](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarkersGroup) pose estimation from markers belonging to the scene.
The first approach considers that the scene pose can be estimated **from a single marker pose**:

``` python
# Let's select one consistent scene marker
marker_id, marker = consistent_markers.popitem()

# Estimate scene pose from a single marker
tvec, rmat = aruco_markers_group.estimate_pose_from_single_marker(marker)
```

The second approach considers that the scene pose can be estimated by **averaging several marker poses**:

``` python
# Estimate scene pose from all consistent scene markers
tvec, rmat = aruco_markers_group.estimate_pose_from_markers(consistent_markers)
```

The third approach is only available when ArUco markers are placed in such a configuration that it is possible to **define orthogonal axes**:

``` python
tvec, rmat = aruco_markers_group.estimate_pose_from_axis_markers(origin_marker, horizontal_axis_marker, vertical_axis_marker)
```

## Scene exportation

An ArUco scene can be exported to an OBJ file description to import it into most 3D editors.

``` python
# Export an ArUco scene as OBJ file description
aruco_markers_group.to_obj('markers.obj')
```

diff --git a/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md b/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md
index 455d95a..fbe06d1 100644
--- a/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md
+++ b/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md
@@ -3,11 +3,11 @@ Calibrate optic parameters

A camera device has to be calibrated to compensate for its optical distortion.

-![Optic parameters calibration](../../img/optic_calibration.png)
+![Optic parameters calibration](../../../img/optic_calibration.png)

## Print calibration board

-The first step to calibrate a camera is to create an [ArUcoBoard](../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) like in the code below:
+The first step to calibrate a camera is to create an [ArUcoBoard](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) like in the code below:

``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoBoard

@@ -29,9 +29,9 @@ Let's print the calibration board before going further.

## Capture board pictures

Then, the calibration process needs many different captures of an [ArUcoBoard](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) through the camera; these captures are passed to an [ArUcoDetector](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) instance to detect board corners, which are stored as calibration data into an [ArUcoOpticCalibrator](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator) for the final calibration process.
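For orientation, what happens underneath is OpenCV's standard camera calibration: 3D board corner positions and their 2D detections are accumulated over several captures, then solved for the camera matrix and distortion coefficients. Here is a minimal sketch using OpenCV directly rather than the ArGaze wrapper (the contents of `object_points` and `image_points` are assumed to come from prior board detections):

``` python
import cv2

# One (N,3) float32 array of board corner positions and one (N,2) float32
# array of their detected image positions per capture; both lists are
# assumed to be filled by a prior board detection loop
object_points = [...]
image_points = [...]

# Solve for the camera matrix K and distortion coefficients D
rms, K, D, rvecs, tvecs = cv2.calibrateCamera(object_points, image_points, (1920, 1080), None, None)

print(f'RMS re-projection error: {rms}')  # the "rms" field of the JSON example above
print(f'camera matrix:\n{K}')             # the "K" field
print(f'distortion coefficients: {D}')    # the "D" field
```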
![Calibration step](../../../img/optic_calibration_step.png)

The sample of code below illustrates how to:

diff --git a/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md b/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md
index 81c577f..35b64f7 100644
--- a/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md
@@ -3,7 +3,7 @@ Load and execute pipeline

Once [ArUco markers are placed into a scene](aruco_markers_description.md), they can be detected thanks to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class.

-As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class also benefits from all the services described in [gaze analysis pipeline section](./user_guide/gaze_analysis_pipeline/introduction.md).
+As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class also benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).

![ArUco camera frame](../../img/aruco_camera_frame.png)

@@ -89,7 +89,7 @@ The first [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step de

### Image parameters - *inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*

-The usual [ArFrame visualisation parameters](./user_guide/gaze_analysis_pipeline/visualisation.md) plus one additional *draw_detected_markers* field.
+The usual [ArFrame visualisation parameters](../gaze_analysis_pipeline/visualisation.md) plus one additional *draw_detected_markers* field.

## Pipeline execution

@@ -119,7 +119,7 @@ Pass each camera image to [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures

### Analyse timestamped gaze positions into camera frame

-As mentioned above, [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and so, benefits from all the services described in [gaze analysis pipeline section](./user_guide/gaze_analysis_pipeline/introduction.md).
+As mentioned above, [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and so benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).

In particular, timestamped gaze positions can be passed one by one to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole pipeline dedicated to gaze analysis.

diff --git a/docs/user_guide/aruco_markers_pipeline/introduction.md b/docs/user_guide/aruco_markers_pipeline/introduction.md
index 836569a..f781fe8 100644
--- a/docs/user_guide/aruco_markers_pipeline/introduction.md
+++ b/docs/user_guide/aruco_markers_pipeline/introduction.md
@@ -11,7 +11,7 @@ The ArGaze [ArUcoMarkers submodule](../../argaze.md/#argaze.ArUcoMarkers) eases

First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
-![ArUco markers pipeline](../../img/aruco_markers_pipeline.png)
+

To build your own ArUco markers pipeline, you need to know:

* [How to setup ArUco markers into a scene](aruco_markers_description.md),
* [How to describe scene's AOI](aoi_description.md),
* [How to load and execute ArUco markers pipeline](configuration_and_execution.md),
* [How to estimate scene pose](pose_estimation.md),
-* [How to project AOI into camera frame](aoi_projection.md),
-* [How to visualize ArUcoCamera and ArUcoScenes](visualisation.md)
+* [How to project AOI into camera frame](aoi_projection.md)
+

More advanced features are also explained, such as:

-* [How to script ArUco markers pipeline](advanced_topics/scripting.md)
-* [How to calibrate optic parameters](optic_parameters_calibration.md)
-* [How to improve ArUco markers detection](advanced_topics/aruco_detector_configuration.md)
+
+* [How to calibrate optic parameters](advanced_topics/optic_parameters_calibration.md)
+

diff --git a/docs/user_guide/gaze_analysis/gaze_movement.md b/docs/user_guide/gaze_analysis/gaze_movement.md
deleted file mode 100644
index 83f67e1..0000000
--- a/docs/user_guide/gaze_analysis/gaze_movement.md
+++ /dev/null
@@ -1,163 +0,0 @@
-Gaze movement
-=============

## Definition

!!! note

    *"The act of classifying eye movements into distinct events is, on a general level, driven by a desire to isolate different intervals of the data stream strongly correlated with certain oculomotor or cognitive properties."*

    Citation from ["One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms"](https://link.springer.com/article/10.3758/s13428-016-0738-9) article.

[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines an abstract [GazeMovement](../../argaze.md/#argaze.GazeFeatures.GazeMovement) class, then abstract [Fixation](../../argaze.md/#argaze.GazeFeatures.Fixation) and [Saccade](../../argaze.md/#argaze.GazeFeatures.Saccade) classes which inherit from [GazeMovement](../../argaze.md/#argaze.GazeFeatures.GazeMovement).

The **positions** [GazeMovement](../../argaze.md/#argaze.GazeFeatures.GazeMovement) attribute contains all [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) belonging to it.

![Fixation and Saccade](../../img/fixation_and_saccade.png)

## Identification

[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines an abstract [GazeMovementIdentifier](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier) class to allow adding various identification algorithms.

Some gaze movement identification algorithms are available thanks to the [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) submodule:

* [Dispersion threshold identification (I-DT)](../../argaze.md/#argaze.GazeAnalysis.DispersionThresholdIdentification)
* [Velocity threshold identification (I-VT)](../../argaze.md/#argaze.GazeAnalysis.VelocityThresholdIdentification)

### Identify method

The [GazeMovementIdentifier.identify](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier.identify) method allows feeding its identification algorithm with successive gaze positions to output Fixation, Saccade or any kind of GazeMovement instances.
Here is a sample of code based on the [I-DT](../../argaze.md/#argaze.GazeAnalysis.DispersionThresholdIdentification) algorithm to illustrate how to use it:

``` python
from argaze import GazeFeatures
from argaze.GazeAnalysis import DispersionThresholdIdentification

# Create a gaze movement identifier based on dispersion algorithm with 50px max deviation and 200 ms max duration thresholds
gaze_movement_identifier = DispersionThresholdIdentification.GazeMovementIdentifier(50, 200)

# Assuming that timestamped gaze positions are provided through live stream or later data reading
...:

    gaze_movement = gaze_movement_identifier.identify(timestamp, gaze_position)

    # Fixation identified
    if GazeFeatures.is_fixation(gaze_movement):

        # Access first gaze position of identified fixation
        start_ts, start_position = gaze_movement.positions.first

        # Access fixation duration
        print(f'duration: {gaze_movement.duration}')

        # Iterate over all gaze positions of identified fixation
        for ts, position in gaze_movement.positions.items():

            # Do something with each fixation position
            ...

    # Saccade identified
    elif GazeFeatures.is_saccade(gaze_movement):

        # Access first gaze position of identified saccade
        start_ts, start_position = gaze_movement.positions.first

        # Access saccade amplitude
        print(f'amplitude: {gaze_movement.amplitude}')

        # Iterate over all gaze positions of identified saccade
        for ts, position in gaze_movement.positions.items():

            # Do something with each saccade position
            ...

    # No gaze movement identified
    else:

        continue

```

### Browse method

The [GazeMovementIdentifier.browse](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier.browse) method allows to pass a [TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) buffer to apply the identification algorithm on all gaze positions inside.

Identified gaze movements are returned through:

* a [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements) instance where all fixations are stored by starting gaze position timestamp,
* a [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements) instance where all saccades are stored by starting gaze position timestamp,
* a [TimeStampedGazeStatus](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeStatus) instance where all gaze positions are linked to a fixation or saccade index.

``` python
# Assuming that timestamped gaze positions are provided through data reading

ts_fixations, ts_saccades, ts_status = gaze_movement_identifier.browse(ts_gaze_positions)

```

* ts_fixations would look like:

|timestamp|positions                                                      |duration|dispersion|focus    |
|:--------|:--------------------------------------------------------------|:-------|:---------|:--------|
|60034    |{"60034":[846,620], "60044":[837,641], "60054":[835,649], ...} |450     |40        |(840,660)|
|60504    |{"60504":[838,667], "60514":[838,667], "60524":[837,669], ...} |100     |38        |(834,651)|
|...      |...                                                             |...     |..        |...      |

* ts_saccades would look like:

|timestamp|positions                                |duration|
|:--------|:----------------------------------------|:-------|
|60484    |{"60484":[836, 669], "60494":[837, 669]} |10      |
|60594    |{"60594":[833, 613], "60614":[927, 601]} |20      |
|...      |...                                      |...     |

* ts_status would look like:

|timestamp|position  |type    |index|
|:--------|:---------|:-------|:----|
|60034    |(846, 620)|Fixation|1    |
|60044    |(837, 641)|Fixation|1    |
|...      |...       |...     |.    |
|60464    |(836, 668)|Fixation|1    |
|60474    |(836, 668)|Fixation|1    |
|60484    |(836, 669)|Saccade |1    |
|60494    |(837, 669)|Saccade |1    |
|60504    |(838, 667)|Fixation|2    |
|60514    |(838, 667)|Fixation|2    |
|...      |...       |...     |.    |
|60574    |(825, 629)|Fixation|2    |
|60584    |(829, 615)|Fixation|2    |
|60594    |(833, 613)|Saccade |2    |
|60614    |(927, 601)|Saccade |2    |
|60624    |(933, 599)|Fixation|3    |
|60634    |(934, 603)|Fixation|3    |
|...      |...       |...     |.    |


!!! note
    [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements), [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements) and [TimeStampedGazeStatus](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeStatus) classes inherit from the [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) class.

    Read the [Timestamped data](../timestamped_data/introduction.md) section to understand all the features it provides.

### Generator method

A [GazeMovementIdentifier](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier) can be called with a [TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) buffer in argument to generate gaze movements each time one is identified.

``` python
# Assuming that timestamped gaze positions are provided through data reading

for ts, gaze_movement in gaze_movement_identifier(ts_gaze_positions):

    # Fixation identified
    if GazeFeatures.is_fixation(gaze_movement):

        # Do something with each fixation
        ...

    # Saccade identified
    elif GazeFeatures.is_saccade(gaze_movement):

        # Do something with each saccade
        ...
```
\ No newline at end of file

diff --git a/docs/user_guide/gaze_analysis/gaze_position.md b/docs/user_guide/gaze_analysis/gaze_position.md
deleted file mode 100644
index 48495b4..0000000
--- a/docs/user_guide/gaze_analysis/gaze_position.md
+++ /dev/null
@@ -1,98 +0,0 @@
-Gaze position
-=============

[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines a [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class to handle point coordinates with a precision value.

``` python
from argaze import GazeFeatures

# Define a basic gaze position
gaze_position = GazeFeatures.GazePosition((123, 456))

# Define a gaze position with a precision value
gaze_position = GazeFeatures.GazePosition((789, 765), precision=10)

# Access gaze position value and precision
print(f'position: {gaze_position.value}')
print(f'precision: {gaze_position.precision}')

```

## Validity

[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) also defines an [UnvalidGazePosition](../../argaze.md/#argaze.GazeFeatures.UnvalidGazePosition) class that inherits from [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) to handle cases where no gaze position exists because of some specific device reason.

``` python
from argaze import GazeFeatures

# Define a basic unvalid gaze position
gaze_position = GazeFeatures.UnvalidGazePosition()

# Define a basic unvalid gaze position with a message value
gaze_position = GazeFeatures.UnvalidGazePosition("Something bad happened")

# Access gaze position validity
print(f'validity: {gaze_position.valid}')

```

## Distance

The [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class provides a **distance** method to calculate the distance to another gaze position instance.
![Distance](../../img/distance.png)

``` python
# Distance between A and B positions
d = gaze_position_A.distance(gaze_position_B)
```

## Overlapping

The [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class provides an **overlap** method to test if a gaze position overlaps another one considering their precisions.

![Gaze overlapping](../../img/overlapping.png)

``` python
# Check that A overlaps B
if gaze_position_A.overlap(gaze_position_B):

    # Do something if A overlaps B
    ...

# Check that A overlaps B and B overlaps A
if gaze_position_A.overlap(gaze_position_B, both=True):

    # Do something if A overlaps B AND B overlaps A
    ...
```

## Timestamped gaze positions

[TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) inherits from the [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) class specifically to handle gaze positions.

### Import from dataframe

It is possible to load timestamped gaze positions from a [Pandas DataFrame](https://pandas.pydata.org/docs/getting_started/intro_tutorials/01_table_oriented.html#min-tut-01-tableoriented) object.

```python
import pandas

# Load gaze positions from a CSV file into a Pandas DataFrame
dataframe = pandas.read_csv('gaze_positions.csv', delimiter="\t", low_memory=False)

# Convert the Pandas DataFrame into a TimestampedGazePositions buffer, specifying the column labels to use
ts_gaze_positions = GazeFeatures.TimeStampedGazePositions.from_dataframe(dataframe, timestamp = 'Recording timestamp [ms]', x = 'Gaze point X [px]', y = 'Gaze point Y [px]')

```
### Iterator

Like [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer), the [TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) class provides an iterator feature:

```python
for timestamp, gaze_position in ts_gaze_positions.items():

    # Do something with each gaze position
    ...

```

diff --git a/docs/user_guide/gaze_analysis/introduction.md b/docs/user_guide/gaze_analysis/introduction.md
deleted file mode 100644
index bf818ba..0000000
--- a/docs/user_guide/gaze_analysis/introduction.md
+++ /dev/null
@@ -1,7 +0,0 @@
-Gaze analysis
-=============

This section refers to:

* [GazeFeatures](../../argaze.md/#argaze.GazeFeatures)
* [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis)
\ No newline at end of file

diff --git a/docs/user_guide/gaze_analysis/scan_path.md b/docs/user_guide/gaze_analysis/scan_path.md
deleted file mode 100644
index 46af28b..0000000
--- a/docs/user_guide/gaze_analysis/scan_path.md
+++ /dev/null
@@ -1,169 +0,0 @@
-Scan path
-=========

[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines classes to handle successive fixations/saccades and analyse their spatial or temporal properties.

## Fixation based scan path

### Definition

The [ScanPath](../../argaze.md/#argaze.GazeFeatures.ScanPath) class is defined as a list of [ScanSteps](../../argaze.md/#argaze.GazeFeatures.ScanStep), each defined as a fixation and a consecutive saccade.

![Fixation based scan path](../../img/scan_path.png)

As fixations and saccades are identified, the scan path is built by calling respectively the [append_fixation](../../argaze.md/#argaze.GazeFeatures.ScanPath.append_fixation) and [append_saccade](../../argaze.md/#argaze.GazeFeatures.ScanPath.append_saccade) methods.
### Analysis

[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines an abstract [ScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.ScanPathAnalyzer) class to allow adding various analysis algorithms.

Some scan path analyses are available thanks to the [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) submodule:

* [K-Coefficient](../../argaze.md/#argaze.GazeAnalysis.KCoefficient)
* [Nearest Neighbor Index](../../argaze.md/#argaze.GazeAnalysis.NearestNeighborIndex)
* [Exploit Explore Ratio](../../argaze.md/#argaze.GazeAnalysis.ExploitExploreRatio)

### Example

Here is a sample of code to illustrate how to build a scan path and analyze it:

``` python
from argaze import GazeFeatures
from argaze.GazeAnalysis import KCoefficient

# Create an empty scan path
scan_path = GazeFeatures.ScanPath()

# Create a K coefficient analyzer
kc_analyzer = KCoefficient.ScanPathAnalyzer()

# Assuming a gaze movement is identified at ts time
...:

    # Fixation identified
    if GazeFeatures.is_fixation(gaze_movement):

        # Append fixation to scan path : no step is created
        scan_path.append_fixation(ts, gaze_movement)

    # Saccade identified
    elif GazeFeatures.is_saccade(gaze_movement):

        # Append saccade to scan path : a new step should be created
        new_step = scan_path.append_saccade(ts, gaze_movement)

        # Analyse scan path
        if new_step:

            K = kc_analyzer.analyze(scan_path)

            # Do something with K metric
            ...
```

## AOI based scan path

### Definition

The [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) class is defined as a list of [AOIScanSteps](../../argaze.md/#argaze.GazeFeatures.AOIScanStep), each defined as a set of consecutive fixations looking at the same Area Of Interest (AOI) and a consecutive saccade.

![AOI based scan path](../../img/aoi_scan_path.png)

As fixations and saccades are identified, the scan path is built by calling respectively the [append_fixation](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.append_fixation) and [append_saccade](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.append_saccade) methods.

### Analysis

[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines an abstract [AOIScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.AOIScanPathAnalyzer) class to allow adding various analysis algorithms.

Some scan path analyses are available thanks to the [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) submodule:

* [Transition matrix](../../argaze.md/#argaze.GazeAnalysis.TransitionMatrix)
* [Entropy](../../argaze.md/#argaze.GazeAnalysis.Entropy)
* [Lempel-Ziv complexity](../../argaze.md/#argaze.GazeAnalysis.LempelZivComplexity)
* [N-Gram](../../argaze.md/#argaze.GazeAnalysis.NGram)
* [K-modified coefficient](../../argaze.md/#argaze.GazeAnalysis.KCoefficient)

### Example

Here is a sample of code to illustrate how to build an AOI scan path and analyze it:

``` python
from argaze import GazeFeatures
from argaze.GazeAnalysis import LempelZivComplexity

# Assuming all AOI names are listed
...

# Create an empty AOI scan path
aoi_scan_path = GazeFeatures.AOIScanPath(aoi_names)

# Create a Lempel-Ziv complexity analyzer
lzc_analyzer = LempelZivComplexity.AOIScanPathAnalyzer()

# Assuming a gaze movement is identified at ts time
...:

    # Fixation identified
    if GazeFeatures.is_fixation(gaze_movement):

        # Assuming fixation is detected as inside an AOI
        ...
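        # (Illustration only: find_looked_aoi is NOT an ArGaze function;
        # it stands for whatever AOI matching logic fits your setup,
        # e.g. testing the fixation focus point against each AOI.)
        # looked_aoi_name = find_looked_aoi(gaze_movement.focus, aoi_scene)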
        # Append fixation to AOI scan path : a new step should be created
        new_step = aoi_scan_path.append_fixation(ts, gaze_movement, looked_aoi_name)

        # Analyse AOI scan path
        if new_step:

            LZC = lzc_analyzer.analyze(aoi_scan_path)

            # Do something with LZC metric
            ...

    # Saccade identified
    elif GazeFeatures.is_saccade(gaze_movement):

        # Append saccade to scan path : no step is created
        aoi_scan_path.append_saccade(ts, gaze_movement)

```

### Advanced

The [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) class provides some advanced features to analyse it.

#### Letter sequence

When a new [AOIScanStep](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) is created, the [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) internally assigns a unique letter index related to its AOI to ease pattern analysis.
Then, the [AOIScanPath letter_sequence](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.letter_sequence) property returns the concatenation of each [AOIScanStep](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) letter.
The [AOIScanPath get_letter_aoi](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.get_letter_aoi) method helps to get back the AOI related to a letter index.

``` python
# Assuming the following AOI scan path is built: Foo > Bar > Shu > Foo
aoi_scan_path = ...

# Letter sequence representation should be: 'ABCA'
print(aoi_scan_path.letter_sequence)

# Output should be: 'Bar'
print(aoi_scan_path.get_letter_aoi('B'))

```

#### Transition matrix

When a new [AOIScanStep](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) is created, the [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) internally counts the number of transitions from an AOI to another AOI to ease Markov chain analysis.
Then, the [AOIScanPath transition_matrix](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.transition_matrix) property returns a [Pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) where indexes are transition departures and columns are transition destinations.

Here is an example of a transition matrix for the following [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath): Foo > Bar > Shu > Foo > Bar

|   |Foo|Bar|Shu|
|:--|:--|:--|:--|
|Foo|0  |2  |0  |
|Bar|0  |0  |1  |
|Shu|1  |0  |0  |


#### Fixations count

The [AOIScanPath fixations_count](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.fixations_count) method returns the total number of fixations in the whole scan path and a dictionary to get the fixations count per AOI.

diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
index 81efa40..637ba57 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
@@ -133,7 +133,7 @@ A [python Exception](https://docs.python.org/3/tutorial/errors.html#exceptions)

## Setup ArFrame image parameters

-[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a python dictionary.
+[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
```python
# Assuming ArFrame is loaded

diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
index ffc72c7..84730d4 100644
--- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
@@ -1,5 +1,5 @@
-Add AOI analysis
-================
+Enable AOI analysis
+===================

The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class defines a space where gaze movements are matched with AOIs and inside which those matchings need to be analyzed.

diff --git a/docs/user_guide/gaze_analysis_pipeline/introduction.md b/docs/user_guide/gaze_analysis_pipeline/introduction.md
index 02aa82e..23b41a9 100644
--- a/docs/user_guide/gaze_analysis_pipeline/introduction.md
+++ b/docs/user_guide/gaze_analysis_pipeline/introduction.md
@@ -11,7 +11,7 @@ To build your own gaze analysis pipeline, you need to know:

* [How to edit timestamped gaze positions](timestamped_gaze_positions_edition.md),
* [How to load and execute gaze analysis pipeline](configuration_and_execution.md),
-* [How to add AOI analysis](aoi_analysis.md),
+* [How to enable AOI analysis](aoi_analysis.md),
* [How to visualize ArFrame and ArLayers](visualisation.md),
* [How to log resulting gaze analysis](logging.md),
* [How to make heatmap image](heatmap.md).

diff --git a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_matchers.md b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_matchers.md
index c8fa63c..61338cc 100644
--- a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_matchers.md
+++ b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_matchers.md
@@ -3,7 +3,7 @@ AOI matchers

ArGaze provides ready-to-use AOI matching algorithms.

-Here are JSON samples to include the chosen module inside [ArLayer configuration](../ar_layer_configuration_and_execution.md) *aoi_matcher* entry.
+Here are JSON samples to include the chosen module inside the [ArLayer configuration](../aoi_analysis.md) *aoi_matcher* entry.

## Deviation circle coverage

diff --git a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_scan_path_analyzers.md b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_scan_path_analyzers.md
index 8d02967..ad1832d 100644
--- a/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_scan_path_analyzers.md
+++ b/docs/user_guide/gaze_analysis_pipeline/pipeline_modules/aoi_scan_path_analyzers.md
@@ -3,7 +3,7 @@ AOI scan path analyzers

ArGaze provides ready-to-use AOI scan path analysis algorithms.

-Here are JSON samples to include a chosen module inside [ArLayer configuration](../ar_layer_configuration_and_execution.md) *aoi_scan_path_analyzers* entry.
+Here are JSON samples to include a chosen module inside the [ArLayer configuration](../aoi_analysis.md) *aoi_scan_path_analyzers* entry.

## Basic metrics

diff --git a/docs/user_guide/gaze_features/gaze_movement.md b/docs/user_guide/gaze_features/gaze_movement.md
new file mode 100644
index 0000000..83f67e1
--- /dev/null
+++ b/docs/user_guide/gaze_features/gaze_movement.md
@@ -0,0 +1,163 @@
+Gaze movement
+=============

## Definition

!!! note

    *"The act of classifying eye movements into distinct events is, on a general level, driven by a desire to isolate different intervals of the data stream strongly correlated with certain oculomotor or cognitive properties."*

    Citation from ["One algorithm to rule them all?
An evaluation and discussion of ten eye movement event-detection algorithms"](https://link.springer.com/article/10.3758/s13428-016-0738-9) article.

[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines an abstract [GazeMovement](../../argaze.md/#argaze.GazeFeatures.GazeMovement) class, then abstract [Fixation](../../argaze.md/#argaze.GazeFeatures.Fixation) and [Saccade](../../argaze.md/#argaze.GazeFeatures.Saccade) classes which inherit from [GazeMovement](../../argaze.md/#argaze.GazeFeatures.GazeMovement).

The **positions** [GazeMovement](../../argaze.md/#argaze.GazeFeatures.GazeMovement) attribute contains all [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) belonging to it.

![Fixation and Saccade](../../img/fixation_and_saccade.png)

## Identification

[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines an abstract [GazeMovementIdentifier](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier) class to allow adding various identification algorithms.

Some gaze movement identification algorithms are available thanks to the [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) submodule:

* [Dispersion threshold identification (I-DT)](../../argaze.md/#argaze.GazeAnalysis.DispersionThresholdIdentification)
* [Velocity threshold identification (I-VT)](../../argaze.md/#argaze.GazeAnalysis.VelocityThresholdIdentification)

### Identify method

The [GazeMovementIdentifier.identify](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier.identify) method allows feeding its identification algorithm with successive gaze positions to output Fixation, Saccade or any kind of GazeMovement instances.

Here is a sample of code based on the [I-DT](../../argaze.md/#argaze.GazeAnalysis.DispersionThresholdIdentification) algorithm to illustrate how to use it:

``` python
from argaze import GazeFeatures
from argaze.GazeAnalysis import DispersionThresholdIdentification

# Create a gaze movement identifier based on dispersion algorithm with 50px max deviation and 200 ms max duration thresholds
gaze_movement_identifier = DispersionThresholdIdentification.GazeMovementIdentifier(50, 200)

# Assuming that timestamped gaze positions are provided through live stream or later data reading
...:

    gaze_movement = gaze_movement_identifier.identify(timestamp, gaze_position)

    # Fixation identified
    if GazeFeatures.is_fixation(gaze_movement):

        # Access first gaze position of identified fixation
        start_ts, start_position = gaze_movement.positions.first

        # Access fixation duration
        print(f'duration: {gaze_movement.duration}')

        # Iterate over all gaze positions of identified fixation
        for ts, position in gaze_movement.positions.items():

            # Do something with each fixation position
            ...

    # Saccade identified
    elif GazeFeatures.is_saccade(gaze_movement):

        # Access first gaze position of identified saccade
        start_ts, start_position = gaze_movement.positions.first

        # Access saccade amplitude
        print(f'amplitude: {gaze_movement.amplitude}')

        # Iterate over all gaze positions of identified saccade
        for ts, position in gaze_movement.positions.items():

            # Do something with each saccade position
            ...
    # No gaze movement identified
    else:

        continue

```

### Browse method

The [GazeMovementIdentifier.browse](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier.browse) method allows to pass a [TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) buffer to apply the identification algorithm on all gaze positions inside.

Identified gaze movements are returned through:

* a [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements) instance where all fixations are stored by starting gaze position timestamp,
* a [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements) instance where all saccades are stored by starting gaze position timestamp,
* a [TimeStampedGazeStatus](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeStatus) instance where all gaze positions are linked to a fixation or saccade index.

``` python
# Assuming that timestamped gaze positions are provided through data reading

ts_fixations, ts_saccades, ts_status = gaze_movement_identifier.browse(ts_gaze_positions)

```

* ts_fixations would look like:

|timestamp|positions                                                      |duration|dispersion|focus    |
|:--------|:--------------------------------------------------------------|:-------|:---------|:--------|
|60034    |{"60034":[846,620], "60044":[837,641], "60054":[835,649], ...} |450     |40        |(840,660)|
|60504    |{"60504":[838,667], "60514":[838,667], "60524":[837,669], ...} |100     |38        |(834,651)|
|...      |...                                                             |...     |..        |...      |

* ts_saccades would look like:

|timestamp|positions                                |duration|
|:--------|:----------------------------------------|:-------|
|60484    |{"60484":[836, 669], "60494":[837, 669]} |10      |
|60594    |{"60594":[833, 613], "60614":[927, 601]} |20      |
|...      |...                                      |...     |

* ts_status would look like:

|timestamp|position  |type    |index|
|:--------|:---------|:-------|:----|
|60034    |(846, 620)|Fixation|1    |
|60044    |(837, 641)|Fixation|1    |
|...      |...       |...     |.    |
|60464    |(836, 668)|Fixation|1    |
|60474    |(836, 668)|Fixation|1    |
|60484    |(836, 669)|Saccade |1    |
|60494    |(837, 669)|Saccade |1    |
|60504    |(838, 667)|Fixation|2    |
|60514    |(838, 667)|Fixation|2    |
|...      |...       |...     |.    |
|60574    |(825, 629)|Fixation|2    |
|60584    |(829, 615)|Fixation|2    |
|60594    |(833, 613)|Saccade |2    |
|60614    |(927, 601)|Saccade |2    |
|60624    |(933, 599)|Fixation|3    |
|60634    |(934, 603)|Fixation|3    |
|...      |...       |...     |.    |


!!! note
    [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements), [TimeStampedGazeMovements](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeMovements) and [TimeStampedGazeStatus](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazeStatus) classes inherit from the [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) class.

    Read the [Timestamped data](../timestamped_data/introduction.md) section to understand all the features it provides.

### Generator method

A [GazeMovementIdentifier](../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier) can be called with a [TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) buffer in argument to generate gaze movements each time one is identified.

``` python
# Assuming that timestamped gaze positions are provided through data reading

for ts, gaze_movement in gaze_movement_identifier(ts_gaze_positions):

    # Fixation identified
    if GazeFeatures.is_fixation(gaze_movement):

        # Do something with each fixation
        ...
+
+    # Saccade identified
+    elif GazeFeatures.is_saccade(gaze_movement):
+
+        # Do something with each saccade
+        ...
+```
\ No newline at end of file
diff --git a/docs/user_guide/gaze_features/gaze_position.md b/docs/user_guide/gaze_features/gaze_position.md
new file mode 100644
index 0000000..48495b4
--- /dev/null
+++ b/docs/user_guide/gaze_features/gaze_position.md
@@ -0,0 +1,98 @@
+Gaze position
+=============
+
+[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines a [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class to handle point coordinates with a precision value.
+
+``` python
+from argaze import GazeFeatures
+
+# Define a basic gaze position
+gaze_position = GazeFeatures.GazePosition((123, 456))
+
+# Define a gaze position with a precision value
+gaze_position = GazeFeatures.GazePosition((789, 765), precision=10)
+
+# Access to gaze position value and precision
+print(f'position: {gaze_position.value}')
+print(f'precision: {gaze_position.precision}')
+
+```
+
+## Validity
+
+[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) also defines an [UnvalidGazePosition](../../argaze.md/#argaze.GazeFeatures.UnvalidGazePosition) class that inherits from [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) to handle cases where no gaze position exists for some device-specific reason.
+
+``` python
+from argaze import GazeFeatures
+
+# Define a basic unvalid gaze position
+gaze_position = GazeFeatures.UnvalidGazePosition()
+
+# Define a basic unvalid gaze position with a message value
+gaze_position = GazeFeatures.UnvalidGazePosition("Something bad happened")
+
+# Access to gaze position validity
+print(f'validity: {gaze_position.valid}')
+
+```
+
+## Distance
+
+The [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class provides a **distance** method to calculate the distance to another gaze position instance.
+
+![Distance](../../img/distance.png)
+
+``` python
+# Distance between A and B positions
+d = gaze_position_A.distance(gaze_position_B)
+```
+
+## Overlapping
+
+The [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class provides an **overlap** method to test if a gaze position overlaps another one, considering their precisions.
+
+![Gaze overlapping](../../img/overlapping.png)
+
+``` python
+# Check that A overlaps B
+if gaze_position_A.overlap(gaze_position_B):
+
+    # Do something if A overlaps B
+    ...
+
+# Check that A overlaps B and B overlaps A
+if gaze_position_A.overlap(gaze_position_B, both=True):
+
+    # Do something if A overlaps B AND B overlaps A
+    ...
+```
+
+## Timestamped gaze positions
+
+[TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) inherits from the [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer) class to specifically handle gaze positions.
+
+### Import from dataframe
+
+It is possible to load timestamped gaze positions from a [Pandas DataFrame](https://pandas.pydata.org/docs/getting_started/intro_tutorials/01_table_oriented.html#min-tut-01-tableoriented) object.
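+
+The example below reads a tab-separated CSV file. Note that the column labels passed to **from_dataframe** are only illustrative (they mimic a typical eye tracker export) and should be adapted to the actual recording file: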
+
+```python
+import pandas
+
+from argaze import GazeFeatures
+
+# Load gaze positions from a CSV file into a Pandas DataFrame
+dataframe = pandas.read_csv('gaze_positions.csv', delimiter="\t", low_memory=False)
+
+# Convert the Pandas DataFrame into a TimeStampedGazePositions buffer, specifying which column labels to use
+ts_gaze_positions = GazeFeatures.TimeStampedGazePositions.from_dataframe(dataframe, timestamp='Recording timestamp [ms]', x='Gaze point X [px]', y='Gaze point Y [px]')
+
+```
+
+### Iterator
+
+Like [TimeStampedBuffer](../../argaze.md/#argaze.DataStructures.TimeStampedBuffer), the [TimeStampedGazePositions](../../argaze.md/#argaze.GazeFeatures.TimeStampedGazePositions) class provides an iterator feature:
+
+```python
+for timestamp, gaze_position in ts_gaze_positions.items():
+
+    # Do something with each gaze position
+    ...
+
+```
diff --git a/docs/user_guide/gaze_features/introduction.md b/docs/user_guide/gaze_features/introduction.md
new file mode 100644
index 0000000..bf818ba
--- /dev/null
+++ b/docs/user_guide/gaze_features/introduction.md
@@ -0,0 +1,7 @@
+Gaze analysis
+=============
+
+This section refers to:
+
+* [GazeFeatures](../../argaze.md/#argaze.GazeFeatures)
+* [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis)
\ No newline at end of file
diff --git a/docs/user_guide/gaze_features/scan_path.md b/docs/user_guide/gaze_features/scan_path.md
new file mode 100644
index 0000000..46af28b
--- /dev/null
+++ b/docs/user_guide/gaze_features/scan_path.md
@@ -0,0 +1,169 @@
+Scan path
+=========
+
+[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines classes to handle successive fixations/saccades and analyse their spatial or temporal properties.
+
+## Fixation based scan path
+
+### Definition
+
+The [ScanPath](../../argaze.md/#argaze.GazeFeatures.ScanPath) class is defined as a list of [ScanSteps](../../argaze.md/#argaze.GazeFeatures.ScanStep), each defined as a fixation and the consecutive saccade.
+
+![Fixation based scan path](../../img/scan_path.png)
+
+As fixations and saccades are identified, the scan path is built by calling the [append_fixation](../../argaze.md/#argaze.GazeFeatures.ScanPath.append_fixation) and [append_saccade](../../argaze.md/#argaze.GazeFeatures.ScanPath.append_saccade) methods respectively.
+
+### Analysis
+
+[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines an abstract [ScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.ScanPathAnalyzer) class to allow adding various analysis algorithms.
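+
+As a minimal sketch of what such an addition could look like, assuming (as the examples below suggest) that a concrete analyzer only has to implement an **analyze** method taking the scan path, a custom analyzer might be written as follows. **StepCountAnalyzer** and its **steps_count** attribute are hypothetical names, not part of the library:
+
+``` python
+from argaze import GazeFeatures
+
+class StepCountAnalyzer(GazeFeatures.ScanPathAnalyzer):
+    """Toy analyzer which simply counts scan steps (illustrative only)."""
+
+    def analyze(self, scan_path: GazeFeatures.ScanPath):
+
+        # A ScanPath is documented as a list of ScanSteps,
+        # so it is assumed here to support len()
+        self.steps_count = len(scan_path)
+```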
+
+Some scan path analyses are available thanks to the [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) submodule:
+
+* [K-Coefficient](../../argaze.md/#argaze.GazeAnalysis.KCoefficient)
+* [Nearest Neighbor Index](../../argaze.md/#argaze.GazeAnalysis.NearestNeighborIndex)
+* [Exploit Explore Ratio](../../argaze.md/#argaze.GazeAnalysis.ExploitExploreRatio)
+
+### Example
+
+Here is a code sample to illustrate how to build a scan path and analyze it:
+
+``` python
+from argaze import GazeFeatures
+from argaze.GazeAnalysis import KCoefficient
+
+# Create an empty scan path
+scan_path = GazeFeatures.ScanPath()
+
+# Create a K coefficient analyzer
+kc_analyzer = KCoefficient.ScanPathAnalyzer()
+
+# Assuming a gaze movement is identified at ts time
+...:
+
+    # Fixation identified
+    if GazeFeatures.is_fixation(gaze_movement):
+
+        # Append fixation to scan path: no step is created
+        scan_path.append_fixation(ts, gaze_movement)
+
+    # Saccade identified
+    elif GazeFeatures.is_saccade(gaze_movement):
+
+        # Append saccade to scan path: a new step should be created
+        new_step = scan_path.append_saccade(ts, gaze_movement)
+
+        # Analyse scan path
+        if new_step:
+
+            K = kc_analyzer.analyze(scan_path)
+
+            # Do something with K metric
+            ...
+```
+
+## AOI based scan path
+
+### Definition
+
+The [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) class is defined as a list of [AOIScanSteps](../../argaze.md/#argaze.GazeFeatures.AOIScanStep), each defined as a set of consecutive fixations looking at the same Area Of Interest (AOI) and the consecutive saccade.
+
+![AOI based scan path](../../img/aoi_scan_path.png)
+
+As fixations and saccades are identified, the scan path is built by calling the [append_fixation](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.append_fixation) and [append_saccade](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.append_saccade) methods respectively.
+
+### Analysis
+
+[GazeFeatures](../../argaze.md/#argaze.GazeFeatures) defines an abstract [AOIScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.AOIScanPathAnalyzer) class to allow adding various analysis algorithms.
+
+Some scan path analyses are available thanks to the [GazeAnalysis](../../argaze.md/#argaze.GazeAnalysis) submodule:
+
+* [Transition matrix](../../argaze.md/#argaze.GazeAnalysis.TransitionMatrix)
+* [Entropy](../../argaze.md/#argaze.GazeAnalysis.Entropy)
+* [Lempel-Ziv complexity](../../argaze.md/#argaze.GazeAnalysis.LempelZivComplexity)
+* [N-Gram](../../argaze.md/#argaze.GazeAnalysis.NGram)
+* [K-modified coefficient](../../argaze.md/#argaze.GazeAnalysis.KCoefficient)
+
+### Example
+
+Here is a code sample to illustrate how to build an AOI scan path and analyze it:
+
+``` python
+from argaze import GazeFeatures
+from argaze.GazeAnalysis import LempelZivComplexity
+
+# Assuming all AOI names are listed in an aoi_names variable
+...
+
+# Create an empty AOI scan path
+aoi_scan_path = GazeFeatures.AOIScanPath(aoi_names)
+
+# Create a Lempel-Ziv complexity analyzer
+lzc_analyzer = LempelZivComplexity.AOIScanPathAnalyzer()
+
+# Assuming a gaze movement is identified at ts time
+...:
+
+    # Fixation identified
+    if GazeFeatures.is_fixation(gaze_movement):
+
+        # Assuming the fixation is detected as inside an AOI whose name is stored in looked_aoi_name
+        ...
+
+        # Append fixation to AOI scan path: a new step should be created
+        new_step = aoi_scan_path.append_fixation(ts, gaze_movement, looked_aoi_name)
+
+        # Analyse AOI scan path
+        if new_step:
+
+            LZC = lzc_analyzer.analyze(aoi_scan_path)
+
+            # Do something with LZC metric
+            ...
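+
+            # For instance (illustrative only), print the metric:
+            # print(f'Lempel-Ziv complexity: {LZC}')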
+
+    # Saccade identified
+    elif GazeFeatures.is_saccade(gaze_movement):
+
+        # Append saccade to scan path: no step is created
+        aoi_scan_path.append_saccade(ts, gaze_movement)
+
+```
+
+### Advanced
+
+The [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) class provides some advanced analysis features.
+
+#### Letter sequence
+
+When a new [AOIScanStep](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) is created, the [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) internally assigns a unique letter index to its AOI to ease pattern analysis.
+Then, the [AOIScanPath letter_sequence](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.letter_sequence) property returns the concatenation of each [AOIScanStep](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) letter.
+The [AOIScanPath get_letter_aoi](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.get_letter_aoi) method returns the AOI related to a given letter index.
+
+``` python
+# Assuming the following AOI scan path is built: Foo > Bar > Shu > Foo
+aoi_scan_path = ...
+
+# Letter sequence representation should be: 'ABCA'
+print(aoi_scan_path.letter_sequence)
+
+# Output should be: 'Bar'
+print(aoi_scan_path.get_letter_aoi('B'))
+
+```
+
+#### Transition matrix
+
+When a new [AOIScanStep](../../argaze.md/#argaze.GazeFeatures.AOIScanStep) is created, the [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) internally counts the transitions from each AOI to each other AOI to ease Markov chain analysis.
+Then, the [AOIScanPath transition_matrix](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.transition_matrix) property returns a [Pandas DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html) where indexes are transition departures and columns are transition destinations.
+
+Here is an example of a transition matrix for the following [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath): Foo > Bar > Shu > Foo > Bar
+
+|   |Foo|Bar|Shu|
+|:--|:--|:--|:--|
+|Foo|0  |2  |0  |
+|Bar|0  |0  |1  |
+|Shu|1  |0  |0  |
+
+
+#### Fixations count
+
+The [AOIScanPath fixations_count](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.fixations_count) method returns the total number of fixations in the whole scan path, together with a dictionary giving the fixations count per AOI.
-- cgit v1.1