Diffstat (limited to 'docs')
-rw-r--r--  docs/contributor_guide/build_package.md | 36
-rw-r--r--  docs/img/4flight_visual_pattern.png | bin 0 -> 331959 bytes
-rw-r--r--  docs/img/4flight_workspace.png | bin 0 -> 311033 bytes
-rw-r--r--  docs/img/argaze_load_gui.png | bin 168761 -> 151200 bytes
-rw-r--r--  docs/img/argaze_load_gui_haiku.png | bin 0 -> 321011 bytes
-rw-r--r--  docs/img/argaze_load_gui_pfe.png | bin 0 -> 454631 bytes
-rw-r--r--  docs/img/argaze_pipeline.png | bin 92231 -> 92553 bytes
-rw-r--r--  docs/index.md | 2
-rw-r--r--  docs/installation.md | 4
-rw-r--r--  docs/use_cases/air_controller_gaze_study/context.md | 22
-rw-r--r--  docs/use_cases/air_controller_gaze_study/introduction.md | 48
-rw-r--r--  docs/use_cases/air_controller_gaze_study/observers.md | 90
-rw-r--r--  docs/use_cases/air_controller_gaze_study/pipeline.md | 366
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/context.md | 7
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/introduction.md | 12
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/observers.md | 6
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/pipeline.md | 6
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/context.md (renamed from docs/use_cases/pilot_gaze_monitoring/context.md) | 13
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/introduction.md (renamed from docs/use_cases/pilot_gaze_monitoring/introduction.md) | 7
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/observers.md (renamed from docs/use_cases/pilot_gaze_monitoring/observers.md) | 4
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/pipeline.md (renamed from docs/use_cases/pilot_gaze_monitoring/pipeline.md) | 51
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md | 4
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md | 2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md | 2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md | 2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md | 2
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md | 82
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/scripting.md | 8
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md | 4
-rw-r--r--  docs/user_guide/eye_tracking_context/configuration_and_execution.md | 9
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/file.md | 75
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/opencv.md | 18
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md | 32
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/pupil_labs_neon.md (renamed from docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md) | 10
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md | 8
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md | 32
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md | 2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md | 4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md | 5
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/background.md | 4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/heatmap.md | 4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/visualization.md | 2
-rw-r--r--  docs/user_guide/utils/demonstrations_scripts.md | 110
-rw-r--r--  docs/user_guide/utils/estimate_aruco_markers_pose.md | 4
-rw-r--r--  docs/user_guide/utils/main_commands.md | 7
45 files changed, 983 insertions, 123 deletions
diff --git a/docs/contributor_guide/build_package.md b/docs/contributor_guide/build_package.md
new file mode 100644
index 0000000..fae1730
--- /dev/null
+++ b/docs/contributor_guide/build_package.md
@@ -0,0 +1,36 @@
+Build package
+=============
+
+The ArGaze build system is based on [setuptools](https://setuptools.pypa.io/en/latest/userguide/index.html) and [setuptools-scm](https://setuptools-scm.readthedocs.io/en/latest/), which derives the package version number from the latest Git tag.
+
+!!! note
+
+    *All inline commands below have to be executed at the root of the ArGaze Git repository.*
+
+Install or upgrade the required packages:
+
+```console
+pip install build setuptools setuptools-scm
+```
+
+Commit the last changes, then tag the Git repository with a VERSION that follows the [setuptools versioning schemes](https://setuptools.pypa.io/en/latest/userguide/distribution.html):
+
+```console
+git tag -a VERSION -m "Version message"
+```
+
+Push commits and tags:
+
+```console
+git push && git push --tags
+```
+
+Then, build the package:
+
+```console
+python -m build
+```
+
+Once the build is done, two files are created in a *dist* folder:
+
+* **argaze-VERSION-py3-none-any.whl**: the built wheel package (*py3* means any Python 3 interpreter, *none* means no specific ABI, *any* means any platform).
+* **argaze-VERSION.tar.gz**: the source package.
diff --git a/docs/img/4flight_visual_pattern.png b/docs/img/4flight_visual_pattern.png
new file mode 100644
index 0000000..0550063
--- /dev/null
+++ b/docs/img/4flight_visual_pattern.png
Binary files differ
diff --git a/docs/img/4flight_workspace.png b/docs/img/4flight_workspace.png
new file mode 100644
index 0000000..f899ab2
--- /dev/null
+++ b/docs/img/4flight_workspace.png
Binary files differ
diff --git a/docs/img/argaze_load_gui.png b/docs/img/argaze_load_gui.png
index b8874b2..e012adc 100644
--- a/docs/img/argaze_load_gui.png
+++ b/docs/img/argaze_load_gui.png
Binary files differ
diff --git a/docs/img/argaze_load_gui_haiku.png b/docs/img/argaze_load_gui_haiku.png
new file mode 100644
index 0000000..6a4e1ec
--- /dev/null
+++ b/docs/img/argaze_load_gui_haiku.png
Binary files differ
diff --git a/docs/img/argaze_load_gui_pfe.png b/docs/img/argaze_load_gui_pfe.png
new file mode 100644
index 0000000..0e622d3
--- /dev/null
+++ b/docs/img/argaze_load_gui_pfe.png
Binary files differ
diff --git a/docs/img/argaze_pipeline.png b/docs/img/argaze_pipeline.png
index 953cbba..61606b2 100644
--- a/docs/img/argaze_pipeline.png
+++ b/docs/img/argaze_pipeline.png
Binary files differ
diff --git a/docs/index.md b/docs/index.md
index 2b668a3..ca9271a 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -14,7 +14,7 @@ By offering a wide array of gaze metrics and supporting easy extension to incorp
## Eye tracking context
-**ArGaze** facilitates the integration of both **screen-based and head-mounted** eye tracking systems for **real-time and/or post-processing analysis**.
+**ArGaze** facilitates the integration of both **screen-based and head-mounted** eye tracking systems for **live data capture and subsequent data playback**.
[Learn how to handle various eye tracking context by reading the dedicated user guide section](./user_guide/eye_tracking_context/introduction.md).
diff --git a/docs/installation.md b/docs/installation.md
index 66b801b..fe4cfa4 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -37,8 +37,8 @@ pip install ./dist/argaze-VERSION.whl
!!! note "As ArGaze package contributor"
- *You should prefer to install the package in developer mode to test live code changes:*
+ *You should prefer to install the package in editable mode to test live code changes:*
```
- pip install -e .
+ pip install --editable .
```
diff --git a/docs/use_cases/air_controller_gaze_study/context.md b/docs/use_cases/air_controller_gaze_study/context.md
new file mode 100644
index 0000000..8bb4ef8
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/context.md
@@ -0,0 +1,22 @@
+Data playback context
+======================
+
+The context handles incoming eye tracker data before passing it to a processing pipeline.
+
+## data_playback_context.json
+
+For this use case we need to read Tobii Pro Glasses 2 records: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to play back data from records made by this device.
+
+While the *segment* entry is specific to the [TobiiProGlasses2.SegmentPlayback](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback) class, the *name* and *pipeline* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback": {
+ "name": "Tobii Pro Glasses 2 segment playback",
+ "segment": "/Volumes/projects/fbr6k3e/records/4rcbdzk/segments/1",
+ "pipeline": "post_processing_pipeline.json"
+ }
+}
+```
+
+The [post_processing_pipeline.json](pipeline.md) file mentioned above is described in the next chapter.
diff --git a/docs/use_cases/air_controller_gaze_study/introduction.md b/docs/use_cases/air_controller_gaze_study/introduction.md
new file mode 100644
index 0000000..f188eec
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/introduction.md
@@ -0,0 +1,48 @@
+Post-processing head-mounted eye tracking records
+=================================================
+
+**ArGaze** enabled a study of air traffic controller gaze strategy.
+
+The following use case has integrated the [ArUco marker pipeline](../../user_guide/aruco_marker_pipeline/introduction.md) to map air traffic controllers' gaze onto a multiple-screen environment in post-processing, and then to enable scan path study using the [gaze analysis pipeline](../../user_guide/gaze_analysis_pipeline/introduction.md).
+
+## Background
+
+The next-gen air traffic control system (4Flight) aims to enhance the operational capacity of the en-route control center by offering new tools to air traffic controllers. However, it entails significant changes in their working method, which will consequently have an impact on how they are trained.
+Several research projects on visual patterns of air traffic controllers indicate the urgent need to improve the effectiveness of training in visual information seeking behavior.
+An exploratory study was initiated by a group of trainee air traffic controllers with the aim of analyzing the visual patterns of novice controllers and instructors, intending to propose guidelines regarding the visual pattern for training.
+
+## Environment
+
+The 4Flight control position consists of two screens: the first displays the radar image along with other information regarding the observed sector; the second displays the agenda, which allows the controller to link conflicting aircraft by creating data blocks, and the Dyp info, which displays some information about the flight.
+During their training, controllers are taught to visually follow all aircraft streams along a given route, focusing on their planned flight path and potential interactions with other aircraft.
+
+![4Flight Workspace](../../img/4flight_workspace.png)
+
+A traffic simulation of moderate difficulty with a maximum of 13 and 16 aircraft simultaneously was performed by air traffic controllers. The controller could encounter lateral conflicts (same altitude) between 2 and 3 aircraft and conflicts between aircraft that need to ascend or descend within the sector.
+After the simulation, a directed interview about the gaze pattern was conducted.
+Eye tracking data was recorded with a Tobii Pro Glasses 2, a head-mounted eye tracker.
+The gaze and scene camera video were captured with Tobii Pro Lab software and post-processed with **ArGaze** software library.
+As the eye tracker is head-mounted, ArUco markers were placed around the two screens to ensure that several of them were always visible in the field of view of the eye tracker camera.
+
+Various metrics were exported with specific pipeline observers, including average fixation duration, explore/exploit ratio, K-coefficient, AOI distribution, transition matrix, entropy and N-grams.
+Although statistical analysis is not possible due to the small sample size of the study (6 instructors, 5 qualified controllers, and 5 trainees), visual pattern summaries have been manually built from the transition matrix export to produce a qualitative interpretation showing what instructors attend to during training and how qualified controllers work. Red arcs represent more frequent transitions than blue ones: instructors (Fig. a) and four different qualified controllers (Fig. b, c, d, e).
+
+![4Flight Visual pattern](../../img/4flight_visual_pattern.png)
+
+## Setup
+
+The setup to integrate **ArGaze** into the experiment is defined by 3 main files detailed in the next chapters:
+
+* The context file that plays back gaze data and scene camera video records: [data_playback_context.json](context.md)
+* The pipeline file that processes gaze data and scene camera video: [post_processing_pipeline.json](pipeline.md)
+* The observers file that exports analysis outputs: [observers.py](observers.md)
+
+As any **ArGaze** setup, it is loaded by executing the [*load* command](../../user_guide/utils/main_commands.md):
+
+```shell
+python -m argaze load data_playback_context.json
+```
+
+This command opens one GUI window per frame (one for the scene camera, one for the sector screen and one for the info screen), allowing gaze mapping to be monitored while processing.
+
+![ArGaze load GUI for PFE study](../../img/argaze_load_gui_pfe.png)
diff --git a/docs/use_cases/air_controller_gaze_study/observers.md b/docs/use_cases/air_controller_gaze_study/observers.md
new file mode 100644
index 0000000..500d573
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/observers.md
@@ -0,0 +1,90 @@
+Metrics and video recording
+===========================
+
+Observers are attached to pipeline steps to be notified when a method is called.
+
+## observers.py
+
+For this use case we need to record gaze analysis metrics on each *ArUcoCamera.on_look* call and to record the sector screen image on each *ArUcoCamera.on_copy_background_into_scenes_frames* signal.
+
+```python
+import logging
+
+from argaze.utils import UtilsFeatures
+
+import cv2
+
+class ScanPathAnalysisRecorder(UtilsFeatures.FileWriter):
+
+    def __init__(self, **kwargs):
+
+        super().__init__(**kwargs)
+
+        # Column headers written as the first line of the output CSV file
+        self.header = "Timestamp (ms)", "Path duration (ms)", "Steps number", "Fixation durations average (ms)", "Explore/Exploit ratio", "K coefficient"
+
+    def on_look(self, timestamp, frame, exception):
+        """Log scan path metrics."""
+
+        if frame.is_analysis_available():
+
+            analysis = frame.analysis()
+
+            data = (
+                int(timestamp),
+                analysis['argaze.GazeAnalysis.Basic.ScanPathAnalyzer'].path_duration,
+                analysis['argaze.GazeAnalysis.Basic.ScanPathAnalyzer'].steps_number,
+                analysis['argaze.GazeAnalysis.Basic.ScanPathAnalyzer'].step_fixation_durations_average,
+                analysis['argaze.GazeAnalysis.ExploreExploitRatio.ScanPathAnalyzer'].explore_exploit_ratio,
+                analysis['argaze.GazeAnalysis.KCoefficient.ScanPathAnalyzer'].K
+            )
+
+            self.write(data)
+
+class AOIScanPathAnalysisRecorder(UtilsFeatures.FileWriter):
+
+    def __init__(self, **kwargs):
+
+        super().__init__(**kwargs)
+
+        # Column headers written as the first line of the output CSV file
+        self.header = "Timestamp (ms)", "Path duration (ms)", "Steps number", "Fixation durations average (ms)", "Transition matrix probabilities", "Transition matrix density", "N-Grams count", "Stationary entropy", "Transition entropy"
+
+    def on_look(self, timestamp, layer, exception):
+        """Log AOI scan path metrics."""
+
+        if layer.is_analysis_available():
+
+            analysis = layer.analysis()
+
+            data = (
+                int(timestamp),
+                analysis['argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer'].path_duration,
+                analysis['argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer'].steps_number,
+                analysis['argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer'].step_fixation_durations_average,
+                analysis['argaze.GazeAnalysis.TransitionMatrix.AOIScanPathAnalyzer'].transition_matrix_probabilities,
+                analysis['argaze.GazeAnalysis.TransitionMatrix.AOIScanPathAnalyzer'].transition_matrix_density,
+                analysis['argaze.GazeAnalysis.NGram.AOIScanPathAnalyzer'].ngrams_count,
+                analysis['argaze.GazeAnalysis.Entropy.AOIScanPathAnalyzer'].stationary_entropy,
+                analysis['argaze.GazeAnalysis.Entropy.AOIScanPathAnalyzer'].transition_entropy
+            )
+
+            self.write(data)
+
+class VideoRecorder(UtilsFeatures.VideoWriter):
+
+    def __init__(self, **kwargs):
+
+        super().__init__(**kwargs)
+
+    def on_copy_background_into_scenes_frames(self, timestamp, frame, exception):
+        """Write frame image."""
+
+        logging.debug('VideoRecorder.on_copy_background_into_scenes_frames')
+
+        image = frame.image()
+
+        # Write video timing
+        cv2.rectangle(image, (0, 0), (550, 50), (63, 63, 63), -1)
+        cv2.putText(image, f'Time: {int(timestamp)} ms', (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 1, cv2.LINE_AA)
+
+        self.write(image)
+```
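+
+These recorder classes are instantiated from the *observers* entries of the [post_processing_pipeline.json](pipeline.md) file described in the previous chapter; for example, this extract from that file attaches the *AOIScanPathAnalysisRecorder* to the main layer and sets its output path:
+
+```json
+"observers": {
+    "observers.AOIScanPathAnalysisRecorder": {
+        "path": "aoi_metrics.csv"
+    }
+}
+```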
diff --git a/docs/use_cases/air_controller_gaze_study/pipeline.md b/docs/use_cases/air_controller_gaze_study/pipeline.md
new file mode 100644
index 0000000..69fdd2c
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/pipeline.md
@@ -0,0 +1,366 @@
+Post processing pipeline
+========================
+
+The pipeline processes camera image and gaze data to enable gaze mapping and gaze analysis.
+
+## post_processing_pipeline.json
+
+For this use case we need to detect ArUco markers to enable gaze mapping: **ArGaze** provides the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class to setup an [ArUco markers pipeline](../../user_guide/aruco_marker_pipeline/introduction.md).
+
+```json
+{
+ "argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": {
+ "name": "ATC_Study",
+ "size": [1920, 1080],
+ "sides_mask": 420,
+ "copy_background_into_scenes_frames": true,
+ "aruco_detector": {
+ "dictionary": "DICT_APRILTAG_16h5",
+ "optic_parameters": "optic_parameters.json",
+ "parameters": {
+ "adaptiveThreshConstant": 20,
+ "useAruco3Detection": true
+ }
+ },
+ "gaze_movement_identifier": {
+ "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
+ "deviation_max_threshold": 25,
+ "duration_min_threshold": 150
+ }
+ },
+ "layers": {
+ "Main" : {
+ "aoi_matcher": {
+ "argaze.GazeAnalysis.DeviationCircleCoverage.AOIMatcher": {
+ "coverage_threshold": 0.5
+ }
+ },
+ "aoi_scan_path" : {
+ "duration_max": 60000
+ },
+ "aoi_scan_path_analyzers": {
+ "argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer": {},
+ "argaze.GazeAnalysis.TransitionMatrix.AOIScanPathAnalyzer": {},
+ "argaze.GazeAnalysis.NGram.AOIScanPathAnalyzer": {
+ "n_min": 3,
+ "n_max": 5
+ },
+ "argaze.GazeAnalysis.Entropy.AOIScanPathAnalyzer": {}
+ },
+ "observers": {
+ "observers.AOIScanPathAnalysisRecorder": {
+ "path": "aoi_metrics.csv"
+ }
+ }
+ }
+ },
+ "image_parameters": {
+ "background_weight": 1,
+ "draw_gaze_positions": {
+ "color": [0, 255, 255],
+ "size": 4
+ },
+ "draw_detected_markers": {
+ "color": [0, 255, 0]
+ },
+ "draw_layers": {
+ "Main": {
+ "draw_aoi_scene": {
+ "draw_aoi": {
+ "color": [255, 255, 255],
+ "border_size": 1
+ }
+ },
+ "draw_aoi_matching": {
+ "update_looked_aoi": true,
+ "draw_looked_aoi": {
+ "color": [0, 255, 0],
+ "border_size": 2
+ },
+ "looked_aoi_name_color": [255, 255, 255],
+ "looked_aoi_name_offset": [0, -10]
+ }
+ }
+ }
+ },
+ "scenes": {
+ "Workspace": {
+ "aruco_markers_group": "workspace_markers.obj",
+ "layers": {
+ "Main" : {
+ "aoi_scene": "workspace_aois.obj"
+ }
+ },
+ "frames": {
+ "Sector_Screen": {
+ "size": [1080, 1017],
+ "gaze_movement_identifier": {
+ "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
+ "deviation_max_threshold": 25,
+ "duration_min_threshold": 150
+ }
+ },
+ "scan_path": {
+ "duration_max": 30000
+ },
+ "scan_path_analyzers": {
+ "argaze.GazeAnalysis.Basic.ScanPathAnalyzer": {},
+ "argaze.GazeAnalysis.ExploreExploitRatio.ScanPathAnalyzer": {
+ "short_fixation_duration_threshold": 0
+ },
+ "argaze.GazeAnalysis.KCoefficient.ScanPathAnalyzer": {}
+ },
+ "layers" :{
+ "Main": {
+ "aoi_scene": "sector_screen_aois.svg"
+ }
+ },
+ "heatmap": {
+ "size": [80, 60]
+ },
+ "image_parameters": {
+ "background_weight": 1,
+ "heatmap_weight": 0.5,
+ "draw_gaze_positions": {
+ "color": [0, 127, 127],
+ "size": 4
+ },
+ "draw_scan_path": {
+ "draw_fixations": {
+ "deviation_circle_color": [255, 255, 255],
+ "duration_border_color": [0, 127, 127],
+ "duration_factor": 1e-2
+ },
+ "draw_saccades": {
+ "line_color": [0, 255, 255]
+ },
+ "deepness": 0
+ },
+ "draw_layers": {
+ "Main": {
+ "draw_aoi_scene": {
+ "draw_aoi": {
+ "color": [255, 255, 255],
+ "border_size": 1
+ }
+ },
+ "draw_aoi_matching": {
+ "draw_matched_fixation": {
+ "deviation_circle_color": [255, 255, 255],
+ "draw_positions": {
+ "position_color": [0, 255, 0],
+ "line_color": [0, 0, 0]
+ }
+ },
+ "draw_looked_aoi": {
+ "color": [0, 255, 0],
+ "border_size": 2
+ },
+ "looked_aoi_name_color": [255, 255, 255],
+ "looked_aoi_name_offset": [10, 10]
+ }
+ }
+ }
+ },
+ "observers": {
+ "observers.ScanPathAnalysisRecorder": {
+ "path": "sector_screen.csv"
+ },
+ "observers.VideoRecorder": {
+ "path": "sector_screen.mp4",
+ "width": 1080,
+ "height": 1024,
+ "fps": 25
+ }
+ }
+ },
+ "Info_Screen": {
+ "size": [640, 1080],
+ "layers" : {
+ "Main": {
+ "aoi_scene": "info_screen_aois.svg"
+ }
+ }
+ }
+ }
+ }
+ },
+ "observers": {
+ "argaze.utils.UtilsFeatures.LookPerformanceRecorder": {
+ "path": "look_performance.csv"
+ },
+ "argaze.utils.UtilsFeatures.WatchPerformanceRecorder": {
+ "path": "watch_performance.csv"
+ }
+ }
+ }
+}
+```
+
+All the files mentioned above are described below.
+
+The *ScanPathAnalysisRecorder* and *AOIScanPathAnalysisRecorder* observer objects are defined in the [observers.py](observers.md) file, which is described in the next chapter.
+
+## optic_parameters.json
+
+This file defines the Tobii Pro Glasses 2 scene camera optic parameters, which have been calculated as explained in [the camera calibration chapter](../../user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md).
+
+```json
+{
+ "rms": 0.6688921504088245,
+ "dimensions": [
+ 1920,
+ 1080
+ ],
+ "K": [
+ [
+ 1135.6524381415752,
+ 0.0,
+ 956.0685325355497
+ ],
+ [
+ 0.0,
+ 1135.9272506869524,
+ 560.059099810324
+ ],
+ [
+ 0.0,
+ 0.0,
+ 1.0
+ ]
+ ],
+ "D": [
+ 0.01655492265003404,
+ 0.1985524264972037,
+ 0.002129965902489484,
+ -0.0019528582922179365,
+ -0.5792910353639452
+ ]
+}
+```
+
+## workspace_markers.obj
+
+This file defines where the ArUco markers are placed in the workspace geometry. Markers' positions have been edited in [Blender software](https://www.blender.org/) from a 3D model of the workspace built manually, then exported in OBJ format.
+
+```obj
+# Blender v3.0.1 OBJ File: 'workspace.blend'
+# www.blender.org
+o DICT_APRILTAG_16h5#1_Marker
+v -2.532475 48.421242 0.081627
+v 2.467094 48.355682 0.077174
+v 2.532476 53.352734 -0.081634
+v -2.467093 53.418293 -0.077182
+s off
+f 1 2 3 4
+o DICT_APRILTAG_16h5#6_Marker
+v 88.144676 23.084166 -0.070246
+v 93.144661 23.094980 -0.072225
+v 93.133904 28.092941 0.070232
+v 88.133919 28.082127 0.072211
+s off
+f 5 6 7 8
+o DICT_APRILTAG_16h5#2_Marker
+v -6.234516 27.087950 0.176944
+v -1.244015 27.005413 -0.119848
+v -1.164732 32.004459 -0.176936
+v -6.155232 32.086998 0.119855
+s off
+f 9 10 11 12
+o DICT_APRILTAG_16h5#3_Marker
+v -2.518053 -2.481743 -0.018721
+v 2.481756 -2.518108 0.005601
+v 2.518059 2.481743 0.018721
+v -2.481749 2.518108 -0.005601
+s off
+f 13 14 15 16
+o DICT_APRILTAG_16h5#5_Marker
+v 48.746418 48.319012 -0.015691
+v 53.746052 48.374046 0.009490
+v 53.690983 53.373741 0.015698
+v 48.691349 53.318699 -0.009490
+s off
+f 17 18 19 20
+o DICT_APRILTAG_16h5#4_Marker
+v 23.331947 -3.018721 5.481743
+v 28.331757 -2.994399 5.518108
+v 28.368059 -2.981279 0.518257
+v 23.368252 -3.005600 0.481892
+s off
+f 21 22 23 24
+
+```
+
+## workspace_aois.obj
+
+This file defines where the AOI are placed in the workspace geometry. AOI positions have been edited in [Blender software](https://www.blender.org/) from a 3D model of the workspace built manually, then exported in OBJ format.
+
+```obj
+# Blender v3.0.1 OBJ File: 'workspace.blend'
+# www.blender.org
+o Sector_Screen
+v 0.000000 1.008786 0.000000
+v 51.742416 1.008786 0.000000
+v 0.000000 52.998108 0.000000
+v 51.742416 52.998108 0.000000
+s off
+f 1 2 4 3
+o Info_Screen
+v 56.407101 0.000000 0.000000
+v 91.407104 0.000000 0.000000
+v 56.407101 52.499996 0.000000
+v 91.407104 52.499996 0.000000
+s off
+f 5 6 8 7
+
+```
+
+## sector_screen_aois.svg
+
+This file defines where the AOI are placed in the sector screen frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the sector screen, then exported in SVG format.
+
+```svg
+<svg >
+ <path id="Area_1" d="M317.844,198.526L507.431,426.837L306.453,595.073L110.442,355.41L317.844,198.526Z"/>
+ <path id="Area_2" d="M507.431,426.837L611.554,563.624L444.207,750.877L306.453,595.073L507.431,426.837Z"/>
+ <path id="Area_3" d="M395.175,1017L444.207,750.877L611.554,563.624L1080,954.462L1080,1017L395.175,1017Z"/>
+ <path id="Area_4" d="M611.554,563.624L756.528,293.236L562.239,198.526L471.45,382.082L611.554,563.624Z"/>
+ <path id="Area_5" d="M0,900.683L306.453,595.073L444.207,750.877L395.175,1017L0,1017L0,900.683Z"/>
+ <path id="Area_6" d="M471.45,381.938L557.227,207.284L354.832,65.656L237.257,104.014L471.45,381.938Z"/>
+ <path id="Area_7" d="M0,22.399L264.521,24.165L318.672,77.325L237.257,103.625L248.645,118.901L0,80.963L0,22.399Z"/>
+</svg>
+```
+
+## info_screen_aois.svg
+
+This file defines where the AOI are placed in the info screen frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the info screen, then exported in SVG format.
+
+```svg
+<svg >
+ <rect id="Strips" x="0" y="880" width="640" height="200"/>
+</svg>
+```
+
+## aoi_metrics.csv
+
+This file contains all the metrics recorded by the *AOIScanPathAnalysisRecorder* objects as defined in the [observers.py](observers.md) file.
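+
+Assuming a comma-separated layout (as the file extension suggests), its first line should be the header row declared by that observer in [observers.py](observers.md):
+
+```
+Timestamp (ms),Path duration (ms),Steps number,Fixation durations average (ms),Transition matrix probabilities,Transition matrix density,N-Grams count,Stationary entropy,Transition entropy
+```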
+
+## sector_screen.csv
+
+This file contains all the metrics recorded by the *ScanPathAnalysisRecorder* objects as defined in the [observers.py](observers.md) file.
+
+## sector_screen.mp4
+
+The video file is a recording of the sector screen frame image.
+
+## look_performance.csv
+
+This file contains the logs of *ArUcoCamera.look* method execution info. It is created in the folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a MacBookPro (2.3GHz Intel Core i9 8 cores), the *look* method execution time is ~1ms and it is called ~51 times per second.
+
+## watch_performance.csv
+
+This file contains the logs of *ArUcoCamera.watch* method execution info. It is created in the folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a MacBookPro (2.3GHz Intel Core i9 8 cores) without CUDA acceleration, the *watch* method execution time is ~52ms and it is called more than 12 times per second.
diff --git a/docs/use_cases/gaze_based_candidate_selection/context.md b/docs/use_cases/gaze_based_candidate_selection/context.md
new file mode 100644
index 0000000..96547ea
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/context.md
@@ -0,0 +1,7 @@
+Data playback context
+======================
+
+The context handles incoming eye tracker data before passing it to a processing pipeline.
+
+## data_playback_context.json
+
diff --git a/docs/use_cases/gaze_based_candidate_selection/introduction.md b/docs/use_cases/gaze_based_candidate_selection/introduction.md
new file mode 100644
index 0000000..da8d6f9
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/introduction.md
@@ -0,0 +1,12 @@
+Post-processing screen-based eye tracker data
+=================================================
+
+**ArGaze** enabled ...
+
+The following use case has integrated ...
+
+## Background
+
+## Environment
+
+## Setup
diff --git a/docs/use_cases/gaze_based_candidate_selection/observers.md b/docs/use_cases/gaze_based_candidate_selection/observers.md
new file mode 100644
index 0000000..a1f1fce
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/observers.md
@@ -0,0 +1,6 @@
+Metrics and video recording
+===========================
+
+Observers are attached to pipeline steps to be notified when a method is called.
+
+## observers.py
diff --git a/docs/use_cases/gaze_based_candidate_selection/pipeline.md b/docs/use_cases/gaze_based_candidate_selection/pipeline.md
new file mode 100644
index 0000000..6fae01a
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/pipeline.md
@@ -0,0 +1,6 @@
+Post processing pipeline
+========================
+
+The pipeline processes gaze data to enable gaze analysis.
+
+## post_processing_pipeline.json
diff --git a/docs/use_cases/pilot_gaze_monitoring/context.md b/docs/use_cases/pilot_gaze_tracking/context.md
index 417ed13..8839cb6 100644
--- a/docs/use_cases/pilot_gaze_monitoring/context.md
+++ b/docs/use_cases/pilot_gaze_tracking/context.md
@@ -1,12 +1,11 @@
-Live streaming context
-======================
+Data capture context
+====================
-The context handles pipeline inputs.
+The context handles incoming eye tracker data before passing it to a processing pipeline.
## live_streaming_context.json
-For this use case we need to connect to a Tobii Pro Glasses 2 device.
-**ArGaze** provides a context class to live stream data from this device.
+For this use case we need to connect to a Tobii Pro Glasses 2 device: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to capture data from this device.
While *address*, *project*, *participant* and *configuration* entries are specific to the [TobiiProGlasses2.LiveStream](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.LiveStream) class, *name*, *pipeline* and *observers* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
@@ -37,6 +36,6 @@ While *address*, *project*, *participant* and *configuration* entries are specif
}
```
-The [live_processing_pipeline.json](pipeline.md) file mentioned aboved is described in the next chapter.
+The [live_processing_pipeline.json](pipeline.md) file mentioned above is described in the next chapter.
-The observers objects are defined into the [observers.py](observers.md) file that is described in a next chapter.
\ No newline at end of file
+The *IvyBus* observer object is defined in the [observers.py](observers.md) file, which is described in the next chapter.
\ No newline at end of file
diff --git a/docs/use_cases/pilot_gaze_monitoring/introduction.md b/docs/use_cases/pilot_gaze_tracking/introduction.md
index 453a443..7e88c69 100644
--- a/docs/use_cases/pilot_gaze_monitoring/introduction.md
+++ b/docs/use_cases/pilot_gaze_tracking/introduction.md
@@ -30,17 +30,18 @@ Finally, fixation events were sent in real-time through [Ivy bus middleware](htt
## Setup
-The setup to integrate **ArGaze** to the experiment is defined by 3 main files:
+The setup to integrate **ArGaze** into the experiment is defined by 3 main files detailed in the next chapters:
* The context file that captures gaze data and scene camera video: [live_streaming_context.json](context.md)
* The pipeline file that processes gaze data and scene camera video: [live_processing_pipeline.json](pipeline.md)
* The observers file that send fixation events via Ivy bus middleware: [observers.py](observers.md)
-As any **ArGaze** setup, it is loaded by executing the following command:
+As any **ArGaze** setup, it is loaded by executing the [*load* command](../../user_guide/utils/main_commands.md):
```shell
python -m argaze load live_streaming_context.json
```
-## Performance
+This command opens a GUI window that allows starting gaze calibration, launching the recording and monitoring gaze mapping. Another window is also opened to display gaze mapping onto the PFD screen.
+![ArGaze load GUI for Haiku](../../img/argaze_load_gui_haiku.png)
diff --git a/docs/use_cases/pilot_gaze_monitoring/observers.md b/docs/use_cases/pilot_gaze_tracking/observers.md
index 2e3f394..5f5bc78 100644
--- a/docs/use_cases/pilot_gaze_monitoring/observers.md
+++ b/docs/use_cases/pilot_gaze_tracking/observers.md
@@ -1,8 +1,12 @@
Fixation events sending
=======================
+Observers are attached to pipeline steps to be notified when a method is called.
+
## observers.py
+For this use case we need to enable [Ivy bus communication](https://gitlab.com/ivybus/ivy-python/) to log ArUco detection results (on *ArUcoCamera.on_watch* call) and fixation identification with AOI matching (on *ArUcoCamera.on_look* call).
+
```python
import logging
diff --git a/docs/use_cases/pilot_gaze_monitoring/pipeline.md b/docs/use_cases/pilot_gaze_tracking/pipeline.md
index 8f8dad0..65fccc3 100644
--- a/docs/use_cases/pilot_gaze_monitoring/pipeline.md
+++ b/docs/use_cases/pilot_gaze_tracking/pipeline.md
@@ -5,8 +5,7 @@ The pipeline processes camera image and gaze data to enable gaze mapping and gaz
## live_processing_pipeline.json
-For this use case we need to detect ArUco markers to enable gaze mapping.
-**ArGaze** provides the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class to setup an [ArUco markers pipeline](../../user_guide/aruco_marker_pipeline/introduction.md).
+For this use case we need to detect ArUco markers to enable gaze mapping: **ArGaze** provides the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class to setup an [ArUco markers pipeline](../../user_guide/aruco_marker_pipeline/introduction.md).
```json
{
@@ -37,12 +36,6 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
"PIC_PFD": {
"size": [960, 1080],
"background": "PIC_PFD.png",
- "gaze_movement_identifier": {
- "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
- "deviation_max_threshold": 50,
- "duration_min_threshold": 150
- }
- },
"layers": {
"Main": {
"aoi_scene": "PIC_PFD.svg"
@@ -56,9 +49,7 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
}
}
}
- },
- "angle_tolerance": 15.0,
- "distance_tolerance": 10.0
+ }
}
},
"layers": {
@@ -119,18 +110,26 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
}
},
"observers": {
- "observers.ArUcoCameraLogger": {}
+ "observers.ArUcoCameraLogger": {},
+ "argaze.utils.UtilsFeatures.LookPerformanceRecorder": {
+ "path": "_export/look_performance.csv"
+ },
+ "argaze.utils.UtilsFeatures.WatchPerformanceRecorder": {
+ "path": "_export/watch_performance.csv"
+ }
}
}
}
```
-All the files mentioned aboved are described below.
+All the files mentioned above are described below.
-The observers objects are defined into the [observers.py](observers.md) file that is described in the next chapter.
+The *ArUcoCameraLogger* observer object is defined in the [observers.py](observers.md) file, which is described in the next chapter.
## optic_parameters.json
+This file defines the Tobii Pro Glasses 2 scene camera optic parameters, which have been calculated as explained in [the camera calibration chapter](../../user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md).
+
```json
{
"rms": 0.6688921504088245,
@@ -167,15 +166,19 @@ The observers objects are defined into the [observers.py](observers.md) file tha
## detector_parameters.json
+This file defines the ArUco detector parameters, as explained in [the detection improvement chapter](../../user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md).
+
```json
{
"adaptiveThreshConstant": 7,
- "useAruco3Detection": 1
+ "useAruco3Detection": true
}
```
## aruco_scene.obj
+This file defines where the ArUco markers are placed in the cockpit geometry. Markers' positions have been edited in [Blender software](https://www.blender.org/) from a 3D scan of the cockpit, then exported in OBJ format.
+
```obj
# Blender v3.0.1 OBJ File: 'scene.blend'
# www.blender.org
@@ -239,6 +242,8 @@ f 29 30 32 31
## Cockpit.obj
+This file defines where the AOI are placed in the cockpit geometry. AOI positions have been edited in [Blender software](https://www.blender.org/) from a 3D scan of the cockpit, then exported in OBJ format.
+
```obj
# Blender v3.0.1 OBJ File: 'scene.blend'
# www.blender.org
@@ -274,10 +279,14 @@ f 13 14 16 15
## PIC_PFD.png
+This file is a screenshot of the PFD screen used to monitor where the gaze is projected after gaze mapping processing.
+
![PFD frame background](../../img/haiku_PIC_PFD_background.png)
## PIC_PFD.svg
+This file defines where the AOI are placed in the PFD frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the PFD screen, then exported in SVG format.
+
```svg
<svg>
<rect id="PIC_PFD_Air_Speed" x="93.228" y="193.217" width="135.445" height="571.812"/>
@@ -288,3 +297,15 @@ f 13 14 16 15
<rect id="PIC_PFD_Vertical_Speed" x="819.913" y="193.217" width="85.185" height="609.09"/>
</svg>
```
+
+## look_performance.csv
+
+This file contains the logs of *ArUcoCamera.look* method execution info. It is saved into an *_export* folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a Jetson Xavier computer, the *look* method execution time is 5.7ms and it is called ~100 times per second.
+
+## watch_performance.csv
+
+This file contains the logs of *ArUcoCamera.watch* method execution info. It is saved into an *_export* folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a Jetson Xavier computer with CUDA acceleration, the *watch* method execution time is 46.5ms and it is called more than 12 times per second.
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
index 975f278..311916b 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
@@ -5,7 +5,7 @@ As explain in the [OpenCV ArUco documentation](https://docs.opencv.org/4.x/d1/dc
## Load ArUcoDetector parameters
-[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded with a dedicated JSON entry.
Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file with ArUco detector parameters:
@@ -18,7 +18,7 @@ Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoM
"dictionary": "DICT_APRILTAG_16h5",
"parameters": {
"adaptiveThreshConstant": 10,
- "useAruco3Detection": 1
+ "useAruco3Detection": true
}
},
...
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
index 625f257..e9ce740 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
@@ -134,7 +134,7 @@ Below, an optic_parameters JSON file example:
## Load and display optic parameters
-[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file where optic parameters are loaded and displayed:
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
index a9d66e9..f258e04 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
@@ -150,7 +150,7 @@ Particularly, timestamped gaze positions can be passed one by one to the [ArUcoC
## Setup ArUcoCamera image parameters
-Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.
```python
# Assuming ArUcoCamera is loaded
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
index 46422b8..78a513a 100644
--- a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
@@ -1,7 +1,7 @@
Describe 3D AOI
===============
-Now that the [scene pose is estimated](aruco_marker_description.md) thanks to ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described into the same 3D referential.
+Now that the [scene pose is estimated](aruco_marker_description.md) based on the ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described in the same 3D referential.
In the example scene, the two screens—the control panel and the window—are considered to be areas of interest.
diff --git a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
index c2ee1b9..56846e2 100644
--- a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
@@ -1,7 +1,7 @@
Edit and execute pipeline
=========================
-Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected thanks to [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.
+Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected by the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.
As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class also benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
index c163696..a543bc7 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
@@ -3,27 +3,27 @@ Define a context class
The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic base class interface to handle incoming eye tracker data before to pass them to a processing pipeline according to [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers).
-The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides playback features to stop or pause processings, performance assement features to measure how many times processings are called and the time spent by the process.
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides control features to stop or pause working threads, and performance assessment features to measure how many times processing is called and the time spent by the process.
-Besides, there is also a [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to write specific device calibration process.
+Besides, there is also a [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to write specific device calibration process.
-In the same way, there is a [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines abstract *previous* and *next* playback methods to move into record's frames and also defines *duration* and *progression* properties to get information about a record length and processing advancement.
+In the same way, there is a [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines *duration* and *progression* properties to get information about a record length and playback advancement.
-Finally, a specific eye tracking context can be defined into a Python file by writing a class that inherits either from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) or [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class.
+Finally, a specific eye tracking context can be defined into a Python file by writing a class that inherits either from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) or [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class.
-## Write live processing context
+## Write data capture context
-Here is a live processing context example that processes gaze positions and camera images in two separated threads:
+Here is a data capture context example that processes gaze positions and camera images in two separate threads:
```python
from argaze import ArFeatures, DataFeatures
-class LiveProcessingExample(ArFeatures.LiveProcessingContext):
+class DataCaptureExample(ArFeatures.DataCaptureContext):
@DataFeatures.PipelineStepInit
def __init__(self, **kwargs):
- # Init LiveProcessingContext class
+ # Init DataCaptureContext class
super().__init__()
# Init private attribute
@@ -45,23 +45,23 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
# Start context according any specific parameter
... self.parameter
- # Start a gaze position processing thread
- self.__gaze_thread = threading.Thread(target = self.__gaze_position_processing)
+ # Start a gaze position capture thread
+ self.__gaze_thread = threading.Thread(target = self.__gaze_position_capture)
self.__gaze_thread.start()
- # Start a camera image processing thread if applicable
- self.__camera_thread = threading.Thread(target = self.__camera_image_processing)
+ # Start a camera image capture thread if applicable
+ self.__camera_thread = threading.Thread(target = self.__camera_image_capture)
self.__camera_thread.start()
return self
- def __gaze_position_processing(self):
- """Process gaze position."""
+ def __gaze_position_capture(self):
+ """Capture gaze position."""
- # Processing loop
+ # Capture loop
while self.is_running():
- # Pause processing
+ # Pause capture
if not self.is_paused():
# Assuming that timestamp, x and y values are available
@@ -73,13 +73,13 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
# Wait some time eventually
...
- def __camera_image_processing(self):
- """Process camera image if applicable."""
+ def __camera_image_capture(self):
+ """Capture camera image if applicable."""
- # Processing loop
+ # Capture loop
while self.is_running():
- # Pause processing
+ # Pause capture
if not self.is_paused():
# Assuming that timestamp, camera_image are available
@@ -95,10 +95,10 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
def __exit__(self, exception_type, exception_value, exception_traceback):
"""End context."""
- # Stop processing loops
+ # Stop capture loops
self.stop()
- # Stop processing threads
+ # Stop capture threads
threading.Thread.join(self.__gaze_thread)
threading.Thread.join(self.__camera_thread)
@@ -108,19 +108,19 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
...
```
-## Write post processing context
+## Write data playback context
-Here is a post processing context example that processes gaze positions and camera images in a same thread:
+Here is a data playback context example that reads gaze positions and camera images in the same thread:
```python
from argaze import ArFeatures, DataFeatures
-class PostProcessingExample(ArFeatures.PostProcessingContext):
+class DataPlaybackExample(ArFeatures.DataPlaybackContext):
@DataFeatures.PipelineStepInit
def __init__(self, **kwargs):
- # Init LiveProcessingContext class
+ # Init DataPlaybackContext class
super().__init__()
# Init private attribute
@@ -142,19 +142,19 @@ class PostProcessingExample(ArFeatures.PostProcessingContext):
# Start context according any specific parameter
... self.parameter
- # Start a reading data thread
- self.__read_thread = threading.Thread(target = self.__data_reading)
- self.__read_thread.start()
+ # Start a data playback thread
+ self.__data_thread = threading.Thread(target = self.__data_playback)
+ self.__data_thread.start()
return self
- def __data_reading(self):
- """Process gaze position and camera image if applicable."""
+ def __data_playback(self):
+ """Playback gaze position and camera image if applicable."""
- # Processing loop
+ # Playback loop
while self.is_running():
- # Pause processing
+ # Pause playback
if not self.is_paused():
# Assuming that timestamp, camera_image are available
@@ -176,18 +176,20 @@ class PostProcessingExample(ArFeatures.PostProcessingContext):
def __exit__(self, exception_type, exception_value, exception_traceback):
"""End context."""
- # Stop processing loops
+ # Stop playback loop
self.stop()
- # Stop processing threads
- threading.Thread.join(self.__read_thread)
+ # Stop playback threads
+ threading.Thread.join(self.__data_thread)
- def previous(self):
- """Go to previous camera image frame."""
+ @property
+ def duration(self) -> int|float:
+ """Get data duration."""
...
- def next(self):
- """Go to next camera image frame."""
+ @property
+ def progression(self) -> float:
+ """Get data playback progression between 0 and 1."""
...
```
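+
+As with any [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), such a class is meant to be used as a Python context manager. Below is a minimal usage sketch, assuming the *DataPlaybackExample* class above is saved in a local *my_context.py* file and that its specific parameters are left out:
+
+```python
+import time
+
+from my_context import DataPlaybackExample
+
+# Entering the context starts the data playback thread and returns the context itself
+with DataPlaybackExample() as context:
+
+    # Monitor playback advancement while the playback thread is running
+    while context.is_running():
+
+        print(f'{context.progression:.1%} of duration {context.duration} played back')
+
+        time.sleep(1)
+```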
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
index 8753eb6..d8eb389 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
@@ -68,12 +68,12 @@ from argaze import ArFeatures
# Check context type
- # Live processing case: calibration method is available
- if issubclass(type(context), ArFeatures.LiveProcessingContext):
+ # Data capture case: calibration method is available
+ if issubclass(type(context), ArFeatures.DataCaptureContext):
...
- # Post processing case: more playback methods are available
- if issubclass(type(context), ArFeatures.PostProcessingContext):
+ # Data playback case: playback methods are available
+ if issubclass(type(context), ArFeatures.DataPlaybackContext):
...
# Check pipeline type
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
index 340dbaf..959d955 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
@@ -28,8 +28,8 @@ for timestamped_gaze_position in ts_gaze_positions:
## Edit timestamped gaze positions from live stream
-Real-time gaze positions can be edited thanks to the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
-Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited thanks to the Python [time package](https://docs.python.org/3/library/time.html).
+Real-time gaze positions can be edited directly using the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
+Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited using the Python [time package](https://docs.python.org/3/library/time.html).
```python
from argaze import GazeFeatures
diff --git a/docs/user_guide/eye_tracking_context/configuration_and_execution.md b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
index f13c6a2..3deeb57 100644
--- a/docs/user_guide/eye_tracking_context/configuration_and_execution.md
+++ b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
@@ -3,9 +3,12 @@ Edit and execute context
The [utils.contexts module](../../argaze.md/#argaze.utils.contexts) provides ready-made contexts like:
-* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) live stream and post processing contexts,
-* [Pupil Labs](context_modules/pupil_labs.md) live stream context,
-* [OpenCV](context_modules/opencv.md) window cursor position and movie processing,
+* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) data capture and data playback contexts,
+* [Tobii Pro Glasses 3](context_modules/tobii_pro_glasses_3.md) data capture context,
+* [Pupil Labs Invisible](context_modules/pupil_labs_invisible.md) data capture context,
+* [Pupil Labs Neon](context_modules/pupil_labs_neon.md) data capture context,
+* [File](context_modules/file.md) data playback contexts,
+* [OpenCV](context_modules/opencv.md) window cursor position capture and movie playback,
* [Random](context_modules/random.md) gaze position generator.
## Edit JSON configuration
diff --git a/docs/user_guide/eye_tracking_context/context_modules/file.md b/docs/user_guide/eye_tracking_context/context_modules/file.md
new file mode 100644
index 0000000..5b5c8e9
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/file.md
@@ -0,0 +1,75 @@
+File
+======
+
+ArGaze provides ready-made contexts to read data from various file formats.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## CSV
+
+::: argaze.utils.contexts.File.CSV
+
+### JSON sample: split case
+
+To use when gaze position coordinates are split into two separate columns.
+
+```json
+{
+ "argaze.utils.contexts.File.CSV": {
+ "name": "CSV file data playback",
+ "path": "./src/argaze/utils/demo/gaze_positions_splitted.csv",
+ "timestamp_column": "Timestamp (ms)",
+ "x_column": "Gaze Position X (px)",
+ "y_column": "Gaze Position Y (px)",
+ "pipeline": ...
+ }
+}
+```
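+
+For illustration, a matching CSV file could look like this (hypothetical values, one gaze position per row):
+
+```
+Timestamp (ms),Gaze Position X (px),Gaze Position Y (px)
+0,960,540
+20,962,538
+40,965,542
+```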
+
+### JSON sample: joined case
+
+To use when gaze position coordinates are joined as a list in a single column.
+
+```json
+{
+ "argaze.utils.contexts.File.CSV" : {
+ "name": "CSV file data playback",
+ "path": "./src/argaze/utils/demo/gaze_positions_xy_joined.csv",
+ "timestamp_column": "Timestamp (ms)",
+ "xy_column": "Gaze Position (px)",
+ "pipeline": ...
+ }
+}
+```
+
+### JSON sample: left and right eyes
+
+To use when gaze position coordinates and validity are given for each eye in six separate columns.
+
+```json
+{
+ "argaze.utils.contexts.File.CSV": {
+ "name": "CSV file data playback",
+ "path": "./src/argaze/utils/demo/gaze_positions_left_right_eyes.csv",
+ "timestamp_column": "Timestamp (ms)",
+ "left_eye_x_column": "Left eye X",
+ "left_eye_y_column": "Left eye Y",
+ "left_eye_validity_column": "Left eye validity",
+ "right_eye_x_column": "Right eye X",
+ "right_eye_y_column": "Right eye Y",
+ "right_eye_validity_column": "Right eye validity",
+ "rescale_to_pipeline_size": true,
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/context_modules/opencv.md b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
index 7244cd4..7d73a03 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/opencv.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
@@ -39,9 +39,25 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg
```json
{
"argaze.utils.contexts.OpenCV.Movie": {
- "name": "Open CV cursor",
+ "name": "Open CV movie",
"path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
"pipeline": ...
}
}
```
+
+## Camera
+
+::: argaze.utils.contexts.OpenCV.Camera
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Camera": {
+ "name": "Open CV camera",
+ "identifier": 0,
+ "pipeline": ...
+ }
+}
+```
\ No newline at end of file
diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md
new file mode 100644
index 0000000..1f4a94f
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md
@@ -0,0 +1,32 @@
+Pupil Labs Invisible
+==========
+
+ArGaze provides a ready-made context to work with the Pupil Labs Invisible device.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.PupilLabsInvisible.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.PupilLabsInvisible.LiveStream": {
+ "name": "Pupil Labs Invisible live stream",
+ "project": "my_experiment",
+ "pipeline": ...
+ }
+}
+```
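+
+As a usage sketch, once the JSON sample above has been completed and saved as a context configuration file, it can be executed with the *load* command, here assuming the demonstration file shipped with ArGaze:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/pupillabs_invisible_live_stream_context.json
+```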
diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_neon.md
index d2ec336..535f5d5 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_neon.md
@@ -1,7 +1,7 @@
-Pupil Labs
+Pupil Labs Neon
==========
-ArGaze provides a ready-made context to work with Pupil Labs devices.
+ArGaze provides a ready-made context to work with the Pupil Labs Neon device.
To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
Notice that the *pipeline* entry is mandatory.
@@ -17,14 +17,14 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg
## Live Stream
-::: argaze.utils.contexts.PupilLabs.LiveStream
+::: argaze.utils.contexts.PupilLabsNeon.LiveStream
### JSON sample
```json
{
- "argaze.utils.contexts.PupilLabs.LiveStream": {
- "name": "Pupil Labs live stream",
+ "argaze.utils.contexts.PupilLabsNeon.LiveStream": {
+ "name": "Pupil Labs Neon live stream",
"project": "my_experiment",
"pipeline": ...
}
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
index fba6931..6ff44bd 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
@@ -42,16 +42,16 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg
}
```
-## Post Processing
+## Segment Playback
-::: argaze.utils.contexts.TobiiProGlasses2.PostProcessing
+::: argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback
### JSON sample
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "./src/argaze/utils/demo/tobii_record/segments/1",
"pipeline": ...
}
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md
new file mode 100644
index 0000000..3d37fcc
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md
@@ -0,0 +1,32 @@
+Tobii Pro Glasses 3
+===================
+
+ArGaze provides a ready-made context to work with Tobii Pro Glasses 3 devices.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.TobiiProGlasses3.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses3.LiveStream": {
+ "name": "Tobii Pro Glasses 3 live stream",
+ "pipeline": ...
+ }
+}
+```
+
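+Once completed, such a configuration can be executed with the *load* command; as a sketch, assuming the demonstration file shipped with ArGaze:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/tobii_g3_live_stream_context.json
+```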
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
index 4970dba..effee18 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
@@ -7,7 +7,7 @@ The calibration algorithm can be selected by instantiating a particular [GazePos
## Enable ArFrame calibration
-Gaze position calibration can be enabled thanks to a dedicated JSON entry.
+Gaze position calibration can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where a [Linear Regression](../../../argaze.md/#argaze.GazeAnalysis.LinearRegression) calibration algorithm is selected with no parameters:
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
index 264e866..843274a 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
@@ -158,7 +158,7 @@ Last [GazeMovement](../../../argaze.md/#argaze.GazeFeatures.GazeMovement) identi
This could also be the current gaze movement if [ArFrame.filter_in_progress_identification](../../../argaze.md/#argaze.ArFeatures.ArFrame) attribute is false.
In that case, the last gaze movement *finished* flag is false.
-Then, the last gaze movement type can be tested thanks to [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
+Then, the last gaze movement type can be tested with [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
### *ar_frame.is_analysis_available()*
@@ -182,7 +182,7 @@ This an iterator to access to all aoi scan path analysis. Notice that each aoi s
## Setup ArFrame image parameters
-[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.
```python
# Assuming ArFrame is loaded
diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
index 2b64091..c2a6ac3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
@@ -100,6 +100,11 @@ The second [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step a
Once gaze movements are matched to AOI, they are automatically appended to the AOIScanPath if required.
+!!! warning "GazeFeatures.OutsideAOI"
+ When a fixation does not look at any AOI, a step associated with a dedicated AOI called [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) is added. As long as fixations keep landing outside every AOI, all fixations/saccades are stored in this step, so that further analyses take these extra [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) steps into account.
+
+ This is particularly important when computing transition matrices: without these steps, the matrix could contain an arc between two AOIs even though the gaze actually fixated outside any AOI in between.
+
The [AOIScanPath.duration_max](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.duration_max) attribute is the duration from which older AOI scan steps are removed each time new AOI scan steps are added.
!!! note "Optional"
diff --git a/docs/user_guide/gaze_analysis_pipeline/background.md b/docs/user_guide/gaze_analysis_pipeline/background.md
index 900d151..11285e3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/background.md
+++ b/docs/user_guide/gaze_analysis_pipeline/background.md
@@ -7,7 +7,7 @@ Background is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)
## Load and display ArFrame background
-[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where a background picture is loaded and displayed:
@@ -28,7 +28,7 @@ Here is an extract from the JSON ArFrame configuration file where a background p
```
!!! note
- As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+ As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/heatmap.md b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
index 2057dbe..77b2be0 100644
--- a/docs/user_guide/gaze_analysis_pipeline/heatmap.md
+++ b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
@@ -7,7 +7,7 @@ Heatmap is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pip
## Enable and display ArFrame heatmap
-[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where heatmap is enabled and displayed:
@@ -31,7 +31,7 @@ Here is an extract from the JSON ArFrame configuration file where heatmap is ena
}
```
!!! note
- [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+ [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/visualization.md b/docs/user_guide/gaze_analysis_pipeline/visualization.md
index 32395c3..08b5465 100644
--- a/docs/user_guide/gaze_analysis_pipeline/visualization.md
+++ b/docs/user_guide/gaze_analysis_pipeline/visualization.md
@@ -7,7 +7,7 @@ Visualization is not a pipeline step, but each [ArFrame](../../argaze.md/#argaze
## Add image parameters to ArFrame JSON configuration
-[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a dedicated JSON entry.
+[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file with a sample where image parameters are added:
diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md
index dd1b8e0..c7560eb 100644
--- a/docs/user_guide/utils/demonstrations_scripts.md
+++ b/docs/user_guide/utils/demonstrations_scripts.md
@@ -9,30 +9,70 @@ Collection of command-line scripts for demonstration purpose.
!!! note
*Use -h option to get command arguments documentation.*
+!!! note
+ Each demonstration outputs metrics into the *_export/records* folder.
+
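+For instance, assuming a demonstration has just been run from the current working directory, the exported record files can be listed with:
+
+```shell
+ls ./_export/records
+```
+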
## Random context
-Load **random_context.json** file to process random gaze positions:
+Load **random_context.json** file to generate random gaze positions:
```shell
python -m argaze load ./src/argaze/utils/demo/random_context.json
```
-## OpenCV cursor context
+## CSV file context
+
+Load **csv_file_context_xy_joined.json** file to analyze gaze positions from a CSV file where gaze position coordinates are joined as a list in a single column:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/csv_file_context_xy_joined.json
+```
+
+Load **csv_file_context_xy_splitted.json** file to analyze gaze positions from a CSV file where gaze position coordinates are split into two separate columns:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/csv_file_context_xy_splitted.json
+```
+
+Load **csv_file_context_left_right_eyes.json** file to analyze gaze positions from a CSV file where gaze position coordinates and validity are given for each eye in six separate columns:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/csv_file_context_left_right_eyes.json
+```
+
+!!! note
+ The left/right eyes context allows parsing Tobii Spectrum data, for example.
+
+## OpenCV
-Load **opencv_cursor_context.json** file to process cursor pointer positions over OpenCV window:
+### Cursor context
+
+Load **opencv_cursor_context.json** file to capture cursor pointer positions over an OpenCV window:
```shell
python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json
```
-## OpenCV movie context
+### Movie context
-Load **opencv_movie_context.json** file to process movie pictures and also cursor pointer positions over OpenCV window:
+Load **opencv_movie_context.json** file to play back a movie and capture cursor pointer positions over an OpenCV window:
```shell
python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
```
+### Camera context
+
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution and to set a consistent *sides_mask* value.
+
+Edit **opencv_camera_context.json** file to select the camera device identifier (default is 0).
+
+Then, load **opencv_camera_context.json** file to capture camera images and cursor pointer positions over an OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_camera_context.json
+```
+
## Tobii Pro Glasses 2
### Live stream context
@@ -40,7 +80,9 @@ python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
!!! note
This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
-Edit **tobii_live_stream_context.json** file as to select exisiting IP *address*, *project* or *participant* names and setup Tobii *configuration* parameters:
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1920, 1080]) and to set the *sides_mask* value to 420.
+
+Edit **tobii_g2_live_stream_context.json** file to select existing IP *address*, *project* or *participant* names and to set up Tobii *configuration* parameters:
```json
{
@@ -63,35 +105,50 @@ Edit **tobii_live_stream_context.json** file as to select exisiting IP *address*
}
```
-Then, load **tobii_live_stream_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load **tobii_g2_live_stream_context.json** file to detect ArUco markers in the camera image and project gaze positions into AOIs:
```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_live_stream_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_g2_live_stream_context.json
```
-### Post-processing context
+### Segment playback context
-!!! note
- This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1920, 1080]) and to set the *sides_mask* value to 420.
-Edit **tobii_post_processing_context.json** file to select an existing Tobii *segment* folder:
+Edit **tobii_g2_segment_playback_context.json** file to select an existing Tobii *segment* folder:
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "record/segments/1",
"pipeline": "aruco_markers_pipeline.json"
}
}
```
-Then, load **tobii_post_processing_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load **tobii_g2_segment_playback_context.json** file to detect ArUco markers in the camera image and project gaze positions into AOIs:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/tobii_g2_segment_playback_context.json
+```
+
+## Tobii Pro Glasses 3
+
+### Live stream context
+
+!!! note
+ This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1920, 1080]) and to set the *sides_mask* value to 420.
+
+Load **tobii_g3_live_stream_context.json** file to detect ArUco markers in the camera image and project gaze positions into AOIs:
```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_g3_live_stream_context.json
```
+
## Pupil Invisible
### Live stream context
@@ -99,8 +156,25 @@ python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
!!! note
This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
-Load **pupillabs_live_stream_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1088, 1080]) and to set the *sides_mask* value to 4.
+
+Load **pupillabs_invisible_live_stream_context.json** file to detect ArUco markers in the camera image and project gaze positions into AOIs:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/pupillabs_invisible_live_stream_context.json
+```
+
+## Pupil Neon
+
+### Live stream context
+
+!!! note
+ This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1600, 1200]) and to set the *sides_mask* value to 200.
+
+Load **pupillabs_neon_live_stream_context.json** file to detect ArUco markers in the camera image and project gaze positions into AOIs:
```shell
-python -m argaze load ./src/argaze/utils/demo/pupillabs_live_stream_context.json
+python -m argaze load ./src/argaze/utils/demo/pupillabs_neon_live_stream_context.json
```
diff --git a/docs/user_guide/utils/estimate_aruco_markers_pose.md b/docs/user_guide/utils/estimate_aruco_markers_pose.md
index 3d34972..55bd232 100644
--- a/docs/user_guide/utils/estimate_aruco_markers_pose.md
+++ b/docs/user_guide/utils/estimate_aruco_markers_pose.md
@@ -15,7 +15,7 @@ Firstly, edit **utils/estimate_markers_pose/context.json** file as to select a m
}
```
-Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUco camera *size*, ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
+Secondly, edit **utils/estimate_markers_pose/pipeline.json** file to set up ArUco camera *size*, ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
```json
{
@@ -27,7 +27,7 @@ Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUc
"pose_size": 4,
"pose_ids": [],
"parameters": {
- "useAruco3Detection": 1
+ "useAruco3Detection": true
},
"observers":{
"observers.ArUcoMarkersPoseRecorder": {
diff --git a/docs/user_guide/utils/main_commands.md b/docs/user_guide/utils/main_commands.md
index 4dd3434..9227d8d 100644
--- a/docs/user_guide/utils/main_commands.md
+++ b/docs/user_guide/utils/main_commands.md
@@ -35,13 +35,13 @@ For example:
echo "print(context)" > /tmp/argaze
```
-* Pause context processing:
+* Pause context:
```shell
echo "context.pause()" > /tmp/argaze
```
-* Resume context processing:
+* Resume context:
```shell
echo "context.resume()" > /tmp/argaze
@@ -54,3 +54,6 @@ Modify the content of JSON CONFIGURATION file with another JSON CHANGES file the
```shell
python -m argaze edit CONFIGURATION CHANGES OUTPUT
```
+
+!!! note
+ Use a *null* value to remove an entry.
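+
+As a minimal sketch with hypothetical file names, a CHANGES file mapping an entry to *null* removes that entry from the configuration:
+
+```shell
+# Create a hypothetical CHANGES file that removes an example "heatmap" entry.
+echo '{"heatmap": null}' > changes.json
+
+# Apply the changes to a configuration file and write the result to a new file.
+python -m argaze edit configuration.json changes.json output.json
+```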