Diffstat (limited to 'docs/use_cases')
-rw-r--r--  docs/use_cases/air_controller_gaze_study/context.md  22
-rw-r--r--  docs/use_cases/air_controller_gaze_study/introduction.md  48
-rw-r--r--  docs/use_cases/air_controller_gaze_study/observers.md  90
-rw-r--r--  docs/use_cases/air_controller_gaze_study/pipeline.md  366
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/context.md  7
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/introduction.md  12
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/observers.md  6
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/pipeline.md  6
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/context.md (renamed from docs/use_cases/pilot_gaze_monitoring/context.md)  13
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/introduction.md (renamed from docs/use_cases/pilot_gaze_monitoring/introduction.md)  7
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/observers.md (renamed from docs/use_cases/pilot_gaze_monitoring/observers.md)  4
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/pipeline.md (renamed from docs/use_cases/pilot_gaze_monitoring/pipeline.md)  51
12 files changed, 607 insertions, 25 deletions
diff --git a/docs/use_cases/air_controller_gaze_study/context.md b/docs/use_cases/air_controller_gaze_study/context.md
new file mode 100644
index 0000000..8bb4ef8
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/context.md
@@ -0,0 +1,22 @@
+Data playback context
+======================
+
+The context handles incoming eye tracker data before passing it on to a processing pipeline.
+
+## data_playback_context.json
+
+For this use case we need to read Tobii Pro Glasses 2 records: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to play back data from records made by this device.
+
+While the *segment* entry is specific to the [TobiiProGlasses2.SegmentPlayback](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback) class, the *name* and *pipeline* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback": {
+ "name": "Tobii Pro Glasses 2 segment playback",
+ "segment": "/Volumes/projects/fbr6k3e/records/4rcbdzk/segments/1",
+ "pipeline": "post_processing_pipeline.json"
+ }
+}
+```
+
+The [post_processing_pipeline.json](pipeline.md) file mentioned above is described in the next chapter.
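+
+The *segment* path above points to a single Tobii Pro Glasses 2 record. Purely as an illustrative sketch (not part of the study setup), a similar context file could be generated per segment to batch-process several records with the [*load* command](../../user_guide/utils/main_commands.md); the records root below is hypothetical.
+
+```python
+import json
+import subprocess
+from pathlib import Path
+
+RECORDS_ROOT = Path("/Volumes/projects/fbr6k3e/records")  # hypothetical records root
+
+for segment in sorted(RECORDS_ROOT.glob("*/segments/*")):
+
+    # Build a playback context pointing at this segment
+    context = {
+        "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback": {
+            "name": f"Playback of {segment}",
+            "segment": str(segment),
+            "pipeline": "post_processing_pipeline.json"
+        }
+    }
+
+    with open("data_playback_context.json", "w") as file:
+        json.dump(context, file, indent=4)
+
+    # Run the ArGaze load command on the generated context
+    subprocess.run(["python", "-m", "argaze", "load", "data_playback_context.json"])
+```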
diff --git a/docs/use_cases/air_controller_gaze_study/introduction.md b/docs/use_cases/air_controller_gaze_study/introduction.md
new file mode 100644
index 0000000..f188eec
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/introduction.md
@@ -0,0 +1,48 @@
+Post-processing head-mounted eye tracking records
+=================================================
+
+**ArGaze** enabled a study of air traffic controller gaze strategy.
+
+The following use case integrates the [ArUco marker pipeline](../../user_guide/aruco_marker_pipeline/introduction.md) to map air traffic controllers' gaze onto a multi-screen environment in post-processing, and then enables scan path study with the [gaze analysis pipeline](../../user_guide/gaze_analysis_pipeline/introduction.md).
+
+## Background
+
+The next-gen air traffic control system (4Flight) aims to enhance the operational capacity of the en-route control center by offering new tools to air traffic controllers. However, it entails significant changes in their working method, which will consequently have an impact on how they are trained.
+Several research projects on visual patterns of air traffic controllers indicate the urgent need to improve the effectiveness of training in visual information seeking behavior.
+An exploratory study was initiated by a group of trainee air traffic controllers with the aim of analyzing the visual patterns of novice controllers and instructors, intending to propose guidelines regarding the visual pattern for training.
+
+## Environment
+
+The 4Flight control position consists of two screens: the first displays the radar image along with other information regarding the observed sector, the second displays the agenda, which allows the controller to link conflicting aircraft by creating data blocks, and the Dyp info, which displays some information about the flight.
+During their training, controllers are taught to visually follow all aircraft streams along a given route, focusing on their planned flight path and potential interactions with other aircraft.
+
+![4Flight Workspace](../../img/4flight_workspace.png)
+
+A traffic simulation of moderate difficulty with a maximum of 13 and 16 aircraft simultaneously was performed by air traffic controllers. The controller could encounter lateral conflicts (same altitude) between 2 and 3 aircraft and conflicts between aircraft that need to ascend or descend within the sector.
+After the simulation, a directed interview about the gaze pattern was conducted.
+Eye tracking data was recorded with a Tobii Pro Glasses 2, a head-mounted eye tracker.
+The gaze and scene camera video were captured with Tobii Pro Lab software and post-processed with the **ArGaze** software library.
+As the eye tracker is head-mounted, ArUco markers were placed around the two screens to ensure that several of them were always visible in the field of view of the eye tracker camera.
+
+Various metrics were exported with specific pipeline observers, including average fixation duration, explore/exploit ratio, K-coefficient, AOI distribution, transition matrix, entropy and N-grams.
+Although statistical analysis is not possible due to the small sample size of the study (6 instructors, 5 qualified controllers, and 5 trainees), visual pattern summaries have been built manually from the transition matrix exports to produce a qualitative interpretation showing what instructors attend to during training and how qualified controllers work. In the figure below, red arcs represent more frequent transitions than blue ones; the panels show instructors (Fig. a) and four different qualified controllers (Fig. b, c, d, e).
+
+![4Flight Visual pattern](../../img/4flight_visual_pattern.png)
+
+## Setup
+
+The setup to integrate **ArGaze** into the experiment is defined by 3 main files detailed in the next chapters:
+
+* The context file that plays back gaze data and scene camera video records: [data_playback_context.json](context.md)
+* The pipeline file that processes gaze data and scene camera video: [post_processing_pipeline.json](pipeline.md)
+* The observers file that exports analysis outputs: [observers.py](observers.md)
+
+As with any **ArGaze** setup, it is loaded by executing the [*load* command](../../user_guide/utils/main_commands.md):
+
+```shell
+python -m argaze load data_playback_context.json
+```
+
+This command opens one GUI window per frame (one for the scene camera, one for the sector screen and one for the info screen), allowing gaze mapping to be monitored while processing.
+
+![ArGaze load GUI for PFE study](../../img/argaze_load_gui_pfe.png)
diff --git a/docs/use_cases/air_controller_gaze_study/observers.md b/docs/use_cases/air_controller_gaze_study/observers.md
new file mode 100644
index 0000000..500d573
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/observers.md
@@ -0,0 +1,90 @@
+Metrics and video recording
+===========================
+
+Observers are attached to pipeline steps to be notified when a method is called.
+
+## observers.py
+
+For this use case we need to record gaze analysis metrics on each *ArUcoCamera.on_look* call and to record the sector screen image on each *ArUcoCamera.on_copy_background_into_scenes_frames* call.
+
+```python
+import logging
+
+from argaze.utils import UtilsFeatures
+
+import cv2
+
+class ScanPathAnalysisRecorder(UtilsFeatures.FileWriter):
+
+    def __init__(self, **kwargs):
+
+        super().__init__(**kwargs)
+
+        self.header = "Timestamp (ms)", "Path duration (ms)", "Steps number", "Fixation durations average (ms)", "Explore/Exploit ratio", "K coefficient"
+
+    def on_look(self, timestamp, frame, exception):
+        """Log scan path metrics."""
+
+        if frame.is_analysis_available():
+
+            analysis = frame.analysis()
+
+            data = (
+                int(timestamp),
+                analysis['argaze.GazeAnalysis.Basic.ScanPathAnalyzer'].path_duration,
+                analysis['argaze.GazeAnalysis.Basic.ScanPathAnalyzer'].steps_number,
+                analysis['argaze.GazeAnalysis.Basic.ScanPathAnalyzer'].step_fixation_durations_average,
+                analysis['argaze.GazeAnalysis.ExploreExploitRatio.ScanPathAnalyzer'].explore_exploit_ratio,
+                analysis['argaze.GazeAnalysis.KCoefficient.ScanPathAnalyzer'].K
+            )
+
+            self.write(data)
+
+class AOIScanPathAnalysisRecorder(UtilsFeatures.FileWriter):
+
+    def __init__(self, **kwargs):
+
+        super().__init__(**kwargs)
+
+        self.header = "Timestamp (ms)", "Path duration (ms)", "Steps number", "Fixation durations average (ms)", "Transition matrix probabilities", "Transition matrix density", "N-Grams count", "Stationary entropy", "Transition entropy"
+
+    def on_look(self, timestamp, layer, exception):
+        """Log AOI scan path metrics."""
+
+        if layer.is_analysis_available():
+
+            analysis = layer.analysis()
+
+            data = (
+                int(timestamp),
+                analysis['argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer'].path_duration,
+                analysis['argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer'].steps_number,
+                analysis['argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer'].step_fixation_durations_average,
+                analysis['argaze.GazeAnalysis.TransitionMatrix.AOIScanPathAnalyzer'].transition_matrix_probabilities,
+                analysis['argaze.GazeAnalysis.TransitionMatrix.AOIScanPathAnalyzer'].transition_matrix_density,
+                analysis['argaze.GazeAnalysis.NGram.AOIScanPathAnalyzer'].ngrams_count,
+                analysis['argaze.GazeAnalysis.Entropy.AOIScanPathAnalyzer'].stationary_entropy,
+                analysis['argaze.GazeAnalysis.Entropy.AOIScanPathAnalyzer'].transition_entropy
+            )
+            self.write(data)
+
+class VideoRecorder(UtilsFeatures.VideoWriter):
+
+    def __init__(self, **kwargs):
+
+        super().__init__(**kwargs)
+
+    def on_copy_background_into_scenes_frames(self, timestamp, frame, exception):
+        """Write frame image."""
+
+        logging.debug('VideoRecorder.on_copy_background_into_scenes_frames')
+
+        image = frame.image()
+
+        # Write video timing
+        cv2.rectangle(image, (0, 0), (550, 50), (63, 63, 63), -1)
+        cv2.putText(image, f'Time: {int(timestamp)} ms', (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 1, cv2.LINE_AA)
+
+        self.write(image)
+```
diff --git a/docs/use_cases/air_controller_gaze_study/pipeline.md b/docs/use_cases/air_controller_gaze_study/pipeline.md
new file mode 100644
index 0000000..69fdd2c
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/pipeline.md
@@ -0,0 +1,366 @@
+Post-processing pipeline
+========================
+
+The pipeline processes camera image and gaze data to enable gaze mapping and gaze analysis.
+
+## post_processing_pipeline.json
+
+For this use case we need to detect ArUco markers to enable gaze mapping: **ArGaze** provides the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class to set up an [ArUco markers pipeline](../../user_guide/aruco_marker_pipeline/introduction.md).
+
+```json
+{
+ "argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": {
+ "name": "ATC_Study",
+ "size": [1920, 1080],
+ "sides_mask": 420,
+ "copy_background_into_scenes_frames": true,
+ "aruco_detector": {
+ "dictionary": "DICT_APRILTAG_16h5",
+ "optic_parameters": "optic_parameters.json",
+ "parameters": {
+ "adaptiveThreshConstant": 20,
+ "useAruco3Detection": true
+ }
+ },
+ "gaze_movement_identifier": {
+ "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
+ "deviation_max_threshold": 25,
+ "duration_min_threshold": 150
+ }
+ },
+ "layers": {
+ "Main" : {
+ "aoi_matcher": {
+ "argaze.GazeAnalysis.DeviationCircleCoverage.AOIMatcher": {
+ "coverage_threshold": 0.5
+ }
+ },
+ "aoi_scan_path" : {
+ "duration_max": 60000
+ },
+ "aoi_scan_path_analyzers": {
+ "argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer": {},
+ "argaze.GazeAnalysis.TransitionMatrix.AOIScanPathAnalyzer": {},
+ "argaze.GazeAnalysis.NGram.AOIScanPathAnalyzer": {
+ "n_min": 3,
+ "n_max": 5
+ },
+ "argaze.GazeAnalysis.Entropy.AOIScanPathAnalyzer": {}
+ },
+ "observers": {
+ "observers.AOIScanPathAnalysisRecorder": {
+ "path": "aoi_metrics.csv"
+ }
+ }
+ }
+ },
+ "image_parameters": {
+ "background_weight": 1,
+ "draw_gaze_positions": {
+ "color": [0, 255, 255],
+ "size": 4
+ },
+ "draw_detected_markers": {
+ "color": [0, 255, 0]
+ },
+ "draw_layers": {
+ "Main": {
+ "draw_aoi_scene": {
+ "draw_aoi": {
+ "color": [255, 255, 255],
+ "border_size": 1
+ }
+ },
+ "draw_aoi_matching": {
+ "update_looked_aoi": true,
+ "draw_looked_aoi": {
+ "color": [0, 255, 0],
+ "border_size": 2
+ },
+ "looked_aoi_name_color": [255, 255, 255],
+ "looked_aoi_name_offset": [0, -10]
+ }
+ }
+ }
+ },
+ "scenes": {
+ "Workspace": {
+ "aruco_markers_group": "workspace_markers.obj",
+ "layers": {
+ "Main" : {
+ "aoi_scene": "workspace_aois.obj"
+ }
+ },
+ "frames": {
+ "Sector_Screen": {
+ "size": [1080, 1017],
+ "gaze_movement_identifier": {
+ "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
+ "deviation_max_threshold": 25,
+ "duration_min_threshold": 150
+ }
+ },
+ "scan_path": {
+ "duration_max": 30000
+ },
+ "scan_path_analyzers": {
+ "argaze.GazeAnalysis.Basic.ScanPathAnalyzer": {},
+ "argaze.GazeAnalysis.ExploreExploitRatio.ScanPathAnalyzer": {
+ "short_fixation_duration_threshold": 0
+ },
+ "argaze.GazeAnalysis.KCoefficient.ScanPathAnalyzer": {}
+ },
+ "layers" :{
+ "Main": {
+ "aoi_scene": "sector_screen_aois.svg"
+ }
+ },
+ "heatmap": {
+ "size": [80, 60]
+ },
+ "image_parameters": {
+ "background_weight": 1,
+ "heatmap_weight": 0.5,
+ "draw_gaze_positions": {
+ "color": [0, 127, 127],
+ "size": 4
+ },
+ "draw_scan_path": {
+ "draw_fixations": {
+ "deviation_circle_color": [255, 255, 255],
+ "duration_border_color": [0, 127, 127],
+ "duration_factor": 1e-2
+ },
+ "draw_saccades": {
+ "line_color": [0, 255, 255]
+ },
+ "deepness": 0
+ },
+ "draw_layers": {
+ "Main": {
+ "draw_aoi_scene": {
+ "draw_aoi": {
+ "color": [255, 255, 255],
+ "border_size": 1
+ }
+ },
+ "draw_aoi_matching": {
+ "draw_matched_fixation": {
+ "deviation_circle_color": [255, 255, 255],
+ "draw_positions": {
+ "position_color": [0, 255, 0],
+ "line_color": [0, 0, 0]
+ }
+ },
+ "draw_looked_aoi": {
+ "color": [0, 255, 0],
+ "border_size": 2
+ },
+ "looked_aoi_name_color": [255, 255, 255],
+ "looked_aoi_name_offset": [10, 10]
+ }
+ }
+ }
+ },
+ "observers": {
+ "observers.ScanPathAnalysisRecorder": {
+ "path": "sector_screen.csv"
+ },
+ "observers.VideoRecorder": {
+ "path": "sector_screen.mp4",
+ "width": 1080,
+ "height": 1024,
+ "fps": 25
+ }
+ }
+ },
+ "Info_Screen": {
+ "size": [640, 1080],
+ "layers" : {
+ "Main": {
+ "aoi_scene": "info_screen_aois.svg"
+ }
+ }
+ }
+ }
+ }
+ },
+ "observers": {
+ "argaze.utils.UtilsFeatures.LookPerformanceRecorder": {
+ "path": "look_performance.csv"
+ },
+ "argaze.utils.UtilsFeatures.WatchPerformanceRecorder": {
+ "path": "watch_performance.csv"
+ }
+ }
+ }
+}
+```
+
+All the files mentioned above are described below.
+
+The *ScanPathAnalysisRecorder*, *AOIScanPathAnalysisRecorder* and *VideoRecorder* observer objects are defined in the [observers.py](observers.md) file, which is described in the next chapter.
+
+## optic_parameters.json
+
+This file defines the Tobii Pro Glasses 2 scene camera optic parameters, which have been calculated as explained in [the camera calibration chapter](../../user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md).
+
+```json
+{
+ "rms": 0.6688921504088245,
+ "dimensions": [
+ 1920,
+ 1080
+ ],
+ "K": [
+ [
+ 1135.6524381415752,
+ 0.0,
+ 956.0685325355497
+ ],
+ [
+ 0.0,
+ 1135.9272506869524,
+ 560.059099810324
+ ],
+ [
+ 0.0,
+ 0.0,
+ 1.0
+ ]
+ ],
+ "D": [
+ 0.01655492265003404,
+ 0.1985524264972037,
+ 0.002129965902489484,
+ -0.0019528582922179365,
+ -0.5792910353639452
+ ]
+}
+```
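+
+The *rms* value is the calibration reprojection error, while *K* is a 3x3 camera matrix and *D* holds distortion coefficients in the usual OpenCV convention. Purely as an illustration of what these numbers mean (the pipeline consumes this file through the *aruco_detector* entry shown above), such parameters could be used to undistort a scene camera image; the frame file name below is hypothetical.
+
+```python
+import json
+
+import cv2
+import numpy
+
+# Load the calibration file
+with open("optic_parameters.json") as file:
+    optic = json.load(file)
+
+K = numpy.array(optic["K"])  # 3x3 camera matrix
+D = numpy.array(optic["D"])  # distortion coefficients
+
+# Undistort one scene camera frame (hypothetical file name)
+frame = cv2.imread("scene_camera_frame.png")
+undistorted = cv2.undistort(frame, K, D)
+cv2.imwrite("scene_camera_frame_undistorted.png", undistorted)
+```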
+
+## workspace_markers.obj
+
+This file defines where the ArUco markers are placed in the workspace geometry. Marker positions have been edited in [Blender software](https://www.blender.org/) from a 3D model of the workspace built manually, then exported in OBJ format.
+
+```obj
+# Blender v3.0.1 OBJ File: 'workspace.blend'
+# www.blender.org
+o DICT_APRILTAG_16h5#1_Marker
+v -2.532475 48.421242 0.081627
+v 2.467094 48.355682 0.077174
+v 2.532476 53.352734 -0.081634
+v -2.467093 53.418293 -0.077182
+s off
+f 1 2 3 4
+o DICT_APRILTAG_16h5#6_Marker
+v 88.144676 23.084166 -0.070246
+v 93.144661 23.094980 -0.072225
+v 93.133904 28.092941 0.070232
+v 88.133919 28.082127 0.072211
+s off
+f 5 6 7 8
+o DICT_APRILTAG_16h5#2_Marker
+v -6.234516 27.087950 0.176944
+v -1.244015 27.005413 -0.119848
+v -1.164732 32.004459 -0.176936
+v -6.155232 32.086998 0.119855
+s off
+f 9 10 11 12
+o DICT_APRILTAG_16h5#3_Marker
+v -2.518053 -2.481743 -0.018721
+v 2.481756 -2.518108 0.005601
+v 2.518059 2.481743 0.018721
+v -2.481749 2.518108 -0.005601
+s off
+f 13 14 15 16
+o DICT_APRILTAG_16h5#5_Marker
+v 48.746418 48.319012 -0.015691
+v 53.746052 48.374046 0.009490
+v 53.690983 53.373741 0.015698
+v 48.691349 53.318699 -0.009490
+s off
+f 17 18 19 20
+o DICT_APRILTAG_16h5#4_Marker
+v 23.331947 -3.018721 5.481743
+v 28.331757 -2.994399 5.518108
+v 28.368059 -2.981279 0.518257
+v 23.368252 -3.005600 0.481892
+s off
+f 21 22 23 24
+
+```
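+
+Each OBJ object name encodes the marker dictionary and identifier (for instance *DICT_APRILTAG_16h5#1_Marker* is marker 1 of the DICT_APRILTAG_16h5 dictionary), followed by the four corner vertices of that marker. As a small illustrative sketch, the declared marker identifiers can be listed with a few lines of Python:
+
+```python
+# List the ArUco marker identifiers declared in the OBJ file
+with open("workspace_markers.obj") as file:
+
+    for line in file:
+
+        if line.startswith("o "):
+
+            # Object names look like "DICT_APRILTAG_16h5#<id>_Marker"
+            dictionary, remainder = line.strip()[2:].split("#")
+            identifier = remainder.split("_")[0]
+
+            print(dictionary, identifier)
+```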
+
+## workspace_aois.obj
+
+This file defines where the AOI are placed in the workspace geometry. AOI positions have been edited in [Blender software](https://www.blender.org/) from a 3D model of the workspace built manually, then exported in OBJ format.
+
+```obj
+# Blender v3.0.1 OBJ File: 'workspace.blend'
+# www.blender.org
+o Sector_Screen
+v 0.000000 1.008786 0.000000
+v 51.742416 1.008786 0.000000
+v 0.000000 52.998108 0.000000
+v 51.742416 52.998108 0.000000
+s off
+f 1 2 4 3
+o Info_Screen
+v 56.407101 0.000000 0.000000
+v 91.407104 0.000000 0.000000
+v 56.407101 52.499996 0.000000
+v 91.407104 52.499996 0.000000
+s off
+f 5 6 8 7
+
+```
+
+## sector_screen_aois.svg
+
+This file defines where the AOI are placed in the sector screen frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the sector screen, then exported in SVG format.
+
+```svg
+<svg >
+ <path id="Area_1" d="M317.844,198.526L507.431,426.837L306.453,595.073L110.442,355.41L317.844,198.526Z"/>
+ <path id="Area_2" d="M507.431,426.837L611.554,563.624L444.207,750.877L306.453,595.073L507.431,426.837Z"/>
+ <path id="Area_3" d="M395.175,1017L444.207,750.877L611.554,563.624L1080,954.462L1080,1017L395.175,1017Z"/>
+ <path id="Area_4" d="M611.554,563.624L756.528,293.236L562.239,198.526L471.45,382.082L611.554,563.624Z"/>
+ <path id="Area_5" d="M0,900.683L306.453,595.073L444.207,750.877L395.175,1017L0,1017L0,900.683Z"/>
+ <path id="Area_6" d="M471.45,381.938L557.227,207.284L354.832,65.656L237.257,104.014L471.45,381.938Z"/>
+ <path id="Area_7" d="M0,22.399L264.521,24.165L318.672,77.325L237.257,103.625L248.645,118.901L0,80.963L0,22.399Z"/>
+</svg>
+```
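+
+Each *path* element defines one AOI named after its *id* attribute (here *Area_1* to *Area_7*). As a small illustrative sketch, the AOI names declared in the file can be listed with the Python standard library:
+
+```python
+import xml.etree.ElementTree as ElementTree
+
+# List the AOI names declared in the SVG description
+tree = ElementTree.parse("sector_screen_aois.svg")
+
+for element in tree.getroot():
+
+    print(element.tag, element.get("id"))
+```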
+
+## info_screen_aois.svg
+
+This file defines where the AOI are placed in the info screen frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the info screen, then exported in SVG format.
+
+```svg
+<svg >
+ <rect id="Strips" x="0" y="880" width="640" height="200"/>
+</svg>
+```
+
+## aoi_metrics.csv
+
+This file contains all the metrics recorded by the *AOIScanPathAnalysisRecorder* observer defined in the [observers.py](observers.md) file.
+
+## sector_screen.csv
+
+This file contains all the metrics recorded by the *ScanPathAnalysisRecorder* observer defined in the [observers.py](observers.md) file.
+
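+Purely as a sanity-check sketch (not part of the study tooling), both metric files can be inspected with pandas, assuming comma-separated output as suggested by the *.csv* extension; the column names follow the *header* tuples declared in observers.py.
+
+```python
+import pandas
+
+# Load the exported metrics (assuming comma-separated files)
+aoi_metrics = pandas.read_csv("aoi_metrics.csv")
+sector_metrics = pandas.read_csv("sector_screen.csv")
+
+# Column names follow the header tuples declared in observers.py
+print(aoi_metrics.columns.tolist())
+
+# Quick statistical summary of the sector screen scan path metrics
+print(sector_metrics.describe())
+```
+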
+## sector_screen.mp4
+
+This video file is a recording of the sector screen frame image.
+
+## look_performance.csv
+
+This file contains the execution timing logs of the *ArUcoCamera.look* method. It is created in the folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a MacBook Pro (2.3 GHz Intel Core i9, 8 cores), the *look* method execution time is ~1 ms and it is called ~51 times per second.
+
+## watch_performance.csv
+
+This file contains the execution timing logs of the *ArUcoCamera.watch* method. It is created in the folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a MacBook Pro (2.3 GHz Intel Core i9, 8 cores) without CUDA acceleration, the *watch* method execution time is ~52 ms and it is called more than 12 times per second.
diff --git a/docs/use_cases/gaze_based_candidate_selection/context.md b/docs/use_cases/gaze_based_candidate_selection/context.md
new file mode 100644
index 0000000..96547ea
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/context.md
@@ -0,0 +1,7 @@
+Data playback context
+======================
+
+The context handles incoming eye tracker data before passing it on to a processing pipeline.
+
+## data_playback_context.json
+
diff --git a/docs/use_cases/gaze_based_candidate_selection/introduction.md b/docs/use_cases/gaze_based_candidate_selection/introduction.md
new file mode 100644
index 0000000..da8d6f9
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/introduction.md
@@ -0,0 +1,12 @@
+Post-processing screen-based eye tracker data
+=================================================
+
+**ArGaze** enabled ...
+
+The following use case has integrated ...
+
+## Background
+
+## Environment
+
+## Setup
diff --git a/docs/use_cases/gaze_based_candidate_selection/observers.md b/docs/use_cases/gaze_based_candidate_selection/observers.md
new file mode 100644
index 0000000..a1f1fce
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/observers.md
@@ -0,0 +1,6 @@
+Metrics and video recording
+===========================
+
+Observers are attached to pipeline steps to be notified when a method is called.
+
+## observers.py
diff --git a/docs/use_cases/gaze_based_candidate_selection/pipeline.md b/docs/use_cases/gaze_based_candidate_selection/pipeline.md
new file mode 100644
index 0000000..6fae01a
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/pipeline.md
@@ -0,0 +1,6 @@
+Post-processing pipeline
+========================
+
+The pipeline processes gaze data to enable gaze analysis.
+
+## post_processing_pipeline.json
diff --git a/docs/use_cases/pilot_gaze_monitoring/context.md b/docs/use_cases/pilot_gaze_tracking/context.md
index 417ed13..8839cb6 100644
--- a/docs/use_cases/pilot_gaze_monitoring/context.md
+++ b/docs/use_cases/pilot_gaze_tracking/context.md
@@ -1,12 +1,11 @@
-Live streaming context
-======================
+Data capture context
+====================
-The context handles pipeline inputs.
+The context handles incoming eye tracker data before passing it on to a processing pipeline.
## live_streaming_context.json
-For this use case we need to connect to a Tobii Pro Glasses 2 device.
-**ArGaze** provides a context class to live stream data from this device.
+For this use case we need to connect to a Tobii Pro Glasses 2 device: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to capture data from this device.
While *address*, *project*, *participant* and *configuration* entries are specific to the [TobiiProGlasses2.LiveStream](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.LiveStream) class, *name*, *pipeline* and *observers* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
@@ -37,6 +36,6 @@ While *address*, *project*, *participant* and *configuration* entries are specif
}
```
-The [live_processing_pipeline.json](pipeline.md) file mentioned aboved is described in the next chapter.
+The [live_processing_pipeline.json](pipeline.md) file mentioned above is described in the next chapter.
-The observers objects are defined into the [observers.py](observers.md) file that is described in a next chapter.
\ No newline at end of file
+The *IvyBus* observer object is defined in the [observers.py](observers.md) file, which is described in a later chapter.
\ No newline at end of file
diff --git a/docs/use_cases/pilot_gaze_monitoring/introduction.md b/docs/use_cases/pilot_gaze_tracking/introduction.md
index 453a443..7e88c69 100644
--- a/docs/use_cases/pilot_gaze_monitoring/introduction.md
+++ b/docs/use_cases/pilot_gaze_tracking/introduction.md
@@ -30,17 +30,18 @@ Finally, fixation events were sent in real-time through [Ivy bus middleware](htt
## Setup
-The setup to integrate **ArGaze** to the experiment is defined by 3 main files:
+The setup to integrate **ArGaze** into the experiment is defined by 3 main files detailed in the next chapters:
* The context file that captures gaze data and scene camera video: [live_streaming_context.json](context.md)
* The pipeline file that processes gaze data and scene camera video: [live_processing_pipeline.json](pipeline.md)
* The observers file that send fixation events via Ivy bus middleware: [observers.py](observers.md)
-As any **ArGaze** setup, it is loaded by executing the following command:
+As with any **ArGaze** setup, it is loaded by executing the [*load* command](../../user_guide/utils/main_commands.md):
```shell
python -m argaze load live_streaming_context.json
```
-## Performance
+This command opens a GUI window that allows you to start gaze calibration, launch recording and monitor gaze mapping. Another window is also opened to display gaze mapping onto the PFD screen.
+![ArGaze load GUI for Haiku](../../img/argaze_load_gui_haiku.png)
diff --git a/docs/use_cases/pilot_gaze_monitoring/observers.md b/docs/use_cases/pilot_gaze_tracking/observers.md
index 2e3f394..5f5bc78 100644
--- a/docs/use_cases/pilot_gaze_monitoring/observers.md
+++ b/docs/use_cases/pilot_gaze_tracking/observers.md
@@ -1,8 +1,12 @@
Fixation events sending
=======================
+Observers are attached to pipeline steps to be notified when a method is called.
+
## observers.py
+For this use case we need to enable [Ivy bus communication](https://gitlab.com/ivybus/ivy-python/) to log ArUco detection results (on each *ArUcoCamera.on_watch* call) and fixation identification with AOI matching (on each *ArUcoCamera.on_look* call).
+
```python
import logging
diff --git a/docs/use_cases/pilot_gaze_monitoring/pipeline.md b/docs/use_cases/pilot_gaze_tracking/pipeline.md
index 8f8dad0..65fccc3 100644
--- a/docs/use_cases/pilot_gaze_monitoring/pipeline.md
+++ b/docs/use_cases/pilot_gaze_tracking/pipeline.md
@@ -5,8 +5,7 @@ The pipeline processes camera image and gaze data to enable gaze mapping and gaz
## live_processing_pipeline.json
-For this use case we need to detect ArUco markers to enable gaze mapping.
-**ArGaze** provides the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class to setup an [ArUco markers pipeline](../../user_guide/aruco_marker_pipeline/introduction.md).
+For this use case we need to detect ArUco markers to enable gaze mapping: **ArGaze** provides the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class to set up an [ArUco markers pipeline](../../user_guide/aruco_marker_pipeline/introduction.md).
```json
{
@@ -37,12 +36,6 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
"PIC_PFD": {
"size": [960, 1080],
"background": "PIC_PFD.png",
- "gaze_movement_identifier": {
- "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
- "deviation_max_threshold": 50,
- "duration_min_threshold": 150
- }
- },
"layers": {
"Main": {
"aoi_scene": "PIC_PFD.svg"
@@ -56,9 +49,7 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
}
}
}
- },
- "angle_tolerance": 15.0,
- "distance_tolerance": 10.0
+ }
}
},
"layers": {
@@ -119,18 +110,26 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
}
},
"observers": {
- "observers.ArUcoCameraLogger": {}
+ "observers.ArUcoCameraLogger": {},
+ "argaze.utils.UtilsFeatures.LookPerformanceRecorder": {
+ "path": "_export/look_performance.csv"
+ },
+ "argaze.utils.UtilsFeatures.WatchPerformanceRecorder": {
+ "path": "_export/watch_performance.csv"
+ }
}
}
}
```
-All the files mentioned aboved are described below.
+All the files mentioned above are described below.
-The observers objects are defined into the [observers.py](observers.md) file that is described in the next chapter.
+The *ArUcoCameraLogger* observer object is defined in the [observers.py](observers.md) file, which is described in the next chapter.
## optic_parameters.json
+This file defines the Tobii Pro Glasses 2 scene camera optic parameters, which have been calculated as explained in [the camera calibration chapter](../../user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md).
+
```json
{
"rms": 0.6688921504088245,
@@ -167,15 +166,19 @@ The observers objects are defined into the [observers.py](observers.md) file tha
## detector_parameters.json
+This file defines the ArUco detector parameters, as explained in [the detection improvement chapter](../../user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md).
+
```json
{
"adaptiveThreshConstant": 7,
- "useAruco3Detection": 1
+ "useAruco3Detection": true
}
```
## aruco_scene.obj
+This file defines where the ArUco markers are placed in the cockpit geometry. Marker positions have been edited in [Blender software](https://www.blender.org/) from a 3D scan of the cockpit, then exported in OBJ format.
+
```obj
# Blender v3.0.1 OBJ File: 'scene.blend'
# www.blender.org
@@ -239,6 +242,8 @@ f 29 30 32 31
## Cockpit.obj
+This file defines where the AOI are placed in the cockpit geometry. AOI positions have been edited in [Blender software](https://www.blender.org/) from a 3D scan of the cockpit, then exported in OBJ format.
+
```obj
# Blender v3.0.1 OBJ File: 'scene.blend'
# www.blender.org
@@ -274,10 +279,14 @@ f 13 14 16 15
## PIC_PFD.png
+This file is a screenshot of the PFD screen used to monitor where the gaze is projected after gaze mapping processing.
+
![PFD frame background](../../img/haiku_PIC_PFD_background.png)
## PIC_PFD.svg
+This file defines where the AOI are placed in the PFD frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the PFD screen, then exported in SVG format.
+
```svg
<svg>
<rect id="PIC_PFD_Air_Speed" x="93.228" y="193.217" width="135.445" height="571.812"/>
@@ -288,3 +297,15 @@ f 13 14 16 15
<rect id="PIC_PFD_Vertical_Speed" x="819.913" y="193.217" width="85.185" height="609.09"/>
</svg>
```
+
+## look_performance.csv
+
+This file contains the execution timing logs of the *ArUcoCamera.look* method. It is saved in an *_export* folder created where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a Jetson Xavier computer, the *look* method execution time is 5.7 ms and it is called ~100 times per second.
+
+## watch_performance.csv
+
+This file contains the execution timing logs of the *ArUcoCamera.watch* method. It is saved in an *_export* folder created where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a Jetson Xavier computer with CUDA acceleration, the *watch* method execution time is 46.5 ms and it is called more than 12 times per second.