Diffstat (limited to 'docs')
-rw-r--r--  docs/img/4flight_visual_pattern.png  bin 0 -> 331959 bytes
-rw-r--r--  docs/img/4flight_workspace.png  bin 0 -> 311033 bytes
-rw-r--r--  docs/img/argaze_load_gui.png  bin 0 -> 151200 bytes
-rw-r--r--  docs/img/argaze_load_gui_haiku.png  bin 0 -> 321011 bytes
-rw-r--r--  docs/img/argaze_load_gui_opencv_frame.png  bin 0 -> 339281 bytes
-rw-r--r--  docs/img/argaze_load_gui_opencv_pipeline.png  bin 0 -> 517856 bytes
-rw-r--r--  docs/img/argaze_load_gui_pfe.png  bin 0 -> 454631 bytes
-rw-r--r--  docs/img/argaze_load_gui_random.png  bin 0 -> 33593 bytes
-rw-r--r--  docs/img/argaze_load_gui_random_pipeline.png  bin 0 -> 74788 bytes
-rw-r--r--  docs/img/argaze_pipeline.png  bin 92231 -> 92553 bytes
-rw-r--r--  docs/img/eye_tracker_context.png  bin 0 -> 41128 bytes
-rw-r--r--  docs/img/pipeline_input_context.png  bin 49064 -> 0 bytes
-rw-r--r--  docs/index.md  18
-rw-r--r--  docs/use_cases/air_controller_gaze_study/context.md  22
-rw-r--r--  docs/use_cases/air_controller_gaze_study/introduction.md  48
-rw-r--r--  docs/use_cases/air_controller_gaze_study/observers.md  90
-rw-r--r--  docs/use_cases/air_controller_gaze_study/pipeline.md  366
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/context.md  7
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/introduction.md  12
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/observers.md  6
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/pipeline.md  6
-rw-r--r--  docs/use_cases/pilot_gaze_monitoring/context.md  11
-rw-r--r--  docs/use_cases/pilot_gaze_monitoring/introduction.md  7
-rw-r--r--  docs/use_cases/pilot_gaze_monitoring/observers.md  4
-rw-r--r--  docs/use_cases/pilot_gaze_monitoring/pipeline.md  49
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md  4
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md  2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md  79
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md  2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md  37
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md  90
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/introduction.md  3
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md  195
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/scripting.md  106
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md (renamed from docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md)  15
-rw-r--r--  docs/user_guide/eye_tracking_context/configuration_and_execution.md  65
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/opencv.md  63
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md  32
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/random.md  32
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md  59
-rw-r--r--  docs/user_guide/eye_tracking_context/introduction.md  19
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md  2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md  58
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md  7
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/background.md  4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md  58
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/heatmap.md  4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/introduction.md  8
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/recording.md  2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/visualization.md  35
-rw-r--r--  docs/user_guide/pipeline_input_context/configuration_and_connection.md  35
-rw-r--r--  docs/user_guide/pipeline_input_context/context_definition.md  57
-rw-r--r--  docs/user_guide/pipeline_input_context/introduction.md  24
-rw-r--r--  docs/user_guide/utils/demonstrations_scripts.md  48
-rw-r--r--  docs/user_guide/utils/estimate_aruco_markers_pose.md  60
-rw-r--r--  docs/user_guide/utils/main_commands.md (renamed from docs/user_guide/utils/ready-made_scripts.md)  35
56 files changed, 1507 insertions, 379 deletions
diff --git a/docs/img/4flight_visual_pattern.png b/docs/img/4flight_visual_pattern.png
new file mode 100644
index 0000000..0550063
--- /dev/null
+++ b/docs/img/4flight_visual_pattern.png
Binary files differ
diff --git a/docs/img/4flight_workspace.png b/docs/img/4flight_workspace.png
new file mode 100644
index 0000000..f899ab2
--- /dev/null
+++ b/docs/img/4flight_workspace.png
Binary files differ
diff --git a/docs/img/argaze_load_gui.png b/docs/img/argaze_load_gui.png
new file mode 100644
index 0000000..e012adc
--- /dev/null
+++ b/docs/img/argaze_load_gui.png
Binary files differ
diff --git a/docs/img/argaze_load_gui_haiku.png b/docs/img/argaze_load_gui_haiku.png
new file mode 100644
index 0000000..6a4e1ec
--- /dev/null
+++ b/docs/img/argaze_load_gui_haiku.png
Binary files differ
diff --git a/docs/img/argaze_load_gui_opencv_frame.png b/docs/img/argaze_load_gui_opencv_frame.png
new file mode 100644
index 0000000..3ab3b5e
--- /dev/null
+++ b/docs/img/argaze_load_gui_opencv_frame.png
Binary files differ
diff --git a/docs/img/argaze_load_gui_opencv_pipeline.png b/docs/img/argaze_load_gui_opencv_pipeline.png
new file mode 100644
index 0000000..227a91d
--- /dev/null
+++ b/docs/img/argaze_load_gui_opencv_pipeline.png
Binary files differ
diff --git a/docs/img/argaze_load_gui_pfe.png b/docs/img/argaze_load_gui_pfe.png
new file mode 100644
index 0000000..0e622d3
--- /dev/null
+++ b/docs/img/argaze_load_gui_pfe.png
Binary files differ
diff --git a/docs/img/argaze_load_gui_random.png b/docs/img/argaze_load_gui_random.png
new file mode 100644
index 0000000..c95a9f5
--- /dev/null
+++ b/docs/img/argaze_load_gui_random.png
Binary files differ
diff --git a/docs/img/argaze_load_gui_random_pipeline.png b/docs/img/argaze_load_gui_random_pipeline.png
new file mode 100644
index 0000000..210d410
--- /dev/null
+++ b/docs/img/argaze_load_gui_random_pipeline.png
Binary files differ
diff --git a/docs/img/argaze_pipeline.png b/docs/img/argaze_pipeline.png
index 953cbba..61606b2 100644
--- a/docs/img/argaze_pipeline.png
+++ b/docs/img/argaze_pipeline.png
Binary files differ
diff --git a/docs/img/eye_tracker_context.png b/docs/img/eye_tracker_context.png
new file mode 100644
index 0000000..638e9a6
--- /dev/null
+++ b/docs/img/eye_tracker_context.png
Binary files differ
diff --git a/docs/img/pipeline_input_context.png b/docs/img/pipeline_input_context.png
deleted file mode 100644
index 8c195ea..0000000
--- a/docs/img/pipeline_input_context.png
+++ /dev/null
Binary files differ
diff --git a/docs/index.md b/docs/index.md
index 2d00d16..ca9271a 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -7,20 +7,26 @@ title: What is ArGaze?
**Useful links**: [Installation](installation.md) | [Source Repository](https://gitpub.recherche.enac.fr/argaze) | [Issue Tracker](https://git.recherche.enac.fr/projects/argaze/issues) | [Contact](mailto:argaze-contact@recherche.enac.fr)
**ArGaze** is an open and flexible Python software library designed to provide a unified and modular approach to gaze analysis or gaze interaction.
-**ArGaze** facilitates **real-time and/or post-processing analysis** for both **screen-based and head-mounted** eye tracking systems.
+
By offering a wide array of gaze metrics and supporting easy extension to incorporate additional metrics, **ArGaze** empowers researchers and practitioners to explore novel analytical approaches efficiently.
![ArGaze pipeline](img/argaze_pipeline.png)
+## Eye tracking context
+
+**ArGaze** facilitates the integration of both **screen-based and head-mounted** eye tracking systems for **live data capture and data playback afterward**.
+
+[Learn how to handle various eye tracking contexts by reading the dedicated user guide section](./user_guide/eye_tracking_context/introduction.md).
+
## Gaze analysis pipeline
-**ArGaze** provides an extensible modules library, allowing to select application-specific algorithms at each pipeline step:
+Once incoming eye tracking data is available, **ArGaze** provides an extensible module library, allowing application-specific algorithms to be selected at each pipeline step:
* **Fixation/Saccade identification**: dispersion threshold identification, velocity threshold identification, etc.
* **Area Of Interest (AOI) matching**: focus point inside, deviation circle coverage, etc.
* **Scan path analysis**: transition matrix, entropy, explore/exploit ratio, etc.
-Once the incoming data is formatted as required, all those gaze analysis features can be used with any screen-based eye tracker devices.
+All those gaze analysis features can be used with any screen-based eye tracker device.
[Learn how to build gaze analysis pipelines for various use cases by reading the dedicated user guide section](./user_guide/gaze_analysis_pipeline/introduction.md).
@@ -37,3 +43,9 @@ This ArUco marker pipeline can be combined with any wearable eye tracking device
!!! note
*ArUco marker pipeline is greatly inspired by [Andrew T. Duchowski, Vsevolod Peysakhovich and Krzysztof Krejtz article](https://git.recherche.enac.fr/attachments/download/1990/Using_Pose_Estimation_to_Map_Gaze_to_Detected_Fidu.pdf) about using pose estimation to map gaze to detected fiducial markers.*
+
+## Demonstration
+
+![type:video](https://achil.recherche.enac.fr/videos/argaze_features.mp4)
+
+[Test **ArGaze** by reading the dedicated user guide section](./user_guide/utils/demonstrations_scripts.md). \ No newline at end of file
diff --git a/docs/use_cases/air_controller_gaze_study/context.md b/docs/use_cases/air_controller_gaze_study/context.md
new file mode 100644
index 0000000..5b13ca5
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/context.md
@@ -0,0 +1,22 @@
+Data playback context
+======================
+
+The context handles incoming eye tracker data before passing them to a processing pipeline.
+
+## data_playback_context.json
+
+For this use case we need to read Tobii Pro Glasses 2 records: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to play back data from records made by this device.
+
+While the *segment* entry is specific to the [TobiiProGlasses2.SegmentPlayback](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback) class, the *name* and *pipeline* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback": {
+ "name": "Tobii Pro Glasses 2 segment playback",
+ "segment": "/Volumes/projects/fbr6k3e/records/4rcbdzk/segments/1",
+ "pipeline": "post_processing_pipeline.json"
+ }
+}
+```
+
+The [post_processing_pipeline.json](pipeline.md) file mentioned above is described in the next chapter.
diff --git a/docs/use_cases/air_controller_gaze_study/introduction.md b/docs/use_cases/air_controller_gaze_study/introduction.md
new file mode 100644
index 0000000..f188eec
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/introduction.md
@@ -0,0 +1,48 @@
+Post-processing head-mounted eye tracking records
+=================================================
+
+**ArGaze** enabled a study of air traffic controller gaze strategy.
+
+The following use case has integrated the [ArUco marker pipeline](../../user_guide/aruco_marker_pipeline/introduction.md) to map air traffic controllers' gaze onto a multiple-screen environment in post-processing and then to enable scan path study using the [gaze analysis pipeline](../../user_guide/gaze_analysis_pipeline/introduction.md).
+
+## Background
+
+The next-gen air traffic control system (4Flight) aims to enhance the operational capacity of the en-route control center by offering new tools to air traffic controllers. However, it entails significant changes in their working method, which will consequently have an impact on how they are trained.
+Several research projects on visual patterns of air traffic controllers indicate the urgent need to improve the effectiveness of training in visual information seeking behavior.
+An exploratory study was initiated by a group of trainee air traffic controllers with the aim of analyzing the visual patterns of novice controllers and instructors, intending to propose guidelines regarding the visual pattern for training.
+
+## Environment
+
+The 4Flight control position consists of two screens: the first displays the radar image along with other information regarding the observed sector; the second displays the agenda, which allows the controller to link conflicting aircraft by creating data blocks, and the Dyp info, which displays some information about the flight.
+During their training, controllers are taught to visually follow all aircraft streams along a given route, focusing on their planned flight path and potential interactions with other aircraft.
+
+![4Flight Workspace](../../img/4flight_workspace.png)
+
+A traffic simulation of moderate difficulty with a maximum of 13 and 16 aircraft simultaneously was performed by air traffic controllers. The controller could encounter lateral conflicts (same altitude) between 2 and 3 aircraft and conflicts between aircraft that need to ascend or descend within the sector.
+After the simulation, a directed interview about the gaze pattern was conducted.
+Eye tracking data was recorded with a Tobii Pro Glasses 2, a head-mounted eye tracker.
+The gaze and scene camera video were captured with Tobii Pro Lab software and post-processed with **ArGaze** software library.
+As the eye tracker model is head mounted, ArUco markers were placed around the two screens to ensure that several of them were always visible in the field of view of the eye tracker camera.
+
+Various metrics were exported with specific pipeline observers, including average fixation duration, explore/exploit ratio, K-coefficient, AOI distribution, transition matrix, entropy and N-grams.
+Although statistical analysis is not possible due to the small sample size of the study (6 instructors, 5 qualified controllers, and 5 trainees), visual pattern summaries have been manually built from the transition matrix export to produce a qualitative interpretation showing what instructors attend to during training and how qualified controllers work. Red arcs represent more frequent transitions than blue ones. Instructors are shown in Fig. a and four different qualified controllers in Fig. b, c, d and e.
+
+![4Flight Visual pattern](../../img/4flight_visual_pattern.png)
+
+## Setup
+
+The setup to integrate **ArGaze** into the experiment is defined by 3 main files detailed in the next chapters:
+
+* The context file that plays back gaze data and scene camera video records: [data_playback_context.json](context.md)
+* The pipeline file that processes gaze data and scene camera video: [post_processing_pipeline.json](pipeline.md)
+* The observers file that exports analysis outputs: [observers.py](observers.md)
+
+As any **ArGaze** setup, it is loaded by executing the [*load* command](../../user_guide/utils/main_commands.md):
+
+```shell
+python -m argaze load data_playback_context.json
+```
+
+This command opens one GUI window per frame (one for the scene camera, one for the sector screen and one for the info screen), allowing gaze mapping to be monitored while processing.
+
+![ArGaze load GUI for PFE study](../../img/argaze_load_gui_pfe.png)
diff --git a/docs/use_cases/air_controller_gaze_study/observers.md b/docs/use_cases/air_controller_gaze_study/observers.md
new file mode 100644
index 0000000..500d573
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/observers.md
@@ -0,0 +1,90 @@
+Metrics and video recording
+===========================
+
+Observers are attached to pipeline steps to be notified when a method is called.
+
+## observers.py
+
+For this use case we need to record gaze analysis metrics on each *ArUcoCamera.on_look* call and to record the sector screen image on each *ArUcoCamera.on_copy_background_into_scenes_frames* signal.
+
+```python
+import logging
+
+from argaze.utils import UtilsFeatures
+
+import cv2
+
+class ScanPathAnalysisRecorder(UtilsFeatures.FileWriter):
+
+ def __init__(self, **kwargs):
+
+ super().__init__(**kwargs)
+
+ self.header = "Timestamp (ms)", "Path duration (ms)", "Steps number", "Fixation durations average (ms)", "Explore/Exploit ratio", "K coefficient"
+
+ def on_look(self, timestamp, frame, exception):
+ """Log scan path metrics."""
+
+ if frame.is_analysis_available():
+
+ analysis = frame.analysis()
+
+ data = (
+ int(timestamp),
+ analysis['argaze.GazeAnalysis.Basic.ScanPathAnalyzer'].path_duration,
+ analysis['argaze.GazeAnalysis.Basic.ScanPathAnalyzer'].steps_number,
+ analysis['argaze.GazeAnalysis.Basic.ScanPathAnalyzer'].step_fixation_durations_average,
+ analysis['argaze.GazeAnalysis.ExploreExploitRatio.ScanPathAnalyzer'].explore_exploit_ratio,
+ analysis['argaze.GazeAnalysis.KCoefficient.ScanPathAnalyzer'].K
+ )
+
+ self.write(data)
+
+class AOIScanPathAnalysisRecorder(UtilsFeatures.FileWriter):
+
+ def __init__(self, **kwargs):
+
+ super().__init__(**kwargs)
+
+ self.header = "Timestamp (ms)", "Path duration (ms)", "Steps number", "Fixation durations average (ms)", "Transition matrix probabilities", "Transition matrix density", "N-Grams count", "Stationary entropy", "Transition entropy"
+
+ def on_look(self, timestamp, layer, exception):
+ """Log aoi scan path metrics"""
+
+ if layer.is_analysis_available():
+
+ analysis = layer.analysis()
+
+ data = (
+ int(timestamp),
+ analysis['argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer'].path_duration,
+ analysis['argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer'].steps_number,
+ analysis['argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer'].step_fixation_durations_average,
+ analysis['argaze.GazeAnalysis.TransitionMatrix.AOIScanPathAnalyzer'].transition_matrix_probabilities,
+ analysis['argaze.GazeAnalysis.TransitionMatrix.AOIScanPathAnalyzer'].transition_matrix_density,
+ analysis['argaze.GazeAnalysis.NGram.AOIScanPathAnalyzer'].ngrams_count,
+ analysis['argaze.GazeAnalysis.Entropy.AOIScanPathAnalyzer'].stationary_entropy,
+ analysis['argaze.GazeAnalysis.Entropy.AOIScanPathAnalyzer'].transition_entropy
+ )
+
+ self.write(data)
+
+class VideoRecorder(UtilsFeatures.VideoWriter):
+
+ def __init__(self, **kwargs):
+
+ super().__init__(**kwargs)
+
+ def on_copy_background_into_scenes_frames(self, timestamp, frame, exception):
+ """Write frame image."""
+
+ logging.debug('VideoRecorder.on_map')
+
+ image = frame.image()
+
+ # Write video timing
+ cv2.rectangle(image, (0, 0), (550, 50), (63, 63, 63), -1)
+ cv2.putText(image, f'Time: {int(timestamp)} ms', (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 1, cv2.LINE_AA)
+
+ self.write(image)
+```
diff --git a/docs/use_cases/air_controller_gaze_study/pipeline.md b/docs/use_cases/air_controller_gaze_study/pipeline.md
new file mode 100644
index 0000000..b1df62a
--- /dev/null
+++ b/docs/use_cases/air_controller_gaze_study/pipeline.md
@@ -0,0 +1,366 @@
+Post processing pipeline
+========================
+
+The pipeline processes camera image and gaze data to enable gaze mapping and gaze analysis.
+
+## post_processing_pipeline.json
+
+For this use case we need to detect ArUco markers to enable gaze mapping: **ArGaze** provides the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class to set up an [ArUco markers pipeline](../../user_guide/aruco_marker_pipeline/introduction.md).
+
+```json
+{
+ "argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": {
+ "name": "ATC_Study",
+ "size": [1920, 1080],
+ "sides_mask": 420,
+ "copy_background_into_scenes_frames": true,
+ "aruco_detector": {
+ "dictionary": "DICT_APRILTAG_16h5",
+ "optic_parameters": "optic_parameters.json",
+ "parameters": {
+ "adaptiveThreshConstant": 20,
+ "useAruco3Detection": true
+ }
+ },
+ "gaze_movement_identifier": {
+ "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
+ "deviation_max_threshold": 25,
+ "duration_min_threshold": 150
+ }
+ },
+ "layers": {
+ "Main" : {
+ "aoi_matcher": {
+ "argaze.GazeAnalysis.DeviationCircleCoverage.AOIMatcher": {
+ "coverage_threshold": 0.5
+ }
+ },
+ "aoi_scan_path" : {
+ "duration_max": 60000
+ },
+ "aoi_scan_path_analyzers": {
+ "argaze.GazeAnalysis.Basic.AOIScanPathAnalyzer": {},
+ "argaze.GazeAnalysis.TransitionMatrix.AOIScanPathAnalyzer": {},
+ "argaze.GazeAnalysis.NGram.AOIScanPathAnalyzer": {
+ "n_min": 3,
+ "n_max": 5
+ },
+ "argaze.GazeAnalysis.Entropy.AOIScanPathAnalyzer": {}
+ },
+ "observers": {
+ "observers.AOIScanPathAnalysisRecorder": {
+ "path": "aoi_metrics.csv"
+ }
+ }
+ }
+ },
+ "image_parameters": {
+ "background_weight": 1,
+ "draw_gaze_positions": {
+ "color": [0, 255, 255],
+ "size": 4
+ },
+ "draw_detected_markers": {
+ "color": [0, 255, 0]
+ },
+ "draw_layers": {
+ "Main": {
+ "draw_aoi_scene": {
+ "draw_aoi": {
+ "color": [255, 255, 255],
+ "border_size": 1
+ }
+ },
+ "draw_aoi_matching": {
+ "update_looked_aoi": true,
+ "draw_looked_aoi": {
+ "color": [0, 255, 0],
+ "border_size": 2
+ },
+ "looked_aoi_name_color": [255, 255, 255],
+ "looked_aoi_name_offset": [0, -10]
+ }
+ }
+ }
+ },
+ "scenes": {
+ "Workspace": {
+ "aruco_markers_group": "workspace_markers.obj",
+ "layers": {
+ "Main" : {
+ "aoi_scene": "workspace_aois.obj"
+ }
+ },
+ "frames": {
+ "Sector_Screen": {
+ "size": [1080, 1017],
+ "gaze_movement_identifier": {
+ "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
+ "deviation_max_threshold": 25,
+ "duration_min_threshold": 150
+ }
+ },
+ "scan_path": {
+ "duration_max": 30000
+ },
+ "scan_path_analyzers": {
+ "argaze.GazeAnalysis.Basic.ScanPathAnalyzer": {},
+ "argaze.GazeAnalysis.ExploreExploitRatio.ScanPathAnalyzer": {
+ "short_fixation_duration_threshold": 0
+ },
+ "argaze.GazeAnalysis.KCoefficient.ScanPathAnalyzer": {}
+ },
+ "layers" :{
+ "Main": {
+ "aoi_scene": "sector_screen_aois.svg"
+ }
+ },
+ "heatmap": {
+ "size": [80, 60]
+ },
+ "image_parameters": {
+ "background_weight": 1,
+ "heatmap_weight": 0.5,
+ "draw_gaze_positions": {
+ "color": [0, 127, 127],
+ "size": 4
+ },
+ "draw_scan_path": {
+ "draw_fixations": {
+ "deviation_circle_color": [255, 255, 255],
+ "duration_border_color": [0, 127, 127],
+ "duration_factor": 1e-2
+ },
+ "draw_saccades": {
+ "line_color": [0, 255, 255]
+ },
+ "deepness": 0
+ },
+ "draw_layers": {
+ "Main": {
+ "draw_aoi_scene": {
+ "draw_aoi": {
+ "color": [255, 255, 255],
+ "border_size": 1
+ }
+ },
+ "draw_aoi_matching": {
+ "draw_matched_fixation": {
+ "deviation_circle_color": [255, 255, 255],
+ "draw_positions": {
+ "position_color": [0, 255, 0],
+ "line_color": [0, 0, 0]
+ }
+ },
+ "draw_looked_aoi": {
+ "color": [0, 255, 0],
+ "border_size": 2
+ },
+ "looked_aoi_name_color": [255, 255, 255],
+ "looked_aoi_name_offset": [10, 10]
+ }
+ }
+ }
+ },
+ "observers": {
+ "observers.ScanPathAnalysisRecorder": {
+ "path": "sector_screen.csv"
+ },
+ "observers.VideoRecorder": {
+ "path": "sector_screen.mp4",
+ "width": 1080,
+ "height": 1024,
+ "fps": 25
+ }
+ }
+ },
+ "Info_Screen": {
+ "size": [640, 1080],
+ "layers" : {
+ "Main": {
+ "aoi_scene": "info_screen_aois.svg"
+ }
+ }
+ }
+ }
+ }
+ },
+ "observers": {
+ "argaze.utils.UtilsFeatures.LookPerformanceRecorder": {
+ "path": "_export/look_performance.csv"
+ },
+ "argaze.utils.UtilsFeatures.WatchPerformanceRecorder": {
+ "path": "_export/watch_performance.csv"
+ }
+ }
+ }
+}
+```
+
+All the files mentioned above are described below.
+
+The *ScanPathAnalysisRecorder* and *AOIScanPathAnalysisRecorder* observer objects are defined in the [observers.py](observers.md) file, which is described in the next chapter.
+
+## optic_parameters.json
+
+This file defines the Tobii Pro Glasses 2 scene camera optic parameters, which have been calculated as explained in [the camera calibration chapter](../../user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md).
+
+```json
+{
+ "rms": 0.6688921504088245,
+ "dimensions": [
+ 1920,
+ 1080
+ ],
+ "K": [
+ [
+ 1135.6524381415752,
+ 0.0,
+ 956.0685325355497
+ ],
+ [
+ 0.0,
+ 1135.9272506869524,
+ 560.059099810324
+ ],
+ [
+ 0.0,
+ 0.0,
+ 1.0
+ ]
+ ],
+ "D": [
+ 0.01655492265003404,
+ 0.1985524264972037,
+ 0.002129965902489484,
+ -0.0019528582922179365,
+ -0.5792910353639452
+ ]
+}
+```
+
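+These values follow the standard OpenCV pinhole camera model: *K* is the 3x3 camera matrix and *D* holds the distortion coefficients. As a purely illustrative sketch outside of **ArGaze** (which loads this file by itself), such parameters could be used with OpenCV to undistort a scene camera image; apart from the file name above, every name below is an assumption:
+
+```python
+import json
+
+import cv2
+import numpy
+
+# Load the calibration file described above
+with open('optic_parameters.json') as calibration_file:
+    optic_parameters = json.load(calibration_file)
+
+K = numpy.array(optic_parameters['K'])  # camera matrix
+D = numpy.array(optic_parameters['D'])  # distortion coefficients
+
+# Undistort a hypothetical scene camera image (1920x1080 BGR array)
+image = cv2.imread('scene_camera_image.png')
+undistorted_image = cv2.undistort(image, K, D)
+```
+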
+## workspace_markers.obj
+
+This file defines where the ArUco markers are placed in the workspace geometry. Markers' positions have been edited in [Blender software](https://www.blender.org/) from a 3D model of the workspace built manually, then exported in OBJ format.
+
+```obj
+# Blender v3.0.1 OBJ File: 'workspace.blend'
+# www.blender.org
+o DICT_APRILTAG_16h5#1_Marker
+v -2.532475 48.421242 0.081627
+v 2.467094 48.355682 0.077174
+v 2.532476 53.352734 -0.081634
+v -2.467093 53.418293 -0.077182
+s off
+f 1 2 3 4
+o DICT_APRILTAG_16h5#6_Marker
+v 88.144676 23.084166 -0.070246
+v 93.144661 23.094980 -0.072225
+v 93.133904 28.092941 0.070232
+v 88.133919 28.082127 0.072211
+s off
+f 5 6 7 8
+o DICT_APRILTAG_16h5#2_Marker
+v -6.234516 27.087950 0.176944
+v -1.244015 27.005413 -0.119848
+v -1.164732 32.004459 -0.176936
+v -6.155232 32.086998 0.119855
+s off
+f 9 10 11 12
+o DICT_APRILTAG_16h5#3_Marker
+v -2.518053 -2.481743 -0.018721
+v 2.481756 -2.518108 0.005601
+v 2.518059 2.481743 0.018721
+v -2.481749 2.518108 -0.005601
+s off
+f 13 14 15 16
+o DICT_APRILTAG_16h5#5_Marker
+v 48.746418 48.319012 -0.015691
+v 53.746052 48.374046 0.009490
+v 53.690983 53.373741 0.015698
+v 48.691349 53.318699 -0.009490
+s off
+f 17 18 19 20
+o DICT_APRILTAG_16h5#4_Marker
+v 23.331947 -3.018721 5.481743
+v 28.331757 -2.994399 5.518108
+v 28.368059 -2.981279 0.518257
+v 23.368252 -3.005600 0.481892
+s off
+f 21 22 23 24
+
+```
+
+## workspace_aois.obj
+
+This file defines where the AOI are placed in the workspace geometry. AOI positions have been edited in [Blender software](https://www.blender.org/) from a 3D model of the workspace built manually, then exported in OBJ format.
+
+```obj
+# Blender v3.0.1 OBJ File: 'workspace.blend'
+# www.blender.org
+o Sector_Screen
+v 0.000000 1.008786 0.000000
+v 51.742416 1.008786 0.000000
+v 0.000000 52.998108 0.000000
+v 51.742416 52.998108 0.000000
+s off
+f 1 2 4 3
+o Info_Screen
+v 56.407101 0.000000 0.000000
+v 91.407104 0.000000 0.000000
+v 56.407101 52.499996 0.000000
+v 91.407104 52.499996 0.000000
+s off
+f 5 6 8 7
+
+```
+
+## sector_screen_aois.svg
+
+This file defines where the AOI are placed in the sector screen frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the sector screen, then exported in SVG format.
+
+```svg
+<svg >
+ <path id="Area_1" d="M317.844,198.526L507.431,426.837L306.453,595.073L110.442,355.41L317.844,198.526Z"/>
+ <path id="Area_2" d="M507.431,426.837L611.554,563.624L444.207,750.877L306.453,595.073L507.431,426.837Z"/>
+ <path id="Area_3" d="M395.175,1017L444.207,750.877L611.554,563.624L1080,954.462L1080,1017L395.175,1017Z"/>
+ <path id="Area_4" d="M611.554,563.624L756.528,293.236L562.239,198.526L471.45,382.082L611.554,563.624Z"/>
+ <path id="Area_5" d="M0,900.683L306.453,595.073L444.207,750.877L395.175,1017L0,1017L0,900.683Z"/>
+ <path id="Area_6" d="M471.45,381.938L557.227,207.284L354.832,65.656L237.257,104.014L471.45,381.938Z"/>
+ <path id="Area_7" d="M0,22.399L264.521,24.165L318.672,77.325L237.257,103.625L248.645,118.901L0,80.963L0,22.399Z"/>
+</svg>
+```
+
+## info_screen_aois.svg
+
+This file defines where the AOI are placed in the info screen frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the info screen, then exported in SVG format.
+
+```svg
+<svg >
+ <rect id="Strips" x="0" y="880" width="640" height="200"/>
+</svg>
+```
+
+## aoi_metrics.csv
+
+This file contains all the metrics recorded by the *AOIScanPathAnalysisRecorder* objects as defined in the [observers.py](observers.md) file.
+
+## sector_screen.csv
+
+This file contains all the metrics recorded by the *ScanPathAnalysisRecorder* objects as defined in the [observers.py](observers.md) file.
+
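+As an illustrative sketch (assuming both exports are plain comma-separated tables whose headers match those declared in *observers.py*; the actual separator used by *UtilsFeatures.FileWriter* may differ), the recorded metrics could be explored with pandas:
+
+```python
+import pandas
+
+# Load the sector screen scan path metrics (columns follow the ScanPathAnalysisRecorder header)
+metrics = pandas.read_csv('sector_screen.csv')
+
+# Average fixation duration over the whole playback
+print(metrics['Fixation durations average (ms)'].mean())
+
+# Explore/exploit ratio evolution over time
+print(metrics[['Timestamp (ms)', 'Explore/Exploit ratio']])
+```
+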
+## sector_screen.mp4
+
+The video file is a record of the sector screen frame image.
+
+## look_performance.csv
+
+This file contains the logs of the *ArUcoCamera.look* method execution. It is created in an *_export* folder inside the directory from which the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a MacBook Pro (2.3 GHz Intel Core i9, 8 cores), the *look* method execution time is ~5.6 ms and it is called ~163 times per second.
+
+## watch_performance.csv
+
+This file contains the logs of the *ArUcoCamera.watch* method execution. It is created in an *_export* folder inside the directory from which the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a MacBook Pro (2.3 GHz Intel Core i9, 8 cores), the *watch* method execution time is ~52 ms and it is called ~11.8 times per second.
diff --git a/docs/use_cases/gaze_based_candidate_selection/context.md b/docs/use_cases/gaze_based_candidate_selection/context.md
new file mode 100644
index 0000000..96547ea
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/context.md
@@ -0,0 +1,7 @@
+Data playback context
+======================
+
+The context handles incoming eye tracker data before passing them to a processing pipeline.
+
+## data_playback_context.json
+
diff --git a/docs/use_cases/gaze_based_candidate_selection/introduction.md b/docs/use_cases/gaze_based_candidate_selection/introduction.md
new file mode 100644
index 0000000..da8d6f9
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/introduction.md
@@ -0,0 +1,12 @@
+Post-processing screen-based eye tracker data
+=================================================
+
+**ArGaze** enabled ...
+
+The following use case has integrated ...
+
+## Background
+
+## Environment
+
+## Setup
diff --git a/docs/use_cases/gaze_based_candidate_selection/observers.md b/docs/use_cases/gaze_based_candidate_selection/observers.md
new file mode 100644
index 0000000..a1f1fce
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/observers.md
@@ -0,0 +1,6 @@
+Metrics and video recording
+===========================
+
+Observers are attached to pipeline steps to be notified when a method is called.
+
+## observers.py
diff --git a/docs/use_cases/gaze_based_candidate_selection/pipeline.md b/docs/use_cases/gaze_based_candidate_selection/pipeline.md
new file mode 100644
index 0000000..6fae01a
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/pipeline.md
@@ -0,0 +1,6 @@
+Post processing pipeline
+========================
+
+The pipeline processes gaze data to enable gaze analysis.
+
+## post_processing_pipeline.json
diff --git a/docs/use_cases/pilot_gaze_monitoring/context.md b/docs/use_cases/pilot_gaze_monitoring/context.md
index 417ed13..477276d 100644
--- a/docs/use_cases/pilot_gaze_monitoring/context.md
+++ b/docs/use_cases/pilot_gaze_monitoring/context.md
@@ -1,12 +1,11 @@
-Live streaming context
-======================
+Data capture context
+====================
-The context handles pipeline inputs.
+The context handles incoming eye tracker data before passing them to a processing pipeline.
## live_streaming_context.json
-For this use case we need to connect to a Tobii Pro Glasses 2 device.
-**ArGaze** provides a context class to live stream data from this device.
+For this use case we need to connect to a Tobii Pro Glasses 2 device: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to capture data from this device.
While *address*, *project*, *participant* and *configuration* entries are specific to the [TobiiProGlasses2.LiveStream](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.LiveStream) class, *name*, *pipeline* and *observers* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
@@ -39,4 +38,4 @@ While *address*, *project*, *participant* and *configuration* entries are specif
The [live_processing_pipeline.json](pipeline.md) file mentioned above is described in the next chapter.
-The observers objects are defined into the [observers.py](observers.md) file that is described in a next chapter. \ No newline at end of file
+The *IvyBus* observer object is defined in the [observers.py](observers.md) file, which is described in a later chapter. \ No newline at end of file
diff --git a/docs/use_cases/pilot_gaze_monitoring/introduction.md b/docs/use_cases/pilot_gaze_monitoring/introduction.md
index 453a443..7e88c69 100644
--- a/docs/use_cases/pilot_gaze_monitoring/introduction.md
+++ b/docs/use_cases/pilot_gaze_monitoring/introduction.md
@@ -30,17 +30,18 @@ Finally, fixation events were sent in real-time through [Ivy bus middleware](htt
## Setup
-The setup to integrate **ArGaze** to the experiment is defined by 3 main files:
+The setup to integrate **ArGaze** into the experiment is defined by 3 main files detailed in the next chapters:
* The context file that captures gaze data and scene camera video: [live_streaming_context.json](context.md)
* The pipeline file that processes gaze data and scene camera video: [live_processing_pipeline.json](pipeline.md)
* The observers file that sends fixation events via Ivy bus middleware: [observers.py](observers.md)
-As any **ArGaze** setup, it is loaded by executing the following command:
+As any **ArGaze** setup, it is loaded by executing the [*load* command](../../user_guide/utils/main_commands.md):
```shell
python -m argaze load live_streaming_context.json
```
-## Performance
+This command opens a GUI window that allows starting gaze calibration, launching recording and monitoring gaze mapping. Another window is also opened to display gaze mapping onto the PFD screen.
+![ArGaze load GUI for Haiku](../../img/argaze_load_gui_haiku.png)
diff --git a/docs/use_cases/pilot_gaze_monitoring/observers.md b/docs/use_cases/pilot_gaze_monitoring/observers.md
index 2e3f394..5f5bc78 100644
--- a/docs/use_cases/pilot_gaze_monitoring/observers.md
+++ b/docs/use_cases/pilot_gaze_monitoring/observers.md
@@ -1,8 +1,12 @@
Fixation events sending
=======================
+Observers are attached to pipeline steps to be notified when a method is called.
+
## observers.py
+For this use case we need to enable [Ivy bus communication](https://gitlab.com/ivybus/ivy-python/) to log ArUco detection results (on *ArUcoCamera.on_watch* call) and fixation identification with AOI matching (on *ArUcoCamera.on_look* call).
+
```python
import logging
diff --git a/docs/use_cases/pilot_gaze_monitoring/pipeline.md b/docs/use_cases/pilot_gaze_monitoring/pipeline.md
index 8f8dad0..1450fed 100644
--- a/docs/use_cases/pilot_gaze_monitoring/pipeline.md
+++ b/docs/use_cases/pilot_gaze_monitoring/pipeline.md
@@ -5,8 +5,7 @@ The pipeline processes camera image and gaze data to enable gaze mapping and gaz
## live_processing_pipeline.json
-For this use case we need to detect ArUco markers to enable gaze mapping.
-**ArGaze** provides the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class to setup an [ArUco markers pipeline](../../user_guide/aruco_marker_pipeline/introduction.md).
+For this use case we need to detect ArUco markers to enable gaze mapping: **ArGaze** provides the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class to set up an [ArUco markers pipeline](../../user_guide/aruco_marker_pipeline/introduction.md).
```json
{
@@ -37,12 +36,6 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
"PIC_PFD": {
"size": [960, 1080],
"background": "PIC_PFD.png",
- "gaze_movement_identifier": {
- "argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
- "deviation_max_threshold": 50,
- "duration_min_threshold": 150
- }
- },
"layers": {
"Main": {
"aoi_scene": "PIC_PFD.svg"
@@ -56,9 +49,7 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
}
}
}
- },
- "angle_tolerance": 15.0,
- "distance_tolerance": 10.0
+ }
}
},
"layers": {
@@ -119,7 +110,13 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
}
},
"observers": {
- "observers.ArUcoCameraLogger": {}
+ "observers.ArUcoCameraLogger": {},
+ "argaze.utils.UtilsFeatures.LookPerformanceRecorder": {
+ "path": "_export/look_performance.csv"
+ },
+ "argaze.utils.UtilsFeatures.WatchPerformanceRecorder": {
+ "path": "_export/watch_performance.csv"
+ }
}
}
}
@@ -127,10 +124,12 @@ For this use case we need to detect ArUco markers to enable gaze mapping.
All the files mentioned above are described below.
-The observers objects are defined into the [observers.py](observers.md) file that is described in the next chapter.
+The *ArUcoCameraLogger* observer object is defined in the [observers.py](observers.md) file, which is described in the next chapter.
## optic_parameters.json
+This file defines the Tobii Pro Glasses 2 scene camera optic parameters, which have been calculated as explained in [the camera calibration chapter](../../user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md).
+
```json
{
"rms": 0.6688921504088245,
@@ -167,15 +166,19 @@ The observers objects are defined into the [observers.py](observers.md) file tha
## detector_parameters.json
+This file defines the ArUco detector parameters as explained in [the detection improvement chapter](../../user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md).
+
```json
{
"adaptiveThreshConstant": 7,
- "useAruco3Detection": 1
+ "useAruco3Detection": true
}
```
## aruco_scene.obj
+This file defines where the ArUco markers are placed in the cockpit geometry. Markers' positions have been edited in [Blender software](https://www.blender.org/) from a 3D scan of the cockpit, then exported in OBJ format.
+
```obj
# Blender v3.0.1 OBJ File: 'scene.blend'
# www.blender.org
@@ -239,6 +242,8 @@ f 29 30 32 31
## Cockpit.obj
+This file defines where the AOI are placed in the cockpit geometry. AOI positions have been edited in [Blender software](https://www.blender.org/) from a 3D scan of the cockpit, then exported in OBJ format.
+
```obj
# Blender v3.0.1 OBJ File: 'scene.blend'
# www.blender.org
@@ -274,10 +279,14 @@ f 13 14 16 15
## PIC_PFD.png
+This file is a screenshot of the PFD screen used to monitor where the gaze is projected after gaze mapping processing.
+
![PFD frame background](../../img/haiku_PIC_PFD_background.png)
## PIC_PFD.svg
+This file defines where the AOI are placed in the PFD frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the PFD screen, then exported in SVG format.
+
```svg
<svg>
<rect id="PIC_PFD_Air_Speed" x="93.228" y="193.217" width="135.445" height="571.812"/>
@@ -288,3 +297,15 @@ f 13 14 16 15
<rect id="PIC_PFD_Vertical_Speed" x="819.913" y="193.217" width="85.185" height="609.09"/>
</svg>
```
+
+## look_performance.csv
+
+This file contains the logs of the *ArUcoCamera.look* method execution. It is saved in an *_export* folder inside the directory from which the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a Jetson Xavier computer, the *look* method execution time is ~0.5 ms and it is called ~100 times per second.
+
+## watch_performance.csv
+
+This file contains the logs of the *ArUcoCamera.watch* method execution. It is saved in an *_export* folder inside the directory from which the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+
+On a Jetson Xavier computer, the *watch* method execution time is ~50 ms and it is called ~10 times per second.
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
index 975f278..311916b 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
@@ -5,7 +5,7 @@ As explain in the [OpenCV ArUco documentation](https://docs.opencv.org/4.x/d1/dc
## Load ArUcoDetector parameters
-[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded with a dedicated JSON entry.
Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file with ArUco detector parameters:
@@ -18,7 +18,7 @@ Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoM
"dictionary": "DICT_APRILTAG_16h5",
"parameters": {
"adaptiveThreshConstant": 10,
- "useAruco3Detection": 1
+ "useAruco3Detection": true
}
},
...
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
index 625f257..e9ce740 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
@@ -134,7 +134,7 @@ Below, an optic_parameters JSON file example:
## Load and display optic parameters
-[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file where optic parameters are loaded and displayed:
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
index c81d57d..f258e04 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
@@ -74,38 +74,83 @@ from argaze import ArFeatures
...
```
-## Pipeline execution outputs
+## Pipeline execution
-The [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method returns data about pipeline execution.
+### Detect ArUco markers, estimate scene pose and project 3D AOI
+
+Pass each camera image with timestamp information to the [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method to execute the whole pipeline dedicated to ArUco marker detection, scene pose estimation and 3D AOI projection.
+
+!!! warning "Mandatory"
+
+ The [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method must be called from a *try* block to catch pipeline exceptions.
```python
-# Assuming that timestamped images are available
+# Assuming that Full HD (1920x1080) images are available with timestamp values
...:
+ # Edit timestamped image
+ timestamped_image = DataFeatures.TimestampedImage(image, timestamp=timestamp)
+
try:
- # Watch image with ArUco camera
- aruco_camera.watch(image, timestamp=timestamp)
+ # Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
+ aruco_camera.watch(timestamped_image)
# Do something with pipeline exception
except Exception as e:
...
- # Do something with detected_markers
- ... aruco_camera.aruco_detector.detected_markers()
+ # Display ArUcoCamera frame image to display detected ArUco markers, scene pose, 2D AOI projection and ArFrame visualization.
+ ... aruco_camera.image()
+```
+
+### Detection outputs
+
+The [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method returns data about pipeline execution.
+
+```python
+# Assuming that watch method has been called
+
+# Do something with detected_markers
+... aruco_camera.aruco_detector.detected_markers()
```
Let's understand the meaning of each returned data.
-### *aruco_camera.aruco_detector.detected_markers()*
+#### *aruco_camera.aruco_detector.detected_markers()*
A dictionary containing all detected markers is provided by [ArUcoDetector](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector) class.
+### Analyse timestamped gaze positions into the camera frame
+
+As mentioned above, [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) and, so, benefits from all the services described in the [gaze analysis pipeline section](../../gaze_analysis_pipeline/introduction.md).
+
+Particularly, timestamped gaze positions can be passed one by one to the [ArUcoCamera.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole pipeline dedicated to gaze analysis.
+
+!!! warning "Mandatory"
+
+ The [ArUcoCamera.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
+
+```python
+# Assuming that timestamped gaze positions are available
+...
+
+ try:
+
+ # Look ArUcoCamera frame at a timestamped gaze position
+ aruco_camera.look(timestamped_gaze_position)
+
+ # Do something with pipeline exception
+ except Exception as e:
+
+ ...
+```
+
## Setup ArUcoCamera image parameters
-Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.
```python
# Assuming ArUcoCamera is loaded
@@ -133,4 +178,18 @@ aruco_camera_image = aruco_camera.image(**image_parameters)
```
!!! note
- [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) and, so, benefits from all image parameters described in [gaze analysis pipeline visualization section](../../gaze_analysis_pipeline/visualization.md). \ No newline at end of file
+ [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) and, so, benefits from all image parameters described in [gaze analysis pipeline visualization section](../../gaze_analysis_pipeline/visualization.md).
+
+
+## Display ArUcoScene frames
+
+All [ArUcoScene](../../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frame images can be displayed like any [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) image.
+
+```python
+ ...
+
+ # Display all ArUcoScene frames
+ for frame in aruco_camera.scene_frames():
+
+ ... frame.image()
+``` \ No newline at end of file
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
index 46422b8..78a513a 100644
--- a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
@@ -1,7 +1,7 @@
Describe 3D AOI
===============
-Now that the [scene pose is estimated](aruco_marker_description.md) thanks to ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described into the same 3D referential.
+Now that the [scene pose is estimated](aruco_marker_description.md) considering the ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described in the same 3D referential.
In the example scene, the two screens—the control panel and the window—are considered to be areas of interest.
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md
index 7323f2e..3a029b0 100644
--- a/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md
@@ -69,7 +69,8 @@ Here is the previous extract where "Left_Screen" and "Right_Screen" AOI are defi
}
}
}
- }
+ },
+ "copy_background_into_scenes_frames": true
...
}
}
@@ -96,40 +97,18 @@ The names of 3D AOI **and** their related [ArFrames](../../argaze.md/#argaze.ArF
[ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frame layers are projected into their dedicated [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) layers when the JSON configuration file is loaded.
-## Pipeline execution
-
-### Map ArUcoCamera image into ArUcoScenes frames
-
-After the timestamped camera image is passed to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method, it is possible to apply a perspective transformation in order to project the watched image into each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) [frame's background](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
-
-```python
-# Assuming that Full HD (1920x1080) timestamped images are available
-...:
-
- # Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
- aruco_camera.watch(timestamped_image)
+### *copy_background_into_scenes_frames*
- # Map watched image into ArUcoScene frames background
- aruco_camera.map(timestamp=timestamp)
-```
+When the timestamped camera image is passed to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera), it is possible to apply a perspective transformation in order to project the watched image into each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) [frame's background](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
-### Analyze timestamped gaze positions into ArUcoScene frames
+## Pipeline execution
[ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frames benefit from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
!!! note
- Timestamped [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) passed to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method are projected into [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frames if applicable.
-
-### Display each ArUcoScene frames
-
-All [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frames image can be displayed as any [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame).
-
-```python
- ...
+ Timestamped [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) passed to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) are automatically projected into [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frames if applicable.
- # Display all ArUcoScene frames
- for frame in aruco_camera.scene_frames():
+Each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frame image is displayed in a separate window.
- ... frame.image()
-``` \ No newline at end of file
+![ArGaze load GUI](../../img/argaze_load_gui_opencv_frame.png) \ No newline at end of file
diff --git a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
index f4bd2d4..56846e2 100644
--- a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
@@ -1,17 +1,17 @@
-Load and execute pipeline
+Edit and execute pipeline
=========================
-Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected thanks to [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.
+Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected by the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.
As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class also benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
-![ArUco camera frame](../../img/aruco_camera_frame.png)
+Once defined, an ArUco marker pipeline needs to be embedded inside a context that will provide it with both gaze positions and camera images to process.
-## Load JSON configuration file
+![ArUco camera frame](../../img/aruco_camera_frame.png)
-An [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) pipeline can be loaded from a JSON configuration file thanks to [argaze.load](../../argaze.md/#argaze.load) package method.
+## Edit JSON configuration
-Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file example:
+Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration example:
```json
{
@@ -52,19 +52,7 @@ Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCam
}
```
-Then, here is how to load the JSON file:
-
-```python
-import argaze
-
-# Load ArUcoCamera
-with argaze.load('./configuration.json') as aruco_camera:
-
- # Do something with ArUcoCamera
- ...
-```
-
-Now, let's understand the meaning of each JSON entry.
+Let's understand the meaning of each JSON entry.
### argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera
@@ -101,62 +89,32 @@ The usual [ArFrame visualization parameters](../gaze_analysis_pipeline/visualiza
## Pipeline execution
-### Detect ArUco markers, estimate scene pose and project 3D AOI
-
-Pass each camera image with timestamp information to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method to execute the whole pipeline dedicated to ArUco marker detection, scene pose estimation and 3D AOI projection.
-
-!!! warning "Mandatory"
-
- The [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method must be called from a *try* block to catch pipeline exceptions.
-
-```python
-# Assuming that Full HD (1920x1080) images are available with timestamp values
-...:
-
- # Edit timestamped image
- timestamped_image = DataFeatures.TimestampedImage(image, timestamp=timestamp)
-
- try:
+A pipeline needs to be embedded into a context to be executed.
- # Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
- aruco_camera.watch(timestamped_image)
+Copy the pipeline configuration defined above inside the following context configuration.
- # Do something with pipeline exception
- except Exception as e:
-
- ...
-
- # Display ArUcoCamera frame image to display detected ArUco markers, scene pose, 2D AOI projection and ArFrame visualization.
- ... aruco_camera.image()
+```json
+{
+ "argaze.utils.contexts.OpenCV.Movie": {
+ "name": "Movie player",
+ "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
+ "pipeline": JSON CONFIGURATION
+ }
+}
```
-### Analyse timestamped gaze positions into the camera frame
-
-As mentioned above, [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and, so, benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
-
-Particularly, timestamped gaze positions can be passed one by one to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole pipeline dedicated to gaze analysis.
+Then, use the [*load* command](../utils/main_commands.md) to execute the context.
-!!! warning "Mandatory"
-
- The [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
-
-```python
-# Assuming that timestamped gaze positions are available
-...
-
- try:
+```shell
+python -m argaze load CONFIGURATION
+```
- # Look ArUcoCamera frame at a timestamped gaze position
- aruco_camera.look(timestamped_gaze_position)
+This command should open a GUI window with the detected markers and identified cursor fixation circles when the mouse moves over the window.
- # Do something with pipeline exception
- except Exception as e:
-
- ...
-```
+![ArGaze load GUI](../../img/argaze_load_gui_opencv_pipeline.png)
!!! note ""
- At this point, the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method only detects ArUco marker and the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArCamera.look) method only processes gaze movement identification without any AOI support as no scene description is provided into the JSON configuration file.
+ At this point, the pipeline only processes gaze movement identification without any AOI support as no scene description is provided into the JSON configuration file.
Read the next chapters to learn [how to estimate scene pose](pose_estimation.md), [how to describe a 3D scene's AOI](aoi_3d_description.md) and [how to project them into the camera frame](aoi_3d_projection.md). \ No newline at end of file
diff --git a/docs/user_guide/aruco_marker_pipeline/introduction.md b/docs/user_guide/aruco_marker_pipeline/introduction.md
index ef2e4da..54e1a1f 100644
--- a/docs/user_guide/aruco_marker_pipeline/introduction.md
+++ b/docs/user_guide/aruco_marker_pipeline/introduction.md
@@ -9,6 +9,9 @@ The OpenCV library provides a module to detect fiducial markers in a picture and
The ArGaze [ArUcoMarker submodule](../../argaze.md/#argaze.ArUcoMarker) eases markers creation, markers detection, and 3D scene pose estimation through a set of high-level classes.
+!!! warning "Read eye tracking context and gaze analysis pipeline sections before"
+    This section assumes that incoming gaze positions are provided by an [eye tracking context](../eye_tracking_context/introduction.md) and that the [gaze analysis pipeline](../gaze_analysis_pipeline/introduction.md) section has already been read.
+
First, let's look at the schema below. It gives an overview of the main notions involved in the following chapters.
![ArUco marker pipeline](../../img/aruco_marker_pipeline.png)
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
new file mode 100644
index 0000000..a543bc7
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
@@ -0,0 +1,195 @@
+Define a context class
+======================
+
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic base class interface to handle incoming eye tracker data before passing them to a processing pipeline, following the [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers).
+
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides control features to stop or pause working threads, and performance assessment features to measure how many times processing steps are called and how much time they take.
+
+Besides, there is also a [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to implement a device-specific calibration process.
+
+In the same way, there is a [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines *duration* and *progression* properties to report a record's length and its playback advancement.
+
+Finally, a specific eye tracking context can be defined in a Python file by writing a class that inherits from either the [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) or [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class.
+
+## Write data capture context
+
+Here is a data capture context example that processes gaze positions and camera images in two separate threads:
+
+```python
+import threading
+
+from argaze import ArFeatures, DataFeatures
+
+class DataCaptureExample(ArFeatures.DataCaptureContext):
+
+ @DataFeatures.PipelineStepInit
+ def __init__(self, **kwargs):
+
+ # Init DataCaptureContext class
+ super().__init__()
+
+ # Init private attribute
+ self.__parameter = ...
+
+ @property
+ def parameter(self):
+ """Any context specific parameter."""
+ return self.__parameter
+
+ @parameter.setter
+ def parameter(self, parameter):
+ self.__parameter = parameter
+
+ @DataFeatures.PipelineStepEnter
+ def __enter__(self):
+ """Start context."""
+
+        # Start context according to any specific parameter
+ ... self.parameter
+
+ # Start a gaze position capture thread
+ self.__gaze_thread = threading.Thread(target = self.__gaze_position_capture)
+ self.__gaze_thread.start()
+
+ # Start a camera image capture thread if applicable
+ self.__camera_thread = threading.Thread(target = self.__camera_image_capture)
+ self.__camera_thread.start()
+
+ return self
+
+ def __gaze_position_capture(self):
+ """Capture gaze position."""
+
+ # Capture loop
+ while self.is_running():
+
+            # Only process while not paused
+ if not self.is_paused():
+
+ # Assuming that timestamp, x and y values are available
+ ...
+
+ # Process timestamped gaze position
+ self._process_gaze_position(timestamp = timestamp, x = x, y = y)
+
+ # Wait some time eventually
+ ...
+
+ def __camera_image_capture(self):
+ """Capture camera image if applicable."""
+
+ # Capture loop
+ while self.is_running():
+
+            # Only process while not paused
+ if not self.is_paused():
+
+ # Assuming that timestamp, camera_image are available
+ ...
+
+ # Process timestamped camera image
+ self._process_camera_image(timestamp = timestamp, image = camera_image)
+
+ # Wait some time eventually
+ ...
+
+ @DataFeatures.PipelineStepExit
+ def __exit__(self, exception_type, exception_value, exception_traceback):
+ """End context."""
+
+ # Stop capture loops
+ self.stop()
+
+        # Wait for capture threads to terminate
+        self.__gaze_thread.join()
+        self.__camera_thread.join()
+
+ def calibrate(self):
+ """Handle device calibration process."""
+
+ ...
+```
+
+## Write data playback context
+
+Here is a data playback context example that reads gaze positions and camera images in the same thread:
+
+```python
+import threading
+
+from argaze import ArFeatures, DataFeatures
+
+class DataPlaybackExample(ArFeatures.DataPlaybackContext):
+
+ @DataFeatures.PipelineStepInit
+ def __init__(self, **kwargs):
+
+        # Init DataPlaybackContext class
+ super().__init__()
+
+ # Init private attribute
+ self.__parameter = ...
+
+ @property
+ def parameter(self):
+ """Any context specific parameter."""
+ return self.__parameter
+
+ @parameter.setter
+ def parameter(self, parameter):
+ self.__parameter = parameter
+
+ @DataFeatures.PipelineStepEnter
+ def __enter__(self):
+ """Start context."""
+
+        # Start context according to any specific parameter
+ ... self.parameter
+
+ # Start a data playback thread
+ self.__data_thread = threading.Thread(target = self.__data_playback)
+ self.__data_thread.start()
+
+ return self
+
+ def __data_playback(self):
+ """Playback gaze position and camera image if applicable."""
+
+ # Playback loop
+ while self.is_running():
+
+            # Only process while not paused
+ if not self.is_paused():
+
+ # Assuming that timestamp, camera_image are available
+ ...
+
+ # Process timestamped camera image
+ self._process_camera_image(timestamp = timestamp, image = camera_image)
+
+ # Assuming that timestamp, x and y values are available
+ ...
+
+ # Process timestamped gaze position
+ self._process_gaze_position(timestamp = timestamp, x = x, y = y)
+
+ # Wait some time eventually
+ ...
+
+ @DataFeatures.PipelineStepExit
+ def __exit__(self, exception_type, exception_value, exception_traceback):
+ """End context."""
+
+ # Stop playback loop
+ self.stop()
+
+        # Wait for playback thread to terminate
+        self.__data_thread.join()
+
+ @property
+ def duration(self) -> int|float:
+ """Get data duration."""
+ ...
+
+ @property
+ def progression(self) -> float:
+ """Get data playback progression between 0 and 1."""
+ ...
+```
+
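+Once a specific context class is defined, it can be referenced from a JSON configuration like any ready-made context. Here is a sketch that assumes the classes above are saved into a *my_context.py* file reachable from the working directory, and that a *pipeline.json* configuration exists next to it:
+
+```json
+{
+    "my_context.DataCaptureExample": {
+        "name": "My data capture context",
+        "parameter": ...,
+        "pipeline": "pipeline.json"
+    }
+}
+```
+
+Such a configuration can then be executed with the *load* command, or loaded from a script as explained in the [scripting chapter](scripting.md).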
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
new file mode 100644
index 0000000..d8eb389
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
@@ -0,0 +1,106 @@
+Script the context
+==================
+
+Context objects are accessible from a Python script.
+
+## Load configuration from JSON file
+
+A context configuration can be loaded from a JSON file using the [*load*](../../../argaze.md/#argaze.load) function.
+
+```python
+from argaze import load
+
+# Load a context
+with load(configuration_filepath) as context:
+
+ while context.is_running():
+
+ # Do something with context
+ ...
+
+        # Optionally wait for some time
+ ...
+```
+
+!!! note
+ The **with** statement enables context by calling its **enter** method then ensures that its **exit** method is always called at the end.
+
+## Load configuration from dictionary
+
+A context configuration can be loaded from a Python dictionary using the [*from_dict*](../../../argaze.md/#argaze.DataFeatures.from_dict) function.
+
+```python
+from argaze import DataFeatures
+
+import my_package
+
+# Set working directory to enable relative file path loading
+DataFeatures.set_working_directory('path/to/folder')
+
+# Edit a dict with context configuration
+configuration = {
+ "name": "My context",
+ "parameter": ...,
+ "pipeline": ...
+}
+
+# Load a context from a package
+with DataFeatures.from_dict(my_package.MyContext, configuration) as context:
+
+ while context.is_running():
+
+ # Do something with context
+ ...
+
+        # Optionally wait for some time
+ ...
+```
+
+## Manage context
+
+Check the context or the pipeline type to adapt features.
+
+```python
+from argaze import ArFeatures
+
+# Assuming the context is loaded and is running
+...
+
+ # Check context type
+
+ # Data capture case: calibration method is available
+ if issubclass(type(context), ArFeatures.DataCaptureContext):
+ ...
+
+ # Data playback case: playback methods are available
+ if issubclass(type(context), ArFeatures.DataPlaybackContext):
+ ...
+
+ # Check pipeline type
+
+    # Screen-based case: only gaze positions are processed
+ if issubclass(type(context.pipeline), ArFeatures.ArFrame):
+ ...
+
+    # Head-mounted case: camera images are also processed
+ if issubclass(type(context.pipeline), ArFeatures.ArCamera):
+ ...
+```
+
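+Beyond type checks, the control features inherited from the [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class can be used to pause and resume data processing. Here is a minimal sketch, assuming the context is loaded and running as above:
+
+```python
+import time
+
+# Assuming the context is loaded and is running
+...
+
+    # Pause the context processing for a while, then resume it
+    if not context.is_paused():
+
+        context.pause()
+
+        time.sleep(1)
+
+        context.resume()
+```
+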
+## Display context
+
+The context image can be displayed at low priority so as not to block pipeline processing.
+
+```python
+from argaze import DataFeatures
+
+# Assuming the context is loaded and is running
+...
+
+ # Display context if the pipeline is available
+ try:
+
+ ... = context.image(wait = False)
+
+ except DataFeatures.SharedObjectBusy:
+
+ pass
+```
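+
+For instance, the display can be done in a loop with an OpenCV window. Here is a minimal sketch, assuming the [OpenCV package](https://pypi.org/project/opencv-python/) is installed and that the loaded context exposes its *name* attribute:
+
+```python
+import cv2
+
+from argaze import load, DataFeatures
+
+# Load a context
+with load(configuration_filepath) as context:
+
+    # Create a window to display the context
+    cv2.namedWindow(context.name, cv2.WINDOW_AUTOSIZE)
+
+    # Display loop
+    while context.is_running():
+
+        try:
+
+            # Display context image at low priority
+            cv2.imshow(context.name, context.image(wait = False))
+
+        except DataFeatures.SharedObjectBusy:
+
+            pass
+
+        # Wait 40 ms
+        cv2.waitKey(40)
+```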
diff --git a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
index 026d287..959d955 100644
--- a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
@@ -3,7 +3,7 @@ Edit timestamped gaze positions
Whatever eye data comes from a file on disk or from a live stream, timestamped gaze positions are required before going further.
-![Timestamped gaze positions](../../img/timestamped_gaze_positions.png)
+![Timestamped gaze positions](../../../img/timestamped_gaze_positions.png)
## Import timestamped gaze positions from CSV file
@@ -28,8 +28,8 @@ for timestamped_gaze_position in ts_gaze_positions:
## Edit timestamped gaze positions from live stream
-Real-time gaze positions can be edited thanks to the [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
-Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited thanks to the Python [time package](https://docs.python.org/3/library/time.html).
+Real-time gaze positions can be edited using directly the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
+Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited using the Python [time package](https://docs.python.org/3/library/time.html).
```python
from argaze import GazeFeatures
@@ -64,12 +64,3 @@ start_time = time.time()
!!! warning "Free time unit"
Timestamps can either be integers or floats, seconds, milliseconds or what ever you need. The only concern is that all time values used in further configurations have to be in the same unit.
-
-<!--
-!!! note "Eyetracker connectors"
-
- [Read the use cases section to discover examples using specific eyetrackers](./user_cases/introduction.md).
-!-->
-
-!!! note ""
- Now we have timestamped gaze positions at expected format, read the next chapter to start learning [how to analyze them](./configuration_and_execution.md). \ No newline at end of file
diff --git a/docs/user_guide/eye_tracking_context/configuration_and_execution.md b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
new file mode 100644
index 0000000..e1123fb
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
@@ -0,0 +1,65 @@
+Edit and execute context
+========================
+
+The [utils.contexts module](../../argaze.md/#argaze.utils.contexts) provides ready-made contexts like:
+
+* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) data capture and data playback contexts,
+* [Pupil Labs](context_modules/pupil_labs.md) data capture context,
+* [OpenCV](context_modules/opencv.md) window cursor position capture, movie playback and camera capture,
+* [Random](context_modules/random.md) gaze position generator.
+
+## Edit JSON configuration
+
+Here is a JSON configuration that loads a [Random.GazePositionGenerator](../../argaze.md/#argaze.utils.contexts.Random.GazePositionGenerator) context:
+
+```json
+{
+ "argaze.utils.contexts.Random.GazePositionGenerator": {
+ "name": "Random gaze position generator",
+ "range": [1280, 720],
+ "pipeline": {
+ "argaze.ArFeatures.ArFrame": {
+ "size": [1280, 720]
+ }
+ }
+ }
+}
+```
+
+Let's understand the meaning of each JSON entry.
+
+### argaze.utils.contexts.Random.GazePositionGenerator
+
+The class name of the object being loaded from the [utils.contexts module](../../argaze.md/#argaze.utils.contexts).
+
+### *name*
+
+The name of the [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext). It is mostly useful for visualization purposes.
+
+### *range*
+
+The range of the gaze position being generated. This property is specific to the [Random.GazePositionGenerator](../../argaze.md/#argaze.utils.contexts.Random.GazePositionGenerator) class.
+
+### *pipeline*
+
+A minimal gaze processing pipeline that only draws the last gaze position.
+
+## Context execution
+
+A context can be loaded from a JSON configuration file using the [*load* command](../utils/main_commands.md).
+
+```shell
+python -m argaze load CONFIGURATION
+```
+
+This command should open a GUI window with a random yellow dot inside.
+
+![ArGaze load GUI](../../img/argaze_load_gui_random.png)
+
+!!! note ""
+
+    At this point, it is possible to load any ready-made context from the [utils.contexts](../../argaze.md/#argaze.utils.contexts) module.
+
+    However, the incoming gaze positions are not processed and gaze mapping would not be available for a head-mounted eye tracker context.
+
+    Read the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md) to learn how to process gaze positions, then the [ArUco markers pipeline section](../aruco_marker_pipeline/introduction.md) to learn how to enable gaze mapping with an ArUco markers setup.
diff --git a/docs/user_guide/eye_tracking_context/context_modules/opencv.md b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
new file mode 100644
index 0000000..7d73a03
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
@@ -0,0 +1,63 @@
+OpenCV
+======
+
+ArGaze provides ready-made contexts to process cursor positions over an OpenCV window, movie images, and camera images.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Cursor
+
+::: argaze.utils.contexts.OpenCV.Cursor
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Cursor": {
+ "name": "Open CV cursor",
+ "pipeline": ...
+ }
+}
+```
+
+## Movie
+
+::: argaze.utils.contexts.OpenCV.Movie
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Movie": {
+ "name": "Open CV movie",
+ "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
+ "pipeline": ...
+ }
+}
+```
+
+## Camera
+
+::: argaze.utils.contexts.OpenCV.Camera
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Camera": {
+ "name": "Open CV camera",
+ "identifier": 0,
+ "pipeline": ...
+ }
+}
+``` \ No newline at end of file
diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md
new file mode 100644
index 0000000..d2ec336
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md
@@ -0,0 +1,32 @@
+Pupil Labs
+==========
+
+ArGaze provides a ready-made context to work with Pupil Labs devices.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.PupilLabs.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.PupilLabs.LiveStream": {
+ "name": "Pupil Labs live stream",
+ "project": "my_experiment",
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/context_modules/random.md b/docs/user_guide/eye_tracking_context/context_modules/random.md
new file mode 100644
index 0000000..89d7501
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/random.md
@@ -0,0 +1,32 @@
+Random
+======
+
+ArGaze provides a ready-made context to generate random gaze positions.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Gaze Position Generator
+
+::: argaze.utils.contexts.Random.GazePositionGenerator
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.Random.GazePositionGenerator": {
+ "name": "Random gaze position generator",
+ "range": [1280, 720],
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
new file mode 100644
index 0000000..6ff44bd
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
@@ -0,0 +1,59 @@
+Tobii Pro Glasses 2
+===================
+
+ArGaze provides a ready-made context to work with Tobii Pro Glasses 2 devices.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.TobiiProGlasses2.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses2.LiveStream": {
+ "name": "Tobii Pro Glasses 2 live stream",
+ "address": "10.34.0.17",
+ "project": "my_experiment",
+ "participant": "subject-A",
+ "configuration": {
+ "sys_ec_preset": "Indoor",
+ "sys_sc_width": 1920,
+ "sys_sc_height": 1080,
+ "sys_sc_fps": 25,
+ "sys_sc_preset": "Auto",
+ "sys_et_freq": 50,
+ "sys_mems_freq": 100
+ },
+ "pipeline": ...
+ }
+}
+```
+
+## Segment Playback
+
+::: argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
+ "segment": "./src/argaze/utils/demo/tobii_record/segments/1",
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/introduction.md b/docs/user_guide/eye_tracking_context/introduction.md
new file mode 100644
index 0000000..a6208b2
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/introduction.md
@@ -0,0 +1,19 @@
+Overview
+========
+
+This section explains how to handle eye tracker data from various sources, such as live streams or archived files, before passing them to a processing pipeline. These various usages are covered by the notion of **eye tracking context**.
+
+To use a ready-made eye tracking context, you only need to know:
+
+* [How to edit and execute a context](configuration_and_execution.md)
+
+More advanced features are also explained like:
+
+* [How to script a context](./advanced_topics/scripting.md),
+* [How to define a context](./advanced_topics/context_definition.md),
+* [How to edit timestamped gaze positions](advanced_topics/timestamped_gaze_positions_edition.md).
+
+To understand more deeply how a context works, note that the schema below mentions the *enter* and *exit* methods, which are related to the notion of [Python context manager](https://docs.python.org/3/reference/datamodel.html#context-managers).
+
+![ArContext class](../../img/eye_tracker_context.png)
+
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
index 4970dba..effee18 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
@@ -7,7 +7,7 @@ The calibration algorithm can be selected by instantiating a particular [GazePos
## Enable ArFrame calibration
-Gaze position calibration can be enabled thanks to a dedicated JSON entry.
+Gaze position calibration can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where a [Linear Regression](../../../argaze.md/#argaze.GazeAnalysis.LinearRegression) calibration algorithm is selected with no parameters:
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
index 026cb3f..843274a 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
@@ -66,7 +66,28 @@ from argaze import ArFeatures
...
```
-## Pipeline execution updates
+## Pipeline execution
+
+Timestamped [GazePositions](../../../argaze.md/#argaze.GazeFeatures.GazePosition) have to be passed one by one to the [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole instantiated pipeline.
+
+!!! warning "Mandatory"
+
+ The [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
+
+```python
+# Assuming that timestamped gaze positions are available
+...
+
+ try:
+
+ # Look ArFrame at a timestamped gaze position
+ ar_frame.look(timestamped_gaze_position)
+
+ # Do something with pipeline exception
+ except Exception as e:
+
+ ...
+```
Calling [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method leads to update many data into the pipeline.
@@ -137,7 +158,7 @@ Last [GazeMovement](../../../argaze.md/#argaze.GazeFeatures.GazeMovement) identi
This could also be the current gaze movement if [ArFrame.filter_in_progress_identification](../../../argaze.md/#argaze.ArFeatures.ArFrame) attribute is false.
In that case, the last gaze movement *finished* flag is false.
-Then, the last gaze movement type can be tested thanks to [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
+Then, the last gaze movement type can be tested with [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
### *ar_frame.is_analysis_available()*
@@ -161,7 +182,7 @@ This an iterator to access to all aoi scan path analysis. Notice that each aoi s
## Setup ArFrame image parameters
-[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.
```python
# Assuming ArFrame is loaded
@@ -186,3 +207,34 @@ ar_frame_image = ar_frame.image(**image_parameters)
# Do something with ArFrame image
...
```
+
+Then, [ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method can be called in various situations.
+
+### Live window display
+
+While timestamped gaze positions are processed by [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method, it is possible to display the [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) image with the [OpenCV package](https://pypi.org/project/opencv-python/).
+
+```python
+import cv2
+
+def main():
+
+ # Assuming ArFrame is loaded
+ ...
+
+ # Create a window to display ArFrame
+ cv2.namedWindow(ar_frame.name, cv2.WINDOW_AUTOSIZE)
+
+ # Assuming that timestamped gaze positions are being processed by ArFrame.look method
+ ...
+
+ # Update ArFrame image display
+ cv2.imshow(ar_frame.name, ar_frame.image())
+
+ # Wait 10 ms
+ cv2.waitKey(10)
+
+if __name__ == '__main__':
+
+ main()
+``` \ No newline at end of file
diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
index be27c69..c2a6ac3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
@@ -5,7 +5,7 @@ Once [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) is [configured](confi
![Layer](../../img/ar_layer.png)
-## Add ArLayer to ArFrame JSON configuration file
+## Add ArLayer to ArFrame JSON configuration
The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class defines a space where to match fixations with AOI and inside which those matches need to be analyzed.
@@ -100,6 +100,11 @@ The second [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step a
Once gaze movements are matched to AOI, they are automatically appended to the AOIScanPath if required.
+!!! warning "GazeFeatures.OutsideAOI"
+    When a fixation does not look at any AOI, a step associated with a special AOI called [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) is added. As long as fixations keep looking outside all AOI, the corresponding fixations/saccades are stored in this step. In this way, further analyses take those extra [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) steps into account.
+
+    This is particularly important when calculating transition matrices: without it, an arc could appear between two AOIs even though the gaze actually fixated outside of them in the meantime. For example, a sequence looking at AOI A, then outside, then at AOI B is recorded as A → OutsideAOI → B rather than as a direct A → B transition.
+
The [AOIScanPath.duration_max](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.duration_max) attribute is the duration from which older AOI scan steps are removed each time new AOI scan steps are added.
!!! note "Optional"
diff --git a/docs/user_guide/gaze_analysis_pipeline/background.md b/docs/user_guide/gaze_analysis_pipeline/background.md
index 900d151..11285e3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/background.md
+++ b/docs/user_guide/gaze_analysis_pipeline/background.md
@@ -7,7 +7,7 @@ Background is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)
## Load and display ArFrame background
-[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where a background picture is loaded and displayed:
@@ -28,7 +28,7 @@ Here is an extract from the JSON ArFrame configuration file where a background p
```
!!! note
- As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+ As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
index 57a9d71..58919e5 100644
--- a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
@@ -1,15 +1,15 @@
-Load and execute pipeline
+Edit and execute pipeline
=========================
The [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) class defines a rectangular area where timestamped gaze positions are projected in and inside which they need to be analyzed.
-![Frame](../../img/ar_frame.png)
+Once defined, a gaze analysis pipeline needs to be embedded inside a context that will provide it with the gaze positions to process.
-## Load JSON configuration file
+![Frame](../../img/ar_frame.png)
-An [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline can be loaded from a JSON configuration file thanks to the [argaze.load](../../argaze.md/#argaze.load) package method.
+## Edit JSON configuration
-Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) configuration file example:
+Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) configuration example:
```json
{
@@ -35,19 +35,7 @@ Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) conf
}
```
-Then, here is how to load the JSON file:
-
-```python
-import argaze
-
-# Load ArFrame
-with argaze.load('./configuration.json') as ar_frame:
-
- # Do something with ArFrame
- ...
-```
-
-Now, let's understand the meaning of each JSON entry.
+Let's understand the meaning of each JSON entry.
### argaze.ArFeatures.ArFrame
@@ -103,28 +91,32 @@ In the example file, the chosen analysis algorithms are the [Basic](../../argaze
## Pipeline execution
-Timestamped [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) have to be passed one by one to the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole instantiated pipeline.
+A pipeline needs to be embedded into a context to be executed.
-!!! warning "Mandatory"
+Copy the gaze analysis pipeline configuration defined above inside the following context configuration.
- The [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
+```json
+{
+ "argaze.utils.contexts.Random.GazePositionGenerator": {
+ "name": "Random gaze position generator",
+ "range": [1920, 1080],
+ "pipeline": JSON CONFIGURATION
+ }
+}
+```
-```python
-# Assuming that timestamped gaze positions are available
-...
+Then, use the [*load* command](../utils/main_commands.md) to execute the context.
- try:
+```shell
+python -m argaze load CONFIGURATION
+```
- # Look ArFrame at a timestamped gaze position
- ar_frame.look(timestamped_gaze_position)
+This command should open a GUI window with a random yellow dot and circles drawn around the identified fixations.
+
+![ArGaze load GUI](../../img/argaze_load_gui_random_pipeline.png)
- # Do something with pipeline exception
- except Exception as e:
-
- ...
-```
!!! note ""
- At this point, the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method only processes gaze movement identification and scan path analysis without any AOI neither any recording or visualization supports.
+    At this point, the pipeline only processes gaze movement identification and scan path analysis, without any AOI analysis, recording, or visualization support.
Read the next chapters to learn how to [describe AOI](aoi_2d_description.md), [add AOI analysis](aoi_analysis.md), [record gaze analysis](recording.md) and [visualize pipeline steps](visualization.md). \ No newline at end of file
diff --git a/docs/user_guide/gaze_analysis_pipeline/heatmap.md b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
index 2057dbe..77b2be0 100644
--- a/docs/user_guide/gaze_analysis_pipeline/heatmap.md
+++ b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
@@ -7,7 +7,7 @@ Heatmap is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pip
## Enable and display ArFrame heatmap
-[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where heatmap is enabled and displayed:
@@ -31,7 +31,7 @@ Here is an extract from the JSON ArFrame configuration file where heatmap is ena
}
```
!!! note
- [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+ [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/introduction.md b/docs/user_guide/gaze_analysis_pipeline/introduction.md
index c12f669..1b06ff6 100644
--- a/docs/user_guide/gaze_analysis_pipeline/introduction.md
+++ b/docs/user_guide/gaze_analysis_pipeline/introduction.md
@@ -1,7 +1,10 @@
Overview
========
-This section explains how to create gaze analysis pipelines for various use cases.
+This section explains how to process incoming gaze positions through a **gaze analysis pipeline**.
+
+!!! warning "Read eye tracking context section before"
+ This section assumes that the incoming gaze positions are provided by an [eye tracking context](../eye_tracking_context/introduction.md).
First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
@@ -9,8 +12,7 @@ First, let's look at the schema below: it gives an overview of the main notions
To build your own gaze analysis pipeline, you need to know:
-* [How to edit timestamped gaze positions](timestamped_gaze_positions_edition.md),
-* [How to load and execute gaze analysis pipeline](configuration_and_execution.md),
+* [How to edit and execute a pipeline](configuration_and_execution.md),
* [How to describe AOI](aoi_2d_description.md),
* [How to enable AOI analysis](aoi_analysis.md),
* [How to visualize pipeline steps outputs](visualization.md),
diff --git a/docs/user_guide/gaze_analysis_pipeline/recording.md b/docs/user_guide/gaze_analysis_pipeline/recording.md
index 826442f..2a92403 100644
--- a/docs/user_guide/gaze_analysis_pipeline/recording.md
+++ b/docs/user_guide/gaze_analysis_pipeline/recording.md
@@ -52,7 +52,7 @@ class ScanPathAnalysisRecorder(UtilsFeatures.FileWriter):
# Init FileWriter
super().__init__(**kwargs)
- # Edit hearder line
+ # Edit header line
self.header = "Timestamp (ms)", "Duration (ms)", "Steps number"
def on_look(self, timestamp, ar_frame, exception):
diff --git a/docs/user_guide/gaze_analysis_pipeline/visualization.md b/docs/user_guide/gaze_analysis_pipeline/visualization.md
index 6b9805c..08b5465 100644
--- a/docs/user_guide/gaze_analysis_pipeline/visualization.md
+++ b/docs/user_guide/gaze_analysis_pipeline/visualization.md
@@ -5,9 +5,9 @@ Visualization is not a pipeline step, but each [ArFrame](../../argaze.md/#argaze
![ArFrame visualization](../../img/visualization.png)
-## Add image parameters to ArFrame JSON configuration file
+## Add image parameters to ArFrame JSON configuration
-[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a dedicated JSON entry.
+[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file with a sample where image parameters are added:
@@ -82,37 +82,6 @@ Here is an extract from the JSON ArFrame configuration file with a sample where
Most of *image_parameters* entries work if related ArFrame/ArLayer pipeline steps are enabled.
For example, a JSON *draw_scan_path* entry needs GazeMovementIdentifier and ScanPath steps to be enabled.
-Then, [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method can be called in various situations.
-
-## Live window display
-
-While timestamped gaze positions are processed by [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method, it is possible to display the [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) image thanks to the [OpenCV package](https://pypi.org/project/opencv-python/).
-
-```python
-import cv2
-
-def main():
-
- # Assuming ArFrame is loaded
- ...
-
- # Create a window to display ArFrame
- cv2.namedWindow(ar_frame.name, cv2.WINDOW_AUTOSIZE)
-
- # Assuming that timestamped gaze positions are being processed by ArFrame.look method
- ...
-
- # Update ArFrame image display
- cv2.imshow(ar_frame.name, ar_frame.image())
-
- # Wait 10 ms
- cv2.waitKey(10)
-
-if __name__ == '__main__':
-
- main()
-```
-
!!! note "Export to video file"
Video exportation is detailed in [gaze analysis recording chapter](recording.md). \ No newline at end of file
diff --git a/docs/user_guide/pipeline_input_context/configuration_and_connection.md b/docs/user_guide/pipeline_input_context/configuration_and_connection.md
deleted file mode 100644
index 4aac88a..0000000
--- a/docs/user_guide/pipeline_input_context/configuration_and_connection.md
+++ /dev/null
@@ -1,35 +0,0 @@
-Load and connect a context
-==========================
-
-Once an [ArContext is defined](context_definition.md), it have to be connected to a pipeline.
-
-# Load JSON configuration file
-
-An [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) can be loaded from a JSON configuration file thanks to the [argaze.load](../../argaze.md/#argaze.load) package method.
-
-Here is a JSON configuration file related to the [previously defined Example context](context_definition.md):
-
-```json
-{
- "my_context.Example": {
- "name": "My example context",
- "parameter": ...,
- "pipeline": "pipeline.json"
- }
-}
-```
-
-Then, here is how to load the JSON file:
-
-```python
-import argaze
-
-# Load ArContext
-with argaze.load('./configuration.json') as ar_context:
-
- # Do something with ArContext
- ...
-```
-
-!!! note
- There is nothing to do to execute a loaded context as it is handled inside its own **__enter__** method.
diff --git a/docs/user_guide/pipeline_input_context/context_definition.md b/docs/user_guide/pipeline_input_context/context_definition.md
deleted file mode 100644
index 7d30438..0000000
--- a/docs/user_guide/pipeline_input_context/context_definition.md
+++ /dev/null
@@ -1,57 +0,0 @@
-Define a context class
-======================
-
-The [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic class interface to handle pipeline inputs according to [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers).
-
-# Write Python context file
-
-A specific [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) can be defined into a Python file.
-
-Here is an example context defined into *my_context.py* file:
-
-```python
-from argaze import ArFeatures, DataFeatures
-
-class Example(ArFeatures.ArContext):
-
- @DataFeatures.PipelineStepInit
- def __init__(self, **kwargs):
-
- # Init ArContext class
- super().__init__()
-
- # Init private attribute
- self.__parameter = ...
-
- @property
- def parameter(self):
- """Any context specific parameter."""
- return self.__parameter
-
- @parameter.setter
- def parameter(self, parameter):
- self.__parameter = parameter
-
- @DataFeatures.PipelineStepEnter
- def __enter__(self):
-
- # Start context according any specific parameter
- ... self.parameter
-
- # Assuming that timestamp, x and y values are available
- ...
-
- # Process timestamped gaze position
- self._process_gaze_position(timestamp = timestamp, x = x, y = y)
-
- @DataFeatures.PipelineStepExit
- def __exit__(self, exception_type, exception_value, exception_traceback):
-
- # End context
- ...
-```
-
-!!! note ""
-
- The next chapter explains how to [load a context to connect it with a pipeline](configuration_and_connection.md).
- \ No newline at end of file
diff --git a/docs/user_guide/pipeline_input_context/introduction.md b/docs/user_guide/pipeline_input_context/introduction.md
deleted file mode 100644
index e31ad54..0000000
--- a/docs/user_guide/pipeline_input_context/introduction.md
+++ /dev/null
@@ -1,24 +0,0 @@
-Overview
-========
-
-This section explains how to connect [gaze analysis](../gaze_analysis_pipeline/introduction.md) or [augmented reality](../aruco_marker_pipeline/introduction.md) pipelines with various input contexts.
-
-First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
-
-![Pipeline input context](../../img/pipeline_input_context.png)
-
-To build your own input context, you need to know:
-
-* [How to define a context class](context_definition.md),
-* [How to load a context to connect with a pipeline](configuration_and_connection.md),
-
-!!! warning "Documentation in progress"
-
- This section is not yet fully done. Please look at the [demonstrations scripts chapter](../utils/demonstrations_scripts.md) to know more about this notion.
-
-<!--
-* [How to stop a context](stop.md),
-* [How to pause and resume a context](pause_and_resume.md),
-* [How to visualize a context](visualization.md),
-* [How to handle pipeline exceptions](exceptions.md)
-!-->
diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md
index f293980..59df85b 100644
--- a/docs/user_guide/utils/demonstrations_scripts.md
+++ b/docs/user_guide/utils/demonstrations_scripts.md
@@ -9,20 +9,45 @@ Collection of command-line scripts for demonstration purpose.
!!! note
*Use -h option to get command arguments documentation.*
+!!! note
+    Each demonstration outputs metrics into the *_export/records* folder.
+
## Random context
-Load **random_context.json** file to analyze random gaze positions:
+Load **random_context.json** file to generate random gaze positions:
```shell
python -m argaze load ./src/argaze/utils/demo/random_context.json
```
-## OpenCV window context
+## OpenCV
+
+### Cursor context
+
+Load **opencv_cursor_context.json** file to capture cursor pointer positions over OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json
+```
+
+### Movie context
+
+Load **opencv_movie_context.json** file to play back a movie and also capture cursor pointer positions over OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
+```
+
+### Camera context
+
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution and to reduce the value of the *sides_mask*.
-Load **opencv_window_context.json** file to analyze mouse pointer positions over OpenCV window:
+Edit **opencv_camera_context.json** file to select the camera device identifier (default is 0).
+
+Then, load **opencv_camera_context.json** file to capture camera pictures and also capture cursor pointer positions over OpenCV window:
```shell
-python -m argaze load ./src/argaze/utils/demo/opencv_window_context.json
+python -m argaze load ./src/argaze/utils/demo/opencv_camera_context.json
```
## Tobii Pro Glasses 2
@@ -61,27 +86,24 @@ Then, load **tobii_live_stream_context.json** file to find ArUco marker into cam
python -m argaze load ./src/argaze/utils/demo/tobii_live_stream_context.json
```
-### Post-processing context
-
-!!! note
- This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+### Segment playback context
-Edit **tobii_post_processing_context.json** file to select an existing Tobii *segment* folder:
+Edit **tobii_segment_playback_context.json** file to select an existing Tobii *segment* folder:
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "record/segments/1",
"pipeline": "aruco_markers_pipeline.json"
}
}
```
-Then, load **tobii_post_processing_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load **tobii_segment_playback_context.json** file to find ArUco markers in the camera image and project gaze positions into AOI:
```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_segment_playback_context.json
```
## Pupil Invisible
diff --git a/docs/user_guide/utils/estimate_aruco_markers_pose.md b/docs/user_guide/utils/estimate_aruco_markers_pose.md
new file mode 100644
index 0000000..55bd232
--- /dev/null
+++ b/docs/user_guide/utils/estimate_aruco_markers_pose.md
@@ -0,0 +1,60 @@
+Estimate ArUco markers pose
+===========================
+
+This **ArGaze** application detects ArUco markers inside movie frames then exports their pose estimation as .obj files into a folder.
+
+Firstly, edit **utils/estimate_markers_pose/context.json** file to select a movie *path*.
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Movie" : {
+ "name": "ArUco markers pose estimator",
+ "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
+ "pipeline": "pipeline.json"
+ }
+}
+```
+
+Secondly, edit **utils/estimate_markers_pose/pipeline.json** file to set up the ArUco camera *size* and the ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
+
+```json
+{
+ "argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": {
+ "name": "Full HD Camera",
+ "size": [1920, 1080],
+ "aruco_detector": {
+ "dictionary": "DICT_APRILTAG_16h5",
+ "pose_size": 4,
+ "pose_ids": [],
+ "parameters": {
+ "useAruco3Detection": true
+ },
+ "observers":{
+ "observers.ArUcoMarkersPoseRecorder": {
+ "output_folder": "_export/records/aruco_markers_group"
+ }
+ }
+ },
+ "sides_mask": 420,
+ "image_parameters": {
+ "background_weight": 1,
+ "draw_gaze_positions": {
+ "color": [0, 255, 255],
+ "size": 4
+ },
+ "draw_detected_markers": {
+ "color": [255, 255, 255],
+ "draw_axes": {
+ "thickness": 4
+ }
+ }
+ }
+ }
+}
+```
+
+Then, launch the application.
+
+```shell
+python -m argaze load ./src/argaze/utils/estimate_markers_pose/context.json
+``` \ No newline at end of file
diff --git a/docs/user_guide/utils/ready-made_scripts.md b/docs/user_guide/utils/main_commands.md
index 892fef8..c4887a4 100644
--- a/docs/user_guide/utils/ready-made_scripts.md
+++ b/docs/user_guide/utils/main_commands.md
@@ -1,15 +1,12 @@
-Ready-made scripts
-==================
+Main commands
+=============
-Collection of command-line scripts to provide useful features.
-
-!!! note
- *Consider that all inline commands below have to be executed at the root of ArGaze package folder.*
+The **ArGaze** package comes with top-level commands.
!!! note
*Use -h option to get command arguments documentation.*
-## Load ArContext JSON configuration
+## Load
Load and execute any [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) from a JSON CONFIGURATION file
@@ -17,6 +14,10 @@ Load and execute any [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) f
python -m argaze load CONFIGURATION
```
+This command should open a GUI window to display the image of the context's pipeline.
+
+![ArGaze load GUI](../../img/argaze_load_gui.png)
+
### Send command
Use -p option to enable pipe communication at given address:
@@ -34,36 +35,22 @@ For example:
echo "print(context)" > /tmp/argaze
```
-* Pause context processing:
+* Pause context:
```shell
echo "context.pause()" > /tmp/argaze
```
-* Resume context processing:
+* Resume context:
```shell
echo "context.resume()" > /tmp/argaze
```
-## Edit JSON configuration
+## Edit
Modify the content of JSON CONFIGURATION file with another JSON CHANGES file then, save the result into an OUTPUT file
```shell
python -m argaze edit CONFIGURATION CHANGES OUTPUT
```
-
-## Estimate ArUco markers pose
-
-This application detects ArUco markers inside a movie frame then, export pose estimation as .obj file into a folder.
-
-Firstly, edit **utils/estimate_markers_pose/context.json** file as to select a movie *path*.
-
-Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
-
-Then, launch the application.
-
-```shell
-python -m argaze load ./src/argaze/utils/estimate_markers_pose/context.json
-``` \ No newline at end of file