Diffstat (limited to 'docs')
-rw-r--r--  docs/contributor_guide/build_package.md | 36
-rw-r--r--  docs/img/4flight_aoi.png | bin 311033 -> 0 bytes
-rw-r--r--  docs/img/4flight_visual_pattern.png | bin 0 -> 331959 bytes
-rw-r--r--  docs/img/4flight_workspace.png | bin 432921 -> 311033 bytes
-rw-r--r--  docs/img/argaze_load_gui.png | bin 168761 -> 151200 bytes
-rw-r--r--  docs/img/argaze_pipeline.png | bin 92231 -> 92553 bytes
-rw-r--r--  docs/index.md | 2
-rw-r--r--  docs/installation.md | 4
-rw-r--r--  docs/use_cases/air_controller_gaze_study/context.md | 14
-rw-r--r--  docs/use_cases/air_controller_gaze_study/introduction.md | 17
-rw-r--r--  docs/use_cases/air_controller_gaze_study/observers.md | 4
-rw-r--r--  docs/use_cases/air_controller_gaze_study/pipeline.md | 22
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/context.md | 7
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/introduction.md | 12
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/observers.md | 6
-rw-r--r--  docs/use_cases/gaze_based_candidate_selection/pipeline.md | 6
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/context.md (renamed from docs/use_cases/pilot_gaze_monitoring/context.md) | 8
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/introduction.md (renamed from docs/use_cases/pilot_gaze_monitoring/introduction.md) | 0
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/observers.md (renamed from docs/use_cases/pilot_gaze_monitoring/observers.md) | 0
-rw-r--r--  docs/use_cases/pilot_gaze_tracking/pipeline.md (renamed from docs/use_cases/pilot_gaze_monitoring/pipeline.md) | 18
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md | 4
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md | 2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md | 2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md | 2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md | 2
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md | 82
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/scripting.md | 8
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md | 4
-rw-r--r--  docs/user_guide/eye_tracking_context/configuration_and_execution.md | 9
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/file.md | 75
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/opencv.md | 18
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md | 32
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/pupil_labs_neon.md (renamed from docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md) | 10
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md | 8
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md | 32
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md | 2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md | 4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md | 5
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/background.md | 4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/heatmap.md | 4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/visualization.md | 2
-rw-r--r--  docs/user_guide/utils/demonstrations_scripts.md | 110
-rw-r--r--  docs/user_guide/utils/estimate_aruco_markers_pose.md | 4
-rw-r--r--  docs/user_guide/utils/main_commands.md | 7
44 files changed, 450 insertions, 138 deletions
diff --git a/docs/contributor_guide/build_package.md b/docs/contributor_guide/build_package.md
new file mode 100644
index 0000000..fae1730
--- /dev/null
+++ b/docs/contributor_guide/build_package.md
@@ -0,0 +1,36 @@
+Build package
+=============
+
+The ArGaze build system is based on [setuptools](https://setuptools.pypa.io/en/latest/userguide/index.html) and [setuptools-scm](https://setuptools-scm.readthedocs.io/en/latest/), which uses the Git tag as the package version number.
+
+!!! note
+
+    *Consider that all inline commands below have to be executed at the root of the ArGaze Git repository.*
+
+Install or upgrade the required packages:
+
+```console
+pip install build setuptools setuptools-scm
+```
+
+Commit the last changes, then tag the Git repository with a VERSION that follows the [setuptools versioning schemes](https://setuptools.pypa.io/en/latest/userguide/distribution.html):
+
+```console
+git tag -a VERSION -m "Version message"
+```
+
+Push commits and tags:
+
+```console
+git push && git push --tags
+```
+
+Then, build the package:
+```console
+python -m build
+```
+
+Once the build is done, two files are created in a *dist* folder:
+
+* **argaze-VERSION-py3-none-any.whl**: the built package (*none* means no specific ABI requirement, *any* means any platform).
+* **argaze-VERSION.tar.gz**: the source package.
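+
+After installing the built wheel, the package version reported by Python should match the Git tag. As a quick check, here is a minimal sketch using only the standard library:
+
+```python
+from importlib.metadata import version
+
+# Print the version recorded in the installed package metadata;
+# with setuptools-scm, it is derived from the Git tag.
+print(version("argaze"))
+```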
diff --git a/docs/img/4flight_aoi.png b/docs/img/4flight_aoi.png
deleted file mode 100644
index f899ab2..0000000
--- a/docs/img/4flight_aoi.png
+++ /dev/null
Binary files differ
diff --git a/docs/img/4flight_visual_pattern.png b/docs/img/4flight_visual_pattern.png
new file mode 100644
index 0000000..0550063
--- /dev/null
+++ b/docs/img/4flight_visual_pattern.png
Binary files differ
diff --git a/docs/img/4flight_workspace.png b/docs/img/4flight_workspace.png
index 1c405c4..f899ab2 100644
--- a/docs/img/4flight_workspace.png
+++ b/docs/img/4flight_workspace.png
Binary files differ
diff --git a/docs/img/argaze_load_gui.png b/docs/img/argaze_load_gui.png
index b8874b2..e012adc 100644
--- a/docs/img/argaze_load_gui.png
+++ b/docs/img/argaze_load_gui.png
Binary files differ
diff --git a/docs/img/argaze_pipeline.png b/docs/img/argaze_pipeline.png
index 953cbba..61606b2 100644
--- a/docs/img/argaze_pipeline.png
+++ b/docs/img/argaze_pipeline.png
Binary files differ
diff --git a/docs/index.md b/docs/index.md
index 2b668a3..ca9271a 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -14,7 +14,7 @@ By offering a wide array of gaze metrics and supporting easy extension to incorp
## Eye tracking context
-**ArGaze** facilitates the integration of both **screen-based and head-mounted** eye tracking systems for **real-time and/or post-processing analysis**.
+**ArGaze** facilitates the integration of both **screen-based and head-mounted** eye tracking systems for **live data capture and afterward data playback**.
[Learn how to handle various eye tracking context by reading the dedicated user guide section](./user_guide/eye_tracking_context/introduction.md).
diff --git a/docs/installation.md b/docs/installation.md
index 66b801b..fe4cfa4 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -37,8 +37,8 @@ pip install ./dist/argaze-VERSION.whl
!!! note "As ArGaze package contributor"
- *You should prefer to install the package in developer mode to test live code changes:*
+ *You should prefer to install the package in editable mode to test live code changes:*
```
- pip install -e .
+ pip install --editable .
```
diff --git a/docs/use_cases/air_controller_gaze_study/context.md b/docs/use_cases/air_controller_gaze_study/context.md
index ca9adf7..8bb4ef8 100644
--- a/docs/use_cases/air_controller_gaze_study/context.md
+++ b/docs/use_cases/air_controller_gaze_study/context.md
@@ -1,22 +1,22 @@
-Live streaming context
+Data playback context
======================
The context handles incoming eye tracker data before passing them to a processing pipeline.
-## post_processing_context.json
+## data_playback_context.json
-For this use case we need to read Tobii Pro Glasses 2 records: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to read data from records made by this device.
+For this use case we need to read Tobii Pro Glasses 2 records: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to play back data from records made by this device.
-While *segment* entries are specific to the [TobiiProGlasses2.PostProcessing](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.PostProcessing) class, *name* and *pipeline* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
+While *segment* entry is specific to the [TobiiProGlasses2.SegmentPlayback](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback) class, *name* and *pipeline* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing": {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback": {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "/Volumes/projects/fbr6k3e/records/4rcbdzk/segments/1",
"pipeline": "post_processing_pipeline.json"
}
}
```
-The [post_processing_pipeline.json](pipeline.md) file mentioned aboved is described in the next chapter.
+The [post_processing_pipeline.json](pipeline.md) file mentioned above is described in the next chapter.
diff --git a/docs/use_cases/air_controller_gaze_study/introduction.md b/docs/use_cases/air_controller_gaze_study/introduction.md
index 313e492..f188eec 100644
--- a/docs/use_cases/air_controller_gaze_study/introduction.md
+++ b/docs/use_cases/air_controller_gaze_study/introduction.md
@@ -3,7 +3,7 @@ Post-processing head-mounted eye tracking records
**ArGaze** enabled a study of air traffic controller gaze strategy.
-The following use case has integrated the [ArUco marker pipeline](../../user_guide/aruco_marker_pipeline/introduction.md) to map air traffic controllers gaze onto multiple screens environment in post-processing then, enable scan path study thanks to the [gaze analysis pipeline](../../user_guide/gaze_analysis_pipeline/introduction.md).
+The following use case has integrated the [ArUco marker pipeline](../../user_guide/aruco_marker_pipeline/introduction.md) to map air traffic controllers' gaze onto a multiple-screen environment in post-processing and then enable scan path study using the [gaze analysis pipeline](../../user_guide/gaze_analysis_pipeline/introduction.md).
## Background
@@ -18,22 +18,29 @@ During their training, controllers are taught to visually follow all aircraft st
![4Flight Workspace](../../img/4flight_workspace.png)
-A traffic simulation of moderate difficulty with a maximum of 13 and 16 aircraft simultaneously was performed by air traffic controllers. The controller could encounter lateral conflicts (same altitude) between 2 and 3 aircraft and conflicts between aircraft that need to ascend or descend within the sector. After the simulation, a directed interview about the gaze pattern was conducted. Eye tracking data was recorded with a Tobii Pro Glasses 2, a head-mounted eye tracker. The gaze and scene camera video were captured with Tobii Pro Lab software and post-processed with **ArGaze** software library. As the eye tracker model is head mounted, ArUco markers were placed around the two screens to ensure that several of them were always visible in the field of view of the eye tracker camera.
+A traffic simulation of moderate difficulty with a maximum of 13 and 16 aircraft simultaneously was performed by air traffic controllers. The controller could encounter lateral conflicts (same altitude) between 2 and 3 aircraft and conflicts between aircraft that need to ascend or descend within the sector.
+After the simulation, a directed interview about the gaze pattern was conducted.
+Eye tracking data was recorded with a Tobii Pro Glasses 2, a head-mounted eye tracker.
+The gaze and scene camera video were captured with Tobii Pro Lab software and post-processed with **ArGaze** software library.
+As the eye tracker model is head mounted, ArUco markers were placed around the two screens to ensure that several of them were always visible in the field of view of the eye tracker camera.
-![4Flight Workspace](../../img/4flight_aoi.png)
+Various metrics were exported with specific pipeline observers, including average fixation duration, explore/exploit ratio, K-coefficient, AOI distribution, transition matrix, entropy and N-grams.
+Although statistical analysis is not possible due to the small sample size of the study (6 instructors, 5 qualified controllers, and 5 trainees), visual pattern summaries have been manually built from the transition matrix export to produce a qualitative interpretation showing what instructors attend to during training and how qualified controllers work. In the figure below, red arcs are more frequent than blue ones; panels show instructors (Fig. a) and four different qualified controllers (Fig. b, c, d, e).
+
+![4Flight Visual pattern](../../img/4flight_visual_pattern.png)
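+
+Among the exported metrics, the entropy of the AOI distribution summarizes how evenly gaze is spread across areas. As an illustration only (the AOI names and proportions below are made up, not taken from the study data nor from the **ArGaze** observers), here is a minimal sketch of that computation:
+
+```python
+from math import log2
+
+# Illustrative proportion of fixation time spent on each AOI
+aoi_distribution = {"Radar": 0.6, "Strips": 0.3, "Info": 0.1}
+
+# Shannon entropy of the AOI distribution, in bits
+entropy = -sum(p * log2(p) for p in aoi_distribution.values() if p > 0)
+
+print(f"{entropy:.2f} bits")
+```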
## Setup
The setup to integrate **ArGaze** into the experiment is defined by 3 main files detailed in the next chapters:
-* The context file that reads gaze data and scene camera video records: [post_processing_context.json](context.md)
+* The context file that plays back gaze data and scene camera video records: [data_playback_context.json](context.md)
* The pipeline file that processes gaze data and scene camera video: [post_processing_pipeline.json](pipeline.md)
* The observers file that exports analysis outputs: [observers.py](observers.md)
As any **ArGaze** setup, it is loaded by executing the [*load* command](../../user_guide/utils/main_commands.md):
```shell
-python -m argaze load post_processing_context.json
+python -m argaze load data_playback_context.json
```
This command opens one GUI window per frame (one for the scene camera, one for the sector screen and one for the info screen) that allows monitoring gaze mapping while processing.
diff --git a/docs/use_cases/air_controller_gaze_study/observers.md b/docs/use_cases/air_controller_gaze_study/observers.md
index aad870f..500d573 100644
--- a/docs/use_cases/air_controller_gaze_study/observers.md
+++ b/docs/use_cases/air_controller_gaze_study/observers.md
@@ -1,5 +1,5 @@
-Fixation events sending
-=======================
+Metrics and video recording
+===========================
Observers are attached to pipeline steps to be notified when a method is called.
diff --git a/docs/use_cases/air_controller_gaze_study/pipeline.md b/docs/use_cases/air_controller_gaze_study/pipeline.md
index ec1aa59..69fdd2c 100644
--- a/docs/use_cases/air_controller_gaze_study/pipeline.md
+++ b/docs/use_cases/air_controller_gaze_study/pipeline.md
@@ -1,4 +1,4 @@
-Live processing pipeline
+Post processing pipeline
========================
The pipeline processes camera image and gaze data to enable gaze mapping and gaze analysis.
@@ -19,7 +19,7 @@ For this use case we need to detect ArUco markers to enable gaze mapping: **ArGa
"optic_parameters": "optic_parameters.json",
"parameters": {
"adaptiveThreshConstant": 20,
- "useAruco3Detection": 1
+ "useAruco3Detection": true
}
},
"gaze_movement_identifier": {
@@ -182,24 +182,22 @@ For this use case we need to detect ArUco markers to enable gaze mapping: **ArGa
}
}
}
- },
- "angle_tolerance": 15.0,
- "distance_tolerance": 2.54
+ }
}
},
"observers": {
"argaze.utils.UtilsFeatures.LookPerformanceRecorder": {
- "path": "_export/look_performance.csv"
+ "path": "look_performance.csv"
},
"argaze.utils.UtilsFeatures.WatchPerformanceRecorder": {
- "path": "_export/watch_performance.csv"
+ "path": "watch_performance.csv"
}
}
}
}
```
-All the files mentioned aboved are described below.
+All the files mentioned above are described below.
The *ScanPathAnalysisRecorder* and *AOIScanPathAnalysisRecorder* observers objects are defined into the [observers.py](observers.md) file that is described in the next chapter.
@@ -357,12 +355,12 @@ The video file is a record of the sector screen frame image.
## look_performance.csv
-This file contains the logs of *ArUcoCamera.look* method execution info. It is created into an *_export* folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+This file contains the logs of *ArUcoCamera.look* method execution info. It is created in the folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
-On a MacBookPro (2,3GHz Intel Core i9 8 cores), the *look* method execution time is ~7ms and it is called ~115 times per second.
+On a MacBookPro (2.3GHz Intel Core i9 8 cores), the *look* method execution time is ~1ms and it is called ~51 times per second.
## watch_performance.csv
-This file contains the logs of *ArUcoCamera.watch* method execution info. It file is created into an *_export* folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+This file contains the logs of *ArUcoCamera.watch* method execution info. It is created in the folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
-On a MacBookPro (2,3GHz Intel Core i9 8 cores), the *watch* method execution time is ~60ms and it is called ~10 times per second.
+On a MacBookPro (2.3GHz Intel Core i9 8 cores) without CUDA acceleration, the *watch* method execution time is ~52ms and it is called more than 12 times per second.
diff --git a/docs/use_cases/gaze_based_candidate_selection/context.md b/docs/use_cases/gaze_based_candidate_selection/context.md
new file mode 100644
index 0000000..96547ea
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/context.md
@@ -0,0 +1,7 @@
+Data playback context
+======================
+
+The context handles incoming eye tracker data before passing them to a processing pipeline.
+
+## data_playback_context.json
+
diff --git a/docs/use_cases/gaze_based_candidate_selection/introduction.md b/docs/use_cases/gaze_based_candidate_selection/introduction.md
new file mode 100644
index 0000000..da8d6f9
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/introduction.md
@@ -0,0 +1,12 @@
+Post-processing screen-based eye tracker data
+=================================================
+
+**ArGaze** enabled ...
+
+The following use case has integrated ...
+
+## Background
+
+## Environment
+
+## Setup
diff --git a/docs/use_cases/gaze_based_candidate_selection/observers.md b/docs/use_cases/gaze_based_candidate_selection/observers.md
new file mode 100644
index 0000000..a1f1fce
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/observers.md
@@ -0,0 +1,6 @@
+Metrics and video recording
+===========================
+
+Observers are attached to pipeline steps to be notified when a method is called.
+
+## observers.py
diff --git a/docs/use_cases/gaze_based_candidate_selection/pipeline.md b/docs/use_cases/gaze_based_candidate_selection/pipeline.md
new file mode 100644
index 0000000..6fae01a
--- /dev/null
+++ b/docs/use_cases/gaze_based_candidate_selection/pipeline.md
@@ -0,0 +1,6 @@
+Post processing pipeline
+========================
+
+The pipeline processes gaze data to enable gaze analysis.
+
+## post_processing_pipeline.json
diff --git a/docs/use_cases/pilot_gaze_monitoring/context.md b/docs/use_cases/pilot_gaze_tracking/context.md
index 71d2628..8839cb6 100644
--- a/docs/use_cases/pilot_gaze_monitoring/context.md
+++ b/docs/use_cases/pilot_gaze_tracking/context.md
@@ -1,11 +1,11 @@
-Live streaming context
-======================
+Data capture context
+====================
The context handles incoming eye tracker data before passing them to a processing pipeline.
## live_streaming_context.json
-For this use case we need to connect to a Tobii Pro Glasses 2 device: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to live stream data from this device.
+For this use case we need to connect to a Tobii Pro Glasses 2 device: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to capture data from this device.
While *address*, *project*, *participant* and *configuration* entries are specific to the [TobiiProGlasses2.LiveStream](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.LiveStream) class, *name*, *pipeline* and *observers* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
@@ -36,6 +36,6 @@ While *address*, *project*, *participant* and *configuration* entries are specif
}
```
-The [live_processing_pipeline.json](pipeline.md) file mentioned aboved is described in the next chapter.
+The [live_processing_pipeline.json](pipeline.md) file mentioned above is described in the next chapter.
The *IvyBus* observer object is defined into the [observers.py](observers.md) file that is described in a next chapter.
\ No newline at end of file
diff --git a/docs/use_cases/pilot_gaze_monitoring/introduction.md b/docs/use_cases/pilot_gaze_tracking/introduction.md
index 7e88c69..7e88c69 100644
--- a/docs/use_cases/pilot_gaze_monitoring/introduction.md
+++ b/docs/use_cases/pilot_gaze_tracking/introduction.md
diff --git a/docs/use_cases/pilot_gaze_monitoring/observers.md b/docs/use_cases/pilot_gaze_tracking/observers.md
index 5f5bc78..5f5bc78 100644
--- a/docs/use_cases/pilot_gaze_monitoring/observers.md
+++ b/docs/use_cases/pilot_gaze_tracking/observers.md
diff --git a/docs/use_cases/pilot_gaze_monitoring/pipeline.md b/docs/use_cases/pilot_gaze_tracking/pipeline.md
index 83a71af..65fccc3 100644
--- a/docs/use_cases/pilot_gaze_monitoring/pipeline.md
+++ b/docs/use_cases/pilot_gaze_tracking/pipeline.md
@@ -49,9 +49,7 @@ For this use case we need to detect ArUco markers to enable gaze mapping: **ArGa
}
}
}
- },
- "angle_tolerance": 15.0,
- "distance_tolerance": 10.0
+ }
}
},
"layers": {
@@ -124,7 +122,7 @@ For this use case we need to detect ArUco markers to enable gaze mapping: **ArGa
}
```
-All the files mentioned aboved are described below.
+All the files mentioned above are described below.
The *ArUcoCameraLogger* observer object is defined into the [observers.py](observers.md) file that is described in the next chapter.
@@ -173,7 +171,7 @@ This file defines the ArUco detector parameters as explained into [the detection
```json
{
"adaptiveThreshConstant": 7,
- "useAruco3Detection": 1
+ "useAruco3Detection": true
}
```
@@ -287,7 +285,7 @@ This file is a screenshot of the PFD screen used to monitor where the gaze is pr
## PIC_PFD.svg
-This file defines the place of the AOI into the PFD frame. AOI positions have been edited [Inkscape software](https://inkscape.org/fr/) from a screenshot of the PFD screen then exported at SVG format.
+This file defines the place of the AOI in the PFD frame. AOI positions have been edited with [Inkscape software](https://inkscape.org/fr/) from a screenshot of the PFD screen, then exported in SVG format.
```svg
<svg>
@@ -302,12 +300,12 @@ This file defines the place of the AOI into the PFD frame. AOI positions have be
## look_performance.csv
-This file contains the logs of *ArUcoCamera.look* method execution info. It is created into an *_export* folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+This file contains the logs of *ArUcoCamera.look* method execution info. It is saved into an *_export* folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
-On a Jetson Xavier computer, the *look* method execution time is ~0.5ms and it is called ~100 times per second.
+On a Jetson Xavier computer, the *look* method execution time is 5.7ms and it is called ~100 times per second.
## watch_performance.csv
-This file contains the logs of *ArUcoCamera.watch* method execution info. It file is created into an *_export* folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
+This file contains the logs of *ArUcoCamera.watch* method execution info. It is saved into an *_export* folder from where the [*load* command](../../user_guide/utils/main_commands.md) is launched.
-On a Jetson Xavier computer, the *watch* method execution time is ~50ms and it is called ~10 times per second.
+On a Jetson Xavier computer with CUDA acceleration, the *watch* method execution time is 46.5ms and it is called more than 12 times per second.
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
index 975f278..311916b 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
@@ -5,7 +5,7 @@ As explain in the [OpenCV ArUco documentation](https://docs.opencv.org/4.x/d1/dc
## Load ArUcoDetector parameters
-[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded with a dedicated JSON entry.
Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file with ArUco detector parameters:
@@ -18,7 +18,7 @@ Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoM
"dictionary": "DICT_APRILTAG_16h5",
"parameters": {
"adaptiveThreshConstant": 10,
- "useAruco3Detection": 1
+ "useAruco3Detection": true
}
},
...
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
index 625f257..e9ce740 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
@@ -134,7 +134,7 @@ Below, an optic_parameters JSON file example:
## Load and display optic parameters
-[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file where optic parameters are loaded and displayed:
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
index a9d66e9..f258e04 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
@@ -150,7 +150,7 @@ Particularly, timestamped gaze positions can be passed one by one to the [ArUcoC
## Setup ArUcoCamera image parameters
-Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.
```python
# Assuming ArUcoCamera is loaded
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
index 46422b8..78a513a 100644
--- a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
@@ -1,7 +1,7 @@
Describe 3D AOI
===============
-Now that the [scene pose is estimated](aruco_marker_description.md) thanks to ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described into the same 3D referential.
+Now that the [scene pose is estimated](aruco_marker_description.md) based on the ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described in the same 3D referential.
In the example scene, the two screens—the control panel and the window—are considered to be areas of interest.
diff --git a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
index c2ee1b9..56846e2 100644
--- a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
@@ -1,7 +1,7 @@
Edit and execute pipeline
=========================
-Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected thanks to [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.
+Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected by the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.
As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class also benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
index c163696..a543bc7 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
@@ -3,27 +3,27 @@ Define a context class
The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic base class interface to handle incoming eye tracker data before passing them to a processing pipeline according to the [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers).
-The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides playback features to stop or pause processings, performance assement features to measure how many times processings are called and the time spent by the process.
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides control features to stop or pause working threads, and performance assessment features to measure how many times processing is called and the time spent by the process.
-Besides, there is also a [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to write specific device calibration process.
+Besides, there is also a [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to implement a specific device calibration process.
-In the same way, there is a [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines abstract *previous* and *next* playback methods to move into record's frames and also defines *duration* and *progression* properties to get information about a record length and processing advancement.
+In the same way, there is a [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines *duration* and *progression* properties to get information about a record length and playback advancement.
-Finally, a specific eye tracking context can be defined into a Python file by writing a class that inherits either from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) or [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class.
+Finally, a specific eye tracking context can be defined into a Python file by writing a class that inherits either from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) or [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class.
-## Write live processing context
+## Write data capture context
-Here is a live processing context example that processes gaze positions and camera images in two separated threads:
+Here is a data capture context example that processes gaze positions and camera images in two separate threads:
```python
from argaze import ArFeatures, DataFeatures
-class LiveProcessingExample(ArFeatures.LiveProcessingContext):
+class DataCaptureExample(ArFeatures.DataCaptureContext):
@DataFeatures.PipelineStepInit
def __init__(self, **kwargs):
- # Init LiveProcessingContext class
+ # Init DataCaptureContext class
super().__init__()
# Init private attribute
@@ -45,23 +45,23 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
# Start context according any specific parameter
... self.parameter
- # Start a gaze position processing thread
- self.__gaze_thread = threading.Thread(target = self.__gaze_position_processing)
+ # Start a gaze position capture thread
+ self.__gaze_thread = threading.Thread(target = self.__gaze_position_capture)
self.__gaze_thread.start()
- # Start a camera image processing thread if applicable
- self.__camera_thread = threading.Thread(target = self.__camera_image_processing)
+ # Start a camera image capture thread if applicable
+ self.__camera_thread = threading.Thread(target = self.__camera_image_capture)
self.__camera_thread.start()
return self
- def __gaze_position_processing(self):
- """Process gaze position."""
+ def __gaze_position_capture(self):
+ """Capture gaze position."""
- # Processing loop
+ # Capture loop
while self.is_running():
- # Pause processing
+ # Pause capture
if not self.is_paused():
# Assuming that timestamp, x and y values are available
@@ -73,13 +73,13 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
# Wait some time eventually
...
- def __camera_image_processing(self):
- """Process camera image if applicable."""
+ def __camera_image_capture(self):
+ """Capture camera image if applicable."""
- # Processing loop
+ # Capture loop
while self.is_running():
- # Pause processing
+ # Pause capture
if not self.is_paused():
# Assuming that timestamp, camera_image are available
@@ -95,10 +95,10 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
def __exit__(self, exception_type, exception_value, exception_traceback):
"""End context."""
- # Stop processing loops
+ # Stop capture loops
self.stop()
- # Stop processing threads
+ # Stop capture threads
threading.Thread.join(self.__gaze_thread)
threading.Thread.join(self.__camera_thread)
@@ -108,19 +108,19 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
...
```
-## Write post processing context
+## Write data playback context
-Here is a post processing context example that processes gaze positions and camera images in a same thread:
+Here is a data playback context example that reads gaze positions and camera images in the same thread:
```python
from argaze import ArFeatures, DataFeatures
-class PostProcessingExample(ArFeatures.PostProcessingContext):
+class DataPlaybackExample(ArFeatures.DataPlaybackContext):
@DataFeatures.PipelineStepInit
def __init__(self, **kwargs):
- # Init LiveProcessingContext class
+        # Init DataPlaybackContext class
super().__init__()
# Init private attribute
@@ -142,19 +142,19 @@ class PostProcessingExample(ArFeatures.PostProcessingContext):
# Start context according any specific parameter
... self.parameter
- # Start a reading data thread
- self.__read_thread = threading.Thread(target = self.__data_reading)
- self.__read_thread.start()
+ # Start a data playback thread
+ self.__data_thread = threading.Thread(target = self.__data_playback)
+ self.__data_thread.start()
return self
- def __data_reading(self):
- """Process gaze position and camera image if applicable."""
+ def __data_playback(self):
+ """Playback gaze position and camera image if applicable."""
- # Processing loop
+ # Playback loop
while self.is_running():
- # Pause processing
+ # Pause playback
if not self.is_paused():
# Assuming that timestamp, camera_image are available
@@ -176,18 +176,20 @@ class PostProcessingExample(ArFeatures.PostProcessingContext):
def __exit__(self, exception_type, exception_value, exception_traceback):
"""End context."""
- # Stop processing loops
+ # Stop playback loop
self.stop()
- # Stop processing threads
- threading.Thread.join(self.__read_thread)
+ # Stop playback threads
+ threading.Thread.join(self.__data_thread)
- def previous(self):
- """Go to previous camera image frame."""
+ @property
+ def duration(self) -> int|float:
+ """Get data duration."""
...
- def next(self):
- """Go to next camera image frame."""
+ @property
+ def progression(self) -> float:
+ """Get data playback progression between 0 and 1."""
...
```
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
index 8753eb6..d8eb389 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
@@ -68,12 +68,12 @@ from argaze import ArFeatures
# Check context type
- # Live processing case: calibration method is available
- if issubclass(type(context), ArFeatures.LiveProcessingContext):
+ # Data capture case: calibration method is available
+ if issubclass(type(context), ArFeatures.DataCaptureContext):
...
- # Post processing case: more playback methods are available
- if issubclass(type(context), ArFeatures.PostProcessingContext):
+ # Data playback case: playback methods are available
+ if issubclass(type(context), ArFeatures.DataPlaybackContext):
...
# Check pipeline type
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
index 340dbaf..959d955 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
@@ -28,8 +28,8 @@ for timestamped_gaze_position in ts_gaze_positions:
## Edit timestamped gaze positions from live stream
-Real-time gaze positions can be edited thanks to the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
-Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited thanks to the Python [time package](https://docs.python.org/3/library/time.html).
+Real-time gaze positions can be edited directly using the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
+Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited using the Python [time package](https://docs.python.org/3/library/time.html).
```python
from argaze import GazeFeatures
diff --git a/docs/user_guide/eye_tracking_context/configuration_and_execution.md b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
index f13c6a2..3deeb57 100644
--- a/docs/user_guide/eye_tracking_context/configuration_and_execution.md
+++ b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
@@ -3,9 +3,12 @@ Edit and execute context
The [utils.contexts module](../../argaze.md/#argaze.utils.contexts) provides ready-made contexts like:
-* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) live stream and post processing contexts,
-* [Pupil Labs](context_modules/pupil_labs.md) live stream context,
-* [OpenCV](context_modules/opencv.md) window cursor position and movie processing,
+* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) data capture and data playback contexts,
+* [Tobii Pro Glasses 3](context_modules/tobii_pro_glasses_3.md) data capture context,
+* [Pupil Labs Invisible](context_modules/pupil_labs_invisible.md) data capture context,
+* [Pupil Labs Neon](context_modules/pupil_labs_neon.md) data capture context,
+* [File](context_modules/file.md) data playback contexts,
+* [OpenCV](context_modules/opencv.md) window cursor position capture and movie playback,
* [Random](context_modules/random.md) gaze position generator.
## Edit JSON configuration
diff --git a/docs/user_guide/eye_tracking_context/context_modules/file.md b/docs/user_guide/eye_tracking_context/context_modules/file.md
new file mode 100644
index 0000000..5b5c8e9
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/file.md
@@ -0,0 +1,75 @@
+File
+======
+
+ArGaze provides ready-made contexts to read data from various file formats.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## CSV
+
+::: argaze.utils.contexts.File.CSV
+
+### JSON sample: splitted case
+
+To use when gaze position coordinates are split into two separate columns.
+
+```json
+{
+ "argaze.utils.contexts.File.CSV": {
+ "name": "CSV file data playback",
+ "path": "./src/argaze/utils/demo/gaze_positions_splitted.csv",
+ "timestamp_column": "Timestamp (ms)",
+ "x_column": "Gaze Position X (px)",
+ "y_column": "Gaze Position Y (px)",
+ "pipeline": ...
+ }
+}
+```
+
+### JSON sample: joined case
+
+To use when gaze position coordinates are joined as a list in a single column.
+
+```json
+{
+ "argaze.utils.contexts.File.CSV" : {
+ "name": "CSV file data playback",
+ "path": "./src/argaze/utils/demo/gaze_positions_xy_joined.csv",
+ "timestamp_column": "Timestamp (ms)",
+ "xy_column": "Gaze Position (px)",
+ "pipeline": ...
+ }
+}
+```
+
+### JSON sample: left and right eyes
+
+To use when gaze position coordinates and validity are given for each eye in six separate columns.
+
+```json
+{
+ "argaze.utils.contexts.File.CSV": {
+ "name": "CSV file data playback",
+ "path": "./src/argaze/utils/demo/gaze_positions_left_right_eyes.csv",
+ "timestamp_column": "Timestamp (ms)",
+ "left_eye_x_column": "Left eye X",
+ "left_eye_y_column": "Left eye Y",
+ "left_eye_validity_column": "Left eye validity",
+ "right_eye_x_column": "Right eye X",
+ "right_eye_y_column": "Right eye Y",
+ "right_eye_validity_column": "Right eye validity",
+ "rescale_to_pipeline_size": true,
+ "pipeline": ...
+ }
+}
+```
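+
+As an illustration of the expected file layout, here is a minimal sketch that writes a CSV file compatible with the *splitted case* sample above (the gaze samples are made up for the example):
+
+```python
+import csv
+
+# Write a small CSV file with the columns declared in the splitted case sample
+with open("gaze_positions_splitted.csv", "w", newline="") as csv_file:
+    writer = csv.writer(csv_file)
+    writer.writerow(["Timestamp (ms)", "Gaze Position X (px)", "Gaze Position Y (px)"])
+    writer.writerow([0, 960, 540])
+    writer.writerow([20, 972, 535])
+    writer.writerow([40, 981, 528])
+```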
diff --git a/docs/user_guide/eye_tracking_context/context_modules/opencv.md b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
index 7244cd4..7d73a03 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/opencv.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
@@ -39,9 +39,25 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg
```json
{
"argaze.utils.contexts.OpenCV.Movie": {
- "name": "Open CV cursor",
+ "name": "Open CV movie",
"path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
"pipeline": ...
}
}
```
+
+## Camera
+
+::: argaze.utils.contexts.OpenCV.Camera
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Camera": {
+ "name": "Open CV camera",
+ "identifier": 0,
+ "pipeline": ...
+ }
+}
+```
\ No newline at end of file
diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md
new file mode 100644
index 0000000..1f4a94f
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md
@@ -0,0 +1,32 @@
+Pupil Labs Invisible
+====================
+
+ArGaze provides a ready-made context to work with the Pupil Labs Invisible device.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.PupilLabsInvisible.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.PupilLabsInvisible.LiveStream": {
+ "name": "Pupil Labs Invisible live stream",
+ "project": "my_experiment",
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_neon.md
index d2ec336..535f5d5 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_neon.md
@@ -1,7 +1,7 @@
-Pupil Labs
+Pupil Labs Neon
==========
-ArGaze provides a ready-made context to work with Pupil Labs devices.
+ArGaze provides a ready-made context to work with the Pupil Labs Neon device.
To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
Notice that the *pipeline* entry is mandatory.
@@ -17,14 +17,14 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg
## Live Stream
-::: argaze.utils.contexts.PupilLabs.LiveStream
+::: argaze.utils.contexts.PupilLabsNeon.LiveStream
### JSON sample
```json
{
- "argaze.utils.contexts.PupilLabs.LiveStream": {
- "name": "Pupil Labs live stream",
+ "argaze.utils.contexts.PupilLabsNeon.LiveStream": {
+ "name": "Pupil Labs Neon live stream",
"project": "my_experiment",
"pipeline": ...
}
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
index fba6931..6ff44bd 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
@@ -42,16 +42,16 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg
}
```
-## Post Processing
+## Segment Playback
-::: argaze.utils.contexts.TobiiProGlasses2.PostProcessing
+::: argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback
### JSON sample
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "./src/argaze/utils/demo/tobii_record/segments/1",
"pipeline": ...
}
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md
new file mode 100644
index 0000000..3d37fcc
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md
@@ -0,0 +1,32 @@
+Tobii Pro Glasses 3
+===================
+
+ArGaze provides a ready-made context to work with Tobii Pro Glasses 3 devices.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.TobiiProGlasses3.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses3.LiveStream": {
+ "name": "Tobii Pro Glasses 3 live stream",
+ "pipeline": ...
+ }
+}
+```
+
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
index 4970dba..effee18 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
@@ -7,7 +7,7 @@ The calibration algorithm can be selected by instantiating a particular [GazePos
## Enable ArFrame calibration
-Gaze position calibration can be enabled thanks to a dedicated JSON entry.
+Gaze position calibration can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where a [Linear Regression](../../../argaze.md/#argaze.GazeAnalysis.LinearRegression) calibration algorithm is selected with no parameters:
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
index 264e866..843274a 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
@@ -158,7 +158,7 @@ Last [GazeMovement](../../../argaze.md/#argaze.GazeFeatures.GazeMovement) identi
This could also be the current gaze movement if [ArFrame.filter_in_progress_identification](../../../argaze.md/#argaze.ArFeatures.ArFrame) attribute is false.
In that case, the last gaze movement *finished* flag is false.
-Then, the last gaze movement type can be tested thanks to [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
+Then, the last gaze movement type can be tested with [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
### *ar_frame.is_analysis_available()*
@@ -182,7 +182,7 @@ This an iterator to access to all aoi scan path analysis. Notice that each aoi s
## Setup ArFrame image parameters
-[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.
```python
# Assuming ArFrame is loaded
diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
index 2b64091..c2a6ac3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
@@ -100,6 +100,11 @@ The second [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step a
Once gaze movements are matched to AOI, they are automatically appended to the AOIScanPath if required.
+!!! warning "GazeFeatures.OutsideAOI"
+    When a fixation is not looking at any AOI, a step associated with an AOI called [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) is added. As long as fixations are not looking at any AOI, all fixations/saccades are stored in this step. In this way, further analyses are calculated considering those extra [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) steps.
+
+    This is particularly important when calculating transition matrices, because otherwise we could have arcs between two AOIs when in fact the gaze could have fixated outside them in the meantime.
+
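+To make the effect concrete, here is a minimal sketch (with illustrative AOI names, independent from the actual **ArGaze** implementation) comparing transition counting with and without the extra *OutsideAOI* step:
+
+```python
+from collections import Counter
+
+without_outside_step = ["AOI_A", "AOI_B"]
+with_outside_step = ["AOI_A", "OutsideAOI", "AOI_B"]
+
+# Without the extra step, a spurious AOI_A -> AOI_B arc is counted
+print(Counter(zip(without_outside_step, without_outside_step[1:])))
+
+# With the extra step, the matrix keeps track of the gaze leaving all AOI
+print(Counter(zip(with_outside_step, with_outside_step[1:])))
+```
+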
The [AOIScanPath.duration_max](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.duration_max) attribute is the duration from which older AOI scan steps are removed each time new AOI scan steps are added.
!!! note "Optional"
diff --git a/docs/user_guide/gaze_analysis_pipeline/background.md b/docs/user_guide/gaze_analysis_pipeline/background.md
index 900d151..11285e3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/background.md
+++ b/docs/user_guide/gaze_analysis_pipeline/background.md
@@ -7,7 +7,7 @@ Background is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)
## Load and display ArFrame background
-[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where a background picture is loaded and displayed:
@@ -28,7 +28,7 @@ Here is an extract from the JSON ArFrame configuration file where a background p
```
!!! note
- As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+ As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/heatmap.md b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
index 2057dbe..77b2be0 100644
--- a/docs/user_guide/gaze_analysis_pipeline/heatmap.md
+++ b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
@@ -7,7 +7,7 @@ Heatmap is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pip
## Enable and display ArFrame heatmap
-[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where heatmap is enabled and displayed:
@@ -31,7 +31,7 @@ Here is an extract from the JSON ArFrame configuration file where heatmap is ena
}
```
!!! note
- [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+ [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/visualization.md b/docs/user_guide/gaze_analysis_pipeline/visualization.md
index 32395c3..08b5465 100644
--- a/docs/user_guide/gaze_analysis_pipeline/visualization.md
+++ b/docs/user_guide/gaze_analysis_pipeline/visualization.md
@@ -7,7 +7,7 @@ Visualization is not a pipeline step, but each [ArFrame](../../argaze.md/#argaze
## Add image parameters to ArFrame JSON configuration
-[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a dedicated JSON entry.
+[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file with a sample where image parameters are added:
diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md
index dd1b8e0..c7560eb 100644
--- a/docs/user_guide/utils/demonstrations_scripts.md
+++ b/docs/user_guide/utils/demonstrations_scripts.md
@@ -9,30 +9,70 @@ Collection of command-line scripts for demonstration purpose.
!!! note
*Use -h option to get command arguments documentation.*
+!!! note
+    Each demonstration outputs metrics into an *_export/records* folder.
+
## Random context
-Load **random_context.json** file to process random gaze positions:
+Load **random_context.json** file to generate random gaze positions:
```shell
python -m argaze load ./src/argaze/utils/demo/random_context.json
```
-## OpenCV cursor context
+## CSV file context
+
+Load **csv_file_context_xy_joined.json** file to analyze gaze positions from a CSV file where gaze position coordinates are joined as a list in a single column:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/csv_file_context_xy_joined.json
+```
+
+Load **csv_file_context_xy_splitted.json** file to analyze gaze positions from a CSV file where gaze position coordinates are split into two separate columns:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/csv_file_context_xy_splitted.json
+```
+
+Load the **csv_file_context_left_right_eyes.json** file to analyze gaze positions from a CSV file where gaze position coordinates and validity are given for each eye, in six separate columns:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/csv_file_context_left_right_eyes.json
+```
+
+!!! note
+    The left/right eyes context allows parsing Tobii Spectrum data, for example.
+
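+For illustration, the three expected layouts could look like the following (the column names are assumptions for the sake of the example, not the demo files' actual headers):
+
+```text
+# xy joined: both coordinates in one single column
+timestamp, gaze
+0.011, "(945, 523)"
+
+# xy splitted: one column per coordinate
+timestamp, x, y
+0.011, 945, 523
+
+# left/right eyes: coordinates and validity per eye
+timestamp, left_x, left_y, left_valid, right_x, right_y, right_valid
+0.011, 942, 520, 1, 948, 526, 1
+```
+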
+## OpenCV
-Load **opencv_cursor_context.json** file to process cursor pointer positions over OpenCV window:
+### Cursor context
+
+Load the **opencv_cursor_context.json** file to capture cursor pointer positions over an OpenCV window:
```shell
python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json
```
-## OpenCV movie context
+### Movie context
-Load **opencv_movie_context.json** file to process movie pictures and also cursor pointer positions over OpenCV window:
+Load the **opencv_movie_context.json** file to play back a movie and also capture cursor pointer positions over the OpenCV window:
```shell
python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
```
+### Camera context
+
+Edit the **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution and to set a consistent *sides_mask* value (see the note below).
+
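+!!! note
+    Judging from the ready-made values used in the sections below, a consistent *sides_mask* seems to be half the difference between the camera image width and height: (1920 − 1080) / 2 = 420, (1088 − 1080) / 2 = 4 and (1600 − 1200) / 2 = 200. This is an inference from those examples, not a documented rule.
+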
+Edit the **opencv_camera_context.json** file to select the camera device identifier (default is 0).
+
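+For reference, here is a sketch of what this context file may look like (the *argaze.utils.contexts.OpenCV.Camera* class path and the *identifier* entry are assumptions inferred from the other context files on this page, not a verbatim copy of the demo file):
+
+```json
+{
+    "argaze.utils.contexts.OpenCV.Camera" : {
+        "name": "OpenCV camera",
+        "identifier": 0,
+        "pipeline": "aruco_markers_pipeline.json"
+    }
+}
+```
+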
+Then, load the **opencv_camera_context.json** file to capture camera images and also capture cursor pointer positions over the OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_camera_context.json
+```
+
## Tobii Pro Glasses 2
### Live stream context
@@ -40,7 +80,9 @@ python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
!!! note
This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
-Edit **tobii_live_stream_context.json** file as to select exisiting IP *address*, *project* or *participant* names and setup Tobii *configuration* parameters:
+Edit the **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1920, 1080]) and to set the *sides_mask* value to 420.
+
+Edit the **tobii_g2_live_stream_context.json** file to select an existing IP *address*, *project* or *participant* names, and to set up the Tobii *configuration* parameters:
```json
{
@@ -63,35 +105,50 @@ Edit **tobii_live_stream_context.json** file as to select exisiting IP *address*
}
```
-Then, load **tobii_live_stream_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load the **tobii_g2_live_stream_context.json** file to find ArUco markers in the camera image and to project gaze positions into AOIs:
```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_live_stream_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_g2_live_stream_context.json
```
-### Post-processing context
+### Segment playback context
-!!! note
- This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+Edit the **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1920, 1080]) and to set the *sides_mask* value to 420.
-Edit **tobii_post_processing_context.json** file to select an existing Tobii *segment* folder:
+Edit the **tobii_g2_segment_playback_context.json** file to select an existing Tobii *segment* folder:
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "record/segments/1",
"pipeline": "aruco_markers_pipeline.json"
}
}
```
-Then, load **tobii_post_processing_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load the **tobii_g2_segment_playback_context.json** file to find ArUco markers in the camera image and to project gaze positions into AOIs:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/tobii_g2_segment_playback_context.json
+```
+
+## Tobii Pro Glasses 3
+
+### Live stream context
+
+!!! note
+    This demonstration requires printing the **A3_demo.pdf** file located in the *./src/argaze/utils/demo/* folder on an A3 paper sheet.
+
+Edit the **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1920, 1080]) and to set the *sides_mask* value to 420.
+
+Load the **tobii_g3_live_stream_context.json** file to find ArUco markers in the camera image and to project gaze positions into AOIs:
```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_g3_live_stream_context.json
```
+
## Pupil Invisible
### Live stream context
@@ -99,8 +156,25 @@ python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
!!! note
This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
-Load **pupillabs_live_stream_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Edit the **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1088, 1080]) and to set the *sides_mask* value to 4.
+
+Load the **pupillabs_invisible_live_stream_context.json** file to find ArUco markers in the camera image and to project gaze positions into AOIs:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/pupillabs_invisible_live_stream_context.json
+```
+
+## Pupil Neon
+
+### Live stream context
+
+!!! note
+    This demonstration requires printing the **A3_demo.pdf** file located in the *./src/argaze/utils/demo/* folder on an A3 paper sheet.
+
+Edit the **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1600, 1200]) and to set the *sides_mask* value to 200.
+
+Load the **pupillabs_neon_live_stream_context.json** file to find ArUco markers in the camera image and to project gaze positions into AOIs:
```shell
-python -m argaze load ./src/argaze/utils/demo/pupillabs_live_stream_context.json
+python -m argaze load ./src/argaze/utils/demo/pupillabs_neon_live_stream_context.json
```
diff --git a/docs/user_guide/utils/estimate_aruco_markers_pose.md b/docs/user_guide/utils/estimate_aruco_markers_pose.md
index 3d34972..55bd232 100644
--- a/docs/user_guide/utils/estimate_aruco_markers_pose.md
+++ b/docs/user_guide/utils/estimate_aruco_markers_pose.md
@@ -15,7 +15,7 @@ Firstly, edit **utils/estimate_markers_pose/context.json** file as to select a m
}
```
-Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUco camera *size*, ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
+Secondly, edit the **utils/estimate_markers_pose/pipeline.json** file to set up the ArUco camera *size* and the ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
```json
{
@@ -27,7 +27,7 @@ Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUc
"pose_size": 4,
"pose_ids": [],
"parameters": {
- "useAruco3Detection": 1
+ "useAruco3Detection": true
},
"observers":{
"observers.ArUcoMarkersPoseRecorder": {
diff --git a/docs/user_guide/utils/main_commands.md b/docs/user_guide/utils/main_commands.md
index 4dd3434..9227d8d 100644
--- a/docs/user_guide/utils/main_commands.md
+++ b/docs/user_guide/utils/main_commands.md
@@ -35,13 +35,13 @@ For example:
echo "print(context)" > /tmp/argaze
```
-* Pause context processing:
+* Pause context:
```shell
echo "context.pause()" > /tmp/argaze
```
-* Resume context processing:
+* Resume context:
```shell
echo "context.resume()" > /tmp/argaze
@@ -54,3 +54,6 @@ Modify the content of JSON CONFIGURATION file with another JSON CHANGES file the
```shell
python -m argaze edit CONFIGURATION CHANGES OUTPUT
```
+
+!!! note
+    Use a *null* value to remove an entry, as in the sketch below.
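+
+For example, a *CHANGES* file like the following sketch (the *ArFrame* path and the *heatmap* entry are only illustrative) would remove the *heatmap* entry from the given CONFIGURATION:
+
+```json
+{
+    "argaze.ArFeatures.ArFrame": {
+        "heatmap": null
+    }
+}
+```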