Diffstat (limited to 'docs/user_guide')
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md | 4
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md | 2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md | 79
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md | 2
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md | 37
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md | 90
-rw-r--r--  docs/user_guide/aruco_marker_pipeline/introduction.md | 3
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md | 195
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/scripting.md | 106
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md (renamed from docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md) | 15
-rw-r--r--  docs/user_guide/eye_tracking_context/configuration_and_execution.md | 65
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/opencv.md | 63
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md | 32
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/random.md | 32
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md | 59
-rw-r--r--  docs/user_guide/eye_tracking_context/introduction.md | 19
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md | 2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md | 58
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md | 7
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/background.md | 4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md | 58
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/heatmap.md | 4
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/introduction.md | 8
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/recording.md | 2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/visualization.md | 35
-rw-r--r--  docs/user_guide/pipeline_input_context/configuration_and_connection.md | 35
-rw-r--r--  docs/user_guide/pipeline_input_context/context_definition.md | 57
-rw-r--r--  docs/user_guide/pipeline_input_context/introduction.md | 24
-rw-r--r--  docs/user_guide/utils/demonstrations_scripts.md | 48
-rw-r--r--  docs/user_guide/utils/estimate_aruco_markers_pose.md | 60
-rw-r--r--  docs/user_guide/utils/main_commands.md (renamed from docs/user_guide/utils/ready-made_scripts.md) | 35
31 files changed, 887 insertions, 353 deletions
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
index 975f278..311916b 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
@@ -5,7 +5,7 @@ As explain in the [OpenCV ArUco documentation](https://docs.opencv.org/4.x/d1/dc
## Load ArUcoDetector parameters
-[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded with a dedicated JSON entry.
Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file with ArUco detector parameters:
@@ -18,7 +18,7 @@ Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoM
"dictionary": "DICT_APRILTAG_16h5",
"parameters": {
"adaptiveThreshConstant": 10,
- "useAruco3Detection": 1
+ "useAruco3Detection": true
}
},
...
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
index 625f257..e9ce740 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
@@ -134,7 +134,7 @@ Below, an optic_parameters JSON file example:
## Load and display optic parameters
-[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file where optic parameters are loaded and displayed:
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
index c81d57d..f258e04 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
@@ -74,38 +74,83 @@ from argaze import ArFeatures
...
```
-## Pipeline execution outputs
+## Pipeline execution
-The [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method returns data about pipeline execution.
+### Detect ArUco markers, estimate scene pose and project 3D AOI
+
+Pass each camera image with timestamp information to the [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method to execute the whole pipeline dedicated to ArUco marker detection, scene pose estimation and 3D AOI projection.
+
+!!! warning "Mandatory"
+
+ The [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method must be called from a *try* block to catch pipeline exceptions.
```python
-# Assuming that timestamped images are available
+# Assuming that Full HD (1920x1080) images are available with timestamp values
...:
+ # Edit timestamped image
+ timestamped_image = DataFeatures.TimestampedImage(image, timestamp=timestamp)
+
try:
- # Watch image with ArUco camera
- aruco_camera.watch(image, timestamp=timestamp)
+ # Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
+ aruco_camera.watch(timestamped_image)
# Do something with pipeline exception
except Exception as e:
...
- # Do something with detected_markers
- ... aruco_camera.aruco_detector.detected_markers()
+    # Display the ArUcoCamera frame image to show detected ArUco markers, scene pose, 2D AOI projection and ArFrame visualization.
+ ... aruco_camera.image()
+```
+
+### Detection outputs
+
+Once the [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method has been called, data about the pipeline execution becomes available.
+
+```python
+# Assuming that the watch method has been called
+
+# Do something with detected_markers
+... aruco_camera.aruco_detector.detected_markers()
```
Let's understand the meaning of the returned data.
-### *aruco_camera.aruco_detector.detected_markers()*
+#### *aruco_camera.aruco_detector.detected_markers()*
A dictionary containing all detected markers is provided by [ArUcoDetector](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector) class.
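For instance, the dictionary can be iterated to access each detected marker (a minimal sketch; the exact marker attributes depend on the ArUcoMarker class and are not detailed here):

```python
# Assuming that the watch method has been called
...

# Iterate over detected markers (keys are marker identifiers)
for identifier, marker in aruco_camera.aruco_detector.detected_markers().items():

    # Do something with each detected marker
    ... identifier, marker
```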
+### Analyse timestamped gaze positions into the camera frame
+
+As mentioned above, [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) and, so, benefits from all the services described in the [gaze analysis pipeline section](../../gaze_analysis_pipeline/introduction.md).
+
+Particularly, timestamped gaze positions can be passed one by one to the [ArUcoCamera.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole pipeline dedicated to gaze analysis.
+
+!!! warning "Mandatory"
+
+ The [ArUcoCamera.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
+
+```python
+# Assuming that timestamped gaze positions are available
+...
+
+ try:
+
+ # Look ArUcoCamera frame at a timestamped gaze position
+ aruco_camera.look(timestamped_gaze_position)
+
+ # Do something with pipeline exception
+ except Exception as e:
+
+ ...
+```
+
## Setup ArUcoCamera image parameters
-Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.
```python
# Assuming ArUcoCamera is loaded
@@ -133,4 +178,18 @@ aruco_camera_image = aruco_camera.image(**image_parameters)
```
!!! note
- [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) and, so, benefits from all image parameters described in [gaze analysis pipeline visualization section](../../gaze_analysis_pipeline/visualization.md). \ No newline at end of file
+ [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) and, so, benefits from all image parameters described in [gaze analysis pipeline visualization section](../../gaze_analysis_pipeline/visualization.md).
+
+
+## Display ArUcoScene frames
+
+All [ArUcoScene](../../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frame images can be displayed like any [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) image.
+
+```python
+ ...
+
+ # Display all ArUcoScene frames
+ for frame in aruco_camera.scene_frames():
+
+ ... frame.image()
+``` \ No newline at end of file
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
index 46422b8..78a513a 100644
--- a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
@@ -1,7 +1,7 @@
Describe 3D AOI
===============
-Now that the [scene pose is estimated](aruco_marker_description.md) thanks to ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described into the same 3D referential.
+Now that the [scene pose is estimated](aruco_marker_description.md) from the ArUco marker description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described in the same 3D referential.
In the example scene, the two screens—the control panel and the window—are considered to be areas of interest.
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md
index 7323f2e..3a029b0 100644
--- a/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md
@@ -69,7 +69,8 @@ Here is the previous extract where "Left_Screen" and "Right_Screen" AOI are defi
}
}
}
- }
+ },
+ "copy_background_into_scenes_frames": true
...
}
}
@@ -96,40 +97,18 @@ The names of 3D AOI **and** their related [ArFrames](../../argaze.md/#argaze.ArF
[ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frame layers are projected into their dedicated [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) layers when the JSON configuration file is loaded.
-## Pipeline execution
-
-### Map ArUcoCamera image into ArUcoScenes frames
-
-After the timestamped camera image is passed to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method, it is possible to apply a perspective transformation in order to project the watched image into each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) [frame's background](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
-
-```python
-# Assuming that Full HD (1920x1080) timestamped images are available
-...:
-
- # Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
- aruco_camera.watch(timestamped_image)
+### *copy_background_into_scenes_frames*
- # Map watched image into ArUcoScene frames background
- aruco_camera.map(timestamp=timestamp)
-```
+When this entry is enabled and the timestamped camera image is passed to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method, a perspective transformation is applied in order to project the watched image into each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) [frame's background](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
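For instance, once this entry is enabled and the watch method has been called, each scene frame image should embed the mapped camera image as its background (a minimal sketch reusing the frame iteration shown in the advanced scripting chapter):

```python
# Assuming that copy_background_into_scenes_frames is enabled and the watch method has been called
...

# Each ArUcoScene frame image now embeds the mapped camera image as its background
for frame in aruco_camera.scene_frames():

    ... frame.image()
```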
-### Analyze timestamped gaze positions into ArUcoScene frames
+## Pipeline execution
[ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frames benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
!!! note
- Timestamped [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) passed to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method are projected into [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frames if applicable.
-
-### Display each ArUcoScene frames
-
-All [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frames image can be displayed as any [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame).
-
-```python
- ...
+ Timestamped [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) passed to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) are automatically projected into [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frames if applicable.
- # Display all ArUcoScene frames
- for frame in aruco_camera.scene_frames():
+Each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarker.ArUcoScene) frame image is displayed in a separate window.
- ... frame.image()
-``` \ No newline at end of file
+![ArGaze load GUI](../../img/argaze_load_gui_opencv_frame.png) \ No newline at end of file
diff --git a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
index f4bd2d4..56846e2 100644
--- a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
@@ -1,17 +1,17 @@
-Load and execute pipeline
+Edit and execute pipeline
=========================
-Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected thanks to [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.
+Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected by the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.
As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class also benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
-![ArUco camera frame](../../img/aruco_camera_frame.png)
+Once defined, an ArUco marker pipeline needs to be embedded inside a context that will provide it with both gaze positions and camera images to process.
-## Load JSON configuration file
+![ArUco camera frame](../../img/aruco_camera_frame.png)
-An [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) pipeline can be loaded from a JSON configuration file thanks to [argaze.load](../../argaze.md/#argaze.load) package method.
+## Edit JSON configuration
-Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file example:
+Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration example:
```json
{
@@ -52,19 +52,7 @@ Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCam
}
```
-Then, here is how to load the JSON file:
-
-```python
-import argaze
-
-# Load ArUcoCamera
-with argaze.load('./configuration.json') as aruco_camera:
-
- # Do something with ArUcoCamera
- ...
-```
-
-Now, let's understand the meaning of each JSON entry.
+Let's understand the meaning of each JSON entry.
### argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera
@@ -101,62 +89,32 @@ The usual [ArFrame visualization parameters](../gaze_analysis_pipeline/visualiza
## Pipeline execution
-### Detect ArUco markers, estimate scene pose and project 3D AOI
-
-Pass each camera image with timestamp information to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method to execute the whole pipeline dedicated to ArUco marker detection, scene pose estimation and 3D AOI projection.
-
-!!! warning "Mandatory"
-
- The [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method must be called from a *try* block to catch pipeline exceptions.
-
-```python
-# Assuming that Full HD (1920x1080) images are available with timestamp values
-...:
-
- # Edit timestamped image
- timestamped_image = DataFeatures.TimestampedImage(image, timestamp=timestamp)
-
- try:
+A pipeline needs to be embedded into a context to be executed.
- # Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
- aruco_camera.watch(timestamped_image)
+Copy the ArUco marker pipeline configuration defined above inside the following context configuration.
- # Do something with pipeline exception
- except Exception as e:
-
- ...
-
- # Display ArUcoCamera frame image to display detected ArUco markers, scene pose, 2D AOI projection and ArFrame visualization.
- ... aruco_camera.image()
+```json
+{
+ "argaze.utils.contexts.OpenCV.Movie": {
+ "name": "Movie player",
+ "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
+ "pipeline": JSON CONFIGURATION
+ }
+}
```
-### Analyse timestamped gaze positions into the camera frame
-
-As mentioned above, [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and, so, benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
-
-Particularly, timestamped gaze positions can be passed one by one to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole pipeline dedicated to gaze analysis.
+Then, use the [*load* command](../utils/main_commands.md) to execute the context.
-!!! warning "Mandatory"
-
- The [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
-
-```python
-# Assuming that timestamped gaze positions are available
-...
-
- try:
+```shell
+python -m argaze load CONFIGURATION
+```
- # Look ArUcoCamera frame at a timestamped gaze position
- aruco_camera.look(timestamped_gaze_position)
+This command should open a GUI window that shows the detected markers and the identified cursor fixation circles when the mouse moves over the window.
- # Do something with pipeline exception
- except Exception as e:
-
- ...
-```
+![ArGaze load GUI](../../img/argaze_load_gui_opencv_pipeline.png)
!!! note ""
- At this point, the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method only detects ArUco marker and the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArCamera.look) method only processes gaze movement identification without any AOI support as no scene description is provided into the JSON configuration file.
+    At this point, the pipeline only processes gaze movement identification without any AOI support, as no scene description is provided in the JSON configuration file.
Read the next chapters to learn [how to estimate scene pose](pose_estimation.md), [how to describe a 3D scene's AOI](aoi_3d_description.md) and [how to project them into the camera frame](aoi_3d_projection.md). \ No newline at end of file
diff --git a/docs/user_guide/aruco_marker_pipeline/introduction.md b/docs/user_guide/aruco_marker_pipeline/introduction.md
index ef2e4da..54e1a1f 100644
--- a/docs/user_guide/aruco_marker_pipeline/introduction.md
+++ b/docs/user_guide/aruco_marker_pipeline/introduction.md
@@ -9,6 +9,9 @@ The OpenCV library provides a module to detect fiducial markers in a picture and
The ArGaze [ArUcoMarker submodule](../../argaze.md/#argaze.ArUcoMarker) eases markers creation, markers detection, and 3D scene pose estimation through a set of high-level classes.
+!!! warning "Read eye tracking context and gaze analysis pipeline sections before"
+    This section assumes that the incoming gaze positions are provided by an [eye tracking context](../eye_tracking_context/introduction.md) and that the reader understands how a [gaze analysis pipeline](../gaze_analysis_pipeline/introduction.md) works.
+
First, let's look at the schema below. It gives an overview of the main notions involved in the following chapters.
![ArUco marker pipeline](../../img/aruco_marker_pipeline.png)
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
new file mode 100644
index 0000000..a543bc7
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
@@ -0,0 +1,195 @@
+Define a context class
+======================
+
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic base class interface to handle incoming eye tracker data before passing them to a processing pipeline, according to the [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers).
+
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides control features to stop or pause working threads, and performance assessment features to measure how many times processing is called and the time spent by the process.
+
+Besides, there is also a [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to implement a device-specific calibration process.
+
+In the same way, there is a [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines *duration* and *progression* properties to get information about the record length and the playback advancement.
+
+Finally, a specific eye tracking context can be defined in a Python file by writing a class that inherits from either the [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) or [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class.
+
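+For instance, once a context is loaded (see the [scripting chapter](scripting.md)), these control features can be driven from a script. This is a minimal sketch relying only on the *is_running*, *is_paused* and *stop* methods used in the examples below:
+
+```python
+# Assuming a context is loaded
+...
+
+    # Check whether working threads are running and not paused
+    if context.is_running() and not context.is_paused():
+
+        # Do something with the running context
+        ...
+
+    # Stop working threads
+    context.stop()
+```
+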
+## Write data capture context
+
+Here is a data capture context example that processes gaze positions and camera images in two separate threads:
+
+```python
+import threading
+
+from argaze import ArFeatures, DataFeatures
+
+class DataCaptureExample(ArFeatures.DataCaptureContext):
+
+ @DataFeatures.PipelineStepInit
+ def __init__(self, **kwargs):
+
+ # Init DataCaptureContext class
+ super().__init__()
+
+ # Init private attribute
+ self.__parameter = ...
+
+ @property
+ def parameter(self):
+ """Any context specific parameter."""
+ return self.__parameter
+
+ @parameter.setter
+ def parameter(self, parameter):
+ self.__parameter = parameter
+
+ @DataFeatures.PipelineStepEnter
+ def __enter__(self):
+ """Start context."""
+
+        # Start context according to any specific parameter
+ ... self.parameter
+
+ # Start a gaze position capture thread
+ self.__gaze_thread = threading.Thread(target = self.__gaze_position_capture)
+ self.__gaze_thread.start()
+
+ # Start a camera image capture thread if applicable
+ self.__camera_thread = threading.Thread(target = self.__camera_image_capture)
+ self.__camera_thread.start()
+
+ return self
+
+ def __gaze_position_capture(self):
+ """Capture gaze position."""
+
+ # Capture loop
+ while self.is_running():
+
+ # Pause capture
+ if not self.is_paused():
+
+ # Assuming that timestamp, x and y values are available
+ ...
+
+ # Process timestamped gaze position
+ self._process_gaze_position(timestamp = timestamp, x = x, y = y)
+
+ # Wait some time eventually
+ ...
+
+ def __camera_image_capture(self):
+ """Capture camera image if applicable."""
+
+ # Capture loop
+ while self.is_running():
+
+ # Pause capture
+ if not self.is_paused():
+
+ # Assuming that timestamp, camera_image are available
+ ...
+
+ # Process timestamped camera image
+ self._process_camera_image(timestamp = timestamp, image = camera_image)
+
+ # Wait some time eventually
+ ...
+
+ @DataFeatures.PipelineStepExit
+ def __exit__(self, exception_type, exception_value, exception_traceback):
+ """End context."""
+
+ # Stop capture loops
+ self.stop()
+
+        # Wait for capture threads to end
+        self.__gaze_thread.join()
+        self.__camera_thread.join()
+
+ def calibrate(self):
+ """Handle device calibration process."""
+
+ ...
+```
+
+## Write data playback context
+
+Here is a data playback context example that reads gaze positions and camera images in the same thread:
+
+```python
+import threading
+
+from argaze import ArFeatures, DataFeatures
+
+class DataPlaybackExample(ArFeatures.DataPlaybackContext):
+
+ @DataFeatures.PipelineStepInit
+ def __init__(self, **kwargs):
+
+        # Init DataPlaybackContext class
+ super().__init__()
+
+ # Init private attribute
+ self.__parameter = ...
+
+ @property
+ def parameter(self):
+ """Any context specific parameter."""
+ return self.__parameter
+
+ @parameter.setter
+ def parameter(self, parameter):
+ self.__parameter = parameter
+
+ @DataFeatures.PipelineStepEnter
+ def __enter__(self):
+ """Start context."""
+
+        # Start context according to any specific parameter
+ ... self.parameter
+
+ # Start a data playback thread
+ self.__data_thread = threading.Thread(target = self.__data_playback)
+ self.__data_thread.start()
+
+ return self
+
+ def __data_playback(self):
+ """Playback gaze position and camera image if applicable."""
+
+ # Playback loop
+ while self.is_running():
+
+ # Pause playback
+ if not self.is_paused():
+
+ # Assuming that timestamp, camera_image are available
+ ...
+
+ # Process timestamped camera image
+ self._process_camera_image(timestamp = timestamp, image = camera_image)
+
+ # Assuming that timestamp, x and y values are available
+ ...
+
+ # Process timestamped gaze position
+ self._process_gaze_position(timestamp = timestamp, x = x, y = y)
+
+ # Wait some time eventually
+ ...
+
+ @DataFeatures.PipelineStepExit
+ def __exit__(self, exception_type, exception_value, exception_traceback):
+ """End context."""
+
+ # Stop playback loop
+ self.stop()
+
+        # Wait for the playback thread to end
+        self.__data_thread.join()
+
+ @property
+ def duration(self) -> int|float:
+ """Get data duration."""
+ ...
+
+ @property
+ def progression(self) -> float:
+ """Get data playback progression between 0 and 1."""
+ ...
+```
+
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
new file mode 100644
index 0000000..d8eb389
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
@@ -0,0 +1,106 @@
+Script the context
+==================
+
+Context objects are accessible from a Python script.
+
+## Load configuration from JSON file
+
+A context configuration can be loaded from a JSON file using the [*load*](../../../argaze.md/#argaze.load) function.
+
+```python
+from argaze import load
+
+# Load a context
+with load(configuration_filepath) as context:
+
+ while context.is_running():
+
+ # Do something with context
+ ...
+
+ # Wait some time eventually
+ ...
+```
+
+!!! note
+    The **with** statement enables the context by calling its **enter** method, then ensures that its **exit** method is always called at the end.
+
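+For illustration, here is roughly what the **with** statement does (a plain Python sketch of the context manager protocol, not a dedicated ArGaze API):
+
+```python
+from argaze import load
+
+# Load the context then enter it manually
+context = load(configuration_filepath)
+context.__enter__()
+
+try:
+
+    while context.is_running():
+
+        # Do something with context
+        ...
+
+finally:
+
+    # Ensure that the exit method is always called
+    context.__exit__(None, None, None)
+```
+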
+## Load configuration from dictionary
+
+A context configuration can be loaded from a Python dictionary using the [*from_dict*](../../../argaze.md/#argaze.DataFeatures.from_dict) function.
+
+```python
+from argaze import DataFeatures
+
+import my_package
+
+# Set working directory to enable relative file path loading
+DataFeatures.set_working_directory('path/to/folder')
+
+# Edit a dict with context configuration
+configuration = {
+ "name": "My context",
+ "parameter": ...,
+ "pipeline": ...
+}
+
+# Load a context from a package
+with DataFeatures.from_dict(my_package.MyContext, configuration) as context:
+
+ while context.is_running():
+
+ # Do something with context
+ ...
+
+ # Wait some time eventually
+ ...
+```
+
+## Manage context
+
+Check the context or the pipeline type to adapt features.
+
+```python
+from argaze import ArFeatures
+
+# Assuming the context is loaded and is running
+...
+
+ # Check context type
+
+ # Data capture case: calibration method is available
+ if issubclass(type(context), ArFeatures.DataCaptureContext):
+ ...
+
+ # Data playback case: playback methods are available
+ if issubclass(type(context), ArFeatures.DataPlaybackContext):
+ ...
+
+ # Check pipeline type
+
+    # Screen-based case: only gaze positions are processed
+ if issubclass(type(context.pipeline), ArFeatures.ArFrame):
+ ...
+
+    # Head-mounted case: camera images are also processed
+ if issubclass(type(context.pipeline), ArFeatures.ArCamera):
+ ...
+```
+
+## Display context
+
+The context image can be displayed at low priority so as not to block pipeline processing.
+
+```python
+from argaze import DataFeatures
+
+# Assuming the context is loaded and is running
+...
+
+ # Display context if the pipeline is available
+ try:
+
+ ... = context.image(wait = False)
+
+ except DataFeatures.SharedObjectBusy:
+
+ pass
+```
diff --git a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
index 026d287..959d955 100644
--- a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
@@ -3,7 +3,7 @@ Edit timestamped gaze positions
Whatever eye data comes from a file on disk or from a live stream, timestamped gaze positions are required before going further.
-![Timestamped gaze positions](../../img/timestamped_gaze_positions.png)
+![Timestamped gaze positions](../../../img/timestamped_gaze_positions.png)
## Import timestamped gaze positions from CSV file
@@ -28,8 +28,8 @@ for timestamped_gaze_position in ts_gaze_positions:
## Edit timestamped gaze positions from live stream
-Real-time gaze positions can be edited thanks to the [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
-Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited thanks to the Python [time package](https://docs.python.org/3/library/time.html).
+Real-time gaze positions can be edited using directly the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
+Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited using the Python [time package](https://docs.python.org/3/library/time.html).
```python
from argaze import GazeFeatures
@@ -64,12 +64,3 @@ start_time = time.time()
!!! warning "Free time unit"
Timestamps can either be integers or floats, seconds, milliseconds or what ever you need. The only concern is that all time values used in further configurations have to be in the same unit.
-
-<!--
-!!! note "Eyetracker connectors"
-
- [Read the use cases section to discover examples using specific eyetrackers](./user_cases/introduction.md).
-!-->
-
-!!! note ""
- Now we have timestamped gaze positions at expected format, read the next chapter to start learning [how to analyze them](./configuration_and_execution.md). \ No newline at end of file
diff --git a/docs/user_guide/eye_tracking_context/configuration_and_execution.md b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
new file mode 100644
index 0000000..e1123fb
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
@@ -0,0 +1,65 @@
+Edit and execute context
+========================
+
+The [utils.contexts module](../../argaze.md/#argaze.utils.contexts) provides ready-made contexts like:
+
+* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) data capture and data playback contexts,
+* [Pupil Labs](context_modules/pupil_labs.md) data capture context,
+* [OpenCV](context_modules/opencv.md) window cursor position capture and movie playback,
+* [Random](context_modules/random.md) gaze position generator.
+
+## Edit JSON configuration
+
+Here is a JSON configuration that loads a [Random.GazePositionGenerator](../../argaze.md/#argaze.utils.contexts.Random.GazePositionGenerator) context:
+
+```json
+{
+ "argaze.utils.contexts.Random.GazePositionGenerator": {
+ "name": "Random gaze position generator",
+ "range": [1280, 720],
+ "pipeline": {
+ "argaze.ArFeatures.ArFrame": {
+ "size": [1280, 720]
+ }
+ }
+ }
+}
+```
+
+Let's understand the meaning of each JSON entry.
+
+### argaze.utils.contexts.Random.GazePositionGenerator
+
+The class name of the object being loaded from the [utils.contexts module](../../argaze.md/#argaze.utils.contexts).
+
+### *name*
+
+The name of the [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext). Basically useful for visualization purposes.
+
+### *range*
+
+The range of the gaze position being generated. This property is specific to the [Random.GazePositionGenerator](../../argaze.md/#argaze.utils.contexts.Random.GazePositionGenerator) class.
+
+### *pipeline*
+
+A minimal gaze processing pipeline that only draws the last gaze position.
+
+## Context execution
+
+A context can be loaded from a JSON configuration file using the [*load* command](../utils/main_commands.md).
+
+```shell
+python -m argaze load CONFIGURATION
+```
+
+This command should open a GUI window with a random yellow dot inside.
+
+![ArGaze load GUI](../../img/argaze_load_gui_random.png)
+
+!!! note ""
+
+    At this point, it is possible to load any ready-made context from the [utils.contexts](../../argaze.md/#argaze.utils.contexts) module.
+
+    However, the incoming gaze positions are not processed and gaze mapping would not be available for a head-mounted eye tracker context.
+
+ Read the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md) to learn how to process gaze positions then, the [ArUco markers pipeline section](../aruco_marker_pipeline/introduction.md) to learn how to enable gaze mapping with an ArUco markers setup.
diff --git a/docs/user_guide/eye_tracking_context/context_modules/opencv.md b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
new file mode 100644
index 0000000..7d73a03
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
@@ -0,0 +1,63 @@
+OpenCV
+======
+
+ArGaze provides ready-made contexts to process the cursor position over an OpenCV window and to process movie or camera images.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Cursor
+
+::: argaze.utils.contexts.OpenCV.Cursor
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Cursor": {
+ "name": "Open CV cursor",
+ "pipeline": ...
+ }
+}
+```
+
+## Movie
+
+::: argaze.utils.contexts.OpenCV.Movie
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Movie": {
+ "name": "Open CV movie",
+ "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
+ "pipeline": ...
+ }
+}
+```
+
+## Camera
+
+::: argaze.utils.contexts.OpenCV.Camera
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Camera": {
+ "name": "Open CV camera",
+ "identifier": 0,
+ "pipeline": ...
+ }
+}
+``` \ No newline at end of file
diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md
new file mode 100644
index 0000000..d2ec336
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md
@@ -0,0 +1,32 @@
+Pupil Labs
+==========
+
+ArGaze provides a ready-made context to work with Pupil Labs devices.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.PupilLabs.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.PupilLabs.LiveStream": {
+ "name": "Pupil Labs live stream",
+ "project": "my_experiment",
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/context_modules/random.md b/docs/user_guide/eye_tracking_context/context_modules/random.md
new file mode 100644
index 0000000..89d7501
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/random.md
@@ -0,0 +1,32 @@
+Random
+======
+
+ArGaze provides a ready-made context to generate random gaze positions.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Gaze Position Generator
+
+::: argaze.utils.contexts.Random.GazePositionGenerator
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.Random.GazePositionGenerator": {
+ "name": "Random gaze position generator",
+ "range": [1280, 720],
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
new file mode 100644
index 0000000..6ff44bd
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
@@ -0,0 +1,59 @@
+Tobii Pro Glasses 2
+===================
+
+ArGaze provides a ready-made context to work with Tobii Pro Glasses 2 devices.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.TobiiProGlasses2.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses2.LiveStream": {
+ "name": "Tobii Pro Glasses 2 live stream",
+ "address": "10.34.0.17",
+ "project": "my_experiment",
+ "participant": "subject-A",
+ "configuration": {
+ "sys_ec_preset": "Indoor",
+ "sys_sc_width": 1920,
+ "sys_sc_height": 1080,
+ "sys_sc_fps": 25,
+ "sys_sc_preset": "Auto",
+ "sys_et_freq": 50,
+ "sys_mems_freq": 100
+ },
+ "pipeline": ...
+ }
+}
+```
+
+## Segment Playback
+
+::: argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
+ "segment": "./src/argaze/utils/demo/tobii_record/segments/1",
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/introduction.md b/docs/user_guide/eye_tracking_context/introduction.md
new file mode 100644
index 0000000..a6208b2
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/introduction.md
@@ -0,0 +1,19 @@
+Overview
+========
+
+This section explains how to handle eye tracker data from various sources, such as live streams or archived files, before passing them to a processing pipeline. These various usages are covered by the notion of **eye tracking context**.
+
+To use a ready-made eye tracking context, you only need to know:
+
+* [How to edit and execute a context](configuration_and_execution.md)
+
+More advanced features are also explained like:
+
+* [How to script a context](./advanced_topics/scripting.md),
+* [How to define a context](./advanced_topics/context_definition.md),
+* [How to edit timestamped gaze positions](advanced_topics/timestamped_gaze_positions_edition.md)
+
+To better understand how a context works, the schema below mentions the *enter* and *exit* methods, which are related to the notion of a [Python context manager](https://docs.python.org/3/reference/datamodel.html#context-managers).
+
+![ArContext class](../../img/eye_tracker_context.png)
+
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
index 4970dba..effee18 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
@@ -7,7 +7,7 @@ The calibration algorithm can be selected by instantiating a particular [GazePos
## Enable ArFrame calibration
-Gaze position calibration can be enabled thanks to a dedicated JSON entry.
+Gaze position calibration can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where a [Linear Regression](../../../argaze.md/#argaze.GazeAnalysis.LinearRegression) calibration algorithm is selected with no parameters:
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
index 026cb3f..843274a 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
@@ -66,7 +66,28 @@ from argaze import ArFeatures
...
```
-## Pipeline execution updates
+## Pipeline execution
+
+Timestamped [GazePositions](../../../argaze.md/#argaze.GazeFeatures.GazePosition) have to be passed one by one to the [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole instantiated pipeline.
+
+!!! warning "Mandatory"
+
+ The [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
+
+```python
+# Assuming that timestamped gaze positions are available
+...
+
+ try:
+
+ # Look ArFrame at a timestamped gaze position
+ ar_frame.look(timestamped_gaze_position)
+
+ # Do something with pipeline exception
+ except Exception as e:
+
+ ...
+```
Calling the [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method updates many data inside the pipeline.
@@ -137,7 +158,7 @@ Last [GazeMovement](../../../argaze.md/#argaze.GazeFeatures.GazeMovement) identi
This could also be the current gaze movement if [ArFrame.filter_in_progress_identification](../../../argaze.md/#argaze.ArFeatures.ArFrame) attribute is false.
In that case, the last gaze movement *finished* flag is false.
-Then, the last gaze movement type can be tested thanks to [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
+Then, the last gaze movement type can be tested with [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
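For instance, a minimal sketch testing the type of the last gaze movement (assuming it is accessed as *ar_frame.last_gaze_movement()*, as described above):

```python
from argaze import GazeFeatures

# Assuming the last gaze movement is available
last_gaze_movement = ar_frame.last_gaze_movement()

if GazeFeatures.is_fixation(last_gaze_movement):

    # Do something with the fixation
    ...

elif GazeFeatures.is_saccade(last_gaze_movement):

    # Do something with the saccade
    ...
```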
### *ar_frame.is_analysis_available()*
@@ -161,7 +182,7 @@ This an iterator to access to all aoi scan path analysis. Notice that each aoi s
## Setup ArFrame image parameters
-[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.
```python
# Assuming ArFrame is loaded
@@ -186,3 +207,34 @@ ar_frame_image = ar_frame.image(**image_parameters)
# Do something with ArFrame image
...
```
+
+Then, the [ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method can be called in various situations.
+
+### Live window display
+
+While timestamped gaze positions are processed by the [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method, it is possible to display the [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) image with the [OpenCV package](https://pypi.org/project/opencv-python/).
+
+```python
+import cv2
+
+def main():
+
+ # Assuming ArFrame is loaded
+ ...
+
+ # Create a window to display ArFrame
+ cv2.namedWindow(ar_frame.name, cv2.WINDOW_AUTOSIZE)
+
+ # Assuming that timestamped gaze positions are being processed by ArFrame.look method
+ ...
+
+ # Update ArFrame image display
+ cv2.imshow(ar_frame.name, ar_frame.image())
+
+ # Wait 10 ms
+ cv2.waitKey(10)
+
+if __name__ == '__main__':
+
+ main()
+``` \ No newline at end of file
diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
index be27c69..c2a6ac3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
@@ -5,7 +5,7 @@ Once [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) is [configured](confi
![Layer](../../img/ar_layer.png)
-## Add ArLayer to ArFrame JSON configuration file
+## Add ArLayer to ArFrame JSON configuration
The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class defines a space where to match fixations with AOI and inside which those matches need to be analyzed.
@@ -100,6 +100,11 @@ The second [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step a
Once gaze movements are matched to AOI, they are automatically appended to the AOIScanPath if required.
+!!! warning "GazeFeatures.OutsideAOI"
+    When a fixation is not looking at any AOI, a step associated with an AOI called [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) is added. As long as fixations are not looking at any AOI, all fixations/saccades are stored in this step. In this way, further analyses are calculated while taking those extra [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) steps into account.
+
+    This is particularly important when calculating transition matrices, because otherwise arcs could appear between two AOI when the gaze may actually have fixated outside any AOI in the meantime.
+
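For illustration, here is an extract enabling a transition matrix analysis on a layer. This is only a sketch: the *aoi_scan_path_analyzers* entry and the *argaze.GazeAnalysis.TransitionMatrix* analyzer name should be checked against the layer configuration shown above and the code reference.

```json
    "my_layer": {
        "aoi_scan_path": {
            "duration_max": 30000
        },
        "aoi_scan_path_analyzers": {
            "argaze.GazeAnalysis.TransitionMatrix": {}
        }
    }
```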
The [AOIScanPath.duration_max](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.duration_max) attribute is the duration from which older AOI scan steps are removed each time new AOI scan steps are added.
!!! note "Optional"
diff --git a/docs/user_guide/gaze_analysis_pipeline/background.md b/docs/user_guide/gaze_analysis_pipeline/background.md
index 900d151..11285e3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/background.md
+++ b/docs/user_guide/gaze_analysis_pipeline/background.md
@@ -7,7 +7,7 @@ Background is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)
## Load and display ArFrame background
-[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where a background picture is loaded and displayed:
@@ -28,7 +28,7 @@ Here is an extract from the JSON ArFrame configuration file where a background p
```
!!! note
- As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+ As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
index 57a9d71..58919e5 100644
--- a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
@@ -1,15 +1,15 @@
-Load and execute pipeline
+Edit and execute pipeline
=========================
The [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) class defines a rectangular area where timestamped gaze positions are projected in and inside which they need to be analyzed.
-![Frame](../../img/ar_frame.png)
+Once defined, a gaze analysis pipeline needs to be embedded inside a context that will provide it with gaze positions to process.
-## Load JSON configuration file
+![Frame](../../img/ar_frame.png)
-An [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline can be loaded from a JSON configuration file thanks to the [argaze.load](../../argaze.md/#argaze.load) package method.
+## Edit JSON configuration
-Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) configuration file example:
+Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) configuration example:
```json
{
@@ -35,19 +35,7 @@ Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) conf
}
```
-Then, here is how to load the JSON file:
-
-```python
-import argaze
-
-# Load ArFrame
-with argaze.load('./configuration.json') as ar_frame:
-
- # Do something with ArFrame
- ...
-```
-
-Now, let's understand the meaning of each JSON entry.
+Let's understand the meaning of each JSON entry.
### argaze.ArFeatures.ArFrame
@@ -103,28 +91,32 @@ In the example file, the chosen analysis algorithms are the [Basic](../../argaze
## Pipeline execution
-Timestamped [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) have to be passed one by one to the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole instantiated pipeline.
+A pipeline needs to be embedded into a context to be executed.
-!!! warning "Mandatory"
+Copy the gaze analysis pipeline configuration defined above inside the following context configuration.
- The [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
+```json
+{
+ "argaze.utils.contexts.Random.GazePositionGenerator": {
+ "name": "Random gaze position generator",
+ "range": [1920, 1080],
+ "pipeline": JSON CONFIGURATION
+ }
+}
+```
-```python
-# Assuming that timestamped gaze positions are available
-...
+Then, use the [*load* command](../utils/main_commands.md) to execute the context.
- try:
+```shell
+python -m argaze load CONFIGURATION
+```
- # Look ArFrame at a timestamped gaze position
- ar_frame.look(timestamped_gaze_position)
+This command should open a GUI window with a random yellow dot and the identified fixation circles.
+
+![ArGaze load GUI](../../img/argaze_load_gui_random_pipeline.png)
- # Do something with pipeline exception
- except Exception as e:
-
- ...
-```
!!! note ""
- At this point, the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method only processes gaze movement identification and scan path analysis without any AOI neither any recording or visualization supports.
+    At this point, the pipeline only processes gaze movement identification and scan path analysis, without any AOI, recording or visualization support.
Read the next chapters to learn how to [describe AOI](aoi_2d_description.md), [add AOI analysis](aoi_analysis.md), [record gaze analysis](recording.md) and [visualize pipeline steps](visualization.md). \ No newline at end of file
diff --git a/docs/user_guide/gaze_analysis_pipeline/heatmap.md b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
index 2057dbe..77b2be0 100644
--- a/docs/user_guide/gaze_analysis_pipeline/heatmap.md
+++ b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
@@ -7,7 +7,7 @@ Heatmap is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pip
## Enable and display ArFrame heatmap
-[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file where heatmap is enabled and displayed:
@@ -31,7 +31,7 @@ Here is an extract from the JSON ArFrame configuration file where heatmap is ena
}
```
!!! note
- [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+ [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/introduction.md b/docs/user_guide/gaze_analysis_pipeline/introduction.md
index c12f669..1b06ff6 100644
--- a/docs/user_guide/gaze_analysis_pipeline/introduction.md
+++ b/docs/user_guide/gaze_analysis_pipeline/introduction.md
@@ -1,7 +1,10 @@
Overview
========
-This section explains how to create gaze analysis pipelines for various use cases.
+This section explains how to process incoming gaze positions through a **gaze analysis pipeline**.
+
+!!! warning "Read eye tracking context section before"
+ This section assumes that the incoming gaze positions are provided by an [eye tracking context](../eye_tracking_context/introduction.md).
First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
@@ -9,8 +12,7 @@ First, let's look at the schema below: it gives an overview of the main notions
To build your own gaze analysis pipeline, you need to know:
-* [How to edit timestamped gaze positions](timestamped_gaze_positions_edition.md),
-* [How to load and execute gaze analysis pipeline](configuration_and_execution.md),
+* [How to edit and execute a pipeline](configuration_and_execution.md),
* [How to describe AOI](aoi_2d_description.md),
* [How to enable AOI analysis](aoi_analysis.md),
* [How to visualize pipeline steps outputs](visualization.md),
diff --git a/docs/user_guide/gaze_analysis_pipeline/recording.md b/docs/user_guide/gaze_analysis_pipeline/recording.md
index 826442f..2a92403 100644
--- a/docs/user_guide/gaze_analysis_pipeline/recording.md
+++ b/docs/user_guide/gaze_analysis_pipeline/recording.md
@@ -52,7 +52,7 @@ class ScanPathAnalysisRecorder(UtilsFeatures.FileWriter):
# Init FileWriter
super().__init__(**kwargs)
- # Edit hearder line
+ # Edit header line
self.header = "Timestamp (ms)", "Duration (ms)", "Steps number"
def on_look(self, timestamp, ar_frame, exception):
diff --git a/docs/user_guide/gaze_analysis_pipeline/visualization.md b/docs/user_guide/gaze_analysis_pipeline/visualization.md
index 6b9805c..08b5465 100644
--- a/docs/user_guide/gaze_analysis_pipeline/visualization.md
+++ b/docs/user_guide/gaze_analysis_pipeline/visualization.md
@@ -5,9 +5,9 @@ Visualization is not a pipeline step, but each [ArFrame](../../argaze.md/#argaze
![ArFrame visualization](../../img/visualization.png)
-## Add image parameters to ArFrame JSON configuration file
+## Add image parameters to ArFrame JSON configuration
-[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a dedicated JSON entry.
+[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a dedicated JSON entry.
Here is an extract from the JSON ArFrame configuration file with a sample where image parameters are added:
@@ -82,37 +82,6 @@ Here is an extract from the JSON ArFrame configuration file with a sample where
Most of *image_parameters* entries work if related ArFrame/ArLayer pipeline steps are enabled.
For example, a JSON *draw_scan_path* entry needs GazeMovementIdentifier and ScanPath steps to be enabled.
-Then, [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method can be called in various situations.
-
-## Live window display
-
-While timestamped gaze positions are processed by [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method, it is possible to display the [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) image thanks to the [OpenCV package](https://pypi.org/project/opencv-python/).
-
-```python
-import cv2
-
-def main():
-
- # Assuming ArFrame is loaded
- ...
-
- # Create a window to display ArFrame
- cv2.namedWindow(ar_frame.name, cv2.WINDOW_AUTOSIZE)
-
- # Assuming that timestamped gaze positions are being processed by ArFrame.look method
- ...
-
- # Update ArFrame image display
- cv2.imshow(ar_frame.name, ar_frame.image())
-
- # Wait 10 ms
- cv2.waitKey(10)
-
-if __name__ == '__main__':
-
- main()
-```
-
!!! note "Export to video file"
Video exportation is detailed in [gaze analysis recording chapter](recording.md). \ No newline at end of file
diff --git a/docs/user_guide/pipeline_input_context/configuration_and_connection.md b/docs/user_guide/pipeline_input_context/configuration_and_connection.md
deleted file mode 100644
index 4aac88a..0000000
--- a/docs/user_guide/pipeline_input_context/configuration_and_connection.md
+++ /dev/null
@@ -1,35 +0,0 @@
-Load and connect a context
-==========================
-
-Once an [ArContext is defined](context_definition.md), it have to be connected to a pipeline.
-
-# Load JSON configuration file
-
-An [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) can be loaded from a JSON configuration file thanks to the [argaze.load](../../argaze.md/#argaze.load) package method.
-
-Here is a JSON configuration file related to the [previously defined Example context](context_definition.md):
-
-```json
-{
- "my_context.Example": {
- "name": "My example context",
- "parameter": ...,
- "pipeline": "pipeline.json"
- }
-}
-```
-
-Then, here is how to load the JSON file:
-
-```python
-import argaze
-
-# Load ArContext
-with argaze.load('./configuration.json') as ar_context:
-
- # Do something with ArContext
- ...
-```
-
-!!! note
- There is nothing to do to execute a loaded context as it is handled inside its own **__enter__** method.
diff --git a/docs/user_guide/pipeline_input_context/context_definition.md b/docs/user_guide/pipeline_input_context/context_definition.md
deleted file mode 100644
index 7d30438..0000000
--- a/docs/user_guide/pipeline_input_context/context_definition.md
+++ /dev/null
@@ -1,57 +0,0 @@
-Define a context class
-======================
-
-The [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic class interface to handle pipeline inputs according to [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers).
-
-# Write Python context file
-
-A specific [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) can be defined into a Python file.
-
-Here is an example context defined into *my_context.py* file:
-
-```python
-from argaze import ArFeatures, DataFeatures
-
-class Example(ArFeatures.ArContext):
-
- @DataFeatures.PipelineStepInit
- def __init__(self, **kwargs):
-
- # Init ArContext class
- super().__init__()
-
- # Init private attribute
- self.__parameter = ...
-
- @property
- def parameter(self):
- """Any context specific parameter."""
- return self.__parameter
-
- @parameter.setter
- def parameter(self, parameter):
- self.__parameter = parameter
-
- @DataFeatures.PipelineStepEnter
- def __enter__(self):
-
- # Start context according any specific parameter
- ... self.parameter
-
- # Assuming that timestamp, x and y values are available
- ...
-
- # Process timestamped gaze position
- self._process_gaze_position(timestamp = timestamp, x = x, y = y)
-
- @DataFeatures.PipelineStepExit
- def __exit__(self, exception_type, exception_value, exception_traceback):
-
- # End context
- ...
-```
-
-!!! note ""
-
- The next chapter explains how to [load a context to connect it with a pipeline](configuration_and_connection.md).
- \ No newline at end of file
diff --git a/docs/user_guide/pipeline_input_context/introduction.md b/docs/user_guide/pipeline_input_context/introduction.md
deleted file mode 100644
index e31ad54..0000000
--- a/docs/user_guide/pipeline_input_context/introduction.md
+++ /dev/null
@@ -1,24 +0,0 @@
-Overview
-========
-
-This section explains how to connect [gaze analysis](../gaze_analysis_pipeline/introduction.md) or [augmented reality](../aruco_marker_pipeline/introduction.md) pipelines with various input contexts.
-
-First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
-
-![Pipeline input context](../../img/pipeline_input_context.png)
-
-To build your own input context, you need to know:
-
-* [How to define a context class](context_definition.md),
-* [How to load a context to connect with a pipeline](configuration_and_connection.md),
-
-!!! warning "Documentation in progress"
-
- This section is not yet fully done. Please look at the [demonstrations scripts chapter](../utils/demonstrations_scripts.md) to know more about this notion.
-
-<!--
-* [How to stop a context](stop.md),
-* [How to pause and resume a context](pause_and_resume.md),
-* [How to visualize a context](visualization.md),
-* [How to handle pipeline exceptions](exceptions.md)
-!-->
diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md
index f293980..59df85b 100644
--- a/docs/user_guide/utils/demonstrations_scripts.md
+++ b/docs/user_guide/utils/demonstrations_scripts.md
@@ -9,20 +9,45 @@ Collection of command-line scripts for demonstration purpose.
!!! note
*Use -h option to get command arguments documentation.*
+!!! note
+ Each demonstration outputs metrics into *_export/records* folder.
+
## Random context
-Load **random_context.json** file to analyze random gaze positions:
+Load **random_context.json** file to generate random gaze positions:
```shell
python -m argaze load ./src/argaze/utils/demo/random_context.json
```
-## OpenCV window context
+## OpenCV
+
+### Cursor context
+
+Load **opencv_cursor_context.json** file to capture cursor pointer positions over an OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json
+```
+
+### Movie context
+
+Load **opencv_movie_context.json** file to play back a movie while also capturing cursor pointer positions over an OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
+```
+
+### Camera context
+
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution and to reduce the *sides_mask* value.
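+
+For instance, assuming a 1280x720 webcam, the adapted entries of **aruco_markers_pipeline.json** could look like this sketch (the values are illustrative and the other pipeline entries are omitted):
+
+```json
+{
+    "argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": {
+        "name": "Webcam",
+        "size": [1280, 720],
+        "sides_mask": 0
+    }
+}
+```
+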
-Load **opencv_window_context.json** file to analyze mouse pointer positions over OpenCV window:
+Edit **opencv_camera_context.json** file to select the camera device identifier (default is 0).
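+
+A hypothetical sketch of this context file, assuming the context class is exposed as *argaze.utils.contexts.OpenCV.Camera* and the device is selected through an *identifier* entry (both names are assumptions; check the shipped file for the exact keys):
+
+```json
+{
+    "argaze.utils.contexts.OpenCV.Camera": {
+        "name": "OpenCV camera",
+        "identifier": 0,
+        "pipeline": "aruco_markers_pipeline.json"
+    }
+}
+```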
+
+Then, load **opencv_camera_context.json** file to capture camera images and cursor pointer positions over an OpenCV window:
```shell
-python -m argaze load ./src/argaze/utils/demo/opencv_window_context.json
+python -m argaze load ./src/argaze/utils/demo/opencv_camera_context.json
```
## Tobii Pro Glasses 2
@@ -61,27 +86,24 @@ Then, load **tobii_live_stream_context.json** file to find ArUco marker into cam
python -m argaze load ./src/argaze/utils/demo/tobii_live_stream_context.json
```
-### Post-processing context
-
-!!! note
- This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+### Segment playback context
-Edit **tobii_post_processing_context.json** file to select an existing Tobii *segment* folder:
+Edit **tobii_segment_playback_context.json** file to select an existing Tobii *segment* folder:
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "record/segments/1",
"pipeline": "aruco_markers_pipeline.json"
}
}
```
-Then, load **tobii_post_processing_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load **tobii_segment_playback_context.json** file to find ArUco markers in the camera image and project gaze positions into AOI:
```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_segment_playback_context.json
```
## Pupil Invisible
diff --git a/docs/user_guide/utils/estimate_aruco_markers_pose.md b/docs/user_guide/utils/estimate_aruco_markers_pose.md
new file mode 100644
index 0000000..55bd232
--- /dev/null
+++ b/docs/user_guide/utils/estimate_aruco_markers_pose.md
@@ -0,0 +1,60 @@
+Estimate ArUco markers pose
+===========================
+
+This **ArGaze** application detects ArUco markers inside movie frames, then exports the pose estimations as .obj files into a folder.
+
+Firstly, edit **utils/estimate_markers_pose/context.json** file to select a movie *path*.
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Movie" : {
+ "name": "ArUco markers pose estimator",
+ "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
+ "pipeline": "pipeline.json"
+ }
+}
+```
+
+Secondly, edit **utils/estimate_markers_pose/pipeline.json** file to set up the ArUco camera *size* attribute and the ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
+
+```json
+{
+ "argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": {
+ "name": "Full HD Camera",
+ "size": [1920, 1080],
+ "aruco_detector": {
+ "dictionary": "DICT_APRILTAG_16h5",
+ "pose_size": 4,
+ "pose_ids": [],
+ "parameters": {
+ "useAruco3Detection": true
+ },
+ "observers":{
+ "observers.ArUcoMarkersPoseRecorder": {
+ "output_folder": "_export/records/aruco_markers_group"
+ }
+ }
+ },
+ "sides_mask": 420,
+ "image_parameters": {
+ "background_weight": 1,
+ "draw_gaze_positions": {
+ "color": [0, 255, 255],
+ "size": 4
+ },
+ "draw_detected_markers": {
+ "color": [255, 255, 255],
+ "draw_axes": {
+ "thickness": 4
+ }
+ }
+ }
+ }
+}
+```
+
+Then, launch the application.
+
+```shell
+python -m argaze load ./src/argaze/utils/estimate_markers_pose/context.json
+``` \ No newline at end of file
diff --git a/docs/user_guide/utils/ready-made_scripts.md b/docs/user_guide/utils/main_commands.md
index 892fef8..c4887a4 100644
--- a/docs/user_guide/utils/ready-made_scripts.md
+++ b/docs/user_guide/utils/main_commands.md
@@ -1,15 +1,12 @@
-Ready-made scripts
-==================
+Main commands
+=============
-Collection of command-line scripts to provide useful features.
-
-!!! note
- *Consider that all inline commands below have to be executed at the root of ArGaze package folder.*
+The **ArGaze** package comes with top-level commands.
!!! note
*Use -h option to get command arguments documentation.*
-## Load ArContext JSON configuration
+## Load
Load and execute any [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) from a JSON CONFIGURATION file
@@ -17,6 +14,10 @@ Load and execute any [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) f
python -m argaze load CONFIGURATION
```
+This command should open a GUI window to display the image of the context's pipeline.
+
+![ArGaze load GUI](../../img/argaze_load_gui.png)
+
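+For example, loading one of the demonstration configurations shipped with the package:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/random_context.json
+```
+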
### Send command
Use -p option to enable pipe communication at given address:
@@ -34,36 +35,22 @@ For example:
echo "print(context)" > /tmp/argaze
```
-* Pause context processing:
+* Pause context:
```shell
echo "context.pause()" > /tmp/argaze
```
-* Resume context processing:
+* Resume context:
```shell
echo "context.resume()" > /tmp/argaze
```
-## Edit JSON configuration
+## Edit
Modify the content of a JSON CONFIGURATION file with another JSON CHANGES file, then save the result into an OUTPUT file
```shell
python -m argaze edit CONFIGURATION CHANGES OUTPUT
```
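+
+For instance, assuming that entries in CHANGES replace the matching entries of CONFIGURATION, a minimal CHANGES file that only overrides the *name* entry of the Tobii segment playback demonstration context could be sketched as follows (the CHANGES and OUTPUT file names below are illustrative):
+
+```json
+{
+    "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback": {
+        "name": "My renamed segment playback"
+    }
+}
+```
+
+```shell
+python -m argaze edit ./src/argaze/utils/demo/tobii_segment_playback_context.json changes.json output.json
+```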
-
-## Estimate ArUco markers pose
-
-This application detects ArUco markers inside a movie frame then, export pose estimation as .obj file into a folder.
-
-Firstly, edit **utils/estimate_markers_pose/context.json** file as to select a movie *path*.
-
-Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
-
-Then, launch the application.
-
-```shell
-python -m argaze load ./src/argaze/utils/estimate_markers_pose/context.json
-``` \ No newline at end of file