path: root/docs/user_guide/gaze_analysis_pipeline
Diffstat (limited to 'docs/user_guide/gaze_analysis_pipeline')
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md            54
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md                          2
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md          58
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/introduction.md                          9
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md   75
-rw-r--r--  docs/user_guide/gaze_analysis_pipeline/visualization.md                         33
6 files changed, 86 insertions, 145 deletions
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
index 026cb3f..f3ec6cd 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
@@ -66,7 +66,28 @@ from argaze import ArFeatures
...
```
-## Pipeline execution updates
+## Pipeline execution
+
+Timestamped [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) must be passed one by one to the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole instantiated pipeline.
+
+!!! warning "Mandatory"
+
+ The [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
+
+```python
+# Assuming that timestamped gaze positions are available
+...
+
+    try:
+
+        # Look ArFrame at a timestamped gaze position
+        ar_frame.look(timestamped_gaze_position)
+
+    except Exception as e:
+
+        # Do something with pipeline exception
+        ...
+```
Calling the [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method updates many data attributes within the pipeline.
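
For instance, assuming *ts_gaze_positions* holds timestamped gaze positions loaded as in the previous chapters, here is a minimal sketch of how they might be passed to the pipeline one by one:

```python
# Assuming ArFrame is loaded and ts_gaze_positions is available
...

for timestamped_gaze_position in ts_gaze_positions:

    try:

        # Look ArFrame at each timestamped gaze position
        ar_frame.look(timestamped_gaze_position)

    except Exception as e:

        # Do something with pipeline exception
        print(e)
```
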
@@ -186,3 +207,34 @@ ar_frame_image = ar_frame.image(**image_parameters)
# Do something with ArFrame image
...
```
+
+Then, the [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method can be called in various situations.
+
+### Live window display
+
+While timestamped gaze positions are processed by the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method, it is possible to display the [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) image thanks to the [OpenCV package](https://pypi.org/project/opencv-python/).
+
+```python
+import cv2
+
+def main():
+
+    # Assuming ArFrame is loaded
+    ...
+
+    # Create a window to display ArFrame
+    cv2.namedWindow(ar_frame.name, cv2.WINDOW_AUTOSIZE)
+
+    # Assuming that timestamped gaze positions are being processed by ArFrame.look method
+    ...
+
+    # Update ArFrame image display
+    cv2.imshow(ar_frame.name, ar_frame.image())
+
+    # Wait 10 ms
+    cv2.waitKey(10)
+
+if __name__ == '__main__':
+
+    main()
+```
\ No newline at end of file
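+
+As a variant, here is a minimal sketch combining processing and display in a single loop, assuming *ts_gaze_positions* holds timestamped gaze positions as in the previous chapters:
+
+```python
+import cv2
+
+# Assuming ArFrame is loaded and ts_gaze_positions is available
+...
+
+# Create a window to display ArFrame
+cv2.namedWindow(ar_frame.name, cv2.WINDOW_AUTOSIZE)
+
+for timestamped_gaze_position in ts_gaze_positions:
+
+    try:
+
+        # Look ArFrame at each timestamped gaze position
+        ar_frame.look(timestamped_gaze_position)
+
+    except Exception as e:
+
+        # Do something with pipeline exception
+        print(e)
+
+    # Update ArFrame image display and wait 10 ms
+    cv2.imshow(ar_frame.name, ar_frame.image())
+    cv2.waitKey(10)
+
+# Close the display window once the stream ends
+cv2.destroyAllWindows()
+```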
diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
index be27c69..2b64091 100644
--- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
@@ -5,7 +5,7 @@ Once [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) is [configured](confi
![Layer](../../img/ar_layer.png)
-## Add ArLayer to ArFrame JSON configuration file
+## Add ArLayer to ArFrame JSON configuration
The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class defines a space in which fixations are matched with AOI and inside which those matches are analyzed.
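
For illustration, here is a minimal sketch of how a layer might be declared inside the ArFrame configuration; the layer name and the AOI matcher module path below are assumptions for this sketch, not values taken from this page:

```json
{
    "argaze.ArFeatures.ArFrame": {
        "layers": {
            "demo_layer": {
                "aoi_matcher": {
                    "argaze.GazeAnalysis.FocusPointInside.AOIMatcher": {}
                }
            }
        }
    }
}
```
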
diff --git a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
index 57a9d71..58919e5 100644
--- a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
@@ -1,15 +1,15 @@
-Load and execute pipeline
+Edit and execute pipeline
=========================
The [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) class defines a rectangular area into which timestamped gaze positions are projected and inside which they are analyzed.
-![Frame](../../img/ar_frame.png)
+Once defined, a gaze analysis pipeline needs to be embedded inside a context that provides it with gaze positions to process.
-## Load JSON configuration file
+![Frame](../../img/ar_frame.png)
-An [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline can be loaded from a JSON configuration file thanks to the [argaze.load](../../argaze.md/#argaze.load) package method.
+## Edit JSON configuration
-Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) configuration file example:
+Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) configuration example:
```json
{
@@ -35,19 +35,7 @@ Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) conf
}
```
-Then, here is how to load the JSON file:
-
-```python
-import argaze
-
-# Load ArFrame
-with argaze.load('./configuration.json') as ar_frame:
-
- # Do something with ArFrame
- ...
-```
-
-Now, let's understand the meaning of each JSON entry.
+Let's understand the meaning of each JSON entry.
### argaze.ArFeatures.ArFrame
@@ -103,28 +91,32 @@ In the example file, the chosen analysis algorithms are the [Basic](../../argaze
## Pipeline execution
-Timestamped [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) have to be passed one by one to the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole instantiated pipeline.
+A pipeline needs to be embedded into a context to be executed.
-!!! warning "Mandatory"
+Copy the gaze analysis pipeline configuration defined above into the following context configuration, in place of the *JSON CONFIGURATION* placeholder.
- The [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
+```json
+{
+ "argaze.utils.contexts.Random.GazePositionGenerator": {
+ "name": "Random gaze position generator",
+ "range": [1920, 1080],
+ "pipeline": JSON CONFIGURATION
+ }
+}
+```
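+
+For illustration, a merged configuration might look as follows; the nested ArFrame entries are assumptions standing in for the pipeline example defined above:
+
+```json
+{
+    "argaze.utils.contexts.Random.GazePositionGenerator": {
+        "name": "Random gaze position generator",
+        "range": [1920, 1080],
+        "pipeline": {
+            "argaze.ArFeatures.ArFrame": {
+                "name": "My FullHD screen",
+                "size": [1920, 1080]
+            }
+        }
+    }
+}
+```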
-```python
-# Assuming that timestamped gaze positions are available
-...
+Then, use the [*load* command](../utils/main_commands.md) to execute the context.
- try:
+```shell
+python -m argaze load CONFIGURATION
+```
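+
+For example, assuming the context configuration above was saved as *context.json* (a hypothetical file name):
+
+```shell
+python -m argaze load ./context.json
+```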
- # Look ArFrame at a timestamped gaze position
- ar_frame.look(timestamped_gaze_position)
+This command should open a GUI window showing a randomly moving yellow dot and circles around identified fixations.
+
+![ArGaze load GUI](../../img/argaze_load_gui_random_pipeline.png)
- # Do something with pipeline exception
- except Exception as e:
-
- ...
-```
!!! note ""
-    At this point, the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method only processes gaze movement identification and scan path analysis without any AOI neither any recording or visualization supports.
+    At this point, the pipeline only processes gaze movement identification and scan path analysis, without any AOI analysis and without any recording or visualization support.
Read the next chapters to learn how to [describe AOI](aoi_2d_description.md), [add AOI analysis](aoi_analysis.md), [record gaze analysis](recording.md) and [visualize pipeline steps](visualization.md).
\ No newline at end of file
diff --git a/docs/user_guide/gaze_analysis_pipeline/introduction.md b/docs/user_guide/gaze_analysis_pipeline/introduction.md
index c12f669..29eeed5 100644
--- a/docs/user_guide/gaze_analysis_pipeline/introduction.md
+++ b/docs/user_guide/gaze_analysis_pipeline/introduction.md
@@ -1,7 +1,10 @@
Overview
========
-This section explains how to create gaze analysis pipelines for various use cases.
+This section explains how to process incoming gaze positions through a **gaze analysis pipeline**.
+
+!!! warning "Read the eye tracking context section first"
+ This section assumes that the incoming gaze positions are provided by an [eye tracking context](../eye_tracking_context/introduction.md).
First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
@@ -9,8 +12,7 @@ First, let's look at the schema below: it gives an overview of the main notions
To build your own gaze analysis pipeline, you need to know:
-* [How to edit timestamped gaze positions](timestamped_gaze_positions_edition.md),
-* [How to load and execute gaze analysis pipeline](configuration_and_execution.md),
+* [How to edit and execute a pipeline](configuration_and_execution.md),
* [How to describe AOI](aoi_2d_description.md),
* [How to enable AOI analysis](aoi_analysis.md),
* [How to visualize pipeline steps outputs](visualization.md),
@@ -20,6 +22,7 @@ To build your own gaze analysis pipeline, you need to know:
More advanced features are also explained, such as:
+* [How to edit timestamped gaze positions](advanced_topics/timestamped_gaze_positions_edition.md),
* [How to script gaze analysis pipeline](advanced_topics/scripting.md),
* [How to load a module from another package](advanced_topics/module_loading.md),
* [How to calibrate gaze position](advanced_topics/gaze_position_calibration.md).
diff --git a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md b/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
deleted file mode 100644
index 026d287..0000000
--- a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
+++ /dev/null
@@ -1,75 +0,0 @@
-Edit timestamped gaze positions
-===============================
-
-Whatever eye data comes from a file on disk or from a live stream, timestamped gaze positions are required before going further.
-
-![Timestamped gaze positions](../../img/timestamped_gaze_positions.png)
-
-## Import timestamped gaze positions from CSV file
-
-It is possible to load timestamped gaze positions from a [Pandas DataFrame](https://pandas.pydata.org/docs/getting_started/intro_tutorials/01_table_oriented.html#min-tut-01-tableoriented) object which can be loaded from a CSV file.
-
-```python
-from argaze import GazeFeatures
-import pandas
-
-# Load gaze positions from a CSV file into Panda Dataframe
-dataframe = pandas.read_csv('gaze_positions.csv', delimiter=",", low_memory=False)
-
-# Convert Panda dataframe into timestamped gaze positions precising the use of each specific column labels
-ts_gaze_positions = GazeFeatures.TimeStampedGazePositions.from_dataframe(dataframe, timestamp = 'Recording timestamp [ms]', x = 'Gaze point X [px]', y = 'Gaze point Y [px]')
-
-# Iterate over timestamped gaze positions
-for timestamped_gaze_position in ts_gaze_positions:
-
- # Do something with each timestamped gaze position
- ...
-```
-
-## Edit timestamped gaze positions from live stream
-
-Real-time gaze positions can be edited thanks to the [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
-Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited thanks to the Python [time package](https://docs.python.org/3/library/time.html).
-
-```python
-from argaze import GazeFeatures
-
-# Assuming to be inside the function where timestamp_µs, gaze_x and gaze_y values are catched
-...
-
- # Define a timestamped gaze position converting microsecond timestamp into second timestamp
- timestamped_gaze_position = GazeFeatures.GazePosition((gaze_x, gaze_y), timestamp=timestamp_µs * 1e-6)
-
- # Do something with each timestamped gaze position
- ...
-```
-
-```python
-from argaze import GazeFeatures
-
-import time
-
-# Initialize timestamp
-start_time = time.time()
-
-# Assuming to be inside the function where only gaze_x and gaze_y values are catched (no timestamp)
-...
-
- # Define a timestamped gaze position with millisecond timestamp
- timestamped_gaze_position = GazeFeatures.GazePosition((gaze_x, gaze_y), timestamp=int((time.time() - start_time) * 1e3))
-
- # Do something with each timestamped gaze position
- ...
-```
-
-!!! warning "Free time unit"
- Timestamps can either be integers or floats, seconds, milliseconds or what ever you need. The only concern is that all time values used in further configurations have to be in the same unit.
-
-<!--
-!!! note "Eyetracker connectors"
-
- [Read the use cases section to discover examples using specific eyetrackers](./user_cases/introduction.md).
-!-->
-
-!!! note ""
-    Now we have timestamped gaze positions at expected format, read the next chapter to start learning [how to analyze them](./configuration_and_execution.md).
\ No newline at end of file
diff --git a/docs/user_guide/gaze_analysis_pipeline/visualization.md b/docs/user_guide/gaze_analysis_pipeline/visualization.md
index 6b9805c..32395c3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/visualization.md
+++ b/docs/user_guide/gaze_analysis_pipeline/visualization.md
@@ -5,7 +5,7 @@ Visualization is not a pipeline step, but each [ArFrame](../../argaze.md/#argaze
![ArFrame visualization](../../img/visualization.png)
-## Add image parameters to ArFrame JSON configuration file
+## Add image parameters to ArFrame JSON configuration
The [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a dedicated JSON entry.
@@ -82,37 +82,6 @@ Here is an extract from the JSON ArFrame configuration file with a sample where
Most *image_parameters* entries only work if the related ArFrame/ArLayer pipeline steps are enabled.
For example, a JSON *draw_scan_path* entry needs the GazeMovementIdentifier and ScanPath steps to be enabled.
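
As a sketch of that dependency, a *draw_scan_path* entry might sit alongside other drawing entries as below; apart from *draw_scan_path* itself, the keys and values shown are assumptions for illustration:

```json
{
    "argaze.ArFeatures.ArFrame": {
        "image_parameters": {
            "draw_gaze_positions": {
                "color": [0, 255, 255],
                "size": 2
            },
            "draw_scan_path": {
                "draw_fixations": {
                    "deviation_circle_color": [255, 255, 255]
                }
            }
        }
    }
}
```
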
-Then, [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method can be called in various situations.
-
-## Live window display
-
-While timestamped gaze positions are processed by [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method, it is possible to display the [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) image thanks to the [OpenCV package](https://pypi.org/project/opencv-python/).
-
-```python
-import cv2
-
-def main():
-
- # Assuming ArFrame is loaded
- ...
-
- # Create a window to display ArFrame
- cv2.namedWindow(ar_frame.name, cv2.WINDOW_AUTOSIZE)
-
- # Assuming that timestamped gaze positions are being processed by ArFrame.look method
- ...
-
- # Update ArFrame image display
- cv2.imshow(ar_frame.name, ar_frame.image())
-
- # Wait 10 ms
- cv2.waitKey(10)
-
-if __name__ == '__main__':
-
- main()
-```
-
!!! note "Export to video file"
    Video export is detailed in the [gaze analysis recording chapter](recording.md).
\ No newline at end of file