Diffstat (limited to 'docs/user_guide/eye_tracking_context')
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md  195
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/scripting.md  106
-rw-r--r--  docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md  66
-rw-r--r--  docs/user_guide/eye_tracking_context/configuration_and_execution.md  65
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/opencv.md  63
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md  32
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/random.md  32
-rw-r--r--  docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md  59
-rw-r--r--  docs/user_guide/eye_tracking_context/introduction.md  19
9 files changed, 637 insertions, 0 deletions
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
new file mode 100644
index 0000000..a543bc7
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
@@ -0,0 +1,195 @@
+Define a context class
+======================
+
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic base class interface to handle incoming eye tracker data before passing it to a processing pipeline, following the [Python context manager protocol](https://docs.python.org/3/reference/datamodel.html#context-managers).
+
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides control features to stop or pause working threads, and performance assessment features to measure how often processing steps are called and how much time they consume.
+
+There is also a [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and defines an abstract *calibrate* method to implement a device-specific calibration process.
+
+In the same way, there is a [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and defines *duration* and *progression* properties that report the record length and the playback advancement.
+
+Finally, a specific eye tracking context can be defined in a Python file by writing a class that inherits from either the [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) or [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class.
+
+## Write data capture context
+
+Here is a data capture context example that processes gaze positions and camera images in two separate threads:
+
+```python
+import threading
+
+from argaze import ArFeatures, DataFeatures
+
+class DataCaptureExample(ArFeatures.DataCaptureContext):
+
+ @DataFeatures.PipelineStepInit
+ def __init__(self, **kwargs):
+
+ # Init DataCaptureContext class
+ super().__init__()
+
+ # Init private attribute
+ self.__parameter = ...
+
+ @property
+ def parameter(self):
+ """Any context specific parameter."""
+ return self.__parameter
+
+ @parameter.setter
+ def parameter(self, parameter):
+ self.__parameter = parameter
+
+ @DataFeatures.PipelineStepEnter
+ def __enter__(self):
+ """Start context."""
+
+        # Start context according to any specific parameter
+        ...  # e.g. use self.parameter
+
+ # Start a gaze position capture thread
+ self.__gaze_thread = threading.Thread(target = self.__gaze_position_capture)
+ self.__gaze_thread.start()
+
+ # Start a camera image capture thread if applicable
+ self.__camera_thread = threading.Thread(target = self.__camera_image_capture)
+ self.__camera_thread.start()
+
+ return self
+
+ def __gaze_position_capture(self):
+ """Capture gaze position."""
+
+ # Capture loop
+ while self.is_running():
+
+ # Pause capture
+ if not self.is_paused():
+
+ # Assuming that timestamp, x and y values are available
+ ...
+
+ # Process timestamped gaze position
+ self._process_gaze_position(timestamp = timestamp, x = x, y = y)
+
+            # Wait some time if needed
+ ...
+
+ def __camera_image_capture(self):
+ """Capture camera image if applicable."""
+
+ # Capture loop
+ while self.is_running():
+
+ # Pause capture
+ if not self.is_paused():
+
+ # Assuming that timestamp, camera_image are available
+ ...
+
+ # Process timestamped camera image
+ self._process_camera_image(timestamp = timestamp, image = camera_image)
+
+            # Wait some time if needed
+ ...
+
+ @DataFeatures.PipelineStepExit
+ def __exit__(self, exception_type, exception_value, exception_traceback):
+ """End context."""
+
+ # Stop capture loops
+ self.stop()
+
+ # Stop capture threads
+        self.__gaze_thread.join()
+        self.__camera_thread.join()
+
+ def calibrate(self):
+ """Handle device calibration process."""
+
+ ...
+```
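+
+Assuming the class above is saved in a *my_package* Python module (a hypothetical name), it can be loaded with the [*from_dict*](../../../argaze.md/#argaze.DataFeatures.from_dict) function as shown in the [scripting](scripting.md) section, or referenced from a JSON configuration by its full class path, like the ready-made contexts:
+
+```json
+{
+    "my_package.DataCaptureExample": {
+        "name": "My data capture",
+        "parameter": ...,
+        "pipeline": ...
+    }
+}
+```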
+
+## Write data playback context
+
+Here is a data playback context example that reads gaze positions and camera images in the same thread:
+
+```python
+import threading
+
+from argaze import ArFeatures, DataFeatures
+
+class DataPlaybackExample(ArFeatures.DataPlaybackContext):
+
+ @DataFeatures.PipelineStepInit
+ def __init__(self, **kwargs):
+
+        # Init DataPlaybackContext class
+ super().__init__()
+
+ # Init private attribute
+ self.__parameter = ...
+
+ @property
+ def parameter(self):
+ """Any context specific parameter."""
+ return self.__parameter
+
+ @parameter.setter
+ def parameter(self, parameter):
+ self.__parameter = parameter
+
+ @DataFeatures.PipelineStepEnter
+ def __enter__(self):
+ """Start context."""
+
+        # Start context according to any specific parameter
+        ...  # e.g. use self.parameter
+
+ # Start a data playback thread
+ self.__data_thread = threading.Thread(target = self.__data_playback)
+ self.__data_thread.start()
+
+ return self
+
+ def __data_playback(self):
+ """Playback gaze position and camera image if applicable."""
+
+ # Playback loop
+ while self.is_running():
+
+ # Pause playback
+ if not self.is_paused():
+
+ # Assuming that timestamp, camera_image are available
+ ...
+
+ # Process timestamped camera image
+ self._process_camera_image(timestamp = timestamp, image = camera_image)
+
+ # Assuming that timestamp, x and y values are available
+ ...
+
+ # Process timestamped gaze position
+ self._process_gaze_position(timestamp = timestamp, x = x, y = y)
+
+            # Wait some time if needed
+ ...
+
+ @DataFeatures.PipelineStepExit
+ def __exit__(self, exception_type, exception_value, exception_traceback):
+ """End context."""
+
+ # Stop playback loop
+ self.stop()
+
+        # Stop playback thread
+        self.__data_thread.join()
+
+ @property
+ def duration(self) -> int|float:
+ """Get data duration."""
+ ...
+
+ @property
+ def progression(self) -> float:
+ """Get data playback progression between 0 and 1."""
+ ...
+```
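+
+As a sketch of how these properties can be used, a script may poll them while the playback context is running:
+
+```python
+# Assuming the context above is loaded and is running
+...
+
+    # Report playback advancement
+    print(f'progression: {context.progression * 100:.1f} % of {context.duration}')
+```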
+
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
new file mode 100644
index 0000000..d8eb389
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
@@ -0,0 +1,106 @@
+Script the context
+==================
+
+Context objects are accessible from a Python script.
+
+## Load configuration from JSON file
+
+A context configuration can be loaded from a JSON file using the [*load*](../../../argaze.md/#argaze.load) function.
+
+```python
+from argaze import load
+
+# Load a context
+with load(configuration_filepath) as context:
+
+ while context.is_running():
+
+ # Do something with context
+ ...
+
+        # Wait some time if needed
+ ...
+```
+
+!!! note
+    The **with** statement enters the context by calling its **enter** method, then ensures that its **exit** method is always called at the end.
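+
+A sketch of suspending and resuming the working threads from the script; the *pause* and *resume* method names are an assumption mirroring the *is_paused* method, to be checked against the [ArContext code reference](../../../argaze.md/#argaze.ArFeatures.ArContext):
+
+```python
+# Assuming the context is loaded and is running
+...
+
+    # Suspend working threads (assumed method name)
+    context.pause()
+
+    ...
+
+    # Resume working threads (assumed method name)
+    context.resume()
+```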
+
+## Load configuration from dictionary
+
+A context configuration can be loaded from a Python dictionary using the [*from_dict*](../../../argaze.md/#argaze.DataFeatures.from_dict) function.
+
+```python
+from argaze import DataFeatures
+
+import my_package
+
+# Set working directory to enable relative file path loading
+DataFeatures.set_working_directory('path/to/folder')
+
+# Edit a dict with context configuration
+configuration = {
+ "name": "My context",
+ "parameter": ...,
+ "pipeline": ...
+}
+
+# Load a context from a package
+with DataFeatures.from_dict(my_package.MyContext, configuration) as context:
+
+ while context.is_running():
+
+ # Do something with context
+ ...
+
+        # Wait some time if needed
+ ...
+```
+
+## Manage context
+
+Check the context or the pipeline type to adapt the available features.
+
+```python
+from argaze import ArFeatures
+
+# Assuming the context is loaded and is running
+...
+
+ # Check context type
+
+ # Data capture case: calibration method is available
+ if issubclass(type(context), ArFeatures.DataCaptureContext):
+ ...
+
+ # Data playback case: playback methods are available
+ if issubclass(type(context), ArFeatures.DataPlaybackContext):
+ ...
+
+ # Check pipeline type
+
+    # Screen-based case: only gaze positions are processed
+ if issubclass(type(context.pipeline), ArFeatures.ArFrame):
+ ...
+
+    # Head-mounted case: camera images are also processed
+ if issubclass(type(context.pipeline), ArFeatures.ArCamera):
+ ...
+```
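+
+For instance, a sketch of launching the device calibration process exposed by a data capture context (when to call it depends on the device):
+
+```python
+from argaze import ArFeatures
+
+# Assuming the context is loaded and is running
+...
+
+    # Data capture case: launch the device-specific calibration process
+    if issubclass(type(context), ArFeatures.DataCaptureContext):
+
+        context.calibrate()
+```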
+
+## Display context
+
+The context image can be displayed at low priority so as not to block pipeline processing.
+
+```python
+from argaze import DataFeatures
+
+# Assuming the context is loaded and is running
+...
+
+ # Display context if the pipeline is available
+ try:
+
+ ... = context.image(wait = False)
+
+ except DataFeatures.SharedObjectBusy:
+
+ pass
+```
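+
+A sketch of displaying the returned image with OpenCV, assuming the image is an OpenCV-compatible array and that the *name* entry of the configuration is exposed as a *name* attribute:
+
+```python
+import cv2
+
+from argaze import DataFeatures
+
+# Assuming the context is loaded and is running
+...
+
+    try:
+
+        # Show the context image in a window named after the context
+        cv2.imshow(context.name, context.image(wait = False))
+        cv2.waitKey(1)
+
+    except DataFeatures.SharedObjectBusy:
+
+        pass
+```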
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
new file mode 100644
index 0000000..959d955
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
@@ -0,0 +1,66 @@
+Edit timestamped gaze positions
+===============================
+
+Whether eye data comes from a file on disk or from a live stream, timestamped gaze positions are required before going further.
+
+![Timestamped gaze positions](../../../img/timestamped_gaze_positions.png)
+
+## Import timestamped gaze positions from CSV file
+
+It is possible to load timestamped gaze positions from a [Pandas DataFrame](https://pandas.pydata.org/docs/getting_started/intro_tutorials/01_table_oriented.html#min-tut-01-tableoriented) object, which can itself be loaded from a CSV file.
+
+```python
+from argaze import GazeFeatures
+import pandas
+
+# Load gaze positions from a CSV file into a Pandas DataFrame
+dataframe = pandas.read_csv('gaze_positions.csv', delimiter=",", low_memory=False)
+
+# Convert the Pandas DataFrame into timestamped gaze positions, specifying which column labels to use
+ts_gaze_positions = GazeFeatures.TimeStampedGazePositions.from_dataframe(dataframe, timestamp = 'Recording timestamp [ms]', x = 'Gaze point X [px]', y = 'Gaze point Y [px]')
+
+# Iterate over timestamped gaze positions
+for timestamped_gaze_position in ts_gaze_positions:
+
+ # Do something with each timestamped gaze position
+ ...
+```
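+
+For instance, a sketch that prints each gaze position along with its timestamp, assuming the *timestamp* attribute mirrors the constructor argument shown in the next section:
+
+```python
+for timestamped_gaze_position in ts_gaze_positions:
+
+    print(timestamped_gaze_position.timestamp, timestamped_gaze_position)
+```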
+
+## Edit timestamped gaze positions from live stream
+
+Real-time gaze positions can be created directly with the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
+Timestamps can be taken from the incoming data stream or, if not available, generated with the Python [time package](https://docs.python.org/3/library/time.html).
+
+```python
+from argaze import GazeFeatures
+
+# Assuming this runs inside the function where timestamp_µs, gaze_x and gaze_y values are caught
+...
+
+    # Define a timestamped gaze position, converting the microsecond timestamp into seconds
+ timestamped_gaze_position = GazeFeatures.GazePosition((gaze_x, gaze_y), timestamp=timestamp_µs * 1e-6)
+
+ # Do something with each timestamped gaze position
+ ...
+```
+
+```python
+from argaze import GazeFeatures
+
+import time
+
+# Initialize timestamp
+start_time = time.time()
+
+# Assuming this runs inside the function where only gaze_x and gaze_y values are caught (no timestamp)
+...
+
+ # Define a timestamped gaze position with millisecond timestamp
+ timestamped_gaze_position = GazeFeatures.GazePosition((gaze_x, gaze_y), timestamp=int((time.time() - start_time) * 1e3))
+
+ # Do something with each timestamped gaze position
+ ...
+```
+
+!!! warning "Free time unit"
+    Timestamps can be integers or floats, in seconds, milliseconds or whatever unit you need. The only constraint is that all time values used in further configurations have to be expressed in the same unit.
diff --git a/docs/user_guide/eye_tracking_context/configuration_and_execution.md b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
new file mode 100644
index 0000000..e1123fb
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
@@ -0,0 +1,65 @@
+Edit and execute context
+========================
+
+The [utils.contexts module](../../argaze.md/#argaze.utils.contexts) provides ready-made contexts like:
+
+* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) data capture and data playback contexts,
+* [Pupil Labs](context_modules/pupil_labs.md) data capture context,
+* [OpenCV](context_modules/opencv.md) window cursor position capture and movie playback,
+* [Random](context_modules/random.md) gaze position generator.
+
+## Edit JSON configuration
+
+Here is a JSON configuration that loads a [Random.GazePositionGenerator](../../argaze.md/#argaze.utils.contexts.Random.GazePositionGenerator) context:
+
+```json
+{
+ "argaze.utils.contexts.Random.GazePositionGenerator": {
+ "name": "Random gaze position generator",
+ "range": [1280, 720],
+ "pipeline": {
+ "argaze.ArFeatures.ArFrame": {
+ "size": [1280, 720]
+ }
+ }
+ }
+}
+```
+
+Let's understand the meaning of each JSON entry.
+
+### argaze.utils.contexts.Random.GazePositionGenerator
+
+The class name of the object being loaded from the [utils.contexts module](../../argaze.md/#argaze.utils.contexts).
+
+### *name*
+
+The name of the [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext). Mostly useful for visualization purposes.
+
+### *range*
+
+The range within which gaze positions are generated. This property is specific to the [Random.GazePositionGenerator](../../argaze.md/#argaze.utils.contexts.Random.GazePositionGenerator) class.
+
+### *pipeline*
+
+A minimal gaze processing pipeline that only draws the last gaze position.
+
+## Context execution
+
+A context can be loaded from a JSON configuration file using the [*load* command](../utils/main_commands.md).
+
+```shell
+python -m argaze load CONFIGURATION
+```
+
+This command should open a GUI window with a random yellow dot inside.
+
+![ArGaze load GUI](../../img/argaze_load_gui_random.png)
+
+!!! note ""
+
+ At this point, it is possible to load any ready-made context from [utils.contexts](../../argaze.md/#argaze.utils.contexts) module.
+
+    However, the incoming gaze positions are not processed, and gaze mapping is not available for head-mounted eye tracker contexts.
+
+    Read the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md) to learn how to process gaze positions, then the [ArUco markers pipeline section](../aruco_marker_pipeline/introduction.md) to learn how to enable gaze mapping with an ArUco markers setup.
diff --git a/docs/user_guide/eye_tracking_context/context_modules/opencv.md b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
new file mode 100644
index 0000000..7d73a03
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
@@ -0,0 +1,63 @@
+OpenCV
+======
+
+ArGaze provides ready-made contexts to capture the cursor position over an OpenCV window and to process movie or camera images.
+
+To select a desired context, edit the corresponding JSON sample below and save it inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Cursor
+
+::: argaze.utils.contexts.OpenCV.Cursor
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Cursor": {
+ "name": "Open CV cursor",
+ "pipeline": ...
+ }
+}
+```
+
+## Movie
+
+::: argaze.utils.contexts.OpenCV.Movie
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Movie": {
+ "name": "Open CV movie",
+ "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
+ "pipeline": ...
+ }
+}
+```
+
+## Camera
+
+::: argaze.utils.contexts.OpenCV.Camera
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Camera": {
+ "name": "Open CV camera",
+ "identifier": 0,
+ "pipeline": ...
+ }
+}
+```
\ No newline at end of file
diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md
new file mode 100644
index 0000000..d2ec336
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md
@@ -0,0 +1,32 @@
+Pupil Labs
+==========
+
+ArGaze provides a ready-made context to work with Pupil Labs devices.
+
+To select a desired context, edit the corresponding JSON sample below and save it inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.PupilLabs.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.PupilLabs.LiveStream": {
+ "name": "Pupil Labs live stream",
+ "project": "my_experiment",
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/context_modules/random.md b/docs/user_guide/eye_tracking_context/context_modules/random.md
new file mode 100644
index 0000000..89d7501
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/random.md
@@ -0,0 +1,32 @@
+Random
+======
+
+ArGaze provides a ready-made context to generate random gaze positions.
+
+To select a desired context, edit the corresponding JSON sample below and save it inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Gaze Position Generator
+
+::: argaze.utils.contexts.Random.GazePositionGenerator
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.Random.GazePositionGenerator": {
+ "name": "Random gaze position generator",
+ "range": [1280, 720],
+ "pipeline": ...
+ }
+}
+```
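+
+Assuming the sample above is completed with a *pipeline* entry and saved as *random_context.json* (a hypothetical file name), it can be executed with the *load* command described in the [configuration and execution](../configuration_and_execution.md) section:
+
+```shell
+python -m argaze load random_context.json
+```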
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
new file mode 100644
index 0000000..6ff44bd
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
@@ -0,0 +1,59 @@
+Tobii Pro Glasses 2
+===================
+
+ArGaze provides a ready-made context to work with Tobii Pro Glasses 2 devices.
+
+To select a desired context, edit the corresponding JSON sample below and save it inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+ JSON sample
+ "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.TobiiProGlasses2.LiveStream
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses2.LiveStream": {
+ "name": "Tobii Pro Glasses 2 live stream",
+ "address": "10.34.0.17",
+ "project": "my_experiment",
+ "participant": "subject-A",
+ "configuration": {
+ "sys_ec_preset": "Indoor",
+ "sys_sc_width": 1920,
+ "sys_sc_height": 1080,
+ "sys_sc_fps": 25,
+ "sys_sc_preset": "Auto",
+ "sys_et_freq": 50,
+ "sys_mems_freq": 100
+ },
+ "pipeline": ...
+ }
+}
+```
+
+## Segment Playback
+
+::: argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback
+
+### JSON sample
+
+```json
+{
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
+ "segment": "./src/argaze/utils/demo/tobii_record/segments/1",
+ "pipeline": ...
+ }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/introduction.md b/docs/user_guide/eye_tracking_context/introduction.md
new file mode 100644
index 0000000..a6208b2
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/introduction.md
@@ -0,0 +1,19 @@
+Overview
+========
+
+This section explains how to handle eye tracker data from various sources, such as live streams or recorded files, before passing it to a processing pipeline. These various usages are covered by the notion of **eye tracking context**.
+
+To use a ready-made eye tracking context, you only need to know:
+
+* [How to edit and execute a context](configuration_and_execution.md)
+
+More advanced features are also explained like:
+
+* [How to script context](./advanced_topics/scripting.md),
+* [How to define a context](./advanced_topics/context_definition.md),
+* [How to edit timestamped gaze positions](advanced_topics/timestamped_gaze_positions_edition.md).
+
+To dig deeper into how contexts work, the schema below mentions the *enter* and *exit* methods, which are related to the notion of a [Python context manager](https://docs.python.org/3/reference/datamodel.html#context-managers).
+
+![ArContext class](../../img/eye_tracker_context.png)
+