Diffstat (limited to 'docs')
26 files changed, 730 insertions, 226 deletions
diff --git a/docs/img/argaze_load_gui.png b/docs/img/argaze_load_gui.png Binary files differ new file mode 100644 index 0000000..b8874b2 --- /dev/null +++ b/docs/img/argaze_load_gui.png diff --git a/docs/img/argaze_load_gui_random.png b/docs/img/argaze_load_gui_random.png Binary files differ new file mode 100644 index 0000000..c95a9f5 --- /dev/null +++ b/docs/img/argaze_load_gui_random.png diff --git a/docs/img/argaze_load_gui_random_pipeline.png b/docs/img/argaze_load_gui_random_pipeline.png Binary files differ new file mode 100644 index 0000000..210d410 --- /dev/null +++ b/docs/img/argaze_load_gui_random_pipeline.png diff --git a/docs/img/eye_tracker_context.png b/docs/img/eye_tracker_context.png Binary files differ new file mode 100644 index 0000000..638e9a6 --- /dev/null +++ b/docs/img/eye_tracker_context.png diff --git a/docs/img/pipeline_input_context.png b/docs/img/pipeline_input_context.png Binary files differ deleted file mode 100644 index 8c195ea..0000000 --- a/docs/img/pipeline_input_context.png +++ /dev/null diff --git a/docs/index.md b/docs/index.md index 2d00d16..00b8ed7 100644 --- a/docs/index.md +++ b/docs/index.md @@ -7,20 +7,26 @@ title: What is ArGaze? **Useful links**: [Installation](installation.md) | [Source Repository](https://gitpub.recherche.enac.fr/argaze) | [Issue Tracker](https://git.recherche.enac.fr/projects/argaze/issues) | [Contact](mailto:argaze-contact@recherche.enac.fr) **ArGaze** is an open and flexible Python software library designed to provide a unified and modular approach to gaze analysis or gaze interaction. -**ArGaze** facilitates **real-time and/or post-processing analysis** for both **screen-based and head-mounted** eye tracking systems. + By offering a wide array of gaze metrics and supporting easy extension to incorporate additional metrics, **ArGaze** empowers researchers and practitioners to explore novel analytical approaches efficiently. ![ArGaze pipeline](img/argaze_pipeline.png) +## Eye tracking context + +**ArGaze** facilitates the integration of both **screen-based and head-mounted** eye tracking systems for **real-time and/or post-processing analysis**. + +[Learn how to handle various eye tracking contexts by reading the dedicated user guide section](./user_guide/eye_tracking_context/introduction.md). + ## Gaze analysis pipeline -**ArGaze** provides an extensible modules library, allowing to select application-specific algorithms at each pipeline step: +Once incoming eye tracking data is available, **ArGaze** provides an extensible module library, allowing the selection of application-specific algorithms at each pipeline step: * **Fixation/Saccade identification**: dispersion threshold identification, velocity threshold identification, etc. * **Area Of Interest (AOI) matching**: focus point inside, deviation circle coverage, etc. * **Scan path analysis**: transition matrix, entropy, explore/exploit ratio, etc. -Once the incoming data is formatted as required, all those gaze analysis features can be used with any screen-based eye tracker devices. +All those gaze analysis features can be used with any screen-based eye tracker device. [Learn how to build gaze analysis pipelines for various use cases by reading the dedicated user guide section](./user_guide/gaze_analysis_pipeline/introduction.md). 
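To make the two-stage design introduced above concrete, here is a minimal sketch of how an eye tracking context and its embedded gaze analysis pipeline are driven from Python, based on the `argaze.load` function documented later in this changeset; the `configuration.json` path is only an illustrative placeholder.

```python
from argaze import load

# Load an eye tracking context and its embedded pipeline from a JSON file;
# 'configuration.json' is only a placeholder path for this sketch.
with load('configuration.json') as context:

    # The context feeds incoming gaze positions to its pipeline while it runs.
    while context.is_running():

        # Do something with the context here (e.g. display its image).
        ...
```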
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md new file mode 100644 index 0000000..99b6c7a --- /dev/null +++ b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md @@ -0,0 +1,193 @@ +Define a context class +====================== + +The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic base class interface to handle incoming eye tracker data before passing them to a processing pipeline, according to the [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers). + +The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides playback features to stop or pause processing, and performance assessment features to measure how many times processing is called and the time spent by the process. + +Besides, there is also a [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to write a specific device calibration process. + +In the same way, there is a [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines abstract *previous* and *next* playback methods to move through a record's frames, and also defines *duration* and *progression* properties to get information about a record's length and processing advancement. + +Finally, a specific eye tracking context can be defined in a Python file by writing a class that inherits from either the [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) or [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class. + +## Write live processing context + +Here is a live processing context example that processes gaze positions and camera images in two separate threads: + +```python +import threading + +from argaze import ArFeatures, DataFeatures + +class LiveProcessingExample(ArFeatures.LiveProcessingContext): + + @DataFeatures.PipelineStepInit + def __init__(self, **kwargs): + + # Init LiveProcessingContext class + super().__init__() + + # Init private attribute + self.__parameter = ... + + @property + def parameter(self): + """Any context specific parameter.""" + return self.__parameter + + @parameter.setter + def parameter(self, parameter): + self.__parameter = parameter + + @DataFeatures.PipelineStepEnter + def __enter__(self): + """Start context.""" + + # Start context according to any specific parameter + ... self.parameter + + # Start a gaze position processing thread + self.__gaze_thread = threading.Thread(target = self.__gaze_position_processing) + self.__gaze_thread.start() + + # Start a camera image processing thread if applicable + self.__camera_thread = threading.Thread(target = self.__camera_image_processing) + self.__camera_thread.start() + + return self + + def __gaze_position_processing(self): + """Process gaze position.""" + + # Processing loop + while self.is_running(): + + # Pause processing + if not self.is_paused(): + + # Assuming that timestamp, x and y values are available + ... + + # Process timestamped gaze position + self._process_gaze_position(timestamp = timestamp, x = x, y = y) + + # Wait some time eventually + ... 
+ + def __camera_image_processing(self): + """Process camera image if applicable.""" + + # Processing loop + while self.is_running(): + + # Pause processing + if not self.is_paused(): + + # Assuming that timestamp and camera_image values are available + ... + + # Process timestamped camera image + self._process_camera_image(timestamp = timestamp, image = camera_image) + + # Wait some time eventually + ... + + @DataFeatures.PipelineStepExit + def __exit__(self, exception_type, exception_value, exception_traceback): + """End context.""" + + # Stop processing loops + self.stop() + + # Stop processing threads + self.__gaze_thread.join() + self.__camera_thread.join() + + def calibrate(self): + """Handle device calibration process.""" + + ... +``` + +## Write post processing context + +Here is a post processing context example that processes gaze positions and camera images in the same thread: + +```python +import threading + +from argaze import ArFeatures, DataFeatures + +class PostProcessingExample(ArFeatures.PostProcessingContext): + + @DataFeatures.PipelineStepInit + def __init__(self, **kwargs): + + # Init PostProcessingContext class + super().__init__() + + # Init private attribute + self.__parameter = ... + + @property + def parameter(self): + """Any context specific parameter.""" + return self.__parameter + + @parameter.setter + def parameter(self, parameter): + self.__parameter = parameter + + @DataFeatures.PipelineStepEnter + def __enter__(self): + """Start context.""" + + # Start context according to any specific parameter + ... self.parameter + + # Start a reading data thread + self.__read_thread = threading.Thread(target = self.__data_reading) + self.__read_thread.start() + + return self + + def __data_reading(self): + """Process gaze position and camera image if applicable.""" + + # Processing loop + while self.is_running(): + + # Pause processing + if not self.is_paused(): + + # Assuming that timestamp and camera_image values are available + ... + + # Process timestamped camera image + self._process_camera_image(timestamp = timestamp, image = camera_image) + + # Assuming that timestamp, x and y values are available + ... + + # Process timestamped gaze position + self._process_gaze_position(timestamp = timestamp, x = x, y = y) + + # Wait some time eventually + ... + + @DataFeatures.PipelineStepExit + def __exit__(self, exception_type, exception_value, exception_traceback): + """End context.""" + + # Stop processing loops + self.stop() + + # Stop processing threads + self.__read_thread.join() + + def previous(self): + """Go to previous camera image frame.""" + ... + + def next(self): + """Go to next camera image frame.""" + ... +``` + diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md new file mode 100644 index 0000000..8753eb6 --- /dev/null +++ b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md @@ -0,0 +1,106 @@ +Script the context +================== + +Context objects are accessible from a Python script. + +## Load configuration from JSON file + +A context configuration can be loaded from a JSON file using the [*load*](../../../argaze.md/#argaze.load) function. + +```python +from argaze import load + +# Load a context +with load(configuration_filepath) as context: + + while context.is_running(): + + # Do something with context + ... + + # Wait some time eventually + ... +``` + +!!! 
note + The **with** statement starts the context by calling its **enter** method, then ensures that its **exit** method is always called at the end. + +## Load configuration from dictionary + +A context configuration can be loaded from a Python dictionary using the [*from_dict*](../../../argaze.md/#argaze.DataFeatures.from_dict) function. + +```python +from argaze import DataFeatures + +import my_package + +# Set working directory to enable relative file path loading +DataFeatures.set_working_directory('path/to/folder') + +# Edit a dict with context configuration +configuration = { + "name": "My context", + "parameter": ..., + "pipeline": ... +} + +# Load a context from a package +with DataFeatures.from_dict(my_package.MyContext, configuration) as context: + + while context.is_running(): + + # Do something with context + ... + + # Wait some time eventually + ... +``` + +## Manage context + +Check the context or the pipeline type to adapt features. + +```python +from argaze import ArFeatures + +# Assuming the context is loaded and is running +... + + # Check context type + + # Live processing case: calibration method is available + if issubclass(type(context), ArFeatures.LiveProcessingContext): + ... + + # Post processing case: more playback methods are available + if issubclass(type(context), ArFeatures.PostProcessingContext): + ... + + # Check pipeline type + + # Screen-based case: only gaze positions are processed + if issubclass(type(context.pipeline), ArFeatures.ArFrame): + ... + + # Head-mounted case: camera images are also processed + if issubclass(type(context.pipeline), ArFeatures.ArCamera): + ... +``` + +## Display context + +The context image can be displayed at low priority so as not to block pipeline processing. + +```python +# Assuming the context is loaded and is running +... + + # Display context if the pipeline is available + try: + + ... = context.image(wait = False) + + except DataFeatures.SharedObjectBusy: + + pass +``` diff --git a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md index 026d287..340dbaf 100644 --- a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md +++ b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md @@ -3,7 +3,7 @@ Edit timestamped gaze positions Whatever eye data comes from a file on disk or from a live stream, timestamped gaze positions are required before going further. -![Timestamped gaze positions](../../img/timestamped_gaze_positions.png) +![Timestamped gaze positions](../../../img/timestamped_gaze_positions.png) ## Import timestamped gaze positions from CSV file @@ -28,7 +28,7 @@ for timestamped_gaze_position in ts_gaze_positions: ## Edit timestamped gaze positions from live stream -Real-time gaze positions can be edited thanks to the [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class. +Real-time gaze positions can be edited thanks to the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class. Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited thanks to the Python [time package](https://docs.python.org/3/library/time.html). ```python @@ -64,12 +64,3 @@ start_time = time.time() !!! warning "Free time unit" Timestamps can either be integers or floats, seconds, milliseconds or what ever you need. 
The only concern is that all time values used in further configurations have to be in the same unit. - -<!-- -!!! note "Eyetracker connectors" - - [Read the use cases section to discover examples using specific eyetrackers](./user_cases/introduction.md). -!--> - -!!! note "" - Now we have timestamped gaze positions at expected format, read the next chapter to start learning [how to analyze them](./configuration_and_execution.md).
\ No newline at end of file diff --git a/docs/user_guide/eye_tracking_context/configuration_and_execution.md b/docs/user_guide/eye_tracking_context/configuration_and_execution.md new file mode 100644 index 0000000..f13c6a2 --- /dev/null +++ b/docs/user_guide/eye_tracking_context/configuration_and_execution.md @@ -0,0 +1,65 @@ +Edit and execute context +======================== + +The [utils.contexts module](../../argaze.md/#argaze.utils.contexts) provides ready-made contexts like: + +* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) live stream and post processing contexts, +* [Pupil Labs](context_modules/pupil_labs.md) live stream context, +* [OpenCV](context_modules/opencv.md) window cursor position and movie processing, +* [Random](context_modules/random.md) gaze position generator. + +## Edit JSON configuration + +Here is a JSON configuration that loads a [Random.GazePositionGenerator](../../argaze.md/#argaze.utils.contexts.Random.GazePositionGenerator) context: + +```json +{ + "argaze.utils.contexts.Random.GazePositionGenerator": { + "name": "Random gaze position generator", + "range": [1280, 720], + "pipeline": { + "argaze.ArFeatures.ArFrame": { + "size": [1280, 720] + } + } + } +} +``` + +Let's understand the meaning of each JSON entry. + +### argaze.utils.contexts.Random.GazePositionGenerator + +The class name of the object being loaded from the [utils.contexts module](../../argaze.md/#argaze.utils.contexts). + +### *name* + +The name of the [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext). Basically useful for visualization purposes. + +### *range* + +The range of the gaze position being generated. This property is specific to the [Random.GazePositionGenerator](../../argaze.md/#argaze.utils.contexts.Random.GazePositionGenerator) class. + +### *pipeline* + +A minimal gaze processing pipeline that only draws last gaze position. + +## Context execution + +A context can be loaded from a JSON configuration file using the [*load* command](../utils/main_commands.md). + +```shell +python -m argaze load CONFIGURATION +``` + +This command should open a GUI window with a random yellow dot inside. + +![ArGaze load GUI](../../img/argaze_load_gui_random.png) + +!!! note "" + + At this point, it is possible to load any ready-made context from [utils.contexts](../../argaze.md/#argaze.utils.contexts) module. + + However, the incoming gaze positions are not processed and gaze mapping would not be available for head-mounted eye tracker context. + + Read the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md) to learn how to process gaze positions then, the [ArUco markers pipeline section](../aruco_marker_pipeline/introduction.md) to learn how to enable gaze mapping with an ArUco markers setup. diff --git a/docs/user_guide/eye_tracking_context/context_modules/opencv.md b/docs/user_guide/eye_tracking_context/context_modules/opencv.md new file mode 100644 index 0000000..7244cd4 --- /dev/null +++ b/docs/user_guide/eye_tracking_context/context_modules/opencv.md @@ -0,0 +1,47 @@ +OpenCV +====== + +ArGaze provides a ready-made contexts to process cursor position over Open CV window and process movie images. + +To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file. +Notice that the *pipeline* entry is mandatory. + +```json +{ + JSON sample + "pipeline": ... 
+} +``` + +Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext). + +## Cursor + +::: argaze.utils.contexts.OpenCV.Cursor + +### JSON sample + +```json +{ + "argaze.utils.contexts.OpenCV.Cursor": { + "name": "Open CV cursor", + "pipeline": ... + } +} +``` + +## Movie + +::: argaze.utils.contexts.OpenCV.Movie + +### JSON sample + +```json +{ + "argaze.utils.contexts.OpenCV.Movie": { + "name": "Open CV cursor", + "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4", + "pipeline": ... + } +} +``` diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md new file mode 100644 index 0000000..d2ec336 --- /dev/null +++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md @@ -0,0 +1,32 @@ +Pupil Labs +========== + +ArGaze provides a ready-made context to work with Pupil Labs devices. + +To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file. +Notice that the *pipeline* entry is mandatory. + +```json +{ + JSON sample + "pipeline": ... +} +``` + +Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext). + +## Live Stream + +::: argaze.utils.contexts.PupilLabs.LiveStream + +### JSON sample + +```json +{ + "argaze.utils.contexts.PupilLabs.LiveStream": { + "name": "Pupil Labs live stream", + "project": "my_experiment", + "pipeline": ... + } +} +``` diff --git a/docs/user_guide/eye_tracking_context/context_modules/random.md b/docs/user_guide/eye_tracking_context/context_modules/random.md new file mode 100644 index 0000000..89d7501 --- /dev/null +++ b/docs/user_guide/eye_tracking_context/context_modules/random.md @@ -0,0 +1,32 @@ +Random +====== + +ArGaze provides a ready-made context to generate random gaze positions. + +To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file. +Notice that the *pipeline* entry is mandatory. + +```json +{ + JSON sample + "pipeline": ... +} +``` + +Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext). + +## Gaze Position Generator + +::: argaze.utils.contexts.Random.GazePositionGenerator + +### JSON sample + +```json +{ + "argaze.utils.contexts.Random.GazePositionGenerator": { + "name": "Random gaze position generator", + "range": [1280, 720], + "pipeline": ... + } +} +``` diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md new file mode 100644 index 0000000..fba6931 --- /dev/null +++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md @@ -0,0 +1,59 @@ +Tobii Pro Glasses 2 +=================== + +ArGaze provides a ready-made context to work with Tobii Pro Glasses 2 devices. + +To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file. +Notice that the *pipeline* entry is mandatory. + +```json +{ + JSON sample + "pipeline": ... +} +``` + +Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext). 
+ + ## Live Stream + +::: argaze.utils.contexts.TobiiProGlasses2.LiveStream + +### JSON sample + +```json +{ + "argaze.utils.contexts.TobiiProGlasses2.LiveStream": { + "name": "Tobii Pro Glasses 2 live stream", + "address": "10.34.0.17", + "project": "my_experiment", + "participant": "subject-A", + "configuration": { + "sys_ec_preset": "Indoor", + "sys_sc_width": 1920, + "sys_sc_height": 1080, + "sys_sc_fps": 25, + "sys_sc_preset": "Auto", + "sys_et_freq": 50, + "sys_mems_freq": 100 + }, + "pipeline": ... + } +} +``` + +## Post Processing + +::: argaze.utils.contexts.TobiiProGlasses2.PostProcessing + +### JSON sample + +```json +{ + "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : { + "name": "Tobii Pro Glasses 2 post-processing", + "segment": "./src/argaze/utils/demo/tobii_record/segments/1", + "pipeline": ... + } +} +``` diff --git a/docs/user_guide/eye_tracking_context/introduction.md b/docs/user_guide/eye_tracking_context/introduction.md new file mode 100644 index 0000000..8fe6c81 --- /dev/null +++ b/docs/user_guide/eye_tracking_context/introduction.md @@ -0,0 +1,18 @@ +Overview +======== + +This section explains how to handle eye tracker data from various sources, such as live streams or archived files, before passing them to a processing pipeline. These various usages are covered by the notion of **eye tracking context**. + +To use a ready-made eye tracking context, you only need to know: + +* [How to edit and execute a context](configuration_and_execution.md) + +More advanced features are also explained like: + +* [How to script a context](./advanced_topics/scripting.md), +* [How to define a context](./advanced_topics/context_definition.md) + +To get a deeper understanding of how a context works, the schema below mentions the *enter* and *exit* methods, which are related to the notion of [Python context manager](https://docs.python.org/3/reference/datamodel.html#context-managers). + +![ArContext class](../../img/eye_tracker_context.png) + diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md index 026cb3f..f3ec6cd 100644 --- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md +++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md @@ -66,7 +66,28 @@ from argaze import ArFeatures ... ``` -## Pipeline execution updates +## Pipeline execution + +Timestamped [GazePositions](../../../argaze.md/#argaze.GazeFeatures.GazePosition) have to be passed one by one to the [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole instantiated pipeline. + +!!! warning "Mandatory" + + The [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions. + +```python +# Assuming that timestamped gaze positions are available +... + + try: + + # Look ArFrame at a timestamped gaze position + ar_frame.look(timestamped_gaze_position) + + # Do something with pipeline exception + except Exception as e: + + ... +``` Calling [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method leads to update many data into the pipeline. @@ -186,3 +207,34 @@ ar_frame_image = ar_frame.image(**image_parameters) # Do something with ArFrame image ... ``` + +Then, the [ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method can be called in various situations. 
+ + ### Live window display + +While timestamped gaze positions are processed by the [ArFrame.look](../../../argaze.md/#argaze.ArFeatures.ArFrame.look) method, it is possible to display the [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) image thanks to the [OpenCV package](https://pypi.org/project/opencv-python/). + +```python +import cv2 + +def main(): + + # Assuming ArFrame is loaded + ... + + # Create a window to display ArFrame + cv2.namedWindow(ar_frame.name, cv2.WINDOW_AUTOSIZE) + + # Assuming that timestamped gaze positions are being processed by ArFrame.look method + ... + + # Update ArFrame image display + cv2.imshow(ar_frame.name, ar_frame.image()) + + # Wait 10 ms + cv2.waitKey(10) + +if __name__ == '__main__': + + main() +```
\ No newline at end of file diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md index be27c69..2b64091 100644 --- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md +++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md @@ -5,7 +5,7 @@ Once [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) is [configured](confi ![Layer](../../img/ar_layer.png) -## Add ArLayer to ArFrame JSON configuration file +## Add ArLayer to ArFrame JSON configuration The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class defines a space where to match fixations with AOI and inside which those matches need to be analyzed. diff --git a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md index 57a9d71..58919e5 100644 --- a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md +++ b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md @@ -1,15 +1,15 @@ -Load and execute pipeline +Edit and execute pipeline ========================= The [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) class defines a rectangular area where timestamped gaze positions are projected in and inside which they need to be analyzed. -![Frame](../../img/ar_frame.png) +Once defined, a gaze analysis pipeline needs to be embedded inside a context that will provide it the gaze positions to process. -## Load JSON configuration file +![Frame](../../img/ar_frame.png) -An [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline can be loaded from a JSON configuration file thanks to the [argaze.load](../../argaze.md/#argaze.load) package method. +## Edit JSON configuration -Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) configuration file example: +Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) configuration example: ```json { @@ -35,19 +35,7 @@ Here is a simple JSON [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) conf } ``` -Then, here is how to load the JSON file: - -```python -import argaze - -# Load ArFrame -with argaze.load('./configuration.json') as ar_frame: - - # Do something with ArFrame - ... -``` - -Now, let's understand the meaning of each JSON entry. +Let's understand the meaning of each JSON entry. ### argaze.ArFeatures.ArFrame @@ -103,28 +91,32 @@ In the example file, the chosen analysis algorithms are the [Basic](../../argaze ## Pipeline execution -Timestamped [GazePositions](../../argaze.md/#argaze.GazeFeatures.GazePosition) have to be passed one by one to the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole instantiated pipeline. +A pipeline needs to be embedded into a context to be executed. -!!! warning "Mandatory" +Copy the gaze analysis pipeline configuration defined above inside the following context configuration. - The [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions. +```json +{ + "argaze.utils.contexts.Random.GazePositionGenerator": { + "name": "Random gaze position generator", + "range": [1920, 1080], + "pipeline": JSON CONFIGURATION + } +} +``` -```python -# Assuming that timestamped gaze positions are available -... +Then, use the [*load* command](../utils/main_commands.md) to execute the context. 
- try: +```shell +python -m argaze load CONFIGURATION +``` - # Look ArFrame at a timestamped gaze position - ar_frame.look(timestamped_gaze_position) +This command should open a GUI window with a random yellow dot and identified fixation circles. + +![ArGaze load GUI](../../img/argaze_load_gui_random_pipeline.png) - # Do something with pipeline exception - except Exception as e: - - ... -``` !!! note "" - At this point, the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method only processes gaze movement identification and scan path analysis without any AOI neither any recording or visualization supports. + At this point, the pipeline only processes gaze movement identification and scan path analysis without any AOI analysis, nor any recording or visualization support. Read the next chapters to learn how to [describe AOI](aoi_2d_description.md), [add AOI analysis](aoi_analysis.md), [record gaze analysis](recording.md) and [visualize pipeline steps](visualization.md).
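To make the embedding explicit, here is a minimal sketch of what the resulting file can look like once a pipeline configuration replaces the `JSON CONFIGURATION` placeholder; only the ArFrame *size* entry is shown here, and the other pipeline entries from this chapter (gaze movement identifier, scan path, analyzers) would sit at the same level inside the ArFrame object.

```json
{
    "argaze.utils.contexts.Random.GazePositionGenerator": {
        "name": "Random gaze position generator",
        "range": [1920, 1080],
        "pipeline": {
            "argaze.ArFeatures.ArFrame": {
                "size": [1920, 1080]
            }
        }
    }
}
```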
\ No newline at end of file diff --git a/docs/user_guide/gaze_analysis_pipeline/introduction.md b/docs/user_guide/gaze_analysis_pipeline/introduction.md index c12f669..29eeed5 100644 --- a/docs/user_guide/gaze_analysis_pipeline/introduction.md +++ b/docs/user_guide/gaze_analysis_pipeline/introduction.md @@ -1,7 +1,10 @@ Overview ======== -This section explains how to create gaze analysis pipelines for various use cases. +This section explains how to process incoming gaze positions through a **gaze analysis pipeline**. + +!!! warning "Read eye tracking context section before" + This section assumes that the incoming gaze positions are provided by an [eye tracking context](../eye_tracking_context/introduction.md). First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters. @@ -9,8 +12,7 @@ First, let's look at the schema below: it gives an overview of the main notions To build your own gaze analysis pipeline, you need to know: -* [How to edit timestamped gaze positions](timestamped_gaze_positions_edition.md), -* [How to load and execute gaze analysis pipeline](configuration_and_execution.md), +* [How to edit and execute a pipeline](configuration_and_execution.md), * [How to describe AOI](aoi_2d_description.md), * [How to enable AOI analysis](aoi_analysis.md), * [How to visualize pipeline steps outputs](visualization.md), @@ -20,6 +22,7 @@ To build your own gaze analysis pipeline, you need to know: More advanced features are also explained like: +* [How to edit timestamped gaze positions](advanced_topics/timestamped_gaze_positions_edition.md), * [How to script gaze analysis pipeline](advanced_topics/scripting.md), * [How to load module from another package](advanced_topics/module_loading.md). * [How to calibrate gaze position](advanced_topics/gaze_position_calibration.md). diff --git a/docs/user_guide/gaze_analysis_pipeline/visualization.md b/docs/user_guide/gaze_analysis_pipeline/visualization.md index 6b9805c..32395c3 100644 --- a/docs/user_guide/gaze_analysis_pipeline/visualization.md +++ b/docs/user_guide/gaze_analysis_pipeline/visualization.md @@ -5,7 +5,7 @@ Visualization is not a pipeline step, but each [ArFrame](../../argaze.md/#argaze ![ArFrame visualization](../../img/visualization.png) -## Add image parameters to ArFrame JSON configuration file +## Add image parameters to ArFrame JSON configuration [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a dedicated JSON entry. @@ -82,37 +82,6 @@ Here is an extract from the JSON ArFrame configuration file with a sample where Most of *image_parameters* entries work if related ArFrame/ArLayer pipeline steps are enabled. For example, a JSON *draw_scan_path* entry needs GazeMovementIdentifier and ScanPath steps to be enabled. -Then, [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method can be called in various situations. - -## Live window display - -While timestamped gaze positions are processed by [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method, it is possible to display the [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) image thanks to the [OpenCV package](https://pypi.org/project/opencv-python/). - -```python -import cv2 - -def main(): - - # Assuming ArFrame is loaded - ... - - # Create a window to display ArFrame - cv2.namedWindow(ar_frame.name, cv2.WINDOW_AUTOSIZE) - - # Assuming that timestamped gaze positions are being processed by ArFrame.look method - ... 
- - # Update ArFrame image display - cv2.imshow(ar_frame.name, ar_frame.image()) - - # Wait 10 ms - cv2.waitKey(10) - -if __name__ == '__main__': - - main() -``` - !!! note "Export to video file" Video exportation is detailed in [gaze analysis recording chapter](recording.md).
\ No newline at end of file diff --git a/docs/user_guide/pipeline_input_context/configuration_and_connection.md b/docs/user_guide/pipeline_input_context/configuration_and_connection.md deleted file mode 100644 index 4aac88a..0000000 --- a/docs/user_guide/pipeline_input_context/configuration_and_connection.md +++ /dev/null @@ -1,35 +0,0 @@ -Load and connect a context -========================== - -Once an [ArContext is defined](context_definition.md), it have to be connected to a pipeline. - -# Load JSON configuration file - -An [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) can be loaded from a JSON configuration file thanks to the [argaze.load](../../argaze.md/#argaze.load) package method. - -Here is a JSON configuration file related to the [previously defined Example context](context_definition.md): - -```json -{ - "my_context.Example": { - "name": "My example context", - "parameter": ..., - "pipeline": "pipeline.json" - } -} -``` - -Then, here is how to load the JSON file: - -```python -import argaze - -# Load ArContext -with argaze.load('./configuration.json') as ar_context: - - # Do something with ArContext - ... -``` - -!!! note - There is nothing to do to execute a loaded context as it is handled inside its own **__enter__** method. diff --git a/docs/user_guide/pipeline_input_context/context_definition.md b/docs/user_guide/pipeline_input_context/context_definition.md deleted file mode 100644 index 7d30438..0000000 --- a/docs/user_guide/pipeline_input_context/context_definition.md +++ /dev/null @@ -1,57 +0,0 @@ -Define a context class -====================== - -The [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic class interface to handle pipeline inputs according to [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers). - -# Write Python context file - -A specific [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) can be defined into a Python file. - -Here is an example context defined into *my_context.py* file: - -```python -from argaze import ArFeatures, DataFeatures - -class Example(ArFeatures.ArContext): - - @DataFeatures.PipelineStepInit - def __init__(self, **kwargs): - - # Init ArContext class - super().__init__() - - # Init private attribute - self.__parameter = ... - - @property - def parameter(self): - """Any context specific parameter.""" - return self.__parameter - - @parameter.setter - def parameter(self, parameter): - self.__parameter = parameter - - @DataFeatures.PipelineStepEnter - def __enter__(self): - - # Start context according any specific parameter - ... self.parameter - - # Assuming that timestamp, x and y values are available - ... - - # Process timestamped gaze position - self._process_gaze_position(timestamp = timestamp, x = x, y = y) - - @DataFeatures.PipelineStepExit - def __exit__(self, exception_type, exception_value, exception_traceback): - - # End context - ... -``` - -!!! note "" - - The next chapter explains how to [load a context to connect it with a pipeline](configuration_and_connection.md). -
\ No newline at end of file diff --git a/docs/user_guide/pipeline_input_context/introduction.md b/docs/user_guide/pipeline_input_context/introduction.md deleted file mode 100644 index e31ad54..0000000 --- a/docs/user_guide/pipeline_input_context/introduction.md +++ /dev/null @@ -1,24 +0,0 @@ -Overview -======== - -This section explains how to connect [gaze analysis](../gaze_analysis_pipeline/introduction.md) or [augmented reality](../aruco_marker_pipeline/introduction.md) pipelines with various input contexts. - -First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters. - -![Pipeline input context](../../img/pipeline_input_context.png) - -To build your own input context, you need to know: - -* [How to define a context class](context_definition.md), -* [How to load a context to connect with a pipeline](configuration_and_connection.md), - -!!! warning "Documentation in progress" - - This section is not yet fully done. Please look at the [demonstrations scripts chapter](../utils/demonstrations_scripts.md) to know more about this notion. - -<!-- -* [How to stop a context](stop.md), -* [How to pause and resume a context](pause_and_resume.md), -* [How to visualize a context](visualization.md), -* [How to handle pipeline exceptions](exceptions.md) -!--> diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md index f293980..dd1b8e0 100644 --- a/docs/user_guide/utils/demonstrations_scripts.md +++ b/docs/user_guide/utils/demonstrations_scripts.md @@ -11,18 +11,26 @@ Collection of command-line scripts for demonstration purpose. ## Random context -Load **random_context.json** file to analyze random gaze positions: +Load **random_context.json** file to process random gaze positions: ```shell python -m argaze load ./src/argaze/utils/demo/random_context.json ``` -## OpenCV window context +## OpenCV cursor context -Load **opencv_window_context.json** file to analyze mouse pointer positions over OpenCV window: +Load **opencv_cursor_context.json** file to process cursor positions over an OpenCV window: ```shell -python -m argaze load ./src/argaze/utils/demo/opencv_window_context.json +python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json +``` + +## OpenCV movie context + +Load **opencv_movie_context.json** file to process movie images and also cursor positions over an OpenCV window: + +```shell +python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json ``` ## Tobii Pro Glasses 2 diff --git a/docs/user_guide/utils/estimate_aruco_markers_pose.md b/docs/user_guide/utils/estimate_aruco_markers_pose.md new file mode 100644 index 0000000..3d34972 --- /dev/null +++ b/docs/user_guide/utils/estimate_aruco_markers_pose.md @@ -0,0 +1,60 @@ +Estimate ArUco markers pose +=========================== + +This **ArGaze** application detects ArUco markers inside movie frames, then exports pose estimations as .obj files into a folder. + +Firstly, edit the **utils/estimate_markers_pose/context.json** file to select a movie *path*. + +```json +{ + "argaze.utils.contexts.OpenCV.Movie" : { + "name": "ArUco markers pose estimator", + "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4", + "pipeline": "pipeline.json" + } +} +``` + +Secondly, edit the **utils/estimate_markers_pose/pipeline.json** file to set up the ArUco camera *size*, and the ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes. 
+ +```json +{ + "argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": { + "name": "Full HD Camera", + "size": [1920, 1080], + "aruco_detector": { + "dictionary": "DICT_APRILTAG_16h5", + "pose_size": 4, + "pose_ids": [], + "parameters": { + "useAruco3Detection": 1 + }, + "observers":{ + "observers.ArUcoMarkersPoseRecorder": { + "output_folder": "_export/records/aruco_markers_group" + } + } + }, + "sides_mask": 420, + "image_parameters": { + "background_weight": 1, + "draw_gaze_positions": { + "color": [0, 255, 255], + "size": 4 + }, + "draw_detected_markers": { + "color": [255, 255, 255], + "draw_axes": { + "thickness": 4 + } + } + } + } +} +``` + +Then, launch the application. + +```shell +python -m argaze load ./src/argaze/utils/estimate_markers_pose/context.json +```
\ No newline at end of file diff --git a/docs/user_guide/utils/ready-made_scripts.md b/docs/user_guide/utils/main_commands.md index 892fef8..4dd3434 100644 --- a/docs/user_guide/utils/ready-made_scripts.md +++ b/docs/user_guide/utils/main_commands.md @@ -1,15 +1,12 @@ -Ready-made scripts -================== +Main commands +============= -Collection of command-line scripts to provide useful features. - -!!! note - *Consider that all inline commands below have to be executed at the root of ArGaze package folder.* +The **ArGaze** package comes with top-level commands. !!! note *Use -h option to get command arguments documentation.* -## Load ArContext JSON configuration +## Load Load and execute any [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) from a JSON CONFIGURATION file @@ -17,6 +14,10 @@ Load and execute any [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) f python -m argaze load CONFIGURATION ``` +This command should open a GUI window to display the image of the context's pipeline. + +![ArGaze load GUI](../../img/argaze_load_gui.png) + ### Send command Use -p option to enable pipe communication at given address: @@ -46,24 +47,10 @@ echo "context.pause()" > /tmp/argaze echo "context.resume()" > /tmp/argaze ``` -## Edit JSON configuration +## Edit Modify the content of JSON CONFIGURATION file with another JSON CHANGES file then, save the result into an OUTPUT file ```shell python -m argaze edit CONFIGURATION CHANGES OUTPUT ``` - -## Estimate ArUco markers pose - -This application detects ArUco markers inside a movie frame then, export pose estimation as .obj file into a folder. - -Firstly, edit **utils/estimate_markers_pose/context.json** file as to select a movie *path*. - -Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes. - -Then, launch the application. - -```shell -python -m argaze load ./src/argaze/utils/estimate_markers_pose/context.json -```
\ No newline at end of file
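As a closing illustration of the *edit* command documented above, a CHANGES file is itself a JSON file whose entries are presumably merged on top of the CONFIGURATION file before being saved to OUTPUT. The sketch below is a hypothetical example of such a file that would override the frame size of the earlier ArFrame configuration; the exact merge semantics are not specified here, so check the command's -h output.

```json
{
    "argaze.ArFeatures.ArFrame": {
        "size": [1920, 1080]
    }
}
```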