Diffstat (limited to 'docs/user_guide')
24 files changed, 337 insertions, 95 deletions
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
index 975f278..311916b 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
@@ -5,7 +5,7 @@ As explain in the [OpenCV ArUco documentation](https://docs.opencv.org/4.x/d1/dc

## Load ArUcoDetector parameters

-[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector.Parameters) can be loaded with a dedicated JSON entry.

Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file with ArUco detector parameters:

@@ -18,7 +18,7 @@ Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoM
        "dictionary": "DICT_APRILTAG_16h5",
        "parameters": {
            "adaptiveThreshConstant": 10,
-            "useAruco3Detection": 1
+            "useAruco3Detection": true
        }
    },
    ...
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
index 625f257..e9ce740 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
@@ -134,7 +134,7 @@ Below, an optic_parameters JSON file example:

## Load and display optic parameters

-[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled thanks to a dedicated JSON entry.
+[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarker.ArUcoOpticCalibrator.OpticParameters) can be enabled with a dedicated JSON entry.

Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration file where optic parameters are loaded and displayed:

diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
index a9d66e9..f258e04 100644
--- a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
@@ -150,7 +150,7 @@ Particularly, timestamped gaze positions can be passed one by one to the [ArUcoC

## Setup ArUcoCamera image parameters

-Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.

```python
# Assuming ArUcoCamera is loaded
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
index 46422b8..78a513a 100644
--- a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
@@ -1,7 +1,7 @@
Describe 3D AOI
===============

-Now that the [scene pose is estimated](aruco_marker_description.md) thanks to ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described into the same 3D referential.
+Now that the [scene pose is estimated](aruco_marker_description.md) considering the ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described into the same 3D referential.

In the example scene, the two screens—the control panel and the window—are considered to be areas of interest.
diff --git a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
index c2ee1b9..56846e2 100644
--- a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
@@ -1,7 +1,7 @@
Edit and execute pipeline
=========================

-Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected thanks to [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.
+Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected by the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.

As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class also benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
index c163696..a543bc7 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
@@ -3,27 +3,27 @@ Define a context class

The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic base class interface to handle incoming eye tracker data before to pass them to a processing pipeline according to [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers).

-The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides playback features to stop or pause processings, performance assement features to measure how many times processings are called and the time spent by the process.
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides control features to stop or pause working threads, and performance assessment features to measure how many times processings are called and the time spent by the process.
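+
+For illustration, here is a minimal sketch of this control interface, assuming *context* is an already loaded [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) instance (only methods used elsewhere in this page and in the main commands documentation appear below):
+
+```python
+# A minimal sketch, assuming "context" is a loaded ArContext instance
+if context.is_running():
+
+    # Suspend the working threads: is_paused() becomes true for them
+    context.pause()
+
+    # Resume the working threads
+    context.resume()
+
+# Ask the working threads to stop
+context.stop()
+```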
-Besides, there is also a [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to write specific device calibration process.
+Besides, there is also a [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to write a specific device calibration process.

-In the same way, there is a [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines abstract *previous* and *next* playback methods to move into record's frames and also defines *duration* and *progression* properties to get information about a record length and processing advancement.
+In the same way, there is a [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines *duration* and *progression* properties to get information about a record length and playback advancement.

-Finally, a specific eye tracking context can be defined into a Python file by writing a class that inherits either from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) or [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class.
+Finally, a specific eye tracking context can be defined in a Python file by writing a class that inherits from either the [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) or [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class.

-## Write live processing context
+## Write data capture context

-Here is a live processing context example that processes gaze positions and camera images in two separated threads:
+Here is a data capture context example that processes gaze positions and camera images in two separate threads:

```python
from argaze import ArFeatures, DataFeatures

-class LiveProcessingExample(ArFeatures.LiveProcessingContext):
+class DataCaptureExample(ArFeatures.DataCaptureContext):

    @DataFeatures.PipelineStepInit
    def __init__(self, **kwargs):

-        # Init LiveProcessingContext class
+        # Init DataCaptureContext class
        super().__init__()

        # Init private attribute
@@ -45,23 +45,23 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
        # Start context according any specific parameter
        ... self.parameter

-        # Start a gaze position processing thread
-        self.__gaze_thread = threading.Thread(target = self.__gaze_position_processing)
+        # Start a gaze position capture thread
+        self.__gaze_thread = threading.Thread(target = self.__gaze_position_capture)

        self.__gaze_thread.start()

-        # Start a camera image processing thread if applicable
-        self.__camera_thread = threading.Thread(target = self.__camera_image_processing)
+        # Start a camera image capture thread if applicable
+        self.__camera_thread = threading.Thread(target = self.__camera_image_capture)

        self.__camera_thread.start()

        return self

-    def __gaze_position_processing(self):
-        """Process gaze position."""
+    def __gaze_position_capture(self):
+        """Capture gaze position."""

-        # Processing loop
+        # Capture loop
        while self.is_running():

-            # Pause processing
+            # Pause capture
            if not self.is_paused():

                # Assuming that timestamp, x and y values are available
@@ -73,13 +73,13 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
            # Wait some time eventually
            ...

-    def __camera_image_processing(self):
-        """Process camera image if applicable."""
+    def __camera_image_capture(self):
+        """Capture camera image if applicable."""

-        # Processing loop
+        # Capture loop
        while self.is_running():

-            # Pause processing
+            # Pause capture
            if not self.is_paused():

                # Assuming that timestamp, camera_image are available
@@ -95,10 +95,10 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
    def __exit__(self, exception_type, exception_value, exception_traceback):
        """End context."""

-        # Stop processing loops
+        # Stop capture loops
        self.stop()

-        # Stop processing threads
+        # Stop capture threads
        threading.Thread.join(self.__gaze_thread)
        threading.Thread.join(self.__camera_thread)

@@ -108,19 +108,19 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
    ...
```

-## Write post processing context
+## Write data playback context

-Here is a post processing context example that processes gaze positions and camera images in a same thread:
+Here is a data playback context example that reads gaze positions and camera images in the same thread:

```python
from argaze import ArFeatures, DataFeatures

-class PostProcessingExample(ArFeatures.PostProcessingContext):
+class DataPlaybackExample(ArFeatures.DataPlaybackContext):

    @DataFeatures.PipelineStepInit
    def __init__(self, **kwargs):

-        # Init LiveProcessingContext class
+        # Init DataPlaybackContext class
        super().__init__()

        # Init private attribute
@@ -142,19 +142,19 @@ class PostProcessingExample(ArFeatures.PostProcessingContext):
        # Start context according any specific parameter
        ... self.parameter

-        # Start a reading data thread
-        self.__read_thread = threading.Thread(target = self.__data_reading)
-        self.__read_thread.start()
+        # Start a data playback thread
+        self.__data_thread = threading.Thread(target = self.__data_playback)
+        self.__data_thread.start()

        return self

-    def __data_reading(self):
-        """Process gaze position and camera image if applicable."""
+    def __data_playback(self):
+        """Play back gaze position and camera image if applicable."""

-        # Processing loop
+        # Playback loop
        while self.is_running():

-            # Pause processing
+            # Pause playback
            if not self.is_paused():

                # Assuming that timestamp, camera_image are available
@@ -176,18 +176,20 @@ class PostProcessingExample(ArFeatures.PostProcessingContext):
    def __exit__(self, exception_type, exception_value, exception_traceback):
        """End context."""

-        # Stop processing loops
+        # Stop playback loop
        self.stop()

-        # Stop processing threads
-        threading.Thread.join(self.__read_thread)
+        # Stop playback thread
+        threading.Thread.join(self.__data_thread)

-    def previous(self):
-        """Go to previous camera image frame."""
+    @property
+    def duration(self) -> int|float:
+        """Get data duration."""

        ...

-    def next(self):
-        """Go to next camera image frame."""
+    @property
+    def progression(self) -> float:
+        """Get data playback progression between 0 and 1."""

        ...
```
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
index 8753eb6..d8eb389 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
@@ -68,12 +68,12 @@ from argaze import ArFeatures

    # Check context type

-    # Live processing case: calibration method is available
-    if issubclass(type(context), ArFeatures.LiveProcessingContext):
+    # Data capture case: calibration method is available
+    if issubclass(type(context), ArFeatures.DataCaptureContext):
        ...

-    # Post processing case: more playback methods are available
-    if issubclass(type(context), ArFeatures.PostProcessingContext):
+    # Data playback case: duration and progression properties are available
+    if issubclass(type(context), ArFeatures.DataPlaybackContext):
        ...

    # Check pipeline type
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
index 340dbaf..959d955 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/timestamped_gaze_positions_edition.md
@@ -28,8 +28,8 @@ for timestamped_gaze_position in ts_gaze_positions:

## Edit timestamped gaze positions from live stream

-Real-time gaze positions can be edited thanks to the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
-Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited thanks to the Python [time package](https://docs.python.org/3/library/time.html).
+Real-time gaze positions can be edited directly using the [GazePosition](../../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
+Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited using the Python [time package](https://docs.python.org/3/library/time.html).

```python
from argaze import GazeFeatures
diff --git a/docs/user_guide/eye_tracking_context/configuration_and_execution.md b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
index f13c6a2..3deeb57 100644
--- a/docs/user_guide/eye_tracking_context/configuration_and_execution.md
+++ b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
@@ -3,9 +3,12 @@ Edit and execute context

The [utils.contexts module](../../argaze.md/#argaze.utils.contexts) provides ready-made contexts like:

-* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) live stream and post processing contexts,
-* [Pupil Labs](context_modules/pupil_labs.md) live stream context,
-* [OpenCV](context_modules/opencv.md) window cursor position and movie processing,
+* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) data capture and data playback contexts,
+* [Tobii Pro Glasses 3](context_modules/tobii_pro_glasses_3.md) data capture context,
+* [Pupil Labs Invisible](context_modules/pupil_labs_invisible.md) data capture context,
+* [Pupil Labs Neon](context_modules/pupil_labs_neon.md) data capture context,
+* [File](context_modules/file.md) data playback contexts,
+* [OpenCV](context_modules/opencv.md) window cursor position capture and movie playback,
* [Random](context_modules/random.md) gaze position generator.

## Edit JSON configuration
diff --git a/docs/user_guide/eye_tracking_context/context_modules/file.md b/docs/user_guide/eye_tracking_context/context_modules/file.md
new file mode 100644
index 0000000..5b5c8e9
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/file.md
@@ -0,0 +1,75 @@
+File
+======
+
+ArGaze provides ready-made contexts to read data from various file formats.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+    JSON sample
+    "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## CSV
+
+::: argaze.utils.contexts.File.CSV
+
+### JSON sample: split case
+
+To use when gaze position coordinates are split into two separate columns.
+
+```json
+{
+    "argaze.utils.contexts.File.CSV": {
+        "name": "CSV file data playback",
+        "path": "./src/argaze/utils/demo/gaze_positions_splitted.csv",
+        "timestamp_column": "Timestamp (ms)",
+        "x_column": "Gaze Position X (px)",
+        "y_column": "Gaze Position Y (px)",
+        "pipeline": ...
+    }
+}
+```
+
+### JSON sample: joined case
+
+To use when gaze position coordinates are joined as a list in a single column.
+
+```json
+{
+    "argaze.utils.contexts.File.CSV" : {
+        "name": "CSV file data playback",
+        "path": "./src/argaze/utils/demo/gaze_positions_xy_joined.csv",
+        "timestamp_column": "Timestamp (ms)",
+        "xy_column": "Gaze Position (px)",
+        "pipeline": ...
+    }
+}
+```
+
+### JSON sample: left and right eyes
+
+To use when gaze position coordinates and validity are given for each eye in six separate columns.
+
+```json
+{
+    "argaze.utils.contexts.File.CSV": {
+        "name": "CSV file data playback",
+        "path": "./src/argaze/utils/demo/gaze_positions_left_right_eyes.csv",
+        "timestamp_column": "Timestamp (ms)",
+        "left_eye_x_column": "Left eye X",
+        "left_eye_y_column": "Left eye Y",
+        "left_eye_validity_column": "Left eye validity",
+        "right_eye_x_column": "Right eye X",
+        "right_eye_y_column": "Right eye Y",
+        "right_eye_validity_column": "Right eye validity",
+        "rescale_to_pipeline_size": true,
+        "pipeline": ...
+    }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/context_modules/opencv.md b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
index 7244cd4..7d73a03 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/opencv.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/opencv.md
@@ -39,9 +39,25 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg

```json
{
    "argaze.utils.contexts.OpenCV.Movie": {
-        "name": "Open CV cursor",
+        "name": "Open CV movie",
        "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
        "pipeline": ...
    }
}
```
+
+## Camera
+
+::: argaze.utils.contexts.OpenCV.Camera
+
+### JSON sample
+
+```json
+{
+    "argaze.utils.contexts.OpenCV.Camera": {
+        "name": "Open CV camera",
+        "identifier": 0,
+        "pipeline": ...
+    }
+}
+```
\ No newline at end of file
diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md
new file mode 100644
index 0000000..1f4a94f
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_invisible.md
@@ -0,0 +1,32 @@
+Pupil Labs Invisible
+==========
+
+ArGaze provides a ready-made context to work with the Pupil Labs Invisible device.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+    JSON sample
+    "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.PupilLabsInvisible.LiveStream
+
+### JSON sample
+
+```json
+{
+    "argaze.utils.contexts.PupilLabsInvisible.LiveStream": {
+        "name": "Pupil Labs Invisible live stream",
+        "project": "my_experiment",
+        "pipeline": ...
+    }
+}
+```
diff --git a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_neon.md
index d2ec336..535f5d5 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/pupil_labs.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/pupil_labs_neon.md
@@ -1,7 +1,7 @@
-Pupil Labs
+Pupil Labs Neon
==========

-ArGaze provides a ready-made context to work with Pupil Labs devices.
+ArGaze provides a ready-made context to work with the Pupil Labs Neon device.

To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
Notice that the *pipeline* entry is mandatory.
@@ -17,14 +17,14 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg

## Live Stream

-::: argaze.utils.contexts.PupilLabs.LiveStream
+::: argaze.utils.contexts.PupilLabsNeon.LiveStream

### JSON sample

```json
{
-    "argaze.utils.contexts.PupilLabs.LiveStream": {
-        "name": "Pupil Labs live stream",
+    "argaze.utils.contexts.PupilLabsNeon.LiveStream": {
+        "name": "Pupil Labs Neon live stream",
        "project": "my_experiment",
        "pipeline": ...
    }
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
index fba6931..6ff44bd 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
@@ -42,16 +42,16 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg
}
```

-## Post Processing
+## Segment Playback

-::: argaze.utils.contexts.TobiiProGlasses2.PostProcessing
+::: argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback

### JSON sample

```json
{
-    "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
-        "name": "Tobii Pro Glasses 2 post-processing",
+    "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+        "name": "Tobii Pro Glasses 2 segment playback",
        "segment": "./src/argaze/utils/demo/tobii_record/segments/1",
        "pipeline": ...
    }
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md
new file mode 100644
index 0000000..3d37fcc
--- /dev/null
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_3.md
@@ -0,0 +1,32 @@
+Tobii Pro Glasses 3
+===================
+
+ArGaze provides a ready-made context to work with Tobii Pro Glasses 3 devices.
+
+To select a desired context, the JSON samples have to be edited and saved inside an [ArContext configuration](../configuration_and_execution.md) file.
+Notice that the *pipeline* entry is mandatory.
+
+```json
+{
+    JSON sample
+    "pipeline": ...
+}
+```
+
+Read more about [ArContext base class in code reference](../../../argaze.md/#argaze.ArFeatures.ArContext).
+
+## Live Stream
+
+::: argaze.utils.contexts.TobiiProGlasses3.LiveStream
+
+### JSON sample
+
+```json
+{
+    "argaze.utils.contexts.TobiiProGlasses3.LiveStream": {
+        "name": "Tobii Pro Glasses 3 live stream",
+        "pipeline": ...
+    }
+}
+```
+
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
index 4970dba..effee18 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
@@ -7,7 +7,7 @@ The calibration algorithm can be selected by instantiating a particular [GazePos

## Enable ArFrame calibration

-Gaze position calibration can be enabled thanks to a dedicated JSON entry.
+Gaze position calibration can be enabled with a dedicated JSON entry.

Here is an extract from the JSON ArFrame configuration file where a [Linear Regression](../../../argaze.md/#argaze.GazeAnalysis.LinearRegression) calibration algorithm is selected with no parameters:
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
index 264e866..843274a 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
@@ -158,7 +158,7 @@ Last [GazeMovement](../../../argaze.md/#argaze.GazeFeatures.GazeMovement) identi
This could also be the current gaze movement if [ArFrame.filter_in_progress_identification](../../../argaze.md/#argaze.ArFeatures.ArFrame) attribute is false.
In that case, the last gaze movement *finished* flag is false.

-Then, the last gaze movement type can be tested thanks to [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.
+Then, the last gaze movement type can be tested with [GazeFeatures.is_fixation](../../../argaze.md/#argaze.GazeFeatures.is_fixation) and [GazeFeatures.is_saccade](../../../argaze.md/#argaze.GazeFeatures.is_saccade) functions.

### *ar_frame.is_analysis_available()*

@@ -182,7 +182,7 @@ This an iterator to access to all aoi scan path analysis. Notice that each aoi s

## Setup ArFrame image parameters

-[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a Python dictionary.

```python
# Assuming ArFrame is loaded
diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
index 2b64091..c2a6ac3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
@@ -100,6 +100,11 @@ The second [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step a

Once gaze movements are matched to AOI, they are automatically appended to the AOIScanPath if required.

+!!! warning "GazeFeatures.OutsideAOI"
+    When a fixation is not looking at any AOI, a step associated with an AOI called [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) is added. As long as fixations are not looking at any AOI, all fixations/saccades are stored in this step. In this way, further analyses are calculated considering those extra [GazeFeatures.OutsideAOI](../../argaze.md/#argaze.GazeFeatures.OutsideAOI) steps.
+
+    This is particularly important when calculating transition matrices, because otherwise we could have arcs between two AOIs when in fact the gaze may have fixated outside them in the meantime.
+
The [AOIScanPath.duration_max](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.duration_max) attribute is the duration from which older AOI scan steps are removed each time new AOI scan steps are added.

!!! note "Optional"
diff --git a/docs/user_guide/gaze_analysis_pipeline/background.md b/docs/user_guide/gaze_analysis_pipeline/background.md
index 900d151..11285e3 100644
--- a/docs/user_guide/gaze_analysis_pipeline/background.md
+++ b/docs/user_guide/gaze_analysis_pipeline/background.md
@@ -7,7 +7,7 @@ Background is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)

## Load and display ArFrame background

-[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.background](../../argaze.md/#argaze.ArFeatures.ArFrame.background) can be enabled with a dedicated JSON entry.

Here is an extract from the JSON ArFrame configuration file where a background picture is loaded and displayed:

@@ -28,7 +28,7 @@ Here is an extract from the JSON ArFrame configuration file where a background p
```

!!! note
-    As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+    As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.

Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/heatmap.md b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
index 2057dbe..77b2be0 100644
--- a/docs/user_guide/gaze_analysis_pipeline/heatmap.md
+++ b/docs/user_guide/gaze_analysis_pipeline/heatmap.md
@@ -7,7 +7,7 @@ Heatmap is an optional [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pip

## Enable and display ArFrame heatmap

-[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled thanks to a dedicated JSON entry.
+[ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) can be enabled with a dedicated JSON entry.

Here is an extract from the JSON ArFrame configuration file where heatmap is enabled and displayed:

@@ -31,7 +31,7 @@ Here is an extract from the JSON ArFrame configuration file where heatmap is ena
}
```

!!! note
-    [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible thanks to [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.
+    [ArFrame.heatmap](../../argaze.md/#argaze.ArFeatures.ArFrame.heatmap) is automatically updated each time the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method is called. As explained in [visualization chapter](visualization.md), the resulting image is accessible with [ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method.

Now, let's understand the meaning of each JSON entry.
diff --git a/docs/user_guide/gaze_analysis_pipeline/visualization.md b/docs/user_guide/gaze_analysis_pipeline/visualization.md
index 32395c3..08b5465 100644
--- a/docs/user_guide/gaze_analysis_pipeline/visualization.md
+++ b/docs/user_guide/gaze_analysis_pipeline/visualization.md
@@ -7,7 +7,7 @@ Visualization is not a pipeline step, but each [ArFrame](../../argaze.md/#argaze

## Add image parameters to ArFrame JSON configuration

-[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a dedicated JSON entry.
+[ArFrame.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured with a dedicated JSON entry.

Here is an extract from the JSON ArFrame configuration file with a sample where image parameters are added:
diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md
index dd1b8e0..c7560eb 100644
--- a/docs/user_guide/utils/demonstrations_scripts.md
+++ b/docs/user_guide/utils/demonstrations_scripts.md
@@ -9,30 +9,70 @@ Collection of command-line scripts for demonstration purpose.
!!! note
    *Use -h option to get command arguments documentation.*

+!!! note
+    Each demonstration outputs metrics into the *_export/records* folder.
+
## Random context

-Load **random_context.json** file to process random gaze positions:
+Load **random_context.json** file to generate random gaze positions:

```shell
python -m argaze load ./src/argaze/utils/demo/random_context.json
```

-## OpenCV cursor context
+## CSV file context
+
+Load **csv_file_context_xy_joined.json** file to analyze gaze positions from a CSV file where gaze position coordinates are joined as a list in a single column:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/csv_file_context_xy_joined.json
+```
+
+Load **csv_file_context_xy_splitted.json** file to analyze gaze positions from a CSV file where gaze position coordinates are split into two separate columns:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/csv_file_context_xy_splitted.json
+```
+
+Load **csv_file_context_left_right_eyes.json** file to analyze gaze positions from a CSV file where gaze position coordinates and validity are given for each eye in six separate columns, as illustrated below:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/csv_file_context_left_right_eyes.json
+```
+
+!!! note
+    The left/right eyes context allows parsing Tobii Spectrum data, for example.
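+
+For illustration, a left/right eyes CSV file might begin as follows (a hypothetical sketch: the column names match the left and right eyes JSON sample of the File context documentation, but the values are made up):
+
+```csv
+Timestamp (ms),Left eye X,Left eye Y,Left eye validity,Right eye X,Right eye Y,Right eye validity
+0,860,540,1,866,544,1
+20,862,542,1,868,546,1
+```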
+
+## OpenCV

-Load **opencv_cursor_context.json** file to process cursor pointer positions over OpenCV window:
+### Cursor context
+
+Load **opencv_cursor_context.json** file to capture cursor pointer positions over OpenCV window:

```shell
python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json
```

-## OpenCV movie context
+### Movie context

-Load **opencv_movie_context.json** file to process movie pictures and also cursor pointer positions over OpenCV window:
+Load **opencv_movie_context.json** file to play back a movie and also capture cursor pointer positions over OpenCV window:

```shell
python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
```

+### Camera context
+
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution and to set a consistent *sides_mask* value.
+
+Edit **opencv_camera_context.json** file to select the camera device identifier (default is 0).
+
+Then, load **opencv_camera_context.json** file to capture camera pictures and also capture cursor pointer positions over OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_camera_context.json
+```
+
## Tobii Pro Glasses 2

### Live stream context

@@ -40,7 +80,9 @@ python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
!!! note
    This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.

-Edit **tobii_live_stream_context.json** file as to select exisiting IP *address*, *project* or *participant* names and setup Tobii *configuration* parameters:
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1920, 1080]) and to set the *sides_mask* value to 420.
+
+Edit **tobii_g2_live_stream_context.json** file to select an existing IP *address*, *project* or *participant* names and to set up Tobii *configuration* parameters:

```json
{
@@ -63,35 +105,50 @@ Edit **tobii_live_stream_context.json** file as to select exisiting IP *address*
}
```

-Then, load **tobii_live_stream_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load **tobii_g2_live_stream_context.json** file to find ArUco markers in the camera image and project gaze positions into AOI:

```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_live_stream_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_g2_live_stream_context.json
```

-### Post-processing context
+### Segment playback context

-!!! note
-    This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1920, 1080]) and to set the *sides_mask* value to 420, as sketched after this paragraph.
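+
+For illustration, the two values might appear as follows in **aruco_markers_pipeline.json** (a hypothetical extract: only the *size* and *sides_mask* values are prescribed here, the other entries stand for the existing content of the demonstration file):
+
+```json
+{
+    "argaze.ArUcoMarker.ArUcoCamera": {
+        "name": "Tobii Pro Glasses 2 camera",
+        "size": [1920, 1080],
+        "sides_mask": 420,
+        ...
+    }
+}
+```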
-Edit **tobii_post_processing_context.json** file to select an existing Tobii *segment* folder:
+Edit **tobii_g2_segment_playback_context.json** file to select an existing Tobii *segment* folder:

```json
{
-    "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
-        "name": "Tobii Pro Glasses 2 post-processing",
+    "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+        "name": "Tobii Pro Glasses 2 segment playback",
        "segment": "record/segments/1",
        "pipeline": "aruco_markers_pipeline.json"
    }
}
```

-Then, load **tobii_post_processing_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load **tobii_g2_segment_playback_context.json** file to find ArUco markers in the camera image and project gaze positions into AOI:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/tobii_g2_segment_playback_context.json
+```
+
+## Tobii Pro Glasses 3
+
+### Live stream context
+
+!!! note
+    This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1920, 1080]) and to set the *sides_mask* value to 420.
+
+Load **tobii_g3_live_stream_context.json** file to find ArUco markers in the camera image and project gaze positions into AOI:

```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_g3_live_stream_context.json
```
+
## Pupil Invisible

### Live stream context

!!! note
    This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.

-Load **pupillabs_live_stream_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1088, 1080]) and to set the *sides_mask* value to 4.
+
+Load **pupillabs_invisible_live_stream_context.json** file to find ArUco markers in the camera image and project gaze positions into AOI:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/pupillabs_invisible_live_stream_context.json
+```
+
+## Pupil Neon
+
+### Live stream context
+
+!!! note
+    This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution ([1600, 1200]) and to set the *sides_mask* value to 200.
+
+Load **pupillabs_neon_live_stream_context.json** file to find ArUco markers in the camera image and project gaze positions into AOI:

```shell
-python -m argaze load ./src/argaze/utils/demo/pupillabs_live_stream_context.json
+python -m argaze load ./src/argaze/utils/demo/pupillabs_neon_live_stream_context.json
```
diff --git a/docs/user_guide/utils/estimate_aruco_markers_pose.md b/docs/user_guide/utils/estimate_aruco_markers_pose.md
index 3d34972..55bd232 100644
--- a/docs/user_guide/utils/estimate_aruco_markers_pose.md
+++ b/docs/user_guide/utils/estimate_aruco_markers_pose.md
@@ -15,7 +15,7 @@ Firstly, edit **utils/estimate_markers_pose/context.json** file as to select a m
    }
}
```

-Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUco camera *size*, ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
+Secondly, edit **utils/estimate_markers_pose/pipeline.json** file to set up ArUco camera *size*, ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.

```json
{
@@ -27,7 +27,7 @@ Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUc
        "pose_size": 4,
        "pose_ids": [],
        "parameters": {
-            "useAruco3Detection": 1
+            "useAruco3Detection": true
        },
        "observers":{
            "observers.ArUcoMarkersPoseRecorder": {
diff --git a/docs/user_guide/utils/main_commands.md b/docs/user_guide/utils/main_commands.md
index 4dd3434..9227d8d 100644
--- a/docs/user_guide/utils/main_commands.md
+++ b/docs/user_guide/utils/main_commands.md
@@ -35,13 +35,13 @@ For example:
    echo "print(context)" > /tmp/argaze
    ```

-* Pause context processing:
+* Pause context:

    ```shell
    echo "context.pause()" > /tmp/argaze
    ```

-* Resume context processing:
+* Resume context:

    ```shell
    echo "context.resume()" > /tmp/argaze
@@ -54,3 +54,6 @@ Modify the content of JSON CONFIGURATION file with another JSON CHANGES file the

```shell
python -m argaze edit CONFIGURATION CHANGES OUTPUT
```
+
+!!! note
+    Use a *null* value to remove an entry.
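+
+For example, a hypothetical CHANGES file that renames a context and removes its *project* entry could look like this (the entry names are illustrative):
+
+```json
+{
+    "argaze.utils.contexts.TobiiProGlasses2.LiveStream": {
+        "name": "Renamed live stream",
+        "project": null
+    }
+}
+```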