From 03013286100d4a3cc49439afc6f432f7be0c494b Mon Sep 17 00:00:00 2001 From: Théo de la Hogue Date: Wed, 10 Apr 2024 15:13:12 +0200 Subject: orthographic corrections --- docs/img/aruco_marker_pipeline.png | Bin 0 -> 102120 bytes docs/img/aruco_markers_pipeline.png | Bin 102120 -> 0 bytes docs/index.md | 18 +- .../aruco_detector_configuration.md | 37 ++++ .../optic_parameters_calibration.md | 190 +++++++++++++++++++++ .../advanced_topics/scripting.md | 132 ++++++++++++++ .../aruco_marker_pipeline/aoi_3d_description.md | 68 ++++++++ .../aruco_marker_pipeline/aoi_3d_frame.md | 133 +++++++++++++++ .../aruco_marker_pipeline/aoi_3d_projection.md | 168 ++++++++++++++++++ .../aruco_markers_description.md | 120 +++++++++++++ .../configuration_and_execution.md | 153 +++++++++++++++++ .../aruco_marker_pipeline/introduction.md | 29 ++++ .../aruco_marker_pipeline/pose_estimation.md | 84 +++++++++ .../aruco_detector_configuration.md | 37 ---- .../optic_parameters_calibration.md | 190 --------------------- .../advanced_topics/scripting.md | 132 -------------- .../aruco_markers_pipeline/aoi_3d_description.md | 68 -------- .../aruco_markers_pipeline/aoi_3d_frame.md | 133 --------------- .../aruco_markers_pipeline/aoi_3d_projection.md | 168 ------------------ .../aruco_markers_description.md | 120 ------------- .../configuration_and_execution.md | 153 ----------------- .../aruco_markers_pipeline/introduction.md | 29 ---- .../aruco_markers_pipeline/pose_estimation.md | 84 --------- .../advanced_topics/gaze_position_calibration.md | 2 +- .../advanced_topics/module_loading.md | 4 +- .../advanced_topics/scripting.md | 4 +- .../gaze_analysis_pipeline/aoi_analysis.md | 4 +- .../configuration_and_execution.md | 4 +- .../timestamped_gaze_positions_edition.md | 6 +- mkdocs.yml | 22 +-- 30 files changed, 1146 insertions(+), 1146 deletions(-) create mode 100644 docs/img/aruco_marker_pipeline.png delete mode 100644 docs/img/aruco_markers_pipeline.png create mode 100644 docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md create mode 100644 docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md create mode 100644 docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md create mode 100644 docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md create mode 100644 docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md create mode 100644 docs/user_guide/aruco_marker_pipeline/aoi_3d_projection.md create mode 100644 docs/user_guide/aruco_marker_pipeline/aruco_markers_description.md create mode 100644 docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md create mode 100644 docs/user_guide/aruco_marker_pipeline/introduction.md create mode 100644 docs/user_guide/aruco_marker_pipeline/pose_estimation.md delete mode 100644 docs/user_guide/aruco_markers_pipeline/advanced_topics/aruco_detector_configuration.md delete mode 100644 docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md delete mode 100644 docs/user_guide/aruco_markers_pipeline/advanced_topics/scripting.md delete mode 100644 docs/user_guide/aruco_markers_pipeline/aoi_3d_description.md delete mode 100644 docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md delete mode 100644 docs/user_guide/aruco_markers_pipeline/aoi_3d_projection.md delete mode 100644 docs/user_guide/aruco_markers_pipeline/aruco_markers_description.md delete mode 100644 docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md delete mode 100644 
docs/user_guide/aruco_markers_pipeline/introduction.md delete mode 100644 docs/user_guide/aruco_markers_pipeline/pose_estimation.md diff --git a/docs/img/aruco_marker_pipeline.png b/docs/img/aruco_marker_pipeline.png new file mode 100644 index 0000000..178da7f Binary files /dev/null and b/docs/img/aruco_marker_pipeline.png differ diff --git a/docs/img/aruco_markers_pipeline.png b/docs/img/aruco_markers_pipeline.png deleted file mode 100644 index 178da7f..0000000 Binary files a/docs/img/aruco_markers_pipeline.png and /dev/null differ diff --git a/docs/index.md b/docs/index.md index 3c398ba..3784bdd 100644 --- a/docs/index.md +++ b/docs/index.md @@ -2,36 +2,36 @@ title: What is ArGaze? --- -# Build real-time or post-processing eye tracking applications +# Develop post- or real-time gaze processing applications **Useful links**: [Installation](installation.md) | [Source Repository](https://git.recherche.enac.fr/projects/argaze/repository) | [Issue Tracker](https://git.recherche.enac.fr/projects/argaze/issues) | [Contact](mailto:achil-contact@recherche.enac.fr) -**ArGaze** python toolkit provides a set of classes to build **custom-made gaze processing pipelines** that works with **any kind of eye tracker devices** whether on **live data stream** or for **data post-processing**. +**ArGaze** is a Python software library that lets you build **custom-made gaze processing pipelines** for **any kind of eye tracker device,** whether for **post- or real-time data processing**. ![ArGaze pipeline](img/argaze_pipeline.png) ## Gaze analysis pipeline -First of all, **ArGaze** provides extensible modules library allowing to select application specific algorithms at each pipeline step: +**ArGaze** provides an extensible modules library, allowing to select application-specific algorithms at each pipeline step: * **Fixation/Saccade identification**: dispersion threshold identification, velocity threshold identification, ... * **Area Of Interest (AOI) matching**: focus point inside, deviation circle coverage, ... * **Scan path analysis**: transition matrix, entropy, explore/exploit ratio, ... -Once incoming data are formatted as required, all those gaze analysis features can be used with any screen-based eye tracker devices. +Once the incoming data is formatted as required, all those gaze analysis features can be used with any screen-based eye tracker devices. -[Learn how to build gaze analysis pipelines for various use cases by reading user guide dedicated section](./user_guide/gaze_analysis_pipeline/introduction.md). +[Learn how to build gaze analysis pipelines for various use cases by reading the dedicated user guide section](./user_guide/gaze_analysis_pipeline/introduction.md). -## Augmented reality based on ArUco markers pipeline +## Augmented reality based on ArUco marker pipeline Things goes harder when gaze data comes from head-mounted eye tracker devices. That's why **ArGaze** provides **Augmented Reality (AR)** support to map **Areas Of Interest (AOI)** on [OpenCV ArUco markers](https://www.sciencedirect.com/science/article/abs/pii/S0031320314000235). ![ArUco pipeline axis](img/aruco_pipeline_axis.png) -This ArUco markers pipeline can be combined with any wearable eye tracking device python library like Tobii or Pupill glasses. +This ArUco marker pipeline can be combined with any wearable eye tracking device Python library, like Tobii or Pupil glasses. 
-[Learn how to build ArUco markers pipelines for various use cases by reading user guide dedicated section](./user_guide/aruco_markers_pipeline/introduction.md).
+[Learn how to build ArUco marker pipelines for various use cases by reading the dedicated user guide section](./user_guide/aruco_marker_pipeline/introduction.md).
 
 !!! note
-    *ArUco markers pipeline is greatly inspired by [Andrew T. Duchowski, Vsevolod Peysakhovich and Krzysztof Krejtz article](https://git.recherche.enac.fr/attachments/download/1990/Using_Pose_Estimation_to_Map_Gaze_to_Detected_Fidu.pdf) about using pose estimation to map gaze to detected fiducial markers.*
+    *The ArUco marker pipeline is greatly inspired by the [Andrew T. Duchowski, Vsevolod Peysakhovich and Krzysztof Krejtz article](https://git.recherche.enac.fr/attachments/download/1990/Using_Pose_Estimation_to_Map_Gaze_to_Detected_Fidu.pdf) about using pose estimation to map gaze to detected fiducial markers.*
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
new file mode 100644
index 0000000..410e2d7
--- /dev/null
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
@@ -0,0 +1,37 @@
+Improve ArUco marker detection
+==============================
+
+As explained in the [OpenCV ArUco documentation](https://docs.opencv.org/4.x/d1/dcd/structcv_1_1aruco_1_1DetectorParameters.html), ArUco marker detection is highly configurable.
+
+## Load ArUcoDetector parameters
+
+[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.Parameters) can be loaded thanks to a dedicated JSON entry.
+
+Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file with ArUco detector parameters:
+
+```json
+{
+    "name": "My FullHD camera",
+    "size": [1920, 1080],
+    "aruco_detector": {
+        "dictionary": "DICT_APRILTAG_16h5",
+        "parameters": {
+            "adaptiveThreshConstant": 10,
+            "useAruco3Detection": 1
+        }
+    },
+    ...
+```
+
+## Print ArUcoDetector parameters
+
+```python
+# Assuming ArUcoCamera is loaded
+...
+
+# Print all ArUcoDetector parameters
+print(aruco_camera.aruco_detector.parameters)
+
+# Print only modified ArUcoDetector parameters
+print(f'{aruco_camera.aruco_detector.parameters:modified}')
+```
diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
new file mode 100644
index 0000000..54d0c94
--- /dev/null
+++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
@@ -0,0 +1,190 @@
+Calibrate optic parameters
+==========================
+
+A camera device has to be calibrated to compensate for its optical distortion.
+
+![Optic parameters calibration](../../../img/optic_calibration.png)
+
+## Print calibration board
+
+The first step to calibrate a camera is to create an [ArUcoBoard](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) like in the code below:
+
+```python
+from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoBoard
+
+# Create ArUco dictionary
+aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
+
+# Create an ArUco board of 7 columns and 5 rows with 5 cm squares with 3 cm ArUco markers inside
+aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary)
+
+# Export ArUco board with 300 dpi resolution
+aruco_board.save('./calibration_board.png', 300)
+```
+
+!!! note
+    There is an **A3_DICT_APRILTAG_16h5_3cm_35cmx25cm.pdf** file located in the *./src/argaze/ArUcoMarkers/utils/* folder, ready to be printed on an A3 paper sheet.
+
+Let's print the calibration board before going further.
+
+## Capture board pictures
+
+Then, the calibration process needs many different captures of an [ArUcoBoard](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) through the camera; they are then passed to an [ArUcoDetector](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) instance to detect board corners and store them as calibration data into an [ArUcoOpticCalibrator](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator) for the final calibration process.
+
+![Calibration step](../../../img/optic_calibration_step.png)
+
+The sample of code below illustrates how to:
+
+* load all required ArGaze objects,
+* detect board corners in a Full HD camera video stream,
+* store detected corners as calibration data,
+* once enough captures are made, process them to find optic parameters and,
+* finally, save optic parameters into a JSON file.
+
+```python
+from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoOpticCalibrator, ArUcoBoard, ArUcoDetector
+
+# Create ArUco dictionary
+aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
+
+# Create ArUco optic calibrator
+aruco_optic_calibrator = ArUcoOpticCalibrator.ArUcoOpticCalibrator()
+
+# Create ArUco board of 7 columns and 5 rows with 5 cm squares with 3 cm ArUco markers inside
+# Note: This board is the one expected during further board tracking
+expected_aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary)
+
+# Create ArUco detector
+aruco_detector = ArUcoDetector.ArUcoDetector(dictionary=aruco_dictionary)
+
+# Assuming that a live Full HD (1920x1080) video stream is enabled
+...
+
+# Assuming there is a way to escape the while loop
+...
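+
+# Editor's note: video_stream is assumed to be any object exposing is_alive()
+# and read() methods that deliver camera images; a thin wrapper around
+# cv2.VideoCapture, for instance, could provide such an interface
+# (hypothetical helper, not part of the ArGaze API).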
+ + while video_stream.is_alive(): + + # Capture image from video stream + image = video_stream.read() + + # Detect all board corners in image + aruco_detector.detect_board(image, expected_aruco_board, expected_aruco_board.markers_number) + + # If all board corners are detected + if aruco_detector.board_corners_number() == expected_aruco_board.corners_number: + + # Draw board corners to show that board tracking succeeded + aruco_detector.draw_board(image) + + # Append tracked board data for further calibration processing + aruco_optic_calibrator.store_calibration_data(aruco_detector.board_corners(), aruco_detector.board_corners_identifier()) + +# Start optic calibration processing for Full HD image resolution +print('Calibrating optic...') +optic_parameters = aruco_optic_calibrator.calibrate(expected_aruco_board, dimensions=(1920, 1080)) + +if optic_parameters: + + # Export optic parameters + optic_parameters.to_json('./optic_parameters.json') + + print('Calibration succeeded: optic_parameters.json file exported.') + +else: + + print('Calibration failed.') +``` + +Below, an optic_parameters JSON file example: + +```json +{ + "rms": 0.6688921504088245, + "dimensions": [ + 1920, + 1080 + ], + "K": [ + [ + 1135.6524381415752, + 0.0, + 956.0685325355497 + ], + [ + 0.0, + 1135.9272506869524, + 560.059099810324 + ], + [ + 0.0, + 0.0, + 1.0 + ] + ], + "D": [ + 0.01655492265003404, + 0.1985524264972037, + 0.002129965902489484, + -0.0019528582922179365, + -0.5792910353639452 + ] +} +``` + +## Load and display optic parameters + +[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator.OpticParameters) can be enabled thanks to a dedicated JSON entry. + +Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file where optic parameters are loaded and displayed: + +```json +{ + "name": "My FullHD Camera", + "size": [1920, 1080], + "aruco_detector": { + "dictionary": "DICT_APRILTAG_16h5", + "optic_parameters": { + "rms": 0.6688921504088245, + "dimensions": [ + 1920, + 1080 + ], + "K": [ + [ + 1135.6524381415752, + 0.0, + 956.0685325355497 + ], + [ + 0.0, + 1135.9272506869524, + 560.059099810324 + ], + [ + 0.0, + 0.0, + 1.0 + ] + ], + "D": [ + 0.01655492265003404, + 0.1985524264972037, + 0.002129965902489484, + -0.0019528582922179365, + -0.5792910353639452 + ] + } + }, + ... + "image_parameters": { + ... + "draw_optic_parameters_grid": { + "width": 192, + "height": 108, + "z": 100, + "point_size": 1, + "point_color": [0, 0, 255] + } + } +``` \ No newline at end of file diff --git a/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md new file mode 100644 index 0000000..2eb64f8 --- /dev/null +++ b/docs/user_guide/aruco_marker_pipeline/advanced_topics/scripting.md @@ -0,0 +1,132 @@ +Script the pipeline +=================== + +All aruco markers pipeline objects are accessible from Python script. +This could be particularly useful for realtime AR interaction applications. + +## Load ArUcoCamera configuration from dictionary + +First of all, [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration can be loaded from a python dictionary. + +```python +from argaze.ArUcoMarkers import ArUcoCamera + +# Edit a dict with ArUcoCamera configuration +configuration = { + "name": "My FullHD camera", + "size": (1920, 1080), + ... + "aruco_detector": { + ... 
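+        # (typically a "dictionary" entry and, optionally, "parameters",
+        # as shown in the ArUco detector configuration chapter)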
+    },
+    "scenes": {
+        "MyScene" : {
+            "aruco_markers_group": {
+                ...
+            },
+            "layers": {
+                "MyLayer": {
+                    "aoi_scene": {
+                        ...
+                    }
+                },
+                ...
+            }
+        },
+        ...
+    },
+    "layers": {
+        "MyLayer": {
+            ...
+        },
+        ...
+    },
+    "image_parameters": {
+        ...
+    }
+}
+
+# Load ArUcoCamera
+aruco_camera = ArUcoCamera.ArUcoCamera.from_dict(configuration)
+
+# Do something with ArUcoCamera
+...
+```
+
+## Access ArUcoCamera and ArScene attributes
+
+Then, once the configuration is loaded, it is possible to access its attributes: [read the ArUcoCamera code reference](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) to get a complete list of what is available.
+
+The [ArUcoCamera.scenes](../../../argaze.md/#argaze.ArFeatures.ArCamera) attribute gives access to each loaded ArUco scene and, through them, to their attributes: [read the ArUcoScene code reference](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) to get a complete list of what is available.
+
+```python
+from argaze import ArFeatures
+
+# Assuming the ArUcoCamera is loaded
+...
+
+# Iterate over each ArUcoCamera scene
+for name, aruco_scene in aruco_camera.scenes.items():
+    ...
+```
+
+## Pipeline execution outputs
+
+The [ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method returns data about pipeline execution.
+
+```python
+# Assuming that timestamped images are available
+...:
+
+    try:
+
+        # Watch image with ArUco camera
+        aruco_camera.watch(image, timestamp=timestamp)
+
+    # Do something with pipeline exception
+    except Exception as e:
+
+        ...
+
+    # Do something with detected_markers
+    ... aruco_camera.aruco_detector.detected_markers()
+
+```
+
+Let's understand the meaning of the returned data.
+
+### *aruco_camera.aruco_detector.detected_markers()*
+
+A dictionary containing all detected markers, provided by the [ArUcoDetector](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector) class.
+
+## Set up ArUcoCamera image parameters
+
+Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
+
+```python
+# Assuming ArUcoCamera is loaded
+...
+
+# Edit a dict with ArUcoCamera image parameters
+image_parameters = {
+    "draw_detected_markers": {
+        ...
+    },
+    "draw_scenes": {
+        ...
+    },
+    "draw_optic_parameters_grid": {
+        ...
+    },
+    ...
+}
+
+# Pass image parameters to ArUcoCamera
+aruco_camera_image = aruco_camera.image(**image_parameters)
+
+# Do something with ArUcoCamera image
+...
+```
+
+!!! note
+    [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) and so benefits from all image parameters described in the [gaze analysis pipeline visualisation section](../../gaze_analysis_pipeline/visualisation.md).
\ No newline at end of file
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
new file mode 100644
index 0000000..23ea550
--- /dev/null
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_description.md
@@ -0,0 +1,68 @@
+Describe 3D AOI
+===============
+
+Now that the [scene pose is estimated](aruco_markers_description.md) thanks to the ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described in the same 3D referential.
+
+In the example scene, the two screens, the control panel and the window are considered as areas of interest.
+
+![3D AOI description](../../img/aoi_3d_description.png)
+
+All AOI need to be described from the same origin as the markers, in a [right-handed 3D axis](https://robotacademy.net.au/lesson/right-handed-3d-coordinate-frame/) where:
+
+* +X is pointing to the right,
+* +Y is pointing to the top,
+* +Z is pointing backward.
+
+!!! warning
+    All AOI spatial values must be given in **centimeters**.
+
+### Edit OBJ file description
+
+The OBJ file format can be exported from most 3D editors.
+
+``` obj
+o Left_Screen
+v 0.000000 -0.000000 -0.000000
+v 15.000000 -0.000000 -0.000000
+v 0.000000 18.963333 -6.355470
+v 15.000000 18.963333 -6.355470
+f 1 2 4 3
+o Right_Screen
+v 20.000000 0.000000 -0.000000
+v 35.000000 0.000000 -0.000000
+v 20.000000 18.963337 -6.355472
+v 35.000000 18.963337 -6.355472
+f 5 6 8 7
+o Control_Panel
+v 49.500000 30.000000 18.333333
+v 55.500000 30.000000 18.333333
+v 49.500000 38.000000 18.333333
+v 55.500000 38.000000 18.333333
+f 9 10 12 11
+o Window
+v -57.800000 5.500000 -33.500000
+v 46.000000 15.500000 -35.000000
+v 1.500000 53.000000 -1.000000
+v 50.200000 61.000000 6.000000
+v -35.850000 35.000000 -15.000000
+f 13 14 16 15 17
+```
+
+Here are the common OBJ file features needed to describe AOI:
+
+* Object lines (starting with the *o* key) indicate the AOI name.
+* Vertex lines (starting with the *v* key) indicate the AOI vertices.
+* Face lines (starting with the *f* key) link vertices together.
+
+### Edit JSON file description
+
+The JSON file format allows AOI vertices to be described directly.
+
+``` json
+{
+    "Left_Screen": [[0, 0, 0], [15, 0, 0], [0, 18.963333, -6.355470], [15, 18.963333, -6.355470]],
+    "Right_Screen": [[20, 0, 0], [35, 0, 0], [20, 18.963337, -6.355472], [35, 18.963337, -6.355472]],
+    "Control_Panel": [[49.5, 30, 18.333333], [55.5, 30, 18.333333], [49.5, 38, 18.333333], [55.5, 38, 18.333333]],
+    "Window": [[-57.8, 5.5, -33.5], [46, 15.5, -35], [1.5, 53, -1], [50.2, 61, 6], [-35.85, 35, -15]]
+}
+```
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md
new file mode 100644
index 0000000..cf4a07e
--- /dev/null
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_frame.md
@@ -0,0 +1,133 @@
+Define a 3D AOI as a frame
+==========================
+
+When a 3D AOI of the scene contains other coplanar 3D AOI, like a screen with GUI elements displayed on it, it is better to describe them as 2D AOI inside a 2D coordinate system related to the containing 3D AOI.
+
+![3D AOI frame](../../img/aruco_camera_aoi_frame.png)
+
+## Add ArFrame to ArUcoScene
+
+The [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) class defines a rectangular area where timestamped gaze positions are projected and inside which they need to be analyzed.
+
+Here is the previous extract where the "Left_Screen" and "Right_Screen" AOI are defined as frames in the [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) configuration:
+
+```json
+{
+    "name": "My FullHD camera",
+    "size": [1920, 1080],
+    ...
+    "scenes": {
+        "MyScene" : {
+            "aruco_markers_group": {
+                ...
+            },
+            "layers": {
+                "MyLayer": {
+                    "aoi_scene": {
+                        "Left_Screen": [[0, 0, 0], [15, 0, 0], [0, 18.963333, -6.355470], [15, 18.963333, -6.355470]],
+                        "Right_Screen": [[20, 0, 0], [35, 0, 0], [20, 18.963337, -6.355472], [35, 18.963337, -6.355472]],
+                        "Control_Panel": [[49.5, 30, 18.333333], [55.5, 30, 18.333333], [49.5, 38, 18.333333], [55.5, 38, 18.333333]],
+                        "Window": [[-57.8, 5.5, -33.5], [46, 15.5, -35], [1.5, 53, -1], [50.2, 61, 6], [-35.85, 35, -15]]
+                    }
+                }
+            },
+            "frames": {
+                "Left_Screen": {
+                    "size": [768, 1024],
+                    "layers": {
+                        "MyLayer": {
+                            "aoi_scene": {
+                                "LeftPanel": {
+                                    "Rectangle": {
+                                        "x": 0,
+                                        "y": 0,
+                                        "width": 768,
+                                        "height": 180
+                                    }
+                                },
+                                "CircularWidget": {
+                                    "Circle": {
+                                        "cx": 384,
+                                        "cy": 600,
+                                        "radius": 180
+                                    }
+                                }
+                            }
+                        }
+                    }
+                },
+                "Right_Screen": {
+                    "size": [768, 1024],
+                    "layers": {
+                        "MyLayer": {
+                            "aoi_scene": {
+                                "GeoSector": [[724, 421], [537, 658], [577, 812], [230, 784], [70, 700], [44, 533], [190, 254], [537, 212]]
+                            }
+                        }
+                    }
+                }
+            }
+        }
+    }
+    ...
+}
+```
+Now, let's understand the meaning of each JSON entry.
+
+### *frames*
+
+An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) instance can contain multiple [ArFrames](../../argaze.md/#argaze.ArFeatures.ArFrame) stored by name.
+
+### Left_Screen & Right_Screen
+
+The names of the 3D AOI **and** their related [ArFrames](../../argaze.md/#argaze.ArFeatures.ArFrame). Basically useful for visualisation purposes.
+
+!!! warning "AOI / Frame names policy"
+
+    An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer 3D AOI is defined as an [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame, **provided they have the same name**.
+
+!!! warning "Layer name policy"
+
+    An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame layer is projected into the [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer, **provided they have the same name**.
+
+!!! note
+
+    [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame layers are projected into their dedicated [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layers when the JSON configuration file is loaded.
+
+## Pipeline execution
+
+### Map ArUcoCamera image into ArUcoScene frames
+
+After a camera image is passed to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method, it is possible to apply a perspective transformation in order to project the watched image into each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) [frame background](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
+
+```python
+# Assuming that Full HD (1920x1080) timestamped images are available
+...:
+
+    # Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
+    aruco_camera.watch(image, timestamp=timestamp)
+
+    # Map watched image into ArUcoScene frames background
+    aruco_camera.map(timestamp=timestamp)
+```
+
+### Analyse timestamped gaze positions into ArUcoScene frames
+
+[ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frames benefit from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
+
+!!! note
+
+    Timestamped gaze positions passed to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method are projected into [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frames if applicable.
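+
+A minimal sketch of this step (assuming timestamped gaze positions are available, and reusing the *try* block pattern shown in the [configuration and execution chapter](configuration_and_execution.md)):
+
+```python
+# Assuming that timestamped gaze positions are available
+...
+
+    try:
+
+        # Look ArUcoCamera frame at a timestamped gaze position:
+        # the position is also projected into ArUcoScene frames if applicable
+        aruco_camera.look(timestamped_gaze_position)
+
+    # Do something with pipeline exception
+    except Exception as e:
+
+        ...
+```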
+
+### Display each ArUcoScene frame
+
+Each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame image can be displayed like any [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
+
+```python
+    ...
+
+    # Display all ArUcoScene frames
+    for frame in aruco_camera.scene_frames:
+
+        ... frame.image()
+```
\ No newline at end of file
diff --git a/docs/user_guide/aruco_marker_pipeline/aoi_3d_projection.md b/docs/user_guide/aruco_marker_pipeline/aoi_3d_projection.md
new file mode 100644
index 0000000..64f5fc8
--- /dev/null
+++ b/docs/user_guide/aruco_marker_pipeline/aoi_3d_projection.md
@@ -0,0 +1,168 @@
+Project 3D AOI into camera frame
+================================
+
+Once the [ArUcoScene pose is estimated](pose_estimation.md) and [3D AOI are described](aoi_3d_description.md), AOI can be projected into the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame.
+
+![3D AOI projection](../../img/aruco_camera_aoi_projection.png)
+
+## Add ArLayer to ArUcoScene to load 3D AOI
+
+The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class allows 3D AOI descriptions to be loaded.
+
+Here is the previous extract where one layer is added to the [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) configuration:
+
+```json
+{
+    "name": "My FullHD camera",
+    "size": [1920, 1080],
+    ...
+    "scenes": {
+        "MyScene" : {
+            "aruco_markers_group": {
+                ...
+            },
+            "layers": {
+                "MyLayer": {
+                    "aoi_scene": {
+                        "Left_Screen": [[0, 0, 0], [15, 0, 0], [0, 18.963333, -6.355470], [15, 18.963333, -6.355470]],
+                        "Right_Screen": [[20, 0, 0], [35, 0, 0], [20, 18.963337, -6.355472], [35, 18.963337, -6.355472]],
+                        "Control_Panel": [[49.5, 30, 18.333333], [55.5, 30, 18.333333], [49.5, 38, 18.333333], [55.5, 38, 18.333333]],
+                        "Window": [[-57.8, 5.5, -33.5], [46, 15.5, -35], [1.5, 53, -1], [50.2, 61, 6], [-35.85, 35, -15]]
+                    }
+                }
+            }
+        }
+    }
+    ...
+}
+```
+
+Now, let's understand the meaning of each JSON entry.
+
+### *layers*
+
+An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) instance can contain multiple [ArLayers](../../argaze.md/#argaze.ArFeatures.ArLayer) stored by name.
+
+### MyLayer
+
+The name of an [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Basically useful for visualisation purposes.
+
+### *aoi_scene*
+
+The set of 3D AOI in the layer, as defined in the [3D AOI description chapter](aoi_3d_description.md).
+
+## Add ArLayer to ArUcoCamera to project 3D AOI into
+
+Here is the previous extract where one layer is added to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration and displayed:
+
+```json
+{
+    "name": "My FullHD camera",
+    "size": [1920, 1080],
+    ...
+    "scenes": {
+        "MyScene" : {
+            "aruco_markers_group": {
+                ...
+            },
+            "layers": {
+                "MyLayer": {
+                    "aoi_scene": {
+                        ...
+                    }
+                }
+            }
+        }
+    },
+    "layers": {
+        "MyLayer": {}
+    }
+    ...
+    "image_parameters": {
+        ...
+        "draw_layers": {
+            "MyLayer": {
+                "draw_aoi_scene": {
+                    "draw_aoi": {
+                        "color": [255, 255, 255],
+                        "border_size": 1
+                    }
+                }
+            }
+        }
+    }
+}
+```
+
+Now, let's understand the meaning of each JSON entry.
+
+### *layers*
+
+An [ArUcoCamera](../../argaze.md/#argaze.ArFeatures.ArFrame) instance can contain multiple [ArLayers](../../argaze.md/#argaze.ArFeatures.ArLayer) stored by name.
+
+### MyLayer
+
+The name of an [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Basically useful for visualisation purposes.
+
+!!! warning "Layer name policy"
+
+    An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer is projected into an [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer, **provided they have the same name**.
+
+!!! note
+
+    [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layers are projected into their dedicated [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layers when calling the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method.
+
+## Add AOI analysis features to ArUcoCamera layer
+
+When a scene layer is projected into a camera layer, the 3D scene AOI are transformed into 2D camera AOI.
+
+Therefore, [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) benefits from all the services described in the [AOI analysis pipeline section](../gaze_analysis_pipeline/aoi_analysis.md).
+
+Here is the previous extract where an AOI matcher, an AOI scan path and AOI scan path analyzers are added to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer:
+
+```json
+{
+    "name": "My FullHD camera",
+    "size": [1920, 1080],
+    ...
+    "scenes": {
+        "MyScene" : {
+            "aruco_markers_group": {
+                ...
+            },
+            "layers": {
+                "MyLayer": {
+                    "aoi_scene": {
+                        ...
+                    }
+                }
+            }
+        }
+    },
+    "layers": {
+        "MyLayer": {
+            "aoi_matcher": {
+                "DeviationCircleCoverage": {
+                    "coverage_threshold": 0.5
+                }
+            },
+            "aoi_scan_path": {
+                "duration_max": 30000
+            },
+            "aoi_scan_path_analyzers": {
+                "Basic": {},
+                "TransitionMatrix": {},
+                "NGram": {
+                    "n_min": 3,
+                    "n_max": 5
+                }
+            }
+        }
+    }
+    ...
+}
+```
+
+!!! warning
+
+    Adding a scan path and scan path analyzers to an [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer doesn't make sense, as the space viewed through the camera frame doesn't necessarily reflect the space the gaze is covering.
diff --git a/docs/user_guide/aruco_marker_pipeline/aruco_markers_description.md b/docs/user_guide/aruco_marker_pipeline/aruco_markers_description.md
new file mode 100644
index 0000000..66a0581
--- /dev/null
+++ b/docs/user_guide/aruco_marker_pipeline/aruco_markers_description.md
@@ -0,0 +1,120 @@
+Set up ArUco markers
+====================
+
+First of all, ArUco markers need to be printed and placed into the scene.
+
+Here is an example scene where markers are surrounding a workspace with two screens, a control panel and a window.
+
+![Scene](../../img/scene.png)
+
+## Print ArUco markers from an ArUco dictionary
+
+ArUco markers always belong to a set of markers called an ArUco markers dictionary.
+
+![ArUco dictionaries](../../img/aruco_dictionaries.png)
+
+Many ArUco dictionaries exist, with properties concerning the format, the number of markers or the difference between markers to avoid errors in tracking.
+
+Here is the documentation [about ArUco markers dictionaries](https://docs.opencv.org/3.4/d9/d6a/group__aruco.html#gac84398a9ed9dd01306592dd616c2c975).
+
+The creation of [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) pictures from a dictionary is illustrated in the code below:
+
+```python
+from argaze.ArUcoMarkers import ArUcoMarkersDictionary
+
+# Create a dictionary of specific April tags
+aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
+
+# Export marker n°5 as a 3.5 cm picture with 300 dpi resolution
+aruco_dictionary.create_marker(5, 3.5).save('./markers/', 300)
+
+# Export all dictionary markers as 3.5 cm pictures with 300 dpi resolution
+aruco_dictionary.save('./markers/', 3.5, 300)
+```
+
+!!! note
+    There is an **A4_DICT_APRILTAG_16h5_5cm_0-7.pdf** file located in the *./src/argaze/ArUcoMarkers/utils/* folder, ready to be printed on an A4 paper sheet.
+
+Let's print some of them before going further.
+
+!!! warning
+    Print markers with a blank zone around them to help their detection.
+
+## Describe ArUco marker places
+
+Once [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) pictures are placed into a scene, it is possible to describe their 3D places in a file.
+
+![ArUco markers description](../../img/aruco_markers_description.png)
+
+Wherever the origin point is, all marker places need to be described in a [right-handed 3D axis](https://robotacademy.net.au/lesson/right-handed-3d-coordinate-frame/) where:
+
+* +X is pointing to the right,
+* +Y is pointing to the top,
+* +Z is pointing backward.
+
+!!! warning
+    All ArUco marker spatial values must be given in **centimeters**.
+
+### Edit OBJ file description
+
+The OBJ file format can be exported from most 3D editors.
+
+``` obj
+o DICT_APRILTAG_16h5#0_Marker
+v 15.000000 0.378741 0.330527
+v 20.000000 0.378741 0.330527
+v 15.000000 5.120359 -1.255996
+v 20.000000 5.120359 -1.255996
+f 1 2 4 3
+o DICT_APRILTAG_16h5#1_Marker
+v 43.500000 31.428055 18.333317
+v 48.500000 31.428055 18.333317
+v 43.500000 36.428055 18.333317
+v 48.500000 36.428055 18.333317
+f 5 6 8 7
+o DICT_APRILTAG_16h5#2_Marker
+v 38.500000 2.678055 5.498381
+v 43.500000 2.678055 5.498381
+v 38.500000 5.178055 1.168253
+v 43.500000 5.178055 1.168253
+f 9 10 12 11
+```
+
+Here are the common OBJ file features needed to describe ArUco marker places:
+
+* Object lines (starting with the *o* key) indicate the marker dictionary and identifier by following this format: **DICTIONARY**#**ID**\_Marker.
+* Vertex lines (starting with the *v* key) indicate the marker corners. The marker size will be automatically deduced from the geometry.
+* Face lines (starting with the *f* key) link vertex and normal indexes together.
+
+!!! warning
+    Markers have to belong to the same dictionary.
+
+!!! note
+    Markers can have different sizes.
+
+### Edit JSON file description
+
+The JSON file format allows marker places to be described using translation and Euler angle rotation vectors.
+
+``` json
+{
+    "dictionary": "DICT_APRILTAG_16h5",
+    "places": {
+        "0": {
+            "translation": [17.5, 2.75, -0.5],
+            "rotation": [-18.5, 0, 0],
+            "size": 5
+        },
+        "1": {
+            "translation": [46, 34, 18.333],
+            "rotation": [0, 70, 0],
+            "size": 5
+        },
+        "2": {
+            "translation": [41, 4, 3.333],
+            "rotation": [-60, 0, 0],
+            "size": 5
+        }
+    }
+}
+```
diff --git a/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
new file mode 100644
index 0000000..dd36ed3
--- /dev/null
+++ b/docs/user_guide/aruco_marker_pipeline/configuration_and_execution.md
@@ -0,0 +1,153 @@
+Load and execute pipeline
+=========================
+
+Once [ArUco markers are placed into a scene](aruco_markers_description.md), they can be detected thanks to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class.
+
+As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class also benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
+
+![ArUco camera frame](../../img/aruco_camera_frame.png)
+
+## Load JSON configuration file
+
+An [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) pipeline can be loaded from a JSON configuration file thanks to the [argaze.load](../../argaze.md/#argaze.load) package method.
+
+Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file example:
+
+```json
+{
+    "name": "My FullHD camera",
+    "size": [1920, 1080],
+    "aruco_detector": {
+        "dictionary": "DICT_APRILTAG_16h5"
+    },
+    "gaze_movement_identifier": {
+        "DispersionThresholdIdentification": {
+            "deviation_max_threshold": 25,
+            "duration_min_threshold": 150
+        }
+    },
+    "image_parameters": {
+        "background_weight": 1,
+        "draw_detected_markers": {
+            "color": [0, 255, 0],
+            "draw_axes": {
+                "thickness": 3
+            }
+        },
+        "draw_gaze_positions": {
+            "color": [0, 255, 255],
+            "size": 2
+        },
+        "draw_fixations": {
+            "deviation_circle_color": [255, 0, 255],
+            "duration_border_color": [127, 0, 127],
+            "duration_factor": 1e-2
+        },
+        "draw_saccades": {
+            "line_color": [255, 0, 255]
+        }
+    }
+}
+```
+
+Then, here is how to load the JSON file:
+
+```python
+import argaze
+
+# Load ArUcoCamera
+with argaze.load('./configuration.json') as aruco_camera:
+
+    # Do something with ArUcoCamera
+    ...
+```
+
+Now, let's understand the meaning of each JSON entry.
+
+### *name - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
+
+The name of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame. Basically useful for visualisation purposes.
+
+### *size - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
+
+The size of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame in pixels. Be aware that gaze positions have to be in the same range of values to be projected into it.
+
+### *aruco_detector*
+
+The first [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) pipeline step is to detect ArUco markers inside the input image.
+
+![ArUco markers detection](../../img/aruco_camera_markers_detection.png)
+
+The [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector) is in charge of detecting all markers from a specific dictionary.
+
+!!! warning "Mandatory"
+    The JSON *aruco_detector* entry is mandatory.
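+
+As a quick sanity check, detected markers can be read back from the detector once an image has been watched; a minimal sketch (see the [scripting chapter](advanced_topics/scripting.md) for details):
+
+```python
+# Assuming ArUcoCamera is loaded and an image has been watched
+...
+
+# Print how many ArUco markers have been detected
+print(len(aruco_camera.aruco_detector.detected_markers()))
+```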
+
+### *gaze_movement_identifier - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
+
+The first [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step, dedicated to identifying fixations or saccades from consecutive timestamped gaze positions.
+
+![Gaze movement identification](../../img/aruco_camera_gaze_movement_identification.png)
+
+### *image_parameters - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
+
+The usual [ArFrame visualisation parameters](../gaze_analysis_pipeline/visualisation.md) plus one additional *draw_detected_markers* field.
+
+## Pipeline execution
+
+### Detect ArUco markers, estimate scene pose and project 3D AOI
+
+Pass each camera image to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method to execute the whole pipeline dedicated to ArUco marker detection, scene pose estimation and 3D AOI projection.
+
+!!! warning "Mandatory"
+
+    The [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method must be called from a *try* block to catch pipeline exceptions.
+
+```python
+# Assuming that Full HD (1920x1080) timestamped images are available
+...:
+
+    try:
+
+        # Detect ArUco markers, estimate scene pose then, project 3D AOI into camera frame
+        aruco_camera.watch(image, timestamp=timestamp)
+
+    # Do something with pipeline exception
+    except Exception as e:
+
+        ...
+
+    # Display ArUcoCamera frame image to display detected ArUco markers, scene pose, 2D AOI projection and ArFrame visualisation
+    ... aruco_camera.image()
+```
+
+### Analyse timestamped gaze positions into camera frame
+
+As mentioned above, [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and so benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
+
+In particular, timestamped gaze positions can be passed one by one to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole pipeline dedicated to gaze analysis.
+
+!!! warning "Mandatory"
+
+    The [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
+
+```python
+# Assuming that timestamped gaze positions are available
+...
+
+    try:
+
+        # Look ArUcoCamera frame at a timestamped gaze position
+        aruco_camera.look(timestamped_gaze_position)
+
+    # Do something with pipeline exception
+    except Exception as e:
+
+        ...
+```
+
+!!! note ""
+
+    At this point, the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method only detects ArUco markers and the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArCamera.look) method only processes gaze movement identification without any AOI support, as no scene description is provided in the JSON configuration file.
+
+    Read the next chapters to learn [how to estimate scene pose](pose_estimation.md), [how to describe a 3D scene's AOI](aoi_3d_description.md) and [how to project them into the camera frame](aoi_3d_projection.md).
\ No newline at end of file
diff --git a/docs/user_guide/aruco_marker_pipeline/introduction.md b/docs/user_guide/aruco_marker_pipeline/introduction.md
new file mode 100644
index 0000000..7e662f7
--- /dev/null
+++ b/docs/user_guide/aruco_marker_pipeline/introduction.md
@@ -0,0 +1,29 @@
+Overview
+========
+
+This section explains how to build augmented reality pipelines based on the [ArUco marker technology](https://www.sciencedirect.com/science/article/abs/pii/S0031320314000235) for various use cases.
+
+The OpenCV library provides a module to detect fiducial markers in a picture and estimate their poses.
+
+![OpenCV ArUco markers](../../img/opencv_aruco.png)
+
+The ArGaze [ArUcoMarkers submodule](../../argaze.md/#argaze.ArUcoMarkers) eases marker creation, marker detection and 3D scene pose estimation through a set of high-level classes.
+
+First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
+
+![ArUco marker pipeline](../../img/aruco_marker_pipeline.png)
+
+To build your own ArUco marker pipeline, you need to know:
+
+* [How to set up ArUco markers into a scene](aruco_markers_description.md),
+* [How to load and execute ArUco marker pipeline](configuration_and_execution.md),
+* [How to estimate scene pose](pose_estimation.md),
+* [How to describe scene's AOI](aoi_3d_description.md),
+* [How to project 3D AOI into camera frame](aoi_3d_projection.md),
+* [How to define a 3D AOI as a frame](aoi_3d_frame.md).
+
+More advanced features are also explained, like:
+
+* [How to script ArUco marker pipeline](advanced_topics/scripting.md),
+* [How to calibrate optic parameters](advanced_topics/optic_parameters_calibration.md),
+* [How to improve ArUco marker detection](advanced_topics/aruco_detector_configuration.md).
diff --git a/docs/user_guide/aruco_marker_pipeline/pose_estimation.md b/docs/user_guide/aruco_marker_pipeline/pose_estimation.md
new file mode 100644
index 0000000..7f6573c
--- /dev/null
+++ b/docs/user_guide/aruco_marker_pipeline/pose_estimation.md
@@ -0,0 +1,84 @@
+Estimate scene pose
+===================
+
+Once [ArUco markers are placed into a scene](aruco_markers_description.md) and [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) is [configured](configuration_and_execution.md), the scene pose can be estimated.
+
+![Scene pose estimation](../../img/aruco_camera_pose_estimation.png)
+
+## Add ArUcoScene to ArUcoCamera JSON configuration file
+
+The [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) class defines a space with [ArUco markers inside](aruco_markers_description.md), helping to estimate the scene pose when they are watched by [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera).
+
+Here is an extract from the JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file where one scene is added and displayed:
+
+```json
+{
+    "name": "My FullHD camera",
+    "size": [1920, 1080],
+    ...
+    "scenes": {
+        "MyScene" : {
+            "aruco_markers_group": {
+                "dictionary": "DICT_APRILTAG_16h5",
+                "places": {
+                    "0": {
+                        "translation": [17.5, 2.75, -0.5],
+                        "rotation": [-18.5, 0, 0],
+                        "size": 5
+                    },
+                    "1": {
+                        "translation": [46, 34, 18.333],
+                        "rotation": [0, 70, 0],
+                        "size": 5
+                    },
+                    "2": {
+                        "translation": [41, 4, 3.333],
+                        "rotation": [-60, 0, 0],
+                        "size": 5
+                    }
+                }
+            }
+        }
+    },
+    ...
+    "image_parameters": {
+        ...
+ "draw_scenes": { + "MyScene": { + "draw_aruco_markers_group": { + "draw_axes": { + "thickness": 3, + "length": 10 + }, + "draw_places": { + "color": [0, 0, 0], + "border_size": 1 + } + } + } + } + } +} +``` + +Now, let's understand the meaning of each JSON entry. + +### *scenes* + +An [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) instance can contains multiples [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) stored by name. + +### MyScene + +The name of an [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene). Basically useful for visualisation purpose. + +### *aruco_markers_group* + +The 3D places of ArUco markers into the scene as defined at [ArUco markers description chapter](aruco_markers_description.md). Thanks to this description, it is possible to estimate the pose of [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) in [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame. + +!!! note + + [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) pose estimation is done when calling the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method. + +### *draw_scenes* + +The drawing parameters of each loaded [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) in [ArUcoCamera.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image). diff --git a/docs/user_guide/aruco_markers_pipeline/advanced_topics/aruco_detector_configuration.md b/docs/user_guide/aruco_markers_pipeline/advanced_topics/aruco_detector_configuration.md deleted file mode 100644 index 410e2d7..0000000 --- a/docs/user_guide/aruco_markers_pipeline/advanced_topics/aruco_detector_configuration.md +++ /dev/null @@ -1,37 +0,0 @@ -Improve ArUco markers detection -=============================== - -As explain in [OpenCV ArUco documentation](https://docs.opencv.org/4.x/d1/dcd/structcv_1_1aruco_1_1DetectorParameters.html), ArUco markers detection is highly configurable. - -## Load ArUcoDetector parameters - -[ArUcoCamera.detector.parameters](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.Parameters) can be loaded thanks to a dedicated JSON entry. - -Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file with ArUco detector parameters: - -```json -{ - "name": "My FullHD camera", - "size": [1920, 1080], - "aruco_detector": { - "dictionary": "DICT_APRILTAG_16h5", - "parameters": { - "adaptiveThreshConstant": 10, - "useAruco3Detection": 1 - } - }, - ... -``` - -## Print ArUcoDetector parameters - -```python -# Assuming ArUcoCamera is loaded -... - -# Print all ArUcoDetector parameters -print(aruco_camera.aruco_detector.parameters) - -# Print only modified ArUcoDetector parameters -print(f'{aruco_camera.aruco_detector.parameters:modified}') -``` diff --git a/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md b/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md deleted file mode 100644 index c5cecac..0000000 --- a/docs/user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md +++ /dev/null @@ -1,190 +0,0 @@ -Calibrate optic parameters -========================== - -A camera device have to be calibrated to compensate its optical distorsion. 
- -![Optic parameters calibration](../../../img/optic_calibration.png) - -## Print calibration board - -The first step to calibrate a camera is to create an [ArUcoBoard](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) like in the code below: - -``` python -from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoBoard - -# Create ArUco dictionary -aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5') - -# Create an ArUco board of 7 columns and 5 rows with 5 cm squares with 3cm ArUco markers inside -aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary) - -# Export ArUco board with 300 dpi resolution -aruco_board.save('./calibration_board.png', 300) -``` - -!!! note - There is **A3_DICT_APRILTAG_16h5_3cm_35cmx25cm.pdf** file located in *./src/argaze/ArUcoMarkers/utils/* folder ready to be printed on A3 paper sheet. - -Let's print the calibration board before to go further. - -## Capture board pictures - -Then, the calibration process needs to make many different captures of an [ArUcoBoard](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) through the camera and then, pass them to an [ArUcoDetector](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) instance to detect board corners and store them as calibration data into an [ArUcoOpticCalibrator](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator) for final calibration process. - -![Calibration step](../../../img/optic_calibration_step.png) - -The sample of code below illustrates how to: - -* load all required ArGaze objects, -* detect board corners into a Full HD camera video stream, -* store detected corners as calibration data then, -* once enough captures are made, process them to find optic parameters and, -* finally, save optic parameters into a JSON file. - -``` python -from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoOpticCalibrator, ArUcoBoard, ArUcoDetector - -# Create ArUco dictionary -aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5') - -# Create ArUco optic calibrator -aruco_optic_calibrator = ArUcoOpticCalibrator.ArUcoOpticCalibrator() - -# Create ArUco board of 7 columns and 5 rows with 5 cm squares with 3cm aruco markers inside -# Note: This board is the one expected during further board tracking -expected_aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary) - -# Create ArUco detector -aruco_detector = ArUcoDetector.ArUcoDetector(dictionary=aruco_dictionary) - -# Assuming that live Full HD (1920x1080) video stream is enabled -... - -# Assuming there is a way to escape the while loop -... 
- - while video_stream.is_alive(): - - # Capture image from video stream - image = video_stream.read() - - # Detect all board corners in image - aruco_detector.detect_board(image, expected_aruco_board, expected_aruco_board.markers_number) - - # If all board corners are detected - if aruco_detector.board_corners_number() == expected_aruco_board.corners_number: - - # Draw board corners to show that board tracking succeeded - aruco_detector.draw_board(image) - - # Append tracked board data for further calibration processing - aruco_optic_calibrator.store_calibration_data(aruco_detector.board_corners(), aruco_detector.board_corners_identifier()) - -# Start optic calibration processing for Full HD image resolution -print('Calibrating optic...') -optic_parameters = aruco_optic_calibrator.calibrate(expected_aruco_board, dimensions=(1920, 1080)) - -if optic_parameters: - - # Export optic parameters - optic_parameters.to_json('./optic_parameters.json') - - print('Calibration succeeded: optic_parameters.json file exported.') - -else: - - print('Calibration failed.') -``` - -Below, an optic_parameters JSON file example: - -```json -{ - "rms": 0.6688921504088245, - "dimensions": [ - 1920, - 1080 - ], - "K": [ - [ - 1135.6524381415752, - 0.0, - 956.0685325355497 - ], - [ - 0.0, - 1135.9272506869524, - 560.059099810324 - ], - [ - 0.0, - 0.0, - 1.0 - ] - ], - "D": [ - 0.01655492265003404, - 0.1985524264972037, - 0.002129965902489484, - -0.0019528582922179365, - -0.5792910353639452 - ] -} -``` - -## Load and display optic parameters - -[ArUcoCamera.detector.optic_parameters](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator.OpticParameters) can be enabled thanks to a dedicated JSON entry. - -Here is an extract from the JSON [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file where optic parameters are loaded and displayed: - -```json -{ - "name": "My FullHD Camera", - "size": [1920, 1080], - "aruco_detector": { - "dictionary": "DICT_APRILTAG_16h5", - "optic_parameters": { - "rms": 0.6688921504088245, - "dimensions": [ - 1920, - 1080 - ], - "K": [ - [ - 1135.6524381415752, - 0.0, - 956.0685325355497 - ], - [ - 0.0, - 1135.9272506869524, - 560.059099810324 - ], - [ - 0.0, - 0.0, - 1.0 - ] - ], - "D": [ - 0.01655492265003404, - 0.1985524264972037, - 0.002129965902489484, - -0.0019528582922179365, - -0.5792910353639452 - ] - } - }, - ... - "image_parameters": { - ... - "draw_optic_parameters_grid": { - "width": 192, - "height": 108, - "z": 100, - "point_size": 1, - "point_color": [0, 0, 255] - } - } -``` \ No newline at end of file diff --git a/docs/user_guide/aruco_markers_pipeline/advanced_topics/scripting.md b/docs/user_guide/aruco_markers_pipeline/advanced_topics/scripting.md deleted file mode 100644 index 04d6a2f..0000000 --- a/docs/user_guide/aruco_markers_pipeline/advanced_topics/scripting.md +++ /dev/null @@ -1,132 +0,0 @@ -Script the pipeline -=================== - -All aruco markers pipeline objects are accessible from Python script. -This could be particularly useful for realtime AR interaction applications. - -## Load ArUcoCamera configuration from dictionary - -First of all, [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration can be loaded from a python dictionary. - -```python -from argaze.ArUcoMarkers import ArUcoCamera - -# Edit a dict with ArUcoCamera configuration -configuration = { - "name": "My FullHD camera", - "size": (1920, 1080), - ... - "aruco_detector": { - ... 
- }, - "scenes": { - "MyScene" : { - "aruco_markers_group": { - ... - }, - "layers": { - "MyLayer": { - "aoi_scene": { - ... - } - }, - ... - } - }, - ... - } - "layers": { - "MyLayer": { - ... - }, - ... - }, - "image_parameters": { - ... - } -} - -# Load ArUcoCamera -aruco_camera = ArUcoCamera.ArUcoCamera.from_dict(configuration) - -# Do something with ArUcoCamera -... -``` - -## Access to ArUcoCamera and ArScenes attributes - -Then, once the configuration is loaded, it is possible to access to its attributes: [read ArUcoCamera code reference](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) to get a complete list of what is available. - -Thus, the [ArUcoCamera.scenes](../../../argaze.md/#argaze.ArFeatures.ArCamera) attribute allows to access each loaded aruco scene and so, access to their attributes: [read ArUcoScene code reference](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) to get a complete list of what is available. - -```python -from argaze import ArFeatures - -# Assuming the ArUcoCamera is loaded -... - -# Iterate over each ArUcoCamera scene -for name, aruco_scene in aruco_camera.scenes.items(): - ... -``` - -## Pipeline execution outputs - -[ArUcoCamera.watch](../../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method returns data about pipeline execution. - -```python -# Assuming that timestamped images are available -...: - - try: - - # Watch image with ArUco camera - aruco_camera.watch(image, timestamp=timestamp) - - # Do something with pipeline exception - except Exception as e: - - ... - - # Do something with detected_markers - ... aruco_camera.aruco_detector.detected_markers() - -``` - -Let's understand the meaning of each returned data. - -### *aruco_camera.aruco_detector.detected_markers()* - -A dictionary containing all detected markers provided by [ArUcoDetector](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector) class. - -## Setup ArUcoCamera image parameters - -Specific [ArUcoCamera.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a python dictionary. - -```python -# Assuming ArUcoCamera is loaded -... - -# Edit a dict with ArUcoCamera image parameters -image_parameters = { - "draw_detected_markers": { - ... - }, - "draw_scenes": { - ... - }, - "draw_optic_parameters_grid": { - ... - }, - ... -} - -# Pass image parameters to ArUcoCamera -aruco_camera_image = aruco_camera.image(**image_parameters) - -# Do something with ArUcoCamera image -... -``` - -!!! note - [ArUcoCamera](../../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) and so, benefits from all image parameters described in [gaze analysis pipeline visualisation section](../../gaze_analysis_pipeline/visualisation.md). \ No newline at end of file diff --git a/docs/user_guide/aruco_markers_pipeline/aoi_3d_description.md b/docs/user_guide/aruco_markers_pipeline/aoi_3d_description.md deleted file mode 100644 index 23ea550..0000000 --- a/docs/user_guide/aruco_markers_pipeline/aoi_3d_description.md +++ /dev/null @@ -1,68 +0,0 @@ -Describe 3D AOI -=============== - -Now [scene pose is estimated](aruco_markers_description.md) thanks to ArUco markers description, [areas of interest (AOI)](../../argaze.md/#argaze.AreaOfInterest.AOIFeatures.AreaOfInterest) need to be described into the same 3D referential. - -In the example scene, the two screens, the control panel and the window are considered as areas of interest. 
-
-![3D AOI description](../../img/aoi_3d_description.png)
-
-All AOI need to be described from the same origin as the markers, in a [right-handed 3D axis](https://robotacademy.net.au/lesson/right-handed-3d-coordinate-frame/) where:
-
-* +X is pointing to the right,
-* +Y is pointing to the top,
-* +Z is pointing backward.
-
-!!! warning
-    All AOI spatial values must be given in **centimeters**.
-
-### Edit OBJ file description
-
-The OBJ file format can be exported from most 3D editors.
-
-``` obj
-o Left_Screen
-v 0.000000 -0.000000 -0.000000
-v 15.000000 -0.000000 -0.000000
-v 0.000000 18.963333 -6.355470
-v 15.000000 18.963333 -6.355470
-f 1 2 4 3
-o Right_Screen
-v 20.000000 0.000000 -0.000000
-v 35.000000 0.000000 -0.000000
-v 20.000000 18.963337 -6.355472
-v 35.000000 18.963337 -6.355472
-f 5 6 8 7
-o Control_Panel
-v 49.500000 30.000000 18.333333
-v 55.500000 30.000000 18.333333
-v 49.500000 38.000000 18.333333
-v 55.500000 38.000000 18.333333
-f 9 10 12 11
-o Window
-v -57.800000 5.500000 -33.500000
-v 46.000000 15.500000 -35.000000
-v 1.500000 53.000000 -1.000000
-v 50.200000 61.000000 6.000000
-v -35.850000 35.000000 -15.000000
-f 13 14 16 15 17
-```
-
-Here are the common OBJ file features needed to describe AOI:
-
-* Object lines (starting with the *o* key) indicate the AOI name.
-* Vertex lines (starting with the *v* key) indicate the AOI vertices.
-* Face lines (starting with the *f* key) link vertices together.
-
-### Edit JSON file description
-
-The JSON file format allows describing AOI vertices.
-
-``` json
-{
-    "Left_Screen": [[0, 0, 0], [15, 0, 0], [0, 18.963333, -6.355470], [15, 18.963333, -6.355470]],
-    "Right_Screen": [[20, 0, 0], [35, 0, 0], [20, 18.963337, -6.355472], [35, 18.963337, -6.355472]],
-    "Control_Panel": [[49.5, 30, 18.333333], [55.5, 30, 18.333333], [49.5, 38, 18.333333], [55.5, 38, 18.333333]],
-    "Window": [[-57.8, 5.5, -33.5], [46, 15.5, -35], [1.5, 53, -1], [50.2, 61, 6], [-35.85, 35, -15]]
-}
-```
diff --git a/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md b/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md
deleted file mode 100644
index cf4a07e..0000000
--- a/docs/user_guide/aruco_markers_pipeline/aoi_3d_frame.md
+++ /dev/null
@@ -1,133 +0,0 @@
-Define a 3D AOI as a frame
-==========================
-
-When a 3D AOI of the scene contains other coplanar 3D AOI, like a screen with GUI elements displayed on it, it is better to describe them as 2D AOI inside a 2D coordinate system related to the containing 3D AOI.
-
-![3D AOI frame](../../img/aruco_camera_aoi_frame.png)
-
-## Add ArFrame to ArUcoScene
-
-The [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) class defines a rectangular area into which timestamped gaze positions are projected and inside which they need to be analyzed.
-
-Here is the previous extract where the "Left_Screen" and "Right_Screen" AOI are defined as frames into the [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) configuration:
-
-```json
-{
-    "name": "My FullHD camera",
-    "size": [1920, 1080],
-    ...
-    "scenes": {
-        "MyScene" : {
-            "aruco_markers_group": {
-                ...
-            },
-            "layers": {
-                "MyLayer": {
-                    "aoi_scene": {
-                        "Left_Screen": [[0, 0, 0], [15, 0, 0], [0, 18.963333, -6.355470], [15, 18.963333, -6.355470]],
-                        "Right_Screen": [[20, 0, 0], [35, 0, 0], [20, 18.963337, -6.355472], [35, 18.963337, -6.355472]],
-                        "Control_Panel": [[49.5, 30, 18.333333], [55.5, 30, 18.333333], [49.5, 38, 18.333333], [55.5, 38, 18.333333]],
-                        "Window": [[-57.8, 5.5, -33.5], [46, 15.5, -35], [1.5, 53, -1], [50.2, 61, 6], [-35.85, 35, -15]]
-                    }
-                }
-            },
-            "frames": {
-                "Left_Screen": {
-                    "size": [768, 1024],
-                    "layers": {
-                        "MyLayer": {
-                            "aoi_scene": {
-                                "LeftPanel": {
-                                    "Rectangle": {
-                                        "x": 0,
-                                        "y": 0,
-                                        "width": 768,
-                                        "height": 180
-                                    }
-                                },
-                                "CircularWidget": {
-                                    "Circle": {
-                                        "cx": 384,
-                                        "cy": 600,
-                                        "radius": 180
-                                    }
-                                }
-                            }
-                        }
-                    }
-                },
-                "Right_Screen": {
-                    "size": [768, 1024],
-                    "layers": {
-                        "MyLayer": {
-                            "aoi_scene": {
-                                "GeoSector": [[724, 421], [537, 658], [577, 812], [230, 784], [70, 700], [44, 533], [190, 254], [537, 212]]
-                            }
-                        }
-                    }
-                }
-            }
-        }
-    }
-    ...
-}
-```
-
-Now, let's understand the meaning of each JSON entry.
-
-### *frames*
-
-An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) instance can contain multiple [ArFrames](../../argaze.md/#argaze.ArFeatures.ArFrame) stored by name.
-
-### Left_Screen & Right_Screen
-
-The names of the 3D AOI **and** of their related [ArFrames](../../argaze.md/#argaze.ArFeatures.ArFrame). Mostly useful for visualisation purposes.
-
-!!! warning "AOI / Frame names policy"
-
-    An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer 3D AOI is defined as an [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame, **provided they have the same name**.
-
-!!! warning "Layer name policy"
-
-    An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame layer is projected into an [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer, **provided they have the same name**.
-
-!!! note
-
-    [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame layers are projected into their dedicated [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layers when the JSON configuration file is loaded.
-
-## Pipeline execution
-
-### Map ArUcoCamera image into ArUcoScene frames
-
-After the camera image is passed to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method, it is possible to apply a perspective transformation in order to project the watched image into each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) [frame background](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
-
-```python
-# Assuming that Full HD (1920x1080) timestamped images are available
-...:
-
-    # Detect ArUco markers, estimate scene pose, then project 3D AOI into camera frame
-    aruco_camera.watch(image, timestamp=timestamp)
-
-    # Map watched image into ArUcoScene frames background
-    aruco_camera.map(timestamp=timestamp)
-```
-
-### Analyse timestamped gaze positions into ArUcoScene frames
-
-[ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frames benefit from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
-
-!!! note
-
-    Timestamped gaze positions passed to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method are projected into [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frames if applicable.
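-
-Below is a minimal sketch of this step, reusing the pattern from the [configuration and execution chapter](configuration_and_execution.md). It assumes the *aruco_camera* object loaded above and an iterable of timestamped gaze positions named *ts_gaze_positions* (as in the gaze analysis pipeline chapters):
-
-```python
-# Assuming that timestamped gaze positions are available
-for timestamped_gaze_position in ts_gaze_positions:
-
-    try:
-
-        # Look ArUcoCamera frame at a timestamped gaze position:
-        # it is projected into ArUcoScene frames if applicable
-        aruco_camera.look(timestamped_gaze_position)
-
-    # Do something with pipeline exception
-    except Exception as e:
-
-        ...
-```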
-
-### Display each ArUcoScene frame
-
-Each [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) frame image can be displayed like any [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) image.
-
-```python
-    ...
-
-    # Display all ArUcoScene frames
-    for frame in aruco_camera.scene_frames:
-
-        ... frame.image()
-```
\ No newline at end of file
diff --git a/docs/user_guide/aruco_markers_pipeline/aoi_3d_projection.md b/docs/user_guide/aruco_markers_pipeline/aoi_3d_projection.md
deleted file mode 100644
index 64f5fc8..0000000
--- a/docs/user_guide/aruco_markers_pipeline/aoi_3d_projection.md
+++ /dev/null
@@ -1,168 +0,0 @@
-Project 3D AOI into camera frame
-================================
-
-Once [ArUcoScene pose is estimated](pose_estimation.md) and [3D AOI are described](aoi_3d_description.md), AOI can be projected into the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame.
-
-![3D AOI projection](../../img/aruco_camera_aoi_projection.png)
-
-## Add ArLayer to ArUcoScene to load 3D AOI
-
-The [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) class allows loading a 3D AOI description.
-
-Here is the previous extract where one layer is added to the [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) configuration:
-
-```json
-{
-    "name": "My FullHD camera",
-    "size": [1920, 1080],
-    ...
-    "scenes": {
-        "MyScene" : {
-            "aruco_markers_group": {
-                ...
-            },
-            "layers": {
-                "MyLayer": {
-                    "aoi_scene": {
-                        "Left_Screen": [[0, 0, 0], [15, 0, 0], [0, 18.963333, -6.355470], [15, 18.963333, -6.355470]],
-                        "Right_Screen": [[20, 0, 0], [35, 0, 0], [20, 18.963337, -6.355472], [35, 18.963337, -6.355472]],
-                        "Control_Panel": [[49.5, 30, 18.333333], [55.5, 30, 18.333333], [49.5, 38, 18.333333], [55.5, 38, 18.333333]],
-                        "Window": [[-57.8, 5.5, -33.5], [46, 15.5, -35], [1.5, 53, -1], [50.2, 61, 6], [-35.85, 35, -15]]
-                    }
-                }
-            }
-        }
-    }
-    ...
-}
-```
-
-Now, let's understand the meaning of each JSON entry.
-
-### *layers*
-
-An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) instance can contain multiple [ArLayers](../../argaze.md/#argaze.ArFeatures.ArLayer) stored by name.
-
-### MyLayer
-
-The name of an [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Mostly useful for visualisation purposes.
-
-### *aoi_scene*
-
-The set of 3D AOI in the layer, as defined in the [3D AOI description chapter](aoi_3d_description.md).
-
-## Add ArLayer to ArUcoCamera to project 3D AOI
-
-Here is the previous extract where one layer is added to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration and displayed:
-
-```json
-{
-    "name": "My FullHD camera",
-    "size": [1920, 1080],
-    ...
-    "scenes": {
-        "MyScene" : {
-            "aruco_markers_group": {
-                ...
-            },
-            "layers": {
-                "MyLayer": {
-                    "aoi_scene": {
-                        ...
-                    }
-                }
-            }
-        }
-    },
-    "layers": {
-        "MyLayer": {}
-    },
-    ...
-    "image_parameters": {
-        ...
-        "draw_layers": {
-            "MyLayer": {
-                "draw_aoi_scene": {
-                    "draw_aoi": {
-                        "color": [255, 255, 255],
-                        "border_size": 1
-                    }
-                }
-            }
-        }
-    }
-}
-```
-
-Now, let's understand the meaning of each JSON entry.
-
-### *layers*
-
-An [ArUcoCamera](../../argaze.md/#argaze.ArFeatures.ArFrame) instance can contain multiple [ArLayers](../../argaze.md/#argaze.ArFeatures.ArLayer) stored by name.
-
-### MyLayer
-
-The name of an [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer). Mostly useful for visualisation purposes.
-
-!!! warning "Layer name policy"
warning "Layer name policy" - - An [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layer is projected into an [ ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer, **provided they have the same name**. - -!!! note - - [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) layers are projected into their dedicated [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layers when calling the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method. - -## Add AOI analysis features to ArUcoCamera layer - -When a scene layer is projected into a camera layer, it means that the 3D scene's AOI are transformed into 2D camera's AOI. - -Therefore, it means that [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) benefits from all the services described in [AOI analysis pipeline section](../gaze_analysis_pipeline/aoi_analysis.md). - -Here is the previous extract where AOI matcher, AOI scan path and AOI scan path analyzers are added to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer: - -```json -{ - "name": "My FullHD camera", - "size": [1920, 1080], - ... - "scenes": { - "MyScene" : { - "aruco_markers_group": { - ... - }, - "layers": { - "MyLayer": { - "aoi_scene": { - ... - } - } - } - } - }, - "layers": { - "MyLayer": { - "aoi_matcher": { - "DeviationCircleCoverage": { - "coverage_threshold": 0.5 - } - }, - "aoi_scan_path": { - "duration_max": 30000 - }, - "aoi_scan_path_analyzers": { - "Basic": {}, - "TransitionMatrix": {}, - "NGram": { - "n_min": 3, - "n_max": 5 - } - } - } - } - ... -} -``` - -!!! warning - - Adding scan path and scan path analyzers to an [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) layer doesn't make sense as the space viewed thru camera frame doesn't necessary reflect the space the gaze is covering. diff --git a/docs/user_guide/aruco_markers_pipeline/aruco_markers_description.md b/docs/user_guide/aruco_markers_pipeline/aruco_markers_description.md deleted file mode 100644 index 055d1de..0000000 --- a/docs/user_guide/aruco_markers_pipeline/aruco_markers_description.md +++ /dev/null @@ -1,120 +0,0 @@ -Set up ArUco markers -==================== - -First of all, ArUco markers needs to be printed and placed into the scene. - -Here is an example scene where markers are surrounding a workspace with two screens, a control panel and a window. - -![Scene](../../img/scene.png) - -## Print ArUco markers from an ArUco dictionary - -ArUco markers always belongs to a set of markers called ArUco markers dictionary. - -![ArUco dictionaries](../../img/aruco_dictionaries.png) - -Many ArUco dictionaries exist with properties concerning the format, the number of markers or the difference between each markers to avoid error in tracking. - -Here is the documention [about ArUco markers dictionaries](https://docs.opencv.org/3.4/d9/d6a/group__aruco.html#gac84398a9ed9dd01306592dd616c2c975). 
-
-The creation of [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) pictures from a dictionary is illustrated in the code below:
-
-``` python
-from argaze.ArUcoMarkers import ArUcoMarkersDictionary
-
-# Create a dictionary of specific April tags
-aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
-
-# Export marker n°5 as a 3.5 cm picture with 300 dpi resolution
-aruco_dictionary.create_marker(5, 3.5).save('./markers/', 300)
-
-# Export all dictionary markers as 3.5 cm pictures with 300 dpi resolution
-aruco_dictionary.save('./markers/', 3.5, 300)
-```
-
-!!! note
-    There is an **A4_DICT_APRILTAG_16h5_5cm_0-7.pdf** file located in the *./src/argaze/ArUcoMarkers/utils/* folder, ready to be printed on an A4 paper sheet.
-
-Let's print some of them before going further.
-
-!!! warning
-    Print markers with a blank zone around them to help in their detection.
-
-## Describe ArUco markers places
-
-Once [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) pictures are placed into a scene, it is possible to describe their 3D places in a file.
-
-![ArUco markers description](../../img/aruco_markers_description.png)
-
-Wherever the origin point is, all markers places need to be described in a [right-handed 3D axis](https://robotacademy.net.au/lesson/right-handed-3d-coordinate-frame/) where:
-
-* +X is pointing to the right,
-* +Y is pointing to the top,
-* +Z is pointing backward.
-
-!!! warning
-    All ArUco markers spatial values must be given in **centimeters**.
-
-### Edit OBJ file description
-
-The OBJ file format can be exported from most 3D editors.
-
-``` obj
-o DICT_APRILTAG_16h5#0_Marker
-v 15.000000 0.378741 0.330527
-v 20.000000 0.378741 0.330527
-v 15.000000 5.120359 -1.255996
-v 20.000000 5.120359 -1.255996
-f 1 2 4 3
-o DICT_APRILTAG_16h5#1_Marker
-v 43.500000 31.428055 18.333317
-v 48.500000 31.428055 18.333317
-v 43.500000 36.428055 18.333317
-v 48.500000 36.428055 18.333317
-f 5 6 8 7
-o DICT_APRILTAG_16h5#2_Marker
-v 38.500000 2.678055 5.498381
-v 43.500000 2.678055 5.498381
-v 38.500000 5.178055 1.168253
-v 43.500000 5.178055 1.168253
-f 9 10 12 11
-```
-
-Here are the common OBJ file features needed to describe ArUco markers places:
-
-* Object lines (starting with the *o* key) indicate the marker dictionary and identifier, following this format: **DICTIONARY**#**ID**\_Marker.
-* Vertex lines (starting with the *v* key) indicate the marker corners. The marker size will be automatically deduced from the geometry.
-* Face lines (starting with the *f* key) link vertex and normal indexes together.
-
-!!! warning
-    Markers have to belong to the same dictionary.
-
-!!! note
-    Markers can have different sizes.
-
-### Edit JSON file description
-
-The JSON file format allows describing markers places using translation and Euler angle rotation vectors.
-
-``` json
-{
-    "dictionary": "DICT_APRILTAG_16h5",
-    "places": {
-        "0": {
-            "translation": [17.5, 2.75, -0.5],
-            "rotation": [-18.5, 0, 0],
-            "size": 5
-        },
-        "1": {
-            "translation": [46, 34, 18.333],
-            "rotation": [0, 70, 0],
-            "size": 5
-        },
-        "2": {
-            "translation": [41, 4, 3.333],
-            "rotation": [-60, 0, 0],
-            "size": 5
-        }
-    }
-}
-```
diff --git a/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md b/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md
deleted file mode 100644
index dd36ed3..0000000
--- a/docs/user_guide/aruco_markers_pipeline/configuration_and_execution.md
+++ /dev/null
@@ -1,153 +0,0 @@
-Load and execute pipeline
-=========================
-
-Once [ArUco markers are placed into a scene](aruco_markers_description.md), they can be detected thanks to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class.
-
-As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class also benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
-
-![ArUco camera frame](../../img/aruco_camera_frame.png)
-
-## Load JSON configuration file
-
-An [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) pipeline can be loaded from a JSON configuration file thanks to the [argaze.load](../../argaze.md/#argaze.load) package method.
-
-Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file example:
-
-```json
-{
-    "name": "My FullHD camera",
-    "size": [1920, 1080],
-    "aruco_detector": {
-        "dictionary": "DICT_APRILTAG_16h5"
-    },
-    "gaze_movement_identifier": {
-        "DispersionThresholdIdentification": {
-            "deviation_max_threshold": 25,
-            "duration_min_threshold": 150
-        }
-    },
-    "image_parameters": {
-        "background_weight": 1,
-        "draw_detected_markers": {
-            "color": [0, 255, 0],
-            "draw_axes": {
-                "thickness": 3
-            }
-        },
-        "draw_gaze_positions": {
-            "color": [0, 255, 255],
-            "size": 2
-        },
-        "draw_fixations": {
-            "deviation_circle_color": [255, 0, 255],
-            "duration_border_color": [127, 0, 127],
-            "duration_factor": 1e-2
-        },
-        "draw_saccades": {
-            "line_color": [255, 0, 255]
-        }
-    }
-}
-```
-
-Then, here is how to load the JSON file:
-
-```python
-import argaze
-
-# Load ArUcoCamera
-with argaze.load('./configuration.json') as aruco_camera:
-
-    # Do something with ArUcoCamera
-    ...
-```
-
-Now, let's understand the meaning of each JSON entry.
-
-### *name - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
-
-The name of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame. Mostly useful for visualisation purposes.
-
-### *size - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
-
-The size of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame in pixels. Be aware that gaze positions have to be in the same range of values to be projected onto it.
-
-### *aruco_detector*
-
-The first [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) pipeline step is to detect ArUco markers inside the input image.
-
-![ArUco markers detection](../../img/aruco_camera_markers_detection.png)
-
-The [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector) is in charge of detecting all markers from a specific dictionary.
-
-!!! warning "Mandatory"
-    The JSON *aruco_detector* entry is mandatory.
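-
-To make this step concrete, here is a minimal sketch that inspects the detection output once an image has been watched. It assumes the *aruco_camera* object loaded above, an OpenCV image named *image* with its *timestamp*, and that the detected markers dictionary is keyed by marker identifier (see the [scripting advanced topic](advanced_topics/scripting.md)):
-
-```python
-# Assuming ArUcoCamera is loaded and an image is available
-try:
-
-    # Detect ArUco markers into the image
-    aruco_camera.watch(image, timestamp=timestamp)
-
-# Do something with pipeline exception
-except Exception as e:
-
-    ...
-
-# Print each detected marker identifier
-for identifier, marker in aruco_camera.aruco_detector.detected_markers().items():
-
-    print('Detected marker:', identifier)
-```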
-
-### *gaze_movement_identifier - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
-
-The first [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step, dedicated to identifying fixations or saccades from consecutive timestamped gaze positions.
-
-![Gaze movement identification](../../img/aruco_camera_gaze_movement_identification.png)
-
-### *image_parameters - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*
-
-The usual [ArFrame visualisation parameters](../gaze_analysis_pipeline/visualisation.md) plus one additional *draw_detected_markers* field.
-
-## Pipeline execution
-
-### Detect ArUco markers, estimate scene pose and project 3D AOI
-
-Pass each camera image to the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method to execute the whole pipeline dedicated to ArUco marker detection, scene pose estimation and 3D AOI projection.
-
-!!! warning "Mandatory"
-
-    The [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method must be called from a *try* block to catch pipeline exceptions.
-
-```python
-# Assuming that Full HD (1920x1080) timestamped images are available
-...:
-
-    try:
-
-        # Detect ArUco markers, estimate scene pose, then project 3D AOI into camera frame
-        aruco_camera.watch(image, timestamp=timestamp)
-
-    # Do something with pipeline exception
-    except Exception as e:
-
-        ...
-
-    # Display ArUcoCamera frame image to show detected ArUco markers, scene pose, 2D AOI projection and ArFrame visualisation
-    ... aruco_camera.image()
-```
-
-### Analyse timestamped gaze positions into camera frame
-
-As mentioned above, [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) and so benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).
-
-In particular, timestamped gaze positions can be passed one by one to the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method to execute the whole pipeline dedicated to gaze analysis.
-
-!!! warning "Mandatory"
-
-    The [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method must be called from a *try* block to catch pipeline exceptions.
-
-```python
-# Assuming that timestamped gaze positions are available
-...
-
-    try:
-
-        # Look ArUcoCamera frame at a timestamped gaze position
-        aruco_camera.look(timestamped_gaze_position)
-
-    # Do something with pipeline exception
-    except Exception as e:
-
-        ...
-```
-
-!!! note ""
-
-    At this point, the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method only detects ArUco markers and the [ArUcoCamera.look](../../argaze.md/#argaze.ArFeatures.ArCamera.look) method only processes gaze movement identification without any AOI support, as no scene description is provided in the JSON configuration file.
-
-    Read the next chapters to learn [how to estimate scene pose](pose_estimation.md), [how to describe the scene's 3D AOI](aoi_3d_description.md) and [how to project them into the camera frame](aoi_3d_projection.md).
\ No newline at end of file
diff --git a/docs/user_guide/aruco_markers_pipeline/introduction.md b/docs/user_guide/aruco_markers_pipeline/introduction.md
deleted file mode 100644
index 94370f4..0000000
--- a/docs/user_guide/aruco_markers_pipeline/introduction.md
+++ /dev/null
@@ -1,29 +0,0 @@
-Overview
-========
-
-This section explains how to build augmented reality pipelines based on [ArUco Markers technology](https://www.sciencedirect.com/science/article/abs/pii/S0031320314000235) for various use cases.
-
-The OpenCV library provides a module to detect fiducial markers in a picture and estimate their poses.
-
-![OpenCV ArUco markers](../../img/opencv_aruco.png)
-
-The ArGaze [ArUcoMarkers submodule](../../argaze.md/#argaze.ArUcoMarkers) eases marker creation, marker detection and 3D scene pose estimation through a set of high-level classes.
-
-First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
-
-![ArUco markers pipeline](../../img/aruco_markers_pipeline.png)
-
-To build your own ArUco markers pipeline, you need to know:
-
-* [How to set up ArUco markers into a scene](aruco_markers_description.md),
-* [How to load and execute the ArUco markers pipeline](configuration_and_execution.md),
-* [How to estimate scene pose](pose_estimation.md),
-* [How to describe the scene's AOI](aoi_3d_description.md),
-* [How to project 3D AOI into the camera frame](aoi_3d_projection.md),
-* [How to define a 3D AOI as a frame](aoi_3d_frame.md).
-
-More advanced features are also explained, like:
-
-* [How to script the ArUco markers pipeline](advanced_topics/scripting.md),
-* [How to calibrate optic parameters](advanced_topics/optic_parameters_calibration.md),
-* [How to improve ArUco markers detection](advanced_topics/aruco_detector_configuration.md).
diff --git a/docs/user_guide/aruco_markers_pipeline/pose_estimation.md b/docs/user_guide/aruco_markers_pipeline/pose_estimation.md
deleted file mode 100644
index 7f6573c..0000000
--- a/docs/user_guide/aruco_markers_pipeline/pose_estimation.md
+++ /dev/null
@@ -1,84 +0,0 @@
-Estimate scene pose
-===================
-
-Once [ArUco markers are placed into a scene](aruco_markers_description.md) and the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) is [configured](configuration_and_execution.md), the scene pose can be estimated.
-
-![Scene pose estimation](../../img/aruco_camera_pose_estimation.png)
-
-## Add ArUcoScene to ArUcoCamera JSON configuration file
-
-The [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) class defines a space with [ArUco markers inside](aruco_markers_description.md), helping to estimate the scene pose when they are watched by an [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera).
-
-Here is an extract from the JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) configuration file with a sample where one scene is added and displayed:
-
-```json
-{
-    "name": "My FullHD camera",
-    "size": [1920, 1080],
-    ...
-    "scenes": {
-        "MyScene" : {
-            "aruco_markers_group": {
-                "dictionary": "DICT_APRILTAG_16h5",
-                "places": {
-                    "0": {
-                        "translation": [17.5, 2.75, -0.5],
-                        "rotation": [-18.5, 0, 0],
-                        "size": 5
-                    },
-                    "1": {
-                        "translation": [46, 34, 18.333],
-                        "rotation": [0, 70, 0],
-                        "size": 5
-                    },
-                    "2": {
-                        "translation": [41, 4, 3.333],
-                        "rotation": [-60, 0, 0],
-                        "size": 5
-                    }
-                }
-            }
-        }
-    },
-    ...
-    "image_parameters": {
-        ...
-        "draw_scenes": {
-            "MyScene": {
-                "draw_aruco_markers_group": {
-                    "draw_axes": {
-                        "thickness": 3,
-                        "length": 10
-                    },
-                    "draw_places": {
-                        "color": [0, 0, 0],
-                        "border_size": 1
-                    }
-                }
-            }
-        }
-    }
-}
-```
-
-Now, let's understand the meaning of each JSON entry.
-
-### *scenes*
-
-An [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) instance can contain multiple [ArUcoScenes](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) stored by name.
-
-### MyScene
-
-The name of an [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene). Mostly useful for visualisation purposes.
-
-### *aruco_markers_group*
-
-The 3D places of the ArUco markers into the scene, as defined in the [ArUco markers description chapter](aruco_markers_description.md). Thanks to this description, it is possible to estimate the pose of the [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) in the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) frame.
-
-!!! note
-
-    [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) pose estimation is done when calling the [ArUcoCamera.watch](../../argaze.md/#argaze.ArFeatures.ArCamera.watch) method.
-
-### *draw_scenes*
-
-The drawing parameters of each loaded [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) in [ArUcoCamera.image](../../argaze.md/#argaze.ArFeatures.ArFrame.image).
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
index 4d80c05..4d2780a 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
@@ -3,7 +3,7 @@ Calibrate gaze position
 Gaze position calibration is an optional [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step. It processes each new gaze position before any further pipeline steps.
 
-The calibration algorithm can be selected by instantiating a particular [GazePositionCalibrator from GazeAnalysis submodule](../pipeline_modules/gaze_position_calibrators.md) or [from another python package](module_loading.md).
+The calibration algorithm can be selected by instantiating a particular [GazePositionCalibrator from GazeAnalysis submodule](../pipeline_modules/gaze_position_calibrators.md) or [from another Python package](module_loading.md).
 
 ## Enable ArFrame calibration
 
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md
index 0e439a9..8250382 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md
@@ -1,7 +1,7 @@
 Load modules from another package
 =================================
 
-It is possible to load [GazeMovementIdentifier](../../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier), [ScanPathAnalyzer](../../../argaze.md/#argaze.GazeFeatures.ScanPathAnalyzer), [AOIMatcher](../../../argaze.md/#argaze.GazeFeatures.AOIMatcher) or [AOIScanPathAnalyzer](../../../argaze.md/#argaze.GazeFeatures.AOIScanPathAnalyzer) modules from another [python package](https://docs.python.org/3/tutorial/modules.html#packages).
+It is possible to load [GazeMovementIdentifier](../../../argaze.md/#argaze.GazeFeatures.GazeMovementIdentifier), [ScanPathAnalyzer](../../../argaze.md/#argaze.GazeFeatures.ScanPathAnalyzer), [AOIMatcher](../../../argaze.md/#argaze.GazeFeatures.AOIMatcher) or [AOIScanPathAnalyzer](../../../argaze.md/#argaze.GazeFeatures.AOIScanPathAnalyzer) modules [from another Python package](https://docs.python.org/3/tutorial/modules.html#packages).
 
 To do so, simply prepend the package where to find the module into the JSON configuration file:
 
@@ -34,7 +34,7 @@ To do so, simply prepend the package where to find the module into the JSON conf
 }
 ```
 
-Then, load your package from the python script where the [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) is created.
+Then, load your package from the Python script where the [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) is created.
 
 ```python
 import argaze
diff --git a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
index 927e6d7..9c4fb60 100644
--- a/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
+++ b/docs/user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
@@ -6,7 +6,7 @@ This could be particularly useful for realtime gaze interaction applications.
 
 ## Load ArFrame configuration from dictionary
 
-First of all, [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) configuration can be loaded from a python dictionary.
+First of all, [ArFrame](../../../argaze.md/#argaze.ArFeatures.ArFrame) configuration can be loaded from a Python dictionary.
 
 ```python
 from argaze import ArFeatures
 
@@ -154,7 +154,7 @@ This an iterator to access to all aoi scan path analysis.
 
 ## Setup ArFrame image parameters
 
-[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a python dictionary.
+[ArFrame.image](../../../argaze.md/#argaze.ArFeatures.ArFrame.image) method parameters can be configured thanks to a Python dictionary.
 
 ```python
 # Assuming ArFrame is loaded
diff --git a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
index b7a4342..feab4e4 100644
--- a/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
+++ b/docs/user_guide/gaze_analysis_pipeline/aoi_analysis.md
@@ -83,7 +83,7 @@ The first [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step ai
 
 ![AOI matcher](../../img/aoi_matcher.png)
 
-The matching algorithm can be selected by instantiating a particular [AOIMatcher from GazeAnalysis submodule](pipeline_modules/aoi_matchers.md) or [from another python package](advanced_topics/module_loading.md).
+The matching algorithm can be selected by instantiating a particular [AOIMatcher from GazeAnalysis submodule](pipeline_modules/aoi_matchers.md) or [from another Python package](advanced_topics/module_loading.md).
 
 In the example file, the chosen matching algorithm is the [Deviation Circle Coverage](../../argaze.md/#argaze.GazeAnalysis.DeviationCircleCoverage) which has one specific *coverage_threshold* attribute.
 
@@ -107,6 +107,6 @@ The [AOIScanPath.duration_max](../../argaze.md/#argaze.GazeFeatures.AOIScanPath.
 
 Finally, the last [ArLayer](../../argaze.md/#argaze.ArFeatures.ArLayer) pipeline step consists in passing the previously built [AOIScanPath](../../argaze.md/#argaze.GazeFeatures.AOIScanPath) to each loaded [AOIScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.AOIScanPathAnalyzer).
-Each analysis algorithm can be selected by instantiating a particular [AOIScanPathAnalyzer from GazeAnalysis submodule](pipeline_modules/aoi_scan_path_analyzers.md) or [from another python package](advanced_topics/module_loading.md).
+Each analysis algorithm can be selected by instantiating a particular [AOIScanPathAnalyzer from GazeAnalysis submodule](pipeline_modules/aoi_scan_path_analyzers.md) or [from another Python package](advanced_topics/module_loading.md).
 
 In the example file, the chosen analysis algorithms are the [Basic](../../argaze.md/#argaze.GazeAnalysis.Basic) module, the [TransitionMatrix](../../argaze.md/#argaze.GazeAnalysis.TransitionMatrix) module and the [NGram](../../argaze.md/#argaze.GazeAnalysis.NGram) module which has two specific *n_min* and *n_max* attributes.
diff --git a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
index 47b820b..ed8a5d1 100644
--- a/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
+++ b/docs/user_guide/gaze_analysis_pipeline/configuration_and_execution.md
@@ -64,7 +64,7 @@ The first [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step is
 
 ![Gaze movement identifier](../../img/gaze_movement_identifier.png)
 
-The identification algorithm can be selected by instantiating a particular [GazeMovementIdentifier from GazeAnalysis submodule](pipeline_modules/gaze_movement_identifiers.md) or [from another python package](advanced_topics/module_loading.md).
+The identification algorithm can be selected by instantiating a particular [GazeMovementIdentifier from GazeAnalysis submodule](pipeline_modules/gaze_movement_identifiers.md) or [from another Python package](advanced_topics/module_loading.md).
 
 In the example file, the chosen identification algorithm is the [Dispersion Threshold Identification (I-DT)](../../argaze.md/#argaze.GazeAnalysis.DispersionThresholdIdentification) which has two specific *deviation_max_threshold* and *duration_min_threshold* attributes.
 
@@ -91,7 +91,7 @@ The [ScanPath.duration_max](../../argaze.md/#argaze.GazeFeatures.ScanPath.durati
 
 Finally, the last [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step consists in passing the previously built [ScanPath](../../argaze.md/#argaze.GazeFeatures.ScanPath) to each loaded [ScanPathAnalyzer](../../argaze.md/#argaze.GazeFeatures.ScanPathAnalyzer).
 
-Each analysis algorithm can be selected by instantiating a particular [ScanPathAnalyzer from GazeAnalysis submodule](pipeline_modules/scan_path_analyzers.md) or [from another python package](advanced_topics/module_loading.md).
+Each analysis algorithm can be selected by instantiating a particular [ScanPathAnalyzer from GazeAnalysis submodule](pipeline_modules/scan_path_analyzers.md) or [from another Python package](advanced_topics/module_loading.md).
 
 In the example file, the chosen analysis algorithms are the [Basic](../../argaze.md/#argaze.GazeAnalysis.Basic) module and the [ExploreExploitRatio](../../argaze.md/#argaze.GazeAnalysis.ExploreExploitRatio) module which has one specific *short_fixation_duration_threshold* attribute.
diff --git a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md b/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
index 9a04bb6..388b0ac 100644
--- a/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
+++ b/docs/user_guide/gaze_analysis_pipeline/timestamped_gaze_positions_edition.md
@@ -29,9 +29,9 @@ for timestamped_gaze_position in ts_gaze_positions:
 ## Edit timestamped gaze positions from live stream
 
 When gaze positions come from a real-time input, gaze position can be edited thanks to the [GazePosition](../../argaze.md/#argaze.GazeFeatures.GazePosition) class.
-Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited thanks to the python [time package](https://docs.python.org/3/library/time.html).
+Besides, timestamps can be edited from the incoming data stream or, if not available, they can be edited thanks to the Python [time package](https://docs.python.org/3/library/time.html).
 
-``` python
+```python
 from argaze import GazeFeatures
 
 # Assuming to be inside the function where timestamp_µs, gaze_x and gaze_y values are caught
@@ -44,7 +44,7 @@ from argaze import GazeFeatures
 ...
 ```
 
-``` python
+```python
 from argaze import GazeFeatures
 import time
diff --git a/mkdocs.yml b/mkdocs.yml
index a5305aa..eef6008 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -24,18 +24,18 @@ nav:
       - user_guide/gaze_analysis_pipeline/advanced_topics/scripting.md
       - user_guide/gaze_analysis_pipeline/advanced_topics/module_loading.md
      - user_guide/gaze_analysis_pipeline/advanced_topics/gaze_position_calibration.md
-  - ArUco markers pipeline:
-    - user_guide/aruco_markers_pipeline/introduction.md
-    - user_guide/aruco_markers_pipeline/aruco_markers_description.md
-    - user_guide/aruco_markers_pipeline/configuration_and_execution.md
-    - user_guide/aruco_markers_pipeline/pose_estimation.md
-    - user_guide/aruco_markers_pipeline/aoi_3d_description.md
-    - user_guide/aruco_markers_pipeline/aoi_3d_projection.md
-    - user_guide/aruco_markers_pipeline/aoi_3d_frame.md
+  - ArUco marker pipeline:
+    - user_guide/aruco_marker_pipeline/introduction.md
+    - user_guide/aruco_marker_pipeline/aruco_markers_description.md
+    - user_guide/aruco_marker_pipeline/configuration_and_execution.md
+    - user_guide/aruco_marker_pipeline/pose_estimation.md
+    - user_guide/aruco_marker_pipeline/aoi_3d_description.md
+    - user_guide/aruco_marker_pipeline/aoi_3d_projection.md
+    - user_guide/aruco_marker_pipeline/aoi_3d_frame.md
     - Advanced Topics:
-      - user_guide/aruco_markers_pipeline/advanced_topics/scripting.md
-      - user_guide/aruco_markers_pipeline/advanced_topics/optic_parameters_calibration.md
-      - user_guide/aruco_markers_pipeline/advanced_topics/aruco_detector_configuration.md
+      - user_guide/aruco_marker_pipeline/advanced_topics/scripting.md
+      - user_guide/aruco_marker_pipeline/advanced_topics/optic_parameters_calibration.md
+      - user_guide/aruco_marker_pipeline/advanced_topics/aruco_detector_configuration.md
   - utils:
     - user_guide/utils/ready-made_scripts.md
     - user_guide/utils/demonstrations_scripts.md
-- cgit v1.1