author	Théo de la Hogue	2023-09-06 08:34:32 +0200
committer	Théo de la Hogue	2023-09-06 08:34:32 +0200
commit	3b8681b848fd91989a03d0ff6a03c7deaec4addd (patch)
tree	438e0eb91d4954564c774b16421c94ad2d3bbdbd /docs/user_guide/aruco_markers_pipeline
parent	3dba9640ad57e48d1979d19cfe8cab8c9be2d621 (diff)
Renaming folder.
Diffstat (limited to 'docs/user_guide/aruco_markers_pipeline')
-rw-r--r--	docs/user_guide/aruco_markers_pipeline/aruco_camera_configuration_and_execution.md	110
-rw-r--r--	docs/user_guide/aruco_markers_pipeline/aruco_scene_creation.md	130
-rw-r--r--	docs/user_guide/aruco_markers_pipeline/introduction.md	26
-rw-r--r--	docs/user_guide/aruco_markers_pipeline/optic_parameters_calibration.md	133
4 files changed, 399 insertions, 0 deletions
diff --git a/docs/user_guide/aruco_markers_pipeline/aruco_camera_configuration_and_execution.md b/docs/user_guide/aruco_markers_pipeline/aruco_camera_configuration_and_execution.md
new file mode 100644
index 0000000..a0ec2bf
--- /dev/null
+++ b/docs/user_guide/aruco_markers_pipeline/aruco_camera_configuration_and_execution.md
@@ -0,0 +1,110 @@
+Configure and execute ArUcoCamera
+=================================
+
+Once [ArUco markers are placed into a scene](aruco_scene_creation.md) and [the camera optics have been calibrated](optic_parameters_calibration.md), everything is ready to set up an ArUco marker pipeline thanks to the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class.
+
+As it inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class benefits from all the services of a [gaze analysis pipeline](../gaze_analysis_pipeline/introduction.md).
+
+Besides, the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) class projects [ArUcoScenes](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene)' layers into its own layers thanks to the ArUco marker pose estimations made by its [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector).
+
+![ArUco camera frame](../../img/aruco_camera_frame.png)
+
+## Load JSON configuration file
+
+The [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) internal pipeline is loaded from a JSON configuration file thanks to the [ArUcoCamera.from_json](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera.from_json) class method.
+
+Here is a simple JSON ArUcoCamera configuration file example:
+
+```json
+{
+ "name": "My FullHD camera",
+ "size": [1920, 1080],
+ "aruco_detector": {
+ "dictionary": {
+ "name": "DICT_APRILTAG_16h5"
+ },
+ "marker_size": 5,
+ "optic_parameters": "optic_parameters.json",
+ "parameters": {
+ "cornerRefinementMethod": 1,
+ "aprilTagQuadSigma": 2,
+ "aprilTagDeglitch": 1
+ }
+ },
+ "scenes": {
+
+ },
+ "layers": {
+ "main_layer": {}
+	}
+}
+```
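
Before feeding such a file to ArGaze, it can be sanity-checked with plain Python, which also catches syntax slips like a trailing comma after the last entry. This is a stdlib-only sketch, not part of the ArGaze API:

```python
import json

# Minimal copy of the configuration above (note: no trailing comma
# after the "layers" entry, which would make the JSON invalid)
configuration = '''
{
    "name": "My FullHD camera",
    "size": [1920, 1080],
    "aruco_detector": {
        "dictionary": {"name": "DICT_APRILTAG_16h5"},
        "marker_size": 5,
        "parameters": {"cornerRefinementMethod": 1}
    },
    "scenes": {},
    "layers": {"main_layer": {}}
}
'''

config = json.loads(configuration)

# Check that the expected top-level entries are all present
assert {"name", "size", "aruco_detector", "scenes", "layers"} <= config.keys()

print(config["name"], config["size"])
```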
+
+Then, here is how to load the JSON file:
+
+```python
+from argaze.ArUcoMarkers import ArUcoCamera
+
+# Load ArUcoCamera
+aruco_camera = ArUcoCamera.ArUcoCamera.from_json('./configuration.json')
+```
+
+Now, let's understand the meaning of each JSON entry.
+
+### Name
+
+The name of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera). Mostly useful for visualisation purposes.
+
+### Size
+
+The size of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) defines the dimension of the rectangular area where gaze positions are projected. Be aware that gaze positions have to be within the same range of values.
+
+!!! warning
+    **ArGaze doesn't impose any spatial unit.** Gaze positions can be integers or floats, in pixels, millimeters or whatever you need. The only constraint is that all spatial values used in further configurations have to use the same unit.
+
+### ArUco detector
+
+The first [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera) pipeline step is to detect ArUco markers inside the incoming camera image thanks to an [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector) object.
+
+The JSON *aruco_detector* entry defines:
+
+* *dictionary*: the ArUco dictionary the expected markers belong to,
+* *marker_size*: the size of the markers in centimeters,
+* *optic_parameters*: the path to a JSON file storing the [calibrated optic parameters](optic_parameters_calibration.md),
+* *parameters*: the detection parameters forwarded to the underlying OpenCV ArUco detector (cf [OpenCV DetectorParameters documentation](https://docs.opencv.org/4.x/d1/dcd/structcv_1_1aruco_1_1DetectorParameters.html)).
+
+### Scenes
+
+The JSON *scenes* entry is where [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) children are defined. It is left empty in this example.
+
+!!! note
+    The description of an [ArUcoScene](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene) and of its layers is covered in [a dedicated chapter](ar_scene.md).
+
+### Layers
+
+The JSON *layers* entry defines the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera)'s own layers, where [ArUcoScenes](../../argaze.md/#argaze.ArUcoMarkers.ArUcoScene)' layers are projected as mentioned in the introduction of this chapter.
+
+## Pipeline execution
+
+Timestamped gaze positions have to be passed one by one to the [ArFrame.look](../../argaze.md/#argaze.ArFeatures.ArFrame.look) method, inherited by [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarkers.ArUcoCamera), to execute the whole instantiated pipeline.
+
+```python
+# Assuming that timestamped gaze positions are available
+...
+
+    # Look ArUcoCamera frame at a timestamped gaze position
+    aruco_camera.look(timestamp, gaze_position)
+```
diff --git a/docs/user_guide/aruco_markers_pipeline/aruco_scene_creation.md b/docs/user_guide/aruco_markers_pipeline/aruco_scene_creation.md
new file mode 100644
index 0000000..ee5cca7
--- /dev/null
+++ b/docs/user_guide/aruco_markers_pipeline/aruco_scene_creation.md
@@ -0,0 +1,130 @@
+Build ArUco markers scene
+=========================
+
+First of all, ArUco markers need to be printed and placed into the scene.
+
+## Print ArUco markers from an ArUco dictionary
+
+ArUco markers always belong to a set of markers called an ArUco markers dictionary.
+
+![ArUco dictionaries](../../img/aruco_dictionaries.png)
+
+Many ArUco dictionaries exist, with different properties concerning the format, the number of markers and the distance between markers, to avoid detection errors.
+
+Here is the documentation [about ArUco markers dictionaries](https://docs.opencv.org/3.4/d9/d6a/group__aruco.html#gac84398a9ed9dd01306592dd616c2c975).
+
+The creation of [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) pictures from a dictionary is illustrated in the code below:
+
+``` python
+from argaze.ArUcoMarkers import ArUcoMarkersDictionary
+
+# Create a dictionary of specific April tags
+aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
+
+# Export marker n°5 as 3.5 cm picture with 300 dpi resolution
+aruco_dictionary.create_marker(5, 3.5).save('./markers/', 300)
+
+# Export all dictionary markers as 3.5 cm pictures with 300 dpi resolution
+aruco_dictionary.save('./markers/', 3.5, 300)
+```
+
+!!! note
+    There is an **A4_DICT_APRILTAG_16h5_5cm_0-7.pdf** file located in the *./src/argaze/ArUcoMarkers/utils/* folder, ready to be printed on an A4 paper sheet.
+
+Let's print some of them before going further.
+
+!!! warning
+    Print the markers with a blank zone around them to help their detection.
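
As a sanity check before printing, the expected pixel width of a marker picture can be computed from its size in centimeters and the export resolution. This is simple arithmetic, independent of the ArGaze API:

```python
# Convert a marker size in centimeters to pixels at a given dpi resolution
def marker_size_px(size_cm: float, dpi: int) -> int:

    inches = size_cm / 2.54  # 1 inch = 2.54 cm
    return round(inches * dpi)

# A 3.5 cm marker exported at 300 dpi is about 413 pixels wide
print(marker_size_px(3.5, 300))
```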
+
+## Describe ArUco markers places
+
+Once [ArUcoMarkers](../../argaze.md/#argaze.ArUcoMarkers.ArUcoMarker) pictures are placed into a scene, it is possible to describe their 3D places into a file.
+
+![ArUco scene](../../img/aruco_scene.png)
+
+Wherever the origin point is, all marker places need to be described in a [right-handed 3D axis](https://robotacademy.net.au/lesson/right-handed-3d-coordinate-frame/) where:
+
+* +X is pointing to the right,
+* +Y is pointing to the top,
+* +Z is pointing backward.
+
+!!! warning
+ All ArUco markers spatial values must be given in **centimeters**.
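
The right-handedness of this convention can be verified with a cross product: +X crossed with +Y must give +Z, which points backward, toward the viewer, like the marker normals in the OBJ description below. A pure Python sketch:

```python
# 3D cross product of two vectors
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

x_axis = [1, 0, 0]  # +X pointing to the right
y_axis = [0, 1, 0]  # +Y pointing to the top

# In a right-handed frame, X x Y gives +Z
print(cross(x_axis, y_axis))
```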
+
+### Edit OBJ file description
+
+The OBJ file format can be exported from most 3D editors.
+
+``` obj
+o DICT_APRILTAG_16h5#0_Marker
+v -5.000000 14.960000 0.000000
+v 0.000000 14.960000 0.000000
+v -5.000000 19.959999 0.000000
+v 0.000000 19.959999 0.000000
+vn 0.0000 0.0000 1.0000
+s off
+f 1//1 2//1 4//1 3//1
+o DICT_APRILTAG_16h5#1_Marker
+v 25.000000 14.960000 0.000000
+v 30.000000 14.960000 0.000000
+v 25.000000 19.959999 0.000000
+v 30.000000 19.959999 0.000000
+vn 0.0000 0.0000 1.0000
+s off
+f 5//2 6//2 8//2 7//2
+o DICT_APRILTAG_16h5#2_Marker
+v -5.000000 -5.000000 0.000000
+v 0.000000 -5.000000 0.000000
+v -5.000000 0.000000 0.000000
+v 0.000000 0.000000 0.000000
+vn 0.0000 0.0000 1.0000
+s off
+f 9//3 10//3 12//3 11//3
+o DICT_APRILTAG_16h5#3_Marker
+v 25.000000 -5.000000 0.000000
+v 30.000000 -5.000000 0.000000
+v 25.000000 0.000000 0.000000
+v 30.000000 0.000000 0.000000
+vn 0.0000 0.0000 1.0000
+s off
+f 13//4 14//4 16//4 15//4
+```
+
+Here are the common OBJ file features needed to describe ArUco markers places:
+
+* Object lines (starting with the *o* key) indicate the marker dictionary and id, following this format: **DICTIONARY**#**ID**\_Marker.
+* Vertex lines (starting with the *v* key) indicate the marker corners. The marker size is automatically deduced from the geometry.
+* Plane normal lines (starting with the *vn* key) need to be exported for further pose estimation.
+* Face lines (starting with the *f* key) link vertex and normal indexes together.
+
+!!! warning
+ All markers must have the same size and belong to the same dictionary.
+
+### Edit JSON file description
+
+The JSON file format allows markers places to be described using translation and Euler angle rotation vectors.
+
+``` json
+{
+ "dictionary": "DICT_APRILTAG_16h5",
+ "marker_size": 5,
+ "places": {
+ "0": {
+ "translation": [-2.5, 17.5, 0],
+ "rotation": [0.0, 0.0, 0.0]
+ },
+ "1": {
+ "translation": [27.5, 17.5, 0],
+ "rotation": [0.0, 0.0, 0.0]
+ },
+ "2": {
+ "translation": [-2.5, -2.5, 0],
+ "rotation": [0.0, 0.0, 0.0]
+ },
+ "3": {
+ "translation": [27.5, -2.5, 0],
+ "rotation": [0.0, 0.0, 0.0]
+ }
+ }
+}
+```
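
Both descriptions are consistent: ignoring rotation, each marker's four corner vertices can be derived from its translation (the marker center) and the marker size, matching the OBJ vertices of marker #2 above. This is a plain Python sketch, not an ArGaze helper:

```python
# Derive the four corner vertices of a marker from its center and size,
# assuming a null rotation (marker lying in a plane of constant Z)
def marker_corners(translation, size):

    half = size / 2
    x, y, z = translation
    return [
        [x - half, y - half, z],  # bottom left
        [x + half, y - half, z],  # bottom right
        [x - half, y + half, z],  # top left
        [x + half, y + half, z]   # top right
    ]

# Marker "2" of the JSON description above: center (-2.5, -2.5, 0), 5 cm wide
print(marker_corners([-2.5, -2.5, 0], 5))
```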
diff --git a/docs/user_guide/aruco_markers_pipeline/introduction.md b/docs/user_guide/aruco_markers_pipeline/introduction.md
new file mode 100644
index 0000000..d2b19eb
--- /dev/null
+++ b/docs/user_guide/aruco_markers_pipeline/introduction.md
@@ -0,0 +1,26 @@
+Overview
+========
+
+This section explains how to build augmented reality pipelines based on ArUco Markers technology for various use cases.
+
+The OpenCV library provides a module to detect fiducial markers in a picture and estimate their poses (cf [OpenCV ArUco tutorial page](https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html)).
+
+![OpenCV ArUco markers](https://pyimagesearch.com/wp-content/uploads/2020/12/aruco_generate_tags_header.png)
+
+The ArGaze [ArUcoMarkers submodule](../../argaze.md/#argaze.ArUcoMarkers) eases markers creation, optic calibration, markers detection and 3D scene pose estimation through a set of high level classes.
+
+First, let's look at the schema below: it gives an overview of the main notions involved in the following chapters.
+
+![ArUco markers pipeline](../../img/aruco_markers_pipeline.png)
+
+To build your own ArUco markers pipeline, you need to know:
+
+* [How to build an ArUco markers scene](aruco_scene_creation.md),
+* [How to calibrate optic parameters](optic_parameters_calibration.md),
+* [How to deal with an ArUcoCamera instance](aruco_camera_configuration_and_execution.md),
+* [How to add an ArScene instance](ar_scene.md),
+* [How to visualize ArCamera and ArScenes](visualisation.md)
+
+More advanced features are also explained, such as:
+
+* [How to script ArUco markers pipeline](advanced_topics/scripting.md)
diff --git a/docs/user_guide/aruco_markers_pipeline/optic_parameters_calibration.md b/docs/user_guide/aruco_markers_pipeline/optic_parameters_calibration.md
new file mode 100644
index 0000000..0561112
--- /dev/null
+++ b/docs/user_guide/aruco_markers_pipeline/optic_parameters_calibration.md
@@ -0,0 +1,133 @@
+Calibrate optic parameters
+==========================
+
+A camera device has to be calibrated to compensate for its optical distortion.
+
+![Optic parameters calibration](../../img/optic_calibration.png)
+
+## Print calibration board
+
+The first step to calibrate a camera is to create an [ArUcoBoard](../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard), as in the code below:
+
+``` python
+from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoBoard
+
+# Create ArUco dictionary
+aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
+
+# Create an ArUco board of 7 columns and 5 rows with 5 cm squares and 3 cm ArUco markers inside
+aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary)
+
+# Export ArUco board with 300 dpi resolution
+aruco_board.save('./calibration_board.png', 300)
+```
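
The physical size of that board follows from its layout: 7 columns by 5 rows of 5 cm squares give a 35 cm x 25 cm print area, which matches the ready-made A3 PDF mentioned in the note below. A quick arithmetic sketch:

```python
# Physical board dimensions from its layout
columns, rows, square_size_cm = 7, 5, 5

width_cm = columns * square_size_cm   # 35 cm
height_cm = rows * square_size_cm     # 25 cm

# Pixel width when exported at 300 dpi (1 inch = 2.54 cm)
dpi = 300
width_px = round(width_cm / 2.54 * dpi)

print(width_cm, height_cm, width_px)
```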
+
+!!! note
+    There is an **A3_DICT_APRILTAG_16h5_3cm_35cmx25cm.pdf** file located in the *./src/argaze/ArUcoMarkers/utils/* folder, ready to be printed on an A3 paper sheet.
+
+Let's print the calibration board before going further.
+
+## Capture board pictures
+
+Then, the calibration process requires many different captures of the [ArUcoBoard](../../argaze.md/#argaze.ArUcoMarkers.ArUcoBoard) through the camera. Each capture is passed to an [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) instance to detect board corners, which are stored as calibration data into an [ArUcoOpticCalibrator](../../argaze.md/#argaze.ArUcoMarkers.ArUcoOpticCalibrator) for the final calibration process.
+
+![Calibration step](../../img/optic_calibration_step.png)
+
+The code sample below illustrates how to:
+
+* load all required ArGaze objects,
+* detect board corners into a Full HD camera video stream,
+* store detected corners as calibration data then,
+* once enough captures are made, process them to find optic parameters and,
+* finally, save optic parameters into a JSON file.
+
+``` python
+from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoOpticCalibrator, ArUcoBoard, ArUcoDetector
+
+# Create ArUco dictionary
+aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
+
+# Create ArUco optic calibrator
+aruco_optic_calibrator = ArUcoOpticCalibrator.ArUcoOpticCalibrator()
+
+# Create an ArUco board of 7 columns and 5 rows with 5 cm squares and 3 cm ArUco markers inside
+# Note: This board is the one expected during further board tracking
+expected_aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary)
+
+# Create ArUco detector
+aruco_detector = ArUcoDetector.ArUcoDetector(dictionary=aruco_dictionary, marker_size=3)
+
+# Assuming that live Full HD (1920x1080) video stream is enabled
+...
+
+# Assuming there is a way to escape the while loop
+...
+
+ # Capture images from video stream
+ while video_stream.is_alive():
+
+ image = video_stream.read()
+
+ # Detect all board corners in image
+ aruco_detector.detect_board(image, expected_aruco_board, expected_aruco_board.markers_number)
+
+ # If all board corners are detected
+ if aruco_detector.board_corners_number == expected_aruco_board.corners_number:
+
+ # Draw board corners to show that board tracking succeeded
+ aruco_detector.draw_board(image)
+
+ # Append tracked board data for further calibration processing
+ aruco_optic_calibrator.store_calibration_data(aruco_detector.board_corners, aruco_detector.board_corners_identifier)
+
+# Start optic calibration processing for Full HD image resolution
+print('Calibrating optic...')
+optic_parameters = aruco_optic_calibrator.calibrate(expected_aruco_board, dimensions=(1920, 1080))
+
+if optic_parameters:
+
+ # Export optic parameters
+ optic_parameters.to_json('./optic_parameters.json')
+
+ print('Calibration succeeded: optic_parameters.json file exported.')
+
+else:
+
+ print('Calibration failed.')
+```
+
+Here is an *optic_parameters.json* file example:
+
+```json
+{
+ "rms": 0.6688921504088245,
+ "dimensions": [
+ 1920,
+ 1080
+ ],
+ "K": [
+ [
+ 1135.6524381415752,
+ 0.0,
+ 956.0685325355497
+ ],
+ [
+ 0.0,
+ 1135.9272506869524,
+ 560.059099810324
+ ],
+ [
+ 0.0,
+ 0.0,
+ 1.0
+ ]
+ ],
+ "D": [
+ 0.01655492265003404,
+ 0.1985524264972037,
+ 0.002129965902489484,
+ -0.0019528582922179365,
+ -0.5792910353639452
+ ]
+}
+```
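
The *K* matrix is a standard pinhole camera matrix: a 3D point (X, Y, Z) in camera coordinates projects to pixel (u, v) = (fx * X / Z + cx, fy * Y / Z + cy), before distortion correction with *D*. A quick sketch using the intrinsic values from the example above:

```python
# Project a 3D point in camera coordinates to pixel coordinates
# with the pinhole model (distortion ignored)
def project(point, fx, fy, cx, cy):

    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

# Focal lengths and principal point from the K matrix above
fx, fy = 1135.6524381415752, 1135.9272506869524
cx, cy = 956.0685325355497, 560.059099810324

# A point on the optical axis projects to the principal point (cx, cy),
# close to the center of the 1920x1080 image
print(project((0, 0, 100), fx, fy, cx, cy))
```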
+```