-rw-r--r--  docs/user_guide/aruco_markers/camera_calibration.md         |  6
-rw-r--r--  docs/user_guide/aruco_markers/introduction.md                |  3
-rw-r--r--  docs/user_guide/aruco_markers/markers_creation.md            |  2
-rw-r--r--  docs/user_guide/aruco_markers/markers_detection.md           |  6
-rw-r--r--  docs/user_guide/aruco_markers/markers_pose_estimation.md     |  2
-rw-r--r--  docs/user_guide/aruco_markers/markers_scene_description.md   | 12
6 files changed, 16 insertions, 15 deletions
diff --git a/docs/user_guide/aruco_markers/camera_calibration.md b/docs/user_guide/aruco_markers/camera_calibration.md
index 2a1ba84..c8a0be9 100644
--- a/docs/user_guide/aruco_markers/camera_calibration.md
+++ b/docs/user_guide/aruco_markers/camera_calibration.md
@@ -5,7 +5,7 @@ Any camera device have to be calibrated to compensate its optical distorsion.
![Camera calibration](../../img/camera_calibration.png)
-The first step to calibrate a camera is to create an ArUco calibration board like in the code below:
+The first step to calibrate an [ArUcoCamera](/argaze/#argaze.ArUcoMarkers.ArUcoCamera) is to create an [ArUcoBoard](/argaze/#argaze.ArUcoMarkers.ArUcoBoard), as in the code below:
``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoBoard
@@ -20,7 +20,7 @@ aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary)
aruco_board.save('./calibration_board.png', 300)
```
-Then, the calibration process needs to make many different captures of an ArUco board through the camera and then, pass them to an ArUco detector instance.
+Then, the calibration process needs many different captures of an [ArUcoBoard](/argaze/#argaze.ArUcoMarkers.ArUcoBoard) through the camera, which are then passed to an [ArUcoDetector](/argaze/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) instance.
![Calibration step](../../img/camera_calibration_step.png)
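
As a rough illustration of this capture step, here is a minimal sketch of such a loop. Only the OpenCV video capture calls are standard; the commented `detect_board()` and `calibrate()` calls are hypothetical placeholders for whatever board detection and calibration methods [ArUcoDetector](/argaze/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) and [ArUcoCamera](/argaze/#argaze.ArUcoMarkers.ArUcoCamera) actually expose, not calls taken from the ArGaze API.

``` python
import cv2

# Grab frames from the camera to calibrate; each frame should show the printed board
capture = cv2.VideoCapture(0)

board_captures = []
while len(board_captures) < 50:

    success, frame = capture.read()

    if not success:
        break

    board_captures.append(frame)

    # Hypothetical placeholder: feed each capture to the detector
    # aruco_detector.detect_board(frame, aruco_board)

capture.release()

# Hypothetical placeholder: compute the camera calibration from the collected captures
# aruco_camera.calibrate(aruco_board)
```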
@@ -73,7 +73,7 @@ print(f'Distortion coefficients:{aruco_camera.D}')
aruco_camera.to_json('calibration.json')
```
-Then, the camera calibration data are loaded to compensate optical distorsion during ArUco marker detection:
+Then, the camera calibration data are loaded to compensate for optical distortion during [ArUcoMarkers](/argaze/#argaze.ArUcoMarkers.ArUcoMarker) detection:
``` python
from argaze.ArUcoMarkers import ArUcoCamera
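
# The actual loading call is elided by this diff; the line below is a hedged guess
# that mirrors the to_json() call shown above and is not a documented ArGaze call.
# aruco_camera = ArUcoCamera.ArUcoCamera.from_json('calibration.json')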
diff --git a/docs/user_guide/aruco_markers/introduction.md b/docs/user_guide/aruco_markers/introduction.md
index 59795b5..fbf01cf 100644
--- a/docs/user_guide/aruco_markers/introduction.md
+++ b/docs/user_guide/aruco_markers/introduction.md
@@ -1,4 +1,4 @@
-About Aruco markers
+About ArUco markers
===================
![OpenCV ArUco markers](https://pyimagesearch.com/wp-content/uploads/2020/12/aruco_generate_tags_header.png)
@@ -8,6 +8,7 @@ The OpenCV library provides a module to detect fiducial markers into a picture a
The ArGaze [ArUcoMarkers submodule](/argaze/#argaze.ArUcoMarkers) eases markers creation, camera calibration, markers detection and 3D scene pose estimation through a set of high level classes:
* [ArUcoMarkersDictionary](/argaze/#argaze.ArUcoMarkers.ArUcoMarkersDictionary)
+* [ArUcoMarkers](/argaze/#argaze.ArUcoMarkers.ArUcoMarker)
* [ArUcoBoard](/argaze/#argaze.ArUcoMarkers.ArUcoBoard)
* [ArUcoCamera](/argaze/#argaze.ArUcoMarkers.ArUcoCamera)
* [ArUcoDetector](/argaze/#argaze.ArUcoMarkers.ArUcoDetector)
diff --git a/docs/user_guide/aruco_markers/markers_creation.md b/docs/user_guide/aruco_markers/markers_creation.md
index 9909dc7..1725fe4 100644
--- a/docs/user_guide/aruco_markers/markers_creation.md
+++ b/docs/user_guide/aruco_markers/markers_creation.md
@@ -1,7 +1,7 @@
Markers creation
================
-The creation of ArUco markers from a dictionary is illustrated in the code below:
+The creation of [ArUcoMarkers](/argaze/#argaze.ArUcoMarkers.ArUcoMarker) from a dictionary is illustrated in the code below:
``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary
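
# The remaining lines of this example are elided by the diff; the calls below are
# illustrative placeholders only, not the documented ArGaze API.
# aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
# aruco_dictionary.save('./markers/', 300)  # placeholder: export every marker as an image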
diff --git a/docs/user_guide/aruco_markers/markers_detection.md b/docs/user_guide/aruco_markers/markers_detection.md
index 886ee69..d962b7b 100644
--- a/docs/user_guide/aruco_markers/markers_detection.md
+++ b/docs/user_guide/aruco_markers/markers_detection.md
@@ -3,7 +3,7 @@ Markers detection
![Detected markers](../../img/detected_markers.png)
-Firstly, the ArUco detector needs to know the expected dictionary and size (in centimeter) of the markers it have to detect.
+Firstly, the [ArUcoDetector](/argaze/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) needs to know the expected dictionary and size (in centimeters) of the [ArUcoMarkers](/argaze/#argaze.ArUcoMarkers.ArUcoMarker) it has to detect.
Notice that extra parameters are passed to detector: see [OpenCV ArUco markers detection parameters documentation](https://docs.opencv.org/4.x/d1/dcd/structcv_1_1aruco_1_1DetectorParameters.html) to know more.
@@ -19,7 +19,7 @@ extra_parameters = ArUcoDetector.DetectorParameters.from_json('./detector_parame
aruco_detector = ArUcoDetector.ArUcoDetector(camera=aruco_camera, dictionary='DICT_APRILTAG_16h5', marker_size=5, parameters=extra_parameters)
```
-Here is detector parameters JSON file example:
+Here is a [DetectorParameters](/argaze/#argaze.ArUcoMarkers.ArUcoDetector.DetectorParameters) JSON file example:
```
{
@@ -29,7 +29,7 @@ Here is detector parameters JSON file example:
}
```
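
For a fuller picture, here is a hedged example of what such a file could contain. The two parameter names come from the OpenCV DetectorParameters documentation linked above; the values are purely illustrative, not recommendations from this guide.

``` json
{
    "cornerRefinementMethod": 3,
    "aprilTagQuadDecimate": 2
}
```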
-The ArUco detector processes frame to detect markers and allows to draw detection results onto it:
+The [ArUcoDetector](/argaze/#argaze.ArUcoMarkers.ArUcoDetector.ArUcoDetector) processes a frame to detect markers and can draw the detection results onto it:
``` python
# Detect markers into a frame and draw them
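
# The detection and drawing calls are elided by this diff; the names below are
# assumptions, not confirmed ArGaze API.
# aruco_detector.detect_markers(frame)
# aruco_detector.draw_detected_markers(frame)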
diff --git a/docs/user_guide/aruco_markers/markers_pose_estimation.md b/docs/user_guide/aruco_markers/markers_pose_estimation.md
index 2459715..09db325 100644
--- a/docs/user_guide/aruco_markers/markers_pose_estimation.md
+++ b/docs/user_guide/aruco_markers/markers_pose_estimation.md
@@ -1,7 +1,7 @@
Markers pose estimation
=======================
-After marker detection, it is possible to estimate markers pose in camera axis.
+After [ArUcoMarkers](/argaze/#argaze.ArUcoMarkers.ArUcoMarker) detection, it is possible to estimate their pose in the camera axis.
![Pose estimation](../../img/pose_estimation.png)
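
As a hedged sketch of this step (the real call is elided by this diff, so the method name below is an assumption rather than the documented ArGaze API):

``` python
# Hypothetical placeholder: estimate the pose of every detected marker in camera axis
# aruco_detector.estimate_markers_pose(aruco_detector.detected_markers)

# Each estimated pose is expected to provide a translation vector and a rotation,
# as suggested by the (tvec, rmat) pair used in the scene description page.
```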
diff --git a/docs/user_guide/aruco_markers/markers_scene_description.md b/docs/user_guide/aruco_markers/markers_scene_description.md
index 9938f23..6dbc4fd 100644
--- a/docs/user_guide/aruco_markers/markers_scene_description.md
+++ b/docs/user_guide/aruco_markers/markers_scene_description.md
@@ -1,11 +1,11 @@
Markers scene description
=========================
-The ArGaze toolkit provides ArUcoScene class to describe where ArUco markers are placed into a 3D model.
+The ArGaze toolkit provides an [ArUcoScene](/argaze/#argaze.ArUcoMarkers.ArUcoScene) class to describe where [ArUcoMarkers](/argaze/#argaze.ArUcoMarkers.ArUcoMarker) are placed in a 3D model.
![ArUco scene](../../img/aruco_scene.png)
-ArUco scene is useful to:
+[ArUcoScene](/argaze/#argaze.ArUcoMarkers.ArUcoScene) is useful to:
* filter markers that belongs to this predefined scene,
* check the consistency of detected markers according the place where each marker is expected to be,
@@ -33,7 +33,7 @@ f 5//2 6//2 8//2 7//2
...
```
-ArUco scene description can also be written in a JSON file format.
+An [ArUcoScene](/argaze/#argaze.ArUcoMarkers.ArUcoScene) description can also be written in JSON file format.
``` json
{
@@ -56,7 +56,7 @@ ArUco scene description can also be written in a JSON file format.
}
```
-Here is a sample of code to show the loading of an ArUcoScene OBJ file description:
+Here is a code sample showing how to load an [ArUcoScene](/argaze/#argaze.ArUcoMarkers.ArUcoScene) OBJ file description:
``` python
from argaze.ArUcoMarkers import ArUcoScene
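
# The loading call itself is elided by this diff; the line below is an illustrative
# placeholder, not confirmed ArGaze API.
# aruco_scene = ArUcoScene.ArUcoScene('./scene.obj')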
@@ -91,7 +91,7 @@ consistent_markers, unconsistent_markers, unconsistencies = aruco_scene.check_ma
## Scene pose estimation
-Several approaches are available to perform ArUco scene pose estimation from markers belonging to the scene.
+Several approaches are available to perform [ArUcoScene](/argaze/#argaze.ArUcoMarkers.ArUcoScene) pose estimation from markers belonging to the scene.
The first approach considers that scene pose can be estimated **from a single marker pose**:
@@ -103,7 +103,7 @@ marker_id, marker = consistent_markers.popitem()
tvec, rmat = self.aruco_scene.estimate_pose_from_single_marker(marker)
```
-The second approach considers that scene pose can be estimated **by averaging several marker poses**:
+The second approach considers that scene pose can be estimated by **averaging several marker poses**:
``` python
# Estimate scene pose from all consistent scene markers
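# Hedged placeholder continuing the comment above; the method name is an assumption
# modeled on estimate_pose_from_single_marker() shown earlier, not confirmed API.
# tvec, rmat = self.aruco_scene.estimate_pose_from_markers(consistent_markers)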