author    | Théo de la Hogue | 2023-06-07 14:34:14 +0200
committer | Théo de la Hogue | 2023-06-07 14:34:14 +0200
commit    | c4552e04e1271a9210a934233beae5be1943d034 (patch)
tree      | a44041e544bc700976237bfea9058ec06f9a2904 /docs/user_guide/aruco_markers
parent    | bd9cd27c9d44c072164f564ffffeb22e37106b89 (diff)
Writing User guide and use cases section.
Diffstat (limited to 'docs/user_guide/aruco_markers')
-rw-r--r-- | docs/user_guide/aruco_markers/camera_calibration.md        |  83
-rw-r--r-- | docs/user_guide/aruco_markers/dictionary_selection.md      |  17
-rw-r--r-- | docs/user_guide/aruco_markers/introduction.md              |  14
-rw-r--r-- | docs/user_guide/aruco_markers/markers_creation.md          |  17
-rw-r--r-- | docs/user_guide/aruco_markers/markers_detection.md         |  47
-rw-r--r-- | docs/user_guide/aruco_markers/markers_pose_estimation.md   |  20
-rw-r--r-- | docs/user_guide/aruco_markers/markers_scene_description.md | 117
7 files changed, 315 insertions, 0 deletions
diff --git a/docs/user_guide/aruco_markers/camera_calibration.md b/docs/user_guide/aruco_markers/camera_calibration.md
new file mode 100644
index 0000000..2a1ba84
--- /dev/null
+++ b/docs/user_guide/aruco_markers/camera_calibration.md
@@ -0,0 +1,83 @@

Camera calibration
==================

Any camera device has to be calibrated to compensate for its optical distortion.

![Camera calibration](../../img/camera_calibration.png)

The first step to calibrate a camera is to create an ArUco calibration board, as in the code below:

``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoBoard

# Create ArUco dictionary
aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')

# Create an ArUco board of 7 columns and 5 rows with 5 cm squares and 3 cm ArUco markers inside
aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary)

# Export ArUco board with 300 dpi resolution
aruco_board.save('./calibration_board.png', 300)
```

Then, the calibration process requires many different captures of the ArUco board through the camera, which are then passed to an ArUco detector instance.
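As a quick sanity check on the board created above, its printed physical size follows directly from the column, row and square-size parameters. A minimal sketch (the helper function is illustrative, not part of the ArGaze API):

``` python
def board_size_cm(columns, rows, square_size_cm):
    """Overall physical size (width, height) of a calibration board, in centimeters."""
    return (columns * square_size_cm, rows * square_size_cm)

# A 7x5 board with 5 cm squares prints as a 35 cm x 25 cm sheet
print(board_size_cm(7, 5, 5))  # (35, 25)
```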
![Calibration step](../../img/camera_calibration_step.png)

The code sample below shows how to detect board corners in camera frames, store the detected corners, then process them to build calibration data and, finally, save it into a JSON file:

``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary, ArUcoCamera, ArUcoBoard, ArUcoDetector

# Create ArUco dictionary
aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')

# Create ArUco camera
aruco_camera = ArUcoCamera.ArUcoCamera(dimensions=(1920, 1080))

# Create an ArUco board of 7 columns and 5 rows with 5 cm squares and 3 cm ArUco markers inside
# Note: this board is the one expected during further board tracking
expected_aruco_board = ArUcoBoard.ArUcoBoard(7, 5, 5, 3, aruco_dictionary)

# Create ArUco detector
aruco_detector = ArUcoDetector.ArUcoDetector(dictionary=aruco_dictionary, marker_size=3)

# Capture frames from a live Full HD video stream (1920x1080)
# Note: video_stream is assumed to be provided by the application
while video_stream.is_alive():

    frame = video_stream.read()

    # Detect all board corners in frame
    aruco_detector.detect_board(frame, expected_aruco_board, expected_aruco_board.markers_number)

    # If board corners are detected
    if aruco_detector.board_corners_number > 0:

        # Draw board corners to show that board tracking succeeded
        aruco_detector.draw_board(frame)

        # Append tracked board data for further calibration processing
        aruco_camera.store_calibration_data(aruco_detector.board_corners, aruco_detector.board_corners_identifier)

# Start camera calibration processing for Full HD image resolution
print('Calibrating camera...')
aruco_camera.calibrate(expected_aruco_board)

# Print camera calibration data
print('Calibration succeeded!')
print(f'RMS:{aruco_camera.rms}')
print(f'Camera matrix:{aruco_camera.K}')
print(f'Distortion coefficients:{aruco_camera.D}')

# Save camera calibration data
aruco_camera.to_json('calibration.json')
```

Then, the camera
calibration data are loaded to compensate for optical distortion during ArUco marker detection:

``` python
from argaze.ArUcoMarkers import ArUcoCamera

# Load camera calibration data
aruco_camera = ArUcoCamera.ArUcoCamera.from_json('./calibration.json')
```

diff --git a/docs/user_guide/aruco_markers/dictionary_selection.md b/docs/user_guide/aruco_markers/dictionary_selection.md
new file mode 100644
index 0000000..b9ba510
--- /dev/null
+++ b/docs/user_guide/aruco_markers/dictionary_selection.md
@@ -0,0 +1,17 @@

Dictionary selection
====================

ArUco markers always belong to a set of markers called an ArUco markers dictionary.

![ArUco dictionaries](../../img/aruco_dictionaries.png)

Many ArUco dictionaries exist, differing in marker format, number of markers and the distance between markers, which helps avoid tracking errors.

Here is the documentation [about ArUco markers dictionaries](https://docs.opencv.org/3.4/d9/d6a/group__aruco.html#gac84398a9ed9dd01306592dd616c2c975).

``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary

# Create a dictionary of specific April tags
aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')
```

diff --git a/docs/user_guide/aruco_markers/introduction.md b/docs/user_guide/aruco_markers/introduction.md
new file mode 100644
index 0000000..59795b5
--- /dev/null
+++ b/docs/user_guide/aruco_markers/introduction.md
@@ -0,0 +1,14 @@

About ArUco markers
===================

![OpenCV ArUco markers](https://pyimagesearch.com/wp-content/uploads/2020/12/aruco_generate_tags_header.png)

The OpenCV library provides a module to detect fiducial markers in a picture and estimate their pose (cf. [OpenCV ArUco tutorial page](https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html)).
The ArGaze [ArUcoMarkers submodule](/argaze/#argaze.ArUcoMarkers) eases marker creation, camera calibration, marker detection and 3D scene pose estimation through a set of high-level classes:

* [ArUcoMarkersDictionary](/argaze/#argaze.ArUcoMarkers.ArUcoMarkersDictionary)
* [ArUcoBoard](/argaze/#argaze.ArUcoMarkers.ArUcoBoard)
* [ArUcoCamera](/argaze/#argaze.ArUcoMarkers.ArUcoCamera)
* [ArUcoDetector](/argaze/#argaze.ArUcoMarkers.ArUcoDetector)
* [ArUcoScene](/argaze/#argaze.ArUcoMarkers.ArUcoScene)
\ No newline at end of file

diff --git a/docs/user_guide/aruco_markers/markers_creation.md b/docs/user_guide/aruco_markers/markers_creation.md
new file mode 100644
index 0000000..9909dc7
--- /dev/null
+++ b/docs/user_guide/aruco_markers/markers_creation.md
@@ -0,0 +1,17 @@

Markers creation
================

The creation of ArUco markers from a dictionary is illustrated in the code below:

``` python
from argaze.ArUcoMarkers import ArUcoMarkersDictionary

# Create a dictionary of specific April tags
aruco_dictionary = ArUcoMarkersDictionary.ArUcoMarkersDictionary('DICT_APRILTAG_16h5')

# Export marker n°5 as a 3.5 cm picture with 300 dpi resolution
aruco_dictionary.create_marker(5, 3.5).save('./markers/', 300)

# Export all dictionary markers as 3.5 cm pictures with 300 dpi resolution
aruco_dictionary.save('./markers/', 3.5, 300)
```
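For reference, the pixel dimensions of such an export follow from the DPI setting (dots per inch, with 1 inch = 2.54 cm). A minimal sketch of that conversion (the function is illustrative, not part of the ArGaze API):

``` python
def cm_to_pixels(size_cm, dpi):
    """Convert a physical size in centimeters to pixels at a given print resolution."""
    return round(size_cm / 2.54 * dpi)

# A 3.5 cm marker exported at 300 dpi is about 413 pixels wide
print(cm_to_pixels(3.5, 300))  # 413
```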
\ No newline at end of file

diff --git a/docs/user_guide/aruco_markers/markers_detection.md b/docs/user_guide/aruco_markers/markers_detection.md
new file mode 100644
index 0000000..886ee69
--- /dev/null
+++ b/docs/user_guide/aruco_markers/markers_detection.md
@@ -0,0 +1,47 @@

Markers detection
=================

![Detected markers](../../img/detected_markers.png)

First, the ArUco detector needs to know the expected dictionary and size (in centimeters) of the markers it has to detect.

Notice that extra parameters are passed to the detector: see the [OpenCV ArUco markers detection parameters documentation](https://docs.opencv.org/4.x/d1/dcd/structcv_1_1aruco_1_1DetectorParameters.html) to know more.

``` python
from argaze.ArUcoMarkers import ArUcoDetector, ArUcoCamera

# Assuming camera calibration data are loaded

# Load extra detector parameters
extra_parameters = ArUcoDetector.DetectorParameters.from_json('./detector_parameters.json')

# Create ArUco detector to track DICT_APRILTAG_16h5 5 cm length markers
aruco_detector = ArUcoDetector.ArUcoDetector(camera=aruco_camera, dictionary='DICT_APRILTAG_16h5', marker_size=5, parameters=extra_parameters)
```

Here is a detector parameters JSON file example:

``` json
{
    "cornerRefinementMethod": 1,
    "aprilTagQuadSigma": 2,
    "aprilTagDeglitch": 1
}
```

The ArUco detector processes a frame to detect markers and allows drawing the detection results onto it:

``` python
# Detect markers in a frame and draw them
aruco_detector.detect_markers(frame)
aruco_detector.draw_detected_markers(frame)

# Get corners position in frame for each detected marker
for marker_id, marker in aruco_detector.detected_markers.items():

    print(f'marker {marker_id} corners: ', marker.corners)

    # Do something with detected marker corners
    ...
```

diff --git a/docs/user_guide/aruco_markers/markers_pose_estimation.md b/docs/user_guide/aruco_markers/markers_pose_estimation.md
new file mode 100644
index 0000000..2459715
--- /dev/null
+++ b/docs/user_guide/aruco_markers/markers_pose_estimation.md
@@ -0,0 +1,20 @@

Markers pose estimation
=======================

After marker detection, it is possible to estimate each marker pose in camera axis.

![Pose estimation](../../img/pose_estimation.png)

``` python
# Estimate detected markers pose
aruco_detector.estimate_markers_pose()

# Get pose estimation for each detected marker
for marker_id, marker in aruco_detector.detected_markers.items():

    print(f'marker {marker_id} translation: ', marker.translation)
    print(f'marker {marker_id} rotation: ', marker.rotation)

    # Do something with each marker pose estimation
    ...
```
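To make the role of such a translation concrete, here is a minimal pinhole-projection sketch in plain Python (independent of ArGaze; the camera matrix values are made up for illustration, and lens distortion is ignored) showing how a point expressed in camera axis maps to frame coordinates:

``` python
def project_point(point, camera_matrix):
    """Project a 3D point (in camera axis, z > 0) to 2D pixel coordinates
    using an ideal pinhole model (no distortion)."""
    x, y, z = point
    fx, cx = camera_matrix[0][0], camera_matrix[0][2]
    fy, cy = camera_matrix[1][1], camera_matrix[1][2]
    return (fx * x / z + cx, fy * y / z + cy)

# Illustrative Full HD camera matrix: 1000 px focal length, principal point at frame center
K = [[1000, 0, 960], [0, 1000, 540], [0, 0, 1]]

# A marker center 10 cm right of the optical axis, 1 m away from the camera,
# lands 100 px right of the frame center
print(project_point((10, 0, 100), K))  # (1060.0, 540.0)
```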
\ No newline at end of file

diff --git a/docs/user_guide/aruco_markers/markers_scene_description.md b/docs/user_guide/aruco_markers/markers_scene_description.md
new file mode 100644
index 0000000..9938f23
--- /dev/null
+++ b/docs/user_guide/aruco_markers/markers_scene_description.md
@@ -0,0 +1,117 @@

Markers scene description
=========================

The ArGaze toolkit provides an ArUcoScene class to describe where ArUco markers are placed in a 3D model.

![ArUco scene](../../img/aruco_scene.png)

An ArUco scene is useful to:

* filter markers that belong to this predefined scene,
* check the consistency of detected markers according to the place where each marker is expected to be,
* estimate the pose of the scene from the pose of detected markers.

ArUco scene description uses the common OBJ file format that can be exported from most 3D editors. Notice that plane normals (vn) need to be exported.

``` obj
o DICT_APRILTAG_16h5#0_Marker
v -3.004536 0.022876 2.995370
v 2.995335 -0.015498 3.004618
v -2.995335 0.015498 -3.004618
v 3.004536 -0.022876 -2.995370
vn 0.0064 1.0000 -0.0012
s off
f 1//1 2//1 4//1 3//1
o DICT_APRILTAG_16h5#1_Marker
v -33.799068 46.450645 -32.200436
v -27.852505 47.243549 -32.102116
v -34.593925 52.396473 -32.076626
v -28.647360 53.189377 -31.978306
vn -0.0135 -0.0226 0.9997
s off
f 5//2 6//2 8//2 7//2
...
```

An ArUco scene description can also be written in a JSON file format.
``` json
{
    "dictionary": "DICT_ARUCO_ORIGINAL",
    "marker_size": 1,
    "places": {
        "0": {
            "translation": [0, 0, 0],
            "rotation": [0, 0, 0]
        },
        "1": {
            "translation": [10, 10, 0],
            "rotation": [0, 0, 0]
        },
        "2": {
            "translation": [0, 10, 0],
            "rotation": [0, 0, 0]
        }
    }
}
```

Here is a sample of code showing how to load an ArUcoScene OBJ file description:

``` python
from argaze.ArUcoMarkers import ArUcoScene

# Create an ArUco scene from an OBJ file description
aruco_scene = ArUcoScene.ArUcoScene.from_obj('./markers.obj')

# Print loaded marker places
for place_id, place in aruco_scene.places.items():

    print(f'place {place_id} for marker: ', place.marker.identifier)
    print(f'place {place_id} translation: ', place.translation)
    print(f'place {place_id} rotation: ', place.rotation)
```

## Markers filtering

Considering markers are detected, here is how to filter them to keep only those which belong to the scene:

``` python
scene_markers, remaining_markers = aruco_scene.filter_markers(aruco_detector.detected_markers)
```

## Marker poses consistency

Then, scene marker poses can be validated by verifying their spatial consistency, given an angle and a distance tolerance. This is particularly useful to discard ambiguous marker pose estimations when markers are parallel to the camera plane (see this [issue on the OpenCV Contribution repository](https://github.com/opencv/opencv_contrib/issues/3190#issuecomment-1181970839)).

``` python
# Check scene markers consistency with 10° angle tolerance and 1 cm distance tolerance
consistent_markers, inconsistent_markers, inconsistencies = aruco_scene.check_markers_consistency(scene_markers, 10, 1)
```

## Scene pose estimation

Several approaches are available to perform ArUco scene pose estimation from markers belonging to the scene.
The first approach considers that the scene pose can be estimated **from a single marker pose**:

``` python
# Let's select one consistent scene marker
marker_id, marker = consistent_markers.popitem()

# Estimate scene pose from a single marker
tvec, rmat = aruco_scene.estimate_pose_from_single_marker(marker)
```

The second approach considers that the scene pose can be estimated **by averaging several marker poses**:

``` python
# Estimate scene pose from all consistent scene markers
tvec, rmat = aruco_scene.estimate_pose_from_markers(consistent_markers)
```

The third approach is only available when ArUco markers are placed in such a configuration that it is possible to **define orthogonal axis**:

``` python
tvec, rmat = aruco_scene.estimate_pose_from_axis_markers(origin_marker, horizontal_axis_marker, vertical_axis_marker)
```
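As a closing illustration of the averaging approach, marker translations can be combined component-wise. This is a plain-Python sketch, not the ArGaze implementation; note that rotations cannot be averaged this naively and require, for example, quaternion averaging:

``` python
def average_translation(translations):
    """Average a list of 3D translation vectors component-wise."""
    n = len(translations)
    return tuple(sum(t[i] for t in translations) / n for i in range(3))

# Two markers at 1 m depth, 10 cm apart horizontally: the averaged
# scene translation lies halfway between them
print(average_translation([(0, 0, 100), (10, 0, 100)]))  # (5.0, 0.0, 100.0)
```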