Markers scene description
=========================

The ArGaze toolkit provides an ArUcoScene class to describe where ArUco markers are placed in a 3D model.

![ArUco scene](../../img/aruco_scene.png)

An ArUco scene is useful to:

* filter markers that belong to this predefined scene,
* check the consistency of detected markers according to the place where each marker is expected to be,
* estimate the pose of the scene from the pose of detected markers.

The ArUco scene description uses the common OBJ file format, which can be exported from most 3D editors. Notice that plane normals (vn) need to be exported. Each marker place is described as an OBJ object whose name encodes the marker dictionary and identifier, as in the sample below.

``` obj
o DICT_APRILTAG_16h5#0_Marker
v -3.004536 0.022876 2.995370
v 2.995335 -0.015498 3.004618
v -2.995335 0.015498 -3.004618
v 3.004536 -0.022876 -2.995370
vn 0.0064 1.0000 -0.0012
s off
f 1//1 2//1 4//1 3//1
o DICT_APRILTAG_16h5#1_Marker
v -33.799068 46.450645 -32.200436
v -27.852505 47.243549 -32.102116
v -34.593925 52.396473 -32.076626
v -28.647360 53.189377 -31.978306
vn -0.0135 -0.0226 0.9997
s off
f 5//2 6//2 8//2 7//2
...
```

The ArUco scene description can also be written in JSON file format.

``` json
{
	"dictionary": "DICT_ARUCO_ORIGINAL",
	"marker_size": 1,
	"places": {
		"0": {
			"translation": [0, 0, 0],
			"rotation": [0, 0, 0]
		},
		"1": {
			"translation": [10, 10, 0],
			"rotation": [0, 0, 0]
		},
		"2": {
			"translation": [0, 10, 0],
			"rotation": [0, 0, 0]
		}
	}
}
```
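
A simple way to check such a JSON description is to load it with the Python standard library alone. This is only a minimal sketch assuming the layout shown above; the `markers.json` file name is hypothetical:

``` python
import json

# Load the hypothetical markers.json description shown above
with open('./markers.json') as json_file:
    scene_description = json.load(json_file)

print('dictionary:', scene_description['dictionary'])
print('marker size:', scene_description['marker_size'])

# Print each marker place with its translation and rotation
for place_id, place in scene_description['places'].items():
    print(f'place {place_id} translation:', place['translation'])
    print(f'place {place_id} rotation:', place['rotation'])
```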

Here is a code sample showing how to load an ArUcoScene from an OBJ file description:

``` python
from argaze.ArUcoMarkers import ArUcoScene

# Create an ArUco scene from an OBJ file description
aruco_scene = ArUcoScene.ArUcoScene.from_obj('./markers.obj')

# Print loaded marker places
for place_id, place in aruco_scene.places.items():

    print(f'place {place_id} for marker:', place.marker.identifier)
    print(f'place {place_id} translation:', place.translation)
    print(f'place {place_id} rotation:', place.rotation)
```

## Markers filtering

Once markers have been detected, here is how to filter them to keep only those which belong to the scene:

``` python
scene_markers, remaining_markers = aruco_scene.filter_markers(aruco_detector.detected_markers)
```
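
Assuming both returned collections are dictionaries keyed by marker identifier (a sketch of typical usage, not a guaranteed contract), the filtering result can be inspected like this:

``` python
# Report which detected markers belong to the scene and which were ignored
print(f'{len(scene_markers)} scene marker(s):', list(scene_markers.keys()))
print(f'{len(remaining_markers)} remaining marker(s):', list(remaining_markers.keys()))
```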

## Marker poses consistency

Then, scene marker poses can be validated by verifying their spatial consistency within given angle and distance tolerances. This is particularly useful to discard ambiguous marker pose estimations when markers are parallel to the camera plane (see [this issue on the OpenCV contrib repository](https://github.com/opencv/opencv_contrib/issues/3190#issuecomment-1181970839)).

``` python
# Check scene markers consistency with 10° angle tolerance and 1 cm distance tolerance
consistent_markers, unconsistent_markers, unconsistencies = aruco_scene.check_markers_consistency(scene_markers, 10, 1)
```
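
Assuming the returned collections follow the same dictionary convention as above, markers rejected by the consistency check can be reported before going further (the `unconsistencies` value is expected to carry details about why each marker was rejected):

``` python
# Report markers whose estimated pose does not fit the expected scene geometry
for marker_id in unconsistent_markers:
    print(f'marker {marker_id} pose is not consistent with the scene description')
```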

## Scene pose estimation

Several approaches are available to perform ArUco scene pose estimation from markers belonging to the scene.

The first approach considers that scene pose can be estimated **from a single marker pose**:

``` python
# Let's select one consistent scene marker
marker_id, marker = consistent_markers.popitem()

# Estimate scene pose from a single marker
tvec, rmat = aruco_scene.estimate_pose_from_single_marker(marker)
```

The second approach considers that scene pose can be estimated **by averaging several marker poses**:

``` python
# Estimate scene pose from all consistent scene markers
tvec, rmat = aruco_scene.estimate_pose_from_markers(consistent_markers)
```

The third approach is only available when ArUco markers are placed in a configuration that makes it possible to **define orthogonal axes**:

``` python
tvec, rmat = aruco_scene.estimate_pose_from_axis_markers(origin_marker, horizontal_axis_marker, vertical_axis_marker)
```
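
Whatever the chosen approach, the estimated pose is returned as a translation vector (`tvec`) and a rotation matrix (`rmat`). As an illustration only (this helper is not part of ArGaze, and it assumes the usual OpenCV convention where the pose maps scene coordinates into the camera frame), here is how both outputs can be combined into a single 4x4 homogeneous transformation matrix with NumPy:

``` python
import numpy

def to_homogeneous_transform(tvec, rmat):
    """Combine a 3-element translation vector and a 3x3 rotation matrix
    into a 4x4 homogeneous transformation matrix."""

    transform = numpy.eye(4)
    transform[0:3, 0:3] = numpy.asarray(rmat).reshape(3, 3)
    transform[0:3, 3] = numpy.asarray(tvec).flatten()

    return transform

# Assuming the OpenCV convention, this matrix maps scene coordinates into camera coordinates
scene_to_camera = to_homogeneous_transform(tvec, rmat)
print(scene_to_camera)
```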