Edit and execute pipeline
=========================

Once [ArUco markers are placed into a scene](aruco_marker_description.md), they can be detected with the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) class.

As [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) inherits from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame), it also benefits from all the services described in the [gaze analysis pipeline section](../gaze_analysis_pipeline/introduction.md).

Once defined, an ArUco marker pipeline needs to be embedded inside a context that provides it with both gaze positions and camera images to process.

![ArUco camera frame](../../img/aruco_camera_frame.png)

## Edit JSON configuration

Here is a simple JSON [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) configuration example:

```json
{
	"argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": {
		"name": "My FullHD camera",
		"size": [1920, 1080],
		"aruco_detector": {
			"dictionary": "DICT_APRILTAG_16h5"
		},
		"gaze_movement_identifier": {
			"argaze.GazeAnalysis.DispersionThresholdIdentification.GazeMovementIdentifier": {
				"deviation_max_threshold": 25,
				"duration_min_threshold": 150
			}
		},
		"image_parameters": {
			"background_weight": 1,
			"draw_detected_markers": {
				"color": [0, 255, 0],
				"draw_axes": {
					"thickness": 3
				}
			},
			"draw_gaze_positions": {
				"color": [0, 255, 255],
				"size": 2
			},
			"draw_fixations": {
				"deviation_circle_color": [255, 0, 255],
				"duration_border_color": [127, 0, 127],
				"duration_factor": 1e-2
			}, 
			"draw_saccades": {
				"line_color": [255, 0, 255]
			}
		}
	}
}
```

Let's understand the meaning of each JSON entry.

### argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera

The class name of the object being loaded.

### *name - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*

The name of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) frame. It is mostly useful for visualization purposes.

### *size - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*

The size of the [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) frame in pixels. Be aware that gaze positions have to be in the same range of values to be projected.
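
For instance, eye trackers commonly report gaze positions normalized to the [0, 1] range; these must be scaled to the frame size before being passed to the pipeline. A minimal sketch (the normalized input values are made up for illustration):

```python
# Frame size declared in the JSON configuration above.
frame_width, frame_height = 1920, 1080

# Hypothetical normalized gaze position as reported by an eye tracker.
norm_x, norm_y = 0.25, 0.75

# Scale it into the ArUcoCamera frame's pixel range.
pixel_x = int(norm_x * frame_width)   # 480
pixel_y = int(norm_y * frame_height)  # 810
```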

### *aruco_detector*

The first [ArUcoCamera](../../argaze.md/#argaze.ArUcoMarker.ArUcoCamera) pipeline step is to detect ArUco markers inside the input image.

![ArUco markers detection](../../img/aruco_camera_markers_detection.png)

The [ArUcoDetector](../../argaze.md/#argaze.ArUcoMarker.ArUcoDetector) is in charge of detecting all markers from a specific dictionary.

!!! warning "Mandatory"
	The JSON *aruco_detector* entry is mandatory.
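
Under the hood, detection relies on OpenCV's ArUco module. As an aside, the sketch below shows what detecting markers from the *DICT_APRILTAG_16h5* dictionary looks like in plain OpenCV (4.7 or later), independently of ArGaze; the image file name is an assumption:

```python
import cv2

# The same dictionary as the one declared in the JSON configuration.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_16h5)

# Detector with default detection parameters.
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Hypothetical camera image.
image = cv2.imread('camera_image.png')

# corners: one 4-corner array per detected marker; ids: their dictionary indices.
corners, ids, rejected = detector.detectMarkers(image)

print(f'{0 if ids is None else len(ids)} marker(s) detected')
```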

### *gaze_movement_identifier - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*

The first [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame) pipeline step is dedicated to identifying fixations or saccades from consecutive timestamped gaze positions.

![Gaze movement identification](../../img/aruco_camera_gaze_movement_identification.png)
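
With the dispersion-threshold identification algorithm configured above, a fixation is identified when consecutive gaze positions stay within a 25-pixel maximum deviation for at least 150 ms. Here is a simplified sketch of the underlying idea, not ArGaze's actual implementation:

```python
import math

def is_fixation(gaze_positions, deviation_max=25, duration_min=150):
	"""Tell whether timestamped gaze positions form a single fixation.

	gaze_positions: list of (timestamp_ms, x, y) tuples.
	"""
	timestamps = [t for t, _, _ in gaze_positions]

	# The samples must span at least the minimum duration.
	if timestamps[-1] - timestamps[0] < duration_min:
		return False

	# Deviation: greatest distance from any sample to the centroid.
	cx = sum(x for _, x, _ in gaze_positions) / len(gaze_positions)
	cy = sum(y for _, _, y in gaze_positions) / len(gaze_positions)
	deviation = max(math.hypot(x - cx, y - cy) for _, x, y in gaze_positions)

	return deviation <= deviation_max
```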

### *image_parameters - inherited from [ArFrame](../../argaze.md/#argaze.ArFeatures.ArFrame)*

The usual [ArFrame visualization parameters](../gaze_analysis_pipeline/visualization.md) plus one additional *draw_detected_markers* field.
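
Once every entry is edited, it can be worth checking that the file still parses as valid JSON, since a single misplaced comma will make the whole configuration fail to load. The standard library's *json.tool* module does this from the command line (the file name below is an assumption):

```shell
python -m json.tool aruco_camera_configuration.json
```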

## Pipeline execution

A pipeline needs to be embedded into a context to be executed.

Copy the ArUco marker pipeline configuration defined above into the following context configuration, in place of the *JSON CONFIGURATION* placeholder.

```json
{
    "argaze.utils.contexts.OpenCV.Movie": {
        "name": "Movie player",
        "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
        "pipeline": JSON CONFIGURATION
    }
}
```
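
If you prefer not to nest the pipeline configuration by hand, a small script can splice the two files together. A sketch using only the standard library (both file names are assumptions):

```python
import json

# Load the ArUcoCamera pipeline configuration edited above.
with open('aruco_camera_configuration.json') as pipeline_file:
	pipeline = json.load(pipeline_file)

# Build the movie player context around it.
context = {
	'argaze.utils.contexts.OpenCV.Movie': {
		'name': 'Movie player',
		'path': './src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4',
		'pipeline': pipeline
	}
}

# Write the combined configuration to be passed to the load command.
with open('configuration.json', 'w') as configuration_file:
	json.dump(context, configuration_file, indent='\t')
```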

Then, use the [*load* command](../utils/main_commands.md) to execute the context.

```shell
python -m argaze load CONFIGURATION
```

This command should open a GUI window showing the detected markers and, when the mouse moves over the window, circles around the identified cursor fixations.

![ArGaze load GUI](../../img/argaze_load_gui_opencv_pipeline.png)

!!! note ""

	At this point, the pipeline only processes gaze movement identification without any AOI support, as no scene description is provided in the JSON configuration file.

	Read the next chapters to learn [how to estimate scene pose](pose_estimation.md), [how to describe a 3D scene's AOI](aoi_3d_description.md) and [how to project them into the camera frame](aoi_3d_projection.md).