Diffstat (limited to 'docs/user_guide/utils')
-rw-r--r-- docs/user_guide/utils/demonstrations_scripts.md | 48
-rw-r--r-- docs/user_guide/utils/estimate_aruco_markers_pose.md | 60
-rw-r--r-- docs/user_guide/utils/main_commands.md (renamed from docs/user_guide/utils/ready-made_scripts.md) | 35
3 files changed, 106 insertions, 37 deletions
diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md
index f293980..59df85b 100644
--- a/docs/user_guide/utils/demonstrations_scripts.md
+++ b/docs/user_guide/utils/demonstrations_scripts.md
@@ -9,20 +9,45 @@ Collection of command-line scripts for demonstration purpose.
!!! note
*Use -h option to get command arguments documentation.*
+!!! note
+ Each demonstration outputs metrics into *_export/records* folder.
+
## Random context
-Load **random_context.json** file to analyze random gaze positions:
+Load **random_context.json** file to generate random gaze positions:
```shell
python -m argaze load ./src/argaze/utils/demo/random_context.json
```
-## OpenCV window context
+## OpenCV
+
+### Cursor context
+
+Load **opencv_cursor_context.json** file to capture cursor pointer positions over an OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json
+```
+
+### Movie context
+
+Load **opencv_movie_context.json** file to play back a movie and capture cursor pointer positions over an OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
+```
+
+### Camera context
+
+Edit **aruco_markers_pipeline.json** file to adapt the *size* to the camera resolution and to reduce the *sides_mask* value.
-Load **opencv_window_context.json** file to analyze mouse pointer positions over OpenCV window:
+Edit **opencv_camera_context.json** file to select the camera device identifier (default is 0).
+
+Then, load **opencv_camera_context.json** file to capture camera pictures and cursor pointer positions over an OpenCV window:
```shell
-python -m argaze load ./src/argaze/utils/demo/opencv_window_context.json
+python -m argaze load ./src/argaze/utils/demo/opencv_camera_context.json
```
## Tobii Pro Glasses 2
@@ -61,27 +86,24 @@ Then, load **tobii_live_stream_context.json** file to find ArUco marker into cam
python -m argaze load ./src/argaze/utils/demo/tobii_live_stream_context.json
```
-### Post-processing context
-
-!!! note
- This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+### Segment playback context
-Edit **tobii_post_processing_context.json** file to select an existing Tobii *segment* folder:
+Edit **tobii_segment_playback_context.json** file to select an existing Tobii *segment* folder:
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "record/segments/1",
"pipeline": "aruco_markers_pipeline.json"
}
}
```
-Then, load **tobii_post_processing_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load **tobii_segment_playback_context.json** file to find ArUco markers in the camera image and project gaze positions into AOI:
```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_segment_playback_context.json
```
## Pupil Invisible
diff --git a/docs/user_guide/utils/estimate_aruco_markers_pose.md b/docs/user_guide/utils/estimate_aruco_markers_pose.md
new file mode 100644
index 0000000..55bd232
--- /dev/null
+++ b/docs/user_guide/utils/estimate_aruco_markers_pose.md
@@ -0,0 +1,60 @@
+Estimate ArUco markers pose
+===========================
+
+This **ArGaze** application detects ArUco markers inside movie frames, then exports their pose estimation as .obj files into a folder.
+
+First, edit **utils/estimate_markers_pose/context.json** file to select a movie *path*.
+
+```json
+{
+ "argaze.utils.contexts.OpenCV.Movie" : {
+ "name": "ArUco markers pose estimator",
+ "path": "./src/argaze/utils/demo/tobii_record/segments/1/fullstream.mp4",
+ "pipeline": "pipeline.json"
+ }
+}
+```
+
+Second, edit **utils/estimate_markers_pose/pipeline.json** file to set up the ArUco camera *size* and the ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
+
+```json
+{
+ "argaze.ArUcoMarker.ArUcoCamera.ArUcoCamera": {
+ "name": "Full HD Camera",
+ "size": [1920, 1080],
+ "aruco_detector": {
+ "dictionary": "DICT_APRILTAG_16h5",
+ "pose_size": 4,
+ "pose_ids": [],
+ "parameters": {
+ "useAruco3Detection": true
+ },
+ "observers":{
+ "observers.ArUcoMarkersPoseRecorder": {
+ "output_folder": "_export/records/aruco_markers_group"
+ }
+ }
+ },
+ "sides_mask": 420,
+ "image_parameters": {
+ "background_weight": 1,
+ "draw_gaze_positions": {
+ "color": [0, 255, 255],
+ "size": 4
+ },
+ "draw_detected_markers": {
+ "color": [255, 255, 255],
+ "draw_axes": {
+ "thickness": 4
+ }
+ }
+ }
+ }
+}
+```
+
+Then, launch the application:
+
+```shell
+python -m argaze load ./src/argaze/utils/estimate_markers_pose/context.json
+``` \ No newline at end of file
diff --git a/docs/user_guide/utils/ready-made_scripts.md b/docs/user_guide/utils/main_commands.md
index 892fef8..c4887a4 100644
--- a/docs/user_guide/utils/ready-made_scripts.md
+++ b/docs/user_guide/utils/main_commands.md
@@ -1,15 +1,12 @@
-Ready-made scripts
-==================
+Main commands
+=============
-Collection of command-line scripts to provide useful features.
-
-!!! note
- *Consider that all inline commands below have to be executed at the root of ArGaze package folder.*
+The **ArGaze** package comes with top-level commands.
!!! note
*Use -h option to get command arguments documentation.*
-## Load ArContext JSON configuration
+## Load
Load and execute any [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) from a JSON CONFIGURATION file
@@ -17,6 +14,10 @@ Load and execute any [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) f
python -m argaze load CONFIGURATION
```
+This command should open a GUI window to display the image of the context's pipeline.
+
+![ArGaze load GUI](../../img/argaze_load_gui.png)
+
### Send command
Use -p option to enable pipe communication at given address:
@@ -34,36 +35,22 @@ For example:
echo "print(context)" > /tmp/argaze
```
-* Pause context processing:
+* Pause context:
```shell
echo "context.pause()" > /tmp/argaze
```
-* Resume context processing:
+* Resume context:
```shell
echo "context.resume()" > /tmp/argaze
```
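Commands can also be sent from another program instead of `echo`. The sketch below writes one command line to the pipe path used in the examples above; the `send_command` helper is hypothetical (not part of ArGaze) and assumes a context is already listening on the pipe:

```python
def send_command(command: str, pipe_path: str = "/tmp/argaze") -> None:
    """Write one command line to the ArGaze communication pipe."""
    # Opening a FIFO for writing blocks until a reader is attached,
    # so a context launched with the -p option must be running.
    with open(pipe_path, "w") as pipe:
        pipe.write(command + "\n")
```

For example, `send_command("context.pause()")` has the same effect as the `echo` command shown above.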
-## Edit JSON configuration
+## Edit
Modify the content of JSON CONFIGURATION file with another JSON CHANGES file then, save the result into an OUTPUT file
```shell
python -m argaze edit CONFIGURATION CHANGES OUTPUT
```
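The exact merge semantics are defined by the `edit` command itself; as an illustration only, the standalone sketch below applies CHANGES to CONFIGURATION as a recursive key override (an assumption about the behavior, not ArGaze code):

```python
def apply_changes(configuration: dict, changes: dict) -> dict:
    """Recursively override configuration keys with changes values."""
    merged = dict(configuration)
    for key, value in changes.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Descend into nested objects instead of replacing them whole.
            merged[key] = apply_changes(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical example: swap the pipeline file of a movie context.
configuration = {"argaze.utils.contexts.OpenCV.Movie": {"name": "Demo", "pipeline": "pipeline.json"}}
changes = {"argaze.utils.contexts.OpenCV.Movie": {"pipeline": "other_pipeline.json"}}
output = apply_changes(configuration, changes)
```

Untouched keys (like `name` above) are preserved in the OUTPUT file under this assumed behavior.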
-
-## Estimate ArUco markers pose
-
-This application detects ArUco markers inside a movie frame then, export pose estimation as .obj file into a folder.
-
-Firstly, edit **utils/estimate_markers_pose/context.json** file as to select a movie *path*.
-
-Sencondly, edit **utils/estimate_markers_pose/pipeline.json** file to setup ArUco detector *dictionary*, *pose_size* and *pose_ids* attributes.
-
-Then, launch the application.
-
-```shell
-python -m argaze load ./src/argaze/utils/estimate_markers_pose/context.json
-``` \ No newline at end of file