Diffstat (limited to 'docs/user_guide/utils/demonstrations_scripts.md')

 -rw-r--r-- docs/user_guide/utils/demonstrations_scripts.md | 46
 1 file changed, 33 insertions(+), 13 deletions(-)
diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md
index f293980..1d136f2 100644
--- a/docs/user_guide/utils/demonstrations_scripts.md
+++ b/docs/user_guide/utils/demonstrations_scripts.md
@@ -9,20 +9,43 @@ Collection of command-line scripts for demonstration purpose.
 !!! note
     *Use -h option to get command arguments documentation.*
 
+!!! note
+    Each demonstration outputs metrics into *_export/records* folder.
+
 ## Random context
 
-Load **random_context.json** file to analyze random gaze positions:
+Load **random_context.json** file to generate random gaze positions:
 
 ```shell
 python -m argaze load ./src/argaze/utils/demo/random_context.json
 ```
 
-## OpenCV window context
+## OpenCV cursor context
 
-Load **opencv_window_context.json** file to analyze mouse pointer positions over OpenCV window:
+Load **opencv_cursor_context.json** file to capture cursor pointer positions over OpenCV window:
 
 ```shell
-python -m argaze load ./src/argaze/utils/demo/opencv_window_context.json
+python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json
+```
+
+## OpenCV movie context
+
+Load **opencv_movie_context.json** file to playback movie pictures and also capture cursor pointer positions over OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
+```
+
+## OpenCV camera context
+
+Edit **aruco_markers_pipeline.json** file as to adapt the *size* to the camera resolution and to reduce the value of the *sides_mask*.
+
+Edit **opencv_camera_context.json** file as to select camera device identifier (default is 0).
+
+Then, load **opencv_camera_context.json** file to capture camera pictures and also capture cursor pointer positions over OpenCV window:
+
+```shell
+python -m argaze load ./src/argaze/utils/demo/opencv_camera_context.json
 ```
 
 ## Tobii Pro Glasses 2
@@ -61,27 +84,24 @@ Then, load **tobii_live_stream_context.json** file to find ArUco marker into cam
 python -m argaze load ./src/argaze/utils/demo/tobii_live_stream_context.json
 ```
 
-### Post-processing context
-
-!!! note
-    This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
+### Segment playback context
 
-Edit **tobii_post_processing_context.json** file to select an existing Tobii *segment* folder:
+Edit **tobii_segment_playback_context.json** file to select an existing Tobii *segment* folder:
 
 ```json
 {
-    "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
-        "name": "Tobii Pro Glasses 2 post-processing",
+    "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+        "name": "Tobii Pro Glasses 2 segment playback",
         "segment": "record/segments/1",
         "pipeline": "aruco_markers_pipeline.json"
     }
 }
 ```
 
-Then, load **tobii_post_processing_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load **tobii_segment_playback_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
 
 ```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_segment_playback_context.json
 ```
 
 ## Pupil Invisible
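For reference outside the diff markup, the segment playback context file that this patch renames has the following shape. The class path and key names are taken verbatim from the patched documentation; the *segment* value below is a hypothetical placeholder standing in for the path to one's own Tobii Pro Glasses 2 recording segment:

```json
{
    "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
        "name": "Tobii Pro Glasses 2 segment playback",
        "segment": "/path/to/your/record/segments/1",
        "pipeline": "aruco_markers_pipeline.json"
    }
}
```

Per the note the patch adds at the top of the page, loading such a context with `python -m argaze load` writes its metrics under the *_export/records* folder.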