authorThéo de la Hogue2024-07-09 13:42:21 +0200
committerThéo de la Hogue2024-07-09 13:42:21 +0200
commit94ccab6f91c00b1f669b09445bd5af8c32957e72 (patch)
treea910239e1892ae420ae1f33442d17454c6096902
parent2753c71f0121cd380b67b150e1ea296bd7e39600 (diff)
Replacing the word 'processing' with 'capture' or 'playback'.
-rw-r--r-- docs/index.md | 2
-rw-r--r-- docs/use_cases/air_controller_gaze_study/context.md | 12
-rw-r--r-- docs/use_cases/air_controller_gaze_study/introduction.md | 4
-rw-r--r-- docs/use_cases/air_controller_gaze_study/pipeline.md | 2
-rw-r--r-- docs/use_cases/pilot_gaze_monitoring/context.md | 6
-rw-r--r-- docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md | 72
-rw-r--r-- docs/user_guide/eye_tracking_context/advanced_topics/scripting.md | 8
-rw-r--r-- docs/user_guide/eye_tracking_context/configuration_and_execution.md | 6
-rw-r--r-- docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md | 8
-rw-r--r-- docs/user_guide/utils/demonstrations_scripts.md | 21
-rw-r--r-- docs/user_guide/utils/main_commands.md | 4
-rw-r--r-- src/argaze/ArFeatures.py | 34
-rw-r--r-- src/argaze/__main__.py | 18
-rw-r--r-- src/argaze/utils/contexts/OpenCV.py | 14
-rw-r--r-- src/argaze/utils/contexts/TobiiProGlasses2.py | 12
-rw-r--r-- src/argaze/utils/demo/tobii_segment_playback_context.json (renamed from src/argaze/utils/demo/tobii_post_processing_context.json) | 4
-rw-r--r-- utils/processTobiiRecords.sh | 8
17 files changed, 116 insertions, 119 deletions
diff --git a/docs/index.md b/docs/index.md
index 2b668a3..ca9271a 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -14,7 +14,7 @@ By offering a wide array of gaze metrics and supporting easy extension to incorp
## Eye tracking context
-**ArGaze** facilitates the integration of both **screen-based and head-mounted** eye tracking systems for **real-time and/or post-processing analysis**.
+**ArGaze** facilitates the integration of both **screen-based and head-mounted** eye tracking systems for **live data capture and subsequent data playback**.
[Learn how to handle various eye tracking contexts by reading the dedicated user guide section](./user_guide/eye_tracking_context/introduction.md).
diff --git a/docs/use_cases/air_controller_gaze_study/context.md b/docs/use_cases/air_controller_gaze_study/context.md
index ca9adf7..d32095b 100644
--- a/docs/use_cases/air_controller_gaze_study/context.md
+++ b/docs/use_cases/air_controller_gaze_study/context.md
@@ -1,18 +1,18 @@
-Live streaming context
+Data playback context
======================
The context handles incoming eye tracker data before passing them to a processing pipeline.
-## post_processing_context.json
+## data_playback_context.json
-For this use case we need to read Tobii Pro Glasses 2 records: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to read data from records made by this device.
+For this use case we need to read Tobii Pro Glasses 2 records: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to play back data from records made by this device.
-While *segment* entries are specific to the [TobiiProGlasses2.PostProcessing](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.PostProcessing) class, *name* and *pipeline* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
+While *segment* entries are specific to the [TobiiProGlasses2.SegmentPlayback](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback) class, *name* and *pipeline* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing": {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback": {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "/Volumes/projects/fbr6k3e/records/4rcbdzk/segments/1",
"pipeline": "post_processing_pipeline.json"
}
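For context, here is a minimal hedged sketch of how such a segment playback configuration might be driven from a script rather than from the command line, assuming the `argaze.load` function returns the context object described by the JSON file; the waiting loop body is illustrative:

```python
from argaze import load

# Enter the segment playback context (which starts reading the record)
# and wait until the whole segment has been played back.
with load('data_playback_context.json') as context:

    while context.is_running():
        ...
```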
diff --git a/docs/use_cases/air_controller_gaze_study/introduction.md b/docs/use_cases/air_controller_gaze_study/introduction.md
index 313e492..b7cccbe 100644
--- a/docs/use_cases/air_controller_gaze_study/introduction.md
+++ b/docs/use_cases/air_controller_gaze_study/introduction.md
@@ -26,14 +26,14 @@ A traffic simulation of moderate difficulty with a maximum of 13 and 16 aircraft
The setup to integrate **ArGaze** into the experiment is defined by 3 main files detailed in the next chapters:
-* The context file that reads gaze data and scene camera video records: [post_processing_context.json](context.md)
+* The context file that plays back gaze data and scene camera video records: [data_playback_context.json](context.md)
* The pipeline file that processes gaze data and scene camera video: [post_processing_pipeline.json](pipeline.md)
* The observers file that exports analysis outputs: [observers.py](observers.md)
As with any **ArGaze** setup, it is loaded by executing the [*load* command](../../user_guide/utils/main_commands.md):
```shell
-python -m argaze load post_processing_context.json
+python -m argaze load data_playback_context.json
```
This command opens one GUI window per frame (one for the scene camera, one for the sector screen and one for the info screen), allowing gaze mapping to be monitored while processing.
diff --git a/docs/use_cases/air_controller_gaze_study/pipeline.md b/docs/use_cases/air_controller_gaze_study/pipeline.md
index ec1aa59..39d6427 100644
--- a/docs/use_cases/air_controller_gaze_study/pipeline.md
+++ b/docs/use_cases/air_controller_gaze_study/pipeline.md
@@ -1,4 +1,4 @@
-Live processing pipeline
+Post processing pipeline
========================
The pipeline processes camera image and gaze data to enable gaze mapping and gaze analysis.
diff --git a/docs/use_cases/pilot_gaze_monitoring/context.md b/docs/use_cases/pilot_gaze_monitoring/context.md
index 71d2628..477276d 100644
--- a/docs/use_cases/pilot_gaze_monitoring/context.md
+++ b/docs/use_cases/pilot_gaze_monitoring/context.md
@@ -1,11 +1,11 @@
-Live streaming context
-======================
+Data capture context
+====================
The context handles incoming eye tracker data before passing them to a processing pipeline.
## live_streaming_context.json
-For this use case we need to connect to a Tobii Pro Glasses 2 device: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to live stream data from this device.
+For this use case we need to connect to a Tobii Pro Glasses 2 device: **ArGaze** provides a [ready-made context](../../user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md) class to capture data from this device.
While *address*, *project*, *participant* and *configuration* entries are specific to the [TobiiProGlasses2.LiveStream](../../argaze.md/#argaze.utils.contexts.TobiiProGlasses2.LiveStream) class, *name*, *pipeline* and *observers* entries are part of the parent [ArContext](../../argaze.md/#argaze.ArFeatures.ArContext) class.
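To illustrate how such a capture context might be driven once configured, here is a minimal hedged sketch, assuming the `argaze.load` function returns the context object described by **live_streaming_context.json** and that the device is reachable at the configured address; the waiting loop body is illustrative:

```python
from argaze import load

# Enter the live streaming context (which starts data capture), run the
# device calibration once, then let the capture threads feed the pipeline.
with load('live_streaming_context.json') as context:

    # calibrate() is the abstract DataCaptureContext method implemented by LiveStream
    context.calibrate()

    while context.is_running():
        ...
```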
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
index c163696..0702c8e 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/context_definition.md
@@ -3,27 +3,27 @@ Define a context class
The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class defines a generic base class interface to handle incoming eye tracker data before passing them to a processing pipeline, according to the [Python context manager feature](https://docs.python.org/3/reference/datamodel.html#context-managers).
-The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides playback features to stop or pause processings, performance assement features to measure how many times processings are called and the time spent by the process.
+The [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) class interface provides control features to stop or pause working threads, and performance assessment features to measure how many times processing is called and the time spent by the process.
-Besides, there is also a [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to write specific device calibration process.
+Besides, there is also a [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines an abstract *calibrate* method to implement a device-specific calibration process.
-In the same way, there is a [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines abstract *previous* and *next* playback methods to move into record's frames and also defines *duration* and *progression* properties to get information about a record length and processing advancement.
+In the same way, there is a [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class that inherits from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext) and that defines abstract *previous* and *next* playback methods to move through a record's frames, and also defines *duration* and *progression* properties to get information about a record's length and playback advancement.
-Finally, a specific eye tracking context can be defined into a Python file by writing a class that inherits either from [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [LiveProcessingContext](../../../argaze.md/#argaze.ArFeatures.LiveProcessingContext) or [PostProcessingContext](../../../argaze.md/#argaze.ArFeatures.PostProcessingContext) class.
+Finally, a specific eye tracking context can be defined in a Python file by writing a class that inherits from either the [ArContext](../../../argaze.md/#argaze.ArFeatures.ArContext), [DataCaptureContext](../../../argaze.md/#argaze.ArFeatures.DataCaptureContext) or [DataPlaybackContext](../../../argaze.md/#argaze.ArFeatures.DataPlaybackContext) class.
-## Write live processing context
+## Write data capture context
-Here is a live processing context example that processes gaze positions and camera images in two separated threads:
+Here is a data capture context example that captures gaze positions and camera images in two separate threads:
```python
from argaze import ArFeatures, DataFeatures
-class LiveProcessingExample(ArFeatures.LiveProcessingContext):
+class DataCaptureExample(ArFeatures.DataCaptureContext):
@DataFeatures.PipelineStepInit
def __init__(self, **kwargs):
- # Init LiveProcessingContext class
+ # Init DataCaptureContext class
super().__init__()
# Init private attribute
@@ -45,23 +45,23 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
# Start context according to any specific parameter
... self.parameter
- # Start a gaze position processing thread
- self.__gaze_thread = threading.Thread(target = self.__gaze_position_processing)
+ # Start a gaze position capture thread
+ self.__gaze_thread = threading.Thread(target = self.__gaze_position_capture)
self.__gaze_thread.start()
- # Start a camera image processing thread if applicable
- self.__camera_thread = threading.Thread(target = self.__camera_image_processing)
+ # Start a camera image capture thread if applicable
+ self.__camera_thread = threading.Thread(target = self.__camera_image_capture)
self.__camera_thread.start()
return self
- def __gaze_position_processing(self):
- """Process gaze position."""
+ def __gaze_position_capture(self):
+ """Capture gaze position."""
- # Processing loop
+ # Capture loop
while self.is_running():
- # Pause processing
+ # Pause capture
if not self.is_paused():
# Assuming that timestamp, x and y values are available
@@ -73,13 +73,13 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
# Wait some time eventually
...
- def __camera_image_processing(self):
- """Process camera image if applicable."""
+ def __camera_image_capture(self):
+ """Capture camera image if applicable."""
- # Processing loop
+ # Capture loop
while self.is_running():
- # Pause processing
+ # Pause capture
if not self.is_paused():
# Assuming that timestamp, camera_image are available
@@ -95,10 +95,10 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
def __exit__(self, exception_type, exception_value, exception_traceback):
"""End context."""
- # Stop processing loops
+ # Stop capture loops
self.stop()
- # Stop processing threads
+ # Stop capture threads
threading.Thread.join(self.__gaze_thread)
threading.Thread.join(self.__camera_thread)
@@ -108,19 +108,19 @@ class LiveProcessingExample(ArFeatures.LiveProcessingContext):
...
```
-## Write post processing context
+## Write data playback context
-Here is a post processing context example that processes gaze positions and camera images in a same thread:
+Here is a data playback context example that reads gaze positions and camera images in the same thread:
```python
from argaze import ArFeatures, DataFeatures
-class PostProcessingExample(ArFeatures.PostProcessingContext):
+class DataPlaybackExample(ArFeatures.DataPlaybackContext):
@DataFeatures.PipelineStepInit
def __init__(self, **kwargs):
- # Init LiveProcessingContext class
+ # Init DataPlaybackContext class
super().__init__()
# Init private attribute
@@ -142,19 +142,19 @@ class PostProcessingExample(ArFeatures.PostProcessingContext):
# Start context according to any specific parameter
... self.parameter
- # Start a reading data thread
- self.__read_thread = threading.Thread(target = self.__data_reading)
- self.__read_thread.start()
+ # Start a data playback thread
+ self.__data_thread = threading.Thread(target = self.__data_playback)
+ self.__data_thread.start()
return self
- def __data_reading(self):
- """Process gaze position and camera image if applicable."""
+ def __data_playback(self):
+ """Playback gaze position and camera image if applicable."""
- # Processing loop
+ # Playback loop
while self.is_running():
- # Pause processing
+ # Pause playback
if not self.is_paused():
# Assuming that timestamp, camera_image are available
@@ -176,11 +176,11 @@ class PostProcessingExample(ArFeatures.PostProcessingContext):
def __exit__(self, exception_type, exception_value, exception_traceback):
"""End context."""
- # Stop processing loops
+ # Stop playback loop
self.stop()
- # Stop processing threads
- threading.Thread.join(self.__read_thread)
+ # Stop playback thread
+ threading.Thread.join(self.__data_thread)
def previous(self):
"""Go to previous camera image frame."""
diff --git a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
index 8753eb6..d8eb389 100644
--- a/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
+++ b/docs/user_guide/eye_tracking_context/advanced_topics/scripting.md
@@ -68,12 +68,12 @@ from argaze import ArFeatures
# Check context type
- # Live processing case: calibration method is available
- if issubclass(type(context), ArFeatures.LiveProcessingContext):
+ # Data capture case: calibration method is available
+ if issubclass(type(context), ArFeatures.DataCaptureContext):
...
- # Post processing case: more playback methods are available
- if issubclass(type(context), ArFeatures.PostProcessingContext):
+ # Data playback case: playback methods are available
+ if issubclass(type(context), ArFeatures.DataPlaybackContext):
...
# Check pipeline type
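As a complement, here is a minimal hedged sketch of playback-specific scripting, assuming a `context` object loaded beforehand and that *duration* is expressed in milliseconds (consistent with the progress display in the `argaze.__main__` module):

```python
from argaze import ArFeatures

# Data playback case: report playback advancement using the
# duration and progression members.
if issubclass(type(context), ArFeatures.DataPlaybackContext):

    played_time = context.progression * context.duration * 1e-3
    total_time = context.duration * 1e-3
    print(f'{played_time:.1f}s played back out of {total_time:.1f}s')
```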
diff --git a/docs/user_guide/eye_tracking_context/configuration_and_execution.md b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
index f13c6a2..e1123fb 100644
--- a/docs/user_guide/eye_tracking_context/configuration_and_execution.md
+++ b/docs/user_guide/eye_tracking_context/configuration_and_execution.md
@@ -3,9 +3,9 @@ Edit and execute context
The [utils.contexts module](../../argaze.md/#argaze.utils.contexts) provides ready-made contexts like:
-* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) live stream and post processing contexts,
-* [Pupil Labs](context_modules/pupil_labs.md) live stream context,
-* [OpenCV](context_modules/opencv.md) window cursor position and movie processing,
+* [Tobii Pro Glasses 2](context_modules/tobii_pro_glasses_2.md) data capture and data playback contexts,
+* [Pupil Labs](context_modules/pupil_labs.md) data capture context,
+* [OpenCV](context_modules/opencv.md) window cursor position capture and movie playback,
* [Random](context_modules/random.md) gaze position generator.
## Edit JSON configuration
diff --git a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
index fba6931..6ff44bd 100644
--- a/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
+++ b/docs/user_guide/eye_tracking_context/context_modules/tobii_pro_glasses_2.md
@@ -42,16 +42,16 @@ Read more about [ArContext base class in code reference](../../../argaze.md/#arg
}
```
-## Post Processing
+## Segment Playback
-::: argaze.utils.contexts.TobiiProGlasses2.PostProcessing
+::: argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback
### JSON sample
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "./src/argaze/utils/demo/tobii_record/segments/1",
"pipeline": ...
}
diff --git a/docs/user_guide/utils/demonstrations_scripts.md b/docs/user_guide/utils/demonstrations_scripts.md
index dd1b8e0..79a9b40 100644
--- a/docs/user_guide/utils/demonstrations_scripts.md
+++ b/docs/user_guide/utils/demonstrations_scripts.md
@@ -11,7 +11,7 @@ Collection of command-line scripts for demonstration purpose.
## Random context
-Load **random_context.json** file to process random gaze positions:
+Load **random_context.json** file to generate random gaze positions:
```shell
python -m argaze load ./src/argaze/utils/demo/random_context.json
@@ -19,7 +19,7 @@ python -m argaze load ./src/argaze/utils/demo/random_context.json
## OpenCV cursor context
-Load **opencv_cursor_context.json** file to process cursor pointer positions over OpenCV window:
+Load **opencv_cursor_context.json** file to capture cursor pointer positions over OpenCV window:
```shell
python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json
@@ -27,7 +27,7 @@ python -m argaze load ./src/argaze/utils/demo/opencv_cursor_context.json
## OpenCV movie context
-Load **opencv_movie_context.json** file to process movie pictures and also cursor pointer positions over OpenCV window:
+Load **opencv_movie_context.json** file to play back movie images and capture cursor pointer positions over OpenCV window:
```shell
python -m argaze load ./src/argaze/utils/demo/opencv_movie_context.json
@@ -69,27 +69,24 @@ Then, load **tobii_live_stream_context.json** file to find ArUco marker into cam
python -m argaze load ./src/argaze/utils/demo/tobii_live_stream_context.json
```
-### Post-processing context
+### Segment playback context
-!!! note
- This demonstration requires to print **A3_demo.pdf** file located in *./src/argaze/utils/demo/* folder on A3 paper sheet.
-
-Edit **tobii_post_processing_context.json** file to select an existing Tobii *segment* folder:
+Edit **tobii_segment_playback_context.json** file to select an existing Tobii *segment* folder:
```json
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "record/segments/1",
"pipeline": "aruco_markers_pipeline.json"
}
}
```
-Then, load **tobii_post_processing_context.json** file to find ArUco marker into camera image and, project gaze positions into AOI:
+Then, load **tobii_segment_playback_context.json** file to find ArUco markers in the camera image and project gaze positions into AOI:
```shell
-python -m argaze load ./src/argaze/utils/demo/tobii_post_processing_context.json
+python -m argaze load ./src/argaze/utils/demo/tobii_segment_playback_context.json
```
## Pupil Invisible
diff --git a/docs/user_guide/utils/main_commands.md b/docs/user_guide/utils/main_commands.md
index 4dd3434..c4887a4 100644
--- a/docs/user_guide/utils/main_commands.md
+++ b/docs/user_guide/utils/main_commands.md
@@ -35,13 +35,13 @@ For example:
echo "print(context)" > /tmp/argaze
```
-* Pause context processing:
+* Pause context:
```shell
echo "context.pause()" > /tmp/argaze
```
-* Resume context processing:
+* Resume context:
```shell
echo "context.resume()" > /tmp/argaze
diff --git a/src/argaze/ArFeatures.py b/src/argaze/ArFeatures.py
index 8d9eceb..a36cb94 100644
--- a/src/argaze/ArFeatures.py
+++ b/src/argaze/ArFeatures.py
@@ -1521,7 +1521,7 @@ class ArContext(DataFeatures.PipelineStepObject):
self._image_parameters = DEFAULT_ARCONTEXT_IMAGE_PARAMETERS
@property
- def pipeline(self) -> DataFeatures.PipelineStepObject:
+ def pipeline(self) -> ArFrame|ArCamera:
"""ArFrame used to process gaze data or ArCamera used to process gaze data and video of environment."""
return self.__pipeline
@@ -1538,7 +1538,7 @@ class ArContext(DataFeatures.PipelineStepObject):
return self.__exceptions
def as_dict(self) -> dict:
- """Export ArContext properties as dictionary."""
+ """Export context properties as dictionary."""
return {
**DataFeatures.PipelineStepObject.as_dict(self),
@@ -1548,13 +1548,13 @@ class ArContext(DataFeatures.PipelineStepObject):
@DataFeatures.PipelineStepEnter
def __enter__(self):
- """Enter into ArContext."""
+ """Enter into context."""
return self
@DataFeatures.PipelineStepExit
def __exit__(self, exception_type, exception_value, exception_traceback):
- """Exit from ArContext."""
+ """Exit from context."""
pass
def _process_gaze_position(self, timestamp: int | float, x: int | float = None, y: int | float = None, precision: int | float = None):
@@ -1709,24 +1709,24 @@ class ArContext(DataFeatures.PipelineStepObject):
@DataFeatures.PipelineStepMethod
def pause(self):
- """Pause pipeline processing."""
+ """Pause context."""
self._pause_event.set()
def is_paused(self) -> bool:
- """Is pipeline processing paused?"""
+ """Is context paused?"""
return self._pause_event.is_set()
@DataFeatures.PipelineStepMethod
def resume(self):
- """Resume pipeline processing."""
+ """Resume context."""
self._pause_event.clear()
-class LiveProcessingContext(ArContext):
+class DataCaptureContext(ArContext):
"""
- Defines abstract live data processing context.
+ Defines abstract data capture context.
"""
@DataFeatures.PipelineStepInit
@@ -1739,14 +1739,14 @@ class LiveProcessingContext(ArContext):
raise NotImplementedError
-# Define default PostProcessingContext image parameters
-DEFAULT_POST_PROCESSING_CONTEXT_IMAGE_PARAMETERS = {
+# Define default DataPlaybackContext image parameters
+DEFAULT_DATA_PLAYBACK_CONTEXT_IMAGE_PARAMETERS = {
"draw_progression": True
}
-class PostProcessingContext(ArContext):
+class DataPlaybackContext(ArContext):
"""
- Defines abstract post data processing context.
+ Defines abstract data playback context.
"""
@DataFeatures.PipelineStepInit
@@ -1754,7 +1754,7 @@ class PostProcessingContext(ArContext):
super().__init__()
- self._image_parameters = {**DEFAULT_ARCONTEXT_IMAGE_PARAMETERS, **DEFAULT_POST_PROCESSING_CONTEXT_IMAGE_PARAMETERS}
+ self._image_parameters = {**DEFAULT_ARCONTEXT_IMAGE_PARAMETERS, **DEFAULT_DATA_PLAYBACK_CONTEXT_IMAGE_PARAMETERS}
def previous(self):
"""Go to previous frame"""
@@ -1774,19 +1774,19 @@ class PostProcessingContext(ArContext):
@property
def progression(self) -> float:
- """Get data processing progression between 0 and 1."""
+ """Get data playback progression between 0 and 1."""
raise NotImplementedError
@DataFeatures.PipelineStepImage
def image(self, draw_progression: bool = True, **kwargs):
"""
- Get pipeline image with post processing information.
+ Get pipeline image with data playback information.
Parameters:
draw_progression: draw progress bar
"""
- logging.debug('PostProcessingContext.image %s', self.name)
+ logging.debug('DataPlaybackContext.image %s', self.name)
image = super().image(**kwargs)
height, width, _ = image.shape
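For reference, the pause/resume API above follows the standard `threading.Event` pattern; here is a minimal, self-contained sketch of that pattern with a hypothetical worker loop (not the actual ArContext internals):

```python
import threading
import time

pause_event = threading.Event()
stop_event = threading.Event()

def capture_loop():
    """Poll the events between iterations, as context worker threads do."""
    while not stop_event.is_set():
        if not pause_event.is_set():
            pass  # process one gaze position or camera image here
        time.sleep(0.01)  # wait some time eventually

thread = threading.Thread(target = capture_loop)
thread.start()

pause_event.set()    # pause, like ArContext.pause()
pause_event.clear()  # resume, like ArContext.resume()

stop_event.set()     # stop, ending the capture loop
thread.join()
```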
diff --git a/src/argaze/__main__.py b/src/argaze/__main__.py
index 76e9664..f759be0 100644
--- a/src/argaze/__main__.py
+++ b/src/argaze/__main__.py
@@ -27,7 +27,7 @@ import stat
from . import load
from .DataFeatures import SharedObjectBusy
-from .ArFeatures import ArCamera, ArContext, PostProcessingContext, LiveProcessingContext
+from .ArFeatures import ArCamera, ArContext, DataPlaybackContext, DataCaptureContext
from .utils.UtilsFeatures import print_progress_bar
import cv2
@@ -68,7 +68,7 @@ def load_context(args):
# Blank line
info_stack += 1
- if issubclass(type(context), LiveProcessingContext):
+ if issubclass(type(context), DataCaptureContext):
info_stack += 1
cv2.putText(image, f'Press Enter to start calibration', (int(width/4)+20, int(height/3)+(info_stack*40)), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 1, cv2.LINE_AA)
@@ -76,7 +76,7 @@ def load_context(args):
info_stack += 1
cv2.putText(image, f'Press r to start/stop recording', (int(width/4)+20, int(height/3)+(info_stack*40)), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 1, cv2.LINE_AA)
- if issubclass(type(context), PostProcessingContext):
+ if issubclass(type(context), DataPlaybackContext):
info_stack += 1
cv2.putText(image, f'Press Space bar to pause/resume processing', (int(width/4)+20, int(height/3)+(info_stack*40)), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 1, cv2.LINE_AA)
@@ -199,8 +199,8 @@ def load_context(args):
raise KeyboardInterrupt()
- # Keys specific to live processing contexts
- if issubclass(type(context), LiveProcessingContext):
+ # Keys specific to data capture contexts
+ if issubclass(type(context), DataCaptureContext):
# Enter: start calibration
if key_pressed == 13:
@@ -222,10 +222,10 @@ def load_context(args):
context.create_recording()
context.start_recording()
- # Keys specific to post processing contexts
- if issubclass(type(context), PostProcessingContext):
+ # Keys specific to data playback contexts
+ if issubclass(type(context), DataPlaybackContext):
- # Space bar: pause/resume pipeline processing
+ # Space bar: pause/resume data playback
if key_pressed == 32:
@@ -250,7 +250,7 @@ def load_context(args):
# Window mode off
else:
- if issubclass(type(context), PostProcessingContext):
+ if issubclass(type(context), DataPlaybackContext):
prefix = f'Progression'
suffix = f'| {int(context.progression*context.duration * 1e-3)}s in {int(time.time()-start_time)}s'
diff --git a/src/argaze/utils/contexts/OpenCV.py b/src/argaze/utils/contexts/OpenCV.py
index 273705a..ff3ed82 100644
--- a/src/argaze/utils/contexts/OpenCV.py
+++ b/src/argaze/utils/contexts/OpenCV.py
@@ -27,7 +27,7 @@ from argaze import ArFeatures, DataFeatures
class Cursor(ArFeatures.ArContext):
- """Process cursor position over OpenCV window.
+ """Capture cursor position over OpenCV window.
!!! warning
It is assumed that an OpenCV window with the same name than the context is used to display context's pipeline image.
@@ -36,7 +36,7 @@ class Cursor(ArFeatures.ArContext):
@DataFeatures.PipelineStepInit
def __init__(self, **kwargs):
- # Init LiveProcessingContext class
+ # Init ArContext class
super().__init__()
@DataFeatures.PipelineStepEnter
@@ -75,7 +75,7 @@ class Cursor(ArFeatures.ArContext):
class Movie(Cursor):
- """Process movie images and cursor position over OpenCV window.
+ """Playback movie images and capture cursor position over OpenCV window.
!!! warning
It is assumed that an OpenCV window with the same name than the context is used to display context's pipeline image.
@@ -139,10 +139,10 @@ class Movie(Cursor):
current_image_time = self.__movie.get(cv2.CAP_PROP_POS_MSEC)
self.__next_image_index = 0 #int(self.__start * self.__movie_fps)
- while not self._stop_event.is_set():
+ while self.is_running():
# Check pause event (and stop event)
- while self._pause_event.is_set() and not self._stop_event.is_set():
+ while self.is_paused() and self.is_running():
logging.debug('> reading is paused at %i', current_image_time)
@@ -182,7 +182,7 @@ class Movie(Cursor):
# Exit from Cursor context
super().__exit__(exception_type, exception_value, exception_traceback)
- # Close data stream
+ # Close data capture
self.stop()
# Stop reading thread
@@ -217,7 +217,7 @@ class Movie(Cursor):
@property
def progression(self) -> float:
- """Get movie processing progression between 0 and 1."""
+ """Get movie playback progression between 0 and 1."""
if self.__current_image_index is not None:
diff --git a/src/argaze/utils/contexts/TobiiProGlasses2.py b/src/argaze/utils/contexts/TobiiProGlasses2.py
index 3dd0161..80c4bdc 100644
--- a/src/argaze/utils/contexts/TobiiProGlasses2.py
+++ b/src/argaze/utils/contexts/TobiiProGlasses2.py
@@ -330,12 +330,12 @@ class TobiiJsonDataParser():
return MarkerPosition(data['marker3d'], data['marker2d'])
-class LiveStream(ArFeatures.LiveProcessingContext):
+class LiveStream(ArFeatures.DataCaptureContext):
@DataFeatures.PipelineStepInit
def __init__(self, **kwargs):
- # Init LiveProcessingContext class
+ # Init DataCaptureContext class
super().__init__()
# Init private attributes
@@ -1067,9 +1067,9 @@ class LiveStream(ArFeatures.LiveProcessingContext):
@DataFeatures.PipelineStepImage
def image(self, **kwargs):
"""
- Get pipeline image with live processing information.
+ Get pipeline image with data capture information.
"""
- logging.debug('LiveProcessingContext.image %s', self.name)
+ logging.debug('DataCaptureContext.image %s', self.name)
image = super().image(**kwargs)
height, width, _ = image.shape
@@ -1131,7 +1131,7 @@ class LiveStream(ArFeatures.LiveProcessingContext):
return image
-class PostProcessing(ArFeatures.PostProcessingContext):
+class SegmentPlayback(ArFeatures.DataPlaybackContext):
@DataFeatures.PipelineStepInit
def __init__(self, **kwargs):
@@ -1519,6 +1519,6 @@ class PostProcessing(ArFeatures.PostProcessingContext):
@property
def progression(self) -> float:
- """Get data processing progression between 0 and 1."""
+ """Get data playback progression between 0 and 1."""
return self.__progression
\ No newline at end of file
diff --git a/src/argaze/utils/demo/tobii_post_processing_context.json b/src/argaze/utils/demo/tobii_segment_playback_context.json
index 7a73512..d481b23 100644
--- a/src/argaze/utils/demo/tobii_post_processing_context.json
+++ b/src/argaze/utils/demo/tobii_segment_playback_context.json
@@ -1,6 +1,6 @@
{
- "argaze.utils.contexts.TobiiProGlasses2.PostProcessing" : {
- "name": "Tobii Pro Glasses 2 post-processing",
+ "argaze.utils.contexts.TobiiProGlasses2.SegmentPlayback" : {
+ "name": "Tobii Pro Glasses 2 segment playback",
"segment": "./src/argaze/utils/demo/tobii_record/segments/1",
"pipeline": "aruco_markers_pipeline.json"
}
diff --git a/utils/processTobiiRecords.sh b/utils/processTobiiRecords.sh
index 0cc3eb4..bbe6c86 100644
--- a/utils/processTobiiRecords.sh
+++ b/utils/processTobiiRecords.sh
@@ -12,7 +12,7 @@
# Arguments:
# $1: ArGaze context file
# $2: folder from where to look for Tobii records
-# $3: folder where to export ArGaze processing outputs
+# $3: folder where to export processing outputs
#######################################
# Check required arguments
@@ -103,12 +103,12 @@ function process_segment() {
mkdir -p $seg_output
cd $seg_output
- # Launch argaze with modified context
- echo "*** ArGaze processing starts"
+ # Launch modified context with argaze load command
+ echo "*** ArGaze starts context"
python -m argaze load $context_file
- echo "*** ArGaze processing ends"
+ echo "*** ArGaze ends context"
# Move back to context folder
cd $ctx_folder