Diffstat (limited to 'README.md')
 README.md | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/README.md b/README.md
index 42a1326..31a5b63 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,17 @@
-# Welcome to ArGaze's documentation
+# ArGaze documentation
+
+**Useful links**: [Installation](getting_started#installation) | [Source Repository](https://git.recherche.enac.fr/projects/argaze/repository) | [Issue Tracker](https://git.recherche.enac.fr/projects/argaze/issues) | [Contact](mailto:achil-contact@recherche.enac.fr)
+
+![Logo](logo-large.png){ width=640px }
**ArGaze** is a Python toolkit for gaze tracking in **Augmented Reality (AR) environments**.
-The ArGaze toolkit provides solutions to build 3D modeled AR environment defining **Areas Of Interest (AOI)** mapped on **<a href="https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html" target="_blank">OpenCV ArUco markers</a>** and so ease experimentation design with wearable eye tracker device.
+The ArGaze toolkit provides solutions to build 3D modeled AR environments that define **Areas Of Interest (AOI)** mapped onto <a href="https://docs.opencv.org/4.x/d5/dae/tutorial_aruco_detection.html" target="_blank">OpenCV ArUco markers</a>, easing experiment design with wearable eye tracker devices.
Further, tracked gaze can be projected onto the AR environment for live or post hoc **gaze analysis** thanks to **timestamped data** features.
ArGaze can be combined with the Python library of any wearable eye tracking device, such as Tobii or Pupil glasses.
-Check out the [installation](getting_started#installation) section to start.
-
-Check out the [Code reference](argaze) section to get deeper into library architecture.
+!!! note
+
+    *This work is greatly inspired by the [article by Andrew T. Duchowski, Vsevolod Peysakhovich and Krzysztof Krejtz](https://git.recherche.enac.fr/attachments/download/1942/Using_Pose_Estimation_to_Map_Gaze_to_Detected_Fidu.pdf) on using pose estimation to map gaze to detected fiducial markers.*
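
To make the marker-to-AOI idea concrete, below is a minimal Python sketch of the underlying technique, using only OpenCV's ArUco module rather than ArGaze's own API (which is not shown in this diff). The frame path, gaze coordinates, and marker dictionary are illustrative assumptions, not values from the project.

```python
import cv2
import numpy as np

# Hypothetical inputs: a scene-camera frame and a gaze point in pixel
# coordinates (placeholders for real eye tracker output).
frame = cv2.imread("scene.png")
gaze_point = (512.0, 384.0)

# Detect ArUco markers (pre-4.7 OpenCV API; OpenCV 4.7+ replaces these
# calls with cv2.aruco.ArucoDetector).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary, parameters=parameters)

# Treat each detected marker's corner quad as a 2D AOI and test whether
# the gaze point falls inside it.
if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        aoi = quad.reshape(-1, 2).astype(np.float32)
        if cv2.pointPolygonTest(aoi, gaze_point, False) >= 0:
            print(f"Gaze falls on the AOI of marker {marker_id}")
```

In ArGaze itself, per the README text above, AOIs belong to a 3D modeled environment and gaze is projected through the estimated marker pose; this sketch only illustrates the 2D detection-and-membership step that such a pipeline builds on.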