gazetimation module#
- class gazetimation.Gazetimation(face_model_points_3d: Optional[ndarray] = None, left_eye_ball_center: Optional[ndarray] = None, right_eye_ball_center: Optional[ndarray] = None, camera_matrix: Optional[ndarray] = None, device: int = 0, visualize: bool = True)[source]#
Bases:
object
Initialize the Gazetimation object.
This holds the configuration of the Gazetimation class.
- Parameters:
face_model_points_3d (np.ndarray, optional) –
Predefined 3D reference points for the face model. Defaults to None.
Note
If not provided, the following default values are used. Any values passed should correspond to the same facial landmarks.
self._face_model_points_3d = np.array(
    [
        (0.0, 0.0, 0.0),        # Nose tip
        (0, -63.6, -12.5),      # Chin
        (-43.3, 32.7, -26),     # Left eye, left corner
        (43.3, 32.7, -26),      # Right eye, right corner
        (-28.9, -28.9, -24.1),  # Left mouth corner
        (28.9, -28.9, -24.1),   # Right mouth corner
    ]
)
left_eye_ball_center (np.ndarray, optional) –
Predefined 3D reference point for the left eyeball center. Defaults to None.
Note
If not provided, the following default value is used. Any value passed should correspond to the same facial landmarks.
self._left_eye_ball_center = np.array([[29.05], [32.7], [-39.5]])
right_eye_ball_center (np.ndarray, optional) –
Predefined 3D reference point for the right eyeball center. Defaults to None.
Note
If not provided, the following default value is used. Any value passed should correspond to the same facial landmarks.
self._right_eye_ball_center = np.array([[-29.05], [32.7], [-39.5]])
camera_matrix (np.ndarray, optional) –
Camera matrix. Defaults to None.
Important
If not provided, the system tries to calculate the camera matrix using the find_camera_matrix method. The calculated camera matrix is estimated from the width and height of the frame, so it is an approximation rather than an exact solution.
device (int, optional) –
Device index for the video device. Defaults to 0.
Attention
If a negative device index is provided, the system tries to find the first available video device index using the find_device method. So, if unsure, pass device = -1.
if device < 0:
    self._device = self.find_device()
else:
    self._device = device
visualize (bool, optional) – If True, annotated images are shown. Defaults to True.
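As a usage sketch (the variable names and the sample camera matrix below are illustrative, not part of the API), the class might be constructed as follows:
import numpy as np
from gazetimation import Gazetimation

# Default face model, eyeball centers, and camera matrix; let the library
# search for the first available video device.
gz = Gazetimation(device=-1)

# Alternatively, pass a pre-calibrated camera matrix (the values here are
# placeholders for illustration only).
camera_matrix = np.array(
    [
        [800.0, 0.0, 320.0],
        [0.0, 800.0, 240.0],
        [0.0, 0.0, 1.0],
    ]
)
gz_calibrated = Gazetimation(camera_matrix=camera_matrix, device=0)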
- calculate_arrowhead(start_coordinate: tuple, end_coordinate: tuple, arrow_length: int = 15, arrow_angle: int = 70) tuple [source]#
Calculate the lines for arrowhead.
For a given line, it calculates the arrowhead from the end (tip) of the line.
- Parameters:
start_coordinate (tuple) – Start point of the line.
end_coordinate (tuple) – End point of the line.
arrow_length (int, optional) – Length of the arrowhead lines. Defaults to 15.
arrow_angle (int, optional) – Angle (in degrees) between the arrowhead lines and the arrow line. Defaults to 70.
- Returns:
Endpoints in the image of the two arrowhead lines.
- Return type:
tuple
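The geometry involved can be sketched as below; this is an illustrative re-derivation (the helper name arrowhead_endpoints is hypothetical), not the method's actual implementation:
import math

def arrowhead_endpoints(start, end, arrow_length=15, arrow_angle=70):
    # Direction from the tip back toward the start of the line.
    back_angle = math.atan2(start[1] - end[1], start[0] - end[0])
    theta = math.radians(arrow_angle)
    # Rotate that direction by +/- arrow_angle and scale by arrow_length.
    p1 = (int(end[0] + arrow_length * math.cos(back_angle + theta)),
          int(end[1] + arrow_length * math.sin(back_angle + theta)))
    p2 = (int(end[0] + arrow_length * math.cos(back_angle - theta)),
          int(end[1] + arrow_length * math.sin(back_angle - theta)))
    return p1, p2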
- calculate_head_eye_poses(frame: ndarray, points: object, gaze_distance: int = 10, face_model_points_3d: Optional[ndarray] = None) tuple [source]#
Calculates the head and eye poses (gaze).
- Parameters:
frame (np.ndarray) – The image.
points (object) – Object holding the facial landmark points.
gaze_distance (int, optional) – Gaze distance. Defaults to 10.
face_model_points_3d (np.ndarray, optional) – Predefined 3D reference points for face model. Defaults to None.
- Returns:
Two tuples (left and right eye), each containing the pupil location and the projected gaze on the image plane.
- Return type:
tuple
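A usage sketch is given below. It assumes that points is a MediaPipe face-mesh landmark result for a single face and that each returned tuple holds the pupil and projected gaze for one eye; the exact unpacking is an assumption, not a documented guarantee.
import cv2
import mediapipe as mp
from gazetimation import Gazetimation

gz = Gazetimation(device=0)
cap = cv2.VideoCapture(0)
success, frame = cap.read()
cap.release()

if success:
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True) as face_mesh:
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            points = results.multi_face_landmarks[0]
            # Assumed layout: one (pupil, projected gaze) tuple per eye.
            left_eye, right_eye = gz.calculate_head_eye_poses(frame, points)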
- property camera_matrix: ndarray#
Getter method for camera_matrix.
- Returns:
The camera matrix.
- Return type:
np.ndarray
- property device: int#
Getter method for device.
- Returns:
Index for the video device.
- Return type:
int
- draw(frame: ndarray, pupil: ndarray, gaze: ndarray)[source]#
Draws the gaze direction onto the frame.
- Parameters:
frame (np.ndarray) – The image.
pupil (np.ndarray) – 2D pupil location on the image.
gaze (np.ndarray) – Gaze direction.
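A minimal usage sketch; the image path and the pupil/gaze values below are placeholders, and in practice they come from calculate_head_eye_poses.
import cv2
import numpy as np
from gazetimation import Gazetimation

gz = Gazetimation(device=0)
frame = cv2.imread("face.jpg")  # placeholder image path

# Placeholder pupil location and gaze point, for illustration only.
pupil = np.array([320, 240])
gaze = np.array([360, 210])
gz.draw(frame, pupil, gaze)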
- property face_model_points_3d: ndarray#
Getter method for face_model_points_3d.
- Returns:
3D face model points.
- Return type:
np.ndarray
- property facial_landmark_index: list#
Getter method for facial_landmark_index.
- Returns:
Required facial landmark indexes.
- Return type:
list
- find_camera_matrix(frame: ndarray) ndarray [source]#
Calculates the camera matrix from image dimensions.
- Parameters:
frame (np.ndarray) – The image.
- Returns:
Camera matrix.
- Return type:
np.ndarray
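The estimate follows the common pinhole approximation (focal length taken as the frame width, principal point at the image center). A sketch of that idea, not necessarily the library's exact formula, is:
import numpy as np

def approximate_camera_matrix(frame):
    # Pinhole approximation: focal length ~ frame width,
    # principal point at the image center.
    height, width = frame.shape[:2]
    focal_length = width
    center = (width / 2, height / 2)
    return np.array(
        [
            [focal_length, 0, center[0]],
            [0, focal_length, center[1]],
            [0, 0, 1],
        ],
        dtype="double",
    )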
- find_device(max_try: int = 10) int [source]#
Find the video device index.
It iterates over a number of system video devices and returns the index of the first eligible device.
- Parameters:
max_try (int, optional) – Max number of devices to try. Defaults to 10.
- Returns:
Index of the video device.
- Return type:
int
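A sketch of this probing strategy using OpenCV (the helper name is hypothetical, and the library's eligibility check may differ):
import cv2

def first_available_device(max_try=10):
    # Try indices 0..max_try-1 and return the first device that
    # opens and yields a frame.
    for index in range(max_try):
        cap = cv2.VideoCapture(index)
        if cap.isOpened():
            success, _ = cap.read()
            cap.release()
            if success:
                return index
    return -1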
- find_face_num(max_try: int = 100, video_path: Optional[str] = None) int [source]#
Finds the number of faces/people present in the scene.
- Parameters:
max_try (int, optional) – Maximum number of frames to try. Defaults to 100.
video_path (str, optional) – Path to the video file. Defaults to None.
- Returns:
The number of faces/people present in the scene.
- Return type:
int
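Usage sketch (the video path is a placeholder):
from gazetimation import Gazetimation

gz = Gazetimation(device=0)

# Count faces in a video file.
num_faces = gz.find_face_num(video_path="meeting.mp4")

# Or count faces from the configured video device.
num_faces_live = gz.find_face_num(max_try=50)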
- property left_eye_ball_center: ndarray#
Getter method for left_eye_ball_center.
- Returns:
3D points for left eye ball center.
- Return type:
np.ndarray
- property right_eye_ball_center: ndarray#
Getter method for right_eye_ball_center.
- Returns:
3D points for right eye ball center.
- Return type:
np.ndarray
- run(max_num_faces: int = 1, video_path: Optional[str] = None, smoothing: bool = True, smoothing_frame_range: int = 8, smoothing_weight='uniform', custom_smoothing_func=None, video_output_path: Optional[str] = None, handler=None)[source]#
Runs the gaze estimation solution.
- Parameters:
max_num_faces (int, optional) – Maximum number of face(s)/people present in the scene. Defaults to 1.
video_path (str, optional) – Path to the video. Defaults to None.
smoothing (bool, optional) – If smoothing should be performed. Defaults to True.
smoothing_frame_range (int, optional) – Number of frames to consider when smoothing. Defaults to 8.
smoothing_weight (str, optional) – Type of weighting scheme (“uniform”, “linear”, “logarithmic”). Defaults to “uniform”.
custom_smoothing_func (function, optional) – Custom smoothing function. Defaults to None.
video_output_path (str, optional) – Output path (including file extension) for the output video. Defaults to None.
handler (function, optional) –
If provided, the output is passed to the handler function for further processing. Defaults to None.
Attention
The handler will be called with the frame and the gaze information, as shown below:
if handler is not None:
    handler([frame, left_pupil, right_pupil, gaze_left_eye, gaze_right_eye])
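A usage sketch combining these parameters (the file paths are placeholders):
from gazetimation import Gazetimation

def my_handler(data):
    # Per the note above, data is
    # [frame, left_pupil, right_pupil, gaze_left_eye, gaze_right_eye].
    frame, left_pupil, right_pupil, gaze_left_eye, gaze_right_eye = data
    print(left_pupil, right_pupil, gaze_left_eye, gaze_right_eye)

gz = Gazetimation(device=0)

# Webcam, default uniform smoothing over the last 8 frames.
gz.run(handler=my_handler)

# Video file with linear smoothing weights and a saved output video.
gz.run(
    video_path="input.mp4",
    smoothing_weight="linear",
    video_output_path="output.mp4",
)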
- smoothing(smoothing_weight: str, smoothing_frame_range: int, left_pupil: ndarray, right_pupil: ndarray, gaze_left_eye: ndarray, gaze_right_eye: ndarray) tuple [source]#
Smoothing is performed so the result doesn’t have abrupt changes.
- Parameters:
smoothing_weight (str) – Type of weighting scheme (“uniform”, “linear”, “logarithmic”).
smoothing_frame_range (int) – Number of frames to consider when smoothing.
left_pupil (np.ndarray) – Position of the left pupil.
right_pupil (np.ndarray) – Position of the right pupil.
gaze_left_eye (np.ndarray) – Position of the estimated gaze of the left eye.
gaze_right_eye (np.ndarray) – Position of the estimated gaze of the right eye.
- Returns:
Smoothed positions of left_pupil, right_pupil, gaze_left_eye, and gaze_right_eye.
- Return type:
tuple
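As an illustration of the weighting schemes, a weighted moving average over the buffered positions could look like the sketch below; the library's exact weights may differ.
import numpy as np

def weighted_average(history, smoothing_weight="uniform"):
    # `history` holds the last `smoothing_frame_range` positions
    # (oldest first); newer frames get larger weights under the
    # "linear" and "logarithmic" schemes.
    n = len(history)
    if smoothing_weight == "linear":
        weights = np.arange(1, n + 1, dtype=float)
    elif smoothing_weight == "logarithmic":
        weights = np.log1p(np.arange(1, n + 1, dtype=float))
    else:  # "uniform"
        weights = np.ones(n)
    return np.average(np.asarray(history, dtype=float), axis=0, weights=weights)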
- property visualize: bool#
Getter method for visualize.
- Returns:
Whether to show annotated images.
- Return type:
bool