PoseEstimation

class HumanPoseResult(poses, duration, input_dimension, image, **kwargs)

The results of pose estimation from PoseEstimation.

Parameters:
  • poses (List[Pose]) – The poses from the inference.

  • duration (float) – Total time taken by the inference, in seconds.

  • input_dimension (Tuple[int, int]) – The dimensions of the input image after padding.

  • image (ndarray) – The image that the inference was performed on.

property duration: float

The duration of the inference in seconds.

property poses: List[Pose]

The poses found in the image.

property image: ndarray

The image the results were processed on.
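
A minimal sketch of inspecting these properties, assuming results is a HumanPoseResult returned by estimate():

# results: HumanPoseResult from estimate() (assumed in scope)
print('Found {} poses in {:.3f} seconds'.format(len(results.poses), results.duration))
frame = results.image  # the BGR numpy array the inference ran on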

draw_poses_background(color)

Draw the poses found in the image on a solid background of the given color.

Parameters:

color (Tuple[int, int, int]) – The color of the background on which the poses will be drawn, in (B, G, R) format.

Return type:

ndarray

Returns:

The image with the poses drawn on the colored background, as a numpy array in BGR format.
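
For example, a short sketch that renders the detected poses on a plain black background, assuming results is a HumanPoseResult:

skeleton_image = results.draw_poses_background((0, 0, 0))  # black background, in (B, G, R)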

draw_poses(image=None)

Draw the poses found on the image.

Parameters:

image (Optional[ndarray]) – An image to draw the poses on. If not provided, the poses are drawn on the image the inference was performed on.

Return type:

ndarray

Returns:

The image with the poses drawn on it, as a numpy array in BGR format.
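
A sketch of drawing onto a caller-supplied frame instead of the original; display_frame is a hypothetical BGR numpy array:

annotated = results.draw_poses(display_frame)  # display_frame: hypothetical BGR frame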

draw_aliens()

Draw an alien figure over each pose found in the image.

Return type:

ndarray

Returns:

The image with an alien drawn over each pose, as a numpy array in BGR format.
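
For example, assuming results is a HumanPoseResult:

alien_image = results.draw_aliens()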

class PoseEstimation(model_id, model_config=None)

Find poses within an image.

Typical usage:

pose_estimator = edgeiq.PoseEstimation("alwaysai/human-pose")
pose_estimator.load(engine=edgeiq.Engine.DNN)

<get image>
results = pose_estimator.estimate(image)

for ind, pose in enumerate(results.poses):
    print('Person {}'.format(ind))
    print('-' * 10)
    print('Key Points:')
    for key_point in pose.key_points:
        print(str(key_point))

image = results.draw_poses(image)

Parameters:
  • model_id (str) – The ID of the model you want to use for pose estimation.

  • model_config (Optional[ModelConfig]) – An optional configuration for the model to load.

estimate(image)

Estimate poses within the specified image.

Parameters:

image (ndarray) – The image to analyze.

Return type:

HumanPoseResult

Returns:

The results of the pose estimation, including the detected poses and inference duration.
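
A short usage sketch, assuming OpenCV is available and that input.jpg is a hypothetical image path:

import cv2

image = cv2.imread('input.jpg')  # hypothetical image path
results = pose_estimator.estimate(image)
print('Inference took {:.4f} seconds'.format(results.duration))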

property accelerator: Accelerator | None

The accelerator being used.

property colors: ndarray | None

The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Note: To update, the new colors list must be the same length as the label list.

property engine: Engine | None

The engine being used.

property labels: List[str] | None

The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.
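
A minimal sketch that pairs each label with its auto-generated color, guarding against models without labels (assumes pose_estimator is a loaded PoseEstimation instance):

if pose_estimator.labels is not None and pose_estimator.colors is not None:
    for label, color in zip(pose_estimator.labels, pose_estimator.colors):
        print('{}: {}'.format(label, color))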

load(engine=Engine.DNN, accelerator=Accelerator.DEFAULT)

Load the model to an engine and accelerator.

Parameters:
  • engine (Engine) – The engine to load the model to.

  • accelerator (Accelerator) – The accelerator to load the model to.
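
For example, loading explicitly with the default engine and accelerator shown above:

pose_estimator = edgeiq.PoseEstimation("alwaysai/human-pose")
pose_estimator.load(engine=edgeiq.Engine.DNN, accelerator=edgeiq.Accelerator.DEFAULT)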

property model_config: ModelConfig

The configuration of the model that was loaded.

property model_id: str

The ID of the loaded model.

property model_purpose: SupportedPurposes

The purpose of the model being used.

publish_analytics(results, tag=None, **kwargs)

Publish results to the alwaysAI Analytics Service.

Example usage:

try:
    inference.publish_analytics(results, tag='custom_tag')
except edgeiq.PublishError as e:
    # Retry the publish
    pass
except edgeiq.ConnectionError as e:
    # Save state and exit the app to reconnect
    pass

Parameters:
  • results (TypeVar(ResultsT)) – The results to publish.

  • tag (Optional[Any]) – Additional information to assist in querying and visualizations.

Raises:
  • ConnectionBlockedError – when using a connection to the alwaysAI Device Agent and resources are at capacity.

  • PacketRateError – when the publish rate exceeds the current limit.

  • PacketSizeError – when the packet size exceeds the current limit. Packet publish size and rate limits are provided in the error message.
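
Since tag accepts any value, structured metadata can make querying and visualization easier; a sketch using a hypothetical dictionary tag (assumes inference and results from the example above):

inference.publish_analytics(results, tag={'camera': 'front-door', 'location': 'lobby'})  # hypothetical tag values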