PoseEstimation¶
class HumanPoseResult(poses, duration, input_dimension, image, **kwargs)¶
The results of pose estimation from PoseEstimation.

Parameters:
- poses (List[Pose]) – The poses from the inference.
- duration (float) – Total time taken by the inference.
- input_dimension (Tuple[int, int]) – The dimensions of the input image after padding.
- image (ndarray) – The image that the inference was performed on.
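A minimal sketch of how a HumanPoseResult is typically obtained: it is returned by PoseEstimation.estimate() rather than constructed directly. The input file name below is a placeholder:

    import cv2
    import edgeiq

    pose_estimator = edgeiq.PoseEstimation("alwaysai/human-pose")
    pose_estimator.load(engine=edgeiq.Engine.DNN)

    image = cv2.imread("people.jpg")  # placeholder input path
    results = pose_estimator.estimate(image)  # returns a HumanPoseResult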
property duration¶
The duration of the inference in seconds.

Return type: float
property poses¶
The poses found in the image.

Return type: List[Pose]
property image¶
The image the results were processed on.

Return type: ndarray
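Continuing the sketch above, these properties can be read straight off the result:

    print("Inference took {:.4f} seconds".format(results.duration))
    print("Found {} pose(s)".format(len(results.poses)))
    frame = results.image  # the BGR ndarray the inference ran on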
draw_poses_background(color)¶
Draw the poses found in the image on a solid background color.

Parameters:
- color (Tuple[int, int, int]) – The color of the background on which the poses will be drawn, in (B, G, R) format.

Return type: ndarray

Returns: the image as a numpy array in BGR format
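For example, the call below (continuing the sketch above) renders the detected poses on a black canvas; the output file name is illustrative:

    skeletons = results.draw_poses_background((0, 0, 0))  # black background
    cv2.imwrite("skeletons.png", skeletons)  # illustrative output path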
draw_poses(image=None)¶
Draw the poses found on an image.

Parameters:
- image (Optional[ndarray]) – An image to draw the poses on.

Return type: ndarray

Returns: the image as a numpy array in BGR format
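A short sketch of both call forms; that the no-argument form falls back to the result's own image is an assumption based on the image property above:

    overlaid = results.draw_poses(image.copy())  # draw on a copy, keeping the original intact
    overlaid = results.draw_poses()  # assumption: with no argument, results.image is used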
draw_aliens()¶

Return type: ndarray

Returns: the image as a numpy array in BGR format
class PoseEstimation(model_id, model_config=None)¶
Find poses within an image.

Typical usage:

    pose_estimator = edgeiq.PoseEstimation("alwaysai/human-pose")
    pose_estimator.load(engine=edgeiq.Engine.DNN)

    <get image>

    results = pose_estimator.estimate(image)
    for ind, pose in enumerate(results.poses):
        print('Person {}'.format(ind))
        print('-'*10)
        print('Key Points:')
        for key_point in pose.key_points:
            print(str(key_point))

    image = results.draw_poses(image)

Parameters:
- model_id (str) – The ID of the model you want to use for pose estimation.
estimate(image)¶
Estimate poses within the specified image.

Parameters:
- image (ndarray) – The image to analyze.

Return type: HumanPoseResult
property accelerator¶
The accelerator being used.

Return type: Optional[Accelerator]
property colors¶
The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.
Note: To update, the new colors list must be the same length as the label list.

Return type: Optional[ndarray]
property labels¶
The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Return type: Optional[List[str]]
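Since labels and colors may both be None, and an updated colors list must match the label list in length, a guarded sketch of pairing them:

    if pose_estimator.labels is not None:
        for label, color in zip(pose_estimator.labels, pose_estimator.colors):
            print(label, color)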
load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)¶
Load the model to an engine and accelerator.

Parameters:
- engine (Engine) – The engine to load the model to.
- accelerator (Accelerator) – The accelerator to load the model to.
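A sketch of loading to a non-default target, assuming the edgeiq Engine.DNN_OPENVINO and Accelerator.MYRIAD values; which pairings are supported depends on the model and the device:

    pose_estimator.load()  # defaults: Engine.DNN on Accelerator.DEFAULT

    pose_estimator.load(
        engine=edgeiq.Engine.DNN_OPENVINO,      # assumed OpenVINO engine enum
        accelerator=edgeiq.Accelerator.MYRIAD,  # assumed Myriad/NCS2 accelerator enum
    )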
property model_config¶
The configuration of the model that was loaded.

Return type: ModelConfig
property model_id¶
The ID of the loaded model.

Return type: str
property model_purpose¶
The purpose of the model being used.

Return type: str
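A quick introspection sketch using the three model properties above; the printed values are illustrative:

    print(pose_estimator.model_id)        # e.g. 'alwaysai/human-pose'
    print(pose_estimator.model_purpose)   # illustrative, e.g. 'PoseEstimation'
    config = pose_estimator.model_config  # ModelConfig of the loaded model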
publish_analytics(results, tag=None, **kwargs)¶
Publish results to the alwaysAI Analytics Service.

Example usage:

    try:
        inference.publish_analytics(results, tag='custom_tag')
    except edgeiq.PublishError:
        # Retry the publish
        pass
    except edgeiq.ConnectionError:
        # Save state and exit the app to reconnect
        pass

Parameters:
- results (~ResultsT) – The results to publish.
- tag (Optional[Any]) – Additional information to assist in querying and visualizations.

Raises:
- ConnectionBlockedError – When using a connection to the alwaysAI Device Agent and resources are at capacity.
- PacketRateError – When the publish rate exceeds the current limit.
- PacketSizeError – When the packet size exceeds the current limit. Packet publish size and rate limits are provided in the error message.