ObjectDetection
- class ObjectDetectionPrediction(box, confidence, label, index)
A single prediction from ObjectDetection.
- Parameters:
  - box (BoundingBox) – The bounding box around the detected object.
  - confidence (float) – The confidence of this prediction.
  - label (str) – The label describing this prediction result.
  - index (int) – The index of this result in the master label list.
- property box: BoundingBox
The bounding box around the object.
- class ObjectDetectionResults(predictions, duration, image, **kwargs)
All the results of object detection from ObjectDetection. Predictions are stored in descending order of confidence.
- Parameters:
  - predictions (List[ObjectDetectionPrediction]) – The boxes for each prediction.
  - duration (float) – The duration of the inference.
  - image (Optional[ndarray]) – The image that the inference was performed on.
- property predictions: List[ObjectDetectionPrediction]
The list of predictions.
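Example usage (illustrative sketch; assumes results was returned by ObjectDetection.detect_objects(), and that BoundingBox exposes start_x, start_y, end_x, and end_y coordinates, which is an assumption since BoundingBox is documented elsewhere):
# Inference timing and per-prediction details
print("inference took {:1.3f} s".format(results.duration))
for prediction in results.predictions:
    print(prediction.label, prediction.index)
    print("confidence: {:2.2f}%".format(prediction.confidence * 100))
    box = prediction.box  # BoundingBox; coordinate attributes assumed
    print("box:", box.start_x, box.start_y, box.end_x, box.end_y)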
- class ObjectDetection(model_id, model_config=None, pre_process=None, pre_process_batch=None, post_process=None, post_process_batch=None)
Analyze and discover objects within an image.
Typical usage:
obj_detect = edgeiq.ObjectDetection(
    'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
obj_detect.load(engine=edgeiq.Engine.DNN)

<get image>
results = obj_detect.detect_objects(image, confidence_level=.5)
image = edgeiq.markup_image(
    image, results.predictions, colors=obj_detect.colors)

text = []
for prediction in results.predictions:
    text.append("{}: {:2.2f}%".format(
        prediction.label, prediction.confidence * 100))
Please refer to this app for an example of custom pre- and post-processing configuration for the model.
- Parameters:
  - model_id (str) – The ID of the model you want to use for object detection.
  - model_config (Optional[ModelConfig]) – The model configuration to load. model_id is ignored when model_config is set.
  - pre_process (Optional[Callable[[ObjectDetectionPreProcessParams], ndarray]]) – The pre-processing to use for inferencing. This is needed when using a model architecture not supported by edgeIQ.
  - pre_process_batch (Optional[Callable[[ObjectDetectionPreProcessBatchParams], ndarray]]) – The pre-processing to use for batch inference mode. This is needed when using a model architecture not supported by edgeIQ.
  - post_process (Optional[Callable[[ObjectDetectionPostProcessParams], Tuple[List[BoundingBox], List[float], List[int]]]]) – The post-processing to use for inferencing. This is needed when using a model architecture not supported by edgeIQ.
  - post_process_batch (Optional[Callable[[ObjectDetectionPostProcessBatchParams], Tuple[List[List[BoundingBox]], List[List[float]], List[List[int]]]]]) – The post-processing to use for batch inference mode. This is needed when using a model architecture not supported by edgeIQ.
- detect_objects(image, confidence_level=0.3, overlap_threshold=0.3)
Perform object detection on an image.
- Parameters:
  - image (ndarray) – The image to analyze.
  - confidence_level (float) – The minimum confidence level required to successfully accept a detection.
  - overlap_threshold (float) – The minimum IOU threshold used to reject detections with Non-maximal Suppression during object detection using YOLO models. A higher value will result in a greater number of overlapping bounding boxes returned.
- Return type:
  ObjectDetectionResults
- detect_objects_batch(images, confidence_level=0.3, overlap_threshold=0.3)
Perform object detection on a list of images.
- Parameters:
  - images – The list of images to analyze.
  - confidence_level (float) – The minimum confidence level required to successfully accept a detection.
  - overlap_threshold (float) – The minimum IOU threshold used to reject detections with Non-maximal Suppression during object detection using YOLO models. A higher value will result in a greater number of overlapping bounding boxes returned.
- Return type:
  List[ObjectDetectionResults]
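Example usage (illustrative sketch; the file names are placeholders, and the assumption here is that detect_objects_batch returns one result object per input image):
import edgeiq
import cv2

obj_detect = edgeiq.ObjectDetection(
    'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
obj_detect.load(engine=edgeiq.Engine.DNN)

# frames: list of BGR images, e.g. read with cv2.imread()
frames = [cv2.imread(p) for p in ['frame0.jpg', 'frame1.jpg']]

batch_results = obj_detect.detect_objects_batch(
    frames, confidence_level=0.5)
for frame, results in zip(frames, batch_results):
    print("inference took {:1.3f} s, {} detections".format(
        results.duration, len(results.predictions)))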
- property accelerator: Accelerator | None
The accelerator being used.
- property colors: ndarray | None
The auto-generated colors for the loaded model.
Note: Initialized to None when the model doesn’t have any labels.
Note: To update, the new colors list must be the same length as the label list.
- property labels: List[str] | None
The labels for the loaded model.
Note: Initialized to None when the model doesn’t have any labels.
- load(engine=Engine.DNN, accelerator=Accelerator.DEFAULT)
Load the model to an engine and accelerator.
- Parameters:
  - engine (Engine) – The engine to load the model to.
  - accelerator (Accelerator) – The accelerator to load the model to.
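Example usage (illustrative sketch using the documented defaults; the model ID is the one from the typical usage above):
import edgeiq

obj_detect = edgeiq.ObjectDetection(
    'alwaysai/ssd_mobilenet_v1_coco_2018_01_28')
# Load to the default DNN engine and default accelerator
obj_detect.load(
    engine=edgeiq.Engine.DNN,
    accelerator=edgeiq.Accelerator.DEFAULT)
print(obj_detect.model_config)
print(obj_detect.labels)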
- property model_config: ModelConfig
The configuration of the model that was loaded.
- property model_purpose: SupportedPurposes
The purpose of the model being used.
- publish_analytics(results, tag=None, **kwargs)
Publish results to the alwaysAI Analytics Service.
Example usage:
try:
    inference.publish_analytics(results, tag='custom_tag')
except edgeiq.PublishError as e:
    pass  # Retry publish
except edgeiq.ConnectionError as e:
    pass  # Save state and exit app to reconnect
- Parameters:
  - results – The results to publish.
  - tag (Optional[str]) – A custom tag to associate with the published results.
- Raises:
  - ConnectionBlockedError – when using a connection to the alwaysAI Device Agent and resources are at capacity.
  - PacketRateError – when the publish rate exceeds the current limit.
  - PacketSizeError – when the packet size exceeds the current limit. Packet publish size and rate limits will be provided in the error message.
- class ObjectDetectionAnalytics(annotations, model_id, model_config=None)
Reads an analytics file and returns results through an interface similar to ObjectDetection.
- Parameters:
  - annotations (List[List[ObjectDetectionResults]]) – Object detection results from all streams.
  - model_id (str) – The ID of the model you want to use for object detection.
  - model_config (Optional[ModelConfig]) – The model configuration to load. model_id is ignored when model_config is set.
Typical usage:
# Get object detection results from annotation files
annotation_files = ['cam0.txt', 'cam1.txt', 'cam2.txt', 'cam3.txt']
annotation_results = [
    edgeiq.analytics_services.load_analytics_results(annotation)
    for annotation in annotation_files]
obj_detect = edgeiq.ObjectDetectionAnalytics(
    annotations=annotation_results, model_id=model_id)
results = obj_detect.detect_objects(image, confidence_level=.5)
- detect_objects_for_stream(stream_idx, confidence_level=0.3, overlap_threshold=0.3)
Perform object detection for a particular stream by reading results from the analytics file.
- Parameters:
  - stream_idx – The index of the stream to read results for.
  - confidence_level (float) – The minimum confidence level required to successfully accept a detection.
  - overlap_threshold (float) – The minimum IOU threshold used to reject detections with Non-maximal Suppression during object detection using YOLO models. A higher value will result in a greater number of overlapping bounding boxes returned.
- Return type:
  ObjectDetectionResults
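Example usage (illustrative sketch; assumes annotation_results and obj_detect were set up as in the typical usage above, with one entry per stream):
# Read back results for each recorded stream
for stream_idx in range(len(annotation_results)):
    results = obj_detect.detect_objects_for_stream(
        stream_idx, confidence_level=0.5)
    for prediction in results.predictions:
        print(stream_idx, prediction.label, prediction.confidence)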
- detect_objects(image, confidence_level=0.3, overlap_threshold=0.3)
Perform object detection on an image by reading results from the analytics file.
- Parameters:
  - confidence_level (float) – The minimum confidence level required to successfully accept a detection.
  - overlap_threshold (float) – The minimum IOU threshold used to reject detections with Non-maximal Suppression during object detection using YOLO models. A higher value will result in a greater number of overlapping bounding boxes returned.
- Return type:
  ObjectDetectionResults
- detect_objects_batch(images, confidence_level=0.3, overlap_threshold=0.3)
Perform object detection on a list of images by reading results from the analytics file.
- Parameters:
  - images (Optional[List[ndarray]]) – The list of images to analyze.
  - confidence_level (float) – The minimum confidence level required to successfully accept a detection.
  - overlap_threshold (float) – The minimum IOU threshold used to reject detections with Non-maximal Suppression during object detection using YOLO models. A higher value will result in a greater number of overlapping bounding boxes returned.
- Return type:
  List[ObjectDetectionResults]
- property accelerator: Accelerator | None
The accelerator being used.
- property colors: ndarray | None
The auto-generated colors for the loaded model.
Note: Initialized to None when the model doesn’t have any labels.
Note: To update, the new colors list must be the same length as the label list.
- property labels: List[str] | None
The labels for the loaded model.
Note: Initialized to None when the model doesn’t have any labels.
- load(engine=Engine.DNN, accelerator=Accelerator.DEFAULT)
Load the model to an engine and accelerator.
- Parameters:
  - engine (Engine) – The engine to load the model to.
  - accelerator (Accelerator) – The accelerator to load the model to.
- property model_config: ModelConfig
The configuration of the model that was loaded.
- property model_purpose: SupportedPurposes
The purpose of the model being used.
- publish_analytics(results, tag=None, **kwargs)
Publish results to the alwaysAI Analytics Service.
Example usage:
try:
    inference.publish_analytics(results, tag='custom_tag')
except edgeiq.PublishError as e:
    pass  # Retry publish
except edgeiq.ConnectionError as e:
    pass  # Save state and exit app to reconnect
- Parameters:
  - results – The results to publish.
  - tag (Optional[str]) – A custom tag to associate with the published results.
- Raises:
  - ConnectionBlockedError – when using a connection to the alwaysAI Device Agent and resources are at capacity.
  - PacketRateError – when the publish rate exceeds the current limit.
  - PacketSizeError – when the packet size exceeds the current limit. Packet publish size and rate limits will be provided in the error message.
- filter_predictions_by_label(predictions, label_list)
Filter a prediction list by label.
Typical usage:
people_and_apples = edgeiq.filter_predictions_by_label(predictions, ['person', 'apple'])
- Parameters:
  - predictions (List[TypeVar(PredictionT, bound=ObjectDetectionPrediction)]) – A list of predictions to filter.
  - label_list (List[str]) – The list of labels to keep in the filtered output.
- Return type:
  List[TypeVar(PredictionT, bound=ObjectDetectionPrediction)]
- Returns:
  The filtered predictions.
- markup_image(image, predictions, show_labels=True, show_confidences=True, colors=None, line_thickness=2, font_size=0.5, font_thickness=2, text_box_padding=10, bounding_box_corner_radius=0, text_box_corner_radius=0, text_box_alignment=('left', 'top'), text_box_position=('left', 'top'))
Draw boxes, labels, and confidences on the specified image.
Typical usage:
output_image_default = edgeiq.markup_image(
    image=input_image,
    predictions=predictions,
)
output_image_no_label = edgeiq.markup_image(
    image=input_image,
    predictions=predictions,
    show_labels=False,
    show_confidences=False,
)
output_image_rounded_corners = edgeiq.markup_image(
    image=input_image,
    predictions=predictions,
    bounding_box_corner_radius=5,
    text_box_corner_radius=5,
)
output_image_label_centered_top_of_bbox = edgeiq.markup_image(
    image=input_image,
    predictions=predictions,
    text_box_alignment=('center', 'bottom'),
    text_box_position=('center', 'top'),
)
output_image_label_centered_middle_of_bbox = edgeiq.markup_image(
    image=input_image,
    predictions=predictions,
    text_box_alignment=('center', 'middle'),
    text_box_position=('center', 'middle'),
)
output_image_label_right_aligned_bottom_of_bbox = edgeiq.markup_image(
    image=input_image,
    predictions=predictions,
    text_box_alignment=('right', 'bottom'),
    text_box_position=('right', 'top'),
)
- Parameters:
  - image (ndarray) – The image to draw on.
  - predictions (List[ObjectDetectionPrediction]) – The list of prediction results.
  - show_labels (bool) – Indicates whether to show the label of the prediction.
  - show_confidences (bool) – Indicates whether to show the confidence of the prediction.
  - colors (Optional[List[Tuple[int, int, int]]]) – A custom color list to use for the bounding boxes. The index of the color will be matched with a label index.
  - line_thickness (int) – The thickness of the lines that make up the bounding box.
  - font_size (float) – The scale factor for the text.
  - font_thickness (int) – The thickness of the lines used to draw the text.
  - text_box_padding (int) – The padding around the text in each text box.
  - bounding_box_corner_radius (int) – The corner radius for the bounding boxes.
  - text_box_corner_radius (int) – The corner radius for the text boxes.
  - text_box_alignment (Tuple[Literal['left', 'center', 'right'], Literal['top', 'middle', 'bottom']]) – Specifies the alignment of the text relative to the reference point. Accepts a tuple of horizontal ('left', 'center', 'right') and vertical ('top', 'middle', 'bottom') alignment literals.
  - text_box_position (Union[Tuple[Literal['left', 'center', 'right'], Literal['top', 'middle', 'bottom']], Tuple[int, int]]) – Defines the position of the text box’s reference point relative to the bounding box. Can either be a tuple of alignment literals (horizontal, vertical) for automatic positioning, or a tuple of integers (offset_x, offset_y) specifying a custom offset from the center of the bounding box.
- Return type:
  ndarray
- Returns:
The marked-up image.
- filter_predictions_by_area(predictions, min_area_thresh)
Filter a prediction list by bounding box area.
Typical usage:
larger_boxes = edgeiq.filter_predictions_by_area(predictions, 450)
- Parameters:
  - predictions (List[TypeVar(PredictionT, bound=ObjectDetectionPrediction)]) – A list of predictions to filter.
  - min_area_thresh (float) – The minimum bounding box area to keep in the filtered output.
- Return type:
  List[TypeVar(PredictionT, bound=ObjectDetectionPrediction)]
- Returns:
  The filtered predictions.
- overlay_transparent_boxes(image, predictions, alpha=0.5, colors=None, show_labels=False, show_confidences=False)
Overlay area(s) of interest within an image. This utility is designed to work with object detection to display colored bounding boxes on the original image.
- Parameters:
  - image (ndarray) – The image to manipulate.
  - predictions (List[ObjectDetectionPrediction]) – The list of prediction results.
  - alpha (float) – Transparency of the overlay. The closer alpha is to 1.0, the more opaque the overlay will be. Similarly, the closer alpha is to 0.0, the more transparent the overlay will appear.
  - colors (Optional[List[Tuple[int, int, int]]]) – A custom color list to use for the bounding boxes or object classes pixel map.
  - show_labels (bool) – Indicates whether to show the label of the prediction.
  - show_confidences (bool) – Indicates whether to show the confidence of the prediction.
- Returns:
The overlaid image.
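Example usage (illustrative sketch; assumes input_image is the analyzed frame and results comes from a prior detect_objects() call):
# Overlay semi-transparent boxes on the detected objects
overlaid_image = edgeiq.overlay_transparent_boxes(
    image=input_image,
    predictions=results.predictions,
    alpha=0.5,
    show_labels=True)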
- blur_objects(image, predictions)
Blur objects detected in an image.
- Parameters:
  - image (ndarray) – The image to draw on.
  - predictions (List[ObjectDetectionPrediction]) – A list of prediction objects to blur.
- Return type:
  ndarray
- Returns:
The image with objects blurred.
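Example usage (illustrative sketch; assumes input_image and results come from a prior detect_objects() call, and that the loaded model uses a 'person' label):
# Blur only the detected people in the frame
people = edgeiq.filter_predictions_by_label(results.predictions, ['person'])
blurred_image = edgeiq.blur_objects(input_image, people)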
- class ObjectDetectionPreProcessParams(image, size, scalefactor, mean, swaprb, crop)
- class ObjectDetectionPreProcessBatchParams(images, size, scalefactor, mean, swaprb, crop)
- class ObjectDetectionPostProcessParams(results, image, confidence_level, overlap_threshold, num_classes, model_input_size)
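These parameter classes are passed to the custom pre_process and post_process callables accepted by the ObjectDetection constructor. The sketch below is illustrative only: the layout of params.results depends entirely on the model architecture (a generic row format is assumed here), the model ID is hypothetical, and the BoundingBox constructor arguments shown are an assumption.
import edgeiq

def my_post_process(params):
    # Hypothetical post-processing callable for a model architecture not
    # supported by edgeIQ. It must return (boxes, confidences, indexes),
    # matching the post_process signature in the ObjectDetection constructor.
    boxes = []
    confidences = []
    indexes = []
    # Assumed raw output layout: rows of [x1, y1, x2, y2, confidence, class_index]
    for row in params.results:
        x1, y1, x2, y2, confidence, class_index = row[:6]
        if confidence < params.confidence_level:
            continue
        # BoundingBox constructor arguments are assumed, not documented here
        boxes.append(edgeiq.BoundingBox(
            start_x=int(x1), start_y=int(y1),
            end_x=int(x2), end_y=int(y2)))
        confidences.append(float(confidence))
        indexes.append(int(class_index))
    return boxes, confidences, indexes

obj_detect = edgeiq.ObjectDetection(
    'my-namespace/my-custom-model',  # hypothetical model ID
    post_process=my_post_process)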