ObjectDetection

class ObjectDetectionPrediction(box, confidence, label, index)

A single prediction from ObjectDetection.

Parameters
  • box (BoundingBox) – The bounding box around the detected object.

  • confidence (float) – The confidence of this prediction.

  • label (str) – The label describing this prediction result.

  • index (int) – The index of this result in the master label list.

property label

The label describing this prediction result.

Return type

str

property box

The bounding box around the object.

Return type

BoundingBox

property confidence

The confidence of this prediction.

Return type

float

property index

The index of this result in the master label list.

Return type

int
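
A minimal sketch of reading these properties from a detection result (results is assumed to come from ObjectDetection.detect_objects, described below):

for prediction in results.predictions:
    # label and index identify the detected class; confidence is 0.0-1.0
    print("{} (class {}): {:.1%}".format(
        prediction.label, prediction.index, prediction.confidence))
    # box is the BoundingBox around the detected object
    print(prediction.box)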

class ObjectDetectionResults(predictions, duration, image, **kwargs)

All the results of object detection from ObjectDetection.

Predictions are stored sorted in descending order of confidence.

Parameters
  • predictions (List[ObjectDetectionPrediction]) – The prediction results, one per detected object.

  • duration (float) – The duration of the inference in seconds.

  • image (Optional[ndarray]) – The image that the inference was performed on.

property duration

The duration of the inference in seconds.

Return type

float

property predictions

The list of predictions.

Return type

List[ObjectDetectionPrediction]

property image

The image the results were processed on.

Return type

Optional[ndarray]
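
A minimal sketch of consuming a results object (results is assumed to come from detect_objects):

print("inference took {:.3f} s".format(results.duration))
if results.image is not None:
    # the frame the inference ran on, as an ndarray
    height, width = results.image.shape[:2]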

class ObjectDetection(model_id, model_config=None, pre_process=None, pre_process_batch=None, post_process=None, post_process_batch=None)

Analyze and discover objects within an image.

Typical usage:

obj_detect = edgeiq.ObjectDetection(
    'alwaysai/ssd_mobilenet_v1_coco_2018_01_28'
)
obj_detect.load(engine=edgeiq.Engine.DNN)

<get image>
results = obj_detect.detect_objects(image, confidence_level=.5)
image = edgeiq.markup_image(
    image,
    results.predictions,
    colors=obj_detect.colors
)

text = []
for prediction in results.predictions:
    text.append("{}: {:2.2f}%".format(
        prediction.label, prediction.confidence * 100))

Parameters
  • model_id (str) – The ID of the model you want to use for object detection.

  • model_config (Optional[ModelConfig]) – The model configuration to load. model_id is ignored when model_config is set.

  • pre_process (Optional[Callable[[ObjectDetectionPreProcessParams], ndarray]]) – The pre-processing to use for inference. This is needed when using a model architecture not supported by edgeIQ (see the sketch after this list).

  • pre_process_batch (Optional[Callable[[ObjectDetectionPreProcessBatchParams], ndarray]]) – The pre-processing to use for batch inference mode. This is needed when using a model architecture not supported by edgeIQ.

  • post_process (Optional[Callable[[ObjectDetectionPostProcessParams], Tuple[List[BoundingBox], List[float], List[int]]]]) – The post-processing to use for inference. This is needed when using a model architecture not supported by edgeIQ (a skeleton appears at the end of this section).

  • post_process_batch (Optional[Callable[[ObjectDetectionPostProcessBatchParams], Tuple[List[List[BoundingBox]], List[List[float]], List[List[int]]]]]) – The post-processing to use for batch inference mode. This is needed when using a model architecture not supported by edgeIQ.
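
A sketch of a custom pre-processing callable, assuming the ObjectDetectionPreProcessParams fields (documented at the end of this section) map directly onto cv2.dnn.blobFromImage's arguments; adjust for your model's actual input requirements:

import cv2
import edgeiq


def custom_pre_process(params):
    # params is an ObjectDetectionPreProcessParams instance
    return cv2.dnn.blobFromImage(
        params.image,
        scalefactor=params.scalefactor,
        size=params.size,
        mean=params.mean,
        swapRB=params.swaprb,
        crop=params.crop)


obj_detect = edgeiq.ObjectDetection(
    'alwaysai/ssd_mobilenet_v1_coco_2018_01_28',
    pre_process=custom_pre_process)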

detect_objects(image, confidence_level=0.3, overlap_threshold=0.3)

Perform Object Detection on an image.

Parameters
  • image (ndarray) – The image to analyze.

  • confidence_level (float) – The minimum confidence level required to successfully accept a detection.

  • overlap_threshold (float) – The IOU (intersection over union) threshold used by non-maximum suppression to reject overlapping detections when running YOLO models. A higher value results in more overlapping bounding boxes being returned.

Return type

ObjectDetectionResults

detect_objects_batch(images, confidence_level=0.3, overlap_threshold=0.3)

Perform Object Detection on a list of images.

Parameters
  • images (List[ndarray]) – The list of images to analyze.

  • confidence_level (float) – The minimum confidence level required to successfully accept a detection.

  • overlap_threshold (float) – The IOU (intersection over union) threshold used by non-maximum suppression to reject overlapping detections when running YOLO models. A higher value results in more overlapping bounding boxes being returned.

Return type

List[ObjectDetectionResults]
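
A minimal usage sketch for batch mode (frames is a hypothetical list of image arrays):

frames = [frame0, frame1, frame2]  # hypothetical ndarray images
all_results = obj_detect.detect_objects_batch(frames, confidence_level=0.5)
for results in all_results:
    print("{} objects in {:.3f} s".format(
        len(results.predictions), results.duration))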

publish_analytics(results, tag=None, **kwargs)

Publish Object Detection results to the alwaysAI Analytics Service.

Parameters
  • results (ObjectDetectionResults) – The results to publish.

  • tag (Optional[Any]) – Additional information to assist in querying and visualizations.

Raises
  • ConnectionBlockedError – when the connection to the alwaysAI Device Agent is at capacity.

  • PacketRateError – when the publish rate exceeds the current limit.

  • PacketSizeError – when the packet size exceeds the current limit. Packet publish size and rate limits are provided in the error message.
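
A minimal sketch of guarding a publish call; it assumes the exception classes are importable from the top-level edgeiq package (check your version for the exact paths):

import time

try:
    obj_detect.publish_analytics(results, tag='cam0')
except edgeiq.PacketRateError:
    # hypothetical back-off; the error message includes the current limit
    time.sleep(1.0)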

property accelerator

The accelerator being used.

Return type

Optional[Accelerator]

property colors

The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Note: To update, the new colors list must be the same length as the label list.

Return type

Optional[ndarray]
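
A minimal sketch of overriding the generated colors, assuming the property is settable as the note above implies:

if obj_detect.labels is not None:
    # one BGR tuple per label; the new list must match the label list length
    obj_detect.colors = [(0, 255, 0)] * len(obj_detect.labels)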

property engine

The engine being used.

Return type

Optional[Engine]

property labels

The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Return type

Optional[List[str]]

load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)

Load the model to an engine and accelerator.

Parameters
  • engine (Engine) – The engine to load the model to.

  • accelerator (Accelerator) – The accelerator to load the model to.

property model_config

The configuration of the model that was loaded.

Return type

ModelConfig

property model_id

The ID of the loaded model.

Return type

str

property model_purpose

The purpose of the model being used.

Return type

str

class ObjectDetectionAnalytics(annotations, model_id, model_config=None)

Reads an analytics file and returns results through an interface similar to ObjectDetection.

Parameters
  • annotations (List[List[ObjectDetectionResults]]) – Object detection results from all streams.

  • model_id (str) – The ID of the model you want to use for object detection.

  • model_config (Optional[ModelConfig]) – The model configuration to load. model_id is ignored when model_config is set.

Typical usage:

# get object detection results from annotation file
annotation_files = ['cam0.txt', 'cam1.txt', 'cam2.txt', 'cam3.txt']
annotation_results = [edgeiq.analytics_services.load_analytics_results(annotation)
                      for annotation in annotation_files]

obj_detect = edgeiq.ObjectDetectionAnalytics(annotations=annotation_results,
                                             model_id=model_id)

results = obj_detect.detect_objects(image, confidence_level=.5)

detect_objects_for_stream(stream_idx, confidence_level=0.3, overlap_threshold=0.3)

Perform Object Detection for a particular stream by reading results from the analytics file.

Parameters
  • stream_idx (int) – The index of the stream to read results for.

  • confidence_level (float) – The minimum confidence level required to successfully accept a detection.

  • overlap_threshold (float) – The IOU (intersection over union) threshold used by non-maximum suppression to reject overlapping detections when running YOLO models. A higher value results in more overlapping bounding boxes being returned.

Return type

ObjectDetectionResults
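
A minimal usage sketch, continuing the typical-usage example above (one call per annotated stream):

for stream_idx in range(len(annotation_files)):
    stream_results = obj_detect.detect_objects_for_stream(
        stream_idx, confidence_level=0.5)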

detect_objects(image, confidence_level=0.3, overlap_threshold=0.3)

Perform Object Detection on an image by reading results from the analytics file.

Parameters
  • image (Optional[ndarray]) – The image to analyze.

  • confidence_level (float) – The minimum confidence level required to successfully accept a detection.

  • overlap_threshold (float) – The IOU (intersection over union) threshold used by non-maximum suppression to reject overlapping detections when running YOLO models. A higher value results in more overlapping bounding boxes being returned.

Return type

ObjectDetectionResults

detect_objects_batch(images, confidence_level=0.3, overlap_threshold=0.3)

Perform Object Detection on a list of images by reading results from the analytics file.

Parameters
  • images (Optional[List[ndarray]]) – The list of images to analyze.

  • confidence_level (float) – The minimum confidence level required to successfully accept a detection.

  • overlap_threshold (float) – The IOU (intersection over union) threshold used by non-maximum suppression to reject overlapping detections when running YOLO models. A higher value results in more overlapping bounding boxes being returned.

Return type

List[ObjectDetectionResults]

publish_analytics(results, tag=None, **kwargs)

Publish Object Detection results to the alwaysAI Analytics Service.

Parameters
  • results (ObjectDetectionResults) – The results to publish.

  • tag (Optional[Any]) – Additional information to assist in querying and visualizations.

Raises
  • ConnectionBlockedError – when the connection to the alwaysAI Device Agent is at capacity.

  • PacketRateError – when the publish rate exceeds the current limit.

  • PacketSizeError – when the packet size exceeds the current limit. Packet publish size and rate limits are provided in the error message.

property accelerator

The accelerator being used.

Return type

Optional[Accelerator]

property colors

The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Note: To update, the new colors list must be the same length as the label list.

Return type

Optional[ndarray]

property engine

The engine being used.

Return type

Optional[Engine]

property labels

The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Return type

Optional[List[str]]

load(engine=<Engine.DNN: 'DNN'>, accelerator=<Accelerator.DEFAULT: 'DEFAULT'>)

Load the model to an engine and accelerator.

Parameters
  • engine (Engine) – The engine to load the model to.

  • accelerator (Accelerator) – The accelerator to load the model to.

property model_config

The configuration of the model that was loaded.

Return type

ModelConfig

property model_id

The ID of the loaded model.

Return type

str

property model_purpose

The purpose of the model being used.

Return type

str

filter_predictions_by_label(predictions, label_list)

Filter a prediction list by label.

Typical usage:

people_and_apples = edgeiq.filter_predictions_by_label(predictions, ['person', 'apple'])

Parameters
  • predictions (List[ObjectDetectionPrediction]) – A list of predictions to filter.

  • label_list (List[str]) – The list of labels to keep in the filtered output.

Return type

List[ObjectDetectionPrediction]

Returns

The filtered predictions.

markup_image(image, predictions, show_labels=True, show_confidences=True, colors=None, line_thickness=2, font_size=0.5, font_thickness=2, background_padding=10)

Draw boxes, labels, and confidences on the specified image.

Parameters
  • image (ndarray) – The image to draw on.

  • predictions (List[ObjectDetectionPrediction]) – The list of prediction results.

  • show_labels (bool) – Indicates whether to show the label of the prediction.

  • show_confidences (bool) – Indicates whether to show the confidence of the prediction.

  • colors (Optional[List[Tuple[int, int, int]]]) – A custom color list to use for the bounding boxes. The index of the color will be matched with a label index.

  • line_thickness (int) – The thickness of the lines that make up the bounding box.

  • font_size (float) – The scale factor for the text.

  • font_thickness (int) – The thickness of the lines used to draw the text.

  • background_padding (int) – The padding around the label text background.

Return type

ndarray

Returns

The marked-up image.

class ObjectDetectionPreProcessParams(image, size, scalefactor, mean, swaprb, crop)

image: numpy.ndarray
size: Tuple[int, int]
scalefactor: float
mean: Tuple[float, float, float]
swaprb: bool
crop: bool

class ObjectDetectionPreProcessBatchParams(images, size, scalefactor, mean, swaprb, crop)

images: List[numpy.ndarray]
size: Tuple[int, int]
scalefactor: float
mean: Tuple[float, float, float]
swaprb: bool
crop: bool

class ObjectDetectionPostProcessParams(results, image, confidence_level, overlap_threshold, num_classes, model_input_size)

results: Any
image: numpy.ndarray
confidence_level: float
overlap_threshold: float
num_classes: int
model_input_size: Tuple[int, int]

class ObjectDetectionPostProcessBatchParams(results, images, confidence_level, overlap_threshold, num_classes, model_input_size)

results: List[Any]
images: List[numpy.ndarray]
confidence_level: float
overlap_threshold: float
num_classes: int
model_input_size: Tuple[int, int]
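
These parameter classes are what the custom pre_process and post_process callables documented above receive. A skeleton of a custom post-process is sketched below; the layout of params.results is model-specific, so the decoding loop is only a placeholder assumption:

def custom_post_process(params):
    # params is an ObjectDetectionPostProcessParams instance
    boxes = []        # List[BoundingBox]
    confidences = []  # List[float]
    indexes = []      # List[int]
    for detection in params.results:  # assumed iterable of raw detections
        # Decode each raw detection into a BoundingBox, a confidence, and a
        # class index here, dropping anything below params.confidence_level
        # and applying NMS with params.overlap_threshold.
        pass
    return boxes, confidences, indexes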