InstanceSegmentation

class InstanceSegmentationPrediction(box, mask, contours, hierarchy, confidence, label, index)

A single prediction from InstanceSegmentation.

Parameters:
  • box (BoundingBox) – The bounding box around the detected object.

  • mask (ndarray) – The mask of the detected instance of the object.

  • contours (list) – The contours of the mask.

  • hierarchy (list) – The hierarchy of the contours.

  • confidence (float) – The confidence of this prediction.

  • label (str) – The label describing this prediction result.

  • index (int) – The index of this result in the master label list.

property label: str

The label describing this prediction result.

property index: int

The index of this result in the master label list.

property mask: ndarray

The mask of this detected instance of the object.

property contours: list

The contours generated for the mask of the detected instance of the object.

property hierarchy: list

The hierarchy of contours generated for the mask of the detected instance of the object.

property box: BoundingBox

The bounding box around the object.

property confidence: float

The confidence of this prediction.
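
As an illustration of how these fields fit together, the sketch below overlays one prediction's mask and contours on an image with OpenCV. It assumes the mask is a single-channel array the same size as the image, that contours and hierarchy follow the cv2.findContours convention, and that BoundingBox exposes start_x and start_y coordinates; these are assumptions for the sketch, not guarantees of the API.

import cv2

def draw_prediction(image, prediction, color=(0, 255, 0)):
    # Tint the pixels covered by the instance mask (assumed single-channel, image-sized).
    overlay = image.copy()
    overlay[prediction.mask > 0] = color
    blended = cv2.addWeighted(image, 0.6, overlay, 0.4, 0)
    # Outline the instance using its contours (assumed cv2.findContours-style point arrays).
    cv2.drawContours(blended, prediction.contours, -1, color, 2)
    # Annotate with the label and confidence near the top-left of the box (attribute names assumed).
    x, y = prediction.box.start_x, prediction.box.start_y
    cv2.putText(
        blended, "{}: {:2.2f}%".format(prediction.label, prediction.confidence * 100),
        (x, max(y - 5, 0)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
    return blended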

class InstanceSegmentationResults(predictions, duration, image, **kwargs)

All the results of instance segmentation from InstanceSegmentation.

Predictions are stored in descending order of confidence.

Parameters:
  • predictions (List[InstanceSegmentationPrediction]) – The list of predictions.

  • duration (float) – The duration of the inference in seconds.

  • image (ndarray) – The image the results were processed on.

property duration: float

The duration of the inference in seconds.

property predictions: List[InstanceSegmentationPrediction]

The list of predictions.

property image: ndarray

The image the results were processed on.
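
A minimal sketch of consuming a results object returned by segment_image():

print("Inference took {:.3f} s".format(results.duration))
for prediction in results.predictions:
    print("{} ({:2.2f}%)".format(prediction.label, prediction.confidence * 100))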

class InstanceSegmentation(model_id, model_config=None)

Detect, segment and classify individual objects in an image.

Typical usage:

instance_segmentation = edgeiq.InstanceSegmentation(
    'alwaysai/mask_rcnn')
instance_segmentation.load(engine=edgeiq.Engine.DNN)

# <get image>
results = instance_segmentation.segment_image(image, confidence_level=0.5)
image = instance_segmentation.markup_image(
    image, results.predictions)

text = []
for prediction in results.predictions:
    text.append("{}: {:2.2f}%".format(
        prediction.label, prediction.confidence * 100))

Parameters:
  • model_id (str) – The ID of the model you want to use for instance segmentation.

  • model_config (Optional[ModelConfig]) – An optional configuration for the model to load.

property accelerator: Accelerator | None

The accelerator being used.

property colors: ndarray | None

The auto-generated colors for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

Note: To update, the new colors list must be the same length as the label list.
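
For example, a custom palette can be assigned as long as it matches the label list length. A minimal sketch, assuming the colors property accepts a new ndarray as implied by the note above:

import numpy as np

# One BGR color per label; the palette length must equal len(labels).
labels = instance_segmentation.labels or []
palette = np.random.randint(0, 256, size=(len(labels), 3), dtype=np.uint8)
instance_segmentation.colors = palette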

property engine: Engine | None

The engine being used.

property labels: List[str] | None

The labels for the loaded model.

Note: Initialized to None when the model doesn’t have any labels.

load(engine=Engine.DNN, accelerator=Accelerator.DEFAULT)

Load the model to an engine and accelerator.

Parameters:
  • engine (Engine) – The engine to load the model to

  • accelerator (Accelerator) – The accelerator to load the model to
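
A minimal usage sketch; the engine and accelerator shown are the documented defaults:

instance_segmentation = edgeiq.InstanceSegmentation('alwaysai/mask_rcnn')
instance_segmentation.load(
    engine=edgeiq.Engine.DNN,
    accelerator=edgeiq.Accelerator.DEFAULT)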

property model_config: ModelConfig

The configuration of the model that was loaded.

property model_id: str

The ID of the loaded model.

property model_purpose: SupportedPurposes

The purpose of the model being used.

publish_analytics(results, tag=None, **kwargs)

Publish results to the alwaysAI Analytics Service.

Example usage:

try:
    inference.publish_analytics(results, tag='custom_tag')
except edgeiq.PublishError as e:
    # Retry publish
    pass
except edgeiq.ConnectionError as e:
    # Save state and exit app to reconnect
    pass

Parameters:
  • results (TypeVar(ResultsT)) – The results to publish.

  • tag (Optional[Any]) – Additional information to assist in querying and visualizations.

Raises:
  • ConnectionBlockedError – when using a connection to the alwaysAI Device Agent and resources are at capacity.

  • PacketRateError – when the publish rate exceeds the current limit.

  • PacketSizeError – when the packet size exceeds the current limit. Packet publish size and rate limits will be provided in the error message.
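
One way to handle the rate-limit case is to back off briefly and retry. A rough sketch, assuming PacketRateError is exposed at the edgeiq top level like the errors in the example above:

import time

try:
    inference.publish_analytics(results, tag='custom_tag')
except edgeiq.PacketRateError:
    # Publish rate exceeded the current limit; wait briefly and retry once.
    time.sleep(1.0)
    inference.publish_analytics(results, tag='custom_tag')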

segment_image(image, confidence_level=0.3)

Detect, segment and classify individual objects in an image.

Parameters:
  • image (ndarray) – The image to analyze.

  • confidence_level (float) – The minimum confidence level required to successfully accept a detection.

Return type:

InstanceSegmentationResults
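
For example, the same frame can be segmented at a stricter threshold (the image-loading call is illustrative, not part of this API):

import cv2

image = cv2.imread('frame.jpg')  # any BGR ndarray works here
results = instance_segmentation.segment_image(image, confidence_level=0.7)
print("Found {} instances".format(len(results.predictions)))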

markup_image(image, predictions, show_labels=True, show_confidences=True, show_masks=True, colors=None, line_thickness=2, font_size=0.5, font_thickness=2)

Draw boxes, masks, labels, and confidences on the specified image.

Parameters:
  • image (ndarray) – The image to draw on.

  • predictions (List[InstanceSegmentationPrediction]) – The list of prediction results.

  • show_labels (bool) – Indicates whether to show the label of the prediction.

  • show_confidences (bool) – Indicates whether to show the confidences of the prediction.

  • show_masks (bool) – Indicates whether to show the masks of the prediction.

  • colors (Optional[List[Tuple[int, int, int]]]) – A custom color list to use for the bounding boxes. The index of the color will be matched with a label index.

  • line_thickness (int) – The thickness of the lines that make up the bounding box.

  • font_size (float) – The scale factor for the text.

  • font_thickness (int) – The thickness of the lines used to draw the text.

Return type:

ndarray
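
A sketch of marking up a frame with a custom palette and without confidence text; the single-color palette below is illustrative, and its indices are matched against label indices as described above:

colors = [(0, 255, 0)] * len(instance_segmentation.labels)  # one BGR color per label
image = instance_segmentation.markup_image(
    image, results.predictions,
    show_confidences=False,
    colors=colors,
    line_thickness=1)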